diff --git a/docs/ReleaseNotes.rst b/docs/ReleaseNotes.rst index 982abb024525..5f9991439697 100644 --- a/docs/ReleaseNotes.rst +++ b/docs/ReleaseNotes.rst @@ -1,280 +1,418 @@ ======================================= Clang 5.0.0 (In-Progress) Release Notes ======================================= .. contents:: :local: :depth: 2 Written by the `LLVM Team `_ .. warning:: These are in-progress notes for the upcoming Clang 5 release. Release notes for previous releases can be found on `the Download Page `_. Introduction ============ This document contains the release notes for the Clang C/C++/Objective-C frontend, part of the LLVM Compiler Infrastructure, release 5.0.0. Here we describe the status of Clang in some detail, including major improvements from the previous release and new feature work. For the general LLVM release notes, see `the LLVM documentation `_. All LLVM releases may be downloaded from the `LLVM releases web site `_. For more information about Clang or LLVM, including information about the latest release, please check out the main please see the `Clang Web Site `_ or the `LLVM Web Site `_. Note that if you are reading this file from a Subversion checkout or the main Clang web page, this document applies to the *next* release, not the current one. To see the release notes for a specific release, please see the `releases page `_. What's New in Clang 5.0.0? ========================== Some of the major new features and improvements to Clang are listed here. Generic improvements to Clang as a whole or to its underlying infrastructure are described first, followed by language-specific sections with improvements to Clang's support for those languages. Major New Features ------------------ - ... +C++ coroutines +^^^^^^^^^^^^^^ +`C++ coroutines TS +`_ +implementation has landed. Use ``-fcoroutines-ts -stdlib=libc++`` to enable +coroutine support. Here is `an example +`_ to get you started. + + Improvements to Clang's diagnostics ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Wcast-qual was implemented for C++. C-style casts are now properly diagnosed. - -Wunused-lambda-capture warns when a variable explicitly captured by a lambda is not used in the body of the lambda. +- -Wstrict-prototypes is a new warning that warns about non-prototype + function and block declarations and types in C and Objective-C. + +- -Wunguarded-availability is a new warning that warns about uses of new + APIs that were introduced in a system whose version is newer than the + deployment target version. A new Objective-C expression ``@available`` has + been introduced to perform system version checking at runtime. This warning + is off by default to prevent unexpected warnings in existing projects. + However, its less strict sibling -Wunguarded-availability-new is on by + default. It warns about unguarded uses of APIs only when they were introduced + in or after macOS 10.13, iOS 11, tvOS 11 or watchOS 4. + +- The -Wdocumentation warning now allows the use of ``\param`` and + ``\returns`` documentation directives in the documentation comments for + declarations with a function or a block pointer type. + +- The compiler no longer warns about unreachable ``__builtin_unreachable`` + statements. + New Compiler Flags ------------------ - --autocomplete was implemented to obtain a list of flags and its arguments. This is used for shell autocompletion. Deprecated Compiler Flags ------------------------- The following options are deprecated and ignored. They will be removed in future versions of Clang. 
- -fslp-vectorize-aggressive used to enable the BB vectorizing pass. It has been superseded by the normal SLP vectorizer. - -fno-slp-vectorize-aggressive used to be the default behavior of clang. New Pragmas in Clang ----------------------- -Clang now supports the ... +- Clang now supports the ``clang attribute`` pragma that allows users to apply + an attribute to multiple declarations. +- ``pragma pack`` directives that are included in a precompiled header are now + applied correctly to the declarations in the compilation unit that includes + that precompiled header. Attribute Changes in Clang -------------------------- - The ``overloadable`` attribute now allows at most one function with a given name to lack the ``overloadable`` attribute. This unmarked function will not have its name mangled. +- The ``ms_abi`` attribute and the ``__builtin_ms_va_list`` types and builtins + are now supported on AArch64. Windows Support --------------- Clang's support for building native Windows programs ... C Language Changes in Clang --------------------------- -- ... +- Added near-complete support for implicit scalar to vector conversion, a GNU + C/C++ language extension. With this extension, the following code is + considered valid: + +.. code-block:: c + + typedef unsigned v4i32 __attribute__((vector_size(16))); + + v4i32 foo(v4i32 a) { + // Here 5 is implicitly cast to an unsigned value and replicated into a + // vector with as many elements as 'a'. + return a + 5; + } + +The implicit conversion of a scalar value to a vector value--in the context of +a vector expression--occurs when: + +- The type of the vector is that of a ``__attribute__((vector_size(size)))`` + vector, not an OpenCL ``__attribute__((ext_vector_type(size)))`` vector type. + +- The scalar value can be cast to the vector element's type without + loss of precision based on the type of the scalar and the type of the + vector's elements. + +- For compile-time constant values, the above rule is weakened to consider the + value of the scalar constant rather than the constant's type. + +- Floating point constants with precise integral representations are not + implicitly converted to integer values; this is for compatibility with GCC. + + +Currently, the basic integer and floating point types with the following +operators are supported: ``+``, ``/``, ``-``, ``*``, ``%``, ``>``, ``<``, +``>=``, ``<=``, ``==``, ``!=``, ``&``, ``|``, ``^`` and the corresponding +assignment operators where applicable. ... C11 Feature Support ^^^^^^^^^^^^^^^^^^^ ... C++ Language Changes in Clang ----------------------------- +- As mentioned in `C Language Changes in Clang`_, Clang's support for + implicit scalar to vector conversions also applies to C++. Additionally, + the following operators are supported: ``&&`` and ``||``. + ... C++1z Feature Support ^^^^^^^^^^^^^^^^^^^^^ ... Objective-C Language Changes in Clang ------------------------------------- -... +- Clang now guarantees that a ``readwrite`` property is synthesized when an + ambiguous property (i.e., a property that is declared in multiple protocols) + is synthesized. The ``-Wprotocol-property-synthesis-ambiguity`` warning that + warns about incompatible property types is now promoted to an error when + there's an ambiguity between ``readwrite`` and ``readonly`` properties. + +- Clang now prohibits synthesis of ambiguous properties with incompatible + explicit property attributes.
The following property attributes are + checked for differences: ``copy``, ``retain``/``strong``, ``atomic``, + ``getter`` and ``setter``. OpenCL C Language Changes in Clang ---------------------------------- -... +Various bug fixes and improvements: + +- Extended OpenCL-related Clang tests. + +- Improved diagnostics across several areas: scoped address space + qualified variables, function pointers, atomics, type rank for overloading, + block captures, ``reserve_id_t``. + +- Several address space related fixes for constant address space function scope variables, + IR generation, mangling of ``generic`` and alloca (post-fix from general Clang + refactoring of address spaces). + +- Several improvements in extensions: fixed OpenCL version for ``cl_khr_mipmap_image``, + added missing ``cl_khr_3d_image_writes``. + +- Improvements in ``enqueue_kernel``, especially the implementation of ``ndrange_t`` and blocks. + +- OpenCL type related fixes: global samplers, the ``pipe_t`` size, internal type redefinition, + and type compatibility checking in ternary and other operations. + +- The OpenCL header has been extended with missing extension guards, and direct mapping of ``as_type`` + to ``__builtin_astype``. + +- Fixed ``kernel_arg_type_qual`` and OpenCL/SPIR version in metadata. + +- Added proper use of the kernel calling convention to various targets. + +The following new functionalities have been added: + +- Added documentation on OpenCL to Clang user manual. + +- Extended Clang builtins with required ``cl_khr_subgroups`` support. + +- Add ``intel_reqd_sub_group_size`` attribute support. + +- Added OpenCL types to ``CIndex``. OpenMP Support in Clang ---------------------------------- ... Internal API Changes -------------------- These are major API changes that have happened since the 4.0.0 release of Clang. If upgrading an external codebase that uses Clang as a library, this section should help get you past the largest hurdles of upgrading. - ... AST Matchers ------------ ... clang-format ------------ * Option **BreakBeforeInheritanceComma** added to break before ``:`` and ``,`` in case of multiple inheritance in a class declaration. Enabled by default in the Mozilla coding style. +---------------------+----------------------------------------+ | true | false | +=====================+========================================+ | .. code-block:: c++ | .. code-block:: c++ | | | | | class MyClass | class MyClass : public X, public Y { | | : public X | }; | | , public Y { | | | }; | | +---------------------+----------------------------------------+ * Align block comment decorations. +----------------------+---------------------+ | Before | After | +======================+=====================+ | .. code-block:: c++ | .. code-block:: c++ | | | | | /* line 1 | /* line 1 | | * line 2 | * line 2 | | */ | */ | +----------------------+---------------------+ * The :doc:`ClangFormatStyleOptions` documentation provides detailed examples for most options. * Namespace end comments are now added or updated automatically. +---------------------+---------------------+ | Before | After | +=====================+=====================+ | .. code-block:: c++ | .. code-block:: c++ | | | | | namespace A { | namespace A { | | int i; | int i; | | int j; | int j; | | } | } | +---------------------+---------------------+ * Comment reflow support added. Overly long comment lines will now be reflown with the rest of the paragraph instead of just broken. Option **ReflowComments** added and enabled by default. 
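As a combined, hypothetical illustration of the clang-format changes above, the snippet below shows roughly what formatted output can look like when **BreakBeforeInheritanceComma**, block comment decoration alignment, and automatic namespace end comments are all in effect. The exact result depends on the configured style; this sketch assumes a style that enables **BreakBeforeInheritanceComma**, as the Mozilla style does by default.

.. code-block:: c++

  namespace A {

  struct X {};
  struct Y {};

  /* line 1
   * line 2
   */
  class MyClass
    : public X
    , public Y {
  };

  } // namespace A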
libclang -------- -... +- Libclang now provides code-completion results for more C++ constructs + and keywords. The following keywords/identifiers are now included in the + code-completion results: ``static_assert``, ``alignas``, ``constexpr``, + ``final``, ``noexcept``, ``override`` and ``thread_local``. + +- Libclang now provides code-completion results for members from dependent + classes. For example: + + .. code-block:: c++ + + template <typename T> + void appendValue(std::vector<T> &dest, const T &value) { + dest. // Relevant completion results are now shown after '.' + } + + Note that code-completion results are still not provided when the member + expression includes a dependent base expression. For example: + + .. code-block:: c++ + + template <typename T> + void appendValue(std::vector<std::vector<T>> &dest, const T &value) { + dest.at(0). // Libclang fails to provide completion results after '.' + } Static Analyzer --------------- - The static analyzer now supports using the `z3 theorem prover `_ from Microsoft Research as an external constraint solver. This allows reasoning over more complex queries, but performance is ~15x slower than the default range-based constraint solver. To enable the z3 solver backend, clang must be built with the ``CLANG_ANALYZER_BUILD_Z3=ON`` option, and the ``-Xanalyzer -analyzer-constraints=z3`` arguments passed at runtime. Undefined Behavior Sanitizer (UBSan) ------------------------------------ - The Undefined Behavior Sanitizer has a new check for pointer overflow. This check is on by default. The flag to control this functionality is -fsanitize=pointer-overflow. Pointer overflow is an indicator of undefined behavior: when a pointer indexing expression wraps around the address space, or produces other unexpected results, its result may not point to a valid object. - UBSan has several new checks which detect violations of nullability annotations. These checks are off by default. The flag to control this group of checks is -fsanitize=nullability. The checks can be individually enabled by -fsanitize=nullability-arg (which checks calls), -fsanitize=nullability-assign (which checks assignments), and -fsanitize=nullability-return (which checks return statements). - UBSan can now detect invalid loads from bitfields and from ObjC BOOLs. - UBSan can now avoid emitting unnecessary type checks in C++ class methods and in several other cases where the result is known at compile-time. UBSan can also avoid emitting unnecessary overflow checks in arithmetic expressions with promoted integer operands. Core Analysis Improvements ========================== - ... New Issues Found ================ - ... Python Binding Changes ---------------------- Python bindings now support both Python 2 and Python 3. The following methods have been added: - ``is_scoped_enum`` has been added to ``Cursor``. - ``exception_specification_kind`` has been added to ``Cursor``. - ``get_address_space`` has been added to ``Type``. - ``get_typedef_name`` has been added to ``Type``. - ``get_exception_specification_kind`` has been added to ``Type``. - ... Significant Known Problems ========================== Additional Information ====================== A wide variety of additional information is available on the `Clang web page `_. The web page contains versions of the API documentation which are up-to-date with the Subversion version of the source code. You can access versions of these documents specific to this release by going into the "``clang/docs/``" directory in the Clang tree.
If you have any questions or comments about Clang, please feel free to contact us via the `mailing list `_. diff --git a/include/clang/AST/DeclCXX.h b/include/clang/AST/DeclCXX.h index 9d64f0244ec3..c39eaee9b124 100644 --- a/include/clang/AST/DeclCXX.h +++ b/include/clang/AST/DeclCXX.h @@ -1,3704 +1,3774 @@ //===-- DeclCXX.h - Classes for representing C++ declarations -*- C++ -*-=====// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// /// /// \file /// \brief Defines the C++ Decl subclasses, other than those for templates /// (found in DeclTemplate.h) and friends (in DeclFriend.h). /// //===----------------------------------------------------------------------===// #ifndef LLVM_CLANG_AST_DECLCXX_H #define LLVM_CLANG_AST_DECLCXX_H #include "clang/AST/ASTContext.h" #include "clang/AST/ASTUnresolvedSet.h" #include "clang/AST/Attr.h" #include "clang/AST/Decl.h" #include "clang/AST/Expr.h" #include "clang/AST/LambdaCapture.h" #include "llvm/ADT/DenseMap.h" #include "llvm/ADT/PointerIntPair.h" #include "llvm/Support/Compiler.h" namespace clang { class ClassTemplateDecl; class ClassTemplateSpecializationDecl; class ConstructorUsingShadowDecl; class CXXBasePath; class CXXBasePaths; class CXXConstructorDecl; class CXXConversionDecl; class CXXDestructorDecl; class CXXMethodDecl; class CXXRecordDecl; class CXXMemberLookupCriteria; class CXXFinalOverriderMap; class CXXIndirectPrimaryBaseSet; class FriendDecl; class LambdaExpr; class UsingDecl; /// \brief Represents any kind of function declaration, whether it is a /// concrete function or a function template. class AnyFunctionDecl { NamedDecl *Function; AnyFunctionDecl(NamedDecl *ND) : Function(ND) { } public: AnyFunctionDecl(FunctionDecl *FD) : Function(FD) { } AnyFunctionDecl(FunctionTemplateDecl *FTD); /// \brief Implicily converts any function or function template into a /// named declaration. operator NamedDecl *() const { return Function; } /// \brief Retrieve the underlying function or function template. NamedDecl *get() const { return Function; } static AnyFunctionDecl getFromNamedDecl(NamedDecl *ND) { return AnyFunctionDecl(ND); } }; } // end namespace clang namespace llvm { // Provide PointerLikeTypeTraits for non-cvr pointers. template<> class PointerLikeTypeTraits< ::clang::AnyFunctionDecl> { public: static inline void *getAsVoidPointer(::clang::AnyFunctionDecl F) { return F.get(); } static inline ::clang::AnyFunctionDecl getFromVoidPointer(void *P) { return ::clang::AnyFunctionDecl::getFromNamedDecl( static_cast< ::clang::NamedDecl*>(P)); } enum { NumLowBitsAvailable = 2 }; }; } // end namespace llvm namespace clang { /// \brief Represents an access specifier followed by colon ':'. /// /// An objects of this class represents sugar for the syntactic occurrence /// of an access specifier followed by a colon in the list of member /// specifiers of a C++ class definition. /// /// Note that they do not represent other uses of access specifiers, /// such as those occurring in a list of base specifiers. /// Also note that this class has nothing to do with so-called /// "access declarations" (C++98 11.3 [class.access.dcl]). class AccessSpecDecl : public Decl { virtual void anchor(); /// \brief The location of the ':'. 
SourceLocation ColonLoc; AccessSpecDecl(AccessSpecifier AS, DeclContext *DC, SourceLocation ASLoc, SourceLocation ColonLoc) : Decl(AccessSpec, DC, ASLoc), ColonLoc(ColonLoc) { setAccess(AS); } AccessSpecDecl(EmptyShell Empty) : Decl(AccessSpec, Empty) { } public: /// \brief The location of the access specifier. SourceLocation getAccessSpecifierLoc() const { return getLocation(); } /// \brief Sets the location of the access specifier. void setAccessSpecifierLoc(SourceLocation ASLoc) { setLocation(ASLoc); } /// \brief The location of the colon following the access specifier. SourceLocation getColonLoc() const { return ColonLoc; } /// \brief Sets the location of the colon. void setColonLoc(SourceLocation CLoc) { ColonLoc = CLoc; } SourceRange getSourceRange() const override LLVM_READONLY { return SourceRange(getAccessSpecifierLoc(), getColonLoc()); } static AccessSpecDecl *Create(ASTContext &C, AccessSpecifier AS, DeclContext *DC, SourceLocation ASLoc, SourceLocation ColonLoc) { return new (C, DC) AccessSpecDecl(AS, DC, ASLoc, ColonLoc); } static AccessSpecDecl *CreateDeserialized(ASTContext &C, unsigned ID); // Implement isa/cast/dyncast/etc. static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == AccessSpec; } }; /// \brief Represents a base class of a C++ class. /// /// Each CXXBaseSpecifier represents a single, direct base class (or /// struct) of a C++ class (or struct). It specifies the type of that /// base class, whether it is a virtual or non-virtual base, and what /// level of access (public, protected, private) is used for the /// derivation. For example: /// /// \code /// class A { }; /// class B { }; /// class C : public virtual A, protected B { }; /// \endcode /// /// In this code, C will have two CXXBaseSpecifiers, one for "public /// virtual A" and the other for "protected B". class CXXBaseSpecifier { /// \brief The source code range that covers the full base /// specifier, including the "virtual" (if present) and access /// specifier (if present). SourceRange Range; /// \brief The source location of the ellipsis, if this is a pack /// expansion. SourceLocation EllipsisLoc; /// \brief Whether this is a virtual base class or not. unsigned Virtual : 1; /// \brief Whether this is the base of a class (true) or of a struct (false). /// /// This determines the mapping from the access specifier as written in the /// source code to the access specifier used for semantic analysis. unsigned BaseOfClass : 1; /// \brief Access specifier as written in the source code (may be AS_none). /// /// The actual type of data stored here is an AccessSpecifier, but we use /// "unsigned" here to work around a VC++ bug. unsigned Access : 2; /// \brief Whether the class contains a using declaration /// to inherit the named class's constructors. unsigned InheritConstructors : 1; /// \brief The type of the base class. /// /// This will be a class or struct (or a typedef of such). The source code /// range does not include the \c virtual or the access specifier. TypeSourceInfo *BaseTypeInfo; public: CXXBaseSpecifier() { } CXXBaseSpecifier(SourceRange R, bool V, bool BC, AccessSpecifier A, TypeSourceInfo *TInfo, SourceLocation EllipsisLoc) : Range(R), EllipsisLoc(EllipsisLoc), Virtual(V), BaseOfClass(BC), Access(A), InheritConstructors(false), BaseTypeInfo(TInfo) { } /// \brief Retrieves the source range that contains the entire base specifier. 
SourceRange getSourceRange() const LLVM_READONLY { return Range; } SourceLocation getLocStart() const LLVM_READONLY { return Range.getBegin(); } SourceLocation getLocEnd() const LLVM_READONLY { return Range.getEnd(); } /// \brief Get the location at which the base class type was written. SourceLocation getBaseTypeLoc() const LLVM_READONLY { return BaseTypeInfo->getTypeLoc().getLocStart(); } /// \brief Determines whether the base class is a virtual base class (or not). bool isVirtual() const { return Virtual; } /// \brief Determine whether this base class is a base of a class declared /// with the 'class' keyword (vs. one declared with the 'struct' keyword). bool isBaseOfClass() const { return BaseOfClass; } /// \brief Determine whether this base specifier is a pack expansion. bool isPackExpansion() const { return EllipsisLoc.isValid(); } /// \brief Determine whether this base class's constructors get inherited. bool getInheritConstructors() const { return InheritConstructors; } /// \brief Set that this base class's constructors should be inherited. void setInheritConstructors(bool Inherit = true) { InheritConstructors = Inherit; } /// \brief For a pack expansion, determine the location of the ellipsis. SourceLocation getEllipsisLoc() const { return EllipsisLoc; } /// \brief Returns the access specifier for this base specifier. /// /// This is the actual base specifier as used for semantic analysis, so /// the result can never be AS_none. To retrieve the access specifier as /// written in the source code, use getAccessSpecifierAsWritten(). AccessSpecifier getAccessSpecifier() const { if ((AccessSpecifier)Access == AS_none) return BaseOfClass? AS_private : AS_public; else return (AccessSpecifier)Access; } /// \brief Retrieves the access specifier as written in the source code /// (which may mean that no access specifier was explicitly written). /// /// Use getAccessSpecifier() to retrieve the access specifier for use in /// semantic analysis. AccessSpecifier getAccessSpecifierAsWritten() const { return (AccessSpecifier)Access; } /// \brief Retrieves the type of the base class. /// /// This type will always be an unqualified class type. QualType getType() const { return BaseTypeInfo->getType().getUnqualifiedType(); } /// \brief Retrieves the type and source location of the base class. TypeSourceInfo *getTypeSourceInfo() const { return BaseTypeInfo; } }; /// \brief Represents a C++ struct/union/class. class CXXRecordDecl : public RecordDecl { friend void TagDecl::startDefinition(); /// Values used in DefinitionData fields to represent special members. enum SpecialMemberFlags { SMF_DefaultConstructor = 0x1, SMF_CopyConstructor = 0x2, SMF_MoveConstructor = 0x4, SMF_CopyAssignment = 0x8, SMF_MoveAssignment = 0x10, SMF_Destructor = 0x20, SMF_All = 0x3f }; struct DefinitionData { DefinitionData(CXXRecordDecl *D); /// \brief True if this class has any user-declared constructors. unsigned UserDeclaredConstructor : 1; /// \brief The user-declared special members which this class has. unsigned UserDeclaredSpecialMembers : 6; /// \brief True when this class is an aggregate. unsigned Aggregate : 1; /// \brief True when this class is a POD-type. unsigned PlainOldData : 1; /// true when this class is empty for traits purposes, /// i.e. has no data members other than 0-width bit-fields, has no /// virtual function/base, and doesn't inherit from a non-empty /// class. Doesn't take union-ness into account. 
unsigned Empty : 1; /// \brief True when this class is polymorphic, i.e., has at /// least one virtual member or derives from a polymorphic class. unsigned Polymorphic : 1; /// \brief True when this class is abstract, i.e., has at least /// one pure virtual function, (that can come from a base class). unsigned Abstract : 1; /// \brief True when this class has standard layout. /// /// C++11 [class]p7. A standard-layout class is a class that: /// * has no non-static data members of type non-standard-layout class (or /// array of such types) or reference, /// * has no virtual functions (10.3) and no virtual base classes (10.1), /// * has the same access control (Clause 11) for all non-static data /// members /// * has no non-standard-layout base classes, /// * either has no non-static data members in the most derived class and at /// most one base class with non-static data members, or has no base /// classes with non-static data members, and /// * has no base classes of the same type as the first non-static data /// member. unsigned IsStandardLayout : 1; /// \brief True when there are no non-empty base classes. /// /// This is a helper bit of state used to implement IsStandardLayout more /// efficiently. unsigned HasNoNonEmptyBases : 1; /// \brief True when there are private non-static data members. unsigned HasPrivateFields : 1; /// \brief True when there are protected non-static data members. unsigned HasProtectedFields : 1; /// \brief True when there are private non-static data members. unsigned HasPublicFields : 1; /// \brief True if this class (or any subobject) has mutable fields. unsigned HasMutableFields : 1; /// \brief True if this class (or any nested anonymous struct or union) /// has variant members. unsigned HasVariantMembers : 1; /// \brief True if there no non-field members declared by the user. unsigned HasOnlyCMembers : 1; /// \brief True if any field has an in-class initializer, including those /// within anonymous unions or structs. unsigned HasInClassInitializer : 1; /// \brief True if any field is of reference type, and does not have an /// in-class initializer. /// /// In this case, value-initialization of this class is illegal in C++98 /// even if the class has a trivial default constructor. unsigned HasUninitializedReferenceMember : 1; /// \brief True if any non-mutable field whose type doesn't have a user- /// provided default ctor also doesn't have an in-class initializer. unsigned HasUninitializedFields : 1; /// \brief True if there are any member using-declarations that inherit /// constructors from a base class. unsigned HasInheritedConstructor : 1; /// \brief True if there are any member using-declarations named /// 'operator='. unsigned HasInheritedAssignment : 1; /// \brief These flags are \c true if a defaulted corresponding special /// member can't be fully analyzed without performing overload resolution. /// @{ + unsigned NeedOverloadResolutionForCopyConstructor : 1; unsigned NeedOverloadResolutionForMoveConstructor : 1; unsigned NeedOverloadResolutionForMoveAssignment : 1; unsigned NeedOverloadResolutionForDestructor : 1; /// @} /// \brief These flags are \c true if an implicit defaulted corresponding /// special member would be defined as deleted. 
/// @{ + unsigned DefaultedCopyConstructorIsDeleted : 1; unsigned DefaultedMoveConstructorIsDeleted : 1; unsigned DefaultedMoveAssignmentIsDeleted : 1; unsigned DefaultedDestructorIsDeleted : 1; /// @} /// \brief The trivial special members which this class has, per /// C++11 [class.ctor]p5, C++11 [class.copy]p12, C++11 [class.copy]p25, /// C++11 [class.dtor]p5, or would have if the member were not suppressed. /// /// This excludes any user-declared but not user-provided special members /// which have been declared but not yet defined. unsigned HasTrivialSpecialMembers : 6; /// \brief The declared special members of this class which are known to be /// non-trivial. /// /// This excludes any user-declared but not user-provided special members /// which have been declared but not yet defined, and any implicit special /// members which have not yet been declared. unsigned DeclaredNonTrivialSpecialMembers : 6; /// \brief True when this class has a destructor with no semantic effect. unsigned HasIrrelevantDestructor : 1; /// \brief True when this class has at least one user-declared constexpr /// constructor which is neither the copy nor move constructor. unsigned HasConstexprNonCopyMoveConstructor : 1; /// \brief True if this class has a (possibly implicit) defaulted default /// constructor. unsigned HasDefaultedDefaultConstructor : 1; + /// \brief True if this class can be passed in a non-address-preserving + /// fashion (such as in registers) according to the C++ language rules. + /// This does not imply anything about how the ABI in use will actually + /// pass an object of this class. + unsigned CanPassInRegisters : 1; + /// \brief True if a defaulted default constructor for this class would /// be constexpr. unsigned DefaultedDefaultConstructorIsConstexpr : 1; /// \brief True if this class has a constexpr default constructor. /// /// This is true for either a user-declared constexpr default constructor /// or an implicitly declared constexpr default constructor. unsigned HasConstexprDefaultConstructor : 1; /// \brief True when this class contains at least one non-static data /// member or base class of non-literal or volatile type. unsigned HasNonLiteralTypeFieldsOrBases : 1; /// \brief True when visible conversion functions are already computed /// and are available. unsigned ComputedVisibleConversions : 1; /// \brief Whether we have a C++11 user-provided default constructor (not /// explicitly deleted or defaulted). unsigned UserProvidedDefaultConstructor : 1; /// \brief The special members which have been declared for this class, /// either by the user or implicitly. unsigned DeclaredSpecialMembers : 6; /// \brief Whether an implicit copy constructor could have a const-qualified /// parameter, for initializing virtual bases and for other subobjects. unsigned ImplicitCopyConstructorCanHaveConstParamForVBase : 1; unsigned ImplicitCopyConstructorCanHaveConstParamForNonVBase : 1; /// \brief Whether an implicit copy assignment operator would have a /// const-qualified parameter. unsigned ImplicitCopyAssignmentHasConstParam : 1; /// \brief Whether any declared copy constructor has a const-qualified /// parameter. unsigned HasDeclaredCopyConstructorWithConstParam : 1; /// \brief Whether any declared copy assignment operator has either a /// const-qualified reference parameter or a non-reference parameter. unsigned HasDeclaredCopyAssignmentWithConstParam : 1; /// \brief Whether this class describes a C++ lambda. 
unsigned IsLambda : 1; /// \brief Whether we are currently parsing base specifiers. unsigned IsParsingBaseSpecifiers : 1; unsigned HasODRHash : 1; /// \brief A hash of parts of the class to help in ODR checking. unsigned ODRHash; /// \brief The number of base class specifiers in Bases. unsigned NumBases; /// \brief The number of virtual base class specifiers in VBases. unsigned NumVBases; /// \brief Base classes of this class. /// /// FIXME: This is wasted space for a union. LazyCXXBaseSpecifiersPtr Bases; /// \brief direct and indirect virtual base classes of this class. LazyCXXBaseSpecifiersPtr VBases; /// \brief The conversion functions of this C++ class (but not its /// inherited conversion functions). /// /// Each of the entries in this overload set is a CXXConversionDecl. LazyASTUnresolvedSet Conversions; /// \brief The conversion functions of this C++ class and all those /// inherited conversion functions that are visible in this class. /// /// Each of the entries in this overload set is a CXXConversionDecl or a /// FunctionTemplateDecl. LazyASTUnresolvedSet VisibleConversions; /// \brief The declaration which defines this record. CXXRecordDecl *Definition; /// \brief The first friend declaration in this class, or null if there /// aren't any. /// /// This is actually currently stored in reverse order. LazyDeclPtr FirstFriend; /// \brief Retrieve the set of direct base classes. CXXBaseSpecifier *getBases() const { if (!Bases.isOffset()) return Bases.get(nullptr); return getBasesSlowCase(); } /// \brief Retrieve the set of virtual base classes. CXXBaseSpecifier *getVBases() const { if (!VBases.isOffset()) return VBases.get(nullptr); return getVBasesSlowCase(); } ArrayRef bases() const { return llvm::makeArrayRef(getBases(), NumBases); } ArrayRef vbases() const { return llvm::makeArrayRef(getVBases(), NumVBases); } private: CXXBaseSpecifier *getBasesSlowCase() const; CXXBaseSpecifier *getVBasesSlowCase() const; }; struct DefinitionData *DefinitionData; /// \brief Describes a C++ closure type (generated by a lambda expression). struct LambdaDefinitionData : public DefinitionData { typedef LambdaCapture Capture; LambdaDefinitionData(CXXRecordDecl *D, TypeSourceInfo *Info, bool Dependent, bool IsGeneric, LambdaCaptureDefault CaptureDefault) : DefinitionData(D), Dependent(Dependent), IsGenericLambda(IsGeneric), CaptureDefault(CaptureDefault), NumCaptures(0), NumExplicitCaptures(0), ManglingNumber(0), ContextDecl(nullptr), Captures(nullptr), MethodTyInfo(Info) { IsLambda = true; // C++1z [expr.prim.lambda]p4: // This class type is not an aggregate type. Aggregate = false; PlainOldData = false; } /// \brief Whether this lambda is known to be dependent, even if its /// context isn't dependent. /// /// A lambda with a non-dependent context can be dependent if it occurs /// within the default argument of a function template, because the /// lambda will have been created with the enclosing context as its /// declaration context, rather than function. This is an unfortunate /// artifact of having to parse the default arguments before. unsigned Dependent : 1; /// \brief Whether this lambda is a generic lambda. unsigned IsGenericLambda : 1; /// \brief The Default Capture. unsigned CaptureDefault : 2; /// \brief The number of captures in this lambda is limited 2^NumCaptures. unsigned NumCaptures : 15; /// \brief The number of explicit captures in this lambda. 
unsigned NumExplicitCaptures : 13; /// \brief The number used to indicate this lambda expression for name /// mangling in the Itanium C++ ABI. unsigned ManglingNumber; /// \brief The declaration that provides context for this lambda, if the /// actual DeclContext does not suffice. This is used for lambdas that /// occur within default arguments of function parameters within the class /// or within a data member initializer. LazyDeclPtr ContextDecl; /// \brief The list of captures, both explicit and implicit, for this /// lambda. Capture *Captures; /// \brief The type of the call method. TypeSourceInfo *MethodTyInfo; }; struct DefinitionData *dataPtr() const { // Complete the redecl chain (if necessary). getMostRecentDecl(); return DefinitionData; } struct DefinitionData &data() const { auto *DD = dataPtr(); assert(DD && "queried property of class with no definition"); return *DD; } struct LambdaDefinitionData &getLambdaData() const { // No update required: a merged definition cannot change any lambda // properties. auto *DD = DefinitionData; assert(DD && DD->IsLambda && "queried lambda property of non-lambda class"); return static_cast(*DD); } /// \brief The template or declaration that this declaration /// describes or was instantiated from, respectively. /// /// For non-templates, this value will be null. For record /// declarations that describe a class template, this will be a /// pointer to a ClassTemplateDecl. For member /// classes of class template specializations, this will be the /// MemberSpecializationInfo referring to the member class that was /// instantiated or specialized. llvm::PointerUnion TemplateOrInstantiation; friend class DeclContext; friend class LambdaExpr; /// \brief Called from setBases and addedMember to notify the class that a /// direct or virtual base class or a member of class type has been added. void addedClassSubobject(CXXRecordDecl *Base); /// \brief Notify the class that member has been added. /// /// This routine helps maintain information about the class based on which /// members have been added. It will be invoked by DeclContext::addDecl() /// whenever a member is added to this record. void addedMember(Decl *D); void markedVirtualFunctionPure(); friend void FunctionDecl::setPure(bool); friend class ASTNodeImporter; /// \brief Get the head of our list of friend declarations, possibly /// deserializing the friends from an external AST source. FriendDecl *getFirstFriend() const; protected: CXXRecordDecl(Kind K, TagKind TK, const ASTContext &C, DeclContext *DC, SourceLocation StartLoc, SourceLocation IdLoc, IdentifierInfo *Id, CXXRecordDecl *PrevDecl); public: /// \brief Iterator that traverses the base classes of a class. typedef CXXBaseSpecifier* base_class_iterator; /// \brief Iterator that traverses the base classes of a class. 
typedef const CXXBaseSpecifier* base_class_const_iterator; CXXRecordDecl *getCanonicalDecl() override { return cast(RecordDecl::getCanonicalDecl()); } const CXXRecordDecl *getCanonicalDecl() const { return const_cast(this)->getCanonicalDecl(); } CXXRecordDecl *getPreviousDecl() { return cast_or_null( static_cast(this)->getPreviousDecl()); } const CXXRecordDecl *getPreviousDecl() const { return const_cast(this)->getPreviousDecl(); } CXXRecordDecl *getMostRecentDecl() { return cast( static_cast(this)->getMostRecentDecl()); } const CXXRecordDecl *getMostRecentDecl() const { return const_cast(this)->getMostRecentDecl(); } CXXRecordDecl *getDefinition() const { // We only need an update if we don't already know which // declaration is the definition. auto *DD = DefinitionData ? DefinitionData : dataPtr(); return DD ? DD->Definition : nullptr; } bool hasDefinition() const { return DefinitionData || dataPtr(); } static CXXRecordDecl *Create(const ASTContext &C, TagKind TK, DeclContext *DC, SourceLocation StartLoc, SourceLocation IdLoc, IdentifierInfo *Id, CXXRecordDecl *PrevDecl = nullptr, bool DelayTypeCreation = false); static CXXRecordDecl *CreateLambda(const ASTContext &C, DeclContext *DC, TypeSourceInfo *Info, SourceLocation Loc, bool DependentLambda, bool IsGeneric, LambdaCaptureDefault CaptureDefault); static CXXRecordDecl *CreateDeserialized(const ASTContext &C, unsigned ID); bool isDynamicClass() const { return data().Polymorphic || data().NumVBases != 0; } void setIsParsingBaseSpecifiers() { data().IsParsingBaseSpecifiers = true; } bool isParsingBaseSpecifiers() const { return data().IsParsingBaseSpecifiers; } unsigned getODRHash() const; /// \brief Sets the base classes of this struct or class. void setBases(CXXBaseSpecifier const * const *Bases, unsigned NumBases); /// \brief Retrieves the number of base classes of this class. unsigned getNumBases() const { return data().NumBases; } typedef llvm::iterator_range base_class_range; typedef llvm::iterator_range base_class_const_range; base_class_range bases() { return base_class_range(bases_begin(), bases_end()); } base_class_const_range bases() const { return base_class_const_range(bases_begin(), bases_end()); } base_class_iterator bases_begin() { return data().getBases(); } base_class_const_iterator bases_begin() const { return data().getBases(); } base_class_iterator bases_end() { return bases_begin() + data().NumBases; } base_class_const_iterator bases_end() const { return bases_begin() + data().NumBases; } /// \brief Retrieves the number of virtual base classes of this class. unsigned getNumVBases() const { return data().NumVBases; } base_class_range vbases() { return base_class_range(vbases_begin(), vbases_end()); } base_class_const_range vbases() const { return base_class_const_range(vbases_begin(), vbases_end()); } base_class_iterator vbases_begin() { return data().getVBases(); } base_class_const_iterator vbases_begin() const { return data().getVBases(); } base_class_iterator vbases_end() { return vbases_begin() + data().NumVBases; } base_class_const_iterator vbases_end() const { return vbases_begin() + data().NumVBases; } /// \brief Determine whether this class has any dependent base classes which /// are not the current instantiation. bool hasAnyDependentBases() const; /// Iterator access to method members. The method iterator visits /// all method members of the class, including non-instance methods, /// special methods, etc. 
typedef specific_decl_iterator method_iterator; typedef llvm::iterator_range> method_range; method_range methods() const { return method_range(method_begin(), method_end()); } /// \brief Method begin iterator. Iterates in the order the methods /// were declared. method_iterator method_begin() const { return method_iterator(decls_begin()); } /// \brief Method past-the-end iterator. method_iterator method_end() const { return method_iterator(decls_end()); } /// Iterator access to constructor members. typedef specific_decl_iterator ctor_iterator; typedef llvm::iterator_range> ctor_range; ctor_range ctors() const { return ctor_range(ctor_begin(), ctor_end()); } ctor_iterator ctor_begin() const { return ctor_iterator(decls_begin()); } ctor_iterator ctor_end() const { return ctor_iterator(decls_end()); } /// An iterator over friend declarations. All of these are defined /// in DeclFriend.h. class friend_iterator; typedef llvm::iterator_range friend_range; friend_range friends() const; friend_iterator friend_begin() const; friend_iterator friend_end() const; void pushFriendDecl(FriendDecl *FD); /// Determines whether this record has any friends. bool hasFriends() const { return data().FirstFriend.isValid(); } + /// \brief \c true if a defaulted copy constructor for this class would be + /// deleted. + bool defaultedCopyConstructorIsDeleted() const { + assert((!needsOverloadResolutionForCopyConstructor() || + (data().DeclaredSpecialMembers & SMF_CopyConstructor)) && + "this property has not yet been computed by Sema"); + return data().DefaultedCopyConstructorIsDeleted; + } + + /// \brief \c true if a defaulted move constructor for this class would be + /// deleted. + bool defaultedMoveConstructorIsDeleted() const { + assert((!needsOverloadResolutionForMoveConstructor() || + (data().DeclaredSpecialMembers & SMF_MoveConstructor)) && + "this property has not yet been computed by Sema"); + return data().DefaultedMoveConstructorIsDeleted; + } + + /// \brief \c true if a defaulted destructor for this class would be deleted. + bool defaultedDestructorIsDeleted() const { + return !data().DefaultedDestructorIsDeleted; + } + + /// \brief \c true if we know for sure that this class has a single, + /// accessible, unambiguous copy constructor that is not deleted. + bool hasSimpleCopyConstructor() const { + return !hasUserDeclaredCopyConstructor() && + !data().DefaultedCopyConstructorIsDeleted; + } + /// \brief \c true if we know for sure that this class has a single, /// accessible, unambiguous move constructor that is not deleted. bool hasSimpleMoveConstructor() const { return !hasUserDeclaredMoveConstructor() && hasMoveConstructor() && !data().DefaultedMoveConstructorIsDeleted; } + /// \brief \c true if we know for sure that this class has a single, /// accessible, unambiguous move assignment operator that is not deleted. bool hasSimpleMoveAssignment() const { return !hasUserDeclaredMoveAssignment() && hasMoveAssignment() && !data().DefaultedMoveAssignmentIsDeleted; } + /// \brief \c true if we know for sure that this class has an accessible /// destructor that is not deleted. bool hasSimpleDestructor() const { return !hasUserDeclaredDestructor() && !data().DefaultedDestructorIsDeleted; } /// \brief Determine whether this class has any default constructors. bool hasDefaultConstructor() const { return (data().DeclaredSpecialMembers & SMF_DefaultConstructor) || needsImplicitDefaultConstructor(); } /// \brief Determine if we need to declare a default constructor for /// this class. 
/// /// This value is used for lazy creation of default constructors. bool needsImplicitDefaultConstructor() const { return !data().UserDeclaredConstructor && !(data().DeclaredSpecialMembers & SMF_DefaultConstructor) && // C++14 [expr.prim.lambda]p20: // The closure type associated with a lambda-expression has no // default constructor. !isLambda(); } /// \brief Determine whether this class has any user-declared constructors. /// /// When true, a default constructor will not be implicitly declared. bool hasUserDeclaredConstructor() const { return data().UserDeclaredConstructor; } /// \brief Whether this class has a user-provided default constructor /// per C++11. bool hasUserProvidedDefaultConstructor() const { return data().UserProvidedDefaultConstructor; } /// \brief Determine whether this class has a user-declared copy constructor. /// /// When false, a copy constructor will be implicitly declared. bool hasUserDeclaredCopyConstructor() const { return data().UserDeclaredSpecialMembers & SMF_CopyConstructor; } /// \brief Determine whether this class needs an implicit copy /// constructor to be lazily declared. bool needsImplicitCopyConstructor() const { return !(data().DeclaredSpecialMembers & SMF_CopyConstructor); } /// \brief Determine whether we need to eagerly declare a defaulted copy /// constructor for this class. bool needsOverloadResolutionForCopyConstructor() const { - return data().HasMutableFields; + // C++17 [class.copy.ctor]p6: + // If the class definition declares a move constructor or move assignment + // operator, the implicitly declared copy constructor is defined as + // deleted. + // In MSVC mode, sometimes a declared move assignment does not delete an + // implicit copy constructor, so defer this choice to Sema. + if (data().UserDeclaredSpecialMembers & + (SMF_MoveConstructor | SMF_MoveAssignment)) + return true; + return data().NeedOverloadResolutionForCopyConstructor; } /// \brief Determine whether an implicit copy constructor for this type /// would have a parameter with a const-qualified reference type. bool implicitCopyConstructorHasConstParam() const { return data().ImplicitCopyConstructorCanHaveConstParamForNonVBase && (isAbstract() || data().ImplicitCopyConstructorCanHaveConstParamForVBase); } /// \brief Determine whether this class has a copy constructor with /// a parameter type which is a reference to a const-qualified type. bool hasCopyConstructorWithConstParam() const { return data().HasDeclaredCopyConstructorWithConstParam || (needsImplicitCopyConstructor() && implicitCopyConstructorHasConstParam()); } /// \brief Whether this class has a user-declared move constructor or /// assignment operator. /// /// When false, a move constructor and assignment operator may be /// implicitly declared. bool hasUserDeclaredMoveOperation() const { return data().UserDeclaredSpecialMembers & (SMF_MoveConstructor | SMF_MoveAssignment); } /// \brief Determine whether this class has had a move constructor /// declared by the user. bool hasUserDeclaredMoveConstructor() const { return data().UserDeclaredSpecialMembers & SMF_MoveConstructor; } /// \brief Determine whether this class has a move constructor. bool hasMoveConstructor() const { return (data().DeclaredSpecialMembers & SMF_MoveConstructor) || needsImplicitMoveConstructor(); } - /// \brief Set that we attempted to declare an implicitly move + /// \brief Set that we attempted to declare an implicit copy + /// constructor, but overload resolution failed so we deleted it. 
+ void setImplicitCopyConstructorIsDeleted() { + assert((data().DefaultedCopyConstructorIsDeleted || + needsOverloadResolutionForCopyConstructor()) && + "Copy constructor should not be deleted"); + data().DefaultedCopyConstructorIsDeleted = true; + } + + /// \brief Set that we attempted to declare an implicit move /// constructor, but overload resolution failed so we deleted it. void setImplicitMoveConstructorIsDeleted() { assert((data().DefaultedMoveConstructorIsDeleted || needsOverloadResolutionForMoveConstructor()) && "move constructor should not be deleted"); data().DefaultedMoveConstructorIsDeleted = true; } /// \brief Determine whether this class should get an implicit move /// constructor or if any existing special member function inhibits this. bool needsImplicitMoveConstructor() const { return !(data().DeclaredSpecialMembers & SMF_MoveConstructor) && !hasUserDeclaredCopyConstructor() && !hasUserDeclaredCopyAssignment() && !hasUserDeclaredMoveAssignment() && !hasUserDeclaredDestructor(); } /// \brief Determine whether we need to eagerly declare a defaulted move /// constructor for this class. bool needsOverloadResolutionForMoveConstructor() const { return data().NeedOverloadResolutionForMoveConstructor; } /// \brief Determine whether this class has a user-declared copy assignment /// operator. /// /// When false, a copy assigment operator will be implicitly declared. bool hasUserDeclaredCopyAssignment() const { return data().UserDeclaredSpecialMembers & SMF_CopyAssignment; } /// \brief Determine whether this class needs an implicit copy /// assignment operator to be lazily declared. bool needsImplicitCopyAssignment() const { return !(data().DeclaredSpecialMembers & SMF_CopyAssignment); } /// \brief Determine whether we need to eagerly declare a defaulted copy /// assignment operator for this class. bool needsOverloadResolutionForCopyAssignment() const { return data().HasMutableFields; } /// \brief Determine whether an implicit copy assignment operator for this /// type would have a parameter with a const-qualified reference type. bool implicitCopyAssignmentHasConstParam() const { return data().ImplicitCopyAssignmentHasConstParam; } /// \brief Determine whether this class has a copy assignment operator with /// a parameter type which is a reference to a const-qualified type or is not /// a reference. bool hasCopyAssignmentWithConstParam() const { return data().HasDeclaredCopyAssignmentWithConstParam || (needsImplicitCopyAssignment() && implicitCopyAssignmentHasConstParam()); } /// \brief Determine whether this class has had a move assignment /// declared by the user. bool hasUserDeclaredMoveAssignment() const { return data().UserDeclaredSpecialMembers & SMF_MoveAssignment; } /// \brief Determine whether this class has a move assignment operator. bool hasMoveAssignment() const { return (data().DeclaredSpecialMembers & SMF_MoveAssignment) || needsImplicitMoveAssignment(); } /// \brief Set that we attempted to declare an implicit move assignment /// operator, but overload resolution failed so we deleted it. void setImplicitMoveAssignmentIsDeleted() { assert((data().DefaultedMoveAssignmentIsDeleted || needsOverloadResolutionForMoveAssignment()) && "move assignment should not be deleted"); data().DefaultedMoveAssignmentIsDeleted = true; } /// \brief Determine whether this class should get an implicit move /// assignment operator or if any existing special member function inhibits /// this. 
bool needsImplicitMoveAssignment() const { return !(data().DeclaredSpecialMembers & SMF_MoveAssignment) && !hasUserDeclaredCopyConstructor() && !hasUserDeclaredCopyAssignment() && !hasUserDeclaredMoveConstructor() && !hasUserDeclaredDestructor() && // C++1z [expr.prim.lambda]p21: "the closure type has a deleted copy // assignment operator". The intent is that this counts as a user // declared copy assignment, but we do not model it that way. !isLambda(); } /// \brief Determine whether we need to eagerly declare a move assignment /// operator for this class. bool needsOverloadResolutionForMoveAssignment() const { return data().NeedOverloadResolutionForMoveAssignment; } /// \brief Determine whether this class has a user-declared destructor. /// /// When false, a destructor will be implicitly declared. bool hasUserDeclaredDestructor() const { return data().UserDeclaredSpecialMembers & SMF_Destructor; } /// \brief Determine whether this class needs an implicit destructor to /// be lazily declared. bool needsImplicitDestructor() const { return !(data().DeclaredSpecialMembers & SMF_Destructor); } /// \brief Determine whether we need to eagerly declare a destructor for this /// class. bool needsOverloadResolutionForDestructor() const { return data().NeedOverloadResolutionForDestructor; } /// \brief Determine whether this class describes a lambda function object. bool isLambda() const { // An update record can't turn a non-lambda into a lambda. auto *DD = DefinitionData; return DD && DD->IsLambda; } /// \brief Determine whether this class describes a generic /// lambda function object (i.e. function call operator is /// a template). bool isGenericLambda() const; /// \brief Retrieve the lambda call operator of the closure type /// if this is a closure type. CXXMethodDecl *getLambdaCallOperator() const; /// \brief Retrieve the lambda static invoker, the address of which /// is returned by the conversion operator, and the body of which /// is forwarded to the lambda call operator. CXXMethodDecl *getLambdaStaticInvoker() const; /// \brief Retrieve the generic lambda's template parameter list. /// Returns null if the class does not represent a lambda or a generic /// lambda. TemplateParameterList *getGenericLambdaTemplateParameterList() const; LambdaCaptureDefault getLambdaCaptureDefault() const { assert(isLambda()); return static_cast(getLambdaData().CaptureDefault); } /// \brief For a closure type, retrieve the mapping from captured /// variables and \c this to the non-static data members that store the /// values or references of the captures. /// /// \param Captures Will be populated with the mapping from captured /// variables to the corresponding fields. /// /// \param ThisCapture Will be set to the field declaration for the /// \c this capture. /// /// \note No entries will be added for init-captures, as they do not capture /// variables. void getCaptureFields(llvm::DenseMap &Captures, FieldDecl *&ThisCapture) const; typedef const LambdaCapture *capture_const_iterator; typedef llvm::iterator_range capture_const_range; capture_const_range captures() const { return capture_const_range(captures_begin(), captures_end()); } capture_const_iterator captures_begin() const { return isLambda() ? getLambdaData().Captures : nullptr; } capture_const_iterator captures_end() const { return isLambda() ? 
captures_begin() + getLambdaData().NumCaptures : nullptr; } typedef UnresolvedSetIterator conversion_iterator; conversion_iterator conversion_begin() const { return data().Conversions.get(getASTContext()).begin(); } conversion_iterator conversion_end() const { return data().Conversions.get(getASTContext()).end(); } /// Removes a conversion function from this class. The conversion /// function must currently be a member of this class. Furthermore, /// this class must currently be in the process of being defined. void removeConversion(const NamedDecl *Old); /// \brief Get all conversion functions visible in current class, /// including conversion function templates. llvm::iterator_range getVisibleConversionFunctions(); /// Determine whether this class is an aggregate (C++ [dcl.init.aggr]), /// which is a class with no user-declared constructors, no private /// or protected non-static data members, no base classes, and no virtual /// functions (C++ [dcl.init.aggr]p1). bool isAggregate() const { return data().Aggregate; } /// \brief Whether this class has any in-class initializers /// for non-static data members (including those in anonymous unions or /// structs). bool hasInClassInitializer() const { return data().HasInClassInitializer; } /// \brief Whether this class or any of its subobjects has any members of /// reference type which would make value-initialization ill-formed. /// /// Per C++03 [dcl.init]p5: /// - if T is a non-union class type without a user-declared constructor, /// then every non-static data member and base-class component of T is /// value-initialized [...] A program that calls for [...] /// value-initialization of an entity of reference type is ill-formed. bool hasUninitializedReferenceMember() const { return !isUnion() && !hasUserDeclaredConstructor() && data().HasUninitializedReferenceMember; } /// \brief Whether this class is a POD-type (C++ [class]p4) /// /// For purposes of this function a class is POD if it is an aggregate /// that has no non-static non-POD data members, no reference data /// members, no user-defined copy assignment operator and no /// user-defined destructor. /// /// Note that this is the C++ TR1 definition of POD. bool isPOD() const { return data().PlainOldData; } /// \brief True if this class is C-like, without C++-specific features, e.g. /// it contains only public fields, no bases, tag kind is not 'class', etc. bool isCLike() const; /// \brief Determine whether this is an empty class in the sense of /// (C++11 [meta.unary.prop]). /// /// The CXXRecordDecl is a class type, but not a union type, /// with no non-static data members other than bit-fields of length 0, /// no virtual member functions, no virtual base classes, /// and no base class B for which is_empty::value is false. /// /// \note This does NOT include a check for union-ness. bool isEmpty() const { return data().Empty; } /// \brief Determine whether this class has direct non-static data members. bool hasDirectFields() const { auto &D = data(); return D.HasPublicFields || D.HasProtectedFields || D.HasPrivateFields; } /// Whether this class is polymorphic (C++ [class.virtual]), /// which means that the class contains or inherits a virtual function. bool isPolymorphic() const { return data().Polymorphic; } /// \brief Determine whether this class has a pure virtual function. /// /// The class is is abstract per (C++ [class.abstract]p2) if it declares /// a pure virtual function or inherits a pure virtual function that is /// not overridden. 
bool isAbstract() const { return data().Abstract; } /// \brief Determine whether this class has standard layout per /// (C++ [class]p7) bool isStandardLayout() const { return data().IsStandardLayout; } /// \brief Determine whether this class, or any of its class subobjects, /// contains a mutable field. bool hasMutableFields() const { return data().HasMutableFields; } /// \brief Determine whether this class has any variant members. bool hasVariantMembers() const { return data().HasVariantMembers; } /// \brief Determine whether this class has a trivial default constructor /// (C++11 [class.ctor]p5). bool hasTrivialDefaultConstructor() const { return hasDefaultConstructor() && (data().HasTrivialSpecialMembers & SMF_DefaultConstructor); } /// \brief Determine whether this class has a non-trivial default constructor /// (C++11 [class.ctor]p5). bool hasNonTrivialDefaultConstructor() const { return (data().DeclaredNonTrivialSpecialMembers & SMF_DefaultConstructor) || (needsImplicitDefaultConstructor() && !(data().HasTrivialSpecialMembers & SMF_DefaultConstructor)); } /// \brief Determine whether this class has at least one constexpr constructor /// other than the copy or move constructors. bool hasConstexprNonCopyMoveConstructor() const { return data().HasConstexprNonCopyMoveConstructor || (needsImplicitDefaultConstructor() && defaultedDefaultConstructorIsConstexpr()); } /// \brief Determine whether a defaulted default constructor for this class /// would be constexpr. bool defaultedDefaultConstructorIsConstexpr() const { return data().DefaultedDefaultConstructorIsConstexpr && (!isUnion() || hasInClassInitializer() || !hasVariantMembers()); } /// \brief Determine whether this class has a constexpr default constructor. bool hasConstexprDefaultConstructor() const { return data().HasConstexprDefaultConstructor || (needsImplicitDefaultConstructor() && defaultedDefaultConstructorIsConstexpr()); } /// \brief Determine whether this class has a trivial copy constructor /// (C++ [class.copy]p6, C++11 [class.copy]p12) bool hasTrivialCopyConstructor() const { return data().HasTrivialSpecialMembers & SMF_CopyConstructor; } /// \brief Determine whether this class has a non-trivial copy constructor /// (C++ [class.copy]p6, C++11 [class.copy]p12) bool hasNonTrivialCopyConstructor() const { return data().DeclaredNonTrivialSpecialMembers & SMF_CopyConstructor || !hasTrivialCopyConstructor(); } /// \brief Determine whether this class has a trivial move constructor /// (C++11 [class.copy]p12) bool hasTrivialMoveConstructor() const { return hasMoveConstructor() && (data().HasTrivialSpecialMembers & SMF_MoveConstructor); } /// \brief Determine whether this class has a non-trivial move constructor /// (C++11 [class.copy]p12) bool hasNonTrivialMoveConstructor() const { return (data().DeclaredNonTrivialSpecialMembers & SMF_MoveConstructor) || (needsImplicitMoveConstructor() && !(data().HasTrivialSpecialMembers & SMF_MoveConstructor)); } /// \brief Determine whether this class has a trivial copy assignment operator /// (C++ [class.copy]p11, C++11 [class.copy]p25) bool hasTrivialCopyAssignment() const { return data().HasTrivialSpecialMembers & SMF_CopyAssignment; } /// \brief Determine whether this class has a non-trivial copy assignment /// operator (C++ [class.copy]p11, C++11 [class.copy]p25) bool hasNonTrivialCopyAssignment() const { return data().DeclaredNonTrivialSpecialMembers & SMF_CopyAssignment || !hasTrivialCopyAssignment(); } /// \brief Determine whether this class has a trivial move assignment operator /// 
(C++11 [class.copy]p25)
  bool hasTrivialMoveAssignment() const {
    return hasMoveAssignment() &&
           (data().HasTrivialSpecialMembers & SMF_MoveAssignment);
  }

  /// \brief Determine whether this class has a non-trivial move assignment
  /// operator (C++11 [class.copy]p25)
  bool hasNonTrivialMoveAssignment() const {
    return (data().DeclaredNonTrivialSpecialMembers & SMF_MoveAssignment) ||
           (needsImplicitMoveAssignment() &&
            !(data().HasTrivialSpecialMembers & SMF_MoveAssignment));
  }

  /// \brief Determine whether this class has a trivial destructor
  /// (C++ [class.dtor]p3)
  bool hasTrivialDestructor() const {
    return data().HasTrivialSpecialMembers & SMF_Destructor;
  }

  /// \brief Determine whether this class has a non-trivial destructor
  /// (C++ [class.dtor]p3)
  bool hasNonTrivialDestructor() const {
    return !(data().HasTrivialSpecialMembers & SMF_Destructor);
  }

  /// \brief Determine whether declaring a const variable with this type is ok
  /// per core issue 253.
  bool allowConstDefaultInit() const {
    return !data().HasUninitializedFields ||
           !(data().HasDefaultedDefaultConstructor ||
             needsImplicitDefaultConstructor());
  }

  /// \brief Determine whether this class has a destructor which has no
  /// semantic effect.
  ///
  /// Any such destructor will be trivial, public, defaulted and not deleted,
  /// and will call only irrelevant destructors.
  bool hasIrrelevantDestructor() const {
    return data().HasIrrelevantDestructor;
  }

+  /// \brief Determine whether this class has at least one trivial, non-deleted
+  /// copy or move constructor.
+  bool canPassInRegisters() const {
+    return data().CanPassInRegisters;
+  }
+
+  /// \brief Set that we can pass this RecordDecl in registers.
+  // FIXME: This should be set as part of completeDefinition.
+  void setCanPassInRegisters(bool CanPass) {
+    data().CanPassInRegisters = CanPass;
+  }
+
  /// \brief Determine whether this class has a non-literal or volatile type
  /// non-static data member or base class.
  bool hasNonLiteralTypeFieldsOrBases() const {
    return data().HasNonLiteralTypeFieldsOrBases;
  }

  /// \brief Determine whether this class has a using-declaration that names
  /// a user-declared base class constructor.
  bool hasInheritedConstructor() const {
    return data().HasInheritedConstructor;
  }

  /// \brief Determine whether this class has a using-declaration that names
  /// a base class assignment operator.
  bool hasInheritedAssignment() const {
    return data().HasInheritedAssignment;
  }

  /// \brief Determine whether this class is considered trivially copyable per
  /// (C++11 [class]p6).
  bool isTriviallyCopyable() const;

  /// \brief Determine whether this class is considered trivial.
  ///
  /// C++11 [class]p6:
  /// "A trivial class is a class that has a trivial default constructor and
  /// is trivially copyable."
  bool isTrivial() const {
    return isTriviallyCopyable() && hasTrivialDefaultConstructor();
  }

  /// \brief Determine whether this class is a literal type.
  ///
  /// C++11 [basic.types]p10:
  /// A class type that has all the following properties:
  /// - it has a trivial destructor
  /// - every constructor call and full-expression in the
  ///   brace-or-equal-initializers for non-static data members (if any) is
  ///   a constant expression.
  /// - it is an aggregate type or has at least one constexpr constructor
  ///   or constructor template that is not a copy or move constructor, and
  /// - all of its non-static data members and base classes are of literal
  ///   types
  ///
  /// We resolve DR1361 by ignoring the second bullet. We resolve DR1452 by
  /// treating types with trivial default constructors as literal types.
/// /// Only in C++1z and beyond, are lambdas literal types. bool isLiteral() const { return hasTrivialDestructor() && (!isLambda() || getASTContext().getLangOpts().CPlusPlus1z) && !hasNonLiteralTypeFieldsOrBases() && (isAggregate() || isLambda() || hasConstexprNonCopyMoveConstructor() || hasTrivialDefaultConstructor()); } /// \brief If this record is an instantiation of a member class, /// retrieves the member class from which it was instantiated. /// /// This routine will return non-null for (non-templated) member /// classes of class templates. For example, given: /// /// \code /// template /// struct X { /// struct A { }; /// }; /// \endcode /// /// The declaration for X::A is a (non-templated) CXXRecordDecl /// whose parent is the class template specialization X. For /// this declaration, getInstantiatedFromMemberClass() will return /// the CXXRecordDecl X::A. When a complete definition of /// X::A is required, it will be instantiated from the /// declaration returned by getInstantiatedFromMemberClass(). CXXRecordDecl *getInstantiatedFromMemberClass() const; /// \brief If this class is an instantiation of a member class of a /// class template specialization, retrieves the member specialization /// information. MemberSpecializationInfo *getMemberSpecializationInfo() const; /// \brief Specify that this record is an instantiation of the /// member class \p RD. void setInstantiationOfMemberClass(CXXRecordDecl *RD, TemplateSpecializationKind TSK); /// \brief Retrieves the class template that is described by this /// class declaration. /// /// Every class template is represented as a ClassTemplateDecl and a /// CXXRecordDecl. The former contains template properties (such as /// the template parameter lists) while the latter contains the /// actual description of the template's /// contents. ClassTemplateDecl::getTemplatedDecl() retrieves the /// CXXRecordDecl that from a ClassTemplateDecl, while /// getDescribedClassTemplate() retrieves the ClassTemplateDecl from /// a CXXRecordDecl. ClassTemplateDecl *getDescribedClassTemplate() const; void setDescribedClassTemplate(ClassTemplateDecl *Template); /// \brief Determine whether this particular class is a specialization or /// instantiation of a class template or member class of a class template, /// and how it was instantiated or specialized. TemplateSpecializationKind getTemplateSpecializationKind() const; /// \brief Set the kind of specialization or template instantiation this is. void setTemplateSpecializationKind(TemplateSpecializationKind TSK); /// \brief Retrieve the record declaration from which this record could be /// instantiated. Returns null if this class is not a template instantiation. const CXXRecordDecl *getTemplateInstantiationPattern() const; CXXRecordDecl *getTemplateInstantiationPattern() { return const_cast(const_cast(this) ->getTemplateInstantiationPattern()); } /// \brief Returns the destructor decl for this class. CXXDestructorDecl *getDestructor() const; /// \brief Returns true if the class destructor, or any implicitly invoked /// destructors are marked noreturn. bool isAnyDestructorNoReturn() const; /// \brief If the class is a local class [class.local], returns /// the enclosing function declaration. 
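  ///
  /// For example (sketch only; the helper name is an illustrative
  /// assumption):
  /// \code
  ///   // True for classes defined inside a function body.
  ///   bool isFunctionLocalClass(const clang::CXXRecordDecl *RD) {
  ///     return RD->isLocalClass() != nullptr;
  ///   }
  /// \endcode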
const FunctionDecl *isLocalClass() const { if (const CXXRecordDecl *RD = dyn_cast(getDeclContext())) return RD->isLocalClass(); return dyn_cast(getDeclContext()); } FunctionDecl *isLocalClass() { return const_cast( const_cast(this)->isLocalClass()); } /// \brief Determine whether this dependent class is a current instantiation, /// when viewed from within the given context. bool isCurrentInstantiation(const DeclContext *CurContext) const; /// \brief Determine whether this class is derived from the class \p Base. /// /// This routine only determines whether this class is derived from \p Base, /// but does not account for factors that may make a Derived -> Base class /// ill-formed, such as private/protected inheritance or multiple, ambiguous /// base class subobjects. /// /// \param Base the base class we are searching for. /// /// \returns true if this class is derived from Base, false otherwise. bool isDerivedFrom(const CXXRecordDecl *Base) const; /// \brief Determine whether this class is derived from the type \p Base. /// /// This routine only determines whether this class is derived from \p Base, /// but does not account for factors that may make a Derived -> Base class /// ill-formed, such as private/protected inheritance or multiple, ambiguous /// base class subobjects. /// /// \param Base the base class we are searching for. /// /// \param Paths will contain the paths taken from the current class to the /// given \p Base class. /// /// \returns true if this class is derived from \p Base, false otherwise. /// /// \todo add a separate parameter to configure IsDerivedFrom, rather than /// tangling input and output in \p Paths bool isDerivedFrom(const CXXRecordDecl *Base, CXXBasePaths &Paths) const; /// \brief Determine whether this class is virtually derived from /// the class \p Base. /// /// This routine only determines whether this class is virtually /// derived from \p Base, but does not account for factors that may /// make a Derived -> Base class ill-formed, such as /// private/protected inheritance or multiple, ambiguous base class /// subobjects. /// /// \param Base the base class we are searching for. /// /// \returns true if this class is virtually derived from Base, /// false otherwise. bool isVirtuallyDerivedFrom(const CXXRecordDecl *Base) const; /// \brief Determine whether this class is provably not derived from /// the type \p Base. bool isProvablyNotDerivedFrom(const CXXRecordDecl *Base) const; /// \brief Function type used by forallBases() as a callback. /// /// \param BaseDefinition the definition of the base class /// /// \returns true if this base matched the search criteria typedef llvm::function_ref ForallBasesCallback; /// \brief Determines if the given callback holds for all the direct /// or indirect base classes of this type. /// /// The class itself does not count as a base class. This routine /// returns false if the class has non-computable base classes. /// /// \param BaseMatches Callback invoked for each (direct or indirect) base /// class of this type, or if \p AllowShortCircuit is true then until a call /// returns false. /// /// \param AllowShortCircuit if false, forces the callback to be called /// for every base class, even if a dependent or non-matching base was /// found. bool forallBases(ForallBasesCallback BaseMatches, bool AllowShortCircuit = true) const; /// \brief Function type used by lookupInBases() to determine whether a /// specific base class subobject matches the lookup criteria. 
/// /// \param Specifier the base-class specifier that describes the inheritance /// from the base class we are trying to match. /// /// \param Path the current path, from the most-derived class down to the /// base named by the \p Specifier. /// /// \returns true if this base matched the search criteria, false otherwise. typedef llvm::function_ref BaseMatchesCallback; /// \brief Look for entities within the base classes of this C++ class, /// transitively searching all base class subobjects. /// /// This routine uses the callback function \p BaseMatches to find base /// classes meeting some search criteria, walking all base class subobjects /// and populating the given \p Paths structure with the paths through the /// inheritance hierarchy that resulted in a match. On a successful search, /// the \p Paths structure can be queried to retrieve the matching paths and /// to determine if there were any ambiguities. /// /// \param BaseMatches callback function used to determine whether a given /// base matches the user-defined search criteria. /// /// \param Paths used to record the paths from this class to its base class /// subobjects that match the search criteria. /// /// \param LookupInDependent can be set to true to extend the search to /// dependent base classes. /// /// \returns true if there exists any path from this class to a base class /// subobject that matches the search criteria. bool lookupInBases(BaseMatchesCallback BaseMatches, CXXBasePaths &Paths, bool LookupInDependent = false) const; /// \brief Base-class lookup callback that determines whether the given /// base class specifier refers to a specific class declaration. /// /// This callback can be used with \c lookupInBases() to determine whether /// a given derived class has is a base class subobject of a particular type. /// The base record pointer should refer to the canonical CXXRecordDecl of the /// base class that we are searching for. static bool FindBaseClass(const CXXBaseSpecifier *Specifier, CXXBasePath &Path, const CXXRecordDecl *BaseRecord); /// \brief Base-class lookup callback that determines whether the /// given base class specifier refers to a specific class /// declaration and describes virtual derivation. /// /// This callback can be used with \c lookupInBases() to determine /// whether a given derived class has is a virtual base class /// subobject of a particular type. The base record pointer should /// refer to the canonical CXXRecordDecl of the base class that we /// are searching for. static bool FindVirtualBaseClass(const CXXBaseSpecifier *Specifier, CXXBasePath &Path, const CXXRecordDecl *BaseRecord); /// \brief Base-class lookup callback that determines whether there exists /// a tag with the given name. /// /// This callback can be used with \c lookupInBases() to find tag members /// of the given name within a C++ class hierarchy. static bool FindTagMember(const CXXBaseSpecifier *Specifier, CXXBasePath &Path, DeclarationName Name); /// \brief Base-class lookup callback that determines whether there exists /// a member with the given name. /// /// This callback can be used with \c lookupInBases() to find members /// of the given name within a C++ class hierarchy. static bool FindOrdinaryMember(const CXXBaseSpecifier *Specifier, CXXBasePath &Path, DeclarationName Name); /// \brief Base-class lookup callback that determines whether there exists /// a member with the given name. 
/// /// This callback can be used with \c lookupInBases() to find members /// of the given name within a C++ class hierarchy, including dependent /// classes. static bool FindOrdinaryMemberInDependentClasses(const CXXBaseSpecifier *Specifier, CXXBasePath &Path, DeclarationName Name); /// \brief Base-class lookup callback that determines whether there exists /// an OpenMP declare reduction member with the given name. /// /// This callback can be used with \c lookupInBases() to find members /// of the given name within a C++ class hierarchy. static bool FindOMPReductionMember(const CXXBaseSpecifier *Specifier, CXXBasePath &Path, DeclarationName Name); /// \brief Base-class lookup callback that determines whether there exists /// a member with the given name that can be used in a nested-name-specifier. /// /// This callback can be used with \c lookupInBases() to find members of /// the given name within a C++ class hierarchy that can occur within /// nested-name-specifiers. static bool FindNestedNameSpecifierMember(const CXXBaseSpecifier *Specifier, CXXBasePath &Path, DeclarationName Name); /// \brief Retrieve the final overriders for each virtual member /// function in the class hierarchy where this class is the /// most-derived class in the class hierarchy. void getFinalOverriders(CXXFinalOverriderMap &FinaOverriders) const; /// \brief Get the indirect primary bases for this class. void getIndirectPrimaryBases(CXXIndirectPrimaryBaseSet& Bases) const; /// Performs an imprecise lookup of a dependent name in this class. /// /// This function does not follow strict semantic rules and should be used /// only when lookup rules can be relaxed, e.g. indexing. std::vector lookupDependentName(const DeclarationName &Name, llvm::function_ref Filter); /// Renders and displays an inheritance diagram /// for this C++ class and all of its base classes (transitively) using /// GraphViz. void viewInheritance(ASTContext& Context) const; /// \brief Calculates the access of a decl that is reached /// along a path. static AccessSpecifier MergeAccess(AccessSpecifier PathAccess, AccessSpecifier DeclAccess) { assert(DeclAccess != AS_none); if (DeclAccess == AS_private) return AS_none; return (PathAccess > DeclAccess ? PathAccess : DeclAccess); } /// \brief Indicates that the declaration of a defaulted or deleted special /// member function is now complete. void finishedDefaultedOrDeletedMember(CXXMethodDecl *MD); /// \brief Indicates that the definition of this class is now complete. void completeDefinition() override; /// \brief Indicates that the definition of this class is now complete, /// and provides a final overrider map to help determine /// /// \param FinalOverriders The final overrider map for this class, which can /// be provided as an optimization for abstract-class checking. If NULL, /// final overriders will be computed if they are needed to complete the /// definition. void completeDefinition(CXXFinalOverriderMap *FinalOverriders); /// \brief Determine whether this class may end up being abstract, even though /// it is not yet known to be abstract. /// /// \returns true if this class is not known to be abstract but has any /// base classes that are abstract. In this case, \c completeDefinition() /// will need to compute final overriders to determine whether the class is /// actually abstract. bool mayBeAbstract() const; /// \brief If this is the closure type of a lambda expression, retrieve the /// number to be used for name mangling in the Itanium C++ ABI. 
/// /// Zero indicates that this closure type has internal linkage, so the /// mangling number does not matter, while a non-zero value indicates which /// lambda expression this is in this particular context. unsigned getLambdaManglingNumber() const { assert(isLambda() && "Not a lambda closure type!"); return getLambdaData().ManglingNumber; } /// \brief Retrieve the declaration that provides additional context for a /// lambda, when the normal declaration context is not specific enough. /// /// Certain contexts (default arguments of in-class function parameters and /// the initializers of data members) have separate name mangling rules for /// lambdas within the Itanium C++ ABI. For these cases, this routine provides /// the declaration in which the lambda occurs, e.g., the function parameter /// or the non-static data member. Otherwise, it returns NULL to imply that /// the declaration context suffices. Decl *getLambdaContextDecl() const; /// \brief Set the mangling number and context declaration for a lambda /// class. void setLambdaMangling(unsigned ManglingNumber, Decl *ContextDecl) { getLambdaData().ManglingNumber = ManglingNumber; getLambdaData().ContextDecl = ContextDecl; } /// \brief Returns the inheritance model used for this record. MSInheritanceAttr::Spelling getMSInheritanceModel() const; /// \brief Calculate what the inheritance model would be for this class. MSInheritanceAttr::Spelling calculateInheritanceModel() const; /// In the Microsoft C++ ABI, use zero for the field offset of a null data /// member pointer if we can guarantee that zero is not a valid field offset, /// or if the member pointer has multiple fields. Polymorphic classes have a /// vfptr at offset zero, so we can use zero for null. If there are multiple /// fields, we can use zero even if it is a valid field offset because /// null-ness testing will check the other fields. bool nullFieldOffsetIsZero() const { return !MSInheritanceAttr::hasOnlyOneField(/*IsMemberFunction=*/false, getMSInheritanceModel()) || (hasDefinition() && isPolymorphic()); } /// \brief Controls when vtordisps will be emitted if this record is used as a /// virtual base. MSVtorDispAttr::Mode getMSVtorDispMode() const; /// \brief Determine whether this lambda expression was known to be dependent /// at the time it was created, even if its context does not appear to be /// dependent. /// /// This flag is a workaround for an issue with parsing, where default /// arguments are parsed before their enclosing function declarations have /// been created. This means that any lambda expressions within those /// default arguments will have as their DeclContext the context enclosing /// the function declaration, which may be non-dependent even when the /// function declaration itself is dependent. This flag indicates when we /// know that the lambda is dependent despite that. bool isDependentLambda() const { return isLambda() && getLambdaData().Dependent; } TypeSourceInfo *getLambdaTypeInfo() const { return getLambdaData().MethodTyInfo; } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K >= firstCXXRecord && K <= lastCXXRecord; } friend class ASTDeclReader; friend class ASTDeclWriter; friend class ASTRecordWriter; friend class ASTReader; friend class ASTWriter; }; /// \brief Represents a C++ deduction guide declaration. 
/// /// \code /// template struct A { A(); A(T); }; /// A() -> A; /// \endcode /// /// In this example, there will be an explicit deduction guide from the /// second line, and implicit deduction guide templates synthesized from /// the constructors of \c A. class CXXDeductionGuideDecl : public FunctionDecl { void anchor() override; private: CXXDeductionGuideDecl(ASTContext &C, DeclContext *DC, SourceLocation StartLoc, bool IsExplicit, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, SourceLocation EndLocation) : FunctionDecl(CXXDeductionGuide, C, DC, StartLoc, NameInfo, T, TInfo, SC_None, false, false) { if (EndLocation.isValid()) setRangeEnd(EndLocation); IsExplicitSpecified = IsExplicit; } public: static CXXDeductionGuideDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation StartLoc, bool IsExplicit, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, SourceLocation EndLocation); static CXXDeductionGuideDecl *CreateDeserialized(ASTContext &C, unsigned ID); /// Whether this deduction guide is explicit. bool isExplicit() const { return IsExplicitSpecified; } /// Whether this deduction guide was declared with the 'explicit' specifier. bool isExplicitSpecified() const { return IsExplicitSpecified; } /// Get the template for which this guide performs deduction. TemplateDecl *getDeducedTemplate() const { return getDeclName().getCXXDeductionGuideTemplate(); } // Implement isa/cast/dyncast/etc. static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == CXXDeductionGuide; } friend class ASTDeclReader; friend class ASTDeclWriter; }; /// \brief Represents a static or instance method of a struct/union/class. /// /// In the terminology of the C++ Standard, these are the (static and /// non-static) member functions, whether virtual or not. class CXXMethodDecl : public FunctionDecl { void anchor() override; protected: CXXMethodDecl(Kind DK, ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, StorageClass SC, bool isInline, bool isConstexpr, SourceLocation EndLocation) : FunctionDecl(DK, C, RD, StartLoc, NameInfo, T, TInfo, SC, isInline, isConstexpr) { if (EndLocation.isValid()) setRangeEnd(EndLocation); } public: static CXXMethodDecl *Create(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, StorageClass SC, bool isInline, bool isConstexpr, SourceLocation EndLocation); static CXXMethodDecl *CreateDeserialized(ASTContext &C, unsigned ID); bool isStatic() const; bool isInstance() const { return !isStatic(); } /// Returns true if the given operator is implicitly static in a record /// context. static bool isStaticOverloadedOperator(OverloadedOperatorKind OOK) { // [class.free]p1: // Any allocation function for a class T is a static member // (even if not explicitly declared static). // [class.free]p6 Any deallocation function for a class X is a static member // (even if not explicitly declared static). 
return OOK == OO_New || OOK == OO_Array_New || OOK == OO_Delete || OOK == OO_Array_Delete; } bool isConst() const { return getType()->castAs()->isConst(); } bool isVolatile() const { return getType()->castAs()->isVolatile(); } bool isVirtual() const { CXXMethodDecl *CD = cast(const_cast(this)->getCanonicalDecl()); // Member function is virtual if it is marked explicitly so, or if it is // declared in __interface -- then it is automatically pure virtual. if (CD->isVirtualAsWritten() || CD->isPure()) return true; return (CD->begin_overridden_methods() != CD->end_overridden_methods()); } /// If it's possible to devirtualize a call to this method, return the called /// function. Otherwise, return null. /// \param Base The object on which this virtual function is called. /// \param IsAppleKext True if we are compiling for Apple kext. CXXMethodDecl *getDevirtualizedMethod(const Expr *Base, bool IsAppleKext); const CXXMethodDecl *getDevirtualizedMethod(const Expr *Base, bool IsAppleKext) const { return const_cast(this)->getDevirtualizedMethod( Base, IsAppleKext); } /// \brief Determine whether this is a usual deallocation function /// (C++ [basic.stc.dynamic.deallocation]p2), which is an overloaded /// delete or delete[] operator with a particular signature. bool isUsualDeallocationFunction() const; /// \brief Determine whether this is a copy-assignment operator, regardless /// of whether it was declared implicitly or explicitly. bool isCopyAssignmentOperator() const; /// \brief Determine whether this is a move assignment operator. bool isMoveAssignmentOperator() const; CXXMethodDecl *getCanonicalDecl() override { return cast(FunctionDecl::getCanonicalDecl()); } const CXXMethodDecl *getCanonicalDecl() const { return const_cast(this)->getCanonicalDecl(); } CXXMethodDecl *getMostRecentDecl() { return cast( static_cast(this)->getMostRecentDecl()); } const CXXMethodDecl *getMostRecentDecl() const { return const_cast(this)->getMostRecentDecl(); } /// True if this method is user-declared and was not /// deleted or defaulted on its first declaration. bool isUserProvided() const { return !(isDeleted() || getCanonicalDecl()->isDefaulted()); } /// void addOverriddenMethod(const CXXMethodDecl *MD); typedef const CXXMethodDecl *const* method_iterator; method_iterator begin_overridden_methods() const; method_iterator end_overridden_methods() const; unsigned size_overridden_methods() const; typedef ASTContext::overridden_method_range overridden_method_range; overridden_method_range overridden_methods() const; /// Returns the parent of this method declaration, which /// is the class in which this method is defined. const CXXRecordDecl *getParent() const { return cast(FunctionDecl::getParent()); } /// Returns the parent of this method declaration, which /// is the class in which this method is defined. CXXRecordDecl *getParent() { return const_cast( cast(FunctionDecl::getParent())); } /// \brief Returns the type of the \c this pointer. /// /// Should only be called for instance (i.e., non-static) methods. QualType getThisType(ASTContext &C) const; unsigned getTypeQualifiers() const { return getType()->getAs()->getTypeQuals(); } /// \brief Retrieve the ref-qualifier associated with this method. /// /// In the following example, \c f() has an lvalue ref-qualifier, \c g() /// has an rvalue ref-qualifier, and \c h() has no ref-qualifier. 
/// @code /// struct X { /// void f() &; /// void g() &&; /// void h(); /// }; /// @endcode RefQualifierKind getRefQualifier() const { return getType()->getAs()->getRefQualifier(); } bool hasInlineBody() const; /// \brief Determine whether this is a lambda closure type's static member /// function that is used for the result of the lambda's conversion to /// function pointer (for a lambda with no captures). /// /// The function itself, if used, will have a placeholder body that will be /// supplied by IR generation to either forward to the function call operator /// or clone the function call operator. bool isLambdaStaticInvoker() const; /// \brief Find the method in \p RD that corresponds to this one. /// /// Find if \p RD or one of the classes it inherits from override this method. /// If so, return it. \p RD is assumed to be a subclass of the class defining /// this method (or be the class itself), unless \p MayBeBase is set to true. CXXMethodDecl * getCorrespondingMethodInClass(const CXXRecordDecl *RD, bool MayBeBase = false); const CXXMethodDecl * getCorrespondingMethodInClass(const CXXRecordDecl *RD, bool MayBeBase = false) const { return const_cast(this) ->getCorrespondingMethodInClass(RD, MayBeBase); } // Implement isa/cast/dyncast/etc. static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K >= firstCXXMethod && K <= lastCXXMethod; } }; /// \brief Represents a C++ base or member initializer. /// /// This is part of a constructor initializer that /// initializes one non-static member variable or one base class. For /// example, in the following, both 'A(a)' and 'f(3.14159)' are member /// initializers: /// /// \code /// class A { }; /// class B : public A { /// float f; /// public: /// B(A& a) : A(a), f(3.14159) { } /// }; /// \endcode class CXXCtorInitializer final { /// \brief Either the base class name/delegating constructor type (stored as /// a TypeSourceInfo*), an normal field (FieldDecl), or an anonymous field /// (IndirectFieldDecl*) being initialized. llvm::PointerUnion3 Initializee; /// \brief The source location for the field name or, for a base initializer /// pack expansion, the location of the ellipsis. /// /// In the case of a delegating /// constructor, it will still include the type's source location as the /// Initializee points to the CXXConstructorDecl (to allow loop detection). SourceLocation MemberOrEllipsisLocation; /// \brief The argument used to initialize the base or member, which may /// end up constructing an object (when multiple arguments are involved). Stmt *Init; /// \brief Location of the left paren of the ctor-initializer. SourceLocation LParenLoc; /// \brief Location of the right paren of the ctor-initializer. SourceLocation RParenLoc; /// \brief If the initializee is a type, whether that type makes this /// a delegating initialization. unsigned IsDelegating : 1; /// \brief If the initializer is a base initializer, this keeps track /// of whether the base is virtual or not. unsigned IsVirtual : 1; /// \brief Whether or not the initializer is explicitly written /// in the sources. unsigned IsWritten : 1; /// If IsWritten is true, then this number keeps track of the textual order /// of this initializer in the original sources, counting from 0. unsigned SourceOrder : 13; public: /// \brief Creates a new base-class initializer. 
explicit CXXCtorInitializer(ASTContext &Context, TypeSourceInfo *TInfo, bool IsVirtual, SourceLocation L, Expr *Init, SourceLocation R, SourceLocation EllipsisLoc); /// \brief Creates a new member initializer. explicit CXXCtorInitializer(ASTContext &Context, FieldDecl *Member, SourceLocation MemberLoc, SourceLocation L, Expr *Init, SourceLocation R); /// \brief Creates a new anonymous field initializer. explicit CXXCtorInitializer(ASTContext &Context, IndirectFieldDecl *Member, SourceLocation MemberLoc, SourceLocation L, Expr *Init, SourceLocation R); /// \brief Creates a new delegating initializer. explicit CXXCtorInitializer(ASTContext &Context, TypeSourceInfo *TInfo, SourceLocation L, Expr *Init, SourceLocation R); /// \brief Determine whether this initializer is initializing a base class. bool isBaseInitializer() const { return Initializee.is() && !IsDelegating; } /// \brief Determine whether this initializer is initializing a non-static /// data member. bool isMemberInitializer() const { return Initializee.is(); } bool isAnyMemberInitializer() const { return isMemberInitializer() || isIndirectMemberInitializer(); } bool isIndirectMemberInitializer() const { return Initializee.is(); } /// \brief Determine whether this initializer is an implicit initializer /// generated for a field with an initializer defined on the member /// declaration. /// /// In-class member initializers (also known as "non-static data member /// initializations", NSDMIs) were introduced in C++11. bool isInClassMemberInitializer() const { return Init->getStmtClass() == Stmt::CXXDefaultInitExprClass; } /// \brief Determine whether this initializer is creating a delegating /// constructor. bool isDelegatingInitializer() const { return Initializee.is() && IsDelegating; } /// \brief Determine whether this initializer is a pack expansion. bool isPackExpansion() const { return isBaseInitializer() && MemberOrEllipsisLocation.isValid(); } // \brief For a pack expansion, returns the location of the ellipsis. SourceLocation getEllipsisLoc() const { assert(isPackExpansion() && "Initializer is not a pack expansion"); return MemberOrEllipsisLocation; } /// If this is a base class initializer, returns the type of the /// base class with location information. Otherwise, returns an NULL /// type location. TypeLoc getBaseClassLoc() const; /// If this is a base class initializer, returns the type of the base class. /// Otherwise, returns null. const Type *getBaseClass() const; /// Returns whether the base is virtual or not. bool isBaseVirtual() const { assert(isBaseInitializer() && "Must call this on base initializer!"); return IsVirtual; } /// \brief Returns the declarator information for a base class or delegating /// initializer. TypeSourceInfo *getTypeSourceInfo() const { return Initializee.dyn_cast(); } /// \brief If this is a member initializer, returns the declaration of the /// non-static data member being initialized. Otherwise, returns null. FieldDecl *getMember() const { if (isMemberInitializer()) return Initializee.get(); return nullptr; } FieldDecl *getAnyMember() const { if (isMemberInitializer()) return Initializee.get(); if (isIndirectMemberInitializer()) return Initializee.get()->getAnonField(); return nullptr; } IndirectFieldDecl *getIndirectMember() const { if (isIndirectMemberInitializer()) return Initializee.get(); return nullptr; } SourceLocation getMemberLocation() const { return MemberOrEllipsisLocation; } /// \brief Determine the source location of the initializer. 
SourceLocation getSourceLocation() const; /// \brief Determine the source range covering the entire initializer. SourceRange getSourceRange() const LLVM_READONLY; /// \brief Determine whether this initializer is explicitly written /// in the source code. bool isWritten() const { return IsWritten; } /// \brief Return the source position of the initializer, counting from 0. /// If the initializer was implicit, -1 is returned. int getSourceOrder() const { return IsWritten ? static_cast(SourceOrder) : -1; } /// \brief Set the source order of this initializer. /// /// This can only be called once for each initializer; it cannot be called /// on an initializer having a positive number of (implicit) array indices. /// /// This assumes that the initializer was written in the source code, and /// ensures that isWritten() returns true. void setSourceOrder(int Pos) { assert(!IsWritten && "setSourceOrder() used on implicit initializer"); assert(SourceOrder == 0 && "calling twice setSourceOrder() on the same initializer"); assert(Pos >= 0 && "setSourceOrder() used to make an initializer implicit"); IsWritten = true; SourceOrder = static_cast(Pos); } SourceLocation getLParenLoc() const { return LParenLoc; } SourceLocation getRParenLoc() const { return RParenLoc; } /// \brief Get the initializer. Expr *getInit() const { return static_cast(Init); } }; /// Description of a constructor that was inherited from a base class. class InheritedConstructor { ConstructorUsingShadowDecl *Shadow; CXXConstructorDecl *BaseCtor; public: InheritedConstructor() : Shadow(), BaseCtor() {} InheritedConstructor(ConstructorUsingShadowDecl *Shadow, CXXConstructorDecl *BaseCtor) : Shadow(Shadow), BaseCtor(BaseCtor) {} explicit operator bool() const { return Shadow; } ConstructorUsingShadowDecl *getShadowDecl() const { return Shadow; } CXXConstructorDecl *getConstructor() const { return BaseCtor; } }; /// \brief Represents a C++ constructor within a class. /// /// For example: /// /// \code /// class X { /// public: /// explicit X(int); // represented by a CXXConstructorDecl. /// }; /// \endcode class CXXConstructorDecl final : public CXXMethodDecl, private llvm::TrailingObjects { void anchor() override; /// \name Support for base and member initializers. /// \{ /// \brief The arguments used to initialize the base or member. LazyCXXCtorInitializersPtr CtorInitializers; unsigned NumCtorInitializers : 31; /// \} /// \brief Whether this constructor declaration is an implicitly-declared /// inheriting constructor. 
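  ///
  /// For example, given
  /// \code
  ///   struct Base { Base(int); };
  ///   struct Derived : Base { using Base::Base; };
  /// \endcode
  /// the implicit constructor declaration synthesized to model a use of the
  /// inherited \c Base(int) in \c Derived is what this flag marks.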
unsigned IsInheritingConstructor : 1; CXXConstructorDecl(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, bool isExplicitSpecified, bool isInline, bool isImplicitlyDeclared, bool isConstexpr, InheritedConstructor Inherited) : CXXMethodDecl(CXXConstructor, C, RD, StartLoc, NameInfo, T, TInfo, SC_None, isInline, isConstexpr, SourceLocation()), CtorInitializers(nullptr), NumCtorInitializers(0), IsInheritingConstructor((bool)Inherited) { setImplicit(isImplicitlyDeclared); if (Inherited) *getTrailingObjects() = Inherited; IsExplicitSpecified = isExplicitSpecified; } public: static CXXConstructorDecl *CreateDeserialized(ASTContext &C, unsigned ID, bool InheritsConstructor); static CXXConstructorDecl * Create(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, bool isExplicit, bool isInline, bool isImplicitlyDeclared, bool isConstexpr, InheritedConstructor Inherited = InheritedConstructor()); /// \brief Iterates through the member/base initializer list. typedef CXXCtorInitializer **init_iterator; /// \brief Iterates through the member/base initializer list. typedef CXXCtorInitializer *const *init_const_iterator; typedef llvm::iterator_range init_range; typedef llvm::iterator_range init_const_range; init_range inits() { return init_range(init_begin(), init_end()); } init_const_range inits() const { return init_const_range(init_begin(), init_end()); } /// \brief Retrieve an iterator to the first initializer. init_iterator init_begin() { const auto *ConstThis = this; return const_cast(ConstThis->init_begin()); } /// \brief Retrieve an iterator to the first initializer. init_const_iterator init_begin() const; /// \brief Retrieve an iterator past the last initializer. init_iterator init_end() { return init_begin() + NumCtorInitializers; } /// \brief Retrieve an iterator past the last initializer. init_const_iterator init_end() const { return init_begin() + NumCtorInitializers; } typedef std::reverse_iterator init_reverse_iterator; typedef std::reverse_iterator init_const_reverse_iterator; init_reverse_iterator init_rbegin() { return init_reverse_iterator(init_end()); } init_const_reverse_iterator init_rbegin() const { return init_const_reverse_iterator(init_end()); } init_reverse_iterator init_rend() { return init_reverse_iterator(init_begin()); } init_const_reverse_iterator init_rend() const { return init_const_reverse_iterator(init_begin()); } /// \brief Determine the number of arguments used to initialize the member /// or base. unsigned getNumCtorInitializers() const { return NumCtorInitializers; } void setNumCtorInitializers(unsigned numCtorInitializers) { NumCtorInitializers = numCtorInitializers; } void setCtorInitializers(CXXCtorInitializer **Initializers) { CtorInitializers = Initializers; } /// Whether this function is marked as explicit explicitly. bool isExplicitSpecified() const { return IsExplicitSpecified; } /// Whether this function is explicit. bool isExplicit() const { return getCanonicalDecl()->isExplicitSpecified(); } /// \brief Determine whether this constructor is a delegating constructor. bool isDelegatingConstructor() const { return (getNumCtorInitializers() == 1) && init_begin()[0]->isDelegatingInitializer(); } /// \brief When this constructor delegates to another, retrieve the target. 
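  ///
  /// A minimal sketch of following a delegation chain (the helper name is an
  /// illustrative assumption; well-formed code cannot delegate in a cycle,
  /// but invalid ASTs may, so callers may want an iteration limit):
  /// \code
  ///   const clang::CXXConstructorDecl *
  ///   ultimateTarget(const clang::CXXConstructorDecl *Ctor) {
  ///     while (Ctor && Ctor->isDelegatingConstructor())
  ///       Ctor = Ctor->getTargetConstructor();
  ///     return Ctor;
  ///   }
  /// \endcode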
CXXConstructorDecl *getTargetConstructor() const; /// Whether this constructor is a default /// constructor (C++ [class.ctor]p5), which can be used to /// default-initialize a class of this type. bool isDefaultConstructor() const; /// \brief Whether this constructor is a copy constructor (C++ [class.copy]p2, /// which can be used to copy the class. /// /// \p TypeQuals will be set to the qualifiers on the /// argument type. For example, \p TypeQuals would be set to \c /// Qualifiers::Const for the following copy constructor: /// /// \code /// class X { /// public: /// X(const X&); /// }; /// \endcode bool isCopyConstructor(unsigned &TypeQuals) const; /// Whether this constructor is a copy /// constructor (C++ [class.copy]p2, which can be used to copy the /// class. bool isCopyConstructor() const { unsigned TypeQuals = 0; return isCopyConstructor(TypeQuals); } /// \brief Determine whether this constructor is a move constructor /// (C++11 [class.copy]p3), which can be used to move values of the class. /// /// \param TypeQuals If this constructor is a move constructor, will be set /// to the type qualifiers on the referent of the first parameter's type. bool isMoveConstructor(unsigned &TypeQuals) const; /// \brief Determine whether this constructor is a move constructor /// (C++11 [class.copy]p3), which can be used to move values of the class. bool isMoveConstructor() const { unsigned TypeQuals = 0; return isMoveConstructor(TypeQuals); } /// \brief Determine whether this is a copy or move constructor. /// /// \param TypeQuals Will be set to the type qualifiers on the reference /// parameter, if in fact this is a copy or move constructor. bool isCopyOrMoveConstructor(unsigned &TypeQuals) const; /// \brief Determine whether this a copy or move constructor. bool isCopyOrMoveConstructor() const { unsigned Quals; return isCopyOrMoveConstructor(Quals); } /// Whether this constructor is a /// converting constructor (C++ [class.conv.ctor]), which can be /// used for user-defined conversions. bool isConvertingConstructor(bool AllowExplicit) const; /// \brief Determine whether this is a member template specialization that /// would copy the object to itself. Such constructors are never used to copy /// an object. bool isSpecializationCopyingObject() const; /// \brief Determine whether this is an implicit constructor synthesized to /// model a call to a constructor inherited from a base class. bool isInheritingConstructor() const { return IsInheritingConstructor; } /// \brief Get the constructor that this inheriting constructor is based on. InheritedConstructor getInheritedConstructor() const { return IsInheritingConstructor ? *getTrailingObjects() : InheritedConstructor(); } CXXConstructorDecl *getCanonicalDecl() override { return cast(FunctionDecl::getCanonicalDecl()); } const CXXConstructorDecl *getCanonicalDecl() const { return const_cast(this)->getCanonicalDecl(); } // Implement isa/cast/dyncast/etc. static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == CXXConstructor; } friend class ASTDeclReader; friend class ASTDeclWriter; friend TrailingObjects; }; /// \brief Represents a C++ destructor within a class. /// /// For example: /// /// \code /// class X { /// public: /// ~X(); // represented by a CXXDestructorDecl. 
/// }; /// \endcode class CXXDestructorDecl : public CXXMethodDecl { void anchor() override; FunctionDecl *OperatorDelete; CXXDestructorDecl(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, bool isInline, bool isImplicitlyDeclared) : CXXMethodDecl(CXXDestructor, C, RD, StartLoc, NameInfo, T, TInfo, SC_None, isInline, /*isConstexpr=*/false, SourceLocation()), OperatorDelete(nullptr) { setImplicit(isImplicitlyDeclared); } public: static CXXDestructorDecl *Create(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo* TInfo, bool isInline, bool isImplicitlyDeclared); static CXXDestructorDecl *CreateDeserialized(ASTContext & C, unsigned ID); void setOperatorDelete(FunctionDecl *OD); const FunctionDecl *getOperatorDelete() const { return getCanonicalDecl()->OperatorDelete; } CXXDestructorDecl *getCanonicalDecl() override { return cast(FunctionDecl::getCanonicalDecl()); } const CXXDestructorDecl *getCanonicalDecl() const { return const_cast(this)->getCanonicalDecl(); } // Implement isa/cast/dyncast/etc. static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == CXXDestructor; } friend class ASTDeclReader; friend class ASTDeclWriter; }; /// \brief Represents a C++ conversion function within a class. /// /// For example: /// /// \code /// class X { /// public: /// operator bool(); /// }; /// \endcode class CXXConversionDecl : public CXXMethodDecl { void anchor() override; CXXConversionDecl(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, bool isInline, bool isExplicitSpecified, bool isConstexpr, SourceLocation EndLocation) : CXXMethodDecl(CXXConversion, C, RD, StartLoc, NameInfo, T, TInfo, SC_None, isInline, isConstexpr, EndLocation) { IsExplicitSpecified = isExplicitSpecified; } public: static CXXConversionDecl *Create(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, bool isInline, bool isExplicit, bool isConstexpr, SourceLocation EndLocation); static CXXConversionDecl *CreateDeserialized(ASTContext &C, unsigned ID); /// Whether this function is marked as explicit explicitly. bool isExplicitSpecified() const { return IsExplicitSpecified; } /// Whether this function is explicit. bool isExplicit() const { return getCanonicalDecl()->isExplicitSpecified(); } /// \brief Returns the type that this conversion function is converting to. QualType getConversionType() const { return getType()->getAs()->getReturnType(); } /// \brief Determine whether this conversion function is a conversion from /// a lambda closure type to a block pointer. bool isLambdaToBlockPointerConversion() const; CXXConversionDecl *getCanonicalDecl() override { return cast(FunctionDecl::getCanonicalDecl()); } const CXXConversionDecl *getCanonicalDecl() const { return const_cast(this)->getCanonicalDecl(); } // Implement isa/cast/dyncast/etc. static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == CXXConversion; } friend class ASTDeclReader; friend class ASTDeclWriter; }; /// \brief Represents a linkage specification. 
/// /// For example: /// \code /// extern "C" void foo(); /// \endcode class LinkageSpecDecl : public Decl, public DeclContext { virtual void anchor(); public: /// \brief Represents the language in a linkage specification. /// /// The values are part of the serialization ABI for /// ASTs and cannot be changed without altering that ABI. To help /// ensure a stable ABI for this, we choose the DW_LANG_ encodings /// from the dwarf standard. enum LanguageIDs { lang_c = /* DW_LANG_C */ 0x0002, lang_cxx = /* DW_LANG_C_plus_plus */ 0x0004 }; private: /// \brief The language for this linkage specification. unsigned Language : 3; /// \brief True if this linkage spec has braces. /// /// This is needed so that hasBraces() returns the correct result while the /// linkage spec body is being parsed. Once RBraceLoc has been set this is /// not used, so it doesn't need to be serialized. unsigned HasBraces : 1; /// \brief The source location for the extern keyword. SourceLocation ExternLoc; /// \brief The source location for the right brace (if valid). SourceLocation RBraceLoc; LinkageSpecDecl(DeclContext *DC, SourceLocation ExternLoc, SourceLocation LangLoc, LanguageIDs lang, bool HasBraces) : Decl(LinkageSpec, DC, LangLoc), DeclContext(LinkageSpec), Language(lang), HasBraces(HasBraces), ExternLoc(ExternLoc), RBraceLoc(SourceLocation()) { } public: static LinkageSpecDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation ExternLoc, SourceLocation LangLoc, LanguageIDs Lang, bool HasBraces); static LinkageSpecDecl *CreateDeserialized(ASTContext &C, unsigned ID); /// \brief Return the language specified by this linkage specification. LanguageIDs getLanguage() const { return LanguageIDs(Language); } /// \brief Set the language specified by this linkage specification. void setLanguage(LanguageIDs L) { Language = L; } /// \brief Determines whether this linkage specification had braces in /// its syntactic form. bool hasBraces() const { assert(!RBraceLoc.isValid() || HasBraces); return HasBraces; } SourceLocation getExternLoc() const { return ExternLoc; } SourceLocation getRBraceLoc() const { return RBraceLoc; } void setExternLoc(SourceLocation L) { ExternLoc = L; } void setRBraceLoc(SourceLocation L) { RBraceLoc = L; HasBraces = RBraceLoc.isValid(); } SourceLocation getLocEnd() const LLVM_READONLY { if (hasBraces()) return getRBraceLoc(); // No braces: get the end location of the (only) declaration in context // (if present). return decls_empty() ? getLocation() : decls_begin()->getLocEnd(); } SourceRange getSourceRange() const override LLVM_READONLY { return SourceRange(ExternLoc, getLocEnd()); } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == LinkageSpec; } static DeclContext *castToDeclContext(const LinkageSpecDecl *D) { return static_cast(const_cast(D)); } static LinkageSpecDecl *castFromDeclContext(const DeclContext *DC) { return static_cast(const_cast(DC)); } }; /// \brief Represents C++ using-directive. /// /// For example: /// \code /// using namespace std; /// \endcode /// /// \note UsingDirectiveDecl should be Decl not NamedDecl, but we provide /// artificial names for all using-directives in order to store /// them in DeclContext effectively. class UsingDirectiveDecl : public NamedDecl { void anchor() override; /// \brief The location of the \c using keyword. SourceLocation UsingLoc; /// \brief The location of the \c namespace keyword. 
SourceLocation NamespaceLoc; /// \brief The nested-name-specifier that precedes the namespace. NestedNameSpecifierLoc QualifierLoc; /// \brief The namespace nominated by this using-directive. NamedDecl *NominatedNamespace; /// Enclosing context containing both using-directive and nominated /// namespace. DeclContext *CommonAncestor; /// \brief Returns special DeclarationName used by using-directives. /// /// This is only used by DeclContext for storing UsingDirectiveDecls in /// its lookup structure. static DeclarationName getName() { return DeclarationName::getUsingDirectiveName(); } UsingDirectiveDecl(DeclContext *DC, SourceLocation UsingLoc, SourceLocation NamespcLoc, NestedNameSpecifierLoc QualifierLoc, SourceLocation IdentLoc, NamedDecl *Nominated, DeclContext *CommonAncestor) : NamedDecl(UsingDirective, DC, IdentLoc, getName()), UsingLoc(UsingLoc), NamespaceLoc(NamespcLoc), QualifierLoc(QualifierLoc), NominatedNamespace(Nominated), CommonAncestor(CommonAncestor) { } public: /// \brief Retrieve the nested-name-specifier that qualifies the /// name of the namespace, with source-location information. NestedNameSpecifierLoc getQualifierLoc() const { return QualifierLoc; } /// \brief Retrieve the nested-name-specifier that qualifies the /// name of the namespace. NestedNameSpecifier *getQualifier() const { return QualifierLoc.getNestedNameSpecifier(); } NamedDecl *getNominatedNamespaceAsWritten() { return NominatedNamespace; } const NamedDecl *getNominatedNamespaceAsWritten() const { return NominatedNamespace; } /// \brief Returns the namespace nominated by this using-directive. NamespaceDecl *getNominatedNamespace(); const NamespaceDecl *getNominatedNamespace() const { return const_cast(this)->getNominatedNamespace(); } /// \brief Returns the common ancestor context of this using-directive and /// its nominated namespace. DeclContext *getCommonAncestor() { return CommonAncestor; } const DeclContext *getCommonAncestor() const { return CommonAncestor; } /// \brief Return the location of the \c using keyword. SourceLocation getUsingLoc() const { return UsingLoc; } // FIXME: Could omit 'Key' in name. /// \brief Returns the location of the \c namespace keyword. SourceLocation getNamespaceKeyLocation() const { return NamespaceLoc; } /// \brief Returns the location of this using declaration's identifier. SourceLocation getIdentLocation() const { return getLocation(); } static UsingDirectiveDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation UsingLoc, SourceLocation NamespaceLoc, NestedNameSpecifierLoc QualifierLoc, SourceLocation IdentLoc, NamedDecl *Nominated, DeclContext *CommonAncestor); static UsingDirectiveDecl *CreateDeserialized(ASTContext &C, unsigned ID); SourceRange getSourceRange() const override LLVM_READONLY { return SourceRange(UsingLoc, getLocation()); } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == UsingDirective; } // Friend for getUsingDirectiveName. friend class DeclContext; friend class ASTDeclReader; }; /// \brief Represents a C++ namespace alias. /// /// For example: /// /// \code /// namespace Foo = Bar; /// \endcode class NamespaceAliasDecl : public NamedDecl, public Redeclarable { void anchor() override; /// \brief The location of the \c namespace keyword. SourceLocation NamespaceLoc; /// \brief The location of the namespace's identifier. /// /// This is accessed by TargetNameLoc. SourceLocation IdentLoc; /// \brief The nested-name-specifier that precedes the namespace. 
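  ///
  /// For example, in
  /// \code
  ///   namespace Alias = A::B;
  /// \endcode
  /// the qualifier is \c A:: and the aliased namespace is \c B.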
NestedNameSpecifierLoc QualifierLoc; /// \brief The Decl that this alias points to, either a NamespaceDecl or /// a NamespaceAliasDecl. NamedDecl *Namespace; NamespaceAliasDecl(ASTContext &C, DeclContext *DC, SourceLocation NamespaceLoc, SourceLocation AliasLoc, IdentifierInfo *Alias, NestedNameSpecifierLoc QualifierLoc, SourceLocation IdentLoc, NamedDecl *Namespace) : NamedDecl(NamespaceAlias, DC, AliasLoc, Alias), redeclarable_base(C), NamespaceLoc(NamespaceLoc), IdentLoc(IdentLoc), QualifierLoc(QualifierLoc), Namespace(Namespace) {} typedef Redeclarable redeclarable_base; NamespaceAliasDecl *getNextRedeclarationImpl() override; NamespaceAliasDecl *getPreviousDeclImpl() override; NamespaceAliasDecl *getMostRecentDeclImpl() override; friend class ASTDeclReader; public: static NamespaceAliasDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation NamespaceLoc, SourceLocation AliasLoc, IdentifierInfo *Alias, NestedNameSpecifierLoc QualifierLoc, SourceLocation IdentLoc, NamedDecl *Namespace); static NamespaceAliasDecl *CreateDeserialized(ASTContext &C, unsigned ID); typedef redeclarable_base::redecl_range redecl_range; typedef redeclarable_base::redecl_iterator redecl_iterator; using redeclarable_base::redecls_begin; using redeclarable_base::redecls_end; using redeclarable_base::redecls; using redeclarable_base::getPreviousDecl; using redeclarable_base::getMostRecentDecl; NamespaceAliasDecl *getCanonicalDecl() override { return getFirstDecl(); } const NamespaceAliasDecl *getCanonicalDecl() const { return getFirstDecl(); } /// \brief Retrieve the nested-name-specifier that qualifies the /// name of the namespace, with source-location information. NestedNameSpecifierLoc getQualifierLoc() const { return QualifierLoc; } /// \brief Retrieve the nested-name-specifier that qualifies the /// name of the namespace. NestedNameSpecifier *getQualifier() const { return QualifierLoc.getNestedNameSpecifier(); } /// \brief Retrieve the namespace declaration aliased by this directive. NamespaceDecl *getNamespace() { if (NamespaceAliasDecl *AD = dyn_cast(Namespace)) return AD->getNamespace(); return cast(Namespace); } const NamespaceDecl *getNamespace() const { return const_cast(this)->getNamespace(); } /// Returns the location of the alias name, i.e. 'foo' in /// "namespace foo = ns::bar;". SourceLocation getAliasLoc() const { return getLocation(); } /// Returns the location of the \c namespace keyword. SourceLocation getNamespaceLoc() const { return NamespaceLoc; } /// Returns the location of the identifier in the named namespace. SourceLocation getTargetNameLoc() const { return IdentLoc; } /// \brief Retrieve the namespace that this alias refers to, which /// may either be a NamespaceDecl or a NamespaceAliasDecl. NamedDecl *getAliasedNamespace() const { return Namespace; } SourceRange getSourceRange() const override LLVM_READONLY { return SourceRange(NamespaceLoc, IdentLoc); } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == NamespaceAlias; } }; /// \brief Represents a shadow declaration introduced into a scope by a /// (resolved) using declaration. /// /// For example, /// \code /// namespace A { /// void foo(); /// } /// namespace B { /// using A::foo; // <- a UsingDecl /// // Also creates a UsingShadowDecl for A::foo() in B /// } /// \endcode class UsingShadowDecl : public NamedDecl, public Redeclarable { void anchor() override; /// The referenced declaration. 
NamedDecl *Underlying; /// \brief The using declaration which introduced this decl or the next using /// shadow declaration contained in the aforementioned using declaration. NamedDecl *UsingOrNextShadow; friend class UsingDecl; typedef Redeclarable redeclarable_base; UsingShadowDecl *getNextRedeclarationImpl() override { return getNextRedeclaration(); } UsingShadowDecl *getPreviousDeclImpl() override { return getPreviousDecl(); } UsingShadowDecl *getMostRecentDeclImpl() override { return getMostRecentDecl(); } protected: UsingShadowDecl(Kind K, ASTContext &C, DeclContext *DC, SourceLocation Loc, UsingDecl *Using, NamedDecl *Target); UsingShadowDecl(Kind K, ASTContext &C, EmptyShell); public: static UsingShadowDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation Loc, UsingDecl *Using, NamedDecl *Target) { return new (C, DC) UsingShadowDecl(UsingShadow, C, DC, Loc, Using, Target); } static UsingShadowDecl *CreateDeserialized(ASTContext &C, unsigned ID); typedef redeclarable_base::redecl_range redecl_range; typedef redeclarable_base::redecl_iterator redecl_iterator; using redeclarable_base::redecls_begin; using redeclarable_base::redecls_end; using redeclarable_base::redecls; using redeclarable_base::getPreviousDecl; using redeclarable_base::getMostRecentDecl; using redeclarable_base::isFirstDecl; UsingShadowDecl *getCanonicalDecl() override { return getFirstDecl(); } const UsingShadowDecl *getCanonicalDecl() const { return getFirstDecl(); } /// \brief Gets the underlying declaration which has been brought into the /// local scope. NamedDecl *getTargetDecl() const { return Underlying; } /// \brief Sets the underlying declaration which has been brought into the /// local scope. void setTargetDecl(NamedDecl* ND) { assert(ND && "Target decl is null!"); Underlying = ND; IdentifierNamespace = ND->getIdentifierNamespace(); } /// \brief Gets the using declaration to which this declaration is tied. UsingDecl *getUsingDecl() const; /// \brief The next using shadow declaration contained in the shadow decl /// chain of the using declaration which introduced this decl. UsingShadowDecl *getNextUsingShadowDecl() const { return dyn_cast_or_null(UsingOrNextShadow); } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == Decl::UsingShadow || K == Decl::ConstructorUsingShadow; } friend class ASTDeclReader; friend class ASTDeclWriter; }; /// \brief Represents a shadow constructor declaration introduced into a /// class by a C++11 using-declaration that names a constructor. /// /// For example: /// \code /// struct Base { Base(int); }; /// struct Derived { /// using Base::Base; // creates a UsingDecl and a ConstructorUsingShadowDecl /// }; /// \endcode class ConstructorUsingShadowDecl final : public UsingShadowDecl { void anchor() override; /// \brief If this constructor using declaration inherted the constructor /// from an indirect base class, this is the ConstructorUsingShadowDecl /// in the named direct base class from which the declaration was inherited. ConstructorUsingShadowDecl *NominatedBaseClassShadowDecl; /// \brief If this constructor using declaration inherted the constructor /// from an indirect base class, this is the ConstructorUsingShadowDecl /// that will be used to construct the unique direct or virtual base class /// that receives the constructor arguments. 
ConstructorUsingShadowDecl *ConstructedBaseClassShadowDecl; /// \brief \c true if the constructor ultimately named by this using shadow /// declaration is within a virtual base class subobject of the class that /// contains this declaration. unsigned IsVirtual : 1; ConstructorUsingShadowDecl(ASTContext &C, DeclContext *DC, SourceLocation Loc, UsingDecl *Using, NamedDecl *Target, bool TargetInVirtualBase) : UsingShadowDecl(ConstructorUsingShadow, C, DC, Loc, Using, Target->getUnderlyingDecl()), NominatedBaseClassShadowDecl( dyn_cast(Target)), ConstructedBaseClassShadowDecl(NominatedBaseClassShadowDecl), IsVirtual(TargetInVirtualBase) { // If we found a constructor that chains to a constructor for a virtual // base, we should directly call that virtual base constructor instead. // FIXME: This logic belongs in Sema. if (NominatedBaseClassShadowDecl && NominatedBaseClassShadowDecl->constructsVirtualBase()) { ConstructedBaseClassShadowDecl = NominatedBaseClassShadowDecl->ConstructedBaseClassShadowDecl; IsVirtual = true; } } ConstructorUsingShadowDecl(ASTContext &C, EmptyShell Empty) : UsingShadowDecl(ConstructorUsingShadow, C, Empty), NominatedBaseClassShadowDecl(), ConstructedBaseClassShadowDecl(), IsVirtual(false) {} public: static ConstructorUsingShadowDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation Loc, UsingDecl *Using, NamedDecl *Target, bool IsVirtual); static ConstructorUsingShadowDecl *CreateDeserialized(ASTContext &C, unsigned ID); /// Returns the parent of this using shadow declaration, which /// is the class in which this is declared. //@{ const CXXRecordDecl *getParent() const { return cast(getDeclContext()); } CXXRecordDecl *getParent() { return cast(getDeclContext()); } //@} /// \brief Get the inheriting constructor declaration for the direct base /// class from which this using shadow declaration was inherited, if there is /// one. This can be different for each redeclaration of the same shadow decl. ConstructorUsingShadowDecl *getNominatedBaseClassShadowDecl() const { return NominatedBaseClassShadowDecl; } /// \brief Get the inheriting constructor declaration for the base class /// for which we don't have an explicit initializer, if there is one. ConstructorUsingShadowDecl *getConstructedBaseClassShadowDecl() const { return ConstructedBaseClassShadowDecl; } /// \brief Get the base class that was named in the using declaration. This /// can be different for each redeclaration of this same shadow decl. CXXRecordDecl *getNominatedBaseClass() const; /// \brief Get the base class whose constructor or constructor shadow /// declaration is passed the constructor arguments. CXXRecordDecl *getConstructedBaseClass() const { return cast((ConstructedBaseClassShadowDecl ? ConstructedBaseClassShadowDecl : getTargetDecl()) ->getDeclContext()); } /// \brief Returns \c true if the constructed base class is a virtual base /// class subobject of this declaration's class. bool constructsVirtualBase() const { return IsVirtual; } /// \brief Get the constructor or constructor template in the derived class /// correspnding to this using shadow declaration, if it has been implicitly /// declared already. CXXConstructorDecl *getConstructor() const; void setConstructor(NamedDecl *Ctor); static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == ConstructorUsingShadow; } friend class ASTDeclReader; friend class ASTDeclWriter; }; /// \brief Represents a C++ using-declaration. 
/// /// For example: /// \code /// using someNameSpace::someIdentifier; /// \endcode class UsingDecl : public NamedDecl, public Mergeable { void anchor() override; /// \brief The source location of the 'using' keyword itself. SourceLocation UsingLocation; /// \brief The nested-name-specifier that precedes the name. NestedNameSpecifierLoc QualifierLoc; /// \brief Provides source/type location info for the declaration name /// embedded in the ValueDecl base class. DeclarationNameLoc DNLoc; /// \brief The first shadow declaration of the shadow decl chain associated /// with this using declaration. /// /// The bool member of the pair store whether this decl has the \c typename /// keyword. llvm::PointerIntPair FirstUsingShadow; UsingDecl(DeclContext *DC, SourceLocation UL, NestedNameSpecifierLoc QualifierLoc, const DeclarationNameInfo &NameInfo, bool HasTypenameKeyword) : NamedDecl(Using, DC, NameInfo.getLoc(), NameInfo.getName()), UsingLocation(UL), QualifierLoc(QualifierLoc), DNLoc(NameInfo.getInfo()), FirstUsingShadow(nullptr, HasTypenameKeyword) { } public: /// \brief Return the source location of the 'using' keyword. SourceLocation getUsingLoc() const { return UsingLocation; } /// \brief Set the source location of the 'using' keyword. void setUsingLoc(SourceLocation L) { UsingLocation = L; } /// \brief Retrieve the nested-name-specifier that qualifies the name, /// with source-location information. NestedNameSpecifierLoc getQualifierLoc() const { return QualifierLoc; } /// \brief Retrieve the nested-name-specifier that qualifies the name. NestedNameSpecifier *getQualifier() const { return QualifierLoc.getNestedNameSpecifier(); } DeclarationNameInfo getNameInfo() const { return DeclarationNameInfo(getDeclName(), getLocation(), DNLoc); } /// \brief Return true if it is a C++03 access declaration (no 'using'). bool isAccessDeclaration() const { return UsingLocation.isInvalid(); } /// \brief Return true if the using declaration has 'typename'. bool hasTypename() const { return FirstUsingShadow.getInt(); } /// \brief Sets whether the using declaration has 'typename'. void setTypename(bool TN) { FirstUsingShadow.setInt(TN); } /// \brief Iterates through the using shadow declarations associated with /// this using declaration. class shadow_iterator { /// \brief The current using shadow declaration. UsingShadowDecl *Current; public: typedef UsingShadowDecl* value_type; typedef UsingShadowDecl* reference; typedef UsingShadowDecl* pointer; typedef std::forward_iterator_tag iterator_category; typedef std::ptrdiff_t difference_type; shadow_iterator() : Current(nullptr) { } explicit shadow_iterator(UsingShadowDecl *C) : Current(C) { } reference operator*() const { return Current; } pointer operator->() const { return Current; } shadow_iterator& operator++() { Current = Current->getNextUsingShadowDecl(); return *this; } shadow_iterator operator++(int) { shadow_iterator tmp(*this); ++(*this); return tmp; } friend bool operator==(shadow_iterator x, shadow_iterator y) { return x.Current == y.Current; } friend bool operator!=(shadow_iterator x, shadow_iterator y) { return x.Current != y.Current; } }; typedef llvm::iterator_range shadow_range; shadow_range shadows() const { return shadow_range(shadow_begin(), shadow_end()); } shadow_iterator shadow_begin() const { return shadow_iterator(FirstUsingShadow.getPointer()); } shadow_iterator shadow_end() const { return shadow_iterator(); } /// \brief Return the number of shadowed declarations associated with this /// using declaration. 
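  /// The count is obtained by walking the shadow chain; the same traversal
  /// can be written by hand. A sketch (here \c UD is assumed to be some
  /// \c UsingDecl*):
  /// \code
  /// for (UsingShadowDecl *Shadow : UD->shadows())
  ///   llvm::errs() << Shadow->getTargetDecl()->getDeclName() << "\n";
  /// \endcode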
unsigned shadow_size() const { return std::distance(shadow_begin(), shadow_end()); } void addShadowDecl(UsingShadowDecl *S); void removeShadowDecl(UsingShadowDecl *S); static UsingDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation UsingL, NestedNameSpecifierLoc QualifierLoc, const DeclarationNameInfo &NameInfo, bool HasTypenameKeyword); static UsingDecl *CreateDeserialized(ASTContext &C, unsigned ID); SourceRange getSourceRange() const override LLVM_READONLY; /// Retrieves the canonical declaration of this declaration. UsingDecl *getCanonicalDecl() override { return getFirstDecl(); } const UsingDecl *getCanonicalDecl() const { return getFirstDecl(); } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == Using; } friend class ASTDeclReader; friend class ASTDeclWriter; }; /// Represents a pack of using declarations that a single /// using-declarator pack-expanded into. /// /// \code /// template struct X : T... { /// using T::operator()...; /// using T::operator T...; /// }; /// \endcode /// /// In the second case above, the UsingPackDecl will have the name /// 'operator T' (which contains an unexpanded pack), but the individual /// UsingDecls and UsingShadowDecls will have more reasonable names. class UsingPackDecl final : public NamedDecl, public Mergeable, private llvm::TrailingObjects { void anchor() override; /// The UnresolvedUsingValueDecl or UnresolvedUsingTypenameDecl from /// which this waas instantiated. NamedDecl *InstantiatedFrom; /// The number of using-declarations created by this pack expansion. unsigned NumExpansions; UsingPackDecl(DeclContext *DC, NamedDecl *InstantiatedFrom, ArrayRef UsingDecls) : NamedDecl(UsingPack, DC, InstantiatedFrom ? InstantiatedFrom->getLocation() : SourceLocation(), InstantiatedFrom ? InstantiatedFrom->getDeclName() : DeclarationName()), InstantiatedFrom(InstantiatedFrom), NumExpansions(UsingDecls.size()) { std::uninitialized_copy(UsingDecls.begin(), UsingDecls.end(), getTrailingObjects()); } public: /// Get the using declaration from which this was instantiated. This will /// always be an UnresolvedUsingValueDecl or an UnresolvedUsingTypenameDecl /// that is a pack expansion. NamedDecl *getInstantiatedFromUsingDecl() const { return InstantiatedFrom; } /// Get the set of using declarations that this pack expanded into. Note that /// some of these may still be unresolved. ArrayRef expansions() const { return llvm::makeArrayRef(getTrailingObjects(), NumExpansions); } static UsingPackDecl *Create(ASTContext &C, DeclContext *DC, NamedDecl *InstantiatedFrom, ArrayRef UsingDecls); static UsingPackDecl *CreateDeserialized(ASTContext &C, unsigned ID, unsigned NumExpansions); SourceRange getSourceRange() const override LLVM_READONLY { return InstantiatedFrom->getSourceRange(); } UsingPackDecl *getCanonicalDecl() override { return getFirstDecl(); } const UsingPackDecl *getCanonicalDecl() const { return getFirstDecl(); } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == UsingPack; } friend class ASTDeclReader; friend class ASTDeclWriter; friend TrailingObjects; }; /// \brief Represents a dependent using declaration which was not marked with /// \c typename. /// /// Unlike non-dependent using declarations, these *only* bring through /// non-types; otherwise they would break two-phase lookup. 
/// /// \code /// template \ class A : public Base { /// using Base::foo; /// }; /// \endcode class UnresolvedUsingValueDecl : public ValueDecl, public Mergeable { void anchor() override; /// \brief The source location of the 'using' keyword SourceLocation UsingLocation; /// \brief If this is a pack expansion, the location of the '...'. SourceLocation EllipsisLoc; /// \brief The nested-name-specifier that precedes the name. NestedNameSpecifierLoc QualifierLoc; /// \brief Provides source/type location info for the declaration name /// embedded in the ValueDecl base class. DeclarationNameLoc DNLoc; UnresolvedUsingValueDecl(DeclContext *DC, QualType Ty, SourceLocation UsingLoc, NestedNameSpecifierLoc QualifierLoc, const DeclarationNameInfo &NameInfo, SourceLocation EllipsisLoc) : ValueDecl(UnresolvedUsingValue, DC, NameInfo.getLoc(), NameInfo.getName(), Ty), UsingLocation(UsingLoc), EllipsisLoc(EllipsisLoc), QualifierLoc(QualifierLoc), DNLoc(NameInfo.getInfo()) { } public: /// \brief Returns the source location of the 'using' keyword. SourceLocation getUsingLoc() const { return UsingLocation; } /// \brief Set the source location of the 'using' keyword. void setUsingLoc(SourceLocation L) { UsingLocation = L; } /// \brief Return true if it is a C++03 access declaration (no 'using'). bool isAccessDeclaration() const { return UsingLocation.isInvalid(); } /// \brief Retrieve the nested-name-specifier that qualifies the name, /// with source-location information. NestedNameSpecifierLoc getQualifierLoc() const { return QualifierLoc; } /// \brief Retrieve the nested-name-specifier that qualifies the name. NestedNameSpecifier *getQualifier() const { return QualifierLoc.getNestedNameSpecifier(); } DeclarationNameInfo getNameInfo() const { return DeclarationNameInfo(getDeclName(), getLocation(), DNLoc); } /// \brief Determine whether this is a pack expansion. bool isPackExpansion() const { return EllipsisLoc.isValid(); } /// \brief Get the location of the ellipsis if this is a pack expansion. SourceLocation getEllipsisLoc() const { return EllipsisLoc; } static UnresolvedUsingValueDecl * Create(ASTContext &C, DeclContext *DC, SourceLocation UsingLoc, NestedNameSpecifierLoc QualifierLoc, const DeclarationNameInfo &NameInfo, SourceLocation EllipsisLoc); static UnresolvedUsingValueDecl * CreateDeserialized(ASTContext &C, unsigned ID); SourceRange getSourceRange() const override LLVM_READONLY; /// Retrieves the canonical declaration of this declaration. UnresolvedUsingValueDecl *getCanonicalDecl() override { return getFirstDecl(); } const UnresolvedUsingValueDecl *getCanonicalDecl() const { return getFirstDecl(); } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == UnresolvedUsingValue; } friend class ASTDeclReader; friend class ASTDeclWriter; }; /// \brief Represents a dependent using declaration which was marked with /// \c typename. /// /// \code /// template \ class A : public Base { /// using typename Base::foo; /// }; /// \endcode /// /// The type associated with an unresolved using typename decl is /// currently always a typename type. class UnresolvedUsingTypenameDecl : public TypeDecl, public Mergeable { void anchor() override; /// \brief The source location of the 'typename' keyword SourceLocation TypenameLocation; /// \brief If this is a pack expansion, the location of the '...'. SourceLocation EllipsisLoc; /// \brief The nested-name-specifier that precedes the name. 
NestedNameSpecifierLoc QualifierLoc; UnresolvedUsingTypenameDecl(DeclContext *DC, SourceLocation UsingLoc, SourceLocation TypenameLoc, NestedNameSpecifierLoc QualifierLoc, SourceLocation TargetNameLoc, IdentifierInfo *TargetName, SourceLocation EllipsisLoc) : TypeDecl(UnresolvedUsingTypename, DC, TargetNameLoc, TargetName, UsingLoc), TypenameLocation(TypenameLoc), EllipsisLoc(EllipsisLoc), QualifierLoc(QualifierLoc) { } friend class ASTDeclReader; public: /// \brief Returns the source location of the 'using' keyword. SourceLocation getUsingLoc() const { return getLocStart(); } /// \brief Returns the source location of the 'typename' keyword. SourceLocation getTypenameLoc() const { return TypenameLocation; } /// \brief Retrieve the nested-name-specifier that qualifies the name, /// with source-location information. NestedNameSpecifierLoc getQualifierLoc() const { return QualifierLoc; } /// \brief Retrieve the nested-name-specifier that qualifies the name. NestedNameSpecifier *getQualifier() const { return QualifierLoc.getNestedNameSpecifier(); } DeclarationNameInfo getNameInfo() const { return DeclarationNameInfo(getDeclName(), getLocation()); } /// \brief Determine whether this is a pack expansion. bool isPackExpansion() const { return EllipsisLoc.isValid(); } /// \brief Get the location of the ellipsis if this is a pack expansion. SourceLocation getEllipsisLoc() const { return EllipsisLoc; } static UnresolvedUsingTypenameDecl * Create(ASTContext &C, DeclContext *DC, SourceLocation UsingLoc, SourceLocation TypenameLoc, NestedNameSpecifierLoc QualifierLoc, SourceLocation TargetNameLoc, DeclarationName TargetName, SourceLocation EllipsisLoc); static UnresolvedUsingTypenameDecl * CreateDeserialized(ASTContext &C, unsigned ID); /// Retrieves the canonical declaration of this declaration. UnresolvedUsingTypenameDecl *getCanonicalDecl() override { return getFirstDecl(); } const UnresolvedUsingTypenameDecl *getCanonicalDecl() const { return getFirstDecl(); } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == UnresolvedUsingTypename; } }; /// \brief Represents a C++11 static_assert declaration. 
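///
/// For example (the asserted condition and message below are only
/// illustrative):
/// \code
/// static_assert(sizeof(long) >= 4, "long is too narrow");
/// \endcode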
class StaticAssertDecl : public Decl { virtual void anchor(); llvm::PointerIntPair AssertExprAndFailed; StringLiteral *Message; SourceLocation RParenLoc; StaticAssertDecl(DeclContext *DC, SourceLocation StaticAssertLoc, Expr *AssertExpr, StringLiteral *Message, SourceLocation RParenLoc, bool Failed) : Decl(StaticAssert, DC, StaticAssertLoc), AssertExprAndFailed(AssertExpr, Failed), Message(Message), RParenLoc(RParenLoc) { } public: static StaticAssertDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation StaticAssertLoc, Expr *AssertExpr, StringLiteral *Message, SourceLocation RParenLoc, bool Failed); static StaticAssertDecl *CreateDeserialized(ASTContext &C, unsigned ID); Expr *getAssertExpr() { return AssertExprAndFailed.getPointer(); } const Expr *getAssertExpr() const { return AssertExprAndFailed.getPointer(); } StringLiteral *getMessage() { return Message; } const StringLiteral *getMessage() const { return Message; } bool isFailed() const { return AssertExprAndFailed.getInt(); } SourceLocation getRParenLoc() const { return RParenLoc; } SourceRange getSourceRange() const override LLVM_READONLY { return SourceRange(getLocation(), getRParenLoc()); } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == StaticAssert; } friend class ASTDeclReader; }; /// A binding in a decomposition declaration. For instance, given: /// /// int n[3]; /// auto &[a, b, c] = n; /// /// a, b, and c are BindingDecls, whose bindings are the expressions /// x[0], x[1], and x[2] respectively, where x is the implicit /// DecompositionDecl of type 'int (&)[3]'. class BindingDecl : public ValueDecl { void anchor() override; /// The binding represented by this declaration. References to this /// declaration are effectively equivalent to this expression (except /// that it is only evaluated once at the point of declaration of the /// binding). Expr *Binding; BindingDecl(DeclContext *DC, SourceLocation IdLoc, IdentifierInfo *Id) : ValueDecl(Decl::Binding, DC, IdLoc, Id, QualType()), Binding(nullptr) {} public: static BindingDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation IdLoc, IdentifierInfo *Id); static BindingDecl *CreateDeserialized(ASTContext &C, unsigned ID); /// Get the expression to which this declaration is bound. This may be null /// in two different cases: while parsing the initializer for the /// decomposition declaration, and when the initializer is type-dependent. Expr *getBinding() const { return Binding; } /// Get the variable (if any) that holds the value of evaluating the binding. /// Only present for user-defined bindings for tuple-like types. VarDecl *getHoldingVar() const; /// Set the binding for this BindingDecl, along with its declared type (which /// should be a possibly-cv-qualified form of the type of the binding, or a /// reference to such a type). void setBinding(QualType DeclaredType, Expr *Binding) { setType(DeclaredType); this->Binding = Binding; } static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == Decl::Binding; } friend class ASTDeclReader; }; /// A decomposition declaration. For instance, given: /// /// int n[3]; /// auto &[a, b, c] = n; /// /// the second line declares a DecompositionDecl of type 'int (&)[3]', and /// three BindingDecls (named a, b, and c). An instance of this class is always /// unnamed, but behaves in almost all other respects like a VarDecl. 
class DecompositionDecl final : public VarDecl, private llvm::TrailingObjects { void anchor() override; /// The number of BindingDecl*s following this object. unsigned NumBindings; DecompositionDecl(ASTContext &C, DeclContext *DC, SourceLocation StartLoc, SourceLocation LSquareLoc, QualType T, TypeSourceInfo *TInfo, StorageClass SC, ArrayRef Bindings) : VarDecl(Decomposition, C, DC, StartLoc, LSquareLoc, nullptr, T, TInfo, SC), NumBindings(Bindings.size()) { std::uninitialized_copy(Bindings.begin(), Bindings.end(), getTrailingObjects()); } public: static DecompositionDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation StartLoc, SourceLocation LSquareLoc, QualType T, TypeSourceInfo *TInfo, StorageClass S, ArrayRef Bindings); static DecompositionDecl *CreateDeserialized(ASTContext &C, unsigned ID, unsigned NumBindings); ArrayRef bindings() const { return llvm::makeArrayRef(getTrailingObjects(), NumBindings); } void printName(raw_ostream &os) const override; static bool classof(const Decl *D) { return classofKind(D->getKind()); } static bool classofKind(Kind K) { return K == Decomposition; } friend TrailingObjects; friend class ASTDeclReader; }; /// An instance of this class represents the declaration of a property /// member. This is a Microsoft extension to C++, first introduced in /// Visual Studio .NET 2003 as a parallel to similar features in C# /// and Managed C++. /// /// A property must always be a non-static class member. /// /// A property member superficially resembles a non-static data /// member, except preceded by a property attribute: /// __declspec(property(get=GetX, put=PutX)) int x; /// Either (but not both) of the 'get' and 'put' names may be omitted. /// /// A reference to a property is always an lvalue. If the lvalue /// undergoes lvalue-to-rvalue conversion, then a getter name is /// required, and that member is called with no arguments. /// If the lvalue is assigned into, then a setter name is required, /// and that member is called with one argument, the value assigned. /// Both operations are potentially overloaded. Compound assignments /// are permitted, as are the increment and decrement operators. /// /// The getter and putter methods are permitted to be overloaded, /// although their return and parameter types are subject to certain /// restrictions according to the type of the property. /// /// A property declared using an incomplete array type may /// additionally be subscripted, adding extra parameters to the getter /// and putter methods. class MSPropertyDecl : public DeclaratorDecl { IdentifierInfo *GetterId, *SetterId; MSPropertyDecl(DeclContext *DC, SourceLocation L, DeclarationName N, QualType T, TypeSourceInfo *TInfo, SourceLocation StartL, IdentifierInfo *Getter, IdentifierInfo *Setter) : DeclaratorDecl(MSProperty, DC, L, N, T, TInfo, StartL), GetterId(Getter), SetterId(Setter) {} public: static MSPropertyDecl *Create(ASTContext &C, DeclContext *DC, SourceLocation L, DeclarationName N, QualType T, TypeSourceInfo *TInfo, SourceLocation StartL, IdentifierInfo *Getter, IdentifierInfo *Setter); static MSPropertyDecl *CreateDeserialized(ASTContext &C, unsigned ID); static bool classof(const Decl *D) { return D->getKind() == MSProperty; } bool hasGetter() const { return GetterId != nullptr; } IdentifierInfo* getGetterId() const { return GetterId; } bool hasSetter() const { return SetterId != nullptr; } IdentifierInfo* getSetterId() const { return SetterId; } friend class ASTDeclReader; }; /// Insertion operator for diagnostics. 
This allows sending an AccessSpecifier /// into a diagnostic with <<. const DiagnosticBuilder &operator<<(const DiagnosticBuilder &DB, AccessSpecifier AS); const PartialDiagnostic &operator<<(const PartialDiagnostic &DB, AccessSpecifier AS); } // end namespace clang #endif diff --git a/include/clang/Lex/Preprocessor.h b/include/clang/Lex/Preprocessor.h index a058fbfbb4cf..dba4b80f6071 100644 --- a/include/clang/Lex/Preprocessor.h +++ b/include/clang/Lex/Preprocessor.h @@ -1,2080 +1,2080 @@ //===--- Preprocessor.h - C Language Family Preprocessor --------*- C++ -*-===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// /// /// \file /// \brief Defines the clang::Preprocessor interface. /// //===----------------------------------------------------------------------===// #ifndef LLVM_CLANG_LEX_PREPROCESSOR_H #define LLVM_CLANG_LEX_PREPROCESSOR_H #include "clang/Basic/Builtins.h" #include "clang/Basic/Diagnostic.h" #include "clang/Basic/IdentifierTable.h" #include "clang/Basic/SourceLocation.h" #include "clang/Lex/Lexer.h" #include "clang/Lex/MacroInfo.h" #include "clang/Lex/ModuleMap.h" #include "clang/Lex/PPCallbacks.h" #include "clang/Lex/PTHLexer.h" #include "clang/Lex/TokenLexer.h" #include "llvm/ADT/ArrayRef.h" #include "llvm/ADT/DenseMap.h" #include "llvm/ADT/IntrusiveRefCntPtr.h" #include "llvm/ADT/SmallPtrSet.h" #include "llvm/ADT/SmallVector.h" #include "llvm/ADT/TinyPtrVector.h" #include "llvm/Support/Allocator.h" #include "llvm/Support/Registry.h" #include #include namespace llvm { template class SmallString; } namespace clang { class SourceManager; class ExternalPreprocessorSource; class FileManager; class FileEntry; class HeaderSearch; class MemoryBufferCache; class PragmaNamespace; class PragmaHandler; class CommentHandler; class ScratchBuffer; class TargetInfo; class PPCallbacks; class CodeCompletionHandler; class DirectoryLookup; class PreprocessingRecord; class ModuleLoader; class PTHManager; class PreprocessorOptions; /// \brief Stores token information for comparing actual tokens with /// predefined values. Only handles simple tokens and identifiers. class TokenValue { tok::TokenKind Kind; IdentifierInfo *II; public: TokenValue(tok::TokenKind Kind) : Kind(Kind), II(nullptr) { assert(Kind != tok::raw_identifier && "Raw identifiers are not supported."); assert(Kind != tok::identifier && "Identifiers should be created by TokenValue(IdentifierInfo *)"); assert(!tok::isLiteral(Kind) && "Literals are not supported."); assert(!tok::isAnnotation(Kind) && "Annotations are not supported."); } TokenValue(IdentifierInfo *II) : Kind(tok::identifier), II(II) {} bool operator==(const Token &Tok) const { return Tok.getKind() == Kind && (!II || II == Tok.getIdentifierInfo()); } }; /// \brief Context in which macro name is used. enum MacroUse { MU_Other = 0, // other than #define or #undef MU_Define = 1, // macro name specified in #define MU_Undef = 2 // macro name specified in #undef }; /// \brief Engages in a tight little dance with the lexer to efficiently /// preprocess tokens. /// /// Lexers know only about tokens within a single source file, and don't /// know anything about preprocessor-level issues like the \#include stack, /// token expansion, etc. 
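///
/// A minimal sketch of driving a fully initialized preprocessor (here \c PP
/// is assumed to have been set up elsewhere, e.g. by a CompilerInstance):
/// \code
/// PP.EnterMainSourceFile();
/// Token Tok;
/// do {
///   PP.Lex(Tok);
/// } while (Tok.isNot(tok::eof));
/// \endcode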
class Preprocessor { std::shared_ptr PPOpts; DiagnosticsEngine *Diags; LangOptions &LangOpts; const TargetInfo *Target; const TargetInfo *AuxTarget; FileManager &FileMgr; SourceManager &SourceMgr; MemoryBufferCache &PCMCache; std::unique_ptr ScratchBuf; HeaderSearch &HeaderInfo; ModuleLoader &TheModuleLoader; /// \brief External source of macros. ExternalPreprocessorSource *ExternalSource; /// An optional PTHManager object used for getting tokens from /// a token cache rather than lexing the original source file. std::unique_ptr PTH; /// A BumpPtrAllocator object used to quickly allocate and release /// objects internal to the Preprocessor. llvm::BumpPtrAllocator BP; /// Identifiers for builtin macros and other builtins. IdentifierInfo *Ident__LINE__, *Ident__FILE__; // __LINE__, __FILE__ IdentifierInfo *Ident__DATE__, *Ident__TIME__; // __DATE__, __TIME__ IdentifierInfo *Ident__INCLUDE_LEVEL__; // __INCLUDE_LEVEL__ IdentifierInfo *Ident__BASE_FILE__; // __BASE_FILE__ IdentifierInfo *Ident__TIMESTAMP__; // __TIMESTAMP__ IdentifierInfo *Ident__COUNTER__; // __COUNTER__ IdentifierInfo *Ident_Pragma, *Ident__pragma; // _Pragma, __pragma IdentifierInfo *Ident__identifier; // __identifier IdentifierInfo *Ident__VA_ARGS__; // __VA_ARGS__ IdentifierInfo *Ident__has_feature; // __has_feature IdentifierInfo *Ident__has_extension; // __has_extension IdentifierInfo *Ident__has_builtin; // __has_builtin IdentifierInfo *Ident__has_attribute; // __has_attribute IdentifierInfo *Ident__has_include; // __has_include IdentifierInfo *Ident__has_include_next; // __has_include_next IdentifierInfo *Ident__has_warning; // __has_warning IdentifierInfo *Ident__is_identifier; // __is_identifier IdentifierInfo *Ident__building_module; // __building_module IdentifierInfo *Ident__MODULE__; // __MODULE__ IdentifierInfo *Ident__has_cpp_attribute; // __has_cpp_attribute IdentifierInfo *Ident__has_declspec; // __has_declspec_attribute SourceLocation DATELoc, TIMELoc; unsigned CounterValue; // Next __COUNTER__ value. enum { /// \brief Maximum depth of \#includes. MaxAllowedIncludeStackDepth = 200 }; // State that is set before the preprocessor begins. bool KeepComments : 1; bool KeepMacroComments : 1; bool SuppressIncludeNotFoundError : 1; // State that changes while the preprocessor runs: bool InMacroArgs : 1; // True if parsing fn macro invocation args. /// Whether the preprocessor owns the header search object. bool OwnsHeaderSearch : 1; /// True if macro expansion is disabled. bool DisableMacroExpansion : 1; /// Temporarily disables DisableMacroExpansion (i.e. enables expansion) /// when parsing preprocessor directives. bool MacroExpansionInDirectivesOverride : 1; class ResetMacroExpansionHelper; /// \brief Whether we have already loaded macros from the external source. mutable bool ReadMacrosFromExternalSource : 1; /// \brief True if pragmas are enabled. bool PragmasEnabled : 1; /// \brief True if the current build action is a preprocessing action. bool PreprocessedOutput : 1; /// \brief True if we are currently preprocessing a #if or #elif directive bool ParsingIfOrElifDirective; /// \brief True if we are pre-expanding macro arguments. bool InMacroArgPreExpansion; /// \brief Mapping/lookup information for all identifiers in /// the program, including program keywords. mutable IdentifierTable Identifiers; /// \brief This table contains all the selectors in the program. /// /// Unlike IdentifierTable above, this table *isn't* populated by the /// preprocessor. 
It is declared/expanded here because its role/lifetime is /// conceptually similar to the IdentifierTable. In addition, the current /// control flow (in clang::ParseAST()), make it convenient to put here. /// /// FIXME: Make sure the lifetime of Identifiers/Selectors *isn't* tied to /// the lifetime of the preprocessor. SelectorTable Selectors; /// \brief Information about builtins. Builtin::Context BuiltinInfo; /// \brief Tracks all of the pragmas that the client registered /// with this preprocessor. std::unique_ptr PragmaHandlers; /// \brief Pragma handlers of the original source is stored here during the /// parsing of a model file. std::unique_ptr PragmaHandlersBackup; /// \brief Tracks all of the comment handlers that the client registered /// with this preprocessor. std::vector CommentHandlers; /// \brief True if we want to ignore EOF token and continue later on (thus /// avoid tearing the Lexer and etc. down). bool IncrementalProcessing; /// The kind of translation unit we are processing. TranslationUnitKind TUKind; /// \brief The code-completion handler. CodeCompletionHandler *CodeComplete; /// \brief The file that we're performing code-completion for, if any. const FileEntry *CodeCompletionFile; /// \brief The offset in file for the code-completion point. unsigned CodeCompletionOffset; /// \brief The location for the code-completion point. This gets instantiated /// when the CodeCompletionFile gets \#include'ed for preprocessing. SourceLocation CodeCompletionLoc; /// \brief The start location for the file of the code-completion point. /// /// This gets instantiated when the CodeCompletionFile gets \#include'ed /// for preprocessing. SourceLocation CodeCompletionFileLoc; /// \brief The source location of the \c import contextual keyword we just /// lexed, if any. SourceLocation ModuleImportLoc; /// \brief The module import path that we're currently processing. SmallVector, 2> ModuleImportPath; /// \brief Whether the last token we lexed was an '@'. bool LastTokenWasAt; /// \brief Whether the module import expects an identifier next. Otherwise, /// it expects a '.' or ';'. bool ModuleImportExpectsIdentifier; /// \brief The source location of the currently-active /// \#pragma clang arc_cf_code_audited begin. SourceLocation PragmaARCCFCodeAuditedLoc; /// \brief The source location of the currently-active /// \#pragma clang assume_nonnull begin. SourceLocation PragmaAssumeNonNullLoc; /// \brief True if we hit the code-completion point. bool CodeCompletionReached; /// \brief The code completion token containing the information /// on the stem that is to be code completed. IdentifierInfo *CodeCompletionII; /// \brief The directory that the main file should be considered to occupy, /// if it does not correspond to a real file (as happens when building a /// module). const DirectoryEntry *MainFileDir; /// \brief The number of bytes that we will initially skip when entering the /// main file, along with a flag that indicates whether skipping this number /// of bytes will place the lexer at the start of a line. /// /// This is used when loading a precompiled preamble. 
std::pair SkipMainFilePreamble; class PreambleConditionalStackStore { enum State { Off = 0, Recording = 1, Replaying = 2, }; public: PreambleConditionalStackStore() : ConditionalStackState(Off) {} void startRecording() { ConditionalStackState = Recording; } void startReplaying() { ConditionalStackState = Replaying; } bool isRecording() const { return ConditionalStackState == Recording; } bool isReplaying() const { return ConditionalStackState == Replaying; } ArrayRef getStack() const { return ConditionalStack; } void doneReplaying() { ConditionalStack.clear(); ConditionalStackState = Off; } void setStack(ArrayRef s) { if (!isRecording() && !isReplaying()) return; ConditionalStack.clear(); ConditionalStack.append(s.begin(), s.end()); } bool hasRecordedPreamble() const { return !ConditionalStack.empty(); } private: SmallVector ConditionalStack; State ConditionalStackState; } PreambleConditionalStack; /// \brief The current top of the stack that we're lexing from if /// not expanding a macro and we are lexing directly from source code. /// /// Only one of CurLexer, CurPTHLexer, or CurTokenLexer will be non-null. std::unique_ptr CurLexer; /// \brief The current top of stack that we're lexing from if /// not expanding from a macro and we are lexing from a PTH cache. /// /// Only one of CurLexer, CurPTHLexer, or CurTokenLexer will be non-null. std::unique_ptr CurPTHLexer; /// \brief The current top of the stack what we're lexing from /// if not expanding a macro. /// /// This is an alias for either CurLexer or CurPTHLexer. PreprocessorLexer *CurPPLexer; /// \brief Used to find the current FileEntry, if CurLexer is non-null /// and if applicable. /// /// This allows us to implement \#include_next and find directory-specific /// properties. const DirectoryLookup *CurDirLookup; /// \brief The current macro we are expanding, if we are expanding a macro. /// /// One of CurLexer and CurTokenLexer must be null. std::unique_ptr CurTokenLexer; /// \brief The kind of lexer we're currently working with. enum CurLexerKind { CLK_Lexer, CLK_PTHLexer, CLK_TokenLexer, CLK_CachingLexer, CLK_LexAfterModuleImport } CurLexerKind; /// \brief If the current lexer is for a submodule that is being built, this /// is that submodule. Module *CurLexerSubmodule; /// \brief Keeps track of the stack of files currently /// \#included, and macros currently being expanded from, not counting /// CurLexer/CurTokenLexer. struct IncludeStackInfo { enum CurLexerKind CurLexerKind; Module *TheSubmodule; std::unique_ptr TheLexer; std::unique_ptr ThePTHLexer; PreprocessorLexer *ThePPLexer; std::unique_ptr TheTokenLexer; const DirectoryLookup *TheDirLookup; // The following constructors are completely useless copies of the default // versions, only needed to pacify MSVC. IncludeStackInfo(enum CurLexerKind CurLexerKind, Module *TheSubmodule, std::unique_ptr &&TheLexer, std::unique_ptr &&ThePTHLexer, PreprocessorLexer *ThePPLexer, std::unique_ptr &&TheTokenLexer, const DirectoryLookup *TheDirLookup) : CurLexerKind(std::move(CurLexerKind)), TheSubmodule(std::move(TheSubmodule)), TheLexer(std::move(TheLexer)), ThePTHLexer(std::move(ThePTHLexer)), ThePPLexer(std::move(ThePPLexer)), TheTokenLexer(std::move(TheTokenLexer)), TheDirLookup(std::move(TheDirLookup)) {} }; std::vector IncludeMacroStack; /// \brief Actions invoked when some preprocessor activity is /// encountered (e.g. a file is \#included, etc). 
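  ///
  /// Clients typically register their own observer via addPPCallbacks().
  /// A sketch (\c MyCallbacks is assumed to be a client subclass of
  /// PPCallbacks):
  /// \code
  /// PP.addPPCallbacks(llvm::make_unique<MyCallbacks>());
  /// \endcode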
std::unique_ptr Callbacks; struct MacroExpandsInfo { Token Tok; MacroDefinition MD; SourceRange Range; MacroExpandsInfo(Token Tok, MacroDefinition MD, SourceRange Range) : Tok(Tok), MD(MD), Range(Range) { } }; SmallVector DelayedMacroExpandsCallbacks; /// Information about a name that has been used to define a module macro. struct ModuleMacroInfo { ModuleMacroInfo(MacroDirective *MD) : MD(MD), ActiveModuleMacrosGeneration(0), IsAmbiguous(false) {} /// The most recent macro directive for this identifier. MacroDirective *MD; /// The active module macros for this identifier. llvm::TinyPtrVector ActiveModuleMacros; /// The generation number at which we last updated ActiveModuleMacros. /// \see Preprocessor::VisibleModules. unsigned ActiveModuleMacrosGeneration; /// Whether this macro name is ambiguous. bool IsAmbiguous; /// The module macros that are overridden by this macro. llvm::TinyPtrVector OverriddenMacros; }; /// The state of a macro for an identifier. class MacroState { mutable llvm::PointerUnion State; ModuleMacroInfo *getModuleInfo(Preprocessor &PP, const IdentifierInfo *II) const { if (II->isOutOfDate()) PP.updateOutOfDateIdentifier(const_cast(*II)); // FIXME: Find a spare bit on IdentifierInfo and store a // HasModuleMacros flag. if (!II->hasMacroDefinition() || (!PP.getLangOpts().Modules && !PP.getLangOpts().ModulesLocalVisibility) || !PP.CurSubmoduleState->VisibleModules.getGeneration()) return nullptr; auto *Info = State.dyn_cast(); if (!Info) { Info = new (PP.getPreprocessorAllocator()) ModuleMacroInfo(State.get()); State = Info; } if (PP.CurSubmoduleState->VisibleModules.getGeneration() != Info->ActiveModuleMacrosGeneration) PP.updateModuleMacroInfo(II, *Info); return Info; } public: MacroState() : MacroState(nullptr) {} MacroState(MacroDirective *MD) : State(MD) {} MacroState(MacroState &&O) noexcept : State(O.State) { O.State = (MacroDirective *)nullptr; } MacroState &operator=(MacroState &&O) noexcept { auto S = O.State; O.State = (MacroDirective *)nullptr; State = S; return *this; } ~MacroState() { if (auto *Info = State.dyn_cast()) Info->~ModuleMacroInfo(); } MacroDirective *getLatest() const { if (auto *Info = State.dyn_cast()) return Info->MD; return State.get(); } void setLatest(MacroDirective *MD) { if (auto *Info = State.dyn_cast()) Info->MD = MD; else State = MD; } bool isAmbiguous(Preprocessor &PP, const IdentifierInfo *II) const { auto *Info = getModuleInfo(PP, II); return Info ? Info->IsAmbiguous : false; } ArrayRef getActiveModuleMacros(Preprocessor &PP, const IdentifierInfo *II) const { if (auto *Info = getModuleInfo(PP, II)) return Info->ActiveModuleMacros; return None; } MacroDirective::DefInfo findDirectiveAtLoc(SourceLocation Loc, SourceManager &SourceMgr) const { // FIXME: Incorporate module macros into the result of this. 
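      // Only the textual directive history is searched here: ask the most
      // recent directive (if any) for the definition in effect at Loc.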
if (auto *Latest = getLatest()) return Latest->findDirectiveAtLoc(Loc, SourceMgr); return MacroDirective::DefInfo(); } void overrideActiveModuleMacros(Preprocessor &PP, IdentifierInfo *II) { if (auto *Info = getModuleInfo(PP, II)) { Info->OverriddenMacros.insert(Info->OverriddenMacros.end(), Info->ActiveModuleMacros.begin(), Info->ActiveModuleMacros.end()); Info->ActiveModuleMacros.clear(); Info->IsAmbiguous = false; } } ArrayRef getOverriddenMacros() const { if (auto *Info = State.dyn_cast()) return Info->OverriddenMacros; return None; } void setOverriddenMacros(Preprocessor &PP, ArrayRef Overrides) { auto *Info = State.dyn_cast(); if (!Info) { if (Overrides.empty()) return; Info = new (PP.getPreprocessorAllocator()) ModuleMacroInfo(State.get()); State = Info; } Info->OverriddenMacros.clear(); Info->OverriddenMacros.insert(Info->OverriddenMacros.end(), Overrides.begin(), Overrides.end()); Info->ActiveModuleMacrosGeneration = 0; } }; /// For each IdentifierInfo that was associated with a macro, we /// keep a mapping to the history of all macro definitions and #undefs in /// the reverse order (the latest one is in the head of the list). /// /// This mapping lives within the \p CurSubmoduleState. typedef llvm::DenseMap MacroMap; friend class ASTReader; struct SubmoduleState; /// \brief Information about a submodule that we're currently building. struct BuildingSubmoduleInfo { BuildingSubmoduleInfo(Module *M, SourceLocation ImportLoc, bool IsPragma, SubmoduleState *OuterSubmoduleState, unsigned OuterPendingModuleMacroNames) : M(M), ImportLoc(ImportLoc), IsPragma(IsPragma), OuterSubmoduleState(OuterSubmoduleState), OuterPendingModuleMacroNames(OuterPendingModuleMacroNames) {} /// The module that we are building. Module *M; /// The location at which the module was included. SourceLocation ImportLoc; /// Whether we entered this submodule via a pragma. bool IsPragma; /// The previous SubmoduleState. SubmoduleState *OuterSubmoduleState; /// The number of pending module macro names when we started building this. unsigned OuterPendingModuleMacroNames; }; SmallVector BuildingSubmoduleStack; /// \brief Information about a submodule's preprocessor state. struct SubmoduleState { /// The macros for the submodule. MacroMap Macros; /// The set of modules that are visible within the submodule. VisibleModuleSet VisibleModules; // FIXME: CounterValue? // FIXME: PragmaPushMacroInfo? }; std::map Submodules; /// The preprocessor state for preprocessing outside of any submodule. SubmoduleState NullSubmoduleState; /// The current submodule state. Will be \p NullSubmoduleState if we're not /// in a submodule. SubmoduleState *CurSubmoduleState; /// The set of known macros exported from modules. llvm::FoldingSet ModuleMacros; /// The names of potential module macros that we've not yet processed. llvm::SmallVector PendingModuleMacroNames; /// The list of module macros, for each identifier, that are not overridden by /// any other module macro. llvm::DenseMap> LeafModuleMacros; /// \brief Macros that we want to warn because they are not used at the end /// of the translation unit. /// /// We store just their SourceLocations instead of /// something like MacroInfo*. The benefit of this is that when we are /// deserializing from PCH, we don't need to deserialize identifier & macros /// just so that we can report that they are unused, we just warn using /// the SourceLocations of this set (that will be filled by the ASTReader). /// We are using SmallPtrSet instead of a vector for faster removal. 
typedef llvm::SmallPtrSet WarnUnusedMacroLocsTy; WarnUnusedMacroLocsTy WarnUnusedMacroLocs; /// \brief A "freelist" of MacroArg objects that can be /// reused for quick allocation. MacroArgs *MacroArgCache; friend class MacroArgs; /// For each IdentifierInfo used in a \#pragma push_macro directive, /// we keep a MacroInfo stack used to restore the previous macro value. llvm::DenseMap > PragmaPushMacroInfo; // Various statistics we track for performance analysis. unsigned NumDirectives, NumDefined, NumUndefined, NumPragma; unsigned NumIf, NumElse, NumEndif; unsigned NumEnteredSourceFiles, MaxIncludeStackDepth; unsigned NumMacroExpanded, NumFnMacroExpanded, NumBuiltinMacroExpanded; unsigned NumFastMacroExpanded, NumTokenPaste, NumFastTokenPaste; unsigned NumSkipped; /// \brief The predefined macros that preprocessor should use from the /// command line etc. std::string Predefines; /// \brief The file ID for the preprocessor predefines. FileID PredefinesFileID; /// \{ /// \brief Cache of macro expanders to reduce malloc traffic. enum { TokenLexerCacheSize = 8 }; unsigned NumCachedTokenLexers; std::unique_ptr TokenLexerCache[TokenLexerCacheSize]; /// \} /// \brief Keeps macro expanded tokens for TokenLexers. // /// Works like a stack; a TokenLexer adds the macro expanded tokens that is /// going to lex in the cache and when it finishes the tokens are removed /// from the end of the cache. SmallVector MacroExpandedTokens; std::vector > MacroExpandingLexersStack; /// \brief A record of the macro definitions and expansions that /// occurred during preprocessing. /// /// This is an optional side structure that can be enabled with /// \c createPreprocessingRecord() prior to preprocessing. PreprocessingRecord *Record; /// Cached tokens state. typedef SmallVector CachedTokensTy; /// \brief Cached tokens are stored here when we do backtracking or /// lookahead. They are "lexed" by the CachingLex() method. CachedTokensTy CachedTokens; /// \brief The position of the cached token that CachingLex() should /// "lex" next. /// /// If it points beyond the CachedTokens vector, it means that a normal /// Lex() should be invoked. CachedTokensTy::size_type CachedLexPos; /// \brief Stack of backtrack positions, allowing nested backtracks. /// /// The EnableBacktrackAtThisPos() method pushes a position to /// indicate where CachedLexPos should be set when the BackTrack() method is /// invoked (at which point the last position is popped). std::vector BacktrackPositions; struct MacroInfoChain { MacroInfo MI; MacroInfoChain *Next; }; /// MacroInfos are managed as a chain for easy disposal. This is the head /// of that list. MacroInfoChain *MIChainHead; void updateOutOfDateIdentifier(IdentifierInfo &II) const; public: Preprocessor(std::shared_ptr PPOpts, DiagnosticsEngine &diags, LangOptions &opts, SourceManager &SM, MemoryBufferCache &PCMCache, HeaderSearch &Headers, ModuleLoader &TheModuleLoader, IdentifierInfoLookup *IILookup = nullptr, bool OwnsHeaderSearch = false, TranslationUnitKind TUKind = TU_Complete); ~Preprocessor(); /// \brief Initialize the preprocessor using information about the target. /// /// \param Target is owned by the caller and must remain valid for the /// lifetime of the preprocessor. /// \param AuxTarget is owned by the caller and must remain valid for /// the lifetime of the preprocessor. 
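  ///
  /// A sketch of typical use (\c Target is assumed to be a TargetInfo owned
  /// by the surrounding compiler instance and kept alive for the
  /// preprocessor's lifetime):
  /// \code
  /// PP.Initialize(*Target, /*AuxTarget=*/nullptr);
  /// \endcode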
void Initialize(const TargetInfo &Target, const TargetInfo *AuxTarget = nullptr); /// \brief Initialize the preprocessor to parse a model file /// /// To parse model files the preprocessor of the original source is reused to /// preserver the identifier table. However to avoid some duplicate /// information in the preprocessor some cleanup is needed before it is used /// to parse model files. This method does that cleanup. void InitializeForModelFile(); /// \brief Cleanup after model file parsing void FinalizeForModelFile(); /// \brief Retrieve the preprocessor options used to initialize this /// preprocessor. PreprocessorOptions &getPreprocessorOpts() const { return *PPOpts; } DiagnosticsEngine &getDiagnostics() const { return *Diags; } void setDiagnostics(DiagnosticsEngine &D) { Diags = &D; } const LangOptions &getLangOpts() const { return LangOpts; } const TargetInfo &getTargetInfo() const { return *Target; } const TargetInfo *getAuxTargetInfo() const { return AuxTarget; } FileManager &getFileManager() const { return FileMgr; } SourceManager &getSourceManager() const { return SourceMgr; } MemoryBufferCache &getPCMCache() const { return PCMCache; } HeaderSearch &getHeaderSearchInfo() const { return HeaderInfo; } IdentifierTable &getIdentifierTable() { return Identifiers; } const IdentifierTable &getIdentifierTable() const { return Identifiers; } SelectorTable &getSelectorTable() { return Selectors; } Builtin::Context &getBuiltinInfo() { return BuiltinInfo; } llvm::BumpPtrAllocator &getPreprocessorAllocator() { return BP; } void setPTHManager(PTHManager* pm); PTHManager *getPTHManager() { return PTH.get(); } void setExternalSource(ExternalPreprocessorSource *Source) { ExternalSource = Source; } ExternalPreprocessorSource *getExternalSource() const { return ExternalSource; } /// \brief Retrieve the module loader associated with this preprocessor. ModuleLoader &getModuleLoader() const { return TheModuleLoader; } bool hadModuleLoaderFatalFailure() const { return TheModuleLoader.HadFatalFailure; } /// \brief True if we are currently preprocessing a #if or #elif directive bool isParsingIfOrElifDirective() const { return ParsingIfOrElifDirective; } /// \brief Control whether the preprocessor retains comments in output. void SetCommentRetentionState(bool KeepComments, bool KeepMacroComments) { this->KeepComments = KeepComments | KeepMacroComments; this->KeepMacroComments = KeepMacroComments; } bool getCommentRetentionState() const { return KeepComments; } void setPragmasEnabled(bool Enabled) { PragmasEnabled = Enabled; } bool getPragmasEnabled() const { return PragmasEnabled; } void SetSuppressIncludeNotFoundError(bool Suppress) { SuppressIncludeNotFoundError = Suppress; } bool GetSuppressIncludeNotFoundError() { return SuppressIncludeNotFoundError; } /// Sets whether the preprocessor is responsible for producing output or if /// it is producing tokens to be consumed by Parse and Sema. void setPreprocessedOutput(bool IsPreprocessedOutput) { PreprocessedOutput = IsPreprocessedOutput; } /// Returns true if the preprocessor is responsible for generating output, /// false if it is producing tokens to be consumed by Parse and Sema. bool isPreprocessedOutput() const { return PreprocessedOutput; } /// \brief Return true if we are lexing directly from the specified lexer. bool isCurrentLexer(const PreprocessorLexer *L) const { return CurPPLexer == L; } /// \brief Return the current lexer being lexed from. 
/// /// Note that this ignores any potentially active macro expansions and _Pragma /// expansions going on at the time. PreprocessorLexer *getCurrentLexer() const { return CurPPLexer; } /// \brief Return the current file lexer being lexed from. /// /// Note that this ignores any potentially active macro expansions and _Pragma /// expansions going on at the time. PreprocessorLexer *getCurrentFileLexer() const; /// \brief Return the submodule owning the file being lexed. This may not be /// the current module if we have changed modules since entering the file. Module *getCurrentLexerSubmodule() const { return CurLexerSubmodule; } /// \brief Returns the FileID for the preprocessor predefines. FileID getPredefinesFileID() const { return PredefinesFileID; } /// \{ /// \brief Accessors for preprocessor callbacks. /// /// Note that this class takes ownership of any PPCallbacks object given to /// it. PPCallbacks *getPPCallbacks() const { return Callbacks.get(); } void addPPCallbacks(std::unique_ptr C) { if (Callbacks) C = llvm::make_unique(std::move(C), std::move(Callbacks)); Callbacks = std::move(C); } /// \} bool isMacroDefined(StringRef Id) { return isMacroDefined(&Identifiers.get(Id)); } bool isMacroDefined(const IdentifierInfo *II) { return II->hasMacroDefinition() && (!getLangOpts().Modules || (bool)getMacroDefinition(II)); } /// \brief Determine whether II is defined as a macro within the module M, /// if that is a module that we've already preprocessed. Does not check for /// macros imported into M. bool isMacroDefinedInLocalModule(const IdentifierInfo *II, Module *M) { if (!II->hasMacroDefinition()) return false; auto I = Submodules.find(M); if (I == Submodules.end()) return false; auto J = I->second.Macros.find(II); if (J == I->second.Macros.end()) return false; auto *MD = J->second.getLatest(); return MD && MD->isDefined(); } MacroDefinition getMacroDefinition(const IdentifierInfo *II) { if (!II->hasMacroDefinition()) return MacroDefinition(); MacroState &S = CurSubmoduleState->Macros[II]; auto *MD = S.getLatest(); while (MD && isa(MD)) MD = MD->getPrevious(); return MacroDefinition(dyn_cast_or_null(MD), S.getActiveModuleMacros(*this, II), S.isAmbiguous(*this, II)); } MacroDefinition getMacroDefinitionAtLoc(const IdentifierInfo *II, SourceLocation Loc) { if (!II->hadMacroDefinition()) return MacroDefinition(); MacroState &S = CurSubmoduleState->Macros[II]; MacroDirective::DefInfo DI; if (auto *MD = S.getLatest()) DI = MD->findDirectiveAtLoc(Loc, getSourceManager()); // FIXME: Compute the set of active module macros at the specified location. return MacroDefinition(DI.getDirective(), S.getActiveModuleMacros(*this, II), S.isAmbiguous(*this, II)); } /// \brief Given an identifier, return its latest non-imported MacroDirective /// if it is \#define'd and not \#undef'd, or null if it isn't \#define'd. MacroDirective *getLocalMacroDirective(const IdentifierInfo *II) const { if (!II->hasMacroDefinition()) return nullptr; auto *MD = getLocalMacroDirectiveHistory(II); if (!MD || MD->getDefinition().isUndefined()) return nullptr; return MD; } const MacroInfo *getMacroInfo(const IdentifierInfo *II) const { return const_cast(this)->getMacroInfo(II); } MacroInfo *getMacroInfo(const IdentifierInfo *II) { if (!II->hasMacroDefinition()) return nullptr; if (auto MD = getMacroDefinition(II)) return MD.getMacroInfo(); return nullptr; } /// \brief Given an identifier, return the latest non-imported macro /// directive for that identifier. 
/// /// One can iterate over all previous macro directives from the most recent /// one. MacroDirective *getLocalMacroDirectiveHistory(const IdentifierInfo *II) const; /// \brief Add a directive to the macro directive history for this identifier. void appendMacroDirective(IdentifierInfo *II, MacroDirective *MD); DefMacroDirective *appendDefMacroDirective(IdentifierInfo *II, MacroInfo *MI, SourceLocation Loc) { DefMacroDirective *MD = AllocateDefMacroDirective(MI, Loc); appendMacroDirective(II, MD); return MD; } DefMacroDirective *appendDefMacroDirective(IdentifierInfo *II, MacroInfo *MI) { return appendDefMacroDirective(II, MI, MI->getDefinitionLoc()); } /// \brief Set a MacroDirective that was loaded from a PCH file. void setLoadedMacroDirective(IdentifierInfo *II, MacroDirective *ED, MacroDirective *MD); /// \brief Register an exported macro for a module and identifier. ModuleMacro *addModuleMacro(Module *Mod, IdentifierInfo *II, MacroInfo *Macro, ArrayRef Overrides, bool &IsNew); ModuleMacro *getModuleMacro(Module *Mod, IdentifierInfo *II); /// \brief Get the list of leaf (non-overridden) module macros for a name. ArrayRef getLeafModuleMacros(const IdentifierInfo *II) const { if (II->isOutOfDate()) updateOutOfDateIdentifier(const_cast(*II)); auto I = LeafModuleMacros.find(II); if (I != LeafModuleMacros.end()) return I->second; return None; } /// \{ /// Iterators for the macro history table. Currently defined macros have /// IdentifierInfo::hasMacroDefinition() set and an empty /// MacroInfo::getUndefLoc() at the head of the list. typedef MacroMap::const_iterator macro_iterator; macro_iterator macro_begin(bool IncludeExternalMacros = true) const; macro_iterator macro_end(bool IncludeExternalMacros = true) const; llvm::iterator_range macros(bool IncludeExternalMacros = true) const { return llvm::make_range(macro_begin(IncludeExternalMacros), macro_end(IncludeExternalMacros)); } /// \} /// \brief Return the name of the macro defined before \p Loc that has /// spelling \p Tokens. If there are multiple macros with same spelling, /// return the last one defined. StringRef getLastMacroWithSpelling(SourceLocation Loc, ArrayRef Tokens) const; const std::string &getPredefines() const { return Predefines; } /// \brief Set the predefines for this Preprocessor. /// /// These predefines are automatically injected when parsing the main file. void setPredefines(const char *P) { Predefines = P; } void setPredefines(StringRef P) { Predefines = P; } /// Return information about the specified preprocessor /// identifier token. IdentifierInfo *getIdentifierInfo(StringRef Name) const { return &Identifiers.get(Name); } /// \brief Add the specified pragma handler to this preprocessor. /// /// If \p Namespace is non-null, then it is a token required to exist on the /// pragma line before the pragma string starts, e.g. "STDC" or "GCC". void AddPragmaHandler(StringRef Namespace, PragmaHandler *Handler); void AddPragmaHandler(PragmaHandler *Handler) { AddPragmaHandler(StringRef(), Handler); } /// \brief Remove the specific pragma handler from this preprocessor. /// /// If \p Namespace is non-null, then it should be the namespace that /// \p Handler was added to. It is an error to remove a handler that /// has not been registered. void RemovePragmaHandler(StringRef Namespace, PragmaHandler *Handler); void RemovePragmaHandler(PragmaHandler *Handler) { RemovePragmaHandler(StringRef(), Handler); } /// Install empty handlers for all pragmas (making them ignored). 
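As an illustrative aside (not part of this patch), a minimal pragma handler registered through AddPragmaHandler(), as documented above, might look like the sketch below. The namespace "my_tool", the pragma spelling "mark", and the class name are invented; the HandlePragma() signature shown is the one this header's PragmaHandler interface is assumed to use in this release, and handler ownership conventions are not covered by this excerpt.

// Sketch only: a handler for `#pragma my_tool mark` that just consumes the directive.
#include "clang/Lex/Pragma.h"
#include "clang/Lex/Preprocessor.h"

class MyToolPragmaHandler : public clang::PragmaHandler {
public:
  MyToolPragmaHandler() : clang::PragmaHandler("mark") {}
  void HandlePragma(clang::Preprocessor &PP,
                    clang::PragmaIntroducerKind Introducer,
                    clang::Token &FirstToken) override {
    // Consume everything up to the end-of-directive token.
    clang::Token Tok;
    do {
      PP.Lex(Tok);
    } while (Tok.isNot(clang::tok::eod));
  }
};

// Registration, typically done while setting up the Preprocessor:
//   PP.AddPragmaHandler("my_tool", new MyToolPragmaHandler());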
void IgnorePragmas(); /// \brief Add the specified comment handler to the preprocessor. void addCommentHandler(CommentHandler *Handler); /// \brief Remove the specified comment handler. /// /// It is an error to remove a handler that has not been registered. void removeCommentHandler(CommentHandler *Handler); /// \brief Set the code completion handler to the given object. void setCodeCompletionHandler(CodeCompletionHandler &Handler) { CodeComplete = &Handler; } /// \brief Retrieve the current code-completion handler. CodeCompletionHandler *getCodeCompletionHandler() const { return CodeComplete; } /// \brief Clear out the code completion handler. void clearCodeCompletionHandler() { CodeComplete = nullptr; } /// \brief Hook used by the lexer to invoke the "natural language" code /// completion point. void CodeCompleteNaturalLanguage(); /// \brief Set the code completion token for filtering purposes. void setCodeCompletionIdentifierInfo(IdentifierInfo *Filter) { CodeCompletionII = Filter; } /// \brief Get the code completion token for filtering purposes. StringRef getCodeCompletionFilter() { if (CodeCompletionII) return CodeCompletionII->getName(); return {}; } /// \brief Retrieve the preprocessing record, or NULL if there is no /// preprocessing record. PreprocessingRecord *getPreprocessingRecord() const { return Record; } /// \brief Create a new preprocessing record, which will keep track of /// all macro expansions, macro definitions, etc. void createPreprocessingRecord(); /// \brief Enter the specified FileID as the main source file, /// which implicitly adds the builtin defines etc. void EnterMainSourceFile(); - /// \brief After parser warm-up, initialize the conditional stack from - /// the preamble. - void replayPreambleConditionalStack(); - /// \brief Inform the preprocessor callbacks that processing is complete. void EndSourceFile(); /// \brief Add a source file to the top of the include stack and /// start lexing tokens from it instead of the current buffer. /// /// Emits a diagnostic, doesn't enter the file, and returns true on error. bool EnterSourceFile(FileID CurFileID, const DirectoryLookup *Dir, SourceLocation Loc); /// \brief Add a Macro to the top of the include stack and start lexing /// tokens from it instead of the current buffer. /// /// \param Args specifies the tokens input to a function-like macro. /// \param ILEnd specifies the location of the ')' for a function-like macro /// or the identifier for an object-like macro. void EnterMacro(Token &Identifier, SourceLocation ILEnd, MacroInfo *Macro, MacroArgs *Args); /// \brief Add a "macro" context to the top of the include stack, /// which will cause the lexer to start returning the specified tokens. /// /// If \p DisableMacroExpansion is true, tokens lexed from the token stream /// will not be subject to further macro expansion. Otherwise, these tokens /// will be re-macro-expanded when/if expansion is enabled. /// /// If \p OwnsTokens is false, this method assumes that the specified stream /// of tokens has a permanent owner somewhere, so they do not need to be /// copied. If it is true, it assumes the array of tokens is allocated with /// \c new[] and the Preprocessor will delete[] it. 
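As an illustrative aside (not part of this patch), the comment-handler registration documented earlier in this section might be used like the sketch below; the class name CommentCollector is invented. Returning false from HandleComment() matches the CommentHandler contract stated at the end of this header (no tokens were pushed back).

// Sketch only: records the source range of every comment seen by the Preprocessor.
#include "clang/Lex/Preprocessor.h"
#include "llvm/ADT/ArrayRef.h"
#include <vector>

class CommentCollector : public clang::CommentHandler {
  std::vector<clang::SourceRange> Comments;
public:
  bool HandleComment(clang::Preprocessor &PP,
                     clang::SourceRange Comment) override {
    Comments.push_back(Comment);
    return false; // no tokens pushed back into the stream
  }
  llvm::ArrayRef<clang::SourceRange> comments() const { return Comments; }
};

// Usage, e.g. around a parse:
//   CommentCollector CC;
//   PP.addCommentHandler(&CC);
//   ... run the compilation ...
//   PP.removeCommentHandler(&CC);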
private: void EnterTokenStream(const Token *Toks, unsigned NumToks, bool DisableMacroExpansion, bool OwnsTokens); public: void EnterTokenStream(std::unique_ptr Toks, unsigned NumToks, bool DisableMacroExpansion) { EnterTokenStream(Toks.release(), NumToks, DisableMacroExpansion, true); } void EnterTokenStream(ArrayRef Toks, bool DisableMacroExpansion) { EnterTokenStream(Toks.data(), Toks.size(), DisableMacroExpansion, false); } /// \brief Pop the current lexer/macro exp off the top of the lexer stack. /// /// This should only be used in situations where the current state of the /// top-of-stack lexer is known. void RemoveTopOfLexerStack(); /// From the point that this method is called, and until /// CommitBacktrackedTokens() or Backtrack() is called, the Preprocessor /// keeps track of the lexed tokens so that a subsequent Backtrack() call will /// make the Preprocessor re-lex the same tokens. /// /// Nested backtracks are allowed, meaning that EnableBacktrackAtThisPos can /// be called multiple times and CommitBacktrackedTokens/Backtrack calls will /// be combined with the EnableBacktrackAtThisPos calls in reverse order. /// /// NOTE: *DO NOT* forget to call either CommitBacktrackedTokens or Backtrack /// at some point after EnableBacktrackAtThisPos. If you don't, caching of /// tokens will continue indefinitely. /// void EnableBacktrackAtThisPos(); /// \brief Disable the last EnableBacktrackAtThisPos call. void CommitBacktrackedTokens(); struct CachedTokensRange { CachedTokensTy::size_type Begin, End; }; private: /// \brief A range of cached tokens that should be erased after lexing /// when backtracking requires the erasure of such cached tokens. Optional CachedTokenRangeToErase; public: /// \brief Returns the range of cached tokens that were lexed since /// EnableBacktrackAtThisPos() was previously called. CachedTokensRange LastCachedTokenRange(); /// \brief Erase the range of cached tokens that were lexed since /// EnableBacktrackAtThisPos() was previously called. void EraseCachedTokens(CachedTokensRange TokenRange); /// \brief Make Preprocessor re-lex the tokens that were lexed since /// EnableBacktrackAtThisPos() was previously called. void Backtrack(); /// \brief True if EnableBacktrackAtThisPos() was called and /// caching of tokens is on. bool isBacktrackEnabled() const { return !BacktrackPositions.empty(); } /// \brief Lex the next token for this preprocessor. void Lex(Token &Result); void LexAfterModuleImport(Token &Result); void makeModuleVisible(Module *M, SourceLocation Loc); SourceLocation getModuleImportLoc(Module *M) const { return CurSubmoduleState->VisibleModules.getImportLoc(M); } /// \brief Lex a string literal, which may be the concatenation of multiple /// string literals and may even come from macro expansion. /// \returns true on success, false if a error diagnostic has been generated. bool LexStringLiteral(Token &Result, std::string &String, const char *DiagnosticTag, bool AllowMacroExpansion) { if (AllowMacroExpansion) Lex(Result); else LexUnexpandedToken(Result); return FinishLexStringLiteral(Result, String, DiagnosticTag, AllowMacroExpansion); } /// \brief Complete the lexing of a string literal where the first token has /// already been lexed (see LexStringLiteral). bool FinishLexStringLiteral(Token &Result, std::string &String, const char *DiagnosticTag, bool AllowMacroExpansion); /// \brief Lex a token. If it's a comment, keep lexing until we get /// something not a comment. 
/// /// This is useful in -E -C mode where comments would foul up preprocessor /// directive handling. void LexNonComment(Token &Result) { do Lex(Result); while (Result.getKind() == tok::comment); } /// \brief Just like Lex, but disables macro expansion of identifier tokens. void LexUnexpandedToken(Token &Result) { // Disable macro expansion. bool OldVal = DisableMacroExpansion; DisableMacroExpansion = true; // Lex the token. Lex(Result); // Reenable it. DisableMacroExpansion = OldVal; } /// \brief Like LexNonComment, but this disables macro expansion of /// identifier tokens. void LexUnexpandedNonComment(Token &Result) { do LexUnexpandedToken(Result); while (Result.getKind() == tok::comment); } /// \brief Parses a simple integer literal to get its numeric value. Floating /// point literals and user defined literals are rejected. Used primarily to /// handle pragmas that accept integer arguments. bool parseSimpleIntegerLiteral(Token &Tok, uint64_t &Value); /// Disables macro expansion everywhere except for preprocessor directives. void SetMacroExpansionOnlyInDirectives() { DisableMacroExpansion = true; MacroExpansionInDirectivesOverride = true; } /// \brief Peeks ahead N tokens and returns that token without consuming any /// tokens. /// /// LookAhead(0) returns the next token that would be returned by Lex(), /// LookAhead(1) returns the token after it, etc. This returns normal /// tokens after phase 5. As such, it is equivalent to using /// 'Lex', not 'LexUnexpandedToken'. const Token &LookAhead(unsigned N) { if (CachedLexPos + N < CachedTokens.size()) return CachedTokens[CachedLexPos+N]; else return PeekAhead(N+1); } /// \brief When backtracking is enabled and tokens are cached, /// this allows to revert a specific number of tokens. /// /// Note that the number of tokens being reverted should be up to the last /// backtrack position, not more. void RevertCachedTokens(unsigned N) { assert(isBacktrackEnabled() && "Should only be called when tokens are cached for backtracking"); assert(signed(CachedLexPos) - signed(N) >= signed(BacktrackPositions.back()) && "Should revert tokens up to the last backtrack position, not more"); assert(signed(CachedLexPos) - signed(N) >= 0 && "Corrupted backtrack positions ?"); CachedLexPos -= N; } /// \brief Enters a token in the token stream to be lexed next. /// /// If BackTrack() is called afterwards, the token will remain at the /// insertion point. void EnterToken(const Token &Tok) { EnterCachingLexMode(); CachedTokens.insert(CachedTokens.begin()+CachedLexPos, Tok); } /// We notify the Preprocessor that if it is caching tokens (because /// backtrack is enabled) it should replace the most recent cached tokens /// with the given annotation token. This function has no effect if /// backtracking is not enabled. /// /// Note that the use of this function is just for optimization, so that the /// cached tokens doesn't get re-parsed and re-resolved after a backtrack is /// invoked. void AnnotateCachedTokens(const Token &Tok) { assert(Tok.isAnnotation() && "Expected annotation token"); if (CachedLexPos != 0 && isBacktrackEnabled()) AnnotatePreviousCachedTokens(Tok); } /// Get the location of the last cached token, suitable for setting the end /// location of an annotation token. SourceLocation getLastCachedTokenLocation() const { assert(CachedLexPos != 0); return CachedTokens[CachedLexPos-1].getLastLoc(); } /// \brief Whether \p Tok is the most recent token (`CachedLexPos - 1`) in /// CachedTokens. 
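As an illustrative aside (not part of this patch), the backtracking protocol described above pairs every EnableBacktrackAtThisPos() with either Backtrack() or CommitBacktrackedTokens(); the function name tryParseThing and its stand-in check are invented for the example.

// Sketch only: speculative lexing with the caching/backtracking API documented above.
#include "clang/Lex/Preprocessor.h"

static bool tryParseThing(clang::Preprocessor &PP) {
  PP.EnableBacktrackAtThisPos();      // start caching lexed tokens
  clang::Token Tok;
  PP.Lex(Tok);
  bool LooksRight = Tok.is(clang::tok::identifier); // stand-in predicate
  if (!LooksRight) {
    PP.Backtrack();                   // the cached tokens will be re-lexed
    return false;
  }
  PP.CommitBacktrackedTokens();       // keep the tokens we consumed
  return true;
}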
bool IsPreviousCachedToken(const Token &Tok) const; /// \brief Replace token in `CachedLexPos - 1` in CachedTokens by the tokens /// in \p NewToks. /// /// Useful when a token needs to be split in smaller ones and CachedTokens /// most recent token must to be updated to reflect that. void ReplacePreviousCachedToken(ArrayRef NewToks); /// \brief Replace the last token with an annotation token. /// /// Like AnnotateCachedTokens(), this routine replaces an /// already-parsed (and resolved) token with an annotation /// token. However, this routine only replaces the last token with /// the annotation token; it does not affect any other cached /// tokens. This function has no effect if backtracking is not /// enabled. void ReplaceLastTokenWithAnnotation(const Token &Tok) { assert(Tok.isAnnotation() && "Expected annotation token"); if (CachedLexPos != 0 && isBacktrackEnabled()) CachedTokens[CachedLexPos-1] = Tok; } /// Enter an annotation token into the token stream. void EnterAnnotationToken(SourceRange Range, tok::TokenKind Kind, void *AnnotationVal); /// Update the current token to represent the provided /// identifier, in order to cache an action performed by typo correction. void TypoCorrectToken(const Token &Tok) { assert(Tok.getIdentifierInfo() && "Expected identifier token"); if (CachedLexPos != 0 && isBacktrackEnabled()) CachedTokens[CachedLexPos-1] = Tok; } /// \brief Recompute the current lexer kind based on the CurLexer/CurPTHLexer/ /// CurTokenLexer pointers. void recomputeCurLexerKind(); /// \brief Returns true if incremental processing is enabled bool isIncrementalProcessingEnabled() const { return IncrementalProcessing; } /// \brief Enables the incremental processing void enableIncrementalProcessing(bool value = true) { IncrementalProcessing = value; } /// \brief Specify the point at which code-completion will be performed. /// /// \param File the file in which code completion should occur. If /// this file is included multiple times, code-completion will /// perform completion the first time it is included. If NULL, this /// function clears out the code-completion point. /// /// \param Line the line at which code completion should occur /// (1-based). /// /// \param Column the column at which code completion should occur /// (1-based). /// /// \returns true if an error occurred, false otherwise. bool SetCodeCompletionPoint(const FileEntry *File, unsigned Line, unsigned Column); /// \brief Determine if we are performing code completion. bool isCodeCompletionEnabled() const { return CodeCompletionFile != nullptr; } /// \brief Returns the location of the code-completion point. /// /// Returns an invalid location if code-completion is not enabled or the file /// containing the code-completion point has not been lexed yet. SourceLocation getCodeCompletionLoc() const { return CodeCompletionLoc; } /// \brief Returns the start location of the file of code-completion point. /// /// Returns an invalid location if code-completion is not enabled or the file /// containing the code-completion point has not been lexed yet. SourceLocation getCodeCompletionFileLoc() const { return CodeCompletionFileLoc; } /// \brief Returns true if code-completion is enabled and we have hit the /// code-completion point. bool isCodeCompletionReached() const { return CodeCompletionReached; } /// \brief Note that we hit the code-completion point. 
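As an illustrative aside (not part of this patch), arming the code-completion point documented above might look like the sketch below; the helper name armCompletion is invented, and the FileManager::getFile() call reflects this release's assumed interface.

// Sketch only: set a code-completion point by file/line/column before parsing.
#include "clang/Basic/FileManager.h"
#include "clang/Lex/Preprocessor.h"

static bool armCompletion(clang::Preprocessor &PP, llvm::StringRef Path,
                          unsigned Line, unsigned Column) {
  const clang::FileEntry *FE = PP.getFileManager().getFile(Path);
  if (!FE)
    return true; // mirror the API: true means an error occurred
  // Completion fires the first time this location is lexed; clients can poll
  // isCodeCompletionReached() afterwards.
  return PP.SetCodeCompletionPoint(FE, Line, Column);
}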
void setCodeCompletionReached() { assert(isCodeCompletionEnabled() && "Code-completion not enabled!"); CodeCompletionReached = true; // Silence any diagnostics that occur after we hit the code-completion. getDiagnostics().setSuppressAllDiagnostics(true); } /// \brief The location of the currently-active \#pragma clang /// arc_cf_code_audited begin. /// /// Returns an invalid location if there is no such pragma active. SourceLocation getPragmaARCCFCodeAuditedLoc() const { return PragmaARCCFCodeAuditedLoc; } /// \brief Set the location of the currently-active \#pragma clang /// arc_cf_code_audited begin. An invalid location ends the pragma. void setPragmaARCCFCodeAuditedLoc(SourceLocation Loc) { PragmaARCCFCodeAuditedLoc = Loc; } /// \brief The location of the currently-active \#pragma clang /// assume_nonnull begin. /// /// Returns an invalid location if there is no such pragma active. SourceLocation getPragmaAssumeNonNullLoc() const { return PragmaAssumeNonNullLoc; } /// \brief Set the location of the currently-active \#pragma clang /// assume_nonnull begin. An invalid location ends the pragma. void setPragmaAssumeNonNullLoc(SourceLocation Loc) { PragmaAssumeNonNullLoc = Loc; } /// \brief Set the directory in which the main file should be considered /// to have been found, if it is not a real file. void setMainFileDir(const DirectoryEntry *Dir) { MainFileDir = Dir; } /// \brief Instruct the preprocessor to skip part of the main source file. /// /// \param Bytes The number of bytes in the preamble to skip. /// /// \param StartOfLine Whether skipping these bytes puts the lexer at the /// start of a line. void setSkipMainFilePreamble(unsigned Bytes, bool StartOfLine) { SkipMainFilePreamble.first = Bytes; SkipMainFilePreamble.second = StartOfLine; } /// Forwarding function for diagnostics. This emits a diagnostic at /// the specified Token's location, translating the token's start /// position in the current buffer into a SourcePosition object for rendering. DiagnosticBuilder Diag(SourceLocation Loc, unsigned DiagID) const { return Diags->Report(Loc, DiagID); } DiagnosticBuilder Diag(const Token &Tok, unsigned DiagID) const { return Diags->Report(Tok.getLocation(), DiagID); } /// Return the 'spelling' of the token at the given /// location; does not go up to the spelling location or down to the /// expansion location. /// /// \param buffer A buffer which will be used only if the token requires /// "cleaning", e.g. if it contains trigraphs or escaped newlines /// \param invalid If non-null, will be set \c true if an error occurs. StringRef getSpelling(SourceLocation loc, SmallVectorImpl &buffer, bool *invalid = nullptr) const { return Lexer::getSpelling(loc, buffer, SourceMgr, LangOpts, invalid); } /// \brief Return the 'spelling' of the Tok token. /// /// The spelling of a token is the characters used to represent the token in /// the source file after trigraph expansion and escaped-newline folding. In /// particular, this wants to get the true, uncanonicalized, spelling of /// things like digraphs, UCNs, etc. /// /// \param Invalid If non-null, will be set \c true if an error occurs. std::string getSpelling(const Token &Tok, bool *Invalid = nullptr) const { return Lexer::getSpelling(Tok, SourceMgr, LangOpts, Invalid); } /// \brief Get the spelling of a token into a preallocated buffer, instead /// of as an std::string. /// /// The caller is required to allocate enough space for the token, which is /// guaranteed to be at least Tok.getLength() bytes long. 
The length of the /// actual result is returned. /// /// Note that this method may do two possible things: it may either fill in /// the buffer specified with characters, or it may *change the input pointer* /// to point to a constant buffer with the data already in it (avoiding a /// copy). The caller is not allowed to modify the returned buffer pointer /// if an internal buffer is returned. unsigned getSpelling(const Token &Tok, const char *&Buffer, bool *Invalid = nullptr) const { return Lexer::getSpelling(Tok, Buffer, SourceMgr, LangOpts, Invalid); } /// \brief Get the spelling of a token into a SmallVector. /// /// Note that the returned StringRef may not point to the /// supplied buffer if a copy can be avoided. StringRef getSpelling(const Token &Tok, SmallVectorImpl &Buffer, bool *Invalid = nullptr) const; /// \brief Relex the token at the specified location. /// \returns true if there was a failure, false on success. bool getRawToken(SourceLocation Loc, Token &Result, bool IgnoreWhiteSpace = false) { return Lexer::getRawToken(Loc, Result, SourceMgr, LangOpts, IgnoreWhiteSpace); } /// \brief Given a Token \p Tok that is a numeric constant with length 1, /// return the character. char getSpellingOfSingleCharacterNumericConstant(const Token &Tok, bool *Invalid = nullptr) const { assert(Tok.is(tok::numeric_constant) && Tok.getLength() == 1 && "Called on unsupported token"); assert(!Tok.needsCleaning() && "Token can't need cleaning with length 1"); // If the token is carrying a literal data pointer, just use it. if (const char *D = Tok.getLiteralData()) return *D; // Otherwise, fall back on getCharacterData, which is slower, but always // works. return *SourceMgr.getCharacterData(Tok.getLocation(), Invalid); } /// \brief Retrieve the name of the immediate macro expansion. /// /// This routine starts from a source location, and finds the name of the /// macro responsible for its immediate expansion. It looks through any /// intervening macro argument expansions to compute this. It returns a /// StringRef that refers to the SourceManager-owned buffer of the source /// where that macro name is spelled. Thus, the result shouldn't out-live /// the SourceManager. StringRef getImmediateMacroName(SourceLocation Loc) { return Lexer::getImmediateMacroName(Loc, SourceMgr, getLangOpts()); } /// \brief Plop the specified string into a scratch buffer and set the /// specified token's location and length to it. /// /// If specified, the source location provides a location of the expansion /// point of the token. void CreateString(StringRef Str, Token &Tok, SourceLocation ExpansionLocStart = SourceLocation(), SourceLocation ExpansionLocEnd = SourceLocation()); /// \brief Computes the source location just past the end of the /// token at this source location. /// /// This routine can be used to produce a source location that /// points just past the end of the token referenced by \p Loc, and /// is generally used when a diagnostic needs to point just after a /// token where it expected something different that it received. If /// the returned source location would not be meaningful (e.g., if /// it points into a macro), this routine returns an invalid /// source location. /// /// \param Offset an offset from the end of the token, where the source /// location should refer to. The default offset (0) produces a source /// location pointing just past the end of the token; an offset of 1 produces /// a source location pointing to the last character in the token, etc. 
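As an illustrative aside (not part of this patch), getSpelling() and getLocForEndOfToken() (declared just below) are typically combined when emitting a diagnostic that points just past a token; the helper name diagnoseAfterToken is invented and the diagnostic ID is left abstract.

// Sketch only: report a diagnostic immediately after a token, naming its spelling.
#include "clang/Lex/Preprocessor.h"

static void diagnoseAfterToken(clang::Preprocessor &PP,
                               const clang::Token &Tok, unsigned DiagID) {
  // Spelling as written in the source, after trigraph/escaped-newline cleanup.
  std::string Spelling = PP.getSpelling(Tok);
  // Location just past the token, useful for "insert here"-style notes.
  clang::SourceLocation After = PP.getLocForEndOfToken(Tok.getLocation());
  PP.Diag(After, DiagID) << Spelling;
}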
SourceLocation getLocForEndOfToken(SourceLocation Loc, unsigned Offset = 0) { return Lexer::getLocForEndOfToken(Loc, Offset, SourceMgr, LangOpts); } /// \brief Returns true if the given MacroID location points at the first /// token of the macro expansion. /// /// \param MacroBegin If non-null and function returns true, it is set to /// begin location of the macro. bool isAtStartOfMacroExpansion(SourceLocation loc, SourceLocation *MacroBegin = nullptr) const { return Lexer::isAtStartOfMacroExpansion(loc, SourceMgr, LangOpts, MacroBegin); } /// \brief Returns true if the given MacroID location points at the last /// token of the macro expansion. /// /// \param MacroEnd If non-null and function returns true, it is set to /// end location of the macro. bool isAtEndOfMacroExpansion(SourceLocation loc, SourceLocation *MacroEnd = nullptr) const { return Lexer::isAtEndOfMacroExpansion(loc, SourceMgr, LangOpts, MacroEnd); } /// \brief Print the token to stderr, used for debugging. void DumpToken(const Token &Tok, bool DumpFlags = false) const; void DumpLocation(SourceLocation Loc) const; void DumpMacro(const MacroInfo &MI) const; void dumpMacroInfo(const IdentifierInfo *II); /// \brief Given a location that specifies the start of a /// token, return a new location that specifies a character within the token. SourceLocation AdvanceToTokenCharacter(SourceLocation TokStart, unsigned Char) const { return Lexer::AdvanceToTokenCharacter(TokStart, Char, SourceMgr, LangOpts); } /// \brief Increment the counters for the number of token paste operations /// performed. /// /// If fast was specified, this is a 'fast paste' case we handled. void IncrementPasteCounter(bool isFast) { if (isFast) ++NumFastTokenPaste; else ++NumTokenPaste; } void PrintStats(); size_t getTotalMemory() const; /// When the macro expander pastes together a comment (/##/) in Microsoft /// mode, this method handles updating the current state, returning the /// token on the next source line. void HandleMicrosoftCommentPaste(Token &Tok); //===--------------------------------------------------------------------===// // Preprocessor callback methods. These are invoked by a lexer as various // directives and events are found. /// Given a tok::raw_identifier token, look up the /// identifier information for the token and install it into the token, /// updating the token kind accordingly. IdentifierInfo *LookUpIdentifierInfo(Token &Identifier) const; private: llvm::DenseMap PoisonReasons; public: /// \brief Specifies the reason for poisoning an identifier. /// /// If that identifier is accessed while poisoned, then this reason will be /// used instead of the default "poisoned" diagnostic. void SetPoisonReason(IdentifierInfo *II, unsigned DiagID); /// \brief Display reason for poisoned identifier. void HandlePoisonedIdentifier(Token & Tok); void MaybeHandlePoisonedIdentifier(Token & Identifier) { if(IdentifierInfo * II = Identifier.getIdentifierInfo()) { if(II->isPoisoned()) { HandlePoisonedIdentifier(Identifier); } } } private: /// Identifiers used for SEH handling in Borland. 
These are only /// allowed in particular circumstances // __except block IdentifierInfo *Ident__exception_code, *Ident___exception_code, *Ident_GetExceptionCode; // __except filter expression IdentifierInfo *Ident__exception_info, *Ident___exception_info, *Ident_GetExceptionInfo; // __finally IdentifierInfo *Ident__abnormal_termination, *Ident___abnormal_termination, *Ident_AbnormalTermination; const char *getCurLexerEndPos(); void diagnoseMissingHeaderInUmbrellaDir(const Module &Mod); public: void PoisonSEHIdentifiers(bool Poison = true); // Borland /// \brief Callback invoked when the lexer reads an identifier and has /// filled in the tokens IdentifierInfo member. /// /// This callback potentially macro expands it or turns it into a named /// token (like 'for'). /// /// \returns true if we actually computed a token, false if we need to /// lex again. bool HandleIdentifier(Token &Identifier); /// \brief Callback invoked when the lexer hits the end of the current file. /// /// This either returns the EOF token and returns true, or /// pops a level off the include stack and returns false, at which point the /// client should call lex again. bool HandleEndOfFile(Token &Result, bool isEndOfMacro = false); /// \brief Callback invoked when the current TokenLexer hits the end of its /// token stream. bool HandleEndOfTokenLexer(Token &Result); /// \brief Callback invoked when the lexer sees a # token at the start of a /// line. /// /// This consumes the directive, modifies the lexer/preprocessor state, and /// advances the lexer(s) so that the next token read is the correct one. void HandleDirective(Token &Result); /// \brief Ensure that the next token is a tok::eod token. /// /// If not, emit a diagnostic and consume up until the eod. /// If \p EnableMacros is true, then we consider macros that expand to zero /// tokens as being ok. void CheckEndOfDirective(const char *Directive, bool EnableMacros = false); /// \brief Read and discard all tokens remaining on the current line until /// the tok::eod token is found. void DiscardUntilEndOfDirective(); /// \brief Returns true if the preprocessor has seen a use of /// __DATE__ or __TIME__ in the file so far. bool SawDateOrTime() const { return DATELoc != SourceLocation() || TIMELoc != SourceLocation(); } unsigned getCounterValue() const { return CounterValue; } void setCounterValue(unsigned V) { CounterValue = V; } /// \brief Retrieves the module that we're currently building, if any. Module *getCurrentModule(); /// \brief Allocate a new MacroInfo object with the provided SourceLocation. MacroInfo *AllocateMacroInfo(SourceLocation L); /// \brief Turn the specified lexer token into a fully checked and spelled /// filename, e.g. as an operand of \#include. /// /// The caller is expected to provide a buffer that is large enough to hold /// the spelling of the filename, but is also expected to handle the case /// when this method decides to use a different buffer. /// /// \returns true if the input filename was in <>'s or false if it was /// in ""'s. bool GetIncludeFilenameSpelling(SourceLocation Loc,StringRef &Filename); /// \brief Given a "foo" or \ reference, look up the indicated file. /// /// Returns null on failure. \p isAngled indicates whether the file /// reference is for system \#include's or not (i.e. using <> instead of ""). 
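As an illustrative aside (not part of this patch), a minimal call to LookupFile() (declared just below) that ignores search-path bookkeeping might look like the sketch; the helper name findHeader is invented, and all optional out-parameters are simply passed as null, which is assumed to be acceptable to this interface.

// Sketch only: resolve a header name the way an #include would, without extras.
#include "clang/Lex/Preprocessor.h"

static const clang::FileEntry *findHeader(clang::Preprocessor &PP,
                                          clang::SourceLocation Loc,
                                          llvm::StringRef Name,
                                          bool IsAngled) {
  const clang::DirectoryLookup *CurDir = nullptr;
  return PP.LookupFile(Loc, Name, IsAngled,
                       /*FromDir=*/nullptr, /*FromFile=*/nullptr, CurDir,
                       /*SearchPath=*/nullptr, /*RelativePath=*/nullptr,
                       /*SuggestedModule=*/nullptr, /*IsMapped=*/nullptr);
}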
const FileEntry *LookupFile(SourceLocation FilenameLoc, StringRef Filename, bool isAngled, const DirectoryLookup *FromDir, const FileEntry *FromFile, const DirectoryLookup *&CurDir, SmallVectorImpl *SearchPath, SmallVectorImpl *RelativePath, ModuleMap::KnownHeader *SuggestedModule, bool *IsMapped, bool SkipCache = false); /// \brief Get the DirectoryLookup structure used to find the current /// FileEntry, if CurLexer is non-null and if applicable. /// /// This allows us to implement \#include_next and find directory-specific /// properties. const DirectoryLookup *GetCurDirLookup() { return CurDirLookup; } /// \brief Return true if we're in the top-level file, not in a \#include. bool isInPrimaryFile() const; /// \brief Handle cases where the \#include name is expanded /// from a macro as multiple tokens, which need to be glued together. /// /// This occurs for code like: /// \code /// \#define FOO /// \#include FOO /// \endcode /// because in this case, "" is returned as 7 tokens, not one. /// /// This code concatenates and consumes tokens up to the '>' token. It /// returns false if the > was found, otherwise it returns true if it finds /// and consumes the EOD marker. bool ConcatenateIncludeName(SmallString<128> &FilenameBuffer, SourceLocation &End); /// \brief Lex an on-off-switch (C99 6.10.6p2) and verify that it is /// followed by EOD. Return true if the token is not a valid on-off-switch. bool LexOnOffSwitch(tok::OnOffSwitch &OOS); bool CheckMacroName(Token &MacroNameTok, MacroUse isDefineUndef, bool *ShadowFlag = nullptr); void EnterSubmodule(Module *M, SourceLocation ImportLoc, bool ForPragma); Module *LeaveSubmodule(bool ForPragma); private: void PushIncludeMacroStack() { assert(CurLexerKind != CLK_CachingLexer && "cannot push a caching lexer"); IncludeMacroStack.emplace_back(CurLexerKind, CurLexerSubmodule, std::move(CurLexer), std::move(CurPTHLexer), CurPPLexer, std::move(CurTokenLexer), CurDirLookup); CurPPLexer = nullptr; } void PopIncludeMacroStack() { CurLexer = std::move(IncludeMacroStack.back().TheLexer); CurPTHLexer = std::move(IncludeMacroStack.back().ThePTHLexer); CurPPLexer = IncludeMacroStack.back().ThePPLexer; CurTokenLexer = std::move(IncludeMacroStack.back().TheTokenLexer); CurDirLookup = IncludeMacroStack.back().TheDirLookup; CurLexerSubmodule = IncludeMacroStack.back().TheSubmodule; CurLexerKind = IncludeMacroStack.back().CurLexerKind; IncludeMacroStack.pop_back(); } void PropagateLineStartLeadingSpaceInfo(Token &Result); /// Determine whether we need to create module macros for #defines in the /// current context. bool needModuleMacros() const; /// Update the set of active module macros and ambiguity flag for a module /// macro name. void updateModuleMacroInfo(const IdentifierInfo *II, ModuleMacroInfo &Info); DefMacroDirective *AllocateDefMacroDirective(MacroInfo *MI, SourceLocation Loc); UndefMacroDirective *AllocateUndefMacroDirective(SourceLocation UndefLoc); VisibilityMacroDirective *AllocateVisibilityMacroDirective(SourceLocation Loc, bool isPublic); /// \brief Lex and validate a macro name, which occurs after a /// \#define or \#undef. /// /// \param MacroNameTok Token that represents the name defined or undefined. /// \param IsDefineUndef Kind if preprocessor directive. /// \param ShadowFlag Points to flag that is set if macro name shadows /// a keyword. /// /// This emits a diagnostic, sets the token kind to eod, /// and discards the rest of the macro line if the macro name is invalid. 
void ReadMacroName(Token &MacroNameTok, MacroUse IsDefineUndef = MU_Other, bool *ShadowFlag = nullptr); /// ReadOptionalMacroParameterListAndBody - This consumes all (i.e. the /// entire line) of the macro's tokens and adds them to MacroInfo, and while /// doing so performs certain validity checks including (but not limited to): /// - # (stringization) is followed by a macro parameter /// \param MacroNameTok - Token that represents the macro name /// \param ImmediatelyAfterHeaderGuard - Macro follows an #ifdef header guard /// /// Either returns a pointer to a MacroInfo object OR emits a diagnostic and /// returns a nullptr if an invalid sequence of tokens is encountered. MacroInfo *ReadOptionalMacroParameterListAndBody( const Token &MacroNameTok, bool ImmediatelyAfterHeaderGuard); /// The ( starting an argument list of a macro definition has just been read. /// Lex the rest of the parameters and the closing ), updating \p MI with /// what we learn and saving in \p LastTok the last token read. /// Return true if an error occurs parsing the arg list. bool ReadMacroParameterList(MacroInfo *MI, Token& LastTok); /// We just read a \#if or related directive and decided that the /// subsequent tokens are in the \#if'd out portion of the /// file. Lex the rest of the file, until we see an \#endif. If \p /// FoundNonSkipPortion is true, then we have already emitted code for part of /// this \#if directive, so \#else/\#elif blocks should never be entered. If /// \p FoundElse is false, then \#else directives are ok, if not, then we have /// already seen one so a \#else directive is a duplicate. When this returns, /// the caller can lex the first valid token. void SkipExcludedConditionalBlock(SourceLocation IfTokenLoc, bool FoundNonSkipPortion, bool FoundElse, SourceLocation ElseLoc = SourceLocation()); /// \brief A fast PTH version of SkipExcludedConditionalBlock. void PTHSkipExcludedConditionalBlock(); /// Information about the result for evaluating an expression for a /// preprocessor directive. struct DirectiveEvalResult { /// Whether the expression was evaluated as true or not. bool Conditional; /// True if the expression contained identifiers that were undefined. bool IncludedUndefinedIds; }; /// \brief Evaluate an integer constant expression that may occur after a /// \#if or \#elif directive and return a \p DirectiveEvalResult object. /// /// If the expression is equivalent to "!defined(X)" return X in IfNDefMacro. DirectiveEvalResult EvaluateDirectiveExpression(IdentifierInfo *&IfNDefMacro); /// \brief Install the standard preprocessor pragmas: /// \#pragma GCC poison/system_header/dependency and \#pragma once. void RegisterBuiltinPragmas(); /// \brief Register builtin macros such as __LINE__ with the identifier table. void RegisterBuiltinMacros(); /// If an identifier token is read that is to be expanded as a macro, handle /// it and return the next token as 'Tok'. If we lexed a token, return true; /// otherwise the caller should lex again. bool HandleMacroExpandedIdentifier(Token &Tok, const MacroDefinition &MD); /// \brief Cache macro expanded tokens for TokenLexers. // /// Works like a stack; a TokenLexer adds the macro expanded tokens that is /// going to lex in the cache and when it finishes the tokens are removed /// from the end of the cache. 
Token *cacheMacroExpandedTokens(TokenLexer *tokLexer, ArrayRef tokens); void removeCachedMacroExpandedTokensOfLastLexer(); friend void TokenLexer::ExpandFunctionArguments(); /// Determine whether the next preprocessor token to be /// lexed is a '('. If so, consume the token and return true, if not, this /// method should have no observable side-effect on the lexed tokens. bool isNextPPTokenLParen(); /// After reading "MACRO(", this method is invoked to read all of the formal /// arguments specified for the macro invocation. Returns null on error. MacroArgs *ReadMacroCallArgumentList(Token &MacroName, MacroInfo *MI, SourceLocation &ExpansionEnd); /// \brief If an identifier token is read that is to be expanded /// as a builtin macro, handle it and return the next token as 'Tok'. void ExpandBuiltinMacro(Token &Tok); /// \brief Read a \c _Pragma directive, slice it up, process it, then /// return the first token after the directive. /// This assumes that the \c _Pragma token has just been read into \p Tok. void Handle_Pragma(Token &Tok); /// \brief Like Handle_Pragma except the pragma text is not enclosed within /// a string literal. void HandleMicrosoft__pragma(Token &Tok); /// \brief Add a lexer to the top of the include stack and /// start lexing tokens from it instead of the current buffer. void EnterSourceFileWithLexer(Lexer *TheLexer, const DirectoryLookup *Dir); /// \brief Add a lexer to the top of the include stack and /// start getting tokens from it using the PTH cache. void EnterSourceFileWithPTH(PTHLexer *PL, const DirectoryLookup *Dir); /// \brief Set the FileID for the preprocessor predefines. void setPredefinesFileID(FileID FID) { assert(PredefinesFileID.isInvalid() && "PredefinesFileID already set!"); PredefinesFileID = FID; } /// \brief Returns true if we are lexing from a file and not a /// pragma or a macro. static bool IsFileLexer(const Lexer* L, const PreprocessorLexer* P) { return L ? !L->isPragmaLexer() : P != nullptr; } static bool IsFileLexer(const IncludeStackInfo& I) { return IsFileLexer(I.TheLexer.get(), I.ThePPLexer); } bool IsFileLexer() const { return IsFileLexer(CurLexer.get(), CurPPLexer); } //===--------------------------------------------------------------------===// // Caching stuff. void CachingLex(Token &Result); bool InCachingLexMode() const { // If the Lexer pointers are 0 and IncludeMacroStack is empty, it means // that we are past EOF, not that we are in CachingLex mode. return !CurPPLexer && !CurTokenLexer && !CurPTHLexer && !IncludeMacroStack.empty(); } void EnterCachingLexMode(); void ExitCachingLexMode() { if (InCachingLexMode()) RemoveTopOfLexerStack(); } const Token &PeekAhead(unsigned N); void AnnotatePreviousCachedTokens(const Token &Tok); //===--------------------------------------------------------------------===// /// Handle*Directive - implement the various preprocessor directives. These /// should side-effect the current preprocessor object so that the next call /// to Lex() will return the appropriate token next. void HandleLineDirective(); void HandleDigitDirective(Token &Tok); void HandleUserDiagnosticDirective(Token &Tok, bool isWarning); void HandleIdentSCCSDirective(Token &Tok); void HandleMacroPublicDirective(Token &Tok); void HandleMacroPrivateDirective(); // File inclusion. 
void HandleIncludeDirective(SourceLocation HashLoc, Token &Tok, const DirectoryLookup *LookupFrom = nullptr, const FileEntry *LookupFromFile = nullptr, bool isImport = false); void HandleIncludeNextDirective(SourceLocation HashLoc, Token &Tok); void HandleIncludeMacrosDirective(SourceLocation HashLoc, Token &Tok); void HandleImportDirective(SourceLocation HashLoc, Token &Tok); void HandleMicrosoftImportDirective(Token &Tok); public: /// Check that the given module is available, producing a diagnostic if not. /// \return \c true if the check failed (because the module is not available). /// \c false if the module appears to be usable. static bool checkModuleIsAvailable(const LangOptions &LangOpts, const TargetInfo &TargetInfo, DiagnosticsEngine &Diags, Module *M); // Module inclusion testing. /// \brief Find the module that owns the source or header file that /// \p Loc points to. If the location is in a file that was included /// into a module, or is outside any module, returns nullptr. Module *getModuleForLocation(SourceLocation Loc); /// \brief We want to produce a diagnostic at location IncLoc concerning a /// missing module import. /// /// \param IncLoc The location at which the missing import was detected. /// \param M The desired module. /// \param MLoc A location within the desired module at which some desired /// effect occurred (eg, where a desired entity was declared). /// /// \return A file that can be #included to import a module containing MLoc. /// Null if no such file could be determined or if a #include is not /// appropriate. const FileEntry *getModuleHeaderToIncludeForDiagnostics(SourceLocation IncLoc, Module *M, SourceLocation MLoc); bool isRecordingPreamble() const { return PreambleConditionalStack.isRecording(); } bool hasRecordedPreamble() const { return PreambleConditionalStack.hasRecordedPreamble(); } ArrayRef getPreambleConditionalStack() const { return PreambleConditionalStack.getStack(); } void setRecordedPreambleConditionalStack(ArrayRef s) { PreambleConditionalStack.setStack(s); } void setReplayablePreambleConditionalStack(ArrayRef s) { PreambleConditionalStack.startReplaying(); PreambleConditionalStack.setStack(s); } private: + /// \brief After processing predefined file, initialize the conditional stack from + /// the preamble. + void replayPreambleConditionalStack(); + // Macro handling. void HandleDefineDirective(Token &Tok, bool ImmediatelyAfterTopLevelIfndef); void HandleUndefDirective(); // Conditional Inclusion. void HandleIfdefDirective(Token &Tok, bool isIfndef, bool ReadAnyTokensBeforeDirective); void HandleIfDirective(Token &Tok, bool ReadAnyTokensBeforeDirective); void HandleEndifDirective(Token &Tok); void HandleElseDirective(Token &Tok); void HandleElifDirective(Token &Tok); // Pragmas. void HandlePragmaDirective(SourceLocation IntroducerLoc, PragmaIntroducerKind Introducer); public: void HandlePragmaOnce(Token &OnceTok); void HandlePragmaMark(); void HandlePragmaPoison(); void HandlePragmaSystemHeader(Token &SysHeaderTok); void HandlePragmaDependency(Token &DependencyTok); void HandlePragmaPushMacro(Token &Tok); void HandlePragmaPopMacro(Token &Tok); void HandlePragmaIncludeAlias(Token &Tok); void HandlePragmaModuleBuild(Token &Tok); IdentifierInfo *ParsePragmaPushOrPopMacro(Token &Tok); // Return true and store the first token only if any CommentHandler // has inserted some tokens and getCommentRetentionState() is false. 
bool HandleComment(Token &Token, SourceRange Comment); /// \brief A macro is used, update information about macros that need unused /// warnings. void markMacroAsUsed(MacroInfo *MI); }; /// \brief Abstract base class that describes a handler that will receive /// source ranges for each of the comments encountered in the source file. class CommentHandler { public: virtual ~CommentHandler(); // The handler shall return true if it has pushed any tokens // to be read using e.g. EnterToken or EnterTokenStream. virtual bool HandleComment(Preprocessor &PP, SourceRange Comment) = 0; }; /// \brief Registry of pragma handlers added by plugins typedef llvm::Registry PragmaHandlerRegistry; } // end namespace clang #endif diff --git a/lib/AST/ASTImporter.cpp b/lib/AST/ASTImporter.cpp index 6e33b98d2f18..2c0bb11cc4bc 100644 --- a/lib/AST/ASTImporter.cpp +++ b/lib/AST/ASTImporter.cpp @@ -1,6249 +1,6254 @@ //===--- ASTImporter.cpp - Importing ASTs from other Contexts ---*- C++ -*-===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file defines the ASTImporter class which imports AST nodes from one // context into another context. // //===----------------------------------------------------------------------===// #include "clang/AST/ASTImporter.h" #include "clang/AST/ASTContext.h" #include "clang/AST/ASTDiagnostic.h" #include "clang/AST/ASTStructuralEquivalence.h" #include "clang/AST/DeclCXX.h" #include "clang/AST/DeclObjC.h" #include "clang/AST/DeclVisitor.h" #include "clang/AST/StmtVisitor.h" #include "clang/AST/TypeVisitor.h" #include "clang/Basic/FileManager.h" #include "clang/Basic/SourceManager.h" #include "llvm/Support/MemoryBuffer.h" #include namespace clang { class ASTNodeImporter : public TypeVisitor, public DeclVisitor, public StmtVisitor { ASTImporter &Importer; public: explicit ASTNodeImporter(ASTImporter &Importer) : Importer(Importer) { } using TypeVisitor::Visit; using DeclVisitor::Visit; using StmtVisitor::Visit; // Importing types QualType VisitType(const Type *T); QualType VisitAtomicType(const AtomicType *T); QualType VisitBuiltinType(const BuiltinType *T); QualType VisitDecayedType(const DecayedType *T); QualType VisitComplexType(const ComplexType *T); QualType VisitPointerType(const PointerType *T); QualType VisitBlockPointerType(const BlockPointerType *T); QualType VisitLValueReferenceType(const LValueReferenceType *T); QualType VisitRValueReferenceType(const RValueReferenceType *T); QualType VisitMemberPointerType(const MemberPointerType *T); QualType VisitConstantArrayType(const ConstantArrayType *T); QualType VisitIncompleteArrayType(const IncompleteArrayType *T); QualType VisitVariableArrayType(const VariableArrayType *T); // FIXME: DependentSizedArrayType // FIXME: DependentSizedExtVectorType QualType VisitVectorType(const VectorType *T); QualType VisitExtVectorType(const ExtVectorType *T); QualType VisitFunctionNoProtoType(const FunctionNoProtoType *T); QualType VisitFunctionProtoType(const FunctionProtoType *T); // FIXME: UnresolvedUsingType QualType VisitParenType(const ParenType *T); QualType VisitTypedefType(const TypedefType *T); QualType VisitTypeOfExprType(const TypeOfExprType *T); // FIXME: DependentTypeOfExprType QualType VisitTypeOfType(const TypeOfType *T); QualType VisitDecltypeType(const DecltypeType *T); QualType VisitUnaryTransformType(const UnaryTransformType 
*T); QualType VisitAutoType(const AutoType *T); QualType VisitInjectedClassNameType(const InjectedClassNameType *T); // FIXME: DependentDecltypeType QualType VisitRecordType(const RecordType *T); QualType VisitEnumType(const EnumType *T); QualType VisitAttributedType(const AttributedType *T); QualType VisitTemplateTypeParmType(const TemplateTypeParmType *T); QualType VisitSubstTemplateTypeParmType(const SubstTemplateTypeParmType *T); QualType VisitTemplateSpecializationType(const TemplateSpecializationType *T); QualType VisitElaboratedType(const ElaboratedType *T); // FIXME: DependentNameType // FIXME: DependentTemplateSpecializationType QualType VisitObjCInterfaceType(const ObjCInterfaceType *T); QualType VisitObjCObjectType(const ObjCObjectType *T); QualType VisitObjCObjectPointerType(const ObjCObjectPointerType *T); // Importing declarations bool ImportDeclParts(NamedDecl *D, DeclContext *&DC, DeclContext *&LexicalDC, DeclarationName &Name, NamedDecl *&ToD, SourceLocation &Loc); void ImportDefinitionIfNeeded(Decl *FromD, Decl *ToD = nullptr); void ImportDeclarationNameLoc(const DeclarationNameInfo &From, DeclarationNameInfo& To); void ImportDeclContext(DeclContext *FromDC, bool ForceImport = false); bool ImportCastPath(CastExpr *E, CXXCastPath &Path); typedef DesignatedInitExpr::Designator Designator; Designator ImportDesignator(const Designator &D); /// \brief What we should import from the definition. enum ImportDefinitionKind { /// \brief Import the default subset of the definition, which might be /// nothing (if minimal import is set) or might be everything (if minimal /// import is not set). IDK_Default, /// \brief Import everything. IDK_Everything, /// \brief Import only the bare bones needed to establish a valid /// DeclContext. IDK_Basic }; bool shouldForceImportDeclContext(ImportDefinitionKind IDK) { return IDK == IDK_Everything || (IDK == IDK_Default && !Importer.isMinimalImport()); } bool ImportDefinition(RecordDecl *From, RecordDecl *To, ImportDefinitionKind Kind = IDK_Default); bool ImportDefinition(VarDecl *From, VarDecl *To, ImportDefinitionKind Kind = IDK_Default); bool ImportDefinition(EnumDecl *From, EnumDecl *To, ImportDefinitionKind Kind = IDK_Default); bool ImportDefinition(ObjCInterfaceDecl *From, ObjCInterfaceDecl *To, ImportDefinitionKind Kind = IDK_Default); bool ImportDefinition(ObjCProtocolDecl *From, ObjCProtocolDecl *To, ImportDefinitionKind Kind = IDK_Default); TemplateParameterList *ImportTemplateParameterList( TemplateParameterList *Params); TemplateArgument ImportTemplateArgument(const TemplateArgument &From); TemplateArgumentLoc ImportTemplateArgumentLoc( const TemplateArgumentLoc &TALoc, bool &Error); bool ImportTemplateArguments(const TemplateArgument *FromArgs, unsigned NumFromArgs, SmallVectorImpl &ToArgs); bool IsStructuralMatch(RecordDecl *FromRecord, RecordDecl *ToRecord, bool Complain = true); bool IsStructuralMatch(VarDecl *FromVar, VarDecl *ToVar, bool Complain = true); bool IsStructuralMatch(EnumDecl *FromEnum, EnumDecl *ToRecord); bool IsStructuralMatch(EnumConstantDecl *FromEC, EnumConstantDecl *ToEC); bool IsStructuralMatch(ClassTemplateDecl *From, ClassTemplateDecl *To); bool IsStructuralMatch(VarTemplateDecl *From, VarTemplateDecl *To); Decl *VisitDecl(Decl *D); Decl *VisitAccessSpecDecl(AccessSpecDecl *D); Decl *VisitStaticAssertDecl(StaticAssertDecl *D); Decl *VisitTranslationUnitDecl(TranslationUnitDecl *D); Decl *VisitNamespaceDecl(NamespaceDecl *D); Decl *VisitTypedefNameDecl(TypedefNameDecl *D, bool IsAlias); Decl 
*VisitTypedefDecl(TypedefDecl *D); Decl *VisitTypeAliasDecl(TypeAliasDecl *D); Decl *VisitLabelDecl(LabelDecl *D); Decl *VisitEnumDecl(EnumDecl *D); Decl *VisitRecordDecl(RecordDecl *D); Decl *VisitEnumConstantDecl(EnumConstantDecl *D); Decl *VisitFunctionDecl(FunctionDecl *D); Decl *VisitCXXMethodDecl(CXXMethodDecl *D); Decl *VisitCXXConstructorDecl(CXXConstructorDecl *D); Decl *VisitCXXDestructorDecl(CXXDestructorDecl *D); Decl *VisitCXXConversionDecl(CXXConversionDecl *D); Decl *VisitFieldDecl(FieldDecl *D); Decl *VisitIndirectFieldDecl(IndirectFieldDecl *D); Decl *VisitFriendDecl(FriendDecl *D); Decl *VisitObjCIvarDecl(ObjCIvarDecl *D); Decl *VisitVarDecl(VarDecl *D); Decl *VisitImplicitParamDecl(ImplicitParamDecl *D); Decl *VisitParmVarDecl(ParmVarDecl *D); Decl *VisitObjCMethodDecl(ObjCMethodDecl *D); Decl *VisitObjCTypeParamDecl(ObjCTypeParamDecl *D); Decl *VisitObjCCategoryDecl(ObjCCategoryDecl *D); Decl *VisitObjCProtocolDecl(ObjCProtocolDecl *D); Decl *VisitLinkageSpecDecl(LinkageSpecDecl *D); ObjCTypeParamList *ImportObjCTypeParamList(ObjCTypeParamList *list); Decl *VisitObjCInterfaceDecl(ObjCInterfaceDecl *D); Decl *VisitObjCCategoryImplDecl(ObjCCategoryImplDecl *D); Decl *VisitObjCImplementationDecl(ObjCImplementationDecl *D); Decl *VisitObjCPropertyDecl(ObjCPropertyDecl *D); Decl *VisitObjCPropertyImplDecl(ObjCPropertyImplDecl *D); Decl *VisitTemplateTypeParmDecl(TemplateTypeParmDecl *D); Decl *VisitNonTypeTemplateParmDecl(NonTypeTemplateParmDecl *D); Decl *VisitTemplateTemplateParmDecl(TemplateTemplateParmDecl *D); Decl *VisitClassTemplateDecl(ClassTemplateDecl *D); Decl *VisitClassTemplateSpecializationDecl( ClassTemplateSpecializationDecl *D); Decl *VisitVarTemplateDecl(VarTemplateDecl *D); Decl *VisitVarTemplateSpecializationDecl(VarTemplateSpecializationDecl *D); // Importing statements DeclGroupRef ImportDeclGroup(DeclGroupRef DG); Stmt *VisitStmt(Stmt *S); Stmt *VisitGCCAsmStmt(GCCAsmStmt *S); Stmt *VisitDeclStmt(DeclStmt *S); Stmt *VisitNullStmt(NullStmt *S); Stmt *VisitCompoundStmt(CompoundStmt *S); Stmt *VisitCaseStmt(CaseStmt *S); Stmt *VisitDefaultStmt(DefaultStmt *S); Stmt *VisitLabelStmt(LabelStmt *S); Stmt *VisitAttributedStmt(AttributedStmt *S); Stmt *VisitIfStmt(IfStmt *S); Stmt *VisitSwitchStmt(SwitchStmt *S); Stmt *VisitWhileStmt(WhileStmt *S); Stmt *VisitDoStmt(DoStmt *S); Stmt *VisitForStmt(ForStmt *S); Stmt *VisitGotoStmt(GotoStmt *S); Stmt *VisitIndirectGotoStmt(IndirectGotoStmt *S); Stmt *VisitContinueStmt(ContinueStmt *S); Stmt *VisitBreakStmt(BreakStmt *S); Stmt *VisitReturnStmt(ReturnStmt *S); // FIXME: MSAsmStmt // FIXME: SEHExceptStmt // FIXME: SEHFinallyStmt // FIXME: SEHTryStmt // FIXME: SEHLeaveStmt // FIXME: CapturedStmt Stmt *VisitCXXCatchStmt(CXXCatchStmt *S); Stmt *VisitCXXTryStmt(CXXTryStmt *S); Stmt *VisitCXXForRangeStmt(CXXForRangeStmt *S); // FIXME: MSDependentExistsStmt Stmt *VisitObjCForCollectionStmt(ObjCForCollectionStmt *S); Stmt *VisitObjCAtCatchStmt(ObjCAtCatchStmt *S); Stmt *VisitObjCAtFinallyStmt(ObjCAtFinallyStmt *S); Stmt *VisitObjCAtTryStmt(ObjCAtTryStmt *S); Stmt *VisitObjCAtSynchronizedStmt(ObjCAtSynchronizedStmt *S); Stmt *VisitObjCAtThrowStmt(ObjCAtThrowStmt *S); Stmt *VisitObjCAutoreleasePoolStmt(ObjCAutoreleasePoolStmt *S); // Importing expressions Expr *VisitExpr(Expr *E); Expr *VisitVAArgExpr(VAArgExpr *E); Expr *VisitGNUNullExpr(GNUNullExpr *E); Expr *VisitPredefinedExpr(PredefinedExpr *E); Expr *VisitDeclRefExpr(DeclRefExpr *E); Expr *VisitImplicitValueInitExpr(ImplicitValueInitExpr *ILE); Expr 
*VisitDesignatedInitExpr(DesignatedInitExpr *E); Expr *VisitCXXNullPtrLiteralExpr(CXXNullPtrLiteralExpr *E); Expr *VisitIntegerLiteral(IntegerLiteral *E); Expr *VisitFloatingLiteral(FloatingLiteral *E); Expr *VisitCharacterLiteral(CharacterLiteral *E); Expr *VisitStringLiteral(StringLiteral *E); Expr *VisitCompoundLiteralExpr(CompoundLiteralExpr *E); Expr *VisitAtomicExpr(AtomicExpr *E); Expr *VisitAddrLabelExpr(AddrLabelExpr *E); Expr *VisitParenExpr(ParenExpr *E); Expr *VisitParenListExpr(ParenListExpr *E); Expr *VisitStmtExpr(StmtExpr *E); Expr *VisitUnaryOperator(UnaryOperator *E); Expr *VisitUnaryExprOrTypeTraitExpr(UnaryExprOrTypeTraitExpr *E); Expr *VisitBinaryOperator(BinaryOperator *E); Expr *VisitConditionalOperator(ConditionalOperator *E); Expr *VisitBinaryConditionalOperator(BinaryConditionalOperator *E); Expr *VisitOpaqueValueExpr(OpaqueValueExpr *E); Expr *VisitArrayTypeTraitExpr(ArrayTypeTraitExpr *E); Expr *VisitExpressionTraitExpr(ExpressionTraitExpr *E); Expr *VisitArraySubscriptExpr(ArraySubscriptExpr *E); Expr *VisitCompoundAssignOperator(CompoundAssignOperator *E); Expr *VisitImplicitCastExpr(ImplicitCastExpr *E); Expr *VisitExplicitCastExpr(ExplicitCastExpr *E); Expr *VisitOffsetOfExpr(OffsetOfExpr *OE); Expr *VisitCXXThrowExpr(CXXThrowExpr *E); Expr *VisitCXXNoexceptExpr(CXXNoexceptExpr *E); Expr *VisitCXXDefaultArgExpr(CXXDefaultArgExpr *E); Expr *VisitCXXScalarValueInitExpr(CXXScalarValueInitExpr *E); Expr *VisitCXXBindTemporaryExpr(CXXBindTemporaryExpr *E); Expr *VisitCXXTemporaryObjectExpr(CXXTemporaryObjectExpr *CE); Expr *VisitMaterializeTemporaryExpr(MaterializeTemporaryExpr *E); Expr *VisitCXXNewExpr(CXXNewExpr *CE); Expr *VisitCXXDeleteExpr(CXXDeleteExpr *E); Expr *VisitCXXConstructExpr(CXXConstructExpr *E); Expr *VisitCXXMemberCallExpr(CXXMemberCallExpr *E); Expr *VisitExprWithCleanups(ExprWithCleanups *EWC); Expr *VisitCXXThisExpr(CXXThisExpr *E); Expr *VisitCXXBoolLiteralExpr(CXXBoolLiteralExpr *E); Expr *VisitMemberExpr(MemberExpr *E); Expr *VisitCallExpr(CallExpr *E); Expr *VisitInitListExpr(InitListExpr *E); Expr *VisitArrayInitLoopExpr(ArrayInitLoopExpr *E); Expr *VisitArrayInitIndexExpr(ArrayInitIndexExpr *E); Expr *VisitCXXDefaultInitExpr(CXXDefaultInitExpr *E); Expr *VisitCXXNamedCastExpr(CXXNamedCastExpr *E); Expr *VisitSubstNonTypeTemplateParmExpr(SubstNonTypeTemplateParmExpr *E); template void ImportArray(IIter Ibegin, IIter Iend, OIter Obegin) { typedef typename std::remove_reference::type ItemT; ASTImporter &ImporterRef = Importer; std::transform(Ibegin, Iend, Obegin, [&ImporterRef](ItemT From) -> ItemT { return ImporterRef.Import(From); }); } template bool ImportArrayChecked(IIter Ibegin, IIter Iend, OIter Obegin) { typedef typename std::remove_reference::type ItemT; ASTImporter &ImporterRef = Importer; bool Failed = false; std::transform(Ibegin, Iend, Obegin, [&ImporterRef, &Failed](ItemT *From) -> ItemT * { ItemT *To = cast_or_null( ImporterRef.Import(From)); if (!To && From) Failed = true; return To; }); return Failed; } template bool ImportContainerChecked(const InContainerTy &InContainer, OutContainerTy &OutContainer) { return ImportArrayChecked(InContainer.begin(), InContainer.end(), OutContainer.begin()); } template bool ImportArrayChecked(const InContainerTy &InContainer, OIter Obegin) { return ImportArrayChecked(InContainer.begin(), InContainer.end(), Obegin); } // Importing overrides. 
    void ImportOverrides(CXXMethodDecl *ToMethod, CXXMethodDecl *FromMethod);
  };
}

//----------------------------------------------------------------------------
// Import Types
//----------------------------------------------------------------------------

using namespace clang;

QualType ASTNodeImporter::VisitType(const Type *T) {
  Importer.FromDiag(SourceLocation(), diag::err_unsupported_ast_node)
    << T->getTypeClassName();
  return QualType();
}

QualType ASTNodeImporter::VisitAtomicType(const AtomicType *T) {
  QualType UnderlyingType = Importer.Import(T->getValueType());
  if (UnderlyingType.isNull())
    return QualType();

  return Importer.getToContext().getAtomicType(UnderlyingType);
}

QualType ASTNodeImporter::VisitBuiltinType(const BuiltinType *T) {
  switch (T->getKind()) {
#define IMAGE_TYPE(ImgType, Id, SingletonId, Access, Suffix) \
  case BuiltinType::Id: \
    return Importer.getToContext().SingletonId;
#include "clang/Basic/OpenCLImageTypes.def"
#define SHARED_SINGLETON_TYPE(Expansion)
#define BUILTIN_TYPE(Id, SingletonId) \
  case BuiltinType::Id: return Importer.getToContext().SingletonId;
#include "clang/AST/BuiltinTypes.def"

  // FIXME: for Char16, Char32, and NullPtr, make sure that the "to"
  // context supports C++.

  // FIXME: for ObjCId, ObjCClass, and ObjCSel, make sure that the "to"
  // context supports ObjC.

  case BuiltinType::Char_U:
    // The context we're importing from has an unsigned 'char'. If we're
    // importing into a context with a signed 'char', translate to
    // 'unsigned char' instead.
    if (Importer.getToContext().getLangOpts().CharIsSigned)
      return Importer.getToContext().UnsignedCharTy;

    return Importer.getToContext().CharTy;

  case BuiltinType::Char_S:
    // The context we're importing from has a signed 'char'. If we're
    // importing into a context with an unsigned 'char', translate to
    // 'signed char' instead.
    if (!Importer.getToContext().getLangOpts().CharIsSigned)
      return Importer.getToContext().SignedCharTy;

    return Importer.getToContext().CharTy;

  case BuiltinType::WChar_S:
  case BuiltinType::WChar_U:
    // FIXME: If not in C++, shall we translate to the C equivalent of
    // wchar_t?
    return Importer.getToContext().WCharTy;
  }

  llvm_unreachable("Invalid BuiltinType Kind!");
}

QualType ASTNodeImporter::VisitDecayedType(const DecayedType *T) {
  QualType OrigT = Importer.Import(T->getOriginalType());
  if (OrigT.isNull())
    return QualType();

  return Importer.getToContext().getDecayedType(OrigT);
}

QualType ASTNodeImporter::VisitComplexType(const ComplexType *T) {
  QualType ToElementType = Importer.Import(T->getElementType());
  if (ToElementType.isNull())
    return QualType();

  return Importer.getToContext().getComplexType(ToElementType);
}

QualType ASTNodeImporter::VisitPointerType(const PointerType *T) {
  QualType ToPointeeType = Importer.Import(T->getPointeeType());
  if (ToPointeeType.isNull())
    return QualType();

  return Importer.getToContext().getPointerType(ToPointeeType);
}

QualType ASTNodeImporter::VisitBlockPointerType(const BlockPointerType *T) {
  // FIXME: Check for blocks support in "to" context.
  QualType ToPointeeType = Importer.Import(T->getPointeeType());
  if (ToPointeeType.isNull())
    return QualType();

  return Importer.getToContext().getBlockPointerType(ToPointeeType);
}

QualType
ASTNodeImporter::VisitLValueReferenceType(const LValueReferenceType *T) {
  // FIXME: Check for C++ support in "to" context.
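  // As with the other type visitors in this file, the types this one depends
  // on are imported first, a null QualType is returned if any of them fails,
  // and only then is the corresponding type rebuilt in the "to" ASTContext.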
QualType ToPointeeType = Importer.Import(T->getPointeeTypeAsWritten()); if (ToPointeeType.isNull()) return QualType(); return Importer.getToContext().getLValueReferenceType(ToPointeeType); } QualType ASTNodeImporter::VisitRValueReferenceType(const RValueReferenceType *T) { // FIXME: Check for C++0x support in "to" context. QualType ToPointeeType = Importer.Import(T->getPointeeTypeAsWritten()); if (ToPointeeType.isNull()) return QualType(); return Importer.getToContext().getRValueReferenceType(ToPointeeType); } QualType ASTNodeImporter::VisitMemberPointerType(const MemberPointerType *T) { // FIXME: Check for C++ support in "to" context. QualType ToPointeeType = Importer.Import(T->getPointeeType()); if (ToPointeeType.isNull()) return QualType(); QualType ClassType = Importer.Import(QualType(T->getClass(), 0)); return Importer.getToContext().getMemberPointerType(ToPointeeType, ClassType.getTypePtr()); } QualType ASTNodeImporter::VisitConstantArrayType(const ConstantArrayType *T) { QualType ToElementType = Importer.Import(T->getElementType()); if (ToElementType.isNull()) return QualType(); return Importer.getToContext().getConstantArrayType(ToElementType, T->getSize(), T->getSizeModifier(), T->getIndexTypeCVRQualifiers()); } QualType ASTNodeImporter::VisitIncompleteArrayType(const IncompleteArrayType *T) { QualType ToElementType = Importer.Import(T->getElementType()); if (ToElementType.isNull()) return QualType(); return Importer.getToContext().getIncompleteArrayType(ToElementType, T->getSizeModifier(), T->getIndexTypeCVRQualifiers()); } QualType ASTNodeImporter::VisitVariableArrayType(const VariableArrayType *T) { QualType ToElementType = Importer.Import(T->getElementType()); if (ToElementType.isNull()) return QualType(); Expr *Size = Importer.Import(T->getSizeExpr()); if (!Size) return QualType(); SourceRange Brackets = Importer.Import(T->getBracketsRange()); return Importer.getToContext().getVariableArrayType(ToElementType, Size, T->getSizeModifier(), T->getIndexTypeCVRQualifiers(), Brackets); } QualType ASTNodeImporter::VisitVectorType(const VectorType *T) { QualType ToElementType = Importer.Import(T->getElementType()); if (ToElementType.isNull()) return QualType(); return Importer.getToContext().getVectorType(ToElementType, T->getNumElements(), T->getVectorKind()); } QualType ASTNodeImporter::VisitExtVectorType(const ExtVectorType *T) { QualType ToElementType = Importer.Import(T->getElementType()); if (ToElementType.isNull()) return QualType(); return Importer.getToContext().getExtVectorType(ToElementType, T->getNumElements()); } QualType ASTNodeImporter::VisitFunctionNoProtoType(const FunctionNoProtoType *T) { // FIXME: What happens if we're importing a function without a prototype // into C++? Should we make it variadic? 
QualType ToResultType = Importer.Import(T->getReturnType()); if (ToResultType.isNull()) return QualType(); return Importer.getToContext().getFunctionNoProtoType(ToResultType, T->getExtInfo()); } QualType ASTNodeImporter::VisitFunctionProtoType(const FunctionProtoType *T) { QualType ToResultType = Importer.Import(T->getReturnType()); if (ToResultType.isNull()) return QualType(); // Import argument types SmallVector ArgTypes; for (const auto &A : T->param_types()) { QualType ArgType = Importer.Import(A); if (ArgType.isNull()) return QualType(); ArgTypes.push_back(ArgType); } // Import exception types SmallVector ExceptionTypes; for (const auto &E : T->exceptions()) { QualType ExceptionType = Importer.Import(E); if (ExceptionType.isNull()) return QualType(); ExceptionTypes.push_back(ExceptionType); } FunctionProtoType::ExtProtoInfo FromEPI = T->getExtProtoInfo(); FunctionProtoType::ExtProtoInfo ToEPI; ToEPI.ExtInfo = FromEPI.ExtInfo; ToEPI.Variadic = FromEPI.Variadic; ToEPI.HasTrailingReturn = FromEPI.HasTrailingReturn; ToEPI.TypeQuals = FromEPI.TypeQuals; ToEPI.RefQualifier = FromEPI.RefQualifier; ToEPI.ExceptionSpec.Type = FromEPI.ExceptionSpec.Type; ToEPI.ExceptionSpec.Exceptions = ExceptionTypes; ToEPI.ExceptionSpec.NoexceptExpr = Importer.Import(FromEPI.ExceptionSpec.NoexceptExpr); ToEPI.ExceptionSpec.SourceDecl = cast_or_null( Importer.Import(FromEPI.ExceptionSpec.SourceDecl)); ToEPI.ExceptionSpec.SourceTemplate = cast_or_null( Importer.Import(FromEPI.ExceptionSpec.SourceTemplate)); return Importer.getToContext().getFunctionType(ToResultType, ArgTypes, ToEPI); } QualType ASTNodeImporter::VisitParenType(const ParenType *T) { QualType ToInnerType = Importer.Import(T->getInnerType()); if (ToInnerType.isNull()) return QualType(); return Importer.getToContext().getParenType(ToInnerType); } QualType ASTNodeImporter::VisitTypedefType(const TypedefType *T) { TypedefNameDecl *ToDecl = dyn_cast_or_null(Importer.Import(T->getDecl())); if (!ToDecl) return QualType(); return Importer.getToContext().getTypeDeclType(ToDecl); } QualType ASTNodeImporter::VisitTypeOfExprType(const TypeOfExprType *T) { Expr *ToExpr = Importer.Import(T->getUnderlyingExpr()); if (!ToExpr) return QualType(); return Importer.getToContext().getTypeOfExprType(ToExpr); } QualType ASTNodeImporter::VisitTypeOfType(const TypeOfType *T) { QualType ToUnderlyingType = Importer.Import(T->getUnderlyingType()); if (ToUnderlyingType.isNull()) return QualType(); return Importer.getToContext().getTypeOfType(ToUnderlyingType); } QualType ASTNodeImporter::VisitDecltypeType(const DecltypeType *T) { // FIXME: Make sure that the "to" context supports C++0x! Expr *ToExpr = Importer.Import(T->getUnderlyingExpr()); if (!ToExpr) return QualType(); QualType UnderlyingType = Importer.Import(T->getUnderlyingType()); if (UnderlyingType.isNull()) return QualType(); return Importer.getToContext().getDecltypeType(ToExpr, UnderlyingType); } QualType ASTNodeImporter::VisitUnaryTransformType(const UnaryTransformType *T) { QualType ToBaseType = Importer.Import(T->getBaseType()); QualType ToUnderlyingType = Importer.Import(T->getUnderlyingType()); if (ToBaseType.isNull() || ToUnderlyingType.isNull()) return QualType(); return Importer.getToContext().getUnaryTransformType(ToBaseType, ToUnderlyingType, T->getUTTKind()); } QualType ASTNodeImporter::VisitAutoType(const AutoType *T) { // FIXME: Make sure that the "to" context supports C++11! 
QualType FromDeduced = T->getDeducedType(); QualType ToDeduced; if (!FromDeduced.isNull()) { ToDeduced = Importer.Import(FromDeduced); if (ToDeduced.isNull()) return QualType(); } return Importer.getToContext().getAutoType(ToDeduced, T->getKeyword(), /*IsDependent*/false); } QualType ASTNodeImporter::VisitInjectedClassNameType( const InjectedClassNameType *T) { CXXRecordDecl *D = cast_or_null(Importer.Import(T->getDecl())); if (!D) return QualType(); QualType InjType = Importer.Import(T->getInjectedSpecializationType()); if (InjType.isNull()) return QualType(); // FIXME: ASTContext::getInjectedClassNameType is not suitable for AST reading // See comments in InjectedClassNameType definition for details // return Importer.getToContext().getInjectedClassNameType(D, InjType); enum { TypeAlignmentInBits = 4, TypeAlignment = 1 << TypeAlignmentInBits }; return QualType(new (Importer.getToContext(), TypeAlignment) InjectedClassNameType(D, InjType), 0); } QualType ASTNodeImporter::VisitRecordType(const RecordType *T) { RecordDecl *ToDecl = dyn_cast_or_null(Importer.Import(T->getDecl())); if (!ToDecl) return QualType(); return Importer.getToContext().getTagDeclType(ToDecl); } QualType ASTNodeImporter::VisitEnumType(const EnumType *T) { EnumDecl *ToDecl = dyn_cast_or_null(Importer.Import(T->getDecl())); if (!ToDecl) return QualType(); return Importer.getToContext().getTagDeclType(ToDecl); } QualType ASTNodeImporter::VisitAttributedType(const AttributedType *T) { QualType FromModifiedType = T->getModifiedType(); QualType FromEquivalentType = T->getEquivalentType(); QualType ToModifiedType; QualType ToEquivalentType; if (!FromModifiedType.isNull()) { ToModifiedType = Importer.Import(FromModifiedType); if (ToModifiedType.isNull()) return QualType(); } if (!FromEquivalentType.isNull()) { ToEquivalentType = Importer.Import(FromEquivalentType); if (ToEquivalentType.isNull()) return QualType(); } return Importer.getToContext().getAttributedType(T->getAttrKind(), ToModifiedType, ToEquivalentType); } QualType ASTNodeImporter::VisitTemplateTypeParmType( const TemplateTypeParmType *T) { TemplateTypeParmDecl *ParmDecl = cast_or_null(Importer.Import(T->getDecl())); if (!ParmDecl && T->getDecl()) return QualType(); return Importer.getToContext().getTemplateTypeParmType( T->getDepth(), T->getIndex(), T->isParameterPack(), ParmDecl); } QualType ASTNodeImporter::VisitSubstTemplateTypeParmType( const SubstTemplateTypeParmType *T) { const TemplateTypeParmType *Replaced = cast_or_null(Importer.Import( QualType(T->getReplacedParameter(), 0)).getTypePtr()); if (!Replaced) return QualType(); QualType Replacement = Importer.Import(T->getReplacementType()); if (Replacement.isNull()) return QualType(); Replacement = Replacement.getCanonicalType(); return Importer.getToContext().getSubstTemplateTypeParmType( Replaced, Replacement); } QualType ASTNodeImporter::VisitTemplateSpecializationType( const TemplateSpecializationType *T) { TemplateName ToTemplate = Importer.Import(T->getTemplateName()); if (ToTemplate.isNull()) return QualType(); SmallVector ToTemplateArgs; if (ImportTemplateArguments(T->getArgs(), T->getNumArgs(), ToTemplateArgs)) return QualType(); QualType ToCanonType; if (!QualType(T, 0).isCanonical()) { QualType FromCanonType = Importer.getFromContext().getCanonicalType(QualType(T, 0)); ToCanonType =Importer.Import(FromCanonType); if (ToCanonType.isNull()) return QualType(); } return Importer.getToContext().getTemplateSpecializationType(ToTemplate, ToTemplateArgs, ToCanonType); } QualType 
ASTNodeImporter::VisitElaboratedType(const ElaboratedType *T) { NestedNameSpecifier *ToQualifier = nullptr; // Note: the qualifier in an ElaboratedType is optional. if (T->getQualifier()) { ToQualifier = Importer.Import(T->getQualifier()); if (!ToQualifier) return QualType(); } QualType ToNamedType = Importer.Import(T->getNamedType()); if (ToNamedType.isNull()) return QualType(); return Importer.getToContext().getElaboratedType(T->getKeyword(), ToQualifier, ToNamedType); } QualType ASTNodeImporter::VisitObjCInterfaceType(const ObjCInterfaceType *T) { ObjCInterfaceDecl *Class = dyn_cast_or_null(Importer.Import(T->getDecl())); if (!Class) return QualType(); return Importer.getToContext().getObjCInterfaceType(Class); } QualType ASTNodeImporter::VisitObjCObjectType(const ObjCObjectType *T) { QualType ToBaseType = Importer.Import(T->getBaseType()); if (ToBaseType.isNull()) return QualType(); SmallVector TypeArgs; for (auto TypeArg : T->getTypeArgsAsWritten()) { QualType ImportedTypeArg = Importer.Import(TypeArg); if (ImportedTypeArg.isNull()) return QualType(); TypeArgs.push_back(ImportedTypeArg); } SmallVector Protocols; for (auto *P : T->quals()) { ObjCProtocolDecl *Protocol = dyn_cast_or_null(Importer.Import(P)); if (!Protocol) return QualType(); Protocols.push_back(Protocol); } return Importer.getToContext().getObjCObjectType(ToBaseType, TypeArgs, Protocols, T->isKindOfTypeAsWritten()); } QualType ASTNodeImporter::VisitObjCObjectPointerType(const ObjCObjectPointerType *T) { QualType ToPointeeType = Importer.Import(T->getPointeeType()); if (ToPointeeType.isNull()) return QualType(); return Importer.getToContext().getObjCObjectPointerType(ToPointeeType); } //---------------------------------------------------------------------------- // Import Declarations //---------------------------------------------------------------------------- bool ASTNodeImporter::ImportDeclParts(NamedDecl *D, DeclContext *&DC, DeclContext *&LexicalDC, DeclarationName &Name, NamedDecl *&ToD, SourceLocation &Loc) { // Import the context of this declaration. DC = Importer.ImportContext(D->getDeclContext()); if (!DC) return true; LexicalDC = DC; if (D->getDeclContext() != D->getLexicalDeclContext()) { LexicalDC = Importer.ImportContext(D->getLexicalDeclContext()); if (!LexicalDC) return true; } // Import the name of this declaration. Name = Importer.Import(D->getDeclName()); if (D->getDeclName() && !Name) return true; // Import the location of this declaration. Loc = Importer.Import(D->getLocation()); ToD = cast_or_null(Importer.GetAlreadyImportedOrNull(D)); return false; } void ASTNodeImporter::ImportDefinitionIfNeeded(Decl *FromD, Decl *ToD) { if (!FromD) return; if (!ToD) { ToD = Importer.Import(FromD); if (!ToD) return; } if (RecordDecl *FromRecord = dyn_cast(FromD)) { if (RecordDecl *ToRecord = cast_or_null(ToD)) { if (FromRecord->getDefinition() && FromRecord->isCompleteDefinition() && !ToRecord->getDefinition()) { ImportDefinition(FromRecord, ToRecord); } } return; } if (EnumDecl *FromEnum = dyn_cast(FromD)) { if (EnumDecl *ToEnum = cast_or_null(ToD)) { if (FromEnum->getDefinition() && !ToEnum->getDefinition()) { ImportDefinition(FromEnum, ToEnum); } } return; } } void ASTNodeImporter::ImportDeclarationNameLoc(const DeclarationNameInfo &From, DeclarationNameInfo& To) { // NOTE: To.Name and To.Loc are already imported. // We only have to import To.LocInfo. 
switch (To.getName().getNameKind()) { case DeclarationName::Identifier: case DeclarationName::ObjCZeroArgSelector: case DeclarationName::ObjCOneArgSelector: case DeclarationName::ObjCMultiArgSelector: case DeclarationName::CXXUsingDirective: case DeclarationName::CXXDeductionGuideName: return; case DeclarationName::CXXOperatorName: { SourceRange Range = From.getCXXOperatorNameRange(); To.setCXXOperatorNameRange(Importer.Import(Range)); return; } case DeclarationName::CXXLiteralOperatorName: { SourceLocation Loc = From.getCXXLiteralOperatorNameLoc(); To.setCXXLiteralOperatorNameLoc(Importer.Import(Loc)); return; } case DeclarationName::CXXConstructorName: case DeclarationName::CXXDestructorName: case DeclarationName::CXXConversionFunctionName: { TypeSourceInfo *FromTInfo = From.getNamedTypeInfo(); To.setNamedTypeInfo(Importer.Import(FromTInfo)); return; } } llvm_unreachable("Unknown name kind."); } void ASTNodeImporter::ImportDeclContext(DeclContext *FromDC, bool ForceImport) { if (Importer.isMinimalImport() && !ForceImport) { Importer.ImportContext(FromDC); return; } for (auto *From : FromDC->decls()) Importer.Import(From); } bool ASTNodeImporter::ImportDefinition(RecordDecl *From, RecordDecl *To, ImportDefinitionKind Kind) { if (To->getDefinition() || To->isBeingDefined()) { if (Kind == IDK_Everything) ImportDeclContext(From, /*ForceImport=*/true); return false; } To->startDefinition(); // Add base classes. if (CXXRecordDecl *ToCXX = dyn_cast(To)) { CXXRecordDecl *FromCXX = cast(From); struct CXXRecordDecl::DefinitionData &ToData = ToCXX->data(); struct CXXRecordDecl::DefinitionData &FromData = FromCXX->data(); ToData.UserDeclaredConstructor = FromData.UserDeclaredConstructor; ToData.UserDeclaredSpecialMembers = FromData.UserDeclaredSpecialMembers; ToData.Aggregate = FromData.Aggregate; ToData.PlainOldData = FromData.PlainOldData; ToData.Empty = FromData.Empty; ToData.Polymorphic = FromData.Polymorphic; ToData.Abstract = FromData.Abstract; ToData.IsStandardLayout = FromData.IsStandardLayout; ToData.HasNoNonEmptyBases = FromData.HasNoNonEmptyBases; ToData.HasPrivateFields = FromData.HasPrivateFields; ToData.HasProtectedFields = FromData.HasProtectedFields; ToData.HasPublicFields = FromData.HasPublicFields; ToData.HasMutableFields = FromData.HasMutableFields; ToData.HasVariantMembers = FromData.HasVariantMembers; ToData.HasOnlyCMembers = FromData.HasOnlyCMembers; ToData.HasInClassInitializer = FromData.HasInClassInitializer; ToData.HasUninitializedReferenceMember = FromData.HasUninitializedReferenceMember; ToData.HasUninitializedFields = FromData.HasUninitializedFields; ToData.HasInheritedConstructor = FromData.HasInheritedConstructor; ToData.HasInheritedAssignment = FromData.HasInheritedAssignment; + ToData.NeedOverloadResolutionForCopyConstructor + = FromData.NeedOverloadResolutionForCopyConstructor; ToData.NeedOverloadResolutionForMoveConstructor = FromData.NeedOverloadResolutionForMoveConstructor; ToData.NeedOverloadResolutionForMoveAssignment = FromData.NeedOverloadResolutionForMoveAssignment; ToData.NeedOverloadResolutionForDestructor = FromData.NeedOverloadResolutionForDestructor; + ToData.DefaultedCopyConstructorIsDeleted + = FromData.DefaultedCopyConstructorIsDeleted; ToData.DefaultedMoveConstructorIsDeleted = FromData.DefaultedMoveConstructorIsDeleted; ToData.DefaultedMoveAssignmentIsDeleted = FromData.DefaultedMoveAssignmentIsDeleted; ToData.DefaultedDestructorIsDeleted = FromData.DefaultedDestructorIsDeleted; ToData.HasTrivialSpecialMembers = FromData.HasTrivialSpecialMembers; 
ToData.HasIrrelevantDestructor = FromData.HasIrrelevantDestructor; ToData.HasConstexprNonCopyMoveConstructor = FromData.HasConstexprNonCopyMoveConstructor; ToData.HasDefaultedDefaultConstructor = FromData.HasDefaultedDefaultConstructor; + ToData.CanPassInRegisters = FromData.CanPassInRegisters; ToData.DefaultedDefaultConstructorIsConstexpr = FromData.DefaultedDefaultConstructorIsConstexpr; ToData.HasConstexprDefaultConstructor = FromData.HasConstexprDefaultConstructor; ToData.HasNonLiteralTypeFieldsOrBases = FromData.HasNonLiteralTypeFieldsOrBases; // ComputedVisibleConversions not imported. ToData.UserProvidedDefaultConstructor = FromData.UserProvidedDefaultConstructor; ToData.DeclaredSpecialMembers = FromData.DeclaredSpecialMembers; ToData.ImplicitCopyConstructorCanHaveConstParamForVBase = FromData.ImplicitCopyConstructorCanHaveConstParamForVBase; ToData.ImplicitCopyConstructorCanHaveConstParamForNonVBase = FromData.ImplicitCopyConstructorCanHaveConstParamForNonVBase; ToData.ImplicitCopyAssignmentHasConstParam = FromData.ImplicitCopyAssignmentHasConstParam; ToData.HasDeclaredCopyConstructorWithConstParam = FromData.HasDeclaredCopyConstructorWithConstParam; ToData.HasDeclaredCopyAssignmentWithConstParam = FromData.HasDeclaredCopyAssignmentWithConstParam; ToData.IsLambda = FromData.IsLambda; SmallVector Bases; for (const auto &Base1 : FromCXX->bases()) { QualType T = Importer.Import(Base1.getType()); if (T.isNull()) return true; SourceLocation EllipsisLoc; if (Base1.isPackExpansion()) EllipsisLoc = Importer.Import(Base1.getEllipsisLoc()); // Ensure that we have a definition for the base. ImportDefinitionIfNeeded(Base1.getType()->getAsCXXRecordDecl()); Bases.push_back( new (Importer.getToContext()) CXXBaseSpecifier(Importer.Import(Base1.getSourceRange()), Base1.isVirtual(), Base1.isBaseOfClass(), Base1.getAccessSpecifierAsWritten(), Importer.Import(Base1.getTypeSourceInfo()), EllipsisLoc)); } if (!Bases.empty()) ToCXX->setBases(Bases.data(), Bases.size()); } if (shouldForceImportDeclContext(Kind)) ImportDeclContext(From, /*ForceImport=*/true); To->completeDefinition(); return false; } bool ASTNodeImporter::ImportDefinition(VarDecl *From, VarDecl *To, ImportDefinitionKind Kind) { if (To->getAnyInitializer()) return false; // FIXME: Can we really import any initializer? Alternatively, we could force // ourselves to import every declaration of a variable and then only use // getInit() here. To->setInit(Importer.Import(const_cast(From->getAnyInitializer()))); // FIXME: Other bits to merge? return false; } bool ASTNodeImporter::ImportDefinition(EnumDecl *From, EnumDecl *To, ImportDefinitionKind Kind) { if (To->getDefinition() || To->isBeingDefined()) { if (Kind == IDK_Everything) ImportDeclContext(From, /*ForceImport=*/true); return false; } To->startDefinition(); QualType T = Importer.Import(Importer.getFromContext().getTypeDeclType(From)); if (T.isNull()) return true; QualType ToPromotionType = Importer.Import(From->getPromotionType()); if (ToPromotionType.isNull()) return true; if (shouldForceImportDeclContext(Kind)) ImportDeclContext(From, /*ForceImport=*/true); // FIXME: we might need to merge the number of positive or negative bits // if the enumerator lists don't match. 
To->completeDefinition(T, ToPromotionType, From->getNumPositiveBits(), From->getNumNegativeBits()); return false; } TemplateParameterList *ASTNodeImporter::ImportTemplateParameterList( TemplateParameterList *Params) { SmallVector ToParams(Params->size()); if (ImportContainerChecked(*Params, ToParams)) return nullptr; Expr *ToRequiresClause; if (Expr *const R = Params->getRequiresClause()) { ToRequiresClause = Importer.Import(R); if (!ToRequiresClause) return nullptr; } else { ToRequiresClause = nullptr; } return TemplateParameterList::Create(Importer.getToContext(), Importer.Import(Params->getTemplateLoc()), Importer.Import(Params->getLAngleLoc()), ToParams, Importer.Import(Params->getRAngleLoc()), ToRequiresClause); } TemplateArgument ASTNodeImporter::ImportTemplateArgument(const TemplateArgument &From) { switch (From.getKind()) { case TemplateArgument::Null: return TemplateArgument(); case TemplateArgument::Type: { QualType ToType = Importer.Import(From.getAsType()); if (ToType.isNull()) return TemplateArgument(); return TemplateArgument(ToType); } case TemplateArgument::Integral: { QualType ToType = Importer.Import(From.getIntegralType()); if (ToType.isNull()) return TemplateArgument(); return TemplateArgument(From, ToType); } case TemplateArgument::Declaration: { ValueDecl *To = cast_or_null(Importer.Import(From.getAsDecl())); QualType ToType = Importer.Import(From.getParamTypeForDecl()); if (!To || ToType.isNull()) return TemplateArgument(); return TemplateArgument(To, ToType); } case TemplateArgument::NullPtr: { QualType ToType = Importer.Import(From.getNullPtrType()); if (ToType.isNull()) return TemplateArgument(); return TemplateArgument(ToType, /*isNullPtr*/true); } case TemplateArgument::Template: { TemplateName ToTemplate = Importer.Import(From.getAsTemplate()); if (ToTemplate.isNull()) return TemplateArgument(); return TemplateArgument(ToTemplate); } case TemplateArgument::TemplateExpansion: { TemplateName ToTemplate = Importer.Import(From.getAsTemplateOrTemplatePattern()); if (ToTemplate.isNull()) return TemplateArgument(); return TemplateArgument(ToTemplate, From.getNumTemplateExpansions()); } case TemplateArgument::Expression: if (Expr *ToExpr = Importer.Import(From.getAsExpr())) return TemplateArgument(ToExpr); return TemplateArgument(); case TemplateArgument::Pack: { SmallVector ToPack; ToPack.reserve(From.pack_size()); if (ImportTemplateArguments(From.pack_begin(), From.pack_size(), ToPack)) return TemplateArgument(); return TemplateArgument( llvm::makeArrayRef(ToPack).copy(Importer.getToContext())); } } llvm_unreachable("Invalid template argument kind"); } TemplateArgumentLoc ASTNodeImporter::ImportTemplateArgumentLoc( const TemplateArgumentLoc &TALoc, bool &Error) { Error = false; TemplateArgument Arg = ImportTemplateArgument(TALoc.getArgument()); TemplateArgumentLocInfo FromInfo = TALoc.getLocInfo(); TemplateArgumentLocInfo ToInfo; if (Arg.getKind() == TemplateArgument::Expression) { Expr *E = Importer.Import(FromInfo.getAsExpr()); ToInfo = TemplateArgumentLocInfo(E); if (!E) Error = true; } else if (Arg.getKind() == TemplateArgument::Type) { if (TypeSourceInfo *TSI = Importer.Import(FromInfo.getAsTypeSourceInfo())) ToInfo = TemplateArgumentLocInfo(TSI); else Error = true; } else { ToInfo = TemplateArgumentLocInfo( Importer.Import(FromInfo.getTemplateQualifierLoc()), Importer.Import(FromInfo.getTemplateNameLoc()), Importer.Import(FromInfo.getTemplateEllipsisLoc())); } return TemplateArgumentLoc(Arg, ToInfo); } bool ASTNodeImporter::ImportTemplateArguments(const 
TemplateArgument *FromArgs, unsigned NumFromArgs, SmallVectorImpl &ToArgs) { for (unsigned I = 0; I != NumFromArgs; ++I) { TemplateArgument To = ImportTemplateArgument(FromArgs[I]); if (To.isNull() && !FromArgs[I].isNull()) return true; ToArgs.push_back(To); } return false; } bool ASTNodeImporter::IsStructuralMatch(RecordDecl *FromRecord, RecordDecl *ToRecord, bool Complain) { // Eliminate a potential failure point where we attempt to re-import // something we're trying to import while completing ToRecord. Decl *ToOrigin = Importer.GetOriginalDecl(ToRecord); if (ToOrigin) { RecordDecl *ToOriginRecord = dyn_cast(ToOrigin); if (ToOriginRecord) ToRecord = ToOriginRecord; } StructuralEquivalenceContext Ctx(Importer.getFromContext(), ToRecord->getASTContext(), Importer.getNonEquivalentDecls(), false, Complain); return Ctx.IsStructurallyEquivalent(FromRecord, ToRecord); } bool ASTNodeImporter::IsStructuralMatch(VarDecl *FromVar, VarDecl *ToVar, bool Complain) { StructuralEquivalenceContext Ctx( Importer.getFromContext(), Importer.getToContext(), Importer.getNonEquivalentDecls(), false, Complain); return Ctx.IsStructurallyEquivalent(FromVar, ToVar); } bool ASTNodeImporter::IsStructuralMatch(EnumDecl *FromEnum, EnumDecl *ToEnum) { StructuralEquivalenceContext Ctx(Importer.getFromContext(), Importer.getToContext(), Importer.getNonEquivalentDecls()); return Ctx.IsStructurallyEquivalent(FromEnum, ToEnum); } bool ASTNodeImporter::IsStructuralMatch(EnumConstantDecl *FromEC, EnumConstantDecl *ToEC) { const llvm::APSInt &FromVal = FromEC->getInitVal(); const llvm::APSInt &ToVal = ToEC->getInitVal(); return FromVal.isSigned() == ToVal.isSigned() && FromVal.getBitWidth() == ToVal.getBitWidth() && FromVal == ToVal; } bool ASTNodeImporter::IsStructuralMatch(ClassTemplateDecl *From, ClassTemplateDecl *To) { StructuralEquivalenceContext Ctx(Importer.getFromContext(), Importer.getToContext(), Importer.getNonEquivalentDecls()); return Ctx.IsStructurallyEquivalent(From, To); } bool ASTNodeImporter::IsStructuralMatch(VarTemplateDecl *From, VarTemplateDecl *To) { StructuralEquivalenceContext Ctx(Importer.getFromContext(), Importer.getToContext(), Importer.getNonEquivalentDecls()); return Ctx.IsStructurallyEquivalent(From, To); } Decl *ASTNodeImporter::VisitDecl(Decl *D) { Importer.FromDiag(D->getLocation(), diag::err_unsupported_ast_node) << D->getDeclKindName(); return nullptr; } Decl *ASTNodeImporter::VisitTranslationUnitDecl(TranslationUnitDecl *D) { TranslationUnitDecl *ToD = Importer.getToContext().getTranslationUnitDecl(); Importer.Imported(D, ToD); return ToD; } Decl *ASTNodeImporter::VisitAccessSpecDecl(AccessSpecDecl *D) { SourceLocation Loc = Importer.Import(D->getLocation()); SourceLocation ColonLoc = Importer.Import(D->getColonLoc()); // Import the context of this declaration. DeclContext *DC = Importer.ImportContext(D->getDeclContext()); if (!DC) return nullptr; AccessSpecDecl *accessSpecDecl = AccessSpecDecl::Create(Importer.getToContext(), D->getAccess(), DC, Loc, ColonLoc); if (!accessSpecDecl) return nullptr; // Lexical DeclContext and Semantic DeclContext // is always the same for the accessSpec. accessSpecDecl->setLexicalDeclContext(DC); DC->addDeclInternal(accessSpecDecl); return accessSpecDecl; } Decl *ASTNodeImporter::VisitStaticAssertDecl(StaticAssertDecl *D) { DeclContext *DC = Importer.ImportContext(D->getDeclContext()); if (!DC) return nullptr; DeclContext *LexicalDC = DC; // Import the location of this declaration. 
SourceLocation Loc = Importer.Import(D->getLocation()); Expr *AssertExpr = Importer.Import(D->getAssertExpr()); if (!AssertExpr) return nullptr; StringLiteral *FromMsg = D->getMessage(); StringLiteral *ToMsg = cast_or_null(Importer.Import(FromMsg)); if (!ToMsg && FromMsg) return nullptr; StaticAssertDecl *ToD = StaticAssertDecl::Create( Importer.getToContext(), DC, Loc, AssertExpr, ToMsg, Importer.Import(D->getRParenLoc()), D->isFailed()); ToD->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(ToD); Importer.Imported(D, ToD); return ToD; } Decl *ASTNodeImporter::VisitNamespaceDecl(NamespaceDecl *D) { // Import the major distinguishing characteristics of this namespace. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; NamespaceDecl *MergeWithNamespace = nullptr; if (!Name) { // This is an anonymous namespace. Adopt an existing anonymous // namespace if we can. // FIXME: Not testable. if (TranslationUnitDecl *TU = dyn_cast(DC)) MergeWithNamespace = TU->getAnonymousNamespace(); else MergeWithNamespace = cast(DC)->getAnonymousNamespace(); } else { SmallVector ConflictingDecls; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(Decl::IDNS_Namespace)) continue; if (NamespaceDecl *FoundNS = dyn_cast(FoundDecls[I])) { MergeWithNamespace = FoundNS; ConflictingDecls.clear(); break; } ConflictingDecls.push_back(FoundDecls[I]); } if (!ConflictingDecls.empty()) { Name = Importer.HandleNameConflict(Name, DC, Decl::IDNS_Namespace, ConflictingDecls.data(), ConflictingDecls.size()); } } // Create the "to" namespace, if needed. NamespaceDecl *ToNamespace = MergeWithNamespace; if (!ToNamespace) { ToNamespace = NamespaceDecl::Create(Importer.getToContext(), DC, D->isInline(), Importer.Import(D->getLocStart()), Loc, Name.getAsIdentifierInfo(), /*PrevDecl=*/nullptr); ToNamespace->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(ToNamespace); // If this is an anonymous namespace, register it as the anonymous // namespace within its context. if (!Name) { if (TranslationUnitDecl *TU = dyn_cast(DC)) TU->setAnonymousNamespace(ToNamespace); else cast(DC)->setAnonymousNamespace(ToNamespace); } } Importer.Imported(D, ToNamespace); ImportDeclContext(D); return ToNamespace; } Decl *ASTNodeImporter::VisitTypedefNameDecl(TypedefNameDecl *D, bool IsAlias) { // Import the major distinguishing characteristics of this typedef. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // If this typedef is not in block scope, determine whether we've // seen a typedef with the same name (that we can merge with) or any // other entity by that name (which name lookup could conflict with). 
if (!DC->isFunctionOrMethod()) { SmallVector ConflictingDecls; unsigned IDNS = Decl::IDNS_Ordinary; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(IDNS)) continue; if (TypedefNameDecl *FoundTypedef = dyn_cast(FoundDecls[I])) { if (Importer.IsStructurallyEquivalent(D->getUnderlyingType(), FoundTypedef->getUnderlyingType())) return Importer.Imported(D, FoundTypedef); } ConflictingDecls.push_back(FoundDecls[I]); } if (!ConflictingDecls.empty()) { Name = Importer.HandleNameConflict(Name, DC, IDNS, ConflictingDecls.data(), ConflictingDecls.size()); if (!Name) return nullptr; } } // Import the underlying type of this typedef; QualType T = Importer.Import(D->getUnderlyingType()); if (T.isNull()) return nullptr; // Create the new typedef node. TypeSourceInfo *TInfo = Importer.Import(D->getTypeSourceInfo()); SourceLocation StartL = Importer.Import(D->getLocStart()); TypedefNameDecl *ToTypedef; if (IsAlias) ToTypedef = TypeAliasDecl::Create(Importer.getToContext(), DC, StartL, Loc, Name.getAsIdentifierInfo(), TInfo); else ToTypedef = TypedefDecl::Create(Importer.getToContext(), DC, StartL, Loc, Name.getAsIdentifierInfo(), TInfo); ToTypedef->setAccess(D->getAccess()); ToTypedef->setLexicalDeclContext(LexicalDC); Importer.Imported(D, ToTypedef); LexicalDC->addDeclInternal(ToTypedef); return ToTypedef; } Decl *ASTNodeImporter::VisitTypedefDecl(TypedefDecl *D) { return VisitTypedefNameDecl(D, /*IsAlias=*/false); } Decl *ASTNodeImporter::VisitTypeAliasDecl(TypeAliasDecl *D) { return VisitTypedefNameDecl(D, /*IsAlias=*/true); } Decl *ASTNodeImporter::VisitLabelDecl(LabelDecl *D) { // Import the major distinguishing characteristics of this label. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; assert(LexicalDC->isFunctionOrMethod()); LabelDecl *ToLabel = D->isGnuLocal() ? LabelDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getLocation()), Name.getAsIdentifierInfo(), Importer.Import(D->getLocStart())) : LabelDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getLocation()), Name.getAsIdentifierInfo()); Importer.Imported(D, ToLabel); LabelStmt *Label = cast_or_null(Importer.Import(D->getStmt())); if (!Label) return nullptr; ToLabel->setStmt(Label); ToLabel->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(ToLabel); return ToLabel; } Decl *ASTNodeImporter::VisitEnumDecl(EnumDecl *D) { // Import the major distinguishing characteristics of this enum. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // Figure out what enum name we're looking for. unsigned IDNS = Decl::IDNS_Tag; DeclarationName SearchName = Name; if (!SearchName && D->getTypedefNameForAnonDecl()) { SearchName = Importer.Import(D->getTypedefNameForAnonDecl()->getDeclName()); IDNS = Decl::IDNS_Ordinary; } else if (Importer.getToContext().getLangOpts().CPlusPlus) IDNS |= Decl::IDNS_Ordinary; // We may already have an enum of the same name; try to find and match it. 
if (!DC->isFunctionOrMethod() && SearchName) { SmallVector ConflictingDecls; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(SearchName, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(IDNS)) continue; Decl *Found = FoundDecls[I]; if (TypedefNameDecl *Typedef = dyn_cast(Found)) { if (const TagType *Tag = Typedef->getUnderlyingType()->getAs()) Found = Tag->getDecl(); } if (EnumDecl *FoundEnum = dyn_cast(Found)) { if (IsStructuralMatch(D, FoundEnum)) return Importer.Imported(D, FoundEnum); } ConflictingDecls.push_back(FoundDecls[I]); } if (!ConflictingDecls.empty()) { Name = Importer.HandleNameConflict(Name, DC, IDNS, ConflictingDecls.data(), ConflictingDecls.size()); } } // Create the enum declaration. EnumDecl *D2 = EnumDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getLocStart()), Loc, Name.getAsIdentifierInfo(), nullptr, D->isScoped(), D->isScopedUsingClassTag(), D->isFixed()); // Import the qualifier, if any. D2->setQualifierInfo(Importer.Import(D->getQualifierLoc())); D2->setAccess(D->getAccess()); D2->setLexicalDeclContext(LexicalDC); Importer.Imported(D, D2); LexicalDC->addDeclInternal(D2); // Import the integer type. QualType ToIntegerType = Importer.Import(D->getIntegerType()); if (ToIntegerType.isNull()) return nullptr; D2->setIntegerType(ToIntegerType); // Import the definition if (D->isCompleteDefinition() && ImportDefinition(D, D2)) return nullptr; return D2; } Decl *ASTNodeImporter::VisitRecordDecl(RecordDecl *D) { // If this record has a definition in the translation unit we're coming from, // but this particular declaration is not that definition, import the // definition and map to that. TagDecl *Definition = D->getDefinition(); if (Definition && Definition != D) { Decl *ImportedDef = Importer.Import(Definition); if (!ImportedDef) return nullptr; return Importer.Imported(D, ImportedDef); } // Import the major distinguishing characteristics of this record. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // Figure out what structure name we're looking for. unsigned IDNS = Decl::IDNS_Tag; DeclarationName SearchName = Name; if (!SearchName && D->getTypedefNameForAnonDecl()) { SearchName = Importer.Import(D->getTypedefNameForAnonDecl()->getDeclName()); IDNS = Decl::IDNS_Ordinary; } else if (Importer.getToContext().getLangOpts().CPlusPlus) IDNS |= Decl::IDNS_Ordinary; // We may already have a record of the same name; try to find and match it. RecordDecl *AdoptDecl = nullptr; RecordDecl *PrevDecl = nullptr; if (!DC->isFunctionOrMethod()) { SmallVector ConflictingDecls; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(SearchName, FoundDecls); if (!FoundDecls.empty()) { // We're going to have to compare D against potentially conflicting Decls, so complete it. 
if (D->hasExternalLexicalStorage() && !D->isCompleteDefinition()) D->getASTContext().getExternalSource()->CompleteType(D); } for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(IDNS)) continue; Decl *Found = FoundDecls[I]; if (TypedefNameDecl *Typedef = dyn_cast(Found)) { if (const TagType *Tag = Typedef->getUnderlyingType()->getAs()) Found = Tag->getDecl(); } if (RecordDecl *FoundRecord = dyn_cast(Found)) { if (D->isAnonymousStructOrUnion() && FoundRecord->isAnonymousStructOrUnion()) { // If both anonymous structs/unions are in a record context, make sure // they occur in the same location in the context records. if (Optional Index1 = StructuralEquivalenceContext::findUntaggedStructOrUnionIndex( D)) { if (Optional Index2 = StructuralEquivalenceContext:: findUntaggedStructOrUnionIndex(FoundRecord)) { if (*Index1 != *Index2) continue; } } } PrevDecl = FoundRecord; if (RecordDecl *FoundDef = FoundRecord->getDefinition()) { if ((SearchName && !D->isCompleteDefinition()) || (D->isCompleteDefinition() && D->isAnonymousStructOrUnion() == FoundDef->isAnonymousStructOrUnion() && IsStructuralMatch(D, FoundDef))) { // The record types structurally match, or the "from" translation // unit only had a forward declaration anyway; call it the same // function. // FIXME: For C++, we should also merge methods here. return Importer.Imported(D, FoundDef); } } else if (!D->isCompleteDefinition()) { // We have a forward declaration of this type, so adopt that forward // declaration rather than building a new one. // If one or both can be completed from external storage then try one // last time to complete and compare them before doing this. if (FoundRecord->hasExternalLexicalStorage() && !FoundRecord->isCompleteDefinition()) FoundRecord->getASTContext().getExternalSource()->CompleteType(FoundRecord); if (D->hasExternalLexicalStorage()) D->getASTContext().getExternalSource()->CompleteType(D); if (FoundRecord->isCompleteDefinition() && D->isCompleteDefinition() && !IsStructuralMatch(D, FoundRecord)) continue; AdoptDecl = FoundRecord; continue; } else if (!SearchName) { continue; } } ConflictingDecls.push_back(FoundDecls[I]); } if (!ConflictingDecls.empty() && SearchName) { Name = Importer.HandleNameConflict(Name, DC, IDNS, ConflictingDecls.data(), ConflictingDecls.size()); } } // Create the record declaration. 
RecordDecl *D2 = AdoptDecl; SourceLocation StartLoc = Importer.Import(D->getLocStart()); if (!D2) { CXXRecordDecl *D2CXX = nullptr; if (CXXRecordDecl *DCXX = llvm::dyn_cast(D)) { if (DCXX->isLambda()) { TypeSourceInfo *TInfo = Importer.Import(DCXX->getLambdaTypeInfo()); D2CXX = CXXRecordDecl::CreateLambda(Importer.getToContext(), DC, TInfo, Loc, DCXX->isDependentLambda(), DCXX->isGenericLambda(), DCXX->getLambdaCaptureDefault()); Decl *CDecl = Importer.Import(DCXX->getLambdaContextDecl()); if (DCXX->getLambdaContextDecl() && !CDecl) return nullptr; D2CXX->setLambdaMangling(DCXX->getLambdaManglingNumber(), CDecl); } else if (DCXX->isInjectedClassName()) { // We have to be careful to do a similar dance to the one in // Sema::ActOnStartCXXMemberDeclarations CXXRecordDecl *const PrevDecl = nullptr; const bool DelayTypeCreation = true; D2CXX = CXXRecordDecl::Create( Importer.getToContext(), D->getTagKind(), DC, StartLoc, Loc, Name.getAsIdentifierInfo(), PrevDecl, DelayTypeCreation); Importer.getToContext().getTypeDeclType( D2CXX, llvm::dyn_cast(DC)); } else { D2CXX = CXXRecordDecl::Create(Importer.getToContext(), D->getTagKind(), DC, StartLoc, Loc, Name.getAsIdentifierInfo()); } D2 = D2CXX; D2->setAccess(D->getAccess()); } else { D2 = RecordDecl::Create(Importer.getToContext(), D->getTagKind(), DC, StartLoc, Loc, Name.getAsIdentifierInfo()); } D2->setQualifierInfo(Importer.Import(D->getQualifierLoc())); D2->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(D2); if (D->isAnonymousStructOrUnion()) D2->setAnonymousStructOrUnion(true); if (PrevDecl) { // FIXME: do this for all Redeclarables, not just RecordDecls. D2->setPreviousDecl(PrevDecl); } } Importer.Imported(D, D2); if (D->isCompleteDefinition() && ImportDefinition(D, D2, IDK_Default)) return nullptr; return D2; } Decl *ASTNodeImporter::VisitEnumConstantDecl(EnumConstantDecl *D) { // Import the major distinguishing characteristics of this enumerator. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; // Determine whether there are any other declarations with the same name and // in the same context. if (!LexicalDC->isFunctionOrMethod()) { SmallVector ConflictingDecls; unsigned IDNS = Decl::IDNS_Ordinary; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(IDNS)) continue; if (EnumConstantDecl *FoundEnumConstant = dyn_cast(FoundDecls[I])) { if (IsStructuralMatch(D, FoundEnumConstant)) return Importer.Imported(D, FoundEnumConstant); } ConflictingDecls.push_back(FoundDecls[I]); } if (!ConflictingDecls.empty()) { Name = Importer.HandleNameConflict(Name, DC, IDNS, ConflictingDecls.data(), ConflictingDecls.size()); if (!Name) return nullptr; } } Expr *Init = Importer.Import(D->getInitExpr()); if (D->getInitExpr() && !Init) return nullptr; EnumConstantDecl *ToEnumerator = EnumConstantDecl::Create(Importer.getToContext(), cast(DC), Loc, Name.getAsIdentifierInfo(), T, Init, D->getInitVal()); ToEnumerator->setAccess(D->getAccess()); ToEnumerator->setLexicalDeclContext(LexicalDC); Importer.Imported(D, ToEnumerator); LexicalDC->addDeclInternal(ToEnumerator); return ToEnumerator; } Decl *ASTNodeImporter::VisitFunctionDecl(FunctionDecl *D) { // Import the major distinguishing characteristics of this function. 
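  // Overall flow: try to merge with an existing function of the same name and
  // type in the "to" context; otherwise import the type and parameters, create
  // the matching FunctionDecl subclass (constructor, destructor, conversion,
  // method, or plain function), and finally import the body and add the new
  // declaration to its lexical context.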
DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // Try to find a function in our own ("to") context with the same name, same // type, and in the same context as the function we're importing. if (!LexicalDC->isFunctionOrMethod()) { SmallVector ConflictingDecls; unsigned IDNS = Decl::IDNS_Ordinary; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(IDNS)) continue; if (FunctionDecl *FoundFunction = dyn_cast(FoundDecls[I])) { if (FoundFunction->hasExternalFormalLinkage() && D->hasExternalFormalLinkage()) { if (Importer.IsStructurallyEquivalent(D->getType(), FoundFunction->getType())) { // FIXME: Actually try to merge the body and other attributes. return Importer.Imported(D, FoundFunction); } // FIXME: Check for overloading more carefully, e.g., by boosting // Sema::IsOverload out to the AST library. // Function overloading is okay in C++. if (Importer.getToContext().getLangOpts().CPlusPlus) continue; // Complain about inconsistent function types. Importer.ToDiag(Loc, diag::err_odr_function_type_inconsistent) << Name << D->getType() << FoundFunction->getType(); Importer.ToDiag(FoundFunction->getLocation(), diag::note_odr_value_here) << FoundFunction->getType(); } } ConflictingDecls.push_back(FoundDecls[I]); } if (!ConflictingDecls.empty()) { Name = Importer.HandleNameConflict(Name, DC, IDNS, ConflictingDecls.data(), ConflictingDecls.size()); if (!Name) return nullptr; } } DeclarationNameInfo NameInfo(Name, Loc); // Import additional name location/type info. ImportDeclarationNameLoc(D->getNameInfo(), NameInfo); QualType FromTy = D->getType(); bool usedDifferentExceptionSpec = false; if (const FunctionProtoType * FromFPT = D->getType()->getAs()) { FunctionProtoType::ExtProtoInfo FromEPI = FromFPT->getExtProtoInfo(); // FunctionProtoType::ExtProtoInfo's ExceptionSpecDecl can point to the // FunctionDecl that we are importing the FunctionProtoType for. // To avoid an infinite recursion when importing, create the FunctionDecl // with a simplified function type and update it afterwards. if (FromEPI.ExceptionSpec.SourceDecl || FromEPI.ExceptionSpec.SourceTemplate || FromEPI.ExceptionSpec.NoexceptExpr) { FunctionProtoType::ExtProtoInfo DefaultEPI; FromTy = Importer.getFromContext().getFunctionType( FromFPT->getReturnType(), FromFPT->getParamTypes(), DefaultEPI); usedDifferentExceptionSpec = true; } } // Import the type. QualType T = Importer.Import(FromTy); if (T.isNull()) return nullptr; // Import the function parameters. SmallVector Parameters; for (auto P : D->parameters()) { ParmVarDecl *ToP = cast_or_null(Importer.Import(P)); if (!ToP) return nullptr; Parameters.push_back(ToP); } // Create the imported function. 
TypeSourceInfo *TInfo = Importer.Import(D->getTypeSourceInfo()); FunctionDecl *ToFunction = nullptr; SourceLocation InnerLocStart = Importer.Import(D->getInnerLocStart()); if (CXXConstructorDecl *FromConstructor = dyn_cast(D)) { ToFunction = CXXConstructorDecl::Create(Importer.getToContext(), cast(DC), InnerLocStart, NameInfo, T, TInfo, FromConstructor->isExplicit(), D->isInlineSpecified(), D->isImplicit(), D->isConstexpr()); if (unsigned NumInitializers = FromConstructor->getNumCtorInitializers()) { SmallVector CtorInitializers; for (CXXCtorInitializer *I : FromConstructor->inits()) { CXXCtorInitializer *ToI = cast_or_null(Importer.Import(I)); if (!ToI && I) return nullptr; CtorInitializers.push_back(ToI); } CXXCtorInitializer **Memory = new (Importer.getToContext()) CXXCtorInitializer *[NumInitializers]; std::copy(CtorInitializers.begin(), CtorInitializers.end(), Memory); CXXConstructorDecl *ToCtor = llvm::cast(ToFunction); ToCtor->setCtorInitializers(Memory); ToCtor->setNumCtorInitializers(NumInitializers); } } else if (isa(D)) { ToFunction = CXXDestructorDecl::Create(Importer.getToContext(), cast(DC), InnerLocStart, NameInfo, T, TInfo, D->isInlineSpecified(), D->isImplicit()); } else if (CXXConversionDecl *FromConversion = dyn_cast(D)) { ToFunction = CXXConversionDecl::Create(Importer.getToContext(), cast(DC), InnerLocStart, NameInfo, T, TInfo, D->isInlineSpecified(), FromConversion->isExplicit(), D->isConstexpr(), Importer.Import(D->getLocEnd())); } else if (CXXMethodDecl *Method = dyn_cast(D)) { ToFunction = CXXMethodDecl::Create(Importer.getToContext(), cast(DC), InnerLocStart, NameInfo, T, TInfo, Method->getStorageClass(), Method->isInlineSpecified(), D->isConstexpr(), Importer.Import(D->getLocEnd())); } else { ToFunction = FunctionDecl::Create(Importer.getToContext(), DC, InnerLocStart, NameInfo, T, TInfo, D->getStorageClass(), D->isInlineSpecified(), D->hasWrittenPrototype(), D->isConstexpr()); } // Import the qualifier, if any. ToFunction->setQualifierInfo(Importer.Import(D->getQualifierLoc())); ToFunction->setAccess(D->getAccess()); ToFunction->setLexicalDeclContext(LexicalDC); ToFunction->setVirtualAsWritten(D->isVirtualAsWritten()); ToFunction->setTrivial(D->isTrivial()); ToFunction->setPure(D->isPure()); Importer.Imported(D, ToFunction); // Set the parameters. for (unsigned I = 0, N = Parameters.size(); I != N; ++I) { Parameters[I]->setOwningFunction(ToFunction); ToFunction->addDeclInternal(Parameters[I]); } ToFunction->setParams(Parameters); if (usedDifferentExceptionSpec) { // Update FunctionProtoType::ExtProtoInfo. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; ToFunction->setType(T); } // Import the body, if any. if (Stmt *FromBody = D->getBody()) { if (Stmt *ToBody = Importer.Import(FromBody)) { ToFunction->setBody(ToBody); } } // FIXME: Other bits to merge? // Add this function to the lexical context. 
LexicalDC->addDeclInternal(ToFunction); if (auto *FromCXXMethod = dyn_cast(D)) ImportOverrides(cast(ToFunction), FromCXXMethod); return ToFunction; } Decl *ASTNodeImporter::VisitCXXMethodDecl(CXXMethodDecl *D) { return VisitFunctionDecl(D); } Decl *ASTNodeImporter::VisitCXXConstructorDecl(CXXConstructorDecl *D) { return VisitCXXMethodDecl(D); } Decl *ASTNodeImporter::VisitCXXDestructorDecl(CXXDestructorDecl *D) { return VisitCXXMethodDecl(D); } Decl *ASTNodeImporter::VisitCXXConversionDecl(CXXConversionDecl *D) { return VisitCXXMethodDecl(D); } static unsigned getFieldIndex(Decl *F) { RecordDecl *Owner = dyn_cast(F->getDeclContext()); if (!Owner) return 0; unsigned Index = 1; for (const auto *D : Owner->noload_decls()) { if (D == F) return Index; if (isa(*D) || isa(*D)) ++Index; } return Index; } Decl *ASTNodeImporter::VisitFieldDecl(FieldDecl *D) { // Import the major distinguishing characteristics of a variable. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // Determine whether we've already imported this field. SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (FieldDecl *FoundField = dyn_cast(FoundDecls[I])) { // For anonymous fields, match up by index. if (!Name && getFieldIndex(D) != getFieldIndex(FoundField)) continue; if (Importer.IsStructurallyEquivalent(D->getType(), FoundField->getType())) { Importer.Imported(D, FoundField); return FoundField; } Importer.ToDiag(Loc, diag::err_odr_field_type_inconsistent) << Name << D->getType() << FoundField->getType(); Importer.ToDiag(FoundField->getLocation(), diag::note_odr_value_here) << FoundField->getType(); return nullptr; } } // Import the type. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; TypeSourceInfo *TInfo = Importer.Import(D->getTypeSourceInfo()); Expr *BitWidth = Importer.Import(D->getBitWidth()); if (!BitWidth && D->getBitWidth()) return nullptr; FieldDecl *ToField = FieldDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getInnerLocStart()), Loc, Name.getAsIdentifierInfo(), T, TInfo, BitWidth, D->isMutable(), D->getInClassInitStyle()); ToField->setAccess(D->getAccess()); ToField->setLexicalDeclContext(LexicalDC); if (Expr *FromInitializer = D->getInClassInitializer()) { Expr *ToInitializer = Importer.Import(FromInitializer); if (ToInitializer) ToField->setInClassInitializer(ToInitializer); else return nullptr; } ToField->setImplicit(D->isImplicit()); Importer.Imported(D, ToField); LexicalDC->addDeclInternal(ToField); return ToField; } Decl *ASTNodeImporter::VisitIndirectFieldDecl(IndirectFieldDecl *D) { // Import the major distinguishing characteristics of a variable. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // Determine whether we've already imported this field. SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (IndirectFieldDecl *FoundField = dyn_cast(FoundDecls[I])) { // For anonymous indirect fields, match up by index. 
if (!Name && getFieldIndex(D) != getFieldIndex(FoundField)) continue; if (Importer.IsStructurallyEquivalent(D->getType(), FoundField->getType(), !Name.isEmpty())) { Importer.Imported(D, FoundField); return FoundField; } // If there are more anonymous fields to check, continue. if (!Name && I < N-1) continue; Importer.ToDiag(Loc, diag::err_odr_field_type_inconsistent) << Name << D->getType() << FoundField->getType(); Importer.ToDiag(FoundField->getLocation(), diag::note_odr_value_here) << FoundField->getType(); return nullptr; } } // Import the type. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; NamedDecl **NamedChain = new (Importer.getToContext())NamedDecl*[D->getChainingSize()]; unsigned i = 0; for (auto *PI : D->chain()) { Decl *D = Importer.Import(PI); if (!D) return nullptr; NamedChain[i++] = cast(D); } IndirectFieldDecl *ToIndirectField = IndirectFieldDecl::Create( Importer.getToContext(), DC, Loc, Name.getAsIdentifierInfo(), T, {NamedChain, D->getChainingSize()}); for (const auto *Attr : D->attrs()) ToIndirectField->addAttr(Attr->clone(Importer.getToContext())); ToIndirectField->setAccess(D->getAccess()); ToIndirectField->setLexicalDeclContext(LexicalDC); Importer.Imported(D, ToIndirectField); LexicalDC->addDeclInternal(ToIndirectField); return ToIndirectField; } Decl *ASTNodeImporter::VisitFriendDecl(FriendDecl *D) { // Import the major distinguishing characteristics of a declaration. DeclContext *DC = Importer.ImportContext(D->getDeclContext()); DeclContext *LexicalDC = D->getDeclContext() == D->getLexicalDeclContext() ? DC : Importer.ImportContext(D->getLexicalDeclContext()); if (!DC || !LexicalDC) return nullptr; // Determine whether we've already imported this decl. // FriendDecl is not a NamedDecl so we cannot use localUncachedLookup. auto *RD = cast(DC); FriendDecl *ImportedFriend = RD->getFirstFriend(); StructuralEquivalenceContext Context( Importer.getFromContext(), Importer.getToContext(), Importer.getNonEquivalentDecls(), false, false); while (ImportedFriend) { if (D->getFriendDecl() && ImportedFriend->getFriendDecl()) { if (Context.IsStructurallyEquivalent(D->getFriendDecl(), ImportedFriend->getFriendDecl())) return Importer.Imported(D, ImportedFriend); } else if (D->getFriendType() && ImportedFriend->getFriendType()) { if (Importer.IsStructurallyEquivalent( D->getFriendType()->getType(), ImportedFriend->getFriendType()->getType(), true)) return Importer.Imported(D, ImportedFriend); } ImportedFriend = ImportedFriend->getNextFriend(); } // Not found. Create it. FriendDecl::FriendUnion ToFU; if (NamedDecl *FriendD = D->getFriendDecl()) ToFU = cast_or_null(Importer.Import(FriendD)); else ToFU = Importer.Import(D->getFriendType()); if (!ToFU) return nullptr; SmallVector ToTPLists(D->NumTPLists); TemplateParameterList **FromTPLists = D->getTrailingObjects(); for (unsigned I = 0; I < D->NumTPLists; I++) { TemplateParameterList *List = ImportTemplateParameterList(FromTPLists[I]); if (!List) return nullptr; ToTPLists[I] = List; } FriendDecl *FrD = FriendDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getLocation()), ToFU, Importer.Import(D->getFriendLoc()), ToTPLists); Importer.Imported(D, FrD); RD->pushFriendDecl(FrD); FrD->setAccess(D->getAccess()); FrD->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(FrD); return FrD; } Decl *ASTNodeImporter::VisitObjCIvarDecl(ObjCIvarDecl *D) { // Import the major distinguishing characteristics of an ivar. 
DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // Determine whether we've already imported this ivar SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (ObjCIvarDecl *FoundIvar = dyn_cast(FoundDecls[I])) { if (Importer.IsStructurallyEquivalent(D->getType(), FoundIvar->getType())) { Importer.Imported(D, FoundIvar); return FoundIvar; } Importer.ToDiag(Loc, diag::err_odr_ivar_type_inconsistent) << Name << D->getType() << FoundIvar->getType(); Importer.ToDiag(FoundIvar->getLocation(), diag::note_odr_value_here) << FoundIvar->getType(); return nullptr; } } // Import the type. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; TypeSourceInfo *TInfo = Importer.Import(D->getTypeSourceInfo()); Expr *BitWidth = Importer.Import(D->getBitWidth()); if (!BitWidth && D->getBitWidth()) return nullptr; ObjCIvarDecl *ToIvar = ObjCIvarDecl::Create(Importer.getToContext(), cast(DC), Importer.Import(D->getInnerLocStart()), Loc, Name.getAsIdentifierInfo(), T, TInfo, D->getAccessControl(), BitWidth, D->getSynthesize()); ToIvar->setLexicalDeclContext(LexicalDC); Importer.Imported(D, ToIvar); LexicalDC->addDeclInternal(ToIvar); return ToIvar; } Decl *ASTNodeImporter::VisitVarDecl(VarDecl *D) { // Import the major distinguishing characteristics of a variable. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // Try to find a variable in our own ("to") context with the same name and // in the same context as the variable we're importing. if (D->isFileVarDecl()) { VarDecl *MergeWithVar = nullptr; SmallVector ConflictingDecls; unsigned IDNS = Decl::IDNS_Ordinary; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(IDNS)) continue; if (VarDecl *FoundVar = dyn_cast(FoundDecls[I])) { // We have found a variable that we may need to merge with. Check it. if (FoundVar->hasExternalFormalLinkage() && D->hasExternalFormalLinkage()) { if (Importer.IsStructurallyEquivalent(D->getType(), FoundVar->getType())) { MergeWithVar = FoundVar; break; } const ArrayType *FoundArray = Importer.getToContext().getAsArrayType(FoundVar->getType()); const ArrayType *TArray = Importer.getToContext().getAsArrayType(D->getType()); if (FoundArray && TArray) { if (isa(FoundArray) && isa(TArray)) { // Import the type. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; FoundVar->setType(T); MergeWithVar = FoundVar; break; } else if (isa(TArray) && isa(FoundArray)) { MergeWithVar = FoundVar; break; } } Importer.ToDiag(Loc, diag::err_odr_variable_type_inconsistent) << Name << D->getType() << FoundVar->getType(); Importer.ToDiag(FoundVar->getLocation(), diag::note_odr_value_here) << FoundVar->getType(); } } ConflictingDecls.push_back(FoundDecls[I]); } if (MergeWithVar) { // An equivalent variable with external linkage has been found. Link // the two declarations, then merge them. 
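      // Merging keeps the existing "to" variable: the "from" definition's
      // initializer (and its known-ICE state) is copied onto it, while a
      // second definition on the "to" side is diagnosed as an ODR violation.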
Importer.Imported(D, MergeWithVar); if (VarDecl *DDef = D->getDefinition()) { if (VarDecl *ExistingDef = MergeWithVar->getDefinition()) { Importer.ToDiag(ExistingDef->getLocation(), diag::err_odr_variable_multiple_def) << Name; Importer.FromDiag(DDef->getLocation(), diag::note_odr_defined_here); } else { Expr *Init = Importer.Import(DDef->getInit()); MergeWithVar->setInit(Init); if (DDef->isInitKnownICE()) { EvaluatedStmt *Eval = MergeWithVar->ensureEvaluatedStmt(); Eval->CheckedICE = true; Eval->IsICE = DDef->isInitICE(); } } } return MergeWithVar; } if (!ConflictingDecls.empty()) { Name = Importer.HandleNameConflict(Name, DC, IDNS, ConflictingDecls.data(), ConflictingDecls.size()); if (!Name) return nullptr; } } // Import the type. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; // Create the imported variable. TypeSourceInfo *TInfo = Importer.Import(D->getTypeSourceInfo()); VarDecl *ToVar = VarDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getInnerLocStart()), Loc, Name.getAsIdentifierInfo(), T, TInfo, D->getStorageClass()); ToVar->setQualifierInfo(Importer.Import(D->getQualifierLoc())); ToVar->setAccess(D->getAccess()); ToVar->setLexicalDeclContext(LexicalDC); Importer.Imported(D, ToVar); LexicalDC->addDeclInternal(ToVar); if (!D->isFileVarDecl() && D->isUsed()) ToVar->setIsUsed(); // Merge the initializer. if (ImportDefinition(D, ToVar)) return nullptr; if (D->isConstexpr()) ToVar->setConstexpr(true); return ToVar; } Decl *ASTNodeImporter::VisitImplicitParamDecl(ImplicitParamDecl *D) { // Parameters are created in the translation unit's context, then moved // into the function declaration's context afterward. DeclContext *DC = Importer.getToContext().getTranslationUnitDecl(); // Import the name of this declaration. DeclarationName Name = Importer.Import(D->getDeclName()); if (D->getDeclName() && !Name) return nullptr; // Import the location of this declaration. SourceLocation Loc = Importer.Import(D->getLocation()); // Import the parameter's type. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; // Create the imported parameter. auto *ToParm = ImplicitParamDecl::Create(Importer.getToContext(), DC, Loc, Name.getAsIdentifierInfo(), T, D->getParameterKind()); return Importer.Imported(D, ToParm); } Decl *ASTNodeImporter::VisitParmVarDecl(ParmVarDecl *D) { // Parameters are created in the translation unit's context, then moved // into the function declaration's context afterward. DeclContext *DC = Importer.getToContext().getTranslationUnitDecl(); // Import the name of this declaration. DeclarationName Name = Importer.Import(D->getDeclName()); if (D->getDeclName() && !Name) return nullptr; // Import the location of this declaration. SourceLocation Loc = Importer.Import(D->getLocation()); // Import the parameter's type. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; // Create the imported parameter. TypeSourceInfo *TInfo = Importer.Import(D->getTypeSourceInfo()); ParmVarDecl *ToParm = ParmVarDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getInnerLocStart()), Loc, Name.getAsIdentifierInfo(), T, TInfo, D->getStorageClass(), /*DefaultArg*/ nullptr); // Set the default argument. 
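  // A default argument can be in one of three states: still dependent on an
  // enclosing template (uninstantiated), not yet parsed (delayed in-class
  // parsing), or a regular parsed expression; each case is preserved below.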
ToParm->setHasInheritedDefaultArg(D->hasInheritedDefaultArg()); ToParm->setKNRPromoted(D->isKNRPromoted()); Expr *ToDefArg = nullptr; Expr *FromDefArg = nullptr; if (D->hasUninstantiatedDefaultArg()) { FromDefArg = D->getUninstantiatedDefaultArg(); ToDefArg = Importer.Import(FromDefArg); ToParm->setUninstantiatedDefaultArg(ToDefArg); } else if (D->hasUnparsedDefaultArg()) { ToParm->setUnparsedDefaultArg(); } else if (D->hasDefaultArg()) { FromDefArg = D->getDefaultArg(); ToDefArg = Importer.Import(FromDefArg); ToParm->setDefaultArg(ToDefArg); } if (FromDefArg && !ToDefArg) return nullptr; if (D->isUsed()) ToParm->setIsUsed(); return Importer.Imported(D, ToParm); } Decl *ASTNodeImporter::VisitObjCMethodDecl(ObjCMethodDecl *D) { // Import the major distinguishing characteristics of a method. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (ObjCMethodDecl *FoundMethod = dyn_cast(FoundDecls[I])) { if (FoundMethod->isInstanceMethod() != D->isInstanceMethod()) continue; // Check return types. if (!Importer.IsStructurallyEquivalent(D->getReturnType(), FoundMethod->getReturnType())) { Importer.ToDiag(Loc, diag::err_odr_objc_method_result_type_inconsistent) << D->isInstanceMethod() << Name << D->getReturnType() << FoundMethod->getReturnType(); Importer.ToDiag(FoundMethod->getLocation(), diag::note_odr_objc_method_here) << D->isInstanceMethod() << Name; return nullptr; } // Check the number of parameters. if (D->param_size() != FoundMethod->param_size()) { Importer.ToDiag(Loc, diag::err_odr_objc_method_num_params_inconsistent) << D->isInstanceMethod() << Name << D->param_size() << FoundMethod->param_size(); Importer.ToDiag(FoundMethod->getLocation(), diag::note_odr_objc_method_here) << D->isInstanceMethod() << Name; return nullptr; } // Check parameter types. for (ObjCMethodDecl::param_iterator P = D->param_begin(), PEnd = D->param_end(), FoundP = FoundMethod->param_begin(); P != PEnd; ++P, ++FoundP) { if (!Importer.IsStructurallyEquivalent((*P)->getType(), (*FoundP)->getType())) { Importer.FromDiag((*P)->getLocation(), diag::err_odr_objc_method_param_type_inconsistent) << D->isInstanceMethod() << Name << (*P)->getType() << (*FoundP)->getType(); Importer.ToDiag((*FoundP)->getLocation(), diag::note_odr_value_here) << (*FoundP)->getType(); return nullptr; } } // Check variadic/non-variadic. // Check the number of parameters. if (D->isVariadic() != FoundMethod->isVariadic()) { Importer.ToDiag(Loc, diag::err_odr_objc_method_variadic_inconsistent) << D->isInstanceMethod() << Name; Importer.ToDiag(FoundMethod->getLocation(), diag::note_odr_objc_method_here) << D->isInstanceMethod() << Name; return nullptr; } // FIXME: Any other bits we need to merge? return Importer.Imported(D, FoundMethod); } } // Import the result type. 
QualType ResultTy = Importer.Import(D->getReturnType()); if (ResultTy.isNull()) return nullptr; TypeSourceInfo *ReturnTInfo = Importer.Import(D->getReturnTypeSourceInfo()); ObjCMethodDecl *ToMethod = ObjCMethodDecl::Create( Importer.getToContext(), Loc, Importer.Import(D->getLocEnd()), Name.getObjCSelector(), ResultTy, ReturnTInfo, DC, D->isInstanceMethod(), D->isVariadic(), D->isPropertyAccessor(), D->isImplicit(), D->isDefined(), D->getImplementationControl(), D->hasRelatedResultType()); // FIXME: When we decide to merge method definitions, we'll need to // deal with implicit parameters. // Import the parameters SmallVector ToParams; for (auto *FromP : D->parameters()) { ParmVarDecl *ToP = cast_or_null(Importer.Import(FromP)); if (!ToP) return nullptr; ToParams.push_back(ToP); } // Set the parameters. for (unsigned I = 0, N = ToParams.size(); I != N; ++I) { ToParams[I]->setOwningFunction(ToMethod); ToMethod->addDeclInternal(ToParams[I]); } SmallVector SelLocs; D->getSelectorLocs(SelLocs); ToMethod->setMethodParams(Importer.getToContext(), ToParams, SelLocs); ToMethod->setLexicalDeclContext(LexicalDC); Importer.Imported(D, ToMethod); LexicalDC->addDeclInternal(ToMethod); return ToMethod; } Decl *ASTNodeImporter::VisitObjCTypeParamDecl(ObjCTypeParamDecl *D) { // Import the major distinguishing characteristics of a category. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; TypeSourceInfo *BoundInfo = Importer.Import(D->getTypeSourceInfo()); if (!BoundInfo) return nullptr; ObjCTypeParamDecl *Result = ObjCTypeParamDecl::Create( Importer.getToContext(), DC, D->getVariance(), Importer.Import(D->getVarianceLoc()), D->getIndex(), Importer.Import(D->getLocation()), Name.getAsIdentifierInfo(), Importer.Import(D->getColonLoc()), BoundInfo); Importer.Imported(D, Result); Result->setLexicalDeclContext(LexicalDC); return Result; } Decl *ASTNodeImporter::VisitObjCCategoryDecl(ObjCCategoryDecl *D) { // Import the major distinguishing characteristics of a category. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; ObjCInterfaceDecl *ToInterface = cast_or_null(Importer.Import(D->getClassInterface())); if (!ToInterface) return nullptr; // Determine if we've already encountered this category. ObjCCategoryDecl *MergeWithCategory = ToInterface->FindCategoryDeclaration(Name.getAsIdentifierInfo()); ObjCCategoryDecl *ToCategory = MergeWithCategory; if (!ToCategory) { ToCategory = ObjCCategoryDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getAtStartLoc()), Loc, Importer.Import(D->getCategoryNameLoc()), Name.getAsIdentifierInfo(), ToInterface, /*TypeParamList=*/nullptr, Importer.Import(D->getIvarLBraceLoc()), Importer.Import(D->getIvarRBraceLoc())); ToCategory->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(ToCategory); Importer.Imported(D, ToCategory); // Import the type parameter list after calling Imported, to avoid // loops when bringing in their DeclContext. 
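    // (The type parameters' DeclContext is the category itself, so importing
    // them before the category has been registered via Imported() could
    // recurse back into this visitor.)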
ToCategory->setTypeParamList(ImportObjCTypeParamList( D->getTypeParamList())); // Import protocols SmallVector Protocols; SmallVector ProtocolLocs; ObjCCategoryDecl::protocol_loc_iterator FromProtoLoc = D->protocol_loc_begin(); for (ObjCCategoryDecl::protocol_iterator FromProto = D->protocol_begin(), FromProtoEnd = D->protocol_end(); FromProto != FromProtoEnd; ++FromProto, ++FromProtoLoc) { ObjCProtocolDecl *ToProto = cast_or_null(Importer.Import(*FromProto)); if (!ToProto) return nullptr; Protocols.push_back(ToProto); ProtocolLocs.push_back(Importer.Import(*FromProtoLoc)); } // FIXME: If we're merging, make sure that the protocol list is the same. ToCategory->setProtocolList(Protocols.data(), Protocols.size(), ProtocolLocs.data(), Importer.getToContext()); } else { Importer.Imported(D, ToCategory); } // Import all of the members of this category. ImportDeclContext(D); // If we have an implementation, import it as well. if (D->getImplementation()) { ObjCCategoryImplDecl *Impl = cast_or_null( Importer.Import(D->getImplementation())); if (!Impl) return nullptr; ToCategory->setImplementation(Impl); } return ToCategory; } bool ASTNodeImporter::ImportDefinition(ObjCProtocolDecl *From, ObjCProtocolDecl *To, ImportDefinitionKind Kind) { if (To->getDefinition()) { if (shouldForceImportDeclContext(Kind)) ImportDeclContext(From); return false; } // Start the protocol definition To->startDefinition(); // Import protocols SmallVector Protocols; SmallVector ProtocolLocs; ObjCProtocolDecl::protocol_loc_iterator FromProtoLoc = From->protocol_loc_begin(); for (ObjCProtocolDecl::protocol_iterator FromProto = From->protocol_begin(), FromProtoEnd = From->protocol_end(); FromProto != FromProtoEnd; ++FromProto, ++FromProtoLoc) { ObjCProtocolDecl *ToProto = cast_or_null(Importer.Import(*FromProto)); if (!ToProto) return true; Protocols.push_back(ToProto); ProtocolLocs.push_back(Importer.Import(*FromProtoLoc)); } // FIXME: If we're merging, make sure that the protocol list is the same. To->setProtocolList(Protocols.data(), Protocols.size(), ProtocolLocs.data(), Importer.getToContext()); if (shouldForceImportDeclContext(Kind)) { // Import all of the members of this protocol. ImportDeclContext(From, /*ForceImport=*/true); } return false; } Decl *ASTNodeImporter::VisitObjCProtocolDecl(ObjCProtocolDecl *D) { // If this protocol has a definition in the translation unit we're coming // from, but this particular declaration is not that definition, import the // definition and map to that. ObjCProtocolDecl *Definition = D->getDefinition(); if (Definition && Definition != D) { Decl *ImportedDef = Importer.Import(Definition); if (!ImportedDef) return nullptr; return Importer.Imported(D, ImportedDef); } // Import the major distinguishing characteristics of a protocol. 
DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; ObjCProtocolDecl *MergeWithProtocol = nullptr; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(Decl::IDNS_ObjCProtocol)) continue; if ((MergeWithProtocol = dyn_cast(FoundDecls[I]))) break; } ObjCProtocolDecl *ToProto = MergeWithProtocol; if (!ToProto) { ToProto = ObjCProtocolDecl::Create(Importer.getToContext(), DC, Name.getAsIdentifierInfo(), Loc, Importer.Import(D->getAtStartLoc()), /*PrevDecl=*/nullptr); ToProto->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(ToProto); } Importer.Imported(D, ToProto); if (D->isThisDeclarationADefinition() && ImportDefinition(D, ToProto)) return nullptr; return ToProto; } Decl *ASTNodeImporter::VisitLinkageSpecDecl(LinkageSpecDecl *D) { DeclContext *DC = Importer.ImportContext(D->getDeclContext()); DeclContext *LexicalDC = Importer.ImportContext(D->getLexicalDeclContext()); SourceLocation ExternLoc = Importer.Import(D->getExternLoc()); SourceLocation LangLoc = Importer.Import(D->getLocation()); bool HasBraces = D->hasBraces(); LinkageSpecDecl *ToLinkageSpec = LinkageSpecDecl::Create(Importer.getToContext(), DC, ExternLoc, LangLoc, D->getLanguage(), HasBraces); if (HasBraces) { SourceLocation RBraceLoc = Importer.Import(D->getRBraceLoc()); ToLinkageSpec->setRBraceLoc(RBraceLoc); } ToLinkageSpec->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(ToLinkageSpec); Importer.Imported(D, ToLinkageSpec); return ToLinkageSpec; } bool ASTNodeImporter::ImportDefinition(ObjCInterfaceDecl *From, ObjCInterfaceDecl *To, ImportDefinitionKind Kind) { if (To->getDefinition()) { // Check consistency of superclass. ObjCInterfaceDecl *FromSuper = From->getSuperClass(); if (FromSuper) { FromSuper = cast_or_null(Importer.Import(FromSuper)); if (!FromSuper) return true; } ObjCInterfaceDecl *ToSuper = To->getSuperClass(); if ((bool)FromSuper != (bool)ToSuper || (FromSuper && !declaresSameEntity(FromSuper, ToSuper))) { Importer.ToDiag(To->getLocation(), diag::err_odr_objc_superclass_inconsistent) << To->getDeclName(); if (ToSuper) Importer.ToDiag(To->getSuperClassLoc(), diag::note_odr_objc_superclass) << To->getSuperClass()->getDeclName(); else Importer.ToDiag(To->getLocation(), diag::note_odr_objc_missing_superclass); if (From->getSuperClass()) Importer.FromDiag(From->getSuperClassLoc(), diag::note_odr_objc_superclass) << From->getSuperClass()->getDeclName(); else Importer.FromDiag(From->getLocation(), diag::note_odr_objc_missing_superclass); } if (shouldForceImportDeclContext(Kind)) ImportDeclContext(From); return false; } // Start the definition. To->startDefinition(); // If this class has a superclass, import it. 
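  // The superclass is imported through its TypeSourceInfo rather than the
  // bare interface, so any type arguments written on the base class
  // (lightweight generics) are carried over as well.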
if (From->getSuperClass()) { TypeSourceInfo *SuperTInfo = Importer.Import(From->getSuperClassTInfo()); if (!SuperTInfo) return true; To->setSuperClass(SuperTInfo); } // Import protocols SmallVector Protocols; SmallVector ProtocolLocs; ObjCInterfaceDecl::protocol_loc_iterator FromProtoLoc = From->protocol_loc_begin(); for (ObjCInterfaceDecl::protocol_iterator FromProto = From->protocol_begin(), FromProtoEnd = From->protocol_end(); FromProto != FromProtoEnd; ++FromProto, ++FromProtoLoc) { ObjCProtocolDecl *ToProto = cast_or_null(Importer.Import(*FromProto)); if (!ToProto) return true; Protocols.push_back(ToProto); ProtocolLocs.push_back(Importer.Import(*FromProtoLoc)); } // FIXME: If we're merging, make sure that the protocol list is the same. To->setProtocolList(Protocols.data(), Protocols.size(), ProtocolLocs.data(), Importer.getToContext()); // Import categories. When the categories themselves are imported, they'll // hook themselves into this interface. for (auto *Cat : From->known_categories()) Importer.Import(Cat); // If we have an @implementation, import it as well. if (From->getImplementation()) { ObjCImplementationDecl *Impl = cast_or_null( Importer.Import(From->getImplementation())); if (!Impl) return true; To->setImplementation(Impl); } if (shouldForceImportDeclContext(Kind)) { // Import all of the members of this class. ImportDeclContext(From, /*ForceImport=*/true); } return false; } ObjCTypeParamList * ASTNodeImporter::ImportObjCTypeParamList(ObjCTypeParamList *list) { if (!list) return nullptr; SmallVector toTypeParams; for (auto fromTypeParam : *list) { auto toTypeParam = cast_or_null( Importer.Import(fromTypeParam)); if (!toTypeParam) return nullptr; toTypeParams.push_back(toTypeParam); } return ObjCTypeParamList::create(Importer.getToContext(), Importer.Import(list->getLAngleLoc()), toTypeParams, Importer.Import(list->getRAngleLoc())); } Decl *ASTNodeImporter::VisitObjCInterfaceDecl(ObjCInterfaceDecl *D) { // If this class has a definition in the translation unit we're coming from, // but this particular declaration is not that definition, import the // definition and map to that. ObjCInterfaceDecl *Definition = D->getDefinition(); if (Definition && Definition != D) { Decl *ImportedDef = Importer.Import(Definition); if (!ImportedDef) return nullptr; return Importer.Imported(D, ImportedDef); } // Import the major distinguishing characteristics of an @interface. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // Look for an existing interface with the same name. ObjCInterfaceDecl *MergeWithIface = nullptr; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(Decl::IDNS_Ordinary)) continue; if ((MergeWithIface = dyn_cast(FoundDecls[I]))) break; } // Create an interface declaration, if one does not already exist. ObjCInterfaceDecl *ToIface = MergeWithIface; if (!ToIface) { ToIface = ObjCInterfaceDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getAtStartLoc()), Name.getAsIdentifierInfo(), /*TypeParamList=*/nullptr, /*PrevDecl=*/nullptr, Loc, D->isImplicitInterfaceDecl()); ToIface->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(ToIface); } Importer.Imported(D, ToIface); // Import the type parameter list after calling Imported, to avoid // loops when bringing in their DeclContext. 
ToIface->setTypeParamList(ImportObjCTypeParamList( D->getTypeParamListAsWritten())); if (D->isThisDeclarationADefinition() && ImportDefinition(D, ToIface)) return nullptr; return ToIface; } Decl *ASTNodeImporter::VisitObjCCategoryImplDecl(ObjCCategoryImplDecl *D) { ObjCCategoryDecl *Category = cast_or_null( Importer.Import(D->getCategoryDecl())); if (!Category) return nullptr; ObjCCategoryImplDecl *ToImpl = Category->getImplementation(); if (!ToImpl) { DeclContext *DC = Importer.ImportContext(D->getDeclContext()); if (!DC) return nullptr; SourceLocation CategoryNameLoc = Importer.Import(D->getCategoryNameLoc()); ToImpl = ObjCCategoryImplDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getIdentifier()), Category->getClassInterface(), Importer.Import(D->getLocation()), Importer.Import(D->getAtStartLoc()), CategoryNameLoc); DeclContext *LexicalDC = DC; if (D->getDeclContext() != D->getLexicalDeclContext()) { LexicalDC = Importer.ImportContext(D->getLexicalDeclContext()); if (!LexicalDC) return nullptr; ToImpl->setLexicalDeclContext(LexicalDC); } LexicalDC->addDeclInternal(ToImpl); Category->setImplementation(ToImpl); } Importer.Imported(D, ToImpl); ImportDeclContext(D); return ToImpl; } Decl *ASTNodeImporter::VisitObjCImplementationDecl(ObjCImplementationDecl *D) { // Find the corresponding interface. ObjCInterfaceDecl *Iface = cast_or_null( Importer.Import(D->getClassInterface())); if (!Iface) return nullptr; // Import the superclass, if any. ObjCInterfaceDecl *Super = nullptr; if (D->getSuperClass()) { Super = cast_or_null( Importer.Import(D->getSuperClass())); if (!Super) return nullptr; } ObjCImplementationDecl *Impl = Iface->getImplementation(); if (!Impl) { // We haven't imported an implementation yet. Create a new @implementation // now. Impl = ObjCImplementationDecl::Create(Importer.getToContext(), Importer.ImportContext(D->getDeclContext()), Iface, Super, Importer.Import(D->getLocation()), Importer.Import(D->getAtStartLoc()), Importer.Import(D->getSuperClassLoc()), Importer.Import(D->getIvarLBraceLoc()), Importer.Import(D->getIvarRBraceLoc())); if (D->getDeclContext() != D->getLexicalDeclContext()) { DeclContext *LexicalDC = Importer.ImportContext(D->getLexicalDeclContext()); if (!LexicalDC) return nullptr; Impl->setLexicalDeclContext(LexicalDC); } // Associate the implementation with the class it implements. Iface->setImplementation(Impl); Importer.Imported(D, Iface->getImplementation()); } else { Importer.Imported(D, Iface->getImplementation()); // Verify that the existing @implementation has the same superclass. if ((Super && !Impl->getSuperClass()) || (!Super && Impl->getSuperClass()) || (Super && Impl->getSuperClass() && !declaresSameEntity(Super->getCanonicalDecl(), Impl->getSuperClass()))) { Importer.ToDiag(Impl->getLocation(), diag::err_odr_objc_superclass_inconsistent) << Iface->getDeclName(); // FIXME: It would be nice to have the location of the superclass // below. if (Impl->getSuperClass()) Importer.ToDiag(Impl->getLocation(), diag::note_odr_objc_superclass) << Impl->getSuperClass()->getDeclName(); else Importer.ToDiag(Impl->getLocation(), diag::note_odr_objc_missing_superclass); if (D->getSuperClass()) Importer.FromDiag(D->getLocation(), diag::note_odr_objc_superclass) << D->getSuperClass()->getDeclName(); else Importer.FromDiag(D->getLocation(), diag::note_odr_objc_missing_superclass); return nullptr; } } // Import all of the members of this @implementation. 
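  // Method definitions and property implementations are ordinary members of
  // the @implementation's DeclContext, so ImportDeclContext() brings them in.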
ImportDeclContext(D); return Impl; } Decl *ASTNodeImporter::VisitObjCPropertyDecl(ObjCPropertyDecl *D) { // Import the major distinguishing characteristics of an @property. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // Check whether we have already imported this property. SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (ObjCPropertyDecl *FoundProp = dyn_cast(FoundDecls[I])) { // Check property types. if (!Importer.IsStructurallyEquivalent(D->getType(), FoundProp->getType())) { Importer.ToDiag(Loc, diag::err_odr_objc_property_type_inconsistent) << Name << D->getType() << FoundProp->getType(); Importer.ToDiag(FoundProp->getLocation(), diag::note_odr_value_here) << FoundProp->getType(); return nullptr; } // FIXME: Check property attributes, getters, setters, etc.? // Consider these properties to be equivalent. Importer.Imported(D, FoundProp); return FoundProp; } } // Import the type. TypeSourceInfo *TSI = Importer.Import(D->getTypeSourceInfo()); if (!TSI) return nullptr; // Create the new property. ObjCPropertyDecl *ToProperty = ObjCPropertyDecl::Create(Importer.getToContext(), DC, Loc, Name.getAsIdentifierInfo(), Importer.Import(D->getAtLoc()), Importer.Import(D->getLParenLoc()), Importer.Import(D->getType()), TSI, D->getPropertyImplementation()); Importer.Imported(D, ToProperty); ToProperty->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(ToProperty); ToProperty->setPropertyAttributes(D->getPropertyAttributes()); ToProperty->setPropertyAttributesAsWritten( D->getPropertyAttributesAsWritten()); ToProperty->setGetterName(Importer.Import(D->getGetterName()), Importer.Import(D->getGetterNameLoc())); ToProperty->setSetterName(Importer.Import(D->getSetterName()), Importer.Import(D->getSetterNameLoc())); ToProperty->setGetterMethodDecl( cast_or_null(Importer.Import(D->getGetterMethodDecl()))); ToProperty->setSetterMethodDecl( cast_or_null(Importer.Import(D->getSetterMethodDecl()))); ToProperty->setPropertyIvarDecl( cast_or_null(Importer.Import(D->getPropertyIvarDecl()))); return ToProperty; } Decl *ASTNodeImporter::VisitObjCPropertyImplDecl(ObjCPropertyImplDecl *D) { ObjCPropertyDecl *Property = cast_or_null( Importer.Import(D->getPropertyDecl())); if (!Property) return nullptr; DeclContext *DC = Importer.ImportContext(D->getDeclContext()); if (!DC) return nullptr; // Import the lexical declaration context. DeclContext *LexicalDC = DC; if (D->getDeclContext() != D->getLexicalDeclContext()) { LexicalDC = Importer.ImportContext(D->getLexicalDeclContext()); if (!LexicalDC) return nullptr; } ObjCImplDecl *InImpl = dyn_cast(LexicalDC); if (!InImpl) return nullptr; // Import the ivar (for an @synthesize). 
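  // @dynamic property implementations have no backing ivar, so a null
  // getPropertyIvarDecl() is expected here rather than an error.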
ObjCIvarDecl *Ivar = nullptr; if (D->getPropertyIvarDecl()) { Ivar = cast_or_null( Importer.Import(D->getPropertyIvarDecl())); if (!Ivar) return nullptr; } ObjCPropertyImplDecl *ToImpl = InImpl->FindPropertyImplDecl(Property->getIdentifier(), Property->getQueryKind()); if (!ToImpl) { ToImpl = ObjCPropertyImplDecl::Create(Importer.getToContext(), DC, Importer.Import(D->getLocStart()), Importer.Import(D->getLocation()), Property, D->getPropertyImplementation(), Ivar, Importer.Import(D->getPropertyIvarDeclLoc())); ToImpl->setLexicalDeclContext(LexicalDC); Importer.Imported(D, ToImpl); LexicalDC->addDeclInternal(ToImpl); } else { // Check that we have the same kind of property implementation (@synthesize // vs. @dynamic). if (D->getPropertyImplementation() != ToImpl->getPropertyImplementation()) { Importer.ToDiag(ToImpl->getLocation(), diag::err_odr_objc_property_impl_kind_inconsistent) << Property->getDeclName() << (ToImpl->getPropertyImplementation() == ObjCPropertyImplDecl::Dynamic); Importer.FromDiag(D->getLocation(), diag::note_odr_objc_property_impl_kind) << D->getPropertyDecl()->getDeclName() << (D->getPropertyImplementation() == ObjCPropertyImplDecl::Dynamic); return nullptr; } // For @synthesize, check that we have the same if (D->getPropertyImplementation() == ObjCPropertyImplDecl::Synthesize && Ivar != ToImpl->getPropertyIvarDecl()) { Importer.ToDiag(ToImpl->getPropertyIvarDeclLoc(), diag::err_odr_objc_synthesize_ivar_inconsistent) << Property->getDeclName() << ToImpl->getPropertyIvarDecl()->getDeclName() << Ivar->getDeclName(); Importer.FromDiag(D->getPropertyIvarDeclLoc(), diag::note_odr_objc_synthesize_ivar_here) << D->getPropertyIvarDecl()->getDeclName(); return nullptr; } // Merge the existing implementation with the new implementation. Importer.Imported(D, ToImpl); } return ToImpl; } Decl *ASTNodeImporter::VisitTemplateTypeParmDecl(TemplateTypeParmDecl *D) { // For template arguments, we adopt the translation unit as our declaration // context. This context will be fixed when the actual template declaration // is created. // FIXME: Import default argument. return TemplateTypeParmDecl::Create(Importer.getToContext(), Importer.getToContext().getTranslationUnitDecl(), Importer.Import(D->getLocStart()), Importer.Import(D->getLocation()), D->getDepth(), D->getIndex(), Importer.Import(D->getIdentifier()), D->wasDeclaredWithTypename(), D->isParameterPack()); } Decl * ASTNodeImporter::VisitNonTypeTemplateParmDecl(NonTypeTemplateParmDecl *D) { // Import the name of this declaration. DeclarationName Name = Importer.Import(D->getDeclName()); if (D->getDeclName() && !Name) return nullptr; // Import the location of this declaration. SourceLocation Loc = Importer.Import(D->getLocation()); // Import the type of this declaration. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; // Import type-source information. TypeSourceInfo *TInfo = Importer.Import(D->getTypeSourceInfo()); if (D->getTypeSourceInfo() && !TInfo) return nullptr; // FIXME: Import default argument. return NonTypeTemplateParmDecl::Create(Importer.getToContext(), Importer.getToContext().getTranslationUnitDecl(), Importer.Import(D->getInnerLocStart()), Loc, D->getDepth(), D->getPosition(), Name.getAsIdentifierInfo(), T, D->isParameterPack(), TInfo); } Decl * ASTNodeImporter::VisitTemplateTemplateParmDecl(TemplateTemplateParmDecl *D) { // Import the name of this declaration. 
DeclarationName Name = Importer.Import(D->getDeclName()); if (D->getDeclName() && !Name) return nullptr; // Import the location of this declaration. SourceLocation Loc = Importer.Import(D->getLocation()); // Import template parameters. TemplateParameterList *TemplateParams = ImportTemplateParameterList(D->getTemplateParameters()); if (!TemplateParams) return nullptr; // FIXME: Import default argument. return TemplateTemplateParmDecl::Create(Importer.getToContext(), Importer.getToContext().getTranslationUnitDecl(), Loc, D->getDepth(), D->getPosition(), D->isParameterPack(), Name.getAsIdentifierInfo(), TemplateParams); } Decl *ASTNodeImporter::VisitClassTemplateDecl(ClassTemplateDecl *D) { // If this record has a definition in the translation unit we're coming from, // but this particular declaration is not that definition, import the // definition and map to that. CXXRecordDecl *Definition = cast_or_null(D->getTemplatedDecl()->getDefinition()); if (Definition && Definition != D->getTemplatedDecl()) { Decl *ImportedDef = Importer.Import(Definition->getDescribedClassTemplate()); if (!ImportedDef) return nullptr; return Importer.Imported(D, ImportedDef); } // Import the major distinguishing characteristics of this class template. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // We may already have a template of the same name; try to find and match it. if (!DC->isFunctionOrMethod()) { SmallVector ConflictingDecls; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(Decl::IDNS_Ordinary)) continue; Decl *Found = FoundDecls[I]; if (ClassTemplateDecl *FoundTemplate = dyn_cast(Found)) { if (IsStructuralMatch(D, FoundTemplate)) { // The class templates structurally match; call it the same template. // FIXME: We may be filling in a forward declaration here. Handle // this case! Importer.Imported(D->getTemplatedDecl(), FoundTemplate->getTemplatedDecl()); return Importer.Imported(D, FoundTemplate); } } ConflictingDecls.push_back(FoundDecls[I]); } if (!ConflictingDecls.empty()) { Name = Importer.HandleNameConflict(Name, DC, Decl::IDNS_Ordinary, ConflictingDecls.data(), ConflictingDecls.size()); } if (!Name) return nullptr; } CXXRecordDecl *DTemplated = D->getTemplatedDecl(); // Create the declaration that is being templated. // Create the declaration that is being templated. CXXRecordDecl *D2Templated = cast_or_null( Importer.Import(DTemplated)); if (!D2Templated) return nullptr; // Resolve possible cyclic import. if (Decl *AlreadyImported = Importer.GetAlreadyImportedOrNull(D)) return AlreadyImported; // Create the class template declaration itself. TemplateParameterList *TemplateParams = ImportTemplateParameterList(D->getTemplateParameters()); if (!TemplateParams) return nullptr; ClassTemplateDecl *D2 = ClassTemplateDecl::Create(Importer.getToContext(), DC, Loc, Name, TemplateParams, D2Templated); D2Templated->setDescribedClassTemplate(D2); D2->setAccess(D->getAccess()); D2->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(D2); // Note the relationship between the class templates. Importer.Imported(D, D2); Importer.Imported(DTemplated, D2Templated); if (DTemplated->isCompleteDefinition() && !D2Templated->isCompleteDefinition()) { // FIXME: Import definition! 
} return D2; } Decl *ASTNodeImporter::VisitClassTemplateSpecializationDecl( ClassTemplateSpecializationDecl *D) { // If this record has a definition in the translation unit we're coming from, // but this particular declaration is not that definition, import the // definition and map to that. TagDecl *Definition = D->getDefinition(); if (Definition && Definition != D) { Decl *ImportedDef = Importer.Import(Definition); if (!ImportedDef) return nullptr; return Importer.Imported(D, ImportedDef); } ClassTemplateDecl *ClassTemplate = cast_or_null(Importer.Import( D->getSpecializedTemplate())); if (!ClassTemplate) return nullptr; // Import the context of this declaration. DeclContext *DC = ClassTemplate->getDeclContext(); if (!DC) return nullptr; DeclContext *LexicalDC = DC; if (D->getDeclContext() != D->getLexicalDeclContext()) { LexicalDC = Importer.ImportContext(D->getLexicalDeclContext()); if (!LexicalDC) return nullptr; } // Import the location of this declaration. SourceLocation StartLoc = Importer.Import(D->getLocStart()); SourceLocation IdLoc = Importer.Import(D->getLocation()); // Import template arguments. SmallVector TemplateArgs; if (ImportTemplateArguments(D->getTemplateArgs().data(), D->getTemplateArgs().size(), TemplateArgs)) return nullptr; // Try to find an existing specialization with these template arguments. void *InsertPos = nullptr; ClassTemplateSpecializationDecl *D2 = ClassTemplate->findSpecialization(TemplateArgs, InsertPos); if (D2) { // We already have a class template specialization with these template // arguments. // FIXME: Check for specialization vs. instantiation errors. if (RecordDecl *FoundDef = D2->getDefinition()) { if (!D->isCompleteDefinition() || IsStructuralMatch(D, FoundDef)) { // The record types structurally match, or the "from" translation // unit only had a forward declaration anyway; call it the same // function. return Importer.Imported(D, FoundDef); } } } else { // Create a new specialization. if (ClassTemplatePartialSpecializationDecl *PartialSpec = dyn_cast(D)) { // Import TemplateArgumentListInfo TemplateArgumentListInfo ToTAInfo; auto &ASTTemplateArgs = *PartialSpec->getTemplateArgsAsWritten(); for (unsigned I = 0, E = ASTTemplateArgs.NumTemplateArgs; I < E; ++I) { bool Error = false; auto ToLoc = ImportTemplateArgumentLoc(ASTTemplateArgs[I], Error); if (Error) return nullptr; ToTAInfo.addArgument(ToLoc); } QualType CanonInjType = Importer.Import( PartialSpec->getInjectedSpecializationType()); if (CanonInjType.isNull()) return nullptr; CanonInjType = CanonInjType.getCanonicalType(); TemplateParameterList *ToTPList = ImportTemplateParameterList( PartialSpec->getTemplateParameters()); if (!ToTPList && PartialSpec->getTemplateParameters()) return nullptr; D2 = ClassTemplatePartialSpecializationDecl::Create( Importer.getToContext(), D->getTagKind(), DC, StartLoc, IdLoc, ToTPList, ClassTemplate, llvm::makeArrayRef(TemplateArgs.data(), TemplateArgs.size()), ToTAInfo, CanonInjType, nullptr); } else { D2 = ClassTemplateSpecializationDecl::Create(Importer.getToContext(), D->getTagKind(), DC, StartLoc, IdLoc, ClassTemplate, TemplateArgs, /*PrevDecl=*/nullptr); } D2->setSpecializationKind(D->getSpecializationKind()); // Add this specialization to the class template. ClassTemplate->AddSpecialization(D2, InsertPos); // Import the qualifier, if any. 
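    // Beyond the qualifier, the written type, the extern/template keyword
    // locations, the point of instantiation and the specialization kind are
    // copied so that the new node mirrors how the specialization appeared in
    // the "from" translation unit.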
D2->setQualifierInfo(Importer.Import(D->getQualifierLoc())); Importer.Imported(D, D2); if (auto *TSI = D->getTypeAsWritten()) { TypeSourceInfo *TInfo = Importer.Import(TSI); if (!TInfo) return nullptr; D2->setTypeAsWritten(TInfo); D2->setTemplateKeywordLoc(Importer.Import(D->getTemplateKeywordLoc())); D2->setExternLoc(Importer.Import(D->getExternLoc())); } SourceLocation POI = Importer.Import(D->getPointOfInstantiation()); if (POI.isValid()) D2->setPointOfInstantiation(POI); else if (D->getPointOfInstantiation().isValid()) return nullptr; D2->setTemplateSpecializationKind(D->getTemplateSpecializationKind()); // Add the specialization to this context. D2->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(D2); } Importer.Imported(D, D2); if (D->isCompleteDefinition() && ImportDefinition(D, D2)) return nullptr; return D2; } Decl *ASTNodeImporter::VisitVarTemplateDecl(VarTemplateDecl *D) { // If this variable has a definition in the translation unit we're coming // from, // but this particular declaration is not that definition, import the // definition and map to that. VarDecl *Definition = cast_or_null(D->getTemplatedDecl()->getDefinition()); if (Definition && Definition != D->getTemplatedDecl()) { Decl *ImportedDef = Importer.Import(Definition->getDescribedVarTemplate()); if (!ImportedDef) return nullptr; return Importer.Imported(D, ImportedDef); } // Import the major distinguishing characteristics of this variable template. DeclContext *DC, *LexicalDC; DeclarationName Name; SourceLocation Loc; NamedDecl *ToD; if (ImportDeclParts(D, DC, LexicalDC, Name, ToD, Loc)) return nullptr; if (ToD) return ToD; // We may already have a template of the same name; try to find and match it. assert(!DC->isFunctionOrMethod() && "Variable templates cannot be declared at function scope"); SmallVector ConflictingDecls; SmallVector FoundDecls; DC->getRedeclContext()->localUncachedLookup(Name, FoundDecls); for (unsigned I = 0, N = FoundDecls.size(); I != N; ++I) { if (!FoundDecls[I]->isInIdentifierNamespace(Decl::IDNS_Ordinary)) continue; Decl *Found = FoundDecls[I]; if (VarTemplateDecl *FoundTemplate = dyn_cast(Found)) { if (IsStructuralMatch(D, FoundTemplate)) { // The variable templates structurally match; call it the same template. Importer.Imported(D->getTemplatedDecl(), FoundTemplate->getTemplatedDecl()); return Importer.Imported(D, FoundTemplate); } } ConflictingDecls.push_back(FoundDecls[I]); } if (!ConflictingDecls.empty()) { Name = Importer.HandleNameConflict(Name, DC, Decl::IDNS_Ordinary, ConflictingDecls.data(), ConflictingDecls.size()); } if (!Name) return nullptr; VarDecl *DTemplated = D->getTemplatedDecl(); // Import the type. QualType T = Importer.Import(DTemplated->getType()); if (T.isNull()) return nullptr; // Create the declaration that is being templated. SourceLocation StartLoc = Importer.Import(DTemplated->getLocStart()); SourceLocation IdLoc = Importer.Import(DTemplated->getLocation()); TypeSourceInfo *TInfo = Importer.Import(DTemplated->getTypeSourceInfo()); VarDecl *D2Templated = VarDecl::Create(Importer.getToContext(), DC, StartLoc, IdLoc, Name.getAsIdentifierInfo(), T, TInfo, DTemplated->getStorageClass()); D2Templated->setAccess(DTemplated->getAccess()); D2Templated->setQualifierInfo(Importer.Import(DTemplated->getQualifierLoc())); D2Templated->setLexicalDeclContext(LexicalDC); // Importer.Imported(DTemplated, D2Templated); // LexicalDC->addDeclInternal(D2Templated); // Merge the initializer. 
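  // ImportDefinition() on the templated variable copies its initializer from
  // the "from" declaration; a failure reported by it aborts the import of the
  // whole variable template.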
if (ImportDefinition(DTemplated, D2Templated)) return nullptr; // Create the variable template declaration itself. TemplateParameterList *TemplateParams = ImportTemplateParameterList(D->getTemplateParameters()); if (!TemplateParams) return nullptr; VarTemplateDecl *D2 = VarTemplateDecl::Create( Importer.getToContext(), DC, Loc, Name, TemplateParams, D2Templated); D2Templated->setDescribedVarTemplate(D2); D2->setAccess(D->getAccess()); D2->setLexicalDeclContext(LexicalDC); LexicalDC->addDeclInternal(D2); // Note the relationship between the variable templates. Importer.Imported(D, D2); Importer.Imported(DTemplated, D2Templated); if (DTemplated->isThisDeclarationADefinition() && !D2Templated->isThisDeclarationADefinition()) { // FIXME: Import definition! } return D2; } Decl *ASTNodeImporter::VisitVarTemplateSpecializationDecl( VarTemplateSpecializationDecl *D) { // If this record has a definition in the translation unit we're coming from, // but this particular declaration is not that definition, import the // definition and map to that. VarDecl *Definition = D->getDefinition(); if (Definition && Definition != D) { Decl *ImportedDef = Importer.Import(Definition); if (!ImportedDef) return nullptr; return Importer.Imported(D, ImportedDef); } VarTemplateDecl *VarTemplate = cast_or_null( Importer.Import(D->getSpecializedTemplate())); if (!VarTemplate) return nullptr; // Import the context of this declaration. DeclContext *DC = VarTemplate->getDeclContext(); if (!DC) return nullptr; DeclContext *LexicalDC = DC; if (D->getDeclContext() != D->getLexicalDeclContext()) { LexicalDC = Importer.ImportContext(D->getLexicalDeclContext()); if (!LexicalDC) return nullptr; } // Import the location of this declaration. SourceLocation StartLoc = Importer.Import(D->getLocStart()); SourceLocation IdLoc = Importer.Import(D->getLocation()); // Import template arguments. SmallVector TemplateArgs; if (ImportTemplateArguments(D->getTemplateArgs().data(), D->getTemplateArgs().size(), TemplateArgs)) return nullptr; // Try to find an existing specialization with these template arguments. void *InsertPos = nullptr; VarTemplateSpecializationDecl *D2 = VarTemplate->findSpecialization( TemplateArgs, InsertPos); if (D2) { // We already have a variable template specialization with these template // arguments. // FIXME: Check for specialization vs. instantiation errors. if (VarDecl *FoundDef = D2->getDefinition()) { if (!D->isThisDeclarationADefinition() || IsStructuralMatch(D, FoundDef)) { // The record types structurally match, or the "from" translation // unit only had a forward declaration anyway; call it the same // variable. return Importer.Imported(D, FoundDef); } } } else { // Import the type. QualType T = Importer.Import(D->getType()); if (T.isNull()) return nullptr; TypeSourceInfo *TInfo = Importer.Import(D->getTypeSourceInfo()); // Create a new specialization. D2 = VarTemplateSpecializationDecl::Create( Importer.getToContext(), DC, StartLoc, IdLoc, VarTemplate, T, TInfo, D->getStorageClass(), TemplateArgs); D2->setSpecializationKind(D->getSpecializationKind()); D2->setTemplateArgsInfo(D->getTemplateArgsInfo()); // Add this specialization to the class template. VarTemplate->AddSpecialization(D2, InsertPos); // Import the qualifier, if any. D2->setQualifierInfo(Importer.Import(D->getQualifierLoc())); // Add the specialization to this context. 
    D2->setLexicalDeclContext(LexicalDC);
    LexicalDC->addDeclInternal(D2);
  }
  Importer.Imported(D, D2);

  if (D->isThisDeclarationADefinition() && ImportDefinition(D, D2))
    return nullptr;

  return D2;
}

//----------------------------------------------------------------------------
// Import Statements
//----------------------------------------------------------------------------

DeclGroupRef ASTNodeImporter::ImportDeclGroup(DeclGroupRef DG) {
  if (DG.isNull())
    return DeclGroupRef::Create(Importer.getToContext(), nullptr, 0);
  size_t NumDecls = DG.end() - DG.begin();
  SmallVector<Decl *, 1> ToDecls(NumDecls);
  auto &_Importer = this->Importer;
  std::transform(DG.begin(), DG.end(), ToDecls.begin(),
                 [&_Importer](Decl *D) -> Decl * {
                   return _Importer.Import(D);
                 });
  return DeclGroupRef::Create(Importer.getToContext(),
                              ToDecls.begin(),
                              NumDecls);
}

Stmt *ASTNodeImporter::VisitStmt(Stmt *S) {
  Importer.FromDiag(S->getLocStart(), diag::err_unsupported_ast_node)
    << S->getStmtClassName();
  return nullptr;
}

Stmt *ASTNodeImporter::VisitGCCAsmStmt(GCCAsmStmt *S) {
  SmallVector<IdentifierInfo *, 4> Names;
  for (unsigned I = 0, E = S->getNumOutputs(); I != E; I++) {
    IdentifierInfo *ToII = Importer.Import(S->getOutputIdentifier(I));
    // ToII is nullptr when no symbolic name is given for output operand
    // see ParseStmtAsm::ParseAsmOperandsOpt
    if (!ToII && S->getOutputIdentifier(I))
      return nullptr;
    Names.push_back(ToII);
  }
  for (unsigned I = 0, E = S->getNumInputs(); I != E; I++) {
    IdentifierInfo *ToII = Importer.Import(S->getInputIdentifier(I));
    // ToII is nullptr when no symbolic name is given for input operand
    // see ParseStmtAsm::ParseAsmOperandsOpt
    if (!ToII && S->getInputIdentifier(I))
      return nullptr;
    Names.push_back(ToII);
  }

  SmallVector<StringLiteral *, 4> Clobbers;
  for (unsigned I = 0, E = S->getNumClobbers(); I != E; I++) {
    StringLiteral *Clobber = cast_or_null<StringLiteral>(
          Importer.Import(S->getClobberStringLiteral(I)));
    if (!Clobber)
      return nullptr;
    Clobbers.push_back(Clobber);
  }

  SmallVector<StringLiteral *, 4> Constraints;
  for (unsigned I = 0, E = S->getNumOutputs(); I != E; I++) {
    StringLiteral *Output = cast_or_null<StringLiteral>(
          Importer.Import(S->getOutputConstraintLiteral(I)));
    if (!Output)
      return nullptr;
    Constraints.push_back(Output);
  }

  for (unsigned I = 0, E = S->getNumInputs(); I != E; I++) {
    StringLiteral *Input = cast_or_null<StringLiteral>(
          Importer.Import(S->getInputConstraintLiteral(I)));
    if (!Input)
      return nullptr;
    Constraints.push_back(Input);
  }

  SmallVector<Expr *, 4> Exprs(S->getNumOutputs() + S->getNumInputs());
  if (ImportContainerChecked(S->outputs(), Exprs))
    return nullptr;

  if (ImportArrayChecked(S->inputs(), Exprs.begin() + S->getNumOutputs()))
    return nullptr;

  StringLiteral *AsmStr = cast_or_null<StringLiteral>(
        Importer.Import(S->getAsmString()));
  if (!AsmStr)
    return nullptr;

  return new (Importer.getToContext()) GCCAsmStmt(
        Importer.getToContext(),
        Importer.Import(S->getAsmLoc()),
        S->isSimple(),
        S->isVolatile(),
        S->getNumOutputs(),
        S->getNumInputs(),
        Names.data(),
        Constraints.data(),
        Exprs.data(),
        AsmStr,
        S->getNumClobbers(),
        Clobbers.data(),
        Importer.Import(S->getRParenLoc()));
}

Stmt *ASTNodeImporter::VisitDeclStmt(DeclStmt *S) {
  DeclGroupRef ToDG = ImportDeclGroup(S->getDeclGroup());
  for (Decl *ToD : ToDG) {
    if (!ToD)
      return nullptr;
  }
  SourceLocation ToStartLoc = Importer.Import(S->getStartLoc());
  SourceLocation ToEndLoc = Importer.Import(S->getEndLoc());
  return new (Importer.getToContext()) DeclStmt(ToDG, ToStartLoc, ToEndLoc);
}

Stmt *ASTNodeImporter::VisitNullStmt(NullStmt *S) {
  SourceLocation ToSemiLoc = Importer.Import(S->getSemiLoc());
  return new (Importer.getToContext()) NullStmt(ToSemiLoc,
                                                S->hasLeadingEmptyMacro());
} Stmt *ASTNodeImporter::VisitCompoundStmt(CompoundStmt *S) { llvm::SmallVector ToStmts(S->size()); if (ImportContainerChecked(S->body(), ToStmts)) return nullptr; SourceLocation ToLBraceLoc = Importer.Import(S->getLBracLoc()); SourceLocation ToRBraceLoc = Importer.Import(S->getRBracLoc()); return new (Importer.getToContext()) CompoundStmt(Importer.getToContext(), ToStmts, ToLBraceLoc, ToRBraceLoc); } Stmt *ASTNodeImporter::VisitCaseStmt(CaseStmt *S) { Expr *ToLHS = Importer.Import(S->getLHS()); if (!ToLHS) return nullptr; Expr *ToRHS = Importer.Import(S->getRHS()); if (!ToRHS && S->getRHS()) return nullptr; SourceLocation ToCaseLoc = Importer.Import(S->getCaseLoc()); SourceLocation ToEllipsisLoc = Importer.Import(S->getEllipsisLoc()); SourceLocation ToColonLoc = Importer.Import(S->getColonLoc()); return new (Importer.getToContext()) CaseStmt(ToLHS, ToRHS, ToCaseLoc, ToEllipsisLoc, ToColonLoc); } Stmt *ASTNodeImporter::VisitDefaultStmt(DefaultStmt *S) { SourceLocation ToDefaultLoc = Importer.Import(S->getDefaultLoc()); SourceLocation ToColonLoc = Importer.Import(S->getColonLoc()); Stmt *ToSubStmt = Importer.Import(S->getSubStmt()); if (!ToSubStmt && S->getSubStmt()) return nullptr; return new (Importer.getToContext()) DefaultStmt(ToDefaultLoc, ToColonLoc, ToSubStmt); } Stmt *ASTNodeImporter::VisitLabelStmt(LabelStmt *S) { SourceLocation ToIdentLoc = Importer.Import(S->getIdentLoc()); LabelDecl *ToLabelDecl = cast_or_null(Importer.Import(S->getDecl())); if (!ToLabelDecl && S->getDecl()) return nullptr; Stmt *ToSubStmt = Importer.Import(S->getSubStmt()); if (!ToSubStmt && S->getSubStmt()) return nullptr; return new (Importer.getToContext()) LabelStmt(ToIdentLoc, ToLabelDecl, ToSubStmt); } Stmt *ASTNodeImporter::VisitAttributedStmt(AttributedStmt *S) { SourceLocation ToAttrLoc = Importer.Import(S->getAttrLoc()); ArrayRef FromAttrs(S->getAttrs()); SmallVector ToAttrs(FromAttrs.size()); ASTContext &_ToContext = Importer.getToContext(); std::transform(FromAttrs.begin(), FromAttrs.end(), ToAttrs.begin(), [&_ToContext](const Attr *A) -> const Attr * { return A->clone(_ToContext); }); for (const Attr *ToA : ToAttrs) { if (!ToA) return nullptr; } Stmt *ToSubStmt = Importer.Import(S->getSubStmt()); if (!ToSubStmt && S->getSubStmt()) return nullptr; return AttributedStmt::Create(Importer.getToContext(), ToAttrLoc, ToAttrs, ToSubStmt); } Stmt *ASTNodeImporter::VisitIfStmt(IfStmt *S) { SourceLocation ToIfLoc = Importer.Import(S->getIfLoc()); Stmt *ToInit = Importer.Import(S->getInit()); if (!ToInit && S->getInit()) return nullptr; VarDecl *ToConditionVariable = nullptr; if (VarDecl *FromConditionVariable = S->getConditionVariable()) { ToConditionVariable = dyn_cast_or_null(Importer.Import(FromConditionVariable)); if (!ToConditionVariable) return nullptr; } Expr *ToCondition = Importer.Import(S->getCond()); if (!ToCondition && S->getCond()) return nullptr; Stmt *ToThenStmt = Importer.Import(S->getThen()); if (!ToThenStmt && S->getThen()) return nullptr; SourceLocation ToElseLoc = Importer.Import(S->getElseLoc()); Stmt *ToElseStmt = Importer.Import(S->getElse()); if (!ToElseStmt && S->getElse()) return nullptr; return new (Importer.getToContext()) IfStmt(Importer.getToContext(), ToIfLoc, S->isConstexpr(), ToInit, ToConditionVariable, ToCondition, ToThenStmt, ToElseLoc, ToElseStmt); } Stmt *ASTNodeImporter::VisitSwitchStmt(SwitchStmt *S) { Stmt *ToInit = Importer.Import(S->getInit()); if (!ToInit && S->getInit()) return nullptr; VarDecl *ToConditionVariable = nullptr; if (VarDecl *FromConditionVariable = 
S->getConditionVariable()) { ToConditionVariable = dyn_cast_or_null(Importer.Import(FromConditionVariable)); if (!ToConditionVariable) return nullptr; } Expr *ToCondition = Importer.Import(S->getCond()); if (!ToCondition && S->getCond()) return nullptr; SwitchStmt *ToStmt = new (Importer.getToContext()) SwitchStmt( Importer.getToContext(), ToInit, ToConditionVariable, ToCondition); Stmt *ToBody = Importer.Import(S->getBody()); if (!ToBody && S->getBody()) return nullptr; ToStmt->setBody(ToBody); ToStmt->setSwitchLoc(Importer.Import(S->getSwitchLoc())); // Now we have to re-chain the cases. SwitchCase *LastChainedSwitchCase = nullptr; for (SwitchCase *SC = S->getSwitchCaseList(); SC != nullptr; SC = SC->getNextSwitchCase()) { SwitchCase *ToSC = dyn_cast_or_null(Importer.Import(SC)); if (!ToSC) return nullptr; if (LastChainedSwitchCase) LastChainedSwitchCase->setNextSwitchCase(ToSC); else ToStmt->setSwitchCaseList(ToSC); LastChainedSwitchCase = ToSC; } return ToStmt; } Stmt *ASTNodeImporter::VisitWhileStmt(WhileStmt *S) { VarDecl *ToConditionVariable = nullptr; if (VarDecl *FromConditionVariable = S->getConditionVariable()) { ToConditionVariable = dyn_cast_or_null(Importer.Import(FromConditionVariable)); if (!ToConditionVariable) return nullptr; } Expr *ToCondition = Importer.Import(S->getCond()); if (!ToCondition && S->getCond()) return nullptr; Stmt *ToBody = Importer.Import(S->getBody()); if (!ToBody && S->getBody()) return nullptr; SourceLocation ToWhileLoc = Importer.Import(S->getWhileLoc()); return new (Importer.getToContext()) WhileStmt(Importer.getToContext(), ToConditionVariable, ToCondition, ToBody, ToWhileLoc); } Stmt *ASTNodeImporter::VisitDoStmt(DoStmt *S) { Stmt *ToBody = Importer.Import(S->getBody()); if (!ToBody && S->getBody()) return nullptr; Expr *ToCondition = Importer.Import(S->getCond()); if (!ToCondition && S->getCond()) return nullptr; SourceLocation ToDoLoc = Importer.Import(S->getDoLoc()); SourceLocation ToWhileLoc = Importer.Import(S->getWhileLoc()); SourceLocation ToRParenLoc = Importer.Import(S->getRParenLoc()); return new (Importer.getToContext()) DoStmt(ToBody, ToCondition, ToDoLoc, ToWhileLoc, ToRParenLoc); } Stmt *ASTNodeImporter::VisitForStmt(ForStmt *S) { Stmt *ToInit = Importer.Import(S->getInit()); if (!ToInit && S->getInit()) return nullptr; Expr *ToCondition = Importer.Import(S->getCond()); if (!ToCondition && S->getCond()) return nullptr; VarDecl *ToConditionVariable = nullptr; if (VarDecl *FromConditionVariable = S->getConditionVariable()) { ToConditionVariable = dyn_cast_or_null(Importer.Import(FromConditionVariable)); if (!ToConditionVariable) return nullptr; } Expr *ToInc = Importer.Import(S->getInc()); if (!ToInc && S->getInc()) return nullptr; Stmt *ToBody = Importer.Import(S->getBody()); if (!ToBody && S->getBody()) return nullptr; SourceLocation ToForLoc = Importer.Import(S->getForLoc()); SourceLocation ToLParenLoc = Importer.Import(S->getLParenLoc()); SourceLocation ToRParenLoc = Importer.Import(S->getRParenLoc()); return new (Importer.getToContext()) ForStmt(Importer.getToContext(), ToInit, ToCondition, ToConditionVariable, ToInc, ToBody, ToForLoc, ToLParenLoc, ToRParenLoc); } Stmt *ASTNodeImporter::VisitGotoStmt(GotoStmt *S) { LabelDecl *ToLabel = nullptr; if (LabelDecl *FromLabel = S->getLabel()) { ToLabel = dyn_cast_or_null(Importer.Import(FromLabel)); if (!ToLabel) return nullptr; } SourceLocation ToGotoLoc = Importer.Import(S->getGotoLoc()); SourceLocation ToLabelLoc = Importer.Import(S->getLabelLoc()); return new 
(Importer.getToContext()) GotoStmt(ToLabel, ToGotoLoc, ToLabelLoc); } Stmt *ASTNodeImporter::VisitIndirectGotoStmt(IndirectGotoStmt *S) { SourceLocation ToGotoLoc = Importer.Import(S->getGotoLoc()); SourceLocation ToStarLoc = Importer.Import(S->getStarLoc()); Expr *ToTarget = Importer.Import(S->getTarget()); if (!ToTarget && S->getTarget()) return nullptr; return new (Importer.getToContext()) IndirectGotoStmt(ToGotoLoc, ToStarLoc, ToTarget); } Stmt *ASTNodeImporter::VisitContinueStmt(ContinueStmt *S) { SourceLocation ToContinueLoc = Importer.Import(S->getContinueLoc()); return new (Importer.getToContext()) ContinueStmt(ToContinueLoc); } Stmt *ASTNodeImporter::VisitBreakStmt(BreakStmt *S) { SourceLocation ToBreakLoc = Importer.Import(S->getBreakLoc()); return new (Importer.getToContext()) BreakStmt(ToBreakLoc); } Stmt *ASTNodeImporter::VisitReturnStmt(ReturnStmt *S) { SourceLocation ToRetLoc = Importer.Import(S->getReturnLoc()); Expr *ToRetExpr = Importer.Import(S->getRetValue()); if (!ToRetExpr && S->getRetValue()) return nullptr; VarDecl *NRVOCandidate = const_cast(S->getNRVOCandidate()); VarDecl *ToNRVOCandidate = cast_or_null(Importer.Import(NRVOCandidate)); if (!ToNRVOCandidate && NRVOCandidate) return nullptr; return new (Importer.getToContext()) ReturnStmt(ToRetLoc, ToRetExpr, ToNRVOCandidate); } Stmt *ASTNodeImporter::VisitCXXCatchStmt(CXXCatchStmt *S) { SourceLocation ToCatchLoc = Importer.Import(S->getCatchLoc()); VarDecl *ToExceptionDecl = nullptr; if (VarDecl *FromExceptionDecl = S->getExceptionDecl()) { ToExceptionDecl = dyn_cast_or_null(Importer.Import(FromExceptionDecl)); if (!ToExceptionDecl) return nullptr; } Stmt *ToHandlerBlock = Importer.Import(S->getHandlerBlock()); if (!ToHandlerBlock && S->getHandlerBlock()) return nullptr; return new (Importer.getToContext()) CXXCatchStmt(ToCatchLoc, ToExceptionDecl, ToHandlerBlock); } Stmt *ASTNodeImporter::VisitCXXTryStmt(CXXTryStmt *S) { SourceLocation ToTryLoc = Importer.Import(S->getTryLoc()); Stmt *ToTryBlock = Importer.Import(S->getTryBlock()); if (!ToTryBlock && S->getTryBlock()) return nullptr; SmallVector ToHandlers(S->getNumHandlers()); for (unsigned HI = 0, HE = S->getNumHandlers(); HI != HE; ++HI) { CXXCatchStmt *FromHandler = S->getHandler(HI); if (Stmt *ToHandler = Importer.Import(FromHandler)) ToHandlers[HI] = ToHandler; else return nullptr; } return CXXTryStmt::Create(Importer.getToContext(), ToTryLoc, ToTryBlock, ToHandlers); } Stmt *ASTNodeImporter::VisitCXXForRangeStmt(CXXForRangeStmt *S) { DeclStmt *ToRange = dyn_cast_or_null(Importer.Import(S->getRangeStmt())); if (!ToRange && S->getRangeStmt()) return nullptr; DeclStmt *ToBegin = dyn_cast_or_null(Importer.Import(S->getBeginStmt())); if (!ToBegin && S->getBeginStmt()) return nullptr; DeclStmt *ToEnd = dyn_cast_or_null(Importer.Import(S->getEndStmt())); if (!ToEnd && S->getEndStmt()) return nullptr; Expr *ToCond = Importer.Import(S->getCond()); if (!ToCond && S->getCond()) return nullptr; Expr *ToInc = Importer.Import(S->getInc()); if (!ToInc && S->getInc()) return nullptr; DeclStmt *ToLoopVar = dyn_cast_or_null(Importer.Import(S->getLoopVarStmt())); if (!ToLoopVar && S->getLoopVarStmt()) return nullptr; Stmt *ToBody = Importer.Import(S->getBody()); if (!ToBody && S->getBody()) return nullptr; SourceLocation ToForLoc = Importer.Import(S->getForLoc()); SourceLocation ToCoawaitLoc = Importer.Import(S->getCoawaitLoc()); SourceLocation ToColonLoc = Importer.Import(S->getColonLoc()); SourceLocation ToRParenLoc = Importer.Import(S->getRParenLoc()); return new 
(Importer.getToContext()) CXXForRangeStmt(ToRange, ToBegin, ToEnd, ToCond, ToInc, ToLoopVar, ToBody, ToForLoc, ToCoawaitLoc, ToColonLoc, ToRParenLoc); } Stmt *ASTNodeImporter::VisitObjCForCollectionStmt(ObjCForCollectionStmt *S) { Stmt *ToElem = Importer.Import(S->getElement()); if (!ToElem && S->getElement()) return nullptr; Expr *ToCollect = Importer.Import(S->getCollection()); if (!ToCollect && S->getCollection()) return nullptr; Stmt *ToBody = Importer.Import(S->getBody()); if (!ToBody && S->getBody()) return nullptr; SourceLocation ToForLoc = Importer.Import(S->getForLoc()); SourceLocation ToRParenLoc = Importer.Import(S->getRParenLoc()); return new (Importer.getToContext()) ObjCForCollectionStmt(ToElem, ToCollect, ToBody, ToForLoc, ToRParenLoc); } Stmt *ASTNodeImporter::VisitObjCAtCatchStmt(ObjCAtCatchStmt *S) { SourceLocation ToAtCatchLoc = Importer.Import(S->getAtCatchLoc()); SourceLocation ToRParenLoc = Importer.Import(S->getRParenLoc()); VarDecl *ToExceptionDecl = nullptr; if (VarDecl *FromExceptionDecl = S->getCatchParamDecl()) { ToExceptionDecl = dyn_cast_or_null(Importer.Import(FromExceptionDecl)); if (!ToExceptionDecl) return nullptr; } Stmt *ToBody = Importer.Import(S->getCatchBody()); if (!ToBody && S->getCatchBody()) return nullptr; return new (Importer.getToContext()) ObjCAtCatchStmt(ToAtCatchLoc, ToRParenLoc, ToExceptionDecl, ToBody); } Stmt *ASTNodeImporter::VisitObjCAtFinallyStmt(ObjCAtFinallyStmt *S) { SourceLocation ToAtFinallyLoc = Importer.Import(S->getAtFinallyLoc()); Stmt *ToAtFinallyStmt = Importer.Import(S->getFinallyBody()); if (!ToAtFinallyStmt && S->getFinallyBody()) return nullptr; return new (Importer.getToContext()) ObjCAtFinallyStmt(ToAtFinallyLoc, ToAtFinallyStmt); } Stmt *ASTNodeImporter::VisitObjCAtTryStmt(ObjCAtTryStmt *S) { SourceLocation ToAtTryLoc = Importer.Import(S->getAtTryLoc()); Stmt *ToAtTryStmt = Importer.Import(S->getTryBody()); if (!ToAtTryStmt && S->getTryBody()) return nullptr; SmallVector ToCatchStmts(S->getNumCatchStmts()); for (unsigned CI = 0, CE = S->getNumCatchStmts(); CI != CE; ++CI) { ObjCAtCatchStmt *FromCatchStmt = S->getCatchStmt(CI); if (Stmt *ToCatchStmt = Importer.Import(FromCatchStmt)) ToCatchStmts[CI] = ToCatchStmt; else return nullptr; } Stmt *ToAtFinallyStmt = Importer.Import(S->getFinallyStmt()); if (!ToAtFinallyStmt && S->getFinallyStmt()) return nullptr; return ObjCAtTryStmt::Create(Importer.getToContext(), ToAtTryLoc, ToAtTryStmt, ToCatchStmts.begin(), ToCatchStmts.size(), ToAtFinallyStmt); } Stmt *ASTNodeImporter::VisitObjCAtSynchronizedStmt (ObjCAtSynchronizedStmt *S) { SourceLocation ToAtSynchronizedLoc = Importer.Import(S->getAtSynchronizedLoc()); Expr *ToSynchExpr = Importer.Import(S->getSynchExpr()); if (!ToSynchExpr && S->getSynchExpr()) return nullptr; Stmt *ToSynchBody = Importer.Import(S->getSynchBody()); if (!ToSynchBody && S->getSynchBody()) return nullptr; return new (Importer.getToContext()) ObjCAtSynchronizedStmt( ToAtSynchronizedLoc, ToSynchExpr, ToSynchBody); } Stmt *ASTNodeImporter::VisitObjCAtThrowStmt(ObjCAtThrowStmt *S) { SourceLocation ToAtThrowLoc = Importer.Import(S->getThrowLoc()); Expr *ToThrow = Importer.Import(S->getThrowExpr()); if (!ToThrow && S->getThrowExpr()) return nullptr; return new (Importer.getToContext()) ObjCAtThrowStmt(ToAtThrowLoc, ToThrow); } Stmt *ASTNodeImporter::VisitObjCAutoreleasePoolStmt (ObjCAutoreleasePoolStmt *S) { SourceLocation ToAtLoc = Importer.Import(S->getAtLoc()); Stmt *ToSubStmt = Importer.Import(S->getSubStmt()); if (!ToSubStmt && S->getSubStmt()) return 
nullptr; return new (Importer.getToContext()) ObjCAutoreleasePoolStmt(ToAtLoc, ToSubStmt); } //---------------------------------------------------------------------------- // Import Expressions //---------------------------------------------------------------------------- Expr *ASTNodeImporter::VisitExpr(Expr *E) { Importer.FromDiag(E->getLocStart(), diag::err_unsupported_ast_node) << E->getStmtClassName(); return nullptr; } Expr *ASTNodeImporter::VisitVAArgExpr(VAArgExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *SubExpr = Importer.Import(E->getSubExpr()); if (!SubExpr && E->getSubExpr()) return nullptr; TypeSourceInfo *TInfo = Importer.Import(E->getWrittenTypeInfo()); if (!TInfo) return nullptr; return new (Importer.getToContext()) VAArgExpr( Importer.Import(E->getBuiltinLoc()), SubExpr, TInfo, Importer.Import(E->getRParenLoc()), T, E->isMicrosoftABI()); } Expr *ASTNodeImporter::VisitGNUNullExpr(GNUNullExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; return new (Importer.getToContext()) GNUNullExpr( T, Importer.Import(E->getLocStart())); } Expr *ASTNodeImporter::VisitPredefinedExpr(PredefinedExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; StringLiteral *SL = cast_or_null( Importer.Import(E->getFunctionName())); if (!SL && E->getFunctionName()) return nullptr; return new (Importer.getToContext()) PredefinedExpr( Importer.Import(E->getLocStart()), T, E->getIdentType(), SL); } Expr *ASTNodeImporter::VisitDeclRefExpr(DeclRefExpr *E) { ValueDecl *ToD = cast_or_null(Importer.Import(E->getDecl())); if (!ToD) return nullptr; NamedDecl *FoundD = nullptr; if (E->getDecl() != E->getFoundDecl()) { FoundD = cast_or_null(Importer.Import(E->getFoundDecl())); if (!FoundD) return nullptr; } QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; TemplateArgumentListInfo ToTAInfo; TemplateArgumentListInfo *ResInfo = nullptr; if (E->hasExplicitTemplateArgs()) { for (const auto &FromLoc : E->template_arguments()) { bool Error = false; TemplateArgumentLoc ToTALoc = ImportTemplateArgumentLoc(FromLoc, Error); if (Error) return nullptr; ToTAInfo.addArgument(ToTALoc); } ResInfo = &ToTAInfo; } DeclRefExpr *DRE = DeclRefExpr::Create(Importer.getToContext(), Importer.Import(E->getQualifierLoc()), Importer.Import(E->getTemplateKeywordLoc()), ToD, E->refersToEnclosingVariableOrCapture(), Importer.Import(E->getLocation()), T, E->getValueKind(), FoundD, ResInfo); if (E->hadMultipleCandidates()) DRE->setHadMultipleCandidates(true); return DRE; } Expr *ASTNodeImporter::VisitImplicitValueInitExpr(ImplicitValueInitExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; return new (Importer.getToContext()) ImplicitValueInitExpr(T); } ASTNodeImporter::Designator ASTNodeImporter::ImportDesignator(const Designator &D) { if (D.isFieldDesignator()) { IdentifierInfo *ToFieldName = Importer.Import(D.getFieldName()); // Caller checks for import error return Designator(ToFieldName, Importer.Import(D.getDotLoc()), Importer.Import(D.getFieldLoc())); } if (D.isArrayDesignator()) return Designator(D.getFirstExprIndex(), Importer.Import(D.getLBracketLoc()), Importer.Import(D.getRBracketLoc())); assert(D.isArrayRangeDesignator()); return Designator(D.getFirstExprIndex(), Importer.Import(D.getLBracketLoc()), Importer.Import(D.getEllipsisLoc()), Importer.Import(D.getRBracketLoc())); } Expr *ASTNodeImporter::VisitDesignatedInitExpr(DesignatedInitExpr *DIE) { Expr *Init = 
cast_or_null(Importer.Import(DIE->getInit())); if (!Init) return nullptr; SmallVector IndexExprs(DIE->getNumSubExprs() - 1); // List elements from the second, the first is Init itself for (unsigned I = 1, E = DIE->getNumSubExprs(); I < E; I++) { if (Expr *Arg = cast_or_null(Importer.Import(DIE->getSubExpr(I)))) IndexExprs[I - 1] = Arg; else return nullptr; } SmallVector Designators(DIE->size()); llvm::transform(DIE->designators(), Designators.begin(), [this](const Designator &D) -> Designator { return ImportDesignator(D); }); for (const Designator &D : DIE->designators()) if (D.isFieldDesignator() && !D.getFieldName()) return nullptr; return DesignatedInitExpr::Create( Importer.getToContext(), Designators, IndexExprs, Importer.Import(DIE->getEqualOrColonLoc()), DIE->usesGNUSyntax(), Init); } Expr *ASTNodeImporter::VisitCXXNullPtrLiteralExpr(CXXNullPtrLiteralExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; return new (Importer.getToContext()) CXXNullPtrLiteralExpr(T, Importer.Import(E->getLocation())); } Expr *ASTNodeImporter::VisitIntegerLiteral(IntegerLiteral *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; return IntegerLiteral::Create(Importer.getToContext(), E->getValue(), T, Importer.Import(E->getLocation())); } Expr *ASTNodeImporter::VisitFloatingLiteral(FloatingLiteral *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; return FloatingLiteral::Create(Importer.getToContext(), E->getValue(), E->isExact(), T, Importer.Import(E->getLocation())); } Expr *ASTNodeImporter::VisitCharacterLiteral(CharacterLiteral *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; return new (Importer.getToContext()) CharacterLiteral(E->getValue(), E->getKind(), T, Importer.Import(E->getLocation())); } Expr *ASTNodeImporter::VisitStringLiteral(StringLiteral *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; SmallVector Locations(E->getNumConcatenated()); ImportArray(E->tokloc_begin(), E->tokloc_end(), Locations.begin()); return StringLiteral::Create(Importer.getToContext(), E->getBytes(), E->getKind(), E->isPascal(), T, Locations.data(), Locations.size()); } Expr *ASTNodeImporter::VisitCompoundLiteralExpr(CompoundLiteralExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; TypeSourceInfo *TInfo = Importer.Import(E->getTypeSourceInfo()); if (!TInfo) return nullptr; Expr *Init = Importer.Import(E->getInitializer()); if (!Init) return nullptr; return new (Importer.getToContext()) CompoundLiteralExpr( Importer.Import(E->getLParenLoc()), TInfo, T, E->getValueKind(), Init, E->isFileScope()); } Expr *ASTNodeImporter::VisitAtomicExpr(AtomicExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; SmallVector Exprs(E->getNumSubExprs()); if (ImportArrayChecked( E->getSubExprs(), E->getSubExprs() + E->getNumSubExprs(), Exprs.begin())) return nullptr; return new (Importer.getToContext()) AtomicExpr( Importer.Import(E->getBuiltinLoc()), Exprs, T, E->getOp(), Importer.Import(E->getRParenLoc())); } Expr *ASTNodeImporter::VisitAddrLabelExpr(AddrLabelExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; LabelDecl *ToLabel = cast_or_null(Importer.Import(E->getLabel())); if (!ToLabel) return nullptr; return new (Importer.getToContext()) AddrLabelExpr( Importer.Import(E->getAmpAmpLoc()), Importer.Import(E->getLabelLoc()), ToLabel, T); } Expr 
*ASTNodeImporter::VisitParenExpr(ParenExpr *E) { Expr *SubExpr = Importer.Import(E->getSubExpr()); if (!SubExpr) return nullptr; return new (Importer.getToContext()) ParenExpr(Importer.Import(E->getLParen()), Importer.Import(E->getRParen()), SubExpr); } Expr *ASTNodeImporter::VisitParenListExpr(ParenListExpr *E) { SmallVector Exprs(E->getNumExprs()); if (ImportContainerChecked(E->exprs(), Exprs)) return nullptr; return new (Importer.getToContext()) ParenListExpr( Importer.getToContext(), Importer.Import(E->getLParenLoc()), Exprs, Importer.Import(E->getLParenLoc())); } Expr *ASTNodeImporter::VisitStmtExpr(StmtExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; CompoundStmt *ToSubStmt = cast_or_null( Importer.Import(E->getSubStmt())); if (!ToSubStmt && E->getSubStmt()) return nullptr; return new (Importer.getToContext()) StmtExpr(ToSubStmt, T, Importer.Import(E->getLParenLoc()), Importer.Import(E->getRParenLoc())); } Expr *ASTNodeImporter::VisitUnaryOperator(UnaryOperator *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *SubExpr = Importer.Import(E->getSubExpr()); if (!SubExpr) return nullptr; return new (Importer.getToContext()) UnaryOperator(SubExpr, E->getOpcode(), T, E->getValueKind(), E->getObjectKind(), Importer.Import(E->getOperatorLoc())); } Expr *ASTNodeImporter::VisitUnaryExprOrTypeTraitExpr( UnaryExprOrTypeTraitExpr *E) { QualType ResultType = Importer.Import(E->getType()); if (E->isArgumentType()) { TypeSourceInfo *TInfo = Importer.Import(E->getArgumentTypeInfo()); if (!TInfo) return nullptr; return new (Importer.getToContext()) UnaryExprOrTypeTraitExpr(E->getKind(), TInfo, ResultType, Importer.Import(E->getOperatorLoc()), Importer.Import(E->getRParenLoc())); } Expr *SubExpr = Importer.Import(E->getArgumentExpr()); if (!SubExpr) return nullptr; return new (Importer.getToContext()) UnaryExprOrTypeTraitExpr(E->getKind(), SubExpr, ResultType, Importer.Import(E->getOperatorLoc()), Importer.Import(E->getRParenLoc())); } Expr *ASTNodeImporter::VisitBinaryOperator(BinaryOperator *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *LHS = Importer.Import(E->getLHS()); if (!LHS) return nullptr; Expr *RHS = Importer.Import(E->getRHS()); if (!RHS) return nullptr; return new (Importer.getToContext()) BinaryOperator(LHS, RHS, E->getOpcode(), T, E->getValueKind(), E->getObjectKind(), Importer.Import(E->getOperatorLoc()), E->getFPFeatures()); } Expr *ASTNodeImporter::VisitConditionalOperator(ConditionalOperator *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *ToLHS = Importer.Import(E->getLHS()); if (!ToLHS) return nullptr; Expr *ToRHS = Importer.Import(E->getRHS()); if (!ToRHS) return nullptr; Expr *ToCond = Importer.Import(E->getCond()); if (!ToCond) return nullptr; return new (Importer.getToContext()) ConditionalOperator( ToCond, Importer.Import(E->getQuestionLoc()), ToLHS, Importer.Import(E->getColonLoc()), ToRHS, T, E->getValueKind(), E->getObjectKind()); } Expr *ASTNodeImporter::VisitBinaryConditionalOperator( BinaryConditionalOperator *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *Common = Importer.Import(E->getCommon()); if (!Common) return nullptr; Expr *Cond = Importer.Import(E->getCond()); if (!Cond) return nullptr; OpaqueValueExpr *OpaqueValue = cast_or_null( Importer.Import(E->getOpaqueValue())); if (!OpaqueValue) return nullptr; Expr *TrueExpr = Importer.Import(E->getTrueExpr()); if (!TrueExpr) 
return nullptr; Expr *FalseExpr = Importer.Import(E->getFalseExpr()); if (!FalseExpr) return nullptr; return new (Importer.getToContext()) BinaryConditionalOperator( Common, OpaqueValue, Cond, TrueExpr, FalseExpr, Importer.Import(E->getQuestionLoc()), Importer.Import(E->getColonLoc()), T, E->getValueKind(), E->getObjectKind()); } Expr *ASTNodeImporter::VisitArrayTypeTraitExpr(ArrayTypeTraitExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; TypeSourceInfo *ToQueried = Importer.Import(E->getQueriedTypeSourceInfo()); if (!ToQueried) return nullptr; Expr *Dim = Importer.Import(E->getDimensionExpression()); if (!Dim && E->getDimensionExpression()) return nullptr; return new (Importer.getToContext()) ArrayTypeTraitExpr( Importer.Import(E->getLocStart()), E->getTrait(), ToQueried, E->getValue(), Dim, Importer.Import(E->getLocEnd()), T); } Expr *ASTNodeImporter::VisitExpressionTraitExpr(ExpressionTraitExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *ToQueried = Importer.Import(E->getQueriedExpression()); if (!ToQueried) return nullptr; return new (Importer.getToContext()) ExpressionTraitExpr( Importer.Import(E->getLocStart()), E->getTrait(), ToQueried, E->getValue(), Importer.Import(E->getLocEnd()), T); } Expr *ASTNodeImporter::VisitOpaqueValueExpr(OpaqueValueExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *SourceExpr = Importer.Import(E->getSourceExpr()); if (!SourceExpr && E->getSourceExpr()) return nullptr; return new (Importer.getToContext()) OpaqueValueExpr( Importer.Import(E->getLocation()), T, E->getValueKind(), E->getObjectKind(), SourceExpr); } Expr *ASTNodeImporter::VisitArraySubscriptExpr(ArraySubscriptExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *ToLHS = Importer.Import(E->getLHS()); if (!ToLHS) return nullptr; Expr *ToRHS = Importer.Import(E->getRHS()); if (!ToRHS) return nullptr; return new (Importer.getToContext()) ArraySubscriptExpr( ToLHS, ToRHS, T, E->getValueKind(), E->getObjectKind(), Importer.Import(E->getRBracketLoc())); } Expr *ASTNodeImporter::VisitCompoundAssignOperator(CompoundAssignOperator *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; QualType CompLHSType = Importer.Import(E->getComputationLHSType()); if (CompLHSType.isNull()) return nullptr; QualType CompResultType = Importer.Import(E->getComputationResultType()); if (CompResultType.isNull()) return nullptr; Expr *LHS = Importer.Import(E->getLHS()); if (!LHS) return nullptr; Expr *RHS = Importer.Import(E->getRHS()); if (!RHS) return nullptr; return new (Importer.getToContext()) CompoundAssignOperator(LHS, RHS, E->getOpcode(), T, E->getValueKind(), E->getObjectKind(), CompLHSType, CompResultType, Importer.Import(E->getOperatorLoc()), E->getFPFeatures()); } bool ASTNodeImporter::ImportCastPath(CastExpr *CE, CXXCastPath &Path) { for (auto I = CE->path_begin(), E = CE->path_end(); I != E; ++I) { if (CXXBaseSpecifier *Spec = Importer.Import(*I)) Path.push_back(Spec); else return true; } return false; } Expr *ASTNodeImporter::VisitImplicitCastExpr(ImplicitCastExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *SubExpr = Importer.Import(E->getSubExpr()); if (!SubExpr) return nullptr; CXXCastPath BasePath; if (ImportCastPath(E, BasePath)) return nullptr; return ImplicitCastExpr::Create(Importer.getToContext(), T, E->getCastKind(), SubExpr, &BasePath, E->getValueKind()); } Expr 
*ASTNodeImporter::VisitExplicitCastExpr(ExplicitCastExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *SubExpr = Importer.Import(E->getSubExpr()); if (!SubExpr) return nullptr; TypeSourceInfo *TInfo = Importer.Import(E->getTypeInfoAsWritten()); if (!TInfo && E->getTypeInfoAsWritten()) return nullptr; CXXCastPath BasePath; if (ImportCastPath(E, BasePath)) return nullptr; switch (E->getStmtClass()) { case Stmt::CStyleCastExprClass: { CStyleCastExpr *CCE = cast(E); return CStyleCastExpr::Create(Importer.getToContext(), T, E->getValueKind(), E->getCastKind(), SubExpr, &BasePath, TInfo, Importer.Import(CCE->getLParenLoc()), Importer.Import(CCE->getRParenLoc())); } case Stmt::CXXFunctionalCastExprClass: { CXXFunctionalCastExpr *FCE = cast(E); return CXXFunctionalCastExpr::Create(Importer.getToContext(), T, E->getValueKind(), TInfo, E->getCastKind(), SubExpr, &BasePath, Importer.Import(FCE->getLParenLoc()), Importer.Import(FCE->getRParenLoc())); } case Stmt::ObjCBridgedCastExprClass: { ObjCBridgedCastExpr *OCE = cast(E); return new (Importer.getToContext()) ObjCBridgedCastExpr( Importer.Import(OCE->getLParenLoc()), OCE->getBridgeKind(), E->getCastKind(), Importer.Import(OCE->getBridgeKeywordLoc()), TInfo, SubExpr); } default: break; // just fall through } CXXNamedCastExpr *Named = cast(E); SourceLocation ExprLoc = Importer.Import(Named->getOperatorLoc()), RParenLoc = Importer.Import(Named->getRParenLoc()); SourceRange Brackets = Importer.Import(Named->getAngleBrackets()); switch (E->getStmtClass()) { case Stmt::CXXStaticCastExprClass: return CXXStaticCastExpr::Create(Importer.getToContext(), T, E->getValueKind(), E->getCastKind(), SubExpr, &BasePath, TInfo, ExprLoc, RParenLoc, Brackets); case Stmt::CXXDynamicCastExprClass: return CXXDynamicCastExpr::Create(Importer.getToContext(), T, E->getValueKind(), E->getCastKind(), SubExpr, &BasePath, TInfo, ExprLoc, RParenLoc, Brackets); case Stmt::CXXReinterpretCastExprClass: return CXXReinterpretCastExpr::Create(Importer.getToContext(), T, E->getValueKind(), E->getCastKind(), SubExpr, &BasePath, TInfo, ExprLoc, RParenLoc, Brackets); case Stmt::CXXConstCastExprClass: return CXXConstCastExpr::Create(Importer.getToContext(), T, E->getValueKind(), SubExpr, TInfo, ExprLoc, RParenLoc, Brackets); default: llvm_unreachable("Cast expression of unsupported type!"); return nullptr; } } Expr *ASTNodeImporter::VisitOffsetOfExpr(OffsetOfExpr *OE) { QualType T = Importer.Import(OE->getType()); if (T.isNull()) return nullptr; SmallVector Nodes; for (int I = 0, E = OE->getNumComponents(); I < E; ++I) { const OffsetOfNode &Node = OE->getComponent(I); switch (Node.getKind()) { case OffsetOfNode::Array: Nodes.push_back(OffsetOfNode(Importer.Import(Node.getLocStart()), Node.getArrayExprIndex(), Importer.Import(Node.getLocEnd()))); break; case OffsetOfNode::Base: { CXXBaseSpecifier *BS = Importer.Import(Node.getBase()); if (!BS && Node.getBase()) return nullptr; Nodes.push_back(OffsetOfNode(BS)); break; } case OffsetOfNode::Field: { FieldDecl *FD = cast_or_null(Importer.Import(Node.getField())); if (!FD) return nullptr; Nodes.push_back(OffsetOfNode(Importer.Import(Node.getLocStart()), FD, Importer.Import(Node.getLocEnd()))); break; } case OffsetOfNode::Identifier: { IdentifierInfo *ToII = Importer.Import(Node.getFieldName()); if (!ToII) return nullptr; Nodes.push_back(OffsetOfNode(Importer.Import(Node.getLocStart()), ToII, Importer.Import(Node.getLocEnd()))); break; } } } SmallVector Exprs(OE->getNumExpressions()); for (int I = 0, E = 
OE->getNumExpressions(); I < E; ++I) { Expr *ToIndexExpr = Importer.Import(OE->getIndexExpr(I)); if (!ToIndexExpr) return nullptr; Exprs[I] = ToIndexExpr; } TypeSourceInfo *TInfo = Importer.Import(OE->getTypeSourceInfo()); if (!TInfo && OE->getTypeSourceInfo()) return nullptr; return OffsetOfExpr::Create(Importer.getToContext(), T, Importer.Import(OE->getOperatorLoc()), TInfo, Nodes, Exprs, Importer.Import(OE->getRParenLoc())); } Expr *ASTNodeImporter::VisitCXXNoexceptExpr(CXXNoexceptExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *Operand = Importer.Import(E->getOperand()); if (!Operand) return nullptr; CanThrowResult CanThrow; if (E->isValueDependent()) CanThrow = CT_Dependent; else CanThrow = E->getValue() ? CT_Can : CT_Cannot; return new (Importer.getToContext()) CXXNoexceptExpr( T, Operand, CanThrow, Importer.Import(E->getLocStart()), Importer.Import(E->getLocEnd())); } Expr *ASTNodeImporter::VisitCXXThrowExpr(CXXThrowExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *SubExpr = Importer.Import(E->getSubExpr()); if (!SubExpr && E->getSubExpr()) return nullptr; return new (Importer.getToContext()) CXXThrowExpr( SubExpr, T, Importer.Import(E->getThrowLoc()), E->isThrownVariableInScope()); } Expr *ASTNodeImporter::VisitCXXDefaultArgExpr(CXXDefaultArgExpr *E) { ParmVarDecl *Param = cast_or_null( Importer.Import(E->getParam())); if (!Param) return nullptr; return CXXDefaultArgExpr::Create( Importer.getToContext(), Importer.Import(E->getUsedLocation()), Param); } Expr *ASTNodeImporter::VisitCXXScalarValueInitExpr(CXXScalarValueInitExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; TypeSourceInfo *TypeInfo = Importer.Import(E->getTypeSourceInfo()); if (!TypeInfo) return nullptr; return new (Importer.getToContext()) CXXScalarValueInitExpr( T, TypeInfo, Importer.Import(E->getRParenLoc())); } Expr *ASTNodeImporter::VisitCXXBindTemporaryExpr(CXXBindTemporaryExpr *E) { Expr *SubExpr = Importer.Import(E->getSubExpr()); if (!SubExpr) return nullptr; auto *Dtor = cast_or_null( Importer.Import(const_cast( E->getTemporary()->getDestructor()))); if (!Dtor) return nullptr; ASTContext &ToCtx = Importer.getToContext(); CXXTemporary *Temp = CXXTemporary::Create(ToCtx, Dtor); return CXXBindTemporaryExpr::Create(ToCtx, Temp, SubExpr); } Expr *ASTNodeImporter::VisitCXXTemporaryObjectExpr(CXXTemporaryObjectExpr *CE) { QualType T = Importer.Import(CE->getType()); if (T.isNull()) return nullptr; SmallVector Args(CE->getNumArgs()); if (ImportContainerChecked(CE->arguments(), Args)) return nullptr; auto *Ctor = cast_or_null( Importer.Import(CE->getConstructor())); if (!Ctor) return nullptr; return CXXTemporaryObjectExpr::Create( Importer.getToContext(), T, Importer.Import(CE->getLocStart()), Ctor, CE->isElidable(), Args, CE->hadMultipleCandidates(), CE->isListInitialization(), CE->isStdInitListInitialization(), CE->requiresZeroInitialization(), CE->getConstructionKind(), Importer.Import(CE->getParenOrBraceRange())); } Expr * ASTNodeImporter::VisitMaterializeTemporaryExpr(MaterializeTemporaryExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *TempE = Importer.Import(E->GetTemporaryExpr()); if (!TempE) return nullptr; ValueDecl *ExtendedBy = cast_or_null( Importer.Import(const_cast(E->getExtendingDecl()))); if (!ExtendedBy && E->getExtendingDecl()) return nullptr; auto *ToMTE = new (Importer.getToContext()) MaterializeTemporaryExpr( T, TempE, 
E->isBoundToLvalueReference()); // FIXME: Should ManglingNumber get numbers associated with 'to' context? ToMTE->setExtendingDecl(ExtendedBy, E->getManglingNumber()); return ToMTE; } Expr *ASTNodeImporter::VisitCXXNewExpr(CXXNewExpr *CE) { QualType T = Importer.Import(CE->getType()); if (T.isNull()) return nullptr; SmallVector PlacementArgs(CE->getNumPlacementArgs()); if (ImportContainerChecked(CE->placement_arguments(), PlacementArgs)) return nullptr; FunctionDecl *OperatorNewDecl = cast_or_null( Importer.Import(CE->getOperatorNew())); if (!OperatorNewDecl && CE->getOperatorNew()) return nullptr; FunctionDecl *OperatorDeleteDecl = cast_or_null( Importer.Import(CE->getOperatorDelete())); if (!OperatorDeleteDecl && CE->getOperatorDelete()) return nullptr; Expr *ToInit = Importer.Import(CE->getInitializer()); if (!ToInit && CE->getInitializer()) return nullptr; TypeSourceInfo *TInfo = Importer.Import(CE->getAllocatedTypeSourceInfo()); if (!TInfo) return nullptr; Expr *ToArrSize = Importer.Import(CE->getArraySize()); if (!ToArrSize && CE->getArraySize()) return nullptr; return new (Importer.getToContext()) CXXNewExpr( Importer.getToContext(), CE->isGlobalNew(), OperatorNewDecl, OperatorDeleteDecl, CE->passAlignment(), CE->doesUsualArrayDeleteWantSize(), PlacementArgs, Importer.Import(CE->getTypeIdParens()), ToArrSize, CE->getInitializationStyle(), ToInit, T, TInfo, Importer.Import(CE->getSourceRange()), Importer.Import(CE->getDirectInitRange())); } Expr *ASTNodeImporter::VisitCXXDeleteExpr(CXXDeleteExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; FunctionDecl *OperatorDeleteDecl = cast_or_null( Importer.Import(E->getOperatorDelete())); if (!OperatorDeleteDecl && E->getOperatorDelete()) return nullptr; Expr *ToArg = Importer.Import(E->getArgument()); if (!ToArg && E->getArgument()) return nullptr; return new (Importer.getToContext()) CXXDeleteExpr( T, E->isGlobalDelete(), E->isArrayForm(), E->isArrayFormAsWritten(), E->doesUsualArrayDeleteWantSize(), OperatorDeleteDecl, ToArg, Importer.Import(E->getLocStart())); } Expr *ASTNodeImporter::VisitCXXConstructExpr(CXXConstructExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; CXXConstructorDecl *ToCCD = dyn_cast_or_null(Importer.Import(E->getConstructor())); if (!ToCCD) return nullptr; SmallVector ToArgs(E->getNumArgs()); if (ImportContainerChecked(E->arguments(), ToArgs)) return nullptr; return CXXConstructExpr::Create(Importer.getToContext(), T, Importer.Import(E->getLocation()), ToCCD, E->isElidable(), ToArgs, E->hadMultipleCandidates(), E->isListInitialization(), E->isStdInitListInitialization(), E->requiresZeroInitialization(), E->getConstructionKind(), Importer.Import(E->getParenOrBraceRange())); } Expr *ASTNodeImporter::VisitExprWithCleanups(ExprWithCleanups *EWC) { Expr *SubExpr = Importer.Import(EWC->getSubExpr()); if (!SubExpr && EWC->getSubExpr()) return nullptr; SmallVector Objs(EWC->getNumObjects()); for (unsigned I = 0, E = EWC->getNumObjects(); I < E; I++) if (ExprWithCleanups::CleanupObject Obj = cast_or_null(Importer.Import(EWC->getObject(I)))) Objs[I] = Obj; else return nullptr; return ExprWithCleanups::Create(Importer.getToContext(), SubExpr, EWC->cleanupsHaveSideEffects(), Objs); } Expr *ASTNodeImporter::VisitCXXMemberCallExpr(CXXMemberCallExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *ToFn = Importer.Import(E->getCallee()); if (!ToFn) return nullptr; SmallVector ToArgs(E->getNumArgs()); if 
(ImportContainerChecked(E->arguments(), ToArgs)) return nullptr; return new (Importer.getToContext()) CXXMemberCallExpr( Importer.getToContext(), ToFn, ToArgs, T, E->getValueKind(), Importer.Import(E->getRParenLoc())); } Expr *ASTNodeImporter::VisitCXXThisExpr(CXXThisExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; return new (Importer.getToContext()) CXXThisExpr(Importer.Import(E->getLocation()), T, E->isImplicit()); } Expr *ASTNodeImporter::VisitCXXBoolLiteralExpr(CXXBoolLiteralExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; return new (Importer.getToContext()) CXXBoolLiteralExpr(E->getValue(), T, Importer.Import(E->getLocation())); } Expr *ASTNodeImporter::VisitMemberExpr(MemberExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *ToBase = Importer.Import(E->getBase()); if (!ToBase && E->getBase()) return nullptr; ValueDecl *ToMember = dyn_cast(Importer.Import(E->getMemberDecl())); if (!ToMember && E->getMemberDecl()) return nullptr; DeclAccessPair ToFoundDecl = DeclAccessPair::make( dyn_cast(Importer.Import(E->getFoundDecl().getDecl())), E->getFoundDecl().getAccess()); DeclarationNameInfo ToMemberNameInfo( Importer.Import(E->getMemberNameInfo().getName()), Importer.Import(E->getMemberNameInfo().getLoc())); if (E->hasExplicitTemplateArgs()) { return nullptr; // FIXME: handle template arguments } return MemberExpr::Create(Importer.getToContext(), ToBase, E->isArrow(), Importer.Import(E->getOperatorLoc()), Importer.Import(E->getQualifierLoc()), Importer.Import(E->getTemplateKeywordLoc()), ToMember, ToFoundDecl, ToMemberNameInfo, nullptr, T, E->getValueKind(), E->getObjectKind()); } Expr *ASTNodeImporter::VisitCallExpr(CallExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; Expr *ToCallee = Importer.Import(E->getCallee()); if (!ToCallee && E->getCallee()) return nullptr; unsigned NumArgs = E->getNumArgs(); llvm::SmallVector ToArgs(NumArgs); for (unsigned ai = 0, ae = NumArgs; ai != ae; ++ai) { Expr *FromArg = E->getArg(ai); Expr *ToArg = Importer.Import(FromArg); if (!ToArg) return nullptr; ToArgs[ai] = ToArg; } Expr **ToArgs_Copied = new (Importer.getToContext()) Expr*[NumArgs]; for (unsigned ai = 0, ae = NumArgs; ai != ae; ++ai) ToArgs_Copied[ai] = ToArgs[ai]; return new (Importer.getToContext()) CallExpr(Importer.getToContext(), ToCallee, llvm::makeArrayRef(ToArgs_Copied, NumArgs), T, E->getValueKind(), Importer.Import(E->getRParenLoc())); } Expr *ASTNodeImporter::VisitInitListExpr(InitListExpr *ILE) { QualType T = Importer.Import(ILE->getType()); if (T.isNull()) return nullptr; llvm::SmallVector Exprs(ILE->getNumInits()); if (ImportContainerChecked(ILE->inits(), Exprs)) return nullptr; ASTContext &ToCtx = Importer.getToContext(); InitListExpr *To = new (ToCtx) InitListExpr( ToCtx, Importer.Import(ILE->getLBraceLoc()), Exprs, Importer.Import(ILE->getLBraceLoc())); To->setType(T); if (ILE->hasArrayFiller()) { Expr *Filler = Importer.Import(ILE->getArrayFiller()); if (!Filler) return nullptr; To->setArrayFiller(Filler); } if (FieldDecl *FromFD = ILE->getInitializedFieldInUnion()) { FieldDecl *ToFD = cast_or_null(Importer.Import(FromFD)); if (!ToFD) return nullptr; To->setInitializedFieldInUnion(ToFD); } if (InitListExpr *SyntForm = ILE->getSyntacticForm()) { InitListExpr *ToSyntForm = cast_or_null( Importer.Import(SyntForm)); if (!ToSyntForm) return nullptr; To->setSyntacticForm(ToSyntForm); } 
To->sawArrayRangeDesignator(ILE->hadArrayRangeDesignator()); To->setValueDependent(ILE->isValueDependent()); To->setInstantiationDependent(ILE->isInstantiationDependent()); return To; } Expr *ASTNodeImporter::VisitArrayInitLoopExpr(ArrayInitLoopExpr *E) { QualType ToType = Importer.Import(E->getType()); if (ToType.isNull()) return nullptr; Expr *ToCommon = Importer.Import(E->getCommonExpr()); if (!ToCommon && E->getCommonExpr()) return nullptr; Expr *ToSubExpr = Importer.Import(E->getSubExpr()); if (!ToSubExpr && E->getSubExpr()) return nullptr; return new (Importer.getToContext()) ArrayInitLoopExpr(ToType, ToCommon, ToSubExpr); } Expr *ASTNodeImporter::VisitArrayInitIndexExpr(ArrayInitIndexExpr *E) { QualType ToType = Importer.Import(E->getType()); if (ToType.isNull()) return nullptr; return new (Importer.getToContext()) ArrayInitIndexExpr(ToType); } Expr *ASTNodeImporter::VisitCXXDefaultInitExpr(CXXDefaultInitExpr *DIE) { FieldDecl *ToField = llvm::dyn_cast_or_null( Importer.Import(DIE->getField())); if (!ToField && DIE->getField()) return nullptr; return CXXDefaultInitExpr::Create( Importer.getToContext(), Importer.Import(DIE->getLocStart()), ToField); } Expr *ASTNodeImporter::VisitCXXNamedCastExpr(CXXNamedCastExpr *E) { QualType ToType = Importer.Import(E->getType()); if (ToType.isNull() && !E->getType().isNull()) return nullptr; ExprValueKind VK = E->getValueKind(); CastKind CK = E->getCastKind(); Expr *ToOp = Importer.Import(E->getSubExpr()); if (!ToOp && E->getSubExpr()) return nullptr; CXXCastPath BasePath; if (ImportCastPath(E, BasePath)) return nullptr; TypeSourceInfo *ToWritten = Importer.Import(E->getTypeInfoAsWritten()); SourceLocation ToOperatorLoc = Importer.Import(E->getOperatorLoc()); SourceLocation ToRParenLoc = Importer.Import(E->getRParenLoc()); SourceRange ToAngleBrackets = Importer.Import(E->getAngleBrackets()); if (isa(E)) { return CXXStaticCastExpr::Create( Importer.getToContext(), ToType, VK, CK, ToOp, &BasePath, ToWritten, ToOperatorLoc, ToRParenLoc, ToAngleBrackets); } else if (isa(E)) { return CXXDynamicCastExpr::Create( Importer.getToContext(), ToType, VK, CK, ToOp, &BasePath, ToWritten, ToOperatorLoc, ToRParenLoc, ToAngleBrackets); } else if (isa(E)) { return CXXReinterpretCastExpr::Create( Importer.getToContext(), ToType, VK, CK, ToOp, &BasePath, ToWritten, ToOperatorLoc, ToRParenLoc, ToAngleBrackets); } else { return nullptr; } } Expr *ASTNodeImporter::VisitSubstNonTypeTemplateParmExpr( SubstNonTypeTemplateParmExpr *E) { QualType T = Importer.Import(E->getType()); if (T.isNull()) return nullptr; NonTypeTemplateParmDecl *Param = cast_or_null( Importer.Import(E->getParameter())); if (!Param) return nullptr; Expr *Replacement = Importer.Import(E->getReplacement()); if (!Replacement) return nullptr; return new (Importer.getToContext()) SubstNonTypeTemplateParmExpr( T, E->getValueKind(), Importer.Import(E->getExprLoc()), Param, Replacement); } void ASTNodeImporter::ImportOverrides(CXXMethodDecl *ToMethod, CXXMethodDecl *FromMethod) { for (auto *FromOverriddenMethod : FromMethod->overridden_methods()) ToMethod->addOverriddenMethod( cast(Importer.Import(const_cast( FromOverriddenMethod)))); } ASTImporter::ASTImporter(ASTContext &ToContext, FileManager &ToFileManager, ASTContext &FromContext, FileManager &FromFileManager, bool MinimalImport) : ToContext(ToContext), FromContext(FromContext), ToFileManager(ToFileManager), FromFileManager(FromFileManager), Minimal(MinimalImport), LastDiagFromFrom(false) { ImportedDecls[FromContext.getTranslationUnitDecl()] = 
      ToContext.getTranslationUnitDecl();
}

ASTImporter::~ASTImporter() { }

QualType ASTImporter::Import(QualType FromT) {
  if (FromT.isNull())
    return QualType();

  const Type *fromTy = FromT.getTypePtr();

  // Check whether we've already imported this type.
  llvm::DenseMap<const Type *, const Type *>::iterator Pos
    = ImportedTypes.find(fromTy);
  if (Pos != ImportedTypes.end())
    return ToContext.getQualifiedType(Pos->second, FromT.getLocalQualifiers());

  // Import the type
  ASTNodeImporter Importer(*this);
  QualType ToT = Importer.Visit(fromTy);
  if (ToT.isNull())
    return ToT;

  // Record the imported type.
  ImportedTypes[fromTy] = ToT.getTypePtr();

  return ToContext.getQualifiedType(ToT, FromT.getLocalQualifiers());
}

TypeSourceInfo *ASTImporter::Import(TypeSourceInfo *FromTSI) {
  if (!FromTSI)
    return FromTSI;

  // FIXME: For now we just create a "trivial" type source info based
  // on the type and a single location. Implement a real version of this.
  QualType T = Import(FromTSI->getType());
  if (T.isNull())
    return nullptr;

  return ToContext.getTrivialTypeSourceInfo(
           T, Import(FromTSI->getTypeLoc().getLocStart()));
}

Decl *ASTImporter::GetAlreadyImportedOrNull(Decl *FromD) {
  llvm::DenseMap<Decl *, Decl *>::iterator Pos = ImportedDecls.find(FromD);
  if (Pos != ImportedDecls.end()) {
    Decl *ToD = Pos->second;
    ASTNodeImporter(*this).ImportDefinitionIfNeeded(FromD, ToD);
    return ToD;
  } else {
    return nullptr;
  }
}

Decl *ASTImporter::Import(Decl *FromD) {
  if (!FromD)
    return nullptr;

  ASTNodeImporter Importer(*this);

  // Check whether we've already imported this declaration.
  llvm::DenseMap<Decl *, Decl *>::iterator Pos = ImportedDecls.find(FromD);
  if (Pos != ImportedDecls.end()) {
    Decl *ToD = Pos->second;
    Importer.ImportDefinitionIfNeeded(FromD, ToD);
    return ToD;
  }

  // Import the declaration.
  Decl *ToD = Importer.Visit(FromD);
  if (!ToD)
    return nullptr;

  // Record the imported declaration.
  ImportedDecls[FromD] = ToD;

  if (TagDecl *FromTag = dyn_cast<TagDecl>(FromD)) {
    // Keep track of anonymous tags that have an associated typedef.
    if (FromTag->getTypedefNameForAnonDecl())
      AnonTagsWithPendingTypedefs.push_back(FromTag);
  } else if (TypedefNameDecl *FromTypedef = dyn_cast<TypedefNameDecl>(FromD)) {
    // When we've finished transforming a typedef, see whether it was the
    // typedef for an anonymous tag.
    for (SmallVectorImpl<TagDecl *>::iterator
               FromTag = AnonTagsWithPendingTypedefs.begin(),
            FromTagEnd = AnonTagsWithPendingTypedefs.end();
         FromTag != FromTagEnd; ++FromTag) {
      if ((*FromTag)->getTypedefNameForAnonDecl() == FromTypedef) {
        if (TagDecl *ToTag = cast_or_null<TagDecl>(Import(*FromTag))) {
          // We found the typedef for an anonymous tag; link them.
          ToTag->setTypedefNameForAnonDecl(cast<TypedefNameDecl>(ToD));
          AnonTagsWithPendingTypedefs.erase(FromTag);
          break;
        }
      }
    }
  }

  return ToD;
}

DeclContext *ASTImporter::ImportContext(DeclContext *FromDC) {
  if (!FromDC)
    return FromDC;

  DeclContext *ToDC = cast_or_null<DeclContext>(Import(cast<Decl>(FromDC)));
  if (!ToDC)
    return nullptr;

  // When we're using a record/enum/Objective-C class/protocol as a context, we
  // need it to have a definition.
  if (RecordDecl *ToRecord = dyn_cast<RecordDecl>(ToDC)) {
    RecordDecl *FromRecord = cast<RecordDecl>(FromDC);
    if (ToRecord->isCompleteDefinition()) {
      // Do nothing.
    } else if (FromRecord->isCompleteDefinition()) {
      ASTNodeImporter(*this).ImportDefinition(FromRecord, ToRecord,
                                              ASTNodeImporter::IDK_Basic);
    } else {
      CompleteDecl(ToRecord);
    }
  } else if (EnumDecl *ToEnum = dyn_cast<EnumDecl>(ToDC)) {
    EnumDecl *FromEnum = cast<EnumDecl>(FromDC);
    if (ToEnum->isCompleteDefinition()) {
      // Do nothing.
} else if (FromEnum->isCompleteDefinition()) { ASTNodeImporter(*this).ImportDefinition(FromEnum, ToEnum, ASTNodeImporter::IDK_Basic); } else { CompleteDecl(ToEnum); } } else if (ObjCInterfaceDecl *ToClass = dyn_cast(ToDC)) { ObjCInterfaceDecl *FromClass = cast(FromDC); if (ToClass->getDefinition()) { // Do nothing. } else if (ObjCInterfaceDecl *FromDef = FromClass->getDefinition()) { ASTNodeImporter(*this).ImportDefinition(FromDef, ToClass, ASTNodeImporter::IDK_Basic); } else { CompleteDecl(ToClass); } } else if (ObjCProtocolDecl *ToProto = dyn_cast(ToDC)) { ObjCProtocolDecl *FromProto = cast(FromDC); if (ToProto->getDefinition()) { // Do nothing. } else if (ObjCProtocolDecl *FromDef = FromProto->getDefinition()) { ASTNodeImporter(*this).ImportDefinition(FromDef, ToProto, ASTNodeImporter::IDK_Basic); } else { CompleteDecl(ToProto); } } return ToDC; } Expr *ASTImporter::Import(Expr *FromE) { if (!FromE) return nullptr; return cast_or_null(Import(cast(FromE))); } Stmt *ASTImporter::Import(Stmt *FromS) { if (!FromS) return nullptr; // Check whether we've already imported this declaration. llvm::DenseMap::iterator Pos = ImportedStmts.find(FromS); if (Pos != ImportedStmts.end()) return Pos->second; // Import the type ASTNodeImporter Importer(*this); Stmt *ToS = Importer.Visit(FromS); if (!ToS) return nullptr; // Record the imported declaration. ImportedStmts[FromS] = ToS; return ToS; } NestedNameSpecifier *ASTImporter::Import(NestedNameSpecifier *FromNNS) { if (!FromNNS) return nullptr; NestedNameSpecifier *prefix = Import(FromNNS->getPrefix()); switch (FromNNS->getKind()) { case NestedNameSpecifier::Identifier: if (IdentifierInfo *II = Import(FromNNS->getAsIdentifier())) { return NestedNameSpecifier::Create(ToContext, prefix, II); } return nullptr; case NestedNameSpecifier::Namespace: if (NamespaceDecl *NS = cast_or_null(Import(FromNNS->getAsNamespace()))) { return NestedNameSpecifier::Create(ToContext, prefix, NS); } return nullptr; case NestedNameSpecifier::NamespaceAlias: if (NamespaceAliasDecl *NSAD = cast_or_null(Import(FromNNS->getAsNamespaceAlias()))) { return NestedNameSpecifier::Create(ToContext, prefix, NSAD); } return nullptr; case NestedNameSpecifier::Global: return NestedNameSpecifier::GlobalSpecifier(ToContext); case NestedNameSpecifier::Super: if (CXXRecordDecl *RD = cast_or_null(Import(FromNNS->getAsRecordDecl()))) { return NestedNameSpecifier::SuperSpecifier(ToContext, RD); } return nullptr; case NestedNameSpecifier::TypeSpec: case NestedNameSpecifier::TypeSpecWithTemplate: { QualType T = Import(QualType(FromNNS->getAsType(), 0u)); if (!T.isNull()) { bool bTemplate = FromNNS->getKind() == NestedNameSpecifier::TypeSpecWithTemplate; return NestedNameSpecifier::Create(ToContext, prefix, bTemplate, T.getTypePtr()); } } return nullptr; } llvm_unreachable("Invalid nested name specifier kind"); } NestedNameSpecifierLoc ASTImporter::Import(NestedNameSpecifierLoc FromNNS) { // Copied from NestedNameSpecifier mostly. SmallVector NestedNames; NestedNameSpecifierLoc NNS = FromNNS; // Push each of the nested-name-specifiers's onto a stack for // serialization in reverse order. 
while (NNS) { NestedNames.push_back(NNS); NNS = NNS.getPrefix(); } NestedNameSpecifierLocBuilder Builder; while (!NestedNames.empty()) { NNS = NestedNames.pop_back_val(); NestedNameSpecifier *Spec = Import(NNS.getNestedNameSpecifier()); if (!Spec) return NestedNameSpecifierLoc(); NestedNameSpecifier::SpecifierKind Kind = Spec->getKind(); switch (Kind) { case NestedNameSpecifier::Identifier: Builder.Extend(getToContext(), Spec->getAsIdentifier(), Import(NNS.getLocalBeginLoc()), Import(NNS.getLocalEndLoc())); break; case NestedNameSpecifier::Namespace: Builder.Extend(getToContext(), Spec->getAsNamespace(), Import(NNS.getLocalBeginLoc()), Import(NNS.getLocalEndLoc())); break; case NestedNameSpecifier::NamespaceAlias: Builder.Extend(getToContext(), Spec->getAsNamespaceAlias(), Import(NNS.getLocalBeginLoc()), Import(NNS.getLocalEndLoc())); break; case NestedNameSpecifier::TypeSpec: case NestedNameSpecifier::TypeSpecWithTemplate: { TypeSourceInfo *TSI = getToContext().getTrivialTypeSourceInfo( QualType(Spec->getAsType(), 0)); Builder.Extend(getToContext(), Import(NNS.getLocalBeginLoc()), TSI->getTypeLoc(), Import(NNS.getLocalEndLoc())); break; } case NestedNameSpecifier::Global: Builder.MakeGlobal(getToContext(), Import(NNS.getLocalBeginLoc())); break; case NestedNameSpecifier::Super: { SourceRange ToRange = Import(NNS.getSourceRange()); Builder.MakeSuper(getToContext(), Spec->getAsRecordDecl(), ToRange.getBegin(), ToRange.getEnd()); } } } return Builder.getWithLocInContext(getToContext()); } TemplateName ASTImporter::Import(TemplateName From) { switch (From.getKind()) { case TemplateName::Template: if (TemplateDecl *ToTemplate = cast_or_null(Import(From.getAsTemplateDecl()))) return TemplateName(ToTemplate); return TemplateName(); case TemplateName::OverloadedTemplate: { OverloadedTemplateStorage *FromStorage = From.getAsOverloadedTemplate(); UnresolvedSet<2> ToTemplates; for (OverloadedTemplateStorage::iterator I = FromStorage->begin(), E = FromStorage->end(); I != E; ++I) { if (NamedDecl *To = cast_or_null(Import(*I))) ToTemplates.addDecl(To); else return TemplateName(); } return ToContext.getOverloadedTemplateName(ToTemplates.begin(), ToTemplates.end()); } case TemplateName::QualifiedTemplate: { QualifiedTemplateName *QTN = From.getAsQualifiedTemplateName(); NestedNameSpecifier *Qualifier = Import(QTN->getQualifier()); if (!Qualifier) return TemplateName(); if (TemplateDecl *ToTemplate = cast_or_null(Import(From.getAsTemplateDecl()))) return ToContext.getQualifiedTemplateName(Qualifier, QTN->hasTemplateKeyword(), ToTemplate); return TemplateName(); } case TemplateName::DependentTemplate: { DependentTemplateName *DTN = From.getAsDependentTemplateName(); NestedNameSpecifier *Qualifier = Import(DTN->getQualifier()); if (!Qualifier) return TemplateName(); if (DTN->isIdentifier()) { return ToContext.getDependentTemplateName(Qualifier, Import(DTN->getIdentifier())); } return ToContext.getDependentTemplateName(Qualifier, DTN->getOperator()); } case TemplateName::SubstTemplateTemplateParm: { SubstTemplateTemplateParmStorage *subst = From.getAsSubstTemplateTemplateParm(); TemplateTemplateParmDecl *param = cast_or_null(Import(subst->getParameter())); if (!param) return TemplateName(); TemplateName replacement = Import(subst->getReplacement()); if (replacement.isNull()) return TemplateName(); return ToContext.getSubstTemplateTemplateParm(param, replacement); } case TemplateName::SubstTemplateTemplateParmPack: { SubstTemplateTemplateParmPackStorage *SubstPack = From.getAsSubstTemplateTemplateParmPack(); 
TemplateTemplateParmDecl *Param = cast_or_null( Import(SubstPack->getParameterPack())); if (!Param) return TemplateName(); ASTNodeImporter Importer(*this); TemplateArgument ArgPack = Importer.ImportTemplateArgument(SubstPack->getArgumentPack()); if (ArgPack.isNull()) return TemplateName(); return ToContext.getSubstTemplateTemplateParmPack(Param, ArgPack); } } llvm_unreachable("Invalid template name kind"); } SourceLocation ASTImporter::Import(SourceLocation FromLoc) { if (FromLoc.isInvalid()) return SourceLocation(); SourceManager &FromSM = FromContext.getSourceManager(); // For now, map everything down to its file location, so that we // don't have to import macro expansions. // FIXME: Import macro expansions! FromLoc = FromSM.getFileLoc(FromLoc); std::pair Decomposed = FromSM.getDecomposedLoc(FromLoc); SourceManager &ToSM = ToContext.getSourceManager(); FileID ToFileID = Import(Decomposed.first); if (ToFileID.isInvalid()) return SourceLocation(); SourceLocation ret = ToSM.getLocForStartOfFile(ToFileID) .getLocWithOffset(Decomposed.second); return ret; } SourceRange ASTImporter::Import(SourceRange FromRange) { return SourceRange(Import(FromRange.getBegin()), Import(FromRange.getEnd())); } FileID ASTImporter::Import(FileID FromID) { llvm::DenseMap::iterator Pos = ImportedFileIDs.find(FromID); if (Pos != ImportedFileIDs.end()) return Pos->second; SourceManager &FromSM = FromContext.getSourceManager(); SourceManager &ToSM = ToContext.getSourceManager(); const SrcMgr::SLocEntry &FromSLoc = FromSM.getSLocEntry(FromID); assert(FromSLoc.isFile() && "Cannot handle macro expansions yet"); // Include location of this file. SourceLocation ToIncludeLoc = Import(FromSLoc.getFile().getIncludeLoc()); // Map the FileID for to the "to" source manager. FileID ToID; const SrcMgr::ContentCache *Cache = FromSLoc.getFile().getContentCache(); if (Cache->OrigEntry && Cache->OrigEntry->getDir()) { // FIXME: We probably want to use getVirtualFile(), so we don't hit the // disk again // FIXME: We definitely want to re-use the existing MemoryBuffer, rather // than mmap the files several times. const FileEntry *Entry = ToFileManager.getFile(Cache->OrigEntry->getName()); if (!Entry) return FileID(); ToID = ToSM.createFileID(Entry, ToIncludeLoc, FromSLoc.getFile().getFileCharacteristic()); } else { // FIXME: We want to re-use the existing MemoryBuffer! const llvm::MemoryBuffer * FromBuf = Cache->getBuffer(FromContext.getDiagnostics(), FromSM); std::unique_ptr ToBuf = llvm::MemoryBuffer::getMemBufferCopy(FromBuf->getBuffer(), FromBuf->getBufferIdentifier()); ToID = ToSM.createFileID(std::move(ToBuf), FromSLoc.getFile().getFileCharacteristic()); } ImportedFileIDs[FromID] = ToID; return ToID; } CXXCtorInitializer *ASTImporter::Import(CXXCtorInitializer *From) { Expr *ToExpr = Import(From->getInit()); if (!ToExpr && From->getInit()) return nullptr; if (From->isBaseInitializer()) { TypeSourceInfo *ToTInfo = Import(From->getTypeSourceInfo()); if (!ToTInfo && From->getTypeSourceInfo()) return nullptr; return new (ToContext) CXXCtorInitializer( ToContext, ToTInfo, From->isBaseVirtual(), Import(From->getLParenLoc()), ToExpr, Import(From->getRParenLoc()), From->isPackExpansion() ? 
Import(From->getEllipsisLoc()) : SourceLocation()); } else if (From->isMemberInitializer()) { FieldDecl *ToField = llvm::cast_or_null(Import(From->getMember())); if (!ToField && From->getMember()) return nullptr; return new (ToContext) CXXCtorInitializer( ToContext, ToField, Import(From->getMemberLocation()), Import(From->getLParenLoc()), ToExpr, Import(From->getRParenLoc())); } else if (From->isIndirectMemberInitializer()) { IndirectFieldDecl *ToIField = llvm::cast_or_null( Import(From->getIndirectMember())); if (!ToIField && From->getIndirectMember()) return nullptr; return new (ToContext) CXXCtorInitializer( ToContext, ToIField, Import(From->getMemberLocation()), Import(From->getLParenLoc()), ToExpr, Import(From->getRParenLoc())); } else if (From->isDelegatingInitializer()) { TypeSourceInfo *ToTInfo = Import(From->getTypeSourceInfo()); if (!ToTInfo && From->getTypeSourceInfo()) return nullptr; return new (ToContext) CXXCtorInitializer(ToContext, ToTInfo, Import(From->getLParenLoc()), ToExpr, Import(From->getRParenLoc())); } else { return nullptr; } } CXXBaseSpecifier *ASTImporter::Import(const CXXBaseSpecifier *BaseSpec) { auto Pos = ImportedCXXBaseSpecifiers.find(BaseSpec); if (Pos != ImportedCXXBaseSpecifiers.end()) return Pos->second; CXXBaseSpecifier *Imported = new (ToContext) CXXBaseSpecifier( Import(BaseSpec->getSourceRange()), BaseSpec->isVirtual(), BaseSpec->isBaseOfClass(), BaseSpec->getAccessSpecifierAsWritten(), Import(BaseSpec->getTypeSourceInfo()), Import(BaseSpec->getEllipsisLoc())); ImportedCXXBaseSpecifiers[BaseSpec] = Imported; return Imported; } void ASTImporter::ImportDefinition(Decl *From) { Decl *To = Import(From); if (!To) return; if (DeclContext *FromDC = cast(From)) { ASTNodeImporter Importer(*this); if (RecordDecl *ToRecord = dyn_cast(To)) { if (!ToRecord->getDefinition()) { Importer.ImportDefinition(cast(FromDC), ToRecord, ASTNodeImporter::IDK_Everything); return; } } if (EnumDecl *ToEnum = dyn_cast(To)) { if (!ToEnum->getDefinition()) { Importer.ImportDefinition(cast(FromDC), ToEnum, ASTNodeImporter::IDK_Everything); return; } } if (ObjCInterfaceDecl *ToIFace = dyn_cast(To)) { if (!ToIFace->getDefinition()) { Importer.ImportDefinition(cast(FromDC), ToIFace, ASTNodeImporter::IDK_Everything); return; } } if (ObjCProtocolDecl *ToProto = dyn_cast(To)) { if (!ToProto->getDefinition()) { Importer.ImportDefinition(cast(FromDC), ToProto, ASTNodeImporter::IDK_Everything); return; } } Importer.ImportDeclContext(FromDC, true); } } DeclarationName ASTImporter::Import(DeclarationName FromName) { if (!FromName) return DeclarationName(); switch (FromName.getNameKind()) { case DeclarationName::Identifier: return Import(FromName.getAsIdentifierInfo()); case DeclarationName::ObjCZeroArgSelector: case DeclarationName::ObjCOneArgSelector: case DeclarationName::ObjCMultiArgSelector: return Import(FromName.getObjCSelector()); case DeclarationName::CXXConstructorName: { QualType T = Import(FromName.getCXXNameType()); if (T.isNull()) return DeclarationName(); return ToContext.DeclarationNames.getCXXConstructorName( ToContext.getCanonicalType(T)); } case DeclarationName::CXXDestructorName: { QualType T = Import(FromName.getCXXNameType()); if (T.isNull()) return DeclarationName(); return ToContext.DeclarationNames.getCXXDestructorName( ToContext.getCanonicalType(T)); } case DeclarationName::CXXDeductionGuideName: { TemplateDecl *Template = cast_or_null( Import(FromName.getCXXDeductionGuideTemplate())); if (!Template) return DeclarationName(); return 
ToContext.DeclarationNames.getCXXDeductionGuideName(Template); } case DeclarationName::CXXConversionFunctionName: { QualType T = Import(FromName.getCXXNameType()); if (T.isNull()) return DeclarationName(); return ToContext.DeclarationNames.getCXXConversionFunctionName( ToContext.getCanonicalType(T)); } case DeclarationName::CXXOperatorName: return ToContext.DeclarationNames.getCXXOperatorName( FromName.getCXXOverloadedOperator()); case DeclarationName::CXXLiteralOperatorName: return ToContext.DeclarationNames.getCXXLiteralOperatorName( Import(FromName.getCXXLiteralIdentifier())); case DeclarationName::CXXUsingDirective: // FIXME: STATICS! return DeclarationName::getUsingDirectiveName(); } llvm_unreachable("Invalid DeclarationName Kind!"); } IdentifierInfo *ASTImporter::Import(const IdentifierInfo *FromId) { if (!FromId) return nullptr; IdentifierInfo *ToId = &ToContext.Idents.get(FromId->getName()); if (!ToId->getBuiltinID() && FromId->getBuiltinID()) ToId->setBuiltinID(FromId->getBuiltinID()); return ToId; } Selector ASTImporter::Import(Selector FromSel) { if (FromSel.isNull()) return Selector(); SmallVector Idents; Idents.push_back(Import(FromSel.getIdentifierInfoForSlot(0))); for (unsigned I = 1, N = FromSel.getNumArgs(); I < N; ++I) Idents.push_back(Import(FromSel.getIdentifierInfoForSlot(I))); return ToContext.Selectors.getSelector(FromSel.getNumArgs(), Idents.data()); } DeclarationName ASTImporter::HandleNameConflict(DeclarationName Name, DeclContext *DC, unsigned IDNS, NamedDecl **Decls, unsigned NumDecls) { return Name; } DiagnosticBuilder ASTImporter::ToDiag(SourceLocation Loc, unsigned DiagID) { if (LastDiagFromFrom) ToContext.getDiagnostics().notePriorDiagnosticFrom( FromContext.getDiagnostics()); LastDiagFromFrom = false; return ToContext.getDiagnostics().Report(Loc, DiagID); } DiagnosticBuilder ASTImporter::FromDiag(SourceLocation Loc, unsigned DiagID) { if (!LastDiagFromFrom) FromContext.getDiagnostics().notePriorDiagnosticFrom( ToContext.getDiagnostics()); LastDiagFromFrom = true; return FromContext.getDiagnostics().Report(Loc, DiagID); } void ASTImporter::CompleteDecl (Decl *D) { if (ObjCInterfaceDecl *ID = dyn_cast(D)) { if (!ID->getDefinition()) ID->startDefinition(); } else if (ObjCProtocolDecl *PD = dyn_cast(D)) { if (!PD->getDefinition()) PD->startDefinition(); } else if (TagDecl *TD = dyn_cast(D)) { if (!TD->getDefinition() && !TD->isBeingDefined()) { TD->startDefinition(); TD->setCompleteDefinition(true); } } else { assert (0 && "CompleteDecl called on a Decl that can't be completed"); } } Decl *ASTImporter::Imported(Decl *From, Decl *To) { if (From->hasAttrs()) { for (Attr *FromAttr : From->getAttrs()) To->addAttr(FromAttr->clone(To->getASTContext())); } if (From->isUsed()) { To->setIsUsed(); } if (From->isImplicit()) { To->setImplicit(); } ImportedDecls[From] = To; return To; } bool ASTImporter::IsStructurallyEquivalent(QualType From, QualType To, bool Complain) { llvm::DenseMap::iterator Pos = ImportedTypes.find(From.getTypePtr()); if (Pos != ImportedTypes.end() && ToContext.hasSameType(Import(From), To)) return true; StructuralEquivalenceContext Ctx(FromContext, ToContext, NonEquivalentDecls, false, Complain); return Ctx.IsStructurallyEquivalent(From, To); } diff --git a/lib/AST/DeclCXX.cpp b/lib/AST/DeclCXX.cpp index 1caceab85eea..5782b7b56c96 100644 --- a/lib/AST/DeclCXX.cpp +++ b/lib/AST/DeclCXX.cpp @@ -1,2566 +1,2592 @@ //===--- DeclCXX.cpp - C++ Declaration AST Node Implementation ------------===// // // The LLVM Compiler Infrastructure // // This file is 
distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements the C++ related Decl classes. // //===----------------------------------------------------------------------===// #include "clang/AST/DeclCXX.h" #include "clang/AST/ASTContext.h" #include "clang/AST/ASTLambda.h" #include "clang/AST/ASTMutationListener.h" #include "clang/AST/CXXInheritance.h" #include "clang/AST/DeclTemplate.h" #include "clang/AST/Expr.h" #include "clang/AST/ExprCXX.h" #include "clang/AST/ODRHash.h" #include "clang/AST/TypeLoc.h" #include "clang/Basic/IdentifierTable.h" #include "llvm/ADT/STLExtras.h" #include "llvm/ADT/SmallPtrSet.h" using namespace clang; //===----------------------------------------------------------------------===// // Decl Allocation/Deallocation Method Implementations //===----------------------------------------------------------------------===// void AccessSpecDecl::anchor() { } AccessSpecDecl *AccessSpecDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) AccessSpecDecl(EmptyShell()); } void LazyASTUnresolvedSet::getFromExternalSource(ASTContext &C) const { ExternalASTSource *Source = C.getExternalSource(); assert(Impl.Decls.isLazy() && "getFromExternalSource for non-lazy set"); assert(Source && "getFromExternalSource with no external source"); for (ASTUnresolvedSet::iterator I = Impl.begin(); I != Impl.end(); ++I) I.setDecl(cast(Source->GetExternalDecl( reinterpret_cast(I.getDecl()) >> 2))); Impl.Decls.setLazy(false); } CXXRecordDecl::DefinitionData::DefinitionData(CXXRecordDecl *D) : UserDeclaredConstructor(false), UserDeclaredSpecialMembers(0), Aggregate(true), PlainOldData(true), Empty(true), Polymorphic(false), Abstract(false), IsStandardLayout(true), HasNoNonEmptyBases(true), HasPrivateFields(false), HasProtectedFields(false), HasPublicFields(false), HasMutableFields(false), HasVariantMembers(false), HasOnlyCMembers(true), HasInClassInitializer(false), HasUninitializedReferenceMember(false), HasUninitializedFields(false), HasInheritedConstructor(false), HasInheritedAssignment(false), + NeedOverloadResolutionForCopyConstructor(false), NeedOverloadResolutionForMoveConstructor(false), NeedOverloadResolutionForMoveAssignment(false), NeedOverloadResolutionForDestructor(false), + DefaultedCopyConstructorIsDeleted(false), DefaultedMoveConstructorIsDeleted(false), DefaultedMoveAssignmentIsDeleted(false), DefaultedDestructorIsDeleted(false), HasTrivialSpecialMembers(SMF_All), DeclaredNonTrivialSpecialMembers(0), HasIrrelevantDestructor(true), HasConstexprNonCopyMoveConstructor(false), HasDefaultedDefaultConstructor(false), + CanPassInRegisters(true), DefaultedDefaultConstructorIsConstexpr(true), HasConstexprDefaultConstructor(false), HasNonLiteralTypeFieldsOrBases(false), ComputedVisibleConversions(false), UserProvidedDefaultConstructor(false), DeclaredSpecialMembers(0), ImplicitCopyConstructorCanHaveConstParamForVBase(true), ImplicitCopyConstructorCanHaveConstParamForNonVBase(true), ImplicitCopyAssignmentHasConstParam(true), HasDeclaredCopyConstructorWithConstParam(false), HasDeclaredCopyAssignmentWithConstParam(false), IsLambda(false), IsParsingBaseSpecifiers(false), HasODRHash(false), ODRHash(0), NumBases(0), NumVBases(0), Bases(), VBases(), Definition(D), FirstFriend() {} CXXBaseSpecifier *CXXRecordDecl::DefinitionData::getBasesSlowCase() const { return Bases.get(Definition->getASTContext().getExternalSource()); } 
CXXBaseSpecifier *CXXRecordDecl::DefinitionData::getVBasesSlowCase() const { return VBases.get(Definition->getASTContext().getExternalSource()); } CXXRecordDecl::CXXRecordDecl(Kind K, TagKind TK, const ASTContext &C, DeclContext *DC, SourceLocation StartLoc, SourceLocation IdLoc, IdentifierInfo *Id, CXXRecordDecl *PrevDecl) : RecordDecl(K, TK, C, DC, StartLoc, IdLoc, Id, PrevDecl), DefinitionData(PrevDecl ? PrevDecl->DefinitionData : nullptr), TemplateOrInstantiation() {} CXXRecordDecl *CXXRecordDecl::Create(const ASTContext &C, TagKind TK, DeclContext *DC, SourceLocation StartLoc, SourceLocation IdLoc, IdentifierInfo *Id, CXXRecordDecl* PrevDecl, bool DelayTypeCreation) { CXXRecordDecl *R = new (C, DC) CXXRecordDecl(CXXRecord, TK, C, DC, StartLoc, IdLoc, Id, PrevDecl); R->MayHaveOutOfDateDef = C.getLangOpts().Modules; // FIXME: DelayTypeCreation seems like such a hack if (!DelayTypeCreation) C.getTypeDeclType(R, PrevDecl); return R; } CXXRecordDecl * CXXRecordDecl::CreateLambda(const ASTContext &C, DeclContext *DC, TypeSourceInfo *Info, SourceLocation Loc, bool Dependent, bool IsGeneric, LambdaCaptureDefault CaptureDefault) { CXXRecordDecl *R = new (C, DC) CXXRecordDecl(CXXRecord, TTK_Class, C, DC, Loc, Loc, nullptr, nullptr); R->IsBeingDefined = true; R->DefinitionData = new (C) struct LambdaDefinitionData(R, Info, Dependent, IsGeneric, CaptureDefault); R->MayHaveOutOfDateDef = false; R->setImplicit(true); C.getTypeDeclType(R, /*PrevDecl=*/nullptr); return R; } CXXRecordDecl * CXXRecordDecl::CreateDeserialized(const ASTContext &C, unsigned ID) { CXXRecordDecl *R = new (C, ID) CXXRecordDecl( CXXRecord, TTK_Struct, C, nullptr, SourceLocation(), SourceLocation(), nullptr, nullptr); R->MayHaveOutOfDateDef = false; return R; } void CXXRecordDecl::setBases(CXXBaseSpecifier const * const *Bases, unsigned NumBases) { ASTContext &C = getASTContext(); if (!data().Bases.isOffset() && data().NumBases > 0) C.Deallocate(data().getBases()); if (NumBases) { if (!C.getLangOpts().CPlusPlus1z) { // C++ [dcl.init.aggr]p1: // An aggregate is [...] a class with [...] no base classes [...]. data().Aggregate = false; } // C++ [class]p4: // A POD-struct is an aggregate class... data().PlainOldData = false; } // The set of seen virtual base types. llvm::SmallPtrSet SeenVBaseTypes; // The virtual bases of this class. SmallVector VBases; data().Bases = new(C) CXXBaseSpecifier [NumBases]; data().NumBases = NumBases; for (unsigned i = 0; i < NumBases; ++i) { data().getBases()[i] = *Bases[i]; // Keep track of inherited vbases for this base class. const CXXBaseSpecifier *Base = Bases[i]; QualType BaseType = Base->getType(); // Skip dependent types; we can't do any checking on them now. if (BaseType->isDependentType()) continue; CXXRecordDecl *BaseClassDecl = cast(BaseType->getAs()->getDecl()); if (!BaseClassDecl->isEmpty()) { if (!data().Empty) { // C++0x [class]p7: // A standard-layout class is a class that: // [...] // -- either has no non-static data members in the most derived // class and at most one base class with non-static data members, // or has no base classes with non-static data members, and // If this is the second non-empty base, then neither of these two // clauses can be true. data().IsStandardLayout = false; } // C++14 [meta.unary.prop]p4: // T is a class type [...] with [...] no base class B for which // is_empty::value is false. data().Empty = false; data().HasNoNonEmptyBases = false; } // C++1z [dcl.init.agg]p1: // An aggregate is a class with [...] 
no private or protected base classes if (Base->getAccessSpecifier() != AS_public) data().Aggregate = false; // C++ [class.virtual]p1: // A class that declares or inherits a virtual function is called a // polymorphic class. if (BaseClassDecl->isPolymorphic()) data().Polymorphic = true; // C++0x [class]p7: // A standard-layout class is a class that: [...] // -- has no non-standard-layout base classes if (!BaseClassDecl->isStandardLayout()) data().IsStandardLayout = false; // Record if this base is the first non-literal field or base. if (!hasNonLiteralTypeFieldsOrBases() && !BaseType->isLiteralType(C)) data().HasNonLiteralTypeFieldsOrBases = true; // Now go through all virtual bases of this base and add them. for (const auto &VBase : BaseClassDecl->vbases()) { // Add this base if it's not already in the list. if (SeenVBaseTypes.insert(C.getCanonicalType(VBase.getType())).second) { VBases.push_back(&VBase); // C++11 [class.copy]p8: // The implicitly-declared copy constructor for a class X will have // the form 'X::X(const X&)' if each [...] virtual base class B of X // has a copy constructor whose first parameter is of type // 'const B&' or 'const volatile B&' [...] if (CXXRecordDecl *VBaseDecl = VBase.getType()->getAsCXXRecordDecl()) if (!VBaseDecl->hasCopyConstructorWithConstParam()) data().ImplicitCopyConstructorCanHaveConstParamForVBase = false; // C++1z [dcl.init.agg]p1: // An aggregate is a class with [...] no virtual base classes data().Aggregate = false; } } if (Base->isVirtual()) { // Add this base if it's not already in the list. if (SeenVBaseTypes.insert(C.getCanonicalType(BaseType)).second) VBases.push_back(Base); // C++14 [meta.unary.prop] is_empty: // T is a class type, but not a union type, with ... no virtual base // classes data().Empty = false; // C++1z [dcl.init.agg]p1: // An aggregate is a class with [...] no virtual base classes data().Aggregate = false; // C++11 [class.ctor]p5, C++11 [class.copy]p12, C++11 [class.copy]p25: // A [default constructor, copy/move constructor, or copy/move assignment // operator for a class X] is trivial [...] if: // -- class X has [...] no virtual base classes data().HasTrivialSpecialMembers &= SMF_Destructor; // C++0x [class]p7: // A standard-layout class is a class that: [...] // -- has [...] no virtual base classes data().IsStandardLayout = false; // C++11 [dcl.constexpr]p4: // In the definition of a constexpr constructor [...] // -- the class shall not have any virtual base classes data().DefaultedDefaultConstructorIsConstexpr = false; // C++1z [class.copy]p8: // The implicitly-declared copy constructor for a class X will have // the form 'X::X(const X&)' if each potentially constructed subobject // has a copy constructor whose first parameter is of type // 'const B&' or 'const volatile B&' [...] if (!BaseClassDecl->hasCopyConstructorWithConstParam()) data().ImplicitCopyConstructorCanHaveConstParamForVBase = false; } else { // C++ [class.ctor]p5: // A default constructor is trivial [...] if: // -- all the direct base classes of its class have trivial default // constructors. if (!BaseClassDecl->hasTrivialDefaultConstructor()) data().HasTrivialSpecialMembers &= ~SMF_DefaultConstructor; // C++0x [class.copy]p13: // A copy/move constructor for class X is trivial if [...] // [...] 
// -- the constructor selected to copy/move each direct base class // subobject is trivial, and if (!BaseClassDecl->hasTrivialCopyConstructor()) data().HasTrivialSpecialMembers &= ~SMF_CopyConstructor; // If the base class doesn't have a simple move constructor, we'll eagerly // declare it and perform overload resolution to determine which function // it actually calls. If it does have a simple move constructor, this // check is correct. if (!BaseClassDecl->hasTrivialMoveConstructor()) data().HasTrivialSpecialMembers &= ~SMF_MoveConstructor; // C++0x [class.copy]p27: // A copy/move assignment operator for class X is trivial if [...] // [...] // -- the assignment operator selected to copy/move each direct base // class subobject is trivial, and if (!BaseClassDecl->hasTrivialCopyAssignment()) data().HasTrivialSpecialMembers &= ~SMF_CopyAssignment; // If the base class doesn't have a simple move assignment, we'll eagerly // declare it and perform overload resolution to determine which function // it actually calls. If it does have a simple move assignment, this // check is correct. if (!BaseClassDecl->hasTrivialMoveAssignment()) data().HasTrivialSpecialMembers &= ~SMF_MoveAssignment; // C++11 [class.ctor]p6: // If that user-written default constructor would satisfy the // requirements of a constexpr constructor, the implicitly-defined // default constructor is constexpr. if (!BaseClassDecl->hasConstexprDefaultConstructor()) data().DefaultedDefaultConstructorIsConstexpr = false; // C++1z [class.copy]p8: // The implicitly-declared copy constructor for a class X will have // the form 'X::X(const X&)' if each potentially constructed subobject // has a copy constructor whose first parameter is of type // 'const B&' or 'const volatile B&' [...] if (!BaseClassDecl->hasCopyConstructorWithConstParam()) data().ImplicitCopyConstructorCanHaveConstParamForNonVBase = false; } // C++ [class.ctor]p3: // A destructor is trivial if all the direct base classes of its class // have trivial destructors. if (!BaseClassDecl->hasTrivialDestructor()) data().HasTrivialSpecialMembers &= ~SMF_Destructor; if (!BaseClassDecl->hasIrrelevantDestructor()) data().HasIrrelevantDestructor = false; // C++11 [class.copy]p18: // The implicitly-declared copy assignment oeprator for a class X will // have the form 'X& X::operator=(const X&)' if each direct base class B // of X has a copy assignment operator whose parameter is of type 'const // B&', 'const volatile B&', or 'B' [...] if (!BaseClassDecl->hasCopyAssignmentWithConstParam()) data().ImplicitCopyAssignmentHasConstParam = false; // A class has an Objective-C object member if... or any of its bases // has an Objective-C object member. if (BaseClassDecl->hasObjectMember()) setHasObjectMember(true); if (BaseClassDecl->hasVolatileMember()) setHasVolatileMember(true); // Keep track of the presence of mutable fields. - if (BaseClassDecl->hasMutableFields()) + if (BaseClassDecl->hasMutableFields()) { data().HasMutableFields = true; + data().NeedOverloadResolutionForCopyConstructor = true; + } if (BaseClassDecl->hasUninitializedReferenceMember()) data().HasUninitializedReferenceMember = true; if (!BaseClassDecl->allowConstDefaultInit()) data().HasUninitializedFields = true; addedClassSubobject(BaseClassDecl); } if (VBases.empty()) { data().IsParsingBaseSpecifiers = false; return; } // Create base specifier for any direct or indirect virtual bases. 
data().VBases = new (C) CXXBaseSpecifier[VBases.size()]; data().NumVBases = VBases.size(); for (int I = 0, E = VBases.size(); I != E; ++I) { QualType Type = VBases[I]->getType(); if (!Type->isDependentType()) addedClassSubobject(Type->getAsCXXRecordDecl()); data().getVBases()[I] = *VBases[I]; } data().IsParsingBaseSpecifiers = false; } unsigned CXXRecordDecl::getODRHash() const { assert(hasDefinition() && "ODRHash only for records with definitions"); // Previously calculated hash is stored in DefinitionData. if (DefinitionData->HasODRHash) return DefinitionData->ODRHash; // Only calculate hash on first call of getODRHash per record. ODRHash Hash; Hash.AddCXXRecordDecl(getDefinition()); DefinitionData->HasODRHash = true; DefinitionData->ODRHash = Hash.CalculateHash(); return DefinitionData->ODRHash; } void CXXRecordDecl::addedClassSubobject(CXXRecordDecl *Subobj) { // C++11 [class.copy]p11: // A defaulted copy/move constructor for a class X is defined as // deleted if X has: // -- a direct or virtual base class B that cannot be copied/moved [...] // -- a non-static data member of class type M (or array thereof) // that cannot be copied or moved [...] + if (!Subobj->hasSimpleCopyConstructor()) + data().NeedOverloadResolutionForCopyConstructor = true; if (!Subobj->hasSimpleMoveConstructor()) data().NeedOverloadResolutionForMoveConstructor = true; // C++11 [class.copy]p23: // A defaulted copy/move assignment operator for a class X is defined as // deleted if X has: // -- a direct or virtual base class B that cannot be copied/moved [...] // -- a non-static data member of class type M (or array thereof) // that cannot be copied or moved [...] if (!Subobj->hasSimpleMoveAssignment()) data().NeedOverloadResolutionForMoveAssignment = true; // C++11 [class.ctor]p5, C++11 [class.copy]p11, C++11 [class.dtor]p5: // A defaulted [ctor or dtor] for a class X is defined as // deleted if X has: // -- any direct or virtual base class [...] has a type with a destructor // that is deleted or inaccessible from the defaulted [ctor or dtor]. // -- any non-static data member has a type with a destructor // that is deleted or inaccessible from the defaulted [ctor or dtor]. if (!Subobj->hasSimpleDestructor()) { + data().NeedOverloadResolutionForCopyConstructor = true; data().NeedOverloadResolutionForMoveConstructor = true; data().NeedOverloadResolutionForDestructor = true; } } bool CXXRecordDecl::hasAnyDependentBases() const { if (!isDependentContext()) return false; return !forallBases([](const CXXRecordDecl *) { return true; }); } bool CXXRecordDecl::isTriviallyCopyable() const { // C++0x [class]p5: // A trivially copyable class is a class that: // -- has no non-trivial copy constructors, if (hasNonTrivialCopyConstructor()) return false; // -- has no non-trivial move constructors, if (hasNonTrivialMoveConstructor()) return false; // -- has no non-trivial copy assignment operators, if (hasNonTrivialCopyAssignment()) return false; // -- has no non-trivial move assignment operators, and if (hasNonTrivialMoveAssignment()) return false; // -- has a trivial destructor. if (!hasTrivialDestructor()) return false; return true; } void CXXRecordDecl::markedVirtualFunctionPure() { // C++ [class.abstract]p2: // A class is abstract if it has at least one pure virtual function. 
data().Abstract = true; } void CXXRecordDecl::addedMember(Decl *D) { if (!D->isImplicit() && !isa(D) && !isa(D) && (!isa(D) || cast(D)->getTagKind() == TTK_Class || cast(D)->getTagKind() == TTK_Interface)) data().HasOnlyCMembers = false; // Ignore friends and invalid declarations. if (D->getFriendObjectKind() || D->isInvalidDecl()) return; FunctionTemplateDecl *FunTmpl = dyn_cast(D); if (FunTmpl) D = FunTmpl->getTemplatedDecl(); // FIXME: Pass NamedDecl* to addedMember? Decl *DUnderlying = D; if (auto *ND = dyn_cast(DUnderlying)) { DUnderlying = ND->getUnderlyingDecl(); if (FunctionTemplateDecl *UnderlyingFunTmpl = dyn_cast(DUnderlying)) DUnderlying = UnderlyingFunTmpl->getTemplatedDecl(); } if (CXXMethodDecl *Method = dyn_cast(D)) { if (Method->isVirtual()) { // C++ [dcl.init.aggr]p1: // An aggregate is an array or a class with [...] no virtual functions. data().Aggregate = false; // C++ [class]p4: // A POD-struct is an aggregate class... data().PlainOldData = false; // C++14 [meta.unary.prop]p4: // T is a class type [...] with [...] no virtual member functions... data().Empty = false; // C++ [class.virtual]p1: // A class that declares or inherits a virtual function is called a // polymorphic class. data().Polymorphic = true; // C++11 [class.ctor]p5, C++11 [class.copy]p12, C++11 [class.copy]p25: // A [default constructor, copy/move constructor, or copy/move // assignment operator for a class X] is trivial [...] if: // -- class X has no virtual functions [...] data().HasTrivialSpecialMembers &= SMF_Destructor; // C++0x [class]p7: // A standard-layout class is a class that: [...] // -- has no virtual functions data().IsStandardLayout = false; } } // Notify the listener if an implicit member was added after the definition // was completed. if (!isBeingDefined() && D->isImplicit()) if (ASTMutationListener *L = getASTMutationListener()) L->AddedCXXImplicitMember(data().Definition, D); // The kind of special member this declaration is, if any. unsigned SMKind = 0; // Handle constructors. if (CXXConstructorDecl *Constructor = dyn_cast(D)) { if (!Constructor->isImplicit()) { // Note that we have a user-declared constructor. data().UserDeclaredConstructor = true; // C++ [class]p4: // A POD-struct is an aggregate class [...] // Since the POD bit is meant to be C++03 POD-ness, clear it even if the // type is technically an aggregate in C++0x since it wouldn't be in 03. data().PlainOldData = false; } if (Constructor->isDefaultConstructor()) { SMKind |= SMF_DefaultConstructor; if (Constructor->isUserProvided()) data().UserProvidedDefaultConstructor = true; if (Constructor->isConstexpr()) data().HasConstexprDefaultConstructor = true; if (Constructor->isDefaulted()) data().HasDefaultedDefaultConstructor = true; } if (!FunTmpl) { unsigned Quals; if (Constructor->isCopyConstructor(Quals)) { SMKind |= SMF_CopyConstructor; if (Quals & Qualifiers::Const) data().HasDeclaredCopyConstructorWithConstParam = true; } else if (Constructor->isMoveConstructor()) SMKind |= SMF_MoveConstructor; } // C++11 [dcl.init.aggr]p1: DR1518 // An aggregate is an array or a class with no user-provided, explicit, or // inherited constructors if (Constructor->isUserProvided() || Constructor->isExplicit()) data().Aggregate = false; } // Handle constructors, including those inherited from base classes. if (CXXConstructorDecl *Constructor = dyn_cast(DUnderlying)) { // Record if we see any constexpr constructors which are neither copy // nor move constructors. // C++1z [basic.types]p10: // [...] 
has at least one constexpr constructor or constructor template // (possibly inherited from a base class) that is not a copy or move // constructor [...] if (Constructor->isConstexpr() && !Constructor->isCopyOrMoveConstructor()) data().HasConstexprNonCopyMoveConstructor = true; } // Handle destructors. if (CXXDestructorDecl *DD = dyn_cast(D)) { SMKind |= SMF_Destructor; if (DD->isUserProvided()) data().HasIrrelevantDestructor = false; // If the destructor is explicitly defaulted and not trivial or not public // or if the destructor is deleted, we clear HasIrrelevantDestructor in // finishedDefaultedOrDeletedMember. // C++11 [class.dtor]p5: // A destructor is trivial if [...] the destructor is not virtual. if (DD->isVirtual()) data().HasTrivialSpecialMembers &= ~SMF_Destructor; } // Handle member functions. if (CXXMethodDecl *Method = dyn_cast(D)) { if (Method->isCopyAssignmentOperator()) { SMKind |= SMF_CopyAssignment; const ReferenceType *ParamTy = Method->getParamDecl(0)->getType()->getAs(); if (!ParamTy || ParamTy->getPointeeType().isConstQualified()) data().HasDeclaredCopyAssignmentWithConstParam = true; } if (Method->isMoveAssignmentOperator()) SMKind |= SMF_MoveAssignment; // Keep the list of conversion functions up-to-date. if (CXXConversionDecl *Conversion = dyn_cast(D)) { // FIXME: We use the 'unsafe' accessor for the access specifier here, // because Sema may not have set it yet. That's really just a misdesign // in Sema. However, LLDB *will* have set the access specifier correctly, // and adds declarations after the class is technically completed, // so completeDefinition()'s overriding of the access specifiers doesn't // work. AccessSpecifier AS = Conversion->getAccessUnsafe(); if (Conversion->getPrimaryTemplate()) { // We don't record specializations. } else { ASTContext &Ctx = getASTContext(); ASTUnresolvedSet &Conversions = data().Conversions.get(Ctx); NamedDecl *Primary = FunTmpl ? cast(FunTmpl) : cast(Conversion); if (Primary->getPreviousDecl()) Conversions.replace(cast(Primary->getPreviousDecl()), Primary, AS); else Conversions.addDecl(Ctx, Primary, AS); } } if (SMKind) { // If this is the first declaration of a special member, we no longer have // an implicit trivial special member. data().HasTrivialSpecialMembers &= data().DeclaredSpecialMembers | ~SMKind; if (!Method->isImplicit() && !Method->isUserProvided()) { // This method is user-declared but not user-provided. We can't work out // whether it's trivial yet (not until we get to the end of the class). // We'll handle this method in finishedDefaultedOrDeletedMember. } else if (Method->isTrivial()) data().HasTrivialSpecialMembers |= SMKind; else data().DeclaredNonTrivialSpecialMembers |= SMKind; // Note when we have declared a declared special member, and suppress the // implicit declaration of this special member. data().DeclaredSpecialMembers |= SMKind; if (!Method->isImplicit()) { data().UserDeclaredSpecialMembers |= SMKind; // C++03 [class]p4: // A POD-struct is an aggregate class that has [...] no user-defined // copy assignment operator and no user-defined destructor. // // Since the POD bit is meant to be C++03 POD-ness, and in C++03, // aggregates could not have any constructors, clear it even for an // explicitly defaulted or deleted constructor. // type is technically an aggregate in C++0x since it wouldn't be in 03. // // Also, a user-declared move assignment operator makes a class non-POD. // This is an extension in C++03. data().PlainOldData = false; } } return; } // Handle non-static data members. 
if (FieldDecl *Field = dyn_cast(D)) { // C++ [class.bit]p2: // A declaration for a bit-field that omits the identifier declares an // unnamed bit-field. Unnamed bit-fields are not members and cannot be // initialized. if (Field->isUnnamedBitfield()) return; // C++ [dcl.init.aggr]p1: // An aggregate is an array or a class (clause 9) with [...] no // private or protected non-static data members (clause 11). // // A POD must be an aggregate. if (D->getAccess() == AS_private || D->getAccess() == AS_protected) { data().Aggregate = false; data().PlainOldData = false; } // C++0x [class]p7: // A standard-layout class is a class that: // [...] // -- has the same access control for all non-static data members, switch (D->getAccess()) { case AS_private: data().HasPrivateFields = true; break; case AS_protected: data().HasProtectedFields = true; break; case AS_public: data().HasPublicFields = true; break; case AS_none: llvm_unreachable("Invalid access specifier"); }; if ((data().HasPrivateFields + data().HasProtectedFields + data().HasPublicFields) > 1) data().IsStandardLayout = false; // Keep track of the presence of mutable fields. - if (Field->isMutable()) + if (Field->isMutable()) { data().HasMutableFields = true; + data().NeedOverloadResolutionForCopyConstructor = true; + } // C++11 [class.union]p8, DR1460: // If X is a union, a non-static data member of X that is not an anonymous // union is a variant member of X. if (isUnion() && !Field->isAnonymousStructOrUnion()) data().HasVariantMembers = true; // C++0x [class]p9: // A POD struct is a class that is both a trivial class and a // standard-layout class, and has no non-static data members of type // non-POD struct, non-POD union (or array of such types). // // Automatic Reference Counting: the presence of a member of Objective-C pointer type // that does not explicitly have no lifetime makes the class a non-POD. ASTContext &Context = getASTContext(); QualType T = Context.getBaseElementType(Field->getType()); if (T->isObjCRetainableType() || T.isObjCGCStrong()) { if (T.hasNonTrivialObjCLifetime()) { // Objective-C Automatic Reference Counting: // If a class has a non-static data member of Objective-C pointer // type (or array thereof), it is a non-POD type and its // default constructor (if any), copy constructor, move constructor, // copy assignment operator, move assignment operator, and destructor are // non-trivial. setHasObjectMember(true); struct DefinitionData &Data = data(); Data.PlainOldData = false; Data.HasTrivialSpecialMembers = 0; Data.HasIrrelevantDestructor = false; } else if (!Context.getLangOpts().ObjCAutoRefCount) { setHasObjectMember(true); } } else if (!T.isCXX98PODType(Context)) data().PlainOldData = false; if (T->isReferenceType()) { if (!Field->hasInClassInitializer()) data().HasUninitializedReferenceMember = true; // C++0x [class]p7: // A standard-layout class is a class that: // -- has no non-static data members of type [...] 
reference, data().IsStandardLayout = false; + + // C++1z [class.copy.ctor]p10: + // A defaulted copy constructor for a class X is defined as deleted if X has: + // -- a non-static data member of rvalue reference type + if (T->isRValueReferenceType()) + data().DefaultedCopyConstructorIsDeleted = true; } if (!Field->hasInClassInitializer() && !Field->isMutable()) { if (CXXRecordDecl *FieldType = T->getAsCXXRecordDecl()) { if (FieldType->hasDefinition() && !FieldType->allowConstDefaultInit()) data().HasUninitializedFields = true; } else { data().HasUninitializedFields = true; } } // Record if this field is the first non-literal or volatile field or base. if (!T->isLiteralType(Context) || T.isVolatileQualified()) data().HasNonLiteralTypeFieldsOrBases = true; if (Field->hasInClassInitializer() || (Field->isAnonymousStructOrUnion() && Field->getType()->getAsCXXRecordDecl()->hasInClassInitializer())) { data().HasInClassInitializer = true; // C++11 [class]p5: // A default constructor is trivial if [...] no non-static data member // of its class has a brace-or-equal-initializer. data().HasTrivialSpecialMembers &= ~SMF_DefaultConstructor; // C++11 [dcl.init.aggr]p1: // An aggregate is a [...] class with [...] no // brace-or-equal-initializers for non-static data members. // // This rule was removed in C++14. if (!getASTContext().getLangOpts().CPlusPlus14) data().Aggregate = false; // C++11 [class]p10: // A POD struct is [...] a trivial class. data().PlainOldData = false; } // C++11 [class.copy]p23: // A defaulted copy/move assignment operator for a class X is defined // as deleted if X has: // -- a non-static data member of reference type if (T->isReferenceType()) data().DefaultedMoveAssignmentIsDeleted = true; if (const RecordType *RecordTy = T->getAs()) { CXXRecordDecl* FieldRec = cast(RecordTy->getDecl()); if (FieldRec->getDefinition()) { addedClassSubobject(FieldRec); // We may need to perform overload resolution to determine whether a // field can be moved if it's const or volatile qualified. if (T.getCVRQualifiers() & (Qualifiers::Const | Qualifiers::Volatile)) { + // We need to care about 'const' for the copy constructor because an + // implicit copy constructor might be declared with a non-const + // parameter. + data().NeedOverloadResolutionForCopyConstructor = true; data().NeedOverloadResolutionForMoveConstructor = true; data().NeedOverloadResolutionForMoveAssignment = true; } // C++11 [class.ctor]p5, C++11 [class.copy]p11: // A defaulted [special member] for a class X is defined as // deleted if: // -- X is a union-like class that has a variant member with a // non-trivial [corresponding special member] if (isUnion()) { + if (FieldRec->hasNonTrivialCopyConstructor()) + data().DefaultedCopyConstructorIsDeleted = true; if (FieldRec->hasNonTrivialMoveConstructor()) data().DefaultedMoveConstructorIsDeleted = true; if (FieldRec->hasNonTrivialMoveAssignment()) data().DefaultedMoveAssignmentIsDeleted = true; if (FieldRec->hasNonTrivialDestructor()) data().DefaultedDestructorIsDeleted = true; } // For an anonymous union member, our overload resolution will perform // overload resolution for its members. 
if (Field->isAnonymousStructOrUnion()) { + data().NeedOverloadResolutionForCopyConstructor |= + FieldRec->data().NeedOverloadResolutionForCopyConstructor; data().NeedOverloadResolutionForMoveConstructor |= FieldRec->data().NeedOverloadResolutionForMoveConstructor; data().NeedOverloadResolutionForMoveAssignment |= FieldRec->data().NeedOverloadResolutionForMoveAssignment; data().NeedOverloadResolutionForDestructor |= FieldRec->data().NeedOverloadResolutionForDestructor; } // C++0x [class.ctor]p5: // A default constructor is trivial [...] if: // -- for all the non-static data members of its class that are of // class type (or array thereof), each such class has a trivial // default constructor. if (!FieldRec->hasTrivialDefaultConstructor()) data().HasTrivialSpecialMembers &= ~SMF_DefaultConstructor; // C++0x [class.copy]p13: // A copy/move constructor for class X is trivial if [...] // [...] // -- for each non-static data member of X that is of class type (or // an array thereof), the constructor selected to copy/move that // member is trivial; if (!FieldRec->hasTrivialCopyConstructor()) data().HasTrivialSpecialMembers &= ~SMF_CopyConstructor; // If the field doesn't have a simple move constructor, we'll eagerly // declare the move constructor for this class and we'll decide whether // it's trivial then. if (!FieldRec->hasTrivialMoveConstructor()) data().HasTrivialSpecialMembers &= ~SMF_MoveConstructor; // C++0x [class.copy]p27: // A copy/move assignment operator for class X is trivial if [...] // [...] // -- for each non-static data member of X that is of class type (or // an array thereof), the assignment operator selected to // copy/move that member is trivial; if (!FieldRec->hasTrivialCopyAssignment()) data().HasTrivialSpecialMembers &= ~SMF_CopyAssignment; // If the field doesn't have a simple move assignment, we'll eagerly // declare the move assignment for this class and we'll decide whether // it's trivial then. if (!FieldRec->hasTrivialMoveAssignment()) data().HasTrivialSpecialMembers &= ~SMF_MoveAssignment; if (!FieldRec->hasTrivialDestructor()) data().HasTrivialSpecialMembers &= ~SMF_Destructor; if (!FieldRec->hasIrrelevantDestructor()) data().HasIrrelevantDestructor = false; if (FieldRec->hasObjectMember()) setHasObjectMember(true); if (FieldRec->hasVolatileMember()) setHasVolatileMember(true); // C++0x [class]p7: // A standard-layout class is a class that: // -- has no non-static data members of type non-standard-layout // class (or array of such types) [...] if (!FieldRec->isStandardLayout()) data().IsStandardLayout = false; // C++0x [class]p7: // A standard-layout class is a class that: // [...] // -- has no base classes of the same type as the first non-static // data member. // We don't want to expend bits in the state of the record decl // tracking whether this is the first non-static data member so we // cheat a bit and use some of the existing state: the empty bit. // Virtual bases and virtual methods make a class non-empty, but they // also make it non-standard-layout so we needn't check here. // A non-empty base class may leave the class standard-layout, but not // if we have arrived here, and have at least one non-static data // member. If IsStandardLayout remains true, then the first non-static // data member must come through here with Empty still true, and Empty // will subsequently be set to false below. 
if (data().IsStandardLayout && data().Empty) { for (const auto &BI : bases()) { if (Context.hasSameUnqualifiedType(BI.getType(), T)) { data().IsStandardLayout = false; break; } } } // Keep track of the presence of mutable fields. - if (FieldRec->hasMutableFields()) + if (FieldRec->hasMutableFields()) { data().HasMutableFields = true; + data().NeedOverloadResolutionForCopyConstructor = true; + } // C++11 [class.copy]p13: // If the implicitly-defined constructor would satisfy the // requirements of a constexpr constructor, the implicitly-defined // constructor is constexpr. // C++11 [dcl.constexpr]p4: // -- every constructor involved in initializing non-static data // members [...] shall be a constexpr constructor if (!Field->hasInClassInitializer() && !FieldRec->hasConstexprDefaultConstructor() && !isUnion()) // The standard requires any in-class initializer to be a constant // expression. We consider this to be a defect. data().DefaultedDefaultConstructorIsConstexpr = false; // C++11 [class.copy]p8: // The implicitly-declared copy constructor for a class X will have // the form 'X::X(const X&)' if each potentially constructed subobject // of a class type M (or array thereof) has a copy constructor whose // first parameter is of type 'const M&' or 'const volatile M&'. if (!FieldRec->hasCopyConstructorWithConstParam()) data().ImplicitCopyConstructorCanHaveConstParamForNonVBase = false; // C++11 [class.copy]p18: // The implicitly-declared copy assignment oeprator for a class X will // have the form 'X& X::operator=(const X&)' if [...] for all the // non-static data members of X that are of a class type M (or array // thereof), each such class type has a copy assignment operator whose // parameter is of type 'const M&', 'const volatile M&' or 'M'. if (!FieldRec->hasCopyAssignmentWithConstParam()) data().ImplicitCopyAssignmentHasConstParam = false; if (FieldRec->hasUninitializedReferenceMember() && !Field->hasInClassInitializer()) data().HasUninitializedReferenceMember = true; // C++11 [class.union]p8, DR1460: // a non-static data member of an anonymous union that is a member of // X is also a variant member of X. if (FieldRec->hasVariantMembers() && Field->isAnonymousStructOrUnion()) data().HasVariantMembers = true; } } else { // Base element type of field is a non-class type. if (!T->isLiteralType(Context) || (!Field->hasInClassInitializer() && !isUnion())) data().DefaultedDefaultConstructorIsConstexpr = false; // C++11 [class.copy]p23: // A defaulted copy/move assignment operator for a class X is defined // as deleted if X has: // -- a non-static data member of const non-class type (or array // thereof) if (T.isConstQualified()) data().DefaultedMoveAssignmentIsDeleted = true; } // C++0x [class]p7: // A standard-layout class is a class that: // [...] // -- either has no non-static data members in the most derived // class and at most one base class with non-static data members, // or has no base classes with non-static data members, and // At this point we know that we have a non-static data member, so the last // clause holds. if (!data().HasNoNonEmptyBases) data().IsStandardLayout = false; // C++14 [meta.unary.prop]p4: // T is a class type [...] with [...] no non-static data members other // than bit-fields of length 0... if (data().Empty) { if (!Field->isBitField() || (!Field->getBitWidth()->isTypeDependent() && !Field->getBitWidth()->isValueDependent() && Field->getBitWidthValue(Context) != 0)) data().Empty = false; } } // Handle using declarations of conversion functions. 
if (UsingShadowDecl *Shadow = dyn_cast(D)) { if (Shadow->getDeclName().getNameKind() == DeclarationName::CXXConversionFunctionName) { ASTContext &Ctx = getASTContext(); data().Conversions.get(Ctx).addDecl(Ctx, Shadow, Shadow->getAccess()); } } if (UsingDecl *Using = dyn_cast(D)) { if (Using->getDeclName().getNameKind() == DeclarationName::CXXConstructorName) { data().HasInheritedConstructor = true; // C++1z [dcl.init.aggr]p1: // An aggregate is [...] a class [...] with no inherited constructors data().Aggregate = false; } if (Using->getDeclName().getCXXOverloadedOperator() == OO_Equal) data().HasInheritedAssignment = true; } } void CXXRecordDecl::finishedDefaultedOrDeletedMember(CXXMethodDecl *D) { assert(!D->isImplicit() && !D->isUserProvided()); // The kind of special member this declaration is, if any. unsigned SMKind = 0; if (CXXConstructorDecl *Constructor = dyn_cast(D)) { if (Constructor->isDefaultConstructor()) { SMKind |= SMF_DefaultConstructor; if (Constructor->isConstexpr()) data().HasConstexprDefaultConstructor = true; } if (Constructor->isCopyConstructor()) SMKind |= SMF_CopyConstructor; else if (Constructor->isMoveConstructor()) SMKind |= SMF_MoveConstructor; else if (Constructor->isConstexpr()) // We may now know that the constructor is constexpr. data().HasConstexprNonCopyMoveConstructor = true; } else if (isa(D)) { SMKind |= SMF_Destructor; if (!D->isTrivial() || D->getAccess() != AS_public || D->isDeleted()) data().HasIrrelevantDestructor = false; } else if (D->isCopyAssignmentOperator()) SMKind |= SMF_CopyAssignment; else if (D->isMoveAssignmentOperator()) SMKind |= SMF_MoveAssignment; // Update which trivial / non-trivial special members we have. // addedMember will have skipped this step for this member. if (D->isTrivial()) data().HasTrivialSpecialMembers |= SMKind; else data().DeclaredNonTrivialSpecialMembers |= SMKind; } bool CXXRecordDecl::isCLike() const { if (getTagKind() == TTK_Class || getTagKind() == TTK_Interface || !TemplateOrInstantiation.isNull()) return false; if (!hasDefinition()) return true; return isPOD() && data().HasOnlyCMembers; } bool CXXRecordDecl::isGenericLambda() const { if (!isLambda()) return false; return getLambdaData().IsGenericLambda; } CXXMethodDecl* CXXRecordDecl::getLambdaCallOperator() const { if (!isLambda()) return nullptr; DeclarationName Name = getASTContext().DeclarationNames.getCXXOperatorName(OO_Call); DeclContext::lookup_result Calls = lookup(Name); assert(!Calls.empty() && "Missing lambda call operator!"); assert(Calls.size() == 1 && "More than one lambda call operator!"); NamedDecl *CallOp = Calls.front(); if (FunctionTemplateDecl *CallOpTmpl = dyn_cast(CallOp)) return cast(CallOpTmpl->getTemplatedDecl()); return cast(CallOp); } CXXMethodDecl* CXXRecordDecl::getLambdaStaticInvoker() const { if (!isLambda()) return nullptr; DeclarationName Name = &getASTContext().Idents.get(getLambdaStaticInvokerName()); DeclContext::lookup_result Invoker = lookup(Name); if (Invoker.empty()) return nullptr; assert(Invoker.size() == 1 && "More than one static invoker operator!"); NamedDecl *InvokerFun = Invoker.front(); if (FunctionTemplateDecl *InvokerTemplate = dyn_cast(InvokerFun)) return cast(InvokerTemplate->getTemplatedDecl()); return cast(InvokerFun); } void CXXRecordDecl::getCaptureFields( llvm::DenseMap &Captures, FieldDecl *&ThisCapture) const { Captures.clear(); ThisCapture = nullptr; LambdaDefinitionData &Lambda = getLambdaData(); RecordDecl::field_iterator Field = field_begin(); for (const LambdaCapture *C = Lambda.Captures, *CEnd 
= C + Lambda.NumCaptures; C != CEnd; ++C, ++Field) { if (C->capturesThis()) ThisCapture = *Field; else if (C->capturesVariable()) Captures[C->getCapturedVar()] = *Field; } assert(Field == field_end()); } TemplateParameterList * CXXRecordDecl::getGenericLambdaTemplateParameterList() const { if (!isLambda()) return nullptr; CXXMethodDecl *CallOp = getLambdaCallOperator(); if (FunctionTemplateDecl *Tmpl = CallOp->getDescribedFunctionTemplate()) return Tmpl->getTemplateParameters(); return nullptr; } Decl *CXXRecordDecl::getLambdaContextDecl() const { assert(isLambda() && "Not a lambda closure type!"); ExternalASTSource *Source = getParentASTContext().getExternalSource(); return getLambdaData().ContextDecl.get(Source); } static CanQualType GetConversionType(ASTContext &Context, NamedDecl *Conv) { QualType T = cast(Conv->getUnderlyingDecl()->getAsFunction()) ->getConversionType(); return Context.getCanonicalType(T); } /// Collect the visible conversions of a base class. /// /// \param Record a base class of the class we're considering /// \param InVirtual whether this base class is a virtual base (or a base /// of a virtual base) /// \param Access the access along the inheritance path to this base /// \param ParentHiddenTypes the conversions provided by the inheritors /// of this base /// \param Output the set to which to add conversions from non-virtual bases /// \param VOutput the set to which to add conversions from virtual bases /// \param HiddenVBaseCs the set of conversions which were hidden in a /// virtual base along some inheritance path static void CollectVisibleConversions(ASTContext &Context, CXXRecordDecl *Record, bool InVirtual, AccessSpecifier Access, const llvm::SmallPtrSet &ParentHiddenTypes, ASTUnresolvedSet &Output, UnresolvedSetImpl &VOutput, llvm::SmallPtrSet &HiddenVBaseCs) { // The set of types which have conversions in this class or its // subclasses. As an optimization, we don't copy the derived set // unless it might change. const llvm::SmallPtrSet *HiddenTypes = &ParentHiddenTypes; llvm::SmallPtrSet HiddenTypesBuffer; // Collect the direct conversions and figure out which conversions // will be hidden in the subclasses. CXXRecordDecl::conversion_iterator ConvI = Record->conversion_begin(); CXXRecordDecl::conversion_iterator ConvE = Record->conversion_end(); if (ConvI != ConvE) { HiddenTypesBuffer = ParentHiddenTypes; HiddenTypes = &HiddenTypesBuffer; for (CXXRecordDecl::conversion_iterator I = ConvI; I != ConvE; ++I) { CanQualType ConvType(GetConversionType(Context, I.getDecl())); bool Hidden = ParentHiddenTypes.count(ConvType); if (!Hidden) HiddenTypesBuffer.insert(ConvType); // If this conversion is hidden and we're in a virtual base, // remember that it's hidden along some inheritance path. if (Hidden && InVirtual) HiddenVBaseCs.insert(cast(I.getDecl()->getCanonicalDecl())); // If this conversion isn't hidden, add it to the appropriate output. else if (!Hidden) { AccessSpecifier IAccess = CXXRecordDecl::MergeAccess(Access, I.getAccess()); if (InVirtual) VOutput.addDecl(I.getDecl(), IAccess); else Output.addDecl(Context, I.getDecl(), IAccess); } } } // Collect information recursively from any base classes. 
for (const auto &I : Record->bases()) { const RecordType *RT = I.getType()->getAs(); if (!RT) continue; AccessSpecifier BaseAccess = CXXRecordDecl::MergeAccess(Access, I.getAccessSpecifier()); bool BaseInVirtual = InVirtual || I.isVirtual(); CXXRecordDecl *Base = cast(RT->getDecl()); CollectVisibleConversions(Context, Base, BaseInVirtual, BaseAccess, *HiddenTypes, Output, VOutput, HiddenVBaseCs); } } /// Collect the visible conversions of a class. /// /// This would be extremely straightforward if it weren't for virtual /// bases. It might be worth special-casing that, really. static void CollectVisibleConversions(ASTContext &Context, CXXRecordDecl *Record, ASTUnresolvedSet &Output) { // The collection of all conversions in virtual bases that we've // found. These will be added to the output as long as they don't // appear in the hidden-conversions set. UnresolvedSet<8> VBaseCs; // The set of conversions in virtual bases that we've determined to // be hidden. llvm::SmallPtrSet HiddenVBaseCs; // The set of types hidden by classes derived from this one. llvm::SmallPtrSet HiddenTypes; // Go ahead and collect the direct conversions and add them to the // hidden-types set. CXXRecordDecl::conversion_iterator ConvI = Record->conversion_begin(); CXXRecordDecl::conversion_iterator ConvE = Record->conversion_end(); Output.append(Context, ConvI, ConvE); for (; ConvI != ConvE; ++ConvI) HiddenTypes.insert(GetConversionType(Context, ConvI.getDecl())); // Recursively collect conversions from base classes. for (const auto &I : Record->bases()) { const RecordType *RT = I.getType()->getAs(); if (!RT) continue; CollectVisibleConversions(Context, cast(RT->getDecl()), I.isVirtual(), I.getAccessSpecifier(), HiddenTypes, Output, VBaseCs, HiddenVBaseCs); } // Add any unhidden conversions provided by virtual bases. for (UnresolvedSetIterator I = VBaseCs.begin(), E = VBaseCs.end(); I != E; ++I) { if (!HiddenVBaseCs.count(cast(I.getDecl()->getCanonicalDecl()))) Output.addDecl(Context, I.getDecl(), I.getAccess()); } } /// getVisibleConversionFunctions - get all conversion functions visible /// in current class; including conversion function templates. llvm::iterator_range CXXRecordDecl::getVisibleConversionFunctions() { ASTContext &Ctx = getASTContext(); ASTUnresolvedSet *Set; if (bases_begin() == bases_end()) { // If root class, all conversions are visible. Set = &data().Conversions.get(Ctx); } else { Set = &data().VisibleConversions.get(Ctx); // If visible conversion list is not evaluated, evaluate it. if (!data().ComputedVisibleConversions) { CollectVisibleConversions(Ctx, this, *Set); data().ComputedVisibleConversions = true; } } return llvm::make_range(Set->begin(), Set->end()); } void CXXRecordDecl::removeConversion(const NamedDecl *ConvDecl) { // This operation is O(N) but extremely rare. Sema only uses it to // remove UsingShadowDecls in a class that were followed by a direct // declaration, e.g.: // class A : B { // using B::operator int; // operator int(); // }; // This is uncommon by itself and even more uncommon in conjunction // with sufficiently large numbers of directly-declared conversions // that asymptotic behavior matters. 
ASTUnresolvedSet &Convs = data().Conversions.get(getASTContext()); for (unsigned I = 0, E = Convs.size(); I != E; ++I) { if (Convs[I].getDecl() == ConvDecl) { Convs.erase(I); assert(std::find(Convs.begin(), Convs.end(), ConvDecl) == Convs.end() && "conversion was found multiple times in unresolved set"); return; } } llvm_unreachable("conversion not found in set!"); } CXXRecordDecl *CXXRecordDecl::getInstantiatedFromMemberClass() const { if (MemberSpecializationInfo *MSInfo = getMemberSpecializationInfo()) return cast(MSInfo->getInstantiatedFrom()); return nullptr; } MemberSpecializationInfo *CXXRecordDecl::getMemberSpecializationInfo() const { return TemplateOrInstantiation.dyn_cast(); } void CXXRecordDecl::setInstantiationOfMemberClass(CXXRecordDecl *RD, TemplateSpecializationKind TSK) { assert(TemplateOrInstantiation.isNull() && "Previous template or instantiation?"); assert(!isa(this)); TemplateOrInstantiation = new (getASTContext()) MemberSpecializationInfo(RD, TSK); } ClassTemplateDecl *CXXRecordDecl::getDescribedClassTemplate() const { return TemplateOrInstantiation.dyn_cast(); } void CXXRecordDecl::setDescribedClassTemplate(ClassTemplateDecl *Template) { TemplateOrInstantiation = Template; } TemplateSpecializationKind CXXRecordDecl::getTemplateSpecializationKind() const{ if (const ClassTemplateSpecializationDecl *Spec = dyn_cast(this)) return Spec->getSpecializationKind(); if (MemberSpecializationInfo *MSInfo = getMemberSpecializationInfo()) return MSInfo->getTemplateSpecializationKind(); return TSK_Undeclared; } void CXXRecordDecl::setTemplateSpecializationKind(TemplateSpecializationKind TSK) { if (ClassTemplateSpecializationDecl *Spec = dyn_cast(this)) { Spec->setSpecializationKind(TSK); return; } if (MemberSpecializationInfo *MSInfo = getMemberSpecializationInfo()) { MSInfo->setTemplateSpecializationKind(TSK); return; } llvm_unreachable("Not a class template or member class specialization"); } const CXXRecordDecl *CXXRecordDecl::getTemplateInstantiationPattern() const { auto GetDefinitionOrSelf = [](const CXXRecordDecl *D) -> const CXXRecordDecl * { if (auto *Def = D->getDefinition()) return Def; return D; }; // If it's a class template specialization, find the template or partial // specialization from which it was instantiated. if (auto *TD = dyn_cast(this)) { auto From = TD->getInstantiatedFrom(); if (auto *CTD = From.dyn_cast()) { while (auto *NewCTD = CTD->getInstantiatedFromMemberTemplate()) { if (NewCTD->isMemberSpecialization()) break; CTD = NewCTD; } return GetDefinitionOrSelf(CTD->getTemplatedDecl()); } if (auto *CTPSD = From.dyn_cast()) { while (auto *NewCTPSD = CTPSD->getInstantiatedFromMember()) { if (NewCTPSD->isMemberSpecialization()) break; CTPSD = NewCTPSD; } return GetDefinitionOrSelf(CTPSD); } } if (MemberSpecializationInfo *MSInfo = getMemberSpecializationInfo()) { if (isTemplateInstantiation(MSInfo->getTemplateSpecializationKind())) { const CXXRecordDecl *RD = this; while (auto *NewRD = RD->getInstantiatedFromMemberClass()) RD = NewRD; return GetDefinitionOrSelf(RD); } } assert(!isTemplateInstantiation(this->getTemplateSpecializationKind()) && "couldn't find pattern for class template instantiation"); return nullptr; } CXXDestructorDecl *CXXRecordDecl::getDestructor() const { ASTContext &Context = getASTContext(); QualType ClassType = Context.getTypeDeclType(this); DeclarationName Name = Context.DeclarationNames.getCXXDestructorName( Context.getCanonicalType(ClassType)); DeclContext::lookup_result R = lookup(Name); return R.empty() ? 
nullptr : dyn_cast(R.front()); } bool CXXRecordDecl::isAnyDestructorNoReturn() const { // Destructor is noreturn. if (const CXXDestructorDecl *Destructor = getDestructor()) if (Destructor->isNoReturn()) return true; // Check base classes destructor for noreturn. for (const auto &Base : bases()) if (const CXXRecordDecl *RD = Base.getType()->getAsCXXRecordDecl()) if (RD->isAnyDestructorNoReturn()) return true; // Check fields for noreturn. for (const auto *Field : fields()) if (const CXXRecordDecl *RD = Field->getType()->getBaseElementTypeUnsafe()->getAsCXXRecordDecl()) if (RD->isAnyDestructorNoReturn()) return true; // All destructors are not noreturn. return false; } void CXXRecordDecl::completeDefinition() { completeDefinition(nullptr); } void CXXRecordDecl::completeDefinition(CXXFinalOverriderMap *FinalOverriders) { RecordDecl::completeDefinition(); - + // If the class may be abstract (but hasn't been marked as such), check for // any pure final overriders. if (mayBeAbstract()) { CXXFinalOverriderMap MyFinalOverriders; if (!FinalOverriders) { getFinalOverriders(MyFinalOverriders); FinalOverriders = &MyFinalOverriders; } bool Done = false; for (CXXFinalOverriderMap::iterator M = FinalOverriders->begin(), MEnd = FinalOverriders->end(); M != MEnd && !Done; ++M) { for (OverridingMethods::iterator SO = M->second.begin(), SOEnd = M->second.end(); SO != SOEnd && !Done; ++SO) { assert(SO->second.size() > 0 && "All virtual functions have overridding virtual functions"); // C++ [class.abstract]p4: // A class is abstract if it contains or inherits at least one // pure virtual function for which the final overrider is pure // virtual. if (SO->second.front().Method->isPure()) { data().Abstract = true; Done = true; break; } } } } // Set access bits correctly on the directly-declared conversions. 
for (conversion_iterator I = conversion_begin(), E = conversion_end(); I != E; ++I) I.setAccess((*I)->getAccess()); } bool CXXRecordDecl::mayBeAbstract() const { if (data().Abstract || isInvalidDecl() || !data().Polymorphic || isDependentContext()) return false; for (const auto &B : bases()) { CXXRecordDecl *BaseDecl = cast(B.getType()->getAs()->getDecl()); if (BaseDecl->isAbstract()) return true; } return false; } void CXXDeductionGuideDecl::anchor() { } CXXDeductionGuideDecl *CXXDeductionGuideDecl::Create( ASTContext &C, DeclContext *DC, SourceLocation StartLoc, bool IsExplicit, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, SourceLocation EndLocation) { return new (C, DC) CXXDeductionGuideDecl(C, DC, StartLoc, IsExplicit, NameInfo, T, TInfo, EndLocation); } CXXDeductionGuideDecl *CXXDeductionGuideDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) CXXDeductionGuideDecl(C, nullptr, SourceLocation(), false, DeclarationNameInfo(), QualType(), nullptr, SourceLocation()); } void CXXMethodDecl::anchor() { } bool CXXMethodDecl::isStatic() const { const CXXMethodDecl *MD = getCanonicalDecl(); if (MD->getStorageClass() == SC_Static) return true; OverloadedOperatorKind OOK = getDeclName().getCXXOverloadedOperator(); return isStaticOverloadedOperator(OOK); } static bool recursivelyOverrides(const CXXMethodDecl *DerivedMD, const CXXMethodDecl *BaseMD) { for (CXXMethodDecl::method_iterator I = DerivedMD->begin_overridden_methods(), E = DerivedMD->end_overridden_methods(); I != E; ++I) { const CXXMethodDecl *MD = *I; if (MD->getCanonicalDecl() == BaseMD->getCanonicalDecl()) return true; if (recursivelyOverrides(MD, BaseMD)) return true; } return false; } CXXMethodDecl * CXXMethodDecl::getCorrespondingMethodInClass(const CXXRecordDecl *RD, bool MayBeBase) { if (this->getParent()->getCanonicalDecl() == RD->getCanonicalDecl()) return this; // Lookup doesn't work for destructors, so handle them separately. if (isa(this)) { CXXMethodDecl *MD = RD->getDestructor(); if (MD) { if (recursivelyOverrides(MD, this)) return MD; if (MayBeBase && recursivelyOverrides(this, MD)) return MD; } return nullptr; } for (auto *ND : RD->lookup(getDeclName())) { CXXMethodDecl *MD = dyn_cast(ND); if (!MD) continue; if (recursivelyOverrides(MD, this)) return MD; if (MayBeBase && recursivelyOverrides(this, MD)) return MD; } for (const auto &I : RD->bases()) { const RecordType *RT = I.getType()->getAs(); if (!RT) continue; const CXXRecordDecl *Base = cast(RT->getDecl()); CXXMethodDecl *T = this->getCorrespondingMethodInClass(Base); if (T) return T; } return nullptr; } CXXMethodDecl * CXXMethodDecl::Create(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, StorageClass SC, bool isInline, bool isConstexpr, SourceLocation EndLocation) { return new (C, RD) CXXMethodDecl(CXXMethod, C, RD, StartLoc, NameInfo, T, TInfo, SC, isInline, isConstexpr, EndLocation); } CXXMethodDecl *CXXMethodDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) CXXMethodDecl(CXXMethod, C, nullptr, SourceLocation(), DeclarationNameInfo(), QualType(), nullptr, SC_None, false, false, SourceLocation()); } CXXMethodDecl *CXXMethodDecl::getDevirtualizedMethod(const Expr *Base, bool IsAppleKext) { assert(isVirtual() && "this method is expected to be virtual"); // When building with -fapple-kext, all calls must go through the vtable since // the kernel linker can do runtime patching of vtables. 
if (IsAppleKext) return nullptr; // If the member function is marked 'final', we know that it can't be // overridden and can therefore devirtualize it unless it's pure virtual. if (hasAttr()) return isPure() ? nullptr : this; // If Base is unknown, we cannot devirtualize. if (!Base) return nullptr; // If the base expression (after skipping derived-to-base conversions) is a // class prvalue, then we can devirtualize. Base = Base->getBestDynamicClassTypeExpr(); if (Base->isRValue() && Base->getType()->isRecordType()) return this; // If we don't even know what we would call, we can't devirtualize. const CXXRecordDecl *BestDynamicDecl = Base->getBestDynamicClassType(); if (!BestDynamicDecl) return nullptr; // There may be a method corresponding to MD in a derived class. CXXMethodDecl *DevirtualizedMethod = getCorrespondingMethodInClass(BestDynamicDecl); // If that method is pure virtual, we can't devirtualize. If this code is // reached, the result would be UB, not a direct call to the derived class // function, and we can't assume the derived class function is defined. if (DevirtualizedMethod->isPure()) return nullptr; // If that method is marked final, we can devirtualize it. if (DevirtualizedMethod->hasAttr()) return DevirtualizedMethod; // Similarly, if the class itself is marked 'final' it can't be overridden // and we can therefore devirtualize the member function call. if (BestDynamicDecl->hasAttr()) return DevirtualizedMethod; if (const DeclRefExpr *DRE = dyn_cast(Base)) { if (const VarDecl *VD = dyn_cast(DRE->getDecl())) if (VD->getType()->isRecordType()) // This is a record decl. We know the type and can devirtualize it. return DevirtualizedMethod; return nullptr; } // We can devirtualize calls on an object accessed by a class member access // expression, since by C++11 [basic.life]p6 we know that it can't refer to // a derived class object constructed in the same location. if (const MemberExpr *ME = dyn_cast(Base)) if (const ValueDecl *VD = dyn_cast(ME->getMemberDecl())) return VD->getType()->isRecordType() ? DevirtualizedMethod : nullptr; // Likewise for calls on an object accessed by a (non-reference) pointer to // member access. if (auto *BO = dyn_cast(Base)) { if (BO->isPtrMemOp()) { auto *MPT = BO->getRHS()->getType()->castAs(); if (MPT->getPointeeType()->isRecordType()) return DevirtualizedMethod; } } // We can't devirtualize the call. return nullptr; } bool CXXMethodDecl::isUsualDeallocationFunction() const { if (getOverloadedOperator() != OO_Delete && getOverloadedOperator() != OO_Array_Delete) return false; // C++ [basic.stc.dynamic.deallocation]p2: // A template instance is never a usual deallocation function, // regardless of its signature. if (getPrimaryTemplate()) return false; // C++ [basic.stc.dynamic.deallocation]p2: // If a class T has a member deallocation function named operator delete // with exactly one parameter, then that function is a usual (non-placement) // deallocation function. [...] if (getNumParams() == 1) return true; unsigned UsualParams = 1; // C++ <=14 [basic.stc.dynamic.deallocation]p2: // [...] If class T does not declare such an operator delete but does // declare a member deallocation function named operator delete with // exactly two parameters, the second of which has type std::size_t (18.1), // then this function is a usual deallocation function. // // C++17 says a usual deallocation function is one with the signature // (void* [, size_t] [, std::align_val_t] [, ...]) // and all such functions are usual deallocation functions. 
It's not clear // that allowing varargs functions was intentional. ASTContext &Context = getASTContext(); if (UsualParams < getNumParams() && Context.hasSameUnqualifiedType(getParamDecl(UsualParams)->getType(), Context.getSizeType())) ++UsualParams; if (UsualParams < getNumParams() && getParamDecl(UsualParams)->getType()->isAlignValT()) ++UsualParams; if (UsualParams != getNumParams()) return false; // In C++17 onwards, all potential usual deallocation functions are actual // usual deallocation functions. if (Context.getLangOpts().AlignedAllocation) return true; // This function is a usual deallocation function if there are no // single-parameter deallocation functions of the same kind. DeclContext::lookup_result R = getDeclContext()->lookup(getDeclName()); for (DeclContext::lookup_result::iterator I = R.begin(), E = R.end(); I != E; ++I) { if (const FunctionDecl *FD = dyn_cast(*I)) if (FD->getNumParams() == 1) return false; } return true; } bool CXXMethodDecl::isCopyAssignmentOperator() const { // C++0x [class.copy]p17: // A user-declared copy assignment operator X::operator= is a non-static // non-template member function of class X with exactly one parameter of // type X, X&, const X&, volatile X& or const volatile X&. if (/*operator=*/getOverloadedOperator() != OO_Equal || /*non-static*/ isStatic() || /*non-template*/getPrimaryTemplate() || getDescribedFunctionTemplate() || getNumParams() != 1) return false; QualType ParamType = getParamDecl(0)->getType(); if (const LValueReferenceType *Ref = ParamType->getAs()) ParamType = Ref->getPointeeType(); ASTContext &Context = getASTContext(); QualType ClassType = Context.getCanonicalType(Context.getTypeDeclType(getParent())); return Context.hasSameUnqualifiedType(ClassType, ParamType); } bool CXXMethodDecl::isMoveAssignmentOperator() const { // C++0x [class.copy]p19: // A user-declared move assignment operator X::operator= is a non-static // non-template member function of class X with exactly one parameter of type // X&&, const X&&, volatile X&&, or const volatile X&&. 
  if (getOverloadedOperator() != OO_Equal || isStatic() ||
      getPrimaryTemplate() || getDescribedFunctionTemplate() ||
      getNumParams() != 1)
    return false;

  QualType ParamType = getParamDecl(0)->getType();
  if (!isa<RValueReferenceType>(ParamType))
    return false;
  ParamType = ParamType->getPointeeType();

  ASTContext &Context = getASTContext();
  QualType ClassType =
      Context.getCanonicalType(Context.getTypeDeclType(getParent()));
  return Context.hasSameUnqualifiedType(ClassType, ParamType);
}

void CXXMethodDecl::addOverriddenMethod(const CXXMethodDecl *MD) {
  assert(MD->isCanonicalDecl() && "Method is not canonical!");
  assert(!MD->getParent()->isDependentContext() &&
         "Can't add an overridden method to a class template!");
  assert(MD->isVirtual() && "Method is not virtual!");
  getASTContext().addOverriddenMethod(this, MD);
}

CXXMethodDecl::method_iterator
CXXMethodDecl::begin_overridden_methods() const {
  if (isa<CXXConstructorDecl>(this)) return nullptr;
  return getASTContext().overridden_methods_begin(this);
}

CXXMethodDecl::method_iterator
CXXMethodDecl::end_overridden_methods() const {
  if (isa<CXXConstructorDecl>(this)) return nullptr;
  return getASTContext().overridden_methods_end(this);
}

unsigned CXXMethodDecl::size_overridden_methods() const {
  if (isa<CXXConstructorDecl>(this)) return 0;
  return getASTContext().overridden_methods_size(this);
}

CXXMethodDecl::overridden_method_range
CXXMethodDecl::overridden_methods() const {
  if (isa<CXXConstructorDecl>(this))
    return overridden_method_range(nullptr, nullptr);
  return getASTContext().overridden_methods(this);
}

QualType CXXMethodDecl::getThisType(ASTContext &C) const {
  // C++ 9.3.2p1: The type of this in a member function of a class X is X*.
  // If the member function is declared const, the type of this is const X*,
  // if the member function is declared volatile, the type of this is
  // volatile X*, and if the member function is declared const volatile,
  // the type of this is const volatile X*.
  assert(isInstance() && "No 'this' for static methods!");

  QualType ClassTy = C.getTypeDeclType(getParent());
  ClassTy = C.getQualifiedType(ClassTy,
                               Qualifiers::fromCVRUMask(getTypeQualifiers()));
  return C.getPointerType(ClassTy);
}

bool CXXMethodDecl::hasInlineBody() const {
  // If this function is a template instantiation, look at the template from
  // which it was instantiated.
const FunctionDecl *CheckFn = getTemplateInstantiationPattern(); if (!CheckFn) CheckFn = this; const FunctionDecl *fn; return CheckFn->isDefined(fn) && !fn->isOutOfLine() && (fn->doesThisDeclarationHaveABody() || fn->willHaveBody()); } bool CXXMethodDecl::isLambdaStaticInvoker() const { const CXXRecordDecl *P = getParent(); if (P->isLambda()) { if (const CXXMethodDecl *StaticInvoker = P->getLambdaStaticInvoker()) { if (StaticInvoker == this) return true; if (P->isGenericLambda() && this->isFunctionTemplateSpecialization()) return StaticInvoker == this->getPrimaryTemplate()->getTemplatedDecl(); } } return false; } CXXCtorInitializer::CXXCtorInitializer(ASTContext &Context, TypeSourceInfo *TInfo, bool IsVirtual, SourceLocation L, Expr *Init, SourceLocation R, SourceLocation EllipsisLoc) : Initializee(TInfo), MemberOrEllipsisLocation(EllipsisLoc), Init(Init), LParenLoc(L), RParenLoc(R), IsDelegating(false), IsVirtual(IsVirtual), IsWritten(false), SourceOrder(0) { } CXXCtorInitializer::CXXCtorInitializer(ASTContext &Context, FieldDecl *Member, SourceLocation MemberLoc, SourceLocation L, Expr *Init, SourceLocation R) : Initializee(Member), MemberOrEllipsisLocation(MemberLoc), Init(Init), LParenLoc(L), RParenLoc(R), IsDelegating(false), IsVirtual(false), IsWritten(false), SourceOrder(0) { } CXXCtorInitializer::CXXCtorInitializer(ASTContext &Context, IndirectFieldDecl *Member, SourceLocation MemberLoc, SourceLocation L, Expr *Init, SourceLocation R) : Initializee(Member), MemberOrEllipsisLocation(MemberLoc), Init(Init), LParenLoc(L), RParenLoc(R), IsDelegating(false), IsVirtual(false), IsWritten(false), SourceOrder(0) { } CXXCtorInitializer::CXXCtorInitializer(ASTContext &Context, TypeSourceInfo *TInfo, SourceLocation L, Expr *Init, SourceLocation R) : Initializee(TInfo), MemberOrEllipsisLocation(), Init(Init), LParenLoc(L), RParenLoc(R), IsDelegating(true), IsVirtual(false), IsWritten(false), SourceOrder(0) { } TypeLoc CXXCtorInitializer::getBaseClassLoc() const { if (isBaseInitializer()) return Initializee.get()->getTypeLoc(); else return TypeLoc(); } const Type *CXXCtorInitializer::getBaseClass() const { if (isBaseInitializer()) return Initializee.get()->getType().getTypePtr(); else return nullptr; } SourceLocation CXXCtorInitializer::getSourceLocation() const { if (isInClassMemberInitializer()) return getAnyMember()->getLocation(); if (isAnyMemberInitializer()) return getMemberLocation(); if (TypeSourceInfo *TSInfo = Initializee.get()) return TSInfo->getTypeLoc().getLocalSourceRange().getBegin(); return SourceLocation(); } SourceRange CXXCtorInitializer::getSourceRange() const { if (isInClassMemberInitializer()) { FieldDecl *D = getAnyMember(); if (Expr *I = D->getInClassInitializer()) return I->getSourceRange(); return SourceRange(); } return SourceRange(getSourceLocation(), getRParenLoc()); } void CXXConstructorDecl::anchor() { } CXXConstructorDecl *CXXConstructorDecl::CreateDeserialized(ASTContext &C, unsigned ID, bool Inherited) { unsigned Extra = additionalSizeToAlloc(Inherited); auto *Result = new (C, ID, Extra) CXXConstructorDecl( C, nullptr, SourceLocation(), DeclarationNameInfo(), QualType(), nullptr, false, false, false, false, InheritedConstructor()); Result->IsInheritingConstructor = Inherited; return Result; } CXXConstructorDecl * CXXConstructorDecl::Create(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, bool isExplicit, bool isInline, bool isImplicitlyDeclared, bool isConstexpr, 
InheritedConstructor Inherited) { assert(NameInfo.getName().getNameKind() == DeclarationName::CXXConstructorName && "Name must refer to a constructor"); unsigned Extra = additionalSizeToAlloc(Inherited ? 1 : 0); return new (C, RD, Extra) CXXConstructorDecl( C, RD, StartLoc, NameInfo, T, TInfo, isExplicit, isInline, isImplicitlyDeclared, isConstexpr, Inherited); } CXXConstructorDecl::init_const_iterator CXXConstructorDecl::init_begin() const { return CtorInitializers.get(getASTContext().getExternalSource()); } CXXConstructorDecl *CXXConstructorDecl::getTargetConstructor() const { assert(isDelegatingConstructor() && "Not a delegating constructor!"); Expr *E = (*init_begin())->getInit()->IgnoreImplicit(); if (CXXConstructExpr *Construct = dyn_cast(E)) return Construct->getConstructor(); return nullptr; } bool CXXConstructorDecl::isDefaultConstructor() const { // C++ [class.ctor]p5: // A default constructor for a class X is a constructor of class // X that can be called without an argument. return (getNumParams() == 0) || (getNumParams() > 0 && getParamDecl(0)->hasDefaultArg()); } bool CXXConstructorDecl::isCopyConstructor(unsigned &TypeQuals) const { return isCopyOrMoveConstructor(TypeQuals) && getParamDecl(0)->getType()->isLValueReferenceType(); } bool CXXConstructorDecl::isMoveConstructor(unsigned &TypeQuals) const { return isCopyOrMoveConstructor(TypeQuals) && getParamDecl(0)->getType()->isRValueReferenceType(); } /// \brief Determine whether this is a copy or move constructor. bool CXXConstructorDecl::isCopyOrMoveConstructor(unsigned &TypeQuals) const { // C++ [class.copy]p2: // A non-template constructor for class X is a copy constructor // if its first parameter is of type X&, const X&, volatile X& or // const volatile X&, and either there are no other parameters // or else all other parameters have default arguments (8.3.6). // C++0x [class.copy]p3: // A non-template constructor for class X is a move constructor if its // first parameter is of type X&&, const X&&, volatile X&&, or // const volatile X&&, and either there are no other parameters or else // all other parameters have default arguments. if ((getNumParams() < 1) || (getNumParams() > 1 && !getParamDecl(1)->hasDefaultArg()) || (getPrimaryTemplate() != nullptr) || (getDescribedFunctionTemplate() != nullptr)) return false; const ParmVarDecl *Param = getParamDecl(0); // Do we have a reference type? const ReferenceType *ParamRefType = Param->getType()->getAs(); if (!ParamRefType) return false; // Is it a reference to our class type? ASTContext &Context = getASTContext(); CanQualType PointeeType = Context.getCanonicalType(ParamRefType->getPointeeType()); CanQualType ClassTy = Context.getCanonicalType(Context.getTagDeclType(getParent())); if (PointeeType.getUnqualifiedType() != ClassTy) return false; // FIXME: other qualifiers? // We have a copy or move constructor. TypeQuals = PointeeType.getCVRQualifiers(); return true; } bool CXXConstructorDecl::isConvertingConstructor(bool AllowExplicit) const { // C++ [class.conv.ctor]p1: // A constructor declared without the function-specifier explicit // that can be called with a single parameter specifies a // conversion from the type of its first parameter to the type of // its class. Such a constructor is called a converting // constructor. 
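  // For example, given
  //   struct X {
  //     X(int);                    // converting constructor
  //     X(double, int = 0);        // converting constructor (callable with one argument)
  //     explicit X(const char *);  // converting only when AllowExplicit is true
  //   };
  // the checks below accept the first two declarations unconditionally and the
  // third one only when the caller passed AllowExplicit.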
if (isExplicit() && !AllowExplicit) return false; return (getNumParams() == 0 && getType()->getAs()->isVariadic()) || (getNumParams() == 1) || (getNumParams() > 1 && (getParamDecl(1)->hasDefaultArg() || getParamDecl(1)->isParameterPack())); } bool CXXConstructorDecl::isSpecializationCopyingObject() const { if ((getNumParams() < 1) || (getNumParams() > 1 && !getParamDecl(1)->hasDefaultArg()) || (getDescribedFunctionTemplate() != nullptr)) return false; const ParmVarDecl *Param = getParamDecl(0); ASTContext &Context = getASTContext(); CanQualType ParamType = Context.getCanonicalType(Param->getType()); // Is it the same as our our class type? CanQualType ClassTy = Context.getCanonicalType(Context.getTagDeclType(getParent())); if (ParamType.getUnqualifiedType() != ClassTy) return false; return true; } void CXXDestructorDecl::anchor() { } CXXDestructorDecl * CXXDestructorDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) CXXDestructorDecl(C, nullptr, SourceLocation(), DeclarationNameInfo(), QualType(), nullptr, false, false); } CXXDestructorDecl * CXXDestructorDecl::Create(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, bool isInline, bool isImplicitlyDeclared) { assert(NameInfo.getName().getNameKind() == DeclarationName::CXXDestructorName && "Name must refer to a destructor"); return new (C, RD) CXXDestructorDecl(C, RD, StartLoc, NameInfo, T, TInfo, isInline, isImplicitlyDeclared); } void CXXDestructorDecl::setOperatorDelete(FunctionDecl *OD) { auto *First = cast(getFirstDecl()); if (OD && !First->OperatorDelete) { First->OperatorDelete = OD; if (auto *L = getASTMutationListener()) L->ResolvedOperatorDelete(First, OD); } } void CXXConversionDecl::anchor() { } CXXConversionDecl * CXXConversionDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) CXXConversionDecl(C, nullptr, SourceLocation(), DeclarationNameInfo(), QualType(), nullptr, false, false, false, SourceLocation()); } CXXConversionDecl * CXXConversionDecl::Create(ASTContext &C, CXXRecordDecl *RD, SourceLocation StartLoc, const DeclarationNameInfo &NameInfo, QualType T, TypeSourceInfo *TInfo, bool isInline, bool isExplicit, bool isConstexpr, SourceLocation EndLocation) { assert(NameInfo.getName().getNameKind() == DeclarationName::CXXConversionFunctionName && "Name must refer to a conversion function"); return new (C, RD) CXXConversionDecl(C, RD, StartLoc, NameInfo, T, TInfo, isInline, isExplicit, isConstexpr, EndLocation); } bool CXXConversionDecl::isLambdaToBlockPointerConversion() const { return isImplicit() && getParent()->isLambda() && getConversionType()->isBlockPointerType(); } void LinkageSpecDecl::anchor() { } LinkageSpecDecl *LinkageSpecDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation ExternLoc, SourceLocation LangLoc, LanguageIDs Lang, bool HasBraces) { return new (C, DC) LinkageSpecDecl(DC, ExternLoc, LangLoc, Lang, HasBraces); } LinkageSpecDecl *LinkageSpecDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) LinkageSpecDecl(nullptr, SourceLocation(), SourceLocation(), lang_c, false); } void UsingDirectiveDecl::anchor() { } UsingDirectiveDecl *UsingDirectiveDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation L, SourceLocation NamespaceLoc, NestedNameSpecifierLoc QualifierLoc, SourceLocation IdentLoc, NamedDecl *Used, DeclContext *CommonAncestor) { if (NamespaceDecl *NS = dyn_cast_or_null(Used)) Used = NS->getOriginalNamespace(); return new (C, DC) 
UsingDirectiveDecl(DC, L, NamespaceLoc, QualifierLoc, IdentLoc, Used, CommonAncestor); } UsingDirectiveDecl *UsingDirectiveDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) UsingDirectiveDecl(nullptr, SourceLocation(), SourceLocation(), NestedNameSpecifierLoc(), SourceLocation(), nullptr, nullptr); } NamespaceDecl *UsingDirectiveDecl::getNominatedNamespace() { if (NamespaceAliasDecl *NA = dyn_cast_or_null(NominatedNamespace)) return NA->getNamespace(); return cast_or_null(NominatedNamespace); } NamespaceDecl::NamespaceDecl(ASTContext &C, DeclContext *DC, bool Inline, SourceLocation StartLoc, SourceLocation IdLoc, IdentifierInfo *Id, NamespaceDecl *PrevDecl) : NamedDecl(Namespace, DC, IdLoc, Id), DeclContext(Namespace), redeclarable_base(C), LocStart(StartLoc), RBraceLoc(), AnonOrFirstNamespaceAndInline(nullptr, Inline) { setPreviousDecl(PrevDecl); if (PrevDecl) AnonOrFirstNamespaceAndInline.setPointer(PrevDecl->getOriginalNamespace()); } NamespaceDecl *NamespaceDecl::Create(ASTContext &C, DeclContext *DC, bool Inline, SourceLocation StartLoc, SourceLocation IdLoc, IdentifierInfo *Id, NamespaceDecl *PrevDecl) { return new (C, DC) NamespaceDecl(C, DC, Inline, StartLoc, IdLoc, Id, PrevDecl); } NamespaceDecl *NamespaceDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) NamespaceDecl(C, nullptr, false, SourceLocation(), SourceLocation(), nullptr, nullptr); } NamespaceDecl *NamespaceDecl::getOriginalNamespace() { if (isFirstDecl()) return this; return AnonOrFirstNamespaceAndInline.getPointer(); } const NamespaceDecl *NamespaceDecl::getOriginalNamespace() const { if (isFirstDecl()) return this; return AnonOrFirstNamespaceAndInline.getPointer(); } bool NamespaceDecl::isOriginalNamespace() const { return isFirstDecl(); } NamespaceDecl *NamespaceDecl::getNextRedeclarationImpl() { return getNextRedeclaration(); } NamespaceDecl *NamespaceDecl::getPreviousDeclImpl() { return getPreviousDecl(); } NamespaceDecl *NamespaceDecl::getMostRecentDeclImpl() { return getMostRecentDecl(); } void NamespaceAliasDecl::anchor() { } NamespaceAliasDecl *NamespaceAliasDecl::getNextRedeclarationImpl() { return getNextRedeclaration(); } NamespaceAliasDecl *NamespaceAliasDecl::getPreviousDeclImpl() { return getPreviousDecl(); } NamespaceAliasDecl *NamespaceAliasDecl::getMostRecentDeclImpl() { return getMostRecentDecl(); } NamespaceAliasDecl *NamespaceAliasDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation UsingLoc, SourceLocation AliasLoc, IdentifierInfo *Alias, NestedNameSpecifierLoc QualifierLoc, SourceLocation IdentLoc, NamedDecl *Namespace) { // FIXME: Preserve the aliased namespace as written. if (NamespaceDecl *NS = dyn_cast_or_null(Namespace)) Namespace = NS->getOriginalNamespace(); return new (C, DC) NamespaceAliasDecl(C, DC, UsingLoc, AliasLoc, Alias, QualifierLoc, IdentLoc, Namespace); } NamespaceAliasDecl * NamespaceAliasDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) NamespaceAliasDecl(C, nullptr, SourceLocation(), SourceLocation(), nullptr, NestedNameSpecifierLoc(), SourceLocation(), nullptr); } void UsingShadowDecl::anchor() { } UsingShadowDecl::UsingShadowDecl(Kind K, ASTContext &C, DeclContext *DC, SourceLocation Loc, UsingDecl *Using, NamedDecl *Target) : NamedDecl(K, DC, Loc, Using ? 
Using->getDeclName() : DeclarationName()), redeclarable_base(C), Underlying(Target), UsingOrNextShadow(cast(Using)) { if (Target) IdentifierNamespace = Target->getIdentifierNamespace(); setImplicit(); } UsingShadowDecl::UsingShadowDecl(Kind K, ASTContext &C, EmptyShell Empty) : NamedDecl(K, nullptr, SourceLocation(), DeclarationName()), redeclarable_base(C), Underlying(), UsingOrNextShadow() {} UsingShadowDecl * UsingShadowDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) UsingShadowDecl(UsingShadow, C, EmptyShell()); } UsingDecl *UsingShadowDecl::getUsingDecl() const { const UsingShadowDecl *Shadow = this; while (const UsingShadowDecl *NextShadow = dyn_cast(Shadow->UsingOrNextShadow)) Shadow = NextShadow; return cast(Shadow->UsingOrNextShadow); } void ConstructorUsingShadowDecl::anchor() { } ConstructorUsingShadowDecl * ConstructorUsingShadowDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation Loc, UsingDecl *Using, NamedDecl *Target, bool IsVirtual) { return new (C, DC) ConstructorUsingShadowDecl(C, DC, Loc, Using, Target, IsVirtual); } ConstructorUsingShadowDecl * ConstructorUsingShadowDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) ConstructorUsingShadowDecl(C, EmptyShell()); } CXXRecordDecl *ConstructorUsingShadowDecl::getNominatedBaseClass() const { return getUsingDecl()->getQualifier()->getAsRecordDecl(); } void UsingDecl::anchor() { } void UsingDecl::addShadowDecl(UsingShadowDecl *S) { assert(std::find(shadow_begin(), shadow_end(), S) == shadow_end() && "declaration already in set"); assert(S->getUsingDecl() == this); if (FirstUsingShadow.getPointer()) S->UsingOrNextShadow = FirstUsingShadow.getPointer(); FirstUsingShadow.setPointer(S); } void UsingDecl::removeShadowDecl(UsingShadowDecl *S) { assert(std::find(shadow_begin(), shadow_end(), S) != shadow_end() && "declaration not in set"); assert(S->getUsingDecl() == this); // Remove S from the shadow decl chain. This is O(n) but hopefully rare. if (FirstUsingShadow.getPointer() == S) { FirstUsingShadow.setPointer( dyn_cast(S->UsingOrNextShadow)); S->UsingOrNextShadow = this; return; } UsingShadowDecl *Prev = FirstUsingShadow.getPointer(); while (Prev->UsingOrNextShadow != S) Prev = cast(Prev->UsingOrNextShadow); Prev->UsingOrNextShadow = S->UsingOrNextShadow; S->UsingOrNextShadow = this; } UsingDecl *UsingDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation UL, NestedNameSpecifierLoc QualifierLoc, const DeclarationNameInfo &NameInfo, bool HasTypename) { return new (C, DC) UsingDecl(DC, UL, QualifierLoc, NameInfo, HasTypename); } UsingDecl *UsingDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) UsingDecl(nullptr, SourceLocation(), NestedNameSpecifierLoc(), DeclarationNameInfo(), false); } SourceRange UsingDecl::getSourceRange() const { SourceLocation Begin = isAccessDeclaration() ? 
getQualifierLoc().getBeginLoc() : UsingLocation; return SourceRange(Begin, getNameInfo().getEndLoc()); } void UsingPackDecl::anchor() { } UsingPackDecl *UsingPackDecl::Create(ASTContext &C, DeclContext *DC, NamedDecl *InstantiatedFrom, ArrayRef UsingDecls) { size_t Extra = additionalSizeToAlloc(UsingDecls.size()); return new (C, DC, Extra) UsingPackDecl(DC, InstantiatedFrom, UsingDecls); } UsingPackDecl *UsingPackDecl::CreateDeserialized(ASTContext &C, unsigned ID, unsigned NumExpansions) { size_t Extra = additionalSizeToAlloc(NumExpansions); auto *Result = new (C, ID, Extra) UsingPackDecl(nullptr, nullptr, None); Result->NumExpansions = NumExpansions; auto *Trail = Result->getTrailingObjects(); for (unsigned I = 0; I != NumExpansions; ++I) new (Trail + I) NamedDecl*(nullptr); return Result; } void UnresolvedUsingValueDecl::anchor() { } UnresolvedUsingValueDecl * UnresolvedUsingValueDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation UsingLoc, NestedNameSpecifierLoc QualifierLoc, const DeclarationNameInfo &NameInfo, SourceLocation EllipsisLoc) { return new (C, DC) UnresolvedUsingValueDecl(DC, C.DependentTy, UsingLoc, QualifierLoc, NameInfo, EllipsisLoc); } UnresolvedUsingValueDecl * UnresolvedUsingValueDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) UnresolvedUsingValueDecl(nullptr, QualType(), SourceLocation(), NestedNameSpecifierLoc(), DeclarationNameInfo(), SourceLocation()); } SourceRange UnresolvedUsingValueDecl::getSourceRange() const { SourceLocation Begin = isAccessDeclaration() ? getQualifierLoc().getBeginLoc() : UsingLocation; return SourceRange(Begin, getNameInfo().getEndLoc()); } void UnresolvedUsingTypenameDecl::anchor() { } UnresolvedUsingTypenameDecl * UnresolvedUsingTypenameDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation UsingLoc, SourceLocation TypenameLoc, NestedNameSpecifierLoc QualifierLoc, SourceLocation TargetNameLoc, DeclarationName TargetName, SourceLocation EllipsisLoc) { return new (C, DC) UnresolvedUsingTypenameDecl( DC, UsingLoc, TypenameLoc, QualifierLoc, TargetNameLoc, TargetName.getAsIdentifierInfo(), EllipsisLoc); } UnresolvedUsingTypenameDecl * UnresolvedUsingTypenameDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) UnresolvedUsingTypenameDecl( nullptr, SourceLocation(), SourceLocation(), NestedNameSpecifierLoc(), SourceLocation(), nullptr, SourceLocation()); } void StaticAssertDecl::anchor() { } StaticAssertDecl *StaticAssertDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation StaticAssertLoc, Expr *AssertExpr, StringLiteral *Message, SourceLocation RParenLoc, bool Failed) { return new (C, DC) StaticAssertDecl(DC, StaticAssertLoc, AssertExpr, Message, RParenLoc, Failed); } StaticAssertDecl *StaticAssertDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) StaticAssertDecl(nullptr, SourceLocation(), nullptr, nullptr, SourceLocation(), false); } void BindingDecl::anchor() {} BindingDecl *BindingDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation IdLoc, IdentifierInfo *Id) { return new (C, DC) BindingDecl(DC, IdLoc, Id); } BindingDecl *BindingDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) BindingDecl(nullptr, SourceLocation(), nullptr); } VarDecl *BindingDecl::getHoldingVar() const { Expr *B = getBinding(); if (!B) return nullptr; auto *DRE = dyn_cast(B->IgnoreImplicit()); if (!DRE) return nullptr; auto *VD = dyn_cast(DRE->getDecl()); assert(VD->isImplicit() && "holding var for binding decl not implicit"); return VD; } void 
DecompositionDecl::anchor() {} DecompositionDecl *DecompositionDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation StartLoc, SourceLocation LSquareLoc, QualType T, TypeSourceInfo *TInfo, StorageClass SC, ArrayRef Bindings) { size_t Extra = additionalSizeToAlloc(Bindings.size()); return new (C, DC, Extra) DecompositionDecl(C, DC, StartLoc, LSquareLoc, T, TInfo, SC, Bindings); } DecompositionDecl *DecompositionDecl::CreateDeserialized(ASTContext &C, unsigned ID, unsigned NumBindings) { size_t Extra = additionalSizeToAlloc(NumBindings); auto *Result = new (C, ID, Extra) DecompositionDecl(C, nullptr, SourceLocation(), SourceLocation(), QualType(), nullptr, StorageClass(), None); // Set up and clean out the bindings array. Result->NumBindings = NumBindings; auto *Trail = Result->getTrailingObjects(); for (unsigned I = 0; I != NumBindings; ++I) new (Trail + I) BindingDecl*(nullptr); return Result; } void DecompositionDecl::printName(llvm::raw_ostream &os) const { os << '['; bool Comma = false; for (auto *B : bindings()) { if (Comma) os << ", "; B->printName(os); Comma = true; } os << ']'; } MSPropertyDecl *MSPropertyDecl::Create(ASTContext &C, DeclContext *DC, SourceLocation L, DeclarationName N, QualType T, TypeSourceInfo *TInfo, SourceLocation StartL, IdentifierInfo *Getter, IdentifierInfo *Setter) { return new (C, DC) MSPropertyDecl(DC, L, N, T, TInfo, StartL, Getter, Setter); } MSPropertyDecl *MSPropertyDecl::CreateDeserialized(ASTContext &C, unsigned ID) { return new (C, ID) MSPropertyDecl(nullptr, SourceLocation(), DeclarationName(), QualType(), nullptr, SourceLocation(), nullptr, nullptr); } static const char *getAccessName(AccessSpecifier AS) { switch (AS) { case AS_none: llvm_unreachable("Invalid access specifier!"); case AS_public: return "public"; case AS_private: return "private"; case AS_protected: return "protected"; } llvm_unreachable("Invalid access specifier!"); } const DiagnosticBuilder &clang::operator<<(const DiagnosticBuilder &DB, AccessSpecifier AS) { return DB << getAccessName(AS); } const PartialDiagnostic &clang::operator<<(const PartialDiagnostic &DB, AccessSpecifier AS) { return DB << getAccessName(AS); } diff --git a/lib/CodeGen/CGCXXABI.cpp b/lib/CodeGen/CGCXXABI.cpp index e29e525edd24..033258643ddf 100644 --- a/lib/CodeGen/CGCXXABI.cpp +++ b/lib/CodeGen/CGCXXABI.cpp @@ -1,333 +1,304 @@ //===----- CGCXXABI.cpp - Interface to C++ ABIs ---------------------------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This provides an abstract class for C++ code generation. Concrete subclasses // of this implement code generation for specific C++ ABIs. // //===----------------------------------------------------------------------===// #include "CGCXXABI.h" #include "CGCleanup.h" using namespace clang; using namespace CodeGen; CGCXXABI::~CGCXXABI() { } void CGCXXABI::ErrorUnsupportedABI(CodeGenFunction &CGF, StringRef S) { DiagnosticsEngine &Diags = CGF.CGM.getDiags(); unsigned DiagID = Diags.getCustomDiagID(DiagnosticsEngine::Error, "cannot yet compile %0 in this ABI"); Diags.Report(CGF.getContext().getFullLoc(CGF.CurCodeDecl->getLocation()), DiagID) << S; } bool CGCXXABI::canCopyArgument(const CXXRecordDecl *RD) const { - // If RD has a non-trivial move or copy constructor, we cannot copy the - // argument. 
- if (RD->hasNonTrivialCopyConstructor() || RD->hasNonTrivialMoveConstructor()) - return false; - - // If RD has a non-trivial destructor, we cannot copy the argument. - if (RD->hasNonTrivialDestructor()) - return false; - // We can only copy the argument if there exists at least one trivial, // non-deleted copy or move constructor. - // FIXME: This assumes that all lazily declared copy and move constructors are - // not deleted. This assumption might not be true in some corner cases. - bool CopyDeleted = false; - bool MoveDeleted = false; - for (const CXXConstructorDecl *CD : RD->ctors()) { - if (CD->isCopyConstructor() || CD->isMoveConstructor()) { - assert(CD->isTrivial()); - // We had at least one undeleted trivial copy or move ctor. Return - // directly. - if (!CD->isDeleted()) - return true; - if (CD->isCopyConstructor()) - CopyDeleted = true; - else - MoveDeleted = true; - } - } - - // If all trivial copy and move constructors are deleted, we cannot copy the - // argument. - return !(CopyDeleted && MoveDeleted); + return RD->canPassInRegisters(); } llvm::Constant *CGCXXABI::GetBogusMemberPointer(QualType T) { return llvm::Constant::getNullValue(CGM.getTypes().ConvertType(T)); } llvm::Type * CGCXXABI::ConvertMemberPointerType(const MemberPointerType *MPT) { return CGM.getTypes().ConvertType(CGM.getContext().getPointerDiffType()); } CGCallee CGCXXABI::EmitLoadOfMemberFunctionPointer( CodeGenFunction &CGF, const Expr *E, Address This, llvm::Value *&ThisPtrForCall, llvm::Value *MemPtr, const MemberPointerType *MPT) { ErrorUnsupportedABI(CGF, "calls through member pointers"); ThisPtrForCall = This.getPointer(); const FunctionProtoType *FPT = MPT->getPointeeType()->getAs(); const CXXRecordDecl *RD = cast(MPT->getClass()->getAs()->getDecl()); llvm::FunctionType *FTy = CGM.getTypes().GetFunctionType( CGM.getTypes().arrangeCXXMethodType(RD, FPT, /*FD=*/nullptr)); llvm::Constant *FnPtr = llvm::Constant::getNullValue(FTy->getPointerTo()); return CGCallee::forDirect(FnPtr, FPT); } llvm::Value * CGCXXABI::EmitMemberDataPointerAddress(CodeGenFunction &CGF, const Expr *E, Address Base, llvm::Value *MemPtr, const MemberPointerType *MPT) { ErrorUnsupportedABI(CGF, "loads of member pointers"); llvm::Type *Ty = CGF.ConvertType(MPT->getPointeeType()) ->getPointerTo(Base.getAddressSpace()); return llvm::Constant::getNullValue(Ty); } llvm::Value *CGCXXABI::EmitMemberPointerConversion(CodeGenFunction &CGF, const CastExpr *E, llvm::Value *Src) { ErrorUnsupportedABI(CGF, "member function pointer conversions"); return GetBogusMemberPointer(E->getType()); } llvm::Constant *CGCXXABI::EmitMemberPointerConversion(const CastExpr *E, llvm::Constant *Src) { return GetBogusMemberPointer(E->getType()); } llvm::Value * CGCXXABI::EmitMemberPointerComparison(CodeGenFunction &CGF, llvm::Value *L, llvm::Value *R, const MemberPointerType *MPT, bool Inequality) { ErrorUnsupportedABI(CGF, "member function pointer comparison"); return CGF.Builder.getFalse(); } llvm::Value * CGCXXABI::EmitMemberPointerIsNotNull(CodeGenFunction &CGF, llvm::Value *MemPtr, const MemberPointerType *MPT) { ErrorUnsupportedABI(CGF, "member function pointer null testing"); return CGF.Builder.getFalse(); } llvm::Constant * CGCXXABI::EmitNullMemberPointer(const MemberPointerType *MPT) { return GetBogusMemberPointer(QualType(MPT, 0)); } llvm::Constant *CGCXXABI::EmitMemberFunctionPointer(const CXXMethodDecl *MD) { return GetBogusMemberPointer(CGM.getContext().getMemberPointerType( MD->getType(), MD->getParent()->getTypeForDecl())); } llvm::Constant 
*CGCXXABI::EmitMemberDataPointer(const MemberPointerType *MPT, CharUnits offset) { return GetBogusMemberPointer(QualType(MPT, 0)); } llvm::Constant *CGCXXABI::EmitMemberPointer(const APValue &MP, QualType MPT) { return GetBogusMemberPointer(MPT); } bool CGCXXABI::isZeroInitializable(const MemberPointerType *MPT) { // Fake answer. return true; } void CGCXXABI::buildThisParam(CodeGenFunction &CGF, FunctionArgList ¶ms) { const CXXMethodDecl *MD = cast(CGF.CurGD.getDecl()); // FIXME: I'm not entirely sure I like using a fake decl just for code // generation. Maybe we can come up with a better way? auto *ThisDecl = ImplicitParamDecl::Create( CGM.getContext(), nullptr, MD->getLocation(), &CGM.getContext().Idents.get("this"), MD->getThisType(CGM.getContext()), ImplicitParamDecl::CXXThis); params.push_back(ThisDecl); CGF.CXXABIThisDecl = ThisDecl; // Compute the presumed alignment of 'this', which basically comes // down to whether we know it's a complete object or not. auto &Layout = CGF.getContext().getASTRecordLayout(MD->getParent()); if (MD->getParent()->getNumVBases() == 0 || // avoid vcall in common case MD->getParent()->hasAttr() || !isThisCompleteObject(CGF.CurGD)) { CGF.CXXABIThisAlignment = Layout.getAlignment(); } else { CGF.CXXABIThisAlignment = Layout.getNonVirtualAlignment(); } } void CGCXXABI::EmitThisParam(CodeGenFunction &CGF) { /// Initialize the 'this' slot. assert(getThisDecl(CGF) && "no 'this' variable for function"); CGF.CXXABIThisValue = CGF.Builder.CreateLoad(CGF.GetAddrOfLocalVar(getThisDecl(CGF)), "this"); } void CGCXXABI::EmitReturnFromThunk(CodeGenFunction &CGF, RValue RV, QualType ResultType) { CGF.EmitReturnOfRValue(RV, ResultType); } CharUnits CGCXXABI::GetArrayCookieSize(const CXXNewExpr *expr) { if (!requiresArrayCookie(expr)) return CharUnits::Zero(); return getArrayCookieSizeImpl(expr->getAllocatedType()); } CharUnits CGCXXABI::getArrayCookieSizeImpl(QualType elementType) { // BOGUS return CharUnits::Zero(); } Address CGCXXABI::InitializeArrayCookie(CodeGenFunction &CGF, Address NewPtr, llvm::Value *NumElements, const CXXNewExpr *expr, QualType ElementType) { // Should never be called. ErrorUnsupportedABI(CGF, "array cookie initialization"); return Address::invalid(); } bool CGCXXABI::requiresArrayCookie(const CXXDeleteExpr *expr, QualType elementType) { // If the class's usual deallocation function takes two arguments, // it needs a cookie. if (expr->doesUsualArrayDeleteWantSize()) return true; return elementType.isDestructedType(); } bool CGCXXABI::requiresArrayCookie(const CXXNewExpr *expr) { // If the class's usual deallocation function takes two arguments, // it needs a cookie. if (expr->doesUsualArrayDeleteWantSize()) return true; return expr->getAllocatedType().isDestructedType(); } void CGCXXABI::ReadArrayCookie(CodeGenFunction &CGF, Address ptr, const CXXDeleteExpr *expr, QualType eltTy, llvm::Value *&numElements, llvm::Value *&allocPtr, CharUnits &cookieSize) { // Derive a char* in the same address space as the pointer. ptr = CGF.Builder.CreateElementBitCast(ptr, CGF.Int8Ty); // If we don't need an array cookie, bail out early. 
if (!requiresArrayCookie(expr, eltTy)) { allocPtr = ptr.getPointer(); numElements = nullptr; cookieSize = CharUnits::Zero(); return; } cookieSize = getArrayCookieSizeImpl(eltTy); Address allocAddr = CGF.Builder.CreateConstInBoundsByteGEP(ptr, -cookieSize); allocPtr = allocAddr.getPointer(); numElements = readArrayCookieImpl(CGF, allocAddr, cookieSize); } llvm::Value *CGCXXABI::readArrayCookieImpl(CodeGenFunction &CGF, Address ptr, CharUnits cookieSize) { ErrorUnsupportedABI(CGF, "reading a new[] cookie"); return llvm::ConstantInt::get(CGF.SizeTy, 0); } /// Returns the adjustment, in bytes, required for the given /// member-pointer operation. Returns null if no adjustment is /// required. llvm::Constant *CGCXXABI::getMemberPointerAdjustment(const CastExpr *E) { assert(E->getCastKind() == CK_DerivedToBaseMemberPointer || E->getCastKind() == CK_BaseToDerivedMemberPointer); QualType derivedType; if (E->getCastKind() == CK_DerivedToBaseMemberPointer) derivedType = E->getSubExpr()->getType(); else derivedType = E->getType(); const CXXRecordDecl *derivedClass = derivedType->castAs()->getClass()->getAsCXXRecordDecl(); return CGM.GetNonVirtualBaseClassOffset(derivedClass, E->path_begin(), E->path_end()); } CharUnits CGCXXABI::getMemberPointerPathAdjustment(const APValue &MP) { // TODO: Store base specifiers in APValue member pointer paths so we can // easily reuse CGM.GetNonVirtualBaseClassOffset(). const ValueDecl *MPD = MP.getMemberPointerDecl(); CharUnits ThisAdjustment = CharUnits::Zero(); ArrayRef Path = MP.getMemberPointerPath(); bool DerivedMember = MP.isMemberPointerToDerivedMember(); const CXXRecordDecl *RD = cast(MPD->getDeclContext()); for (unsigned I = 0, N = Path.size(); I != N; ++I) { const CXXRecordDecl *Base = RD; const CXXRecordDecl *Derived = Path[I]; if (DerivedMember) std::swap(Base, Derived); ThisAdjustment += getContext().getASTRecordLayout(Derived).getBaseClassOffset(Base); RD = Path[I]; } if (DerivedMember) ThisAdjustment = -ThisAdjustment; return ThisAdjustment; } llvm::BasicBlock * CGCXXABI::EmitCtorCompleteObjectHandler(CodeGenFunction &CGF, const CXXRecordDecl *RD) { if (CGM.getTarget().getCXXABI().hasConstructorVariants()) llvm_unreachable("shouldn't be called in this ABI"); ErrorUnsupportedABI(CGF, "complete object detection in ctor"); return nullptr; } bool CGCXXABI::NeedsVTTParameter(GlobalDecl GD) { return false; } llvm::CallInst * CGCXXABI::emitTerminateForUnexpectedException(CodeGenFunction &CGF, llvm::Value *Exn) { // Just call std::terminate and ignore the violating exception. return CGF.EmitNounwindRuntimeCall(CGF.CGM.getTerminateFn()); } CatchTypeInfo CGCXXABI::getCatchAllTypeInfo() { return CatchTypeInfo{nullptr, 0}; } std::vector CGCXXABI::getVBPtrOffsets(const CXXRecordDecl *RD) { return std::vector(); } diff --git a/lib/CodeGen/ItaniumCXXABI.cpp b/lib/CodeGen/ItaniumCXXABI.cpp index c82b9677eacf..e7963674fc29 100644 --- a/lib/CodeGen/ItaniumCXXABI.cpp +++ b/lib/CodeGen/ItaniumCXXABI.cpp @@ -1,4007 +1,4002 @@ //===------- ItaniumCXXABI.cpp - Emit LLVM Code from ASTs for a Module ----===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This provides C++ code generation targeting the Itanium C++ ABI. 
The class // in this file generates structures that follow the Itanium C++ ABI, which is // documented at: // http://www.codesourcery.com/public/cxx-abi/abi.html // http://www.codesourcery.com/public/cxx-abi/abi-eh.html // // It also supports the closely-related ARM ABI, documented at: // http://infocenter.arm.com/help/topic/com.arm.doc.ihi0041c/IHI0041C_cppabi.pdf // //===----------------------------------------------------------------------===// #include "CGCXXABI.h" #include "CGCleanup.h" #include "CGRecordLayout.h" #include "CGVTables.h" #include "CodeGenFunction.h" #include "CodeGenModule.h" #include "TargetInfo.h" #include "clang/CodeGen/ConstantInitBuilder.h" #include "clang/AST/Mangle.h" #include "clang/AST/Type.h" #include "clang/AST/StmtCXX.h" #include "llvm/IR/CallSite.h" #include "llvm/IR/DataLayout.h" #include "llvm/IR/Instructions.h" #include "llvm/IR/Intrinsics.h" #include "llvm/IR/Value.h" using namespace clang; using namespace CodeGen; namespace { class ItaniumCXXABI : public CodeGen::CGCXXABI { /// VTables - All the vtables which have been defined. llvm::DenseMap VTables; protected: bool UseARMMethodPtrABI; bool UseARMGuardVarABI; bool Use32BitVTableOffsetABI; ItaniumMangleContext &getMangleContext() { return cast(CodeGen::CGCXXABI::getMangleContext()); } public: ItaniumCXXABI(CodeGen::CodeGenModule &CGM, bool UseARMMethodPtrABI = false, bool UseARMGuardVarABI = false) : CGCXXABI(CGM), UseARMMethodPtrABI(UseARMMethodPtrABI), UseARMGuardVarABI(UseARMGuardVarABI), Use32BitVTableOffsetABI(false) { } bool classifyReturnType(CGFunctionInfo &FI) const override; RecordArgABI getRecordArgABI(const CXXRecordDecl *RD) const override { - // Structures with either a non-trivial destructor or a non-trivial - // copy constructor are always indirect. - // FIXME: Use canCopyArgument() when it is fixed to handle lazily declared - // special members. - if (RD->hasNonTrivialDestructor() || RD->hasNonTrivialCopyConstructor()) + // If C++ prohibits us from making a copy, pass by address. + if (!canCopyArgument(RD)) return RAA_Indirect; return RAA_Default; } bool isThisCompleteObject(GlobalDecl GD) const override { // The Itanium ABI has separate complete-object vs. base-object // variants of both constructors and destructors. if (isa(GD.getDecl())) { switch (GD.getDtorType()) { case Dtor_Complete: case Dtor_Deleting: return true; case Dtor_Base: return false; case Dtor_Comdat: llvm_unreachable("emitting dtor comdat as function?"); } llvm_unreachable("bad dtor kind"); } if (isa(GD.getDecl())) { switch (GD.getCtorType()) { case Ctor_Complete: return true; case Ctor_Base: return false; case Ctor_CopyingClosure: case Ctor_DefaultClosure: llvm_unreachable("closure ctors in Itanium ABI?"); case Ctor_Comdat: llvm_unreachable("emitting ctor comdat as function?"); } llvm_unreachable("bad dtor kind"); } // No other kinds. 
return false; } bool isZeroInitializable(const MemberPointerType *MPT) override; llvm::Type *ConvertMemberPointerType(const MemberPointerType *MPT) override; CGCallee EmitLoadOfMemberFunctionPointer(CodeGenFunction &CGF, const Expr *E, Address This, llvm::Value *&ThisPtrForCall, llvm::Value *MemFnPtr, const MemberPointerType *MPT) override; llvm::Value * EmitMemberDataPointerAddress(CodeGenFunction &CGF, const Expr *E, Address Base, llvm::Value *MemPtr, const MemberPointerType *MPT) override; llvm::Value *EmitMemberPointerConversion(CodeGenFunction &CGF, const CastExpr *E, llvm::Value *Src) override; llvm::Constant *EmitMemberPointerConversion(const CastExpr *E, llvm::Constant *Src) override; llvm::Constant *EmitNullMemberPointer(const MemberPointerType *MPT) override; llvm::Constant *EmitMemberFunctionPointer(const CXXMethodDecl *MD) override; llvm::Constant *EmitMemberDataPointer(const MemberPointerType *MPT, CharUnits offset) override; llvm::Constant *EmitMemberPointer(const APValue &MP, QualType MPT) override; llvm::Constant *BuildMemberPointer(const CXXMethodDecl *MD, CharUnits ThisAdjustment); llvm::Value *EmitMemberPointerComparison(CodeGenFunction &CGF, llvm::Value *L, llvm::Value *R, const MemberPointerType *MPT, bool Inequality) override; llvm::Value *EmitMemberPointerIsNotNull(CodeGenFunction &CGF, llvm::Value *Addr, const MemberPointerType *MPT) override; void emitVirtualObjectDelete(CodeGenFunction &CGF, const CXXDeleteExpr *DE, Address Ptr, QualType ElementType, const CXXDestructorDecl *Dtor) override; CharUnits getAlignmentOfExnObject() { unsigned Align = CGM.getContext().getTargetInfo().getExnObjectAlignment(); return CGM.getContext().toCharUnitsFromBits(Align); } void emitRethrow(CodeGenFunction &CGF, bool isNoReturn) override; void emitThrow(CodeGenFunction &CGF, const CXXThrowExpr *E) override; void emitBeginCatch(CodeGenFunction &CGF, const CXXCatchStmt *C) override; llvm::CallInst * emitTerminateForUnexpectedException(CodeGenFunction &CGF, llvm::Value *Exn) override; void EmitFundamentalRTTIDescriptor(QualType Type, bool DLLExport); void EmitFundamentalRTTIDescriptors(bool DLLExport); llvm::Constant *getAddrOfRTTIDescriptor(QualType Ty) override; CatchTypeInfo getAddrOfCXXCatchHandlerType(QualType Ty, QualType CatchHandlerType) override { return CatchTypeInfo{getAddrOfRTTIDescriptor(Ty), 0}; } bool shouldTypeidBeNullChecked(bool IsDeref, QualType SrcRecordTy) override; void EmitBadTypeidCall(CodeGenFunction &CGF) override; llvm::Value *EmitTypeid(CodeGenFunction &CGF, QualType SrcRecordTy, Address ThisPtr, llvm::Type *StdTypeInfoPtrTy) override; bool shouldDynamicCastCallBeNullChecked(bool SrcIsPtr, QualType SrcRecordTy) override; llvm::Value *EmitDynamicCastCall(CodeGenFunction &CGF, Address Value, QualType SrcRecordTy, QualType DestTy, QualType DestRecordTy, llvm::BasicBlock *CastEnd) override; llvm::Value *EmitDynamicCastToVoid(CodeGenFunction &CGF, Address Value, QualType SrcRecordTy, QualType DestTy) override; bool EmitBadCastCall(CodeGenFunction &CGF) override; llvm::Value * GetVirtualBaseClassOffset(CodeGenFunction &CGF, Address This, const CXXRecordDecl *ClassDecl, const CXXRecordDecl *BaseClassDecl) override; void EmitCXXConstructors(const CXXConstructorDecl *D) override; AddedStructorArgs buildStructorSignature(const CXXMethodDecl *MD, StructorType T, SmallVectorImpl &ArgTys) override; bool useThunkForDtorVariant(const CXXDestructorDecl *Dtor, CXXDtorType DT) const override { // Itanium does not emit any destructor variant as an inline thunk. 
// Delegating may occur as an optimization, but all variants are either // emitted with external linkage or as linkonce if they are inline and used. return false; } void EmitCXXDestructors(const CXXDestructorDecl *D) override; void addImplicitStructorParams(CodeGenFunction &CGF, QualType &ResTy, FunctionArgList &Params) override; void EmitInstanceFunctionProlog(CodeGenFunction &CGF) override; AddedStructorArgs addImplicitConstructorArgs(CodeGenFunction &CGF, const CXXConstructorDecl *D, CXXCtorType Type, bool ForVirtualBase, bool Delegating, CallArgList &Args) override; void EmitDestructorCall(CodeGenFunction &CGF, const CXXDestructorDecl *DD, CXXDtorType Type, bool ForVirtualBase, bool Delegating, Address This) override; void emitVTableDefinitions(CodeGenVTables &CGVT, const CXXRecordDecl *RD) override; bool isVirtualOffsetNeededForVTableField(CodeGenFunction &CGF, CodeGenFunction::VPtr Vptr) override; bool doStructorsInitializeVPtrs(const CXXRecordDecl *VTableClass) override { return true; } llvm::Constant * getVTableAddressPoint(BaseSubobject Base, const CXXRecordDecl *VTableClass) override; llvm::Value *getVTableAddressPointInStructor( CodeGenFunction &CGF, const CXXRecordDecl *VTableClass, BaseSubobject Base, const CXXRecordDecl *NearestVBase) override; llvm::Value *getVTableAddressPointInStructorWithVTT( CodeGenFunction &CGF, const CXXRecordDecl *VTableClass, BaseSubobject Base, const CXXRecordDecl *NearestVBase); llvm::Constant * getVTableAddressPointForConstExpr(BaseSubobject Base, const CXXRecordDecl *VTableClass) override; llvm::GlobalVariable *getAddrOfVTable(const CXXRecordDecl *RD, CharUnits VPtrOffset) override; CGCallee getVirtualFunctionPointer(CodeGenFunction &CGF, GlobalDecl GD, Address This, llvm::Type *Ty, SourceLocation Loc) override; llvm::Value *EmitVirtualDestructorCall(CodeGenFunction &CGF, const CXXDestructorDecl *Dtor, CXXDtorType DtorType, Address This, const CXXMemberCallExpr *CE) override; void emitVirtualInheritanceTables(const CXXRecordDecl *RD) override; bool canSpeculativelyEmitVTable(const CXXRecordDecl *RD) const override; void setThunkLinkage(llvm::Function *Thunk, bool ForVTable, GlobalDecl GD, bool ReturnAdjustment) override { // Allow inlining of thunks by emitting them with available_externally // linkage together with vtables when needed. 
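    // (With available_externally linkage the optimizer still sees a body it
    // may inline, but no definition of the thunk is emitted into this
    // translation unit's object file; the linkage is only changed for
    // non-local thunks requested for a vtable, as checked below.)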
if (ForVTable && !Thunk->hasLocalLinkage()) Thunk->setLinkage(llvm::GlobalValue::AvailableExternallyLinkage); } llvm::Value *performThisAdjustment(CodeGenFunction &CGF, Address This, const ThisAdjustment &TA) override; llvm::Value *performReturnAdjustment(CodeGenFunction &CGF, Address Ret, const ReturnAdjustment &RA) override; size_t getSrcArgforCopyCtor(const CXXConstructorDecl *, FunctionArgList &Args) const override { assert(!Args.empty() && "expected the arglist to not be empty!"); return Args.size() - 1; } StringRef GetPureVirtualCallName() override { return "__cxa_pure_virtual"; } StringRef GetDeletedVirtualCallName() override { return "__cxa_deleted_virtual"; } CharUnits getArrayCookieSizeImpl(QualType elementType) override; Address InitializeArrayCookie(CodeGenFunction &CGF, Address NewPtr, llvm::Value *NumElements, const CXXNewExpr *expr, QualType ElementType) override; llvm::Value *readArrayCookieImpl(CodeGenFunction &CGF, Address allocPtr, CharUnits cookieSize) override; void EmitGuardedInit(CodeGenFunction &CGF, const VarDecl &D, llvm::GlobalVariable *DeclPtr, bool PerformInit) override; void registerGlobalDtor(CodeGenFunction &CGF, const VarDecl &D, llvm::Constant *dtor, llvm::Constant *addr) override; llvm::Function *getOrCreateThreadLocalWrapper(const VarDecl *VD, llvm::Value *Val); void EmitThreadLocalInitFuncs( CodeGenModule &CGM, ArrayRef CXXThreadLocals, ArrayRef CXXThreadLocalInits, ArrayRef CXXThreadLocalInitVars) override; bool usesThreadWrapperFunction() const override { return true; } LValue EmitThreadLocalVarDeclLValue(CodeGenFunction &CGF, const VarDecl *VD, QualType LValType) override; bool NeedsVTTParameter(GlobalDecl GD) override; /**************************** RTTI Uniqueness ******************************/ protected: /// Returns true if the ABI requires RTTI type_info objects to be unique /// across a program. virtual bool shouldRTTIBeUnique() const { return true; } public: /// What sort of unique-RTTI behavior should we use? enum RTTIUniquenessKind { /// We are guaranteeing, or need to guarantee, that the RTTI string /// is unique. RUK_Unique, /// We are not guaranteeing uniqueness for the RTTI string, so we /// can demote to hidden visibility but must use string comparisons. RUK_NonUniqueHidden, /// We are not guaranteeing uniqueness for the RTTI string, so we /// have to use string comparisons, but we also have to emit it with /// non-hidden visibility. RUK_NonUniqueVisible }; /// Return the required visibility status for the given type and linkage in /// the current ABI. RTTIUniquenessKind classifyRTTIUniqueness(QualType CanTy, llvm::GlobalValue::LinkageTypes Linkage) const; friend class ItaniumRTTIBuilder; void emitCXXStructor(const CXXMethodDecl *MD, StructorType Type) override; private: bool hasAnyUnusedVirtualInlineFunction(const CXXRecordDecl *RD) const { const auto &VtableLayout = CGM.getItaniumVTableContext().getVTableLayout(RD); for (const auto &VtableComponent : VtableLayout.vtable_components()) { // Skip empty slot. if (!VtableComponent.isUsedFunctionPointerKind()) continue; const CXXMethodDecl *Method = VtableComponent.getFunctionDecl(); if (!Method->getCanonicalDecl()->isInlined()) continue; StringRef Name = CGM.getMangledName(VtableComponent.getGlobalDecl()); auto *Entry = CGM.GetGlobalValue(Name); // This checks if virtual inline function has already been emitted. // Note that it is possible that this inline function would be emitted // after trying to emit vtable speculatively. 
Because of this we do // an extra pass after emitting all deferred vtables to find and emit // these vtables opportunistically. if (!Entry || Entry->isDeclaration()) return true; } return false; } bool isVTableHidden(const CXXRecordDecl *RD) const { const auto &VtableLayout = CGM.getItaniumVTableContext().getVTableLayout(RD); for (const auto &VtableComponent : VtableLayout.vtable_components()) { if (VtableComponent.isRTTIKind()) { const CXXRecordDecl *RTTIDecl = VtableComponent.getRTTIDecl(); if (RTTIDecl->getVisibility() == Visibility::HiddenVisibility) return true; } else if (VtableComponent.isUsedFunctionPointerKind()) { const CXXMethodDecl *Method = VtableComponent.getFunctionDecl(); if (Method->getVisibility() == Visibility::HiddenVisibility && !Method->isDefined()) return true; } } return false; } }; class ARMCXXABI : public ItaniumCXXABI { public: ARMCXXABI(CodeGen::CodeGenModule &CGM) : ItaniumCXXABI(CGM, /* UseARMMethodPtrABI = */ true, /* UseARMGuardVarABI = */ true) {} bool HasThisReturn(GlobalDecl GD) const override { return (isa(GD.getDecl()) || ( isa(GD.getDecl()) && GD.getDtorType() != Dtor_Deleting)); } void EmitReturnFromThunk(CodeGenFunction &CGF, RValue RV, QualType ResTy) override; CharUnits getArrayCookieSizeImpl(QualType elementType) override; Address InitializeArrayCookie(CodeGenFunction &CGF, Address NewPtr, llvm::Value *NumElements, const CXXNewExpr *expr, QualType ElementType) override; llvm::Value *readArrayCookieImpl(CodeGenFunction &CGF, Address allocPtr, CharUnits cookieSize) override; }; class iOS64CXXABI : public ARMCXXABI { public: iOS64CXXABI(CodeGen::CodeGenModule &CGM) : ARMCXXABI(CGM) { Use32BitVTableOffsetABI = true; } // ARM64 libraries are prepared for non-unique RTTI. bool shouldRTTIBeUnique() const override { return false; } }; class WebAssemblyCXXABI final : public ItaniumCXXABI { public: explicit WebAssemblyCXXABI(CodeGen::CodeGenModule &CGM) : ItaniumCXXABI(CGM, /*UseARMMethodPtrABI=*/true, /*UseARMGuardVarABI=*/true) {} private: bool HasThisReturn(GlobalDecl GD) const override { return isa(GD.getDecl()) || (isa(GD.getDecl()) && GD.getDtorType() != Dtor_Deleting); } bool canCallMismatchedFunctionType() const override { return false; } }; } CodeGen::CGCXXABI *CodeGen::CreateItaniumCXXABI(CodeGenModule &CGM) { switch (CGM.getTarget().getCXXABI().getKind()) { // For IR-generation purposes, there's no significant difference // between the ARM and iOS ABIs. case TargetCXXABI::GenericARM: case TargetCXXABI::iOS: case TargetCXXABI::WatchOS: return new ARMCXXABI(CGM); case TargetCXXABI::iOS64: return new iOS64CXXABI(CGM); // Note that AArch64 uses the generic ItaniumCXXABI class since it doesn't // include the other 32-bit ARM oddities: constructor/destructor return values // and array cookies. case TargetCXXABI::GenericAArch64: return new ItaniumCXXABI(CGM, /* UseARMMethodPtrABI = */ true, /* UseARMGuardVarABI = */ true); case TargetCXXABI::GenericMIPS: return new ItaniumCXXABI(CGM, /* UseARMMethodPtrABI = */ true); case TargetCXXABI::WebAssembly: return new WebAssemblyCXXABI(CGM); case TargetCXXABI::GenericItanium: if (CGM.getContext().getTargetInfo().getTriple().getArch() == llvm::Triple::le32) { // For PNaCl, use ARM-style method pointers so that PNaCl code // does not assume anything about the alignment of function // pointers. 
return new ItaniumCXXABI(CGM, /* UseARMMethodPtrABI = */ true, /* UseARMGuardVarABI = */ false); } return new ItaniumCXXABI(CGM); case TargetCXXABI::Microsoft: llvm_unreachable("Microsoft ABI is not Itanium-based"); } llvm_unreachable("bad ABI kind"); } llvm::Type * ItaniumCXXABI::ConvertMemberPointerType(const MemberPointerType *MPT) { if (MPT->isMemberDataPointer()) return CGM.PtrDiffTy; return llvm::StructType::get(CGM.PtrDiffTy, CGM.PtrDiffTy); } /// In the Itanium and ARM ABIs, method pointers have the form: /// struct { ptrdiff_t ptr; ptrdiff_t adj; } memptr; /// /// In the Itanium ABI: /// - method pointers are virtual if (memptr.ptr & 1) is nonzero /// - the this-adjustment is (memptr.adj) /// - the virtual offset is (memptr.ptr - 1) /// /// In the ARM ABI: /// - method pointers are virtual if (memptr.adj & 1) is nonzero /// - the this-adjustment is (memptr.adj >> 1) /// - the virtual offset is (memptr.ptr) /// ARM uses 'adj' for the virtual flag because Thumb functions /// may be only single-byte aligned. /// /// If the member is virtual, the adjusted 'this' pointer points /// to a vtable pointer from which the virtual offset is applied. /// /// If the member is non-virtual, memptr.ptr is the address of /// the function to call. CGCallee ItaniumCXXABI::EmitLoadOfMemberFunctionPointer( CodeGenFunction &CGF, const Expr *E, Address ThisAddr, llvm::Value *&ThisPtrForCall, llvm::Value *MemFnPtr, const MemberPointerType *MPT) { CGBuilderTy &Builder = CGF.Builder; const FunctionProtoType *FPT = MPT->getPointeeType()->getAs(); const CXXRecordDecl *RD = cast(MPT->getClass()->getAs()->getDecl()); llvm::FunctionType *FTy = CGM.getTypes().GetFunctionType( CGM.getTypes().arrangeCXXMethodType(RD, FPT, /*FD=*/nullptr)); llvm::Constant *ptrdiff_1 = llvm::ConstantInt::get(CGM.PtrDiffTy, 1); llvm::BasicBlock *FnVirtual = CGF.createBasicBlock("memptr.virtual"); llvm::BasicBlock *FnNonVirtual = CGF.createBasicBlock("memptr.nonvirtual"); llvm::BasicBlock *FnEnd = CGF.createBasicBlock("memptr.end"); // Extract memptr.adj, which is in the second field. llvm::Value *RawAdj = Builder.CreateExtractValue(MemFnPtr, 1, "memptr.adj"); // Compute the true adjustment. llvm::Value *Adj = RawAdj; if (UseARMMethodPtrABI) Adj = Builder.CreateAShr(Adj, ptrdiff_1, "memptr.adj.shifted"); // Apply the adjustment and cast back to the original struct type // for consistency. llvm::Value *This = ThisAddr.getPointer(); llvm::Value *Ptr = Builder.CreateBitCast(This, Builder.getInt8PtrTy()); Ptr = Builder.CreateInBoundsGEP(Ptr, Adj); This = Builder.CreateBitCast(Ptr, This->getType(), "this.adjusted"); ThisPtrForCall = This; // Load the function pointer. llvm::Value *FnAsInt = Builder.CreateExtractValue(MemFnPtr, 0, "memptr.ptr"); // If the LSB in the function pointer is 1, the function pointer points to // a virtual function. llvm::Value *IsVirtual; if (UseARMMethodPtrABI) IsVirtual = Builder.CreateAnd(RawAdj, ptrdiff_1); else IsVirtual = Builder.CreateAnd(FnAsInt, ptrdiff_1); IsVirtual = Builder.CreateIsNotNull(IsVirtual, "memptr.isvirtual"); Builder.CreateCondBr(IsVirtual, FnVirtual, FnNonVirtual); // In the virtual path, the adjustment left 'This' pointing to the // vtable of the correct base subobject. The "function pointer" is an // offset within the vtable (+1 for the virtual flag on non-ARM). CGF.EmitBlock(FnVirtual); // Cast the adjusted this to a pointer to vtable pointer and load. 
llvm::Type *VTableTy = Builder.getInt8PtrTy(); CharUnits VTablePtrAlign = CGF.CGM.getDynamicOffsetAlignment(ThisAddr.getAlignment(), RD, CGF.getPointerAlign()); llvm::Value *VTable = CGF.GetVTablePtr(Address(This, VTablePtrAlign), VTableTy, RD); // Apply the offset. // On ARM64, to reserve extra space in virtual member function pointers, // we only pay attention to the low 32 bits of the offset. llvm::Value *VTableOffset = FnAsInt; if (!UseARMMethodPtrABI) VTableOffset = Builder.CreateSub(VTableOffset, ptrdiff_1); if (Use32BitVTableOffsetABI) { VTableOffset = Builder.CreateTrunc(VTableOffset, CGF.Int32Ty); VTableOffset = Builder.CreateZExt(VTableOffset, CGM.PtrDiffTy); } VTable = Builder.CreateGEP(VTable, VTableOffset); // Load the virtual function to call. VTable = Builder.CreateBitCast(VTable, FTy->getPointerTo()->getPointerTo()); llvm::Value *VirtualFn = Builder.CreateAlignedLoad(VTable, CGF.getPointerAlign(), "memptr.virtualfn"); CGF.EmitBranch(FnEnd); // In the non-virtual path, the function pointer is actually a // function pointer. CGF.EmitBlock(FnNonVirtual); llvm::Value *NonVirtualFn = Builder.CreateIntToPtr(FnAsInt, FTy->getPointerTo(), "memptr.nonvirtualfn"); // We're done. CGF.EmitBlock(FnEnd); llvm::PHINode *CalleePtr = Builder.CreatePHI(FTy->getPointerTo(), 2); CalleePtr->addIncoming(VirtualFn, FnVirtual); CalleePtr->addIncoming(NonVirtualFn, FnNonVirtual); CGCallee Callee(FPT, CalleePtr); return Callee; } /// Compute an l-value by applying the given pointer-to-member to a /// base object. llvm::Value *ItaniumCXXABI::EmitMemberDataPointerAddress( CodeGenFunction &CGF, const Expr *E, Address Base, llvm::Value *MemPtr, const MemberPointerType *MPT) { assert(MemPtr->getType() == CGM.PtrDiffTy); CGBuilderTy &Builder = CGF.Builder; // Cast to char*. Base = Builder.CreateElementBitCast(Base, CGF.Int8Ty); // Apply the offset, which we assume is non-null. llvm::Value *Addr = Builder.CreateInBoundsGEP(Base.getPointer(), MemPtr, "memptr.offset"); // Cast the address to the appropriate pointer type, adopting the // address space of the base pointer. llvm::Type *PType = CGF.ConvertTypeForMem(MPT->getPointeeType()) ->getPointerTo(Base.getAddressSpace()); return Builder.CreateBitCast(Addr, PType); } /// Perform a bitcast, derived-to-base, or base-to-derived member pointer /// conversion. /// /// Bitcast conversions are always a no-op under Itanium. /// /// Obligatory offset/adjustment diagram: /// <-- offset --> <-- adjustment --> /// |--------------------------|----------------------|--------------------| /// ^Derived address point ^Base address point ^Member address point /// /// So when converting a base member pointer to a derived member pointer, /// we add the offset to the adjustment because the address point has /// decreased; and conversely, when converting a derived MP to a base MP /// we subtract the offset from the adjustment because the address point /// has increased. /// /// The standard forbids (at compile time) conversion to and from /// virtual bases, which is why we don't have to consider them here. /// /// The standard forbids (at run time) casting a derived MP to a base /// MP when the derived MP does not point to a member of the base. /// This is why -1 is a reasonable choice for null data member /// pointers. 
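/// Illustrative example of the data-member-pointer case handled below: given
///   struct B1 { int x; };  struct B2 { int y; };  struct D : B1, B2 {};
/// converting an 'int B2::*' that designates B2::y to 'int D::*'
/// (base-to-derived) adds the offset of the B2 subobject within D to the
/// stored ptrdiff_t, while the null value (all ones) is preserved by the
/// select emitted below.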
llvm::Value * ItaniumCXXABI::EmitMemberPointerConversion(CodeGenFunction &CGF, const CastExpr *E, llvm::Value *src) { assert(E->getCastKind() == CK_DerivedToBaseMemberPointer || E->getCastKind() == CK_BaseToDerivedMemberPointer || E->getCastKind() == CK_ReinterpretMemberPointer); // Under Itanium, reinterprets don't require any additional processing. if (E->getCastKind() == CK_ReinterpretMemberPointer) return src; // Use constant emission if we can. if (isa(src)) return EmitMemberPointerConversion(E, cast(src)); llvm::Constant *adj = getMemberPointerAdjustment(E); if (!adj) return src; CGBuilderTy &Builder = CGF.Builder; bool isDerivedToBase = (E->getCastKind() == CK_DerivedToBaseMemberPointer); const MemberPointerType *destTy = E->getType()->castAs(); // For member data pointers, this is just a matter of adding the // offset if the source is non-null. if (destTy->isMemberDataPointer()) { llvm::Value *dst; if (isDerivedToBase) dst = Builder.CreateNSWSub(src, adj, "adj"); else dst = Builder.CreateNSWAdd(src, adj, "adj"); // Null check. llvm::Value *null = llvm::Constant::getAllOnesValue(src->getType()); llvm::Value *isNull = Builder.CreateICmpEQ(src, null, "memptr.isnull"); return Builder.CreateSelect(isNull, src, dst); } // The this-adjustment is left-shifted by 1 on ARM. if (UseARMMethodPtrABI) { uint64_t offset = cast(adj)->getZExtValue(); offset <<= 1; adj = llvm::ConstantInt::get(adj->getType(), offset); } llvm::Value *srcAdj = Builder.CreateExtractValue(src, 1, "src.adj"); llvm::Value *dstAdj; if (isDerivedToBase) dstAdj = Builder.CreateNSWSub(srcAdj, adj, "adj"); else dstAdj = Builder.CreateNSWAdd(srcAdj, adj, "adj"); return Builder.CreateInsertValue(src, dstAdj, 1); } llvm::Constant * ItaniumCXXABI::EmitMemberPointerConversion(const CastExpr *E, llvm::Constant *src) { assert(E->getCastKind() == CK_DerivedToBaseMemberPointer || E->getCastKind() == CK_BaseToDerivedMemberPointer || E->getCastKind() == CK_ReinterpretMemberPointer); // Under Itanium, reinterprets don't require any additional processing. if (E->getCastKind() == CK_ReinterpretMemberPointer) return src; // If the adjustment is trivial, we don't need to do anything. llvm::Constant *adj = getMemberPointerAdjustment(E); if (!adj) return src; bool isDerivedToBase = (E->getCastKind() == CK_DerivedToBaseMemberPointer); const MemberPointerType *destTy = E->getType()->castAs(); // For member data pointers, this is just a matter of adding the // offset if the source is non-null. if (destTy->isMemberDataPointer()) { // null maps to null. if (src->isAllOnesValue()) return src; if (isDerivedToBase) return llvm::ConstantExpr::getNSWSub(src, adj); else return llvm::ConstantExpr::getNSWAdd(src, adj); } // The this-adjustment is left-shifted by 1 on ARM. if (UseARMMethodPtrABI) { uint64_t offset = cast(adj)->getZExtValue(); offset <<= 1; adj = llvm::ConstantInt::get(adj->getType(), offset); } llvm::Constant *srcAdj = llvm::ConstantExpr::getExtractValue(src, 1); llvm::Constant *dstAdj; if (isDerivedToBase) dstAdj = llvm::ConstantExpr::getNSWSub(srcAdj, adj); else dstAdj = llvm::ConstantExpr::getNSWAdd(srcAdj, adj); return llvm::ConstantExpr::getInsertValue(src, dstAdj, 1); } llvm::Constant * ItaniumCXXABI::EmitNullMemberPointer(const MemberPointerType *MPT) { // Itanium C++ ABI 2.3: // A NULL pointer is represented as -1. 
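  //
  // For illustration (a sketch of the lowered constants, assuming a 64-bit
  // ptrdiff_t):
  //
  //   int  A::*pd     = nullptr;   // emitted as i64 -1
  //   void (A::*pf)() = nullptr;   // emitted as { i64 0, i64 0 }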
if (MPT->isMemberDataPointer()) return llvm::ConstantInt::get(CGM.PtrDiffTy, -1ULL, /*isSigned=*/true); llvm::Constant *Zero = llvm::ConstantInt::get(CGM.PtrDiffTy, 0); llvm::Constant *Values[2] = { Zero, Zero }; return llvm::ConstantStruct::getAnon(Values); } llvm::Constant * ItaniumCXXABI::EmitMemberDataPointer(const MemberPointerType *MPT, CharUnits offset) { // Itanium C++ ABI 2.3: // A pointer to data member is an offset from the base address of // the class object containing it, represented as a ptrdiff_t return llvm::ConstantInt::get(CGM.PtrDiffTy, offset.getQuantity()); } llvm::Constant * ItaniumCXXABI::EmitMemberFunctionPointer(const CXXMethodDecl *MD) { return BuildMemberPointer(MD, CharUnits::Zero()); } llvm::Constant *ItaniumCXXABI::BuildMemberPointer(const CXXMethodDecl *MD, CharUnits ThisAdjustment) { assert(MD->isInstance() && "Member function must not be static!"); MD = MD->getCanonicalDecl(); CodeGenTypes &Types = CGM.getTypes(); // Get the function pointer (or index if this is a virtual function). llvm::Constant *MemPtr[2]; if (MD->isVirtual()) { uint64_t Index = CGM.getItaniumVTableContext().getMethodVTableIndex(MD); const ASTContext &Context = getContext(); CharUnits PointerWidth = Context.toCharUnitsFromBits(Context.getTargetInfo().getPointerWidth(0)); uint64_t VTableOffset = (Index * PointerWidth.getQuantity()); if (UseARMMethodPtrABI) { // ARM C++ ABI 3.2.1: // This ABI specifies that adj contains twice the this // adjustment, plus 1 if the member function is virtual. The // least significant bit of adj then makes exactly the same // discrimination as the least significant bit of ptr does for // Itanium. MemPtr[0] = llvm::ConstantInt::get(CGM.PtrDiffTy, VTableOffset); MemPtr[1] = llvm::ConstantInt::get(CGM.PtrDiffTy, 2 * ThisAdjustment.getQuantity() + 1); } else { // Itanium C++ ABI 2.3: // For a virtual function, [the pointer field] is 1 plus the // virtual table offset (in bytes) of the function, // represented as a ptrdiff_t. MemPtr[0] = llvm::ConstantInt::get(CGM.PtrDiffTy, VTableOffset + 1); MemPtr[1] = llvm::ConstantInt::get(CGM.PtrDiffTy, ThisAdjustment.getQuantity()); } } else { const FunctionProtoType *FPT = MD->getType()->castAs(); llvm::Type *Ty; // Check whether the function has a computable LLVM signature. if (Types.isFuncTypeConvertible(FPT)) { // The function has a computable LLVM signature; use the correct type. Ty = Types.GetFunctionType(Types.arrangeCXXMethodDeclaration(MD)); } else { // Use an arbitrary non-function type to tell GetAddrOfFunction that the // function type is incomplete. Ty = CGM.PtrDiffTy; } llvm::Constant *addr = CGM.GetAddrOfFunction(MD, Ty); MemPtr[0] = llvm::ConstantExpr::getPtrToInt(addr, CGM.PtrDiffTy); MemPtr[1] = llvm::ConstantInt::get(CGM.PtrDiffTy, (UseARMMethodPtrABI ? 
2 : 1) * ThisAdjustment.getQuantity()); } return llvm::ConstantStruct::getAnon(MemPtr); } llvm::Constant *ItaniumCXXABI::EmitMemberPointer(const APValue &MP, QualType MPType) { const MemberPointerType *MPT = MPType->castAs(); const ValueDecl *MPD = MP.getMemberPointerDecl(); if (!MPD) return EmitNullMemberPointer(MPT); CharUnits ThisAdjustment = getMemberPointerPathAdjustment(MP); if (const CXXMethodDecl *MD = dyn_cast(MPD)) return BuildMemberPointer(MD, ThisAdjustment); CharUnits FieldOffset = getContext().toCharUnitsFromBits(getContext().getFieldOffset(MPD)); return EmitMemberDataPointer(MPT, ThisAdjustment + FieldOffset); } /// The comparison algorithm is pretty easy: the member pointers are /// the same if they're either bitwise identical *or* both null. /// /// ARM is different here only because null-ness is more complicated. llvm::Value * ItaniumCXXABI::EmitMemberPointerComparison(CodeGenFunction &CGF, llvm::Value *L, llvm::Value *R, const MemberPointerType *MPT, bool Inequality) { CGBuilderTy &Builder = CGF.Builder; llvm::ICmpInst::Predicate Eq; llvm::Instruction::BinaryOps And, Or; if (Inequality) { Eq = llvm::ICmpInst::ICMP_NE; And = llvm::Instruction::Or; Or = llvm::Instruction::And; } else { Eq = llvm::ICmpInst::ICMP_EQ; And = llvm::Instruction::And; Or = llvm::Instruction::Or; } // Member data pointers are easy because there's a unique null // value, so it just comes down to bitwise equality. if (MPT->isMemberDataPointer()) return Builder.CreateICmp(Eq, L, R); // For member function pointers, the tautologies are more complex. // The Itanium tautology is: // (L == R) <==> (L.ptr == R.ptr && (L.ptr == 0 || L.adj == R.adj)) // The ARM tautology is: // (L == R) <==> (L.ptr == R.ptr && // (L.adj == R.adj || // (L.ptr == 0 && ((L.adj|R.adj) & 1) == 0))) // The inequality tautologies have exactly the same structure, except // applying De Morgan's laws. llvm::Value *LPtr = Builder.CreateExtractValue(L, 0, "lhs.memptr.ptr"); llvm::Value *RPtr = Builder.CreateExtractValue(R, 0, "rhs.memptr.ptr"); // This condition tests whether L.ptr == R.ptr. This must always be // true for equality to hold. llvm::Value *PtrEq = Builder.CreateICmp(Eq, LPtr, RPtr, "cmp.ptr"); // This condition, together with the assumption that L.ptr == R.ptr, // tests whether the pointers are both null. ARM imposes an extra // condition. llvm::Value *Zero = llvm::Constant::getNullValue(LPtr->getType()); llvm::Value *EqZero = Builder.CreateICmp(Eq, LPtr, Zero, "cmp.ptr.null"); // This condition tests whether L.adj == R.adj. If this isn't // true, the pointers are unequal unless they're both null. llvm::Value *LAdj = Builder.CreateExtractValue(L, 1, "lhs.memptr.adj"); llvm::Value *RAdj = Builder.CreateExtractValue(R, 1, "rhs.memptr.adj"); llvm::Value *AdjEq = Builder.CreateICmp(Eq, LAdj, RAdj, "cmp.adj"); // Null member function pointers on ARM clear the low bit of Adj, // so the zero condition has to check that neither low bit is set. if (UseARMMethodPtrABI) { llvm::Value *One = llvm::ConstantInt::get(LPtr->getType(), 1); // Compute (l.adj | r.adj) & 1 and test it against zero. llvm::Value *OrAdj = Builder.CreateOr(LAdj, RAdj, "or.adj"); llvm::Value *OrAdjAnd1 = Builder.CreateAnd(OrAdj, One); llvm::Value *OrAdjAnd1EqZero = Builder.CreateICmp(Eq, OrAdjAnd1, Zero, "cmp.or.adj"); EqZero = Builder.CreateBinOp(And, EqZero, OrAdjAnd1EqZero); } // Tie together all our conditions. llvm::Value *Result = Builder.CreateBinOp(Or, EqZero, AdjEq); Result = Builder.CreateBinOp(And, PtrEq, Result, Inequality ? 
"memptr.ne" : "memptr.eq"); return Result; } llvm::Value * ItaniumCXXABI::EmitMemberPointerIsNotNull(CodeGenFunction &CGF, llvm::Value *MemPtr, const MemberPointerType *MPT) { CGBuilderTy &Builder = CGF.Builder; /// For member data pointers, this is just a check against -1. if (MPT->isMemberDataPointer()) { assert(MemPtr->getType() == CGM.PtrDiffTy); llvm::Value *NegativeOne = llvm::Constant::getAllOnesValue(MemPtr->getType()); return Builder.CreateICmpNE(MemPtr, NegativeOne, "memptr.tobool"); } // In Itanium, a member function pointer is not null if 'ptr' is not null. llvm::Value *Ptr = Builder.CreateExtractValue(MemPtr, 0, "memptr.ptr"); llvm::Constant *Zero = llvm::ConstantInt::get(Ptr->getType(), 0); llvm::Value *Result = Builder.CreateICmpNE(Ptr, Zero, "memptr.tobool"); // On ARM, a member function pointer is also non-null if the low bit of 'adj' // (the virtual bit) is set. if (UseARMMethodPtrABI) { llvm::Constant *One = llvm::ConstantInt::get(Ptr->getType(), 1); llvm::Value *Adj = Builder.CreateExtractValue(MemPtr, 1, "memptr.adj"); llvm::Value *VirtualBit = Builder.CreateAnd(Adj, One, "memptr.virtualbit"); llvm::Value *IsVirtual = Builder.CreateICmpNE(VirtualBit, Zero, "memptr.isvirtual"); Result = Builder.CreateOr(Result, IsVirtual); } return Result; } bool ItaniumCXXABI::classifyReturnType(CGFunctionInfo &FI) const { const CXXRecordDecl *RD = FI.getReturnType()->getAsCXXRecordDecl(); if (!RD) return false; - // Return indirectly if we have a non-trivial copy ctor or non-trivial dtor. - // FIXME: Use canCopyArgument() when it is fixed to handle lazily declared - // special members. - if (RD->hasNonTrivialDestructor() || RD->hasNonTrivialCopyConstructor()) { + // If C++ prohibits us from making a copy, return by address. + if (!canCopyArgument(RD)) { auto Align = CGM.getContext().getTypeAlignInChars(FI.getReturnType()); FI.getReturnInfo() = ABIArgInfo::getIndirect(Align, /*ByVal=*/false); return true; } return false; } /// The Itanium ABI requires non-zero initialization only for data /// member pointers, for which '0' is a valid offset. bool ItaniumCXXABI::isZeroInitializable(const MemberPointerType *MPT) { return MPT->isMemberFunctionPointer(); } /// The Itanium ABI always places an offset to the complete object /// at entry -2 in the vtable. void ItaniumCXXABI::emitVirtualObjectDelete(CodeGenFunction &CGF, const CXXDeleteExpr *DE, Address Ptr, QualType ElementType, const CXXDestructorDecl *Dtor) { bool UseGlobalDelete = DE->isGlobalDelete(); if (UseGlobalDelete) { // Derive the complete-object pointer, which is what we need // to pass to the deallocation function. // Grab the vtable pointer as an intptr_t*. auto *ClassDecl = cast(ElementType->getAs()->getDecl()); llvm::Value *VTable = CGF.GetVTablePtr(Ptr, CGF.IntPtrTy->getPointerTo(), ClassDecl); // Track back to entry -2 and pull out the offset there. llvm::Value *OffsetPtr = CGF.Builder.CreateConstInBoundsGEP1_64( VTable, -2, "complete-offset.ptr"); llvm::Value *Offset = CGF.Builder.CreateAlignedLoad(OffsetPtr, CGF.getPointerAlign()); // Apply the offset. llvm::Value *CompletePtr = CGF.Builder.CreateBitCast(Ptr.getPointer(), CGF.Int8PtrTy); CompletePtr = CGF.Builder.CreateInBoundsGEP(CompletePtr, Offset); // If we're supposed to call the global delete, make sure we do so // even if the destructor throws. CGF.pushCallObjectDeleteCleanup(DE->getOperatorDelete(), CompletePtr, ElementType); } // FIXME: Provide a source location here even though there's no // CXXMemberCallExpr for dtor call. 
  CXXDtorType DtorType = UseGlobalDelete ? Dtor_Complete : Dtor_Deleting;
  EmitVirtualDestructorCall(CGF, Dtor, DtorType, Ptr, /*CE=*/nullptr);

  if (UseGlobalDelete)
    CGF.PopCleanupBlock();
}

void ItaniumCXXABI::emitRethrow(CodeGenFunction &CGF, bool isNoReturn) {
  // void __cxa_rethrow();

  llvm::FunctionType *FTy =
    llvm::FunctionType::get(CGM.VoidTy, /*IsVarArgs=*/false);

  llvm::Constant *Fn = CGM.CreateRuntimeFunction(FTy, "__cxa_rethrow");

  if (isNoReturn)
    CGF.EmitNoreturnRuntimeCallOrInvoke(Fn, None);
  else
    CGF.EmitRuntimeCallOrInvoke(Fn);
}

static llvm::Constant *getAllocateExceptionFn(CodeGenModule &CGM) {
  // void *__cxa_allocate_exception(size_t thrown_size);

  llvm::FunctionType *FTy =
    llvm::FunctionType::get(CGM.Int8PtrTy, CGM.SizeTy, /*IsVarArgs=*/false);

  return CGM.CreateRuntimeFunction(FTy, "__cxa_allocate_exception");
}

static llvm::Constant *getThrowFn(CodeGenModule &CGM) {
  // void __cxa_throw(void *thrown_exception, std::type_info *tinfo,
  //                  void (*dest) (void *));

  llvm::Type *Args[3] = { CGM.Int8PtrTy, CGM.Int8PtrTy, CGM.Int8PtrTy };
  llvm::FunctionType *FTy =
    llvm::FunctionType::get(CGM.VoidTy, Args, /*IsVarArgs=*/false);

  return CGM.CreateRuntimeFunction(FTy, "__cxa_throw");
}

void ItaniumCXXABI::emitThrow(CodeGenFunction &CGF, const CXXThrowExpr *E) {
  QualType ThrowType = E->getSubExpr()->getType();
  // Now allocate the exception object.
  llvm::Type *SizeTy = CGF.ConvertType(getContext().getSizeType());
  uint64_t TypeSize = getContext().getTypeSizeInChars(ThrowType).getQuantity();

  llvm::Constant *AllocExceptionFn = getAllocateExceptionFn(CGM);
  llvm::CallInst *ExceptionPtr = CGF.EmitNounwindRuntimeCall(
      AllocExceptionFn, llvm::ConstantInt::get(SizeTy, TypeSize), "exception");

  CharUnits ExnAlign = getAlignmentOfExnObject();
  CGF.EmitAnyExprToExn(E->getSubExpr(), Address(ExceptionPtr, ExnAlign));

  // Now throw the exception.
  llvm::Constant *TypeInfo = CGM.GetAddrOfRTTIDescriptor(ThrowType,
                                                         /*ForEH=*/true);

  // The address of the destructor.  If the exception type has a
  // trivial destructor (or isn't a record), we just pass null.
  llvm::Constant *Dtor = nullptr;
  if (const RecordType *RecordTy = ThrowType->getAs<RecordType>()) {
    CXXRecordDecl *Record = cast<CXXRecordDecl>(RecordTy->getDecl());
    if (!Record->hasTrivialDestructor()) {
      CXXDestructorDecl *DtorD = Record->getDestructor();
      Dtor = CGM.getAddrOfCXXStructor(DtorD, StructorType::Complete);
      Dtor = llvm::ConstantExpr::getBitCast(Dtor, CGM.Int8PtrTy);
    }
  }
  if (!Dtor) Dtor = llvm::Constant::getNullValue(CGM.Int8PtrTy);

  llvm::Value *args[] = { ExceptionPtr, TypeInfo, Dtor };
  CGF.EmitNoreturnRuntimeCallOrInvoke(getThrowFn(CGM), args);
}

static llvm::Constant *getItaniumDynamicCastFn(CodeGenFunction &CGF) {
  // void *__dynamic_cast(const void *sub,
  //                      const abi::__class_type_info *src,
  //                      const abi::__class_type_info *dst,
  //                      std::ptrdiff_t src2dst_offset);

  llvm::Type *Int8PtrTy = CGF.Int8PtrTy;
  llvm::Type *PtrDiffTy = CGF.ConvertType(CGF.getContext().getPointerDiffType());

  llvm::Type *Args[4] = { Int8PtrTy, Int8PtrTy, Int8PtrTy, PtrDiffTy };

  llvm::FunctionType *FTy = llvm::FunctionType::get(Int8PtrTy, Args, false);

  // Mark the function as nounwind readonly.
llvm::Attribute::AttrKind FuncAttrs[] = { llvm::Attribute::NoUnwind, llvm::Attribute::ReadOnly }; llvm::AttributeList Attrs = llvm::AttributeList::get( CGF.getLLVMContext(), llvm::AttributeList::FunctionIndex, FuncAttrs); return CGF.CGM.CreateRuntimeFunction(FTy, "__dynamic_cast", Attrs); } static llvm::Constant *getBadCastFn(CodeGenFunction &CGF) { // void __cxa_bad_cast(); llvm::FunctionType *FTy = llvm::FunctionType::get(CGF.VoidTy, false); return CGF.CGM.CreateRuntimeFunction(FTy, "__cxa_bad_cast"); } /// \brief Compute the src2dst_offset hint as described in the /// Itanium C++ ABI [2.9.7] static CharUnits computeOffsetHint(ASTContext &Context, const CXXRecordDecl *Src, const CXXRecordDecl *Dst) { CXXBasePaths Paths(/*FindAmbiguities=*/true, /*RecordPaths=*/true, /*DetectVirtual=*/false); // If Dst is not derived from Src we can skip the whole computation below and // return that Src is not a public base of Dst. Record all inheritance paths. if (!Dst->isDerivedFrom(Src, Paths)) return CharUnits::fromQuantity(-2ULL); unsigned NumPublicPaths = 0; CharUnits Offset; // Now walk all possible inheritance paths. for (const CXXBasePath &Path : Paths) { if (Path.Access != AS_public) // Ignore non-public inheritance. continue; ++NumPublicPaths; for (const CXXBasePathElement &PathElement : Path) { // If the path contains a virtual base class we can't give any hint. // -1: no hint. if (PathElement.Base->isVirtual()) return CharUnits::fromQuantity(-1ULL); if (NumPublicPaths > 1) // Won't use offsets, skip computation. continue; // Accumulate the base class offsets. const ASTRecordLayout &L = Context.getASTRecordLayout(PathElement.Class); Offset += L.getBaseClassOffset( PathElement.Base->getType()->getAsCXXRecordDecl()); } } // -2: Src is not a public base of Dst. if (NumPublicPaths == 0) return CharUnits::fromQuantity(-2ULL); // -3: Src is a multiple public base type but never a virtual base type. if (NumPublicPaths > 1) return CharUnits::fromQuantity(-3ULL); // Otherwise, the Src type is a unique public nonvirtual base type of Dst. // Return the offset of Src from the origin of Dst. return Offset; } static llvm::Constant *getBadTypeidFn(CodeGenFunction &CGF) { // void __cxa_bad_typeid(); llvm::FunctionType *FTy = llvm::FunctionType::get(CGF.VoidTy, false); return CGF.CGM.CreateRuntimeFunction(FTy, "__cxa_bad_typeid"); } bool ItaniumCXXABI::shouldTypeidBeNullChecked(bool IsDeref, QualType SrcRecordTy) { return IsDeref; } void ItaniumCXXABI::EmitBadTypeidCall(CodeGenFunction &CGF) { llvm::Value *Fn = getBadTypeidFn(CGF); CGF.EmitRuntimeCallOrInvoke(Fn).setDoesNotReturn(); CGF.Builder.CreateUnreachable(); } llvm::Value *ItaniumCXXABI::EmitTypeid(CodeGenFunction &CGF, QualType SrcRecordTy, Address ThisPtr, llvm::Type *StdTypeInfoPtrTy) { auto *ClassDecl = cast(SrcRecordTy->getAs()->getDecl()); llvm::Value *Value = CGF.GetVTablePtr(ThisPtr, StdTypeInfoPtrTy->getPointerTo(), ClassDecl); // Load the type info. 
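  // The type_info pointer lives in the vtable prefix, which (a sketch,
  // illustrative only) is laid out as:
  //
  //   [ offset-to-top | std::type_info* | virtual function pointers... ]
  //        entry -2         entry -1         address point (entry 0)
  //
  // so typeid(*p) reads entry -1 from p's vptr, while dynamic_cast<void*>
  // (below) and the virtual-delete path (above) read the offset-to-top at
  // entry -2.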
Value = CGF.Builder.CreateConstInBoundsGEP1_64(Value, -1ULL); return CGF.Builder.CreateAlignedLoad(Value, CGF.getPointerAlign()); } bool ItaniumCXXABI::shouldDynamicCastCallBeNullChecked(bool SrcIsPtr, QualType SrcRecordTy) { return SrcIsPtr; } llvm::Value *ItaniumCXXABI::EmitDynamicCastCall( CodeGenFunction &CGF, Address ThisAddr, QualType SrcRecordTy, QualType DestTy, QualType DestRecordTy, llvm::BasicBlock *CastEnd) { llvm::Type *PtrDiffLTy = CGF.ConvertType(CGF.getContext().getPointerDiffType()); llvm::Type *DestLTy = CGF.ConvertType(DestTy); llvm::Value *SrcRTTI = CGF.CGM.GetAddrOfRTTIDescriptor(SrcRecordTy.getUnqualifiedType()); llvm::Value *DestRTTI = CGF.CGM.GetAddrOfRTTIDescriptor(DestRecordTy.getUnqualifiedType()); // Compute the offset hint. const CXXRecordDecl *SrcDecl = SrcRecordTy->getAsCXXRecordDecl(); const CXXRecordDecl *DestDecl = DestRecordTy->getAsCXXRecordDecl(); llvm::Value *OffsetHint = llvm::ConstantInt::get( PtrDiffLTy, computeOffsetHint(CGF.getContext(), SrcDecl, DestDecl).getQuantity()); // Emit the call to __dynamic_cast. llvm::Value *Value = ThisAddr.getPointer(); Value = CGF.EmitCastToVoidPtr(Value); llvm::Value *args[] = {Value, SrcRTTI, DestRTTI, OffsetHint}; Value = CGF.EmitNounwindRuntimeCall(getItaniumDynamicCastFn(CGF), args); Value = CGF.Builder.CreateBitCast(Value, DestLTy); /// C++ [expr.dynamic.cast]p9: /// A failed cast to reference type throws std::bad_cast if (DestTy->isReferenceType()) { llvm::BasicBlock *BadCastBlock = CGF.createBasicBlock("dynamic_cast.bad_cast"); llvm::Value *IsNull = CGF.Builder.CreateIsNull(Value); CGF.Builder.CreateCondBr(IsNull, BadCastBlock, CastEnd); CGF.EmitBlock(BadCastBlock); EmitBadCastCall(CGF); } return Value; } llvm::Value *ItaniumCXXABI::EmitDynamicCastToVoid(CodeGenFunction &CGF, Address ThisAddr, QualType SrcRecordTy, QualType DestTy) { llvm::Type *PtrDiffLTy = CGF.ConvertType(CGF.getContext().getPointerDiffType()); llvm::Type *DestLTy = CGF.ConvertType(DestTy); auto *ClassDecl = cast(SrcRecordTy->getAs()->getDecl()); // Get the vtable pointer. llvm::Value *VTable = CGF.GetVTablePtr(ThisAddr, PtrDiffLTy->getPointerTo(), ClassDecl); // Get the offset-to-top from the vtable. llvm::Value *OffsetToTop = CGF.Builder.CreateConstInBoundsGEP1_64(VTable, -2ULL); OffsetToTop = CGF.Builder.CreateAlignedLoad(OffsetToTop, CGF.getPointerAlign(), "offset.to.top"); // Finally, add the offset to the pointer. 
llvm::Value *Value = ThisAddr.getPointer(); Value = CGF.EmitCastToVoidPtr(Value); Value = CGF.Builder.CreateInBoundsGEP(Value, OffsetToTop); return CGF.Builder.CreateBitCast(Value, DestLTy); } bool ItaniumCXXABI::EmitBadCastCall(CodeGenFunction &CGF) { llvm::Value *Fn = getBadCastFn(CGF); CGF.EmitRuntimeCallOrInvoke(Fn).setDoesNotReturn(); CGF.Builder.CreateUnreachable(); return true; } llvm::Value * ItaniumCXXABI::GetVirtualBaseClassOffset(CodeGenFunction &CGF, Address This, const CXXRecordDecl *ClassDecl, const CXXRecordDecl *BaseClassDecl) { llvm::Value *VTablePtr = CGF.GetVTablePtr(This, CGM.Int8PtrTy, ClassDecl); CharUnits VBaseOffsetOffset = CGM.getItaniumVTableContext().getVirtualBaseOffsetOffset(ClassDecl, BaseClassDecl); llvm::Value *VBaseOffsetPtr = CGF.Builder.CreateConstGEP1_64(VTablePtr, VBaseOffsetOffset.getQuantity(), "vbase.offset.ptr"); VBaseOffsetPtr = CGF.Builder.CreateBitCast(VBaseOffsetPtr, CGM.PtrDiffTy->getPointerTo()); llvm::Value *VBaseOffset = CGF.Builder.CreateAlignedLoad(VBaseOffsetPtr, CGF.getPointerAlign(), "vbase.offset"); return VBaseOffset; } void ItaniumCXXABI::EmitCXXConstructors(const CXXConstructorDecl *D) { // Just make sure we're in sync with TargetCXXABI. assert(CGM.getTarget().getCXXABI().hasConstructorVariants()); // The constructor used for constructing this as a base class; // ignores virtual bases. CGM.EmitGlobal(GlobalDecl(D, Ctor_Base)); // The constructor used for constructing this as a complete class; // constructs the virtual bases, then calls the base constructor. if (!D->getParent()->isAbstract()) { // We don't need to emit the complete ctor if the class is abstract. CGM.EmitGlobal(GlobalDecl(D, Ctor_Complete)); } } CGCXXABI::AddedStructorArgs ItaniumCXXABI::buildStructorSignature(const CXXMethodDecl *MD, StructorType T, SmallVectorImpl &ArgTys) { ASTContext &Context = getContext(); // All parameters are already in place except VTT, which goes after 'this'. // These are Clang types, so we don't need to worry about sret yet. // Check if we need to add a VTT parameter (which has type void **). if (T == StructorType::Base && MD->getParent()->getNumVBases() != 0) { ArgTys.insert(ArgTys.begin() + 1, Context.getPointerType(Context.VoidPtrTy)); return AddedStructorArgs::prefix(1); } return AddedStructorArgs{}; } void ItaniumCXXABI::EmitCXXDestructors(const CXXDestructorDecl *D) { // The destructor used for destructing this as a base class; ignores // virtual bases. CGM.EmitGlobal(GlobalDecl(D, Dtor_Base)); // The destructor used for destructing this as a most-derived class; // call the base destructor and then destructs any virtual bases. CGM.EmitGlobal(GlobalDecl(D, Dtor_Complete)); // The destructor in a virtual table is always a 'deleting' // destructor, which calls the complete destructor and then uses the // appropriate operator delete. if (D->isVirtual()) CGM.EmitGlobal(GlobalDecl(D, Dtor_Deleting)); } void ItaniumCXXABI::addImplicitStructorParams(CodeGenFunction &CGF, QualType &ResTy, FunctionArgList &Params) { const CXXMethodDecl *MD = cast(CGF.CurGD.getDecl()); assert(isa(MD) || isa(MD)); // Check if we need a VTT parameter as well. 
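  // A sketch of the resulting signatures (for a hypothetical class D with a
  // virtual base):
  //
  //   D::D()  [complete object ctor]: void (D *this, ...)
  //   D::D()  [base object ctor]:     void (D *this, void **vtt, ...)
  //
  // i.e. the VTT pointer is inserted immediately after 'this', matching
  // buildStructorSignature above.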
if (NeedsVTTParameter(CGF.CurGD)) { ASTContext &Context = getContext(); // FIXME: avoid the fake decl QualType T = Context.getPointerType(Context.VoidPtrTy); auto *VTTDecl = ImplicitParamDecl::Create( Context, /*DC=*/nullptr, MD->getLocation(), &Context.Idents.get("vtt"), T, ImplicitParamDecl::CXXVTT); Params.insert(Params.begin() + 1, VTTDecl); getStructorImplicitParamDecl(CGF) = VTTDecl; } } void ItaniumCXXABI::EmitInstanceFunctionProlog(CodeGenFunction &CGF) { // Naked functions have no prolog. if (CGF.CurFuncDecl && CGF.CurFuncDecl->hasAttr()) return; /// Initialize the 'this' slot. EmitThisParam(CGF); /// Initialize the 'vtt' slot if needed. if (getStructorImplicitParamDecl(CGF)) { getStructorImplicitParamValue(CGF) = CGF.Builder.CreateLoad( CGF.GetAddrOfLocalVar(getStructorImplicitParamDecl(CGF)), "vtt"); } /// If this is a function that the ABI specifies returns 'this', initialize /// the return slot to 'this' at the start of the function. /// /// Unlike the setting of return types, this is done within the ABI /// implementation instead of by clients of CGCXXABI because: /// 1) getThisValue is currently protected /// 2) in theory, an ABI could implement 'this' returns some other way; /// HasThisReturn only specifies a contract, not the implementation if (HasThisReturn(CGF.CurGD)) CGF.Builder.CreateStore(getThisValue(CGF), CGF.ReturnValue); } CGCXXABI::AddedStructorArgs ItaniumCXXABI::addImplicitConstructorArgs( CodeGenFunction &CGF, const CXXConstructorDecl *D, CXXCtorType Type, bool ForVirtualBase, bool Delegating, CallArgList &Args) { if (!NeedsVTTParameter(GlobalDecl(D, Type))) return AddedStructorArgs{}; // Insert the implicit 'vtt' argument as the second argument. llvm::Value *VTT = CGF.GetVTTParameter(GlobalDecl(D, Type), ForVirtualBase, Delegating); QualType VTTTy = getContext().getPointerType(getContext().VoidPtrTy); Args.insert(Args.begin() + 1, CallArg(RValue::get(VTT), VTTTy, /*needscopy=*/false)); return AddedStructorArgs::prefix(1); // Added one arg. } void ItaniumCXXABI::EmitDestructorCall(CodeGenFunction &CGF, const CXXDestructorDecl *DD, CXXDtorType Type, bool ForVirtualBase, bool Delegating, Address This) { GlobalDecl GD(DD, Type); llvm::Value *VTT = CGF.GetVTTParameter(GD, ForVirtualBase, Delegating); QualType VTTTy = getContext().getPointerType(getContext().VoidPtrTy); CGCallee Callee; if (getContext().getLangOpts().AppleKext && Type != Dtor_Base && DD->isVirtual()) Callee = CGF.BuildAppleKextVirtualDestructorCall(DD, Type, DD->getParent()); else Callee = CGCallee::forDirect(CGM.getAddrOfCXXStructor(DD, getFromDtorType(Type)), DD); CGF.EmitCXXMemberOrOperatorCall(DD, Callee, ReturnValueSlot(), This.getPointer(), VTT, VTTTy, nullptr, nullptr); } void ItaniumCXXABI::emitVTableDefinitions(CodeGenVTables &CGVT, const CXXRecordDecl *RD) { llvm::GlobalVariable *VTable = getAddrOfVTable(RD, CharUnits()); if (VTable->hasInitializer()) return; ItaniumVTableContext &VTContext = CGM.getItaniumVTableContext(); const VTableLayout &VTLayout = VTContext.getVTableLayout(RD); llvm::GlobalVariable::LinkageTypes Linkage = CGM.getVTableLinkage(RD); llvm::Constant *RTTI = CGM.GetAddrOfRTTIDescriptor(CGM.getContext().getTagDeclType(RD)); // Create and set the initializer. ConstantInitBuilder Builder(CGM); auto Components = Builder.beginStruct(); CGVT.createVTableInitializer(Components, VTLayout, RTTI); Components.finishAndSetAsInitializer(VTable); // Set the correct linkage. 
VTable->setLinkage(Linkage); if (CGM.supportsCOMDAT() && VTable->isWeakForLinker()) VTable->setComdat(CGM.getModule().getOrInsertComdat(VTable->getName())); // Set the right visibility. CGM.setGlobalVisibility(VTable, RD); // Use pointer alignment for the vtable. Otherwise we would align them based // on the size of the initializer which doesn't make sense as only single // values are read. unsigned PAlign = CGM.getTarget().getPointerAlign(0); VTable->setAlignment(getContext().toCharUnitsFromBits(PAlign).getQuantity()); // If this is the magic class __cxxabiv1::__fundamental_type_info, // we will emit the typeinfo for the fundamental types. This is the // same behaviour as GCC. const DeclContext *DC = RD->getDeclContext(); if (RD->getIdentifier() && RD->getIdentifier()->isStr("__fundamental_type_info") && isa(DC) && cast(DC)->getIdentifier() && cast(DC)->getIdentifier()->isStr("__cxxabiv1") && DC->getParent()->isTranslationUnit()) EmitFundamentalRTTIDescriptors(RD->hasAttr()); if (!VTable->isDeclarationForLinker()) CGM.EmitVTableTypeMetadata(VTable, VTLayout); } bool ItaniumCXXABI::isVirtualOffsetNeededForVTableField( CodeGenFunction &CGF, CodeGenFunction::VPtr Vptr) { if (Vptr.NearestVBase == nullptr) return false; return NeedsVTTParameter(CGF.CurGD); } llvm::Value *ItaniumCXXABI::getVTableAddressPointInStructor( CodeGenFunction &CGF, const CXXRecordDecl *VTableClass, BaseSubobject Base, const CXXRecordDecl *NearestVBase) { if ((Base.getBase()->getNumVBases() || NearestVBase != nullptr) && NeedsVTTParameter(CGF.CurGD)) { return getVTableAddressPointInStructorWithVTT(CGF, VTableClass, Base, NearestVBase); } return getVTableAddressPoint(Base, VTableClass); } llvm::Constant * ItaniumCXXABI::getVTableAddressPoint(BaseSubobject Base, const CXXRecordDecl *VTableClass) { llvm::GlobalValue *VTable = getAddrOfVTable(VTableClass, CharUnits()); // Find the appropriate vtable within the vtable group, and the address point // within that vtable. VTableLayout::AddressPointLocation AddressPoint = CGM.getItaniumVTableContext() .getVTableLayout(VTableClass) .getAddressPoint(Base); llvm::Value *Indices[] = { llvm::ConstantInt::get(CGM.Int32Ty, 0), llvm::ConstantInt::get(CGM.Int32Ty, AddressPoint.VTableIndex), llvm::ConstantInt::get(CGM.Int32Ty, AddressPoint.AddressPointIndex), }; return llvm::ConstantExpr::getGetElementPtr(VTable->getValueType(), VTable, Indices, /*InBounds=*/true, /*InRangeIndex=*/1); } llvm::Value *ItaniumCXXABI::getVTableAddressPointInStructorWithVTT( CodeGenFunction &CGF, const CXXRecordDecl *VTableClass, BaseSubobject Base, const CXXRecordDecl *NearestVBase) { assert((Base.getBase()->getNumVBases() || NearestVBase != nullptr) && NeedsVTTParameter(CGF.CurGD) && "This class doesn't have VTT"); // Get the secondary vpointer index. uint64_t VirtualPointerIndex = CGM.getVTables().getSecondaryVirtualPointerIndex(VTableClass, Base); /// Load the VTT. llvm::Value *VTT = CGF.LoadCXXVTT(); if (VirtualPointerIndex) VTT = CGF.Builder.CreateConstInBoundsGEP1_64(VTT, VirtualPointerIndex); // And load the address point from the VTT. 
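  // In pseudo-code (a sketch): the VTT is an array of vtable address points,
  //
  //   void *VTT[] = { primary address point, secondary vpointers... };
  //   vptr-to-install = VTT[VirtualPointerIndex];
  //
  // which is what the GEP above and the load below implement.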
return CGF.Builder.CreateAlignedLoad(VTT, CGF.getPointerAlign()); } llvm::Constant *ItaniumCXXABI::getVTableAddressPointForConstExpr( BaseSubobject Base, const CXXRecordDecl *VTableClass) { return getVTableAddressPoint(Base, VTableClass); } llvm::GlobalVariable *ItaniumCXXABI::getAddrOfVTable(const CXXRecordDecl *RD, CharUnits VPtrOffset) { assert(VPtrOffset.isZero() && "Itanium ABI only supports zero vptr offsets"); llvm::GlobalVariable *&VTable = VTables[RD]; if (VTable) return VTable; // Queue up this vtable for possible deferred emission. CGM.addDeferredVTable(RD); SmallString<256> Name; llvm::raw_svector_ostream Out(Name); getMangleContext().mangleCXXVTable(RD, Out); const VTableLayout &VTLayout = CGM.getItaniumVTableContext().getVTableLayout(RD); llvm::Type *VTableType = CGM.getVTables().getVTableType(VTLayout); VTable = CGM.CreateOrReplaceCXXRuntimeVariable( Name, VTableType, llvm::GlobalValue::ExternalLinkage); VTable->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Global); if (RD->hasAttr()) VTable->setDLLStorageClass(llvm::GlobalValue::DLLImportStorageClass); else if (RD->hasAttr()) VTable->setDLLStorageClass(llvm::GlobalValue::DLLExportStorageClass); return VTable; } CGCallee ItaniumCXXABI::getVirtualFunctionPointer(CodeGenFunction &CGF, GlobalDecl GD, Address This, llvm::Type *Ty, SourceLocation Loc) { GD = GD.getCanonicalDecl(); Ty = Ty->getPointerTo()->getPointerTo(); auto *MethodDecl = cast(GD.getDecl()); llvm::Value *VTable = CGF.GetVTablePtr(This, Ty, MethodDecl->getParent()); uint64_t VTableIndex = CGM.getItaniumVTableContext().getMethodVTableIndex(GD); llvm::Value *VFunc; if (CGF.ShouldEmitVTableTypeCheckedLoad(MethodDecl->getParent())) { VFunc = CGF.EmitVTableTypeCheckedLoad( MethodDecl->getParent(), VTable, VTableIndex * CGM.getContext().getTargetInfo().getPointerWidth(0) / 8); } else { CGF.EmitTypeMetadataCodeForVCall(MethodDecl->getParent(), VTable, Loc); llvm::Value *VFuncPtr = CGF.Builder.CreateConstInBoundsGEP1_64(VTable, VTableIndex, "vfn"); auto *VFuncLoad = CGF.Builder.CreateAlignedLoad(VFuncPtr, CGF.getPointerAlign()); // Add !invariant.load md to virtual function load to indicate that // function didn't change inside vtable. // It's safe to add it without -fstrict-vtable-pointers, but it would not // help in devirtualization because it will only matter if we will have 2 // the same virtual function loads from the same vtable load, which won't // happen without enabled devirtualization with -fstrict-vtable-pointers. if (CGM.getCodeGenOpts().OptimizationLevel > 0 && CGM.getCodeGenOpts().StrictVTablePointers) VFuncLoad->setMetadata( llvm::LLVMContext::MD_invariant_load, llvm::MDNode::get(CGM.getLLVMContext(), llvm::ArrayRef())); VFunc = VFuncLoad; } CGCallee Callee(MethodDecl, VFunc); return Callee; } llvm::Value *ItaniumCXXABI::EmitVirtualDestructorCall( CodeGenFunction &CGF, const CXXDestructorDecl *Dtor, CXXDtorType DtorType, Address This, const CXXMemberCallExpr *CE) { assert(CE == nullptr || CE->arg_begin() == CE->arg_end()); assert(DtorType == Dtor_Deleting || DtorType == Dtor_Complete); const CGFunctionInfo *FInfo = &CGM.getTypes().arrangeCXXStructorDeclaration( Dtor, getFromDtorType(DtorType)); llvm::Type *Ty = CGF.CGM.getTypes().GetFunctionType(*FInfo); CGCallee Callee = getVirtualFunctionPointer(CGF, GlobalDecl(Dtor, DtorType), This, Ty, CE ? 
CE->getLocStart() : SourceLocation()); CGF.EmitCXXMemberOrOperatorCall(Dtor, Callee, ReturnValueSlot(), This.getPointer(), /*ImplicitParam=*/nullptr, QualType(), CE, nullptr); return nullptr; } void ItaniumCXXABI::emitVirtualInheritanceTables(const CXXRecordDecl *RD) { CodeGenVTables &VTables = CGM.getVTables(); llvm::GlobalVariable *VTT = VTables.GetAddrOfVTT(RD); VTables.EmitVTTDefinition(VTT, CGM.getVTableLinkage(RD), RD); } bool ItaniumCXXABI::canSpeculativelyEmitVTable(const CXXRecordDecl *RD) const { // We don't emit available_externally vtables if we are in -fapple-kext mode // because kext mode does not permit devirtualization. if (CGM.getLangOpts().AppleKext) return false; // If we don't have any not emitted inline virtual function, and if vtable is // not hidden, then we are safe to emit available_externally copy of vtable. // FIXME we can still emit a copy of the vtable if we // can emit definition of the inline functions. return !hasAnyUnusedVirtualInlineFunction(RD) && !isVTableHidden(RD); } static llvm::Value *performTypeAdjustment(CodeGenFunction &CGF, Address InitialPtr, int64_t NonVirtualAdjustment, int64_t VirtualAdjustment, bool IsReturnAdjustment) { if (!NonVirtualAdjustment && !VirtualAdjustment) return InitialPtr.getPointer(); Address V = CGF.Builder.CreateElementBitCast(InitialPtr, CGF.Int8Ty); // In a base-to-derived cast, the non-virtual adjustment is applied first. if (NonVirtualAdjustment && !IsReturnAdjustment) { V = CGF.Builder.CreateConstInBoundsByteGEP(V, CharUnits::fromQuantity(NonVirtualAdjustment)); } // Perform the virtual adjustment if we have one. llvm::Value *ResultPtr; if (VirtualAdjustment) { llvm::Type *PtrDiffTy = CGF.ConvertType(CGF.getContext().getPointerDiffType()); Address VTablePtrPtr = CGF.Builder.CreateElementBitCast(V, CGF.Int8PtrTy); llvm::Value *VTablePtr = CGF.Builder.CreateLoad(VTablePtrPtr); llvm::Value *OffsetPtr = CGF.Builder.CreateConstInBoundsGEP1_64(VTablePtr, VirtualAdjustment); OffsetPtr = CGF.Builder.CreateBitCast(OffsetPtr, PtrDiffTy->getPointerTo()); // Load the adjustment offset from the vtable. llvm::Value *Offset = CGF.Builder.CreateAlignedLoad(OffsetPtr, CGF.getPointerAlign()); // Adjust our pointer. ResultPtr = CGF.Builder.CreateInBoundsGEP(V.getPointer(), Offset); } else { ResultPtr = V.getPointer(); } // In a derived-to-base conversion, the non-virtual adjustment is // applied second. if (NonVirtualAdjustment && IsReturnAdjustment) { ResultPtr = CGF.Builder.CreateConstInBoundsGEP1_64(ResultPtr, NonVirtualAdjustment); } // Cast back to the original type. return CGF.Builder.CreateBitCast(ResultPtr, InitialPtr.getType()); } llvm::Value *ItaniumCXXABI::performThisAdjustment(CodeGenFunction &CGF, Address This, const ThisAdjustment &TA) { return performTypeAdjustment(CGF, This, TA.NonVirtual, TA.Virtual.Itanium.VCallOffsetOffset, /*IsReturnAdjustment=*/false); } llvm::Value * ItaniumCXXABI::performReturnAdjustment(CodeGenFunction &CGF, Address Ret, const ReturnAdjustment &RA) { return performTypeAdjustment(CGF, Ret, RA.NonVirtual, RA.Virtual.Itanium.VBaseOffsetOffset, /*IsReturnAdjustment=*/true); } void ARMCXXABI::EmitReturnFromThunk(CodeGenFunction &CGF, RValue RV, QualType ResultType) { if (!isa(CGF.CurGD.getDecl())) return ItaniumCXXABI::EmitReturnFromThunk(CGF, RV, ResultType); // Destructor thunks in the ARM ABI have indeterminate results. 
llvm::Type *T = CGF.ReturnValue.getElementType(); RValue Undef = RValue::get(llvm::UndefValue::get(T)); return ItaniumCXXABI::EmitReturnFromThunk(CGF, Undef, ResultType); } /************************** Array allocation cookies **************************/ CharUnits ItaniumCXXABI::getArrayCookieSizeImpl(QualType elementType) { // The array cookie is a size_t; pad that up to the element alignment. // The cookie is actually right-justified in that space. return std::max(CharUnits::fromQuantity(CGM.SizeSizeInBytes), CGM.getContext().getTypeAlignInChars(elementType)); } Address ItaniumCXXABI::InitializeArrayCookie(CodeGenFunction &CGF, Address NewPtr, llvm::Value *NumElements, const CXXNewExpr *expr, QualType ElementType) { assert(requiresArrayCookie(expr)); unsigned AS = NewPtr.getAddressSpace(); ASTContext &Ctx = getContext(); CharUnits SizeSize = CGF.getSizeSize(); // The size of the cookie. CharUnits CookieSize = std::max(SizeSize, Ctx.getTypeAlignInChars(ElementType)); assert(CookieSize == getArrayCookieSizeImpl(ElementType)); // Compute an offset to the cookie. Address CookiePtr = NewPtr; CharUnits CookieOffset = CookieSize - SizeSize; if (!CookieOffset.isZero()) CookiePtr = CGF.Builder.CreateConstInBoundsByteGEP(CookiePtr, CookieOffset); // Write the number of elements into the appropriate slot. Address NumElementsPtr = CGF.Builder.CreateElementBitCast(CookiePtr, CGF.SizeTy); llvm::Instruction *SI = CGF.Builder.CreateStore(NumElements, NumElementsPtr); // Handle the array cookie specially in ASan. if (CGM.getLangOpts().Sanitize.has(SanitizerKind::Address) && AS == 0 && expr->getOperatorNew()->isReplaceableGlobalAllocationFunction()) { // The store to the CookiePtr does not need to be instrumented. CGM.getSanitizerMetadata()->disableSanitizerForInstruction(SI); llvm::FunctionType *FTy = llvm::FunctionType::get(CGM.VoidTy, NumElementsPtr.getType(), false); llvm::Constant *F = CGM.CreateRuntimeFunction(FTy, "__asan_poison_cxx_array_cookie"); CGF.Builder.CreateCall(F, NumElementsPtr.getPointer()); } // Finally, compute a pointer to the actual data buffer by skipping // over the cookie completely. return CGF.Builder.CreateConstInBoundsByteGEP(NewPtr, CookieSize); } llvm::Value *ItaniumCXXABI::readArrayCookieImpl(CodeGenFunction &CGF, Address allocPtr, CharUnits cookieSize) { // The element size is right-justified in the cookie. Address numElementsPtr = allocPtr; CharUnits numElementsOffset = cookieSize - CGF.getSizeSize(); if (!numElementsOffset.isZero()) numElementsPtr = CGF.Builder.CreateConstInBoundsByteGEP(numElementsPtr, numElementsOffset); unsigned AS = allocPtr.getAddressSpace(); numElementsPtr = CGF.Builder.CreateElementBitCast(numElementsPtr, CGF.SizeTy); if (!CGM.getLangOpts().Sanitize.has(SanitizerKind::Address) || AS != 0) return CGF.Builder.CreateLoad(numElementsPtr); // In asan mode emit a function call instead of a regular load and let the // run-time deal with it: if the shadow is properly poisoned return the // cookie, otherwise return 0 to avoid an infinite loop calling DTORs. // We can't simply ignore this load using nosanitize metadata because // the metadata may be lost. 
llvm::FunctionType *FTy = llvm::FunctionType::get(CGF.SizeTy, CGF.SizeTy->getPointerTo(0), false); llvm::Constant *F = CGM.CreateRuntimeFunction(FTy, "__asan_load_cxx_array_cookie"); return CGF.Builder.CreateCall(F, numElementsPtr.getPointer()); } CharUnits ARMCXXABI::getArrayCookieSizeImpl(QualType elementType) { // ARM says that the cookie is always: // struct array_cookie { // std::size_t element_size; // element_size != 0 // std::size_t element_count; // }; // But the base ABI doesn't give anything an alignment greater than // 8, so we can dismiss this as typical ABI-author blindness to // actual language complexity and round up to the element alignment. return std::max(CharUnits::fromQuantity(2 * CGM.SizeSizeInBytes), CGM.getContext().getTypeAlignInChars(elementType)); } Address ARMCXXABI::InitializeArrayCookie(CodeGenFunction &CGF, Address newPtr, llvm::Value *numElements, const CXXNewExpr *expr, QualType elementType) { assert(requiresArrayCookie(expr)); // The cookie is always at the start of the buffer. Address cookie = newPtr; // The first element is the element size. cookie = CGF.Builder.CreateElementBitCast(cookie, CGF.SizeTy); llvm::Value *elementSize = llvm::ConstantInt::get(CGF.SizeTy, getContext().getTypeSizeInChars(elementType).getQuantity()); CGF.Builder.CreateStore(elementSize, cookie); // The second element is the element count. cookie = CGF.Builder.CreateConstInBoundsGEP(cookie, 1, CGF.getSizeSize()); CGF.Builder.CreateStore(numElements, cookie); // Finally, compute a pointer to the actual data buffer by skipping // over the cookie completely. CharUnits cookieSize = ARMCXXABI::getArrayCookieSizeImpl(elementType); return CGF.Builder.CreateConstInBoundsByteGEP(newPtr, cookieSize); } llvm::Value *ARMCXXABI::readArrayCookieImpl(CodeGenFunction &CGF, Address allocPtr, CharUnits cookieSize) { // The number of elements is at offset sizeof(size_t) relative to // the allocated pointer. 
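  // A sketch of the ARM allocation layout being read here (contrast with the
  // generic Itanium cookie above, where only the count is stored,
  // right-justified against the array data):
  //
  //   allocPtr -> [ element_size | element_count | (padding) | elements... ]
  //                   size_t          size_t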
Address numElementsPtr = CGF.Builder.CreateConstInBoundsByteGEP(allocPtr, CGF.getSizeSize()); numElementsPtr = CGF.Builder.CreateElementBitCast(numElementsPtr, CGF.SizeTy); return CGF.Builder.CreateLoad(numElementsPtr); } /*********************** Static local initialization **************************/ static llvm::Constant *getGuardAcquireFn(CodeGenModule &CGM, llvm::PointerType *GuardPtrTy) { // int __cxa_guard_acquire(__guard *guard_object); llvm::FunctionType *FTy = llvm::FunctionType::get(CGM.getTypes().ConvertType(CGM.getContext().IntTy), GuardPtrTy, /*isVarArg=*/false); return CGM.CreateRuntimeFunction( FTy, "__cxa_guard_acquire", llvm::AttributeList::get(CGM.getLLVMContext(), llvm::AttributeList::FunctionIndex, llvm::Attribute::NoUnwind)); } static llvm::Constant *getGuardReleaseFn(CodeGenModule &CGM, llvm::PointerType *GuardPtrTy) { // void __cxa_guard_release(__guard *guard_object); llvm::FunctionType *FTy = llvm::FunctionType::get(CGM.VoidTy, GuardPtrTy, /*isVarArg=*/false); return CGM.CreateRuntimeFunction( FTy, "__cxa_guard_release", llvm::AttributeList::get(CGM.getLLVMContext(), llvm::AttributeList::FunctionIndex, llvm::Attribute::NoUnwind)); } static llvm::Constant *getGuardAbortFn(CodeGenModule &CGM, llvm::PointerType *GuardPtrTy) { // void __cxa_guard_abort(__guard *guard_object); llvm::FunctionType *FTy = llvm::FunctionType::get(CGM.VoidTy, GuardPtrTy, /*isVarArg=*/false); return CGM.CreateRuntimeFunction( FTy, "__cxa_guard_abort", llvm::AttributeList::get(CGM.getLLVMContext(), llvm::AttributeList::FunctionIndex, llvm::Attribute::NoUnwind)); } namespace { struct CallGuardAbort final : EHScopeStack::Cleanup { llvm::GlobalVariable *Guard; CallGuardAbort(llvm::GlobalVariable *Guard) : Guard(Guard) {} void Emit(CodeGenFunction &CGF, Flags flags) override { CGF.EmitNounwindRuntimeCall(getGuardAbortFn(CGF.CGM, Guard->getType()), Guard); } }; } /// The ARM code here follows the Itanium code closely enough that we /// just special-case it at particular places. void ItaniumCXXABI::EmitGuardedInit(CodeGenFunction &CGF, const VarDecl &D, llvm::GlobalVariable *var, bool shouldPerformInit) { CGBuilderTy &Builder = CGF.Builder; // Inline variables that weren't instantiated from variable templates have // partially-ordered initialization within their translation unit. bool NonTemplateInline = D.isInline() && !isTemplateInstantiation(D.getTemplateSpecializationKind()); // We only need to use thread-safe statics for local non-TLS variables and // inline variables; other global initialization is always single-threaded // or (through lazy dynamic loading in multiple threads) unsequenced. bool threadsafe = getContext().getLangOpts().ThreadsafeStatics && (D.isLocalVarDecl() || NonTemplateInline) && !D.getTLSKind(); // If we have a global variable with internal linkage and thread-safe statics // are disabled, we can just let the guard variable be of type i8. bool useInt8GuardVariable = !threadsafe && var->hasInternalLinkage(); llvm::IntegerType *guardTy; CharUnits guardAlignment; if (useInt8GuardVariable) { guardTy = CGF.Int8Ty; guardAlignment = CharUnits::One(); } else { // Guard variables are 64 bits in the generic ABI and size width on ARM // (i.e. 32-bit on AArch32, 64-bit on AArch64). 
if (UseARMGuardVarABI) { guardTy = CGF.SizeTy; guardAlignment = CGF.getSizeAlign(); } else { guardTy = CGF.Int64Ty; guardAlignment = CharUnits::fromQuantity( CGM.getDataLayout().getABITypeAlignment(guardTy)); } } llvm::PointerType *guardPtrTy = guardTy->getPointerTo(); // Create the guard variable if we don't already have it (as we // might if we're double-emitting this function body). llvm::GlobalVariable *guard = CGM.getStaticLocalDeclGuardAddress(&D); if (!guard) { // Mangle the name for the guard. SmallString<256> guardName; { llvm::raw_svector_ostream out(guardName); getMangleContext().mangleStaticGuardVariable(&D, out); } // Create the guard variable with a zero-initializer. // Just absorb linkage and visibility from the guarded variable. guard = new llvm::GlobalVariable(CGM.getModule(), guardTy, false, var->getLinkage(), llvm::ConstantInt::get(guardTy, 0), guardName.str()); guard->setVisibility(var->getVisibility()); // If the variable is thread-local, so is its guard variable. guard->setThreadLocalMode(var->getThreadLocalMode()); guard->setAlignment(guardAlignment.getQuantity()); // The ABI says: "It is suggested that it be emitted in the same COMDAT // group as the associated data object." In practice, this doesn't work for // non-ELF and non-Wasm object formats, so only do it for ELF and Wasm. llvm::Comdat *C = var->getComdat(); if (!D.isLocalVarDecl() && C && (CGM.getTarget().getTriple().isOSBinFormatELF() || CGM.getTarget().getTriple().isOSBinFormatWasm())) { guard->setComdat(C); // An inline variable's guard function is run from the per-TU // initialization function, not via a dedicated global ctor function, so // we can't put it in a comdat. if (!NonTemplateInline) CGF.CurFn->setComdat(C); } else if (CGM.supportsCOMDAT() && guard->isWeakForLinker()) { guard->setComdat(CGM.getModule().getOrInsertComdat(guard->getName())); } CGM.setStaticLocalDeclGuardAddress(&D, guard); } Address guardAddr = Address(guard, guardAlignment); // Test whether the variable has completed initialization. // // Itanium C++ ABI 3.3.2: // The following is pseudo-code showing how these functions can be used: // if (obj_guard.first_byte == 0) { // if ( __cxa_guard_acquire (&obj_guard) ) { // try { // ... initialize the object ...; // } catch (...) { // __cxa_guard_abort (&obj_guard); // throw; // } // ... queue object destructor with __cxa_atexit() ...; // __cxa_guard_release (&obj_guard); // } // } // Load the first byte of the guard variable. llvm::LoadInst *LI = Builder.CreateLoad(Builder.CreateElementBitCast(guardAddr, CGM.Int8Ty)); // Itanium ABI: // An implementation supporting thread-safety on multiprocessor // systems must also guarantee that references to the initialized // object do not occur before the load of the initialization flag. // // In LLVM, we do this by marking the load Acquire. if (threadsafe) LI->setAtomic(llvm::AtomicOrdering::Acquire); // For ARM, we should only check the first bit, rather than the entire byte: // // ARM C++ ABI 3.2.3.1: // To support the potential use of initialization guard variables // as semaphores that are the target of ARM SWP and LDREX/STREX // synchronizing instructions we define a static initialization // guard variable to be a 4-byte aligned, 4-byte word with the // following inline access protocol. // #define INITIALIZED 1 // if ((obj_guard & INITIALIZED) != INITIALIZED) { // if (__cxa_guard_acquire(&obj_guard)) // ... 
// } // // and similarly for ARM64: // // ARM64 C++ ABI 3.2.2: // This ABI instead only specifies the value bit 0 of the static guard // variable; all other bits are platform defined. Bit 0 shall be 0 when the // variable is not initialized and 1 when it is. llvm::Value *V = (UseARMGuardVarABI && !useInt8GuardVariable) ? Builder.CreateAnd(LI, llvm::ConstantInt::get(CGM.Int8Ty, 1)) : LI; llvm::Value *isInitialized = Builder.CreateIsNull(V, "guard.uninitialized"); llvm::BasicBlock *InitCheckBlock = CGF.createBasicBlock("init.check"); llvm::BasicBlock *EndBlock = CGF.createBasicBlock("init.end"); // Check if the first byte of the guard variable is zero. Builder.CreateCondBr(isInitialized, InitCheckBlock, EndBlock); CGF.EmitBlock(InitCheckBlock); // Variables used when coping with thread-safe statics and exceptions. if (threadsafe) { // Call __cxa_guard_acquire. llvm::Value *V = CGF.EmitNounwindRuntimeCall(getGuardAcquireFn(CGM, guardPtrTy), guard); llvm::BasicBlock *InitBlock = CGF.createBasicBlock("init"); Builder.CreateCondBr(Builder.CreateIsNotNull(V, "tobool"), InitBlock, EndBlock); // Call __cxa_guard_abort along the exceptional edge. CGF.EHStack.pushCleanup(EHCleanup, guard); CGF.EmitBlock(InitBlock); } // Emit the initializer and add a global destructor if appropriate. CGF.EmitCXXGlobalVarDeclInit(D, var, shouldPerformInit); if (threadsafe) { // Pop the guard-abort cleanup if we pushed one. CGF.PopCleanupBlock(); // Call __cxa_guard_release. This cannot throw. CGF.EmitNounwindRuntimeCall(getGuardReleaseFn(CGM, guardPtrTy), guardAddr.getPointer()); } else { Builder.CreateStore(llvm::ConstantInt::get(guardTy, 1), guardAddr); } CGF.EmitBlock(EndBlock); } /// Register a global destructor using __cxa_atexit. static void emitGlobalDtorWithCXAAtExit(CodeGenFunction &CGF, llvm::Constant *dtor, llvm::Constant *addr, bool TLS) { const char *Name = "__cxa_atexit"; if (TLS) { const llvm::Triple &T = CGF.getTarget().getTriple(); Name = T.isOSDarwin() ? "_tlv_atexit" : "__cxa_thread_atexit"; } // We're assuming that the destructor function is something we can // reasonably call with the default CC. Go ahead and cast it to the // right prototype. llvm::Type *dtorTy = llvm::FunctionType::get(CGF.VoidTy, CGF.Int8PtrTy, false)->getPointerTo(); // extern "C" int __cxa_atexit(void (*f)(void *), void *p, void *d); llvm::Type *paramTys[] = { dtorTy, CGF.Int8PtrTy, CGF.Int8PtrTy }; llvm::FunctionType *atexitTy = llvm::FunctionType::get(CGF.IntTy, paramTys, false); // Fetch the actual function. llvm::Constant *atexit = CGF.CGM.CreateRuntimeFunction(atexitTy, Name); if (llvm::Function *fn = dyn_cast(atexit)) fn->setDoesNotThrow(); // Create a variable that binds the atexit to this shared object. llvm::Constant *handle = CGF.CGM.CreateRuntimeVariable(CGF.Int8Ty, "__dso_handle"); auto *GV = cast(handle->stripPointerCasts()); GV->setVisibility(llvm::GlobalValue::HiddenVisibility); llvm::Value *args[] = { llvm::ConstantExpr::getBitCast(dtor, dtorTy), llvm::ConstantExpr::getBitCast(addr, CGF.Int8PtrTy), handle }; CGF.EmitNounwindRuntimeCall(atexit, args); } /// Register a global destructor as best as we know how. void ItaniumCXXABI::registerGlobalDtor(CodeGenFunction &CGF, const VarDecl &D, llvm::Constant *dtor, llvm::Constant *addr) { // Use __cxa_atexit if available. if (CGM.getCodeGenOpts().CXAAtExit) return emitGlobalDtorWithCXAAtExit(CGF, dtor, addr, D.getTLSKind()); if (D.getTLSKind()) CGM.ErrorUnsupported(&D, "non-trivial TLS destruction"); // In Apple kexts, we want to add a global destructor entry. 
// FIXME: shouldn't this be guarded by some variable? if (CGM.getLangOpts().AppleKext) { // Generate a global destructor entry. return CGM.AddCXXDtorEntry(dtor, addr); } CGF.registerGlobalDtorWithAtExit(D, dtor, addr); } static bool isThreadWrapperReplaceable(const VarDecl *VD, CodeGen::CodeGenModule &CGM) { assert(!VD->isStaticLocal() && "static local VarDecls don't need wrappers!"); // Darwin prefers to have references to thread local variables to go through // the thread wrapper instead of directly referencing the backing variable. return VD->getTLSKind() == VarDecl::TLS_Dynamic && CGM.getTarget().getTriple().isOSDarwin(); } /// Get the appropriate linkage for the wrapper function. This is essentially /// the weak form of the variable's linkage; every translation unit which needs /// the wrapper emits a copy, and we want the linker to merge them. static llvm::GlobalValue::LinkageTypes getThreadLocalWrapperLinkage(const VarDecl *VD, CodeGen::CodeGenModule &CGM) { llvm::GlobalValue::LinkageTypes VarLinkage = CGM.getLLVMLinkageVarDefinition(VD, /*isConstant=*/false); // For internal linkage variables, we don't need an external or weak wrapper. if (llvm::GlobalValue::isLocalLinkage(VarLinkage)) return VarLinkage; // If the thread wrapper is replaceable, give it appropriate linkage. if (isThreadWrapperReplaceable(VD, CGM)) if (!llvm::GlobalVariable::isLinkOnceLinkage(VarLinkage) && !llvm::GlobalVariable::isWeakODRLinkage(VarLinkage)) return VarLinkage; return llvm::GlobalValue::WeakODRLinkage; } llvm::Function * ItaniumCXXABI::getOrCreateThreadLocalWrapper(const VarDecl *VD, llvm::Value *Val) { // Mangle the name for the thread_local wrapper function. SmallString<256> WrapperName; { llvm::raw_svector_ostream Out(WrapperName); getMangleContext().mangleItaniumThreadLocalWrapper(VD, Out); } // FIXME: If VD is a definition, we should regenerate the function attributes // before returning. if (llvm::Value *V = CGM.getModule().getNamedValue(WrapperName)) return cast(V); QualType RetQT = VD->getType(); if (RetQT->isReferenceType()) RetQT = RetQT.getNonReferenceType(); const CGFunctionInfo &FI = CGM.getTypes().arrangeBuiltinFunctionDeclaration( getContext().getPointerType(RetQT), FunctionArgList()); llvm::FunctionType *FnTy = CGM.getTypes().GetFunctionType(FI); llvm::Function *Wrapper = llvm::Function::Create(FnTy, getThreadLocalWrapperLinkage(VD, CGM), WrapperName.str(), &CGM.getModule()); CGM.SetLLVMFunctionAttributes(nullptr, FI, Wrapper); if (VD->hasDefinition()) CGM.SetLLVMFunctionAttributesForDefinition(nullptr, Wrapper); // Always resolve references to the wrapper at link time. if (!Wrapper->hasLocalLinkage() && !(isThreadWrapperReplaceable(VD, CGM) && !llvm::GlobalVariable::isLinkOnceLinkage(Wrapper->getLinkage()) && !llvm::GlobalVariable::isWeakODRLinkage(Wrapper->getLinkage()))) Wrapper->setVisibility(llvm::GlobalValue::HiddenVisibility); if (isThreadWrapperReplaceable(VD, CGM)) { Wrapper->setCallingConv(llvm::CallingConv::CXX_FAST_TLS); Wrapper->addFnAttr(llvm::Attribute::NoUnwind); } return Wrapper; } void ItaniumCXXABI::EmitThreadLocalInitFuncs( CodeGenModule &CGM, ArrayRef CXXThreadLocals, ArrayRef CXXThreadLocalInits, ArrayRef CXXThreadLocalInitVars) { llvm::Function *InitFunc = nullptr; // Separate initializers into those with ordered (or partially-ordered) // initialization and those with unordered initialization. 
llvm::SmallVector OrderedInits; llvm::SmallDenseMap UnorderedInits; for (unsigned I = 0; I != CXXThreadLocalInits.size(); ++I) { if (isTemplateInstantiation( CXXThreadLocalInitVars[I]->getTemplateSpecializationKind())) UnorderedInits[CXXThreadLocalInitVars[I]->getCanonicalDecl()] = CXXThreadLocalInits[I]; else OrderedInits.push_back(CXXThreadLocalInits[I]); } if (!OrderedInits.empty()) { // Generate a guarded initialization function. llvm::FunctionType *FTy = llvm::FunctionType::get(CGM.VoidTy, /*isVarArg=*/false); const CGFunctionInfo &FI = CGM.getTypes().arrangeNullaryFunction(); InitFunc = CGM.CreateGlobalInitOrDestructFunction(FTy, "__tls_init", FI, SourceLocation(), /*TLS=*/true); llvm::GlobalVariable *Guard = new llvm::GlobalVariable( CGM.getModule(), CGM.Int8Ty, /*isConstant=*/false, llvm::GlobalVariable::InternalLinkage, llvm::ConstantInt::get(CGM.Int8Ty, 0), "__tls_guard"); Guard->setThreadLocal(true); CharUnits GuardAlign = CharUnits::One(); Guard->setAlignment(GuardAlign.getQuantity()); CodeGenFunction(CGM).GenerateCXXGlobalInitFunc(InitFunc, OrderedInits, Address(Guard, GuardAlign)); // On Darwin platforms, use CXX_FAST_TLS calling convention. if (CGM.getTarget().getTriple().isOSDarwin()) { InitFunc->setCallingConv(llvm::CallingConv::CXX_FAST_TLS); InitFunc->addFnAttr(llvm::Attribute::NoUnwind); } } // Emit thread wrappers. for (const VarDecl *VD : CXXThreadLocals) { llvm::GlobalVariable *Var = cast(CGM.GetGlobalValue(CGM.getMangledName(VD))); llvm::Function *Wrapper = getOrCreateThreadLocalWrapper(VD, Var); // Some targets require that all access to thread local variables go through // the thread wrapper. This means that we cannot attempt to create a thread // wrapper or a thread helper. if (isThreadWrapperReplaceable(VD, CGM) && !VD->hasDefinition()) { Wrapper->setLinkage(llvm::Function::ExternalLinkage); continue; } // Mangle the name for the thread_local initialization function. SmallString<256> InitFnName; { llvm::raw_svector_ostream Out(InitFnName); getMangleContext().mangleItaniumThreadLocalInit(VD, Out); } // If we have a definition for the variable, emit the initialization // function as an alias to the global Init function (if any). Otherwise, // produce a declaration of the initialization function. llvm::GlobalValue *Init = nullptr; bool InitIsInitFunc = false; if (VD->hasDefinition()) { InitIsInitFunc = true; llvm::Function *InitFuncToUse = InitFunc; if (isTemplateInstantiation(VD->getTemplateSpecializationKind())) InitFuncToUse = UnorderedInits.lookup(VD->getCanonicalDecl()); if (InitFuncToUse) Init = llvm::GlobalAlias::create(Var->getLinkage(), InitFnName.str(), InitFuncToUse); } else { // Emit a weak global function referring to the initialization function. // This function will not exist if the TU defining the thread_local // variable in question does not need any dynamic initialization for // its thread_local variables. 
llvm::FunctionType *FnTy = llvm::FunctionType::get(CGM.VoidTy, false); Init = llvm::Function::Create(FnTy, llvm::GlobalVariable::ExternalWeakLinkage, InitFnName.str(), &CGM.getModule()); const CGFunctionInfo &FI = CGM.getTypes().arrangeNullaryFunction(); CGM.SetLLVMFunctionAttributes(nullptr, FI, cast(Init)); } if (Init) Init->setVisibility(Var->getVisibility()); llvm::LLVMContext &Context = CGM.getModule().getContext(); llvm::BasicBlock *Entry = llvm::BasicBlock::Create(Context, "", Wrapper); CGBuilderTy Builder(CGM, Entry); if (InitIsInitFunc) { if (Init) { llvm::CallInst *CallVal = Builder.CreateCall(Init); if (isThreadWrapperReplaceable(VD, CGM)) CallVal->setCallingConv(llvm::CallingConv::CXX_FAST_TLS); } } else { // Don't know whether we have an init function. Call it if it exists. llvm::Value *Have = Builder.CreateIsNotNull(Init); llvm::BasicBlock *InitBB = llvm::BasicBlock::Create(Context, "", Wrapper); llvm::BasicBlock *ExitBB = llvm::BasicBlock::Create(Context, "", Wrapper); Builder.CreateCondBr(Have, InitBB, ExitBB); Builder.SetInsertPoint(InitBB); Builder.CreateCall(Init); Builder.CreateBr(ExitBB); Builder.SetInsertPoint(ExitBB); } // For a reference, the result of the wrapper function is a pointer to // the referenced object. llvm::Value *Val = Var; if (VD->getType()->isReferenceType()) { CharUnits Align = CGM.getContext().getDeclAlign(VD); Val = Builder.CreateAlignedLoad(Val, Align); } if (Val->getType() != Wrapper->getReturnType()) Val = Builder.CreatePointerBitCastOrAddrSpaceCast( Val, Wrapper->getReturnType(), ""); Builder.CreateRet(Val); } } LValue ItaniumCXXABI::EmitThreadLocalVarDeclLValue(CodeGenFunction &CGF, const VarDecl *VD, QualType LValType) { llvm::Value *Val = CGF.CGM.GetAddrOfGlobalVar(VD); llvm::Function *Wrapper = getOrCreateThreadLocalWrapper(VD, Val); llvm::CallInst *CallVal = CGF.Builder.CreateCall(Wrapper); CallVal->setCallingConv(Wrapper->getCallingConv()); LValue LV; if (VD->getType()->isReferenceType()) LV = CGF.MakeNaturalAlignAddrLValue(CallVal, LValType); else LV = CGF.MakeAddrLValue(CallVal, LValType, CGF.getContext().getDeclAlign(VD)); // FIXME: need setObjCGCLValueClass? return LV; } /// Return whether the given global decl needs a VTT parameter, which it does /// if it's a base constructor or destructor with virtual bases. bool ItaniumCXXABI::NeedsVTTParameter(GlobalDecl GD) { const CXXMethodDecl *MD = cast(GD.getDecl()); // We don't have any virtual bases, just return early. if (!MD->getParent()->getNumVBases()) return false; // Check if we have a base constructor. if (isa(MD) && GD.getCtorType() == Ctor_Base) return true; // Check if we have a base destructor. if (isa(MD) && GD.getDtorType() == Dtor_Base) return true; return false; } namespace { class ItaniumRTTIBuilder { CodeGenModule &CGM; // Per-module state. llvm::LLVMContext &VMContext; const ItaniumCXXABI &CXXABI; // Per-module state. /// Fields - The fields of the RTTI descriptor currently being built. SmallVector Fields; /// GetAddrOfTypeName - Returns the mangled type name of the given type. llvm::GlobalVariable * GetAddrOfTypeName(QualType Ty, llvm::GlobalVariable::LinkageTypes Linkage); /// GetAddrOfExternalRTTIDescriptor - Returns the constant for the RTTI /// descriptor of the given type. llvm::Constant *GetAddrOfExternalRTTIDescriptor(QualType Ty); /// BuildVTablePointer - Build the vtable pointer for the given type. 
void BuildVTablePointer(const Type *Ty); /// BuildSIClassTypeInfo - Build an abi::__si_class_type_info, used for single /// inheritance, according to the Itanium C++ ABI, 2.9.5p6b. void BuildSIClassTypeInfo(const CXXRecordDecl *RD); /// BuildVMIClassTypeInfo - Build an abi::__vmi_class_type_info, used for /// classes with bases that do not satisfy the abi::__si_class_type_info /// constraints, according ti the Itanium C++ ABI, 2.9.5p5c. void BuildVMIClassTypeInfo(const CXXRecordDecl *RD); /// BuildPointerTypeInfo - Build an abi::__pointer_type_info struct, used /// for pointer types. void BuildPointerTypeInfo(QualType PointeeTy); /// BuildObjCObjectTypeInfo - Build the appropriate kind of /// type_info for an object type. void BuildObjCObjectTypeInfo(const ObjCObjectType *Ty); /// BuildPointerToMemberTypeInfo - Build an abi::__pointer_to_member_type_info /// struct, used for member pointer types. void BuildPointerToMemberTypeInfo(const MemberPointerType *Ty); public: ItaniumRTTIBuilder(const ItaniumCXXABI &ABI) : CGM(ABI.CGM), VMContext(CGM.getModule().getContext()), CXXABI(ABI) {} // Pointer type info flags. enum { /// PTI_Const - Type has const qualifier. PTI_Const = 0x1, /// PTI_Volatile - Type has volatile qualifier. PTI_Volatile = 0x2, /// PTI_Restrict - Type has restrict qualifier. PTI_Restrict = 0x4, /// PTI_Incomplete - Type is incomplete. PTI_Incomplete = 0x8, /// PTI_ContainingClassIncomplete - Containing class is incomplete. /// (in pointer to member). PTI_ContainingClassIncomplete = 0x10, /// PTI_TransactionSafe - Pointee is transaction_safe function (C++ TM TS). //PTI_TransactionSafe = 0x20, /// PTI_Noexcept - Pointee is noexcept function (C++1z). PTI_Noexcept = 0x40, }; // VMI type info flags. enum { /// VMI_NonDiamondRepeat - Class has non-diamond repeated inheritance. VMI_NonDiamondRepeat = 0x1, /// VMI_DiamondShaped - Class is diamond shaped. VMI_DiamondShaped = 0x2 }; // Base class type info flags. enum { /// BCTI_Virtual - Base class is virtual. BCTI_Virtual = 0x1, /// BCTI_Public - Base class is public. BCTI_Public = 0x2 }; /// BuildTypeInfo - Build the RTTI type info struct for the given type. /// /// \param Force - true to force the creation of this RTTI value /// \param DLLExport - true to mark the RTTI value as DLLExport llvm::Constant *BuildTypeInfo(QualType Ty, bool Force = false, bool DLLExport = false); }; } llvm::GlobalVariable *ItaniumRTTIBuilder::GetAddrOfTypeName( QualType Ty, llvm::GlobalVariable::LinkageTypes Linkage) { SmallString<256> Name; llvm::raw_svector_ostream Out(Name); CGM.getCXXABI().getMangleContext().mangleCXXRTTIName(Ty, Out); // We know that the mangled name of the type starts at index 4 of the // mangled name of the typename, so we can just index into it in order to // get the mangled name of the type. llvm::Constant *Init = llvm::ConstantDataArray::getString(VMContext, Name.substr(4)); llvm::GlobalVariable *GV = CGM.CreateOrReplaceCXXRuntimeVariable(Name, Init->getType(), Linkage); GV->setInitializer(Init); return GV; } llvm::Constant * ItaniumRTTIBuilder::GetAddrOfExternalRTTIDescriptor(QualType Ty) { // Mangle the RTTI name. SmallString<256> Name; llvm::raw_svector_ostream Out(Name); CGM.getCXXABI().getMangleContext().mangleCXXRTTI(Ty, Out); // Look for an existing global. llvm::GlobalVariable *GV = CGM.getModule().getNamedGlobal(Name); if (!GV) { // Create a new global variable. // Note for the future: If we would ever like to do deferred emission of // RTTI, check if emitting vtables opportunistically need any adjustment. 
GV = new llvm::GlobalVariable(CGM.getModule(), CGM.Int8PtrTy, /*Constant=*/true, llvm::GlobalValue::ExternalLinkage, nullptr, Name); if (const RecordType *RecordTy = dyn_cast(Ty)) { const CXXRecordDecl *RD = cast(RecordTy->getDecl()); if (RD->hasAttr()) GV->setDLLStorageClass(llvm::GlobalVariable::DLLImportStorageClass); } } return llvm::ConstantExpr::getBitCast(GV, CGM.Int8PtrTy); } /// TypeInfoIsInStandardLibrary - Given a builtin type, returns whether the type /// info for that type is defined in the standard library. static bool TypeInfoIsInStandardLibrary(const BuiltinType *Ty) { // Itanium C++ ABI 2.9.2: // Basic type information (e.g. for "int", "bool", etc.) will be kept in // the run-time support library. Specifically, the run-time support // library should contain type_info objects for the types X, X* and // X const*, for every X in: void, std::nullptr_t, bool, wchar_t, char, // unsigned char, signed char, short, unsigned short, int, unsigned int, // long, unsigned long, long long, unsigned long long, float, double, // long double, char16_t, char32_t, and the IEEE 754r decimal and // half-precision floating point types. // // GCC also emits RTTI for __int128. // FIXME: We do not emit RTTI information for decimal types here. // Types added here must also be added to EmitFundamentalRTTIDescriptors. switch (Ty->getKind()) { case BuiltinType::Void: case BuiltinType::NullPtr: case BuiltinType::Bool: case BuiltinType::WChar_S: case BuiltinType::WChar_U: case BuiltinType::Char_U: case BuiltinType::Char_S: case BuiltinType::UChar: case BuiltinType::SChar: case BuiltinType::Short: case BuiltinType::UShort: case BuiltinType::Int: case BuiltinType::UInt: case BuiltinType::Long: case BuiltinType::ULong: case BuiltinType::LongLong: case BuiltinType::ULongLong: case BuiltinType::Half: case BuiltinType::Float: case BuiltinType::Double: case BuiltinType::LongDouble: case BuiltinType::Float128: case BuiltinType::Char16: case BuiltinType::Char32: case BuiltinType::Int128: case BuiltinType::UInt128: return true; #define IMAGE_TYPE(ImgType, Id, SingletonId, Access, Suffix) \ case BuiltinType::Id: #include "clang/Basic/OpenCLImageTypes.def" case BuiltinType::OCLSampler: case BuiltinType::OCLEvent: case BuiltinType::OCLClkEvent: case BuiltinType::OCLQueue: case BuiltinType::OCLReserveID: return false; case BuiltinType::Dependent: #define BUILTIN_TYPE(Id, SingletonId) #define PLACEHOLDER_TYPE(Id, SingletonId) \ case BuiltinType::Id: #include "clang/AST/BuiltinTypes.def" llvm_unreachable("asking for RRTI for a placeholder type!"); case BuiltinType::ObjCId: case BuiltinType::ObjCClass: case BuiltinType::ObjCSel: llvm_unreachable("FIXME: Objective-C types are unsupported!"); } llvm_unreachable("Invalid BuiltinType Kind!"); } static bool TypeInfoIsInStandardLibrary(const PointerType *PointerTy) { QualType PointeeTy = PointerTy->getPointeeType(); const BuiltinType *BuiltinTy = dyn_cast(PointeeTy); if (!BuiltinTy) return false; // Check the qualifiers. Qualifiers Quals = PointeeTy.getQualifiers(); Quals.removeConst(); if (!Quals.empty()) return false; return TypeInfoIsInStandardLibrary(BuiltinTy); } /// IsStandardLibraryRTTIDescriptor - Returns whether the type /// information for the given type exists in the standard library. static bool IsStandardLibraryRTTIDescriptor(QualType Ty) { // Type info for builtin types is defined in the standard library. 
  if (const BuiltinType *BuiltinTy = dyn_cast<BuiltinType>(Ty))
    return TypeInfoIsInStandardLibrary(BuiltinTy);

  // Type info for some pointer types to builtin types is defined in the
  // standard library.
  if (const PointerType *PointerTy = dyn_cast<PointerType>(Ty))
    return TypeInfoIsInStandardLibrary(PointerTy);

  return false;
}

/// ShouldUseExternalRTTIDescriptor - Returns whether the type information for
/// the given type exists somewhere else, and that we should not emit the type
/// information in this translation unit.  Assumes that it is not a
/// standard-library type.
static bool ShouldUseExternalRTTIDescriptor(CodeGenModule &CGM, QualType Ty) {
  ASTContext &Context = CGM.getContext();

  // If RTTI is disabled, assume it might be disabled in the
  // translation unit that defines any potential key function, too.
  if (!Context.getLangOpts().RTTI) return false;

  if (const RecordType *RecordTy = dyn_cast<RecordType>(Ty)) {
    const CXXRecordDecl *RD = cast<CXXRecordDecl>(RecordTy->getDecl());
    if (!RD->hasDefinition())
      return false;

    if (!RD->isDynamicClass())
      return false;

    // FIXME: this may need to be reconsidered if the key function
    // changes.
    // N.B. We must always emit the RTTI data ourselves if there exists a key
    // function.
    bool IsDLLImport = RD->hasAttr<DLLImportAttr>();
    if (CGM.getVTables().isVTableExternal(RD))
      return IsDLLImport && !CGM.getTriple().isWindowsItaniumEnvironment()
                 ? false
                 : true;

    if (IsDLLImport)
      return true;
  }

  return false;
}

/// IsIncompleteClassType - Returns whether the given record type is incomplete.
static bool IsIncompleteClassType(const RecordType *RecordTy) {
  return !RecordTy->getDecl()->isCompleteDefinition();
}

/// ContainsIncompleteClassType - Returns whether the given type contains an
/// incomplete class type. This is true if
///
///   * The given type is an incomplete class type.
///   * The given type is a pointer type whose pointee type contains an
///     incomplete class type.
///   * The given type is a member pointer type whose class is an incomplete
///     class type.
///   * The given type is a member pointer type whose pointee type contains an
///     incomplete class type.
///
/// is an indirect or direct pointer to an incomplete class type.
static bool ContainsIncompleteClassType(QualType Ty) {
  if (const RecordType *RecordTy = dyn_cast<RecordType>(Ty)) {
    if (IsIncompleteClassType(RecordTy))
      return true;
  }

  if (const PointerType *PointerTy = dyn_cast<PointerType>(Ty))
    return ContainsIncompleteClassType(PointerTy->getPointeeType());

  if (const MemberPointerType *MemberPointerTy =
          dyn_cast<MemberPointerType>(Ty)) {
    // Check if the class type is incomplete.
    const RecordType *ClassType = cast<RecordType>(MemberPointerTy->getClass());
    if (IsIncompleteClassType(ClassType))
      return true;

    return ContainsIncompleteClassType(MemberPointerTy->getPointeeType());
  }

  return false;
}

// CanUseSingleInheritance - Return whether the given record decl has a "single,
// public, non-virtual base at offset zero (i.e. the derived class is dynamic
// iff the base is)", according to Itanium C++ ABI, 2.9.5p6b.
static bool CanUseSingleInheritance(const CXXRecordDecl *RD) {
  // Check the number of bases.
  if (RD->getNumBases() != 1)
    return false;

  // Get the base.
  CXXRecordDecl::base_class_const_iterator Base = RD->bases_begin();

  // Check that the base is not virtual.
  if (Base->isVirtual())
    return false;

  // Check that the base is public.
  if (Base->getAccessSpecifier() != AS_public)
    return false;

  // Check that the class is dynamic iff the base is.
const CXXRecordDecl *BaseDecl = cast(Base->getType()->getAs()->getDecl()); if (!BaseDecl->isEmpty() && BaseDecl->isDynamicClass() != RD->isDynamicClass()) return false; return true; } void ItaniumRTTIBuilder::BuildVTablePointer(const Type *Ty) { // abi::__class_type_info. static const char * const ClassTypeInfo = "_ZTVN10__cxxabiv117__class_type_infoE"; // abi::__si_class_type_info. static const char * const SIClassTypeInfo = "_ZTVN10__cxxabiv120__si_class_type_infoE"; // abi::__vmi_class_type_info. static const char * const VMIClassTypeInfo = "_ZTVN10__cxxabiv121__vmi_class_type_infoE"; const char *VTableName = nullptr; switch (Ty->getTypeClass()) { #define TYPE(Class, Base) #define ABSTRACT_TYPE(Class, Base) #define NON_CANONICAL_UNLESS_DEPENDENT_TYPE(Class, Base) case Type::Class: #define NON_CANONICAL_TYPE(Class, Base) case Type::Class: #define DEPENDENT_TYPE(Class, Base) case Type::Class: #include "clang/AST/TypeNodes.def" llvm_unreachable("Non-canonical and dependent types shouldn't get here"); case Type::LValueReference: case Type::RValueReference: llvm_unreachable("References shouldn't get here"); case Type::Auto: case Type::DeducedTemplateSpecialization: llvm_unreachable("Undeduced type shouldn't get here"); case Type::Pipe: llvm_unreachable("Pipe types shouldn't get here"); case Type::Builtin: // GCC treats vector and complex types as fundamental types. case Type::Vector: case Type::ExtVector: case Type::Complex: case Type::Atomic: // FIXME: GCC treats block pointers as fundamental types?! case Type::BlockPointer: // abi::__fundamental_type_info. VTableName = "_ZTVN10__cxxabiv123__fundamental_type_infoE"; break; case Type::ConstantArray: case Type::IncompleteArray: case Type::VariableArray: // abi::__array_type_info. VTableName = "_ZTVN10__cxxabiv117__array_type_infoE"; break; case Type::FunctionNoProto: case Type::FunctionProto: // abi::__function_type_info. VTableName = "_ZTVN10__cxxabiv120__function_type_infoE"; break; case Type::Enum: // abi::__enum_type_info. VTableName = "_ZTVN10__cxxabiv116__enum_type_infoE"; break; case Type::Record: { const CXXRecordDecl *RD = cast(cast(Ty)->getDecl()); if (!RD->hasDefinition() || !RD->getNumBases()) { VTableName = ClassTypeInfo; } else if (CanUseSingleInheritance(RD)) { VTableName = SIClassTypeInfo; } else { VTableName = VMIClassTypeInfo; } break; } case Type::ObjCObject: // Ignore protocol qualifiers. Ty = cast(Ty)->getBaseType().getTypePtr(); // Handle id and Class. if (isa(Ty)) { VTableName = ClassTypeInfo; break; } assert(isa(Ty)); // Fall through. case Type::ObjCInterface: if (cast(Ty)->getDecl()->getSuperClass()) { VTableName = SIClassTypeInfo; } else { VTableName = ClassTypeInfo; } break; case Type::ObjCObjectPointer: case Type::Pointer: // abi::__pointer_type_info. VTableName = "_ZTVN10__cxxabiv119__pointer_type_infoE"; break; case Type::MemberPointer: // abi::__pointer_to_member_type_info. VTableName = "_ZTVN10__cxxabiv129__pointer_to_member_type_infoE"; break; } llvm::Constant *VTable = CGM.getModule().getOrInsertGlobal(VTableName, CGM.Int8PtrTy); llvm::Type *PtrDiffTy = CGM.getTypes().ConvertType(CGM.getContext().getPointerDiffType()); // The vtable address point is 2. llvm::Constant *Two = llvm::ConstantInt::get(PtrDiffTy, 2); VTable = llvm::ConstantExpr::getInBoundsGetElementPtr(CGM.Int8PtrTy, VTable, Two); VTable = llvm::ConstantExpr::getBitCast(VTable, CGM.Int8PtrTy); Fields.push_back(VTable); } /// \brief Return the linkage that the type info and type info name constants /// should have for the given type. 
static llvm::GlobalVariable::LinkageTypes getTypeInfoLinkage(CodeGenModule &CGM, QualType Ty) { // Itanium C++ ABI 2.9.5p7: // In addition, it and all of the intermediate abi::__pointer_type_info // structs in the chain down to the abi::__class_type_info for the // incomplete class type must be prevented from resolving to the // corresponding type_info structs for the complete class type, possibly // by making them local static objects. Finally, a dummy class RTTI is // generated for the incomplete type that will not resolve to the final // complete class RTTI (because the latter need not exist), possibly by // making it a local static object. if (ContainsIncompleteClassType(Ty)) return llvm::GlobalValue::InternalLinkage; switch (Ty->getLinkage()) { case NoLinkage: case InternalLinkage: case UniqueExternalLinkage: return llvm::GlobalValue::InternalLinkage; case VisibleNoLinkage: case ModuleInternalLinkage: case ModuleLinkage: case ExternalLinkage: // RTTI is not enabled, which means that this type info struct is going // to be used for exception handling. Give it linkonce_odr linkage. if (!CGM.getLangOpts().RTTI) return llvm::GlobalValue::LinkOnceODRLinkage; if (const RecordType *Record = dyn_cast(Ty)) { const CXXRecordDecl *RD = cast(Record->getDecl()); if (RD->hasAttr()) return llvm::GlobalValue::WeakODRLinkage; if (CGM.getTriple().isWindowsItaniumEnvironment()) if (RD->hasAttr() && ShouldUseExternalRTTIDescriptor(CGM, Ty)) return llvm::GlobalValue::ExternalLinkage; if (RD->isDynamicClass()) { llvm::GlobalValue::LinkageTypes LT = CGM.getVTableLinkage(RD); // MinGW won't export the RTTI information when there is a key function. // Make sure we emit our own copy instead of attempting to dllimport it. if (RD->hasAttr() && llvm::GlobalValue::isAvailableExternallyLinkage(LT)) LT = llvm::GlobalValue::LinkOnceODRLinkage; return LT; } } return llvm::GlobalValue::LinkOnceODRLinkage; } llvm_unreachable("Invalid linkage!"); } llvm::Constant *ItaniumRTTIBuilder::BuildTypeInfo(QualType Ty, bool Force, bool DLLExport) { // We want to operate on the canonical type. Ty = Ty.getCanonicalType(); // Check if we've already emitted an RTTI descriptor for this type. SmallString<256> Name; llvm::raw_svector_ostream Out(Name); CGM.getCXXABI().getMangleContext().mangleCXXRTTI(Ty, Out); llvm::GlobalVariable *OldGV = CGM.getModule().getNamedGlobal(Name); if (OldGV && !OldGV->isDeclaration()) { assert(!OldGV->hasAvailableExternallyLinkage() && "available_externally typeinfos not yet implemented"); return llvm::ConstantExpr::getBitCast(OldGV, CGM.Int8PtrTy); } // Check if there is already an external RTTI descriptor for this type. bool IsStdLib = IsStandardLibraryRTTIDescriptor(Ty); if (!Force && (IsStdLib || ShouldUseExternalRTTIDescriptor(CGM, Ty))) return GetAddrOfExternalRTTIDescriptor(Ty); // Emit the standard library with external linkage. llvm::GlobalVariable::LinkageTypes Linkage; if (IsStdLib) Linkage = llvm::GlobalValue::ExternalLinkage; else Linkage = getTypeInfoLinkage(CGM, Ty); // Add the vtable pointer. BuildVTablePointer(cast(Ty)); // And the name. llvm::GlobalVariable *TypeName = GetAddrOfTypeName(Ty, Linkage); llvm::Constant *TypeNameField; // If we're supposed to demote the visibility, be sure to set a flag // to use a string comparison for type_info comparisons. ItaniumCXXABI::RTTIUniquenessKind RTTIUniqueness = CXXABI.classifyRTTIUniqueness(Ty, Linkage); if (RTTIUniqueness != ItaniumCXXABI::RUK_Unique) { // The flag is the sign bit, which on ARM64 is defined to be clear // for global pointers. 
This is very ARM64-specific. TypeNameField = llvm::ConstantExpr::getPtrToInt(TypeName, CGM.Int64Ty); llvm::Constant *flag = llvm::ConstantInt::get(CGM.Int64Ty, ((uint64_t)1) << 63); TypeNameField = llvm::ConstantExpr::getAdd(TypeNameField, flag); TypeNameField = llvm::ConstantExpr::getIntToPtr(TypeNameField, CGM.Int8PtrTy); } else { TypeNameField = llvm::ConstantExpr::getBitCast(TypeName, CGM.Int8PtrTy); } Fields.push_back(TypeNameField); switch (Ty->getTypeClass()) { #define TYPE(Class, Base) #define ABSTRACT_TYPE(Class, Base) #define NON_CANONICAL_UNLESS_DEPENDENT_TYPE(Class, Base) case Type::Class: #define NON_CANONICAL_TYPE(Class, Base) case Type::Class: #define DEPENDENT_TYPE(Class, Base) case Type::Class: #include "clang/AST/TypeNodes.def" llvm_unreachable("Non-canonical and dependent types shouldn't get here"); // GCC treats vector types as fundamental types. case Type::Builtin: case Type::Vector: case Type::ExtVector: case Type::Complex: case Type::BlockPointer: // Itanium C++ ABI 2.9.5p4: // abi::__fundamental_type_info adds no data members to std::type_info. break; case Type::LValueReference: case Type::RValueReference: llvm_unreachable("References shouldn't get here"); case Type::Auto: case Type::DeducedTemplateSpecialization: llvm_unreachable("Undeduced type shouldn't get here"); case Type::Pipe: llvm_unreachable("Pipe type shouldn't get here"); case Type::ConstantArray: case Type::IncompleteArray: case Type::VariableArray: // Itanium C++ ABI 2.9.5p5: // abi::__array_type_info adds no data members to std::type_info. break; case Type::FunctionNoProto: case Type::FunctionProto: // Itanium C++ ABI 2.9.5p5: // abi::__function_type_info adds no data members to std::type_info. break; case Type::Enum: // Itanium C++ ABI 2.9.5p5: // abi::__enum_type_info adds no data members to std::type_info. break; case Type::Record: { const CXXRecordDecl *RD = cast(cast(Ty)->getDecl()); if (!RD->hasDefinition() || !RD->getNumBases()) { // We don't need to emit any fields. break; } if (CanUseSingleInheritance(RD)) BuildSIClassTypeInfo(RD); else BuildVMIClassTypeInfo(RD); break; } case Type::ObjCObject: case Type::ObjCInterface: BuildObjCObjectTypeInfo(cast(Ty)); break; case Type::ObjCObjectPointer: BuildPointerTypeInfo(cast(Ty)->getPointeeType()); break; case Type::Pointer: BuildPointerTypeInfo(cast(Ty)->getPointeeType()); break; case Type::MemberPointer: BuildPointerToMemberTypeInfo(cast(Ty)); break; case Type::Atomic: // No fields, at least for the moment. break; } llvm::Constant *Init = llvm::ConstantStruct::getAnon(Fields); llvm::Module &M = CGM.getModule(); llvm::GlobalVariable *GV = new llvm::GlobalVariable(M, Init->getType(), /*Constant=*/true, Linkage, Init, Name); // If there's already an old global variable, replace it with the new one. if (OldGV) { GV->takeName(OldGV); llvm::Constant *NewPtr = llvm::ConstantExpr::getBitCast(GV, OldGV->getType()); OldGV->replaceAllUsesWith(NewPtr); OldGV->eraseFromParent(); } if (CGM.supportsCOMDAT() && GV->isWeakForLinker()) GV->setComdat(M.getOrInsertComdat(GV->getName())); // The Itanium ABI specifies that type_info objects must be globally // unique, with one exception: if the type is an incomplete class // type or a (possibly indirect) pointer to one. That exception // affects the general case of comparing type_info objects produced // by the typeid operator, which is why the comparison operators on // std::type_info generally use the type_info name pointers instead // of the object addresses. 
However, the language's built-in uses // of RTTI generally require class types to be complete, even when // manipulating pointers to those class types. This allows the // implementation of dynamic_cast to rely on address equality tests, // which is much faster. // All of this is to say that it's important that both the type_info // object and the type_info name be uniqued when weakly emitted. // Give the type_info object and name the formal visibility of the // type itself. llvm::GlobalValue::VisibilityTypes llvmVisibility; if (llvm::GlobalValue::isLocalLinkage(Linkage)) // If the linkage is local, only default visibility makes sense. llvmVisibility = llvm::GlobalValue::DefaultVisibility; else if (RTTIUniqueness == ItaniumCXXABI::RUK_NonUniqueHidden) llvmVisibility = llvm::GlobalValue::HiddenVisibility; else llvmVisibility = CodeGenModule::GetLLVMVisibility(Ty->getVisibility()); TypeName->setVisibility(llvmVisibility); GV->setVisibility(llvmVisibility); if (CGM.getTriple().isWindowsItaniumEnvironment()) { auto RD = Ty->getAsCXXRecordDecl(); if (DLLExport || (RD && RD->hasAttr())) { TypeName->setDLLStorageClass(llvm::GlobalValue::DLLExportStorageClass); GV->setDLLStorageClass(llvm::GlobalValue::DLLExportStorageClass); } else if (RD && RD->hasAttr() && ShouldUseExternalRTTIDescriptor(CGM, Ty)) { TypeName->setDLLStorageClass(llvm::GlobalValue::DLLImportStorageClass); GV->setDLLStorageClass(llvm::GlobalValue::DLLImportStorageClass); // Because the typename and the typeinfo are DLL import, convert them to // declarations rather than definitions. The initializers still need to // be constructed to calculate the type for the declarations. TypeName->setInitializer(nullptr); GV->setInitializer(nullptr); } } return llvm::ConstantExpr::getBitCast(GV, CGM.Int8PtrTy); } /// BuildObjCObjectTypeInfo - Build the appropriate kind of type_info /// for the given Objective-C object type. void ItaniumRTTIBuilder::BuildObjCObjectTypeInfo(const ObjCObjectType *OT) { // Drop qualifiers. const Type *T = OT->getBaseType().getTypePtr(); assert(isa(T) || isa(T)); // The builtin types are abi::__class_type_infos and don't require // extra fields. if (isa(T)) return; ObjCInterfaceDecl *Class = cast(T)->getDecl(); ObjCInterfaceDecl *Super = Class->getSuperClass(); // Root classes are also __class_type_info. if (!Super) return; QualType SuperTy = CGM.getContext().getObjCInterfaceType(Super); // Everything else is single inheritance. llvm::Constant *BaseTypeInfo = ItaniumRTTIBuilder(CXXABI).BuildTypeInfo(SuperTy); Fields.push_back(BaseTypeInfo); } /// BuildSIClassTypeInfo - Build an abi::__si_class_type_info, used for single /// inheritance, according to the Itanium C++ ABI, 2.95p6b. void ItaniumRTTIBuilder::BuildSIClassTypeInfo(const CXXRecordDecl *RD) { // Itanium C++ ABI 2.9.5p6b: // It adds to abi::__class_type_info a single member pointing to the // type_info structure for the base type, llvm::Constant *BaseTypeInfo = ItaniumRTTIBuilder(CXXABI).BuildTypeInfo(RD->bases_begin()->getType()); Fields.push_back(BaseTypeInfo); } namespace { /// SeenBases - Contains virtual and non-virtual bases seen when traversing /// a class hierarchy. struct SeenBases { llvm::SmallPtrSet NonVirtualBases; llvm::SmallPtrSet VirtualBases; }; } /// ComputeVMIClassTypeInfoFlags - Compute the value of the flags member in /// abi::__vmi_class_type_info. 
/// static unsigned ComputeVMIClassTypeInfoFlags(const CXXBaseSpecifier *Base, SeenBases &Bases) { unsigned Flags = 0; const CXXRecordDecl *BaseDecl = cast(Base->getType()->getAs()->getDecl()); if (Base->isVirtual()) { // Mark the virtual base as seen. if (!Bases.VirtualBases.insert(BaseDecl).second) { // If this virtual base has been seen before, then the class is diamond // shaped. Flags |= ItaniumRTTIBuilder::VMI_DiamondShaped; } else { if (Bases.NonVirtualBases.count(BaseDecl)) Flags |= ItaniumRTTIBuilder::VMI_NonDiamondRepeat; } } else { // Mark the non-virtual base as seen. if (!Bases.NonVirtualBases.insert(BaseDecl).second) { // If this non-virtual base has been seen before, then the class has non- // diamond shaped repeated inheritance. Flags |= ItaniumRTTIBuilder::VMI_NonDiamondRepeat; } else { if (Bases.VirtualBases.count(BaseDecl)) Flags |= ItaniumRTTIBuilder::VMI_NonDiamondRepeat; } } // Walk all bases. for (const auto &I : BaseDecl->bases()) Flags |= ComputeVMIClassTypeInfoFlags(&I, Bases); return Flags; } static unsigned ComputeVMIClassTypeInfoFlags(const CXXRecordDecl *RD) { unsigned Flags = 0; SeenBases Bases; // Walk all bases. for (const auto &I : RD->bases()) Flags |= ComputeVMIClassTypeInfoFlags(&I, Bases); return Flags; } /// BuildVMIClassTypeInfo - Build an abi::__vmi_class_type_info, used for /// classes with bases that do not satisfy the abi::__si_class_type_info /// constraints, according ti the Itanium C++ ABI, 2.9.5p5c. void ItaniumRTTIBuilder::BuildVMIClassTypeInfo(const CXXRecordDecl *RD) { llvm::Type *UnsignedIntLTy = CGM.getTypes().ConvertType(CGM.getContext().UnsignedIntTy); // Itanium C++ ABI 2.9.5p6c: // __flags is a word with flags describing details about the class // structure, which may be referenced by using the __flags_masks // enumeration. These flags refer to both direct and indirect bases. unsigned Flags = ComputeVMIClassTypeInfoFlags(RD); Fields.push_back(llvm::ConstantInt::get(UnsignedIntLTy, Flags)); // Itanium C++ ABI 2.9.5p6c: // __base_count is a word with the number of direct proper base class // descriptions that follow. Fields.push_back(llvm::ConstantInt::get(UnsignedIntLTy, RD->getNumBases())); if (!RD->getNumBases()) return; // Now add the base class descriptions. // Itanium C++ ABI 2.9.5p6c: // __base_info[] is an array of base class descriptions -- one for every // direct proper base. Each description is of the type: // // struct abi::__base_class_type_info { // public: // const __class_type_info *__base_type; // long __offset_flags; // // enum __offset_flags_masks { // __virtual_mask = 0x1, // __public_mask = 0x2, // __offset_shift = 8 // }; // }; // If we're in mingw and 'long' isn't wide enough for a pointer, use 'long // long' instead of 'long' for __offset_flags. libstdc++abi uses long long on // LLP64 platforms. // FIXME: Consider updating libc++abi to match, and extend this logic to all // LLP64 platforms. QualType OffsetFlagsTy = CGM.getContext().LongTy; const TargetInfo &TI = CGM.getContext().getTargetInfo(); if (TI.getTriple().isOSCygMing() && TI.getPointerWidth(0) > TI.getLongWidth()) OffsetFlagsTy = CGM.getContext().LongLongTy; llvm::Type *OffsetFlagsLTy = CGM.getTypes().ConvertType(OffsetFlagsTy); for (const auto &Base : RD->bases()) { // The __base_type member points to the RTTI for the base type. 
Fields.push_back(ItaniumRTTIBuilder(CXXABI).BuildTypeInfo(Base.getType())); const CXXRecordDecl *BaseDecl = cast(Base.getType()->getAs()->getDecl()); int64_t OffsetFlags = 0; // All but the lower 8 bits of __offset_flags are a signed offset. // For a non-virtual base, this is the offset in the object of the base // subobject. For a virtual base, this is the offset in the virtual table of // the virtual base offset for the virtual base referenced (negative). CharUnits Offset; if (Base.isVirtual()) Offset = CGM.getItaniumVTableContext().getVirtualBaseOffsetOffset(RD, BaseDecl); else { const ASTRecordLayout &Layout = CGM.getContext().getASTRecordLayout(RD); Offset = Layout.getBaseClassOffset(BaseDecl); }; OffsetFlags = uint64_t(Offset.getQuantity()) << 8; // The low-order byte of __offset_flags contains flags, as given by the // masks from the enumeration __offset_flags_masks. if (Base.isVirtual()) OffsetFlags |= BCTI_Virtual; if (Base.getAccessSpecifier() == AS_public) OffsetFlags |= BCTI_Public; Fields.push_back(llvm::ConstantInt::get(OffsetFlagsLTy, OffsetFlags)); } } /// Compute the flags for a __pbase_type_info, and remove the corresponding /// pieces from \p Type. static unsigned extractPBaseFlags(ASTContext &Ctx, QualType &Type) { unsigned Flags = 0; if (Type.isConstQualified()) Flags |= ItaniumRTTIBuilder::PTI_Const; if (Type.isVolatileQualified()) Flags |= ItaniumRTTIBuilder::PTI_Volatile; if (Type.isRestrictQualified()) Flags |= ItaniumRTTIBuilder::PTI_Restrict; Type = Type.getUnqualifiedType(); // Itanium C++ ABI 2.9.5p7: // When the abi::__pbase_type_info is for a direct or indirect pointer to an // incomplete class type, the incomplete target type flag is set. if (ContainsIncompleteClassType(Type)) Flags |= ItaniumRTTIBuilder::PTI_Incomplete; if (auto *Proto = Type->getAs()) { if (Proto->isNothrow(Ctx)) { Flags |= ItaniumRTTIBuilder::PTI_Noexcept; Type = Ctx.getFunctionType( Proto->getReturnType(), Proto->getParamTypes(), Proto->getExtProtoInfo().withExceptionSpec(EST_None)); } } return Flags; } /// BuildPointerTypeInfo - Build an abi::__pointer_type_info struct, /// used for pointer types. void ItaniumRTTIBuilder::BuildPointerTypeInfo(QualType PointeeTy) { // Itanium C++ ABI 2.9.5p7: // __flags is a flag word describing the cv-qualification and other // attributes of the type pointed to unsigned Flags = extractPBaseFlags(CGM.getContext(), PointeeTy); llvm::Type *UnsignedIntLTy = CGM.getTypes().ConvertType(CGM.getContext().UnsignedIntTy); Fields.push_back(llvm::ConstantInt::get(UnsignedIntLTy, Flags)); // Itanium C++ ABI 2.9.5p7: // __pointee is a pointer to the std::type_info derivation for the // unqualified type being pointed to. llvm::Constant *PointeeTypeInfo = ItaniumRTTIBuilder(CXXABI).BuildTypeInfo(PointeeTy); Fields.push_back(PointeeTypeInfo); } /// BuildPointerToMemberTypeInfo - Build an abi::__pointer_to_member_type_info /// struct, used for member pointer types. void ItaniumRTTIBuilder::BuildPointerToMemberTypeInfo(const MemberPointerType *Ty) { QualType PointeeTy = Ty->getPointeeType(); // Itanium C++ ABI 2.9.5p7: // __flags is a flag word describing the cv-qualification and other // attributes of the type pointed to. 
unsigned Flags = extractPBaseFlags(CGM.getContext(), PointeeTy); const RecordType *ClassType = cast(Ty->getClass()); if (IsIncompleteClassType(ClassType)) Flags |= PTI_ContainingClassIncomplete; llvm::Type *UnsignedIntLTy = CGM.getTypes().ConvertType(CGM.getContext().UnsignedIntTy); Fields.push_back(llvm::ConstantInt::get(UnsignedIntLTy, Flags)); // Itanium C++ ABI 2.9.5p7: // __pointee is a pointer to the std::type_info derivation for the // unqualified type being pointed to. llvm::Constant *PointeeTypeInfo = ItaniumRTTIBuilder(CXXABI).BuildTypeInfo(PointeeTy); Fields.push_back(PointeeTypeInfo); // Itanium C++ ABI 2.9.5p9: // __context is a pointer to an abi::__class_type_info corresponding to the // class type containing the member pointed to // (e.g., the "A" in "int A::*"). Fields.push_back( ItaniumRTTIBuilder(CXXABI).BuildTypeInfo(QualType(ClassType, 0))); } llvm::Constant *ItaniumCXXABI::getAddrOfRTTIDescriptor(QualType Ty) { return ItaniumRTTIBuilder(*this).BuildTypeInfo(Ty); } void ItaniumCXXABI::EmitFundamentalRTTIDescriptor(QualType Type, bool DLLExport) { QualType PointerType = getContext().getPointerType(Type); QualType PointerTypeConst = getContext().getPointerType(Type.withConst()); ItaniumRTTIBuilder(*this).BuildTypeInfo(Type, /*Force=*/true, DLLExport); ItaniumRTTIBuilder(*this).BuildTypeInfo(PointerType, /*Force=*/true, DLLExport); ItaniumRTTIBuilder(*this).BuildTypeInfo(PointerTypeConst, /*Force=*/true, DLLExport); } void ItaniumCXXABI::EmitFundamentalRTTIDescriptors(bool DLLExport) { // Types added here must also be added to TypeInfoIsInStandardLibrary. QualType FundamentalTypes[] = { getContext().VoidTy, getContext().NullPtrTy, getContext().BoolTy, getContext().WCharTy, getContext().CharTy, getContext().UnsignedCharTy, getContext().SignedCharTy, getContext().ShortTy, getContext().UnsignedShortTy, getContext().IntTy, getContext().UnsignedIntTy, getContext().LongTy, getContext().UnsignedLongTy, getContext().LongLongTy, getContext().UnsignedLongLongTy, getContext().Int128Ty, getContext().UnsignedInt128Ty, getContext().HalfTy, getContext().FloatTy, getContext().DoubleTy, getContext().LongDoubleTy, getContext().Float128Ty, getContext().Char16Ty, getContext().Char32Ty }; for (const QualType &FundamentalType : FundamentalTypes) EmitFundamentalRTTIDescriptor(FundamentalType, DLLExport); } /// What sort of uniqueness rules should we use for the RTTI for the /// given type? ItaniumCXXABI::RTTIUniquenessKind ItaniumCXXABI::classifyRTTIUniqueness( QualType CanTy, llvm::GlobalValue::LinkageTypes Linkage) const { if (shouldRTTIBeUnique()) return RUK_Unique; // It's only necessary for linkonce_odr or weak_odr linkage. if (Linkage != llvm::GlobalValue::LinkOnceODRLinkage && Linkage != llvm::GlobalValue::WeakODRLinkage) return RUK_Unique; // It's only necessary with default visibility. if (CanTy->getVisibility() != DefaultVisibility) return RUK_Unique; // If we're not required to publish this symbol, hide it. if (Linkage == llvm::GlobalValue::LinkOnceODRLinkage) return RUK_NonUniqueHidden; // If we're required to publish this symbol, as we might be under an // explicit instantiation, leave it with default visibility but // enable string-comparisons. 
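  // As a concrete illustration: a type_info emitted weak_odr because of an
  // explicit template instantiation keeps its formal (typically default)
  // visibility, but its type-name field still has the non-unique bit set in
  // BuildTypeInfo above, so std::type_info comparisons for it fall back to
  // comparing the mangled name strings.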
assert(Linkage == llvm::GlobalValue::WeakODRLinkage); return RUK_NonUniqueVisible; } // Find out how to codegen the complete destructor and constructor namespace { enum class StructorCodegen { Emit, RAUW, Alias, COMDAT }; } static StructorCodegen getCodegenToUse(CodeGenModule &CGM, const CXXMethodDecl *MD) { if (!CGM.getCodeGenOpts().CXXCtorDtorAliases) return StructorCodegen::Emit; // The complete and base structors are not equivalent if there are any virtual // bases, so emit separate functions. if (MD->getParent()->getNumVBases()) return StructorCodegen::Emit; GlobalDecl AliasDecl; if (const auto *DD = dyn_cast(MD)) { AliasDecl = GlobalDecl(DD, Dtor_Complete); } else { const auto *CD = cast(MD); AliasDecl = GlobalDecl(CD, Ctor_Complete); } llvm::GlobalValue::LinkageTypes Linkage = CGM.getFunctionLinkage(AliasDecl); if (llvm::GlobalValue::isDiscardableIfUnused(Linkage)) return StructorCodegen::RAUW; // FIXME: Should we allow available_externally aliases? if (!llvm::GlobalAlias::isValidLinkage(Linkage)) return StructorCodegen::RAUW; if (llvm::GlobalValue::isWeakForLinker(Linkage)) { // Only ELF and wasm support COMDATs with arbitrary names (C5/D5). if (CGM.getTarget().getTriple().isOSBinFormatELF() || CGM.getTarget().getTriple().isOSBinFormatWasm()) return StructorCodegen::COMDAT; return StructorCodegen::Emit; } return StructorCodegen::Alias; } static void emitConstructorDestructorAlias(CodeGenModule &CGM, GlobalDecl AliasDecl, GlobalDecl TargetDecl) { llvm::GlobalValue::LinkageTypes Linkage = CGM.getFunctionLinkage(AliasDecl); StringRef MangledName = CGM.getMangledName(AliasDecl); llvm::GlobalValue *Entry = CGM.GetGlobalValue(MangledName); if (Entry && !Entry->isDeclaration()) return; auto *Aliasee = cast(CGM.GetAddrOfGlobal(TargetDecl)); // Create the alias with no name. auto *Alias = llvm::GlobalAlias::create(Linkage, "", Aliasee); // Switch any previous uses to the alias. if (Entry) { assert(Entry->getType() == Aliasee->getType() && "declaration exists with different type"); Alias->takeName(Entry); Entry->replaceAllUsesWith(Alias); Entry->eraseFromParent(); } else { Alias->setName(MangledName); } // Finally, set up the alias with its proper name and attributes. CGM.setAliasAttributes(cast(AliasDecl.getDecl()), Alias); } void ItaniumCXXABI::emitCXXStructor(const CXXMethodDecl *MD, StructorType Type) { auto *CD = dyn_cast(MD); const CXXDestructorDecl *DD = CD ? nullptr : cast(MD); StructorCodegen CGType = getCodegenToUse(CGM, MD); if (Type == StructorType::Complete) { GlobalDecl CompleteDecl; GlobalDecl BaseDecl; if (CD) { CompleteDecl = GlobalDecl(CD, Ctor_Complete); BaseDecl = GlobalDecl(CD, Ctor_Base); } else { CompleteDecl = GlobalDecl(DD, Dtor_Complete); BaseDecl = GlobalDecl(DD, Dtor_Base); } if (CGType == StructorCodegen::Alias || CGType == StructorCodegen::COMDAT) { emitConstructorDestructorAlias(CGM, CompleteDecl, BaseDecl); return; } if (CGType == StructorCodegen::RAUW) { StringRef MangledName = CGM.getMangledName(CompleteDecl); auto *Aliasee = CGM.GetAddrOfGlobal(BaseDecl); CGM.addReplacement(MangledName, Aliasee); return; } } // The base destructor is equivalent to the base destructor of its // base class if there is exactly one non-virtual base class with a // non-trivial destructor, there are no fields with a non-trivial // destructor, and the body of the destructor is trivial. 
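  // For example (roughly): given
  //   struct A { ~A(); };
  //   struct B : A { ~B(); };
  //   B::~B() {}              // trivial body, no members needing destruction
  // the base-object destructor _ZN1BD2Ev can be emitted as an alias of
  // _ZN1AD2Ev, which is what TryEmitBaseDestructorAsAlias attempts below.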
if (DD && Type == StructorType::Base && CGType != StructorCodegen::COMDAT && !CGM.TryEmitBaseDestructorAsAlias(DD)) return; llvm::Function *Fn = CGM.codegenCXXStructor(MD, Type); if (CGType == StructorCodegen::COMDAT) { SmallString<256> Buffer; llvm::raw_svector_ostream Out(Buffer); if (DD) getMangleContext().mangleCXXDtorComdat(DD, Out); else getMangleContext().mangleCXXCtorComdat(CD, Out); llvm::Comdat *C = CGM.getModule().getOrInsertComdat(Out.str()); Fn->setComdat(C); } else { CGM.maybeSetTrivialComdat(*MD, *Fn); } } static llvm::Constant *getBeginCatchFn(CodeGenModule &CGM) { // void *__cxa_begin_catch(void*); llvm::FunctionType *FTy = llvm::FunctionType::get( CGM.Int8PtrTy, CGM.Int8PtrTy, /*IsVarArgs=*/false); return CGM.CreateRuntimeFunction(FTy, "__cxa_begin_catch"); } static llvm::Constant *getEndCatchFn(CodeGenModule &CGM) { // void __cxa_end_catch(); llvm::FunctionType *FTy = llvm::FunctionType::get(CGM.VoidTy, /*IsVarArgs=*/false); return CGM.CreateRuntimeFunction(FTy, "__cxa_end_catch"); } static llvm::Constant *getGetExceptionPtrFn(CodeGenModule &CGM) { // void *__cxa_get_exception_ptr(void*); llvm::FunctionType *FTy = llvm::FunctionType::get( CGM.Int8PtrTy, CGM.Int8PtrTy, /*IsVarArgs=*/false); return CGM.CreateRuntimeFunction(FTy, "__cxa_get_exception_ptr"); } namespace { /// A cleanup to call __cxa_end_catch. In many cases, the caught /// exception type lets us state definitively that the thrown exception /// type does not have a destructor. In particular: /// - Catch-alls tell us nothing, so we have to conservatively /// assume that the thrown exception might have a destructor. /// - Catches by reference behave according to their base types. /// - Catches of non-record types will only trigger for exceptions /// of non-record types, which never have destructors. /// - Catches of record types can trigger for arbitrary subclasses /// of the caught type, so we have to assume the actual thrown /// exception type might have a throwing destructor, even if the /// caught type's destructor is trivial or nothrow. struct CallEndCatch final : EHScopeStack::Cleanup { CallEndCatch(bool MightThrow) : MightThrow(MightThrow) {} bool MightThrow; void Emit(CodeGenFunction &CGF, Flags flags) override { if (!MightThrow) { CGF.EmitNounwindRuntimeCall(getEndCatchFn(CGF.CGM)); return; } CGF.EmitRuntimeCallOrInvoke(getEndCatchFn(CGF.CGM)); } }; } /// Emits a call to __cxa_begin_catch and enters a cleanup to call /// __cxa_end_catch. /// /// \param EndMightThrow - true if __cxa_end_catch might throw static llvm::Value *CallBeginCatch(CodeGenFunction &CGF, llvm::Value *Exn, bool EndMightThrow) { llvm::CallInst *call = CGF.EmitNounwindRuntimeCall(getBeginCatchFn(CGF.CGM), Exn); CGF.EHStack.pushCleanup(NormalAndEHCleanup, EndMightThrow); return call; } /// A "special initializer" callback for initializing a catch /// parameter during catch initialization. static void InitCatchParam(CodeGenFunction &CGF, const VarDecl &CatchParam, Address ParamAddr, SourceLocation Loc) { // Load the exception from where the landing pad saved it. llvm::Value *Exn = CGF.getExceptionFromSlot(); CanQualType CatchType = CGF.CGM.getContext().getCanonicalType(CatchParam.getType()); llvm::Type *LLVMCatchTy = CGF.ConvertTypeForMem(CatchType); // If we're catching by reference, we can just cast the object // pointer to the appropriate pointer. 
if (isa(CatchType)) { QualType CaughtType = cast(CatchType)->getPointeeType(); bool EndCatchMightThrow = CaughtType->isRecordType(); // __cxa_begin_catch returns the adjusted object pointer. llvm::Value *AdjustedExn = CallBeginCatch(CGF, Exn, EndCatchMightThrow); // We have no way to tell the personality function that we're // catching by reference, so if we're catching a pointer, // __cxa_begin_catch will actually return that pointer by value. if (const PointerType *PT = dyn_cast(CaughtType)) { QualType PointeeType = PT->getPointeeType(); // When catching by reference, generally we should just ignore // this by-value pointer and use the exception object instead. if (!PointeeType->isRecordType()) { // Exn points to the struct _Unwind_Exception header, which // we have to skip past in order to reach the exception data. unsigned HeaderSize = CGF.CGM.getTargetCodeGenInfo().getSizeOfUnwindException(); AdjustedExn = CGF.Builder.CreateConstGEP1_32(Exn, HeaderSize); // However, if we're catching a pointer-to-record type that won't // work, because the personality function might have adjusted // the pointer. There's actually no way for us to fully satisfy // the language/ABI contract here: we can't use Exn because it // might have the wrong adjustment, but we can't use the by-value // pointer because it's off by a level of abstraction. // // The current solution is to dump the adjusted pointer into an // alloca, which breaks language semantics (because changing the // pointer doesn't change the exception) but at least works. // The better solution would be to filter out non-exact matches // and rethrow them, but this is tricky because the rethrow // really needs to be catchable by other sites at this landing // pad. The best solution is to fix the personality function. } else { // Pull the pointer for the reference type off. llvm::Type *PtrTy = cast(LLVMCatchTy)->getElementType(); // Create the temporary and write the adjusted pointer into it. Address ExnPtrTmp = CGF.CreateTempAlloca(PtrTy, CGF.getPointerAlign(), "exn.byref.tmp"); llvm::Value *Casted = CGF.Builder.CreateBitCast(AdjustedExn, PtrTy); CGF.Builder.CreateStore(Casted, ExnPtrTmp); // Bind the reference to the temporary. AdjustedExn = ExnPtrTmp.getPointer(); } } llvm::Value *ExnCast = CGF.Builder.CreateBitCast(AdjustedExn, LLVMCatchTy, "exn.byref"); CGF.Builder.CreateStore(ExnCast, ParamAddr); return; } // Scalars and complexes. TypeEvaluationKind TEK = CGF.getEvaluationKind(CatchType); if (TEK != TEK_Aggregate) { llvm::Value *AdjustedExn = CallBeginCatch(CGF, Exn, false); // If the catch type is a pointer type, __cxa_begin_catch returns // the pointer by value. if (CatchType->hasPointerRepresentation()) { llvm::Value *CastExn = CGF.Builder.CreateBitCast(AdjustedExn, LLVMCatchTy, "exn.casted"); switch (CatchType.getQualifiers().getObjCLifetime()) { case Qualifiers::OCL_Strong: CastExn = CGF.EmitARCRetainNonBlock(CastExn); // fallthrough case Qualifiers::OCL_None: case Qualifiers::OCL_ExplicitNone: case Qualifiers::OCL_Autoreleasing: CGF.Builder.CreateStore(CastExn, ParamAddr); return; case Qualifiers::OCL_Weak: CGF.EmitARCInitWeak(ParamAddr, CastExn); return; } llvm_unreachable("bad ownership qualifier!"); } // Otherwise, it returns a pointer into the exception object. 
llvm::Type *PtrTy = LLVMCatchTy->getPointerTo(0); // addrspace 0 ok llvm::Value *Cast = CGF.Builder.CreateBitCast(AdjustedExn, PtrTy); LValue srcLV = CGF.MakeNaturalAlignAddrLValue(Cast, CatchType); LValue destLV = CGF.MakeAddrLValue(ParamAddr, CatchType); switch (TEK) { case TEK_Complex: CGF.EmitStoreOfComplex(CGF.EmitLoadOfComplex(srcLV, Loc), destLV, /*init*/ true); return; case TEK_Scalar: { llvm::Value *ExnLoad = CGF.EmitLoadOfScalar(srcLV, Loc); CGF.EmitStoreOfScalar(ExnLoad, destLV, /*init*/ true); return; } case TEK_Aggregate: llvm_unreachable("evaluation kind filtered out!"); } llvm_unreachable("bad evaluation kind"); } assert(isa(CatchType) && "unexpected catch type!"); auto catchRD = CatchType->getAsCXXRecordDecl(); CharUnits caughtExnAlignment = CGF.CGM.getClassPointerAlignment(catchRD); llvm::Type *PtrTy = LLVMCatchTy->getPointerTo(0); // addrspace 0 ok // Check for a copy expression. If we don't have a copy expression, // that means a trivial copy is okay. const Expr *copyExpr = CatchParam.getInit(); if (!copyExpr) { llvm::Value *rawAdjustedExn = CallBeginCatch(CGF, Exn, true); Address adjustedExn(CGF.Builder.CreateBitCast(rawAdjustedExn, PtrTy), caughtExnAlignment); CGF.EmitAggregateCopy(ParamAddr, adjustedExn, CatchType); return; } // We have to call __cxa_get_exception_ptr to get the adjusted // pointer before copying. llvm::CallInst *rawAdjustedExn = CGF.EmitNounwindRuntimeCall(getGetExceptionPtrFn(CGF.CGM), Exn); // Cast that to the appropriate type. Address adjustedExn(CGF.Builder.CreateBitCast(rawAdjustedExn, PtrTy), caughtExnAlignment); // The copy expression is defined in terms of an OpaqueValueExpr. // Find it and map it to the adjusted expression. CodeGenFunction::OpaqueValueMapping opaque(CGF, OpaqueValueExpr::findInCopyConstruct(copyExpr), CGF.MakeAddrLValue(adjustedExn, CatchParam.getType())); // Call the copy ctor in a terminate scope. CGF.EHStack.pushTerminate(); // Perform the copy construction. CGF.EmitAggExpr(copyExpr, AggValueSlot::forAddr(ParamAddr, Qualifiers(), AggValueSlot::IsNotDestructed, AggValueSlot::DoesNotNeedGCBarriers, AggValueSlot::IsNotAliased)); // Leave the terminate scope. CGF.EHStack.popTerminate(); // Undo the opaque value mapping. opaque.pop(); // Finally we can call __cxa_begin_catch. CallBeginCatch(CGF, Exn, true); } /// Begins a catch statement by initializing the catch variable and /// calling __cxa_begin_catch. void ItaniumCXXABI::emitBeginCatch(CodeGenFunction &CGF, const CXXCatchStmt *S) { // We have to be very careful with the ordering of cleanups here: // C++ [except.throw]p4: // The destruction [of the exception temporary] occurs // immediately after the destruction of the object declared in // the exception-declaration in the handler. // // So the precise ordering is: // 1. Construct catch variable. // 2. __cxa_begin_catch // 3. Enter __cxa_end_catch cleanup // 4. Enter dtor cleanup // // We do this by using a slightly abnormal initialization process. 
// Delegation sequence: // - ExitCXXTryStmt opens a RunCleanupsScope // - EmitAutoVarAlloca creates the variable and debug info // - InitCatchParam initializes the variable from the exception // - CallBeginCatch calls __cxa_begin_catch // - CallBeginCatch enters the __cxa_end_catch cleanup // - EmitAutoVarCleanups enters the variable destructor cleanup // - EmitCXXTryStmt emits the code for the catch body // - EmitCXXTryStmt close the RunCleanupsScope VarDecl *CatchParam = S->getExceptionDecl(); if (!CatchParam) { llvm::Value *Exn = CGF.getExceptionFromSlot(); CallBeginCatch(CGF, Exn, true); return; } // Emit the local. CodeGenFunction::AutoVarEmission var = CGF.EmitAutoVarAlloca(*CatchParam); InitCatchParam(CGF, *CatchParam, var.getObjectAddress(CGF), S->getLocStart()); CGF.EmitAutoVarCleanups(var); } /// Get or define the following function: /// void @__clang_call_terminate(i8* %exn) nounwind noreturn /// This code is used only in C++. static llvm::Constant *getClangCallTerminateFn(CodeGenModule &CGM) { llvm::FunctionType *fnTy = llvm::FunctionType::get(CGM.VoidTy, CGM.Int8PtrTy, /*IsVarArgs=*/false); llvm::Constant *fnRef = CGM.CreateRuntimeFunction( fnTy, "__clang_call_terminate", llvm::AttributeList(), /*Local=*/true); llvm::Function *fn = dyn_cast(fnRef); if (fn && fn->empty()) { fn->setDoesNotThrow(); fn->setDoesNotReturn(); // What we really want is to massively penalize inlining without // forbidding it completely. The difference between that and // 'noinline' is negligible. fn->addFnAttr(llvm::Attribute::NoInline); // Allow this function to be shared across translation units, but // we don't want it to turn into an exported symbol. fn->setLinkage(llvm::Function::LinkOnceODRLinkage); fn->setVisibility(llvm::Function::HiddenVisibility); if (CGM.supportsCOMDAT()) fn->setComdat(CGM.getModule().getOrInsertComdat(fn->getName())); // Set up the function. llvm::BasicBlock *entry = llvm::BasicBlock::Create(CGM.getLLVMContext(), "", fn); CGBuilderTy builder(CGM, entry); // Pull the exception pointer out of the parameter list. llvm::Value *exn = &*fn->arg_begin(); // Call __cxa_begin_catch(exn). llvm::CallInst *catchCall = builder.CreateCall(getBeginCatchFn(CGM), exn); catchCall->setDoesNotThrow(); catchCall->setCallingConv(CGM.getRuntimeCC()); // Call std::terminate(). llvm::CallInst *termCall = builder.CreateCall(CGM.getTerminateFn()); termCall->setDoesNotThrow(); termCall->setDoesNotReturn(); termCall->setCallingConv(CGM.getRuntimeCC()); // std::terminate cannot return. builder.CreateUnreachable(); } return fnRef; } llvm::CallInst * ItaniumCXXABI::emitTerminateForUnexpectedException(CodeGenFunction &CGF, llvm::Value *Exn) { // In C++, we want to call __cxa_begin_catch() before terminating. if (Exn) { assert(CGF.CGM.getLangOpts().CPlusPlus); return CGF.EmitNounwindRuntimeCall(getClangCallTerminateFn(CGF.CGM), Exn); } return CGF.EmitNounwindRuntimeCall(CGF.CGM.getTerminateFn()); } diff --git a/lib/CodeGen/MicrosoftCXXABI.cpp b/lib/CodeGen/MicrosoftCXXABI.cpp index 78b510bb4665..1bd2937e4747 100644 --- a/lib/CodeGen/MicrosoftCXXABI.cpp +++ b/lib/CodeGen/MicrosoftCXXABI.cpp @@ -1,4245 +1,4243 @@ //===--- MicrosoftCXXABI.cpp - Emit LLVM Code from ASTs for a Module ------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This provides C++ code generation targeting the Microsoft Visual C++ ABI. 
// The class in this file generates structures that follow the Microsoft // Visual C++ ABI, which is actually not very well documented at all outside // of Microsoft. // //===----------------------------------------------------------------------===// #include "CGCXXABI.h" #include "CGCleanup.h" #include "CGVTables.h" #include "CodeGenModule.h" #include "CodeGenTypes.h" #include "TargetInfo.h" #include "clang/CodeGen/ConstantInitBuilder.h" #include "clang/AST/Decl.h" #include "clang/AST/DeclCXX.h" #include "clang/AST/StmtCXX.h" #include "clang/AST/VTableBuilder.h" #include "llvm/ADT/StringExtras.h" #include "llvm/ADT/StringSet.h" #include "llvm/IR/CallSite.h" #include "llvm/IR/Intrinsics.h" using namespace clang; using namespace CodeGen; namespace { /// Holds all the vbtable globals for a given class. struct VBTableGlobals { const VPtrInfoVector *VBTables; SmallVector Globals; }; class MicrosoftCXXABI : public CGCXXABI { public: MicrosoftCXXABI(CodeGenModule &CGM) : CGCXXABI(CGM), BaseClassDescriptorType(nullptr), ClassHierarchyDescriptorType(nullptr), CompleteObjectLocatorType(nullptr), CatchableTypeType(nullptr), ThrowInfoType(nullptr) {} bool HasThisReturn(GlobalDecl GD) const override; bool hasMostDerivedReturn(GlobalDecl GD) const override; bool classifyReturnType(CGFunctionInfo &FI) const override; RecordArgABI getRecordArgABI(const CXXRecordDecl *RD) const override; bool isSRetParameterAfterThis() const override { return true; } bool isThisCompleteObject(GlobalDecl GD) const override { // The Microsoft ABI doesn't use separate complete-object vs. // base-object variants of constructors, but it does of destructors. if (isa(GD.getDecl())) { switch (GD.getDtorType()) { case Dtor_Complete: case Dtor_Deleting: return true; case Dtor_Base: return false; case Dtor_Comdat: llvm_unreachable("emitting dtor comdat as function?"); } llvm_unreachable("bad dtor kind"); } // No other kinds. return false; } size_t getSrcArgforCopyCtor(const CXXConstructorDecl *CD, FunctionArgList &Args) const override { assert(Args.size() >= 2 && "expected the arglist to have at least two args!"); // The 'most_derived' parameter goes second if the ctor is variadic and // has v-bases. 
if (CD->getParent()->getNumVBases() > 0 && CD->getType()->castAs()->isVariadic()) return 2; return 1; } std::vector getVBPtrOffsets(const CXXRecordDecl *RD) override { std::vector VBPtrOffsets; const ASTContext &Context = getContext(); const ASTRecordLayout &Layout = Context.getASTRecordLayout(RD); const VBTableGlobals &VBGlobals = enumerateVBTables(RD); for (const std::unique_ptr &VBT : *VBGlobals.VBTables) { const ASTRecordLayout &SubobjectLayout = Context.getASTRecordLayout(VBT->IntroducingObject); CharUnits Offs = VBT->NonVirtualOffset; Offs += SubobjectLayout.getVBPtrOffset(); if (VBT->getVBaseWithVPtr()) Offs += Layout.getVBaseClassOffset(VBT->getVBaseWithVPtr()); VBPtrOffsets.push_back(Offs); } llvm::array_pod_sort(VBPtrOffsets.begin(), VBPtrOffsets.end()); return VBPtrOffsets; } StringRef GetPureVirtualCallName() override { return "_purecall"; } StringRef GetDeletedVirtualCallName() override { return "_purecall"; } void emitVirtualObjectDelete(CodeGenFunction &CGF, const CXXDeleteExpr *DE, Address Ptr, QualType ElementType, const CXXDestructorDecl *Dtor) override; void emitRethrow(CodeGenFunction &CGF, bool isNoReturn) override; void emitThrow(CodeGenFunction &CGF, const CXXThrowExpr *E) override; void emitBeginCatch(CodeGenFunction &CGF, const CXXCatchStmt *C) override; llvm::GlobalVariable *getMSCompleteObjectLocator(const CXXRecordDecl *RD, const VPtrInfo &Info); llvm::Constant *getAddrOfRTTIDescriptor(QualType Ty) override; CatchTypeInfo getAddrOfCXXCatchHandlerType(QualType Ty, QualType CatchHandlerType) override; /// MSVC needs an extra flag to indicate a catchall. CatchTypeInfo getCatchAllTypeInfo() override { return CatchTypeInfo{nullptr, 0x40}; } bool shouldTypeidBeNullChecked(bool IsDeref, QualType SrcRecordTy) override; void EmitBadTypeidCall(CodeGenFunction &CGF) override; llvm::Value *EmitTypeid(CodeGenFunction &CGF, QualType SrcRecordTy, Address ThisPtr, llvm::Type *StdTypeInfoPtrTy) override; bool shouldDynamicCastCallBeNullChecked(bool SrcIsPtr, QualType SrcRecordTy) override; llvm::Value *EmitDynamicCastCall(CodeGenFunction &CGF, Address Value, QualType SrcRecordTy, QualType DestTy, QualType DestRecordTy, llvm::BasicBlock *CastEnd) override; llvm::Value *EmitDynamicCastToVoid(CodeGenFunction &CGF, Address Value, QualType SrcRecordTy, QualType DestTy) override; bool EmitBadCastCall(CodeGenFunction &CGF) override; bool canSpeculativelyEmitVTable(const CXXRecordDecl *RD) const override { return false; } llvm::Value * GetVirtualBaseClassOffset(CodeGenFunction &CGF, Address This, const CXXRecordDecl *ClassDecl, const CXXRecordDecl *BaseClassDecl) override; llvm::BasicBlock * EmitCtorCompleteObjectHandler(CodeGenFunction &CGF, const CXXRecordDecl *RD) override; llvm::BasicBlock * EmitDtorCompleteObjectHandler(CodeGenFunction &CGF); void initializeHiddenVirtualInheritanceMembers(CodeGenFunction &CGF, const CXXRecordDecl *RD) override; void EmitCXXConstructors(const CXXConstructorDecl *D) override; // Background on MSVC destructors // ============================== // // Both Itanium and MSVC ABIs have destructor variants. The variant names // roughly correspond in the following way: // Itanium Microsoft // Base -> no name, just ~Class // Complete -> vbase destructor // Deleting -> scalar deleting destructor // vector deleting destructor // // The base and complete destructors are the same as in Itanium, although the // complete destructor does not accept a VTT parameter when there are virtual // bases. 
A separate mechanism involving vtordisps is used to ensure that // virtual methods of destroyed subobjects are not called. // // The deleting destructors accept an i32 bitfield as a second parameter. Bit // 1 indicates if the memory should be deleted. Bit 2 indicates if the this // pointer points to an array. The scalar deleting destructor assumes that // bit 2 is zero, and therefore does not contain a loop. // // For virtual destructors, only one entry is reserved in the vftable, and it // always points to the vector deleting destructor. The vector deleting // destructor is the most general, so it can be used to destroy objects in // place, delete single heap objects, or delete arrays. // // A TU defining a non-inline destructor is only guaranteed to emit a base // destructor, and all of the other variants are emitted on an as-needed basis // in COMDATs. Because a non-base destructor can be emitted in a TU that // lacks a definition for the destructor, non-base destructors must always // delegate to or alias the base destructor. AddedStructorArgs buildStructorSignature(const CXXMethodDecl *MD, StructorType T, SmallVectorImpl &ArgTys) override; /// Non-base dtors should be emitted as delegating thunks in this ABI. bool useThunkForDtorVariant(const CXXDestructorDecl *Dtor, CXXDtorType DT) const override { return DT != Dtor_Base; } void EmitCXXDestructors(const CXXDestructorDecl *D) override; const CXXRecordDecl * getThisArgumentTypeForMethod(const CXXMethodDecl *MD) override { MD = MD->getCanonicalDecl(); if (MD->isVirtual() && !isa(MD)) { MicrosoftVTableContext::MethodVFTableLocation ML = CGM.getMicrosoftVTableContext().getMethodVFTableLocation(MD); // The vbases might be ordered differently in the final overrider object // and the complete object, so the "this" argument may sometimes point to // memory that has no particular type (e.g. past the complete object). // In this case, we just use a generic pointer type. // FIXME: might want to have a more precise type in the non-virtual // multiple inheritance case. if (ML.VBase || !ML.VFPtrOffset.isZero()) return nullptr; } return MD->getParent(); } Address adjustThisArgumentForVirtualFunctionCall(CodeGenFunction &CGF, GlobalDecl GD, Address This, bool VirtualCall) override; void addImplicitStructorParams(CodeGenFunction &CGF, QualType &ResTy, FunctionArgList &Params) override; llvm::Value *adjustThisParameterInVirtualFunctionPrologue( CodeGenFunction &CGF, GlobalDecl GD, llvm::Value *This) override; void EmitInstanceFunctionProlog(CodeGenFunction &CGF) override; AddedStructorArgs addImplicitConstructorArgs(CodeGenFunction &CGF, const CXXConstructorDecl *D, CXXCtorType Type, bool ForVirtualBase, bool Delegating, CallArgList &Args) override; void EmitDestructorCall(CodeGenFunction &CGF, const CXXDestructorDecl *DD, CXXDtorType Type, bool ForVirtualBase, bool Delegating, Address This) override; void emitVTableTypeMetadata(const VPtrInfo &Info, const CXXRecordDecl *RD, llvm::GlobalVariable *VTable); void emitVTableDefinitions(CodeGenVTables &CGVT, const CXXRecordDecl *RD) override; bool isVirtualOffsetNeededForVTableField(CodeGenFunction &CGF, CodeGenFunction::VPtr Vptr) override; /// Don't initialize vptrs if dynamic class /// is marked with with the 'novtable' attribute. 
bool doStructorsInitializeVPtrs(const CXXRecordDecl *VTableClass) override { return !VTableClass->hasAttr(); } llvm::Constant * getVTableAddressPoint(BaseSubobject Base, const CXXRecordDecl *VTableClass) override; llvm::Value *getVTableAddressPointInStructor( CodeGenFunction &CGF, const CXXRecordDecl *VTableClass, BaseSubobject Base, const CXXRecordDecl *NearestVBase) override; llvm::Constant * getVTableAddressPointForConstExpr(BaseSubobject Base, const CXXRecordDecl *VTableClass) override; llvm::GlobalVariable *getAddrOfVTable(const CXXRecordDecl *RD, CharUnits VPtrOffset) override; CGCallee getVirtualFunctionPointer(CodeGenFunction &CGF, GlobalDecl GD, Address This, llvm::Type *Ty, SourceLocation Loc) override; llvm::Value *EmitVirtualDestructorCall(CodeGenFunction &CGF, const CXXDestructorDecl *Dtor, CXXDtorType DtorType, Address This, const CXXMemberCallExpr *CE) override; void adjustCallArgsForDestructorThunk(CodeGenFunction &CGF, GlobalDecl GD, CallArgList &CallArgs) override { assert(GD.getDtorType() == Dtor_Deleting && "Only deleting destructor thunks are available in this ABI"); CallArgs.add(RValue::get(getStructorImplicitParamValue(CGF)), getContext().IntTy); } void emitVirtualInheritanceTables(const CXXRecordDecl *RD) override; llvm::GlobalVariable * getAddrOfVBTable(const VPtrInfo &VBT, const CXXRecordDecl *RD, llvm::GlobalVariable::LinkageTypes Linkage); llvm::GlobalVariable * getAddrOfVirtualDisplacementMap(const CXXRecordDecl *SrcRD, const CXXRecordDecl *DstRD) { SmallString<256> OutName; llvm::raw_svector_ostream Out(OutName); getMangleContext().mangleCXXVirtualDisplacementMap(SrcRD, DstRD, Out); StringRef MangledName = OutName.str(); if (auto *VDispMap = CGM.getModule().getNamedGlobal(MangledName)) return VDispMap; MicrosoftVTableContext &VTContext = CGM.getMicrosoftVTableContext(); unsigned NumEntries = 1 + SrcRD->getNumVBases(); SmallVector Map(NumEntries, llvm::UndefValue::get(CGM.IntTy)); Map[0] = llvm::ConstantInt::get(CGM.IntTy, 0); bool AnyDifferent = false; for (const auto &I : SrcRD->vbases()) { const CXXRecordDecl *VBase = I.getType()->getAsCXXRecordDecl(); if (!DstRD->isVirtuallyDerivedFrom(VBase)) continue; unsigned SrcVBIndex = VTContext.getVBTableIndex(SrcRD, VBase); unsigned DstVBIndex = VTContext.getVBTableIndex(DstRD, VBase); Map[SrcVBIndex] = llvm::ConstantInt::get(CGM.IntTy, DstVBIndex * 4); AnyDifferent |= SrcVBIndex != DstVBIndex; } // This map would be useless, don't use it. if (!AnyDifferent) return nullptr; llvm::ArrayType *VDispMapTy = llvm::ArrayType::get(CGM.IntTy, Map.size()); llvm::Constant *Init = llvm::ConstantArray::get(VDispMapTy, Map); llvm::GlobalValue::LinkageTypes Linkage = SrcRD->isExternallyVisible() && DstRD->isExternallyVisible() ? llvm::GlobalValue::LinkOnceODRLinkage : llvm::GlobalValue::InternalLinkage; auto *VDispMap = new llvm::GlobalVariable( CGM.getModule(), VDispMapTy, /*Constant=*/true, Linkage, /*Initializer=*/Init, MangledName); return VDispMap; } void emitVBTableDefinition(const VPtrInfo &VBT, const CXXRecordDecl *RD, llvm::GlobalVariable *GV) const; void setThunkLinkage(llvm::Function *Thunk, bool ForVTable, GlobalDecl GD, bool ReturnAdjustment) override { // Never dllimport/dllexport thunks. 
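    // (Each image that references a thunk materializes its own copy with
    // linkonce_odr/weak_odr or internal linkage, as chosen below, so a DLL
    // storage class would only introduce spurious imports/exports.)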
Thunk->setDLLStorageClass(llvm::GlobalValue::DefaultStorageClass); GVALinkage Linkage = getContext().GetGVALinkageForFunction(cast(GD.getDecl())); if (Linkage == GVA_Internal) Thunk->setLinkage(llvm::GlobalValue::InternalLinkage); else if (ReturnAdjustment) Thunk->setLinkage(llvm::GlobalValue::WeakODRLinkage); else Thunk->setLinkage(llvm::GlobalValue::LinkOnceODRLinkage); } llvm::Value *performThisAdjustment(CodeGenFunction &CGF, Address This, const ThisAdjustment &TA) override; llvm::Value *performReturnAdjustment(CodeGenFunction &CGF, Address Ret, const ReturnAdjustment &RA) override; void EmitThreadLocalInitFuncs( CodeGenModule &CGM, ArrayRef CXXThreadLocals, ArrayRef CXXThreadLocalInits, ArrayRef CXXThreadLocalInitVars) override; bool usesThreadWrapperFunction() const override { return false; } LValue EmitThreadLocalVarDeclLValue(CodeGenFunction &CGF, const VarDecl *VD, QualType LValType) override; void EmitGuardedInit(CodeGenFunction &CGF, const VarDecl &D, llvm::GlobalVariable *DeclPtr, bool PerformInit) override; void registerGlobalDtor(CodeGenFunction &CGF, const VarDecl &D, llvm::Constant *Dtor, llvm::Constant *Addr) override; // ==== Notes on array cookies ========= // // MSVC seems to only use cookies when the class has a destructor; a // two-argument usual array deallocation function isn't sufficient. // // For example, this code prints "100" and "1": // struct A { // char x; // void *operator new[](size_t sz) { // printf("%u\n", sz); // return malloc(sz); // } // void operator delete[](void *p, size_t sz) { // printf("%u\n", sz); // free(p); // } // }; // int main() { // A *p = new A[100]; // delete[] p; // } // Whereas it prints "104" and "104" if you give A a destructor. bool requiresArrayCookie(const CXXDeleteExpr *expr, QualType elementType) override; bool requiresArrayCookie(const CXXNewExpr *expr) override; CharUnits getArrayCookieSizeImpl(QualType type) override; Address InitializeArrayCookie(CodeGenFunction &CGF, Address NewPtr, llvm::Value *NumElements, const CXXNewExpr *expr, QualType ElementType) override; llvm::Value *readArrayCookieImpl(CodeGenFunction &CGF, Address allocPtr, CharUnits cookieSize) override; friend struct MSRTTIBuilder; bool isImageRelative() const { return CGM.getTarget().getPointerWidth(/*AddressSpace=*/0) == 64; } // 5 routines for constructing the llvm types for MS RTTI structs. 
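  // Roughly, the MS RTTI data emitted per vftable hangs together like this
  // (illustrative sketch):
  //
  //   CompleteObjectLocator
  //     +-> TypeDescriptor of the most derived class
  //     +-> ClassHierarchyDescriptor
  //           +-> array of BaseClassDescriptor pointers, one per class in
  //               the hierarchy, each referring to that class's
  //               TypeDescriptor
  //
  // The helpers below lazily create the matching LLVM struct types.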
llvm::StructType *getTypeDescriptorType(StringRef TypeInfoString) { llvm::SmallString<32> TDTypeName("rtti.TypeDescriptor"); TDTypeName += llvm::utostr(TypeInfoString.size()); llvm::StructType *&TypeDescriptorType = TypeDescriptorTypeMap[TypeInfoString.size()]; if (TypeDescriptorType) return TypeDescriptorType; llvm::Type *FieldTypes[] = { CGM.Int8PtrPtrTy, CGM.Int8PtrTy, llvm::ArrayType::get(CGM.Int8Ty, TypeInfoString.size() + 1)}; TypeDescriptorType = llvm::StructType::create(CGM.getLLVMContext(), FieldTypes, TDTypeName); return TypeDescriptorType; } llvm::Type *getImageRelativeType(llvm::Type *PtrType) { if (!isImageRelative()) return PtrType; return CGM.IntTy; } llvm::StructType *getBaseClassDescriptorType() { if (BaseClassDescriptorType) return BaseClassDescriptorType; llvm::Type *FieldTypes[] = { getImageRelativeType(CGM.Int8PtrTy), CGM.IntTy, CGM.IntTy, CGM.IntTy, CGM.IntTy, CGM.IntTy, getImageRelativeType(getClassHierarchyDescriptorType()->getPointerTo()), }; BaseClassDescriptorType = llvm::StructType::create( CGM.getLLVMContext(), FieldTypes, "rtti.BaseClassDescriptor"); return BaseClassDescriptorType; } llvm::StructType *getClassHierarchyDescriptorType() { if (ClassHierarchyDescriptorType) return ClassHierarchyDescriptorType; // Forward-declare RTTIClassHierarchyDescriptor to break a cycle. ClassHierarchyDescriptorType = llvm::StructType::create( CGM.getLLVMContext(), "rtti.ClassHierarchyDescriptor"); llvm::Type *FieldTypes[] = { CGM.IntTy, CGM.IntTy, CGM.IntTy, getImageRelativeType( getBaseClassDescriptorType()->getPointerTo()->getPointerTo()), }; ClassHierarchyDescriptorType->setBody(FieldTypes); return ClassHierarchyDescriptorType; } llvm::StructType *getCompleteObjectLocatorType() { if (CompleteObjectLocatorType) return CompleteObjectLocatorType; CompleteObjectLocatorType = llvm::StructType::create( CGM.getLLVMContext(), "rtti.CompleteObjectLocator"); llvm::Type *FieldTypes[] = { CGM.IntTy, CGM.IntTy, CGM.IntTy, getImageRelativeType(CGM.Int8PtrTy), getImageRelativeType(getClassHierarchyDescriptorType()->getPointerTo()), getImageRelativeType(CompleteObjectLocatorType), }; llvm::ArrayRef FieldTypesRef(FieldTypes); if (!isImageRelative()) FieldTypesRef = FieldTypesRef.drop_back(); CompleteObjectLocatorType->setBody(FieldTypesRef); return CompleteObjectLocatorType; } llvm::GlobalVariable *getImageBase() { StringRef Name = "__ImageBase"; if (llvm::GlobalVariable *GV = CGM.getModule().getNamedGlobal(Name)) return GV; return new llvm::GlobalVariable(CGM.getModule(), CGM.Int8Ty, /*isConstant=*/true, llvm::GlobalValue::ExternalLinkage, /*Initializer=*/nullptr, Name); } llvm::Constant *getImageRelativeConstant(llvm::Constant *PtrVal) { if (!isImageRelative()) return PtrVal; if (PtrVal->isNullValue()) return llvm::Constant::getNullValue(CGM.IntTy); llvm::Constant *ImageBaseAsInt = llvm::ConstantExpr::getPtrToInt(getImageBase(), CGM.IntPtrTy); llvm::Constant *PtrValAsInt = llvm::ConstantExpr::getPtrToInt(PtrVal, CGM.IntPtrTy); llvm::Constant *Diff = llvm::ConstantExpr::getSub(PtrValAsInt, ImageBaseAsInt, /*HasNUW=*/true, /*HasNSW=*/true); return llvm::ConstantExpr::getTrunc(Diff, CGM.IntTy); } private: MicrosoftMangleContext &getMangleContext() { return cast(CodeGen::CGCXXABI::getMangleContext()); } llvm::Constant *getZeroInt() { return llvm::ConstantInt::get(CGM.IntTy, 0); } llvm::Constant *getAllOnesInt() { return llvm::Constant::getAllOnesValue(CGM.IntTy); } CharUnits getVirtualFunctionPrologueThisAdjustment(GlobalDecl GD) override; void GetNullMemberPointerFields(const 
MemberPointerType *MPT, llvm::SmallVectorImpl &fields); /// \brief Shared code for virtual base adjustment. Returns the offset from /// the vbptr to the virtual base. Optionally returns the address of the /// vbptr itself. llvm::Value *GetVBaseOffsetFromVBPtr(CodeGenFunction &CGF, Address Base, llvm::Value *VBPtrOffset, llvm::Value *VBTableOffset, llvm::Value **VBPtr = nullptr); llvm::Value *GetVBaseOffsetFromVBPtr(CodeGenFunction &CGF, Address Base, int32_t VBPtrOffset, int32_t VBTableOffset, llvm::Value **VBPtr = nullptr) { assert(VBTableOffset % 4 == 0 && "should be byte offset into table of i32s"); llvm::Value *VBPOffset = llvm::ConstantInt::get(CGM.IntTy, VBPtrOffset), *VBTOffset = llvm::ConstantInt::get(CGM.IntTy, VBTableOffset); return GetVBaseOffsetFromVBPtr(CGF, Base, VBPOffset, VBTOffset, VBPtr); } std::pair performBaseAdjustment(CodeGenFunction &CGF, Address Value, QualType SrcRecordTy); /// \brief Performs a full virtual base adjustment. Used to dereference /// pointers to members of virtual bases. llvm::Value *AdjustVirtualBase(CodeGenFunction &CGF, const Expr *E, const CXXRecordDecl *RD, Address Base, llvm::Value *VirtualBaseAdjustmentOffset, llvm::Value *VBPtrOffset /* optional */); /// \brief Emits a full member pointer with the fields common to data and /// function member pointers. llvm::Constant *EmitFullMemberPointer(llvm::Constant *FirstField, bool IsMemberFunction, const CXXRecordDecl *RD, CharUnits NonVirtualBaseAdjustment, unsigned VBTableIndex); bool MemberPointerConstantIsNull(const MemberPointerType *MPT, llvm::Constant *MP); /// \brief - Initialize all vbptrs of 'this' with RD as the complete type. void EmitVBPtrStores(CodeGenFunction &CGF, const CXXRecordDecl *RD); /// \brief Caching wrapper around VBTableBuilder::enumerateVBTables(). const VBTableGlobals &enumerateVBTables(const CXXRecordDecl *RD); /// \brief Generate a thunk for calling a virtual member function MD. 
llvm::Function *EmitVirtualMemPtrThunk( const CXXMethodDecl *MD, const MicrosoftVTableContext::MethodVFTableLocation &ML); public: llvm::Type *ConvertMemberPointerType(const MemberPointerType *MPT) override; bool isZeroInitializable(const MemberPointerType *MPT) override; bool isMemberPointerConvertible(const MemberPointerType *MPT) const override { const CXXRecordDecl *RD = MPT->getMostRecentCXXRecordDecl(); return RD->hasAttr(); } llvm::Constant *EmitNullMemberPointer(const MemberPointerType *MPT) override; llvm::Constant *EmitMemberDataPointer(const MemberPointerType *MPT, CharUnits offset) override; llvm::Constant *EmitMemberFunctionPointer(const CXXMethodDecl *MD) override; llvm::Constant *EmitMemberPointer(const APValue &MP, QualType MPT) override; llvm::Value *EmitMemberPointerComparison(CodeGenFunction &CGF, llvm::Value *L, llvm::Value *R, const MemberPointerType *MPT, bool Inequality) override; llvm::Value *EmitMemberPointerIsNotNull(CodeGenFunction &CGF, llvm::Value *MemPtr, const MemberPointerType *MPT) override; llvm::Value * EmitMemberDataPointerAddress(CodeGenFunction &CGF, const Expr *E, Address Base, llvm::Value *MemPtr, const MemberPointerType *MPT) override; llvm::Value *EmitNonNullMemberPointerConversion( const MemberPointerType *SrcTy, const MemberPointerType *DstTy, CastKind CK, CastExpr::path_const_iterator PathBegin, CastExpr::path_const_iterator PathEnd, llvm::Value *Src, CGBuilderTy &Builder); llvm::Value *EmitMemberPointerConversion(CodeGenFunction &CGF, const CastExpr *E, llvm::Value *Src) override; llvm::Constant *EmitMemberPointerConversion(const CastExpr *E, llvm::Constant *Src) override; llvm::Constant *EmitMemberPointerConversion( const MemberPointerType *SrcTy, const MemberPointerType *DstTy, CastKind CK, CastExpr::path_const_iterator PathBegin, CastExpr::path_const_iterator PathEnd, llvm::Constant *Src); CGCallee EmitLoadOfMemberFunctionPointer(CodeGenFunction &CGF, const Expr *E, Address This, llvm::Value *&ThisPtrForCall, llvm::Value *MemPtr, const MemberPointerType *MPT) override; void emitCXXStructor(const CXXMethodDecl *MD, StructorType Type) override; llvm::StructType *getCatchableTypeType() { if (CatchableTypeType) return CatchableTypeType; llvm::Type *FieldTypes[] = { CGM.IntTy, // Flags getImageRelativeType(CGM.Int8PtrTy), // TypeDescriptor CGM.IntTy, // NonVirtualAdjustment CGM.IntTy, // OffsetToVBPtr CGM.IntTy, // VBTableIndex CGM.IntTy, // Size getImageRelativeType(CGM.Int8PtrTy) // CopyCtor }; CatchableTypeType = llvm::StructType::create( CGM.getLLVMContext(), FieldTypes, "eh.CatchableType"); return CatchableTypeType; } llvm::StructType *getCatchableTypeArrayType(uint32_t NumEntries) { llvm::StructType *&CatchableTypeArrayType = CatchableTypeArrayTypeMap[NumEntries]; if (CatchableTypeArrayType) return CatchableTypeArrayType; llvm::SmallString<23> CTATypeName("eh.CatchableTypeArray."); CTATypeName += llvm::utostr(NumEntries); llvm::Type *CTType = getImageRelativeType(getCatchableTypeType()->getPointerTo()); llvm::Type *FieldTypes[] = { CGM.IntTy, // NumEntries llvm::ArrayType::get(CTType, NumEntries) // CatchableTypes }; CatchableTypeArrayType = llvm::StructType::create(CGM.getLLVMContext(), FieldTypes, CTATypeName); return CatchableTypeArrayType; } llvm::StructType *getThrowInfoType() { if (ThrowInfoType) return ThrowInfoType; llvm::Type *FieldTypes[] = { CGM.IntTy, // Flags getImageRelativeType(CGM.Int8PtrTy), // CleanupFn getImageRelativeType(CGM.Int8PtrTy), // ForwardCompat getImageRelativeType(CGM.Int8PtrTy) // CatchableTypeArray }; 
ThrowInfoType = llvm::StructType::create(CGM.getLLVMContext(), FieldTypes, "eh.ThrowInfo"); return ThrowInfoType; } llvm::Constant *getThrowFn() { // _CxxThrowException is passed an exception object and a ThrowInfo object // which describes the exception. llvm::Type *Args[] = {CGM.Int8PtrTy, getThrowInfoType()->getPointerTo()}; llvm::FunctionType *FTy = llvm::FunctionType::get(CGM.VoidTy, Args, /*IsVarArgs=*/false); auto *Fn = cast( CGM.CreateRuntimeFunction(FTy, "_CxxThrowException")); // _CxxThrowException is stdcall on 32-bit x86 platforms. if (CGM.getTarget().getTriple().getArch() == llvm::Triple::x86) Fn->setCallingConv(llvm::CallingConv::X86_StdCall); return Fn; } llvm::Function *getAddrOfCXXCtorClosure(const CXXConstructorDecl *CD, CXXCtorType CT); llvm::Constant *getCatchableType(QualType T, uint32_t NVOffset = 0, int32_t VBPtrOffset = -1, uint32_t VBIndex = 0); llvm::GlobalVariable *getCatchableTypeArray(QualType T); llvm::GlobalVariable *getThrowInfo(QualType T) override; private: typedef std::pair VFTableIdTy; typedef llvm::DenseMap VTablesMapTy; typedef llvm::DenseMap VFTablesMapTy; /// \brief All the vftables that have been referenced. VFTablesMapTy VFTablesMap; VTablesMapTy VTablesMap; /// \brief This set holds the record decls we've deferred vtable emission for. llvm::SmallPtrSet DeferredVFTables; /// \brief All the vbtables which have been referenced. llvm::DenseMap VBTablesMap; /// Info on the global variable used to guard initialization of static locals. /// The BitIndex field is only used for externally invisible declarations. struct GuardInfo { GuardInfo() : Guard(nullptr), BitIndex(0) {} llvm::GlobalVariable *Guard; unsigned BitIndex; }; /// Map from DeclContext to the current guard variable. We assume that the /// AST is visited in source code order. llvm::DenseMap GuardVariableMap; llvm::DenseMap ThreadLocalGuardVariableMap; llvm::DenseMap ThreadSafeGuardNumMap; llvm::DenseMap TypeDescriptorTypeMap; llvm::StructType *BaseClassDescriptorType; llvm::StructType *ClassHierarchyDescriptorType; llvm::StructType *CompleteObjectLocatorType; llvm::DenseMap CatchableTypeArrays; llvm::StructType *CatchableTypeType; llvm::DenseMap CatchableTypeArrayTypeMap; llvm::StructType *ThrowInfoType; }; } CGCXXABI::RecordArgABI MicrosoftCXXABI::getRecordArgABI(const CXXRecordDecl *RD) const { switch (CGM.getTarget().getTriple().getArch()) { default: // FIXME: Implement for other architectures. return RAA_Default; case llvm::Triple::thumb: // Use the simple Itanium rules for now. // FIXME: This is incompatible with MSVC for arguments with a dtor and no // copy ctor. return !canCopyArgument(RD) ? RAA_Indirect : RAA_Default; case llvm::Triple::x86: // All record arguments are passed in memory on x86. Decide whether to // construct the object directly in argument memory, or to construct the // argument elsewhere and copy the bytes during the call. // If C++ prohibits us from making a copy, construct the arguments directly // into argument memory. if (!canCopyArgument(RD)) return RAA_DirectInMemory; // Otherwise, construct the argument into a temporary and copy the bytes // into the outgoing argument memory. return RAA_Default; case llvm::Triple::x86_64: - // Win64 passes objects with non-trivial copy ctors indirectly. - if (RD->hasNonTrivialCopyConstructor()) - return RAA_Indirect; - - // If an object has a destructor, we'd really like to pass it indirectly + // If a class has a destructor, we'd really like to pass it indirectly // because it allows us to elide copies. 
    // Unfortunately, MSVC makes that impossible for small types, which it
    // will pass in a single register or stack slot. Most objects with dtors
    // are large-ish, so handle that early.
    // We can't call out all large objects as being indirect because there are
    // multiple x64 calling conventions and the C++ ABI code shouldn't dictate
    // how we pass large POD types.
+    //
+    // Note: This permits small classes with nontrivial destructors to be
+    // passed in registers, which is non-conforming.
    if (RD->hasNonTrivialDestructor() &&
        getContext().getTypeSize(RD->getTypeForDecl()) > 64)
      return RAA_Indirect;

-    // If this is true, the implicit copy constructor that Sema would have
-    // created would not be deleted. FIXME: We should provide a more direct way
-    // for CodeGen to ask whether the constructor was deleted.
-    if (!RD->hasUserDeclaredCopyConstructor() &&
-        !RD->hasUserDeclaredMoveConstructor() &&
-        !RD->needsOverloadResolutionForMoveConstructor() &&
-        !RD->hasUserDeclaredMoveAssignment() &&
-        !RD->needsOverloadResolutionForMoveAssignment())
-      return RAA_Default;
-
-    // Otherwise, Sema should have created an implicit copy constructor if
-    // needed.
-    assert(!RD->needsImplicitCopyConstructor());
-
-    // We have to make sure the trivial copy constructor isn't deleted.
-    for (const CXXConstructorDecl *CD : RD->ctors()) {
-      if (CD->isCopyConstructor()) {
-        assert(CD->isTrivial());
-        // We had at least one undeleted trivial copy ctor. Return directly.
-        if (!CD->isDeleted())
-          return RAA_Default;
+    // If a class has at least one non-deleted, trivial copy constructor, it
+    // is passed according to the C ABI. Otherwise, it is passed indirectly.
+    //
+    // Note: This permits classes with non-trivial copy or move ctors to be
+    // passed in registers, so long as they *also* have a trivial copy ctor,
+    // which is non-conforming.
+    if (RD->needsImplicitCopyConstructor()) {
+      // If the copy ctor has not yet been declared, we can read its triviality
+      // off the AST.
+      if (!RD->defaultedCopyConstructorIsDeleted() &&
+          RD->hasTrivialCopyConstructor())
+        return RAA_Default;
+    } else {
+      // Otherwise, we need to find the copy constructor(s) and ask.
+      for (const CXXConstructorDecl *CD : RD->ctors()) {
+        if (CD->isCopyConstructor()) {
+          // We had at least one nondeleted trivial copy ctor. Return directly.
+          if (!CD->isDeleted() && CD->isTrivial())
+            return RAA_Default;
+        }
      }
    }

-    // The trivial copy constructor was deleted. Return indirectly.
+    // We have no trivial, non-deleted copy constructor.
    return RAA_Indirect;
  }

  llvm_unreachable("invalid enum");
}

void MicrosoftCXXABI::emitVirtualObjectDelete(CodeGenFunction &CGF,
                                              const CXXDeleteExpr *DE,
                                              Address Ptr, QualType ElementType,
                                              const CXXDestructorDecl *Dtor) {
  // FIXME: Provide a source location here even though there's no
  // CXXMemberCallExpr for dtor call.
  bool UseGlobalDelete = DE->isGlobalDelete();
  CXXDtorType DtorType = UseGlobalDelete ?
Dtor_Complete : Dtor_Deleting; llvm::Value *MDThis = EmitVirtualDestructorCall(CGF, Dtor, DtorType, Ptr, /*CE=*/nullptr); if (UseGlobalDelete) CGF.EmitDeleteCall(DE->getOperatorDelete(), MDThis, ElementType); } void MicrosoftCXXABI::emitRethrow(CodeGenFunction &CGF, bool isNoReturn) { llvm::Value *Args[] = { llvm::ConstantPointerNull::get(CGM.Int8PtrTy), llvm::ConstantPointerNull::get(getThrowInfoType()->getPointerTo())}; auto *Fn = getThrowFn(); if (isNoReturn) CGF.EmitNoreturnRuntimeCallOrInvoke(Fn, Args); else CGF.EmitRuntimeCallOrInvoke(Fn, Args); } namespace { struct CatchRetScope final : EHScopeStack::Cleanup { llvm::CatchPadInst *CPI; CatchRetScope(llvm::CatchPadInst *CPI) : CPI(CPI) {} void Emit(CodeGenFunction &CGF, Flags flags) override { llvm::BasicBlock *BB = CGF.createBasicBlock("catchret.dest"); CGF.Builder.CreateCatchRet(CPI, BB); CGF.EmitBlock(BB); } }; } void MicrosoftCXXABI::emitBeginCatch(CodeGenFunction &CGF, const CXXCatchStmt *S) { // In the MS ABI, the runtime handles the copy, and the catch handler is // responsible for destruction. VarDecl *CatchParam = S->getExceptionDecl(); llvm::BasicBlock *CatchPadBB = CGF.Builder.GetInsertBlock(); llvm::CatchPadInst *CPI = cast(CatchPadBB->getFirstNonPHI()); CGF.CurrentFuncletPad = CPI; // If this is a catch-all or the catch parameter is unnamed, we don't need to // emit an alloca to the object. if (!CatchParam || !CatchParam->getDeclName()) { CGF.EHStack.pushCleanup(NormalCleanup, CPI); return; } CodeGenFunction::AutoVarEmission var = CGF.EmitAutoVarAlloca(*CatchParam); CPI->setArgOperand(2, var.getObjectAddress(CGF).getPointer()); CGF.EHStack.pushCleanup(NormalCleanup, CPI); CGF.EmitAutoVarCleanups(var); } /// We need to perform a generic polymorphic operation (like a typeid /// or a cast), which requires an object with a vfptr. Adjust the /// address to point to an object with a vfptr. std::pair MicrosoftCXXABI::performBaseAdjustment(CodeGenFunction &CGF, Address Value, QualType SrcRecordTy) { Value = CGF.Builder.CreateBitCast(Value, CGF.Int8PtrTy); const CXXRecordDecl *SrcDecl = SrcRecordTy->getAsCXXRecordDecl(); const ASTContext &Context = getContext(); // If the class itself has a vfptr, great. This check implicitly // covers non-virtual base subobjects: a class with its own virtual // functions would be a candidate to be a primary base. if (Context.getASTRecordLayout(SrcDecl).hasExtendableVFPtr()) return std::make_pair(Value, llvm::ConstantInt::get(CGF.Int32Ty, 0)); // Okay, one of the vbases must have a vfptr, or else this isn't // actually a polymorphic class. 
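  // For example (illustrative):
  //   struct A { virtual void f(); };
  //   struct B : virtual A { };   // B has no vfptr of its own; its only
  //                               // vfptr lives in the A virtual base.
  // A typeid or dynamic_cast on a B* must first adjust to that A subobject.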
const CXXRecordDecl *PolymorphicBase = nullptr; for (auto &Base : SrcDecl->vbases()) { const CXXRecordDecl *BaseDecl = Base.getType()->getAsCXXRecordDecl(); if (Context.getASTRecordLayout(BaseDecl).hasExtendableVFPtr()) { PolymorphicBase = BaseDecl; break; } } assert(PolymorphicBase && "polymorphic class has no apparent vfptr?"); llvm::Value *Offset = GetVirtualBaseClassOffset(CGF, Value, SrcDecl, PolymorphicBase); llvm::Value *Ptr = CGF.Builder.CreateInBoundsGEP(Value.getPointer(), Offset); CharUnits VBaseAlign = CGF.CGM.getVBaseAlignment(Value.getAlignment(), SrcDecl, PolymorphicBase); return std::make_pair(Address(Ptr, VBaseAlign), Offset); } bool MicrosoftCXXABI::shouldTypeidBeNullChecked(bool IsDeref, QualType SrcRecordTy) { const CXXRecordDecl *SrcDecl = SrcRecordTy->getAsCXXRecordDecl(); return IsDeref && !getContext().getASTRecordLayout(SrcDecl).hasExtendableVFPtr(); } static llvm::CallSite emitRTtypeidCall(CodeGenFunction &CGF, llvm::Value *Argument) { llvm::Type *ArgTypes[] = {CGF.Int8PtrTy}; llvm::FunctionType *FTy = llvm::FunctionType::get(CGF.Int8PtrTy, ArgTypes, false); llvm::Value *Args[] = {Argument}; llvm::Constant *Fn = CGF.CGM.CreateRuntimeFunction(FTy, "__RTtypeid"); return CGF.EmitRuntimeCallOrInvoke(Fn, Args); } void MicrosoftCXXABI::EmitBadTypeidCall(CodeGenFunction &CGF) { llvm::CallSite Call = emitRTtypeidCall(CGF, llvm::Constant::getNullValue(CGM.VoidPtrTy)); Call.setDoesNotReturn(); CGF.Builder.CreateUnreachable(); } llvm::Value *MicrosoftCXXABI::EmitTypeid(CodeGenFunction &CGF, QualType SrcRecordTy, Address ThisPtr, llvm::Type *StdTypeInfoPtrTy) { std::tie(ThisPtr, std::ignore) = performBaseAdjustment(CGF, ThisPtr, SrcRecordTy); auto Typeid = emitRTtypeidCall(CGF, ThisPtr.getPointer()).getInstruction(); return CGF.Builder.CreateBitCast(Typeid, StdTypeInfoPtrTy); } bool MicrosoftCXXABI::shouldDynamicCastCallBeNullChecked(bool SrcIsPtr, QualType SrcRecordTy) { const CXXRecordDecl *SrcDecl = SrcRecordTy->getAsCXXRecordDecl(); return SrcIsPtr && !getContext().getASTRecordLayout(SrcDecl).hasExtendableVFPtr(); } llvm::Value *MicrosoftCXXABI::EmitDynamicCastCall( CodeGenFunction &CGF, Address This, QualType SrcRecordTy, QualType DestTy, QualType DestRecordTy, llvm::BasicBlock *CastEnd) { llvm::Type *DestLTy = CGF.ConvertType(DestTy); llvm::Value *SrcRTTI = CGF.CGM.GetAddrOfRTTIDescriptor(SrcRecordTy.getUnqualifiedType()); llvm::Value *DestRTTI = CGF.CGM.GetAddrOfRTTIDescriptor(DestRecordTy.getUnqualifiedType()); llvm::Value *Offset; std::tie(This, Offset) = performBaseAdjustment(CGF, This, SrcRecordTy); llvm::Value *ThisPtr = This.getPointer(); Offset = CGF.Builder.CreateTrunc(Offset, CGF.Int32Ty); // PVOID __RTDynamicCast( // PVOID inptr, // LONG VfDelta, // PVOID SrcType, // PVOID TargetType, // BOOL isReference) llvm::Type *ArgTypes[] = {CGF.Int8PtrTy, CGF.Int32Ty, CGF.Int8PtrTy, CGF.Int8PtrTy, CGF.Int32Ty}; llvm::Constant *Function = CGF.CGM.CreateRuntimeFunction( llvm::FunctionType::get(CGF.Int8PtrTy, ArgTypes, false), "__RTDynamicCast"); llvm::Value *Args[] = { ThisPtr, Offset, SrcRTTI, DestRTTI, llvm::ConstantInt::get(CGF.Int32Ty, DestTy->isReferenceType())}; ThisPtr = CGF.EmitRuntimeCallOrInvoke(Function, Args).getInstruction(); return CGF.Builder.CreateBitCast(ThisPtr, DestLTy); } llvm::Value * MicrosoftCXXABI::EmitDynamicCastToVoid(CodeGenFunction &CGF, Address Value, QualType SrcRecordTy, QualType DestTy) { std::tie(Value, std::ignore) = performBaseAdjustment(CGF, Value, SrcRecordTy); // PVOID __RTCastToVoid( // PVOID inptr) llvm::Type *ArgTypes[] = 
{CGF.Int8PtrTy}; llvm::Constant *Function = CGF.CGM.CreateRuntimeFunction( llvm::FunctionType::get(CGF.Int8PtrTy, ArgTypes, false), "__RTCastToVoid"); llvm::Value *Args[] = {Value.getPointer()}; return CGF.EmitRuntimeCall(Function, Args); } bool MicrosoftCXXABI::EmitBadCastCall(CodeGenFunction &CGF) { return false; } llvm::Value *MicrosoftCXXABI::GetVirtualBaseClassOffset( CodeGenFunction &CGF, Address This, const CXXRecordDecl *ClassDecl, const CXXRecordDecl *BaseClassDecl) { const ASTContext &Context = getContext(); int64_t VBPtrChars = Context.getASTRecordLayout(ClassDecl).getVBPtrOffset().getQuantity(); llvm::Value *VBPtrOffset = llvm::ConstantInt::get(CGM.PtrDiffTy, VBPtrChars); CharUnits IntSize = Context.getTypeSizeInChars(Context.IntTy); CharUnits VBTableChars = IntSize * CGM.getMicrosoftVTableContext().getVBTableIndex(ClassDecl, BaseClassDecl); llvm::Value *VBTableOffset = llvm::ConstantInt::get(CGM.IntTy, VBTableChars.getQuantity()); llvm::Value *VBPtrToNewBase = GetVBaseOffsetFromVBPtr(CGF, This, VBPtrOffset, VBTableOffset); VBPtrToNewBase = CGF.Builder.CreateSExtOrBitCast(VBPtrToNewBase, CGM.PtrDiffTy); return CGF.Builder.CreateNSWAdd(VBPtrOffset, VBPtrToNewBase); } bool MicrosoftCXXABI::HasThisReturn(GlobalDecl GD) const { return isa(GD.getDecl()); } static bool isDeletingDtor(GlobalDecl GD) { return isa(GD.getDecl()) && GD.getDtorType() == Dtor_Deleting; } bool MicrosoftCXXABI::hasMostDerivedReturn(GlobalDecl GD) const { return isDeletingDtor(GD); } bool MicrosoftCXXABI::classifyReturnType(CGFunctionInfo &FI) const { const CXXRecordDecl *RD = FI.getReturnType()->getAsCXXRecordDecl(); if (!RD) return false; CharUnits Align = CGM.getContext().getTypeAlignInChars(FI.getReturnType()); if (FI.isInstanceMethod()) { // If it's an instance method, aggregates are always returned indirectly via // the second parameter. FI.getReturnInfo() = ABIArgInfo::getIndirect(Align, /*ByVal=*/false); FI.getReturnInfo().setSRetAfterThis(FI.isInstanceMethod()); return true; } else if (!RD->isPOD()) { // If it's a free function, non-POD types are returned indirectly. FI.getReturnInfo() = ABIArgInfo::getIndirect(Align, /*ByVal=*/false); return true; } // Otherwise, use the C ABI rules. return false; } llvm::BasicBlock * MicrosoftCXXABI::EmitCtorCompleteObjectHandler(CodeGenFunction &CGF, const CXXRecordDecl *RD) { llvm::Value *IsMostDerivedClass = getStructorImplicitParamValue(CGF); assert(IsMostDerivedClass && "ctor for a class with virtual bases must have an implicit parameter"); llvm::Value *IsCompleteObject = CGF.Builder.CreateIsNotNull(IsMostDerivedClass, "is_complete_object"); llvm::BasicBlock *CallVbaseCtorsBB = CGF.createBasicBlock("ctor.init_vbases"); llvm::BasicBlock *SkipVbaseCtorsBB = CGF.createBasicBlock("ctor.skip_vbases"); CGF.Builder.CreateCondBr(IsCompleteObject, CallVbaseCtorsBB, SkipVbaseCtorsBB); CGF.EmitBlock(CallVbaseCtorsBB); // Fill in the vbtable pointers here. EmitVBPtrStores(CGF, RD); // CGF will put the base ctor calls in this basic block for us later. 
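  // Net effect in the emitted constructor of a class with virtual bases is
  // roughly:
  //   if (is_most_derived) { store vbptrs; construct virtual bases; }
  //   ... then the usual non-virtual initialization follows.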
return SkipVbaseCtorsBB; } llvm::BasicBlock * MicrosoftCXXABI::EmitDtorCompleteObjectHandler(CodeGenFunction &CGF) { llvm::Value *IsMostDerivedClass = getStructorImplicitParamValue(CGF); assert(IsMostDerivedClass && "ctor for a class with virtual bases must have an implicit parameter"); llvm::Value *IsCompleteObject = CGF.Builder.CreateIsNotNull(IsMostDerivedClass, "is_complete_object"); llvm::BasicBlock *CallVbaseDtorsBB = CGF.createBasicBlock("Dtor.dtor_vbases"); llvm::BasicBlock *SkipVbaseDtorsBB = CGF.createBasicBlock("Dtor.skip_vbases"); CGF.Builder.CreateCondBr(IsCompleteObject, CallVbaseDtorsBB, SkipVbaseDtorsBB); CGF.EmitBlock(CallVbaseDtorsBB); // CGF will put the base dtor calls in this basic block for us later. return SkipVbaseDtorsBB; } void MicrosoftCXXABI::initializeHiddenVirtualInheritanceMembers( CodeGenFunction &CGF, const CXXRecordDecl *RD) { // In most cases, an override for a vbase virtual method can adjust // the "this" parameter by applying a constant offset. // However, this is not enough while a constructor or a destructor of some // class X is being executed if all the following conditions are met: // - X has virtual bases, (1) // - X overrides a virtual method M of a vbase Y, (2) // - X itself is a vbase of the most derived class. // // If (1) and (2) are true, the vtorDisp for vbase Y is a hidden member of X // which holds the extra amount of "this" adjustment we must do when we use // the X vftables (i.e. during X ctor or dtor). // Outside the ctors and dtors, the values of vtorDisps are zero. const ASTRecordLayout &Layout = getContext().getASTRecordLayout(RD); typedef ASTRecordLayout::VBaseOffsetsMapTy VBOffsets; const VBOffsets &VBaseMap = Layout.getVBaseOffsetsMap(); CGBuilderTy &Builder = CGF.Builder; unsigned AS = getThisAddress(CGF).getAddressSpace(); llvm::Value *Int8This = nullptr; // Initialize lazily. for (VBOffsets::const_iterator I = VBaseMap.begin(), E = VBaseMap.end(); I != E; ++I) { if (!I->second.hasVtorDisp()) continue; llvm::Value *VBaseOffset = GetVirtualBaseClassOffset(CGF, getThisAddress(CGF), RD, I->first); uint64_t ConstantVBaseOffset = Layout.getVBaseClassOffset(I->first).getQuantity(); // vtorDisp_for_vbase = vbptr[vbase_idx] - offsetof(RD, vbase). llvm::Value *VtorDispValue = Builder.CreateSub( VBaseOffset, llvm::ConstantInt::get(CGM.PtrDiffTy, ConstantVBaseOffset), "vtordisp.value"); VtorDispValue = Builder.CreateTruncOrBitCast(VtorDispValue, CGF.Int32Ty); if (!Int8This) Int8This = Builder.CreateBitCast(getThisValue(CGF), CGF.Int8Ty->getPointerTo(AS)); llvm::Value *VtorDispPtr = Builder.CreateInBoundsGEP(Int8This, VBaseOffset); // vtorDisp is always the 32-bits before the vbase in the class layout. VtorDispPtr = Builder.CreateConstGEP1_32(VtorDispPtr, -4); VtorDispPtr = Builder.CreateBitCast( VtorDispPtr, CGF.Int32Ty->getPointerTo(AS), "vtordisp.ptr"); Builder.CreateAlignedStore(VtorDispValue, VtorDispPtr, CharUnits::fromQuantity(4)); } } static bool hasDefaultCXXMethodCC(ASTContext &Context, const CXXMethodDecl *MD) { CallingConv ExpectedCallingConv = Context.getDefaultCallingConvention( /*IsVariadic=*/false, /*IsCXXMethod=*/true); CallingConv ActualCallingConv = MD->getType()->getAs()->getCallConv(); return ExpectedCallingConv == ActualCallingConv; } void MicrosoftCXXABI::EmitCXXConstructors(const CXXConstructorDecl *D) { // There's only one constructor type in this ABI. 
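  // (There is no complete/base constructor split as in Itanium; whether the
  // virtual bases get constructed is decided at run time by the implicit
  // 'is_most_derived' flag instead.)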
CGM.EmitGlobal(GlobalDecl(D, Ctor_Complete)); // Exported default constructors either have a simple call-site where they use // the typical calling convention and have a single 'this' pointer for an // argument -or- they get a wrapper function which appropriately thunks to the // real default constructor. This thunk is the default constructor closure. if (D->hasAttr() && D->isDefaultConstructor()) if (!hasDefaultCXXMethodCC(getContext(), D) || D->getNumParams() != 0) { llvm::Function *Fn = getAddrOfCXXCtorClosure(D, Ctor_DefaultClosure); Fn->setLinkage(llvm::GlobalValue::WeakODRLinkage); Fn->setDLLStorageClass(llvm::GlobalValue::DLLExportStorageClass); } } void MicrosoftCXXABI::EmitVBPtrStores(CodeGenFunction &CGF, const CXXRecordDecl *RD) { Address This = getThisAddress(CGF); This = CGF.Builder.CreateElementBitCast(This, CGM.Int8Ty, "this.int8"); const ASTContext &Context = getContext(); const ASTRecordLayout &Layout = Context.getASTRecordLayout(RD); const VBTableGlobals &VBGlobals = enumerateVBTables(RD); for (unsigned I = 0, E = VBGlobals.VBTables->size(); I != E; ++I) { const std::unique_ptr &VBT = (*VBGlobals.VBTables)[I]; llvm::GlobalVariable *GV = VBGlobals.Globals[I]; const ASTRecordLayout &SubobjectLayout = Context.getASTRecordLayout(VBT->IntroducingObject); CharUnits Offs = VBT->NonVirtualOffset; Offs += SubobjectLayout.getVBPtrOffset(); if (VBT->getVBaseWithVPtr()) Offs += Layout.getVBaseClassOffset(VBT->getVBaseWithVPtr()); Address VBPtr = CGF.Builder.CreateConstInBoundsByteGEP(This, Offs); llvm::Value *GVPtr = CGF.Builder.CreateConstInBoundsGEP2_32(GV->getValueType(), GV, 0, 0); VBPtr = CGF.Builder.CreateElementBitCast(VBPtr, GVPtr->getType(), "vbptr." + VBT->ObjectWithVPtr->getName()); CGF.Builder.CreateStore(GVPtr, VBPtr); } } CGCXXABI::AddedStructorArgs MicrosoftCXXABI::buildStructorSignature(const CXXMethodDecl *MD, StructorType T, SmallVectorImpl &ArgTys) { AddedStructorArgs Added; // TODO: 'for base' flag if (T == StructorType::Deleting) { // The scalar deleting destructor takes an implicit int parameter. ArgTys.push_back(getContext().IntTy); ++Added.Suffix; } auto *CD = dyn_cast(MD); if (!CD) return Added; // All parameters are already in place except is_most_derived, which goes // after 'this' if it's variadic and last if it's not. const CXXRecordDecl *Class = CD->getParent(); const FunctionProtoType *FPT = CD->getType()->castAs(); if (Class->getNumVBases()) { if (FPT->isVariadic()) { ArgTys.insert(ArgTys.begin() + 1, getContext().IntTy); ++Added.Prefix; } else { ArgTys.push_back(getContext().IntTy); ++Added.Suffix; } } return Added; } void MicrosoftCXXABI::EmitCXXDestructors(const CXXDestructorDecl *D) { // The TU defining a dtor is only guaranteed to emit a base destructor. All // other destructor variants are delegating thunks. CGM.EmitGlobal(GlobalDecl(D, Dtor_Base)); } CharUnits MicrosoftCXXABI::getVirtualFunctionPrologueThisAdjustment(GlobalDecl GD) { GD = GD.getCanonicalDecl(); const CXXMethodDecl *MD = cast(GD.getDecl()); GlobalDecl LookupGD = GD; if (const CXXDestructorDecl *DD = dyn_cast(MD)) { // Complete destructors take a pointer to the complete object as a // parameter, thus don't need this adjustment. if (GD.getDtorType() == Dtor_Complete) return CharUnits(); // There's no Dtor_Base in vftable but it shares the this adjustment with // the deleting one, so look it up instead. 
LookupGD = GlobalDecl(DD, Dtor_Deleting); } MicrosoftVTableContext::MethodVFTableLocation ML = CGM.getMicrosoftVTableContext().getMethodVFTableLocation(LookupGD); CharUnits Adjustment = ML.VFPtrOffset; // Normal virtual instance methods need to adjust from the vfptr that first // defined the virtual method to the virtual base subobject, but destructors // do not. The vector deleting destructor thunk applies this adjustment for // us if necessary. if (isa(MD)) Adjustment = CharUnits::Zero(); if (ML.VBase) { const ASTRecordLayout &DerivedLayout = getContext().getASTRecordLayout(MD->getParent()); Adjustment += DerivedLayout.getVBaseClassOffset(ML.VBase); } return Adjustment; } Address MicrosoftCXXABI::adjustThisArgumentForVirtualFunctionCall( CodeGenFunction &CGF, GlobalDecl GD, Address This, bool VirtualCall) { if (!VirtualCall) { // If the call of a virtual function is not virtual, we just have to // compensate for the adjustment the virtual function does in its prologue. CharUnits Adjustment = getVirtualFunctionPrologueThisAdjustment(GD); if (Adjustment.isZero()) return This; This = CGF.Builder.CreateElementBitCast(This, CGF.Int8Ty); assert(Adjustment.isPositive()); return CGF.Builder.CreateConstByteGEP(This, Adjustment); } GD = GD.getCanonicalDecl(); const CXXMethodDecl *MD = cast(GD.getDecl()); GlobalDecl LookupGD = GD; if (const CXXDestructorDecl *DD = dyn_cast(MD)) { // Complete dtors take a pointer to the complete object, // thus don't need adjustment. if (GD.getDtorType() == Dtor_Complete) return This; // There's only Dtor_Deleting in vftable but it shares the this adjustment // with the base one, so look up the deleting one instead. LookupGD = GlobalDecl(DD, Dtor_Deleting); } MicrosoftVTableContext::MethodVFTableLocation ML = CGM.getMicrosoftVTableContext().getMethodVFTableLocation(LookupGD); CharUnits StaticOffset = ML.VFPtrOffset; // Base destructors expect 'this' to point to the beginning of the base // subobject, not the first vfptr that happens to contain the virtual dtor. // However, we still need to apply the virtual base adjustment. if (isa(MD) && GD.getDtorType() == Dtor_Base) StaticOffset = CharUnits::Zero(); Address Result = This; if (ML.VBase) { Result = CGF.Builder.CreateElementBitCast(Result, CGF.Int8Ty); const CXXRecordDecl *Derived = MD->getParent(); const CXXRecordDecl *VBase = ML.VBase; llvm::Value *VBaseOffset = GetVirtualBaseClassOffset(CGF, Result, Derived, VBase); llvm::Value *VBasePtr = CGF.Builder.CreateInBoundsGEP(Result.getPointer(), VBaseOffset); CharUnits VBaseAlign = CGF.CGM.getVBaseAlignment(Result.getAlignment(), Derived, VBase); Result = Address(VBasePtr, VBaseAlign); } if (!StaticOffset.isZero()) { assert(StaticOffset.isPositive()); Result = CGF.Builder.CreateElementBitCast(Result, CGF.Int8Ty); if (ML.VBase) { // Non-virtual adjustment might result in a pointer outside the allocated // object, e.g. if the final overrider class is laid out after the virtual // base that declares a method in the most derived class. // FIXME: Update the code that emits this adjustment in thunks prologues. 
Result = CGF.Builder.CreateConstByteGEP(Result, StaticOffset); } else { Result = CGF.Builder.CreateConstInBoundsByteGEP(Result, StaticOffset); } } return Result; } void MicrosoftCXXABI::addImplicitStructorParams(CodeGenFunction &CGF, QualType &ResTy, FunctionArgList &Params) { ASTContext &Context = getContext(); const CXXMethodDecl *MD = cast(CGF.CurGD.getDecl()); assert(isa(MD) || isa(MD)); if (isa(MD) && MD->getParent()->getNumVBases()) { auto *IsMostDerived = ImplicitParamDecl::Create( Context, /*DC=*/nullptr, CGF.CurGD.getDecl()->getLocation(), &Context.Idents.get("is_most_derived"), Context.IntTy, ImplicitParamDecl::Other); // The 'most_derived' parameter goes second if the ctor is variadic and last // if it's not. Dtors can't be variadic. const FunctionProtoType *FPT = MD->getType()->castAs(); if (FPT->isVariadic()) Params.insert(Params.begin() + 1, IsMostDerived); else Params.push_back(IsMostDerived); getStructorImplicitParamDecl(CGF) = IsMostDerived; } else if (isDeletingDtor(CGF.CurGD)) { auto *ShouldDelete = ImplicitParamDecl::Create( Context, /*DC=*/nullptr, CGF.CurGD.getDecl()->getLocation(), &Context.Idents.get("should_call_delete"), Context.IntTy, ImplicitParamDecl::Other); Params.push_back(ShouldDelete); getStructorImplicitParamDecl(CGF) = ShouldDelete; } } llvm::Value *MicrosoftCXXABI::adjustThisParameterInVirtualFunctionPrologue( CodeGenFunction &CGF, GlobalDecl GD, llvm::Value *This) { // In this ABI, every virtual function takes a pointer to one of the // subobjects that first defines it as the 'this' parameter, rather than a // pointer to the final overrider subobject. Thus, we need to adjust it back // to the final overrider subobject before use. // See comments in the MicrosoftVFTableContext implementation for the details. CharUnits Adjustment = getVirtualFunctionPrologueThisAdjustment(GD); if (Adjustment.isZero()) return This; unsigned AS = cast(This->getType())->getAddressSpace(); llvm::Type *charPtrTy = CGF.Int8Ty->getPointerTo(AS), *thisTy = This->getType(); This = CGF.Builder.CreateBitCast(This, charPtrTy); assert(Adjustment.isPositive()); This = CGF.Builder.CreateConstInBoundsGEP1_32(CGF.Int8Ty, This, -Adjustment.getQuantity()); return CGF.Builder.CreateBitCast(This, thisTy); } void MicrosoftCXXABI::EmitInstanceFunctionProlog(CodeGenFunction &CGF) { // Naked functions have no prolog. if (CGF.CurFuncDecl && CGF.CurFuncDecl->hasAttr()) return; EmitThisParam(CGF); /// If this is a function that the ABI specifies returns 'this', initialize /// the return slot to 'this' at the start of the function. 
/// /// Unlike the setting of return types, this is done within the ABI /// implementation instead of by clients of CGCXXABI because: /// 1) getThisValue is currently protected /// 2) in theory, an ABI could implement 'this' returns some other way; /// HasThisReturn only specifies a contract, not the implementation if (HasThisReturn(CGF.CurGD)) CGF.Builder.CreateStore(getThisValue(CGF), CGF.ReturnValue); else if (hasMostDerivedReturn(CGF.CurGD)) CGF.Builder.CreateStore(CGF.EmitCastToVoidPtr(getThisValue(CGF)), CGF.ReturnValue); const CXXMethodDecl *MD = cast(CGF.CurGD.getDecl()); if (isa(MD) && MD->getParent()->getNumVBases()) { assert(getStructorImplicitParamDecl(CGF) && "no implicit parameter for a constructor with virtual bases?"); getStructorImplicitParamValue(CGF) = CGF.Builder.CreateLoad( CGF.GetAddrOfLocalVar(getStructorImplicitParamDecl(CGF)), "is_most_derived"); } if (isDeletingDtor(CGF.CurGD)) { assert(getStructorImplicitParamDecl(CGF) && "no implicit parameter for a deleting destructor?"); getStructorImplicitParamValue(CGF) = CGF.Builder.CreateLoad( CGF.GetAddrOfLocalVar(getStructorImplicitParamDecl(CGF)), "should_call_delete"); } } CGCXXABI::AddedStructorArgs MicrosoftCXXABI::addImplicitConstructorArgs( CodeGenFunction &CGF, const CXXConstructorDecl *D, CXXCtorType Type, bool ForVirtualBase, bool Delegating, CallArgList &Args) { assert(Type == Ctor_Complete || Type == Ctor_Base); // Check if we need a 'most_derived' parameter. if (!D->getParent()->getNumVBases()) return AddedStructorArgs{}; // Add the 'most_derived' argument second if we are variadic or last if not. const FunctionProtoType *FPT = D->getType()->castAs(); llvm::Value *MostDerivedArg; if (Delegating) { MostDerivedArg = getStructorImplicitParamValue(CGF); } else { MostDerivedArg = llvm::ConstantInt::get(CGM.Int32Ty, Type == Ctor_Complete); } RValue RV = RValue::get(MostDerivedArg); if (FPT->isVariadic()) { Args.insert(Args.begin() + 1, CallArg(RV, getContext().IntTy, /*needscopy=*/false)); return AddedStructorArgs::prefix(1); } Args.add(RV, getContext().IntTy); return AddedStructorArgs::suffix(1); } void MicrosoftCXXABI::EmitDestructorCall(CodeGenFunction &CGF, const CXXDestructorDecl *DD, CXXDtorType Type, bool ForVirtualBase, bool Delegating, Address This) { CGCallee Callee = CGCallee::forDirect( CGM.getAddrOfCXXStructor(DD, getFromDtorType(Type)), DD); if (DD->isVirtual()) { assert(Type != CXXDtorType::Dtor_Deleting && "The deleting destructor should only be called via a virtual call"); This = adjustThisArgumentForVirtualFunctionCall(CGF, GlobalDecl(DD, Type), This, false); } llvm::BasicBlock *BaseDtorEndBB = nullptr; if (ForVirtualBase && isa(CGF.CurCodeDecl)) { BaseDtorEndBB = EmitDtorCompleteObjectHandler(CGF); } CGF.EmitCXXDestructorCall(DD, Callee, This.getPointer(), /*ImplicitParam=*/nullptr, /*ImplicitParamTy=*/QualType(), nullptr, getFromDtorType(Type)); if (BaseDtorEndBB) { // Complete object handler should continue to be the remaining CGF.Builder.CreateBr(BaseDtorEndBB); CGF.EmitBlock(BaseDtorEndBB); } } void MicrosoftCXXABI::emitVTableTypeMetadata(const VPtrInfo &Info, const CXXRecordDecl *RD, llvm::GlobalVariable *VTable) { if (!CGM.getCodeGenOpts().LTOUnit) return; // The location of the first virtual function pointer in the virtual table, // aka the "address point" on Itanium. This is at offset 0 if RTTI is // disabled, or sizeof(void*) if RTTI is enabled. CharUnits AddressPoint = getContext().getLangOpts().RTTIData ? 
getContext().toCharUnitsFromBits( getContext().getTargetInfo().getPointerWidth(0)) : CharUnits::Zero(); if (Info.PathToIntroducingObject.empty()) { CGM.AddVTableTypeMetadata(VTable, AddressPoint, RD); return; } // Add a bitset entry for the least derived base belonging to this vftable. CGM.AddVTableTypeMetadata(VTable, AddressPoint, Info.PathToIntroducingObject.back()); // Add a bitset entry for each derived class that is laid out at the same // offset as the least derived base. for (unsigned I = Info.PathToIntroducingObject.size() - 1; I != 0; --I) { const CXXRecordDecl *DerivedRD = Info.PathToIntroducingObject[I - 1]; const CXXRecordDecl *BaseRD = Info.PathToIntroducingObject[I]; const ASTRecordLayout &Layout = getContext().getASTRecordLayout(DerivedRD); CharUnits Offset; auto VBI = Layout.getVBaseOffsetsMap().find(BaseRD); if (VBI == Layout.getVBaseOffsetsMap().end()) Offset = Layout.getBaseClassOffset(BaseRD); else Offset = VBI->second.VBaseOffset; if (!Offset.isZero()) return; CGM.AddVTableTypeMetadata(VTable, AddressPoint, DerivedRD); } // Finally do the same for the most derived class. if (Info.FullOffsetInMDC.isZero()) CGM.AddVTableTypeMetadata(VTable, AddressPoint, RD); } void MicrosoftCXXABI::emitVTableDefinitions(CodeGenVTables &CGVT, const CXXRecordDecl *RD) { MicrosoftVTableContext &VFTContext = CGM.getMicrosoftVTableContext(); const VPtrInfoVector &VFPtrs = VFTContext.getVFPtrOffsets(RD); for (const std::unique_ptr& Info : VFPtrs) { llvm::GlobalVariable *VTable = getAddrOfVTable(RD, Info->FullOffsetInMDC); if (VTable->hasInitializer()) continue; const VTableLayout &VTLayout = VFTContext.getVFTableLayout(RD, Info->FullOffsetInMDC); llvm::Constant *RTTI = nullptr; if (any_of(VTLayout.vtable_components(), [](const VTableComponent &VTC) { return VTC.isRTTIKind(); })) RTTI = getMSCompleteObjectLocator(RD, *Info); ConstantInitBuilder Builder(CGM); auto Components = Builder.beginStruct(); CGVT.createVTableInitializer(Components, VTLayout, RTTI); Components.finishAndSetAsInitializer(VTable); emitVTableTypeMetadata(*Info, RD, VTable); } } bool MicrosoftCXXABI::isVirtualOffsetNeededForVTableField( CodeGenFunction &CGF, CodeGenFunction::VPtr Vptr) { return Vptr.NearestVBase != nullptr; } llvm::Value *MicrosoftCXXABI::getVTableAddressPointInStructor( CodeGenFunction &CGF, const CXXRecordDecl *VTableClass, BaseSubobject Base, const CXXRecordDecl *NearestVBase) { llvm::Constant *VTableAddressPoint = getVTableAddressPoint(Base, VTableClass); if (!VTableAddressPoint) { assert(Base.getBase()->getNumVBases() && !getContext().getASTRecordLayout(Base.getBase()).hasOwnVFPtr()); } return VTableAddressPoint; } static void mangleVFTableName(MicrosoftMangleContext &MangleContext, const CXXRecordDecl *RD, const VPtrInfo &VFPtr, SmallString<256> &Name) { llvm::raw_svector_ostream Out(Name); MangleContext.mangleCXXVFTable(RD, VFPtr.MangledPath, Out); } llvm::Constant * MicrosoftCXXABI::getVTableAddressPoint(BaseSubobject Base, const CXXRecordDecl *VTableClass) { (void)getAddrOfVTable(VTableClass, Base.getBaseOffset()); VFTableIdTy ID(VTableClass, Base.getBaseOffset()); return VFTablesMap[ID]; } llvm::Constant *MicrosoftCXXABI::getVTableAddressPointForConstExpr( BaseSubobject Base, const CXXRecordDecl *VTableClass) { llvm::Constant *VFTable = getVTableAddressPoint(Base, VTableClass); assert(VFTable && "Couldn't find a vftable for the given base?"); return VFTable; } llvm::GlobalVariable *MicrosoftCXXABI::getAddrOfVTable(const CXXRecordDecl *RD, CharUnits VPtrOffset) { // getAddrOfVTable may return 0 if 
asked to get an address of a vtable which // shouldn't be used in the given record type. We want to cache this result in // VFTablesMap, thus a simple zero check is not sufficient. VFTableIdTy ID(RD, VPtrOffset); VTablesMapTy::iterator I; bool Inserted; std::tie(I, Inserted) = VTablesMap.insert(std::make_pair(ID, nullptr)); if (!Inserted) return I->second; llvm::GlobalVariable *&VTable = I->second; MicrosoftVTableContext &VTContext = CGM.getMicrosoftVTableContext(); const VPtrInfoVector &VFPtrs = VTContext.getVFPtrOffsets(RD); if (DeferredVFTables.insert(RD).second) { // We haven't processed this record type before. // Queue up this vtable for possible deferred emission. CGM.addDeferredVTable(RD); #ifndef NDEBUG // Create all the vftables at once in order to make sure each vftable has // a unique mangled name. llvm::StringSet<> ObservedMangledNames; for (size_t J = 0, F = VFPtrs.size(); J != F; ++J) { SmallString<256> Name; mangleVFTableName(getMangleContext(), RD, *VFPtrs[J], Name); if (!ObservedMangledNames.insert(Name.str()).second) llvm_unreachable("Already saw this mangling before?"); } #endif } const std::unique_ptr *VFPtrI = std::find_if( VFPtrs.begin(), VFPtrs.end(), [&](const std::unique_ptr& VPI) { return VPI->FullOffsetInMDC == VPtrOffset; }); if (VFPtrI == VFPtrs.end()) { VFTablesMap[ID] = nullptr; return nullptr; } const std::unique_ptr &VFPtr = *VFPtrI; SmallString<256> VFTableName; mangleVFTableName(getMangleContext(), RD, *VFPtr, VFTableName); // Classes marked __declspec(dllimport) need vftables generated on the // import-side in order to support features like constexpr. No other // translation unit relies on the emission of the local vftable, translation // units are expected to generate them as needed. // // Because of this unique behavior, we maintain this logic here instead of // getVTableLinkage. llvm::GlobalValue::LinkageTypes VFTableLinkage = RD->hasAttr() ? llvm::GlobalValue::LinkOnceODRLinkage : CGM.getVTableLinkage(RD); bool VFTableComesFromAnotherTU = llvm::GlobalValue::isAvailableExternallyLinkage(VFTableLinkage) || llvm::GlobalValue::isExternalLinkage(VFTableLinkage); bool VTableAliasIsRequred = !VFTableComesFromAnotherTU && getContext().getLangOpts().RTTIData; if (llvm::GlobalValue *VFTable = CGM.getModule().getNamedGlobal(VFTableName)) { VFTablesMap[ID] = VFTable; VTable = VTableAliasIsRequred ? cast( cast(VFTable)->getBaseObject()) : cast(VFTable); return VTable; } const VTableLayout &VTLayout = VTContext.getVFTableLayout(RD, VFPtr->FullOffsetInMDC); llvm::GlobalValue::LinkageTypes VTableLinkage = VTableAliasIsRequred ? llvm::GlobalValue::PrivateLinkage : VFTableLinkage; StringRef VTableName = VTableAliasIsRequred ? StringRef() : VFTableName.str(); llvm::Type *VTableType = CGM.getVTables().getVTableType(VTLayout); // Create a backing variable for the contents of VTable. The VTable may // or may not include space for a pointer to RTTI data. llvm::GlobalValue *VFTable; VTable = new llvm::GlobalVariable(CGM.getModule(), VTableType, /*isConstant=*/true, VTableLinkage, /*Initializer=*/nullptr, VTableName); VTable->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Global); llvm::Comdat *C = nullptr; if (!VFTableComesFromAnotherTU && (llvm::GlobalValue::isWeakForLinker(VFTableLinkage) || (llvm::GlobalValue::isLocalLinkage(VFTableLinkage) && VTableAliasIsRequred))) C = CGM.getModule().getOrInsertComdat(VFTableName.str()); // Only insert a pointer into the VFTable for RTTI data if we are not // importing it. 
We never reference the RTTI data directly so there is no // need to make room for it. if (VTableAliasIsRequred) { llvm::Value *GEPIndices[] = {llvm::ConstantInt::get(CGM.Int32Ty, 0), llvm::ConstantInt::get(CGM.Int32Ty, 0), llvm::ConstantInt::get(CGM.Int32Ty, 1)}; // Create a GEP which points just after the first entry in the VFTable, // this should be the location of the first virtual method. llvm::Constant *VTableGEP = llvm::ConstantExpr::getInBoundsGetElementPtr( VTable->getValueType(), VTable, GEPIndices); if (llvm::GlobalValue::isWeakForLinker(VFTableLinkage)) { VFTableLinkage = llvm::GlobalValue::ExternalLinkage; if (C) C->setSelectionKind(llvm::Comdat::Largest); } VFTable = llvm::GlobalAlias::create(CGM.Int8PtrTy, /*AddressSpace=*/0, VFTableLinkage, VFTableName.str(), VTableGEP, &CGM.getModule()); VFTable->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Global); } else { // We don't need a GlobalAlias to be a symbol for the VTable if we won't // be referencing any RTTI data. // The GlobalVariable will end up being an appropriate definition of the // VFTable. VFTable = VTable; } if (C) VTable->setComdat(C); if (RD->hasAttr()) VFTable->setDLLStorageClass(llvm::GlobalValue::DLLExportStorageClass); VFTablesMap[ID] = VFTable; return VTable; } CGCallee MicrosoftCXXABI::getVirtualFunctionPointer(CodeGenFunction &CGF, GlobalDecl GD, Address This, llvm::Type *Ty, SourceLocation Loc) { GD = GD.getCanonicalDecl(); CGBuilderTy &Builder = CGF.Builder; Ty = Ty->getPointerTo()->getPointerTo(); Address VPtr = adjustThisArgumentForVirtualFunctionCall(CGF, GD, This, true); auto *MethodDecl = cast(GD.getDecl()); llvm::Value *VTable = CGF.GetVTablePtr(VPtr, Ty, MethodDecl->getParent()); MicrosoftVTableContext &VFTContext = CGM.getMicrosoftVTableContext(); MicrosoftVTableContext::MethodVFTableLocation ML = VFTContext.getMethodVFTableLocation(GD); // Compute the identity of the most derived class whose virtual table is // located at the MethodVFTableLocation ML. auto getObjectWithVPtr = [&] { return llvm::find_if(VFTContext.getVFPtrOffsets( ML.VBase ? ML.VBase : MethodDecl->getParent()), [&](const std::unique_ptr &Info) { return Info->FullOffsetInMDC == ML.VFPtrOffset; }) ->get() ->ObjectWithVPtr; }; llvm::Value *VFunc; if (CGF.ShouldEmitVTableTypeCheckedLoad(MethodDecl->getParent())) { VFunc = CGF.EmitVTableTypeCheckedLoad( getObjectWithVPtr(), VTable, ML.Index * CGM.getContext().getTargetInfo().getPointerWidth(0) / 8); } else { if (CGM.getCodeGenOpts().PrepareForLTO) CGF.EmitTypeMetadataCodeForVCall(getObjectWithVPtr(), VTable, Loc); llvm::Value *VFuncPtr = Builder.CreateConstInBoundsGEP1_64(VTable, ML.Index, "vfn"); VFunc = Builder.CreateAlignedLoad(VFuncPtr, CGF.getPointerAlign()); } CGCallee Callee(MethodDecl, VFunc); return Callee; } llvm::Value *MicrosoftCXXABI::EmitVirtualDestructorCall( CodeGenFunction &CGF, const CXXDestructorDecl *Dtor, CXXDtorType DtorType, Address This, const CXXMemberCallExpr *CE) { assert(CE == nullptr || CE->arg_begin() == CE->arg_end()); assert(DtorType == Dtor_Deleting || DtorType == Dtor_Complete); // We have only one destructor in the vftable but can get both behaviors // by passing an implicit int parameter. GlobalDecl GD(Dtor, Dtor_Deleting); const CGFunctionInfo *FInfo = &CGM.getTypes().arrangeCXXStructorDeclaration( Dtor, StructorType::Deleting); llvm::Type *Ty = CGF.CGM.getTypes().GetFunctionType(*FInfo); CGCallee Callee = getVirtualFunctionPointer( CGF, GD, This, Ty, CE ? 
CE->getLocStart() : SourceLocation()); ASTContext &Context = getContext(); llvm::Value *ImplicitParam = llvm::ConstantInt::get( llvm::IntegerType::getInt32Ty(CGF.getLLVMContext()), DtorType == Dtor_Deleting); This = adjustThisArgumentForVirtualFunctionCall(CGF, GD, This, true); RValue RV = CGF.EmitCXXDestructorCall(Dtor, Callee, This.getPointer(), ImplicitParam, Context.IntTy, CE, StructorType::Deleting); return RV.getScalarVal(); } const VBTableGlobals & MicrosoftCXXABI::enumerateVBTables(const CXXRecordDecl *RD) { // At this layer, we can key the cache off of a single class, which is much // easier than caching each vbtable individually. llvm::DenseMap::iterator Entry; bool Added; std::tie(Entry, Added) = VBTablesMap.insert(std::make_pair(RD, VBTableGlobals())); VBTableGlobals &VBGlobals = Entry->second; if (!Added) return VBGlobals; MicrosoftVTableContext &Context = CGM.getMicrosoftVTableContext(); VBGlobals.VBTables = &Context.enumerateVBTables(RD); // Cache the globals for all vbtables so we don't have to recompute the // mangled names. llvm::GlobalVariable::LinkageTypes Linkage = CGM.getVTableLinkage(RD); for (VPtrInfoVector::const_iterator I = VBGlobals.VBTables->begin(), E = VBGlobals.VBTables->end(); I != E; ++I) { VBGlobals.Globals.push_back(getAddrOfVBTable(**I, RD, Linkage)); } return VBGlobals; } llvm::Function *MicrosoftCXXABI::EmitVirtualMemPtrThunk( const CXXMethodDecl *MD, const MicrosoftVTableContext::MethodVFTableLocation &ML) { assert(!isa(MD) && !isa(MD) && "can't form pointers to ctors or virtual dtors"); // Calculate the mangled name. SmallString<256> ThunkName; llvm::raw_svector_ostream Out(ThunkName); getMangleContext().mangleVirtualMemPtrThunk(MD, Out); // If the thunk has been generated previously, just return it. if (llvm::GlobalValue *GV = CGM.getModule().getNamedValue(ThunkName)) return cast(GV); // Create the llvm::Function. const CGFunctionInfo &FnInfo = CGM.getTypes().arrangeMSMemberPointerThunk(MD); llvm::FunctionType *ThunkTy = CGM.getTypes().GetFunctionType(FnInfo); llvm::Function *ThunkFn = llvm::Function::Create(ThunkTy, llvm::Function::ExternalLinkage, ThunkName.str(), &CGM.getModule()); assert(ThunkFn->getName() == ThunkName && "name was uniqued!"); ThunkFn->setLinkage(MD->isExternallyVisible() ? llvm::GlobalValue::LinkOnceODRLinkage : llvm::GlobalValue::InternalLinkage); if (MD->isExternallyVisible()) ThunkFn->setComdat(CGM.getModule().getOrInsertComdat(ThunkFn->getName())); CGM.SetLLVMFunctionAttributes(MD, FnInfo, ThunkFn); CGM.SetLLVMFunctionAttributesForDefinition(MD, ThunkFn); // Add the "thunk" attribute so that LLVM knows that the return type is // meaningless. These thunks can be used to call functions with differing // return types, and the caller is required to cast the prototype // appropriately to extract the correct value. ThunkFn->addFnAttr("thunk"); // These thunks can be compared, so they are not unnamed. ThunkFn->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::None); // Start codegen. CodeGenFunction CGF(CGM); CGF.CurGD = GlobalDecl(MD); CGF.CurFuncIsThunk = true; // Build FunctionArgs, but only include the implicit 'this' parameter // declaration. FunctionArgList FunctionArgs; buildThisParam(CGF, FunctionArgs); // Start defining the function. CGF.StartFunction(GlobalDecl(), FnInfo.getReturnType(), ThunkFn, FnInfo, FunctionArgs, MD->getLocation(), SourceLocation()); EmitThisParam(CGF); // Load the vfptr and then callee from the vftable. The callee should have // adjusted 'this' so that the vfptr is at offset zero. 
llvm::Value *VTable = CGF.GetVTablePtr( getThisAddress(CGF), ThunkTy->getPointerTo()->getPointerTo(), MD->getParent()); llvm::Value *VFuncPtr = CGF.Builder.CreateConstInBoundsGEP1_64(VTable, ML.Index, "vfn"); llvm::Value *Callee = CGF.Builder.CreateAlignedLoad(VFuncPtr, CGF.getPointerAlign()); CGF.EmitMustTailThunk(MD, getThisValue(CGF), Callee); return ThunkFn; } void MicrosoftCXXABI::emitVirtualInheritanceTables(const CXXRecordDecl *RD) { const VBTableGlobals &VBGlobals = enumerateVBTables(RD); for (unsigned I = 0, E = VBGlobals.VBTables->size(); I != E; ++I) { const std::unique_ptr& VBT = (*VBGlobals.VBTables)[I]; llvm::GlobalVariable *GV = VBGlobals.Globals[I]; if (GV->isDeclaration()) emitVBTableDefinition(*VBT, RD, GV); } } llvm::GlobalVariable * MicrosoftCXXABI::getAddrOfVBTable(const VPtrInfo &VBT, const CXXRecordDecl *RD, llvm::GlobalVariable::LinkageTypes Linkage) { SmallString<256> OutName; llvm::raw_svector_ostream Out(OutName); getMangleContext().mangleCXXVBTable(RD, VBT.MangledPath, Out); StringRef Name = OutName.str(); llvm::ArrayType *VBTableType = llvm::ArrayType::get(CGM.IntTy, 1 + VBT.ObjectWithVPtr->getNumVBases()); assert(!CGM.getModule().getNamedGlobal(Name) && "vbtable with this name already exists: mangling bug?"); llvm::GlobalVariable *GV = CGM.CreateOrReplaceCXXRuntimeVariable(Name, VBTableType, Linkage); GV->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Global); if (RD->hasAttr()) GV->setDLLStorageClass(llvm::GlobalValue::DLLImportStorageClass); else if (RD->hasAttr()) GV->setDLLStorageClass(llvm::GlobalValue::DLLExportStorageClass); if (!GV->hasExternalLinkage()) emitVBTableDefinition(VBT, RD, GV); return GV; } void MicrosoftCXXABI::emitVBTableDefinition(const VPtrInfo &VBT, const CXXRecordDecl *RD, llvm::GlobalVariable *GV) const { const CXXRecordDecl *ObjectWithVPtr = VBT.ObjectWithVPtr; assert(RD->getNumVBases() && ObjectWithVPtr->getNumVBases() && "should only emit vbtables for classes with vbtables"); const ASTRecordLayout &BaseLayout = getContext().getASTRecordLayout(VBT.IntroducingObject); const ASTRecordLayout &DerivedLayout = getContext().getASTRecordLayout(RD); SmallVector Offsets(1 + ObjectWithVPtr->getNumVBases(), nullptr); // The offset from ObjectWithVPtr's vbptr to itself always leads. CharUnits VBPtrOffset = BaseLayout.getVBPtrOffset(); Offsets[0] = llvm::ConstantInt::get(CGM.IntTy, -VBPtrOffset.getQuantity()); MicrosoftVTableContext &Context = CGM.getMicrosoftVTableContext(); for (const auto &I : ObjectWithVPtr->vbases()) { const CXXRecordDecl *VBase = I.getType()->getAsCXXRecordDecl(); CharUnits Offset = DerivedLayout.getVBaseClassOffset(VBase); assert(!Offset.isNegative()); // Make it relative to the subobject vbptr. 
CharUnits CompleteVBPtrOffset = VBT.NonVirtualOffset + VBPtrOffset; if (VBT.getVBaseWithVPtr()) CompleteVBPtrOffset += DerivedLayout.getVBaseClassOffset(VBT.getVBaseWithVPtr()); Offset -= CompleteVBPtrOffset; unsigned VBIndex = Context.getVBTableIndex(ObjectWithVPtr, VBase); assert(Offsets[VBIndex] == nullptr && "The same vbindex seen twice?"); Offsets[VBIndex] = llvm::ConstantInt::get(CGM.IntTy, Offset.getQuantity()); } assert(Offsets.size() == cast(cast(GV->getType()) ->getElementType())->getNumElements()); llvm::ArrayType *VBTableType = llvm::ArrayType::get(CGM.IntTy, Offsets.size()); llvm::Constant *Init = llvm::ConstantArray::get(VBTableType, Offsets); GV->setInitializer(Init); if (RD->hasAttr()) GV->setLinkage(llvm::GlobalVariable::AvailableExternallyLinkage); } llvm::Value *MicrosoftCXXABI::performThisAdjustment(CodeGenFunction &CGF, Address This, const ThisAdjustment &TA) { if (TA.isEmpty()) return This.getPointer(); This = CGF.Builder.CreateElementBitCast(This, CGF.Int8Ty); llvm::Value *V; if (TA.Virtual.isEmpty()) { V = This.getPointer(); } else { assert(TA.Virtual.Microsoft.VtordispOffset < 0); // Adjust the this argument based on the vtordisp value. Address VtorDispPtr = CGF.Builder.CreateConstInBoundsByteGEP(This, CharUnits::fromQuantity(TA.Virtual.Microsoft.VtordispOffset)); VtorDispPtr = CGF.Builder.CreateElementBitCast(VtorDispPtr, CGF.Int32Ty); llvm::Value *VtorDisp = CGF.Builder.CreateLoad(VtorDispPtr, "vtordisp"); V = CGF.Builder.CreateGEP(This.getPointer(), CGF.Builder.CreateNeg(VtorDisp)); // Unfortunately, having applied the vtordisp means that we no // longer really have a known alignment for the vbptr step. // We'll assume the vbptr is pointer-aligned. if (TA.Virtual.Microsoft.VBPtrOffset) { // If the final overrider is defined in a virtual base other than the one // that holds the vfptr, we have to use a vtordispex thunk which looks up // the vbtable of the derived class. assert(TA.Virtual.Microsoft.VBPtrOffset > 0); assert(TA.Virtual.Microsoft.VBOffsetOffset >= 0); llvm::Value *VBPtr; llvm::Value *VBaseOffset = GetVBaseOffsetFromVBPtr(CGF, Address(V, CGF.getPointerAlign()), -TA.Virtual.Microsoft.VBPtrOffset, TA.Virtual.Microsoft.VBOffsetOffset, &VBPtr); V = CGF.Builder.CreateInBoundsGEP(VBPtr, VBaseOffset); } } if (TA.NonVirtual) { // Non-virtual adjustment might result in a pointer outside the allocated // object, e.g. if the final overrider class is laid out after the virtual // base that declares a method in the most derived class. V = CGF.Builder.CreateConstGEP1_32(V, TA.NonVirtual); } // Don't need to bitcast back, the call CodeGen will handle this. return V; } llvm::Value * MicrosoftCXXABI::performReturnAdjustment(CodeGenFunction &CGF, Address Ret, const ReturnAdjustment &RA) { if (RA.isEmpty()) return Ret.getPointer(); auto OrigTy = Ret.getType(); Ret = CGF.Builder.CreateElementBitCast(Ret, CGF.Int8Ty); llvm::Value *V = Ret.getPointer(); if (RA.Virtual.Microsoft.VBIndex) { assert(RA.Virtual.Microsoft.VBIndex > 0); int32_t IntSize = CGF.getIntSize().getQuantity(); llvm::Value *VBPtr; llvm::Value *VBaseOffset = GetVBaseOffsetFromVBPtr(CGF, Ret, RA.Virtual.Microsoft.VBPtrOffset, IntSize * RA.Virtual.Microsoft.VBIndex, &VBPtr); V = CGF.Builder.CreateInBoundsGEP(VBPtr, VBaseOffset); } if (RA.NonVirtual) V = CGF.Builder.CreateConstInBoundsGEP1_32(CGF.Int8Ty, V, RA.NonVirtual); // Cast back to the original type. 
  return CGF.Builder.CreateBitCast(V, OrigTy);
}

bool MicrosoftCXXABI::requiresArrayCookie(const CXXDeleteExpr *expr,
                                          QualType elementType) {
  // Microsoft seems to completely ignore the possibility of a
  // two-argument usual deallocation function.
  return elementType.isDestructedType();
}

bool MicrosoftCXXABI::requiresArrayCookie(const CXXNewExpr *expr) {
  // Microsoft seems to completely ignore the possibility of a
  // two-argument usual deallocation function.
  return expr->getAllocatedType().isDestructedType();
}

CharUnits MicrosoftCXXABI::getArrayCookieSizeImpl(QualType type) {
  // The array cookie is always a size_t; we then pad that out to the
  // alignment of the element type.
  ASTContext &Ctx = getContext();
  return std::max(Ctx.getTypeSizeInChars(Ctx.getSizeType()),
                  Ctx.getTypeAlignInChars(type));
}

llvm::Value *MicrosoftCXXABI::readArrayCookieImpl(CodeGenFunction &CGF,
                                                  Address allocPtr,
                                                  CharUnits cookieSize) {
  Address numElementsPtr =
    CGF.Builder.CreateElementBitCast(allocPtr, CGF.SizeTy);
  return CGF.Builder.CreateLoad(numElementsPtr);
}

Address MicrosoftCXXABI::InitializeArrayCookie(CodeGenFunction &CGF,
                                               Address newPtr,
                                               llvm::Value *numElements,
                                               const CXXNewExpr *expr,
                                               QualType elementType) {
  assert(requiresArrayCookie(expr));

  // The size of the cookie.
  CharUnits cookieSize = getArrayCookieSizeImpl(elementType);

  // Compute an offset to the cookie.
  Address cookiePtr = newPtr;

  // Write the number of elements into the appropriate slot.
  Address numElementsPtr
    = CGF.Builder.CreateElementBitCast(cookiePtr, CGF.SizeTy);
  CGF.Builder.CreateStore(numElements, numElementsPtr);

  // Finally, compute a pointer to the actual data buffer by skipping
  // over the cookie completely.
  return CGF.Builder.CreateConstInBoundsByteGEP(newPtr, cookieSize);
}

static void emitGlobalDtorWithTLRegDtor(CodeGenFunction &CGF, const VarDecl &VD,
                                        llvm::Constant *Dtor,
                                        llvm::Constant *Addr) {
  // Create a function which calls the destructor.
  llvm::Constant *DtorStub = CGF.createAtExitStub(VD, Dtor, Addr);

  // extern "C" int __tlregdtor(void (*f)(void));
  llvm::FunctionType *TLRegDtorTy = llvm::FunctionType::get(
      CGF.IntTy, DtorStub->getType(), /*IsVarArg=*/false);

  llvm::Constant *TLRegDtor = CGF.CGM.CreateRuntimeFunction(
      TLRegDtorTy, "__tlregdtor", llvm::AttributeList(), /*Local=*/true);
  if (llvm::Function *TLRegDtorFn = dyn_cast<llvm::Function>(TLRegDtor))
    TLRegDtorFn->setDoesNotThrow();

  CGF.EmitNounwindRuntimeCall(TLRegDtor, DtorStub);
}

void MicrosoftCXXABI::registerGlobalDtor(CodeGenFunction &CGF, const VarDecl &D,
                                         llvm::Constant *Dtor,
                                         llvm::Constant *Addr) {
  if (D.getTLSKind())
    return emitGlobalDtorWithTLRegDtor(CGF, D, Dtor, Addr);

  // The default behavior is to use atexit.
  CGF.registerGlobalDtorWithAtExit(D, Dtor, Addr);
}

void MicrosoftCXXABI::EmitThreadLocalInitFuncs(
    CodeGenModule &CGM, ArrayRef<const VarDecl *> CXXThreadLocals,
    ArrayRef<llvm::Function *> CXXThreadLocalInits,
    ArrayRef<const VarDecl *> CXXThreadLocalInitVars) {
  if (CXXThreadLocalInits.empty())
    return;

  CGM.AppendLinkerOptions(CGM.getTarget().getTriple().getArch() ==
                                  llvm::Triple::x86
                              ? "/include:___dyn_tls_init@12"
                              : "/include:__dyn_tls_init");

  // This will create a GV in the .CRT$XDU section. It will point to our
  // initialization function. The CRT will call all of these function
  // pointers at start-up time and, eventually, at thread-creation time.
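  //
  // Editorial note (not part of the original source): at the C++ level the
  // global created by AddToXDU below corresponds roughly to
  //
  //   #pragma section(".CRT$XDU", long, read)
  //   __declspec(allocate(".CRT$XDU"))
  //   static void (__cdecl *tls_init_ptr)(void) = tls_init_func;
  //
  // where 'tls_init_ptr' and 'tls_init_func' are placeholder names. The CRT's
  // __dyn_tls_init walks this pointer array on thread attach, which is why the
  // linker option emitted above forces that CRT object to be pulled in.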
  auto AddToXDU = [&CGM](llvm::Function *InitFunc) {
    llvm::GlobalVariable *InitFuncPtr = new llvm::GlobalVariable(
        CGM.getModule(), InitFunc->getType(), /*IsConstant=*/true,
        llvm::GlobalVariable::InternalLinkage, InitFunc,
        Twine(InitFunc->getName(), "$initializer$"));
    InitFuncPtr->setSection(".CRT$XDU");
    // This variable has discardable linkage, we have to add it to @llvm.used to
    // ensure it won't get discarded.
    CGM.addUsedGlobal(InitFuncPtr);
    return InitFuncPtr;
  };

  std::vector<llvm::Function *> NonComdatInits;
  for (size_t I = 0, E = CXXThreadLocalInitVars.size(); I != E; ++I) {
    llvm::GlobalVariable *GV = cast<llvm::GlobalVariable>(
        CGM.GetGlobalValue(CGM.getMangledName(CXXThreadLocalInitVars[I])));
    llvm::Function *F = CXXThreadLocalInits[I];

    // If the GV is already in a comdat group, then we have to join it.
    if (llvm::Comdat *C = GV->getComdat())
      AddToXDU(F)->setComdat(C);
    else
      NonComdatInits.push_back(F);
  }

  if (!NonComdatInits.empty()) {
    llvm::FunctionType *FTy =
        llvm::FunctionType::get(CGM.VoidTy, /*isVarArg=*/false);
    llvm::Function *InitFunc = CGM.CreateGlobalInitOrDestructFunction(
        FTy, "__tls_init", CGM.getTypes().arrangeNullaryFunction(),
        SourceLocation(), /*TLS=*/true);
    CodeGenFunction(CGM).GenerateCXXGlobalInitFunc(InitFunc, NonComdatInits);

    AddToXDU(InitFunc);
  }
}

LValue MicrosoftCXXABI::EmitThreadLocalVarDeclLValue(CodeGenFunction &CGF,
                                                     const VarDecl *VD,
                                                     QualType LValType) {
  CGF.CGM.ErrorUnsupported(VD, "thread wrappers");
  return LValue();
}

static ConstantAddress getInitThreadEpochPtr(CodeGenModule &CGM) {
  StringRef VarName("_Init_thread_epoch");
  CharUnits Align = CGM.getIntAlign();
  if (auto *GV = CGM.getModule().getNamedGlobal(VarName))
    return ConstantAddress(GV, Align);
  auto *GV = new llvm::GlobalVariable(
      CGM.getModule(), CGM.IntTy,
      /*Constant=*/false, llvm::GlobalVariable::ExternalLinkage,
      /*Initializer=*/nullptr, VarName,
      /*InsertBefore=*/nullptr, llvm::GlobalVariable::GeneralDynamicTLSModel);
  GV->setAlignment(Align.getQuantity());
  return ConstantAddress(GV, Align);
}

static llvm::Constant *getInitThreadHeaderFn(CodeGenModule &CGM) {
  llvm::FunctionType *FTy =
      llvm::FunctionType::get(llvm::Type::getVoidTy(CGM.getLLVMContext()),
                              CGM.IntTy->getPointerTo(), /*isVarArg=*/false);
  return CGM.CreateRuntimeFunction(
      FTy, "_Init_thread_header",
      llvm::AttributeList::get(CGM.getLLVMContext(),
                               llvm::AttributeList::FunctionIndex,
                               llvm::Attribute::NoUnwind),
      /*Local=*/true);
}

static llvm::Constant *getInitThreadFooterFn(CodeGenModule &CGM) {
  llvm::FunctionType *FTy =
      llvm::FunctionType::get(llvm::Type::getVoidTy(CGM.getLLVMContext()),
                              CGM.IntTy->getPointerTo(), /*isVarArg=*/false);
  return CGM.CreateRuntimeFunction(
      FTy, "_Init_thread_footer",
      llvm::AttributeList::get(CGM.getLLVMContext(),
                               llvm::AttributeList::FunctionIndex,
                               llvm::Attribute::NoUnwind),
      /*Local=*/true);
}

static llvm::Constant *getInitThreadAbortFn(CodeGenModule &CGM) {
  llvm::FunctionType *FTy =
      llvm::FunctionType::get(llvm::Type::getVoidTy(CGM.getLLVMContext()),
                              CGM.IntTy->getPointerTo(), /*isVarArg=*/false);
  return CGM.CreateRuntimeFunction(
      FTy, "_Init_thread_abort",
      llvm::AttributeList::get(CGM.getLLVMContext(),
                               llvm::AttributeList::FunctionIndex,
                               llvm::Attribute::NoUnwind),
      /*Local=*/true);
}

namespace {
struct ResetGuardBit final : EHScopeStack::Cleanup {
  Address Guard;
  unsigned GuardNum;
  ResetGuardBit(Address Guard, unsigned GuardNum)
      : Guard(Guard), GuardNum(GuardNum) {}

  void Emit(CodeGenFunction &CGF, Flags flags) override {
    // Reset the bit in the mask so that the static variable may be
    // reinitialized.
CGBuilderTy &Builder = CGF.Builder; llvm::LoadInst *LI = Builder.CreateLoad(Guard); llvm::ConstantInt *Mask = llvm::ConstantInt::get(CGF.IntTy, ~(1ULL << GuardNum)); Builder.CreateStore(Builder.CreateAnd(LI, Mask), Guard); } }; struct CallInitThreadAbort final : EHScopeStack::Cleanup { llvm::Value *Guard; CallInitThreadAbort(Address Guard) : Guard(Guard.getPointer()) {} void Emit(CodeGenFunction &CGF, Flags flags) override { // Calling _Init_thread_abort will reset the guard's state. CGF.EmitNounwindRuntimeCall(getInitThreadAbortFn(CGF.CGM), Guard); } }; } void MicrosoftCXXABI::EmitGuardedInit(CodeGenFunction &CGF, const VarDecl &D, llvm::GlobalVariable *GV, bool PerformInit) { // MSVC only uses guards for static locals. if (!D.isStaticLocal()) { assert(GV->hasWeakLinkage() || GV->hasLinkOnceLinkage()); // GlobalOpt is allowed to discard the initializer, so use linkonce_odr. llvm::Function *F = CGF.CurFn; F->setLinkage(llvm::GlobalValue::LinkOnceODRLinkage); F->setComdat(CGM.getModule().getOrInsertComdat(F->getName())); CGF.EmitCXXGlobalVarDeclInit(D, GV, PerformInit); return; } bool ThreadlocalStatic = D.getTLSKind(); bool ThreadsafeStatic = getContext().getLangOpts().ThreadsafeStatics; // Thread-safe static variables which aren't thread-specific have a // per-variable guard. bool HasPerVariableGuard = ThreadsafeStatic && !ThreadlocalStatic; CGBuilderTy &Builder = CGF.Builder; llvm::IntegerType *GuardTy = CGF.Int32Ty; llvm::ConstantInt *Zero = llvm::ConstantInt::get(GuardTy, 0); CharUnits GuardAlign = CharUnits::fromQuantity(4); // Get the guard variable for this function if we have one already. GuardInfo *GI = nullptr; if (ThreadlocalStatic) GI = &ThreadLocalGuardVariableMap[D.getDeclContext()]; else if (!ThreadsafeStatic) GI = &GuardVariableMap[D.getDeclContext()]; llvm::GlobalVariable *GuardVar = GI ? GI->Guard : nullptr; unsigned GuardNum; if (D.isExternallyVisible()) { // Externally visible variables have to be numbered in Sema to properly // handle unreachable VarDecls. GuardNum = getContext().getStaticLocalNumber(&D); assert(GuardNum > 0); GuardNum--; } else if (HasPerVariableGuard) { GuardNum = ThreadSafeGuardNumMap[D.getDeclContext()]++; } else { // Non-externally visible variables are numbered here in CodeGen. GuardNum = GI->BitIndex++; } if (!HasPerVariableGuard && GuardNum >= 32) { if (D.isExternallyVisible()) ErrorUnsupportedABI(CGF, "more than 32 guarded initializations"); GuardNum %= 32; GuardVar = nullptr; } if (!GuardVar) { // Mangle the name for the guard. SmallString<256> GuardName; { llvm::raw_svector_ostream Out(GuardName); if (HasPerVariableGuard) getMangleContext().mangleThreadSafeStaticGuardVariable(&D, GuardNum, Out); else getMangleContext().mangleStaticGuardVariable(&D, Out); } // Create the guard variable with a zero-initializer. Just absorb linkage, // visibility and dll storage class from the guarded variable. 
GuardVar = new llvm::GlobalVariable(CGM.getModule(), GuardTy, /*isConstant=*/false, GV->getLinkage(), Zero, GuardName.str()); GuardVar->setVisibility(GV->getVisibility()); GuardVar->setDLLStorageClass(GV->getDLLStorageClass()); GuardVar->setAlignment(GuardAlign.getQuantity()); if (GuardVar->isWeakForLinker()) GuardVar->setComdat( CGM.getModule().getOrInsertComdat(GuardVar->getName())); if (D.getTLSKind()) GuardVar->setThreadLocal(true); if (GI && !HasPerVariableGuard) GI->Guard = GuardVar; } ConstantAddress GuardAddr(GuardVar, GuardAlign); assert(GuardVar->getLinkage() == GV->getLinkage() && "static local from the same function had different linkage"); if (!HasPerVariableGuard) { // Pseudo code for the test: // if (!(GuardVar & MyGuardBit)) { // GuardVar |= MyGuardBit; // ... initialize the object ...; // } // Test our bit from the guard variable. llvm::ConstantInt *Bit = llvm::ConstantInt::get(GuardTy, 1ULL << GuardNum); llvm::LoadInst *LI = Builder.CreateLoad(GuardAddr); llvm::Value *IsInitialized = Builder.CreateICmpNE(Builder.CreateAnd(LI, Bit), Zero); llvm::BasicBlock *InitBlock = CGF.createBasicBlock("init"); llvm::BasicBlock *EndBlock = CGF.createBasicBlock("init.end"); Builder.CreateCondBr(IsInitialized, EndBlock, InitBlock); // Set our bit in the guard variable and emit the initializer and add a global // destructor if appropriate. CGF.EmitBlock(InitBlock); Builder.CreateStore(Builder.CreateOr(LI, Bit), GuardAddr); CGF.EHStack.pushCleanup(EHCleanup, GuardAddr, GuardNum); CGF.EmitCXXGlobalVarDeclInit(D, GV, PerformInit); CGF.PopCleanupBlock(); Builder.CreateBr(EndBlock); // Continue. CGF.EmitBlock(EndBlock); } else { // Pseudo code for the test: // if (TSS > _Init_thread_epoch) { // _Init_thread_header(&TSS); // if (TSS == -1) { // ... initialize the object ...; // _Init_thread_footer(&TSS); // } // } // // The algorithm is almost identical to what can be found in the appendix // found in N2325. // This BasicBLock determines whether or not we have any work to do. llvm::LoadInst *FirstGuardLoad = Builder.CreateLoad(GuardAddr); FirstGuardLoad->setOrdering(llvm::AtomicOrdering::Unordered); llvm::LoadInst *InitThreadEpoch = Builder.CreateLoad(getInitThreadEpochPtr(CGM)); llvm::Value *IsUninitialized = Builder.CreateICmpSGT(FirstGuardLoad, InitThreadEpoch); llvm::BasicBlock *AttemptInitBlock = CGF.createBasicBlock("init.attempt"); llvm::BasicBlock *EndBlock = CGF.createBasicBlock("init.end"); Builder.CreateCondBr(IsUninitialized, AttemptInitBlock, EndBlock); // This BasicBlock attempts to determine whether or not this thread is // responsible for doing the initialization. CGF.EmitBlock(AttemptInitBlock); CGF.EmitNounwindRuntimeCall(getInitThreadHeaderFn(CGM), GuardAddr.getPointer()); llvm::LoadInst *SecondGuardLoad = Builder.CreateLoad(GuardAddr); SecondGuardLoad->setOrdering(llvm::AtomicOrdering::Unordered); llvm::Value *ShouldDoInit = Builder.CreateICmpEQ(SecondGuardLoad, getAllOnesInt()); llvm::BasicBlock *InitBlock = CGF.createBasicBlock("init"); Builder.CreateCondBr(ShouldDoInit, InitBlock, EndBlock); // Ok, we ended up getting selected as the initializing thread. 
CGF.EmitBlock(InitBlock); CGF.EHStack.pushCleanup(EHCleanup, GuardAddr); CGF.EmitCXXGlobalVarDeclInit(D, GV, PerformInit); CGF.PopCleanupBlock(); CGF.EmitNounwindRuntimeCall(getInitThreadFooterFn(CGM), GuardAddr.getPointer()); Builder.CreateBr(EndBlock); CGF.EmitBlock(EndBlock); } } bool MicrosoftCXXABI::isZeroInitializable(const MemberPointerType *MPT) { // Null-ness for function memptrs only depends on the first field, which is // the function pointer. The rest don't matter, so we can zero initialize. if (MPT->isMemberFunctionPointer()) return true; // The virtual base adjustment field is always -1 for null, so if we have one // we can't zero initialize. The field offset is sometimes also -1 if 0 is a // valid field offset. const CXXRecordDecl *RD = MPT->getMostRecentCXXRecordDecl(); MSInheritanceAttr::Spelling Inheritance = RD->getMSInheritanceModel(); return (!MSInheritanceAttr::hasVBTableOffsetField(Inheritance) && RD->nullFieldOffsetIsZero()); } llvm::Type * MicrosoftCXXABI::ConvertMemberPointerType(const MemberPointerType *MPT) { const CXXRecordDecl *RD = MPT->getMostRecentCXXRecordDecl(); MSInheritanceAttr::Spelling Inheritance = RD->getMSInheritanceModel(); llvm::SmallVector fields; if (MPT->isMemberFunctionPointer()) fields.push_back(CGM.VoidPtrTy); // FunctionPointerOrVirtualThunk else fields.push_back(CGM.IntTy); // FieldOffset if (MSInheritanceAttr::hasNVOffsetField(MPT->isMemberFunctionPointer(), Inheritance)) fields.push_back(CGM.IntTy); if (MSInheritanceAttr::hasVBPtrOffsetField(Inheritance)) fields.push_back(CGM.IntTy); if (MSInheritanceAttr::hasVBTableOffsetField(Inheritance)) fields.push_back(CGM.IntTy); // VirtualBaseAdjustmentOffset if (fields.size() == 1) return fields[0]; return llvm::StructType::get(CGM.getLLVMContext(), fields); } void MicrosoftCXXABI:: GetNullMemberPointerFields(const MemberPointerType *MPT, llvm::SmallVectorImpl &fields) { assert(fields.empty()); const CXXRecordDecl *RD = MPT->getMostRecentCXXRecordDecl(); MSInheritanceAttr::Spelling Inheritance = RD->getMSInheritanceModel(); if (MPT->isMemberFunctionPointer()) { // FunctionPointerOrVirtualThunk fields.push_back(llvm::Constant::getNullValue(CGM.VoidPtrTy)); } else { if (RD->nullFieldOffsetIsZero()) fields.push_back(getZeroInt()); // FieldOffset else fields.push_back(getAllOnesInt()); // FieldOffset } if (MSInheritanceAttr::hasNVOffsetField(MPT->isMemberFunctionPointer(), Inheritance)) fields.push_back(getZeroInt()); if (MSInheritanceAttr::hasVBPtrOffsetField(Inheritance)) fields.push_back(getZeroInt()); if (MSInheritanceAttr::hasVBTableOffsetField(Inheritance)) fields.push_back(getAllOnesInt()); } llvm::Constant * MicrosoftCXXABI::EmitNullMemberPointer(const MemberPointerType *MPT) { llvm::SmallVector fields; GetNullMemberPointerFields(MPT, fields); if (fields.size() == 1) return fields[0]; llvm::Constant *Res = llvm::ConstantStruct::getAnon(fields); assert(Res->getType() == ConvertMemberPointerType(MPT)); return Res; } llvm::Constant * MicrosoftCXXABI::EmitFullMemberPointer(llvm::Constant *FirstField, bool IsMemberFunction, const CXXRecordDecl *RD, CharUnits NonVirtualBaseAdjustment, unsigned VBTableIndex) { MSInheritanceAttr::Spelling Inheritance = RD->getMSInheritanceModel(); // Single inheritance class member pointer are represented as scalars instead // of aggregates. 
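  //
  // Editorial illustration (not part of the original source): following the
  // field order chosen in ConvertMemberPointerType above, the member function
  // pointer representations look roughly like
  //
  //   single:      void *Ptr;                                         // scalar
  //   multiple:    { void *Ptr; int NVOffset; }
  //   virtual:     { void *Ptr; int NVOffset; int VBTableOffset; }
  //   unspecified: { void *Ptr; int NVOffset; int VBPtrOffset; int VBTableOffset; }
  //
  // Data member pointers use an int field offset in place of Ptr and never
  // carry an NVOffset field.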
if (MSInheritanceAttr::hasOnlyOneField(IsMemberFunction, Inheritance)) return FirstField; llvm::SmallVector fields; fields.push_back(FirstField); if (MSInheritanceAttr::hasNVOffsetField(IsMemberFunction, Inheritance)) fields.push_back(llvm::ConstantInt::get( CGM.IntTy, NonVirtualBaseAdjustment.getQuantity())); if (MSInheritanceAttr::hasVBPtrOffsetField(Inheritance)) { CharUnits Offs = CharUnits::Zero(); if (VBTableIndex) Offs = getContext().getASTRecordLayout(RD).getVBPtrOffset(); fields.push_back(llvm::ConstantInt::get(CGM.IntTy, Offs.getQuantity())); } // The rest of the fields are adjusted by conversions to a more derived class. if (MSInheritanceAttr::hasVBTableOffsetField(Inheritance)) fields.push_back(llvm::ConstantInt::get(CGM.IntTy, VBTableIndex)); return llvm::ConstantStruct::getAnon(fields); } llvm::Constant * MicrosoftCXXABI::EmitMemberDataPointer(const MemberPointerType *MPT, CharUnits offset) { const CXXRecordDecl *RD = MPT->getMostRecentCXXRecordDecl(); if (RD->getMSInheritanceModel() == MSInheritanceAttr::Keyword_virtual_inheritance) offset -= getContext().getOffsetOfBaseWithVBPtr(RD); llvm::Constant *FirstField = llvm::ConstantInt::get(CGM.IntTy, offset.getQuantity()); return EmitFullMemberPointer(FirstField, /*IsMemberFunction=*/false, RD, CharUnits::Zero(), /*VBTableIndex=*/0); } llvm::Constant *MicrosoftCXXABI::EmitMemberPointer(const APValue &MP, QualType MPType) { const MemberPointerType *DstTy = MPType->castAs(); const ValueDecl *MPD = MP.getMemberPointerDecl(); if (!MPD) return EmitNullMemberPointer(DstTy); ASTContext &Ctx = getContext(); ArrayRef MemberPointerPath = MP.getMemberPointerPath(); llvm::Constant *C; if (const CXXMethodDecl *MD = dyn_cast(MPD)) { C = EmitMemberFunctionPointer(MD); } else { CharUnits FieldOffset = Ctx.toCharUnitsFromBits(Ctx.getFieldOffset(MPD)); C = EmitMemberDataPointer(DstTy, FieldOffset); } if (!MemberPointerPath.empty()) { const CXXRecordDecl *SrcRD = cast(MPD->getDeclContext()); const Type *SrcRecTy = Ctx.getTypeDeclType(SrcRD).getTypePtr(); const MemberPointerType *SrcTy = Ctx.getMemberPointerType(DstTy->getPointeeType(), SrcRecTy) ->castAs(); bool DerivedMember = MP.isMemberPointerToDerivedMember(); SmallVector DerivedToBasePath; const CXXRecordDecl *PrevRD = SrcRD; for (const CXXRecordDecl *PathElem : MemberPointerPath) { const CXXRecordDecl *Base = nullptr; const CXXRecordDecl *Derived = nullptr; if (DerivedMember) { Base = PathElem; Derived = PrevRD; } else { Base = PrevRD; Derived = PathElem; } for (const CXXBaseSpecifier &BS : Derived->bases()) if (BS.getType()->getAsCXXRecordDecl()->getCanonicalDecl() == Base->getCanonicalDecl()) DerivedToBasePath.push_back(&BS); PrevRD = PathElem; } assert(DerivedToBasePath.size() == MemberPointerPath.size()); CastKind CK = DerivedMember ? CK_DerivedToBaseMemberPointer : CK_BaseToDerivedMemberPointer; C = EmitMemberPointerConversion(SrcTy, DstTy, CK, DerivedToBasePath.begin(), DerivedToBasePath.end(), C); } return C; } llvm::Constant * MicrosoftCXXABI::EmitMemberFunctionPointer(const CXXMethodDecl *MD) { assert(MD->isInstance() && "Member function must not be static!"); MD = MD->getCanonicalDecl(); CharUnits NonVirtualBaseAdjustment = CharUnits::Zero(); const CXXRecordDecl *RD = MD->getParent()->getMostRecentDecl(); CodeGenTypes &Types = CGM.getTypes(); unsigned VBTableIndex = 0; llvm::Constant *FirstField; const FunctionProtoType *FPT = MD->getType()->castAs(); if (!MD->isVirtual()) { llvm::Type *Ty; // Check whether the function has a computable LLVM signature. 
if (Types.isFuncTypeConvertible(FPT)) { // The function has a computable LLVM signature; use the correct type. Ty = Types.GetFunctionType(Types.arrangeCXXMethodDeclaration(MD)); } else { // Use an arbitrary non-function type to tell GetAddrOfFunction that the // function type is incomplete. Ty = CGM.PtrDiffTy; } FirstField = CGM.GetAddrOfFunction(MD, Ty); } else { auto &VTableContext = CGM.getMicrosoftVTableContext(); MicrosoftVTableContext::MethodVFTableLocation ML = VTableContext.getMethodVFTableLocation(MD); FirstField = EmitVirtualMemPtrThunk(MD, ML); // Include the vfptr adjustment if the method is in a non-primary vftable. NonVirtualBaseAdjustment += ML.VFPtrOffset; if (ML.VBase) VBTableIndex = VTableContext.getVBTableIndex(RD, ML.VBase) * 4; } if (VBTableIndex == 0 && RD->getMSInheritanceModel() == MSInheritanceAttr::Keyword_virtual_inheritance) NonVirtualBaseAdjustment -= getContext().getOffsetOfBaseWithVBPtr(RD); // The rest of the fields are common with data member pointers. FirstField = llvm::ConstantExpr::getBitCast(FirstField, CGM.VoidPtrTy); return EmitFullMemberPointer(FirstField, /*IsMemberFunction=*/true, RD, NonVirtualBaseAdjustment, VBTableIndex); } /// Member pointers are the same if they're either bitwise identical *or* both /// null. Null-ness for function members is determined by the first field, /// while for data member pointers we must compare all fields. llvm::Value * MicrosoftCXXABI::EmitMemberPointerComparison(CodeGenFunction &CGF, llvm::Value *L, llvm::Value *R, const MemberPointerType *MPT, bool Inequality) { CGBuilderTy &Builder = CGF.Builder; // Handle != comparisons by switching the sense of all boolean operations. llvm::ICmpInst::Predicate Eq; llvm::Instruction::BinaryOps And, Or; if (Inequality) { Eq = llvm::ICmpInst::ICMP_NE; And = llvm::Instruction::Or; Or = llvm::Instruction::And; } else { Eq = llvm::ICmpInst::ICMP_EQ; And = llvm::Instruction::And; Or = llvm::Instruction::Or; } // If this is a single field member pointer (single inheritance), this is a // single icmp. const CXXRecordDecl *RD = MPT->getMostRecentCXXRecordDecl(); MSInheritanceAttr::Spelling Inheritance = RD->getMSInheritanceModel(); if (MSInheritanceAttr::hasOnlyOneField(MPT->isMemberFunctionPointer(), Inheritance)) return Builder.CreateICmp(Eq, L, R); // Compare the first field. llvm::Value *L0 = Builder.CreateExtractValue(L, 0, "lhs.0"); llvm::Value *R0 = Builder.CreateExtractValue(R, 0, "rhs.0"); llvm::Value *Cmp0 = Builder.CreateICmp(Eq, L0, R0, "memptr.cmp.first"); // Compare everything other than the first field. llvm::Value *Res = nullptr; llvm::StructType *LType = cast(L->getType()); for (unsigned I = 1, E = LType->getNumElements(); I != E; ++I) { llvm::Value *LF = Builder.CreateExtractValue(L, I); llvm::Value *RF = Builder.CreateExtractValue(R, I); llvm::Value *Cmp = Builder.CreateICmp(Eq, LF, RF, "memptr.cmp.rest"); if (Res) Res = Builder.CreateBinOp(And, Res, Cmp); else Res = Cmp; } // Check if the first field is 0 if this is a function pointer. if (MPT->isMemberFunctionPointer()) { // (l1 == r1 && ...) || l0 == 0 llvm::Value *Zero = llvm::Constant::getNullValue(L0->getType()); llvm::Value *IsZero = Builder.CreateICmp(Eq, L0, Zero, "memptr.cmp.iszero"); Res = Builder.CreateBinOp(Or, Res, IsZero); } // Combine the comparison of the first field, which must always be true for // this comparison to succeeed. 
return Builder.CreateBinOp(And, Res, Cmp0, "memptr.cmp"); } llvm::Value * MicrosoftCXXABI::EmitMemberPointerIsNotNull(CodeGenFunction &CGF, llvm::Value *MemPtr, const MemberPointerType *MPT) { CGBuilderTy &Builder = CGF.Builder; llvm::SmallVector fields; // We only need one field for member functions. if (MPT->isMemberFunctionPointer()) fields.push_back(llvm::Constant::getNullValue(CGM.VoidPtrTy)); else GetNullMemberPointerFields(MPT, fields); assert(!fields.empty()); llvm::Value *FirstField = MemPtr; if (MemPtr->getType()->isStructTy()) FirstField = Builder.CreateExtractValue(MemPtr, 0); llvm::Value *Res = Builder.CreateICmpNE(FirstField, fields[0], "memptr.cmp0"); // For function member pointers, we only need to test the function pointer // field. The other fields if any can be garbage. if (MPT->isMemberFunctionPointer()) return Res; // Otherwise, emit a series of compares and combine the results. for (int I = 1, E = fields.size(); I < E; ++I) { llvm::Value *Field = Builder.CreateExtractValue(MemPtr, I); llvm::Value *Next = Builder.CreateICmpNE(Field, fields[I], "memptr.cmp"); Res = Builder.CreateOr(Res, Next, "memptr.tobool"); } return Res; } bool MicrosoftCXXABI::MemberPointerConstantIsNull(const MemberPointerType *MPT, llvm::Constant *Val) { // Function pointers are null if the pointer in the first field is null. if (MPT->isMemberFunctionPointer()) { llvm::Constant *FirstField = Val->getType()->isStructTy() ? Val->getAggregateElement(0U) : Val; return FirstField->isNullValue(); } // If it's not a function pointer and it's zero initializable, we can easily // check zero. if (isZeroInitializable(MPT) && Val->isNullValue()) return true; // Otherwise, break down all the fields for comparison. Hopefully these // little Constants are reused, while a big null struct might not be. llvm::SmallVector Fields; GetNullMemberPointerFields(MPT, Fields); if (Fields.size() == 1) { assert(Val->getType()->isIntegerTy()); return Val == Fields[0]; } unsigned I, E; for (I = 0, E = Fields.size(); I != E; ++I) { if (Val->getAggregateElement(I) != Fields[I]) break; } return I == E; } llvm::Value * MicrosoftCXXABI::GetVBaseOffsetFromVBPtr(CodeGenFunction &CGF, Address This, llvm::Value *VBPtrOffset, llvm::Value *VBTableOffset, llvm::Value **VBPtrOut) { CGBuilderTy &Builder = CGF.Builder; // Load the vbtable pointer from the vbptr in the instance. This = Builder.CreateElementBitCast(This, CGM.Int8Ty); llvm::Value *VBPtr = Builder.CreateInBoundsGEP(This.getPointer(), VBPtrOffset, "vbptr"); if (VBPtrOut) *VBPtrOut = VBPtr; VBPtr = Builder.CreateBitCast(VBPtr, CGM.Int32Ty->getPointerTo(0)->getPointerTo(This.getAddressSpace())); CharUnits VBPtrAlign; if (auto CI = dyn_cast(VBPtrOffset)) { VBPtrAlign = This.getAlignment().alignmentAtOffset( CharUnits::fromQuantity(CI->getSExtValue())); } else { VBPtrAlign = CGF.getPointerAlign(); } llvm::Value *VBTable = Builder.CreateAlignedLoad(VBPtr, VBPtrAlign, "vbtable"); // Translate from byte offset to table index. It improves analyzability. llvm::Value *VBTableIndex = Builder.CreateAShr( VBTableOffset, llvm::ConstantInt::get(VBTableOffset->getType(), 2), "vbtindex", /*isExact=*/true); // Load an i32 offset from the vb-table. llvm::Value *VBaseOffs = Builder.CreateInBoundsGEP(VBTable, VBTableIndex); VBaseOffs = Builder.CreateBitCast(VBaseOffs, CGM.Int32Ty->getPointerTo(0)); return Builder.CreateAlignedLoad(VBaseOffs, CharUnits::fromQuantity(4), "vbase_offs"); } // Returns an adjusted base cast to i8*, since we do more address arithmetic on // it. 
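// Editorial example (not part of the original source): the run-time branch in
// this function only arises for the unspecified/incomplete inheritance model,
// e.g.
//   struct S;          // layout unknown at the point of use
//   int S::*mp;        // carries a vbptr offset and vbtable index at run time
// A vbtable index of zero in such a member pointer means "no virtual-base
// adjustment", which is the case the code below branches around.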
llvm::Value *MicrosoftCXXABI::AdjustVirtualBase( CodeGenFunction &CGF, const Expr *E, const CXXRecordDecl *RD, Address Base, llvm::Value *VBTableOffset, llvm::Value *VBPtrOffset) { CGBuilderTy &Builder = CGF.Builder; Base = Builder.CreateElementBitCast(Base, CGM.Int8Ty); llvm::BasicBlock *OriginalBB = nullptr; llvm::BasicBlock *SkipAdjustBB = nullptr; llvm::BasicBlock *VBaseAdjustBB = nullptr; // In the unspecified inheritance model, there might not be a vbtable at all, // in which case we need to skip the virtual base lookup. If there is a // vbtable, the first entry is a no-op entry that gives back the original // base, so look for a virtual base adjustment offset of zero. if (VBPtrOffset) { OriginalBB = Builder.GetInsertBlock(); VBaseAdjustBB = CGF.createBasicBlock("memptr.vadjust"); SkipAdjustBB = CGF.createBasicBlock("memptr.skip_vadjust"); llvm::Value *IsVirtual = Builder.CreateICmpNE(VBTableOffset, getZeroInt(), "memptr.is_vbase"); Builder.CreateCondBr(IsVirtual, VBaseAdjustBB, SkipAdjustBB); CGF.EmitBlock(VBaseAdjustBB); } // If we weren't given a dynamic vbptr offset, RD should be complete and we'll // know the vbptr offset. if (!VBPtrOffset) { CharUnits offs = CharUnits::Zero(); if (!RD->hasDefinition()) { DiagnosticsEngine &Diags = CGF.CGM.getDiags(); unsigned DiagID = Diags.getCustomDiagID( DiagnosticsEngine::Error, "member pointer representation requires a " "complete class type for %0 to perform this expression"); Diags.Report(E->getExprLoc(), DiagID) << RD << E->getSourceRange(); } else if (RD->getNumVBases()) offs = getContext().getASTRecordLayout(RD).getVBPtrOffset(); VBPtrOffset = llvm::ConstantInt::get(CGM.IntTy, offs.getQuantity()); } llvm::Value *VBPtr = nullptr; llvm::Value *VBaseOffs = GetVBaseOffsetFromVBPtr(CGF, Base, VBPtrOffset, VBTableOffset, &VBPtr); llvm::Value *AdjustedBase = Builder.CreateInBoundsGEP(VBPtr, VBaseOffs); // Merge control flow with the case where we didn't have to adjust. if (VBaseAdjustBB) { Builder.CreateBr(SkipAdjustBB); CGF.EmitBlock(SkipAdjustBB); llvm::PHINode *Phi = Builder.CreatePHI(CGM.Int8PtrTy, 2, "memptr.base"); Phi->addIncoming(Base.getPointer(), OriginalBB); Phi->addIncoming(AdjustedBase, VBaseAdjustBB); return Phi; } return AdjustedBase; } llvm::Value *MicrosoftCXXABI::EmitMemberDataPointerAddress( CodeGenFunction &CGF, const Expr *E, Address Base, llvm::Value *MemPtr, const MemberPointerType *MPT) { assert(MPT->isMemberDataPointer()); unsigned AS = Base.getAddressSpace(); llvm::Type *PType = CGF.ConvertTypeForMem(MPT->getPointeeType())->getPointerTo(AS); CGBuilderTy &Builder = CGF.Builder; const CXXRecordDecl *RD = MPT->getMostRecentCXXRecordDecl(); MSInheritanceAttr::Spelling Inheritance = RD->getMSInheritanceModel(); // Extract the fields we need, regardless of model. We'll apply them if we // have them. llvm::Value *FieldOffset = MemPtr; llvm::Value *VirtualBaseAdjustmentOffset = nullptr; llvm::Value *VBPtrOffset = nullptr; if (MemPtr->getType()->isStructTy()) { // We need to extract values. unsigned I = 0; FieldOffset = Builder.CreateExtractValue(MemPtr, I++); if (MSInheritanceAttr::hasVBPtrOffsetField(Inheritance)) VBPtrOffset = Builder.CreateExtractValue(MemPtr, I++); if (MSInheritanceAttr::hasVBTableOffsetField(Inheritance)) VirtualBaseAdjustmentOffset = Builder.CreateExtractValue(MemPtr, I++); } llvm::Value *Addr; if (VirtualBaseAdjustmentOffset) { Addr = AdjustVirtualBase(CGF, E, RD, Base, VirtualBaseAdjustmentOffset, VBPtrOffset); } else { Addr = Base.getPointer(); } // Cast to char*. 
Addr = Builder.CreateBitCast(Addr, CGF.Int8Ty->getPointerTo(AS)); // Apply the offset, which we assume is non-null. Addr = Builder.CreateInBoundsGEP(Addr, FieldOffset, "memptr.offset"); // Cast the address to the appropriate pointer type, adopting the address // space of the base pointer. return Builder.CreateBitCast(Addr, PType); } llvm::Value * MicrosoftCXXABI::EmitMemberPointerConversion(CodeGenFunction &CGF, const CastExpr *E, llvm::Value *Src) { assert(E->getCastKind() == CK_DerivedToBaseMemberPointer || E->getCastKind() == CK_BaseToDerivedMemberPointer || E->getCastKind() == CK_ReinterpretMemberPointer); // Use constant emission if we can. if (isa(Src)) return EmitMemberPointerConversion(E, cast(Src)); // We may be adding or dropping fields from the member pointer, so we need // both types and the inheritance models of both records. const MemberPointerType *SrcTy = E->getSubExpr()->getType()->castAs(); const MemberPointerType *DstTy = E->getType()->castAs(); bool IsFunc = SrcTy->isMemberFunctionPointer(); // If the classes use the same null representation, reinterpret_cast is a nop. bool IsReinterpret = E->getCastKind() == CK_ReinterpretMemberPointer; if (IsReinterpret && IsFunc) return Src; CXXRecordDecl *SrcRD = SrcTy->getMostRecentCXXRecordDecl(); CXXRecordDecl *DstRD = DstTy->getMostRecentCXXRecordDecl(); if (IsReinterpret && SrcRD->nullFieldOffsetIsZero() == DstRD->nullFieldOffsetIsZero()) return Src; CGBuilderTy &Builder = CGF.Builder; // Branch past the conversion if Src is null. llvm::Value *IsNotNull = EmitMemberPointerIsNotNull(CGF, Src, SrcTy); llvm::Constant *DstNull = EmitNullMemberPointer(DstTy); // C++ 5.2.10p9: The null member pointer value is converted to the null member // pointer value of the destination type. if (IsReinterpret) { // For reinterpret casts, sema ensures that src and dst are both functions // or data and have the same size, which means the LLVM types should match. assert(Src->getType() == DstNull->getType()); return Builder.CreateSelect(IsNotNull, Src, DstNull); } llvm::BasicBlock *OriginalBB = Builder.GetInsertBlock(); llvm::BasicBlock *ConvertBB = CGF.createBasicBlock("memptr.convert"); llvm::BasicBlock *ContinueBB = CGF.createBasicBlock("memptr.converted"); Builder.CreateCondBr(IsNotNull, ConvertBB, ContinueBB); CGF.EmitBlock(ConvertBB); llvm::Value *Dst = EmitNonNullMemberPointerConversion( SrcTy, DstTy, E->getCastKind(), E->path_begin(), E->path_end(), Src, Builder); Builder.CreateBr(ContinueBB); // In the continuation, choose between DstNull and Dst. CGF.EmitBlock(ContinueBB); llvm::PHINode *Phi = Builder.CreatePHI(DstNull->getType(), 2, "memptr.converted"); Phi->addIncoming(DstNull, OriginalBB); Phi->addIncoming(Dst, ConvertBB); return Phi; } llvm::Value *MicrosoftCXXABI::EmitNonNullMemberPointerConversion( const MemberPointerType *SrcTy, const MemberPointerType *DstTy, CastKind CK, CastExpr::path_const_iterator PathBegin, CastExpr::path_const_iterator PathEnd, llvm::Value *Src, CGBuilderTy &Builder) { const CXXRecordDecl *SrcRD = SrcTy->getMostRecentCXXRecordDecl(); const CXXRecordDecl *DstRD = DstTy->getMostRecentCXXRecordDecl(); MSInheritanceAttr::Spelling SrcInheritance = SrcRD->getMSInheritanceModel(); MSInheritanceAttr::Spelling DstInheritance = DstRD->getMSInheritanceModel(); bool IsFunc = SrcTy->isMemberFunctionPointer(); bool IsConstant = isa(Src); // Decompose src. 
llvm::Value *FirstField = Src; llvm::Value *NonVirtualBaseAdjustment = getZeroInt(); llvm::Value *VirtualBaseAdjustmentOffset = getZeroInt(); llvm::Value *VBPtrOffset = getZeroInt(); if (!MSInheritanceAttr::hasOnlyOneField(IsFunc, SrcInheritance)) { // We need to extract values. unsigned I = 0; FirstField = Builder.CreateExtractValue(Src, I++); if (MSInheritanceAttr::hasNVOffsetField(IsFunc, SrcInheritance)) NonVirtualBaseAdjustment = Builder.CreateExtractValue(Src, I++); if (MSInheritanceAttr::hasVBPtrOffsetField(SrcInheritance)) VBPtrOffset = Builder.CreateExtractValue(Src, I++); if (MSInheritanceAttr::hasVBTableOffsetField(SrcInheritance)) VirtualBaseAdjustmentOffset = Builder.CreateExtractValue(Src, I++); } bool IsDerivedToBase = (CK == CK_DerivedToBaseMemberPointer); const MemberPointerType *DerivedTy = IsDerivedToBase ? SrcTy : DstTy; const CXXRecordDecl *DerivedClass = DerivedTy->getMostRecentCXXRecordDecl(); // For data pointers, we adjust the field offset directly. For functions, we // have a separate field. llvm::Value *&NVAdjustField = IsFunc ? NonVirtualBaseAdjustment : FirstField; // The virtual inheritance model has a quirk: the virtual base table is always // referenced when dereferencing a member pointer even if the member pointer // is non-virtual. This is accounted for by adjusting the non-virtual offset // to point backwards to the top of the MDC from the first VBase. Undo this // adjustment to normalize the member pointer. llvm::Value *SrcVBIndexEqZero = Builder.CreateICmpEQ(VirtualBaseAdjustmentOffset, getZeroInt()); if (SrcInheritance == MSInheritanceAttr::Keyword_virtual_inheritance) { if (int64_t SrcOffsetToFirstVBase = getContext().getOffsetOfBaseWithVBPtr(SrcRD).getQuantity()) { llvm::Value *UndoSrcAdjustment = Builder.CreateSelect( SrcVBIndexEqZero, llvm::ConstantInt::get(CGM.IntTy, SrcOffsetToFirstVBase), getZeroInt()); NVAdjustField = Builder.CreateNSWAdd(NVAdjustField, UndoSrcAdjustment); } } // A non-zero vbindex implies that we are dealing with a source member in a // floating virtual base in addition to some non-virtual offset. If the // vbindex is zero, we are dealing with a source that exists in a non-virtual, // fixed, base. The difference between these two cases is that the vbindex + // nvoffset *always* point to the member regardless of what context they are // evaluated in so long as the vbindex is adjusted. A member inside a fixed // base requires explicit nv adjustment. llvm::Constant *BaseClassOffset = llvm::ConstantInt::get( CGM.IntTy, CGM.computeNonVirtualBaseClassOffset(DerivedClass, PathBegin, PathEnd) .getQuantity()); llvm::Value *NVDisp; if (IsDerivedToBase) NVDisp = Builder.CreateNSWSub(NVAdjustField, BaseClassOffset, "adj"); else NVDisp = Builder.CreateNSWAdd(NVAdjustField, BaseClassOffset, "adj"); NVAdjustField = Builder.CreateSelect(SrcVBIndexEqZero, NVDisp, getZeroInt()); // Update the vbindex to an appropriate value in the destination because // SrcRD's vbtable might not be a strict prefix of the one in DstRD. 
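  // Editorial illustration (not part of the original source): this remapping
  // matters when the destination class introduces additional virtual bases, so
  // that a virtual base of the source class ends up at a different slot in the
  // destination's vbtable; the displacement map loaded below translates source
  // vbtable indices into destination vbtable indices.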
llvm::Value *DstVBIndexEqZero = SrcVBIndexEqZero; if (MSInheritanceAttr::hasVBTableOffsetField(DstInheritance) && MSInheritanceAttr::hasVBTableOffsetField(SrcInheritance)) { if (llvm::GlobalVariable *VDispMap = getAddrOfVirtualDisplacementMap(SrcRD, DstRD)) { llvm::Value *VBIndex = Builder.CreateExactUDiv( VirtualBaseAdjustmentOffset, llvm::ConstantInt::get(CGM.IntTy, 4)); if (IsConstant) { llvm::Constant *Mapping = VDispMap->getInitializer(); VirtualBaseAdjustmentOffset = Mapping->getAggregateElement(cast(VBIndex)); } else { llvm::Value *Idxs[] = {getZeroInt(), VBIndex}; VirtualBaseAdjustmentOffset = Builder.CreateAlignedLoad(Builder.CreateInBoundsGEP(VDispMap, Idxs), CharUnits::fromQuantity(4)); } DstVBIndexEqZero = Builder.CreateICmpEQ(VirtualBaseAdjustmentOffset, getZeroInt()); } } // Set the VBPtrOffset to zero if the vbindex is zero. Otherwise, initialize // it to the offset of the vbptr. if (MSInheritanceAttr::hasVBPtrOffsetField(DstInheritance)) { llvm::Value *DstVBPtrOffset = llvm::ConstantInt::get( CGM.IntTy, getContext().getASTRecordLayout(DstRD).getVBPtrOffset().getQuantity()); VBPtrOffset = Builder.CreateSelect(DstVBIndexEqZero, getZeroInt(), DstVBPtrOffset); } // Likewise, apply a similar adjustment so that dereferencing the member // pointer correctly accounts for the distance between the start of the first // virtual base and the top of the MDC. if (DstInheritance == MSInheritanceAttr::Keyword_virtual_inheritance) { if (int64_t DstOffsetToFirstVBase = getContext().getOffsetOfBaseWithVBPtr(DstRD).getQuantity()) { llvm::Value *DoDstAdjustment = Builder.CreateSelect( DstVBIndexEqZero, llvm::ConstantInt::get(CGM.IntTy, DstOffsetToFirstVBase), getZeroInt()); NVAdjustField = Builder.CreateNSWSub(NVAdjustField, DoDstAdjustment); } } // Recompose dst from the null struct and the adjusted fields from src. llvm::Value *Dst; if (MSInheritanceAttr::hasOnlyOneField(IsFunc, DstInheritance)) { Dst = FirstField; } else { Dst = llvm::UndefValue::get(ConvertMemberPointerType(DstTy)); unsigned Idx = 0; Dst = Builder.CreateInsertValue(Dst, FirstField, Idx++); if (MSInheritanceAttr::hasNVOffsetField(IsFunc, DstInheritance)) Dst = Builder.CreateInsertValue(Dst, NonVirtualBaseAdjustment, Idx++); if (MSInheritanceAttr::hasVBPtrOffsetField(DstInheritance)) Dst = Builder.CreateInsertValue(Dst, VBPtrOffset, Idx++); if (MSInheritanceAttr::hasVBTableOffsetField(DstInheritance)) Dst = Builder.CreateInsertValue(Dst, VirtualBaseAdjustmentOffset, Idx++); } return Dst; } llvm::Constant * MicrosoftCXXABI::EmitMemberPointerConversion(const CastExpr *E, llvm::Constant *Src) { const MemberPointerType *SrcTy = E->getSubExpr()->getType()->castAs(); const MemberPointerType *DstTy = E->getType()->castAs(); CastKind CK = E->getCastKind(); return EmitMemberPointerConversion(SrcTy, DstTy, CK, E->path_begin(), E->path_end(), Src); } llvm::Constant *MicrosoftCXXABI::EmitMemberPointerConversion( const MemberPointerType *SrcTy, const MemberPointerType *DstTy, CastKind CK, CastExpr::path_const_iterator PathBegin, CastExpr::path_const_iterator PathEnd, llvm::Constant *Src) { assert(CK == CK_DerivedToBaseMemberPointer || CK == CK_BaseToDerivedMemberPointer || CK == CK_ReinterpretMemberPointer); // If src is null, emit a new null for dst. We can't return src because dst // might have a new representation. if (MemberPointerConstantIsNull(SrcTy, Src)) return EmitNullMemberPointer(DstTy); // We don't need to do anything for reinterpret_casts of non-null member // pointers. 
We should only get here when the two type representations have // the same size. if (CK == CK_ReinterpretMemberPointer) return Src; CGBuilderTy Builder(CGM, CGM.getLLVMContext()); auto *Dst = cast(EmitNonNullMemberPointerConversion( SrcTy, DstTy, CK, PathBegin, PathEnd, Src, Builder)); return Dst; } CGCallee MicrosoftCXXABI::EmitLoadOfMemberFunctionPointer( CodeGenFunction &CGF, const Expr *E, Address This, llvm::Value *&ThisPtrForCall, llvm::Value *MemPtr, const MemberPointerType *MPT) { assert(MPT->isMemberFunctionPointer()); const FunctionProtoType *FPT = MPT->getPointeeType()->castAs(); const CXXRecordDecl *RD = MPT->getMostRecentCXXRecordDecl(); llvm::FunctionType *FTy = CGM.getTypes().GetFunctionType( CGM.getTypes().arrangeCXXMethodType(RD, FPT, /*FD=*/nullptr)); CGBuilderTy &Builder = CGF.Builder; MSInheritanceAttr::Spelling Inheritance = RD->getMSInheritanceModel(); // Extract the fields we need, regardless of model. We'll apply them if we // have them. llvm::Value *FunctionPointer = MemPtr; llvm::Value *NonVirtualBaseAdjustment = nullptr; llvm::Value *VirtualBaseAdjustmentOffset = nullptr; llvm::Value *VBPtrOffset = nullptr; if (MemPtr->getType()->isStructTy()) { // We need to extract values. unsigned I = 0; FunctionPointer = Builder.CreateExtractValue(MemPtr, I++); if (MSInheritanceAttr::hasNVOffsetField(MPT, Inheritance)) NonVirtualBaseAdjustment = Builder.CreateExtractValue(MemPtr, I++); if (MSInheritanceAttr::hasVBPtrOffsetField(Inheritance)) VBPtrOffset = Builder.CreateExtractValue(MemPtr, I++); if (MSInheritanceAttr::hasVBTableOffsetField(Inheritance)) VirtualBaseAdjustmentOffset = Builder.CreateExtractValue(MemPtr, I++); } if (VirtualBaseAdjustmentOffset) { ThisPtrForCall = AdjustVirtualBase(CGF, E, RD, This, VirtualBaseAdjustmentOffset, VBPtrOffset); } else { ThisPtrForCall = This.getPointer(); } if (NonVirtualBaseAdjustment) { // Apply the adjustment and cast back to the original struct type. llvm::Value *Ptr = Builder.CreateBitCast(ThisPtrForCall, CGF.Int8PtrTy); Ptr = Builder.CreateInBoundsGEP(Ptr, NonVirtualBaseAdjustment); ThisPtrForCall = Builder.CreateBitCast(Ptr, ThisPtrForCall->getType(), "this.adjusted"); } FunctionPointer = Builder.CreateBitCast(FunctionPointer, FTy->getPointerTo()); CGCallee Callee(FPT, FunctionPointer); return Callee; } CGCXXABI *clang::CodeGen::CreateMicrosoftCXXABI(CodeGenModule &CGM) { return new MicrosoftCXXABI(CGM); } // MS RTTI Overview: // The run time type information emitted by cl.exe contains 5 distinct types of // structures. Many of them reference each other. // // TypeInfo: Static classes that are returned by typeid. // // CompleteObjectLocator: Referenced by vftables. They contain information // required for dynamic casting, including OffsetFromTop. They also contain // a reference to the TypeInfo for the type and a reference to the // CompleteHierarchyDescriptor for the type. // // ClassHieararchyDescriptor: Contains information about a class hierarchy. // Used during dynamic_cast to walk a class hierarchy. References a base // class array and the size of said array. // // BaseClassArray: Contains a list of classes in a hierarchy. BaseClassArray is // somewhat of a misnomer because the most derived class is also in the list // as well as multiple copies of virtual bases (if they occur multiple times // in the hiearchy.) The BaseClassArray contains one BaseClassDescriptor for // every path in the hierarchy, in pre-order depth first order. 
Note, we do // not declare a specific llvm type for BaseClassArray, it's merely an array // of BaseClassDescriptor pointers. // // BaseClassDescriptor: Contains information about a class in a class hierarchy. // BaseClassDescriptor is also somewhat of a misnomer for the same reason that // BaseClassArray is. It contains information about a class within a // hierarchy such as: is this base is ambiguous and what is its offset in the // vbtable. The names of the BaseClassDescriptors have all of their fields // mangled into them so they can be aggressively deduplicated by the linker. static llvm::GlobalVariable *getTypeInfoVTable(CodeGenModule &CGM) { StringRef MangledName("\01??_7type_info@@6B@"); if (auto VTable = CGM.getModule().getNamedGlobal(MangledName)) return VTable; return new llvm::GlobalVariable(CGM.getModule(), CGM.Int8PtrTy, /*Constant=*/true, llvm::GlobalVariable::ExternalLinkage, /*Initializer=*/nullptr, MangledName); } namespace { /// \brief A Helper struct that stores information about a class in a class /// hierarchy. The information stored in these structs struct is used during /// the generation of ClassHierarchyDescriptors and BaseClassDescriptors. // During RTTI creation, MSRTTIClasses are stored in a contiguous array with // implicit depth first pre-order tree connectivity. getFirstChild and // getNextSibling allow us to walk the tree efficiently. struct MSRTTIClass { enum { IsPrivateOnPath = 1 | 8, IsAmbiguous = 2, IsPrivate = 4, IsVirtual = 16, HasHierarchyDescriptor = 64 }; MSRTTIClass(const CXXRecordDecl *RD) : RD(RD) {} uint32_t initialize(const MSRTTIClass *Parent, const CXXBaseSpecifier *Specifier); MSRTTIClass *getFirstChild() { return this + 1; } static MSRTTIClass *getNextChild(MSRTTIClass *Child) { return Child + 1 + Child->NumBases; } const CXXRecordDecl *RD, *VirtualRoot; uint32_t Flags, NumBases, OffsetInVBase; }; /// \brief Recursively initialize the base class array. uint32_t MSRTTIClass::initialize(const MSRTTIClass *Parent, const CXXBaseSpecifier *Specifier) { Flags = HasHierarchyDescriptor; if (!Parent) { VirtualRoot = nullptr; OffsetInVBase = 0; } else { if (Specifier->getAccessSpecifier() != AS_public) Flags |= IsPrivate | IsPrivateOnPath; if (Specifier->isVirtual()) { Flags |= IsVirtual; VirtualRoot = RD; OffsetInVBase = 0; } else { if (Parent->Flags & IsPrivateOnPath) Flags |= IsPrivateOnPath; VirtualRoot = Parent->VirtualRoot; OffsetInVBase = Parent->OffsetInVBase + RD->getASTContext() .getASTRecordLayout(Parent->RD).getBaseClassOffset(RD).getQuantity(); } } NumBases = 0; MSRTTIClass *Child = getFirstChild(); for (const CXXBaseSpecifier &Base : RD->bases()) { NumBases += Child->initialize(this, &Base) + 1; Child = getNextChild(Child); } return NumBases; } static llvm::GlobalValue::LinkageTypes getLinkageForRTTI(QualType Ty) { switch (Ty->getLinkage()) { case NoLinkage: case InternalLinkage: case UniqueExternalLinkage: return llvm::GlobalValue::InternalLinkage; case VisibleNoLinkage: case ModuleInternalLinkage: case ModuleLinkage: case ExternalLinkage: return llvm::GlobalValue::LinkOnceODRLinkage; } llvm_unreachable("Invalid linkage!"); } /// \brief An ephemeral helper class for building MS RTTI types. It caches some /// calls to the module and information about the most derived class in a /// hierarchy. 
struct MSRTTIBuilder { enum { HasBranchingHierarchy = 1, HasVirtualBranchingHierarchy = 2, HasAmbiguousBases = 4 }; MSRTTIBuilder(MicrosoftCXXABI &ABI, const CXXRecordDecl *RD) : CGM(ABI.CGM), Context(CGM.getContext()), VMContext(CGM.getLLVMContext()), Module(CGM.getModule()), RD(RD), Linkage(getLinkageForRTTI(CGM.getContext().getTagDeclType(RD))), ABI(ABI) {} llvm::GlobalVariable *getBaseClassDescriptor(const MSRTTIClass &Classes); llvm::GlobalVariable * getBaseClassArray(SmallVectorImpl &Classes); llvm::GlobalVariable *getClassHierarchyDescriptor(); llvm::GlobalVariable *getCompleteObjectLocator(const VPtrInfo &Info); CodeGenModule &CGM; ASTContext &Context; llvm::LLVMContext &VMContext; llvm::Module &Module; const CXXRecordDecl *RD; llvm::GlobalVariable::LinkageTypes Linkage; MicrosoftCXXABI &ABI; }; } // namespace /// \brief Recursively serializes a class hierarchy in pre-order depth first /// order. static void serializeClassHierarchy(SmallVectorImpl &Classes, const CXXRecordDecl *RD) { Classes.push_back(MSRTTIClass(RD)); for (const CXXBaseSpecifier &Base : RD->bases()) serializeClassHierarchy(Classes, Base.getType()->getAsCXXRecordDecl()); } /// \brief Find ambiguity among base classes. static void detectAmbiguousBases(SmallVectorImpl &Classes) { llvm::SmallPtrSet VirtualBases; llvm::SmallPtrSet UniqueBases; llvm::SmallPtrSet AmbiguousBases; for (MSRTTIClass *Class = &Classes.front(); Class <= &Classes.back();) { if ((Class->Flags & MSRTTIClass::IsVirtual) && !VirtualBases.insert(Class->RD).second) { Class = MSRTTIClass::getNextChild(Class); continue; } if (!UniqueBases.insert(Class->RD).second) AmbiguousBases.insert(Class->RD); Class++; } if (AmbiguousBases.empty()) return; for (MSRTTIClass &Class : Classes) if (AmbiguousBases.count(Class.RD)) Class.Flags |= MSRTTIClass::IsAmbiguous; } llvm::GlobalVariable *MSRTTIBuilder::getClassHierarchyDescriptor() { SmallString<256> MangledName; { llvm::raw_svector_ostream Out(MangledName); ABI.getMangleContext().mangleCXXRTTIClassHierarchyDescriptor(RD, Out); } // Check to see if we've already declared this ClassHierarchyDescriptor. if (auto CHD = Module.getNamedGlobal(MangledName)) return CHD; // Serialize the class hierarchy and initialize the CHD Fields. SmallVector Classes; serializeClassHierarchy(Classes, RD); Classes.front().initialize(/*Parent=*/nullptr, /*Specifier=*/nullptr); detectAmbiguousBases(Classes); int Flags = 0; for (auto Class : Classes) { if (Class.RD->getNumBases() > 1) Flags |= HasBranchingHierarchy; // Note: cl.exe does not calculate "HasAmbiguousBases" correctly. We // believe the field isn't actually used. if (Class.Flags & MSRTTIClass::IsAmbiguous) Flags |= HasAmbiguousBases; } if ((Flags & HasBranchingHierarchy) && RD->getNumVBases() != 0) Flags |= HasVirtualBranchingHierarchy; // These gep indices are used to get the address of the first element of the // base class array. llvm::Value *GEPIndices[] = {llvm::ConstantInt::get(CGM.IntTy, 0), llvm::ConstantInt::get(CGM.IntTy, 0)}; // Forward-declare the class hierarchy descriptor auto Type = ABI.getClassHierarchyDescriptorType(); auto CHD = new llvm::GlobalVariable(Module, Type, /*Constant=*/true, Linkage, /*Initializer=*/nullptr, MangledName); if (CHD->isWeakForLinker()) CHD->setComdat(CGM.getModule().getOrInsertComdat(CHD->getName())); auto *Bases = getBaseClassArray(Classes); // Initialize the base class ClassHierarchyDescriptor. 
llvm::Constant *Fields[] = { llvm::ConstantInt::get(CGM.IntTy, 0), // reserved by the runtime llvm::ConstantInt::get(CGM.IntTy, Flags), llvm::ConstantInt::get(CGM.IntTy, Classes.size()), ABI.getImageRelativeConstant(llvm::ConstantExpr::getInBoundsGetElementPtr( Bases->getValueType(), Bases, llvm::ArrayRef(GEPIndices))), }; CHD->setInitializer(llvm::ConstantStruct::get(Type, Fields)); return CHD; } llvm::GlobalVariable * MSRTTIBuilder::getBaseClassArray(SmallVectorImpl &Classes) { SmallString<256> MangledName; { llvm::raw_svector_ostream Out(MangledName); ABI.getMangleContext().mangleCXXRTTIBaseClassArray(RD, Out); } // Forward-declare the base class array. // cl.exe pads the base class array with 1 (in 32 bit mode) or 4 (in 64 bit // mode) bytes of padding. We provide a pointer sized amount of padding by // adding +1 to Classes.size(). The sections have pointer alignment and are // marked pick-any so it shouldn't matter. llvm::Type *PtrType = ABI.getImageRelativeType( ABI.getBaseClassDescriptorType()->getPointerTo()); auto *ArrType = llvm::ArrayType::get(PtrType, Classes.size() + 1); auto *BCA = new llvm::GlobalVariable(Module, ArrType, /*Constant=*/true, Linkage, /*Initializer=*/nullptr, MangledName); if (BCA->isWeakForLinker()) BCA->setComdat(CGM.getModule().getOrInsertComdat(BCA->getName())); // Initialize the BaseClassArray. SmallVector BaseClassArrayData; for (MSRTTIClass &Class : Classes) BaseClassArrayData.push_back( ABI.getImageRelativeConstant(getBaseClassDescriptor(Class))); BaseClassArrayData.push_back(llvm::Constant::getNullValue(PtrType)); BCA->setInitializer(llvm::ConstantArray::get(ArrType, BaseClassArrayData)); return BCA; } llvm::GlobalVariable * MSRTTIBuilder::getBaseClassDescriptor(const MSRTTIClass &Class) { // Compute the fields for the BaseClassDescriptor. They are computed up front // because they are mangled into the name of the object. uint32_t OffsetInVBTable = 0; int32_t VBPtrOffset = -1; if (Class.VirtualRoot) { auto &VTableContext = CGM.getMicrosoftVTableContext(); OffsetInVBTable = VTableContext.getVBTableIndex(RD, Class.VirtualRoot) * 4; VBPtrOffset = Context.getASTRecordLayout(RD).getVBPtrOffset().getQuantity(); } SmallString<256> MangledName; { llvm::raw_svector_ostream Out(MangledName); ABI.getMangleContext().mangleCXXRTTIBaseClassDescriptor( Class.RD, Class.OffsetInVBase, VBPtrOffset, OffsetInVBTable, Class.Flags, Out); } // Check to see if we've already declared this object. if (auto BCD = Module.getNamedGlobal(MangledName)) return BCD; // Forward-declare the base class descriptor. auto Type = ABI.getBaseClassDescriptorType(); auto BCD = new llvm::GlobalVariable(Module, Type, /*Constant=*/true, Linkage, /*Initializer=*/nullptr, MangledName); if (BCD->isWeakForLinker()) BCD->setComdat(CGM.getModule().getOrInsertComdat(BCD->getName())); // Initialize the BaseClassDescriptor. 
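  // Rough sketch of how the three offsets initialized below relate to the
  // displacement triple (mdisp/pdisp/vdisp) that MSVC-compatible runtimes use
  // to turn a most-derived object pointer into a base subobject pointer. The
  // helper is an assumption for illustration only, not part of Clang:
  //
  //   char *adjustToBase(char *Obj, int MDisp, int PDisp, int VDisp) {
  //     if (PDisp >= 0) {                  // the base lives in a virtual base
  //       char *VBPtr = Obj + PDisp;       // address of the vbptr
  //       Obj += PDisp + *(int *)(*(char **)VBPtr + VDisp);
  //     }
  //     return Obj + MDisp;
  //   }
  //
  // Here OffsetInVBase plays the role of mdisp, VBPtrOffset of pdisp, and
  // OffsetInVBTable of vdisp.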
llvm::Constant *Fields[] = { ABI.getImageRelativeConstant( ABI.getAddrOfRTTIDescriptor(Context.getTypeDeclType(Class.RD))), llvm::ConstantInt::get(CGM.IntTy, Class.NumBases), llvm::ConstantInt::get(CGM.IntTy, Class.OffsetInVBase), llvm::ConstantInt::get(CGM.IntTy, VBPtrOffset), llvm::ConstantInt::get(CGM.IntTy, OffsetInVBTable), llvm::ConstantInt::get(CGM.IntTy, Class.Flags), ABI.getImageRelativeConstant( MSRTTIBuilder(ABI, Class.RD).getClassHierarchyDescriptor()), }; BCD->setInitializer(llvm::ConstantStruct::get(Type, Fields)); return BCD; } llvm::GlobalVariable * MSRTTIBuilder::getCompleteObjectLocator(const VPtrInfo &Info) { SmallString<256> MangledName; { llvm::raw_svector_ostream Out(MangledName); ABI.getMangleContext().mangleCXXRTTICompleteObjectLocator(RD, Info.MangledPath, Out); } // Check to see if we've already computed this complete object locator. if (auto COL = Module.getNamedGlobal(MangledName)) return COL; // Compute the fields of the complete object locator. int OffsetToTop = Info.FullOffsetInMDC.getQuantity(); int VFPtrOffset = 0; // The offset includes the vtordisp if one exists. if (const CXXRecordDecl *VBase = Info.getVBaseWithVPtr()) if (Context.getASTRecordLayout(RD) .getVBaseOffsetsMap() .find(VBase) ->second.hasVtorDisp()) VFPtrOffset = Info.NonVirtualOffset.getQuantity() + 4; // Forward-declare the complete object locator. llvm::StructType *Type = ABI.getCompleteObjectLocatorType(); auto COL = new llvm::GlobalVariable(Module, Type, /*Constant=*/true, Linkage, /*Initializer=*/nullptr, MangledName); // Initialize the CompleteObjectLocator. llvm::Constant *Fields[] = { llvm::ConstantInt::get(CGM.IntTy, ABI.isImageRelative()), llvm::ConstantInt::get(CGM.IntTy, OffsetToTop), llvm::ConstantInt::get(CGM.IntTy, VFPtrOffset), ABI.getImageRelativeConstant( CGM.GetAddrOfRTTIDescriptor(Context.getTypeDeclType(RD))), ABI.getImageRelativeConstant(getClassHierarchyDescriptor()), ABI.getImageRelativeConstant(COL), }; llvm::ArrayRef FieldsRef(Fields); if (!ABI.isImageRelative()) FieldsRef = FieldsRef.drop_back(); COL->setInitializer(llvm::ConstantStruct::get(Type, FieldsRef)); if (COL->isWeakForLinker()) COL->setComdat(CGM.getModule().getOrInsertComdat(COL->getName())); return COL; } static QualType decomposeTypeForEH(ASTContext &Context, QualType T, bool &IsConst, bool &IsVolatile, bool &IsUnaligned) { T = Context.getExceptionObjectType(T); // C++14 [except.handle]p3: // A handler is a match for an exception object of type E if [...] // - the handler is of type cv T or const T& where T is a pointer type and // E is a pointer type that can be converted to T by [...] // - a qualification conversion IsConst = false; IsVolatile = false; IsUnaligned = false; QualType PointeeType = T->getPointeeType(); if (!PointeeType.isNull()) { IsConst = PointeeType.isConstQualified(); IsVolatile = PointeeType.isVolatileQualified(); IsUnaligned = PointeeType.getQualifiers().hasUnaligned(); } // Member pointer types like "const int A::*" are represented by having RTTI // for "int A::*" and separately storing the const qualifier. if (const auto *MPTy = T->getAs()) T = Context.getMemberPointerType(PointeeType.getUnqualifiedType(), MPTy->getClass()); // Pointer types like "const int * const *" are represented by having RTTI // for "const int **" and separately storing the const qualifier. 
  if (T->isPointerType())
    T = Context.getPointerType(PointeeType.getUnqualifiedType());

  return T;
}

CatchTypeInfo
MicrosoftCXXABI::getAddrOfCXXCatchHandlerType(QualType Type,
                                              QualType CatchHandlerType) {
  // TypeDescriptors for exceptions never have qualified pointer types,
  // qualifiers are stored separately in order to support qualification
  // conversions.
  bool IsConst, IsVolatile, IsUnaligned;
  Type =
      decomposeTypeForEH(getContext(), Type, IsConst, IsVolatile, IsUnaligned);

  bool IsReference = CatchHandlerType->isReferenceType();

  uint32_t Flags = 0;
  if (IsConst)
    Flags |= 1;
  if (IsVolatile)
    Flags |= 2;
  if (IsUnaligned)
    Flags |= 4;
  if (IsReference)
    Flags |= 8;

  return CatchTypeInfo{getAddrOfRTTIDescriptor(Type)->stripPointerCasts(),
                       Flags};
}

/// \brief Gets a TypeDescriptor. Returns a llvm::Constant * rather than a
/// llvm::GlobalVariable * because different type descriptors have different
/// types, and need to be abstracted. They are abstracted by casting the
/// address to an Int8PtrTy.
llvm::Constant *MicrosoftCXXABI::getAddrOfRTTIDescriptor(QualType Type) {
  SmallString<256> MangledName;
  {
    llvm::raw_svector_ostream Out(MangledName);
    getMangleContext().mangleCXXRTTI(Type, Out);
  }

  // Check to see if we've already declared this TypeDescriptor.
  if (llvm::GlobalVariable *GV = CGM.getModule().getNamedGlobal(MangledName))
    return llvm::ConstantExpr::getBitCast(GV, CGM.Int8PtrTy);

  // Note for the future: If we would ever like to do deferred emission of
  // RTTI, check if emitting vtables opportunistically needs any adjustment.

  // Compute the fields for the TypeDescriptor.
  SmallString<256> TypeInfoString;
  {
    llvm::raw_svector_ostream Out(TypeInfoString);
    getMangleContext().mangleCXXRTTIName(Type, Out);
  }

  // Declare and initialize the TypeDescriptor.
  llvm::Constant *Fields[] = {
    getTypeInfoVTable(CGM),                        // VFPtr
    llvm::ConstantPointerNull::get(CGM.Int8PtrTy), // Runtime data
    llvm::ConstantDataArray::getString(CGM.getLLVMContext(), TypeInfoString)};
  llvm::StructType *TypeDescriptorType =
      getTypeDescriptorType(TypeInfoString);
  auto *Var = new llvm::GlobalVariable(
      CGM.getModule(), TypeDescriptorType, /*Constant=*/false,
      getLinkageForRTTI(Type),
      llvm::ConstantStruct::get(TypeDescriptorType, Fields), MangledName);
  if (Var->isWeakForLinker())
    Var->setComdat(CGM.getModule().getOrInsertComdat(Var->getName()));
  return llvm::ConstantExpr::getBitCast(Var, CGM.Int8PtrTy);
}

/// \brief Gets or creates a Microsoft CompleteObjectLocator.
llvm::GlobalVariable *
MicrosoftCXXABI::getMSCompleteObjectLocator(const CXXRecordDecl *RD,
                                            const VPtrInfo &Info) {
  return MSRTTIBuilder(*this, RD).getCompleteObjectLocator(Info);
}

static void emitCXXConstructor(CodeGenModule &CGM,
                               const CXXConstructorDecl *ctor,
                               StructorType ctorType) {
  // There are no constructor variants, always emit the complete constructor.
  llvm::Function *Fn = CGM.codegenCXXStructor(ctor, StructorType::Complete);
  CGM.maybeSetTrivialComdat(*ctor, *Fn);
}

static void emitCXXDestructor(CodeGenModule &CGM, const CXXDestructorDecl *dtor,
                              StructorType dtorType) {
  // The complete destructor is equivalent to the base destructor for
  // classes with no virtual bases, so try to emit it as an alias.
  if (!dtor->getParent()->getNumVBases() &&
      (dtorType == StructorType::Complete || dtorType == StructorType::Base)) {
    bool ProducedAlias = !CGM.TryEmitDefinitionAsAlias(
        GlobalDecl(dtor, Dtor_Complete), GlobalDecl(dtor, Dtor_Base), true);
    if (ProducedAlias) {
      if (dtorType == StructorType::Complete)
        return;
      if (dtor->isVirtual())
        CGM.getVTables().EmitThunks(GlobalDecl(dtor, Dtor_Complete));
    }
  }

  // The base destructor is equivalent to the base destructor of its
  // base class if there is exactly one non-virtual base class with a
  // non-trivial destructor, there are no fields with a non-trivial
  // destructor, and the body of the destructor is trivial.
  if (dtorType == StructorType::Base && !CGM.TryEmitBaseDestructorAsAlias(dtor))
    return;

  llvm::Function *Fn = CGM.codegenCXXStructor(dtor, dtorType);
  if (Fn->isWeakForLinker())
    Fn->setComdat(CGM.getModule().getOrInsertComdat(Fn->getName()));
}

void MicrosoftCXXABI::emitCXXStructor(const CXXMethodDecl *MD,
                                      StructorType Type) {
  if (auto *CD = dyn_cast<CXXConstructorDecl>(MD)) {
    emitCXXConstructor(CGM, CD, Type);
    return;
  }
  emitCXXDestructor(CGM, cast<CXXDestructorDecl>(MD), Type);
}

llvm::Function *
MicrosoftCXXABI::getAddrOfCXXCtorClosure(const CXXConstructorDecl *CD,
                                         CXXCtorType CT) {
  assert(CT == Ctor_CopyingClosure || CT == Ctor_DefaultClosure);

  // Calculate the mangled name.
  SmallString<256> ThunkName;
  llvm::raw_svector_ostream Out(ThunkName);
  getMangleContext().mangleCXXCtor(CD, CT, Out);

  // If the thunk has been generated previously, just return it.
  if (llvm::GlobalValue *GV = CGM.getModule().getNamedValue(ThunkName))
    return cast<llvm::Function>(GV);

  // Create the llvm::Function.
  const CGFunctionInfo &FnInfo = CGM.getTypes().arrangeMSCtorClosure(CD, CT);
  llvm::FunctionType *ThunkTy = CGM.getTypes().GetFunctionType(FnInfo);
  const CXXRecordDecl *RD = CD->getParent();
  QualType RecordTy = getContext().getRecordType(RD);
  llvm::Function *ThunkFn = llvm::Function::Create(
      ThunkTy, getLinkageForRTTI(RecordTy), ThunkName.str(), &CGM.getModule());
  ThunkFn->setCallingConv(static_cast<llvm::CallingConv::ID>(
      FnInfo.getEffectiveCallingConvention()));
  if (ThunkFn->isWeakForLinker())
    ThunkFn->setComdat(CGM.getModule().getOrInsertComdat(ThunkFn->getName()));
  bool IsCopy = CT == Ctor_CopyingClosure;

  // Start codegen.
  CodeGenFunction CGF(CGM);
  CGF.CurGD = GlobalDecl(CD, Ctor_Complete);

  // Build FunctionArgs.
  FunctionArgList FunctionArgs;

  // A constructor always starts with a 'this' pointer as its first argument.
  buildThisParam(CGF, FunctionArgs);

  // Following the 'this' pointer is a reference to the source object that we
  // are copying from.
  ImplicitParamDecl SrcParam(
      getContext(), /*DC=*/nullptr, SourceLocation(),
      &getContext().Idents.get("src"),
      getContext().getLValueReferenceType(RecordTy,
                                          /*SpelledAsLValue=*/true),
      ImplicitParamDecl::Other);
  if (IsCopy)
    FunctionArgs.push_back(&SrcParam);

  // Constructors for classes which utilize virtual bases have an additional
  // parameter which indicates whether or not it is being delegated to by a
  // more derived constructor.
  ImplicitParamDecl IsMostDerived(getContext(), /*DC=*/nullptr,
                                  SourceLocation(),
                                  &getContext().Idents.get("is_most_derived"),
                                  getContext().IntTy, ImplicitParamDecl::Other);
  // Only add the parameter to the list if the class has virtual bases.
  if (RD->getNumVBases() > 0)
    FunctionArgs.push_back(&IsMostDerived);

  // Start defining the function.
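  // For orientation, a hedged sketch (hypothetical class, not from the patch)
  // of what the copying-closure thunk built below amounts to: given
  //
  //   struct S { S(const S &, int Extra = 0); };
  //
  // the closure conceptually behaves like
  //
  //   void __ctor_closure(S *This, const S &Src) {
  //     new (This) S(Src, /*Extra=*/0);  // forwards the evaluated default args
  //   }
  //
  // i.e. it presents the plain copy-constructor signature the EH runtime
  // expects while forwarding any trailing default arguments to the real
  // constructor.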
auto NL = ApplyDebugLocation::CreateEmpty(CGF); CGF.StartFunction(GlobalDecl(), FnInfo.getReturnType(), ThunkFn, FnInfo, FunctionArgs, CD->getLocation(), SourceLocation()); // Create a scope with an artificial location for the body of this function. auto AL = ApplyDebugLocation::CreateArtificial(CGF); EmitThisParam(CGF); llvm::Value *This = getThisValue(CGF); llvm::Value *SrcVal = IsCopy ? CGF.Builder.CreateLoad(CGF.GetAddrOfLocalVar(&SrcParam), "src") : nullptr; CallArgList Args; // Push the this ptr. Args.add(RValue::get(This), CD->getThisType(getContext())); // Push the src ptr. if (SrcVal) Args.add(RValue::get(SrcVal), SrcParam.getType()); // Add the rest of the default arguments. SmallVector ArgVec; ArrayRef params = CD->parameters().drop_front(IsCopy ? 1 : 0); for (const ParmVarDecl *PD : params) { assert(PD->hasDefaultArg() && "ctor closure lacks default args"); ArgVec.push_back(PD->getDefaultArg()); } CodeGenFunction::RunCleanupsScope Cleanups(CGF); const auto *FPT = CD->getType()->castAs(); CGF.EmitCallArgs(Args, FPT, llvm::makeArrayRef(ArgVec), CD, IsCopy ? 1 : 0); // Insert any ABI-specific implicit constructor arguments. AddedStructorArgs ExtraArgs = addImplicitConstructorArgs(CGF, CD, Ctor_Complete, /*ForVirtualBase=*/false, /*Delegating=*/false, Args); // Call the destructor with our arguments. llvm::Constant *CalleePtr = CGM.getAddrOfCXXStructor(CD, StructorType::Complete); CGCallee Callee = CGCallee::forDirect(CalleePtr, CD); const CGFunctionInfo &CalleeInfo = CGM.getTypes().arrangeCXXConstructorCall( Args, CD, Ctor_Complete, ExtraArgs.Prefix, ExtraArgs.Suffix); CGF.EmitCall(CalleeInfo, Callee, ReturnValueSlot(), Args); Cleanups.ForceCleanup(); // Emit the ret instruction, remove any temporary instructions created for the // aid of CodeGen. CGF.FinishFunction(SourceLocation()); return ThunkFn; } llvm::Constant *MicrosoftCXXABI::getCatchableType(QualType T, uint32_t NVOffset, int32_t VBPtrOffset, uint32_t VBIndex) { assert(!T->isReferenceType()); CXXRecordDecl *RD = T->getAsCXXRecordDecl(); const CXXConstructorDecl *CD = RD ? CGM.getContext().getCopyConstructorForExceptionObject(RD) : nullptr; CXXCtorType CT = Ctor_Complete; if (CD) if (!hasDefaultCXXMethodCC(getContext(), CD) || CD->getNumParams() != 1) CT = Ctor_CopyingClosure; uint32_t Size = getContext().getTypeSizeInChars(T).getQuantity(); SmallString<256> MangledName; { llvm::raw_svector_ostream Out(MangledName); getMangleContext().mangleCXXCatchableType(T, CD, CT, Size, NVOffset, VBPtrOffset, VBIndex, Out); } if (llvm::GlobalVariable *GV = CGM.getModule().getNamedGlobal(MangledName)) return getImageRelativeConstant(GV); // The TypeDescriptor is used by the runtime to determine if a catch handler // is appropriate for the exception object. llvm::Constant *TD = getImageRelativeConstant(getAddrOfRTTIDescriptor(T)); // The runtime is responsible for calling the copy constructor if the // exception is caught by value. llvm::Constant *CopyCtor; if (CD) { if (CT == Ctor_CopyingClosure) CopyCtor = getAddrOfCXXCtorClosure(CD, Ctor_CopyingClosure); else CopyCtor = CGM.getAddrOfCXXStructor(CD, StructorType::Complete); CopyCtor = llvm::ConstantExpr::getBitCast(CopyCtor, CGM.Int8PtrTy); } else { CopyCtor = llvm::Constant::getNullValue(CGM.Int8PtrTy); } CopyCtor = getImageRelativeConstant(CopyCtor); bool IsScalar = !RD; bool HasVirtualBases = false; bool IsStdBadAlloc = false; // std::bad_alloc is special for some reason. 
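  // Worked example for the Flags bitfield computed just below (values read off
  // this function; the thrown expressions are hypothetical):
  //   throw 42;                    // no record type -> IsScalar,   Flags = 1
  //   throw DerivedWithVBases();   // has vbases     -> Flags = 4
  //   throw std::bad_alloc();      // special-cased  -> Flags = 16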
QualType PointeeType = T; if (T->isPointerType()) PointeeType = T->getPointeeType(); if (const CXXRecordDecl *RD = PointeeType->getAsCXXRecordDecl()) { HasVirtualBases = RD->getNumVBases() > 0; if (IdentifierInfo *II = RD->getIdentifier()) IsStdBadAlloc = II->isStr("bad_alloc") && RD->isInStdNamespace(); } // Encode the relevant CatchableType properties into the Flags bitfield. // FIXME: Figure out how bits 2 or 8 can get set. uint32_t Flags = 0; if (IsScalar) Flags |= 1; if (HasVirtualBases) Flags |= 4; if (IsStdBadAlloc) Flags |= 16; llvm::Constant *Fields[] = { llvm::ConstantInt::get(CGM.IntTy, Flags), // Flags TD, // TypeDescriptor llvm::ConstantInt::get(CGM.IntTy, NVOffset), // NonVirtualAdjustment llvm::ConstantInt::get(CGM.IntTy, VBPtrOffset), // OffsetToVBPtr llvm::ConstantInt::get(CGM.IntTy, VBIndex), // VBTableIndex llvm::ConstantInt::get(CGM.IntTy, Size), // Size CopyCtor // CopyCtor }; llvm::StructType *CTType = getCatchableTypeType(); auto *GV = new llvm::GlobalVariable( CGM.getModule(), CTType, /*Constant=*/true, getLinkageForRTTI(T), llvm::ConstantStruct::get(CTType, Fields), MangledName); GV->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Global); GV->setSection(".xdata"); if (GV->isWeakForLinker()) GV->setComdat(CGM.getModule().getOrInsertComdat(GV->getName())); return getImageRelativeConstant(GV); } llvm::GlobalVariable *MicrosoftCXXABI::getCatchableTypeArray(QualType T) { assert(!T->isReferenceType()); // See if we've already generated a CatchableTypeArray for this type before. llvm::GlobalVariable *&CTA = CatchableTypeArrays[T]; if (CTA) return CTA; // Ensure that we don't have duplicate entries in our CatchableTypeArray by // using a SmallSetVector. Duplicates may arise due to virtual bases // occurring more than once in the hierarchy. llvm::SmallSetVector CatchableTypes; // C++14 [except.handle]p3: // A handler is a match for an exception object of type E if [...] // - the handler is of type cv T or cv T& and T is an unambiguous public // base class of E, or // - the handler is of type cv T or const T& where T is a pointer type and // E is a pointer type that can be converted to T by [...] // - a standard pointer conversion (4.10) not involving conversions to // pointers to private or protected or ambiguous classes const CXXRecordDecl *MostDerivedClass = nullptr; bool IsPointer = T->isPointerType(); if (IsPointer) MostDerivedClass = T->getPointeeType()->getAsCXXRecordDecl(); else MostDerivedClass = T->getAsCXXRecordDecl(); // Collect all the unambiguous public bases of the MostDerivedClass. if (MostDerivedClass) { const ASTContext &Context = getContext(); const ASTRecordLayout &MostDerivedLayout = Context.getASTRecordLayout(MostDerivedClass); MicrosoftVTableContext &VTableContext = CGM.getMicrosoftVTableContext(); SmallVector Classes; serializeClassHierarchy(Classes, MostDerivedClass); Classes.front().initialize(/*Parent=*/nullptr, /*Specifier=*/nullptr); detectAmbiguousBases(Classes); for (const MSRTTIClass &Class : Classes) { // Skip any ambiguous or private bases. if (Class.Flags & (MSRTTIClass::IsPrivateOnPath | MSRTTIClass::IsAmbiguous)) continue; // Write down how to convert from a derived pointer to a base pointer. uint32_t OffsetInVBTable = 0; int32_t VBPtrOffset = -1; if (Class.VirtualRoot) { OffsetInVBTable = VTableContext.getVBTableIndex(MostDerivedClass, Class.VirtualRoot)*4; VBPtrOffset = MostDerivedLayout.getVBPtrOffset().getQuantity(); } // Turn our record back into a pointer if the exception object is a // pointer. 
QualType RTTITy = QualType(Class.RD->getTypeForDecl(), 0); if (IsPointer) RTTITy = Context.getPointerType(RTTITy); CatchableTypes.insert(getCatchableType(RTTITy, Class.OffsetInVBase, VBPtrOffset, OffsetInVBTable)); } } // C++14 [except.handle]p3: // A handler is a match for an exception object of type E if // - The handler is of type cv T or cv T& and E and T are the same type // (ignoring the top-level cv-qualifiers) CatchableTypes.insert(getCatchableType(T)); // C++14 [except.handle]p3: // A handler is a match for an exception object of type E if // - the handler is of type cv T or const T& where T is a pointer type and // E is a pointer type that can be converted to T by [...] // - a standard pointer conversion (4.10) not involving conversions to // pointers to private or protected or ambiguous classes // // C++14 [conv.ptr]p2: // A prvalue of type "pointer to cv T," where T is an object type, can be // converted to a prvalue of type "pointer to cv void". if (IsPointer && T->getPointeeType()->isObjectType()) CatchableTypes.insert(getCatchableType(getContext().VoidPtrTy)); // C++14 [except.handle]p3: // A handler is a match for an exception object of type E if [...] // - the handler is of type cv T or const T& where T is a pointer or // pointer to member type and E is std::nullptr_t. // // We cannot possibly list all possible pointer types here, making this // implementation incompatible with the standard. However, MSVC includes an // entry for pointer-to-void in this case. Let's do the same. if (T->isNullPtrType()) CatchableTypes.insert(getCatchableType(getContext().VoidPtrTy)); uint32_t NumEntries = CatchableTypes.size(); llvm::Type *CTType = getImageRelativeType(getCatchableTypeType()->getPointerTo()); llvm::ArrayType *AT = llvm::ArrayType::get(CTType, NumEntries); llvm::StructType *CTAType = getCatchableTypeArrayType(NumEntries); llvm::Constant *Fields[] = { llvm::ConstantInt::get(CGM.IntTy, NumEntries), // NumEntries llvm::ConstantArray::get( AT, llvm::makeArrayRef(CatchableTypes.begin(), CatchableTypes.end())) // CatchableTypes }; SmallString<256> MangledName; { llvm::raw_svector_ostream Out(MangledName); getMangleContext().mangleCXXCatchableTypeArray(T, NumEntries, Out); } CTA = new llvm::GlobalVariable( CGM.getModule(), CTAType, /*Constant=*/true, getLinkageForRTTI(T), llvm::ConstantStruct::get(CTAType, Fields), MangledName); CTA->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Global); CTA->setSection(".xdata"); if (CTA->isWeakForLinker()) CTA->setComdat(CGM.getModule().getOrInsertComdat(CTA->getName())); return CTA; } llvm::GlobalVariable *MicrosoftCXXABI::getThrowInfo(QualType T) { bool IsConst, IsVolatile, IsUnaligned; T = decomposeTypeForEH(getContext(), T, IsConst, IsVolatile, IsUnaligned); // The CatchableTypeArray enumerates the various (CV-unqualified) types that // the exception object may be caught as. llvm::GlobalVariable *CTA = getCatchableTypeArray(T); // The first field in a CatchableTypeArray is the number of CatchableTypes. // This is used as a component of the mangled name which means that we need to // know what it is in order to see if we have previously generated the // ThrowInfo. uint32_t NumEntries = cast(CTA->getInitializer()->getAggregateElement(0U)) ->getLimitedValue(); SmallString<256> MangledName; { llvm::raw_svector_ostream Out(MangledName); getMangleContext().mangleCXXThrowInfo(T, IsConst, IsVolatile, IsUnaligned, NumEntries, Out); } // Reuse a previously generated ThrowInfo if we have generated an appropriate // one before. 
if (llvm::GlobalVariable *GV = CGM.getModule().getNamedGlobal(MangledName)) return GV; // The RTTI TypeDescriptor uses an unqualified type but catch clauses must // be at least as CV qualified. Encode this requirement into the Flags // bitfield. uint32_t Flags = 0; if (IsConst) Flags |= 1; if (IsVolatile) Flags |= 2; if (IsUnaligned) Flags |= 4; // The cleanup-function (a destructor) must be called when the exception // object's lifetime ends. llvm::Constant *CleanupFn = llvm::Constant::getNullValue(CGM.Int8PtrTy); if (const CXXRecordDecl *RD = T->getAsCXXRecordDecl()) if (CXXDestructorDecl *DtorD = RD->getDestructor()) if (!DtorD->isTrivial()) CleanupFn = llvm::ConstantExpr::getBitCast( CGM.getAddrOfCXXStructor(DtorD, StructorType::Complete), CGM.Int8PtrTy); // This is unused as far as we can tell, initialize it to null. llvm::Constant *ForwardCompat = getImageRelativeConstant(llvm::Constant::getNullValue(CGM.Int8PtrTy)); llvm::Constant *PointerToCatchableTypes = getImageRelativeConstant( llvm::ConstantExpr::getBitCast(CTA, CGM.Int8PtrTy)); llvm::StructType *TIType = getThrowInfoType(); llvm::Constant *Fields[] = { llvm::ConstantInt::get(CGM.IntTy, Flags), // Flags getImageRelativeConstant(CleanupFn), // CleanupFn ForwardCompat, // ForwardCompat PointerToCatchableTypes // CatchableTypeArray }; auto *GV = new llvm::GlobalVariable( CGM.getModule(), TIType, /*Constant=*/true, getLinkageForRTTI(T), llvm::ConstantStruct::get(TIType, Fields), StringRef(MangledName)); GV->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Global); GV->setSection(".xdata"); if (GV->isWeakForLinker()) GV->setComdat(CGM.getModule().getOrInsertComdat(GV->getName())); return GV; } void MicrosoftCXXABI::emitThrow(CodeGenFunction &CGF, const CXXThrowExpr *E) { const Expr *SubExpr = E->getSubExpr(); QualType ThrowType = SubExpr->getType(); // The exception object lives on the stack and it's address is passed to the // runtime function. Address AI = CGF.CreateMemTemp(ThrowType); CGF.EmitAnyExprToMem(SubExpr, AI, ThrowType.getQualifiers(), /*IsInit=*/true); // The so-called ThrowInfo is used to describe how the exception object may be // caught. llvm::GlobalVariable *TI = getThrowInfo(ThrowType); // Call into the runtime to throw the exception. llvm::Value *Args[] = { CGF.Builder.CreateBitCast(AI.getPointer(), CGM.Int8PtrTy), TI }; CGF.EmitNoreturnRuntimeCallOrInvoke(getThrowFn(), Args); } diff --git a/lib/Driver/ToolChains/Darwin.cpp b/lib/Driver/ToolChains/Darwin.cpp index 6b7f0c71dfb7..32103a6120d4 100644 --- a/lib/Driver/ToolChains/Darwin.cpp +++ b/lib/Driver/ToolChains/Darwin.cpp @@ -1,2028 +1,2033 @@ //===--- Darwin.cpp - Darwin Tool and ToolChain Implementations -*- C++ -*-===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. 
//
//===----------------------------------------------------------------------===//

#include "Darwin.h"
#include "Arch/ARM.h"
#include "CommonArgs.h"
#include "clang/Basic/ObjCRuntime.h"
#include "clang/Basic/VirtualFileSystem.h"
#include "clang/Driver/Compilation.h"
#include "clang/Driver/Driver.h"
#include "clang/Driver/DriverDiagnostic.h"
#include "clang/Driver/Options.h"
#include "clang/Driver/SanitizerArgs.h"
#include "llvm/ADT/StringSwitch.h"
#include "llvm/Option/ArgList.h"
#include "llvm/Support/Path.h"
#include "llvm/Support/ScopedPrinter.h"
#include "llvm/Support/TargetParser.h"
#include <cstdlib> // ::getenv

using namespace clang::driver;
using namespace clang::driver::tools;
using namespace clang::driver::toolchains;
using namespace clang;
using namespace llvm::opt;

llvm::Triple::ArchType darwin::getArchTypeForMachOArchName(StringRef Str) {
  // See arch(3) and llvm-gcc's driver-driver.c. We don't implement support for
  // archs which Darwin doesn't use.

  // The matching this routine does is fairly pointless, since it is neither
  // the complete architecture list, nor a reasonable subset. The problem is
  // that historically the driver driver accepts this and also ties its
  // -march= handling to the architecture name, so we need to be careful before
  // removing support for it.

  // This code must be kept in sync with Clang's Darwin specific argument
  // translation.

  return llvm::StringSwitch<llvm::Triple::ArchType>(Str)
      .Cases("ppc", "ppc601", "ppc603", "ppc604", "ppc604e", llvm::Triple::ppc)
      .Cases("ppc750", "ppc7400", "ppc7450", "ppc970", llvm::Triple::ppc)
      .Case("ppc64", llvm::Triple::ppc64)
      .Cases("i386", "i486", "i486SX", "i586", "i686", llvm::Triple::x86)
      .Cases("pentium", "pentpro", "pentIIm3", "pentIIm5", "pentium4",
             llvm::Triple::x86)
      .Cases("x86_64", "x86_64h", llvm::Triple::x86_64)
      // This is derived from the driver driver.
      .Cases("arm", "armv4t", "armv5", "armv6", "armv6m", llvm::Triple::arm)
      .Cases("armv7", "armv7em", "armv7k", "armv7m", llvm::Triple::arm)
      .Cases("armv7s", "xscale", llvm::Triple::arm)
      .Case("arm64", llvm::Triple::aarch64)
      .Case("r600", llvm::Triple::r600)
      .Case("amdgcn", llvm::Triple::amdgcn)
      .Case("nvptx", llvm::Triple::nvptx)
      .Case("nvptx64", llvm::Triple::nvptx64)
      .Case("amdil", llvm::Triple::amdil)
      .Case("spir", llvm::Triple::spir)
      .Default(llvm::Triple::UnknownArch);
}

void darwin::setTripleTypeForMachOArchName(llvm::Triple &T, StringRef Str) {
  const llvm::Triple::ArchType Arch = getArchTypeForMachOArchName(Str);
  unsigned ArchKind = llvm::ARM::parseArch(Str);
  T.setArch(Arch);

  if (Str == "x86_64h")
    T.setArchName(Str);
  else if (ArchKind == llvm::ARM::AK_ARMV6M ||
           ArchKind == llvm::ARM::AK_ARMV7M ||
           ArchKind == llvm::ARM::AK_ARMV7EM) {
    T.setOS(llvm::Triple::UnknownOS);
    T.setObjectFormat(llvm::Triple::MachO);
  }
}

void darwin::Assembler::ConstructJob(Compilation &C, const JobAction &JA,
                                     const InputInfo &Output,
                                     const InputInfoList &Inputs,
                                     const ArgList &Args,
                                     const char *LinkingOutput) const {
  ArgStringList CmdArgs;

  assert(Inputs.size() == 1 && "Unexpected number of inputs.");
  const InputInfo &Input = Inputs[0];

  // Determine the original source input.
  const Action *SourceAction = &JA;
  while (SourceAction->getKind() != Action::InputClass) {
    assert(!SourceAction->getInputs().empty() && "unexpected root action!");
    SourceAction = SourceAction->getInputs()[0];
  }

  // If -fno-integrated-as is used add -Q to the darwin assembler driver to
  // make sure it runs its system assembler not clang's integrated assembler.
  // Applicable to darwin11+ and Xcode 4+. darwin<10 lacked integrated-as.
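  // For example (hypothetical invocation): `clang -c -fno-integrated-as foo.s`
  // targeting a modern Darwin system ends up constructing a system-assembler
  // job roughly of the form
  //
  //   as -Q -arch x86_64 -force_cpusubtype_ALL -o foo.o foo.s
  //
  // whereas without -fno-integrated-as the integrated assembler is used and no
  // separate `as` job is created.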
// FIXME: at run-time detect assembler capabilities or rely on version // information forwarded by -target-assembler-version. if (Args.hasArg(options::OPT_fno_integrated_as)) { const llvm::Triple &T(getToolChain().getTriple()); if (!(T.isMacOSX() && T.isMacOSXVersionLT(10, 7))) CmdArgs.push_back("-Q"); } // Forward -g, assuming we are dealing with an actual assembly file. if (SourceAction->getType() == types::TY_Asm || SourceAction->getType() == types::TY_PP_Asm) { if (Args.hasArg(options::OPT_gstabs)) CmdArgs.push_back("--gstabs"); else if (Args.hasArg(options::OPT_g_Group)) CmdArgs.push_back("-g"); } // Derived from asm spec. AddMachOArch(Args, CmdArgs); // Use -force_cpusubtype_ALL on x86 by default. if (getToolChain().getArch() == llvm::Triple::x86 || getToolChain().getArch() == llvm::Triple::x86_64 || Args.hasArg(options::OPT_force__cpusubtype__ALL)) CmdArgs.push_back("-force_cpusubtype_ALL"); if (getToolChain().getArch() != llvm::Triple::x86_64 && (((Args.hasArg(options::OPT_mkernel) || Args.hasArg(options::OPT_fapple_kext)) && getMachOToolChain().isKernelStatic()) || Args.hasArg(options::OPT_static))) CmdArgs.push_back("-static"); Args.AddAllArgValues(CmdArgs, options::OPT_Wa_COMMA, options::OPT_Xassembler); assert(Output.isFilename() && "Unexpected lipo output."); CmdArgs.push_back("-o"); CmdArgs.push_back(Output.getFilename()); assert(Input.isFilename() && "Invalid input."); CmdArgs.push_back(Input.getFilename()); // asm_final spec is empty. const char *Exec = Args.MakeArgString(getToolChain().GetProgramPath("as")); C.addCommand(llvm::make_unique(JA, *this, Exec, CmdArgs, Inputs)); } void darwin::MachOTool::anchor() {} void darwin::MachOTool::AddMachOArch(const ArgList &Args, ArgStringList &CmdArgs) const { StringRef ArchName = getMachOToolChain().getMachOArchName(Args); // Derived from darwin_arch spec. CmdArgs.push_back("-arch"); CmdArgs.push_back(Args.MakeArgString(ArchName)); // FIXME: Is this needed anymore? if (ArchName == "arm") CmdArgs.push_back("-force_cpusubtype_ALL"); } bool darwin::Linker::NeedsTempPath(const InputInfoList &Inputs) const { // We only need to generate a temp path for LTO if we aren't compiling object // files. When compiling source files, we run 'dsymutil' after linking. We // don't run 'dsymutil' when compiling object files. for (const auto &Input : Inputs) if (Input.getType() != types::TY_Object) return true; return false; } /// \brief Pass -no_deduplicate to ld64 under certain conditions: /// /// - Either -O0 or -O1 is explicitly specified /// - No -O option is specified *and* this is a compile+link (implicit -O0) /// /// Also do *not* add -no_deduplicate when no -O option is specified and this /// is just a link (we can't imply -O0) static bool shouldLinkerNotDedup(bool IsLinkerOnlyAction, const ArgList &Args) { if (Arg *A = Args.getLastArg(options::OPT_O_Group)) { if (A->getOption().matches(options::OPT_O0)) return true; if (A->getOption().matches(options::OPT_O)) return llvm::StringSwitch(A->getValue()) .Case("1", true) .Default(false); return false; // OPT_Ofast & OPT_O4 } if (!IsLinkerOnlyAction) // Implicit -O0 for compile+linker only. 
return true; return false; } void darwin::Linker::AddLinkArgs(Compilation &C, const ArgList &Args, ArgStringList &CmdArgs, const InputInfoList &Inputs) const { const Driver &D = getToolChain().getDriver(); const toolchains::MachO &MachOTC = getMachOToolChain(); unsigned Version[5] = {0, 0, 0, 0, 0}; if (Arg *A = Args.getLastArg(options::OPT_mlinker_version_EQ)) { if (!Driver::GetReleaseVersion(A->getValue(), Version)) D.Diag(diag::err_drv_invalid_version_number) << A->getAsString(Args); } // Newer linkers support -demangle. Pass it if supported and not disabled by // the user. if (Version[0] >= 100 && !Args.hasArg(options::OPT_Z_Xlinker__no_demangle)) CmdArgs.push_back("-demangle"); if (Args.hasArg(options::OPT_rdynamic) && Version[0] >= 137) CmdArgs.push_back("-export_dynamic"); // If we are using App Extension restrictions, pass a flag to the linker // telling it that the compiled code has been audited. if (Args.hasFlag(options::OPT_fapplication_extension, options::OPT_fno_application_extension, false)) CmdArgs.push_back("-application_extension"); if (D.isUsingLTO()) { // If we are using LTO, then automatically create a temporary file path for // the linker to use, so that it's lifetime will extend past a possible // dsymutil step. if (Version[0] >= 116 && NeedsTempPath(Inputs)) { const char *TmpPath = C.getArgs().MakeArgString( D.GetTemporaryPath("cc", types::getTypeTempSuffix(types::TY_Object))); C.addTempFile(TmpPath); CmdArgs.push_back("-object_path_lto"); CmdArgs.push_back(TmpPath); } } // Use -lto_library option to specify the libLTO.dylib path. Try to find // it in clang installed libraries. ld64 will only look at this argument // when it actually uses LTO, so libLTO.dylib only needs to exist at link // time if ld64 decides that it needs to use LTO. // Since this is passed unconditionally, ld64 will never look for libLTO.dylib // next to it. That's ok since ld64 using a libLTO.dylib not matching the // clang version won't work anyways. if (Version[0] >= 133) { // Search for libLTO in /../lib/libLTO.dylib StringRef P = llvm::sys::path::parent_path(D.Dir); SmallString<128> LibLTOPath(P); llvm::sys::path::append(LibLTOPath, "lib"); llvm::sys::path::append(LibLTOPath, "libLTO.dylib"); CmdArgs.push_back("-lto_library"); CmdArgs.push_back(C.getArgs().MakeArgString(LibLTOPath)); } // ld64 version 262 and above run the deduplicate pass by default. if (Version[0] >= 262 && shouldLinkerNotDedup(C.getJobs().empty(), Args)) CmdArgs.push_back("-no_deduplicate"); // Derived from the "link" spec. Args.AddAllArgs(CmdArgs, options::OPT_static); if (!Args.hasArg(options::OPT_static)) CmdArgs.push_back("-dynamic"); if (Args.hasArg(options::OPT_fgnu_runtime)) { // FIXME: gcc replaces -lobjc in forward args with -lobjc-gnu // here. How do we wish to handle such things? } if (!Args.hasArg(options::OPT_dynamiclib)) { AddMachOArch(Args, CmdArgs); // FIXME: Why do this only on this path? 
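    // For example (hypothetical command lines): `clang -dynamiclib
    // -current_version 1.2 foo.c` has the value translated further down into
    // -dylib_current_version, while `clang -current_version 1.2 foo.c`
    // (without -dynamiclib) is rejected on this path just below with the
    // "argument only allowed with -dynamiclib" driver diagnostic.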
Args.AddLastArg(CmdArgs, options::OPT_force__cpusubtype__ALL); Args.AddLastArg(CmdArgs, options::OPT_bundle); Args.AddAllArgs(CmdArgs, options::OPT_bundle__loader); Args.AddAllArgs(CmdArgs, options::OPT_client__name); Arg *A; if ((A = Args.getLastArg(options::OPT_compatibility__version)) || (A = Args.getLastArg(options::OPT_current__version)) || (A = Args.getLastArg(options::OPT_install__name))) D.Diag(diag::err_drv_argument_only_allowed_with) << A->getAsString(Args) << "-dynamiclib"; Args.AddLastArg(CmdArgs, options::OPT_force__flat__namespace); Args.AddLastArg(CmdArgs, options::OPT_keep__private__externs); Args.AddLastArg(CmdArgs, options::OPT_private__bundle); } else { CmdArgs.push_back("-dylib"); Arg *A; if ((A = Args.getLastArg(options::OPT_bundle)) || (A = Args.getLastArg(options::OPT_bundle__loader)) || (A = Args.getLastArg(options::OPT_client__name)) || (A = Args.getLastArg(options::OPT_force__flat__namespace)) || (A = Args.getLastArg(options::OPT_keep__private__externs)) || (A = Args.getLastArg(options::OPT_private__bundle))) D.Diag(diag::err_drv_argument_not_allowed_with) << A->getAsString(Args) << "-dynamiclib"; Args.AddAllArgsTranslated(CmdArgs, options::OPT_compatibility__version, "-dylib_compatibility_version"); Args.AddAllArgsTranslated(CmdArgs, options::OPT_current__version, "-dylib_current_version"); AddMachOArch(Args, CmdArgs); Args.AddAllArgsTranslated(CmdArgs, options::OPT_install__name, "-dylib_install_name"); } Args.AddLastArg(CmdArgs, options::OPT_all__load); Args.AddAllArgs(CmdArgs, options::OPT_allowable__client); Args.AddLastArg(CmdArgs, options::OPT_bind__at__load); if (MachOTC.isTargetIOSBased()) Args.AddLastArg(CmdArgs, options::OPT_arch__errors__fatal); Args.AddLastArg(CmdArgs, options::OPT_dead__strip); Args.AddLastArg(CmdArgs, options::OPT_no__dead__strip__inits__and__terms); Args.AddAllArgs(CmdArgs, options::OPT_dylib__file); Args.AddLastArg(CmdArgs, options::OPT_dynamic); Args.AddAllArgs(CmdArgs, options::OPT_exported__symbols__list); Args.AddLastArg(CmdArgs, options::OPT_flat__namespace); Args.AddAllArgs(CmdArgs, options::OPT_force__load); Args.AddAllArgs(CmdArgs, options::OPT_headerpad__max__install__names); Args.AddAllArgs(CmdArgs, options::OPT_image__base); Args.AddAllArgs(CmdArgs, options::OPT_init); // Add the deployment target. MachOTC.addMinVersionArgs(Args, CmdArgs); Args.AddLastArg(CmdArgs, options::OPT_nomultidefs); Args.AddLastArg(CmdArgs, options::OPT_multi__module); Args.AddLastArg(CmdArgs, options::OPT_single__module); Args.AddAllArgs(CmdArgs, options::OPT_multiply__defined); Args.AddAllArgs(CmdArgs, options::OPT_multiply__defined__unused); if (const Arg *A = Args.getLastArg(options::OPT_fpie, options::OPT_fPIE, options::OPT_fno_pie, options::OPT_fno_PIE)) { if (A->getOption().matches(options::OPT_fpie) || A->getOption().matches(options::OPT_fPIE)) CmdArgs.push_back("-pie"); else CmdArgs.push_back("-no_pie"); } // for embed-bitcode, use -bitcode_bundle in linker command if (C.getDriver().embedBitcodeEnabled()) { // Check if the toolchain supports bitcode build flow. 
if (MachOTC.SupportsEmbeddedBitcode()) { CmdArgs.push_back("-bitcode_bundle"); if (C.getDriver().embedBitcodeMarkerOnly() && Version[0] >= 278) { CmdArgs.push_back("-bitcode_process_mode"); CmdArgs.push_back("marker"); } } else D.Diag(diag::err_drv_bitcode_unsupported_on_toolchain); } Args.AddLastArg(CmdArgs, options::OPT_prebind); Args.AddLastArg(CmdArgs, options::OPT_noprebind); Args.AddLastArg(CmdArgs, options::OPT_nofixprebinding); Args.AddLastArg(CmdArgs, options::OPT_prebind__all__twolevel__modules); Args.AddLastArg(CmdArgs, options::OPT_read__only__relocs); Args.AddAllArgs(CmdArgs, options::OPT_sectcreate); Args.AddAllArgs(CmdArgs, options::OPT_sectorder); Args.AddAllArgs(CmdArgs, options::OPT_seg1addr); Args.AddAllArgs(CmdArgs, options::OPT_segprot); Args.AddAllArgs(CmdArgs, options::OPT_segaddr); Args.AddAllArgs(CmdArgs, options::OPT_segs__read__only__addr); Args.AddAllArgs(CmdArgs, options::OPT_segs__read__write__addr); Args.AddAllArgs(CmdArgs, options::OPT_seg__addr__table); Args.AddAllArgs(CmdArgs, options::OPT_seg__addr__table__filename); Args.AddAllArgs(CmdArgs, options::OPT_sub__library); Args.AddAllArgs(CmdArgs, options::OPT_sub__umbrella); // Give --sysroot= preference, over the Apple specific behavior to also use // --isysroot as the syslibroot. StringRef sysroot = C.getSysRoot(); if (sysroot != "") { CmdArgs.push_back("-syslibroot"); CmdArgs.push_back(C.getArgs().MakeArgString(sysroot)); } else if (const Arg *A = Args.getLastArg(options::OPT_isysroot)) { CmdArgs.push_back("-syslibroot"); CmdArgs.push_back(A->getValue()); } Args.AddLastArg(CmdArgs, options::OPT_twolevel__namespace); Args.AddLastArg(CmdArgs, options::OPT_twolevel__namespace__hints); Args.AddAllArgs(CmdArgs, options::OPT_umbrella); Args.AddAllArgs(CmdArgs, options::OPT_undefined); Args.AddAllArgs(CmdArgs, options::OPT_unexported__symbols__list); Args.AddAllArgs(CmdArgs, options::OPT_weak__reference__mismatches); Args.AddLastArg(CmdArgs, options::OPT_X_Flag); Args.AddAllArgs(CmdArgs, options::OPT_y); Args.AddLastArg(CmdArgs, options::OPT_w); Args.AddAllArgs(CmdArgs, options::OPT_pagezero__size); Args.AddAllArgs(CmdArgs, options::OPT_segs__read__); Args.AddLastArg(CmdArgs, options::OPT_seglinkedit); Args.AddLastArg(CmdArgs, options::OPT_noseglinkedit); Args.AddAllArgs(CmdArgs, options::OPT_sectalign); Args.AddAllArgs(CmdArgs, options::OPT_sectobjectsymbols); Args.AddAllArgs(CmdArgs, options::OPT_segcreate); Args.AddLastArg(CmdArgs, options::OPT_whyload); Args.AddLastArg(CmdArgs, options::OPT_whatsloaded); Args.AddAllArgs(CmdArgs, options::OPT_dylinker__install__name); Args.AddLastArg(CmdArgs, options::OPT_dylinker); Args.AddLastArg(CmdArgs, options::OPT_Mach); } /// \brief Determine whether we are linking the ObjC runtime. static bool isObjCRuntimeLinked(const ArgList &Args) { if (isObjCAutoRefCount(Args)) { Args.ClaimAllArgs(options::OPT_fobjc_link_runtime); return true; } return Args.hasArg(options::OPT_fobjc_link_runtime); } void darwin::Linker::ConstructJob(Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const ArgList &Args, const char *LinkingOutput) const { assert(Output.getType() == types::TY_Image && "Invalid linker output type."); // If the number of arguments surpasses the system limits, we will encode the // input files in a separate file, shortening the command line. To this end, // build a list of input file names that can be passed via a file with the // -filelist linker option. 
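  // Illustration (hypothetical paths): instead of passing every object file on
  // the link line, e.g.
  //
  //   ld ... a.o b.o c.o ... -o prog
  //
  // the driver can write the object paths into a temporary file, one per line,
  // and pass
  //
  //   ld ... -filelist /tmp/prog-objects.list -o prog
  //
  // which keeps very long link invocations under the system argument-length
  // limit.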
llvm::opt::ArgStringList InputFileList; // The logic here is derived from gcc's behavior; most of which // comes from specs (starting with link_command). Consult gcc for // more information. ArgStringList CmdArgs; /// Hack(tm) to ignore linking errors when we are doing ARC migration. if (Args.hasArg(options::OPT_ccc_arcmt_check, options::OPT_ccc_arcmt_migrate)) { for (const auto &Arg : Args) Arg->claim(); const char *Exec = Args.MakeArgString(getToolChain().GetProgramPath("touch")); CmdArgs.push_back(Output.getFilename()); C.addCommand(llvm::make_unique(JA, *this, Exec, CmdArgs, None)); return; } // I'm not sure why this particular decomposition exists in gcc, but // we follow suite for ease of comparison. AddLinkArgs(C, Args, CmdArgs, Inputs); // For LTO, pass the name of the optimization record file. if (Args.hasFlag(options::OPT_fsave_optimization_record, options::OPT_fno_save_optimization_record, false)) { CmdArgs.push_back("-mllvm"); CmdArgs.push_back("-lto-pass-remarks-output"); CmdArgs.push_back("-mllvm"); SmallString<128> F; F = Output.getFilename(); F += ".opt.yaml"; CmdArgs.push_back(Args.MakeArgString(F)); if (getLastProfileUseArg(Args)) { CmdArgs.push_back("-mllvm"); CmdArgs.push_back("-lto-pass-remarks-with-hotness"); } } // It seems that the 'e' option is completely ignored for dynamic executables // (the default), and with static executables, the last one wins, as expected. Args.AddAllArgs(CmdArgs, {options::OPT_d_Flag, options::OPT_s, options::OPT_t, options::OPT_Z_Flag, options::OPT_u_Group, options::OPT_e, options::OPT_r}); // Forward -ObjC when either -ObjC or -ObjC++ is used, to force loading // members of static archive libraries which implement Objective-C classes or // categories. if (Args.hasArg(options::OPT_ObjC) || Args.hasArg(options::OPT_ObjCXX)) CmdArgs.push_back("-ObjC"); CmdArgs.push_back("-o"); CmdArgs.push_back(Output.getFilename()); if (!Args.hasArg(options::OPT_nostdlib, options::OPT_nostartfiles)) getMachOToolChain().addStartObjectFileArgs(Args, CmdArgs); // SafeStack requires its own runtime libraries // These libraries should be linked first, to make sure the // __safestack_init constructor executes before everything else if (getToolChain().getSanitizerArgs().needsSafeStackRt()) { getMachOToolChain().AddLinkRuntimeLib(Args, CmdArgs, "libclang_rt.safestack_osx.a", /*AlwaysLink=*/true); } Args.AddAllArgs(CmdArgs, options::OPT_L); AddLinkerInputs(getToolChain(), Inputs, Args, CmdArgs, JA); // Build the input file for -filelist (list of linker input files) in case we // need it later for (const auto &II : Inputs) { if (!II.isFilename()) { // This is a linker input argument. // We cannot mix input arguments and file names in a -filelist input, thus // we prematurely stop our list (remaining files shall be passed as // arguments). if (InputFileList.size() > 0) break; continue; } InputFileList.push_back(II.getFilename()); } if (!Args.hasArg(options::OPT_nostdlib, options::OPT_nodefaultlibs)) addOpenMPRuntime(CmdArgs, getToolChain(), Args); if (isObjCRuntimeLinked(Args) && !Args.hasArg(options::OPT_nostdlib, options::OPT_nodefaultlibs)) { // We use arclite library for both ARC and subscripting support. getMachOToolChain().AddLinkARCArgs(Args, CmdArgs); CmdArgs.push_back("-framework"); CmdArgs.push_back("Foundation"); // Link libobj. 
CmdArgs.push_back("-lobjc"); } if (LinkingOutput) { CmdArgs.push_back("-arch_multiple"); CmdArgs.push_back("-final_output"); CmdArgs.push_back(LinkingOutput); } if (Args.hasArg(options::OPT_fnested_functions)) CmdArgs.push_back("-allow_stack_execute"); getMachOToolChain().addProfileRTLibs(Args, CmdArgs); if (unsigned Parallelism = getLTOParallelism(Args, getToolChain().getDriver())) { CmdArgs.push_back("-mllvm"); CmdArgs.push_back( Args.MakeArgString(Twine("-threads=") + llvm::to_string(Parallelism))); } if (!Args.hasArg(options::OPT_nostdlib, options::OPT_nodefaultlibs)) { if (getToolChain().getDriver().CCCIsCXX()) getToolChain().AddCXXStdlibLibArgs(Args, CmdArgs); // link_ssp spec is empty. // Let the tool chain choose which runtime library to link. getMachOToolChain().AddLinkRuntimeLibArgs(Args, CmdArgs); // No need to do anything for pthreads. Claim argument to avoid warning. Args.ClaimAllArgs(options::OPT_pthread); Args.ClaimAllArgs(options::OPT_pthreads); } if (!Args.hasArg(options::OPT_nostdlib, options::OPT_nostartfiles)) { // endfile_spec is empty. } Args.AddAllArgs(CmdArgs, options::OPT_T_Group); Args.AddAllArgs(CmdArgs, options::OPT_F); // -iframework should be forwarded as -F. for (const Arg *A : Args.filtered(options::OPT_iframework)) CmdArgs.push_back(Args.MakeArgString(std::string("-F") + A->getValue())); if (!Args.hasArg(options::OPT_nostdlib, options::OPT_nodefaultlibs)) { if (Arg *A = Args.getLastArg(options::OPT_fveclib)) { if (A->getValue() == StringRef("Accelerate")) { CmdArgs.push_back("-framework"); CmdArgs.push_back("Accelerate"); } } } const char *Exec = Args.MakeArgString(getToolChain().GetLinkerPath()); std::unique_ptr Cmd = llvm::make_unique(JA, *this, Exec, CmdArgs, Inputs); Cmd->setInputFileList(std::move(InputFileList)); C.addCommand(std::move(Cmd)); } void darwin::Lipo::ConstructJob(Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const ArgList &Args, const char *LinkingOutput) const { ArgStringList CmdArgs; CmdArgs.push_back("-create"); assert(Output.isFilename() && "Unexpected lipo output."); CmdArgs.push_back("-output"); CmdArgs.push_back(Output.getFilename()); for (const auto &II : Inputs) { assert(II.isFilename() && "Unexpected lipo input."); CmdArgs.push_back(II.getFilename()); } const char *Exec = Args.MakeArgString(getToolChain().GetProgramPath("lipo")); C.addCommand(llvm::make_unique(JA, *this, Exec, CmdArgs, Inputs)); } void darwin::Dsymutil::ConstructJob(Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const ArgList &Args, const char *LinkingOutput) const { ArgStringList CmdArgs; CmdArgs.push_back("-o"); CmdArgs.push_back(Output.getFilename()); assert(Inputs.size() == 1 && "Unable to handle multiple inputs."); const InputInfo &Input = Inputs[0]; assert(Input.isFilename() && "Unexpected dsymutil input."); CmdArgs.push_back(Input.getFilename()); const char *Exec = Args.MakeArgString(getToolChain().GetProgramPath("dsymutil")); C.addCommand(llvm::make_unique(JA, *this, Exec, CmdArgs, Inputs)); } void darwin::VerifyDebug::ConstructJob(Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const ArgList &Args, const char *LinkingOutput) const { ArgStringList CmdArgs; CmdArgs.push_back("--verify"); CmdArgs.push_back("--debug-info"); CmdArgs.push_back("--eh-frame"); CmdArgs.push_back("--quiet"); assert(Inputs.size() == 1 && "Unable to handle multiple inputs."); const InputInfo &Input = Inputs[0]; assert(Input.isFilename() && "Unexpected 
verify input"); // Grabbing the output of the earlier dsymutil run. CmdArgs.push_back(Input.getFilename()); const char *Exec = Args.MakeArgString(getToolChain().GetProgramPath("dwarfdump")); C.addCommand(llvm::make_unique(JA, *this, Exec, CmdArgs, Inputs)); } MachO::MachO(const Driver &D, const llvm::Triple &Triple, const ArgList &Args) : ToolChain(D, Triple, Args) { // We expect 'as', 'ld', etc. to be adjacent to our install dir. getProgramPaths().push_back(getDriver().getInstalledDir()); if (getDriver().getInstalledDir() != getDriver().Dir) getProgramPaths().push_back(getDriver().Dir); } /// Darwin - Darwin tool chain for i386 and x86_64. Darwin::Darwin(const Driver &D, const llvm::Triple &Triple, const ArgList &Args) : MachO(D, Triple, Args), TargetInitialized(false), CudaInstallation(D, Triple, Args) {} types::ID MachO::LookupTypeForExtension(StringRef Ext) const { types::ID Ty = types::lookupTypeForExtension(Ext); // Darwin always preprocesses assembly files (unless -x is used explicitly). if (Ty == types::TY_PP_Asm) return types::TY_Asm; return Ty; } bool MachO::HasNativeLLVMSupport() const { return true; } ToolChain::CXXStdlibType Darwin::GetDefaultCXXStdlibType() const { // Default to use libc++ on OS X 10.9+ and iOS 7+. if ((isTargetMacOS() && !isMacosxVersionLT(10, 9)) || (isTargetIOSBased() && !isIPhoneOSVersionLT(7, 0)) || isTargetWatchOSBased()) return ToolChain::CST_Libcxx; return ToolChain::CST_Libstdcxx; } /// Darwin provides an ARC runtime starting in MacOS X 10.7 and iOS 5.0. ObjCRuntime Darwin::getDefaultObjCRuntime(bool isNonFragile) const { if (isTargetWatchOSBased()) return ObjCRuntime(ObjCRuntime::WatchOS, TargetVersion); if (isTargetIOSBased()) return ObjCRuntime(ObjCRuntime::iOS, TargetVersion); if (isNonFragile) return ObjCRuntime(ObjCRuntime::MacOSX, TargetVersion); return ObjCRuntime(ObjCRuntime::FragileMacOSX, TargetVersion); } /// Darwin provides a blocks runtime starting in MacOS X 10.6 and iOS 3.2. bool Darwin::hasBlocksRuntime() const { if (isTargetWatchOSBased()) return true; else if (isTargetIOSBased()) return !isIPhoneOSVersionLT(3, 2); else { assert(isTargetMacOS() && "unexpected darwin target"); return !isMacosxVersionLT(10, 6); } } void Darwin::AddCudaIncludeArgs(const ArgList &DriverArgs, ArgStringList &CC1Args) const { CudaInstallation.AddCudaIncludeArgs(DriverArgs, CC1Args); } // This is just a MachO name translation routine and there's no // way to join this into ARMTargetParser without breaking all // other assumptions. Maybe MachO should consider standardising // their nomenclature. static const char *ArmMachOArchName(StringRef Arch) { return llvm::StringSwitch(Arch) .Case("armv6k", "armv6") .Case("armv6m", "armv6m") .Case("armv5tej", "armv5") .Case("xscale", "xscale") .Case("armv4t", "armv4t") .Case("armv7", "armv7") .Cases("armv7a", "armv7-a", "armv7") .Cases("armv7r", "armv7-r", "armv7") .Cases("armv7em", "armv7e-m", "armv7em") .Cases("armv7k", "armv7-k", "armv7k") .Cases("armv7m", "armv7-m", "armv7m") .Cases("armv7s", "armv7-s", "armv7s") .Default(nullptr); } static const char *ArmMachOArchNameCPU(StringRef CPU) { unsigned ArchKind = llvm::ARM::parseCPUArch(CPU); if (ArchKind == llvm::ARM::AK_INVALID) return nullptr; StringRef Arch = llvm::ARM::getArchName(ArchKind); // FIXME: Make sure this MachO triple mangling is really necessary. // ARMv5* normalises to ARMv5. if (Arch.startswith("armv5")) Arch = Arch.substr(0, 5); // ARMv6*, except ARMv6M, normalises to ARMv6. 
else if (Arch.startswith("armv6") && !Arch.endswith("6m")) Arch = Arch.substr(0, 5); // ARMv7A normalises to ARMv7. else if (Arch.endswith("v7a")) Arch = Arch.substr(0, 5); return Arch.data(); } StringRef MachO::getMachOArchName(const ArgList &Args) const { switch (getTriple().getArch()) { default: return getDefaultUniversalArchName(); case llvm::Triple::aarch64: return "arm64"; case llvm::Triple::thumb: case llvm::Triple::arm: if (const Arg *A = Args.getLastArg(clang::driver::options::OPT_march_EQ)) if (const char *Arch = ArmMachOArchName(A->getValue())) return Arch; if (const Arg *A = Args.getLastArg(options::OPT_mcpu_EQ)) if (const char *Arch = ArmMachOArchNameCPU(A->getValue())) return Arch; return "arm"; } } Darwin::~Darwin() {} MachO::~MachO() {} std::string Darwin::ComputeEffectiveClangTriple(const ArgList &Args, types::ID InputType) const { llvm::Triple Triple(ComputeLLVMTriple(Args, InputType)); // If the target isn't initialized (e.g., an unknown Darwin platform, return // the default triple). if (!isTargetInitialized()) return Triple.getTriple(); SmallString<16> Str; if (isTargetWatchOSBased()) Str += "watchos"; else if (isTargetTvOSBased()) Str += "tvos"; else if (isTargetIOSBased()) Str += "ios"; else Str += "macosx"; Str += getTargetVersion().getAsString(); Triple.setOSName(Str); return Triple.getTriple(); } Tool *MachO::getTool(Action::ActionClass AC) const { switch (AC) { case Action::LipoJobClass: if (!Lipo) Lipo.reset(new tools::darwin::Lipo(*this)); return Lipo.get(); case Action::DsymutilJobClass: if (!Dsymutil) Dsymutil.reset(new tools::darwin::Dsymutil(*this)); return Dsymutil.get(); case Action::VerifyDebugInfoJobClass: if (!VerifyDebug) VerifyDebug.reset(new tools::darwin::VerifyDebug(*this)); return VerifyDebug.get(); default: return ToolChain::getTool(AC); } } Tool *MachO::buildLinker() const { return new tools::darwin::Linker(*this); } Tool *MachO::buildAssembler() const { return new tools::darwin::Assembler(*this); } DarwinClang::DarwinClang(const Driver &D, const llvm::Triple &Triple, const ArgList &Args) : Darwin(D, Triple, Args) {} void DarwinClang::addClangWarningOptions(ArgStringList &CC1Args) const { // For modern targets, promote certain warnings to errors. if (isTargetWatchOSBased() || getTriple().isArch64Bit()) { // Always enable -Wdeprecated-objc-isa-usage and promote it // to an error. CC1Args.push_back("-Wdeprecated-objc-isa-usage"); CC1Args.push_back("-Werror=deprecated-objc-isa-usage"); // For iOS and watchOS, also error about implicit function declarations, // as that can impact calling conventions. if (!isTargetMacOS()) CC1Args.push_back("-Werror=implicit-function-declaration"); } } void DarwinClang::AddLinkARCArgs(const ArgList &Args, ArgStringList &CmdArgs) const { // Avoid linking compatibility stubs on i386 mac. if (isTargetMacOS() && getArch() == llvm::Triple::x86) return; ObjCRuntime runtime = getDefaultObjCRuntime(/*nonfragile*/ true); if ((runtime.hasNativeARC() || !isObjCAutoRefCount(Args)) && runtime.hasSubscripting()) return; CmdArgs.push_back("-force_load"); SmallString<128> P(getDriver().ClangExecutable); llvm::sys::path::remove_filename(P); // 'clang' llvm::sys::path::remove_filename(P); // 'bin' llvm::sys::path::append(P, "lib", "arc", "libarclite_"); // Mash in the platform. 
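  // For example (the install prefix is hypothetical): when targeting iphoneos
  // the path assembled here ends up as something like
  //
  //   <prefix>/lib/arc/libarclite_iphoneos.a
  //
  // which is passed after -force_load so the ARC/subscripting compatibility
  // shims are always pulled in.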
if (isTargetWatchOSSimulator()) P += "watchsimulator"; else if (isTargetWatchOS()) P += "watchos"; else if (isTargetTvOSSimulator()) P += "appletvsimulator"; else if (isTargetTvOS()) P += "appletvos"; else if (isTargetIOSSimulator()) P += "iphonesimulator"; else if (isTargetIPhoneOS()) P += "iphoneos"; else P += "macosx"; P += ".a"; CmdArgs.push_back(Args.MakeArgString(P)); } unsigned DarwinClang::GetDefaultDwarfVersion() const { // Default to use DWARF 2 on OS X 10.10 / iOS 8 and lower. if ((isTargetMacOS() && isMacosxVersionLT(10, 11)) || (isTargetIOSBased() && isIPhoneOSVersionLT(9))) return 2; return 4; } void MachO::AddLinkRuntimeLib(const ArgList &Args, ArgStringList &CmdArgs, StringRef DarwinLibName, bool AlwaysLink, bool IsEmbedded, bool AddRPath) const { SmallString<128> Dir(getDriver().ResourceDir); llvm::sys::path::append(Dir, "lib", IsEmbedded ? "macho_embedded" : "darwin"); SmallString<128> P(Dir); llvm::sys::path::append(P, DarwinLibName); // For now, allow missing resource libraries to support developers who may // not have compiler-rt checked out or integrated into their build (unless // we explicitly force linking with this library). if (AlwaysLink || getVFS().exists(P)) CmdArgs.push_back(Args.MakeArgString(P)); // Adding the rpaths might negatively interact when other rpaths are involved, // so we should make sure we add the rpaths last, after all user-specified // rpaths. This is currently true from this place, but we need to be // careful if this function is ever called before user's rpaths are emitted. if (AddRPath) { assert(DarwinLibName.endswith(".dylib") && "must be a dynamic library"); // Add @executable_path to rpath to support having the dylib copied with // the executable. CmdArgs.push_back("-rpath"); CmdArgs.push_back("@executable_path"); // Add the path to the resource dir to rpath to support using the dylib // from the default location without copying. CmdArgs.push_back("-rpath"); CmdArgs.push_back(Args.MakeArgString(Dir)); } } void MachO::AddFuzzerLinkArgs(const ArgList &Args, ArgStringList &CmdArgs) const { // Go up one directory from Clang to find the libfuzzer archive file. StringRef ParentDir = llvm::sys::path::parent_path(getDriver().InstalledDir); SmallString<128> P(ParentDir); llvm::sys::path::append(P, "lib", "libLLVMFuzzer.a"); CmdArgs.push_back(Args.MakeArgString(P)); // Libfuzzer is written in C++ and requires libcxx. 
AddCXXStdlibLibArgs(Args, CmdArgs); } StringRef Darwin::getPlatformFamily() const { switch (TargetPlatform) { case DarwinPlatformKind::MacOS: return "MacOSX"; case DarwinPlatformKind::IPhoneOS: case DarwinPlatformKind::IPhoneOSSimulator: return "iPhone"; case DarwinPlatformKind::TvOS: case DarwinPlatformKind::TvOSSimulator: return "AppleTV"; case DarwinPlatformKind::WatchOS: case DarwinPlatformKind::WatchOSSimulator: return "Watch"; } llvm_unreachable("Unsupported platform"); } StringRef Darwin::getSDKName(StringRef isysroot) { // Assume SDK has path: SOME_PATH/SDKs/PlatformXX.YY.sdk llvm::sys::path::const_iterator SDKDir; auto BeginSDK = llvm::sys::path::begin(isysroot); auto EndSDK = llvm::sys::path::end(isysroot); for (auto IT = BeginSDK; IT != EndSDK; ++IT) { StringRef SDK = *IT; if (SDK.endswith(".sdk")) return SDK.slice(0, SDK.size() - 4); } return ""; } StringRef Darwin::getOSLibraryNameSuffix() const { switch(TargetPlatform) { case DarwinPlatformKind::MacOS: return "osx"; case DarwinPlatformKind::IPhoneOS: return "ios"; case DarwinPlatformKind::IPhoneOSSimulator: return "iossim"; case DarwinPlatformKind::TvOS: return "tvos"; case DarwinPlatformKind::TvOSSimulator: return "tvossim"; case DarwinPlatformKind::WatchOS: return "watchos"; case DarwinPlatformKind::WatchOSSimulator: return "watchossim"; } llvm_unreachable("Unsupported platform"); } void Darwin::addProfileRTLibs(const ArgList &Args, ArgStringList &CmdArgs) const { if (!needsProfileRT(Args)) return; AddLinkRuntimeLib(Args, CmdArgs, (Twine("libclang_rt.profile_") + getOSLibraryNameSuffix() + ".a").str(), /*AlwaysLink*/ true); } void DarwinClang::AddLinkSanitizerLibArgs(const ArgList &Args, ArgStringList &CmdArgs, StringRef Sanitizer) const { AddLinkRuntimeLib( Args, CmdArgs, (Twine("libclang_rt.") + Sanitizer + "_" + getOSLibraryNameSuffix() + "_dynamic.dylib").str(), /*AlwaysLink*/ true, /*IsEmbedded*/ false, /*AddRPath*/ true); } ToolChain::RuntimeLibType DarwinClang::GetRuntimeLibType( const ArgList &Args) const { if (Arg* A = Args.getLastArg(options::OPT_rtlib_EQ)) { StringRef Value = A->getValue(); if (Value != "compiler-rt") getDriver().Diag(clang::diag::err_drv_unsupported_rtlib_for_platform) << Value << "darwin"; } return ToolChain::RLT_CompilerRT; } void DarwinClang::AddLinkRuntimeLibArgs(const ArgList &Args, ArgStringList &CmdArgs) const { // Call once to ensure diagnostic is printed if wrong value was specified GetRuntimeLibType(Args); // Darwin doesn't support real static executables, don't link any runtime // libraries with -static. if (Args.hasArg(options::OPT_static) || Args.hasArg(options::OPT_fapple_kext) || Args.hasArg(options::OPT_mkernel)) return; // Reject -static-libgcc for now, we can deal with this when and if someone // cares. This is useful in situations where someone wants to statically link // something like libstdc++, and needs its runtime support routines. 
if (const Arg *A = Args.getLastArg(options::OPT_static_libgcc)) { getDriver().Diag(diag::err_drv_unsupported_opt) << A->getAsString(Args); return; } const SanitizerArgs &Sanitize = getSanitizerArgs(); if (Sanitize.needsAsanRt()) AddLinkSanitizerLibArgs(Args, CmdArgs, "asan"); if (Sanitize.needsLsanRt()) AddLinkSanitizerLibArgs(Args, CmdArgs, "lsan"); if (Sanitize.needsUbsanRt()) AddLinkSanitizerLibArgs(Args, CmdArgs, "ubsan"); if (Sanitize.needsTsanRt()) AddLinkSanitizerLibArgs(Args, CmdArgs, "tsan"); if (Sanitize.needsFuzzer() && !Args.hasArg(options::OPT_dynamiclib)) AddFuzzerLinkArgs(Args, CmdArgs); if (Sanitize.needsStatsRt()) { StringRef OS = isTargetMacOS() ? "osx" : "iossim"; AddLinkRuntimeLib(Args, CmdArgs, (Twine("libclang_rt.stats_client_") + OS + ".a").str(), /*AlwaysLink=*/true); AddLinkSanitizerLibArgs(Args, CmdArgs, "stats"); } if (Sanitize.needsEsanRt()) AddLinkSanitizerLibArgs(Args, CmdArgs, "esan"); // Otherwise link libSystem, then the dynamic runtime library, and finally any // target specific static runtime library. CmdArgs.push_back("-lSystem"); // Select the dynamic runtime library and the target specific static library. if (isTargetWatchOSBased()) { // We currently always need a static runtime library for watchOS. AddLinkRuntimeLib(Args, CmdArgs, "libclang_rt.watchos.a"); } else if (isTargetTvOSBased()) { // We currently always need a static runtime library for tvOS. AddLinkRuntimeLib(Args, CmdArgs, "libclang_rt.tvos.a"); } else if (isTargetIOSBased()) { // If we are compiling as iOS / simulator, don't attempt to link libgcc_s.1, // it never went into the SDK. // Linking against libgcc_s.1 isn't needed for iOS 5.0+ if (isIPhoneOSVersionLT(5, 0) && !isTargetIOSSimulator() && getTriple().getArch() != llvm::Triple::aarch64) CmdArgs.push_back("-lgcc_s.1"); // We currently always need a static runtime library for iOS. AddLinkRuntimeLib(Args, CmdArgs, "libclang_rt.ios.a"); } else { assert(isTargetMacOS() && "unexpected non MacOS platform"); // The dynamic runtime library was merged with libSystem for 10.6 and // beyond; only 10.4 and 10.5 need an additional runtime library. if (isMacosxVersionLT(10, 5)) CmdArgs.push_back("-lgcc_s.10.4"); else if (isMacosxVersionLT(10, 6)) CmdArgs.push_back("-lgcc_s.10.5"); // Originally for OS X, we thought we would only need a static runtime // library when targeting 10.4, to provide versions of the static functions // which were omitted from 10.4.dylib. This led to the creation of the 10.4 // builtins library. // // Unfortunately, that turned out to not be true, because Darwin system // headers can still use eprintf on i386, and it is not exported from // libSystem. Therefore, we still must provide a runtime library just for // the tiny tiny handful of projects that *might* use that symbol. // // Then over time, we figured out it was useful to add more things to the // runtime so we created libclang_rt.osx.a to provide new functions when // deploying to old OS builds, and for a long time we had both eprintf and // osx builtin libraries. Which just seems excessive. So with PR 28855, we // are removing the eprintf library and expecting eprintf to be provided by // the OS X builtins library. if (isMacosxVersionLT(10, 5)) AddLinkRuntimeLib(Args, CmdArgs, "libclang_rt.10.4.a"); else AddLinkRuntimeLib(Args, CmdArgs, "libclang_rt.osx.a"); } } /// Returns the most appropriate macOS target version for the current process. /// /// If the macOS SDK version is the same or earlier than the system version, /// then the SDK version is returned. 
Otherwise the system version is returned. static std::string getSystemOrSDKMacOSVersion(StringRef MacOSSDKVersion) { unsigned Major, Minor, Micro; llvm::Triple SystemTriple(llvm::sys::getProcessTriple()); if (!SystemTriple.isMacOSX()) return MacOSSDKVersion; SystemTriple.getMacOSXVersion(Major, Minor, Micro); VersionTuple SystemVersion(Major, Minor, Micro); bool HadExtra; if (!Driver::GetReleaseVersion(MacOSSDKVersion, Major, Minor, Micro, HadExtra)) return MacOSSDKVersion; VersionTuple SDKVersion(Major, Minor, Micro); if (SDKVersion > SystemVersion) return SystemVersion.getAsString(); return MacOSSDKVersion; } void Darwin::AddDeploymentTarget(DerivedArgList &Args) const { const OptTable &Opts = getDriver().getOpts(); // Support allowing the SDKROOT environment variable used by xcrun and other // Xcode tools to define the default sysroot, by making it the default for // isysroot. if (const Arg *A = Args.getLastArg(options::OPT_isysroot)) { // Warn if the path does not exist. if (!getVFS().exists(A->getValue())) getDriver().Diag(clang::diag::warn_missing_sysroot) << A->getValue(); } else { if (char *env = ::getenv("SDKROOT")) { // We only use this value as the default if it is an absolute path, // exists, and it is not the root path. if (llvm::sys::path::is_absolute(env) && getVFS().exists(env) && StringRef(env) != "/") { Args.append(Args.MakeSeparateArg( nullptr, Opts.getOption(options::OPT_isysroot), env)); } } } Arg *OSXVersion = Args.getLastArg(options::OPT_mmacosx_version_min_EQ); Arg *iOSVersion = Args.getLastArg(options::OPT_miphoneos_version_min_EQ, options::OPT_mios_simulator_version_min_EQ); Arg *TvOSVersion = Args.getLastArg(options::OPT_mtvos_version_min_EQ, options::OPT_mtvos_simulator_version_min_EQ); Arg *WatchOSVersion = Args.getLastArg(options::OPT_mwatchos_version_min_EQ, options::OPT_mwatchos_simulator_version_min_EQ); unsigned Major, Minor, Micro; bool HadExtra; // The iOS deployment target that is explicitly specified via a command line // option or an environment variable. std::string ExplicitIOSDeploymentTargetStr; if (iOSVersion) ExplicitIOSDeploymentTargetStr = iOSVersion->getAsString(Args); // Add a macro to differentiate between m(iphone|tv|watch)os-version-min=X.Y and // -m(iphone|tv|watch)simulator-version-min=X.Y. if (Args.hasArg(options::OPT_mios_simulator_version_min_EQ) || Args.hasArg(options::OPT_mtvos_simulator_version_min_EQ) || Args.hasArg(options::OPT_mwatchos_simulator_version_min_EQ)) Args.append(Args.MakeSeparateArg(nullptr, Opts.getOption(options::OPT_D), " __APPLE_EMBEDDED_SIMULATOR__=1")); if (OSXVersion && (iOSVersion || TvOSVersion || WatchOSVersion)) { getDriver().Diag(diag::err_drv_argument_not_allowed_with) << OSXVersion->getAsString(Args) << (iOSVersion ? iOSVersion : TvOSVersion ? TvOSVersion : WatchOSVersion)->getAsString(Args); iOSVersion = TvOSVersion = WatchOSVersion = nullptr; } else if (iOSVersion && (TvOSVersion || WatchOSVersion)) { getDriver().Diag(diag::err_drv_argument_not_allowed_with) << iOSVersion->getAsString(Args) << (TvOSVersion ? TvOSVersion : WatchOSVersion)->getAsString(Args); TvOSVersion = WatchOSVersion = nullptr; } else if (TvOSVersion && WatchOSVersion) { getDriver().Diag(diag::err_drv_argument_not_allowed_with) << TvOSVersion->getAsString(Args) << WatchOSVersion->getAsString(Args); WatchOSVersion = nullptr; } else if (!OSXVersion && !iOSVersion && !TvOSVersion && !WatchOSVersion) { // If no deployment target was specified on the command line, check for // environment defines. 
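// The *_DEPLOYMENT_TARGET environment variables read below mirror the
// corresponding -m*-version-min= command-line options.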
std::string OSXTarget; std::string iOSTarget; std::string TvOSTarget; std::string WatchOSTarget; if (char *env = ::getenv("MACOSX_DEPLOYMENT_TARGET")) OSXTarget = env; if (char *env = ::getenv("IPHONEOS_DEPLOYMENT_TARGET")) iOSTarget = env; if (char *env = ::getenv("TVOS_DEPLOYMENT_TARGET")) TvOSTarget = env; if (char *env = ::getenv("WATCHOS_DEPLOYMENT_TARGET")) WatchOSTarget = env; if (!iOSTarget.empty()) ExplicitIOSDeploymentTargetStr = std::string("IPHONEOS_DEPLOYMENT_TARGET=") + iOSTarget; // If there is no command-line argument to specify the Target version and // no environment variable defined, see if we can set the default based // on -isysroot. if (OSXTarget.empty() && iOSTarget.empty() && WatchOSTarget.empty() && TvOSTarget.empty() && Args.hasArg(options::OPT_isysroot)) { if (const Arg *A = Args.getLastArg(options::OPT_isysroot)) { StringRef isysroot = A->getValue(); StringRef SDK = getSDKName(isysroot); if (SDK.size() > 0) { // Slice the version number out. // Version number is between the first and the last number. size_t StartVer = SDK.find_first_of("0123456789"); size_t EndVer = SDK.find_last_of("0123456789"); if (StartVer != StringRef::npos && EndVer > StartVer) { StringRef Version = SDK.slice(StartVer, EndVer + 1); if (SDK.startswith("iPhoneOS") || SDK.startswith("iPhoneSimulator")) iOSTarget = Version; else if (SDK.startswith("MacOSX")) OSXTarget = getSystemOrSDKMacOSVersion(Version); else if (SDK.startswith("WatchOS") || SDK.startswith("WatchSimulator")) WatchOSTarget = Version; else if (SDK.startswith("AppleTVOS") || SDK.startswith("AppleTVSimulator")) TvOSTarget = Version; } } } } // If no OS targets have been specified, try to guess platform from -target // or arch name and compute the version from the triple. if (OSXTarget.empty() && iOSTarget.empty() && TvOSTarget.empty() && WatchOSTarget.empty()) { llvm::Triple::OSType OSTy = llvm::Triple::UnknownOS; // Set the OSTy based on -target if -arch isn't present. if (Args.hasArg(options::OPT_target) && !Args.hasArg(options::OPT_arch)) { OSTy = getTriple().getOS(); } else { StringRef MachOArchName = getMachOArchName(Args); if (MachOArchName == "armv7" || MachOArchName == "armv7s" || MachOArchName == "arm64") OSTy = llvm::Triple::IOS; else if (MachOArchName == "armv7k") OSTy = llvm::Triple::WatchOS; else if (MachOArchName != "armv6m" && MachOArchName != "armv7m" && MachOArchName != "armv7em") OSTy = llvm::Triple::MacOSX; } if (OSTy != llvm::Triple::UnknownOS) { unsigned Major, Minor, Micro; std::string *OSTarget; switch (OSTy) { case llvm::Triple::Darwin: case llvm::Triple::MacOSX: if (!getTriple().getMacOSXVersion(Major, Minor, Micro)) getDriver().Diag(diag::err_drv_invalid_darwin_version) << getTriple().getOSName(); OSTarget = &OSXTarget; break; case llvm::Triple::IOS: getTriple().getiOSVersion(Major, Minor, Micro); OSTarget = &iOSTarget; break; case llvm::Triple::TvOS: getTriple().getOSVersion(Major, Minor, Micro); OSTarget = &TvOSTarget; break; case llvm::Triple::WatchOS: getTriple().getWatchOSVersion(Major, Minor, Micro); OSTarget = &WatchOSTarget; break; default: llvm_unreachable("Unexpected OS type"); break; } llvm::raw_string_ostream(*OSTarget) << Major << '.' << Minor << '.' << Micro; } } // Do not allow conflicts with the watchOS target. if (!WatchOSTarget.empty() && (!iOSTarget.empty() || !TvOSTarget.empty())) { getDriver().Diag(diag::err_drv_conflicting_deployment_targets) << "WATCHOS_DEPLOYMENT_TARGET" << (!iOSTarget.empty() ? 
"IPHONEOS_DEPLOYMENT_TARGET" : "TVOS_DEPLOYMENT_TARGET"); } // Do not allow conflicts with the tvOS target. if (!TvOSTarget.empty() && !iOSTarget.empty()) { getDriver().Diag(diag::err_drv_conflicting_deployment_targets) << "TVOS_DEPLOYMENT_TARGET" << "IPHONEOS_DEPLOYMENT_TARGET"; } // Allow conflicts among OSX and iOS for historical reasons, but choose the // default platform. if (!OSXTarget.empty() && (!iOSTarget.empty() || !WatchOSTarget.empty() || !TvOSTarget.empty())) { if (getTriple().getArch() == llvm::Triple::arm || getTriple().getArch() == llvm::Triple::aarch64 || getTriple().getArch() == llvm::Triple::thumb) OSXTarget = ""; else iOSTarget = WatchOSTarget = TvOSTarget = ""; } if (!OSXTarget.empty()) { const Option O = Opts.getOption(options::OPT_mmacosx_version_min_EQ); OSXVersion = Args.MakeJoinedArg(nullptr, O, OSXTarget); Args.append(OSXVersion); } else if (!iOSTarget.empty()) { const Option O = Opts.getOption(options::OPT_miphoneos_version_min_EQ); iOSVersion = Args.MakeJoinedArg(nullptr, O, iOSTarget); Args.append(iOSVersion); } else if (!TvOSTarget.empty()) { const Option O = Opts.getOption(options::OPT_mtvos_version_min_EQ); TvOSVersion = Args.MakeJoinedArg(nullptr, O, TvOSTarget); Args.append(TvOSVersion); } else if (!WatchOSTarget.empty()) { const Option O = Opts.getOption(options::OPT_mwatchos_version_min_EQ); WatchOSVersion = Args.MakeJoinedArg(nullptr, O, WatchOSTarget); Args.append(WatchOSVersion); } } DarwinPlatformKind Platform; if (OSXVersion) Platform = MacOS; else if (iOSVersion) Platform = IPhoneOS; else if (TvOSVersion) Platform = TvOS; else if (WatchOSVersion) Platform = WatchOS; else llvm_unreachable("Unable to infer Darwin variant"); // Set the tool chain target information. if (Platform == MacOS) { assert((!iOSVersion && !TvOSVersion && !WatchOSVersion) && "Unknown target platform!"); if (!Driver::GetReleaseVersion(OSXVersion->getValue(), Major, Minor, Micro, HadExtra) || HadExtra || Major != 10 || Minor >= 100 || Micro >= 100) getDriver().Diag(diag::err_drv_invalid_version_number) << OSXVersion->getAsString(Args); } else if (Platform == IPhoneOS) { assert(iOSVersion && "Unknown target platform!"); if (!Driver::GetReleaseVersion(iOSVersion->getValue(), Major, Minor, Micro, HadExtra) || HadExtra || Major >= 100 || Minor >= 100 || Micro >= 100) getDriver().Diag(diag::err_drv_invalid_version_number) << iOSVersion->getAsString(Args); // For 32-bit targets, the deployment target for iOS has to be earlier than // iOS 11. if (getTriple().isArch32Bit() && Major >= 11) { // If the deployment target is explicitly specified, print a diagnostic. if (!ExplicitIOSDeploymentTargetStr.empty()) { getDriver().Diag(diag::warn_invalid_ios_deployment_target) << ExplicitIOSDeploymentTargetStr; // Otherwise, set it to 10.99.99. } else { Major = 10; Minor = 99; Micro = 99; } } } else if (Platform == TvOS) { if (!Driver::GetReleaseVersion(TvOSVersion->getValue(), Major, Minor, Micro, HadExtra) || HadExtra || Major >= 100 || Minor >= 100 || Micro >= 100) getDriver().Diag(diag::err_drv_invalid_version_number) << TvOSVersion->getAsString(Args); } else if (Platform == WatchOS) { if (!Driver::GetReleaseVersion(WatchOSVersion->getValue(), Major, Minor, Micro, HadExtra) || HadExtra || Major >= 10 || Minor >= 100 || Micro >= 100) getDriver().Diag(diag::err_drv_invalid_version_number) << WatchOSVersion->getAsString(Args); } else llvm_unreachable("unknown kind of Darwin platform"); // Recognize iOS targets with an x86 architecture as the iOS simulator. 
if (iOSVersion && (getTriple().getArch() == llvm::Triple::x86 || getTriple().getArch() == llvm::Triple::x86_64)) Platform = IPhoneOSSimulator; if (TvOSVersion && (getTriple().getArch() == llvm::Triple::x86 || getTriple().getArch() == llvm::Triple::x86_64)) Platform = TvOSSimulator; if (WatchOSVersion && (getTriple().getArch() == llvm::Triple::x86 || getTriple().getArch() == llvm::Triple::x86_64)) Platform = WatchOSSimulator; setTarget(Platform, Major, Minor, Micro); if (const Arg *A = Args.getLastArg(options::OPT_isysroot)) { StringRef SDK = getSDKName(A->getValue()); if (SDK.size() > 0) { size_t StartVer = SDK.find_first_of("0123456789"); StringRef SDKName = SDK.slice(0, StartVer); if (!SDKName.startswith(getPlatformFamily())) getDriver().Diag(diag::warn_incompatible_sysroot) << SDKName << getPlatformFamily(); } } } void DarwinClang::AddCXXStdlibLibArgs(const ArgList &Args, ArgStringList &CmdArgs) const { CXXStdlibType Type = GetCXXStdlibType(Args); switch (Type) { case ToolChain::CST_Libcxx: CmdArgs.push_back("-lc++"); break; case ToolChain::CST_Libstdcxx: // Unfortunately, -lstdc++ doesn't always exist in the standard search path; // it was previously found in the gcc lib dir. However, for all the Darwin // platforms we care about it was -lstdc++.6, so we search for that // explicitly if we can't see an obvious -lstdc++ candidate. // Check in the sysroot first. if (const Arg *A = Args.getLastArg(options::OPT_isysroot)) { SmallString<128> P(A->getValue()); llvm::sys::path::append(P, "usr", "lib", "libstdc++.dylib"); if (!getVFS().exists(P)) { llvm::sys::path::remove_filename(P); llvm::sys::path::append(P, "libstdc++.6.dylib"); if (getVFS().exists(P)) { CmdArgs.push_back(Args.MakeArgString(P)); return; } } } // Otherwise, look in the root. // FIXME: This should be removed someday when we don't have to care about // 10.6 and earlier, where /usr/lib/libstdc++.dylib does not exist. if (!getVFS().exists("/usr/lib/libstdc++.dylib") && getVFS().exists("/usr/lib/libstdc++.6.dylib")) { CmdArgs.push_back("/usr/lib/libstdc++.6.dylib"); return; } // Otherwise, let the linker search. CmdArgs.push_back("-lstdc++"); break; } } void DarwinClang::AddCCKextLibArgs(const ArgList &Args, ArgStringList &CmdArgs) const { // For Darwin platforms, use the compiler-rt-based support library // instead of the gcc-provided one (which is also incidentally // only present in the gcc lib dir, which makes it hard to find). SmallString<128> P(getDriver().ResourceDir); llvm::sys::path::append(P, "lib", "darwin"); // Use the newer cc_kext for iOS ARM after 6.0. if (isTargetWatchOS()) { llvm::sys::path::append(P, "libclang_rt.cc_kext_watchos.a"); } else if (isTargetTvOS()) { llvm::sys::path::append(P, "libclang_rt.cc_kext_tvos.a"); } else if (isTargetIPhoneOS()) { llvm::sys::path::append(P, "libclang_rt.cc_kext_ios.a"); } else { llvm::sys::path::append(P, "libclang_rt.cc_kext.a"); } // For now, allow missing resource libraries to support developers who may // not have compiler-rt checked out or integrated into their build. if (getVFS().exists(P)) CmdArgs.push_back(Args.MakeArgString(P)); } DerivedArgList *MachO::TranslateArgs(const DerivedArgList &Args, StringRef BoundArch, Action::OffloadKind) const { DerivedArgList *DAL = new DerivedArgList(Args.getBaseArgs()); const OptTable &Opts = getDriver().getOpts(); // FIXME: We really want to get out of the tool chain level argument // translation business, as it makes the driver functionality much // more opaque. 
For now, we follow gcc closely solely for the // purpose of easily achieving feature parity & testability. Once we // have something that works, we should reevaluate each translation // and try to push it down into tool specific logic. for (Arg *A : Args) { if (A->getOption().matches(options::OPT_Xarch__)) { // Skip this argument unless the architecture matches either the toolchain // triple arch, or the arch being bound. llvm::Triple::ArchType XarchArch = tools::darwin::getArchTypeForMachOArchName(A->getValue(0)); if (!(XarchArch == getArch() || (!BoundArch.empty() && XarchArch == tools::darwin::getArchTypeForMachOArchName(BoundArch)))) continue; Arg *OriginalArg = A; unsigned Index = Args.getBaseArgs().MakeIndex(A->getValue(1)); unsigned Prev = Index; std::unique_ptr XarchArg(Opts.ParseOneArg(Args, Index)); // If the argument parsing failed or more than one argument was // consumed, the -Xarch_ argument's parameter tried to consume // extra arguments. Emit an error and ignore. // // We also want to disallow any options which would alter the // driver behavior; that isn't going to work in our model. We // use isDriverOption() as an approximation, although things // like -O4 are going to slip through. if (!XarchArg || Index > Prev + 1) { getDriver().Diag(diag::err_drv_invalid_Xarch_argument_with_args) << A->getAsString(Args); continue; } else if (XarchArg->getOption().hasFlag(options::DriverOption)) { getDriver().Diag(diag::err_drv_invalid_Xarch_argument_isdriver) << A->getAsString(Args); continue; } XarchArg->setBaseArg(A); A = XarchArg.release(); DAL->AddSynthesizedArg(A); // Linker input arguments require custom handling. The problem is that we // have already constructed the phase actions, so we can not treat them as // "input arguments". if (A->getOption().hasFlag(options::LinkerInput)) { // Convert the argument into individual Zlinker_input_args. for (const char *Value : A->getValues()) { DAL->AddSeparateArg( OriginalArg, Opts.getOption(options::OPT_Zlinker_input), Value); } continue; } } // Sob. These is strictly gcc compatible for the time being. Apple // gcc translates options twice, which means that self-expanding // options add duplicates. 
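// e.g. -shared is rewritten to -dynamiclib, and -mkernel/-fapple-kext
// additionally imply -static.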
switch ((options::ID)A->getOption().getID()) { default: DAL->append(A); break; case options::OPT_mkernel: case options::OPT_fapple_kext: DAL->append(A); DAL->AddFlagArg(A, Opts.getOption(options::OPT_static)); break; case options::OPT_dependency_file: DAL->AddSeparateArg(A, Opts.getOption(options::OPT_MF), A->getValue()); break; case options::OPT_gfull: DAL->AddFlagArg(A, Opts.getOption(options::OPT_g_Flag)); DAL->AddFlagArg( A, Opts.getOption(options::OPT_fno_eliminate_unused_debug_symbols)); break; case options::OPT_gused: DAL->AddFlagArg(A, Opts.getOption(options::OPT_g_Flag)); DAL->AddFlagArg( A, Opts.getOption(options::OPT_feliminate_unused_debug_symbols)); break; case options::OPT_shared: DAL->AddFlagArg(A, Opts.getOption(options::OPT_dynamiclib)); break; case options::OPT_fconstant_cfstrings: DAL->AddFlagArg(A, Opts.getOption(options::OPT_mconstant_cfstrings)); break; case options::OPT_fno_constant_cfstrings: DAL->AddFlagArg(A, Opts.getOption(options::OPT_mno_constant_cfstrings)); break; case options::OPT_Wnonportable_cfstrings: DAL->AddFlagArg(A, Opts.getOption(options::OPT_mwarn_nonportable_cfstrings)); break; case options::OPT_Wno_nonportable_cfstrings: DAL->AddFlagArg( A, Opts.getOption(options::OPT_mno_warn_nonportable_cfstrings)); break; case options::OPT_fpascal_strings: DAL->AddFlagArg(A, Opts.getOption(options::OPT_mpascal_strings)); break; case options::OPT_fno_pascal_strings: DAL->AddFlagArg(A, Opts.getOption(options::OPT_mno_pascal_strings)); break; } } if (getTriple().getArch() == llvm::Triple::x86 || getTriple().getArch() == llvm::Triple::x86_64) if (!Args.hasArgNoClaim(options::OPT_mtune_EQ)) DAL->AddJoinedArg(nullptr, Opts.getOption(options::OPT_mtune_EQ), "core2"); // Add the arch options based on the particular spelling of -arch, to match // how the driver driver works. if (!BoundArch.empty()) { StringRef Name = BoundArch; const Option MCpu = Opts.getOption(options::OPT_mcpu_EQ); const Option MArch = Opts.getOption(clang::driver::options::OPT_march_EQ); // This code must be kept in sync with LLVM's getArchTypeForDarwinArch, // which defines the list of which architectures we accept. 
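// e.g. '-arch armv7' is translated to '-march=armv7a', and '-arch x86_64h'
// to '-m64 -march=x86_64h'.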
if (Name == "ppc") ; else if (Name == "ppc601") DAL->AddJoinedArg(nullptr, MCpu, "601"); else if (Name == "ppc603") DAL->AddJoinedArg(nullptr, MCpu, "603"); else if (Name == "ppc604") DAL->AddJoinedArg(nullptr, MCpu, "604"); else if (Name == "ppc604e") DAL->AddJoinedArg(nullptr, MCpu, "604e"); else if (Name == "ppc750") DAL->AddJoinedArg(nullptr, MCpu, "750"); else if (Name == "ppc7400") DAL->AddJoinedArg(nullptr, MCpu, "7400"); else if (Name == "ppc7450") DAL->AddJoinedArg(nullptr, MCpu, "7450"); else if (Name == "ppc970") DAL->AddJoinedArg(nullptr, MCpu, "970"); else if (Name == "ppc64" || Name == "ppc64le") DAL->AddFlagArg(nullptr, Opts.getOption(options::OPT_m64)); else if (Name == "i386") ; else if (Name == "i486") DAL->AddJoinedArg(nullptr, MArch, "i486"); else if (Name == "i586") DAL->AddJoinedArg(nullptr, MArch, "i586"); else if (Name == "i686") DAL->AddJoinedArg(nullptr, MArch, "i686"); else if (Name == "pentium") DAL->AddJoinedArg(nullptr, MArch, "pentium"); else if (Name == "pentium2") DAL->AddJoinedArg(nullptr, MArch, "pentium2"); else if (Name == "pentpro") DAL->AddJoinedArg(nullptr, MArch, "pentiumpro"); else if (Name == "pentIIm3") DAL->AddJoinedArg(nullptr, MArch, "pentium2"); else if (Name == "x86_64") DAL->AddFlagArg(nullptr, Opts.getOption(options::OPT_m64)); else if (Name == "x86_64h") { DAL->AddFlagArg(nullptr, Opts.getOption(options::OPT_m64)); DAL->AddJoinedArg(nullptr, MArch, "x86_64h"); } else if (Name == "arm") DAL->AddJoinedArg(nullptr, MArch, "armv4t"); else if (Name == "armv4t") DAL->AddJoinedArg(nullptr, MArch, "armv4t"); else if (Name == "armv5") DAL->AddJoinedArg(nullptr, MArch, "armv5tej"); else if (Name == "xscale") DAL->AddJoinedArg(nullptr, MArch, "xscale"); else if (Name == "armv6") DAL->AddJoinedArg(nullptr, MArch, "armv6k"); else if (Name == "armv6m") DAL->AddJoinedArg(nullptr, MArch, "armv6m"); else if (Name == "armv7") DAL->AddJoinedArg(nullptr, MArch, "armv7a"); else if (Name == "armv7em") DAL->AddJoinedArg(nullptr, MArch, "armv7em"); else if (Name == "armv7k") DAL->AddJoinedArg(nullptr, MArch, "armv7k"); else if (Name == "armv7m") DAL->AddJoinedArg(nullptr, MArch, "armv7m"); else if (Name == "armv7s") DAL->AddJoinedArg(nullptr, MArch, "armv7s"); } return DAL; } void MachO::AddLinkRuntimeLibArgs(const ArgList &Args, ArgStringList &CmdArgs) const { // Embedded targets are simple at the moment, not supporting sanitizers and // with different libraries for each member of the product { static, PIC } x // { hard-float, soft-float } llvm::SmallString<32> CompilerRT = StringRef("libclang_rt."); CompilerRT += (tools::arm::getARMFloatABI(*this, Args) == tools::arm::FloatABI::Hard) ? "hard" : "soft"; CompilerRT += Args.hasArg(options::OPT_fPIC) ? "_pic.a" : "_static.a"; AddLinkRuntimeLib(Args, CmdArgs, CompilerRT, false, true); } bool Darwin::isAlignedAllocationUnavailable() const { switch (TargetPlatform) { case MacOS: // Earlier than 10.13. return TargetVersion < VersionTuple(10U, 13U, 0U); case IPhoneOS: case IPhoneOSSimulator: case TvOS: case TvOSSimulator: // Earlier than 11.0. return TargetVersion < VersionTuple(11U, 0U, 0U); case WatchOS: case WatchOSSimulator: // Earlier than 4.0. 
return TargetVersion < VersionTuple(4U, 0U, 0U); } llvm_unreachable("Unsupported platform"); } void Darwin::addClangTargetOptions(const llvm::opt::ArgList &DriverArgs, llvm::opt::ArgStringList &CC1Args, Action::OffloadKind DeviceOffloadKind) const { if (isAlignedAllocationUnavailable()) CC1Args.push_back("-faligned-alloc-unavailable"); } DerivedArgList * Darwin::TranslateArgs(const DerivedArgList &Args, StringRef BoundArch, Action::OffloadKind DeviceOffloadKind) const { // First get the generic Apple args, before moving onto Darwin-specific ones. DerivedArgList *DAL = MachO::TranslateArgs(Args, BoundArch, DeviceOffloadKind); const OptTable &Opts = getDriver().getOpts(); // If no architecture is bound, none of the translations here are relevant. if (BoundArch.empty()) return DAL; // Add an explicit version min argument for the deployment target. We do this // after argument translation because -Xarch_ arguments may add a version min // argument. AddDeploymentTarget(*DAL); // For iOS 6, undo the translation to add -static for -mkernel/-fapple-kext. // FIXME: It would be far better to avoid inserting those -static arguments, // but we can't check the deployment target in the translation code until // it is set here. if (isTargetWatchOSBased() || (isTargetIOSBased() && !isIPhoneOSVersionLT(6, 0))) { for (ArgList::iterator it = DAL->begin(), ie = DAL->end(); it != ie; ) { Arg *A = *it; ++it; if (A->getOption().getID() != options::OPT_mkernel && A->getOption().getID() != options::OPT_fapple_kext) continue; assert(it != ie && "unexpected argument translation"); A = *it; assert(A->getOption().getID() == options::OPT_static && "missing expected -static argument"); *it = nullptr; ++it; } } if (!Args.getLastArg(options::OPT_stdlib_EQ) && GetCXXStdlibType(Args) == ToolChain::CST_Libcxx) DAL->AddJoinedArg(nullptr, Opts.getOption(options::OPT_stdlib_EQ), "libc++"); // Validate the C++ standard library choice. CXXStdlibType Type = GetCXXStdlibType(*DAL); if (Type == ToolChain::CST_Libcxx) { // Check whether the target provides libc++. StringRef where; // Complain about targeting iOS < 5.0 in any way. if (isTargetIOSBased() && isIPhoneOSVersionLT(5, 0)) where = "iOS 5.0"; if (where != StringRef()) { getDriver().Diag(clang::diag::err_drv_invalid_libcxx_deployment) << where; } } auto Arch = tools::darwin::getArchTypeForMachOArchName(BoundArch); if ((Arch == llvm::Triple::arm || Arch == llvm::Triple::thumb)) { if (Args.hasFlag(options::OPT_fomit_frame_pointer, options::OPT_fno_omit_frame_pointer, false)) getDriver().Diag(clang::diag::warn_drv_unsupported_opt_for_target) << "-fomit-frame-pointer" << BoundArch; } return DAL; } bool MachO::IsUnwindTablesDefault(const ArgList &Args) const { - return !UseSjLjExceptions(Args); + // Unwind tables are not emitted if -fno-exceptions is supplied (except when + // targeting x86_64). + return getArch() == llvm::Triple::x86_64 || + (!UseSjLjExceptions(Args) && + Args.hasFlag(options::OPT_fexceptions, options::OPT_fno_exceptions, + true)); } bool MachO::UseDwarfDebugFlags() const { if (const char *S = ::getenv("RC_DEBUG_OPTIONS")) return S[0] != '\0'; return false; } bool Darwin::UseSjLjExceptions(const ArgList &Args) const { // Darwin uses SjLj exceptions on ARM. if (getTriple().getArch() != llvm::Triple::arm && getTriple().getArch() != llvm::Triple::thumb) return false; // Only watchOS uses the new DWARF/Compact unwinding method. 
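// That is, ARM Darwin targets keep SjLj exceptions unless they use the
// watch ABI.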
llvm::Triple Triple(ComputeLLVMTriple(Args)); return !Triple.isWatchABI(); } bool Darwin::SupportsEmbeddedBitcode() const { assert(TargetInitialized && "Target not initialized!"); if (isTargetIPhoneOS() && isIPhoneOSVersionLT(6, 0)) return false; return true; } bool MachO::isPICDefault() const { return true; } bool MachO::isPIEDefault() const { return false; } bool MachO::isPICDefaultForced() const { return (getArch() == llvm::Triple::x86_64 || getArch() == llvm::Triple::aarch64); } bool MachO::SupportsProfiling() const { // Profiling instrumentation is only supported on x86. return getArch() == llvm::Triple::x86 || getArch() == llvm::Triple::x86_64; } void Darwin::addMinVersionArgs(const ArgList &Args, ArgStringList &CmdArgs) const { VersionTuple TargetVersion = getTargetVersion(); if (isTargetWatchOS()) CmdArgs.push_back("-watchos_version_min"); else if (isTargetWatchOSSimulator()) CmdArgs.push_back("-watchos_simulator_version_min"); else if (isTargetTvOS()) CmdArgs.push_back("-tvos_version_min"); else if (isTargetTvOSSimulator()) CmdArgs.push_back("-tvos_simulator_version_min"); else if (isTargetIOSSimulator()) CmdArgs.push_back("-ios_simulator_version_min"); else if (isTargetIOSBased()) CmdArgs.push_back("-iphoneos_version_min"); else { assert(isTargetMacOS() && "unexpected target"); CmdArgs.push_back("-macosx_version_min"); } CmdArgs.push_back(Args.MakeArgString(TargetVersion.getAsString())); } void Darwin::addStartObjectFileArgs(const ArgList &Args, ArgStringList &CmdArgs) const { // Derived from startfile spec. if (Args.hasArg(options::OPT_dynamiclib)) { // Derived from darwin_dylib1 spec. if (isTargetWatchOSBased()) { ; // watchOS does not need dylib1.o. } else if (isTargetIOSSimulator()) { ; // iOS simulator does not need dylib1.o. } else if (isTargetIPhoneOS()) { if (isIPhoneOSVersionLT(3, 1)) CmdArgs.push_back("-ldylib1.o"); } else { if (isMacosxVersionLT(10, 5)) CmdArgs.push_back("-ldylib1.o"); else if (isMacosxVersionLT(10, 6)) CmdArgs.push_back("-ldylib1.10.5.o"); } } else { if (Args.hasArg(options::OPT_bundle)) { if (!Args.hasArg(options::OPT_static)) { // Derived from darwin_bundle1 spec. if (isTargetWatchOSBased()) { ; // watchOS does not need bundle1.o. } else if (isTargetIOSSimulator()) { ; // iOS simulator does not need bundle1.o. } else if (isTargetIPhoneOS()) { if (isIPhoneOSVersionLT(3, 1)) CmdArgs.push_back("-lbundle1.o"); } else { if (isMacosxVersionLT(10, 6)) CmdArgs.push_back("-lbundle1.o"); } } } else { if (Args.hasArg(options::OPT_pg) && SupportsProfiling()) { if (Args.hasArg(options::OPT_static) || Args.hasArg(options::OPT_object) || Args.hasArg(options::OPT_preload)) { CmdArgs.push_back("-lgcrt0.o"); } else { CmdArgs.push_back("-lgcrt1.o"); // darwin_crt2 spec is empty. } // By default on OS X 10.8 and later, we don't link with a crt1.o // file and the linker knows to use _main as the entry point. But, // when compiling with -pg, we need to link with the gcrt1.o file, // so pass the -no_new_main option to tell the linker to use the // "start" symbol as the entry point. if (isTargetMacOS() && !isMacosxVersionLT(10, 8)) CmdArgs.push_back("-no_new_main"); } else { if (Args.hasArg(options::OPT_static) || Args.hasArg(options::OPT_object) || Args.hasArg(options::OPT_preload)) { CmdArgs.push_back("-lcrt0.o"); } else { // Derived from darwin_crt1 spec. if (isTargetWatchOSBased()) { ; // watchOS does not need crt1.o. } else if (isTargetIOSSimulator()) { ; // iOS simulator does not need crt1.o. 
} else if (isTargetIPhoneOS()) { if (getArch() == llvm::Triple::aarch64) ; // iOS does not need any crt1 files for arm64 else if (isIPhoneOSVersionLT(3, 1)) CmdArgs.push_back("-lcrt1.o"); else if (isIPhoneOSVersionLT(6, 0)) CmdArgs.push_back("-lcrt1.3.1.o"); } else { if (isMacosxVersionLT(10, 5)) CmdArgs.push_back("-lcrt1.o"); else if (isMacosxVersionLT(10, 6)) CmdArgs.push_back("-lcrt1.10.5.o"); else if (isMacosxVersionLT(10, 8)) CmdArgs.push_back("-lcrt1.10.6.o"); // darwin_crt2 spec is empty. } } } } } if (!isTargetIPhoneOS() && Args.hasArg(options::OPT_shared_libgcc) && !isTargetWatchOS() && isMacosxVersionLT(10, 5)) { const char *Str = Args.MakeArgString(GetFilePath("crt3.o")); CmdArgs.push_back(Str); } } bool Darwin::SupportsObjCGC() const { return isTargetMacOS(); } void Darwin::CheckObjCARC() const { if (isTargetIOSBased() || isTargetWatchOSBased() || (isTargetMacOS() && !isMacosxVersionLT(10, 6))) return; getDriver().Diag(diag::err_arc_unsupported_on_toolchain); } SanitizerMask Darwin::getSupportedSanitizers() const { const bool IsX86_64 = getTriple().getArch() == llvm::Triple::x86_64; SanitizerMask Res = ToolChain::getSupportedSanitizers(); Res |= SanitizerKind::Address; Res |= SanitizerKind::Leak; Res |= SanitizerKind::Fuzzer; if (isTargetMacOS()) { if (!isMacosxVersionLT(10, 9)) Res |= SanitizerKind::Vptr; Res |= SanitizerKind::SafeStack; if (IsX86_64) Res |= SanitizerKind::Thread; } else if (isTargetIOSSimulator() || isTargetTvOSSimulator()) { if (IsX86_64) Res |= SanitizerKind::Thread; } return Res; } void Darwin::printVerboseInfo(raw_ostream &OS) const { CudaInstallation.print(OS); } diff --git a/lib/Driver/ToolChains/MSVC.cpp b/lib/Driver/ToolChains/MSVC.cpp index b871c856d2a0..7978a6941cb8 100644 --- a/lib/Driver/ToolChains/MSVC.cpp +++ b/lib/Driver/ToolChains/MSVC.cpp @@ -1,1426 +1,1463 @@ //===--- ToolChains.cpp - ToolChain Implementations -----------------------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// #include "MSVC.h" #include "CommonArgs.h" #include "Darwin.h" #include "clang/Basic/CharInfo.h" #include "clang/Basic/Version.h" #include "clang/Driver/Compilation.h" #include "clang/Driver/Driver.h" #include "clang/Driver/DriverDiagnostic.h" #include "clang/Driver/Options.h" #include "clang/Driver/SanitizerArgs.h" #include "llvm/ADT/StringExtras.h" #include "llvm/ADT/StringSwitch.h" #include "llvm/Config/llvm-config.h" #include "llvm/Option/Arg.h" #include "llvm/Option/ArgList.h" #include "llvm/Support/ConvertUTF.h" #include "llvm/Support/ErrorHandling.h" #include "llvm/Support/FileSystem.h" #include "llvm/Support/Host.h" #include "llvm/Support/MemoryBuffer.h" #include "llvm/Support/Path.h" #include "llvm/Support/Process.h" #include // Include the necessary headers to interface with the Windows registry and // environment. #if defined(LLVM_ON_WIN32) #define USE_WIN32 #endif #ifdef USE_WIN32 #define WIN32_LEAN_AND_MEAN #define NOGDI #ifndef NOMINMAX #define NOMINMAX #endif #include #endif #ifdef _MSC_VER // Don't support SetupApi on MinGW. 
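// The Setup Config API is used below to locate VS2017+ installs, which are
// no longer recorded in the registry.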
#define USE_MSVC_SETUP_API // Make sure this comes before MSVCSetupApi.h #include #include "MSVCSetupApi.h" #include "llvm/Support/COM.h" _COM_SMARTPTR_TYPEDEF(ISetupConfiguration, __uuidof(ISetupConfiguration)); _COM_SMARTPTR_TYPEDEF(ISetupConfiguration2, __uuidof(ISetupConfiguration2)); _COM_SMARTPTR_TYPEDEF(ISetupHelper, __uuidof(ISetupHelper)); _COM_SMARTPTR_TYPEDEF(IEnumSetupInstances, __uuidof(IEnumSetupInstances)); _COM_SMARTPTR_TYPEDEF(ISetupInstance, __uuidof(ISetupInstance)); _COM_SMARTPTR_TYPEDEF(ISetupInstance2, __uuidof(ISetupInstance2)); #endif using namespace clang::driver; using namespace clang::driver::toolchains; using namespace clang::driver::tools; using namespace clang; using namespace llvm::opt; // Defined below. // Forward declare this so there aren't too many things above the constructor. static bool getSystemRegistryString(const char *keyPath, const char *valueName, std::string &value, std::string *phValue); // Check various environment variables to try and find a toolchain. static bool findVCToolChainViaEnvironment(std::string &Path, - bool &IsVS2017OrNewer) { + MSVCToolChain::ToolsetLayout &VSLayout) { // These variables are typically set by vcvarsall.bat // when launching a developer command prompt. if (llvm::Optional VCToolsInstallDir = llvm::sys::Process::GetEnv("VCToolsInstallDir")) { // This is only set by newer Visual Studios, and it leads straight to // the toolchain directory. Path = std::move(*VCToolsInstallDir); - IsVS2017OrNewer = true; + VSLayout = MSVCToolChain::ToolsetLayout::VS2017OrNewer; return true; } if (llvm::Optional VCInstallDir = llvm::sys::Process::GetEnv("VCINSTALLDIR")) { // If the previous variable isn't set but this one is, then we've found // an older Visual Studio. This variable is set by newer Visual Studios too, // so this check has to appear second. // In older Visual Studios, the VC directory is the toolchain. Path = std::move(*VCInstallDir); - IsVS2017OrNewer = false; + VSLayout = MSVCToolChain::ToolsetLayout::OlderVS; return true; } // We couldn't find any VC environment variables. Let's walk through PATH and // see if it leads us to a VC toolchain bin directory. If it does, pick the // first one that we find. if (llvm::Optional PathEnv = llvm::sys::Process::GetEnv("PATH")) { llvm::SmallVector PathEntries; llvm::StringRef(*PathEnv).split(PathEntries, llvm::sys::EnvPathSeparator); for (llvm::StringRef PathEntry : PathEntries) { if (PathEntry.empty()) continue; llvm::SmallString<256> ExeTestPath; // If cl.exe doesn't exist, then this definitely isn't a VC toolchain. ExeTestPath = PathEntry; llvm::sys::path::append(ExeTestPath, "cl.exe"); if (!llvm::sys::fs::exists(ExeTestPath)) continue; // cl.exe existing isn't a conclusive test for a VC toolchain; clang also // has a cl.exe. So let's check for link.exe too. ExeTestPath = PathEntry; llvm::sys::path::append(ExeTestPath, "link.exe"); if (!llvm::sys::fs::exists(ExeTestPath)) continue; // whatever/VC/bin --> old toolchain, VC dir is toolchain dir. llvm::StringRef TestPath = PathEntry; bool IsBin = llvm::sys::path::filename(TestPath).equals_lower("bin"); if (!IsBin) { // Strip any architecture subdir like "amd64". 
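// e.g. .../VC/bin/amd64 -> .../VC/bin before re-checking for a 'bin'
// component.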
TestPath = llvm::sys::path::parent_path(TestPath); IsBin = llvm::sys::path::filename(TestPath).equals_lower("bin"); } if (IsBin) { llvm::StringRef ParentPath = llvm::sys::path::parent_path(TestPath); - if (llvm::sys::path::filename(ParentPath) == "VC") { + llvm::StringRef ParentFilename = llvm::sys::path::filename(ParentPath); + if (ParentFilename == "VC") { Path = ParentPath; - IsVS2017OrNewer = false; + VSLayout = MSVCToolChain::ToolsetLayout::OlderVS; + return true; + } + if (ParentFilename == "x86ret" || ParentFilename == "x86chk" + || ParentFilename == "amd64ret" || ParentFilename == "amd64chk") { + Path = ParentPath; + VSLayout = MSVCToolChain::ToolsetLayout::DevDivInternal; return true; } } else { // This could be a new (>=VS2017) toolchain. If it is, we should find // path components with these prefixes when walking backwards through // the path. // Note: empty strings match anything. llvm::StringRef ExpectedPrefixes[] = {"", "Host", "bin", "", "MSVC", "Tools", "VC"}; auto It = llvm::sys::path::rbegin(PathEntry); auto End = llvm::sys::path::rend(PathEntry); for (llvm::StringRef Prefix : ExpectedPrefixes) { if (It == End) goto NotAToolChain; if (!It->startswith(Prefix)) goto NotAToolChain; ++It; } // We've found a new toolchain! // Back up 3 times (/bin/Host/arch) to get the root path. llvm::StringRef ToolChainPath(PathEntry); for (int i = 0; i < 3; ++i) ToolChainPath = llvm::sys::path::parent_path(ToolChainPath); Path = ToolChainPath; - IsVS2017OrNewer = true; + VSLayout = MSVCToolChain::ToolsetLayout::VS2017OrNewer; return true; } NotAToolChain: continue; } } return false; } // Query the Setup Config server for installs, then pick the newest version // and find its default VC toolchain. // This is the preferred way to discover new Visual Studios, as they're no // longer listed in the registry. static bool findVCToolChainViaSetupConfig(std::string &Path, - bool &IsVS2017OrNewer) { + MSVCToolChain::ToolsetLayout &VSLayout) { #if !defined(USE_MSVC_SETUP_API) return false; #else // FIXME: This really should be done once in the top-level program's main // function, as it may have already been initialized with a different // threading model otherwise. llvm::sys::InitializeCOMRAII COM(llvm::sys::COMThreadingMode::SingleThreaded); HRESULT HR; // _com_ptr_t will throw a _com_error if a COM calls fail. // The LLVM coding standards forbid exception handling, so we'll have to // stop them from being thrown in the first place. // The destructor will put the regular error handler back when we leave // this scope. 
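// RAII helper: installs a no-op _com_error handler for the duration of this
// scope.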
struct SuppressCOMErrorsRAII { static void __stdcall handler(HRESULT hr, IErrorInfo *perrinfo) {} SuppressCOMErrorsRAII() { _set_com_error_handler(handler); } ~SuppressCOMErrorsRAII() { _set_com_error_handler(_com_raise_error); } } COMErrorSuppressor; ISetupConfigurationPtr Query; HR = Query.CreateInstance(__uuidof(SetupConfiguration)); if (FAILED(HR)) return false; IEnumSetupInstancesPtr EnumInstances; HR = ISetupConfiguration2Ptr(Query)->EnumAllInstances(&EnumInstances); if (FAILED(HR)) return false; ISetupInstancePtr Instance; HR = EnumInstances->Next(1, &Instance, nullptr); if (HR != S_OK) return false; ISetupInstancePtr NewestInstance; Optional NewestVersionNum; do { bstr_t VersionString; uint64_t VersionNum; HR = Instance->GetInstallationVersion(VersionString.GetAddress()); if (FAILED(HR)) continue; HR = ISetupHelperPtr(Query)->ParseVersion(VersionString, &VersionNum); if (FAILED(HR)) continue; if (!NewestVersionNum || (VersionNum > NewestVersionNum)) { NewestInstance = Instance; NewestVersionNum = VersionNum; } } while ((HR = EnumInstances->Next(1, &Instance, nullptr)) == S_OK); if (!NewestInstance) return false; bstr_t VCPathWide; HR = NewestInstance->ResolvePath(L"VC", VCPathWide.GetAddress()); if (FAILED(HR)) return false; std::string VCRootPath; llvm::convertWideToUTF8(std::wstring(VCPathWide), VCRootPath); llvm::SmallString<256> ToolsVersionFilePath(VCRootPath); llvm::sys::path::append(ToolsVersionFilePath, "Auxiliary", "Build", "Microsoft.VCToolsVersion.default.txt"); auto ToolsVersionFile = llvm::MemoryBuffer::getFile(ToolsVersionFilePath); if (!ToolsVersionFile) return false; llvm::SmallString<256> ToolchainPath(VCRootPath); llvm::sys::path::append(ToolchainPath, "Tools", "MSVC", ToolsVersionFile->get()->getBuffer().rtrim()); if (!llvm::sys::fs::is_directory(ToolchainPath)) return false; Path = ToolchainPath.str(); - IsVS2017OrNewer = true; + VSLayout = MSVCToolChain::ToolsetLayout::VS2017OrNewer; return true; #endif } // Look in the registry for Visual Studio installs, and use that to get // a toolchain path. VS2017 and newer don't get added to the registry. // So if we find something here, we know that it's an older version. static bool findVCToolChainViaRegistry(std::string &Path, - bool &IsVS2017OrNewer) { + MSVCToolChain::ToolsetLayout &VSLayout) { std::string VSInstallPath; if (getSystemRegistryString(R"(SOFTWARE\Microsoft\VisualStudio\$VERSION)", "InstallDir", VSInstallPath, nullptr) || getSystemRegistryString(R"(SOFTWARE\Microsoft\VCExpress\$VERSION)", "InstallDir", VSInstallPath, nullptr)) { if (!VSInstallPath.empty()) { llvm::SmallString<256> VCPath(llvm::StringRef( VSInstallPath.c_str(), VSInstallPath.find(R"(\Common7\IDE)"))); llvm::sys::path::append(VCPath, "VC"); Path = VCPath.str(); - IsVS2017OrNewer = false; + VSLayout = MSVCToolChain::ToolsetLayout::OlderVS; return true; } } return false; } // Try to find Exe from a Visual Studio distribution. This first tries to find // an installed copy of Visual Studio and, failing that, looks in the PATH, // making sure that whatever executable that's found is not a same-named exe // from clang itself to prevent clang from falling back to itself. static std::string FindVisualStudioExecutable(const ToolChain &TC, const char *Exe) { const auto &MSVC = static_cast(TC); SmallString<128> FilePath(MSVC.getSubDirectoryPath( toolchains::MSVCToolChain::SubDirectoryType::Bin)); llvm::sys::path::append(FilePath, Exe); return llvm::sys::fs::can_execute(FilePath) ? 
FilePath.str() : Exe; } void visualstudio::Linker::ConstructJob(Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const ArgList &Args, const char *LinkingOutput) const { ArgStringList CmdArgs; auto &TC = static_cast(getToolChain()); assert((Output.isFilename() || Output.isNothing()) && "invalid output"); if (Output.isFilename()) CmdArgs.push_back( Args.MakeArgString(std::string("-out:") + Output.getFilename())); if (!Args.hasArg(options::OPT_nostdlib, options::OPT_nostartfiles) && !C.getDriver().IsCLMode()) CmdArgs.push_back("-defaultlib:libcmt"); if (!llvm::sys::Process::GetEnv("LIB")) { // If the VC environment hasn't been configured (perhaps because the user // did not run vcvarsall), try to build a consistent link environment. If // the environment variable is set however, assume the user knows what // they're doing. CmdArgs.push_back(Args.MakeArgString( Twine("-libpath:") + TC.getSubDirectoryPath( toolchains::MSVCToolChain::SubDirectoryType::Lib))); if (TC.useUniversalCRT()) { std::string UniversalCRTLibPath; if (TC.getUniversalCRTLibraryPath(UniversalCRTLibPath)) CmdArgs.push_back( Args.MakeArgString(Twine("-libpath:") + UniversalCRTLibPath)); } std::string WindowsSdkLibPath; if (TC.getWindowsSDKLibraryPath(WindowsSdkLibPath)) CmdArgs.push_back( Args.MakeArgString(std::string("-libpath:") + WindowsSdkLibPath)); } if (!C.getDriver().IsCLMode() && Args.hasArg(options::OPT_L)) for (const auto &LibPath : Args.getAllArgValues(options::OPT_L)) CmdArgs.push_back(Args.MakeArgString("-libpath:" + LibPath)); CmdArgs.push_back("-nologo"); if (Args.hasArg(options::OPT_g_Group, options::OPT__SLASH_Z7, options::OPT__SLASH_Zd)) CmdArgs.push_back("-debug"); bool DLL = Args.hasArg(options::OPT__SLASH_LD, options::OPT__SLASH_LDd, options::OPT_shared); if (DLL) { CmdArgs.push_back(Args.MakeArgString("-dll")); SmallString<128> ImplibName(Output.getFilename()); llvm::sys::path::replace_extension(ImplibName, "lib"); CmdArgs.push_back(Args.MakeArgString(std::string("-implib:") + ImplibName)); } if (TC.getSanitizerArgs().needsAsanRt()) { CmdArgs.push_back(Args.MakeArgString("-debug")); CmdArgs.push_back(Args.MakeArgString("-incremental:no")); if (TC.getSanitizerArgs().needsSharedAsanRt() || Args.hasArg(options::OPT__SLASH_MD, options::OPT__SLASH_MDd)) { for (const auto &Lib : {"asan_dynamic", "asan_dynamic_runtime_thunk"}) CmdArgs.push_back(TC.getCompilerRTArgString(Args, Lib)); // Make sure the dynamic runtime thunk is not optimized out at link time // to ensure proper SEH handling. CmdArgs.push_back(Args.MakeArgString( TC.getArch() == llvm::Triple::x86 ? "-include:___asan_seh_interceptor" : "-include:__asan_seh_interceptor")); // Make sure the linker consider all object files from the dynamic runtime // thunk. CmdArgs.push_back(Args.MakeArgString(std::string("-wholearchive:") + TC.getCompilerRT(Args, "asan_dynamic_runtime_thunk"))); } else if (DLL) { CmdArgs.push_back(TC.getCompilerRTArgString(Args, "asan_dll_thunk")); } else { for (const auto &Lib : {"asan", "asan_cxx"}) { CmdArgs.push_back(TC.getCompilerRTArgString(Args, Lib)); // Make sure the linker consider all object files from the static lib. // This is necessary because instrumented dlls need access to all the // interface exported by the static lib in the main executable. 
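// -wholearchive: forces every member of the named archive into the link.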
CmdArgs.push_back(Args.MakeArgString(std::string("-wholearchive:") + TC.getCompilerRT(Args, Lib))); } } } Args.AddAllArgValues(CmdArgs, options::OPT__SLASH_link); if (Args.hasFlag(options::OPT_fopenmp, options::OPT_fopenmp_EQ, options::OPT_fno_openmp, false)) { CmdArgs.push_back("-nodefaultlib:vcomp.lib"); CmdArgs.push_back("-nodefaultlib:vcompd.lib"); CmdArgs.push_back(Args.MakeArgString(std::string("-libpath:") + TC.getDriver().Dir + "/../lib")); switch (TC.getDriver().getOpenMPRuntime(Args)) { case Driver::OMPRT_OMP: CmdArgs.push_back("-defaultlib:libomp.lib"); break; case Driver::OMPRT_IOMP5: CmdArgs.push_back("-defaultlib:libiomp5md.lib"); break; case Driver::OMPRT_GOMP: break; case Driver::OMPRT_Unknown: // Already diagnosed. break; } } // Add compiler-rt lib in case if it was explicitly // specified as an argument for --rtlib option. if (!Args.hasArg(options::OPT_nostdlib)) { AddRunTimeLibs(TC, TC.getDriver(), CmdArgs, Args); } // Add filenames, libraries, and other linker inputs. for (const auto &Input : Inputs) { if (Input.isFilename()) { CmdArgs.push_back(Input.getFilename()); continue; } const Arg &A = Input.getInputArg(); // Render -l options differently for the MSVC linker. if (A.getOption().matches(options::OPT_l)) { StringRef Lib = A.getValue(); const char *LinkLibArg; if (Lib.endswith(".lib")) LinkLibArg = Args.MakeArgString(Lib); else LinkLibArg = Args.MakeArgString(Lib + ".lib"); CmdArgs.push_back(LinkLibArg); continue; } // Otherwise, this is some other kind of linker input option like -Wl, -z, // or -L. Render it, even if MSVC doesn't understand it. A.renderAsInput(Args, CmdArgs); } TC.addProfileRTLibs(Args, CmdArgs); std::vector Environment; // We need to special case some linker paths. In the case of lld, we need to // translate 'lld' into 'lld-link', and in the case of the regular msvc // linker, we need to use a special search algorithm. llvm::SmallString<128> linkPath; StringRef Linker = Args.getLastArgValue(options::OPT_fuse_ld_EQ, "link"); if (Linker.equals_lower("lld")) Linker = "lld-link"; if (Linker.equals_lower("link")) { // If we're using the MSVC linker, it's not sufficient to just use link // from the program PATH, because other environments like GnuWin32 install // their own link.exe which may come first. linkPath = FindVisualStudioExecutable(TC, "link.exe"); #ifdef USE_WIN32 // When cross-compiling with VS2017 or newer, link.exe expects to have // its containing bin directory at the top of PATH, followed by the // native target bin directory. // e.g. when compiling for x86 on an x64 host, PATH should start with: // /bin/HostX64/x86;/bin/HostX64/x64 + // This doesn't attempt to handle ToolsetLayout::DevDivInternal. 
if (TC.getIsVS2017OrNewer() && llvm::Triple(llvm::sys::getProcessTriple()).getArch() != TC.getArch()) { auto HostArch = llvm::Triple(llvm::sys::getProcessTriple()).getArch(); auto EnvBlockWide = std::unique_ptr( GetEnvironmentStringsW(), FreeEnvironmentStringsW); if (!EnvBlockWide) goto SkipSettingEnvironment; size_t EnvCount = 0; size_t EnvBlockLen = 0; while (EnvBlockWide[EnvBlockLen] != L'\0') { ++EnvCount; EnvBlockLen += std::wcslen(&EnvBlockWide[EnvBlockLen]) + 1 /*string null-terminator*/; } ++EnvBlockLen; // add the block null-terminator std::string EnvBlock; if (!llvm::convertUTF16ToUTF8String( llvm::ArrayRef(reinterpret_cast(EnvBlockWide.get()), EnvBlockLen * sizeof(EnvBlockWide[0])), EnvBlock)) goto SkipSettingEnvironment; Environment.reserve(EnvCount); // Now loop over each string in the block and copy them into the // environment vector, adjusting the PATH variable as needed when we // find it. for (const char *Cursor = EnvBlock.data(); *Cursor != '\0';) { llvm::StringRef EnvVar(Cursor); if (EnvVar.startswith_lower("path=")) { using SubDirectoryType = toolchains::MSVCToolChain::SubDirectoryType; constexpr size_t PrefixLen = 5; // strlen("path=") Environment.push_back(Args.MakeArgString( EnvVar.substr(0, PrefixLen) + TC.getSubDirectoryPath(SubDirectoryType::Bin) + llvm::Twine(llvm::sys::EnvPathSeparator) + TC.getSubDirectoryPath(SubDirectoryType::Bin, HostArch) + (EnvVar.size() > PrefixLen ? llvm::Twine(llvm::sys::EnvPathSeparator) + EnvVar.substr(PrefixLen) : ""))); } else { Environment.push_back(Args.MakeArgString(EnvVar)); } Cursor += EnvVar.size() + 1 /*null-terminator*/; } } SkipSettingEnvironment:; #endif } else { linkPath = TC.GetProgramPath(Linker.str().c_str()); } auto LinkCmd = llvm::make_unique( JA, *this, Args.MakeArgString(linkPath), CmdArgs, Inputs); if (!Environment.empty()) LinkCmd->setEnvironment(Environment); C.addCommand(std::move(LinkCmd)); } void visualstudio::Compiler::ConstructJob(Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const ArgList &Args, const char *LinkingOutput) const { C.addCommand(GetCommand(C, JA, Output, Inputs, Args, LinkingOutput)); } std::unique_ptr visualstudio::Compiler::GetCommand( Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const ArgList &Args, const char *LinkingOutput) const { ArgStringList CmdArgs; CmdArgs.push_back("/nologo"); CmdArgs.push_back("/c"); // Compile only. CmdArgs.push_back("/W0"); // No warnings. // The goal is to be able to invoke this tool correctly based on // any flag accepted by clang-cl. // These are spelled the same way in clang and cl.exe,. Args.AddAllArgs(CmdArgs, {options::OPT_D, options::OPT_U, options::OPT_I}); // Optimization level. if (Arg *A = Args.getLastArg(options::OPT_fbuiltin, options::OPT_fno_builtin)) CmdArgs.push_back(A->getOption().getID() == options::OPT_fbuiltin ? "/Oi" : "/Oi-"); if (Arg *A = Args.getLastArg(options::OPT_O, options::OPT_O0)) { if (A->getOption().getID() == options::OPT_O0) { CmdArgs.push_back("/Od"); } else { CmdArgs.push_back("/Og"); StringRef OptLevel = A->getValue(); if (OptLevel == "s" || OptLevel == "z") CmdArgs.push_back("/Os"); else CmdArgs.push_back("/Ot"); CmdArgs.push_back("/Ob2"); } } if (Arg *A = Args.getLastArg(options::OPT_fomit_frame_pointer, options::OPT_fno_omit_frame_pointer)) CmdArgs.push_back(A->getOption().getID() == options::OPT_fomit_frame_pointer ? 
"/Oy" : "/Oy-"); if (!Args.hasArg(options::OPT_fwritable_strings)) CmdArgs.push_back("/GF"); // Flags for which clang-cl has an alias. // FIXME: How can we ensure this stays in sync with relevant clang-cl options? if (Args.hasFlag(options::OPT__SLASH_GR_, options::OPT__SLASH_GR, /*default=*/false)) CmdArgs.push_back("/GR-"); if (Args.hasFlag(options::OPT__SLASH_GS_, options::OPT__SLASH_GS, /*default=*/false)) CmdArgs.push_back("/GS-"); if (Arg *A = Args.getLastArg(options::OPT_ffunction_sections, options::OPT_fno_function_sections)) CmdArgs.push_back(A->getOption().getID() == options::OPT_ffunction_sections ? "/Gy" : "/Gy-"); if (Arg *A = Args.getLastArg(options::OPT_fdata_sections, options::OPT_fno_data_sections)) CmdArgs.push_back( A->getOption().getID() == options::OPT_fdata_sections ? "/Gw" : "/Gw-"); if (Args.hasArg(options::OPT_fsyntax_only)) CmdArgs.push_back("/Zs"); if (Args.hasArg(options::OPT_g_Flag, options::OPT_gline_tables_only, options::OPT__SLASH_Z7)) CmdArgs.push_back("/Z7"); std::vector Includes = Args.getAllArgValues(options::OPT_include); for (const auto &Include : Includes) CmdArgs.push_back(Args.MakeArgString(std::string("/FI") + Include)); // Flags that can simply be passed through. Args.AddAllArgs(CmdArgs, options::OPT__SLASH_LD); Args.AddAllArgs(CmdArgs, options::OPT__SLASH_LDd); Args.AddAllArgs(CmdArgs, options::OPT__SLASH_GX); Args.AddAllArgs(CmdArgs, options::OPT__SLASH_GX_); Args.AddAllArgs(CmdArgs, options::OPT__SLASH_EH); Args.AddAllArgs(CmdArgs, options::OPT__SLASH_Zl); // The order of these flags is relevant, so pick the last one. if (Arg *A = Args.getLastArg(options::OPT__SLASH_MD, options::OPT__SLASH_MDd, options::OPT__SLASH_MT, options::OPT__SLASH_MTd)) A->render(Args, CmdArgs); // Use MSVC's default threadsafe statics behaviour unless there was a flag. if (Arg *A = Args.getLastArg(options::OPT_fthreadsafe_statics, options::OPT_fno_threadsafe_statics)) { CmdArgs.push_back(A->getOption().getID() == options::OPT_fthreadsafe_statics ? "/Zc:threadSafeInit" : "/Zc:threadSafeInit-"); } // Pass through all unknown arguments so that the fallback command can see // them too. Args.AddAllArgs(CmdArgs, options::OPT_UNKNOWN); // Input filename. assert(Inputs.size() == 1); const InputInfo &II = Inputs[0]; assert(II.getType() == types::TY_C || II.getType() == types::TY_CXX); CmdArgs.push_back(II.getType() == types::TY_C ? "/Tc" : "/Tp"); if (II.isFilename()) CmdArgs.push_back(II.getFilename()); else II.getInputArg().renderAsInput(Args, CmdArgs); // Output filename. assert(Output.getType() == types::TY_Object); const char *Fo = Args.MakeArgString(std::string("/Fo") + Output.getFilename()); CmdArgs.push_back(Fo); std::string Exec = FindVisualStudioExecutable(getToolChain(), "cl.exe"); return llvm::make_unique(JA, *this, Args.MakeArgString(Exec), CmdArgs, Inputs); } MSVCToolChain::MSVCToolChain(const Driver &D, const llvm::Triple &Triple, const ArgList &Args) : ToolChain(D, Triple, Args), CudaInstallation(D, Triple, Args) { getProgramPaths().push_back(getDriver().getInstalledDir()); if (getDriver().getInstalledDir() != getDriver().Dir) getProgramPaths().push_back(getDriver().Dir); // Check the environment first, since that's probably the user telling us // what they want to use. // Failing that, just try to find the newest Visual Studio version we can // and use its default VC toolchain. 
- findVCToolChainViaEnvironment(VCToolChainPath, IsVS2017OrNewer) || - findVCToolChainViaSetupConfig(VCToolChainPath, IsVS2017OrNewer) || - findVCToolChainViaRegistry(VCToolChainPath, IsVS2017OrNewer); + findVCToolChainViaEnvironment(VCToolChainPath, VSLayout) || + findVCToolChainViaSetupConfig(VCToolChainPath, VSLayout) || + findVCToolChainViaRegistry(VCToolChainPath, VSLayout); } Tool *MSVCToolChain::buildLinker() const { if (VCToolChainPath.empty()) getDriver().Diag(clang::diag::warn_drv_msvc_not_found); return new tools::visualstudio::Linker(*this); } Tool *MSVCToolChain::buildAssembler() const { if (getTriple().isOSBinFormatMachO()) return new tools::darwin::Assembler(*this); getDriver().Diag(clang::diag::err_no_external_assembler); return nullptr; } bool MSVCToolChain::IsIntegratedAssemblerDefault() const { return true; } bool MSVCToolChain::IsUnwindTablesDefault(const ArgList &Args) const { // Emit unwind tables by default on Win64. All non-x86_32 Windows platforms // such as ARM and PPC actually require unwind tables, but LLVM doesn't know // how to generate them yet. // Don't emit unwind tables by default for MachO targets. if (getTriple().isOSBinFormatMachO()) return false; return getArch() == llvm::Triple::x86_64; } bool MSVCToolChain::isPICDefault() const { return getArch() == llvm::Triple::x86_64; } bool MSVCToolChain::isPIEDefault() const { return false; } bool MSVCToolChain::isPICDefaultForced() const { return getArch() == llvm::Triple::x86_64; } void MSVCToolChain::AddCudaIncludeArgs(const ArgList &DriverArgs, ArgStringList &CC1Args) const { CudaInstallation.AddCudaIncludeArgs(DriverArgs, CC1Args); } void MSVCToolChain::printVerboseInfo(raw_ostream &OS) const { CudaInstallation.print(OS); } // Windows SDKs and VC Toolchains group their contents into subdirectories based // on the target architecture. This function converts an llvm::Triple::ArchType // to the corresponding subdirectory name. static const char *llvmArchToWindowsSDKArch(llvm::Triple::ArchType Arch) { using ArchType = llvm::Triple::ArchType; switch (Arch) { case ArchType::x86: return "x86"; case ArchType::x86_64: return "x64"; case ArchType::arm: return "arm"; default: return ""; } } // Similar to the above function, but for Visual Studios before VS2017. static const char *llvmArchToLegacyVCArch(llvm::Triple::ArchType Arch) { using ArchType = llvm::Triple::ArchType; switch (Arch) { case ArchType::x86: // x86 is default in legacy VC toolchains. // e.g. x86 libs are directly in /lib as opposed to /lib/x86. return ""; case ArchType::x86_64: return "amd64"; case ArchType::arm: return "arm"; default: return ""; } } +// Similar to the above function, but for DevDiv internal builds. +static const char *llvmArchToDevDivInternalArch(llvm::Triple::ArchType Arch) { + using ArchType = llvm::Triple::ArchType; + switch (Arch) { + case ArchType::x86: + return "i386"; + case ArchType::x86_64: + return "amd64"; + case ArchType::arm: + return "arm"; + default: + return ""; + } +} + // Get the path to a specific subdirectory in the current toolchain for // a given target architecture. // VS2017 changed the VC toolchain layout, so this should be used instead // of hardcoding paths. 
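// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] The three helpers above map the
// same target architecture to different directory names depending on the
// toolset layout. For an x86_64 target, the 'lib' paths produced by
// getSubDirectoryPath (defined below) differ only in that leaf component; the
// root used here is a made-up example:
#include <iostream>
#include <string>

int main() {
  const std::string Root = "C:/VS/VC/Tools/MSVC/14.10.25017"; // hypothetical
  std::cout << Root + "/lib/x64\n";   // VS2017OrNewer  (llvmArchToWindowsSDKArch)
  std::cout << Root + "/lib/amd64\n"; // OlderVS        (llvmArchToLegacyVCArch)
  std::cout << Root + "/lib/amd64\n"; // DevDivInternal (llvmArchToDevDivInternalArch)
  return 0;
}
// ---------------------------------------------------------------------------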
std::string MSVCToolChain::getSubDirectoryPath(SubDirectoryType Type, llvm::Triple::ArchType TargetArch) const { + const char *SubdirName; + const char *IncludeName; + switch (VSLayout) { + case ToolsetLayout::OlderVS: + SubdirName = llvmArchToLegacyVCArch(TargetArch); + IncludeName = "include"; + break; + case ToolsetLayout::VS2017OrNewer: + SubdirName = llvmArchToWindowsSDKArch(TargetArch); + IncludeName = "include"; + break; + case ToolsetLayout::DevDivInternal: + SubdirName = llvmArchToDevDivInternalArch(TargetArch); + IncludeName = "inc"; + break; + } + llvm::SmallString<256> Path(VCToolChainPath); switch (Type) { case SubDirectoryType::Bin: - if (IsVS2017OrNewer) { - bool HostIsX64 = + if (VSLayout == ToolsetLayout::VS2017OrNewer) { + const bool HostIsX64 = llvm::Triple(llvm::sys::getProcessTriple()).isArch64Bit(); - llvm::sys::path::append(Path, "bin", (HostIsX64 ? "HostX64" : "HostX86"), - llvmArchToWindowsSDKArch(TargetArch)); - - } else { - llvm::sys::path::append(Path, "bin", llvmArchToLegacyVCArch(TargetArch)); + const char *const HostName = HostIsX64 ? "HostX64" : "HostX86"; + llvm::sys::path::append(Path, "bin", HostName, SubdirName); + } else { // OlderVS or DevDivInternal + llvm::sys::path::append(Path, "bin", SubdirName); } break; case SubDirectoryType::Include: - llvm::sys::path::append(Path, "include"); + llvm::sys::path::append(Path, IncludeName); break; case SubDirectoryType::Lib: - llvm::sys::path::append( - Path, "lib", IsVS2017OrNewer ? llvmArchToWindowsSDKArch(TargetArch) - : llvmArchToLegacyVCArch(TargetArch)); + llvm::sys::path::append(Path, "lib", SubdirName); break; } return Path.str(); } #ifdef USE_WIN32 static bool readFullStringValue(HKEY hkey, const char *valueName, std::string &value) { std::wstring WideValueName; if (!llvm::ConvertUTF8toWide(valueName, WideValueName)) return false; DWORD result = 0; DWORD valueSize = 0; DWORD type = 0; // First just query for the required size. result = RegQueryValueExW(hkey, WideValueName.c_str(), NULL, &type, NULL, &valueSize); if (result != ERROR_SUCCESS || type != REG_SZ || !valueSize) return false; std::vector buffer(valueSize); result = RegQueryValueExW(hkey, WideValueName.c_str(), NULL, NULL, &buffer[0], &valueSize); if (result == ERROR_SUCCESS) { std::wstring WideValue(reinterpret_cast(buffer.data()), valueSize / sizeof(wchar_t)); if (valueSize && WideValue.back() == L'\0') { WideValue.pop_back(); } // The destination buffer must be empty as an invariant of the conversion // function; but this function is sometimes called in a loop that passes in // the same buffer, however. Simply clear it out so we can overwrite it. value.clear(); return llvm::convertWideToUTF8(WideValue, value); } return false; } #endif /// \brief Read registry string. /// This also supports a means to look for high-versioned keys by use /// of a $VERSION placeholder in the key path. /// $VERSION in the key path is a placeholder for the version number, /// causing the highest value path to be searched for and used. /// I.e. "SOFTWARE\\Microsoft\\VisualStudio\\$VERSION". /// There can be additional characters in the component. Only the numeric /// characters are compared. This function only searches HKLM. 
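// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] The $VERSION search documented
// above (and implemented just below) orders candidate subkeys by the numeric
// value of their version component, not by raw string comparison. A standalone
// sketch of that comparison with made-up key names:
#include <cctype>
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

static double keyVersionSketch(const std::string &Key) {
  size_t I = 0;
  while (I < Key.size() && !std::isdigit(static_cast<unsigned char>(Key[I])))
    ++I; // skip the non-numeric prefix, as the driver does
  return std::strtod(Key.c_str() + I, nullptr);
}

int main() {
  std::vector<std::string> Keys = {"v7.1", "v10.0", "v9.0"}; // hypothetical
  std::string Best;
  double BestValue = 0.0;
  for (const std::string &K : Keys)
    if (keyVersionSketch(K) > BestValue) {
      BestValue = keyVersionSketch(K);
      Best = K; // "v10.0" wins: 10.0 > 9.0 > 7.1
    }
  std::cout << Best << "\n";
  return 0;
}
// ---------------------------------------------------------------------------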
static bool getSystemRegistryString(const char *keyPath, const char *valueName, std::string &value, std::string *phValue) { #ifndef USE_WIN32 return false; #else HKEY hRootKey = HKEY_LOCAL_MACHINE; HKEY hKey = NULL; long lResult; bool returnValue = false; const char *placeHolder = strstr(keyPath, "$VERSION"); std::string bestName; // If we have a $VERSION placeholder, do the highest-version search. if (placeHolder) { const char *keyEnd = placeHolder - 1; const char *nextKey = placeHolder; // Find end of previous key. while ((keyEnd > keyPath) && (*keyEnd != '\\')) keyEnd--; // Find end of key containing $VERSION. while (*nextKey && (*nextKey != '\\')) nextKey++; size_t partialKeyLength = keyEnd - keyPath; char partialKey[256]; if (partialKeyLength >= sizeof(partialKey)) partialKeyLength = sizeof(partialKey) - 1; strncpy(partialKey, keyPath, partialKeyLength); partialKey[partialKeyLength] = '\0'; HKEY hTopKey = NULL; lResult = RegOpenKeyExA(hRootKey, partialKey, 0, KEY_READ | KEY_WOW64_32KEY, &hTopKey); if (lResult == ERROR_SUCCESS) { char keyName[256]; double bestValue = 0.0; DWORD index, size = sizeof(keyName) - 1; for (index = 0; RegEnumKeyExA(hTopKey, index, keyName, &size, NULL, NULL, NULL, NULL) == ERROR_SUCCESS; index++) { const char *sp = keyName; while (*sp && !isDigit(*sp)) sp++; if (!*sp) continue; const char *ep = sp + 1; while (*ep && (isDigit(*ep) || (*ep == '.'))) ep++; char numBuf[32]; strncpy(numBuf, sp, sizeof(numBuf) - 1); numBuf[sizeof(numBuf) - 1] = '\0'; double dvalue = strtod(numBuf, NULL); if (dvalue > bestValue) { // Test that InstallDir is indeed there before keeping this index. // Open the chosen key path remainder. bestName = keyName; // Append rest of key. bestName.append(nextKey); lResult = RegOpenKeyExA(hTopKey, bestName.c_str(), 0, KEY_READ | KEY_WOW64_32KEY, &hKey); if (lResult == ERROR_SUCCESS) { if (readFullStringValue(hKey, valueName, value)) { bestValue = dvalue; if (phValue) *phValue = bestName; returnValue = true; } RegCloseKey(hKey); } } size = sizeof(keyName) - 1; } RegCloseKey(hTopKey); } } else { lResult = RegOpenKeyExA(hRootKey, keyPath, 0, KEY_READ | KEY_WOW64_32KEY, &hKey); if (lResult == ERROR_SUCCESS) { if (readFullStringValue(hKey, valueName, value)) returnValue = true; if (phValue) phValue->clear(); RegCloseKey(hKey); } } return returnValue; #endif // USE_WIN32 } // Find the most recent version of Universal CRT or Windows 10 SDK. // vcvarsqueryregistry.bat from Visual Studio 2015 sorts entries in the include // directory by name and uses the last one of the list. // So we compare entry names lexicographically to find the greatest one. static bool getWindows10SDKVersionFromPath(const std::string &SDKPath, std::string &SDKVersion) { SDKVersion.clear(); std::error_code EC; llvm::SmallString<128> IncludePath(SDKPath); llvm::sys::path::append(IncludePath, "Include"); for (llvm::sys::fs::directory_iterator DirIt(IncludePath, EC), DirEnd; DirIt != DirEnd && !EC; DirIt.increment(EC)) { if (!llvm::sys::fs::is_directory(DirIt->path())) continue; StringRef CandidateName = llvm::sys::path::filename(DirIt->path()); // If WDK is installed, there could be subfolders like "wdf" in the // "Include" directory. // Allow only directories which names start with "10.". if (!CandidateName.startswith("10.")) continue; if (CandidateName > SDKVersion) SDKVersion = CandidateName; } return !SDKVersion.empty(); } /// \brief Get Windows SDK installation directory. 
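// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] getWindows10SDKVersionFromPath
// above mirrors vcvarsqueryregistry.bat: among the "10.*" directories under
// <SDK>/Include it keeps the lexicographically greatest name (in practice the
// build numbers have the same number of digits, so this matches numeric
// order). A standalone sketch with made-up directory names:
#include <iostream>
#include <string>
#include <vector>

int main() {
  std::vector<std::string> Dirs = {"10.0.10240.0", "10.0.15063.0",
                                   "10.0.14393.0", "wdf"}; // "wdf" is skipped
  std::string Best;
  for (const std::string &D : Dirs)
    if (D.compare(0, 3, "10.") == 0 && D > Best)
      Best = D;
  std::cout << Best << "\n"; // prints "10.0.15063.0"
  return 0;
}
// ---------------------------------------------------------------------------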
static bool getWindowsSDKDir(std::string &Path, int &Major, std::string &WindowsSDKIncludeVersion, std::string &WindowsSDKLibVersion) { std::string RegistrySDKVersion; // Try the Windows registry. if (!getSystemRegistryString( "SOFTWARE\\Microsoft\\Microsoft SDKs\\Windows\\$VERSION", "InstallationFolder", Path, &RegistrySDKVersion)) return false; if (Path.empty() || RegistrySDKVersion.empty()) return false; WindowsSDKIncludeVersion.clear(); WindowsSDKLibVersion.clear(); Major = 0; std::sscanf(RegistrySDKVersion.c_str(), "v%d.", &Major); if (Major <= 7) return true; if (Major == 8) { // Windows SDK 8.x installs libraries in a folder whose names depend on the // version of the OS you're targeting. By default choose the newest, which // usually corresponds to the version of the OS you've installed the SDK on. const char *Tests[] = {"winv6.3", "win8", "win7"}; for (const char *Test : Tests) { llvm::SmallString<128> TestPath(Path); llvm::sys::path::append(TestPath, "Lib", Test); if (llvm::sys::fs::exists(TestPath.c_str())) { WindowsSDKLibVersion = Test; break; } } return !WindowsSDKLibVersion.empty(); } if (Major == 10) { if (!getWindows10SDKVersionFromPath(Path, WindowsSDKIncludeVersion)) return false; WindowsSDKLibVersion = WindowsSDKIncludeVersion; return true; } // Unsupported SDK version return false; } // Gets the library path required to link against the Windows SDK. bool MSVCToolChain::getWindowsSDKLibraryPath(std::string &path) const { std::string sdkPath; int sdkMajor = 0; std::string windowsSDKIncludeVersion; std::string windowsSDKLibVersion; path.clear(); if (!getWindowsSDKDir(sdkPath, sdkMajor, windowsSDKIncludeVersion, windowsSDKLibVersion)) return false; llvm::SmallString<128> libPath(sdkPath); llvm::sys::path::append(libPath, "Lib"); if (sdkMajor >= 8) { llvm::sys::path::append(libPath, windowsSDKLibVersion, "um", llvmArchToWindowsSDKArch(getArch())); } else { switch (getArch()) { // In Windows SDK 7.x, x86 libraries are directly in the Lib folder. case llvm::Triple::x86: break; case llvm::Triple::x86_64: llvm::sys::path::append(libPath, "x64"); break; case llvm::Triple::arm: // It is not necessary to link against Windows SDK 7.x when targeting ARM. return false; default: return false; } } path = libPath.str(); return true; } // Check if the Include path of a specified version of Visual Studio contains // specific header files. If not, they are probably shipped with Universal CRT. bool MSVCToolChain::useUniversalCRT() const { llvm::SmallString<128> TestPath( getSubDirectoryPath(SubDirectoryType::Include)); llvm::sys::path::append(TestPath, "stdlib.h"); return !llvm::sys::fs::exists(TestPath); } static bool getUniversalCRTSdkDir(std::string &Path, std::string &UCRTVersion) { // vcvarsqueryregistry.bat for Visual Studio 2015 queries the registry // for the specific key "KitsRoot10". So do we. 
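// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] getWindowsSDKDir above derives
// the SDK major version from registry strings such as "v8.1" with a single
// sscanf and then, for 8.x, probes a fixed list of OS-named Lib folders. A
// standalone sketch of the version parse (the input values are made up):
#include <cstdio>

int main() {
  const char *Registered[] = {"v7.1A", "v8.1", "v10.0"}; // hypothetical values
  for (const char *V : Registered) {
    int Major = 0;
    std::sscanf(V, "v%d.", &Major); // same format string as the function above
    std::printf("%s -> major %d\n", V, Major);
  }
  return 0;
}
// ---------------------------------------------------------------------------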
if (!getSystemRegistryString( "SOFTWARE\\Microsoft\\Windows Kits\\Installed Roots", "KitsRoot10", Path, nullptr)) return false; return getWindows10SDKVersionFromPath(Path, UCRTVersion); } bool MSVCToolChain::getUniversalCRTLibraryPath(std::string &Path) const { std::string UniversalCRTSdkPath; std::string UCRTVersion; Path.clear(); if (!getUniversalCRTSdkDir(UniversalCRTSdkPath, UCRTVersion)) return false; StringRef ArchName = llvmArchToWindowsSDKArch(getArch()); if (ArchName.empty()) return false; llvm::SmallString<128> LibPath(UniversalCRTSdkPath); llvm::sys::path::append(LibPath, "Lib", UCRTVersion, "ucrt", ArchName); Path = LibPath.str(); return true; } static VersionTuple getMSVCVersionFromTriple(const llvm::Triple &Triple) { unsigned Major, Minor, Micro; Triple.getEnvironmentVersion(Major, Minor, Micro); if (Major || Minor || Micro) return VersionTuple(Major, Minor, Micro); return VersionTuple(); } static VersionTuple getMSVCVersionFromExe(const std::string &BinDir) { VersionTuple Version; #ifdef USE_WIN32 SmallString<128> ClExe(BinDir); llvm::sys::path::append(ClExe, "cl.exe"); std::wstring ClExeWide; if (!llvm::ConvertUTF8toWide(ClExe.c_str(), ClExeWide)) return Version; const DWORD VersionSize = ::GetFileVersionInfoSizeW(ClExeWide.c_str(), nullptr); if (VersionSize == 0) return Version; SmallVector VersionBlock(VersionSize); if (!::GetFileVersionInfoW(ClExeWide.c_str(), 0, VersionSize, VersionBlock.data())) return Version; VS_FIXEDFILEINFO *FileInfo = nullptr; UINT FileInfoSize = 0; if (!::VerQueryValueW(VersionBlock.data(), L"\\", reinterpret_cast(&FileInfo), &FileInfoSize) || FileInfoSize < sizeof(*FileInfo)) return Version; const unsigned Major = (FileInfo->dwFileVersionMS >> 16) & 0xFFFF; const unsigned Minor = (FileInfo->dwFileVersionMS ) & 0xFFFF; const unsigned Micro = (FileInfo->dwFileVersionLS >> 16) & 0xFFFF; Version = VersionTuple(Major, Minor, Micro); #endif return Version; } void MSVCToolChain::AddSystemIncludeWithSubfolder( const ArgList &DriverArgs, ArgStringList &CC1Args, const std::string &folder, const Twine &subfolder1, const Twine &subfolder2, const Twine &subfolder3) const { llvm::SmallString<128> path(folder); llvm::sys::path::append(path, subfolder1, subfolder2, subfolder3); addSystemInclude(DriverArgs, CC1Args, path); } void MSVCToolChain::AddClangSystemIncludeArgs(const ArgList &DriverArgs, ArgStringList &CC1Args) const { if (DriverArgs.hasArg(options::OPT_nostdinc)) return; if (!DriverArgs.hasArg(options::OPT_nobuiltininc)) { AddSystemIncludeWithSubfolder(DriverArgs, CC1Args, getDriver().ResourceDir, "include"); } // Add %INCLUDE%-like directories from the -imsvc flag. for (const auto &Path : DriverArgs.getAllArgValues(options::OPT__SLASH_imsvc)) addSystemInclude(DriverArgs, CC1Args, Path); if (DriverArgs.hasArg(options::OPT_nostdlibinc)) return; // Honor %INCLUDE%. It should know essential search paths with vcvarsall.bat. if (llvm::Optional cl_include_dir = llvm::sys::Process::GetEnv("INCLUDE")) { SmallVector Dirs; StringRef(*cl_include_dir) .split(Dirs, ";", /*MaxSplit=*/-1, /*KeepEmpty=*/false); for (StringRef Dir : Dirs) addSystemInclude(DriverArgs, CC1Args, Dir); if (!Dirs.empty()) return; } // When built with access to the proper Windows APIs, try to actually find // the correct include paths first. 
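// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] The %INCLUDE% handling above
// splits the variable on ';', adds every non-empty piece as a system include
// directory, and then stops looking any further. A standalone sketch of that
// split (the value is a made-up example of what vcvarsall.bat might set):
#include <iostream>
#include <sstream>
#include <string>

int main() {
  std::string Include = "C:/VC/include;C:/SDK/um;;C:/SDK/ucrt";
  std::stringstream Stream(Include);
  std::string Dir;
  while (std::getline(Stream, Dir, ';'))
    if (!Dir.empty()) // KeepEmpty=false in the driver
      std::cout << "system include: " << Dir << "\n";
  return 0;
}
// ---------------------------------------------------------------------------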
if (!VCToolChainPath.empty()) { addSystemInclude(DriverArgs, CC1Args, getSubDirectoryPath(SubDirectoryType::Include)); if (useUniversalCRT()) { std::string UniversalCRTSdkPath; std::string UCRTVersion; if (getUniversalCRTSdkDir(UniversalCRTSdkPath, UCRTVersion)) { AddSystemIncludeWithSubfolder(DriverArgs, CC1Args, UniversalCRTSdkPath, "Include", UCRTVersion, "ucrt"); } } std::string WindowsSDKDir; int major; std::string windowsSDKIncludeVersion; std::string windowsSDKLibVersion; if (getWindowsSDKDir(WindowsSDKDir, major, windowsSDKIncludeVersion, windowsSDKLibVersion)) { if (major >= 8) { // Note: windowsSDKIncludeVersion is empty for SDKs prior to v10. // Anyway, llvm::sys::path::append is able to manage it. AddSystemIncludeWithSubfolder(DriverArgs, CC1Args, WindowsSDKDir, "include", windowsSDKIncludeVersion, "shared"); AddSystemIncludeWithSubfolder(DriverArgs, CC1Args, WindowsSDKDir, "include", windowsSDKIncludeVersion, "um"); AddSystemIncludeWithSubfolder(DriverArgs, CC1Args, WindowsSDKDir, "include", windowsSDKIncludeVersion, "winrt"); } else { AddSystemIncludeWithSubfolder(DriverArgs, CC1Args, WindowsSDKDir, "include"); } } return; } #if defined(LLVM_ON_WIN32) // As a fallback, select default install paths. // FIXME: Don't guess drives and paths like this on Windows. const StringRef Paths[] = { "C:/Program Files/Microsoft Visual Studio 10.0/VC/include", "C:/Program Files/Microsoft Visual Studio 9.0/VC/include", "C:/Program Files/Microsoft Visual Studio 9.0/VC/PlatformSDK/Include", "C:/Program Files/Microsoft Visual Studio 8/VC/include", "C:/Program Files/Microsoft Visual Studio 8/VC/PlatformSDK/Include" }; addSystemIncludes(DriverArgs, CC1Args, Paths); #endif } void MSVCToolChain::AddClangCXXStdlibIncludeArgs(const ArgList &DriverArgs, ArgStringList &CC1Args) const { // FIXME: There should probably be logic here to find libc++ on Windows. } VersionTuple MSVCToolChain::computeMSVCVersion(const Driver *D, const ArgList &Args) const { bool IsWindowsMSVC = getTriple().isWindowsMSVCEnvironment(); VersionTuple MSVT = ToolChain::computeMSVCVersion(D, Args); if (MSVT.empty()) MSVT = getMSVCVersionFromTriple(getTriple()); if (MSVT.empty() && IsWindowsMSVC) MSVT = getMSVCVersionFromExe(getSubDirectoryPath(SubDirectoryType::Bin)); if (MSVT.empty() && Args.hasFlag(options::OPT_fms_extensions, options::OPT_fno_ms_extensions, IsWindowsMSVC)) { // -fms-compatibility-version=18.00 is default. // FIXME: Consider bumping this to 19 (MSVC2015) soon. MSVT = VersionTuple(18); } return MSVT; } std::string MSVCToolChain::ComputeEffectiveClangTriple(const ArgList &Args, types::ID InputType) const { // The MSVC version doesn't care about the architecture, even though it // may look at the triple internally. VersionTuple MSVT = computeMSVCVersion(/*D=*/nullptr, Args); MSVT = VersionTuple(MSVT.getMajor(), MSVT.getMinor().getValueOr(0), MSVT.getSubminor().getValueOr(0)); // For the rest of the triple, however, a computed architecture name may // be needed. 
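// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] ComputeEffectiveClangTriple
// below folds the detected MSVC version into the triple's environment
// component, preserving any object-format suffix. A standalone sketch of that
// string rewrite (the version number is made up):
#include <iostream>
#include <string>

static std::string addMSVCVersionSketch(const std::string &Env,
                                        const std::string &MSVT) {
  // "msvc" -> "msvc<version>", "msvc-elf" -> "msvc<version>-elf"
  std::string ObjFmt = Env.size() > 5 ? Env.substr(5) : "";
  return ObjFmt.empty() ? "msvc" + MSVT : "msvc" + MSVT + "-" + ObjFmt;
}

int main() {
  std::cout << addMSVCVersionSketch("msvc", "19.0.24215") << "\n";
  std::cout << addMSVCVersionSketch("msvc-elf", "19.0.24215") << "\n";
  return 0;
}
// ---------------------------------------------------------------------------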
llvm::Triple Triple(ToolChain::ComputeEffectiveClangTriple(Args, InputType)); if (Triple.getEnvironment() == llvm::Triple::MSVC) { StringRef ObjFmt = Triple.getEnvironmentName().split('-').second; if (ObjFmt.empty()) Triple.setEnvironmentName((Twine("msvc") + MSVT.getAsString()).str()); else Triple.setEnvironmentName( (Twine("msvc") + MSVT.getAsString() + Twine('-') + ObjFmt).str()); } return Triple.getTriple(); } SanitizerMask MSVCToolChain::getSupportedSanitizers() const { SanitizerMask Res = ToolChain::getSupportedSanitizers(); Res |= SanitizerKind::Address; return Res; } static void TranslateOptArg(Arg *A, llvm::opt::DerivedArgList &DAL, bool SupportsForcingFramePointer, const char *ExpandChar, const OptTable &Opts) { assert(A->getOption().matches(options::OPT__SLASH_O)); StringRef OptStr = A->getValue(); for (size_t I = 0, E = OptStr.size(); I != E; ++I) { const char &OptChar = *(OptStr.data() + I); switch (OptChar) { default: break; case '1': case '2': case 'x': case 'd': if (&OptChar == ExpandChar) { if (OptChar == 'd') { DAL.AddFlagArg(A, Opts.getOption(options::OPT_O0)); } else { if (OptChar == '1') { DAL.AddJoinedArg(A, Opts.getOption(options::OPT_O), "s"); } else if (OptChar == '2' || OptChar == 'x') { DAL.AddFlagArg(A, Opts.getOption(options::OPT_fbuiltin)); DAL.AddJoinedArg(A, Opts.getOption(options::OPT_O), "2"); } if (SupportsForcingFramePointer && !DAL.hasArgNoClaim(options::OPT_fno_omit_frame_pointer)) DAL.AddFlagArg(A, Opts.getOption(options::OPT_fomit_frame_pointer)); if (OptChar == '1' || OptChar == '2') DAL.AddFlagArg(A, Opts.getOption(options::OPT_ffunction_sections)); } } break; case 'b': if (I + 1 != E && isdigit(OptStr[I + 1])) { switch (OptStr[I + 1]) { case '0': DAL.AddFlagArg(A, Opts.getOption(options::OPT_fno_inline)); break; case '1': DAL.AddFlagArg(A, Opts.getOption(options::OPT_finline_hint_functions)); break; case '2': DAL.AddFlagArg(A, Opts.getOption(options::OPT_finline_functions)); break; } ++I; } break; case 'g': break; case 'i': if (I + 1 != E && OptStr[I + 1] == '-') { ++I; DAL.AddFlagArg(A, Opts.getOption(options::OPT_fno_builtin)); } else { DAL.AddFlagArg(A, Opts.getOption(options::OPT_fbuiltin)); } break; case 's': DAL.AddJoinedArg(A, Opts.getOption(options::OPT_O), "s"); break; case 't': DAL.AddJoinedArg(A, Opts.getOption(options::OPT_O), "2"); break; case 'y': { bool OmitFramePointer = true; if (I + 1 != E && OptStr[I + 1] == '-') { OmitFramePointer = false; ++I; } if (SupportsForcingFramePointer) { if (OmitFramePointer) DAL.AddFlagArg(A, Opts.getOption(options::OPT_fomit_frame_pointer)); else DAL.AddFlagArg( A, Opts.getOption(options::OPT_fno_omit_frame_pointer)); } else { // Don't warn about /Oy- in 64-bit builds (where // SupportsForcingFramePointer is false). The flag having no effect // there is a compiler-internal optimization, and people shouldn't have // to special-case their build files for 64-bit clang-cl. 
A->claim(); } break; } } } } static void TranslateDArg(Arg *A, llvm::opt::DerivedArgList &DAL, const OptTable &Opts) { assert(A->getOption().matches(options::OPT_D)); StringRef Val = A->getValue(); size_t Hash = Val.find('#'); if (Hash == StringRef::npos || Hash > Val.find('=')) { DAL.append(A); return; } std::string NewVal = Val; NewVal[Hash] = '='; DAL.AddJoinedArg(A, Opts.getOption(options::OPT_D), NewVal); } llvm::opt::DerivedArgList * MSVCToolChain::TranslateArgs(const llvm::opt::DerivedArgList &Args, StringRef BoundArch, Action::OffloadKind) const { DerivedArgList *DAL = new DerivedArgList(Args.getBaseArgs()); const OptTable &Opts = getDriver().getOpts(); // /Oy and /Oy- only has an effect under X86-32. bool SupportsForcingFramePointer = getArch() == llvm::Triple::x86; // The -O[12xd] flag actually expands to several flags. We must desugar the // flags so that options embedded can be negated. For example, the '-O2' flag // enables '-Oy'. Expanding '-O2' into its constituent flags allows us to // correctly handle '-O2 -Oy-' where the trailing '-Oy-' disables a single // aspect of '-O2'. // // Note that this expansion logic only applies to the *last* of '[12xd]'. // First step is to search for the character we'd like to expand. const char *ExpandChar = nullptr; for (Arg *A : Args) { if (!A->getOption().matches(options::OPT__SLASH_O)) continue; StringRef OptStr = A->getValue(); for (size_t I = 0, E = OptStr.size(); I != E; ++I) { char OptChar = OptStr[I]; char PrevChar = I > 0 ? OptStr[I - 1] : '0'; if (PrevChar == 'b') { // OptChar does not expand; it's an argument to the previous char. continue; } if (OptChar == '1' || OptChar == '2' || OptChar == 'x' || OptChar == 'd') ExpandChar = OptStr.data() + I; } } for (Arg *A : Args) { if (A->getOption().matches(options::OPT__SLASH_O)) { // The -O flag actually takes an amalgam of other options. For example, // '/Ogyb2' is equivalent to '/Og' '/Oy' '/Ob2'. TranslateOptArg(A, *DAL, SupportsForcingFramePointer, ExpandChar, Opts); } else if (A->getOption().matches(options::OPT_D)) { // Translate -Dfoo#bar into -Dfoo=bar. TranslateDArg(A, *DAL, Opts); } else { DAL->append(A); } } return DAL; } diff --git a/lib/Driver/ToolChains/MSVC.h b/lib/Driver/ToolChains/MSVC.h index d153691a5c90..854f88a36fd2 100644 --- a/lib/Driver/ToolChains/MSVC.h +++ b/lib/Driver/ToolChains/MSVC.h @@ -1,141 +1,146 @@ //===--- MSVC.h - MSVC ToolChain Implementations ----------------*- C++ -*-===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// #ifndef LLVM_CLANG_LIB_DRIVER_TOOLCHAINS_MSVC_H #define LLVM_CLANG_LIB_DRIVER_TOOLCHAINS_MSVC_H #include "Cuda.h" #include "clang/Driver/Compilation.h" #include "clang/Driver/Tool.h" #include "clang/Driver/ToolChain.h" namespace clang { namespace driver { namespace tools { /// Visual studio tools. 
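// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] TranslateDArg in MSVC.cpp above
// mirrors cl.exe's acceptance of '#' in place of '=' in /D macro definitions,
// but only when the '#' occurs before any '='. A standalone sketch of the same
// rewrite:
#include <iostream>
#include <string>

static std::string translateDSketch(std::string Val) {
  size_t Hash = Val.find('#');
  if (Hash == std::string::npos || Hash > Val.find('='))
    return Val; // nothing to rewrite, or the '#' is part of the macro value
  Val[Hash] = '=';
  return Val;
}

int main() {
  std::cout << translateDSketch("FOO#1") << "\n";   // FOO=1
  std::cout << translateDSketch("BAR=x#y") << "\n"; // unchanged
  std::cout << translateDSketch("BAZ") << "\n";     // unchanged
  return 0;
}
// ---------------------------------------------------------------------------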
namespace visualstudio { class LLVM_LIBRARY_VISIBILITY Linker : public Tool { public: Linker(const ToolChain &TC) : Tool("visualstudio::Linker", "linker", TC, RF_Full, llvm::sys::WEM_UTF16) {} bool hasIntegratedCPP() const override { return false; } bool isLinkJob() const override { return true; } void ConstructJob(Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const llvm::opt::ArgList &TCArgs, const char *LinkingOutput) const override; }; class LLVM_LIBRARY_VISIBILITY Compiler : public Tool { public: Compiler(const ToolChain &TC) : Tool("visualstudio::Compiler", "compiler", TC, RF_Full, llvm::sys::WEM_UTF16) {} bool hasIntegratedAssembler() const override { return true; } bool hasIntegratedCPP() const override { return true; } bool isLinkJob() const override { return false; } void ConstructJob(Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const llvm::opt::ArgList &TCArgs, const char *LinkingOutput) const override; std::unique_ptr GetCommand(Compilation &C, const JobAction &JA, const InputInfo &Output, const InputInfoList &Inputs, const llvm::opt::ArgList &TCArgs, const char *LinkingOutput) const; }; } // end namespace visualstudio } // end namespace tools namespace toolchains { class LLVM_LIBRARY_VISIBILITY MSVCToolChain : public ToolChain { public: MSVCToolChain(const Driver &D, const llvm::Triple &Triple, const llvm::opt::ArgList &Args); llvm::opt::DerivedArgList * TranslateArgs(const llvm::opt::DerivedArgList &Args, StringRef BoundArch, Action::OffloadKind DeviceOffloadKind) const override; bool IsIntegratedAssemblerDefault() const override; bool IsUnwindTablesDefault(const llvm::opt::ArgList &Args) const override; bool isPICDefault() const override; bool isPIEDefault() const override; bool isPICDefaultForced() const override; enum class SubDirectoryType { Bin, Include, Lib, }; std::string getSubDirectoryPath(SubDirectoryType Type, llvm::Triple::ArchType TargetArch) const; // Convenience overload. // Uses the current target arch. 
std::string getSubDirectoryPath(SubDirectoryType Type) const { return getSubDirectoryPath(Type, getArch()); } - bool getIsVS2017OrNewer() const { return IsVS2017OrNewer; } + enum class ToolsetLayout { + OlderVS, + VS2017OrNewer, + DevDivInternal, + }; + bool getIsVS2017OrNewer() const { return VSLayout == ToolsetLayout::VS2017OrNewer; } void AddClangSystemIncludeArgs(const llvm::opt::ArgList &DriverArgs, llvm::opt::ArgStringList &CC1Args) const override; void AddClangCXXStdlibIncludeArgs( const llvm::opt::ArgList &DriverArgs, llvm::opt::ArgStringList &CC1Args) const override; void AddCudaIncludeArgs(const llvm::opt::ArgList &DriverArgs, llvm::opt::ArgStringList &CC1Args) const override; bool getWindowsSDKLibraryPath(std::string &path) const; /// \brief Check if Universal CRT should be used if available bool getUniversalCRTLibraryPath(std::string &path) const; bool useUniversalCRT() const; VersionTuple computeMSVCVersion(const Driver *D, const llvm::opt::ArgList &Args) const override; std::string ComputeEffectiveClangTriple(const llvm::opt::ArgList &Args, types::ID InputType) const override; SanitizerMask getSupportedSanitizers() const override; void printVerboseInfo(raw_ostream &OS) const override; protected: void AddSystemIncludeWithSubfolder(const llvm::opt::ArgList &DriverArgs, llvm::opt::ArgStringList &CC1Args, const std::string &folder, const Twine &subfolder1, const Twine &subfolder2 = "", const Twine &subfolder3 = "") const; Tool *buildLinker() const override; Tool *buildAssembler() const override; private: std::string VCToolChainPath; - bool IsVS2017OrNewer = false; + ToolsetLayout VSLayout = ToolsetLayout::OlderVS; CudaInstallationDetector CudaInstallation; }; } // end namespace toolchains } // end namespace driver } // end namespace clang #endif // LLVM_CLANG_LIB_DRIVER_TOOLCHAINS_MSVC_H diff --git a/lib/Format/WhitespaceManager.cpp b/lib/Format/WhitespaceManager.cpp index 4b4fd13145fb..b1a5f1eab552 100644 --- a/lib/Format/WhitespaceManager.cpp +++ b/lib/Format/WhitespaceManager.cpp @@ -1,701 +1,706 @@ //===--- WhitespaceManager.cpp - Format C++ code --------------------------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// /// /// \file /// \brief This file implements WhitespaceManager class. 
/// //===----------------------------------------------------------------------===// #include "WhitespaceManager.h" #include "llvm/ADT/STLExtras.h" namespace clang { namespace format { bool WhitespaceManager::Change::IsBeforeInFile:: operator()(const Change &C1, const Change &C2) const { return SourceMgr.isBeforeInTranslationUnit( C1.OriginalWhitespaceRange.getBegin(), C2.OriginalWhitespaceRange.getBegin()); } WhitespaceManager::Change::Change(const FormatToken &Tok, bool CreateReplacement, SourceRange OriginalWhitespaceRange, int Spaces, unsigned StartOfTokenColumn, unsigned NewlinesBefore, StringRef PreviousLinePostfix, StringRef CurrentLinePrefix, bool ContinuesPPDirective, bool IsInsideToken) : Tok(&Tok), CreateReplacement(CreateReplacement), OriginalWhitespaceRange(OriginalWhitespaceRange), StartOfTokenColumn(StartOfTokenColumn), NewlinesBefore(NewlinesBefore), PreviousLinePostfix(PreviousLinePostfix), CurrentLinePrefix(CurrentLinePrefix), ContinuesPPDirective(ContinuesPPDirective), Spaces(Spaces), IsInsideToken(IsInsideToken), IsTrailingComment(false), TokenLength(0), PreviousEndOfTokenColumn(0), EscapedNewlineColumn(0), StartOfBlockComment(nullptr), IndentationOffset(0) {} void WhitespaceManager::replaceWhitespace(FormatToken &Tok, unsigned Newlines, unsigned Spaces, unsigned StartOfTokenColumn, bool InPPDirective) { if (Tok.Finalized) return; Tok.Decision = (Newlines > 0) ? FD_Break : FD_Continue; Changes.push_back(Change(Tok, /*CreateReplacement=*/true, Tok.WhitespaceRange, Spaces, StartOfTokenColumn, Newlines, "", "", InPPDirective && !Tok.IsFirst, /*IsInsideToken=*/false)); } void WhitespaceManager::addUntouchableToken(const FormatToken &Tok, bool InPPDirective) { if (Tok.Finalized) return; Changes.push_back(Change(Tok, /*CreateReplacement=*/false, Tok.WhitespaceRange, /*Spaces=*/0, Tok.OriginalColumn, Tok.NewlinesBefore, "", "", InPPDirective && !Tok.IsFirst, /*IsInsideToken=*/false)); } void WhitespaceManager::replaceWhitespaceInToken( const FormatToken &Tok, unsigned Offset, unsigned ReplaceChars, StringRef PreviousPostfix, StringRef CurrentPrefix, bool InPPDirective, unsigned Newlines, int Spaces) { if (Tok.Finalized) return; SourceLocation Start = Tok.getStartOfNonWhitespace().getLocWithOffset(Offset); Changes.push_back( Change(Tok, /*CreateReplacement=*/true, SourceRange(Start, Start.getLocWithOffset(ReplaceChars)), Spaces, std::max(0, Spaces), Newlines, PreviousPostfix, CurrentPrefix, InPPDirective && !Tok.IsFirst, /*IsInsideToken=*/true)); } const tooling::Replacements &WhitespaceManager::generateReplacements() { if (Changes.empty()) return Replaces; std::sort(Changes.begin(), Changes.end(), Change::IsBeforeInFile(SourceMgr)); calculateLineBreakInformation(); alignConsecutiveDeclarations(); alignConsecutiveAssignments(); alignTrailingComments(); alignEscapedNewlines(); generateChanges(); return Replaces; } void WhitespaceManager::calculateLineBreakInformation() { Changes[0].PreviousEndOfTokenColumn = 0; Change *LastOutsideTokenChange = &Changes[0]; for (unsigned i = 1, e = Changes.size(); i != e; ++i) { SourceLocation OriginalWhitespaceStart = Changes[i].OriginalWhitespaceRange.getBegin(); SourceLocation PreviousOriginalWhitespaceEnd = Changes[i - 1].OriginalWhitespaceRange.getEnd(); unsigned OriginalWhitespaceStartOffset = SourceMgr.getFileOffset(OriginalWhitespaceStart); unsigned PreviousOriginalWhitespaceEndOffset = SourceMgr.getFileOffset(PreviousOriginalWhitespaceEnd); assert(PreviousOriginalWhitespaceEndOffset <= OriginalWhitespaceStartOffset); const char *const 
PreviousOriginalWhitespaceEndData = SourceMgr.getCharacterData(PreviousOriginalWhitespaceEnd); StringRef Text(PreviousOriginalWhitespaceEndData, SourceMgr.getCharacterData(OriginalWhitespaceStart) - PreviousOriginalWhitespaceEndData); // Usually consecutive changes would occur in consecutive tokens. This is // not the case however when analyzing some preprocessor runs of the // annotated lines. For example, in this code: // // #if A // line 1 // int i = 1; // #else B // line 2 // int i = 2; // #endif // line 3 // // one of the runs will produce the sequence of lines marked with line 1, 2 // and 3. So the two consecutive whitespace changes just before '// line 2' // and before '#endif // line 3' span multiple lines and tokens: // // #else B{change X}[// line 2 // int i = 2; // ]{change Y}#endif // line 3 // // For this reason, if the text between consecutive changes spans multiple // newlines, the token length must be adjusted to the end of the original // line of the token. auto NewlinePos = Text.find_first_of('\n'); if (NewlinePos == StringRef::npos) { Changes[i - 1].TokenLength = OriginalWhitespaceStartOffset - PreviousOriginalWhitespaceEndOffset + Changes[i].PreviousLinePostfix.size() + Changes[i - 1].CurrentLinePrefix.size(); } else { Changes[i - 1].TokenLength = NewlinePos + Changes[i - 1].CurrentLinePrefix.size(); } // If there are multiple changes in this token, sum up all the changes until // the end of the line. if (Changes[i - 1].IsInsideToken && Changes[i - 1].NewlinesBefore == 0) LastOutsideTokenChange->TokenLength += Changes[i - 1].TokenLength + Changes[i - 1].Spaces; else LastOutsideTokenChange = &Changes[i - 1]; Changes[i].PreviousEndOfTokenColumn = Changes[i - 1].StartOfTokenColumn + Changes[i - 1].TokenLength; Changes[i - 1].IsTrailingComment = (Changes[i].NewlinesBefore > 0 || Changes[i].Tok->is(tok::eof) || (Changes[i].IsInsideToken && Changes[i].Tok->is(tok::comment))) && Changes[i - 1].Tok->is(tok::comment) && // FIXME: This is a dirty hack. The problem is that // BreakableLineCommentSection does comment reflow changes and here is // the aligning of trailing comments. Consider the case where we reflow // the second line up in this example: // // // line 1 // // line 2 // // That amounts to 2 changes by BreakableLineCommentSection: // - the first, delimited by (), for the whitespace between the tokens, // - and second, delimited by [], for the whitespace at the beginning // of the second token: // // // line 1( // )[// ]line 2 // // So in the end we have two changes like this: // // // line1()[ ]line 2 // // Note that the OriginalWhitespaceStart of the second change is the // same as the PreviousOriginalWhitespaceEnd of the first change. // In this case, the below check ensures that the second change doesn't // get treated as a trailing comment change here, since this might // trigger additional whitespace to be wrongly inserted before "line 2" // by the comment aligner here. // // For a proper solution we need a mechanism to say to WhitespaceManager // that a particular change breaks the current sequence of trailing // comments. OriginalWhitespaceStart != PreviousOriginalWhitespaceEnd; } // FIXME: The last token is currently not always an eof token; in those // cases, setting TokenLength of the last token to 0 is wrong. 
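// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] The TokenLength computation
// above caps a token at the first newline of the text separating two
// consecutive changes, which matters when a preprocessor run stitches together
// lines that are not adjacent in the file. A standalone sketch of that rule
// (it ignores the line prefix/postfix bookkeeping of the real code):
#include <iostream>
#include <string>

static size_t tokenLengthSketch(const std::string &TextBetweenChanges) {
  size_t NewlinePos = TextBetweenChanges.find('\n');
  if (NewlinePos == std::string::npos)
    return TextBetweenChanges.size(); // both changes sit on the same line
  return NewlinePos;                  // stop at the end of the original line
}

int main() {
  std::cout << tokenLengthSketch("int i = 2;") << "\n";         // 10
  std::cout << tokenLengthSketch("int i = 2;\n#endif") << "\n"; // still 10
  return 0;
}
// ---------------------------------------------------------------------------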
Changes.back().TokenLength = 0; Changes.back().IsTrailingComment = Changes.back().Tok->is(tok::comment); const WhitespaceManager::Change *LastBlockComment = nullptr; for (auto &Change : Changes) { // Reset the IsTrailingComment flag for changes inside of trailing comments // so they don't get realigned later. Comment line breaks however still need // to be aligned. if (Change.IsInsideToken && Change.NewlinesBefore == 0) Change.IsTrailingComment = false; Change.StartOfBlockComment = nullptr; Change.IndentationOffset = 0; if (Change.Tok->is(tok::comment)) { if (Change.Tok->is(TT_LineComment) || !Change.IsInsideToken) LastBlockComment = &Change; else { if ((Change.StartOfBlockComment = LastBlockComment)) Change.IndentationOffset = Change.StartOfTokenColumn - Change.StartOfBlockComment->StartOfTokenColumn; } } else { LastBlockComment = nullptr; } } } // Align a single sequence of tokens, see AlignTokens below. template static void AlignTokenSequence(unsigned Start, unsigned End, unsigned Column, F &&Matches, SmallVector &Changes) { bool FoundMatchOnLine = false; int Shift = 0; // ScopeStack keeps track of the current scope depth. It contains indices of // the first token on each scope. // We only run the "Matches" function on tokens from the outer-most scope. // However, we do need to pay special attention to one class of tokens // that are not in the outer-most scope, and that is function parameters // which are split across multiple lines, as illustrated by this example: // double a(int x); // int b(int y, // double z); // In the above example, we need to take special care to ensure that // 'double z' is indented along with it's owning function 'b'. SmallVector ScopeStack; for (unsigned i = Start; i != End; ++i) { if (ScopeStack.size() != 0 && Changes[i].nestingAndIndentLevel() < Changes[ScopeStack.back()].nestingAndIndentLevel()) ScopeStack.pop_back(); if (i != Start && Changes[i].nestingAndIndentLevel() > Changes[i - 1].nestingAndIndentLevel()) ScopeStack.push_back(i); bool InsideNestedScope = ScopeStack.size() != 0; if (Changes[i].NewlinesBefore > 0 && !InsideNestedScope) { Shift = 0; FoundMatchOnLine = false; } // If this is the first matching token to be aligned, remember by how many // spaces it has to be shifted, so the rest of the changes on the line are // shifted by the same amount if (!FoundMatchOnLine && !InsideNestedScope && Matches(Changes[i])) { FoundMatchOnLine = true; Shift = Column - Changes[i].StartOfTokenColumn; Changes[i].Spaces += Shift; } // This is for function parameters that are split across multiple lines, // as mentioned in the ScopeStack comment. if (InsideNestedScope && Changes[i].NewlinesBefore > 0) { unsigned ScopeStart = ScopeStack.back(); if (Changes[ScopeStart - 1].Tok->is(TT_FunctionDeclarationName) || (ScopeStart > Start + 1 && Changes[ScopeStart - 2].Tok->is(TT_FunctionDeclarationName))) Changes[i].Spaces += Shift; } assert(Shift >= 0); Changes[i].StartOfTokenColumn += Shift; if (i + 1 != Changes.size()) Changes[i + 1].PreviousEndOfTokenColumn += Shift; } } // Walk through a subset of the changes, starting at StartAt, and find // sequences of matching tokens to align. To do so, keep track of the lines and // whether or not a matching token was found on a line. If a matching token is // found, extend the current sequence. If the current line cannot be part of a // sequence, e.g. because there is an empty line before it or it contains only // non-matching tokens, finalize the previous sequence. 
// The value returned is the token on which we stopped, either because we // exhausted all items inside Changes, or because we hit a scope level higher // than our initial scope. // This function is recursive. Each invocation processes only the scope level // equal to the initial level, which is the level of Changes[StartAt]. // If we encounter a scope level greater than the initial level, then we call // ourselves recursively, thereby avoiding the pollution of the current state // with the alignment requirements of the nested sub-level. This recursive // behavior is necessary for aligning function prototypes that have one or more // arguments. // If this function encounters a scope level less than the initial level, // it returns the current position. // There is a non-obvious subtlety in the recursive behavior: Even though we // defer processing of nested levels to recursive invocations of this // function, when it comes time to align a sequence of tokens, we run the // alignment on the entire sequence, including the nested levels. // When doing so, most of the nested tokens are skipped, because their // alignment was already handled by the recursive invocations of this function. // However, the special exception is that we do NOT skip function parameters // that are split across multiple lines. See the test case in FormatTest.cpp // that mentions "split function parameter alignment" for an example of this. template static unsigned AlignTokens(const FormatStyle &Style, F &&Matches, SmallVector &Changes, unsigned StartAt) { unsigned MinColumn = 0; unsigned MaxColumn = UINT_MAX; // Line number of the start and the end of the current token sequence. unsigned StartOfSequence = 0; unsigned EndOfSequence = 0; // Measure the scope level (i.e. depth of (), [], {}) of the first token, and // abort when we hit any token in a higher scope than the starting one. auto NestingAndIndentLevel = StartAt < Changes.size() ? Changes[StartAt].nestingAndIndentLevel() : std::pair(0, 0); // Keep track of the number of commas before the matching tokens, we will only // align a sequence of matching tokens if they are preceded by the same number // of commas. unsigned CommasBeforeLastMatch = 0; unsigned CommasBeforeMatch = 0; // Whether a matching token has been found on the current line. bool FoundMatchOnLine = false; // Aligns a sequence of matching tokens, on the MinColumn column. // // Sequences start from the first matching token to align, and end at the // first token of the first line that doesn't need to be aligned. // // We need to adjust the StartOfTokenColumn of each Change that is on a line // containing any matching token to be aligned and located after such token. auto AlignCurrentSequence = [&] { if (StartOfSequence > 0 && StartOfSequence < EndOfSequence) AlignTokenSequence(StartOfSequence, EndOfSequence, MinColumn, Matches, Changes); MinColumn = 0; MaxColumn = UINT_MAX; StartOfSequence = 0; EndOfSequence = 0; }; unsigned i = StartAt; for (unsigned e = Changes.size(); i != e; ++i) { if (Changes[i].nestingAndIndentLevel() < NestingAndIndentLevel) break; if (Changes[i].NewlinesBefore != 0) { CommasBeforeMatch = 0; EndOfSequence = i; // If there is a blank line, or if the last line didn't contain any // matching token, the sequence ends here. 
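// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] AlignTokens is what drives
// AlignConsecutiveAssignments/Declarations: matching tokens on consecutive
// lines (same scope, same number of preceding commas, no blank line in
// between) are pushed out to a common column. A toy standalone version of that
// column computation for '=' signs; the real code additionally tracks scopes,
// comma counts and the column limit:
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
  std::vector<std::string> Lines = {"int a = 1;", "int somelongname = 2;",
                                    "double c = 3;"};
  size_t Column = 0;
  for (const std::string &L : Lines)
    Column = std::max(Column, L.find('=')); // MinColumn of the sequence
  for (std::string &L : Lines) {
    size_t Eq = L.find('=');
    L.insert(Eq, Column - Eq, ' ');         // Shift for this line
    std::cout << L << "\n";                 // the '=' now line up in one column
  }
  return 0;
}
// ---------------------------------------------------------------------------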
if (Changes[i].NewlinesBefore > 1 || !FoundMatchOnLine) AlignCurrentSequence(); FoundMatchOnLine = false; } if (Changes[i].Tok->is(tok::comma)) { ++CommasBeforeMatch; } else if (Changes[i].nestingAndIndentLevel() > NestingAndIndentLevel) { // Call AlignTokens recursively, skipping over this scope block. unsigned StoppedAt = AlignTokens(Style, Matches, Changes, i); i = StoppedAt - 1; continue; } if (!Matches(Changes[i])) continue; // If there is more than one matching token per line, or if the number of // preceding commas, do not match anymore, end the sequence. if (FoundMatchOnLine || CommasBeforeMatch != CommasBeforeLastMatch) AlignCurrentSequence(); CommasBeforeLastMatch = CommasBeforeMatch; FoundMatchOnLine = true; if (StartOfSequence == 0) StartOfSequence = i; unsigned ChangeMinColumn = Changes[i].StartOfTokenColumn; int LineLengthAfter = -Changes[i].Spaces; for (unsigned j = i; j != e && Changes[j].NewlinesBefore == 0; ++j) LineLengthAfter += Changes[j].Spaces + Changes[j].TokenLength; unsigned ChangeMaxColumn = Style.ColumnLimit - LineLengthAfter; // If we are restricted by the maximum column width, end the sequence. if (ChangeMinColumn > MaxColumn || ChangeMaxColumn < MinColumn || CommasBeforeLastMatch != CommasBeforeMatch) { AlignCurrentSequence(); StartOfSequence = i; } MinColumn = std::max(MinColumn, ChangeMinColumn); MaxColumn = std::min(MaxColumn, ChangeMaxColumn); } EndOfSequence = i; AlignCurrentSequence(); return i; } void WhitespaceManager::alignConsecutiveAssignments() { if (!Style.AlignConsecutiveAssignments) return; AlignTokens(Style, [&](const Change &C) { // Do not align on equal signs that are first on a line. if (C.NewlinesBefore > 0) return false; // Do not align on equal signs that are last on a line. if (&C != &Changes.back() && (&C + 1)->NewlinesBefore > 0) return false; return C.Tok->is(tok::equal); }, Changes, /*StartAt=*/0); } void WhitespaceManager::alignConsecutiveDeclarations() { if (!Style.AlignConsecutiveDeclarations) return; // FIXME: Currently we don't handle properly the PointerAlignment: Right // The * and & are not aligned and are left dangling. Something has to be done // about it, but it raises the question of alignment of code like: // const char* const* v1; // float const* v2; // SomeVeryLongType const& v3; AlignTokens(Style, [](Change const &C) { // tok::kw_operator is necessary for aligning operator overload // definitions. return C.Tok->is(TT_StartOfName) || C.Tok->is(TT_FunctionDeclarationName) || C.Tok->is(tok::kw_operator); }, Changes, /*StartAt=*/0); } void WhitespaceManager::alignTrailingComments() { unsigned MinColumn = 0; unsigned MaxColumn = UINT_MAX; unsigned StartOfSequence = 0; bool BreakBeforeNext = false; unsigned Newlines = 0; for (unsigned i = 0, e = Changes.size(); i != e; ++i) { if (Changes[i].StartOfBlockComment) continue; Newlines += Changes[i].NewlinesBefore; if (!Changes[i].IsTrailingComment) continue; unsigned ChangeMinColumn = Changes[i].StartOfTokenColumn; - unsigned ChangeMaxColumn = Style.ColumnLimit >= Changes[i].TokenLength - ? Style.ColumnLimit - Changes[i].TokenLength - : ChangeMinColumn; + unsigned ChangeMaxColumn; + + if (Style.ColumnLimit == 0) + ChangeMaxColumn = UINT_MAX; + else if (Style.ColumnLimit >= Changes[i].TokenLength) + ChangeMaxColumn = Style.ColumnLimit - Changes[i].TokenLength; + else + ChangeMaxColumn = ChangeMinColumn; // If we don't create a replacement for this change, we have to consider // it to be immovable. 
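// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] The ColumnLimit hunk above is
// what lets trailing comments still be aligned when ColumnLimit is 0 ("no
// limit"): previously a zero limit effectively collapsed ChangeMaxColumn to
// ChangeMinColumn, pinning every comment where it already was. A standalone
// sketch of the new computation:
#include <climits>
#include <iostream>

static unsigned maxCommentColumnSketch(unsigned ColumnLimit,
                                       unsigned TokenLength,
                                       unsigned ChangeMinColumn) {
  if (ColumnLimit == 0)
    return UINT_MAX;                  // no limit: the comment may move anywhere
  if (ColumnLimit >= TokenLength)
    return ColumnLimit - TokenLength; // keep the comment inside the limit
  return ChangeMinColumn;             // the comment alone exceeds the limit
}

int main() {
  std::cout << maxCommentColumnSketch(80, 20, 12) << "\n"; // 60
  std::cout << maxCommentColumnSketch(0, 20, 12) << "\n";  // UINT_MAX
  return 0;
}
// ---------------------------------------------------------------------------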
if (!Changes[i].CreateReplacement) ChangeMaxColumn = ChangeMinColumn; if (i + 1 != e && Changes[i + 1].ContinuesPPDirective) ChangeMaxColumn -= 2; // If this comment follows an } in column 0, it probably documents the // closing of a namespace and we don't want to align it. bool FollowsRBraceInColumn0 = i > 0 && Changes[i].NewlinesBefore == 0 && Changes[i - 1].Tok->is(tok::r_brace) && Changes[i - 1].StartOfTokenColumn == 0; bool WasAlignedWithStartOfNextLine = false; if (Changes[i].NewlinesBefore == 1) { // A comment on its own line. unsigned CommentColumn = SourceMgr.getSpellingColumnNumber( Changes[i].OriginalWhitespaceRange.getEnd()); for (unsigned j = i + 1; j != e; ++j) { if (Changes[j].Tok->is(tok::comment)) continue; unsigned NextColumn = SourceMgr.getSpellingColumnNumber( Changes[j].OriginalWhitespaceRange.getEnd()); // The start of the next token was previously aligned with the // start of this comment. WasAlignedWithStartOfNextLine = CommentColumn == NextColumn || CommentColumn == NextColumn + Style.IndentWidth; break; } } if (!Style.AlignTrailingComments || FollowsRBraceInColumn0) { alignTrailingComments(StartOfSequence, i, MinColumn); MinColumn = ChangeMinColumn; MaxColumn = ChangeMinColumn; StartOfSequence = i; } else if (BreakBeforeNext || Newlines > 1 || (ChangeMinColumn > MaxColumn || ChangeMaxColumn < MinColumn) || // Break the comment sequence if the previous line did not end // in a trailing comment. (Changes[i].NewlinesBefore == 1 && i > 0 && !Changes[i - 1].IsTrailingComment) || WasAlignedWithStartOfNextLine) { alignTrailingComments(StartOfSequence, i, MinColumn); MinColumn = ChangeMinColumn; MaxColumn = ChangeMaxColumn; StartOfSequence = i; } else { MinColumn = std::max(MinColumn, ChangeMinColumn); MaxColumn = std::min(MaxColumn, ChangeMaxColumn); } BreakBeforeNext = (i == 0) || (Changes[i].NewlinesBefore > 1) || // Never start a sequence with a comment at the beginning of // the line. (Changes[i].NewlinesBefore == 1 && StartOfSequence == i); Newlines = 0; } alignTrailingComments(StartOfSequence, Changes.size(), MinColumn); } void WhitespaceManager::alignTrailingComments(unsigned Start, unsigned End, unsigned Column) { for (unsigned i = Start; i != End; ++i) { int Shift = 0; if (Changes[i].IsTrailingComment) { Shift = Column - Changes[i].StartOfTokenColumn; } if (Changes[i].StartOfBlockComment) { Shift = Changes[i].IndentationOffset + Changes[i].StartOfBlockComment->StartOfTokenColumn - Changes[i].StartOfTokenColumn; } assert(Shift >= 0); Changes[i].Spaces += Shift; if (i + 1 != Changes.size()) Changes[i + 1].PreviousEndOfTokenColumn += Shift; Changes[i].StartOfTokenColumn += Shift; } } void WhitespaceManager::alignEscapedNewlines() { if (Style.AlignEscapedNewlines == FormatStyle::ENAS_DontAlign) return; bool AlignLeft = Style.AlignEscapedNewlines == FormatStyle::ENAS_Left; unsigned MaxEndOfLine = AlignLeft ? 0 : Style.ColumnLimit; unsigned StartOfMacro = 0; for (unsigned i = 1, e = Changes.size(); i < e; ++i) { Change &C = Changes[i]; if (C.NewlinesBefore > 0) { if (C.ContinuesPPDirective) { MaxEndOfLine = std::max(C.PreviousEndOfTokenColumn + 2, MaxEndOfLine); } else { alignEscapedNewlines(StartOfMacro + 1, i, MaxEndOfLine); MaxEndOfLine = AlignLeft ? 
0 : Style.ColumnLimit; StartOfMacro = i; } } } alignEscapedNewlines(StartOfMacro + 1, Changes.size(), MaxEndOfLine); } void WhitespaceManager::alignEscapedNewlines(unsigned Start, unsigned End, unsigned Column) { for (unsigned i = Start; i < End; ++i) { Change &C = Changes[i]; if (C.NewlinesBefore > 0) { assert(C.ContinuesPPDirective); if (C.PreviousEndOfTokenColumn + 1 > Column) C.EscapedNewlineColumn = 0; else C.EscapedNewlineColumn = Column; } } } void WhitespaceManager::generateChanges() { for (unsigned i = 0, e = Changes.size(); i != e; ++i) { const Change &C = Changes[i]; if (i > 0) { assert(Changes[i - 1].OriginalWhitespaceRange.getBegin() != C.OriginalWhitespaceRange.getBegin() && "Generating two replacements for the same location"); } if (C.CreateReplacement) { std::string ReplacementText = C.PreviousLinePostfix; if (C.ContinuesPPDirective) appendNewlineText(ReplacementText, C.NewlinesBefore, C.PreviousEndOfTokenColumn, C.EscapedNewlineColumn); else appendNewlineText(ReplacementText, C.NewlinesBefore); appendIndentText(ReplacementText, C.Tok->IndentLevel, std::max(0, C.Spaces), C.StartOfTokenColumn - std::max(0, C.Spaces)); ReplacementText.append(C.CurrentLinePrefix); storeReplacement(C.OriginalWhitespaceRange, ReplacementText); } } } void WhitespaceManager::storeReplacement(SourceRange Range, StringRef Text) { unsigned WhitespaceLength = SourceMgr.getFileOffset(Range.getEnd()) - SourceMgr.getFileOffset(Range.getBegin()); // Don't create a replacement, if it does not change anything. if (StringRef(SourceMgr.getCharacterData(Range.getBegin()), WhitespaceLength) == Text) return; auto Err = Replaces.add(tooling::Replacement( SourceMgr, CharSourceRange::getCharRange(Range), Text)); // FIXME: better error handling. For now, just print an error message in the // release version. if (Err) { llvm::errs() << llvm::toString(std::move(Err)) << "\n"; assert(false); } } void WhitespaceManager::appendNewlineText(std::string &Text, unsigned Newlines) { for (unsigned i = 0; i < Newlines; ++i) Text.append(UseCRLF ? "\r\n" : "\n"); } void WhitespaceManager::appendNewlineText(std::string &Text, unsigned Newlines, unsigned PreviousEndOfTokenColumn, unsigned EscapedNewlineColumn) { if (Newlines > 0) { unsigned Offset = std::min(EscapedNewlineColumn - 2, PreviousEndOfTokenColumn); for (unsigned i = 0; i < Newlines; ++i) { Text.append(EscapedNewlineColumn - Offset - 1, ' '); Text.append(UseCRLF ? "\\\r\n" : "\\\n"); Offset = 0; } } } void WhitespaceManager::appendIndentText(std::string &Text, unsigned IndentLevel, unsigned Spaces, unsigned WhitespaceStartColumn) { switch (Style.UseTab) { case FormatStyle::UT_Never: Text.append(Spaces, ' '); break; case FormatStyle::UT_Always: { unsigned FirstTabWidth = Style.TabWidth - WhitespaceStartColumn % Style.TabWidth; // Indent with tabs only when there's at least one full tab. if (FirstTabWidth + Style.TabWidth <= Spaces) { Spaces -= FirstTabWidth; Text.append("\t"); } Text.append(Spaces / Style.TabWidth, '\t'); Text.append(Spaces % Style.TabWidth, ' '); break; } case FormatStyle::UT_ForIndentation: if (WhitespaceStartColumn == 0) { unsigned Indentation = IndentLevel * Style.IndentWidth; // This happens, e.g. when a line in a block comment is indented less than // the first one. 
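// ---------------------------------------------------------------------------
// [Illustrative aside, not part of this patch] The UseTab handling in
// appendIndentText turns a requested amount of leading whitespace into whole
// tabs plus a space remainder. A standalone sketch of the simple
// "start of line" case (the TabWidth value is made up):
#include <iostream>
#include <string>

static std::string indentWithTabsSketch(unsigned Spaces, unsigned TabWidth) {
  std::string Text(Spaces / TabWidth, '\t'); // as many whole tabs as fit
  Text.append(Spaces % TabWidth, ' ');       // pad the remainder with spaces
  return Text;
}

int main() {
  std::cout << '[' << indentWithTabsSketch(11, 4) << "]\n"; // 2 tabs, 3 spaces
  return 0;
}
// ---------------------------------------------------------------------------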
if (Indentation > Spaces) Indentation = Spaces; unsigned Tabs = Indentation / Style.TabWidth; Text.append(Tabs, '\t'); Spaces -= Tabs * Style.TabWidth; } Text.append(Spaces, ' '); break; case FormatStyle::UT_ForContinuationAndIndentation: if (WhitespaceStartColumn == 0) { unsigned Tabs = Spaces / Style.TabWidth; Text.append(Tabs, '\t'); Spaces -= Tabs * Style.TabWidth; } Text.append(Spaces, ' '); break; } } } // namespace format } // namespace clang diff --git a/lib/Headers/unwind.h b/lib/Headers/unwind.h index e94b00b57c26..4f74a3478740 100644 --- a/lib/Headers/unwind.h +++ b/lib/Headers/unwind.h @@ -1,337 +1,299 @@ /*===---- unwind.h - Stack unwinding ----------------------------------------=== * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN * THE SOFTWARE. * *===-----------------------------------------------------------------------=== */ /* See "Data Definitions for libgcc_s" in the Linux Standard Base.*/ #ifndef __CLANG_UNWIND_H #define __CLANG_UNWIND_H #if defined(__APPLE__) && __has_include_next() /* Darwin (from 11.x on) provide an unwind.h. If that's available, * use it. libunwind wraps some of its definitions in #ifdef _GNU_SOURCE, * so define that around the include.*/ # ifndef _GNU_SOURCE # define _SHOULD_UNDEFINE_GNU_SOURCE # define _GNU_SOURCE # endif // libunwind's unwind.h reflects the current visibility. However, Mozilla // builds with -fvisibility=hidden and relies on gcc's unwind.h to reset the // visibility to default and export its contents. gcc also allows users to // override its override by #defining HIDE_EXPORTS (but note, this only obeys // the user's -fvisibility setting; it doesn't hide any exports on its own). 
We // imitate gcc's header here: # ifdef HIDE_EXPORTS # include_next # else # pragma GCC visibility push(default) # include_next # pragma GCC visibility pop # endif # ifdef _SHOULD_UNDEFINE_GNU_SOURCE # undef _GNU_SOURCE # undef _SHOULD_UNDEFINE_GNU_SOURCE # endif #else #include #ifdef __cplusplus extern "C" { #endif /* It is a bit strange for a header to play with the visibility of the symbols it declares, but this matches gcc's behavior and some programs depend on it */ #ifndef HIDE_EXPORTS #pragma GCC visibility push(default) #endif typedef uintptr_t _Unwind_Word; typedef intptr_t _Unwind_Sword; typedef uintptr_t _Unwind_Ptr; typedef uintptr_t _Unwind_Internal_Ptr; typedef uint64_t _Unwind_Exception_Class; typedef intptr_t _sleb128_t; typedef uintptr_t _uleb128_t; struct _Unwind_Context; -#if defined(__arm__) && !(defined(__USING_SJLJ_EXCEPTIONS__) || defined(__ARM_DWARF_EH___)) -struct _Unwind_Control_Block; -typedef struct _Unwind_Control_Block _Unwind_Exception; /* Alias */ -#else struct _Unwind_Exception; -typedef struct _Unwind_Exception _Unwind_Exception; -#endif typedef enum { _URC_NO_REASON = 0, #if defined(__arm__) && !defined(__USING_SJLJ_EXCEPTIONS__) && \ !defined(__ARM_DWARF_EH__) _URC_OK = 0, /* used by ARM EHABI */ #endif _URC_FOREIGN_EXCEPTION_CAUGHT = 1, _URC_FATAL_PHASE2_ERROR = 2, _URC_FATAL_PHASE1_ERROR = 3, _URC_NORMAL_STOP = 4, _URC_END_OF_STACK = 5, _URC_HANDLER_FOUND = 6, _URC_INSTALL_CONTEXT = 7, _URC_CONTINUE_UNWIND = 8, #if defined(__arm__) && !defined(__USING_SJLJ_EXCEPTIONS__) && \ !defined(__ARM_DWARF_EH__) _URC_FAILURE = 9 /* used by ARM EHABI */ #endif } _Unwind_Reason_Code; typedef enum { _UA_SEARCH_PHASE = 1, _UA_CLEANUP_PHASE = 2, _UA_HANDLER_FRAME = 4, _UA_FORCE_UNWIND = 8, _UA_END_OF_STACK = 16 /* gcc extension to C++ ABI */ } _Unwind_Action; typedef void (*_Unwind_Exception_Cleanup_Fn)(_Unwind_Reason_Code, - _Unwind_Exception *); - -#if defined(__arm__) && !(defined(__USING_SJLJ_EXCEPTIONS__) || defined(__ARM_DWARF_EH___)) -typedef struct _Unwind_Control_Block _Unwind_Control_Block; -typedef uint32_t _Unwind_EHT_Header; - -struct _Unwind_Control_Block { - uint64_t exception_class; - void (*exception_cleanup)(_Unwind_Reason_Code, _Unwind_Control_Block *); - /* unwinder cache (private fields for the unwinder's use) */ - struct { - uint32_t reserved1; /* forced unwind stop function, 0 if not forced */ - uint32_t reserved2; /* personality routine */ - uint32_t reserved3; /* callsite */ - uint32_t reserved4; /* forced unwind stop argument */ - uint32_t reserved5; - } unwinder_cache; - /* propagation barrier cache (valid after phase 1) */ - struct { - uint32_t sp; - uint32_t bitpattern[5]; - } barrier_cache; - /* cleanup cache (preserved over cleanup) */ - struct { - uint32_t bitpattern[4]; - } cleanup_cache; - /* personality cache (for personality's benefit) */ - struct { - uint32_t fnstart; /* function start address */ - _Unwind_EHT_Header *ehtp; /* pointer to EHT entry header word */ - uint32_t additional; /* additional data */ - uint32_t reserved1; - } pr_cache; - long long int : 0; /* force alignment of next item to 8-byte boundary */ -}; -#else + struct _Unwind_Exception *); + struct _Unwind_Exception { _Unwind_Exception_Class exception_class; _Unwind_Exception_Cleanup_Fn exception_cleanup; _Unwind_Word private_1; _Unwind_Word private_2; /* The Itanium ABI requires that _Unwind_Exception objects are "double-word * aligned". GCC has interpreted this to mean "use the maximum useful * alignment for the target"; so do we. 
*/ } __attribute__((__aligned__)); -#endif typedef _Unwind_Reason_Code (*_Unwind_Stop_Fn)(int, _Unwind_Action, _Unwind_Exception_Class, - _Unwind_Exception *, + struct _Unwind_Exception *, struct _Unwind_Context *, void *); -typedef _Unwind_Reason_Code (*_Unwind_Personality_Fn)(int, _Unwind_Action, - _Unwind_Exception_Class, - _Unwind_Exception *, - struct _Unwind_Context *); +typedef _Unwind_Reason_Code (*_Unwind_Personality_Fn)( + int, _Unwind_Action, _Unwind_Exception_Class, struct _Unwind_Exception *, + struct _Unwind_Context *); typedef _Unwind_Personality_Fn __personality_routine; typedef _Unwind_Reason_Code (*_Unwind_Trace_Fn)(struct _Unwind_Context *, void *); -#if defined(__arm__) && !(defined(__USING_SJLJ_EXCEPTIONS__) || defined(__ARM_DWARF_EH___)) +#if defined(__arm__) && !defined(__APPLE__) + typedef enum { _UVRSC_CORE = 0, /* integer register */ _UVRSC_VFP = 1, /* vfp */ _UVRSC_WMMXD = 3, /* Intel WMMX data register */ _UVRSC_WMMXC = 4 /* Intel WMMX control register */ } _Unwind_VRS_RegClass; typedef enum { _UVRSD_UINT32 = 0, _UVRSD_VFPX = 1, _UVRSD_UINT64 = 3, _UVRSD_FLOAT = 4, _UVRSD_DOUBLE = 5 } _Unwind_VRS_DataRepresentation; typedef enum { _UVRSR_OK = 0, _UVRSR_NOT_IMPLEMENTED = 1, _UVRSR_FAILED = 2 } _Unwind_VRS_Result; +#if !defined(__USING_SJLJ_EXCEPTIONS__) && !defined(__ARM_DWARF_EH__) typedef uint32_t _Unwind_State; #define _US_VIRTUAL_UNWIND_FRAME ((_Unwind_State)0) #define _US_UNWIND_FRAME_STARTING ((_Unwind_State)1) #define _US_UNWIND_FRAME_RESUME ((_Unwind_State)2) #define _US_ACTION_MASK ((_Unwind_State)3) #define _US_FORCE_UNWIND ((_Unwind_State)8) +#endif _Unwind_VRS_Result _Unwind_VRS_Get(struct _Unwind_Context *__context, _Unwind_VRS_RegClass __regclass, uint32_t __regno, _Unwind_VRS_DataRepresentation __representation, void *__valuep); _Unwind_VRS_Result _Unwind_VRS_Set(struct _Unwind_Context *__context, _Unwind_VRS_RegClass __regclass, uint32_t __regno, _Unwind_VRS_DataRepresentation __representation, void *__valuep); static __inline__ _Unwind_Word _Unwind_GetGR(struct _Unwind_Context *__context, int __index) { _Unwind_Word __value; _Unwind_VRS_Get(__context, _UVRSC_CORE, __index, _UVRSD_UINT32, &__value); return __value; } static __inline__ void _Unwind_SetGR(struct _Unwind_Context *__context, int __index, _Unwind_Word __value) { _Unwind_VRS_Set(__context, _UVRSC_CORE, __index, _UVRSD_UINT32, &__value); } static __inline__ _Unwind_Word _Unwind_GetIP(struct _Unwind_Context *__context) { _Unwind_Word __ip = _Unwind_GetGR(__context, 15); return __ip & ~(_Unwind_Word)(0x1); /* Remove thumb mode bit. 
*/ } static __inline__ void _Unwind_SetIP(struct _Unwind_Context *__context, _Unwind_Word __value) { _Unwind_Word __thumb_mode_bit = _Unwind_GetGR(__context, 15) & 0x1; _Unwind_SetGR(__context, 15, __value | __thumb_mode_bit); } #else _Unwind_Word _Unwind_GetGR(struct _Unwind_Context *, int); void _Unwind_SetGR(struct _Unwind_Context *, int, _Unwind_Word); _Unwind_Word _Unwind_GetIP(struct _Unwind_Context *); void _Unwind_SetIP(struct _Unwind_Context *, _Unwind_Word); #endif _Unwind_Word _Unwind_GetIPInfo(struct _Unwind_Context *, int *); _Unwind_Word _Unwind_GetCFA(struct _Unwind_Context *); _Unwind_Word _Unwind_GetBSP(struct _Unwind_Context *); void *_Unwind_GetLanguageSpecificData(struct _Unwind_Context *); _Unwind_Ptr _Unwind_GetRegionStart(struct _Unwind_Context *); /* DWARF EH functions; currently not available on Darwin/ARM */ #if !defined(__APPLE__) || !defined(__arm__) -_Unwind_Reason_Code _Unwind_RaiseException(_Unwind_Exception *); -_Unwind_Reason_Code _Unwind_ForcedUnwind(_Unwind_Exception *, _Unwind_Stop_Fn, - void *); -void _Unwind_DeleteException(_Unwind_Exception *); -void _Unwind_Resume(_Unwind_Exception *); -_Unwind_Reason_Code _Unwind_Resume_or_Rethrow(_Unwind_Exception *); + +_Unwind_Reason_Code _Unwind_RaiseException(struct _Unwind_Exception *); +_Unwind_Reason_Code _Unwind_ForcedUnwind(struct _Unwind_Exception *, + _Unwind_Stop_Fn, void *); +void _Unwind_DeleteException(struct _Unwind_Exception *); +void _Unwind_Resume(struct _Unwind_Exception *); +_Unwind_Reason_Code _Unwind_Resume_or_Rethrow(struct _Unwind_Exception *); #endif _Unwind_Reason_Code _Unwind_Backtrace(_Unwind_Trace_Fn, void *); /* setjmp(3)/longjmp(3) stuff */ typedef struct SjLj_Function_Context *_Unwind_FunctionContext_t; void _Unwind_SjLj_Register(_Unwind_FunctionContext_t); void _Unwind_SjLj_Unregister(_Unwind_FunctionContext_t); -_Unwind_Reason_Code _Unwind_SjLj_RaiseException(_Unwind_Exception *); -_Unwind_Reason_Code _Unwind_SjLj_ForcedUnwind(_Unwind_Exception *, +_Unwind_Reason_Code _Unwind_SjLj_RaiseException(struct _Unwind_Exception *); +_Unwind_Reason_Code _Unwind_SjLj_ForcedUnwind(struct _Unwind_Exception *, _Unwind_Stop_Fn, void *); -void _Unwind_SjLj_Resume(_Unwind_Exception *); -_Unwind_Reason_Code _Unwind_SjLj_Resume_or_Rethrow(_Unwind_Exception *); +void _Unwind_SjLj_Resume(struct _Unwind_Exception *); +_Unwind_Reason_Code _Unwind_SjLj_Resume_or_Rethrow(struct _Unwind_Exception *); void *_Unwind_FindEnclosingFunction(void *); #ifdef __APPLE__ _Unwind_Ptr _Unwind_GetDataRelBase(struct _Unwind_Context *) __attribute__((__unavailable__)); _Unwind_Ptr _Unwind_GetTextRelBase(struct _Unwind_Context *) __attribute__((__unavailable__)); /* Darwin-specific functions */ void __register_frame(const void *); void __deregister_frame(const void *); struct dwarf_eh_bases { uintptr_t tbase; uintptr_t dbase; uintptr_t func; }; void *_Unwind_Find_FDE(const void *, struct dwarf_eh_bases *); void __register_frame_info_bases(const void *, void *, void *, void *) __attribute__((__unavailable__)); void __register_frame_info(const void *, void *) __attribute__((__unavailable__)); void __register_frame_info_table_bases(const void *, void*, void *, void *) __attribute__((__unavailable__)); void __register_frame_info_table(const void *, void *) __attribute__((__unavailable__)); void __register_frame_table(const void *) __attribute__((__unavailable__)); void __deregister_frame_info(const void *) __attribute__((__unavailable__)); void __deregister_frame_info_bases(const void *)__attribute__((__unavailable__)); 
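A minimal sketch of how the interface declared above is typically consumed: ``_Unwind_Backtrace`` walks the current call stack and the trace callback reads each frame's program counter with ``_Unwind_GetIP``. The ``BacktraceState`` type, the 64-entry buffer, and the printing are illustrative only.

.. code-block:: c++

  #include <unwind.h>
  #include <cstdio>

  namespace {

  struct BacktraceState {
    void *Frames[64]; // illustrative fixed-size buffer
    int Count = 0;
  };

  // Called once per stack frame; returning _URC_NO_REASON keeps unwinding.
  _Unwind_Reason_Code onFrame(struct _Unwind_Context *Ctx, void *Arg) {
    BacktraceState *State = static_cast<BacktraceState *>(Arg);
    if (State->Count == 64)
      return _URC_END_OF_STACK;
    State->Frames[State->Count++] =
        reinterpret_cast<void *>(_Unwind_GetIP(Ctx));
    return _URC_NO_REASON;
  }

  } // namespace

  void dumpBacktrace() {
    BacktraceState State;
    _Unwind_Backtrace(onFrame, &State);
    for (int I = 0; I < State.Count; ++I)
      std::printf("#%d %p\n", I, State.Frames[I]);
  }

On ARM EHABI targets ``_Unwind_GetIP`` resolves to the inline wrapper defined above, which strips the Thumb bit; elsewhere it is the ordinary out-of-line entry point.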
#else _Unwind_Ptr _Unwind_GetDataRelBase(struct _Unwind_Context *); _Unwind_Ptr _Unwind_GetTextRelBase(struct _Unwind_Context *); #endif #ifndef HIDE_EXPORTS #pragma GCC visibility pop #endif #ifdef __cplusplus } #endif #endif #endif /* __CLANG_UNWIND_H */ diff --git a/lib/Lex/PPLexerChange.cpp b/lib/Lex/PPLexerChange.cpp index 5a589d6a17b3..36d7028da688 100644 --- a/lib/Lex/PPLexerChange.cpp +++ b/lib/Lex/PPLexerChange.cpp @@ -1,828 +1,839 @@ //===--- PPLexerChange.cpp - Handle changing lexers in the preprocessor ---===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements pieces of the Preprocessor interface that manage the // current lexer stack. // //===----------------------------------------------------------------------===// #include "clang/Lex/Preprocessor.h" #include "clang/Basic/FileManager.h" #include "clang/Basic/SourceManager.h" #include "clang/Lex/HeaderSearch.h" #include "clang/Lex/LexDiagnostic.h" #include "clang/Lex/MacroInfo.h" #include "clang/Lex/PTHManager.h" #include "llvm/ADT/StringSwitch.h" #include "llvm/Support/FileSystem.h" #include "llvm/Support/MemoryBuffer.h" #include "llvm/Support/Path.h" using namespace clang; PPCallbacks::~PPCallbacks() {} //===----------------------------------------------------------------------===// // Miscellaneous Methods. //===----------------------------------------------------------------------===// /// isInPrimaryFile - Return true if we're in the top-level file, not in a /// \#include. This looks through macro expansions and active _Pragma lexers. bool Preprocessor::isInPrimaryFile() const { if (IsFileLexer()) return IncludeMacroStack.empty(); // If there are any stacked lexers, we're in a #include. assert(IsFileLexer(IncludeMacroStack[0]) && "Top level include stack isn't our primary lexer?"); return std::none_of(IncludeMacroStack.begin() + 1, IncludeMacroStack.end(), [this](const IncludeStackInfo &ISI) -> bool { return IsFileLexer(ISI); }); } /// getCurrentLexer - Return the current file lexer being lexed from. Note /// that this ignores any potentially active macro expansions and _Pragma /// expansions going on at the time. PreprocessorLexer *Preprocessor::getCurrentFileLexer() const { if (IsFileLexer()) return CurPPLexer; // Look for a stacked lexer. for (const IncludeStackInfo &ISI : llvm::reverse(IncludeMacroStack)) { if (IsFileLexer(ISI)) return ISI.ThePPLexer; } return nullptr; } //===----------------------------------------------------------------------===// // Methods for Entering and Callbacks for leaving various contexts //===----------------------------------------------------------------------===// /// EnterSourceFile - Add a source file to the top of the include stack and /// start lexing tokens from it instead of the current buffer. bool Preprocessor::EnterSourceFile(FileID FID, const DirectoryLookup *CurDir, SourceLocation Loc) { assert(!CurTokenLexer && "Cannot #include a file inside a macro!"); ++NumEnteredSourceFiles; if (MaxIncludeStackDepth < IncludeMacroStack.size()) MaxIncludeStackDepth = IncludeMacroStack.size(); if (PTH) { if (PTHLexer *PL = PTH->CreateLexer(FID)) { EnterSourceFileWithPTH(PL, CurDir); return false; } } // Get the MemoryBuffer for this FID, if it fails, we fail. 
bool Invalid = false; const llvm::MemoryBuffer *InputFile = getSourceManager().getBuffer(FID, Loc, &Invalid); if (Invalid) { SourceLocation FileStart = SourceMgr.getLocForStartOfFile(FID); Diag(Loc, diag::err_pp_error_opening_file) << std::string(SourceMgr.getBufferName(FileStart)) << ""; return true; } if (isCodeCompletionEnabled() && SourceMgr.getFileEntryForID(FID) == CodeCompletionFile) { CodeCompletionFileLoc = SourceMgr.getLocForStartOfFile(FID); CodeCompletionLoc = CodeCompletionFileLoc.getLocWithOffset(CodeCompletionOffset); } EnterSourceFileWithLexer(new Lexer(FID, InputFile, *this), CurDir); return false; } /// EnterSourceFileWithLexer - Add a source file to the top of the include stack /// and start lexing tokens from it instead of the current buffer. void Preprocessor::EnterSourceFileWithLexer(Lexer *TheLexer, const DirectoryLookup *CurDir) { // Add the current lexer to the include stack. if (CurPPLexer || CurTokenLexer) PushIncludeMacroStack(); CurLexer.reset(TheLexer); CurPPLexer = TheLexer; CurDirLookup = CurDir; CurLexerSubmodule = nullptr; if (CurLexerKind != CLK_LexAfterModuleImport) CurLexerKind = CLK_Lexer; // Notify the client, if desired, that we are in a new source file. if (Callbacks && !CurLexer->Is_PragmaLexer) { SrcMgr::CharacteristicKind FileType = SourceMgr.getFileCharacteristic(CurLexer->getFileLoc()); Callbacks->FileChanged(CurLexer->getFileLoc(), PPCallbacks::EnterFile, FileType); } } /// EnterSourceFileWithPTH - Add a source file to the top of the include stack /// and start getting tokens from it using the PTH cache. void Preprocessor::EnterSourceFileWithPTH(PTHLexer *PL, const DirectoryLookup *CurDir) { if (CurPPLexer || CurTokenLexer) PushIncludeMacroStack(); CurDirLookup = CurDir; CurPTHLexer.reset(PL); CurPPLexer = CurPTHLexer.get(); CurLexerSubmodule = nullptr; if (CurLexerKind != CLK_LexAfterModuleImport) CurLexerKind = CLK_PTHLexer; // Notify the client, if desired, that we are in a new source file. if (Callbacks) { FileID FID = CurPPLexer->getFileID(); SourceLocation EnterLoc = SourceMgr.getLocForStartOfFile(FID); SrcMgr::CharacteristicKind FileType = SourceMgr.getFileCharacteristic(EnterLoc); Callbacks->FileChanged(EnterLoc, PPCallbacks::EnterFile, FileType); } } /// EnterMacro - Add a Macro to the top of the include stack and start lexing /// tokens from it instead of the current buffer. void Preprocessor::EnterMacro(Token &Tok, SourceLocation ILEnd, MacroInfo *Macro, MacroArgs *Args) { std::unique_ptr TokLexer; if (NumCachedTokenLexers == 0) { TokLexer = llvm::make_unique(Tok, ILEnd, Macro, Args, *this); } else { TokLexer = std::move(TokenLexerCache[--NumCachedTokenLexers]); TokLexer->Init(Tok, ILEnd, Macro, Args); } PushIncludeMacroStack(); CurDirLookup = nullptr; CurTokenLexer = std::move(TokLexer); if (CurLexerKind != CLK_LexAfterModuleImport) CurLexerKind = CLK_TokenLexer; } /// EnterTokenStream - Add a "macro" context to the top of the include stack, /// which will cause the lexer to start returning the specified tokens. /// /// If DisableMacroExpansion is true, tokens lexed from the token stream will /// not be subject to further macro expansion. Otherwise, these tokens will /// be re-macro-expanded when/if expansion is enabled. /// /// If OwnsTokens is false, this method assumes that the specified stream of /// tokens has a permanent owner somewhere, so they do not need to be copied. /// If it is true, it assumes the array of tokens is allocated with new[] and /// must be freed. 
/// void Preprocessor::EnterTokenStream(const Token *Toks, unsigned NumToks, bool DisableMacroExpansion, bool OwnsTokens) { if (CurLexerKind == CLK_CachingLexer) { if (CachedLexPos < CachedTokens.size()) { // We're entering tokens into the middle of our cached token stream. We // can't represent that, so just insert the tokens into the buffer. CachedTokens.insert(CachedTokens.begin() + CachedLexPos, Toks, Toks + NumToks); if (OwnsTokens) delete [] Toks; return; } // New tokens are at the end of the cached token sequnece; insert the // token stream underneath the caching lexer. ExitCachingLexMode(); EnterTokenStream(Toks, NumToks, DisableMacroExpansion, OwnsTokens); EnterCachingLexMode(); return; } // Create a macro expander to expand from the specified token stream. std::unique_ptr TokLexer; if (NumCachedTokenLexers == 0) { TokLexer = llvm::make_unique( Toks, NumToks, DisableMacroExpansion, OwnsTokens, *this); } else { TokLexer = std::move(TokenLexerCache[--NumCachedTokenLexers]); TokLexer->Init(Toks, NumToks, DisableMacroExpansion, OwnsTokens); } // Save our current state. PushIncludeMacroStack(); CurDirLookup = nullptr; CurTokenLexer = std::move(TokLexer); if (CurLexerKind != CLK_LexAfterModuleImport) CurLexerKind = CLK_TokenLexer; } /// \brief Compute the relative path that names the given file relative to /// the given directory. static void computeRelativePath(FileManager &FM, const DirectoryEntry *Dir, const FileEntry *File, SmallString<128> &Result) { Result.clear(); StringRef FilePath = File->getDir()->getName(); StringRef Path = FilePath; while (!Path.empty()) { if (const DirectoryEntry *CurDir = FM.getDirectory(Path)) { if (CurDir == Dir) { Result = FilePath.substr(Path.size()); llvm::sys::path::append(Result, llvm::sys::path::filename(File->getName())); return; } } Path = llvm::sys::path::parent_path(Path); } Result = File->getName(); } void Preprocessor::PropagateLineStartLeadingSpaceInfo(Token &Result) { if (CurTokenLexer) { CurTokenLexer->PropagateLineStartLeadingSpaceInfo(Result); return; } if (CurLexer) { CurLexer->PropagateLineStartLeadingSpaceInfo(Result); return; } // FIXME: Handle other kinds of lexers? It generally shouldn't matter, // but it might if they're empty? } /// \brief Determine the location to use as the end of the buffer for a lexer. /// /// If the file ends with a newline, form the EOF token on the newline itself, /// rather than "on the line following it", which doesn't exist. This makes /// diagnostics relating to the end of file include the last file that the user /// actually typed, which is goodness. 
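A small sketch of the ``EnterTokenStream`` ownership contract documented above: when ``OwnsTokens`` is true the token array must come from ``new[]``, because the preprocessor releases it with ``delete[]`` once the stream is exhausted. The helper below is hypothetical, and depending on the Clang revision this particular overload may only be reachable from code inside the ``Preprocessor``, so read it as an illustration of the contract rather than drop-in code.

.. code-block:: c++

  #include "clang/Lex/Preprocessor.h"
  #include "clang/Lex/Token.h"

  // Hypothetical helper: replay two already-lexed tokens so that the next
  // Lex() calls return them first, without macro-expanding them again.
  static void replayTokens(clang::Preprocessor &PP, const clang::Token &A,
                           const clang::Token &B) {
    clang::Token *Toks = new clang::Token[2]; // freed by the preprocessor
    Toks[0] = A;
    Toks[1] = B;
    PP.EnterTokenStream(Toks, 2, /*DisableMacroExpansion=*/true,
                        /*OwnsTokens=*/true);
  }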
const char *Preprocessor::getCurLexerEndPos() { const char *EndPos = CurLexer->BufferEnd; if (EndPos != CurLexer->BufferStart && (EndPos[-1] == '\n' || EndPos[-1] == '\r')) { --EndPos; // Handle \n\r and \r\n: if (EndPos != CurLexer->BufferStart && (EndPos[-1] == '\n' || EndPos[-1] == '\r') && EndPos[-1] != EndPos[0]) --EndPos; } return EndPos; } static void collectAllSubModulesWithUmbrellaHeader( const Module &Mod, SmallVectorImpl &SubMods) { if (Mod.getUmbrellaHeader()) SubMods.push_back(&Mod); for (auto *M : Mod.submodules()) collectAllSubModulesWithUmbrellaHeader(*M, SubMods); } void Preprocessor::diagnoseMissingHeaderInUmbrellaDir(const Module &Mod) { assert(Mod.getUmbrellaHeader() && "Module must use umbrella header"); SourceLocation StartLoc = SourceMgr.getLocForStartOfFile(SourceMgr.getMainFileID()); if (getDiagnostics().isIgnored(diag::warn_uncovered_module_header, StartLoc)) return; ModuleMap &ModMap = getHeaderSearchInfo().getModuleMap(); const DirectoryEntry *Dir = Mod.getUmbrellaDir().Entry; vfs::FileSystem &FS = *FileMgr.getVirtualFileSystem(); std::error_code EC; for (vfs::recursive_directory_iterator Entry(FS, Dir->getName(), EC), End; Entry != End && !EC; Entry.increment(EC)) { using llvm::StringSwitch; // Check whether this entry has an extension typically associated with // headers. if (!StringSwitch(llvm::sys::path::extension(Entry->getName())) .Cases(".h", ".H", ".hh", ".hpp", true) .Default(false)) continue; if (const FileEntry *Header = getFileManager().getFile(Entry->getName())) if (!getSourceManager().hasFileInfo(Header)) { if (!ModMap.isHeaderInUnavailableModule(Header)) { // Find the relative path that would access this header. SmallString<128> RelativePath; computeRelativePath(FileMgr, Dir, Header, RelativePath); Diag(StartLoc, diag::warn_uncovered_module_header) << Mod.getFullModuleName() << RelativePath; } } } } /// HandleEndOfFile - This callback is invoked when the lexer hits the end of /// the current file. This either returns the EOF token or pops a level off /// the include stack and keeps going. bool Preprocessor::HandleEndOfFile(Token &Result, bool isEndOfMacro) { assert(!CurTokenLexer && "Ending a file when currently in a macro!"); // If we have an unclosed module region from a pragma at the end of a // module, complain and close it now. // FIXME: This is not correct if we are building a module from PTH. const bool LeavingSubmodule = CurLexer && CurLexerSubmodule; if ((LeavingSubmodule || IncludeMacroStack.empty()) && !BuildingSubmoduleStack.empty() && BuildingSubmoduleStack.back().IsPragma) { Diag(BuildingSubmoduleStack.back().ImportLoc, diag::err_pp_module_begin_without_module_end); Module *M = LeaveSubmodule(/*ForPragma*/true); Result.startToken(); const char *EndPos = getCurLexerEndPos(); CurLexer->BufferPtr = EndPos; CurLexer->FormTokenWithChars(Result, EndPos, tok::annot_module_end); Result.setAnnotationEndLoc(Result.getLocation()); Result.setAnnotationValue(M); return true; } // See if this file had a controlling macro. if (CurPPLexer) { // Not ending a macro, ignore it. if (const IdentifierInfo *ControllingMacro = CurPPLexer->MIOpt.GetControllingMacroAtEndOfFile()) { // Okay, this has a controlling macro, remember in HeaderFileInfo. 
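The header-guard check in the code that follows compares the controlling macro from the file's ``#ifndef`` with the macro that was actually defined, and only diagnoses a mismatch when the two names are within half the longer name's length in edit distance. A concrete instance, with illustrative names:

.. code-block:: c++

  // foo.h
  #ifndef MYLIB_FOO_H   // controlling macro: MYLIB_FOO_H
  #define MYLIB_FOO_H_  // defined macro: MYLIB_FOO_H_ (note the typo)
  void foo();
  #endif

Here the edit distance is 1 and half the longer name's length is 6, so ``-Wheader-guard`` reports that ``MYLIB_FOO_H`` is followed by a ``#define`` of a different macro and attaches a fix-it renaming ``MYLIB_FOO_H_`` to ``MYLIB_FOO_H``. For an unrelated ``#define`` the distance exceeds the bound and no warning is issued.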
if (const FileEntry *FE = CurPPLexer->getFileEntry()) { HeaderInfo.SetFileControllingMacro(FE, ControllingMacro); if (MacroInfo *MI = getMacroInfo(const_cast(ControllingMacro))) MI->setUsedForHeaderGuard(true); if (const IdentifierInfo *DefinedMacro = CurPPLexer->MIOpt.GetDefinedMacro()) { if (!isMacroDefined(ControllingMacro) && DefinedMacro != ControllingMacro && HeaderInfo.FirstTimeLexingFile(FE)) { // If the edit distance between the two macros is more than 50%, // DefinedMacro may not be header guard, or can be header guard of // another header file. Therefore, it maybe defining something // completely different. This can be observed in the wild when // handling feature macros or header guards in different files. const StringRef ControllingMacroName = ControllingMacro->getName(); const StringRef DefinedMacroName = DefinedMacro->getName(); const size_t MaxHalfLength = std::max(ControllingMacroName.size(), DefinedMacroName.size()) / 2; const unsigned ED = ControllingMacroName.edit_distance( DefinedMacroName, true, MaxHalfLength); if (ED <= MaxHalfLength) { // Emit a warning for a bad header guard. Diag(CurPPLexer->MIOpt.GetMacroLocation(), diag::warn_header_guard) << CurPPLexer->MIOpt.GetMacroLocation() << ControllingMacro; Diag(CurPPLexer->MIOpt.GetDefinedLocation(), diag::note_header_guard) << CurPPLexer->MIOpt.GetDefinedLocation() << DefinedMacro << ControllingMacro << FixItHint::CreateReplacement( CurPPLexer->MIOpt.GetDefinedLocation(), ControllingMacro->getName()); } } } } } } // Complain about reaching a true EOF within arc_cf_code_audited. // We don't want to complain about reaching the end of a macro // instantiation or a _Pragma. if (PragmaARCCFCodeAuditedLoc.isValid() && !isEndOfMacro && !(CurLexer && CurLexer->Is_PragmaLexer)) { Diag(PragmaARCCFCodeAuditedLoc, diag::err_pp_eof_in_arc_cf_code_audited); // Recover by leaving immediately. PragmaARCCFCodeAuditedLoc = SourceLocation(); } // Complain about reaching a true EOF within assume_nonnull. // We don't want to complain about reaching the end of a macro // instantiation or a _Pragma. if (PragmaAssumeNonNullLoc.isValid() && !isEndOfMacro && !(CurLexer && CurLexer->Is_PragmaLexer)) { Diag(PragmaAssumeNonNullLoc, diag::err_pp_eof_in_assume_nonnull); // Recover by leaving immediately. PragmaAssumeNonNullLoc = SourceLocation(); } // If this is a #include'd file, pop it off the include stack and continue // lexing the #includer file. if (!IncludeMacroStack.empty()) { // If we lexed the code-completion file, act as if we reached EOF. if (isCodeCompletionEnabled() && CurPPLexer && SourceMgr.getLocForStartOfFile(CurPPLexer->getFileID()) == CodeCompletionFileLoc) { if (CurLexer) { Result.startToken(); CurLexer->FormTokenWithChars(Result, CurLexer->BufferEnd, tok::eof); CurLexer.reset(); } else { assert(CurPTHLexer && "Got EOF but no current lexer set!"); CurPTHLexer->getEOF(Result); CurPTHLexer.reset(); } CurPPLexer = nullptr; return true; } if (!isEndOfMacro && CurPPLexer && SourceMgr.getIncludeLoc(CurPPLexer->getFileID()).isValid()) { // Notify SourceManager to record the number of FileIDs that were created // during lexing of the #include'd file. 
unsigned NumFIDs = SourceMgr.local_sloc_entry_size() - CurPPLexer->getInitialNumSLocEntries() + 1/*#include'd file*/; SourceMgr.setNumCreatedFIDsForFileID(CurPPLexer->getFileID(), NumFIDs); } + bool ExitedFromPredefinesFile = false; FileID ExitedFID; - if (Callbacks && !isEndOfMacro && CurPPLexer) + if (!isEndOfMacro && CurPPLexer) { ExitedFID = CurPPLexer->getFileID(); + assert(PredefinesFileID.isValid() && + "HandleEndOfFile is called before PredefinesFileId is set"); + ExitedFromPredefinesFile = (PredefinesFileID == ExitedFID); + } + if (LeavingSubmodule) { // We're done with this submodule. Module *M = LeaveSubmodule(/*ForPragma*/false); // Notify the parser that we've left the module. const char *EndPos = getCurLexerEndPos(); Result.startToken(); CurLexer->BufferPtr = EndPos; CurLexer->FormTokenWithChars(Result, EndPos, tok::annot_module_end); Result.setAnnotationEndLoc(Result.getLocation()); Result.setAnnotationValue(M); } // We're done with the #included file. RemoveTopOfLexerStack(); // Propagate info about start-of-line/leading white-space/etc. PropagateLineStartLeadingSpaceInfo(Result); // Notify the client, if desired, that we are in a new source file. if (Callbacks && !isEndOfMacro && CurPPLexer) { SrcMgr::CharacteristicKind FileType = SourceMgr.getFileCharacteristic(CurPPLexer->getSourceLocation()); Callbacks->FileChanged(CurPPLexer->getSourceLocation(), PPCallbacks::ExitFile, FileType, ExitedFID); } + // Restore conditional stack from the preamble right after exiting from the + // predefines file. + if (ExitedFromPredefinesFile) + replayPreambleConditionalStack(); + // Client should lex another token unless we generated an EOM. return LeavingSubmodule; } // If this is the end of the main file, form an EOF token. if (CurLexer) { const char *EndPos = getCurLexerEndPos(); Result.startToken(); CurLexer->BufferPtr = EndPos; CurLexer->FormTokenWithChars(Result, EndPos, tok::eof); if (isCodeCompletionEnabled()) { // Inserting the code-completion point increases the source buffer by 1, // but the main FileID was created before inserting the point. // Compensate by reducing the EOF location by 1, otherwise the location // will point to the next FileID. // FIXME: This is hacky, the code-completion point should probably be // inserted before the main FileID is created. if (CurLexer->getFileLoc() == CodeCompletionFileLoc) Result.setLocation(Result.getLocation().getLocWithOffset(-1)); } if (!isIncrementalProcessingEnabled()) // We're done with lexing. CurLexer.reset(); } else { assert(CurPTHLexer && "Got EOF but no current lexer set!"); CurPTHLexer->getEOF(Result); CurPTHLexer.reset(); } if (!isIncrementalProcessingEnabled()) CurPPLexer = nullptr; if (TUKind == TU_Complete) { // This is the end of the top-level file. 'WarnUnusedMacroLocs' has // collected all macro locations that we need to warn because they are not // used. for (WarnUnusedMacroLocsTy::iterator I=WarnUnusedMacroLocs.begin(), E=WarnUnusedMacroLocs.end(); I!=E; ++I) Diag(*I, diag::pp_macro_not_used); } // If we are building a module that has an umbrella header, make sure that // each of the headers within the directory, including all submodules, is // covered by the umbrella header was actually included by the umbrella // header. 
if (Module *Mod = getCurrentModule()) { llvm::SmallVector AllMods; collectAllSubModulesWithUmbrellaHeader(*Mod, AllMods); for (auto *M : AllMods) diagnoseMissingHeaderInUmbrellaDir(*M); } return true; } /// HandleEndOfTokenLexer - This callback is invoked when the current TokenLexer /// hits the end of its token stream. bool Preprocessor::HandleEndOfTokenLexer(Token &Result) { assert(CurTokenLexer && !CurPPLexer && "Ending a macro when currently in a #include file!"); if (!MacroExpandingLexersStack.empty() && MacroExpandingLexersStack.back().first == CurTokenLexer.get()) removeCachedMacroExpandedTokensOfLastLexer(); // Delete or cache the now-dead macro expander. if (NumCachedTokenLexers == TokenLexerCacheSize) CurTokenLexer.reset(); else TokenLexerCache[NumCachedTokenLexers++] = std::move(CurTokenLexer); // Handle this like a #include file being popped off the stack. return HandleEndOfFile(Result, true); } /// RemoveTopOfLexerStack - Pop the current lexer/macro exp off the top of the /// lexer stack. This should only be used in situations where the current /// state of the top-of-stack lexer is unknown. void Preprocessor::RemoveTopOfLexerStack() { assert(!IncludeMacroStack.empty() && "Ran out of stack entries to load"); if (CurTokenLexer) { // Delete or cache the now-dead macro expander. if (NumCachedTokenLexers == TokenLexerCacheSize) CurTokenLexer.reset(); else TokenLexerCache[NumCachedTokenLexers++] = std::move(CurTokenLexer); } PopIncludeMacroStack(); } /// HandleMicrosoftCommentPaste - When the macro expander pastes together a /// comment (/##/) in microsoft mode, this method handles updating the current /// state, returning the token on the next source line. void Preprocessor::HandleMicrosoftCommentPaste(Token &Tok) { assert(CurTokenLexer && !CurPPLexer && "Pasted comment can only be formed from macro"); // We handle this by scanning for the closest real lexer, switching it to // raw mode and preprocessor mode. This will cause it to return \n as an // explicit EOD token. PreprocessorLexer *FoundLexer = nullptr; bool LexerWasInPPMode = false; for (const IncludeStackInfo &ISI : llvm::reverse(IncludeMacroStack)) { if (ISI.ThePPLexer == nullptr) continue; // Scan for a real lexer. // Once we find a real lexer, mark it as raw mode (disabling macro // expansions) and preprocessor mode (return EOD). We know that the lexer // was *not* in raw mode before, because the macro that the comment came // from was expanded. However, it could have already been in preprocessor // mode (#if COMMENT) in which case we have to return it to that mode and // return EOD. FoundLexer = ISI.ThePPLexer; FoundLexer->LexingRawMode = true; LexerWasInPPMode = FoundLexer->ParsingPreprocessorDirective; FoundLexer->ParsingPreprocessorDirective = true; break; } // Okay, we either found and switched over the lexer, or we didn't find a // lexer. In either case, finish off the macro the comment came from, getting // the next token. if (!HandleEndOfTokenLexer(Tok)) Lex(Tok); // Discarding comments as long as we don't have EOF or EOD. This 'comments // out' the rest of the line, including any tokens that came from other macros // that were active, as in: // #define submacro a COMMENT b // submacro c // which should lex to 'a' only: 'b' and 'c' should be removed. while (Tok.isNot(tok::eod) && Tok.isNot(tok::eof)) Lex(Tok); // If we got an eod token, then we successfully found the end of the line. 
if (Tok.is(tok::eod)) { assert(FoundLexer && "Can't get end of line without an active lexer"); // Restore the lexer back to normal mode instead of raw mode. FoundLexer->LexingRawMode = false; // If the lexer was already in preprocessor mode, just return the EOD token // to finish the preprocessor line. if (LexerWasInPPMode) return; // Otherwise, switch out of PP mode and return the next lexed token. FoundLexer->ParsingPreprocessorDirective = false; return Lex(Tok); } // If we got an EOF token, then we reached the end of the token stream but // didn't find an explicit \n. This can only happen if there was no lexer // active (an active lexer would return EOD at EOF if there was no \n in // preprocessor directive mode), so just return EOF as our token. assert(!FoundLexer && "Lexer should return EOD before EOF in PP mode"); } void Preprocessor::EnterSubmodule(Module *M, SourceLocation ImportLoc, bool ForPragma) { if (!getLangOpts().ModulesLocalVisibility) { // Just track that we entered this submodule. BuildingSubmoduleStack.push_back( BuildingSubmoduleInfo(M, ImportLoc, ForPragma, CurSubmoduleState, PendingModuleMacroNames.size())); return; } // Resolve as much of the module definition as we can now, before we enter // one of its headers. // FIXME: Can we enable Complain here? // FIXME: Can we do this when local visibility is disabled? ModuleMap &ModMap = getHeaderSearchInfo().getModuleMap(); ModMap.resolveExports(M, /*Complain=*/false); ModMap.resolveUses(M, /*Complain=*/false); ModMap.resolveConflicts(M, /*Complain=*/false); // If this is the first time we've entered this module, set up its state. auto R = Submodules.insert(std::make_pair(M, SubmoduleState())); auto &State = R.first->second; bool FirstTime = R.second; if (FirstTime) { // Determine the set of starting macros for this submodule; take these // from the "null" module (the predefines buffer). // // FIXME: If we have local visibility but not modules enabled, the // NullSubmoduleState is polluted by #defines in the top-level source // file. auto &StartingMacros = NullSubmoduleState.Macros; // Restore to the starting state. // FIXME: Do this lazily, when each macro name is first referenced. for (auto &Macro : StartingMacros) { // Skip uninteresting macros. if (!Macro.second.getLatest() && Macro.second.getOverriddenMacros().empty()) continue; MacroState MS(Macro.second.getLatest()); MS.setOverriddenMacros(*this, Macro.second.getOverriddenMacros()); State.Macros.insert(std::make_pair(Macro.first, std::move(MS))); } } // Track that we entered this module. BuildingSubmoduleStack.push_back( BuildingSubmoduleInfo(M, ImportLoc, ForPragma, CurSubmoduleState, PendingModuleMacroNames.size())); // Switch to this submodule as the current submodule. CurSubmoduleState = &State; // This module is visible to itself. if (FirstTime) makeModuleVisible(M, ImportLoc); } bool Preprocessor::needModuleMacros() const { // If we're not within a submodule, we never need to create ModuleMacros. if (BuildingSubmoduleStack.empty()) return false; // If we are tracking module macro visibility even for textually-included // headers, we need ModuleMacros. if (getLangOpts().ModulesLocalVisibility) return true; // Otherwise, we only need module macros if we're actually compiling a module // interface. 
return getLangOpts().isCompilingModule(); } Module *Preprocessor::LeaveSubmodule(bool ForPragma) { if (BuildingSubmoduleStack.empty() || BuildingSubmoduleStack.back().IsPragma != ForPragma) { assert(ForPragma && "non-pragma module enter/leave mismatch"); return nullptr; } auto &Info = BuildingSubmoduleStack.back(); Module *LeavingMod = Info.M; SourceLocation ImportLoc = Info.ImportLoc; if (!needModuleMacros() || (!getLangOpts().ModulesLocalVisibility && LeavingMod->getTopLevelModuleName() != getLangOpts().CurrentModule)) { // If we don't need module macros, or this is not a module for which we // are tracking macro visibility, don't build any, and preserve the list // of pending names for the surrounding submodule. BuildingSubmoduleStack.pop_back(); makeModuleVisible(LeavingMod, ImportLoc); return LeavingMod; } // Create ModuleMacros for any macros defined in this submodule. llvm::SmallPtrSet VisitedMacros; for (unsigned I = Info.OuterPendingModuleMacroNames; I != PendingModuleMacroNames.size(); ++I) { auto *II = const_cast(PendingModuleMacroNames[I]); if (!VisitedMacros.insert(II).second) continue; auto MacroIt = CurSubmoduleState->Macros.find(II); if (MacroIt == CurSubmoduleState->Macros.end()) continue; auto &Macro = MacroIt->second; // Find the starting point for the MacroDirective chain in this submodule. MacroDirective *OldMD = nullptr; auto *OldState = Info.OuterSubmoduleState; if (getLangOpts().ModulesLocalVisibility) OldState = &NullSubmoduleState; if (OldState && OldState != CurSubmoduleState) { // FIXME: It'd be better to start at the state from when we most recently // entered this submodule, but it doesn't really matter. auto &OldMacros = OldState->Macros; auto OldMacroIt = OldMacros.find(II); if (OldMacroIt == OldMacros.end()) OldMD = nullptr; else OldMD = OldMacroIt->second.getLatest(); } // This module may have exported a new macro. If so, create a ModuleMacro // representing that fact. bool ExplicitlyPublic = false; for (auto *MD = Macro.getLatest(); MD != OldMD; MD = MD->getPrevious()) { assert(MD && "broken macro directive chain"); if (auto *VisMD = dyn_cast(MD)) { // The latest visibility directive for a name in a submodule affects // all the directives that come before it. if (VisMD->isPublic()) ExplicitlyPublic = true; else if (!ExplicitlyPublic) // Private with no following public directive: not exported. break; } else { MacroInfo *Def = nullptr; if (DefMacroDirective *DefMD = dyn_cast(MD)) Def = DefMD->getInfo(); // FIXME: Issue a warning if multiple headers for the same submodule // define a macro, rather than silently ignoring all but the first. bool IsNew; // Don't bother creating a module macro if it would represent a #undef // that doesn't override anything. if (Def || !Macro.getOverriddenMacros().empty()) addModuleMacro(LeavingMod, II, Def, Macro.getOverriddenMacros(), IsNew); if (!getLangOpts().ModulesLocalVisibility) { // This macro is exposed to the rest of this compilation as a // ModuleMacro; we don't need to track its MacroDirective any more. Macro.setLatest(nullptr); Macro.setOverriddenMacros(*this, {}); } break; } } } PendingModuleMacroNames.resize(Info.OuterPendingModuleMacroNames); // FIXME: Before we leave this submodule, we should parse all the other // headers within it. Otherwise, we're left with an inconsistent state // where we've made the module visible but don't yet have its complete // contents. // Put back the outer module's state, if we're tracking it. 
if (getLangOpts().ModulesLocalVisibility) CurSubmoduleState = Info.OuterSubmoduleState; BuildingSubmoduleStack.pop_back(); // A nested #include makes the included submodule visible. makeModuleVisible(LeavingMod, ImportLoc); return LeavingMod; } diff --git a/lib/Lex/Preprocessor.cpp b/lib/Lex/Preprocessor.cpp index d1dc8e1c0010..7979be773aa1 100644 --- a/lib/Lex/Preprocessor.cpp +++ b/lib/Lex/Preprocessor.cpp @@ -1,957 +1,959 @@ //===--- Preprocess.cpp - C Language Family Preprocessor Implementation ---===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements the Preprocessor interface. // //===----------------------------------------------------------------------===// // // Options to support: // -H - Print the name of each header file used. // -d[DNI] - Dump various things. // -fworking-directory - #line's with preprocessor's working dir. // -fpreprocessed // -dependency-file,-M,-MM,-MF,-MG,-MP,-MT,-MQ,-MD,-MMD // -W* // -w // // Messages to emit: // "Multiple include guards may be useful for:\n" // //===----------------------------------------------------------------------===// #include "clang/Lex/Preprocessor.h" #include "clang/Basic/FileManager.h" #include "clang/Basic/FileSystemStatCache.h" #include "clang/Basic/SourceManager.h" #include "clang/Basic/TargetInfo.h" #include "clang/Lex/CodeCompletionHandler.h" #include "clang/Lex/ExternalPreprocessorSource.h" #include "clang/Lex/HeaderSearch.h" #include "clang/Lex/LexDiagnostic.h" #include "clang/Lex/LiteralSupport.h" #include "clang/Lex/MacroArgs.h" #include "clang/Lex/MacroInfo.h" #include "clang/Lex/ModuleLoader.h" #include "clang/Lex/PTHManager.h" #include "clang/Lex/Pragma.h" #include "clang/Lex/PreprocessingRecord.h" #include "clang/Lex/PreprocessorOptions.h" #include "clang/Lex/ScratchBuffer.h" #include "llvm/ADT/APInt.h" #include "llvm/ADT/DenseMap.h" #include "llvm/ADT/SmallString.h" #include "llvm/ADT/SmallVector.h" #include "llvm/ADT/STLExtras.h" #include "llvm/ADT/StringRef.h" #include "llvm/ADT/StringSwitch.h" #include "llvm/Support/Capacity.h" #include "llvm/Support/ErrorHandling.h" #include "llvm/Support/MemoryBuffer.h" #include "llvm/Support/raw_ostream.h" #include #include #include #include #include #include using namespace clang; LLVM_INSTANTIATE_REGISTRY(PragmaHandlerRegistry) //===----------------------------------------------------------------------===// ExternalPreprocessorSource::~ExternalPreprocessorSource() { } Preprocessor::Preprocessor(std::shared_ptr PPOpts, DiagnosticsEngine &diags, LangOptions &opts, SourceManager &SM, MemoryBufferCache &PCMCache, HeaderSearch &Headers, ModuleLoader &TheModuleLoader, IdentifierInfoLookup *IILookup, bool OwnsHeaders, TranslationUnitKind TUKind) : PPOpts(std::move(PPOpts)), Diags(&diags), LangOpts(opts), Target(nullptr), AuxTarget(nullptr), FileMgr(Headers.getFileMgr()), SourceMgr(SM), PCMCache(PCMCache), ScratchBuf(new ScratchBuffer(SourceMgr)), HeaderInfo(Headers), TheModuleLoader(TheModuleLoader), ExternalSource(nullptr), Identifiers(opts, IILookup), PragmaHandlers(new PragmaNamespace(StringRef())), IncrementalProcessing(false), TUKind(TUKind), CodeComplete(nullptr), CodeCompletionFile(nullptr), CodeCompletionOffset(0), LastTokenWasAt(false), ModuleImportExpectsIdentifier(false), CodeCompletionReached(false), CodeCompletionII(nullptr), MainFileDir(nullptr), 
SkipMainFilePreamble(0, true), CurPPLexer(nullptr), CurDirLookup(nullptr), CurLexerKind(CLK_Lexer), CurLexerSubmodule(nullptr), Callbacks(nullptr), CurSubmoduleState(&NullSubmoduleState), MacroArgCache(nullptr), Record(nullptr), MIChainHead(nullptr) { OwnsHeaderSearch = OwnsHeaders; CounterValue = 0; // __COUNTER__ starts at 0. // Clear stats. NumDirectives = NumDefined = NumUndefined = NumPragma = 0; NumIf = NumElse = NumEndif = 0; NumEnteredSourceFiles = 0; NumMacroExpanded = NumFnMacroExpanded = NumBuiltinMacroExpanded = 0; NumFastMacroExpanded = NumTokenPaste = NumFastTokenPaste = 0; MaxIncludeStackDepth = 0; NumSkipped = 0; // Default to discarding comments. KeepComments = false; KeepMacroComments = false; SuppressIncludeNotFoundError = false; // Macro expansion is enabled. DisableMacroExpansion = false; MacroExpansionInDirectivesOverride = false; InMacroArgs = false; InMacroArgPreExpansion = false; NumCachedTokenLexers = 0; PragmasEnabled = true; ParsingIfOrElifDirective = false; PreprocessedOutput = false; CachedLexPos = 0; // We haven't read anything from the external source. ReadMacrosFromExternalSource = false; // "Poison" __VA_ARGS__, which can only appear in the expansion of a macro. // This gets unpoisoned where it is allowed. (Ident__VA_ARGS__ = getIdentifierInfo("__VA_ARGS__"))->setIsPoisoned(); SetPoisonReason(Ident__VA_ARGS__,diag::ext_pp_bad_vaargs_use); // Initialize the pragma handlers. RegisterBuiltinPragmas(); // Initialize builtin macros like __LINE__ and friends. RegisterBuiltinMacros(); if(LangOpts.Borland) { Ident__exception_info = getIdentifierInfo("_exception_info"); Ident___exception_info = getIdentifierInfo("__exception_info"); Ident_GetExceptionInfo = getIdentifierInfo("GetExceptionInformation"); Ident__exception_code = getIdentifierInfo("_exception_code"); Ident___exception_code = getIdentifierInfo("__exception_code"); Ident_GetExceptionCode = getIdentifierInfo("GetExceptionCode"); Ident__abnormal_termination = getIdentifierInfo("_abnormal_termination"); Ident___abnormal_termination = getIdentifierInfo("__abnormal_termination"); Ident_AbnormalTermination = getIdentifierInfo("AbnormalTermination"); } else { Ident__exception_info = Ident__exception_code = nullptr; Ident__abnormal_termination = Ident___exception_info = nullptr; Ident___exception_code = Ident___abnormal_termination = nullptr; Ident_GetExceptionInfo = Ident_GetExceptionCode = nullptr; Ident_AbnormalTermination = nullptr; } if (this->PPOpts->GeneratePreamble) PreambleConditionalStack.startRecording(); } Preprocessor::~Preprocessor() { assert(BacktrackPositions.empty() && "EnableBacktrack/Backtrack imbalance!"); IncludeMacroStack.clear(); // Destroy any macro definitions. while (MacroInfoChain *I = MIChainHead) { MIChainHead = I->Next; I->~MacroInfoChain(); } // Free any cached macro expanders. // This populates MacroArgCache, so all TokenLexers need to be destroyed // before the code below that frees up the MacroArgCache list. std::fill(TokenLexerCache, TokenLexerCache + NumCachedTokenLexers, nullptr); CurTokenLexer.reset(); // Free any cached MacroArgs. for (MacroArgs *ArgList = MacroArgCache; ArgList;) ArgList = ArgList->deallocate(); // Delete the header search info, if we own it. 
if (OwnsHeaderSearch) delete &HeaderInfo; } void Preprocessor::Initialize(const TargetInfo &Target, const TargetInfo *AuxTarget) { assert((!this->Target || this->Target == &Target) && "Invalid override of target information"); this->Target = &Target; assert((!this->AuxTarget || this->AuxTarget == AuxTarget) && "Invalid override of aux target information."); this->AuxTarget = AuxTarget; // Initialize information about built-ins. BuiltinInfo.InitializeTarget(Target, AuxTarget); HeaderInfo.setTarget(Target); } void Preprocessor::InitializeForModelFile() { NumEnteredSourceFiles = 0; // Reset pragmas PragmaHandlersBackup = std::move(PragmaHandlers); PragmaHandlers = llvm::make_unique(StringRef()); RegisterBuiltinPragmas(); // Reset PredefinesFileID PredefinesFileID = FileID(); } void Preprocessor::FinalizeForModelFile() { NumEnteredSourceFiles = 1; PragmaHandlers = std::move(PragmaHandlersBackup); } void Preprocessor::setPTHManager(PTHManager* pm) { PTH.reset(pm); FileMgr.addStatCache(PTH->createStatCache()); } void Preprocessor::DumpToken(const Token &Tok, bool DumpFlags) const { llvm::errs() << tok::getTokenName(Tok.getKind()) << " '" << getSpelling(Tok) << "'"; if (!DumpFlags) return; llvm::errs() << "\t"; if (Tok.isAtStartOfLine()) llvm::errs() << " [StartOfLine]"; if (Tok.hasLeadingSpace()) llvm::errs() << " [LeadingSpace]"; if (Tok.isExpandDisabled()) llvm::errs() << " [ExpandDisabled]"; if (Tok.needsCleaning()) { const char *Start = SourceMgr.getCharacterData(Tok.getLocation()); llvm::errs() << " [UnClean='" << StringRef(Start, Tok.getLength()) << "']"; } llvm::errs() << "\tLoc=<"; DumpLocation(Tok.getLocation()); llvm::errs() << ">"; } void Preprocessor::DumpLocation(SourceLocation Loc) const { Loc.dump(SourceMgr); } void Preprocessor::DumpMacro(const MacroInfo &MI) const { llvm::errs() << "MACRO: "; for (unsigned i = 0, e = MI.getNumTokens(); i != e; ++i) { DumpToken(MI.getReplacementToken(i)); llvm::errs() << " "; } llvm::errs() << "\n"; } void Preprocessor::PrintStats() { llvm::errs() << "\n*** Preprocessor Stats:\n"; llvm::errs() << NumDirectives << " directives found:\n"; llvm::errs() << " " << NumDefined << " #define.\n"; llvm::errs() << " " << NumUndefined << " #undef.\n"; llvm::errs() << " #include/#include_next/#import:\n"; llvm::errs() << " " << NumEnteredSourceFiles << " source files entered.\n"; llvm::errs() << " " << MaxIncludeStackDepth << " max include stack depth\n"; llvm::errs() << " " << NumIf << " #if/#ifndef/#ifdef.\n"; llvm::errs() << " " << NumElse << " #else/#elif.\n"; llvm::errs() << " " << NumEndif << " #endif.\n"; llvm::errs() << " " << NumPragma << " #pragma.\n"; llvm::errs() << NumSkipped << " #if/#ifndef#ifdef regions skipped\n"; llvm::errs() << NumMacroExpanded << "/" << NumFnMacroExpanded << "/" << NumBuiltinMacroExpanded << " obj/fn/builtin macros expanded, " << NumFastMacroExpanded << " on the fast path.\n"; llvm::errs() << (NumFastTokenPaste+NumTokenPaste) << " token paste (##) operations performed, " << NumFastTokenPaste << " on the fast path.\n"; llvm::errs() << "\nPreprocessor Memory: " << getTotalMemory() << "B total"; llvm::errs() << "\n BumpPtr: " << BP.getTotalMemory(); llvm::errs() << "\n Macro Expanded Tokens: " << llvm::capacity_in_bytes(MacroExpandedTokens); llvm::errs() << "\n Predefines Buffer: " << Predefines.capacity(); // FIXME: List information for all submodules. 
llvm::errs() << "\n Macros: " << llvm::capacity_in_bytes(CurSubmoduleState->Macros); llvm::errs() << "\n #pragma push_macro Info: " << llvm::capacity_in_bytes(PragmaPushMacroInfo); llvm::errs() << "\n Poison Reasons: " << llvm::capacity_in_bytes(PoisonReasons); llvm::errs() << "\n Comment Handlers: " << llvm::capacity_in_bytes(CommentHandlers) << "\n"; } Preprocessor::macro_iterator Preprocessor::macro_begin(bool IncludeExternalMacros) const { if (IncludeExternalMacros && ExternalSource && !ReadMacrosFromExternalSource) { ReadMacrosFromExternalSource = true; ExternalSource->ReadDefinedMacros(); } // Make sure we cover all macros in visible modules. for (const ModuleMacro &Macro : ModuleMacros) CurSubmoduleState->Macros.insert(std::make_pair(Macro.II, MacroState())); return CurSubmoduleState->Macros.begin(); } size_t Preprocessor::getTotalMemory() const { return BP.getTotalMemory() + llvm::capacity_in_bytes(MacroExpandedTokens) + Predefines.capacity() /* Predefines buffer. */ // FIXME: Include sizes from all submodules, and include MacroInfo sizes, // and ModuleMacros. + llvm::capacity_in_bytes(CurSubmoduleState->Macros) + llvm::capacity_in_bytes(PragmaPushMacroInfo) + llvm::capacity_in_bytes(PoisonReasons) + llvm::capacity_in_bytes(CommentHandlers); } Preprocessor::macro_iterator Preprocessor::macro_end(bool IncludeExternalMacros) const { if (IncludeExternalMacros && ExternalSource && !ReadMacrosFromExternalSource) { ReadMacrosFromExternalSource = true; ExternalSource->ReadDefinedMacros(); } return CurSubmoduleState->Macros.end(); } /// \brief Compares macro tokens with a specified token value sequence. static bool MacroDefinitionEquals(const MacroInfo *MI, ArrayRef Tokens) { return Tokens.size() == MI->getNumTokens() && std::equal(Tokens.begin(), Tokens.end(), MI->tokens_begin()); } StringRef Preprocessor::getLastMacroWithSpelling( SourceLocation Loc, ArrayRef Tokens) const { SourceLocation BestLocation; StringRef BestSpelling; for (Preprocessor::macro_iterator I = macro_begin(), E = macro_end(); I != E; ++I) { const MacroDirective::DefInfo Def = I->second.findDirectiveAtLoc(Loc, SourceMgr); if (!Def || !Def.getMacroInfo()) continue; if (!Def.getMacroInfo()->isObjectLike()) continue; if (!MacroDefinitionEquals(Def.getMacroInfo(), Tokens)) continue; SourceLocation Location = Def.getLocation(); // Choose the macro defined latest. if (BestLocation.isInvalid() || (Location.isValid() && SourceMgr.isBeforeInTranslationUnit(BestLocation, Location))) { BestLocation = Location; BestSpelling = I->first->getName(); } } return BestSpelling; } void Preprocessor::recomputeCurLexerKind() { if (CurLexer) CurLexerKind = CLK_Lexer; else if (CurPTHLexer) CurLexerKind = CLK_PTHLexer; else if (CurTokenLexer) CurLexerKind = CLK_TokenLexer; else CurLexerKind = CLK_CachingLexer; } bool Preprocessor::SetCodeCompletionPoint(const FileEntry *File, unsigned CompleteLine, unsigned CompleteColumn) { assert(File); assert(CompleteLine && CompleteColumn && "Starts from 1:1"); assert(!CodeCompletionFile && "Already set"); using llvm::MemoryBuffer; // Load the actual file's contents. bool Invalid = false; const MemoryBuffer *Buffer = SourceMgr.getMemoryBufferForFile(File, &Invalid); if (Invalid) return true; // Find the byte position of the truncation point. const char *Position = Buffer->getBufferStart(); for (unsigned Line = 1; Line < CompleteLine; ++Line) { for (; *Position; ++Position) { if (*Position != '\r' && *Position != '\n') continue; // Eat \r\n or \n\r as a single line. 
if ((Position[1] == '\r' || Position[1] == '\n') && Position[0] != Position[1]) ++Position; ++Position; break; } } Position += CompleteColumn - 1; // If pointing inside the preamble, adjust the position at the beginning of // the file after the preamble. if (SkipMainFilePreamble.first && SourceMgr.getFileEntryForID(SourceMgr.getMainFileID()) == File) { if (Position - Buffer->getBufferStart() < SkipMainFilePreamble.first) Position = Buffer->getBufferStart() + SkipMainFilePreamble.first; } if (Position > Buffer->getBufferEnd()) Position = Buffer->getBufferEnd(); CodeCompletionFile = File; CodeCompletionOffset = Position - Buffer->getBufferStart(); std::unique_ptr NewBuffer = MemoryBuffer::getNewUninitMemBuffer(Buffer->getBufferSize() + 1, Buffer->getBufferIdentifier()); char *NewBuf = const_cast(NewBuffer->getBufferStart()); char *NewPos = std::copy(Buffer->getBufferStart(), Position, NewBuf); *NewPos = '\0'; std::copy(Position, Buffer->getBufferEnd(), NewPos+1); SourceMgr.overrideFileContents(File, std::move(NewBuffer)); return false; } void Preprocessor::CodeCompleteNaturalLanguage() { if (CodeComplete) CodeComplete->CodeCompleteNaturalLanguage(); setCodeCompletionReached(); } /// getSpelling - This method is used to get the spelling of a token into a /// SmallVector. Note that the returned StringRef may not point to the /// supplied buffer if a copy can be avoided. StringRef Preprocessor::getSpelling(const Token &Tok, SmallVectorImpl &Buffer, bool *Invalid) const { // NOTE: this has to be checked *before* testing for an IdentifierInfo. if (Tok.isNot(tok::raw_identifier) && !Tok.hasUCN()) { // Try the fast path. if (const IdentifierInfo *II = Tok.getIdentifierInfo()) return II->getName(); } // Resize the buffer if we need to copy into it. if (Tok.needsCleaning()) Buffer.resize(Tok.getLength()); const char *Ptr = Buffer.data(); unsigned Len = getSpelling(Tok, Ptr, Invalid); return StringRef(Ptr, Len); } /// CreateString - Plop the specified string into a scratch buffer and return a /// location for it. If specified, the source location provides a source /// location for the token. void Preprocessor::CreateString(StringRef Str, Token &Tok, SourceLocation ExpansionLocStart, SourceLocation ExpansionLocEnd) { Tok.setLength(Str.size()); const char *DestPtr; SourceLocation Loc = ScratchBuf->getToken(Str.data(), Str.size(), DestPtr); if (ExpansionLocStart.isValid()) Loc = SourceMgr.createExpansionLoc(Loc, ExpansionLocStart, ExpansionLocEnd, Str.size()); Tok.setLocation(Loc); // If this is a raw identifier or a literal token, set the pointer data. if (Tok.is(tok::raw_identifier)) Tok.setRawIdentifierData(DestPtr); else if (Tok.isLiteral()) Tok.setLiteralData(DestPtr); } Module *Preprocessor::getCurrentModule() { if (!getLangOpts().isCompilingModule()) return nullptr; return getHeaderSearchInfo().lookupModule(getLangOpts().CurrentModule); } //===----------------------------------------------------------------------===// // Preprocessor Initialization Methods //===----------------------------------------------------------------------===// /// EnterMainSourceFile - Enter the specified FileID as the main source file, /// which implicitly adds the builtin defines etc. void Preprocessor::EnterMainSourceFile() { // We do not allow the preprocessor to reenter the main file. Doing so will // cause FileID's to accumulate information from both runs (e.g. #line // information) and predefined macros aren't guaranteed to be set properly. 
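Entering the main file is a one-shot operation, as the comment above explains; a typical client enters it once and then lexes until end of file. A minimal sketch of that loop, assuming ``PP`` is a fully initialized ``Preprocessor`` whose target, header search, and predefines were set up by the caller:

.. code-block:: c++

  #include "clang/Lex/Preprocessor.h"

  // Pull every token of the main file (and of everything it includes).
  void lexWholeFile(clang::Preprocessor &PP) {
    PP.EnterMainSourceFile();
    clang::Token Tok;
    do {
      PP.Lex(Tok);
    } while (Tok.isNot(clang::tok::eof));
  }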
assert(NumEnteredSourceFiles == 0 && "Cannot reenter the main file!"); FileID MainFileID = SourceMgr.getMainFileID(); // If MainFileID is loaded it means we loaded an AST file, no need to enter // a main file. if (!SourceMgr.isLoadedFileID(MainFileID)) { // Enter the main file source buffer. EnterSourceFile(MainFileID, nullptr, SourceLocation()); // If we've been asked to skip bytes in the main file (e.g., as part of a // precompiled preamble), do so now. if (SkipMainFilePreamble.first > 0) CurLexer->SkipBytes(SkipMainFilePreamble.first, SkipMainFilePreamble.second); // Tell the header info that the main file was entered. If the file is later // #imported, it won't be re-entered. if (const FileEntry *FE = SourceMgr.getFileEntryForID(MainFileID)) HeaderInfo.IncrementIncludeCount(FE); } // Preprocess Predefines to populate the initial preprocessor state. std::unique_ptr SB = llvm::MemoryBuffer::getMemBufferCopy(Predefines, ""); assert(SB && "Cannot create predefined source buffer"); FileID FID = SourceMgr.createFileID(std::move(SB)); assert(FID.isValid() && "Could not create FileID for predefines?"); setPredefinesFileID(FID); // Start parsing the predefines. EnterSourceFile(FID, nullptr, SourceLocation()); } void Preprocessor::replayPreambleConditionalStack() { // Restore the conditional stack from the preamble, if there is one. if (PreambleConditionalStack.isReplaying()) { + assert(CurPPLexer && + "CurPPLexer is null when calling replayPreambleConditionalStack."); CurPPLexer->setConditionalLevels(PreambleConditionalStack.getStack()); PreambleConditionalStack.doneReplaying(); } } void Preprocessor::EndSourceFile() { // Notify the client that we reached the end of the source file. if (Callbacks) Callbacks->EndOfMainFile(); } //===----------------------------------------------------------------------===// // Lexer Event Handling. //===----------------------------------------------------------------------===// /// LookUpIdentifierInfo - Given a tok::raw_identifier token, look up the /// identifier information for the token and install it into the token, /// updating the token kind accordingly. IdentifierInfo *Preprocessor::LookUpIdentifierInfo(Token &Identifier) const { assert(!Identifier.getRawIdentifier().empty() && "No raw identifier data!"); // Look up this token, see if it is a macro, or if it is a language keyword. IdentifierInfo *II; if (!Identifier.needsCleaning() && !Identifier.hasUCN()) { // No cleaning needed, just use the characters from the lexed buffer. II = getIdentifierInfo(Identifier.getRawIdentifier()); } else { // Cleaning needed, alloca a buffer, clean into it, then use the buffer. SmallString<64> IdentifierBuffer; StringRef CleanedStr = getSpelling(Identifier, IdentifierBuffer); if (Identifier.hasUCN()) { SmallString<64> UCNIdentifierBuffer; expandUCNs(UCNIdentifierBuffer, CleanedStr); II = getIdentifierInfo(UCNIdentifierBuffer); } else { II = getIdentifierInfo(CleanedStr); } } // Update the token info (identifier info and appropriate token kind). 
Identifier.setIdentifierInfo(II); if (getLangOpts().MSVCCompat && II->isCPlusPlusOperatorKeyword() && getSourceManager().isInSystemHeader(Identifier.getLocation())) Identifier.setKind(clang::tok::identifier); else Identifier.setKind(II->getTokenID()); return II; } void Preprocessor::SetPoisonReason(IdentifierInfo *II, unsigned DiagID) { PoisonReasons[II] = DiagID; } void Preprocessor::PoisonSEHIdentifiers(bool Poison) { assert(Ident__exception_code && Ident__exception_info); assert(Ident___exception_code && Ident___exception_info); Ident__exception_code->setIsPoisoned(Poison); Ident___exception_code->setIsPoisoned(Poison); Ident_GetExceptionCode->setIsPoisoned(Poison); Ident__exception_info->setIsPoisoned(Poison); Ident___exception_info->setIsPoisoned(Poison); Ident_GetExceptionInfo->setIsPoisoned(Poison); Ident__abnormal_termination->setIsPoisoned(Poison); Ident___abnormal_termination->setIsPoisoned(Poison); Ident_AbnormalTermination->setIsPoisoned(Poison); } void Preprocessor::HandlePoisonedIdentifier(Token & Identifier) { assert(Identifier.getIdentifierInfo() && "Can't handle identifiers without identifier info!"); llvm::DenseMap::const_iterator it = PoisonReasons.find(Identifier.getIdentifierInfo()); if(it == PoisonReasons.end()) Diag(Identifier, diag::err_pp_used_poisoned_id); else Diag(Identifier,it->second) << Identifier.getIdentifierInfo(); } /// \brief Returns a diagnostic message kind for reporting a future keyword as /// appropriate for the identifier and specified language. static diag::kind getFutureCompatDiagKind(const IdentifierInfo &II, const LangOptions &LangOpts) { assert(II.isFutureCompatKeyword() && "diagnostic should not be needed"); if (LangOpts.CPlusPlus) return llvm::StringSwitch(II.getName()) #define CXX11_KEYWORD(NAME, FLAGS) \ .Case(#NAME, diag::warn_cxx11_keyword) #include "clang/Basic/TokenKinds.def" ; llvm_unreachable( "Keyword not known to come from a newer Standard or proposed Standard"); } void Preprocessor::updateOutOfDateIdentifier(IdentifierInfo &II) const { assert(II.isOutOfDate() && "not out of date"); getExternalSource()->updateOutOfDateIdentifier(II); } /// HandleIdentifier - This callback is invoked when the lexer reads an /// identifier. This callback looks up the identifier in the map and/or /// potentially macro expands it or turns it into a named token (like 'for'). /// /// Note that callers of this method are guarded by checking the /// IdentifierInfo's 'isHandleIdentifierCase' bit. If this method changes, the /// IdentifierInfo methods that compute these properties will need to change to /// match. bool Preprocessor::HandleIdentifier(Token &Identifier) { assert(Identifier.getIdentifierInfo() && "Can't handle identifiers without identifier info!"); IdentifierInfo &II = *Identifier.getIdentifierInfo(); // If the information about this identifier is out of date, update it from // the external source. // We have to treat __VA_ARGS__ in a special way, since it gets // serialized with isPoisoned = true, but our preprocessor may have // unpoisoned it if we're defining a C99 macro. if (II.isOutOfDate()) { bool CurrentIsPoisoned = false; if (&II == Ident__VA_ARGS__) CurrentIsPoisoned = Ident__VA_ARGS__->isPoisoned(); updateOutOfDateIdentifier(II); Identifier.setKind(II.getTokenID()); if (&II == Ident__VA_ARGS__) II.setIsPoisoned(CurrentIsPoisoned); } // If this identifier was poisoned, and if it was not produced from a macro // expansion, emit an error. 
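``HandlePoisonedIdentifier`` above is the path taken for identifiers poisoned with ``#pragma GCC poison`` (and, via the poison-reason table, for ``__VA_ARGS__`` used outside a variadic macro expansion). A minimal source-level reproducer of the diagnostic:

.. code-block:: c++

  #pragma GCC poison strcpy

  void copy(char *Dst, const char *Src) {
    strcpy(Dst, Src); // error: attempt to use a poisoned identifier
  }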
if (II.isPoisoned() && CurPPLexer) { HandlePoisonedIdentifier(Identifier); } // If this is a macro to be expanded, do it. if (MacroDefinition MD = getMacroDefinition(&II)) { auto *MI = MD.getMacroInfo(); assert(MI && "macro definition with no macro info?"); if (!DisableMacroExpansion) { if (!Identifier.isExpandDisabled() && MI->isEnabled()) { // C99 6.10.3p10: If the preprocessing token immediately after the // macro name isn't a '(', this macro should not be expanded. if (!MI->isFunctionLike() || isNextPPTokenLParen()) return HandleMacroExpandedIdentifier(Identifier, MD); } else { // C99 6.10.3.4p2 says that a disabled macro may never again be // expanded, even if it's in a context where it could be expanded in the // future. Identifier.setFlag(Token::DisableExpand); if (MI->isObjectLike() || isNextPPTokenLParen()) Diag(Identifier, diag::pp_disabled_macro_expansion); } } } // If this identifier is a keyword in a newer Standard or proposed Standard, // produce a warning. Don't warn if we're not considering macro expansion, // since this identifier might be the name of a macro. // FIXME: This warning is disabled in cases where it shouldn't be, like // "#define constexpr constexpr", "int constexpr;" if (II.isFutureCompatKeyword() && !DisableMacroExpansion) { Diag(Identifier, getFutureCompatDiagKind(II, getLangOpts())) << II.getName(); // Don't diagnose this keyword again in this translation unit. II.setIsFutureCompatKeyword(false); } // If this is an extension token, diagnose its use. // We avoid diagnosing tokens that originate from macro definitions. // FIXME: This warning is disabled in cases where it shouldn't be, // like "#define TY typeof", "TY(1) x". if (II.isExtensionToken() && !DisableMacroExpansion) Diag(Identifier, diag::ext_token_used); // If this is the 'import' contextual keyword following an '@', note // that the next token indicates a module name. // // Note that we do not treat 'import' as a contextual // keyword when we're in a caching lexer, because caching lexers only get // used in contexts where import declarations are disallowed. // // Likewise if this is the C++ Modules TS import keyword. if (((LastTokenWasAt && II.isModulesImport()) || Identifier.is(tok::kw_import)) && !InMacroArgs && !DisableMacroExpansion && (getLangOpts().Modules || getLangOpts().DebuggerSupport) && CurLexerKind != CLK_CachingLexer) { ModuleImportLoc = Identifier.getLocation(); ModuleImportPath.clear(); ModuleImportExpectsIdentifier = true; CurLexerKind = CLK_LexAfterModuleImport; } return true; } void Preprocessor::Lex(Token &Result) { // We loop here until a lex function returns a token; this avoids recursion. bool ReturnedToken; do { switch (CurLexerKind) { case CLK_Lexer: ReturnedToken = CurLexer->Lex(Result); break; case CLK_PTHLexer: ReturnedToken = CurPTHLexer->Lex(Result); break; case CLK_TokenLexer: ReturnedToken = CurTokenLexer->Lex(Result); break; case CLK_CachingLexer: CachingLex(Result); ReturnedToken = true; break; case CLK_LexAfterModuleImport: LexAfterModuleImport(Result); ReturnedToken = true; break; } } while (!ReturnedToken); if (Result.is(tok::code_completion)) setCodeCompletionIdentifierInfo(Result.getIdentifierInfo()); LastTokenWasAt = Result.is(tok::at); } /// \brief Lex a token following the 'import' contextual keyword. /// void Preprocessor::LexAfterModuleImport(Token &Result) { // Figure out what kind of lexer we actually have. recomputeCurLexerKind(); // Lex the next token. Lex(Result); // The token sequence // // import identifier (. 
identifier)* // // indicates a module import directive. We already saw the 'import' // contextual keyword, so now we're looking for the identifiers. if (ModuleImportExpectsIdentifier && Result.getKind() == tok::identifier) { // We expected to see an identifier here, and we did; continue handling // identifiers. ModuleImportPath.push_back(std::make_pair(Result.getIdentifierInfo(), Result.getLocation())); ModuleImportExpectsIdentifier = false; CurLexerKind = CLK_LexAfterModuleImport; return; } // If we're expecting a '.' or a ';', and we got a '.', then wait until we // see the next identifier. (We can also see a '[[' that begins an // attribute-specifier-seq here under the C++ Modules TS.) if (!ModuleImportExpectsIdentifier && Result.getKind() == tok::period) { ModuleImportExpectsIdentifier = true; CurLexerKind = CLK_LexAfterModuleImport; return; } // If we have a non-empty module path, load the named module. if (!ModuleImportPath.empty()) { // Under the Modules TS, the dot is just part of the module name, and not // a real hierarachy separator. Flatten such module names now. // // FIXME: Is this the right level to be performing this transformation? std::string FlatModuleName; if (getLangOpts().ModulesTS) { for (auto &Piece : ModuleImportPath) { if (!FlatModuleName.empty()) FlatModuleName += "."; FlatModuleName += Piece.first->getName(); } SourceLocation FirstPathLoc = ModuleImportPath[0].second; ModuleImportPath.clear(); ModuleImportPath.push_back( std::make_pair(getIdentifierInfo(FlatModuleName), FirstPathLoc)); } Module *Imported = nullptr; if (getLangOpts().Modules) { Imported = TheModuleLoader.loadModule(ModuleImportLoc, ModuleImportPath, Module::Hidden, /*IsIncludeDirective=*/false); if (Imported) makeModuleVisible(Imported, ModuleImportLoc); } if (Callbacks && (getLangOpts().Modules || getLangOpts().DebuggerSupport)) Callbacks->moduleImport(ModuleImportLoc, ModuleImportPath, Imported); } } void Preprocessor::makeModuleVisible(Module *M, SourceLocation Loc) { CurSubmoduleState->VisibleModules.setVisible( M, Loc, [](Module *) {}, [&](ArrayRef Path, Module *Conflict, StringRef Message) { // FIXME: Include the path in the diagnostic. // FIXME: Include the import location for the conflicting module. Diag(ModuleImportLoc, diag::warn_module_conflict) << Path[0]->getFullModuleName() << Conflict->getFullModuleName() << Message; }); // Add this module to the imports list of the currently-built submodule. if (!BuildingSubmoduleStack.empty() && M != BuildingSubmoduleStack.back().M) BuildingSubmoduleStack.back().M->Imports.insert(M); } bool Preprocessor::FinishLexStringLiteral(Token &Result, std::string &String, const char *DiagnosticTag, bool AllowMacroExpansion) { // We need at least one string literal. if (Result.isNot(tok::string_literal)) { Diag(Result, diag::err_expected_string_literal) << /*Source='in...'*/0 << DiagnosticTag; return false; } // Lex string literal tokens, optionally with macro expansion. SmallVector StrToks; do { StrToks.push_back(Result); if (Result.hasUDSuffix()) Diag(Result, diag::err_invalid_string_udl); if (AllowMacroExpansion) Lex(Result); else LexUnexpandedToken(Result); } while (Result.is(tok::string_literal)); // Concatenate and parse the strings. 
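
// --- Illustrative sketch (reviewer note, not part of this patch) -----------
// Under the Modules TS, the dots in "a.b.c" are part of a single module name,
// so the path collected above is flattened back into one identifier before
// the module loader runs.  A standalone version of that joining step, using
// an invented name (flattenModulePath) and std::string pieces:
#include <string>
#include <vector>

std::string flattenModulePath(const std::vector<std::string> &Pieces) {
  std::string Flat;
  for (const std::string &Piece : Pieces) {
    if (!Flat.empty())
      Flat += ".";   // re-insert the dots that were lexed as separate tokens
    Flat += Piece;
  }
  return Flat;       // e.g. {"std", "core"} -> "std.core"
}
// --- end of sketch ---------------------------------------------------------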
StringLiteralParser Literal(StrToks, *this); assert(Literal.isAscii() && "Didn't allow wide strings in"); if (Literal.hadError) return false; if (Literal.Pascal) { Diag(StrToks[0].getLocation(), diag::err_expected_string_literal) << /*Source='in...'*/0 << DiagnosticTag; return false; } String = Literal.GetString(); return true; } bool Preprocessor::parseSimpleIntegerLiteral(Token &Tok, uint64_t &Value) { assert(Tok.is(tok::numeric_constant)); SmallString<8> IntegerBuffer; bool NumberInvalid = false; StringRef Spelling = getSpelling(Tok, IntegerBuffer, &NumberInvalid); if (NumberInvalid) return false; NumericLiteralParser Literal(Spelling, Tok.getLocation(), *this); if (Literal.hadError || !Literal.isIntegerLiteral() || Literal.hasUDSuffix()) return false; llvm::APInt APVal(64, 0); if (Literal.GetIntegerValue(APVal)) return false; Lex(Tok); Value = APVal.getLimitedValue(); return true; } void Preprocessor::addCommentHandler(CommentHandler *Handler) { assert(Handler && "NULL comment handler"); assert(std::find(CommentHandlers.begin(), CommentHandlers.end(), Handler) == CommentHandlers.end() && "Comment handler already registered"); CommentHandlers.push_back(Handler); } void Preprocessor::removeCommentHandler(CommentHandler *Handler) { std::vector::iterator Pos = std::find(CommentHandlers.begin(), CommentHandlers.end(), Handler); assert(Pos != CommentHandlers.end() && "Comment handler not registered"); CommentHandlers.erase(Pos); } bool Preprocessor::HandleComment(Token &result, SourceRange Comment) { bool AnyPendingTokens = false; for (std::vector::iterator H = CommentHandlers.begin(), HEnd = CommentHandlers.end(); H != HEnd; ++H) { if ((*H)->HandleComment(*this, Comment)) AnyPendingTokens = true; } if (!AnyPendingTokens || getCommentRetentionState()) return false; Lex(result); return true; } ModuleLoader::~ModuleLoader() { } CommentHandler::~CommentHandler() { } CodeCompletionHandler::~CodeCompletionHandler() { } void Preprocessor::createPreprocessingRecord() { if (Record) return; Record = new PreprocessingRecord(getSourceManager()); addPPCallbacks(std::unique_ptr(Record)); } diff --git a/lib/Parse/Parser.cpp b/lib/Parse/Parser.cpp index 4aa9a5971929..1ed7ef966358 100644 --- a/lib/Parse/Parser.cpp +++ b/lib/Parse/Parser.cpp @@ -1,2248 +1,2246 @@ //===--- Parser.cpp - C Language Family Parser ----------------------------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements the Parser interfaces. // //===----------------------------------------------------------------------===// #include "clang/Parse/Parser.h" #include "clang/AST/ASTConsumer.h" #include "clang/AST/ASTContext.h" #include "clang/AST/DeclTemplate.h" #include "clang/Parse/ParseDiagnostic.h" #include "clang/Parse/RAIIObjectsForParser.h" #include "clang/Sema/DeclSpec.h" #include "clang/Sema/ParsedTemplate.h" #include "clang/Sema/Scope.h" using namespace clang; namespace { /// \brief A comment handler that passes comments found by the preprocessor /// to the parser action. 
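
// --- Illustrative sketch (reviewer note, not part of this patch) -----------
// addCommentHandler / removeCommentHandler / HandleComment above form a small
// observer registry: handlers are appended to a vector, and each comment is
// offered to every handler; if any handler produced pending tokens, the
// caller re-lexes.  A standalone analogue with invented names
// (CommentSink, CommentRegistry):
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct CommentSink {
  virtual ~CommentSink() = default;
  // Return true if the handler pushed tokens the caller must re-lex.
  virtual bool handleComment(const std::string &Text) = 0;
};

class CommentRegistry {
  std::vector<CommentSink *> Handlers;

public:
  void add(CommentSink *H) {
    assert(H && "null handler");
    Handlers.push_back(H);
  }
  void remove(CommentSink *H) {
    auto Pos = std::find(Handlers.begin(), Handlers.end(), H);
    assert(Pos != Handlers.end() && "handler not registered");
    Handlers.erase(Pos);
  }
  bool dispatch(const std::string &Text) {
    bool AnyPending = false;
    for (CommentSink *H : Handlers)
      if (H->handleComment(Text))
        AnyPending = true;
    return AnyPending;   // caller decides whether to re-lex
  }
};
// --- end of sketch ---------------------------------------------------------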
class ActionCommentHandler : public CommentHandler { Sema &S; public: explicit ActionCommentHandler(Sema &S) : S(S) { } bool HandleComment(Preprocessor &PP, SourceRange Comment) override { S.ActOnComment(Comment); return false; } }; } // end anonymous namespace IdentifierInfo *Parser::getSEHExceptKeyword() { // __except is accepted as a (contextual) keyword if (!Ident__except && (getLangOpts().MicrosoftExt || getLangOpts().Borland)) Ident__except = PP.getIdentifierInfo("__except"); return Ident__except; } Parser::Parser(Preprocessor &pp, Sema &actions, bool skipFunctionBodies) : PP(pp), Actions(actions), Diags(PP.getDiagnostics()), GreaterThanIsOperator(true), ColonIsSacred(false), InMessageExpression(false), TemplateParameterDepth(0), ParsingInObjCContainer(false) { SkipFunctionBodies = pp.isCodeCompletionEnabled() || skipFunctionBodies; Tok.startToken(); Tok.setKind(tok::eof); Actions.CurScope = nullptr; NumCachedScopes = 0; CurParsedObjCImpl = nullptr; // Add #pragma handlers. These are removed and destroyed in the // destructor. initializePragmaHandlers(); CommentSemaHandler.reset(new ActionCommentHandler(actions)); PP.addCommentHandler(CommentSemaHandler.get()); PP.setCodeCompletionHandler(*this); } DiagnosticBuilder Parser::Diag(SourceLocation Loc, unsigned DiagID) { return Diags.Report(Loc, DiagID); } DiagnosticBuilder Parser::Diag(const Token &Tok, unsigned DiagID) { return Diag(Tok.getLocation(), DiagID); } /// \brief Emits a diagnostic suggesting parentheses surrounding a /// given range. /// /// \param Loc The location where we'll emit the diagnostic. /// \param DK The kind of diagnostic to emit. /// \param ParenRange Source range enclosing code that should be parenthesized. void Parser::SuggestParentheses(SourceLocation Loc, unsigned DK, SourceRange ParenRange) { SourceLocation EndLoc = PP.getLocForEndOfToken(ParenRange.getEnd()); if (!ParenRange.getEnd().isFileID() || EndLoc.isInvalid()) { // We can't display the parentheses, so just dig the // warning/error and return. Diag(Loc, DK); return; } Diag(Loc, DK) << FixItHint::CreateInsertion(ParenRange.getBegin(), "(") << FixItHint::CreateInsertion(EndLoc, ")"); } static bool IsCommonTypo(tok::TokenKind ExpectedTok, const Token &Tok) { switch (ExpectedTok) { case tok::semi: return Tok.is(tok::colon) || Tok.is(tok::comma); // : or , for ; default: return false; } } bool Parser::ExpectAndConsume(tok::TokenKind ExpectedTok, unsigned DiagID, StringRef Msg) { if (Tok.is(ExpectedTok) || Tok.is(tok::code_completion)) { ConsumeAnyToken(); return false; } // Detect common single-character typos and resume. if (IsCommonTypo(ExpectedTok, Tok)) { SourceLocation Loc = Tok.getLocation(); { DiagnosticBuilder DB = Diag(Loc, DiagID); DB << FixItHint::CreateReplacement( SourceRange(Loc), tok::getPunctuatorSpelling(ExpectedTok)); if (DiagID == diag::err_expected) DB << ExpectedTok; else if (DiagID == diag::err_expected_after) DB << Msg << ExpectedTok; else DB << Msg; } // Pretend there wasn't a problem. ConsumeAnyToken(); return false; } SourceLocation EndLoc = PP.getLocForEndOfToken(PrevTokLocation); const char *Spelling = nullptr; if (EndLoc.isValid()) Spelling = tok::getPunctuatorSpelling(ExpectedTok); DiagnosticBuilder DB = Spelling ? 
Diag(EndLoc, DiagID) << FixItHint::CreateInsertion(EndLoc, Spelling) : Diag(Tok, DiagID); if (DiagID == diag::err_expected) DB << ExpectedTok; else if (DiagID == diag::err_expected_after) DB << Msg << ExpectedTok; else DB << Msg; return true; } bool Parser::ExpectAndConsumeSemi(unsigned DiagID) { if (TryConsumeToken(tok::semi)) return false; if (Tok.is(tok::code_completion)) { handleUnexpectedCodeCompletionToken(); return false; } if ((Tok.is(tok::r_paren) || Tok.is(tok::r_square)) && NextToken().is(tok::semi)) { Diag(Tok, diag::err_extraneous_token_before_semi) << PP.getSpelling(Tok) << FixItHint::CreateRemoval(Tok.getLocation()); ConsumeAnyToken(); // The ')' or ']'. ConsumeToken(); // The ';'. return false; } return ExpectAndConsume(tok::semi, DiagID); } void Parser::ConsumeExtraSemi(ExtraSemiKind Kind, unsigned TST) { if (!Tok.is(tok::semi)) return; bool HadMultipleSemis = false; SourceLocation StartLoc = Tok.getLocation(); SourceLocation EndLoc = Tok.getLocation(); ConsumeToken(); while ((Tok.is(tok::semi) && !Tok.isAtStartOfLine())) { HadMultipleSemis = true; EndLoc = Tok.getLocation(); ConsumeToken(); } // C++11 allows extra semicolons at namespace scope, but not in any of the // other contexts. if (Kind == OutsideFunction && getLangOpts().CPlusPlus) { if (getLangOpts().CPlusPlus11) Diag(StartLoc, diag::warn_cxx98_compat_top_level_semi) << FixItHint::CreateRemoval(SourceRange(StartLoc, EndLoc)); else Diag(StartLoc, diag::ext_extra_semi_cxx11) << FixItHint::CreateRemoval(SourceRange(StartLoc, EndLoc)); return; } if (Kind != AfterMemberFunctionDefinition || HadMultipleSemis) Diag(StartLoc, diag::ext_extra_semi) << Kind << DeclSpec::getSpecifierName((DeclSpec::TST)TST, Actions.getASTContext().getPrintingPolicy()) << FixItHint::CreateRemoval(SourceRange(StartLoc, EndLoc)); else // A single semicolon is valid after a member function definition. Diag(StartLoc, diag::warn_extra_semi_after_mem_fn_def) << FixItHint::CreateRemoval(SourceRange(StartLoc, EndLoc)); } bool Parser::expectIdentifier() { if (Tok.is(tok::identifier)) return false; if (const auto *II = Tok.getIdentifierInfo()) { if (II->isCPlusPlusKeyword(getLangOpts())) { Diag(Tok, diag::err_expected_token_instead_of_objcxx_keyword) << tok::identifier << Tok.getIdentifierInfo(); // Objective-C++: Recover by treating this keyword as a valid identifier. return false; } } Diag(Tok, diag::err_expected) << tok::identifier; return true; } //===----------------------------------------------------------------------===// // Error recovery. //===----------------------------------------------------------------------===// static bool HasFlagsSet(Parser::SkipUntilFlags L, Parser::SkipUntilFlags R) { return (static_cast(L) & static_cast(R)) != 0; } /// SkipUntil - Read tokens until we get to the specified token, then consume /// it (unless no flag StopBeforeMatch). Because we cannot guarantee that the /// token will ever occur, this skips to the next token, or to some likely /// good stopping point. If StopAtSemi is true, skipping will stop at a ';' /// character. /// /// If SkipUntil finds the specified token, it returns true, otherwise it /// returns false. bool Parser::SkipUntil(ArrayRef Toks, SkipUntilFlags Flags) { // We always want this function to skip at least one token if the first token // isn't T and if not at EOF. bool isFirstTokenSkipped = true; while (1) { // If we found one of the tokens, stop and return true. 
for (unsigned i = 0, NumToks = Toks.size(); i != NumToks; ++i) { if (Tok.is(Toks[i])) { if (HasFlagsSet(Flags, StopBeforeMatch)) { // Noop, don't consume the token. } else { ConsumeAnyToken(); } return true; } } // Important special case: The caller has given up and just wants us to // skip the rest of the file. Do this without recursing, since we can // get here precisely because the caller detected too much recursion. if (Toks.size() == 1 && Toks[0] == tok::eof && !HasFlagsSet(Flags, StopAtSemi) && !HasFlagsSet(Flags, StopAtCodeCompletion)) { while (Tok.isNot(tok::eof)) ConsumeAnyToken(); return true; } switch (Tok.getKind()) { case tok::eof: // Ran out of tokens. return false; case tok::annot_pragma_openmp: case tok::annot_pragma_openmp_end: // Stop before an OpenMP pragma boundary. case tok::annot_module_begin: case tok::annot_module_end: case tok::annot_module_include: // Stop before we change submodules. They generally indicate a "good" // place to pick up parsing again (except in the special case where // we're trying to skip to EOF). return false; case tok::code_completion: if (!HasFlagsSet(Flags, StopAtCodeCompletion)) handleUnexpectedCodeCompletionToken(); return false; case tok::l_paren: // Recursively skip properly-nested parens. ConsumeParen(); if (HasFlagsSet(Flags, StopAtCodeCompletion)) SkipUntil(tok::r_paren, StopAtCodeCompletion); else SkipUntil(tok::r_paren); break; case tok::l_square: // Recursively skip properly-nested square brackets. ConsumeBracket(); if (HasFlagsSet(Flags, StopAtCodeCompletion)) SkipUntil(tok::r_square, StopAtCodeCompletion); else SkipUntil(tok::r_square); break; case tok::l_brace: // Recursively skip properly-nested braces. ConsumeBrace(); if (HasFlagsSet(Flags, StopAtCodeCompletion)) SkipUntil(tok::r_brace, StopAtCodeCompletion); else SkipUntil(tok::r_brace); break; // Okay, we found a ']' or '}' or ')', which we think should be balanced. // Since the user wasn't looking for this token (if they were, it would // already be handled), this isn't balanced. If there is a LHS token at a // higher level, we will assume that this matches the unbalanced token // and return it. Otherwise, this is a spurious RHS token, which we skip. case tok::r_paren: if (ParenCount && !isFirstTokenSkipped) return false; // Matches something. ConsumeParen(); break; case tok::r_square: if (BracketCount && !isFirstTokenSkipped) return false; // Matches something. ConsumeBracket(); break; case tok::r_brace: if (BraceCount && !isFirstTokenSkipped) return false; // Matches something. ConsumeBrace(); break; case tok::semi: if (HasFlagsSet(Flags, StopAtSemi)) return false; // FALL THROUGH. default: // Skip this token. ConsumeAnyToken(); break; } isFirstTokenSkipped = false; } } //===----------------------------------------------------------------------===// // Scope manipulation //===----------------------------------------------------------------------===// /// EnterScope - Start a new scope. void Parser::EnterScope(unsigned ScopeFlags) { if (NumCachedScopes) { Scope *N = ScopeCache[--NumCachedScopes]; N->Init(getCurScope(), ScopeFlags); Actions.CurScope = N; } else { Actions.CurScope = new Scope(getCurScope(), ScopeFlags, Diags); } } /// ExitScope - Pop a scope off the scope stack. void Parser::ExitScope() { assert(getCurScope() && "Scope imbalance!"); // Inform the actions module that this scope is going away if there are any // decls in it. 
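
// --- Illustrative sketch (reviewer note, not part of this patch) -----------
// SkipUntil above is classic panic-mode recovery: discard tokens until a
// member of the stop set appears, while recursively skipping balanced (),
// [] and {} groups so a stray delimiter inside them cannot end the skip
// early.  A tiny standalone analogue over a character stream, with an
// invented name (skipUntil); like StopBeforeMatch, it stops *before* the
// stop character rather than consuming it:
#include <cstddef>
#include <string>

std::size_t skipUntil(const std::string &Toks, std::size_t Pos, char Stop) {
  while (Pos < Toks.size()) {
    char C = Toks[Pos];
    if (C == Stop)
      return Pos;                              // found a stopping point
    if (C == '(' || C == '[' || C == '{') {
      char Close = (C == '(') ? ')' : (C == '[') ? ']' : '}';
      Pos = skipUntil(Toks, Pos + 1, Close);   // skip the nested group
      if (Pos < Toks.size())
        ++Pos;                                 // consume the matching close
      continue;
    }
    ++Pos;                                     // skip this token
  }
  return Pos;                                  // ran out of tokens
}
// --- end of sketch ---------------------------------------------------------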
Actions.ActOnPopScope(Tok.getLocation(), getCurScope()); Scope *OldScope = getCurScope(); Actions.CurScope = OldScope->getParent(); if (NumCachedScopes == ScopeCacheSize) delete OldScope; else ScopeCache[NumCachedScopes++] = OldScope; } /// Set the flags for the current scope to ScopeFlags. If ManageFlags is false, /// this object does nothing. Parser::ParseScopeFlags::ParseScopeFlags(Parser *Self, unsigned ScopeFlags, bool ManageFlags) : CurScope(ManageFlags ? Self->getCurScope() : nullptr) { if (CurScope) { OldFlags = CurScope->getFlags(); CurScope->setFlags(ScopeFlags); } } /// Restore the flags for the current scope to what they were before this /// object overrode them. Parser::ParseScopeFlags::~ParseScopeFlags() { if (CurScope) CurScope->setFlags(OldFlags); } //===----------------------------------------------------------------------===// // C99 6.9: External Definitions. //===----------------------------------------------------------------------===// Parser::~Parser() { // If we still have scopes active, delete the scope tree. delete getCurScope(); Actions.CurScope = nullptr; // Free the scope cache. for (unsigned i = 0, e = NumCachedScopes; i != e; ++i) delete ScopeCache[i]; resetPragmaHandlers(); PP.removeCommentHandler(CommentSemaHandler.get()); PP.clearCodeCompletionHandler(); if (getLangOpts().DelayedTemplateParsing && !PP.isIncrementalProcessingEnabled() && !TemplateIds.empty()) { // If an ASTConsumer parsed delay-parsed templates in their // HandleTranslationUnit() method, TemplateIds created there were not // guarded by a DestroyTemplateIdAnnotationsRAIIObj object in // ParseTopLevelDecl(). Destroy them here. DestroyTemplateIdAnnotationsRAIIObj CleanupRAII(TemplateIds); } assert(TemplateIds.empty() && "Still alive TemplateIdAnnotations around?"); } /// Initialize - Warm up the parser. /// void Parser::Initialize() { // Create the translation unit scope. Install it as the current scope. assert(getCurScope() == nullptr && "A scope is already active?"); EnterScope(Scope::DeclScope); Actions.ActOnTranslationUnitScope(getCurScope()); // Initialization for Objective-C context sensitive keywords recognition. // Referenced in Parser::ParseObjCTypeQualifierList. 
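
// --- Illustrative sketch (reviewer note, not part of this patch) -----------
// ParseScopeFlags above is a plain RAII override (minus the ManageFlags
// switch): remember the scope's flags in the constructor, force the new
// flags, and restore the old ones in the destructor so early returns cannot
// leak the override.  A standalone analogue with an invented name
// (FlagOverride):
#include <cstdint>

class FlagOverride {
  std::uint32_t &Target;
  std::uint32_t Saved;

public:
  FlagOverride(std::uint32_t &Flags, std::uint32_t NewFlags)
      : Target(Flags), Saved(Flags) {
    Target = NewFlags;                   // override for this object's lifetime
  }
  ~FlagOverride() { Target = Saved; }    // restore on every exit path

  FlagOverride(const FlagOverride &) = delete;
  FlagOverride &operator=(const FlagOverride &) = delete;
};
// --- end of sketch ---------------------------------------------------------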
if (getLangOpts().ObjC1) { ObjCTypeQuals[objc_in] = &PP.getIdentifierTable().get("in"); ObjCTypeQuals[objc_out] = &PP.getIdentifierTable().get("out"); ObjCTypeQuals[objc_inout] = &PP.getIdentifierTable().get("inout"); ObjCTypeQuals[objc_oneway] = &PP.getIdentifierTable().get("oneway"); ObjCTypeQuals[objc_bycopy] = &PP.getIdentifierTable().get("bycopy"); ObjCTypeQuals[objc_byref] = &PP.getIdentifierTable().get("byref"); ObjCTypeQuals[objc_nonnull] = &PP.getIdentifierTable().get("nonnull"); ObjCTypeQuals[objc_nullable] = &PP.getIdentifierTable().get("nullable"); ObjCTypeQuals[objc_null_unspecified] = &PP.getIdentifierTable().get("null_unspecified"); } Ident_instancetype = nullptr; Ident_final = nullptr; Ident_sealed = nullptr; Ident_override = nullptr; Ident_GNU_final = nullptr; Ident_super = &PP.getIdentifierTable().get("super"); Ident_vector = nullptr; Ident_bool = nullptr; Ident_pixel = nullptr; if (getLangOpts().AltiVec || getLangOpts().ZVector) { Ident_vector = &PP.getIdentifierTable().get("vector"); Ident_bool = &PP.getIdentifierTable().get("bool"); } if (getLangOpts().AltiVec) Ident_pixel = &PP.getIdentifierTable().get("pixel"); Ident_introduced = nullptr; Ident_deprecated = nullptr; Ident_obsoleted = nullptr; Ident_unavailable = nullptr; Ident_strict = nullptr; Ident_replacement = nullptr; Ident_language = Ident_defined_in = Ident_generated_declaration = nullptr; Ident__except = nullptr; Ident__exception_code = Ident__exception_info = nullptr; Ident__abnormal_termination = Ident___exception_code = nullptr; Ident___exception_info = Ident___abnormal_termination = nullptr; Ident_GetExceptionCode = Ident_GetExceptionInfo = nullptr; Ident_AbnormalTermination = nullptr; if(getLangOpts().Borland) { Ident__exception_info = PP.getIdentifierInfo("_exception_info"); Ident___exception_info = PP.getIdentifierInfo("__exception_info"); Ident_GetExceptionInfo = PP.getIdentifierInfo("GetExceptionInformation"); Ident__exception_code = PP.getIdentifierInfo("_exception_code"); Ident___exception_code = PP.getIdentifierInfo("__exception_code"); Ident_GetExceptionCode = PP.getIdentifierInfo("GetExceptionCode"); Ident__abnormal_termination = PP.getIdentifierInfo("_abnormal_termination"); Ident___abnormal_termination = PP.getIdentifierInfo("__abnormal_termination"); Ident_AbnormalTermination = PP.getIdentifierInfo("AbnormalTermination"); PP.SetPoisonReason(Ident__exception_code,diag::err_seh___except_block); PP.SetPoisonReason(Ident___exception_code,diag::err_seh___except_block); PP.SetPoisonReason(Ident_GetExceptionCode,diag::err_seh___except_block); PP.SetPoisonReason(Ident__exception_info,diag::err_seh___except_filter); PP.SetPoisonReason(Ident___exception_info,diag::err_seh___except_filter); PP.SetPoisonReason(Ident_GetExceptionInfo,diag::err_seh___except_filter); PP.SetPoisonReason(Ident__abnormal_termination,diag::err_seh___finally_block); PP.SetPoisonReason(Ident___abnormal_termination,diag::err_seh___finally_block); PP.SetPoisonReason(Ident_AbnormalTermination,diag::err_seh___finally_block); } Actions.Initialize(); // Prime the lexer look-ahead. ConsumeToken(); - - PP.replayPreambleConditionalStack(); } void Parser::LateTemplateParserCleanupCallback(void *P) { // While this RAII helper doesn't bracket any actual work, the destructor will // clean up annotations that were created during ActOnEndOfTranslationUnit // when incremental processing is enabled. 
DestroyTemplateIdAnnotationsRAIIObj CleanupRAII(((Parser *)P)->TemplateIds); } bool Parser::ParseFirstTopLevelDecl(DeclGroupPtrTy &Result) { Actions.ActOnStartOfTranslationUnit(); // C11 6.9p1 says translation units must have at least one top-level // declaration. C++ doesn't have this restriction. We also don't want to // complain if we have a precompiled header, although technically if the PCH // is empty we should still emit the (pedantic) diagnostic. bool NoTopLevelDecls = ParseTopLevelDecl(Result); if (NoTopLevelDecls && !Actions.getASTContext().getExternalSource() && !getLangOpts().CPlusPlus) Diag(diag::ext_empty_translation_unit); return NoTopLevelDecls; } /// ParseTopLevelDecl - Parse one top-level declaration, return whatever the /// action tells us to. This returns true if the EOF was encountered. bool Parser::ParseTopLevelDecl(DeclGroupPtrTy &Result) { DestroyTemplateIdAnnotationsRAIIObj CleanupRAII(TemplateIds); // Skip over the EOF token, flagging end of previous input for incremental // processing if (PP.isIncrementalProcessingEnabled() && Tok.is(tok::eof)) ConsumeToken(); Result = nullptr; switch (Tok.getKind()) { case tok::annot_pragma_unused: HandlePragmaUnused(); return false; case tok::kw_import: Result = ParseModuleImport(SourceLocation()); return false; case tok::kw_export: if (NextToken().isNot(tok::kw_module)) break; LLVM_FALLTHROUGH; case tok::kw_module: Result = ParseModuleDecl(); return false; case tok::annot_module_include: Actions.ActOnModuleInclude(Tok.getLocation(), reinterpret_cast( Tok.getAnnotationValue())); ConsumeAnnotationToken(); return false; case tok::annot_module_begin: Actions.ActOnModuleBegin(Tok.getLocation(), reinterpret_cast( Tok.getAnnotationValue())); ConsumeAnnotationToken(); return false; case tok::annot_module_end: Actions.ActOnModuleEnd(Tok.getLocation(), reinterpret_cast( Tok.getAnnotationValue())); ConsumeAnnotationToken(); return false; case tok::annot_pragma_attribute: HandlePragmaAttribute(); return false; case tok::eof: // Late template parsing can begin. if (getLangOpts().DelayedTemplateParsing) Actions.SetLateTemplateParser(LateTemplateParserCallback, PP.isIncrementalProcessingEnabled() ? LateTemplateParserCleanupCallback : nullptr, this); if (!PP.isIncrementalProcessingEnabled()) Actions.ActOnEndOfTranslationUnit(); //else don't tell Sema that we ended parsing: more input might come. 
return true; default: break; } ParsedAttributesWithRange attrs(AttrFactory); MaybeParseCXX11Attributes(attrs); Result = ParseExternalDeclaration(attrs); return false; } /// ParseExternalDeclaration: /// /// external-declaration: [C99 6.9], declaration: [C++ dcl.dcl] /// function-definition /// declaration /// [GNU] asm-definition /// [GNU] __extension__ external-declaration /// [OBJC] objc-class-definition /// [OBJC] objc-class-declaration /// [OBJC] objc-alias-declaration /// [OBJC] objc-protocol-definition /// [OBJC] objc-method-definition /// [OBJC] @end /// [C++] linkage-specification /// [GNU] asm-definition: /// simple-asm-expr ';' /// [C++11] empty-declaration /// [C++11] attribute-declaration /// /// [C++11] empty-declaration: /// ';' /// /// [C++0x/GNU] 'extern' 'template' declaration Parser::DeclGroupPtrTy Parser::ParseExternalDeclaration(ParsedAttributesWithRange &attrs, ParsingDeclSpec *DS) { DestroyTemplateIdAnnotationsRAIIObj CleanupRAII(TemplateIds); ParenBraceBracketBalancer BalancerRAIIObj(*this); if (PP.isCodeCompletionReached()) { cutOffParsing(); return nullptr; } Decl *SingleDecl = nullptr; switch (Tok.getKind()) { case tok::annot_pragma_vis: HandlePragmaVisibility(); return nullptr; case tok::annot_pragma_pack: HandlePragmaPack(); return nullptr; case tok::annot_pragma_msstruct: HandlePragmaMSStruct(); return nullptr; case tok::annot_pragma_align: HandlePragmaAlign(); return nullptr; case tok::annot_pragma_weak: HandlePragmaWeak(); return nullptr; case tok::annot_pragma_weakalias: HandlePragmaWeakAlias(); return nullptr; case tok::annot_pragma_redefine_extname: HandlePragmaRedefineExtname(); return nullptr; case tok::annot_pragma_fp_contract: HandlePragmaFPContract(); return nullptr; case tok::annot_pragma_fp: HandlePragmaFP(); break; case tok::annot_pragma_opencl_extension: HandlePragmaOpenCLExtension(); return nullptr; case tok::annot_pragma_openmp: { AccessSpecifier AS = AS_none; return ParseOpenMPDeclarativeDirectiveWithExtDecl(AS, attrs); } case tok::annot_pragma_ms_pointers_to_members: HandlePragmaMSPointersToMembers(); return nullptr; case tok::annot_pragma_ms_vtordisp: HandlePragmaMSVtorDisp(); return nullptr; case tok::annot_pragma_ms_pragma: HandlePragmaMSPragma(); return nullptr; case tok::annot_pragma_dump: HandlePragmaDump(); return nullptr; case tok::semi: // Either a C++11 empty-declaration or attribute-declaration. SingleDecl = Actions.ActOnEmptyDeclaration(getCurScope(), attrs.getList(), Tok.getLocation()); ConsumeExtraSemi(OutsideFunction); break; case tok::r_brace: Diag(Tok, diag::err_extraneous_closing_brace); ConsumeBrace(); return nullptr; case tok::eof: Diag(Tok, diag::err_expected_external_declaration); return nullptr; case tok::kw___extension__: { // __extension__ silences extension warnings in the subexpression. ExtensionRAIIObject O(Diags); // Use RAII to do this. ConsumeToken(); return ParseExternalDeclaration(attrs); } case tok::kw_asm: { ProhibitAttributes(attrs); SourceLocation StartLoc = Tok.getLocation(); SourceLocation EndLoc; ExprResult Result(ParseSimpleAsm(&EndLoc)); // Check if GNU-style InlineAsm is disabled. // Empty asm string is allowed because it will not introduce // any assembly code. 
if (!(getLangOpts().GNUAsm || Result.isInvalid())) { const auto *SL = cast(Result.get()); if (!SL->getString().trim().empty()) Diag(StartLoc, diag::err_gnu_inline_asm_disabled); } ExpectAndConsume(tok::semi, diag::err_expected_after, "top-level asm block"); if (Result.isInvalid()) return nullptr; SingleDecl = Actions.ActOnFileScopeAsmDecl(Result.get(), StartLoc, EndLoc); break; } case tok::at: return ParseObjCAtDirectives(); case tok::minus: case tok::plus: if (!getLangOpts().ObjC1) { Diag(Tok, diag::err_expected_external_declaration); ConsumeToken(); return nullptr; } SingleDecl = ParseObjCMethodDefinition(); break; case tok::code_completion: Actions.CodeCompleteOrdinaryName(getCurScope(), CurParsedObjCImpl? Sema::PCC_ObjCImplementation : Sema::PCC_Namespace); cutOffParsing(); return nullptr; case tok::kw_export: if (getLangOpts().ModulesTS) { SingleDecl = ParseExportDeclaration(); break; } // This must be 'export template'. Parse it so we can diagnose our lack // of support. LLVM_FALLTHROUGH; case tok::kw_using: case tok::kw_namespace: case tok::kw_typedef: case tok::kw_template: case tok::kw_static_assert: case tok::kw__Static_assert: // A function definition cannot start with any of these keywords. { SourceLocation DeclEnd; return ParseDeclaration(Declarator::FileContext, DeclEnd, attrs); } case tok::kw_static: // Parse (then ignore) 'static' prior to a template instantiation. This is // a GCC extension that we intentionally do not support. if (getLangOpts().CPlusPlus && NextToken().is(tok::kw_template)) { Diag(ConsumeToken(), diag::warn_static_inline_explicit_inst_ignored) << 0; SourceLocation DeclEnd; return ParseDeclaration(Declarator::FileContext, DeclEnd, attrs); } goto dont_know; case tok::kw_inline: if (getLangOpts().CPlusPlus) { tok::TokenKind NextKind = NextToken().getKind(); // Inline namespaces. Allowed as an extension even in C++03. if (NextKind == tok::kw_namespace) { SourceLocation DeclEnd; return ParseDeclaration(Declarator::FileContext, DeclEnd, attrs); } // Parse (then ignore) 'inline' prior to a template instantiation. This is // a GCC extension that we intentionally do not support. if (NextKind == tok::kw_template) { Diag(ConsumeToken(), diag::warn_static_inline_explicit_inst_ignored) << 1; SourceLocation DeclEnd; return ParseDeclaration(Declarator::FileContext, DeclEnd, attrs); } } goto dont_know; case tok::kw_extern: if (getLangOpts().CPlusPlus && NextToken().is(tok::kw_template)) { // Extern templates SourceLocation ExternLoc = ConsumeToken(); SourceLocation TemplateLoc = ConsumeToken(); Diag(ExternLoc, getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_extern_template : diag::ext_extern_template) << SourceRange(ExternLoc, TemplateLoc); SourceLocation DeclEnd; return Actions.ConvertDeclToDeclGroup( ParseExplicitInstantiation(Declarator::FileContext, ExternLoc, TemplateLoc, DeclEnd)); } goto dont_know; case tok::kw___if_exists: case tok::kw___if_not_exists: ParseMicrosoftIfExistsExternalDeclaration(); return nullptr; case tok::kw_module: Diag(Tok, diag::err_unexpected_module_decl); SkipUntil(tok::semi); return nullptr; default: dont_know: if (Tok.isEditorPlaceholder()) { ConsumeToken(); return nullptr; } // We can't tell whether this is a function-definition or declaration yet. return ParseDeclarationOrFunctionDefinition(attrs, DS); } // This routine returns a DeclGroup, if the thing we parsed only contains a // single decl, convert it now. 
return Actions.ConvertDeclToDeclGroup(SingleDecl); } /// \brief Determine whether the current token, if it occurs after a /// declarator, continues a declaration or declaration list. bool Parser::isDeclarationAfterDeclarator() { // Check for '= delete' or '= default' if (getLangOpts().CPlusPlus && Tok.is(tok::equal)) { const Token &KW = NextToken(); if (KW.is(tok::kw_default) || KW.is(tok::kw_delete)) return false; } return Tok.is(tok::equal) || // int X()= -> not a function def Tok.is(tok::comma) || // int X(), -> not a function def Tok.is(tok::semi) || // int X(); -> not a function def Tok.is(tok::kw_asm) || // int X() __asm__ -> not a function def Tok.is(tok::kw___attribute) || // int X() __attr__ -> not a function def (getLangOpts().CPlusPlus && Tok.is(tok::l_paren)); // int X(0) -> not a function def [C++] } /// \brief Determine whether the current token, if it occurs after a /// declarator, indicates the start of a function definition. bool Parser::isStartOfFunctionDefinition(const ParsingDeclarator &Declarator) { assert(Declarator.isFunctionDeclarator() && "Isn't a function declarator"); if (Tok.is(tok::l_brace)) // int X() {} return true; // Handle K&R C argument lists: int X(f) int f; {} if (!getLangOpts().CPlusPlus && Declarator.getFunctionTypeInfo().isKNRPrototype()) return isDeclarationSpecifier(); if (getLangOpts().CPlusPlus && Tok.is(tok::equal)) { const Token &KW = NextToken(); return KW.is(tok::kw_default) || KW.is(tok::kw_delete); } return Tok.is(tok::colon) || // X() : Base() {} (used for ctors) Tok.is(tok::kw_try); // X() try { ... } } /// Parse either a function-definition or a declaration. We can't tell which /// we have until we read up to the compound-statement in function-definition. /// TemplateParams, if non-NULL, provides the template parameters when we're /// parsing a C++ template-declaration. /// /// function-definition: [C99 6.9.1] /// decl-specs declarator declaration-list[opt] compound-statement /// [C90] function-definition: [C99 6.7.1] - implicit int result /// [C90] decl-specs[opt] declarator declaration-list[opt] compound-statement /// /// declaration: [C99 6.7] /// declaration-specifiers init-declarator-list[opt] ';' /// [!C99] init-declarator-list ';' [TODO: warn in c99 mode] /// [OMP] threadprivate-directive [TODO] /// Parser::DeclGroupPtrTy Parser::ParseDeclOrFunctionDefInternal(ParsedAttributesWithRange &attrs, ParsingDeclSpec &DS, AccessSpecifier AS) { MaybeParseMicrosoftAttributes(DS.getAttributes()); // Parse the common declaration-specifiers piece. ParseDeclarationSpecifiers(DS, ParsedTemplateInfo(), AS, DSC_top_level); // If we had a free-standing type definition with a missing semicolon, we // may get this far before the problem becomes obvious. if (DS.hasTagDefinition() && DiagnoseMissingSemiAfterTagDefinition(DS, AS, DSC_top_level)) return nullptr; // C99 6.7.2.3p6: Handle "struct-or-union identifier;", "enum { X };" // declaration-specifiers init-declarator-list[opt] ';' if (Tok.is(tok::semi)) { ProhibitAttributes(attrs); ConsumeToken(); RecordDecl *AnonRecord = nullptr; Decl *TheDecl = Actions.ParsedFreeStandingDeclSpec(getCurScope(), AS_none, DS, AnonRecord); DS.complete(TheDecl); if (getLangOpts().OpenCL) Actions.setCurrentOpenCLExtensionForDecl(TheDecl); if (AnonRecord) { Decl* decls[] = {AnonRecord, TheDecl}; return Actions.BuildDeclaratorGroup(decls); } return Actions.ConvertDeclToDeclGroup(TheDecl); } DS.takeAttributesFrom(attrs); // ObjC2 allows prefix attributes on class interfaces and protocols. 
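
// --- Illustrative sketch (reviewer note, not part of this patch) -----------
// isDeclarationAfterDeclarator / isStartOfFunctionDefinition above both make
// their decision from a single token of lookahead after the declarator.  A
// standalone caricature of that idea with an invented token enum (ignoring
// the C++11 '= default' / '= delete' wrinkle handled above): '{', ':' and
// 'try' start a function body, while ';', ',' and '=' continue a declaration:
enum class TokKind { LBrace, Colon, KwTry, Semi, Comma, Equal, Other };

bool startsFunctionBody(TokKind Next) {
  switch (Next) {
  case TokKind::LBrace:   // int f() { ... }
  case TokKind::Colon:    // X() : Base() { ... }
  case TokKind::KwTry:    // X() try { ... }
    return true;
  default:
    return false;         // ';', ',', '=', ... -> plain declaration
  }
}
// --- end of sketch ---------------------------------------------------------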
// FIXME: This still needs better diagnostics. We should only accept // attributes here, no types, etc. if (getLangOpts().ObjC2 && Tok.is(tok::at)) { SourceLocation AtLoc = ConsumeToken(); // the "@" if (!Tok.isObjCAtKeyword(tok::objc_interface) && !Tok.isObjCAtKeyword(tok::objc_protocol)) { Diag(Tok, diag::err_objc_unexpected_attr); SkipUntil(tok::semi); // FIXME: better skip? return nullptr; } DS.abort(); const char *PrevSpec = nullptr; unsigned DiagID; if (DS.SetTypeSpecType(DeclSpec::TST_unspecified, AtLoc, PrevSpec, DiagID, Actions.getASTContext().getPrintingPolicy())) Diag(AtLoc, DiagID) << PrevSpec; if (Tok.isObjCAtKeyword(tok::objc_protocol)) return ParseObjCAtProtocolDeclaration(AtLoc, DS.getAttributes()); return Actions.ConvertDeclToDeclGroup( ParseObjCAtInterfaceDeclaration(AtLoc, DS.getAttributes())); } // If the declspec consisted only of 'extern' and we have a string // literal following it, this must be a C++ linkage specifier like // 'extern "C"'. if (getLangOpts().CPlusPlus && isTokenStringLiteral() && DS.getStorageClassSpec() == DeclSpec::SCS_extern && DS.getParsedSpecifiers() == DeclSpec::PQ_StorageClassSpecifier) { Decl *TheDecl = ParseLinkage(DS, Declarator::FileContext); return Actions.ConvertDeclToDeclGroup(TheDecl); } return ParseDeclGroup(DS, Declarator::FileContext); } Parser::DeclGroupPtrTy Parser::ParseDeclarationOrFunctionDefinition(ParsedAttributesWithRange &attrs, ParsingDeclSpec *DS, AccessSpecifier AS) { if (DS) { return ParseDeclOrFunctionDefInternal(attrs, *DS, AS); } else { ParsingDeclSpec PDS(*this); // Must temporarily exit the objective-c container scope for // parsing c constructs and re-enter objc container scope // afterwards. ObjCDeclContextSwitch ObjCDC(*this); return ParseDeclOrFunctionDefInternal(attrs, PDS, AS); } } /// ParseFunctionDefinition - We parsed and verified that the specified /// Declarator is well formed. If this is a K&R-style function, read the /// parameters declaration-list, then start the compound-statement. /// /// function-definition: [C99 6.9.1] /// decl-specs declarator declaration-list[opt] compound-statement /// [C90] function-definition: [C99 6.7.1] - implicit int result /// [C90] decl-specs[opt] declarator declaration-list[opt] compound-statement /// [C++] function-definition: [C++ 8.4] /// decl-specifier-seq[opt] declarator ctor-initializer[opt] /// function-body /// [C++] function-definition: [C++ 8.4] /// decl-specifier-seq[opt] declarator function-try-block /// Decl *Parser::ParseFunctionDefinition(ParsingDeclarator &D, const ParsedTemplateInfo &TemplateInfo, LateParsedAttrList *LateParsedAttrs) { // Poison SEH identifiers so they are flagged as illegal in function bodies. PoisonSEHIdentifiersRAIIObject PoisonSEHIdentifiers(*this, true); const DeclaratorChunk::FunctionTypeInfo &FTI = D.getFunctionTypeInfo(); // If this is C90 and the declspecs were completely missing, fudge in an // implicit int. We do this here because this is the only place where // declaration-specifiers are completely optional in the grammar. if (getLangOpts().ImplicitInt && D.getDeclSpec().isEmpty()) { const char *PrevSpec; unsigned DiagID; const PrintingPolicy &Policy = Actions.getASTContext().getPrintingPolicy(); D.getMutableDeclSpec().SetTypeSpecType(DeclSpec::TST_int, D.getIdentifierLoc(), PrevSpec, DiagID, Policy); D.SetRangeBegin(D.getDeclSpec().getSourceRange().getBegin()); } // If this declaration was formed with a K&R-style identifier list for the // arguments, parse declarations for all of the args next. 
// int foo(a,b) int a; float b; {} if (FTI.isKNRPrototype()) ParseKNRParamDeclarations(D); // We should have either an opening brace or, in a C++ constructor, // we may have a colon. if (Tok.isNot(tok::l_brace) && (!getLangOpts().CPlusPlus || (Tok.isNot(tok::colon) && Tok.isNot(tok::kw_try) && Tok.isNot(tok::equal)))) { Diag(Tok, diag::err_expected_fn_body); // Skip over garbage, until we get to '{'. Don't eat the '{'. SkipUntil(tok::l_brace, StopAtSemi | StopBeforeMatch); // If we didn't find the '{', bail out. if (Tok.isNot(tok::l_brace)) return nullptr; } // Check to make sure that any normal attributes are allowed to be on // a definition. Late parsed attributes are checked at the end. if (Tok.isNot(tok::equal)) { AttributeList *DtorAttrs = D.getAttributes(); while (DtorAttrs) { if (DtorAttrs->isKnownToGCC() && !DtorAttrs->isCXX11Attribute()) { Diag(DtorAttrs->getLoc(), diag::warn_attribute_on_function_definition) << DtorAttrs->getName(); } DtorAttrs = DtorAttrs->getNext(); } } // In delayed template parsing mode, for function template we consume the // tokens and store them for late parsing at the end of the translation unit. if (getLangOpts().DelayedTemplateParsing && Tok.isNot(tok::equal) && TemplateInfo.Kind == ParsedTemplateInfo::Template && Actions.canDelayFunctionBody(D)) { MultiTemplateParamsArg TemplateParameterLists(*TemplateInfo.TemplateParams); ParseScope BodyScope(this, Scope::FnScope|Scope::DeclScope); Scope *ParentScope = getCurScope()->getParent(); D.setFunctionDefinitionKind(FDK_Definition); Decl *DP = Actions.HandleDeclarator(ParentScope, D, TemplateParameterLists); D.complete(DP); D.getMutableDeclSpec().abort(); if (SkipFunctionBodies && (!DP || Actions.canSkipFunctionBody(DP)) && trySkippingFunctionBody()) { BodyScope.Exit(); return Actions.ActOnSkippedFunctionBody(DP); } CachedTokens Toks; LexTemplateFunctionForLateParsing(Toks); if (DP) { FunctionDecl *FnD = DP->getAsFunction(); Actions.CheckForFunctionRedefinition(FnD); Actions.MarkAsLateParsedTemplate(FnD, DP, Toks); } return DP; } else if (CurParsedObjCImpl && !TemplateInfo.TemplateParams && (Tok.is(tok::l_brace) || Tok.is(tok::kw_try) || Tok.is(tok::colon)) && Actions.CurContext->isTranslationUnit()) { ParseScope BodyScope(this, Scope::FnScope|Scope::DeclScope); Scope *ParentScope = getCurScope()->getParent(); D.setFunctionDefinitionKind(FDK_Definition); Decl *FuncDecl = Actions.HandleDeclarator(ParentScope, D, MultiTemplateParamsArg()); D.complete(FuncDecl); D.getMutableDeclSpec().abort(); if (FuncDecl) { // Consume the tokens and store them for later parsing. StashAwayMethodOrFunctionBodyTokens(FuncDecl); CurParsedObjCImpl->HasCFunction = true; return FuncDecl; } // FIXME: Should we really fall through here? } // Enter a scope for the function body. ParseScope BodyScope(this, Scope::FnScope|Scope::DeclScope); // Tell the actions module that we have entered a function definition with the // specified Declarator for the function. Sema::SkipBodyInfo SkipBody; Decl *Res = Actions.ActOnStartOfFunctionDef(getCurScope(), D, TemplateInfo.TemplateParams ? *TemplateInfo.TemplateParams : MultiTemplateParamsArg(), &SkipBody); if (SkipBody.ShouldSkip) { SkipFunctionBody(); return Res; } // Break out of the ParsingDeclarator context before we parse the body. D.complete(Res); // Break out of the ParsingDeclSpec context, too. This const_cast is // safe because we're always the sole owner. 
D.getMutableDeclSpec().abort(); if (TryConsumeToken(tok::equal)) { assert(getLangOpts().CPlusPlus && "Only C++ function definitions have '='"); bool Delete = false; SourceLocation KWLoc; if (TryConsumeToken(tok::kw_delete, KWLoc)) { Diag(KWLoc, getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_defaulted_deleted_function : diag::ext_defaulted_deleted_function) << 1 /* deleted */; Actions.SetDeclDeleted(Res, KWLoc); Delete = true; } else if (TryConsumeToken(tok::kw_default, KWLoc)) { Diag(KWLoc, getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_defaulted_deleted_function : diag::ext_defaulted_deleted_function) << 0 /* defaulted */; Actions.SetDeclDefaulted(Res, KWLoc); } else { llvm_unreachable("function definition after = not 'delete' or 'default'"); } if (Tok.is(tok::comma)) { Diag(KWLoc, diag::err_default_delete_in_multiple_declaration) << Delete; SkipUntil(tok::semi); } else if (ExpectAndConsume(tok::semi, diag::err_expected_after, Delete ? "delete" : "default")) { SkipUntil(tok::semi); } Stmt *GeneratedBody = Res ? Res->getBody() : nullptr; Actions.ActOnFinishFunctionBody(Res, GeneratedBody, false); return Res; } if (SkipFunctionBodies && (!Res || Actions.canSkipFunctionBody(Res)) && trySkippingFunctionBody()) { BodyScope.Exit(); Actions.ActOnSkippedFunctionBody(Res); return Actions.ActOnFinishFunctionBody(Res, nullptr, false); } if (Tok.is(tok::kw_try)) return ParseFunctionTryBlock(Res, BodyScope); // If we have a colon, then we're probably parsing a C++ // ctor-initializer. if (Tok.is(tok::colon)) { ParseConstructorInitializer(Res); // Recover from error. if (!Tok.is(tok::l_brace)) { BodyScope.Exit(); Actions.ActOnFinishFunctionBody(Res, nullptr); return Res; } } else Actions.ActOnDefaultCtorInitializers(Res); // Late attributes are parsed in the same scope as the function body. if (LateParsedAttrs) ParseLexedAttributeList(*LateParsedAttrs, Res, false, true); return ParseFunctionStatementBody(Res, BodyScope); } void Parser::SkipFunctionBody() { if (Tok.is(tok::equal)) { SkipUntil(tok::semi); return; } bool IsFunctionTryBlock = Tok.is(tok::kw_try); if (IsFunctionTryBlock) ConsumeToken(); CachedTokens Skipped; if (ConsumeAndStoreFunctionPrologue(Skipped)) SkipMalformedDecl(); else { SkipUntil(tok::r_brace); while (IsFunctionTryBlock && Tok.is(tok::kw_catch)) { SkipUntil(tok::l_brace); SkipUntil(tok::r_brace); } } } /// ParseKNRParamDeclarations - Parse 'declaration-list[opt]' which provides /// types for a function with a K&R-style identifier list for arguments. void Parser::ParseKNRParamDeclarations(Declarator &D) { // We know that the top-level of this declarator is a function. DeclaratorChunk::FunctionTypeInfo &FTI = D.getFunctionTypeInfo(); // Enter function-declaration scope, limiting any declarators to the // function prototype scope, including parameter declarators. ParseScope PrototypeScope(this, Scope::FunctionPrototypeScope | Scope::FunctionDeclarationScope | Scope::DeclScope); // Read all the argument declarations. while (isDeclarationSpecifier()) { SourceLocation DSStart = Tok.getLocation(); // Parse the common declaration-specifiers piece. DeclSpec DS(AttrFactory); ParseDeclarationSpecifiers(DS); // C99 6.9.1p6: 'each declaration in the declaration list shall have at // least one declarator'. // NOTE: GCC just makes this an ext-warn. It's not clear what it does with // the declarations though. It's trivial to ignore them, really hard to do // anything else with them. 
if (TryConsumeToken(tok::semi)) { Diag(DSStart, diag::err_declaration_does_not_declare_param); continue; } // C99 6.9.1p6: Declarations shall contain no storage-class specifiers other // than register. if (DS.getStorageClassSpec() != DeclSpec::SCS_unspecified && DS.getStorageClassSpec() != DeclSpec::SCS_register) { Diag(DS.getStorageClassSpecLoc(), diag::err_invalid_storage_class_in_func_decl); DS.ClearStorageClassSpecs(); } if (DS.getThreadStorageClassSpec() != DeclSpec::TSCS_unspecified) { Diag(DS.getThreadStorageClassSpecLoc(), diag::err_invalid_storage_class_in_func_decl); DS.ClearStorageClassSpecs(); } // Parse the first declarator attached to this declspec. Declarator ParmDeclarator(DS, Declarator::KNRTypeListContext); ParseDeclarator(ParmDeclarator); // Handle the full declarator list. while (1) { // If attributes are present, parse them. MaybeParseGNUAttributes(ParmDeclarator); // Ask the actions module to compute the type for this declarator. Decl *Param = Actions.ActOnParamDeclarator(getCurScope(), ParmDeclarator); if (Param && // A missing identifier has already been diagnosed. ParmDeclarator.getIdentifier()) { // Scan the argument list looking for the correct param to apply this // type. for (unsigned i = 0; ; ++i) { // C99 6.9.1p6: those declarators shall declare only identifiers from // the identifier list. if (i == FTI.NumParams) { Diag(ParmDeclarator.getIdentifierLoc(), diag::err_no_matching_param) << ParmDeclarator.getIdentifier(); break; } if (FTI.Params[i].Ident == ParmDeclarator.getIdentifier()) { // Reject redefinitions of parameters. if (FTI.Params[i].Param) { Diag(ParmDeclarator.getIdentifierLoc(), diag::err_param_redefinition) << ParmDeclarator.getIdentifier(); } else { FTI.Params[i].Param = Param; } break; } } } // If we don't have a comma, it is either the end of the list (a ';') or // an error, bail out. if (Tok.isNot(tok::comma)) break; ParmDeclarator.clear(); // Consume the comma. ParmDeclarator.setCommaLoc(ConsumeToken()); // Parse the next declarator. ParseDeclarator(ParmDeclarator); } // Consume ';' and continue parsing. if (!ExpectAndConsumeSemi(diag::err_expected_semi_declaration)) continue; // Otherwise recover by skipping to next semi or mandatory function body. if (SkipUntil(tok::l_brace, StopAtSemi | StopBeforeMatch)) break; TryConsumeToken(tok::semi); } // The actions module must verify that all arguments were declared. Actions.ActOnFinishKNRParamDeclarations(getCurScope(), D, Tok.getLocation()); } /// ParseAsmStringLiteral - This is just a normal string-literal, but is not /// allowed to be a wide string, and is not subject to character translation. /// /// [GNU] asm-string-literal: /// string-literal /// ExprResult Parser::ParseAsmStringLiteral() { if (!isTokenStringLiteral()) { Diag(Tok, diag::err_expected_string_literal) << /*Source='in...'*/0 << "'asm'"; return ExprError(); } ExprResult AsmString(ParseStringLiteralExpression()); if (!AsmString.isInvalid()) { const auto *SL = cast(AsmString.get()); if (!SL->isAscii()) { Diag(Tok, diag::err_asm_operand_wide_string_literal) << SL->isWide() << SL->getSourceRange(); return ExprError(); } } return AsmString; } /// ParseSimpleAsm /// /// [GNU] simple-asm-expr: /// 'asm' '(' asm-string-literal ')' /// ExprResult Parser::ParseSimpleAsm(SourceLocation *EndLoc) { assert(Tok.is(tok::kw_asm) && "Not an asm!"); SourceLocation Loc = ConsumeToken(); if (Tok.is(tok::kw_volatile)) { // Remove from the end of 'asm' to the end of 'volatile'. 
SourceRange RemovalRange(PP.getLocForEndOfToken(Loc), PP.getLocForEndOfToken(Tok.getLocation())); Diag(Tok, diag::warn_file_asm_volatile) << FixItHint::CreateRemoval(RemovalRange); ConsumeToken(); } BalancedDelimiterTracker T(*this, tok::l_paren); if (T.consumeOpen()) { Diag(Tok, diag::err_expected_lparen_after) << "asm"; return ExprError(); } ExprResult Result(ParseAsmStringLiteral()); if (!Result.isInvalid()) { // Close the paren and get the location of the end bracket T.consumeClose(); if (EndLoc) *EndLoc = T.getCloseLocation(); } else if (SkipUntil(tok::r_paren, StopAtSemi | StopBeforeMatch)) { if (EndLoc) *EndLoc = Tok.getLocation(); ConsumeParen(); } return Result; } /// \brief Get the TemplateIdAnnotation from the token and put it in the /// cleanup pool so that it gets destroyed when parsing the current top level /// declaration is finished. TemplateIdAnnotation *Parser::takeTemplateIdAnnotation(const Token &tok) { assert(tok.is(tok::annot_template_id) && "Expected template-id token"); TemplateIdAnnotation * Id = static_cast(tok.getAnnotationValue()); return Id; } void Parser::AnnotateScopeToken(CXXScopeSpec &SS, bool IsNewAnnotation) { // Push the current token back into the token stream (or revert it if it is // cached) and use an annotation scope token for current token. if (PP.isBacktrackEnabled()) PP.RevertCachedTokens(1); else PP.EnterToken(Tok); Tok.setKind(tok::annot_cxxscope); Tok.setAnnotationValue(Actions.SaveNestedNameSpecifierAnnotation(SS)); Tok.setAnnotationRange(SS.getRange()); // In case the tokens were cached, have Preprocessor replace them // with the annotation token. We don't need to do this if we've // just reverted back to a prior state. if (IsNewAnnotation) PP.AnnotateCachedTokens(Tok); } /// \brief Attempt to classify the name at the current token position. This may /// form a type, scope or primary expression annotation, or replace the token /// with a typo-corrected keyword. This is only appropriate when the current /// name must refer to an entity which has already been declared. /// /// \param IsAddressOfOperand Must be \c true if the name is preceded by an '&' /// and might possibly have a dependent nested name specifier. /// \param CCC Indicates how to perform typo-correction for this name. If NULL, /// no typo correction will be performed. Parser::AnnotatedNameKind Parser::TryAnnotateName(bool IsAddressOfOperand, std::unique_ptr CCC) { assert(Tok.is(tok::identifier) || Tok.is(tok::annot_cxxscope)); const bool EnteringContext = false; const bool WasScopeAnnotation = Tok.is(tok::annot_cxxscope); CXXScopeSpec SS; if (getLangOpts().CPlusPlus && ParseOptionalCXXScopeSpecifier(SS, nullptr, EnteringContext)) return ANK_Error; if (Tok.isNot(tok::identifier) || SS.isInvalid()) { if (TryAnnotateTypeOrScopeTokenAfterScopeSpec(SS, !WasScopeAnnotation)) return ANK_Error; return ANK_Unresolved; } IdentifierInfo *Name = Tok.getIdentifierInfo(); SourceLocation NameLoc = Tok.getLocation(); // FIXME: Move the tentative declaration logic into ClassifyName so we can // typo-correct to tentatively-declared identifiers. if (isTentativelyDeclared(Name)) { // Identifier has been tentatively declared, and thus cannot be resolved as // an expression. Fall back to annotating it as a type. if (TryAnnotateTypeOrScopeTokenAfterScopeSpec(SS, !WasScopeAnnotation)) return ANK_Error; return Tok.is(tok::annot_typename) ? ANK_Success : ANK_TentativeDecl; } Token Next = NextToken(); // Look up and classify the identifier. 
We don't perform any typo-correction // after a scope specifier, because in general we can't recover from typos // there (eg, after correcting 'A::tempalte B::C' [sic], we would need to // jump back into scope specifier parsing). Sema::NameClassification Classification = Actions.ClassifyName( getCurScope(), SS, Name, NameLoc, Next, IsAddressOfOperand, SS.isEmpty() ? std::move(CCC) : nullptr); switch (Classification.getKind()) { case Sema::NC_Error: return ANK_Error; case Sema::NC_Keyword: // The identifier was typo-corrected to a keyword. Tok.setIdentifierInfo(Name); Tok.setKind(Name->getTokenID()); PP.TypoCorrectToken(Tok); if (SS.isNotEmpty()) AnnotateScopeToken(SS, !WasScopeAnnotation); // We've "annotated" this as a keyword. return ANK_Success; case Sema::NC_Unknown: // It's not something we know about. Leave it unannotated. break; case Sema::NC_Type: { SourceLocation BeginLoc = NameLoc; if (SS.isNotEmpty()) BeginLoc = SS.getBeginLoc(); /// An Objective-C object type followed by '<' is a specialization of /// a parameterized class type or a protocol-qualified type. ParsedType Ty = Classification.getType(); if (getLangOpts().ObjC1 && NextToken().is(tok::less) && (Ty.get()->isObjCObjectType() || Ty.get()->isObjCObjectPointerType())) { // Consume the name. SourceLocation IdentifierLoc = ConsumeToken(); SourceLocation NewEndLoc; TypeResult NewType = parseObjCTypeArgsAndProtocolQualifiers(IdentifierLoc, Ty, /*consumeLastToken=*/false, NewEndLoc); if (NewType.isUsable()) Ty = NewType.get(); else if (Tok.is(tok::eof)) // Nothing to do here, bail out... return ANK_Error; } Tok.setKind(tok::annot_typename); setTypeAnnotation(Tok, Ty); Tok.setAnnotationEndLoc(Tok.getLocation()); Tok.setLocation(BeginLoc); PP.AnnotateCachedTokens(Tok); return ANK_Success; } case Sema::NC_Expression: Tok.setKind(tok::annot_primary_expr); setExprAnnotation(Tok, Classification.getExpression()); Tok.setAnnotationEndLoc(NameLoc); if (SS.isNotEmpty()) Tok.setLocation(SS.getBeginLoc()); PP.AnnotateCachedTokens(Tok); return ANK_Success; case Sema::NC_TypeTemplate: if (Next.isNot(tok::less)) { // This may be a type template being used as a template template argument. if (SS.isNotEmpty()) AnnotateScopeToken(SS, !WasScopeAnnotation); return ANK_TemplateName; } // Fall through. case Sema::NC_VarTemplate: case Sema::NC_FunctionTemplate: { // We have a type, variable or function template followed by '<'. ConsumeToken(); UnqualifiedId Id; Id.setIdentifier(Name, NameLoc); if (AnnotateTemplateIdToken( TemplateTy::make(Classification.getTemplateName()), Classification.getTemplateNameKind(), SS, SourceLocation(), Id)) return ANK_Error; return ANK_Success; } case Sema::NC_NestedNameSpecifier: llvm_unreachable("already parsed nested name specifier"); } // Unable to classify the name, but maybe we can annotate a scope specifier. if (SS.isNotEmpty()) AnnotateScopeToken(SS, !WasScopeAnnotation); return ANK_Unresolved; } bool Parser::TryKeywordIdentFallback(bool DisableKeyword) { assert(Tok.isNot(tok::identifier)); Diag(Tok, diag::ext_keyword_as_ident) << PP.getSpelling(Tok) << DisableKeyword; if (DisableKeyword) Tok.getIdentifierInfo()->revertTokenIDToIdentifier(); Tok.setKind(tok::identifier); return true; } /// TryAnnotateTypeOrScopeToken - If the current token position is on a /// typename (possibly qualified in C++) or a C++ scope specifier not followed /// by a typename, TryAnnotateTypeOrScopeToken will replace one or more tokens /// with a single annotation token representing the typename or C++ scope /// respectively. 
/// This simplifies handling of C++ scope specifiers and allows efficient /// backtracking without the need to re-parse and resolve nested-names and /// typenames. /// It will mainly be called when we expect to treat identifiers as typenames /// (if they are typenames). For example, in C we do not expect identifiers /// inside expressions to be treated as typenames so it will not be called /// for expressions in C. /// The benefit for C/ObjC is that a typename will be annotated and /// Actions.getTypeName will not be needed to be called again (e.g. getTypeName /// will not be called twice, once to check whether we have a declaration /// specifier, and another one to get the actual type inside /// ParseDeclarationSpecifiers). /// /// This returns true if an error occurred. /// /// Note that this routine emits an error if you call it with ::new or ::delete /// as the current tokens, so only call it in contexts where these are invalid. bool Parser::TryAnnotateTypeOrScopeToken() { assert((Tok.is(tok::identifier) || Tok.is(tok::coloncolon) || Tok.is(tok::kw_typename) || Tok.is(tok::annot_cxxscope) || Tok.is(tok::kw_decltype) || Tok.is(tok::annot_template_id) || Tok.is(tok::kw___super)) && "Cannot be a type or scope token!"); if (Tok.is(tok::kw_typename)) { // MSVC lets you do stuff like: // typename typedef T_::D D; // // We will consume the typedef token here and put it back after we have // parsed the first identifier, transforming it into something more like: // typename T_::D typedef D; if (getLangOpts().MSVCCompat && NextToken().is(tok::kw_typedef)) { Token TypedefToken; PP.Lex(TypedefToken); bool Result = TryAnnotateTypeOrScopeToken(); PP.EnterToken(Tok); Tok = TypedefToken; if (!Result) Diag(Tok.getLocation(), diag::warn_expected_qualified_after_typename); return Result; } // Parse a C++ typename-specifier, e.g., "typename T::type". // // typename-specifier: // 'typename' '::' [opt] nested-name-specifier identifier // 'typename' '::' [opt] nested-name-specifier template [opt] // simple-template-id SourceLocation TypenameLoc = ConsumeToken(); CXXScopeSpec SS; if (ParseOptionalCXXScopeSpecifier(SS, /*ObjectType=*/nullptr, /*EnteringContext=*/false, nullptr, /*IsTypename*/ true)) return true; if (!SS.isSet()) { if (Tok.is(tok::identifier) || Tok.is(tok::annot_template_id) || Tok.is(tok::annot_decltype)) { // Attempt to recover by skipping the invalid 'typename' if (Tok.is(tok::annot_decltype) || (!TryAnnotateTypeOrScopeToken() && Tok.isAnnotation())) { unsigned DiagID = diag::err_expected_qualified_after_typename; // MS compatibility: MSVC permits using known types with typename. // e.g. "typedef typename T* pointer_type" if (getLangOpts().MicrosoftExt) DiagID = diag::warn_expected_qualified_after_typename; Diag(Tok.getLocation(), DiagID); return false; } } if (Tok.isEditorPlaceholder()) return true; Diag(Tok.getLocation(), diag::err_expected_qualified_after_typename); return true; } TypeResult Ty; if (Tok.is(tok::identifier)) { // FIXME: check whether the next token is '<', first! 
Ty = Actions.ActOnTypenameType(getCurScope(), TypenameLoc, SS, *Tok.getIdentifierInfo(), Tok.getLocation()); } else if (Tok.is(tok::annot_template_id)) { TemplateIdAnnotation *TemplateId = takeTemplateIdAnnotation(Tok); if (TemplateId->Kind != TNK_Type_template && TemplateId->Kind != TNK_Dependent_template_name) { Diag(Tok, diag::err_typename_refers_to_non_type_template) << Tok.getAnnotationRange(); return true; } ASTTemplateArgsPtr TemplateArgsPtr(TemplateId->getTemplateArgs(), TemplateId->NumArgs); Ty = Actions.ActOnTypenameType(getCurScope(), TypenameLoc, SS, TemplateId->TemplateKWLoc, TemplateId->Template, TemplateId->Name, TemplateId->TemplateNameLoc, TemplateId->LAngleLoc, TemplateArgsPtr, TemplateId->RAngleLoc); } else { Diag(Tok, diag::err_expected_type_name_after_typename) << SS.getRange(); return true; } SourceLocation EndLoc = Tok.getLastLoc(); Tok.setKind(tok::annot_typename); setTypeAnnotation(Tok, Ty.isInvalid() ? nullptr : Ty.get()); Tok.setAnnotationEndLoc(EndLoc); Tok.setLocation(TypenameLoc); PP.AnnotateCachedTokens(Tok); return false; } // Remembers whether the token was originally a scope annotation. bool WasScopeAnnotation = Tok.is(tok::annot_cxxscope); CXXScopeSpec SS; if (getLangOpts().CPlusPlus) if (ParseOptionalCXXScopeSpecifier(SS, nullptr, /*EnteringContext*/false)) return true; return TryAnnotateTypeOrScopeTokenAfterScopeSpec(SS, !WasScopeAnnotation); } /// \brief Try to annotate a type or scope token, having already parsed an /// optional scope specifier. \p IsNewScope should be \c true unless the scope /// specifier was extracted from an existing tok::annot_cxxscope annotation. bool Parser::TryAnnotateTypeOrScopeTokenAfterScopeSpec(CXXScopeSpec &SS, bool IsNewScope) { if (Tok.is(tok::identifier)) { // Determine whether the identifier is a type name. if (ParsedType Ty = Actions.getTypeName( *Tok.getIdentifierInfo(), Tok.getLocation(), getCurScope(), &SS, false, NextToken().is(tok::period), nullptr, /*IsCtorOrDtorName=*/false, /*NonTrivialTypeSourceInfo*/ true, /*IsClassTemplateDeductionContext*/GreaterThanIsOperator)) { SourceLocation BeginLoc = Tok.getLocation(); if (SS.isNotEmpty()) // it was a C++ qualified type name. BeginLoc = SS.getBeginLoc(); /// An Objective-C object type followed by '<' is a specialization of /// a parameterized class type or a protocol-qualified type. if (getLangOpts().ObjC1 && NextToken().is(tok::less) && (Ty.get()->isObjCObjectType() || Ty.get()->isObjCObjectPointerType())) { // Consume the name. SourceLocation IdentifierLoc = ConsumeToken(); SourceLocation NewEndLoc; TypeResult NewType = parseObjCTypeArgsAndProtocolQualifiers(IdentifierLoc, Ty, /*consumeLastToken=*/false, NewEndLoc); if (NewType.isUsable()) Ty = NewType.get(); else if (Tok.is(tok::eof)) // Nothing to do here, bail out... return false; } // This is a typename. Replace the current token in-place with an // annotation type token. Tok.setKind(tok::annot_typename); setTypeAnnotation(Tok, Ty); Tok.setAnnotationEndLoc(Tok.getLocation()); Tok.setLocation(BeginLoc); // In case the tokens were cached, have Preprocessor replace // them with the annotation token. PP.AnnotateCachedTokens(Tok); return false; } if (!getLangOpts().CPlusPlus) { // If we're in C, we can't have :: tokens at all (the lexer won't return // them). If the identifier is not a type, then it can't be scope either, // just early exit. return false; } // If this is a template-id, annotate with a template-id or type token. 
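    // Illustration (hypothetical input, not from the original source): given
    // code such as
    //
    //   std::vector<int> v;
    //
    // the tokens 'vector', '<', 'int', '>' that follow the already-annotated
    // scope specifier are folded below into a single annot_template_id token
    // (and, for a type template, later into an annot_typename token), so the
    // rest of the parser sees one annotated token instead of re-resolving the
    // template-id.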
if (NextToken().is(tok::less)) { TemplateTy Template; UnqualifiedId TemplateName; TemplateName.setIdentifier(Tok.getIdentifierInfo(), Tok.getLocation()); bool MemberOfUnknownSpecialization; if (TemplateNameKind TNK = Actions.isTemplateName( getCurScope(), SS, /*hasTemplateKeyword=*/false, TemplateName, /*ObjectType=*/nullptr, /*EnteringContext*/false, Template, MemberOfUnknownSpecialization)) { // Consume the identifier. ConsumeToken(); if (AnnotateTemplateIdToken(Template, TNK, SS, SourceLocation(), TemplateName)) { // If an unrecoverable error occurred, we need to return true here, // because the token stream is in a damaged state. We may not return // a valid identifier. return true; } } } // The current token, which is either an identifier or a // template-id, is not part of the annotation. Fall through to // push that token back into the stream and complete the C++ scope // specifier annotation. } if (Tok.is(tok::annot_template_id)) { TemplateIdAnnotation *TemplateId = takeTemplateIdAnnotation(Tok); if (TemplateId->Kind == TNK_Type_template) { // A template-id that refers to a type was parsed into a // template-id annotation in a context where we weren't allowed // to produce a type annotation token. Update the template-id // annotation token to a type annotation token now. AnnotateTemplateIdTokenAsType(); return false; } } if (SS.isEmpty()) return false; // A C++ scope specifier that isn't followed by a typename. AnnotateScopeToken(SS, IsNewScope); return false; } /// TryAnnotateScopeToken - Like TryAnnotateTypeOrScopeToken but only /// annotates C++ scope specifiers and template-ids. This returns /// true if there was an error that could not be recovered from. /// /// Note that this routine emits an error if you call it with ::new or ::delete /// as the current tokens, so only call it in contexts where these are invalid. 
bool Parser::TryAnnotateCXXScopeToken(bool EnteringContext) { assert(getLangOpts().CPlusPlus && "Call sites of this function should be guarded by checking for C++"); assert((Tok.is(tok::identifier) || Tok.is(tok::coloncolon) || (Tok.is(tok::annot_template_id) && NextToken().is(tok::coloncolon)) || Tok.is(tok::kw_decltype) || Tok.is(tok::kw___super)) && "Cannot be a type or scope token!"); CXXScopeSpec SS; if (ParseOptionalCXXScopeSpecifier(SS, nullptr, EnteringContext)) return true; if (SS.isEmpty()) return false; AnnotateScopeToken(SS, true); return false; } bool Parser::isTokenEqualOrEqualTypo() { tok::TokenKind Kind = Tok.getKind(); switch (Kind) { default: return false; case tok::ampequal: // &= case tok::starequal: // *= case tok::plusequal: // += case tok::minusequal: // -= case tok::exclaimequal: // != case tok::slashequal: // /= case tok::percentequal: // %= case tok::lessequal: // <= case tok::lesslessequal: // <<= case tok::greaterequal: // >= case tok::greatergreaterequal: // >>= case tok::caretequal: // ^= case tok::pipeequal: // |= case tok::equalequal: // == Diag(Tok, diag::err_invalid_token_after_declarator_suggest_equal) << Kind << FixItHint::CreateReplacement(SourceRange(Tok.getLocation()), "="); LLVM_FALLTHROUGH; case tok::equal: return true; } } SourceLocation Parser::handleUnexpectedCodeCompletionToken() { assert(Tok.is(tok::code_completion)); PrevTokLocation = Tok.getLocation(); for (Scope *S = getCurScope(); S; S = S->getParent()) { if (S->getFlags() & Scope::FnScope) { Actions.CodeCompleteOrdinaryName(getCurScope(), Sema::PCC_RecoveryInFunction); cutOffParsing(); return PrevTokLocation; } if (S->getFlags() & Scope::ClassScope) { Actions.CodeCompleteOrdinaryName(getCurScope(), Sema::PCC_Class); cutOffParsing(); return PrevTokLocation; } } Actions.CodeCompleteOrdinaryName(getCurScope(), Sema::PCC_Namespace); cutOffParsing(); return PrevTokLocation; } // Code-completion pass-through functions void Parser::CodeCompleteDirective(bool InConditional) { Actions.CodeCompletePreprocessorDirective(InConditional); } void Parser::CodeCompleteInConditionalExclusion() { Actions.CodeCompleteInPreprocessorConditionalExclusion(getCurScope()); } void Parser::CodeCompleteMacroName(bool IsDefinition) { Actions.CodeCompletePreprocessorMacroName(IsDefinition); } void Parser::CodeCompletePreprocessorExpression() { Actions.CodeCompletePreprocessorExpression(); } void Parser::CodeCompleteMacroArgument(IdentifierInfo *Macro, MacroInfo *MacroInfo, unsigned ArgumentIndex) { Actions.CodeCompletePreprocessorMacroArgument(getCurScope(), Macro, MacroInfo, ArgumentIndex); } void Parser::CodeCompleteNaturalLanguage() { Actions.CodeCompleteNaturalLanguage(); } bool Parser::ParseMicrosoftIfExistsCondition(IfExistsCondition& Result) { assert((Tok.is(tok::kw___if_exists) || Tok.is(tok::kw___if_not_exists)) && "Expected '__if_exists' or '__if_not_exists'"); Result.IsIfExists = Tok.is(tok::kw___if_exists); Result.KeywordLoc = ConsumeToken(); BalancedDelimiterTracker T(*this, tok::l_paren); if (T.consumeOpen()) { Diag(Tok, diag::err_expected_lparen_after) << (Result.IsIfExists? "__if_exists" : "__if_not_exists"); return true; } // Parse nested-name-specifier. if (getLangOpts().CPlusPlus) ParseOptionalCXXScopeSpecifier(Result.SS, nullptr, /*EnteringContext=*/false); // Check nested-name specifier. if (Result.SS.isInvalid()) { T.skipToEnd(); return true; } // Parse the unqualified-id. SourceLocation TemplateKWLoc; // FIXME: parsed, but unused. 
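  // Illustration (hypothetical input, not from the original source): at this
  // point we are inside the parentheses of a Microsoft extension such as
  //
  //   __if_exists(SomeClass::SomeMember) { /* parsed only if it exists */ }
  //
  // Result.SS already holds 'SomeClass::', and the call below parses the
  // trailing unqualified-id 'SomeMember'.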
if (ParseUnqualifiedId( Result.SS, /*EnteringContext*/false, /*AllowDestructorName*/true, /*AllowConstructorName*/true, /*AllowDeductionGuide*/false, nullptr, TemplateKWLoc, Result.Name)) { T.skipToEnd(); return true; } if (T.consumeClose()) return true; // Check if the symbol exists. switch (Actions.CheckMicrosoftIfExistsSymbol(getCurScope(), Result.KeywordLoc, Result.IsIfExists, Result.SS, Result.Name)) { case Sema::IER_Exists: Result.Behavior = Result.IsIfExists ? IEB_Parse : IEB_Skip; break; case Sema::IER_DoesNotExist: Result.Behavior = !Result.IsIfExists ? IEB_Parse : IEB_Skip; break; case Sema::IER_Dependent: Result.Behavior = IEB_Dependent; break; case Sema::IER_Error: return true; } return false; } void Parser::ParseMicrosoftIfExistsExternalDeclaration() { IfExistsCondition Result; if (ParseMicrosoftIfExistsCondition(Result)) return; BalancedDelimiterTracker Braces(*this, tok::l_brace); if (Braces.consumeOpen()) { Diag(Tok, diag::err_expected) << tok::l_brace; return; } switch (Result.Behavior) { case IEB_Parse: // Parse declarations below. break; case IEB_Dependent: llvm_unreachable("Cannot have a dependent external declaration"); case IEB_Skip: Braces.skipToEnd(); return; } // Parse the declarations. // FIXME: Support module import within __if_exists? while (Tok.isNot(tok::r_brace) && !isEofOrEom()) { ParsedAttributesWithRange attrs(AttrFactory); MaybeParseCXX11Attributes(attrs); DeclGroupPtrTy Result = ParseExternalDeclaration(attrs); if (Result && !getCurScope()->getParent()) Actions.getASTConsumer().HandleTopLevelDecl(Result.get()); } Braces.consumeClose(); } /// Parse a C++ Modules TS module declaration, which appears at the beginning /// of a module interface, module partition, or module implementation file. /// /// module-declaration: [Modules TS + P0273R0 + P0629R0] /// 'export'[opt] 'module' 'partition'[opt] /// module-name attribute-specifier-seq[opt] ';' /// /// Note that 'partition' is a context-sensitive keyword. Parser::DeclGroupPtrTy Parser::ParseModuleDecl() { SourceLocation StartLoc = Tok.getLocation(); Sema::ModuleDeclKind MDK = TryConsumeToken(tok::kw_export) ? Sema::ModuleDeclKind::Module : Sema::ModuleDeclKind::Implementation; assert(Tok.is(tok::kw_module) && "not a module declaration"); SourceLocation ModuleLoc = ConsumeToken(); if (Tok.is(tok::identifier) && NextToken().is(tok::identifier) && Tok.getIdentifierInfo()->isStr("partition")) { // If 'partition' is present, this must be a module interface unit. if (MDK != Sema::ModuleDeclKind::Module) Diag(Tok.getLocation(), diag::err_module_implementation_partition) << FixItHint::CreateInsertion(ModuleLoc, "export "); MDK = Sema::ModuleDeclKind::Partition; ConsumeToken(); } SmallVector, 2> Path; if (ParseModuleName(ModuleLoc, Path, /*IsImport*/false)) return nullptr; // We don't support any module attributes yet; just parse them and diagnose. ParsedAttributesWithRange Attrs(AttrFactory); MaybeParseCXX11Attributes(Attrs); ProhibitCXX11Attributes(Attrs, diag::err_attribute_not_module_attr); ExpectAndConsumeSemi(diag::err_module_expected_semi); return Actions.ActOnModuleDecl(StartLoc, ModuleLoc, MDK, Path); } /// Parse a module import declaration. This is essentially the same for /// Objective-C and the C++ Modules TS, except for the leading '@' (in ObjC) /// and the trailing optional attributes (in C++). 
///
/// [ObjC]  @import declaration:
///           '@' 'import' module-name ';'
/// [ModTS] module-import-declaration:
///           'import' module-name attribute-specifier-seq[opt] ';'
Parser::DeclGroupPtrTy Parser::ParseModuleImport(SourceLocation AtLoc) {
  assert((AtLoc.isInvalid() ? Tok.is(tok::kw_import)
                            : Tok.isObjCAtKeyword(tok::objc_import)) &&
         "Improper start to module import");
  SourceLocation ImportLoc = ConsumeToken();
  SourceLocation StartLoc = AtLoc.isInvalid() ? ImportLoc : AtLoc;

  SmallVector<std::pair<IdentifierInfo *, SourceLocation>, 2> Path;
  if (ParseModuleName(ImportLoc, Path, /*IsImport*/true))
    return nullptr;

  ParsedAttributesWithRange Attrs(AttrFactory);
  MaybeParseCXX11Attributes(Attrs);
  // We don't support any module import attributes yet.
  ProhibitCXX11Attributes(Attrs, diag::err_attribute_not_import_attr);

  if (PP.hadModuleLoaderFatalFailure()) {
    // With a fatal failure in the module loader, we abort parsing.
    cutOffParsing();
    return nullptr;
  }

  DeclResult Import = Actions.ActOnModuleImport(StartLoc, ImportLoc, Path);
  ExpectAndConsumeSemi(diag::err_module_expected_semi);
  if (Import.isInvalid())
    return nullptr;

  return Actions.ConvertDeclToDeclGroup(Import.get());
}

/// Parse a C++ Modules TS / Objective-C module name (both forms use the same
/// grammar).
///
///         module-name:
///           module-name-qualifier[opt] identifier
///         module-name-qualifier:
///           module-name-qualifier[opt] identifier '.'
bool Parser::ParseModuleName(
    SourceLocation UseLoc,
    SmallVectorImpl<std::pair<IdentifierInfo *, SourceLocation>> &Path,
    bool IsImport) {
  // Parse the module path.
  while (true) {
    if (!Tok.is(tok::identifier)) {
      if (Tok.is(tok::code_completion)) {
        Actions.CodeCompleteModuleImport(UseLoc, Path);
        cutOffParsing();
        return true;
      }

      Diag(Tok, diag::err_module_expected_ident) << IsImport;
      SkipUntil(tok::semi);
      return true;
    }

    // Record this part of the module path.
    Path.push_back(std::make_pair(Tok.getIdentifierInfo(), Tok.getLocation()));
    ConsumeToken();

    if (Tok.isNot(tok::period))
      return false;

    ConsumeToken();
  }
}

/// \brief Try recover parser when module annotation appears where it must not
/// be found.
/// \returns false if the recover was successful and parsing may be continued, or
/// true if parser must bail out to top level and handle the token there.
bool Parser::parseMisplacedModuleImport() {
  while (true) {
    switch (Tok.getKind()) {
    case tok::annot_module_end:
      // If we recovered from a misplaced module begin, we expect to hit a
      // misplaced module end too. Stay in the current context when this
      // happens.
      if (MisplacedModuleBeginCount) {
        --MisplacedModuleBeginCount;
        Actions.ActOnModuleEnd(Tok.getLocation(),
                               reinterpret_cast<Module *>(
                                   Tok.getAnnotationValue()));
        ConsumeAnnotationToken();
        continue;
      }
      // Inform caller that recovery failed, the error must be handled at upper
      // level. This will generate the desired "missing '}' at end of module"
      // diagnostics on the way out.
      return true;
    case tok::annot_module_begin:
      // Recover by entering the module (Sema will diagnose).
      Actions.ActOnModuleBegin(Tok.getLocation(),
                               reinterpret_cast<Module *>(
                                   Tok.getAnnotationValue()));
      ConsumeAnnotationToken();
      ++MisplacedModuleBeginCount;
      continue;
    case tok::annot_module_include:
      // Module import found where it should not be, for instance, inside a
      // namespace. Recover by importing the module.
      Actions.ActOnModuleInclude(Tok.getLocation(),
                                 reinterpret_cast<Module *>(
                                     Tok.getAnnotationValue()));
      ConsumeAnnotationToken();
      // If there is another module import, process it.
      continue;
    default:
      return false;
    }
  }
  return false;
}

bool BalancedDelimiterTracker::diagnoseOverflow() {
  P.Diag(P.Tok, diag::err_bracket_depth_exceeded)
    << P.getLangOpts().BracketDepth;
  P.Diag(P.Tok, diag::note_bracket_depth);
  P.cutOffParsing();
  return true;
}

bool BalancedDelimiterTracker::expectAndConsume(unsigned DiagID,
                                                const char *Msg,
                                                tok::TokenKind SkipToTok) {
  LOpen = P.Tok.getLocation();
  if (P.ExpectAndConsume(Kind, DiagID, Msg)) {
    if (SkipToTok != tok::unknown)
      P.SkipUntil(SkipToTok, Parser::StopAtSemi);
    return true;
  }

  if (getDepth() < MaxDepth)
    return false;

  return diagnoseOverflow();
}

bool BalancedDelimiterTracker::diagnoseMissingClose() {
  assert(!P.Tok.is(Close) && "Should have consumed closing delimiter");

  if (P.Tok.is(tok::annot_module_end))
    P.Diag(P.Tok, diag::err_missing_before_module_end) << Close;
  else
    P.Diag(P.Tok, diag::err_expected) << Close;
  P.Diag(LOpen, diag::note_matching) << Kind;

  // If we're not already at some kind of closing bracket, skip to our closing
  // token.
  if (P.Tok.isNot(tok::r_paren) && P.Tok.isNot(tok::r_brace) &&
      P.Tok.isNot(tok::r_square) &&
      P.SkipUntil(Close, FinalToken,
                  Parser::StopAtSemi | Parser::StopBeforeMatch) &&
      P.Tok.is(Close))
    LClose = P.ConsumeAnyToken();
  return true;
}

void BalancedDelimiterTracker::skipToEnd() {
  P.SkipUntil(Close, Parser::StopBeforeMatch);
  consumeClose();
}
diff --git a/lib/Sema/SemaDeclCXX.cpp b/lib/Sema/SemaDeclCXX.cpp
index e9070881afe4..c05e5f020708 100644
--- a/lib/Sema/SemaDeclCXX.cpp
+++ b/lib/Sema/SemaDeclCXX.cpp
@@ -1,14920 +1,14970 @@
//===------ SemaDeclCXX.cpp - Semantic Analysis for C++ Declarations ------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
//  This file implements semantic analysis for C++ declarations.
//
//===----------------------------------------------------------------------===//

#include "clang/AST/ASTConsumer.h"
#include "clang/AST/ASTContext.h"
#include "clang/AST/ASTLambda.h"
#include "clang/AST/ASTMutationListener.h"
#include "clang/AST/CXXInheritance.h"
#include "clang/AST/CharUnits.h"
#include "clang/AST/EvaluatedExprVisitor.h"
#include "clang/AST/ExprCXX.h"
#include "clang/AST/RecordLayout.h"
#include "clang/AST/RecursiveASTVisitor.h"
#include "clang/AST/StmtVisitor.h"
#include "clang/AST/TypeLoc.h"
#include "clang/AST/TypeOrdering.h"
#include "clang/Basic/PartialDiagnostic.h"
#include "clang/Basic/TargetInfo.h"
#include "clang/Lex/LiteralSupport.h"
#include "clang/Lex/Preprocessor.h"
#include "clang/Sema/CXXFieldCollector.h"
#include "clang/Sema/DeclSpec.h"
#include "clang/Sema/Initialization.h"
#include "clang/Sema/Lookup.h"
#include "clang/Sema/ParsedTemplate.h"
#include "clang/Sema/Scope.h"
#include "clang/Sema/ScopeInfo.h"
#include "clang/Sema/SemaInternal.h"
#include "clang/Sema/Template.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/StringExtras.h"
#include <map>
#include <set>

using namespace clang;

//===----------------------------------------------------------------------===//
// CheckDefaultArgumentVisitor
//===----------------------------------------------------------------------===//

namespace {
  /// CheckDefaultArgumentVisitor - C++ [dcl.fct.default] Traverses
  /// the default argument of a parameter to determine whether it
  /// contains any ill-formed subexpressions.
For example, this will /// diagnose the use of local variables or parameters within the /// default argument expression. class CheckDefaultArgumentVisitor : public StmtVisitor { Expr *DefaultArg; Sema *S; public: CheckDefaultArgumentVisitor(Expr *defarg, Sema *s) : DefaultArg(defarg), S(s) {} bool VisitExpr(Expr *Node); bool VisitDeclRefExpr(DeclRefExpr *DRE); bool VisitCXXThisExpr(CXXThisExpr *ThisE); bool VisitLambdaExpr(LambdaExpr *Lambda); bool VisitPseudoObjectExpr(PseudoObjectExpr *POE); }; /// VisitExpr - Visit all of the children of this expression. bool CheckDefaultArgumentVisitor::VisitExpr(Expr *Node) { bool IsInvalid = false; for (Stmt *SubStmt : Node->children()) IsInvalid |= Visit(SubStmt); return IsInvalid; } /// VisitDeclRefExpr - Visit a reference to a declaration, to /// determine whether this declaration can be used in the default /// argument expression. bool CheckDefaultArgumentVisitor::VisitDeclRefExpr(DeclRefExpr *DRE) { NamedDecl *Decl = DRE->getDecl(); if (ParmVarDecl *Param = dyn_cast(Decl)) { // C++ [dcl.fct.default]p9 // Default arguments are evaluated each time the function is // called. The order of evaluation of function arguments is // unspecified. Consequently, parameters of a function shall not // be used in default argument expressions, even if they are not // evaluated. Parameters of a function declared before a default // argument expression are in scope and can hide namespace and // class member names. return S->Diag(DRE->getLocStart(), diag::err_param_default_argument_references_param) << Param->getDeclName() << DefaultArg->getSourceRange(); } else if (VarDecl *VDecl = dyn_cast(Decl)) { // C++ [dcl.fct.default]p7 // Local variables shall not be used in default argument // expressions. if (VDecl->isLocalVarDecl()) return S->Diag(DRE->getLocStart(), diag::err_param_default_argument_references_local) << VDecl->getDeclName() << DefaultArg->getSourceRange(); } return false; } /// VisitCXXThisExpr - Visit a C++ "this" expression. bool CheckDefaultArgumentVisitor::VisitCXXThisExpr(CXXThisExpr *ThisE) { // C++ [dcl.fct.default]p8: // The keyword this shall not be used in a default argument of a // member function. return S->Diag(ThisE->getLocStart(), diag::err_param_default_argument_references_this) << ThisE->getSourceRange(); } bool CheckDefaultArgumentVisitor::VisitPseudoObjectExpr(PseudoObjectExpr *POE) { bool Invalid = false; for (PseudoObjectExpr::semantics_iterator i = POE->semantics_begin(), e = POE->semantics_end(); i != e; ++i) { Expr *E = *i; // Look through bindings. if (OpaqueValueExpr *OVE = dyn_cast(E)) { E = OVE->getSourceExpr(); assert(E && "pseudo-object binding without source expression?"); } Invalid |= Visit(E); } return Invalid; } bool CheckDefaultArgumentVisitor::VisitLambdaExpr(LambdaExpr *Lambda) { // C++11 [expr.lambda.prim]p13: // A lambda-expression appearing in a default argument shall not // implicitly or explicitly capture any entity. if (Lambda->capture_begin() == Lambda->capture_end()) return false; return S->Diag(Lambda->getLocStart(), diag::err_lambda_capture_default_arg); } } void Sema::ImplicitExceptionSpecification::CalledDecl(SourceLocation CallLoc, const CXXMethodDecl *Method) { // If we have an MSAny spec already, don't bother. 
if (!Method || ComputedEST == EST_MSAny) return; const FunctionProtoType *Proto = Method->getType()->getAs(); Proto = Self->ResolveExceptionSpec(CallLoc, Proto); if (!Proto) return; ExceptionSpecificationType EST = Proto->getExceptionSpecType(); // If we have a throw-all spec at this point, ignore the function. if (ComputedEST == EST_None) return; switch(EST) { // If this function can throw any exceptions, make a note of that. case EST_MSAny: case EST_None: ClearExceptions(); ComputedEST = EST; return; // FIXME: If the call to this decl is using any of its default arguments, we // need to search them for potentially-throwing calls. // If this function has a basic noexcept, it doesn't affect the outcome. case EST_BasicNoexcept: return; // If we're still at noexcept(true) and there's a nothrow() callee, // change to that specification. case EST_DynamicNone: if (ComputedEST == EST_BasicNoexcept) ComputedEST = EST_DynamicNone; return; // Check out noexcept specs. case EST_ComputedNoexcept: { FunctionProtoType::NoexceptResult NR = Proto->getNoexceptSpec(Self->Context); assert(NR != FunctionProtoType::NR_NoNoexcept && "Must have noexcept result for EST_ComputedNoexcept."); assert(NR != FunctionProtoType::NR_Dependent && "Should not generate implicit declarations for dependent cases, " "and don't know how to handle them anyway."); // noexcept(false) -> no spec on the new function if (NR == FunctionProtoType::NR_Throw) { ClearExceptions(); ComputedEST = EST_None; } // noexcept(true) won't change anything either. return; } default: break; } assert(EST == EST_Dynamic && "EST case not considered earlier."); assert(ComputedEST != EST_None && "Shouldn't collect exceptions when throw-all is guaranteed."); ComputedEST = EST_Dynamic; // Record the exceptions in this function's exception specification. for (const auto &E : Proto->exceptions()) if (ExceptionsSeen.insert(Self->Context.getCanonicalType(E)).second) Exceptions.push_back(E); } void Sema::ImplicitExceptionSpecification::CalledExpr(Expr *E) { if (!E || ComputedEST == EST_MSAny) return; // FIXME: // // C++0x [except.spec]p14: // [An] implicit exception-specification specifies the type-id T if and // only if T is allowed by the exception-specification of a function directly // invoked by f's implicit definition; f shall allow all exceptions if any // function it directly invokes allows all exceptions, and f shall allow no // exceptions if every function it directly invokes allows no exceptions. // // Note in particular that if an implicit exception-specification is generated // for a function containing a throw-expression, that specification can still // be noexcept(true). // // Note also that 'directly invoked' is not defined in the standard, and there // is no indication that we should only consider potentially-evaluated calls. // // Ultimately we should implement the intent of the standard: the exception // specification should be the set of exceptions which can be thrown by the // implicit definition. For now, we assume that any non-nothrow expression can // throw any exception. if (Self->canThrow(E)) ComputedEST = EST_None; } bool Sema::SetParamDefaultArgument(ParmVarDecl *Param, Expr *Arg, SourceLocation EqualLoc) { if (RequireCompleteType(Param->getLocation(), Param->getType(), diag::err_typecheck_decl_incomplete_type)) { Param->setInvalidDecl(); return true; } // C++ [dcl.fct.default]p5 // A default argument expression is implicitly converted (clause // 4) to the parameter type. 
The default argument expression has // the same semantic constraints as the initializer expression in // a declaration of a variable of the parameter type, using the // copy-initialization semantics (8.5). InitializedEntity Entity = InitializedEntity::InitializeParameter(Context, Param); InitializationKind Kind = InitializationKind::CreateCopy(Param->getLocation(), EqualLoc); InitializationSequence InitSeq(*this, Entity, Kind, Arg); ExprResult Result = InitSeq.Perform(*this, Entity, Kind, Arg); if (Result.isInvalid()) return true; Arg = Result.getAs(); CheckCompletedExpr(Arg, EqualLoc); Arg = MaybeCreateExprWithCleanups(Arg); // Okay: add the default argument to the parameter Param->setDefaultArg(Arg); // We have already instantiated this parameter; provide each of the // instantiations with the uninstantiated default argument. UnparsedDefaultArgInstantiationsMap::iterator InstPos = UnparsedDefaultArgInstantiations.find(Param); if (InstPos != UnparsedDefaultArgInstantiations.end()) { for (unsigned I = 0, N = InstPos->second.size(); I != N; ++I) InstPos->second[I]->setUninstantiatedDefaultArg(Arg); // We're done tracking this parameter's instantiations. UnparsedDefaultArgInstantiations.erase(InstPos); } return false; } /// ActOnParamDefaultArgument - Check whether the default argument /// provided for a function parameter is well-formed. If so, attach it /// to the parameter declaration. void Sema::ActOnParamDefaultArgument(Decl *param, SourceLocation EqualLoc, Expr *DefaultArg) { if (!param || !DefaultArg) return; ParmVarDecl *Param = cast(param); UnparsedDefaultArgLocs.erase(Param); // Default arguments are only permitted in C++ if (!getLangOpts().CPlusPlus) { Diag(EqualLoc, diag::err_param_default_argument) << DefaultArg->getSourceRange(); Param->setInvalidDecl(); return; } // Check for unexpanded parameter packs. if (DiagnoseUnexpandedParameterPack(DefaultArg, UPPC_DefaultArgument)) { Param->setInvalidDecl(); return; } // C++11 [dcl.fct.default]p3 // A default argument expression [...] shall not be specified for a // parameter pack. if (Param->isParameterPack()) { Diag(EqualLoc, diag::err_param_default_argument_on_parameter_pack) << DefaultArg->getSourceRange(); return; } // Check that the default argument is well-formed CheckDefaultArgumentVisitor DefaultArgChecker(DefaultArg, this); if (DefaultArgChecker.Visit(DefaultArg)) { Param->setInvalidDecl(); return; } SetParamDefaultArgument(Param, DefaultArg, EqualLoc); } /// ActOnParamUnparsedDefaultArgument - We've seen a default /// argument for a function parameter, but we can't parse it yet /// because we're inside a class definition. Note that this default /// argument will be parsed later. void Sema::ActOnParamUnparsedDefaultArgument(Decl *param, SourceLocation EqualLoc, SourceLocation ArgLoc) { if (!param) return; ParmVarDecl *Param = cast(param); Param->setUnparsedDefaultArg(); UnparsedDefaultArgLocs[Param] = ArgLoc; } /// ActOnParamDefaultArgumentError - Parsing or semantic analysis of /// the default argument for the parameter param failed. 
void Sema::ActOnParamDefaultArgumentError(Decl *param, SourceLocation EqualLoc) { if (!param) return; ParmVarDecl *Param = cast(param); Param->setInvalidDecl(); UnparsedDefaultArgLocs.erase(Param); Param->setDefaultArg(new(Context) OpaqueValueExpr(EqualLoc, Param->getType().getNonReferenceType(), VK_RValue)); } /// CheckExtraCXXDefaultArguments - Check for any extra default /// arguments in the declarator, which is not a function declaration /// or definition and therefore is not permitted to have default /// arguments. This routine should be invoked for every declarator /// that is not a function declaration or definition. void Sema::CheckExtraCXXDefaultArguments(Declarator &D) { // C++ [dcl.fct.default]p3 // A default argument expression shall be specified only in the // parameter-declaration-clause of a function declaration or in a // template-parameter (14.1). It shall not be specified for a // parameter pack. If it is specified in a // parameter-declaration-clause, it shall not occur within a // declarator or abstract-declarator of a parameter-declaration. bool MightBeFunction = D.isFunctionDeclarationContext(); for (unsigned i = 0, e = D.getNumTypeObjects(); i != e; ++i) { DeclaratorChunk &chunk = D.getTypeObject(i); if (chunk.Kind == DeclaratorChunk::Function) { if (MightBeFunction) { // This is a function declaration. It can have default arguments, but // keep looking in case its return type is a function type with default // arguments. MightBeFunction = false; continue; } for (unsigned argIdx = 0, e = chunk.Fun.NumParams; argIdx != e; ++argIdx) { ParmVarDecl *Param = cast(chunk.Fun.Params[argIdx].Param); if (Param->hasUnparsedDefaultArg()) { std::unique_ptr Toks = std::move(chunk.Fun.Params[argIdx].DefaultArgTokens); SourceRange SR; if (Toks->size() > 1) SR = SourceRange((*Toks)[1].getLocation(), Toks->back().getLocation()); else SR = UnparsedDefaultArgLocs[Param]; Diag(Param->getLocation(), diag::err_param_default_argument_nonfunc) << SR; } else if (Param->getDefaultArg()) { Diag(Param->getLocation(), diag::err_param_default_argument_nonfunc) << Param->getDefaultArg()->getSourceRange(); Param->setDefaultArg(nullptr); } } } else if (chunk.Kind != DeclaratorChunk::Paren) { MightBeFunction = false; } } } static bool functionDeclHasDefaultArgument(const FunctionDecl *FD) { for (unsigned NumParams = FD->getNumParams(); NumParams > 0; --NumParams) { const ParmVarDecl *PVD = FD->getParamDecl(NumParams-1); if (!PVD->hasDefaultArg()) return false; if (!PVD->hasInheritedDefaultArg()) return true; } return false; } /// MergeCXXFunctionDecl - Merge two declarations of the same C++ /// function, once we already know that they have the same /// type. Subroutine of MergeFunctionDecl. Returns true if there was an /// error, false otherwise. bool Sema::MergeCXXFunctionDecl(FunctionDecl *New, FunctionDecl *Old, Scope *S) { bool Invalid = false; // The declaration context corresponding to the scope is the semantic // parent, unless this is a local function declaration, in which case // it is that surrounding function. DeclContext *ScopeDC = New->isLocalExternDecl() ? New->getLexicalDeclContext() : New->getDeclContext(); // Find the previous declaration for the purpose of default arguments. FunctionDecl *PrevForDefaultArgs = Old; for (/**/; PrevForDefaultArgs; // Don't bother looking back past the latest decl if this is a local // extern declaration; nothing else could work. PrevForDefaultArgs = New->isLocalExternDecl() ? 
nullptr : PrevForDefaultArgs->getPreviousDecl()) { // Ignore hidden declarations. if (!LookupResult::isVisible(*this, PrevForDefaultArgs)) continue; if (S && !isDeclInScope(PrevForDefaultArgs, ScopeDC, S) && !New->isCXXClassMember()) { // Ignore default arguments of old decl if they are not in // the same scope and this is not an out-of-line definition of // a member function. continue; } if (PrevForDefaultArgs->isLocalExternDecl() != New->isLocalExternDecl()) { // If only one of these is a local function declaration, then they are // declared in different scopes, even though isDeclInScope may think // they're in the same scope. (If both are local, the scope check is // sufficient, and if neither is local, then they are in the same scope.) continue; } // We found the right previous declaration. break; } // C++ [dcl.fct.default]p4: // For non-template functions, default arguments can be added in // later declarations of a function in the same // scope. Declarations in different scopes have completely // distinct sets of default arguments. That is, declarations in // inner scopes do not acquire default arguments from // declarations in outer scopes, and vice versa. In a given // function declaration, all parameters subsequent to a // parameter with a default argument shall have default // arguments supplied in this or previous declarations. A // default argument shall not be redefined by a later // declaration (not even to the same value). // // C++ [dcl.fct.default]p6: // Except for member functions of class templates, the default arguments // in a member function definition that appears outside of the class // definition are added to the set of default arguments provided by the // member function declaration in the class definition. for (unsigned p = 0, NumParams = PrevForDefaultArgs ? PrevForDefaultArgs->getNumParams() : 0; p < NumParams; ++p) { ParmVarDecl *OldParam = PrevForDefaultArgs->getParamDecl(p); ParmVarDecl *NewParam = New->getParamDecl(p); bool OldParamHasDfl = OldParam ? OldParam->hasDefaultArg() : false; bool NewParamHasDfl = NewParam->hasDefaultArg(); if (OldParamHasDfl && NewParamHasDfl) { unsigned DiagDefaultParamID = diag::err_param_default_argument_redefinition; // MSVC accepts that default parameters be redefined for member functions // of template class. The new default parameter's value is ignored. Invalid = true; if (getLangOpts().MicrosoftExt) { CXXMethodDecl *MD = dyn_cast(New); if (MD && MD->getParent()->getDescribedClassTemplate()) { // Merge the old default argument into the new parameter. NewParam->setHasInheritedDefaultArg(); if (OldParam->hasUninstantiatedDefaultArg()) NewParam->setUninstantiatedDefaultArg( OldParam->getUninstantiatedDefaultArg()); else NewParam->setDefaultArg(OldParam->getInit()); DiagDefaultParamID = diag::ext_param_default_argument_redefinition; Invalid = false; } } // FIXME: If we knew where the '=' was, we could easily provide a fix-it // hint here. Alternatively, we could walk the type-source information // for NewParam to find the last source location in the type... but it // isn't worth the effort right now. This is the kind of test case that // is hard to get right: // int f(int); // void g(int (*fp)(int) = f); // void g(int (*fp)(int) = &f); Diag(NewParam->getLocation(), DiagDefaultParamID) << NewParam->getDefaultArgRange(); // Look for the function declaration where the default argument was // actually written, which may be a declaration prior to Old. 
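      // Illustration (hypothetical example, not from the original source):
      // the redefinition diagnosed just above corresponds to code such as
      //
      //   void f(int x = 1);
      //   void f(int x = 1);   // error: redefinition of default argument
      //
      // The loop below then walks back through earlier declarations so the
      // note points at the declaration that actually wrote the default.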
for (auto Older = PrevForDefaultArgs; OldParam->hasInheritedDefaultArg(); /**/) { Older = Older->getPreviousDecl(); OldParam = Older->getParamDecl(p); } Diag(OldParam->getLocation(), diag::note_previous_definition) << OldParam->getDefaultArgRange(); } else if (OldParamHasDfl) { // Merge the old default argument into the new parameter unless the new // function is a friend declaration in a template class. In the latter // case the default arguments will be inherited when the friend // declaration will be instantiated. if (New->getFriendObjectKind() == Decl::FOK_None || !New->getLexicalDeclContext()->isDependentContext()) { // It's important to use getInit() here; getDefaultArg() // strips off any top-level ExprWithCleanups. NewParam->setHasInheritedDefaultArg(); if (OldParam->hasUnparsedDefaultArg()) NewParam->setUnparsedDefaultArg(); else if (OldParam->hasUninstantiatedDefaultArg()) NewParam->setUninstantiatedDefaultArg( OldParam->getUninstantiatedDefaultArg()); else NewParam->setDefaultArg(OldParam->getInit()); } } else if (NewParamHasDfl) { if (New->getDescribedFunctionTemplate()) { // Paragraph 4, quoted above, only applies to non-template functions. Diag(NewParam->getLocation(), diag::err_param_default_argument_template_redecl) << NewParam->getDefaultArgRange(); Diag(PrevForDefaultArgs->getLocation(), diag::note_template_prev_declaration) << false; } else if (New->getTemplateSpecializationKind() != TSK_ImplicitInstantiation && New->getTemplateSpecializationKind() != TSK_Undeclared) { // C++ [temp.expr.spec]p21: // Default function arguments shall not be specified in a declaration // or a definition for one of the following explicit specializations: // - the explicit specialization of a function template; // - the explicit specialization of a member function template; // - the explicit specialization of a member function of a class // template where the class template specialization to which the // member function specialization belongs is implicitly // instantiated. Diag(NewParam->getLocation(), diag::err_template_spec_default_arg) << (New->getTemplateSpecializationKind() ==TSK_ExplicitSpecialization) << New->getDeclName() << NewParam->getDefaultArgRange(); } else if (New->getDeclContext()->isDependentContext()) { // C++ [dcl.fct.default]p6 (DR217): // Default arguments for a member function of a class template shall // be specified on the initial declaration of the member function // within the class template. // // Reading the tea leaves a bit in DR217 and its reference to DR205 // leads me to the conclusion that one cannot add default function // arguments for an out-of-line definition of a member function of a // dependent type. int WhichKind = 2; if (CXXRecordDecl *Record = dyn_cast(New->getDeclContext())) { if (Record->getDescribedClassTemplate()) WhichKind = 0; else if (isa(Record)) WhichKind = 1; else WhichKind = 2; } Diag(NewParam->getLocation(), diag::err_param_default_argument_member_template_redecl) << WhichKind << NewParam->getDefaultArgRange(); } } } // DR1344: If a default argument is added outside a class definition and that // default argument makes the function a special member function, the program // is ill-formed. This can only happen for constructors. 
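  // Illustration (hypothetical example, not from the original source):
  //
  //   struct S { S(int); };
  //   S::S(int i = 0) {}   // ill-formed: the added default argument turns
  //                        // S(int) into a default constructor (DR1344)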
if (isa(New) && New->getMinRequiredArguments() < Old->getMinRequiredArguments()) { CXXSpecialMember NewSM = getSpecialMember(cast(New)), OldSM = getSpecialMember(cast(Old)); if (NewSM != OldSM) { ParmVarDecl *NewParam = New->getParamDecl(New->getMinRequiredArguments()); assert(NewParam->hasDefaultArg()); Diag(NewParam->getLocation(), diag::err_default_arg_makes_ctor_special) << NewParam->getDefaultArgRange() << NewSM; Diag(Old->getLocation(), diag::note_previous_declaration); } } const FunctionDecl *Def; // C++11 [dcl.constexpr]p1: If any declaration of a function or function // template has a constexpr specifier then all its declarations shall // contain the constexpr specifier. if (New->isConstexpr() != Old->isConstexpr()) { Diag(New->getLocation(), diag::err_constexpr_redecl_mismatch) << New << New->isConstexpr(); Diag(Old->getLocation(), diag::note_previous_declaration); Invalid = true; } else if (!Old->getMostRecentDecl()->isInlined() && New->isInlined() && Old->isDefined(Def) && // If a friend function is inlined but does not have 'inline' // specifier, it is a definition. Do not report attribute conflict // in this case, redefinition will be diagnosed later. (New->isInlineSpecified() || New->getFriendObjectKind() == Decl::FOK_None)) { // C++11 [dcl.fcn.spec]p4: // If the definition of a function appears in a translation unit before its // first declaration as inline, the program is ill-formed. Diag(New->getLocation(), diag::err_inline_decl_follows_def) << New; Diag(Def->getLocation(), diag::note_previous_definition); Invalid = true; } // FIXME: It's not clear what should happen if multiple declarations of a // deduction guide have different explicitness. For now at least we simply // reject any case where the explicitness changes. auto *NewGuide = dyn_cast(New); if (NewGuide && NewGuide->isExplicitSpecified() != cast(Old)->isExplicitSpecified()) { Diag(New->getLocation(), diag::err_deduction_guide_explicit_mismatch) << NewGuide->isExplicitSpecified(); Diag(Old->getLocation(), diag::note_previous_declaration); } // C++11 [dcl.fct.default]p4: If a friend declaration specifies a default // argument expression, that declaration shall be a definition and shall be // the only declaration of the function or function template in the // translation unit. if (Old->getFriendObjectKind() == Decl::FOK_Undeclared && functionDeclHasDefaultArgument(Old)) { Diag(New->getLocation(), diag::err_friend_decl_with_def_arg_redeclared); Diag(Old->getLocation(), diag::note_previous_declaration); Invalid = true; } return Invalid; } NamedDecl * Sema::ActOnDecompositionDeclarator(Scope *S, Declarator &D, MultiTemplateParamsArg TemplateParamLists) { assert(D.isDecompositionDeclarator()); const DecompositionDeclarator &Decomp = D.getDecompositionDeclarator(); // The syntax only allows a decomposition declarator as a simple-declaration // or a for-range-declaration, but we parse it in more cases than that. if (!D.mayHaveDecompositionDeclarator()) { Diag(Decomp.getLSquareLoc(), diag::err_decomp_decl_context) << Decomp.getSourceRange(); return nullptr; } if (!TemplateParamLists.empty()) { // FIXME: There's no rule against this, but there are also no rules that // would actually make it usable, so we reject it for now. Diag(TemplateParamLists.front()->getTemplateLoc(), diag::err_decomp_decl_template); return nullptr; } Diag(Decomp.getLSquareLoc(), getLangOpts().CPlusPlus1z ? 
diag::warn_cxx14_compat_decomp_decl : diag::ext_decomp_decl) << Decomp.getSourceRange(); // The semantic context is always just the current context. DeclContext *const DC = CurContext; // C++1z [dcl.dcl]/8: // The decl-specifier-seq shall contain only the type-specifier auto // and cv-qualifiers. auto &DS = D.getDeclSpec(); { SmallVector BadSpecifiers; SmallVector BadSpecifierLocs; if (auto SCS = DS.getStorageClassSpec()) { BadSpecifiers.push_back(DeclSpec::getSpecifierName(SCS)); BadSpecifierLocs.push_back(DS.getStorageClassSpecLoc()); } if (auto TSCS = DS.getThreadStorageClassSpec()) { BadSpecifiers.push_back(DeclSpec::getSpecifierName(TSCS)); BadSpecifierLocs.push_back(DS.getThreadStorageClassSpecLoc()); } if (DS.isConstexprSpecified()) { BadSpecifiers.push_back("constexpr"); BadSpecifierLocs.push_back(DS.getConstexprSpecLoc()); } if (DS.isInlineSpecified()) { BadSpecifiers.push_back("inline"); BadSpecifierLocs.push_back(DS.getInlineSpecLoc()); } if (!BadSpecifiers.empty()) { auto &&Err = Diag(BadSpecifierLocs.front(), diag::err_decomp_decl_spec); Err << (int)BadSpecifiers.size() << llvm::join(BadSpecifiers.begin(), BadSpecifiers.end(), " "); // Don't add FixItHints to remove the specifiers; we do still respect // them when building the underlying variable. for (auto Loc : BadSpecifierLocs) Err << SourceRange(Loc, Loc); } // We can't recover from it being declared as a typedef. if (DS.getStorageClassSpec() == DeclSpec::SCS_typedef) return nullptr; } TypeSourceInfo *TInfo = GetTypeForDeclarator(D, S); QualType R = TInfo->getType(); if (DiagnoseUnexpandedParameterPack(D.getIdentifierLoc(), TInfo, UPPC_DeclarationType)) D.setInvalidType(); // The syntax only allows a single ref-qualifier prior to the decomposition // declarator. No other declarator chunks are permitted. Also check the type // specifier here. if (DS.getTypeSpecType() != DeclSpec::TST_auto || D.hasGroupingParens() || D.getNumTypeObjects() > 1 || (D.getNumTypeObjects() == 1 && D.getTypeObject(0).Kind != DeclaratorChunk::Reference)) { Diag(Decomp.getLSquareLoc(), (D.hasGroupingParens() || (D.getNumTypeObjects() && D.getTypeObject(0).Kind == DeclaratorChunk::Paren)) ? diag::err_decomp_decl_parens : diag::err_decomp_decl_type) << R; // In most cases, there's no actual problem with an explicitly-specified // type, but a function type won't work here, and ActOnVariableDeclarator // shouldn't be called for such a type. if (R->isFunctionType()) D.setInvalidType(); } // Build the BindingDecls. SmallVector Bindings; // Build the BindingDecls. for (auto &B : D.getDecompositionDeclarator().bindings()) { // Check for name conflicts. DeclarationNameInfo NameInfo(B.Name, B.NameLoc); LookupResult Previous(*this, NameInfo, LookupOrdinaryName, ForRedeclaration); LookupName(Previous, S, /*CreateBuiltins*/DC->getRedeclContext()->isTranslationUnit()); // It's not permitted to shadow a template parameter name. 
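    // Illustration (hypothetical example, not from the original source):
    //
    //   template <typename T> void f(std::pair<int, int> p) {
    //     auto [T, second] = p;   // error: 'T' shadows the template parameter
    //   }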
if (Previous.isSingleResult() && Previous.getFoundDecl()->isTemplateParameter()) { DiagnoseTemplateParameterShadow(D.getIdentifierLoc(), Previous.getFoundDecl()); Previous.clear(); } bool ConsiderLinkage = DC->isFunctionOrMethod() && DS.getStorageClassSpec() == DeclSpec::SCS_extern; FilterLookupForScope(Previous, DC, S, ConsiderLinkage, /*AllowInlineNamespace*/false); if (!Previous.empty()) { auto *Old = Previous.getRepresentativeDecl(); Diag(B.NameLoc, diag::err_redefinition) << B.Name; Diag(Old->getLocation(), diag::note_previous_definition); } auto *BD = BindingDecl::Create(Context, DC, B.NameLoc, B.Name); PushOnScopeChains(BD, S, true); Bindings.push_back(BD); ParsingInitForAutoVars.insert(BD); } // There are no prior lookup results for the variable itself, because it // is unnamed. DeclarationNameInfo NameInfo((IdentifierInfo *)nullptr, Decomp.getLSquareLoc()); LookupResult Previous(*this, NameInfo, LookupOrdinaryName, ForRedeclaration); // Build the variable that holds the non-decomposed object. bool AddToScope = true; NamedDecl *New = ActOnVariableDeclarator(S, D, DC, TInfo, Previous, MultiTemplateParamsArg(), AddToScope, Bindings); CurContext->addHiddenDecl(New); if (isInOpenMPDeclareTargetContext()) checkDeclIsAllowedInOpenMPTarget(nullptr, New); return New; } static bool checkSimpleDecomposition( Sema &S, ArrayRef Bindings, ValueDecl *Src, QualType DecompType, const llvm::APSInt &NumElems, QualType ElemType, llvm::function_ref GetInit) { if ((int64_t)Bindings.size() != NumElems) { S.Diag(Src->getLocation(), diag::err_decomp_decl_wrong_number_bindings) << DecompType << (unsigned)Bindings.size() << NumElems.toString(10) << (NumElems < Bindings.size()); return true; } unsigned I = 0; for (auto *B : Bindings) { SourceLocation Loc = B->getLocation(); ExprResult E = S.BuildDeclRefExpr(Src, DecompType, VK_LValue, Loc); if (E.isInvalid()) return true; E = GetInit(Loc, E.get(), I++); if (E.isInvalid()) return true; B->setBinding(ElemType, E.get()); } return false; } static bool checkArrayLikeDecomposition(Sema &S, ArrayRef Bindings, ValueDecl *Src, QualType DecompType, const llvm::APSInt &NumElems, QualType ElemType) { return checkSimpleDecomposition( S, Bindings, Src, DecompType, NumElems, ElemType, [&](SourceLocation Loc, Expr *Base, unsigned I) -> ExprResult { ExprResult E = S.ActOnIntegerConstant(Loc, I); if (E.isInvalid()) return ExprError(); return S.CreateBuiltinArraySubscriptExpr(Base, Loc, E.get(), Loc); }); } static bool checkArrayDecomposition(Sema &S, ArrayRef Bindings, ValueDecl *Src, QualType DecompType, const ConstantArrayType *CAT) { return checkArrayLikeDecomposition(S, Bindings, Src, DecompType, llvm::APSInt(CAT->getSize()), CAT->getElementType()); } static bool checkVectorDecomposition(Sema &S, ArrayRef Bindings, ValueDecl *Src, QualType DecompType, const VectorType *VT) { return checkArrayLikeDecomposition( S, Bindings, Src, DecompType, llvm::APSInt::get(VT->getNumElements()), S.Context.getQualifiedType(VT->getElementType(), DecompType.getQualifiers())); } static bool checkComplexDecomposition(Sema &S, ArrayRef Bindings, ValueDecl *Src, QualType DecompType, const ComplexType *CT) { return checkSimpleDecomposition( S, Bindings, Src, DecompType, llvm::APSInt::get(2), S.Context.getQualifiedType(CT->getElementType(), DecompType.getQualifiers()), [&](SourceLocation Loc, Expr *Base, unsigned I) -> ExprResult { return S.CreateBuiltinUnaryOp(Loc, I ? 
                                           UO_Imag : UO_Real, Base);
      });
}

static std::string printTemplateArgs(const PrintingPolicy &PrintingPolicy,
                                     TemplateArgumentListInfo &Args) {
  SmallString<128> SS;
  llvm::raw_svector_ostream OS(SS);
  bool First = true;
  for (auto &Arg : Args.arguments()) {
    if (!First)
      OS << ", ";
    Arg.getArgument().print(PrintingPolicy, OS);
    First = false;
  }
  return OS.str();
}

static bool lookupStdTypeTraitMember(Sema &S, LookupResult &TraitMemberLookup,
                                     SourceLocation Loc, StringRef Trait,
                                     TemplateArgumentListInfo &Args,
                                     unsigned DiagID) {
  auto DiagnoseMissing = [&] {
    if (DiagID)
      S.Diag(Loc, DiagID) << printTemplateArgs(S.Context.getPrintingPolicy(),
                                               Args);
    return true;
  };

  // FIXME: Factor out duplication with lookupPromiseType in SemaCoroutine.
  NamespaceDecl *Std = S.getStdNamespace();
  if (!Std)
    return DiagnoseMissing();

  // Look up the trait itself, within namespace std. We can diagnose various
  // problems with this lookup even if we've been asked to not diagnose a
  // missing specialization, because this can only fail if the user has been
  // declaring their own names in namespace std or we don't support the
  // standard library implementation in use.
  LookupResult Result(S, &S.PP.getIdentifierTable().get(Trait),
                      Loc, Sema::LookupOrdinaryName);
  if (!S.LookupQualifiedName(Result, Std))
    return DiagnoseMissing();
  if (Result.isAmbiguous())
    return true;

  ClassTemplateDecl *TraitTD = Result.getAsSingle<ClassTemplateDecl>();
  if (!TraitTD) {
    Result.suppressDiagnostics();
    NamedDecl *Found = *Result.begin();
    S.Diag(Loc, diag::err_std_type_trait_not_class_template) << Trait;
    S.Diag(Found->getLocation(), diag::note_declared_at);
    return true;
  }

  // Build the template-id.
  QualType TraitTy = S.CheckTemplateIdType(TemplateName(TraitTD), Loc, Args);
  if (TraitTy.isNull())
    return true;
  if (!S.isCompleteType(Loc, TraitTy)) {
    if (DiagID)
      S.RequireCompleteType(
          Loc, TraitTy, DiagID,
          printTemplateArgs(S.Context.getPrintingPolicy(), Args));
    return true;
  }

  CXXRecordDecl *RD = TraitTy->getAsCXXRecordDecl();
  assert(RD && "specialization of class template is not a class?");

  // Look up the member of the trait type.
  S.LookupQualifiedName(TraitMemberLookup, RD);
  return TraitMemberLookup.isAmbiguous();
}

static TemplateArgumentLoc
getTrivialIntegralTemplateArgument(Sema &S, SourceLocation Loc, QualType T,
                                   uint64_t I) {
  TemplateArgument Arg(S.Context, S.Context.MakeIntValue(I, T), T);
  return S.getTrivialTemplateArgumentLoc(Arg, T, Loc);
}

static TemplateArgumentLoc
getTrivialTypeTemplateArgument(Sema &S, SourceLocation Loc, QualType T) {
  return S.getTrivialTemplateArgumentLoc(TemplateArgument(T), QualType(), Loc);
}

namespace { enum class IsTupleLike { TupleLike, NotTupleLike, Error }; }

static IsTupleLike isTupleLike(Sema &S, SourceLocation Loc, QualType T,
                               llvm::APSInt &Size) {
  EnterExpressionEvaluationContext ContextRAII(
      S, Sema::ExpressionEvaluationContext::ConstantEvaluated);

  DeclarationName Value = S.PP.getIdentifierInfo("value");
  LookupResult R(S, Value, Loc, Sema::LookupOrdinaryName);

  // Form template argument list for tuple_size.
  TemplateArgumentListInfo Args(Loc, Loc);
  Args.addArgument(getTrivialTypeTemplateArgument(S, Loc, T));

  // If there's no tuple_size specialization, it's not tuple-like.
  if (lookupStdTypeTraitMember(S, R, Loc, "tuple_size", Args, /*DiagID*/0))
    return IsTupleLike::NotTupleLike;

  // If we get this far, we've committed to the tuple interpretation, but
  // we can still fail if there actually isn't a usable ::value.
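  // Illustration (hypothetical user-side code, not from the original source):
  // the tuple-like path taken from here on applies to class types that opt in
  // through the std customization points probed above and below, e.g.
  //
  //   struct P { int a; double b; };
  //   namespace std {
  //     template <> struct tuple_size<P> : integral_constant<size_t, 2> {};
  //     template <> struct tuple_element<0, P> { using type = int; };
  //     template <> struct tuple_element<1, P> { using type = double; };
  //   }
  //   template <std::size_t I> auto get(const P &p);  // returns p.a or p.b
  //
  //   auto [x, y] = P{1, 2.0};  // decomposed via tuple_size/tuple_element/get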
struct ICEDiagnoser : Sema::VerifyICEDiagnoser { LookupResult &R; TemplateArgumentListInfo &Args; ICEDiagnoser(LookupResult &R, TemplateArgumentListInfo &Args) : R(R), Args(Args) {} void diagnoseNotICE(Sema &S, SourceLocation Loc, SourceRange SR) { S.Diag(Loc, diag::err_decomp_decl_std_tuple_size_not_constant) << printTemplateArgs(S.Context.getPrintingPolicy(), Args); } } Diagnoser(R, Args); if (R.empty()) { Diagnoser.diagnoseNotICE(S, Loc, SourceRange()); return IsTupleLike::Error; } ExprResult E = S.BuildDeclarationNameExpr(CXXScopeSpec(), R, /*NeedsADL*/false); if (E.isInvalid()) return IsTupleLike::Error; E = S.VerifyIntegerConstantExpression(E.get(), &Size, Diagnoser, false); if (E.isInvalid()) return IsTupleLike::Error; return IsTupleLike::TupleLike; } /// \return std::tuple_element::type. static QualType getTupleLikeElementType(Sema &S, SourceLocation Loc, unsigned I, QualType T) { // Form template argument list for tuple_element. TemplateArgumentListInfo Args(Loc, Loc); Args.addArgument( getTrivialIntegralTemplateArgument(S, Loc, S.Context.getSizeType(), I)); Args.addArgument(getTrivialTypeTemplateArgument(S, Loc, T)); DeclarationName TypeDN = S.PP.getIdentifierInfo("type"); LookupResult R(S, TypeDN, Loc, Sema::LookupOrdinaryName); if (lookupStdTypeTraitMember( S, R, Loc, "tuple_element", Args, diag::err_decomp_decl_std_tuple_element_not_specialized)) return QualType(); auto *TD = R.getAsSingle(); if (!TD) { R.suppressDiagnostics(); S.Diag(Loc, diag::err_decomp_decl_std_tuple_element_not_specialized) << printTemplateArgs(S.Context.getPrintingPolicy(), Args); if (!R.empty()) S.Diag(R.getRepresentativeDecl()->getLocation(), diag::note_declared_at); return QualType(); } return S.Context.getTypeDeclType(TD); } namespace { struct BindingDiagnosticTrap { Sema &S; DiagnosticErrorTrap Trap; BindingDecl *BD; BindingDiagnosticTrap(Sema &S, BindingDecl *BD) : S(S), Trap(S.Diags), BD(BD) {} ~BindingDiagnosticTrap() { if (Trap.hasErrorOccurred()) S.Diag(BD->getLocation(), diag::note_in_binding_decl_init) << BD; } }; } static bool checkTupleLikeDecomposition(Sema &S, ArrayRef Bindings, VarDecl *Src, QualType DecompType, const llvm::APSInt &TupleSize) { if ((int64_t)Bindings.size() != TupleSize) { S.Diag(Src->getLocation(), diag::err_decomp_decl_wrong_number_bindings) << DecompType << (unsigned)Bindings.size() << TupleSize.toString(10) << (TupleSize < Bindings.size()); return true; } if (Bindings.empty()) return false; DeclarationName GetDN = S.PP.getIdentifierInfo("get"); // [dcl.decomp]p3: // The unqualified-id get is looked up in the scope of E by class member // access lookup LookupResult MemberGet(S, GetDN, Src->getLocation(), Sema::LookupMemberName); bool UseMemberGet = false; if (S.isCompleteType(Src->getLocation(), DecompType)) { if (auto *RD = DecompType->getAsCXXRecordDecl()) S.LookupQualifiedName(MemberGet, RD); if (MemberGet.isAmbiguous()) return true; UseMemberGet = !MemberGet.empty(); S.FilterAcceptableTemplateNames(MemberGet); } unsigned I = 0; for (auto *B : Bindings) { BindingDiagnosticTrap Trap(S, B); SourceLocation Loc = B->getLocation(); ExprResult E = S.BuildDeclRefExpr(Src, DecompType, VK_LValue, Loc); if (E.isInvalid()) return true; // e is an lvalue if the type of the entity is an lvalue reference and // an xvalue otherwise if (!Src->getType()->isLValueReferenceType()) E = ImplicitCastExpr::Create(S.Context, E.get()->getType(), CK_NoOp, E.get(), nullptr, VK_XValue); TemplateArgumentListInfo Args(Loc, Loc); Args.addArgument( getTrivialIntegralTemplateArgument(S, Loc, 
S.Context.getSizeType(), I)); if (UseMemberGet) { // if [lookup of member get] finds at least one declaration, the // initializer is e.get(). E = S.BuildMemberReferenceExpr(E.get(), DecompType, Loc, false, CXXScopeSpec(), SourceLocation(), nullptr, MemberGet, &Args, nullptr); if (E.isInvalid()) return true; E = S.ActOnCallExpr(nullptr, E.get(), Loc, None, Loc); } else { // Otherwise, the initializer is get(e), where get is looked up // in the associated namespaces. Expr *Get = UnresolvedLookupExpr::Create( S.Context, nullptr, NestedNameSpecifierLoc(), SourceLocation(), DeclarationNameInfo(GetDN, Loc), /*RequiresADL*/true, &Args, UnresolvedSetIterator(), UnresolvedSetIterator()); Expr *Arg = E.get(); E = S.ActOnCallExpr(nullptr, Get, Loc, Arg, Loc); } if (E.isInvalid()) return true; Expr *Init = E.get(); // Given the type T designated by std::tuple_element::type, QualType T = getTupleLikeElementType(S, Loc, I, DecompType); if (T.isNull()) return true; // each vi is a variable of type "reference to T" initialized with the // initializer, where the reference is an lvalue reference if the // initializer is an lvalue and an rvalue reference otherwise QualType RefType = S.BuildReferenceType(T, E.get()->isLValue(), Loc, B->getDeclName()); if (RefType.isNull()) return true; auto *RefVD = VarDecl::Create( S.Context, Src->getDeclContext(), Loc, Loc, B->getDeclName().getAsIdentifierInfo(), RefType, S.Context.getTrivialTypeSourceInfo(T, Loc), Src->getStorageClass()); RefVD->setLexicalDeclContext(Src->getLexicalDeclContext()); RefVD->setTSCSpec(Src->getTSCSpec()); RefVD->setImplicit(); if (Src->isInlineSpecified()) RefVD->setInlineSpecified(); RefVD->getLexicalDeclContext()->addHiddenDecl(RefVD); InitializedEntity Entity = InitializedEntity::InitializeBinding(RefVD); InitializationKind Kind = InitializationKind::CreateCopy(Loc, Loc); InitializationSequence Seq(S, Entity, Kind, Init); E = Seq.Perform(S, Entity, Kind, Init); if (E.isInvalid()) return true; E = S.ActOnFinishFullExpr(E.get(), Loc); if (E.isInvalid()) return true; RefVD->setInit(E.get()); RefVD->checkInitIsICE(); E = S.BuildDeclarationNameExpr(CXXScopeSpec(), DeclarationNameInfo(B->getDeclName(), Loc), RefVD); if (E.isInvalid()) return true; B->setBinding(T, E.get()); I++; } return false; } /// Find the base class to decompose in a built-in decomposition of a class type. /// This base class search is, unfortunately, not quite like any other that we /// perform anywhere else in C++. static const CXXRecordDecl *findDecomposableBaseClass(Sema &S, SourceLocation Loc, const CXXRecordDecl *RD, CXXCastPath &BasePath) { auto BaseHasFields = [](const CXXBaseSpecifier *Specifier, CXXBasePath &Path) { return Specifier->getType()->getAsCXXRecordDecl()->hasDirectFields(); }; const CXXRecordDecl *ClassWithFields = nullptr; if (RD->hasDirectFields()) // [dcl.decomp]p4: // Otherwise, all of E's non-static data members shall be public direct // members of E ... ClassWithFields = RD; else { // ... or of ... CXXBasePaths Paths; Paths.setOrigin(const_cast(RD)); if (!RD->lookupInBases(BaseHasFields, Paths)) { // If no classes have fields, just decompose RD itself. (This will work // if and only if zero bindings were provided.) return RD; } CXXBasePath *BestPath = nullptr; for (auto &P : Paths) { if (!BestPath) BestPath = &P; else if (!S.Context.hasSameType(P.back().Base->getType(), BestPath->back().Base->getType())) { // ... the same ... 
S.Diag(Loc, diag::err_decomp_decl_multiple_bases_with_members) << false << RD << BestPath->back().Base->getType() << P.back().Base->getType(); return nullptr; } else if (P.Access < BestPath->Access) { BestPath = &P; } } // ... unambiguous ... QualType BaseType = BestPath->back().Base->getType(); if (Paths.isAmbiguous(S.Context.getCanonicalType(BaseType))) { S.Diag(Loc, diag::err_decomp_decl_ambiguous_base) << RD << BaseType << S.getAmbiguousPathsDisplayString(Paths); return nullptr; } // ... public base class of E. if (BestPath->Access != AS_public) { S.Diag(Loc, diag::err_decomp_decl_non_public_base) << RD << BaseType; for (auto &BS : *BestPath) { if (BS.Base->getAccessSpecifier() != AS_public) { S.Diag(BS.Base->getLocStart(), diag::note_access_constrained_by_path) << (BS.Base->getAccessSpecifier() == AS_protected) << (BS.Base->getAccessSpecifierAsWritten() == AS_none); break; } } return nullptr; } ClassWithFields = BaseType->getAsCXXRecordDecl(); S.BuildBasePathArray(Paths, BasePath); } // The above search did not check whether the selected class itself has base // classes with fields, so check that now. CXXBasePaths Paths; if (ClassWithFields->lookupInBases(BaseHasFields, Paths)) { S.Diag(Loc, diag::err_decomp_decl_multiple_bases_with_members) << (ClassWithFields == RD) << RD << ClassWithFields << Paths.front().back().Base->getType(); return nullptr; } return ClassWithFields; } static bool checkMemberDecomposition(Sema &S, ArrayRef Bindings, ValueDecl *Src, QualType DecompType, const CXXRecordDecl *RD) { CXXCastPath BasePath; RD = findDecomposableBaseClass(S, Src->getLocation(), RD, BasePath); if (!RD) return true; QualType BaseType = S.Context.getQualifiedType(S.Context.getRecordType(RD), DecompType.getQualifiers()); auto DiagnoseBadNumberOfBindings = [&]() -> bool { unsigned NumFields = std::count_if(RD->field_begin(), RD->field_end(), [](FieldDecl *FD) { return !FD->isUnnamedBitfield(); }); assert(Bindings.size() != NumFields); S.Diag(Src->getLocation(), diag::err_decomp_decl_wrong_number_bindings) << DecompType << (unsigned)Bindings.size() << NumFields << (NumFields < Bindings.size()); return true; }; // all of E's non-static data members shall be public [...] members, // E shall not have an anonymous union member, ... unsigned I = 0; for (auto *FD : RD->fields()) { if (FD->isUnnamedBitfield()) continue; if (FD->isAnonymousStructOrUnion()) { S.Diag(Src->getLocation(), diag::err_decomp_decl_anon_union_member) << DecompType << FD->getType()->isUnionType(); S.Diag(FD->getLocation(), diag::note_declared_at); return true; } // We have a real field to bind. if (I >= Bindings.size()) return DiagnoseBadNumberOfBindings(); auto *B = Bindings[I++]; SourceLocation Loc = B->getLocation(); if (FD->getAccess() != AS_public) { S.Diag(Loc, diag::err_decomp_decl_non_public_member) << FD << DecompType; // Determine whether the access specifier was explicit. bool Implicit = true; for (const auto *D : RD->decls()) { if (declaresSameEntity(D, FD)) break; if (isa(D)) { Implicit = false; break; } } S.Diag(FD->getLocation(), diag::note_access_natural) << (FD->getAccess() == AS_protected) << Implicit; return true; } // Initialize the binding to Src.FD. 
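      // Illustrative sketch (added commentary, not from the original source;
      // 'Point' and the binding names are hypothetical). For a plain struct,
      // each binding simply refers to the corresponding member of Src:
      //
      //   struct Point { int x; int y; };
      //   Point pt{1, 2};
      //   auto &[a, b] = pt;  // 'a' refers to pt.x, 'b' refers to pt.y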
ExprResult E = S.BuildDeclRefExpr(Src, DecompType, VK_LValue, Loc); if (E.isInvalid()) return true; E = S.ImpCastExprToType(E.get(), BaseType, CK_UncheckedDerivedToBase, VK_LValue, &BasePath); if (E.isInvalid()) return true; E = S.BuildFieldReferenceExpr(E.get(), /*IsArrow*/ false, Loc, CXXScopeSpec(), FD, DeclAccessPair::make(FD, FD->getAccess()), DeclarationNameInfo(FD->getDeclName(), Loc)); if (E.isInvalid()) return true; // If the type of the member is T, the referenced type is cv T, where cv is // the cv-qualification of the decomposition expression. // // FIXME: We resolve a defect here: if the field is mutable, we do not add // 'const' to the type of the field. Qualifiers Q = DecompType.getQualifiers(); if (FD->isMutable()) Q.removeConst(); B->setBinding(S.BuildQualifiedType(FD->getType(), Loc, Q), E.get()); } if (I != Bindings.size()) return DiagnoseBadNumberOfBindings(); return false; } void Sema::CheckCompleteDecompositionDeclaration(DecompositionDecl *DD) { QualType DecompType = DD->getType(); // If the type of the decomposition is dependent, then so is the type of // each binding. if (DecompType->isDependentType()) { for (auto *B : DD->bindings()) B->setType(Context.DependentTy); return; } DecompType = DecompType.getNonReferenceType(); ArrayRef Bindings = DD->bindings(); // C++1z [dcl.decomp]/2: // If E is an array type [...] // As an extension, we also support decomposition of built-in complex and // vector types. if (auto *CAT = Context.getAsConstantArrayType(DecompType)) { if (checkArrayDecomposition(*this, Bindings, DD, DecompType, CAT)) DD->setInvalidDecl(); return; } if (auto *VT = DecompType->getAs()) { if (checkVectorDecomposition(*this, Bindings, DD, DecompType, VT)) DD->setInvalidDecl(); return; } if (auto *CT = DecompType->getAs()) { if (checkComplexDecomposition(*this, Bindings, DD, DecompType, CT)) DD->setInvalidDecl(); return; } // C++1z [dcl.decomp]/3: // if the expression std::tuple_size::value is a well-formed integral // constant expression, [...] llvm::APSInt TupleSize(32); switch (isTupleLike(*this, DD->getLocation(), DecompType, TupleSize)) { case IsTupleLike::Error: DD->setInvalidDecl(); return; case IsTupleLike::TupleLike: if (checkTupleLikeDecomposition(*this, Bindings, DD, DecompType, TupleSize)) DD->setInvalidDecl(); return; case IsTupleLike::NotTupleLike: break; } // C++1z [dcl.dcl]/8: // [E shall be of array or non-union class type] CXXRecordDecl *RD = DecompType->getAsCXXRecordDecl(); if (!RD || RD->isUnion()) { Diag(DD->getLocation(), diag::err_decomp_decl_unbindable_type) << DD << !RD << DecompType; DD->setInvalidDecl(); return; } // C++1z [dcl.decomp]/4: // all of E's non-static data members shall be [...] direct members of // E or of the same unambiguous public base class of E, ... if (checkMemberDecomposition(*this, Bindings, DD, DecompType, RD)) DD->setInvalidDecl(); } /// \brief Merge the exception specifications of two variable declarations. /// /// This is called when there's a redeclaration of a VarDecl. The function /// checks if the redeclaration might have an exception specification and /// validates compatibility and merges the specs if necessary. void Sema::MergeVarDeclExceptionSpecs(VarDecl *New, VarDecl *Old) { // Shortcut if exceptions are disabled. 
if (!getLangOpts().CXXExceptions) return; assert(Context.hasSameType(New->getType(), Old->getType()) && "Should only be called if types are otherwise the same."); QualType NewType = New->getType(); QualType OldType = Old->getType(); // We're only interested in pointers and references to functions, as well // as pointers to member functions. if (const ReferenceType *R = NewType->getAs()) { NewType = R->getPointeeType(); OldType = OldType->getAs()->getPointeeType(); } else if (const PointerType *P = NewType->getAs()) { NewType = P->getPointeeType(); OldType = OldType->getAs()->getPointeeType(); } else if (const MemberPointerType *M = NewType->getAs()) { NewType = M->getPointeeType(); OldType = OldType->getAs()->getPointeeType(); } if (!NewType->isFunctionProtoType()) return; // There's lots of special cases for functions. For function pointers, system // libraries are hopefully not as broken so that we don't need these // workarounds. if (CheckEquivalentExceptionSpec( OldType->getAs(), Old->getLocation(), NewType->getAs(), New->getLocation())) { New->setInvalidDecl(); } } /// CheckCXXDefaultArguments - Verify that the default arguments for a /// function declaration are well-formed according to C++ /// [dcl.fct.default]. void Sema::CheckCXXDefaultArguments(FunctionDecl *FD) { unsigned NumParams = FD->getNumParams(); unsigned p; // Find first parameter with a default argument for (p = 0; p < NumParams; ++p) { ParmVarDecl *Param = FD->getParamDecl(p); if (Param->hasDefaultArg()) break; } // C++11 [dcl.fct.default]p4: // In a given function declaration, each parameter subsequent to a parameter // with a default argument shall have a default argument supplied in this or // a previous declaration or shall be a function parameter pack. A default // argument shall not be redefined by a later declaration (not even to the // same value). unsigned LastMissingDefaultArg = 0; for (; p < NumParams; ++p) { ParmVarDecl *Param = FD->getParamDecl(p); if (!Param->hasDefaultArg() && !Param->isParameterPack()) { if (Param->isInvalidDecl()) /* We already complained about this parameter. */; else if (Param->getIdentifier()) Diag(Param->getLocation(), diag::err_param_default_argument_missing_name) << Param->getIdentifier(); else Diag(Param->getLocation(), diag::err_param_default_argument_missing); LastMissingDefaultArg = p; } } if (LastMissingDefaultArg > 0) { // Some default arguments were missing. Clear out all of the // default arguments up to (and including) the last missing // default argument, so that we leave the function parameters // in a semantically valid state. for (p = 0; p <= LastMissingDefaultArg; ++p) { ParmVarDecl *Param = FD->getParamDecl(p); if (Param->hasDefaultArg()) { Param->setDefaultArg(nullptr); } } } } // CheckConstexprParameterTypes - Check whether a function's parameter types // are all literal types. If so, return true. If not, produce a suitable // diagnostic and return false. 
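//
// For example (illustrative, not from the original source):
//
//   constexpr int bad(std::string s);  // error: 'std::string' is not a
//                                      // literal type
//   constexpr int ok(int i);           // OK: scalar types are literal types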
static bool CheckConstexprParameterTypes(Sema &SemaRef, const FunctionDecl *FD) { unsigned ArgIndex = 0; const FunctionProtoType *FT = FD->getType()->getAs(); for (FunctionProtoType::param_type_iterator i = FT->param_type_begin(), e = FT->param_type_end(); i != e; ++i, ++ArgIndex) { const ParmVarDecl *PD = FD->getParamDecl(ArgIndex); SourceLocation ParamLoc = PD->getLocation(); if (!(*i)->isDependentType() && SemaRef.RequireLiteralType(ParamLoc, *i, diag::err_constexpr_non_literal_param, ArgIndex+1, PD->getSourceRange(), isa(FD))) return false; } return true; } /// \brief Get diagnostic %select index for tag kind for /// record diagnostic message. /// WARNING: Indexes apply to particular diagnostics only! /// /// \returns diagnostic %select index. static unsigned getRecordDiagFromTagKind(TagTypeKind Tag) { switch (Tag) { case TTK_Struct: return 0; case TTK_Interface: return 1; case TTK_Class: return 2; default: llvm_unreachable("Invalid tag kind for record diagnostic!"); } } // CheckConstexprFunctionDecl - Check whether a function declaration satisfies // the requirements of a constexpr function definition or a constexpr // constructor definition. If so, return true. If not, produce appropriate // diagnostics and return false. // // This implements C++11 [dcl.constexpr]p3,4, as amended by DR1360. bool Sema::CheckConstexprFunctionDecl(const FunctionDecl *NewFD) { const CXXMethodDecl *MD = dyn_cast(NewFD); if (MD && MD->isInstance()) { // C++11 [dcl.constexpr]p4: // The definition of a constexpr constructor shall satisfy the following // constraints: // - the class shall not have any virtual base classes; const CXXRecordDecl *RD = MD->getParent(); if (RD->getNumVBases()) { Diag(NewFD->getLocation(), diag::err_constexpr_virtual_base) << isa(NewFD) << getRecordDiagFromTagKind(RD->getTagKind()) << RD->getNumVBases(); for (const auto &I : RD->vbases()) Diag(I.getLocStart(), diag::note_constexpr_virtual_base_here) << I.getSourceRange(); return false; } } if (!isa(NewFD)) { // C++11 [dcl.constexpr]p3: // The definition of a constexpr function shall satisfy the following // constraints: // - it shall not be virtual; const CXXMethodDecl *Method = dyn_cast(NewFD); if (Method && Method->isVirtual()) { Method = Method->getCanonicalDecl(); Diag(Method->getLocation(), diag::err_constexpr_virtual); // If it's not obvious why this function is virtual, find an overridden // function which uses the 'virtual' keyword. const CXXMethodDecl *WrittenVirtual = Method; while (!WrittenVirtual->isVirtualAsWritten()) WrittenVirtual = *WrittenVirtual->begin_overridden_methods(); if (WrittenVirtual != Method) Diag(WrittenVirtual->getLocation(), diag::note_overridden_virtual_function); return false; } // - its return type shall be a literal type; QualType RT = NewFD->getReturnType(); if (!RT->isDependentType() && RequireLiteralType(NewFD->getLocation(), RT, diag::err_constexpr_non_literal_return)) return false; } // - each of its parameter types shall be a literal type; if (!CheckConstexprParameterTypes(*this, NewFD)) return false; return true; } /// Check the given declaration statement is legal within a constexpr function /// body. C++11 [dcl.constexpr]p3,p4, and C++1y [dcl.constexpr]p3. /// /// \return true if the body is OK (maybe only as an extension), false if we /// have diagnosed a problem. static bool CheckConstexprDeclStmt(Sema &SemaRef, const FunctionDecl *Dcl, DeclStmt *DS, SourceLocation &Cxx1yLoc) { // C++11 [dcl.constexpr]p3 and p4: // The definition of a constexpr function(p3) or constructor(p4) [...] 
shall // contain only for (const auto *DclIt : DS->decls()) { switch (DclIt->getKind()) { case Decl::StaticAssert: case Decl::Using: case Decl::UsingShadow: case Decl::UsingDirective: case Decl::UnresolvedUsingTypename: case Decl::UnresolvedUsingValue: // - static_assert-declarations // - using-declarations, // - using-directives, continue; case Decl::Typedef: case Decl::TypeAlias: { // - typedef declarations and alias-declarations that do not define // classes or enumerations, const auto *TN = cast(DclIt); if (TN->getUnderlyingType()->isVariablyModifiedType()) { // Don't allow variably-modified types in constexpr functions. TypeLoc TL = TN->getTypeSourceInfo()->getTypeLoc(); SemaRef.Diag(TL.getBeginLoc(), diag::err_constexpr_vla) << TL.getSourceRange() << TL.getType() << isa(Dcl); return false; } continue; } case Decl::Enum: case Decl::CXXRecord: // C++1y allows types to be defined, not just declared. if (cast(DclIt)->isThisDeclarationADefinition()) SemaRef.Diag(DS->getLocStart(), SemaRef.getLangOpts().CPlusPlus14 ? diag::warn_cxx11_compat_constexpr_type_definition : diag::ext_constexpr_type_definition) << isa(Dcl); continue; case Decl::EnumConstant: case Decl::IndirectField: case Decl::ParmVar: // These can only appear with other declarations which are banned in // C++11 and permitted in C++1y, so ignore them. continue; case Decl::Var: case Decl::Decomposition: { // C++1y [dcl.constexpr]p3 allows anything except: // a definition of a variable of non-literal type or of static or // thread storage duration or for which no initialization is performed. const auto *VD = cast(DclIt); if (VD->isThisDeclarationADefinition()) { if (VD->isStaticLocal()) { SemaRef.Diag(VD->getLocation(), diag::err_constexpr_local_var_static) << isa(Dcl) << (VD->getTLSKind() == VarDecl::TLS_Dynamic); return false; } if (!VD->getType()->isDependentType() && SemaRef.RequireLiteralType( VD->getLocation(), VD->getType(), diag::err_constexpr_local_var_non_literal_type, isa(Dcl))) return false; if (!VD->getType()->isDependentType() && !VD->hasInit() && !VD->isCXXForRangeDecl()) { SemaRef.Diag(VD->getLocation(), diag::err_constexpr_local_var_no_init) << isa(Dcl); return false; } } SemaRef.Diag(VD->getLocation(), SemaRef.getLangOpts().CPlusPlus14 ? diag::warn_cxx11_compat_constexpr_local_var : diag::ext_constexpr_local_var) << isa(Dcl); continue; } case Decl::NamespaceAlias: case Decl::Function: // These are disallowed in C++11 and permitted in C++1y. Allow them // everywhere as an extension. if (!Cxx1yLoc.isValid()) Cxx1yLoc = DS->getLocStart(); continue; default: SemaRef.Diag(DS->getLocStart(), diag::err_constexpr_body_invalid_stmt) << isa(Dcl); return false; } } return true; } /// Check that the given field is initialized within a constexpr constructor. /// /// \param Dcl The constexpr constructor being checked. /// \param Field The field being checked. This may be a member of an anonymous /// struct or union nested within the class being checked. /// \param Inits All declarations, including anonymous struct/union members and /// indirect members, for which any initialization was provided. /// \param Diagnosed Set to true if an error is produced. static void CheckConstexprCtorInitializer(Sema &SemaRef, const FunctionDecl *Dcl, FieldDecl *Field, llvm::SmallSet &Inits, bool &Diagnosed) { if (Field->isInvalidDecl()) return; if (Field->isUnnamedBitfield()) return; // Anonymous unions with no variant members and empty anonymous structs do not // need to be explicitly initialized. 
FIXME: Anonymous structs that contain no // indirect fields don't need initializing. if (Field->isAnonymousStructOrUnion() && (Field->getType()->isUnionType() ? !Field->getType()->getAsCXXRecordDecl()->hasVariantMembers() : Field->getType()->getAsCXXRecordDecl()->isEmpty())) return; if (!Inits.count(Field)) { if (!Diagnosed) { SemaRef.Diag(Dcl->getLocation(), diag::err_constexpr_ctor_missing_init); Diagnosed = true; } SemaRef.Diag(Field->getLocation(), diag::note_constexpr_ctor_missing_init); } else if (Field->isAnonymousStructOrUnion()) { const RecordDecl *RD = Field->getType()->castAs()->getDecl(); for (auto *I : RD->fields()) // If an anonymous union contains an anonymous struct of which any member // is initialized, all members must be initialized. if (!RD->isUnion() || Inits.count(I)) CheckConstexprCtorInitializer(SemaRef, Dcl, I, Inits, Diagnosed); } } /// Check the provided statement is allowed in a constexpr function /// definition. static bool CheckConstexprFunctionStmt(Sema &SemaRef, const FunctionDecl *Dcl, Stmt *S, SmallVectorImpl &ReturnStmts, SourceLocation &Cxx1yLoc) { // - its function-body shall be [...] a compound-statement that contains only switch (S->getStmtClass()) { case Stmt::NullStmtClass: // - null statements, return true; case Stmt::DeclStmtClass: // - static_assert-declarations // - using-declarations, // - using-directives, // - typedef declarations and alias-declarations that do not define // classes or enumerations, if (!CheckConstexprDeclStmt(SemaRef, Dcl, cast(S), Cxx1yLoc)) return false; return true; case Stmt::ReturnStmtClass: // - and exactly one return statement; if (isa(Dcl)) { // C++1y allows return statements in constexpr constructors. if (!Cxx1yLoc.isValid()) Cxx1yLoc = S->getLocStart(); return true; } ReturnStmts.push_back(S->getLocStart()); return true; case Stmt::CompoundStmtClass: { // C++1y allows compound-statements. if (!Cxx1yLoc.isValid()) Cxx1yLoc = S->getLocStart(); CompoundStmt *CompStmt = cast(S); for (auto *BodyIt : CompStmt->body()) { if (!CheckConstexprFunctionStmt(SemaRef, Dcl, BodyIt, ReturnStmts, Cxx1yLoc)) return false; } return true; } case Stmt::AttributedStmtClass: if (!Cxx1yLoc.isValid()) Cxx1yLoc = S->getLocStart(); return true; case Stmt::IfStmtClass: { // C++1y allows if-statements. if (!Cxx1yLoc.isValid()) Cxx1yLoc = S->getLocStart(); IfStmt *If = cast(S); if (!CheckConstexprFunctionStmt(SemaRef, Dcl, If->getThen(), ReturnStmts, Cxx1yLoc)) return false; if (If->getElse() && !CheckConstexprFunctionStmt(SemaRef, Dcl, If->getElse(), ReturnStmts, Cxx1yLoc)) return false; return true; } case Stmt::WhileStmtClass: case Stmt::DoStmtClass: case Stmt::ForStmtClass: case Stmt::CXXForRangeStmtClass: case Stmt::ContinueStmtClass: // C++1y allows all of these. We don't allow them as extensions in C++11, // because they don't make sense without variable mutation. if (!SemaRef.getLangOpts().CPlusPlus14) break; if (!Cxx1yLoc.isValid()) Cxx1yLoc = S->getLocStart(); for (Stmt *SubStmt : S->children()) if (SubStmt && !CheckConstexprFunctionStmt(SemaRef, Dcl, SubStmt, ReturnStmts, Cxx1yLoc)) return false; return true; case Stmt::SwitchStmtClass: case Stmt::CaseStmtClass: case Stmt::DefaultStmtClass: case Stmt::BreakStmtClass: // C++1y allows switch-statements, and since they don't need variable // mutation, we can reasonably allow them in C++11 as an extension. 
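    // Illustrative sketch (added commentary, not from the original source;
    // 'classify' is hypothetical):
    //
    //   constexpr int classify(int c) {  // fine in C++14; in C++11 a switch
    //     switch (c) {                   // inside a constexpr function is
    //     case 0: return 1;              // accepted only as an extension,
    //     default: return 2;             // which the Cxx1yLoc tracking below
    //     }                              // reports.
    //   }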
if (!Cxx1yLoc.isValid()) Cxx1yLoc = S->getLocStart(); for (Stmt *SubStmt : S->children()) if (SubStmt && !CheckConstexprFunctionStmt(SemaRef, Dcl, SubStmt, ReturnStmts, Cxx1yLoc)) return false; return true; default: if (!isa(S)) break; // C++1y allows expression-statements. if (!Cxx1yLoc.isValid()) Cxx1yLoc = S->getLocStart(); return true; } SemaRef.Diag(S->getLocStart(), diag::err_constexpr_body_invalid_stmt) << isa(Dcl); return false; } /// Check the body for the given constexpr function declaration only contains /// the permitted types of statement. C++11 [dcl.constexpr]p3,p4. /// /// \return true if the body is OK, false if we have diagnosed a problem. bool Sema::CheckConstexprFunctionBody(const FunctionDecl *Dcl, Stmt *Body) { if (isa(Body)) { // C++11 [dcl.constexpr]p3: // The definition of a constexpr function shall satisfy the following // constraints: [...] // - its function-body shall be = delete, = default, or a // compound-statement // // C++11 [dcl.constexpr]p4: // In the definition of a constexpr constructor, [...] // - its function-body shall not be a function-try-block; Diag(Body->getLocStart(), diag::err_constexpr_function_try_block) << isa(Dcl); return false; } SmallVector ReturnStmts; // - its function-body shall be [...] a compound-statement that contains only // [... list of cases ...] CompoundStmt *CompBody = cast(Body); SourceLocation Cxx1yLoc; for (auto *BodyIt : CompBody->body()) { if (!CheckConstexprFunctionStmt(*this, Dcl, BodyIt, ReturnStmts, Cxx1yLoc)) return false; } if (Cxx1yLoc.isValid()) Diag(Cxx1yLoc, getLangOpts().CPlusPlus14 ? diag::warn_cxx11_compat_constexpr_body_invalid_stmt : diag::ext_constexpr_body_invalid_stmt) << isa(Dcl); if (const CXXConstructorDecl *Constructor = dyn_cast(Dcl)) { const CXXRecordDecl *RD = Constructor->getParent(); // DR1359: // - every non-variant non-static data member and base class sub-object // shall be initialized; // DR1460: // - if the class is a union having variant members, exactly one of them // shall be initialized; if (RD->isUnion()) { if (Constructor->getNumCtorInitializers() == 0 && RD->hasVariantMembers()) { Diag(Dcl->getLocation(), diag::err_constexpr_union_ctor_no_init); return false; } } else if (!Constructor->isDependentContext() && !Constructor->isDelegatingConstructor()) { assert(RD->getNumVBases() == 0 && "constexpr ctor with virtual bases"); // Skip detailed checking if we have enough initializers, and we would // allow at most one initializer per member. bool AnyAnonStructUnionMembers = false; unsigned Fields = 0; for (CXXRecordDecl::field_iterator I = RD->field_begin(), E = RD->field_end(); I != E; ++I, ++Fields) { if (I->isAnonymousStructOrUnion()) { AnyAnonStructUnionMembers = true; break; } } // DR1460: // - if the class is a union-like class, but is not a union, for each of // its anonymous union members having variant members, exactly one of // them shall be initialized; if (AnyAnonStructUnionMembers || Constructor->getNumCtorInitializers() != RD->getNumBases() + Fields) { // Check initialization of non-static data members. Base classes are // always initialized so do not need to be checked. Dependent bases // might not have initializers in the member initializer list. 
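      // Illustrative sketch (added commentary, not from the original source;
      // the class 'A' is hypothetical):
      //
      //   struct A {
      //     int x, y;
      //     constexpr A() : x(0) {}  // error: 'y' is never initialized, so
      //   };                         // the constructor cannot be constexpr.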
llvm::SmallSet Inits; for (const auto *I: Constructor->inits()) { if (FieldDecl *FD = I->getMember()) Inits.insert(FD); else if (IndirectFieldDecl *ID = I->getIndirectMember()) Inits.insert(ID->chain_begin(), ID->chain_end()); } bool Diagnosed = false; for (auto *I : RD->fields()) CheckConstexprCtorInitializer(*this, Dcl, I, Inits, Diagnosed); if (Diagnosed) return false; } } } else { if (ReturnStmts.empty()) { // C++1y doesn't require constexpr functions to contain a 'return' // statement. We still do, unless the return type might be void, because // otherwise if there's no return statement, the function cannot // be used in a core constant expression. bool OK = getLangOpts().CPlusPlus14 && (Dcl->getReturnType()->isVoidType() || Dcl->getReturnType()->isDependentType()); Diag(Dcl->getLocation(), OK ? diag::warn_cxx11_compat_constexpr_body_no_return : diag::err_constexpr_body_no_return); if (!OK) return false; } else if (ReturnStmts.size() > 1) { Diag(ReturnStmts.back(), getLangOpts().CPlusPlus14 ? diag::warn_cxx11_compat_constexpr_body_multiple_return : diag::ext_constexpr_body_multiple_return); for (unsigned I = 0; I < ReturnStmts.size() - 1; ++I) Diag(ReturnStmts[I], diag::note_constexpr_body_previous_return); } } // C++11 [dcl.constexpr]p5: // if no function argument values exist such that the function invocation // substitution would produce a constant expression, the program is // ill-formed; no diagnostic required. // C++11 [dcl.constexpr]p3: // - every constructor call and implicit conversion used in initializing the // return value shall be one of those allowed in a constant expression. // C++11 [dcl.constexpr]p4: // - every constructor involved in initializing non-static data members and // base class sub-objects shall be a constexpr constructor. SmallVector Diags; if (!Expr::isPotentialConstantExpr(Dcl, Diags)) { Diag(Dcl->getLocation(), diag::ext_constexpr_function_never_constant_expr) << isa(Dcl); for (size_t I = 0, N = Diags.size(); I != N; ++I) Diag(Diags[I].first, Diags[I].second); // Don't return false here: we allow this for compatibility in // system headers. } return true; } /// isCurrentClassName - Determine whether the identifier II is the /// name of the class type currently being defined. In the case of /// nested classes, this will only return true if II is the name of /// the innermost class. bool Sema::isCurrentClassName(const IdentifierInfo &II, Scope *, const CXXScopeSpec *SS) { assert(getLangOpts().CPlusPlus && "No class names in C!"); CXXRecordDecl *CurDecl; if (SS && SS->isSet() && !SS->isInvalid()) { DeclContext *DC = computeDeclContext(*SS, true); CurDecl = dyn_cast_or_null(DC); } else CurDecl = dyn_cast_or_null(CurContext); if (CurDecl && CurDecl->getIdentifier()) return &II == CurDecl->getIdentifier(); return false; } /// \brief Determine whether the identifier II is a typo for the name of /// the class type currently being defined. If so, update it to the identifier /// that should have been used. 
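///
/// For example (illustrative, not from the original source): inside
/// \code
///   struct Foo { Fooo(int); };
/// \endcode
/// the identifier 'Fooo' is close enough to 'Foo' under the edit-distance
/// heuristic below to be taken as a misspelling of the current class name.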
bool Sema::isCurrentClassNameTypo(IdentifierInfo *&II, const CXXScopeSpec *SS) { assert(getLangOpts().CPlusPlus && "No class names in C!"); if (!getLangOpts().SpellChecking) return false; CXXRecordDecl *CurDecl; if (SS && SS->isSet() && !SS->isInvalid()) { DeclContext *DC = computeDeclContext(*SS, true); CurDecl = dyn_cast_or_null(DC); } else CurDecl = dyn_cast_or_null(CurContext); if (CurDecl && CurDecl->getIdentifier() && II != CurDecl->getIdentifier() && 3 * II->getName().edit_distance(CurDecl->getIdentifier()->getName()) < II->getLength()) { II = CurDecl->getIdentifier(); return true; } return false; } /// \brief Determine whether the given class is a base class of the given /// class, including looking at dependent bases. static bool findCircularInheritance(const CXXRecordDecl *Class, const CXXRecordDecl *Current) { SmallVector Queue; Class = Class->getCanonicalDecl(); while (true) { for (const auto &I : Current->bases()) { CXXRecordDecl *Base = I.getType()->getAsCXXRecordDecl(); if (!Base) continue; Base = Base->getDefinition(); if (!Base) continue; if (Base->getCanonicalDecl() == Class) return true; Queue.push_back(Base); } if (Queue.empty()) return false; Current = Queue.pop_back_val(); } return false; } /// \brief Check the validity of a C++ base class specifier. /// /// \returns a new CXXBaseSpecifier if well-formed, emits diagnostics /// and returns NULL otherwise. CXXBaseSpecifier * Sema::CheckBaseSpecifier(CXXRecordDecl *Class, SourceRange SpecifierRange, bool Virtual, AccessSpecifier Access, TypeSourceInfo *TInfo, SourceLocation EllipsisLoc) { QualType BaseType = TInfo->getType(); // C++ [class.union]p1: // A union shall not have base classes. if (Class->isUnion()) { Diag(Class->getLocation(), diag::err_base_clause_on_union) << SpecifierRange; return nullptr; } if (EllipsisLoc.isValid() && !TInfo->getType()->containsUnexpandedParameterPack()) { Diag(EllipsisLoc, diag::err_pack_expansion_without_parameter_packs) << TInfo->getTypeLoc().getSourceRange(); EllipsisLoc = SourceLocation(); } SourceLocation BaseLoc = TInfo->getTypeLoc().getBeginLoc(); if (BaseType->isDependentType()) { // Make sure that we don't have circular inheritance among our dependent // bases. For non-dependent bases, the check for completeness below handles // this. if (CXXRecordDecl *BaseDecl = BaseType->getAsCXXRecordDecl()) { if (BaseDecl->getCanonicalDecl() == Class->getCanonicalDecl() || ((BaseDecl = BaseDecl->getDefinition()) && findCircularInheritance(Class, BaseDecl))) { Diag(BaseLoc, diag::err_circular_inheritance) << BaseType << Context.getTypeDeclType(Class); if (BaseDecl->getCanonicalDecl() != Class->getCanonicalDecl()) Diag(BaseDecl->getLocation(), diag::note_previous_decl) << BaseType; return nullptr; } } return new (Context) CXXBaseSpecifier(SpecifierRange, Virtual, Class->getTagKind() == TTK_Class, Access, TInfo, EllipsisLoc); } // Base specifiers must be record types. if (!BaseType->isRecordType()) { Diag(BaseLoc, diag::err_base_must_be_class) << SpecifierRange; return nullptr; } // C++ [class.union]p1: // A union shall not be used as a base class. if (BaseType->isUnionType()) { Diag(BaseLoc, diag::err_union_as_base_class) << SpecifierRange; return nullptr; } // For the MS ABI, propagate DLL attributes to base class templates. 
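  // Illustrative sketch (added commentary, not from the original source;
  // 'Base' and 'Derived' are hypothetical): given
  //
  //   template <typename T> class Base {};
  //   class __declspec(dllexport) Derived : public Base<int> {};
  //
  // the dllexport attribute on Derived is propagated to the Base<int>
  // specialization so that its members are exported as well.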
if (Context.getTargetInfo().getCXXABI().isMicrosoft()) { if (Attr *ClassAttr = getDLLAttr(Class)) { if (auto *BaseTemplate = dyn_cast_or_null( BaseType->getAsCXXRecordDecl())) { propagateDLLAttrToBaseClassTemplate(Class, ClassAttr, BaseTemplate, BaseLoc); } } } // C++ [class.derived]p2: // The class-name in a base-specifier shall not be an incompletely // defined class. if (RequireCompleteType(BaseLoc, BaseType, diag::err_incomplete_base_class, SpecifierRange)) { Class->setInvalidDecl(); return nullptr; } // If the base class is polymorphic or isn't empty, the new one is/isn't, too. RecordDecl *BaseDecl = BaseType->getAs()->getDecl(); assert(BaseDecl && "Record type has no declaration"); BaseDecl = BaseDecl->getDefinition(); assert(BaseDecl && "Base type is not incomplete, but has no definition"); CXXRecordDecl *CXXBaseDecl = cast(BaseDecl); assert(CXXBaseDecl && "Base type is not a C++ type"); // A class which contains a flexible array member is not suitable for use as a // base class: // - If the layout determines that a base comes before another base, // the flexible array member would index into the subsequent base. // - If the layout determines that base comes before the derived class, // the flexible array member would index into the derived class. if (CXXBaseDecl->hasFlexibleArrayMember()) { Diag(BaseLoc, diag::err_base_class_has_flexible_array_member) << CXXBaseDecl->getDeclName(); return nullptr; } // C++ [class]p3: // If a class is marked final and it appears as a base-type-specifier in // base-clause, the program is ill-formed. if (FinalAttr *FA = CXXBaseDecl->getAttr()) { Diag(BaseLoc, diag::err_class_marked_final_used_as_base) << CXXBaseDecl->getDeclName() << FA->isSpelledAsSealed(); Diag(CXXBaseDecl->getLocation(), diag::note_entity_declared_at) << CXXBaseDecl->getDeclName() << FA->getRange(); return nullptr; } if (BaseDecl->isInvalidDecl()) Class->setInvalidDecl(); // Create the base specifier. return new (Context) CXXBaseSpecifier(SpecifierRange, Virtual, Class->getTagKind() == TTK_Class, Access, TInfo, EllipsisLoc); } /// ActOnBaseSpecifier - Parsed a base specifier. A base specifier is /// one entry in the base class list of a class specifier, for /// example: /// class foo : public bar, virtual private baz { /// 'public bar' and 'virtual private baz' are each base-specifiers. BaseResult Sema::ActOnBaseSpecifier(Decl *classdecl, SourceRange SpecifierRange, ParsedAttributes &Attributes, bool Virtual, AccessSpecifier Access, ParsedType basetype, SourceLocation BaseLoc, SourceLocation EllipsisLoc) { if (!classdecl) return true; AdjustDeclIfTemplate(classdecl); CXXRecordDecl *Class = dyn_cast(classdecl); if (!Class) return true; // We haven't yet attached the base specifiers. Class->setIsParsingBaseSpecifiers(); // We do not support any C++11 attributes on base-specifiers yet. // Diagnose any attributes we see. if (!Attributes.empty()) { for (AttributeList *Attr = Attributes.getList(); Attr; Attr = Attr->getNext()) { if (Attr->isInvalid() || Attr->getKind() == AttributeList::IgnoredAttribute) continue; Diag(Attr->getLoc(), Attr->getKind() == AttributeList::UnknownAttribute ? 
diag::warn_unknown_attribute_ignored : diag::err_base_specifier_attribute) << Attr->getName(); } } TypeSourceInfo *TInfo = nullptr; GetTypeFromParser(basetype, &TInfo); if (EllipsisLoc.isInvalid() && DiagnoseUnexpandedParameterPack(SpecifierRange.getBegin(), TInfo, UPPC_BaseType)) return true; if (CXXBaseSpecifier *BaseSpec = CheckBaseSpecifier(Class, SpecifierRange, Virtual, Access, TInfo, EllipsisLoc)) return BaseSpec; else Class->setInvalidDecl(); return true; } /// Use small set to collect indirect bases. As this is only used /// locally, there's no need to abstract the small size parameter. typedef llvm::SmallPtrSet IndirectBaseSet; /// \brief Recursively add the bases of Type. Don't add Type itself. static void NoteIndirectBases(ASTContext &Context, IndirectBaseSet &Set, const QualType &Type) { // Even though the incoming type is a base, it might not be // a class -- it could be a template parm, for instance. if (auto Rec = Type->getAs()) { auto Decl = Rec->getAsCXXRecordDecl(); // Iterate over its bases. for (const auto &BaseSpec : Decl->bases()) { QualType Base = Context.getCanonicalType(BaseSpec.getType()) .getUnqualifiedType(); if (Set.insert(Base).second) // If we've not already seen it, recurse. NoteIndirectBases(Context, Set, Base); } } } /// \brief Performs the actual work of attaching the given base class /// specifiers to a C++ class. bool Sema::AttachBaseSpecifiers(CXXRecordDecl *Class, MutableArrayRef Bases) { if (Bases.empty()) return false; // Used to keep track of which base types we have already seen, so // that we can properly diagnose redundant direct base types. Note // that the key is always the unqualified canonical type of the base // class. std::map KnownBaseTypes; // Used to track indirect bases so we can see if a direct base is // ambiguous. IndirectBaseSet IndirectBaseTypes; // Copy non-redundant base specifiers into permanent storage. unsigned NumGoodBases = 0; bool Invalid = false; for (unsigned idx = 0; idx < Bases.size(); ++idx) { QualType NewBaseType = Context.getCanonicalType(Bases[idx]->getType()); NewBaseType = NewBaseType.getLocalUnqualifiedType(); CXXBaseSpecifier *&KnownBase = KnownBaseTypes[NewBaseType]; if (KnownBase) { // C++ [class.mi]p3: // A class shall not be specified as a direct base class of a // derived class more than once. Diag(Bases[idx]->getLocStart(), diag::err_duplicate_base_class) << KnownBase->getType() << Bases[idx]->getSourceRange(); // Delete the duplicate base class specifier; we're going to // overwrite its pointer later. Context.Deallocate(Bases[idx]); Invalid = true; } else { // Okay, add this new base class. KnownBase = Bases[idx]; Bases[NumGoodBases++] = Bases[idx]; // Note this base's direct & indirect bases, if there could be ambiguity. if (Bases.size() > 1) NoteIndirectBases(Context, IndirectBaseTypes, NewBaseType); if (const RecordType *Record = NewBaseType->getAs()) { const CXXRecordDecl *RD = cast(Record->getDecl()); if (Class->isInterface() && (!RD->isInterface() || KnownBase->getAccessSpecifier() != AS_public)) { // The Microsoft extension __interface does not permit bases that // are not themselves public interfaces. Diag(KnownBase->getLocStart(), diag::err_invalid_base_in_interface) << getRecordDiagFromTagKind(RD->getTagKind()) << RD->getName() << RD->getSourceRange(); Invalid = true; } if (RD->hasAttr()) Class->addAttr(WeakAttr::CreateImplicit(Context)); } } } // Attach the remaining base class specifiers to the derived class. 
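  // Illustrative sketch (added commentary, not from the original source;
  // 'A' and 'D' are hypothetical): only non-redundant specifiers are
  // attached, so for
  //
  //   struct A {};
  //   struct D : A, A {};  // error: 'A' specified as a direct base twice
  //
  // the second 'A' has already been diagnosed and deallocated above, and
  // NumGoodBases counts just the first one.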
Class->setBases(Bases.data(), NumGoodBases); for (unsigned idx = 0; idx < NumGoodBases; ++idx) { // Check whether this direct base is inaccessible due to ambiguity. QualType BaseType = Bases[idx]->getType(); CanQualType CanonicalBase = Context.getCanonicalType(BaseType) .getUnqualifiedType(); if (IndirectBaseTypes.count(CanonicalBase)) { CXXBasePaths Paths(/*FindAmbiguities=*/true, /*RecordPaths=*/true, /*DetectVirtual=*/true); bool found = Class->isDerivedFrom(CanonicalBase->getAsCXXRecordDecl(), Paths); assert(found); (void)found; if (Paths.isAmbiguous(CanonicalBase)) Diag(Bases[idx]->getLocStart (), diag::warn_inaccessible_base_class) << BaseType << getAmbiguousPathsDisplayString(Paths) << Bases[idx]->getSourceRange(); else assert(Bases[idx]->isVirtual()); } // Delete the base class specifier, since its data has been copied // into the CXXRecordDecl. Context.Deallocate(Bases[idx]); } return Invalid; } /// ActOnBaseSpecifiers - Attach the given base specifiers to the /// class, after checking whether there are any duplicate base /// classes. void Sema::ActOnBaseSpecifiers(Decl *ClassDecl, MutableArrayRef Bases) { if (!ClassDecl || Bases.empty()) return; AdjustDeclIfTemplate(ClassDecl); AttachBaseSpecifiers(cast(ClassDecl), Bases); } /// \brief Determine whether the type \p Derived is a C++ class that is /// derived from the type \p Base. bool Sema::IsDerivedFrom(SourceLocation Loc, QualType Derived, QualType Base) { if (!getLangOpts().CPlusPlus) return false; CXXRecordDecl *DerivedRD = Derived->getAsCXXRecordDecl(); if (!DerivedRD) return false; CXXRecordDecl *BaseRD = Base->getAsCXXRecordDecl(); if (!BaseRD) return false; // If either the base or the derived type is invalid, don't try to // check whether one is derived from the other. if (BaseRD->isInvalidDecl() || DerivedRD->isInvalidDecl()) return false; // FIXME: In a modules build, do we need the entire path to be visible for us // to be able to use the inheritance relationship? if (!isCompleteType(Loc, Derived) && !DerivedRD->isBeingDefined()) return false; return DerivedRD->isDerivedFrom(BaseRD); } /// \brief Determine whether the type \p Derived is a C++ class that is /// derived from the type \p Base. bool Sema::IsDerivedFrom(SourceLocation Loc, QualType Derived, QualType Base, CXXBasePaths &Paths) { if (!getLangOpts().CPlusPlus) return false; CXXRecordDecl *DerivedRD = Derived->getAsCXXRecordDecl(); if (!DerivedRD) return false; CXXRecordDecl *BaseRD = Base->getAsCXXRecordDecl(); if (!BaseRD) return false; if (!isCompleteType(Loc, Derived) && !DerivedRD->isBeingDefined()) return false; return DerivedRD->isDerivedFrom(BaseRD, Paths); } void Sema::BuildBasePathArray(const CXXBasePaths &Paths, CXXCastPath &BasePathArray) { assert(BasePathArray.empty() && "Base path array must be empty!"); assert(Paths.isRecordingPaths() && "Must record paths!"); const CXXBasePath &Path = Paths.front(); // We first go backward and check if we have a virtual base. // FIXME: It would be better if CXXBasePath had the base specifier for // the nearest virtual base. unsigned Start = 0; for (unsigned I = Path.size(); I != 0; --I) { if (Path[I - 1].Base->isVirtual()) { Start = I - 1; break; } } // Now add all bases. 
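  // Illustrative sketch (added commentary, not from the original source;
  // 'A', 'B', 'D' are hypothetical): for
  //
  //   struct A {};
  //   struct B : virtual A {};
  //   struct D : B {};
  //
  // a D -> A conversion records only the specifiers from the nearest virtual
  // base onward, i.e. just the 'virtual A' step from B, since the path up to
  // the virtual base subobject is not needed to locate it.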
for (unsigned I = Start, E = Path.size(); I != E; ++I) BasePathArray.push_back(const_cast(Path[I].Base)); } /// CheckDerivedToBaseConversion - Check whether the Derived-to-Base /// conversion (where Derived and Base are class types) is /// well-formed, meaning that the conversion is unambiguous (and /// that all of the base classes are accessible). Returns true /// and emits a diagnostic if the code is ill-formed, returns false /// otherwise. Loc is the location where this routine should point to /// if there is an error, and Range is the source range to highlight /// if there is an error. /// /// If either InaccessibleBaseID or AmbigiousBaseConvID are 0, then the /// diagnostic for the respective type of error will be suppressed, but the /// check for ill-formed code will still be performed. bool Sema::CheckDerivedToBaseConversion(QualType Derived, QualType Base, unsigned InaccessibleBaseID, unsigned AmbigiousBaseConvID, SourceLocation Loc, SourceRange Range, DeclarationName Name, CXXCastPath *BasePath, bool IgnoreAccess) { // First, determine whether the path from Derived to Base is // ambiguous. This is slightly more expensive than checking whether // the Derived to Base conversion exists, because here we need to // explore multiple paths to determine if there is an ambiguity. CXXBasePaths Paths(/*FindAmbiguities=*/true, /*RecordPaths=*/true, /*DetectVirtual=*/false); bool DerivationOkay = IsDerivedFrom(Loc, Derived, Base, Paths); assert(DerivationOkay && "Can only be used with a derived-to-base conversion"); (void)DerivationOkay; if (!Paths.isAmbiguous(Context.getCanonicalType(Base).getUnqualifiedType())) { if (!IgnoreAccess) { // Check that the base class can be accessed. switch (CheckBaseClassAccess(Loc, Base, Derived, Paths.front(), InaccessibleBaseID)) { case AR_inaccessible: return true; case AR_accessible: case AR_dependent: case AR_delayed: break; } } // Build a base path if necessary. if (BasePath) BuildBasePathArray(Paths, *BasePath); return false; } if (AmbigiousBaseConvID) { // We know that the derived-to-base conversion is ambiguous, and // we're going to produce a diagnostic. Perform the derived-to-base // search just one more time to compute all of the possible paths so // that we can print them out. This is more expensive than any of // the previous derived-to-base checks we've done, but at this point // performance isn't as much of an issue. Paths.clear(); Paths.setRecordingPaths(true); bool StillOkay = IsDerivedFrom(Loc, Derived, Base, Paths); assert(StillOkay && "Can only be used with a derived-to-base conversion"); (void)StillOkay; // Build up a textual representation of the ambiguous paths, e.g., // D -> B -> A, that will be used to illustrate the ambiguous // conversions in the diagnostic. We only print one of the paths // to each base class subobject. std::string PathDisplayStr = getAmbiguousPathsDisplayString(Paths); Diag(Loc, AmbigiousBaseConvID) << Derived << Base << PathDisplayStr << Range << Name; } return true; } bool Sema::CheckDerivedToBaseConversion(QualType Derived, QualType Base, SourceLocation Loc, SourceRange Range, CXXCastPath *BasePath, bool IgnoreAccess) { return CheckDerivedToBaseConversion( Derived, Base, diag::err_upcast_to_inaccessible_base, diag::err_ambiguous_derived_to_base_conv, Loc, Range, DeclarationName(), BasePath, IgnoreAccess); } /// @brief Builds a string representing ambiguous paths from a /// specific derived class to different subobjects of the same base /// class. 
/// /// This function builds a string that can be used in error messages /// to show the different paths that one can take through the /// inheritance hierarchy to go from the derived class to different /// subobjects of a base class. The result looks something like this: /// @code /// struct D -> struct B -> struct A /// struct D -> struct C -> struct A /// @endcode std::string Sema::getAmbiguousPathsDisplayString(CXXBasePaths &Paths) { std::string PathDisplayStr; std::set DisplayedPaths; for (CXXBasePaths::paths_iterator Path = Paths.begin(); Path != Paths.end(); ++Path) { if (DisplayedPaths.insert(Path->back().SubobjectNumber).second) { // We haven't displayed a path to this particular base // class subobject yet. PathDisplayStr += "\n "; PathDisplayStr += Context.getTypeDeclType(Paths.getOrigin()).getAsString(); for (CXXBasePath::const_iterator Element = Path->begin(); Element != Path->end(); ++Element) PathDisplayStr += " -> " + Element->Base->getType().getAsString(); } } return PathDisplayStr; } //===----------------------------------------------------------------------===// // C++ class member Handling //===----------------------------------------------------------------------===// /// ActOnAccessSpecifier - Parsed an access specifier followed by a colon. bool Sema::ActOnAccessSpecifier(AccessSpecifier Access, SourceLocation ASLoc, SourceLocation ColonLoc, AttributeList *Attrs) { assert(Access != AS_none && "Invalid kind for syntactic access specifier!"); AccessSpecDecl *ASDecl = AccessSpecDecl::Create(Context, Access, CurContext, ASLoc, ColonLoc); CurContext->addHiddenDecl(ASDecl); return ProcessAccessDeclAttributeList(ASDecl, Attrs); } /// CheckOverrideControl - Check C++11 override control semantics. void Sema::CheckOverrideControl(NamedDecl *D) { if (D->isInvalidDecl()) return; // We only care about "override" and "final" declarations. if (!D->hasAttr() && !D->hasAttr()) return; CXXMethodDecl *MD = dyn_cast(D); // We can't check dependent instance methods. if (MD && MD->isInstance() && (MD->getParent()->hasAnyDependentBases() || MD->getType()->isDependentType())) return; if (MD && !MD->isVirtual()) { // If we have a non-virtual method, check if if hides a virtual method. // (In that case, it's most likely the method has the wrong type.) SmallVector OverloadedMethods; FindHiddenVirtualMethods(MD, OverloadedMethods); if (!OverloadedMethods.empty()) { if (OverrideAttr *OA = D->getAttr()) { Diag(OA->getLocation(), diag::override_keyword_hides_virtual_member_function) << "override" << (OverloadedMethods.size() > 1); } else if (FinalAttr *FA = D->getAttr()) { Diag(FA->getLocation(), diag::override_keyword_hides_virtual_member_function) << (FA->isSpelledAsSealed() ? "sealed" : "final") << (OverloadedMethods.size() > 1); } NoteHiddenVirtualMethods(MD, OverloadedMethods); MD->setInvalidDecl(); return; } // Fall through into the general case diagnostic. // FIXME: We might want to attempt typo correction here. } if (!MD || !MD->isVirtual()) { if (OverrideAttr *OA = D->getAttr()) { Diag(OA->getLocation(), diag::override_keyword_only_allowed_on_virtual_member_functions) << "override" << FixItHint::CreateRemoval(OA->getLocation()); D->dropAttr(); } if (FinalAttr *FA = D->getAttr()) { Diag(FA->getLocation(), diag::override_keyword_only_allowed_on_virtual_member_functions) << (FA->isSpelledAsSealed() ? 
"sealed" : "final") << FixItHint::CreateRemoval(FA->getLocation()); D->dropAttr(); } return; } // C++11 [class.virtual]p5: // If a function is marked with the virt-specifier override and // does not override a member function of a base class, the program is // ill-formed. bool HasOverriddenMethods = MD->begin_overridden_methods() != MD->end_overridden_methods(); if (MD->hasAttr() && !HasOverriddenMethods) Diag(MD->getLocation(), diag::err_function_marked_override_not_overriding) << MD->getDeclName(); } void Sema::DiagnoseAbsenceOfOverrideControl(NamedDecl *D) { if (D->isInvalidDecl() || D->hasAttr()) return; CXXMethodDecl *MD = dyn_cast(D); if (!MD || MD->isImplicit() || MD->hasAttr()) return; SourceLocation Loc = MD->getLocation(); SourceLocation SpellingLoc = Loc; if (getSourceManager().isMacroArgExpansion(Loc)) SpellingLoc = getSourceManager().getImmediateExpansionRange(Loc).first; SpellingLoc = getSourceManager().getSpellingLoc(SpellingLoc); if (SpellingLoc.isValid() && getSourceManager().isInSystemHeader(SpellingLoc)) return; if (MD->size_overridden_methods() > 0) { unsigned DiagID = isa(MD) ? diag::warn_destructor_marked_not_override_overriding : diag::warn_function_marked_not_override_overriding; Diag(MD->getLocation(), DiagID) << MD->getDeclName(); const CXXMethodDecl *OMD = *MD->begin_overridden_methods(); Diag(OMD->getLocation(), diag::note_overridden_virtual_function); } } /// CheckIfOverriddenFunctionIsMarkedFinal - Checks whether a virtual member /// function overrides a virtual member function marked 'final', according to /// C++11 [class.virtual]p4. bool Sema::CheckIfOverriddenFunctionIsMarkedFinal(const CXXMethodDecl *New, const CXXMethodDecl *Old) { FinalAttr *FA = Old->getAttr(); if (!FA) return false; Diag(New->getLocation(), diag::err_final_function_overridden) << New->getDeclName() << FA->isSpelledAsSealed(); Diag(Old->getLocation(), diag::note_overridden_virtual_function); return true; } static bool InitializationHasSideEffects(const FieldDecl &FD) { const Type *T = FD.getType()->getBaseElementTypeUnsafe(); // FIXME: Destruction of ObjC lifetime types has side-effects. if (const CXXRecordDecl *RD = T->getAsCXXRecordDecl()) return !RD->isCompleteDefinition() || !RD->hasTrivialDefaultConstructor() || !RD->hasTrivialDestructor(); return false; } static AttributeList *getMSPropertyAttr(AttributeList *list) { for (AttributeList *it = list; it != nullptr; it = it->getNext()) if (it->isDeclspecPropertyAttribute()) return it; return nullptr; } // Check if there is a field shadowing. 
void Sema::CheckShadowInheritedFields(const SourceLocation &Loc, DeclarationName FieldName, const CXXRecordDecl *RD) { if (Diags.isIgnored(diag::warn_shadow_field, Loc)) return; // To record a shadowed field in a base std::map Bases; auto FieldShadowed = [&](const CXXBaseSpecifier *Specifier, CXXBasePath &Path) { const auto Base = Specifier->getType()->getAsCXXRecordDecl(); // Record an ambiguous path directly if (Bases.find(Base) != Bases.end()) return true; for (const auto Field : Base->lookup(FieldName)) { if ((isa(Field) || isa(Field)) && Field->getAccess() != AS_private) { assert(Field->getAccess() != AS_none); assert(Bases.find(Base) == Bases.end()); Bases[Base] = Field; return true; } } return false; }; CXXBasePaths Paths(/*FindAmbiguities=*/true, /*RecordPaths=*/true, /*DetectVirtual=*/true); if (!RD->lookupInBases(FieldShadowed, Paths)) return; for (const auto &P : Paths) { auto Base = P.back().Base->getType()->getAsCXXRecordDecl(); auto It = Bases.find(Base); // Skip duplicated bases if (It == Bases.end()) continue; auto BaseField = It->second; assert(BaseField->getAccess() != AS_private); if (AS_none != CXXRecordDecl::MergeAccess(P.Access, BaseField->getAccess())) { Diag(Loc, diag::warn_shadow_field) << FieldName.getAsString() << RD->getName() << Base->getName(); Diag(BaseField->getLocation(), diag::note_shadow_field); Bases.erase(It); } } } /// ActOnCXXMemberDeclarator - This is invoked when a C++ class member /// declarator is parsed. 'AS' is the access specifier, 'BW' specifies the /// bitfield width if there is one, 'InitExpr' specifies the initializer if /// one has been parsed, and 'InitStyle' is set if an in-class initializer is /// present (but parsing it has been deferred). NamedDecl * Sema::ActOnCXXMemberDeclarator(Scope *S, AccessSpecifier AS, Declarator &D, MultiTemplateParamsArg TemplateParameterLists, Expr *BW, const VirtSpecifiers &VS, InClassInitStyle InitStyle) { const DeclSpec &DS = D.getDeclSpec(); DeclarationNameInfo NameInfo = GetNameForDeclarator(D); DeclarationName Name = NameInfo.getName(); SourceLocation Loc = NameInfo.getLoc(); // For anonymous bitfields, the location should point to the type. if (Loc.isInvalid()) Loc = D.getLocStart(); Expr *BitWidth = static_cast(BW); assert(isa(CurContext)); assert(!DS.isFriendSpecified()); bool isFunc = D.isDeclarationOfFunction(); if (cast(CurContext)->isInterface()) { // The Microsoft extension __interface only permits public member functions // and prohibits constructors, destructors, operators, non-public member // functions, static methods and data members. unsigned InvalidDecl; bool ShowDeclName = true; if (!isFunc) InvalidDecl = (DS.getStorageClassSpec() == DeclSpec::SCS_typedef) ? 0 : 1; else if (AS != AS_public) InvalidDecl = 2; else if (DS.getStorageClassSpec() == DeclSpec::SCS_static) InvalidDecl = 3; else switch (Name.getNameKind()) { case DeclarationName::CXXConstructorName: InvalidDecl = 4; ShowDeclName = false; break; case DeclarationName::CXXDestructorName: InvalidDecl = 5; ShowDeclName = false; break; case DeclarationName::CXXOperatorName: case DeclarationName::CXXConversionFunctionName: InvalidDecl = 6; break; default: InvalidDecl = 0; break; } if (InvalidDecl) { if (ShowDeclName) Diag(Loc, diag::err_invalid_member_in_interface) << (InvalidDecl-1) << Name; else Diag(Loc, diag::err_invalid_member_in_interface) << (InvalidDecl-1) << ""; return nullptr; } } // C++ 9.2p6: A member shall not be declared to have automatic storage // duration (auto, register) or with the extern storage-class-specifier. 
// C++ 7.1.1p8: The mutable specifier can be applied only to names of class // data members and cannot be applied to names declared const or static, // and cannot be applied to reference members. switch (DS.getStorageClassSpec()) { case DeclSpec::SCS_unspecified: case DeclSpec::SCS_typedef: case DeclSpec::SCS_static: break; case DeclSpec::SCS_mutable: if (isFunc) { Diag(DS.getStorageClassSpecLoc(), diag::err_mutable_function); // FIXME: It would be nicer if the keyword was ignored only for this // declarator. Otherwise we could get follow-up errors. D.getMutableDeclSpec().ClearStorageClassSpecs(); } break; default: Diag(DS.getStorageClassSpecLoc(), diag::err_storageclass_invalid_for_member); D.getMutableDeclSpec().ClearStorageClassSpecs(); break; } bool isInstField = ((DS.getStorageClassSpec() == DeclSpec::SCS_unspecified || DS.getStorageClassSpec() == DeclSpec::SCS_mutable) && !isFunc); if (DS.isConstexprSpecified() && isInstField) { SemaDiagnosticBuilder B = Diag(DS.getConstexprSpecLoc(), diag::err_invalid_constexpr_member); SourceLocation ConstexprLoc = DS.getConstexprSpecLoc(); if (InitStyle == ICIS_NoInit) { B << 0 << 0; if (D.getDeclSpec().getTypeQualifiers() & DeclSpec::TQ_const) B << FixItHint::CreateRemoval(ConstexprLoc); else { B << FixItHint::CreateReplacement(ConstexprLoc, "const"); D.getMutableDeclSpec().ClearConstexprSpec(); const char *PrevSpec; unsigned DiagID; bool Failed = D.getMutableDeclSpec().SetTypeQual( DeclSpec::TQ_const, ConstexprLoc, PrevSpec, DiagID, getLangOpts()); (void)Failed; assert(!Failed && "Making a constexpr member const shouldn't fail"); } } else { B << 1; const char *PrevSpec; unsigned DiagID; if (D.getMutableDeclSpec().SetStorageClassSpec( *this, DeclSpec::SCS_static, ConstexprLoc, PrevSpec, DiagID, Context.getPrintingPolicy())) { assert(DS.getStorageClassSpec() == DeclSpec::SCS_mutable && "This is the only DeclSpec that should fail to be applied"); B << 1; } else { B << 0 << FixItHint::CreateInsertion(ConstexprLoc, "static "); isInstField = false; } } } NamedDecl *Member; if (isInstField) { CXXScopeSpec &SS = D.getCXXScopeSpec(); // Data members must have identifiers for names. if (!Name.isIdentifier()) { Diag(Loc, diag::err_bad_variable_name) << Name; return nullptr; } IdentifierInfo *II = Name.getAsIdentifierInfo(); // Member field could not be with "template" keyword. // So TemplateParameterLists should be empty in this case. if (TemplateParameterLists.size()) { TemplateParameterList* TemplateParams = TemplateParameterLists[0]; if (TemplateParams->size()) { // There is no such thing as a member field template. Diag(D.getIdentifierLoc(), diag::err_template_member) << II << SourceRange(TemplateParams->getTemplateLoc(), TemplateParams->getRAngleLoc()); } else { // There is an extraneous 'template<>' for this member. 
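        // Illustrative sketch (added commentary, not from the original
        // source; 'X' is hypothetical):
        //
        //   struct X {
        //     template <> int n;  // error: extraneous 'template<>' on a
        //   };                    // non-template data member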
Diag(TemplateParams->getTemplateLoc(), diag::err_template_member_noparams) << II << SourceRange(TemplateParams->getTemplateLoc(), TemplateParams->getRAngleLoc()); } return nullptr; } if (SS.isSet() && !SS.isInvalid()) { // The user provided a superfluous scope specifier inside a class // definition: // // class X { // int X::member; // }; if (DeclContext *DC = computeDeclContext(SS, false)) diagnoseQualifiedDeclaration(SS, DC, Name, D.getIdentifierLoc()); else Diag(D.getIdentifierLoc(), diag::err_member_qualification) << Name << SS.getRange(); SS.clear(); } AttributeList *MSPropertyAttr = getMSPropertyAttr(D.getDeclSpec().getAttributes().getList()); if (MSPropertyAttr) { Member = HandleMSProperty(S, cast(CurContext), Loc, D, BitWidth, InitStyle, AS, MSPropertyAttr); if (!Member) return nullptr; isInstField = false; } else { Member = HandleField(S, cast(CurContext), Loc, D, BitWidth, InitStyle, AS); if (!Member) return nullptr; } CheckShadowInheritedFields(Loc, Name, cast(CurContext)); } else { Member = HandleDeclarator(S, D, TemplateParameterLists); if (!Member) return nullptr; // Non-instance-fields can't have a bitfield. if (BitWidth) { if (Member->isInvalidDecl()) { // don't emit another diagnostic. } else if (isa(Member) || isa(Member)) { // C++ 9.6p3: A bit-field shall not be a static member. // "static member 'A' cannot be a bit-field" Diag(Loc, diag::err_static_not_bitfield) << Name << BitWidth->getSourceRange(); } else if (isa(Member)) { // "typedef member 'x' cannot be a bit-field" Diag(Loc, diag::err_typedef_not_bitfield) << Name << BitWidth->getSourceRange(); } else { // A function typedef ("typedef int f(); f a;"). // C++ 9.6p3: A bit-field shall have integral or enumeration type. Diag(Loc, diag::err_not_integral_type_bitfield) << Name << cast(Member)->getType() << BitWidth->getSourceRange(); } BitWidth = nullptr; Member->setInvalidDecl(); } Member->setAccess(AS); // If we have declared a member function template or static data member // template, set the access of the templated declaration as well. if (FunctionTemplateDecl *FunTmpl = dyn_cast(Member)) FunTmpl->getTemplatedDecl()->setAccess(AS); else if (VarTemplateDecl *VarTmpl = dyn_cast(Member)) VarTmpl->getTemplatedDecl()->setAccess(AS); } if (VS.isOverrideSpecified()) Member->addAttr(new (Context) OverrideAttr(VS.getOverrideLoc(), Context, 0)); if (VS.isFinalSpecified()) Member->addAttr(new (Context) FinalAttr(VS.getFinalLoc(), Context, VS.isFinalSpelledSealed())); if (VS.getLastLocation().isValid()) { // Update the end location of a method that has a virt-specifiers. if (CXXMethodDecl *MD = dyn_cast_or_null(Member)) MD->setRangeEnd(VS.getLastLocation()); } CheckOverrideControl(Member); assert((Name || isInstField) && "No identifier for non-field ?"); if (isInstField) { FieldDecl *FD = cast(Member); FieldCollector->Add(FD); if (!Diags.isIgnored(diag::warn_unused_private_field, FD->getLocation())) { // Remember all explicit private FieldDecls that have a name, no side // effects and are not part of a dependent type declaration. if (!FD->isImplicit() && FD->getDeclName() && FD->getAccess() == AS_private && !FD->hasAttr() && !FD->getParent()->isDependentContext() && !InitializationHasSideEffects(*FD)) UnusedPrivateFields.insert(FD); } } return Member; } namespace { class UninitializedFieldVisitor : public EvaluatedExprVisitor { Sema &S; // List of Decls to generate a warning on. Also remove Decls that become // initialized. llvm::SmallPtrSetImpl &Decls; // List of base classes of the record. 
Classes are removed after their // initializers. llvm::SmallPtrSetImpl &BaseClasses; // Vector of decls to be removed from the Decl set prior to visiting the // nodes. These Decls may have been initialized in the prior initializer. llvm::SmallVector DeclsToRemove; // If non-null, add a note to the warning pointing back to the constructor. const CXXConstructorDecl *Constructor; // Variables to hold state when processing an initializer list. When // InitList is true, special case initialization of FieldDecls matching // InitListFieldDecl. bool InitList; FieldDecl *InitListFieldDecl; llvm::SmallVector InitFieldIndex; public: typedef EvaluatedExprVisitor Inherited; UninitializedFieldVisitor(Sema &S, llvm::SmallPtrSetImpl &Decls, llvm::SmallPtrSetImpl &BaseClasses) : Inherited(S.Context), S(S), Decls(Decls), BaseClasses(BaseClasses), Constructor(nullptr), InitList(false), InitListFieldDecl(nullptr) {} // Returns true if the use of ME is not an uninitialized use. bool IsInitListMemberExprInitialized(MemberExpr *ME, bool CheckReferenceOnly) { llvm::SmallVector Fields; bool ReferenceField = false; while (ME) { FieldDecl *FD = dyn_cast(ME->getMemberDecl()); if (!FD) return false; Fields.push_back(FD); if (FD->getType()->isReferenceType()) ReferenceField = true; ME = dyn_cast(ME->getBase()->IgnoreParenImpCasts()); } // Binding a reference to an unintialized field is not an // uninitialized use. if (CheckReferenceOnly && !ReferenceField) return true; llvm::SmallVector UsedFieldIndex; // Discard the first field since it is the field decl that is being // initialized. for (auto I = Fields.rbegin() + 1, E = Fields.rend(); I != E; ++I) { UsedFieldIndex.push_back((*I)->getFieldIndex()); } for (auto UsedIter = UsedFieldIndex.begin(), UsedEnd = UsedFieldIndex.end(), OrigIter = InitFieldIndex.begin(), OrigEnd = InitFieldIndex.end(); UsedIter != UsedEnd && OrigIter != OrigEnd; ++UsedIter, ++OrigIter) { if (*UsedIter < *OrigIter) return true; if (*UsedIter > *OrigIter) break; } return false; } void HandleMemberExpr(MemberExpr *ME, bool CheckReferenceOnly, bool AddressOf) { if (isa(ME->getMemberDecl())) return; // FieldME is the inner-most MemberExpr that is not an anonymous struct // or union. MemberExpr *FieldME = ME; bool AllPODFields = FieldME->getType().isPODType(S.Context); Expr *Base = ME; while (MemberExpr *SubME = dyn_cast(Base->IgnoreParenImpCasts())) { if (isa(SubME->getMemberDecl())) return; if (FieldDecl *FD = dyn_cast(SubME->getMemberDecl())) if (!FD->isAnonymousStructOrUnion()) FieldME = SubME; if (!FieldME->getType().isPODType(S.Context)) AllPODFields = false; Base = SubME->getBase(); } if (!isa(Base->IgnoreParenImpCasts())) return; if (AddressOf && AllPODFields) return; ValueDecl* FoundVD = FieldME->getMemberDecl(); if (ImplicitCastExpr *BaseCast = dyn_cast(Base)) { while (isa(BaseCast->getSubExpr())) { BaseCast = cast(BaseCast->getSubExpr()); } if (BaseCast->getCastKind() == CK_UncheckedDerivedToBase) { QualType T = BaseCast->getType(); if (T->isPointerType() && BaseClasses.count(T->getPointeeType())) { S.Diag(FieldME->getExprLoc(), diag::warn_base_class_is_uninit) << T->getPointeeType() << FoundVD; } } } if (!Decls.count(FoundVD)) return; const bool IsReference = FoundVD->getType()->isReferenceType(); if (InitList && !AddressOf && FoundVD == InitListFieldDecl) { // Special checking for initializer lists. if (IsInitListMemberExprInitialized(ME, CheckReferenceOnly)) { return; } } else { // Prevent double warnings on use of unbounded references. 
if (CheckReferenceOnly && !IsReference) return; } unsigned diag = IsReference ? diag::warn_reference_field_is_uninit : diag::warn_field_is_uninit; S.Diag(FieldME->getExprLoc(), diag) << FoundVD; if (Constructor) S.Diag(Constructor->getLocation(), diag::note_uninit_in_this_constructor) << (Constructor->isDefaultConstructor() && Constructor->isImplicit()); } void HandleValue(Expr *E, bool AddressOf) { E = E->IgnoreParens(); if (MemberExpr *ME = dyn_cast(E)) { HandleMemberExpr(ME, false /*CheckReferenceOnly*/, AddressOf /*AddressOf*/); return; } if (ConditionalOperator *CO = dyn_cast(E)) { Visit(CO->getCond()); HandleValue(CO->getTrueExpr(), AddressOf); HandleValue(CO->getFalseExpr(), AddressOf); return; } if (BinaryConditionalOperator *BCO = dyn_cast(E)) { Visit(BCO->getCond()); HandleValue(BCO->getFalseExpr(), AddressOf); return; } if (OpaqueValueExpr *OVE = dyn_cast(E)) { HandleValue(OVE->getSourceExpr(), AddressOf); return; } if (BinaryOperator *BO = dyn_cast(E)) { switch (BO->getOpcode()) { default: break; case(BO_PtrMemD): case(BO_PtrMemI): HandleValue(BO->getLHS(), AddressOf); Visit(BO->getRHS()); return; case(BO_Comma): Visit(BO->getLHS()); HandleValue(BO->getRHS(), AddressOf); return; } } Visit(E); } void CheckInitListExpr(InitListExpr *ILE) { InitFieldIndex.push_back(0); for (auto Child : ILE->children()) { if (InitListExpr *SubList = dyn_cast(Child)) { CheckInitListExpr(SubList); } else { Visit(Child); } ++InitFieldIndex.back(); } InitFieldIndex.pop_back(); } void CheckInitializer(Expr *E, const CXXConstructorDecl *FieldConstructor, FieldDecl *Field, const Type *BaseClass) { // Remove Decls that may have been initialized in the previous // initializer. for (ValueDecl* VD : DeclsToRemove) Decls.erase(VD); DeclsToRemove.clear(); Constructor = FieldConstructor; InitListExpr *ILE = dyn_cast(E); if (ILE && Field) { InitList = true; InitListFieldDecl = Field; InitFieldIndex.clear(); CheckInitListExpr(ILE); } else { InitList = false; Visit(E); } if (Field) Decls.erase(Field); if (BaseClass) BaseClasses.erase(BaseClass->getCanonicalTypeInternal()); } void VisitMemberExpr(MemberExpr *ME) { // All uses of unbounded reference fields will warn. HandleMemberExpr(ME, true /*CheckReferenceOnly*/, false /*AddressOf*/); } void VisitImplicitCastExpr(ImplicitCastExpr *E) { if (E->getCastKind() == CK_LValueToRValue) { HandleValue(E->getSubExpr(), false /*AddressOf*/); return; } Inherited::VisitImplicitCastExpr(E); } void VisitCXXConstructExpr(CXXConstructExpr *E) { if (E->getConstructor()->isCopyConstructor()) { Expr *ArgExpr = E->getArg(0); if (InitListExpr *ILE = dyn_cast(ArgExpr)) if (ILE->getNumInits() == 1) ArgExpr = ILE->getInit(0); if (ImplicitCastExpr *ICE = dyn_cast(ArgExpr)) if (ICE->getCastKind() == CK_NoOp) ArgExpr = ICE->getSubExpr(); HandleValue(ArgExpr, false /*AddressOf*/); return; } Inherited::VisitCXXConstructExpr(E); } void VisitCXXMemberCallExpr(CXXMemberCallExpr *E) { Expr *Callee = E->getCallee(); if (isa(Callee)) { HandleValue(Callee, false /*AddressOf*/); for (auto Arg : E->arguments()) Visit(Arg); return; } Inherited::VisitCXXMemberCallExpr(E); } void VisitCallExpr(CallExpr *E) { // Treat std::move as a use. 
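    // For example (illustrative only), the argument of std::move is still
    // read, so a constructor such as
    //
    //   struct S {
    //     std::string a, b;
    //     S() : a(std::move(b)) {}  // 'b' is used before it is initialized
    //   };
    //
    // is treated as a use of 'b' for the uninitialized-field warning.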
if (E->getNumArgs() == 1) { if (FunctionDecl *FD = E->getDirectCallee()) { if (FD->isInStdNamespace() && FD->getIdentifier() && FD->getIdentifier()->isStr("move")) { HandleValue(E->getArg(0), false /*AddressOf*/); return; } } } Inherited::VisitCallExpr(E); } void VisitCXXOperatorCallExpr(CXXOperatorCallExpr *E) { Expr *Callee = E->getCallee(); if (isa(Callee)) return Inherited::VisitCXXOperatorCallExpr(E); Visit(Callee); for (auto Arg : E->arguments()) HandleValue(Arg->IgnoreParenImpCasts(), false /*AddressOf*/); } void VisitBinaryOperator(BinaryOperator *E) { // If a field assignment is detected, remove the field from the // uninitiailized field set. if (E->getOpcode() == BO_Assign) if (MemberExpr *ME = dyn_cast(E->getLHS())) if (FieldDecl *FD = dyn_cast(ME->getMemberDecl())) if (!FD->getType()->isReferenceType()) DeclsToRemove.push_back(FD); if (E->isCompoundAssignmentOp()) { HandleValue(E->getLHS(), false /*AddressOf*/); Visit(E->getRHS()); return; } Inherited::VisitBinaryOperator(E); } void VisitUnaryOperator(UnaryOperator *E) { if (E->isIncrementDecrementOp()) { HandleValue(E->getSubExpr(), false /*AddressOf*/); return; } if (E->getOpcode() == UO_AddrOf) { if (MemberExpr *ME = dyn_cast(E->getSubExpr())) { HandleValue(ME->getBase(), true /*AddressOf*/); return; } } Inherited::VisitUnaryOperator(E); } }; // Diagnose value-uses of fields to initialize themselves, e.g. // foo(foo) // where foo is not also a parameter to the constructor. // Also diagnose across field uninitialized use such as // x(y), y(x) // TODO: implement -Wuninitialized and fold this into that framework. static void DiagnoseUninitializedFields( Sema &SemaRef, const CXXConstructorDecl *Constructor) { if (SemaRef.getDiagnostics().isIgnored(diag::warn_field_is_uninit, Constructor->getLocation())) { return; } if (Constructor->isInvalidDecl()) return; const CXXRecordDecl *RD = Constructor->getParent(); if (RD->getDescribedClassTemplate()) return; // Holds fields that are uninitialized. llvm::SmallPtrSet UninitializedFields; // At the beginning, all fields are uninitialized. for (auto *I : RD->decls()) { if (auto *FD = dyn_cast(I)) { UninitializedFields.insert(FD); } else if (auto *IFD = dyn_cast(I)) { UninitializedFields.insert(IFD->getAnonField()); } } llvm::SmallPtrSet UninitializedBaseClasses; for (auto I : RD->bases()) UninitializedBaseClasses.insert(I.getType().getCanonicalType()); if (UninitializedFields.empty() && UninitializedBaseClasses.empty()) return; UninitializedFieldVisitor UninitializedChecker(SemaRef, UninitializedFields, UninitializedBaseClasses); for (const auto *FieldInit : Constructor->inits()) { if (UninitializedFields.empty() && UninitializedBaseClasses.empty()) break; Expr *InitExpr = FieldInit->getInit(); if (!InitExpr) continue; if (CXXDefaultInitExpr *Default = dyn_cast(InitExpr)) { InitExpr = Default->getExpr(); if (!InitExpr) continue; // In class initializers will point to the constructor. UninitializedChecker.CheckInitializer(InitExpr, Constructor, FieldInit->getAnyMember(), FieldInit->getBaseClass()); } else { UninitializedChecker.CheckInitializer(InitExpr, nullptr, FieldInit->getAnyMember(), FieldInit->getBaseClass()); } } } } // namespace /// \brief Enter a new C++ default initializer scope. After calling this, the /// caller must call \ref ActOnFinishCXXInClassMemberInitializer, even if /// parsing or instantiating the initializer failed. 
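///
/// For example (illustrative only), the in-class initializer in
///
///   struct S {
///     int x = 42;
///   };
///
/// is parsed and checked inside a synthetic function scope standing in for
/// the constructor that will notionally evaluate it.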
void Sema::ActOnStartCXXInClassMemberInitializer() { // Create a synthetic function scope to represent the call to the constructor // that notionally surrounds a use of this initializer. PushFunctionScope(); } /// \brief This is invoked after parsing an in-class initializer for a /// non-static C++ class member, and after instantiating an in-class initializer /// in a class template. Such actions are deferred until the class is complete. void Sema::ActOnFinishCXXInClassMemberInitializer(Decl *D, SourceLocation InitLoc, Expr *InitExpr) { // Pop the notional constructor scope we created earlier. PopFunctionScopeInfo(nullptr, D); FieldDecl *FD = dyn_cast(D); assert((isa(D) || FD->getInClassInitStyle() != ICIS_NoInit) && "must set init style when field is created"); if (!InitExpr) { D->setInvalidDecl(); if (FD) FD->removeInClassInitializer(); return; } if (DiagnoseUnexpandedParameterPack(InitExpr, UPPC_Initializer)) { FD->setInvalidDecl(); FD->removeInClassInitializer(); return; } ExprResult Init = InitExpr; if (!FD->getType()->isDependentType() && !InitExpr->isTypeDependent()) { InitializedEntity Entity = InitializedEntity::InitializeMember(FD); InitializationKind Kind = FD->getInClassInitStyle() == ICIS_ListInit ? InitializationKind::CreateDirectList(InitExpr->getLocStart()) : InitializationKind::CreateCopy(InitExpr->getLocStart(), InitLoc); InitializationSequence Seq(*this, Entity, Kind, InitExpr); Init = Seq.Perform(*this, Entity, Kind, InitExpr); if (Init.isInvalid()) { FD->setInvalidDecl(); return; } } // C++11 [class.base.init]p7: // The initialization of each base and member constitutes a // full-expression. Init = ActOnFinishFullExpr(Init.get(), InitLoc); if (Init.isInvalid()) { FD->setInvalidDecl(); return; } InitExpr = Init.get(); FD->setInClassInitializer(InitExpr); } /// \brief Find the direct and/or virtual base specifiers that /// correspond to the given base type, for use in base initialization /// within a constructor. static bool FindBaseInitializer(Sema &SemaRef, CXXRecordDecl *ClassDecl, QualType BaseType, const CXXBaseSpecifier *&DirectBaseSpec, const CXXBaseSpecifier *&VirtualBaseSpec) { // First, check for a direct base class. DirectBaseSpec = nullptr; for (const auto &Base : ClassDecl->bases()) { if (SemaRef.Context.hasSameUnqualifiedType(BaseType, Base.getType())) { // We found a direct base of this type. That's what we're // initializing. DirectBaseSpec = &Base; break; } } // Check for a virtual base class. // FIXME: We might be able to short-circuit this if we know in advance that // there are no virtual bases. VirtualBaseSpec = nullptr; if (!DirectBaseSpec || !DirectBaseSpec->isVirtual()) { // We haven't found a base yet; search the class hierarchy for a // virtual base class. CXXBasePaths Paths(/*FindAmbiguities=*/true, /*RecordPaths=*/true, /*DetectVirtual=*/false); if (SemaRef.IsDerivedFrom(ClassDecl->getLocation(), SemaRef.Context.getTypeDeclType(ClassDecl), BaseType, Paths)) { for (CXXBasePaths::paths_iterator Path = Paths.begin(); Path != Paths.end(); ++Path) { if (Path->back().Base->isVirtual()) { VirtualBaseSpec = Path->back().Base; break; } } } } return DirectBaseSpec || VirtualBaseSpec; } /// \brief Handle a C++ member initializer using braced-init-list syntax. 
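///
/// For example (illustrative only):
///
///   struct A {
///     int x;
///     A()      : x{1} {}   // braced-init-list form, handled by this overload
///     A(int v) : x(v) {}   // parenthesized form, handled by the next overload
///   };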
MemInitResult Sema::ActOnMemInitializer(Decl *ConstructorD, Scope *S, CXXScopeSpec &SS, IdentifierInfo *MemberOrBase, ParsedType TemplateTypeTy, const DeclSpec &DS, SourceLocation IdLoc, Expr *InitList, SourceLocation EllipsisLoc) { return BuildMemInitializer(ConstructorD, S, SS, MemberOrBase, TemplateTypeTy, DS, IdLoc, InitList, EllipsisLoc); } /// \brief Handle a C++ member initializer using parentheses syntax. MemInitResult Sema::ActOnMemInitializer(Decl *ConstructorD, Scope *S, CXXScopeSpec &SS, IdentifierInfo *MemberOrBase, ParsedType TemplateTypeTy, const DeclSpec &DS, SourceLocation IdLoc, SourceLocation LParenLoc, ArrayRef Args, SourceLocation RParenLoc, SourceLocation EllipsisLoc) { Expr *List = new (Context) ParenListExpr(Context, LParenLoc, Args, RParenLoc); return BuildMemInitializer(ConstructorD, S, SS, MemberOrBase, TemplateTypeTy, DS, IdLoc, List, EllipsisLoc); } namespace { // Callback to only accept typo corrections that can be a valid C++ member // intializer: either a non-static field member or a base class. class MemInitializerValidatorCCC : public CorrectionCandidateCallback { public: explicit MemInitializerValidatorCCC(CXXRecordDecl *ClassDecl) : ClassDecl(ClassDecl) {} bool ValidateCandidate(const TypoCorrection &candidate) override { if (NamedDecl *ND = candidate.getCorrectionDecl()) { if (FieldDecl *Member = dyn_cast(ND)) return Member->getDeclContext()->getRedeclContext()->Equals(ClassDecl); return isa(ND); } return false; } private: CXXRecordDecl *ClassDecl; }; } /// \brief Handle a C++ member initializer. MemInitResult Sema::BuildMemInitializer(Decl *ConstructorD, Scope *S, CXXScopeSpec &SS, IdentifierInfo *MemberOrBase, ParsedType TemplateTypeTy, const DeclSpec &DS, SourceLocation IdLoc, Expr *Init, SourceLocation EllipsisLoc) { ExprResult Res = CorrectDelayedTyposInExpr(Init); if (!Res.isUsable()) return true; Init = Res.get(); if (!ConstructorD) return true; AdjustDeclIfTemplate(ConstructorD); CXXConstructorDecl *Constructor = dyn_cast(ConstructorD); if (!Constructor) { // The user wrote a constructor initializer on a function that is // not a C++ constructor. Ignore the error for now, because we may // have more member initializers coming; we'll diagnose it just // once in ActOnMemInitializers. return true; } CXXRecordDecl *ClassDecl = Constructor->getParent(); // C++ [class.base.init]p2: // Names in a mem-initializer-id are looked up in the scope of the // constructor's class and, if not found in that scope, are looked // up in the scope containing the constructor's definition. // [Note: if the constructor's class contains a member with the // same name as a direct or virtual base class of the class, a // mem-initializer-id naming the member or base class and composed // of a single identifier refers to the class member. A // mem-initializer-id for the hidden base class may be specified // using a qualified name. ] if (!SS.getScopeRep() && !TemplateTypeTy) { // Look for a member, first. DeclContext::lookup_result Result = ClassDecl->lookup(MemberOrBase); if (!Result.empty()) { ValueDecl *Member; if ((Member = dyn_cast(Result.front())) || (Member = dyn_cast(Result.front()))) { if (EllipsisLoc.isValid()) Diag(EllipsisLoc, diag::err_pack_expansion_member_init) << MemberOrBase << SourceRange(IdLoc, Init->getSourceRange().getEnd()); return BuildMemberInitializer(Member, Init, IdLoc); } } } // It didn't name a member, so see if it names a class. 
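  // For example (illustrative only), given
  //
  //   struct B { };
  //   struct D : B {
  //     int B;
  //     D() : B(0) {}   // initializes the member 'B', not the base class
  //   };
  //
  // the member lookup above wins; only when no member is found do we go on
  // to interpret the name as a base class here.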
QualType BaseType; TypeSourceInfo *TInfo = nullptr; if (TemplateTypeTy) { BaseType = GetTypeFromParser(TemplateTypeTy, &TInfo); } else if (DS.getTypeSpecType() == TST_decltype) { BaseType = BuildDecltypeType(DS.getRepAsExpr(), DS.getTypeSpecTypeLoc()); } else if (DS.getTypeSpecType() == TST_decltype_auto) { Diag(DS.getTypeSpecTypeLoc(), diag::err_decltype_auto_invalid); return true; } else { LookupResult R(*this, MemberOrBase, IdLoc, LookupOrdinaryName); LookupParsedName(R, S, &SS); TypeDecl *TyD = R.getAsSingle(); if (!TyD) { if (R.isAmbiguous()) return true; // We don't want access-control diagnostics here. R.suppressDiagnostics(); if (SS.isSet() && isDependentScopeSpecifier(SS)) { bool NotUnknownSpecialization = false; DeclContext *DC = computeDeclContext(SS, false); if (CXXRecordDecl *Record = dyn_cast_or_null(DC)) NotUnknownSpecialization = !Record->hasAnyDependentBases(); if (!NotUnknownSpecialization) { // When the scope specifier can refer to a member of an unknown // specialization, we take it as a type name. BaseType = CheckTypenameType(ETK_None, SourceLocation(), SS.getWithLocInContext(Context), *MemberOrBase, IdLoc); if (BaseType.isNull()) return true; TInfo = Context.CreateTypeSourceInfo(BaseType); DependentNameTypeLoc TL = TInfo->getTypeLoc().castAs(); if (!TL.isNull()) { TL.setNameLoc(IdLoc); TL.setElaboratedKeywordLoc(SourceLocation()); TL.setQualifierLoc(SS.getWithLocInContext(Context)); } R.clear(); R.setLookupName(MemberOrBase); } } // If no results were found, try to correct typos. TypoCorrection Corr; if (R.empty() && BaseType.isNull() && (Corr = CorrectTypo( R.getLookupNameInfo(), R.getLookupKind(), S, &SS, llvm::make_unique(ClassDecl), CTK_ErrorRecovery, ClassDecl))) { if (FieldDecl *Member = Corr.getCorrectionDeclAs()) { // We have found a non-static data member with a similar // name to what was typed; complain and initialize that // member. diagnoseTypo(Corr, PDiag(diag::err_mem_init_not_member_or_class_suggest) << MemberOrBase << true); return BuildMemberInitializer(Member, Init, IdLoc); } else if (TypeDecl *Type = Corr.getCorrectionDeclAs()) { const CXXBaseSpecifier *DirectBaseSpec; const CXXBaseSpecifier *VirtualBaseSpec; if (FindBaseInitializer(*this, ClassDecl, Context.getTypeDeclType(Type), DirectBaseSpec, VirtualBaseSpec)) { // We have found a direct or virtual base class with a // similar name to what was typed; complain and initialize // that base class. diagnoseTypo(Corr, PDiag(diag::err_mem_init_not_member_or_class_suggest) << MemberOrBase << false, PDiag() /*Suppress note, we provide our own.*/); const CXXBaseSpecifier *BaseSpec = DirectBaseSpec ? 
DirectBaseSpec : VirtualBaseSpec; Diag(BaseSpec->getLocStart(), diag::note_base_class_specified_here) << BaseSpec->getType() << BaseSpec->getSourceRange(); TyD = Type; } } } if (!TyD && BaseType.isNull()) { Diag(IdLoc, diag::err_mem_init_not_member_or_class) << MemberOrBase << SourceRange(IdLoc,Init->getSourceRange().getEnd()); return true; } } if (BaseType.isNull()) { BaseType = Context.getTypeDeclType(TyD); MarkAnyDeclReferenced(TyD->getLocation(), TyD, /*OdrUse=*/false); if (SS.isSet()) { BaseType = Context.getElaboratedType(ETK_None, SS.getScopeRep(), BaseType); TInfo = Context.CreateTypeSourceInfo(BaseType); ElaboratedTypeLoc TL = TInfo->getTypeLoc().castAs(); TL.getNamedTypeLoc().castAs().setNameLoc(IdLoc); TL.setElaboratedKeywordLoc(SourceLocation()); TL.setQualifierLoc(SS.getWithLocInContext(Context)); } } } if (!TInfo) TInfo = Context.getTrivialTypeSourceInfo(BaseType, IdLoc); return BuildBaseInitializer(BaseType, TInfo, Init, ClassDecl, EllipsisLoc); } /// Checks a member initializer expression for cases where reference (or /// pointer) members are bound to by-value parameters (or their addresses). static void CheckForDanglingReferenceOrPointer(Sema &S, ValueDecl *Member, Expr *Init, SourceLocation IdLoc) { QualType MemberTy = Member->getType(); // We only handle pointers and references currently. // FIXME: Would this be relevant for ObjC object pointers? Or block pointers? if (!MemberTy->isReferenceType() && !MemberTy->isPointerType()) return; const bool IsPointer = MemberTy->isPointerType(); if (IsPointer) { if (const UnaryOperator *Op = dyn_cast(Init->IgnoreParenImpCasts())) { // The only case we're worried about with pointers requires taking the // address. if (Op->getOpcode() != UO_AddrOf) return; Init = Op->getSubExpr(); } else { // We only handle address-of expression initializers for pointers. return; } } if (const DeclRefExpr *DRE = dyn_cast(Init->IgnoreParens())) { // We only warn when referring to a non-reference parameter declaration. const ParmVarDecl *Parameter = dyn_cast(DRE->getDecl()); if (!Parameter || Parameter->getType()->isReferenceType()) return; S.Diag(Init->getExprLoc(), IsPointer ? diag::warn_init_ptr_member_to_parameter_addr : diag::warn_bind_ref_member_to_parameter) << Member << Parameter << Init->getSourceRange(); } else { // Other initializers are fine. return; } S.Diag(Member->getLocation(), diag::note_ref_or_ptr_member_declared_here) << (unsigned)IsPointer; } MemInitResult Sema::BuildMemberInitializer(ValueDecl *Member, Expr *Init, SourceLocation IdLoc) { FieldDecl *DirectMember = dyn_cast(Member); IndirectFieldDecl *IndirectMember = dyn_cast(Member); assert((DirectMember || IndirectMember) && "Member must be a FieldDecl or IndirectFieldDecl"); if (DiagnoseUnexpandedParameterPack(Init, UPPC_Initializer)) return true; if (Member->isInvalidDecl()) return true; MultiExprArg Args; if (ParenListExpr *ParenList = dyn_cast(Init)) { Args = MultiExprArg(ParenList->getExprs(), ParenList->getNumExprs()); } else if (InitListExpr *InitList = dyn_cast(Init)) { Args = MultiExprArg(InitList->getInits(), InitList->getNumInits()); } else { // Template instantiation doesn't reconstruct ParenListExprs for us. Args = Init; } SourceRange InitRange = Init->getSourceRange(); if (Member->getType()->isDependentType() || Init->isTypeDependent()) { // Can't check initialization for a member of dependent type or when // any of the arguments are type-dependent expressions. 
DiscardCleanupsInEvaluationContext(); } else { bool InitList = false; if (isa(Init)) { InitList = true; Args = Init; } // Initialize the member. InitializedEntity MemberEntity = DirectMember ? InitializedEntity::InitializeMember(DirectMember, nullptr) : InitializedEntity::InitializeMember(IndirectMember, nullptr); InitializationKind Kind = InitList ? InitializationKind::CreateDirectList(IdLoc) : InitializationKind::CreateDirect(IdLoc, InitRange.getBegin(), InitRange.getEnd()); InitializationSequence InitSeq(*this, MemberEntity, Kind, Args); ExprResult MemberInit = InitSeq.Perform(*this, MemberEntity, Kind, Args, nullptr); if (MemberInit.isInvalid()) return true; CheckForDanglingReferenceOrPointer(*this, Member, MemberInit.get(), IdLoc); // C++11 [class.base.init]p7: // The initialization of each base and member constitutes a // full-expression. MemberInit = ActOnFinishFullExpr(MemberInit.get(), InitRange.getBegin()); if (MemberInit.isInvalid()) return true; Init = MemberInit.get(); } if (DirectMember) { return new (Context) CXXCtorInitializer(Context, DirectMember, IdLoc, InitRange.getBegin(), Init, InitRange.getEnd()); } else { return new (Context) CXXCtorInitializer(Context, IndirectMember, IdLoc, InitRange.getBegin(), Init, InitRange.getEnd()); } } MemInitResult Sema::BuildDelegatingInitializer(TypeSourceInfo *TInfo, Expr *Init, CXXRecordDecl *ClassDecl) { SourceLocation NameLoc = TInfo->getTypeLoc().getLocalSourceRange().getBegin(); if (!LangOpts.CPlusPlus11) return Diag(NameLoc, diag::err_delegating_ctor) << TInfo->getTypeLoc().getLocalSourceRange(); Diag(NameLoc, diag::warn_cxx98_compat_delegating_ctor); bool InitList = true; MultiExprArg Args = Init; if (ParenListExpr *ParenList = dyn_cast(Init)) { InitList = false; Args = MultiExprArg(ParenList->getExprs(), ParenList->getNumExprs()); } SourceRange InitRange = Init->getSourceRange(); // Initialize the object. InitializedEntity DelegationEntity = InitializedEntity::InitializeDelegation( QualType(ClassDecl->getTypeForDecl(), 0)); InitializationKind Kind = InitList ? InitializationKind::CreateDirectList(NameLoc) : InitializationKind::CreateDirect(NameLoc, InitRange.getBegin(), InitRange.getEnd()); InitializationSequence InitSeq(*this, DelegationEntity, Kind, Args); ExprResult DelegationInit = InitSeq.Perform(*this, DelegationEntity, Kind, Args, nullptr); if (DelegationInit.isInvalid()) return true; assert(cast(DelegationInit.get())->getConstructor() && "Delegating constructor with no target?"); // C++11 [class.base.init]p7: // The initialization of each base and member constitutes a // full-expression. DelegationInit = ActOnFinishFullExpr(DelegationInit.get(), InitRange.getBegin()); if (DelegationInit.isInvalid()) return true; // If we are in a dependent context, template instantiation will // perform this type-checking again. Just save the arguments that we // received in a ParenListExpr. // FIXME: This isn't quite ideal, since our ASTs don't capture all // of the information that we have about the base // initializer. However, deconstructing the ASTs is a dicey process, // and this approach is far more likely to get the corner cases right. 
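  // For example (illustrative only), this function builds the initializer for
  // a delegating constructor such as:
  //
  //   struct S {
  //     S(int v);
  //     S() : S(0) {}   // delegates to S(int)
  //   };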
if (CurContext->isDependentContext()) DelegationInit = Init; return new (Context) CXXCtorInitializer(Context, TInfo, InitRange.getBegin(), DelegationInit.getAs(), InitRange.getEnd()); } MemInitResult Sema::BuildBaseInitializer(QualType BaseType, TypeSourceInfo *BaseTInfo, Expr *Init, CXXRecordDecl *ClassDecl, SourceLocation EllipsisLoc) { SourceLocation BaseLoc = BaseTInfo->getTypeLoc().getLocalSourceRange().getBegin(); if (!BaseType->isDependentType() && !BaseType->isRecordType()) return Diag(BaseLoc, diag::err_base_init_does_not_name_class) << BaseType << BaseTInfo->getTypeLoc().getLocalSourceRange(); // C++ [class.base.init]p2: // [...] Unless the mem-initializer-id names a nonstatic data // member of the constructor's class or a direct or virtual base // of that class, the mem-initializer is ill-formed. A // mem-initializer-list can initialize a base class using any // name that denotes that base class type. bool Dependent = BaseType->isDependentType() || Init->isTypeDependent(); SourceRange InitRange = Init->getSourceRange(); if (EllipsisLoc.isValid()) { // This is a pack expansion. if (!BaseType->containsUnexpandedParameterPack()) { Diag(EllipsisLoc, diag::err_pack_expansion_without_parameter_packs) << SourceRange(BaseLoc, InitRange.getEnd()); EllipsisLoc = SourceLocation(); } } else { // Check for any unexpanded parameter packs. if (DiagnoseUnexpandedParameterPack(BaseLoc, BaseTInfo, UPPC_Initializer)) return true; if (DiagnoseUnexpandedParameterPack(Init, UPPC_Initializer)) return true; } // Check for direct and virtual base classes. const CXXBaseSpecifier *DirectBaseSpec = nullptr; const CXXBaseSpecifier *VirtualBaseSpec = nullptr; if (!Dependent) { if (Context.hasSameUnqualifiedType(QualType(ClassDecl->getTypeForDecl(),0), BaseType)) return BuildDelegatingInitializer(BaseTInfo, Init, ClassDecl); FindBaseInitializer(*this, ClassDecl, BaseType, DirectBaseSpec, VirtualBaseSpec); // C++ [base.class.init]p2: // Unless the mem-initializer-id names a nonstatic data member of the // constructor's class or a direct or virtual base of that class, the // mem-initializer is ill-formed. if (!DirectBaseSpec && !VirtualBaseSpec) { // If the class has any dependent bases, then it's possible that // one of those types will resolve to the same type as // BaseType. Therefore, just treat this as a dependent base // class initialization. FIXME: Should we try to check the // initialization anyway? It seems odd. if (ClassDecl->hasAnyDependentBases()) Dependent = true; else return Diag(BaseLoc, diag::err_not_direct_base_or_virtual) << BaseType << Context.getTypeDeclType(ClassDecl) << BaseTInfo->getTypeLoc().getLocalSourceRange(); } } if (Dependent) { DiscardCleanupsInEvaluationContext(); return new (Context) CXXCtorInitializer(Context, BaseTInfo, /*IsVirtual=*/false, InitRange.getBegin(), Init, InitRange.getEnd(), EllipsisLoc); } // C++ [base.class.init]p2: // If a mem-initializer-id is ambiguous because it designates both // a direct non-virtual base class and an inherited virtual base // class, the mem-initializer is ill-formed. if (DirectBaseSpec && VirtualBaseSpec) return Diag(BaseLoc, diag::err_base_init_direct_and_virtual) << BaseType << BaseTInfo->getTypeLoc().getLocalSourceRange(); const CXXBaseSpecifier *BaseSpec = DirectBaseSpec; if (!BaseSpec) BaseSpec = VirtualBaseSpec; // Initialize the base. 
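  // For example (illustrative only):
  //
  //   struct B { B(int); };
  //   struct D : B {
  //     D() : B(42) {}   // 'B(42)' is the base initializer built below
  //   };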
bool InitList = true; MultiExprArg Args = Init; if (ParenListExpr *ParenList = dyn_cast(Init)) { InitList = false; Args = MultiExprArg(ParenList->getExprs(), ParenList->getNumExprs()); } InitializedEntity BaseEntity = InitializedEntity::InitializeBase(Context, BaseSpec, VirtualBaseSpec); InitializationKind Kind = InitList ? InitializationKind::CreateDirectList(BaseLoc) : InitializationKind::CreateDirect(BaseLoc, InitRange.getBegin(), InitRange.getEnd()); InitializationSequence InitSeq(*this, BaseEntity, Kind, Args); ExprResult BaseInit = InitSeq.Perform(*this, BaseEntity, Kind, Args, nullptr); if (BaseInit.isInvalid()) return true; // C++11 [class.base.init]p7: // The initialization of each base and member constitutes a // full-expression. BaseInit = ActOnFinishFullExpr(BaseInit.get(), InitRange.getBegin()); if (BaseInit.isInvalid()) return true; // If we are in a dependent context, template instantiation will // perform this type-checking again. Just save the arguments that we // received in a ParenListExpr. // FIXME: This isn't quite ideal, since our ASTs don't capture all // of the information that we have about the base // initializer. However, deconstructing the ASTs is a dicey process, // and this approach is far more likely to get the corner cases right. if (CurContext->isDependentContext()) BaseInit = Init; return new (Context) CXXCtorInitializer(Context, BaseTInfo, BaseSpec->isVirtual(), InitRange.getBegin(), BaseInit.getAs(), InitRange.getEnd(), EllipsisLoc); } // Create a static_cast\(expr). static Expr *CastForMoving(Sema &SemaRef, Expr *E, QualType T = QualType()) { if (T.isNull()) T = E->getType(); QualType TargetType = SemaRef.BuildReferenceType( T, /*SpelledAsLValue*/false, SourceLocation(), DeclarationName()); SourceLocation ExprLoc = E->getLocStart(); TypeSourceInfo *TargetLoc = SemaRef.Context.getTrivialTypeSourceInfo( TargetType, ExprLoc); return SemaRef.BuildCXXNamedCast(ExprLoc, tok::kw_static_cast, TargetLoc, E, SourceRange(ExprLoc, ExprLoc), E->getSourceRange()).get(); } /// ImplicitInitializerKind - How an implicit base or member initializer should /// initialize its base or member. enum ImplicitInitializerKind { IIK_Default, IIK_Copy, IIK_Move, IIK_Inherit }; static bool BuildImplicitBaseInitializer(Sema &SemaRef, CXXConstructorDecl *Constructor, ImplicitInitializerKind ImplicitInitKind, CXXBaseSpecifier *BaseSpec, bool IsInheritedVirtualBase, CXXCtorInitializer *&CXXBaseInit) { InitializedEntity InitEntity = InitializedEntity::InitializeBase(SemaRef.Context, BaseSpec, IsInheritedVirtualBase); ExprResult BaseInit; switch (ImplicitInitKind) { case IIK_Inherit: case IIK_Default: { InitializationKind InitKind = InitializationKind::CreateDefault(Constructor->getLocation()); InitializationSequence InitSeq(SemaRef, InitEntity, InitKind, None); BaseInit = InitSeq.Perform(SemaRef, InitEntity, InitKind, None); break; } case IIK_Move: case IIK_Copy: { bool Moving = ImplicitInitKind == IIK_Move; ParmVarDecl *Param = Constructor->getParamDecl(0); QualType ParamType = Param->getType().getNonReferenceType(); Expr *CopyCtorArg = DeclRefExpr::Create(SemaRef.Context, NestedNameSpecifierLoc(), SourceLocation(), Param, false, Constructor->getLocation(), ParamType, VK_LValue, nullptr); SemaRef.MarkDeclRefReferenced(cast(CopyCtorArg)); // Cast to the base class to avoid ambiguities. 
QualType ArgTy = SemaRef.Context.getQualifiedType(BaseSpec->getType().getUnqualifiedType(), ParamType.getQualifiers()); if (Moving) { CopyCtorArg = CastForMoving(SemaRef, CopyCtorArg); } CXXCastPath BasePath; BasePath.push_back(BaseSpec); CopyCtorArg = SemaRef.ImpCastExprToType(CopyCtorArg, ArgTy, CK_UncheckedDerivedToBase, Moving ? VK_XValue : VK_LValue, &BasePath).get(); InitializationKind InitKind = InitializationKind::CreateDirect(Constructor->getLocation(), SourceLocation(), SourceLocation()); InitializationSequence InitSeq(SemaRef, InitEntity, InitKind, CopyCtorArg); BaseInit = InitSeq.Perform(SemaRef, InitEntity, InitKind, CopyCtorArg); break; } } BaseInit = SemaRef.MaybeCreateExprWithCleanups(BaseInit); if (BaseInit.isInvalid()) return true; CXXBaseInit = new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context, SemaRef.Context.getTrivialTypeSourceInfo(BaseSpec->getType(), SourceLocation()), BaseSpec->isVirtual(), SourceLocation(), BaseInit.getAs(), SourceLocation(), SourceLocation()); return false; } static bool RefersToRValueRef(Expr *MemRef) { ValueDecl *Referenced = cast(MemRef)->getMemberDecl(); return Referenced->getType()->isRValueReferenceType(); } static bool BuildImplicitMemberInitializer(Sema &SemaRef, CXXConstructorDecl *Constructor, ImplicitInitializerKind ImplicitInitKind, FieldDecl *Field, IndirectFieldDecl *Indirect, CXXCtorInitializer *&CXXMemberInit) { if (Field->isInvalidDecl()) return true; SourceLocation Loc = Constructor->getLocation(); if (ImplicitInitKind == IIK_Copy || ImplicitInitKind == IIK_Move) { bool Moving = ImplicitInitKind == IIK_Move; ParmVarDecl *Param = Constructor->getParamDecl(0); QualType ParamType = Param->getType().getNonReferenceType(); // Suppress copying zero-width bitfields. if (Field->isBitField() && Field->getBitWidthValue(SemaRef.Context) == 0) return false; Expr *MemberExprBase = DeclRefExpr::Create(SemaRef.Context, NestedNameSpecifierLoc(), SourceLocation(), Param, false, Loc, ParamType, VK_LValue, nullptr); SemaRef.MarkDeclRefReferenced(cast(MemberExprBase)); if (Moving) { MemberExprBase = CastForMoving(SemaRef, MemberExprBase); } // Build a reference to this field within the parameter. CXXScopeSpec SS; LookupResult MemberLookup(SemaRef, Field->getDeclName(), Loc, Sema::LookupMemberName); MemberLookup.addDecl(Indirect ? cast(Indirect) : cast(Field), AS_public); MemberLookup.resolveKind(); ExprResult CtorArg = SemaRef.BuildMemberReferenceExpr(MemberExprBase, ParamType, Loc, /*IsArrow=*/false, SS, /*TemplateKWLoc=*/SourceLocation(), /*FirstQualifierInScope=*/nullptr, MemberLookup, /*TemplateArgs=*/nullptr, /*S*/nullptr); if (CtorArg.isInvalid()) return true; // C++11 [class.copy]p15: // - if a member m has rvalue reference type T&&, it is direct-initialized // with static_cast(x.m); if (RefersToRValueRef(CtorArg.get())) { CtorArg = CastForMoving(SemaRef, CtorArg.get()); } InitializedEntity Entity = Indirect ? InitializedEntity::InitializeMember(Indirect, nullptr, /*Implicit*/ true) : InitializedEntity::InitializeMember(Field, nullptr, /*Implicit*/ true); // Direct-initialize to use the copy constructor. 
InitializationKind InitKind = InitializationKind::CreateDirect(Loc, SourceLocation(), SourceLocation()); Expr *CtorArgE = CtorArg.getAs(); InitializationSequence InitSeq(SemaRef, Entity, InitKind, CtorArgE); ExprResult MemberInit = InitSeq.Perform(SemaRef, Entity, InitKind, MultiExprArg(&CtorArgE, 1)); MemberInit = SemaRef.MaybeCreateExprWithCleanups(MemberInit); if (MemberInit.isInvalid()) return true; if (Indirect) CXXMemberInit = new (SemaRef.Context) CXXCtorInitializer( SemaRef.Context, Indirect, Loc, Loc, MemberInit.getAs(), Loc); else CXXMemberInit = new (SemaRef.Context) CXXCtorInitializer( SemaRef.Context, Field, Loc, Loc, MemberInit.getAs(), Loc); return false; } assert((ImplicitInitKind == IIK_Default || ImplicitInitKind == IIK_Inherit) && "Unhandled implicit init kind!"); QualType FieldBaseElementType = SemaRef.Context.getBaseElementType(Field->getType()); if (FieldBaseElementType->isRecordType()) { InitializedEntity InitEntity = Indirect ? InitializedEntity::InitializeMember(Indirect, nullptr, /*Implicit*/ true) : InitializedEntity::InitializeMember(Field, nullptr, /*Implicit*/ true); InitializationKind InitKind = InitializationKind::CreateDefault(Loc); InitializationSequence InitSeq(SemaRef, InitEntity, InitKind, None); ExprResult MemberInit = InitSeq.Perform(SemaRef, InitEntity, InitKind, None); MemberInit = SemaRef.MaybeCreateExprWithCleanups(MemberInit); if (MemberInit.isInvalid()) return true; if (Indirect) CXXMemberInit = new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context, Indirect, Loc, Loc, MemberInit.get(), Loc); else CXXMemberInit = new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context, Field, Loc, Loc, MemberInit.get(), Loc); return false; } if (!Field->getParent()->isUnion()) { if (FieldBaseElementType->isReferenceType()) { SemaRef.Diag(Constructor->getLocation(), diag::err_uninitialized_member_in_ctor) << (int)Constructor->isImplicit() << SemaRef.Context.getTagDeclType(Constructor->getParent()) << 0 << Field->getDeclName(); SemaRef.Diag(Field->getLocation(), diag::note_declared_at); return true; } if (FieldBaseElementType.isConstQualified()) { SemaRef.Diag(Constructor->getLocation(), diag::err_uninitialized_member_in_ctor) << (int)Constructor->isImplicit() << SemaRef.Context.getTagDeclType(Constructor->getParent()) << 1 << Field->getDeclName(); SemaRef.Diag(Field->getLocation(), diag::note_declared_at); return true; } } if (FieldBaseElementType.hasNonTrivialObjCLifetime()) { // ARC and Weak: // Default-initialize Objective-C pointers to NULL. CXXMemberInit = new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context, Field, Loc, Loc, new (SemaRef.Context) ImplicitValueInitExpr(Field->getType()), Loc); return false; } // Nothing to initialize. 
CXXMemberInit = nullptr; return false; } namespace { struct BaseAndFieldInfo { Sema &S; CXXConstructorDecl *Ctor; bool AnyErrorsInInits; ImplicitInitializerKind IIK; llvm::DenseMap AllBaseFields; SmallVector AllToInit; llvm::DenseMap ActiveUnionMember; BaseAndFieldInfo(Sema &S, CXXConstructorDecl *Ctor, bool ErrorsInInits) : S(S), Ctor(Ctor), AnyErrorsInInits(ErrorsInInits) { bool Generated = Ctor->isImplicit() || Ctor->isDefaulted(); if (Ctor->getInheritedConstructor()) IIK = IIK_Inherit; else if (Generated && Ctor->isCopyConstructor()) IIK = IIK_Copy; else if (Generated && Ctor->isMoveConstructor()) IIK = IIK_Move; else IIK = IIK_Default; } bool isImplicitCopyOrMove() const { switch (IIK) { case IIK_Copy: case IIK_Move: return true; case IIK_Default: case IIK_Inherit: return false; } llvm_unreachable("Invalid ImplicitInitializerKind!"); } bool addFieldInitializer(CXXCtorInitializer *Init) { AllToInit.push_back(Init); // Check whether this initializer makes the field "used". if (Init->getInit()->HasSideEffects(S.Context)) S.UnusedPrivateFields.remove(Init->getAnyMember()); return false; } bool isInactiveUnionMember(FieldDecl *Field) { RecordDecl *Record = Field->getParent(); if (!Record->isUnion()) return false; if (FieldDecl *Active = ActiveUnionMember.lookup(Record->getCanonicalDecl())) return Active != Field->getCanonicalDecl(); // In an implicit copy or move constructor, ignore any in-class initializer. if (isImplicitCopyOrMove()) return true; // If there's no explicit initialization, the field is active only if it // has an in-class initializer... if (Field->hasInClassInitializer()) return false; // ... or it's an anonymous struct or union whose class has an in-class // initializer. if (!Field->isAnonymousStructOrUnion()) return true; CXXRecordDecl *FieldRD = Field->getType()->getAsCXXRecordDecl(); return !FieldRD->hasInClassInitializer(); } /// \brief Determine whether the given field is, or is within, a union member /// that is inactive (because there was an initializer given for a different /// member of the union, or because the union was not initialized at all). bool isWithinInactiveUnionMember(FieldDecl *Field, IndirectFieldDecl *Indirect) { if (!Indirect) return isInactiveUnionMember(Field); for (auto *C : Indirect->chain()) { FieldDecl *Field = dyn_cast(C); if (Field && isInactiveUnionMember(Field)) return true; } return false; } }; } /// \brief Determine whether the given type is an incomplete or zero-lenfgth /// array type. static bool isIncompleteOrZeroLengthArrayType(ASTContext &Context, QualType T) { if (T->isIncompleteArrayType()) return true; while (const ConstantArrayType *ArrayT = Context.getAsConstantArrayType(T)) { if (!ArrayT->getSize()) return true; T = ArrayT->getElementType(); } return false; } static bool CollectFieldInitializer(Sema &SemaRef, BaseAndFieldInfo &Info, FieldDecl *Field, IndirectFieldDecl *Indirect = nullptr) { if (Field->isInvalidDecl()) return false; // Overwhelmingly common case: we have a direct initializer for this field. 
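  // For example (illustrative only), in
  //
  //   struct S {
  //     int x = 1;      // in-class initializer
  //     S() : x(2) {}   // explicit mem-initializer wins; '= 1' is not used
  //   };
  //
  // the lookup below finds the explicit initializer for 'x' and returns it;
  // the in-class initializer is consulted further down only when no
  // mem-initializer names the field.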
if (CXXCtorInitializer *Init = Info.AllBaseFields.lookup(Field->getCanonicalDecl())) return Info.addFieldInitializer(Init); // C++11 [class.base.init]p8: // if the entity is a non-static data member that has a // brace-or-equal-initializer and either // -- the constructor's class is a union and no other variant member of that // union is designated by a mem-initializer-id or // -- the constructor's class is not a union, and, if the entity is a member // of an anonymous union, no other member of that union is designated by // a mem-initializer-id, // the entity is initialized as specified in [dcl.init]. // // We also apply the same rules to handle anonymous structs within anonymous // unions. if (Info.isWithinInactiveUnionMember(Field, Indirect)) return false; if (Field->hasInClassInitializer() && !Info.isImplicitCopyOrMove()) { ExprResult DIE = SemaRef.BuildCXXDefaultInitExpr(Info.Ctor->getLocation(), Field); if (DIE.isInvalid()) return true; CXXCtorInitializer *Init; if (Indirect) Init = new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context, Indirect, SourceLocation(), SourceLocation(), DIE.get(), SourceLocation()); else Init = new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context, Field, SourceLocation(), SourceLocation(), DIE.get(), SourceLocation()); return Info.addFieldInitializer(Init); } // Don't initialize incomplete or zero-length arrays. if (isIncompleteOrZeroLengthArrayType(SemaRef.Context, Field->getType())) return false; // Don't try to build an implicit initializer if there were semantic // errors in any of the initializers (and therefore we might be // missing some that the user actually wrote). if (Info.AnyErrorsInInits) return false; CXXCtorInitializer *Init = nullptr; if (BuildImplicitMemberInitializer(Info.S, Info.Ctor, Info.IIK, Field, Indirect, Init)) return true; if (!Init) return false; return Info.addFieldInitializer(Init); } bool Sema::SetDelegatingInitializer(CXXConstructorDecl *Constructor, CXXCtorInitializer *Initializer) { assert(Initializer->isDelegatingInitializer()); Constructor->setNumCtorInitializers(1); CXXCtorInitializer **initializer = new (Context) CXXCtorInitializer*[1]; memcpy(initializer, &Initializer, sizeof (CXXCtorInitializer*)); Constructor->setCtorInitializers(initializer); if (CXXDestructorDecl *Dtor = LookupDestructor(Constructor->getParent())) { MarkFunctionReferenced(Initializer->getSourceLocation(), Dtor); DiagnoseUseOfDecl(Dtor, Initializer->getSourceLocation()); } DelegatingCtorDecls.push_back(Constructor); DiagnoseUninitializedFields(*this, Constructor); return false; } bool Sema::SetCtorInitializers(CXXConstructorDecl *Constructor, bool AnyErrors, ArrayRef Initializers) { if (Constructor->isDependentContext()) { // Just store the initializers as written, they will be checked during // instantiation. if (!Initializers.empty()) { Constructor->setNumCtorInitializers(Initializers.size()); CXXCtorInitializer **baseOrMemberInitializers = new (Context) CXXCtorInitializer*[Initializers.size()]; memcpy(baseOrMemberInitializers, Initializers.data(), Initializers.size() * sizeof(CXXCtorInitializer*)); Constructor->setCtorInitializers(baseOrMemberInitializers); } // Let template instantiation know whether we had errors. if (AnyErrors) Constructor->setInvalidDecl(); return false; } BaseAndFieldInfo Info(*this, Constructor, AnyErrors); // We need to build the initializer AST according to order of construction // and not what user specified in the Initializers list. 
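  // For example (illustrative only), construction order is: virtual bases,
  // then direct non-virtual bases, then non-static data members in
  // declaration order. So for
  //
  //   struct D : virtual A, B {
  //     int x, y;
  //     D() : y(1), B(), x(2), A() {}
  //   };
  //
  // the initializers are emitted in the order A, B, x, y regardless of how
  // they were written.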
CXXRecordDecl *ClassDecl = Constructor->getParent()->getDefinition(); if (!ClassDecl) return true; bool HadError = false; for (unsigned i = 0; i < Initializers.size(); i++) { CXXCtorInitializer *Member = Initializers[i]; if (Member->isBaseInitializer()) Info.AllBaseFields[Member->getBaseClass()->getAs()] = Member; else { Info.AllBaseFields[Member->getAnyMember()->getCanonicalDecl()] = Member; if (IndirectFieldDecl *F = Member->getIndirectMember()) { for (auto *C : F->chain()) { FieldDecl *FD = dyn_cast(C); if (FD && FD->getParent()->isUnion()) Info.ActiveUnionMember.insert(std::make_pair( FD->getParent()->getCanonicalDecl(), FD->getCanonicalDecl())); } } else if (FieldDecl *FD = Member->getMember()) { if (FD->getParent()->isUnion()) Info.ActiveUnionMember.insert(std::make_pair( FD->getParent()->getCanonicalDecl(), FD->getCanonicalDecl())); } } } // Keep track of the direct virtual bases. llvm::SmallPtrSet DirectVBases; for (auto &I : ClassDecl->bases()) { if (I.isVirtual()) DirectVBases.insert(&I); } // Push virtual bases before others. for (auto &VBase : ClassDecl->vbases()) { if (CXXCtorInitializer *Value = Info.AllBaseFields.lookup(VBase.getType()->getAs())) { // [class.base.init]p7, per DR257: // A mem-initializer where the mem-initializer-id names a virtual base // class is ignored during execution of a constructor of any class that // is not the most derived class. if (ClassDecl->isAbstract()) { // FIXME: Provide a fixit to remove the base specifier. This requires // tracking the location of the associated comma for a base specifier. Diag(Value->getSourceLocation(), diag::warn_abstract_vbase_init_ignored) << VBase.getType() << ClassDecl; DiagnoseAbstractType(ClassDecl); } Info.AllToInit.push_back(Value); } else if (!AnyErrors && !ClassDecl->isAbstract()) { // [class.base.init]p8, per DR257: // If a given [...] base class is not named by a mem-initializer-id // [...] and the entity is not a virtual base class of an abstract // class, then [...] the entity is default-initialized. bool IsInheritedVirtualBase = !DirectVBases.count(&VBase); CXXCtorInitializer *CXXBaseInit; if (BuildImplicitBaseInitializer(*this, Constructor, Info.IIK, &VBase, IsInheritedVirtualBase, CXXBaseInit)) { HadError = true; continue; } Info.AllToInit.push_back(CXXBaseInit); } } // Non-virtual bases. for (auto &Base : ClassDecl->bases()) { // Virtuals are in the virtual base list and already constructed. if (Base.isVirtual()) continue; if (CXXCtorInitializer *Value = Info.AllBaseFields.lookup(Base.getType()->getAs())) { Info.AllToInit.push_back(Value); } else if (!AnyErrors) { CXXCtorInitializer *CXXBaseInit; if (BuildImplicitBaseInitializer(*this, Constructor, Info.IIK, &Base, /*IsInheritedVirtualBase=*/false, CXXBaseInit)) { HadError = true; continue; } Info.AllToInit.push_back(CXXBaseInit); } } // Fields. for (auto *Mem : ClassDecl->decls()) { if (auto *F = dyn_cast(Mem)) { // C++ [class.bit]p2: // A declaration for a bit-field that omits the identifier declares an // unnamed bit-field. Unnamed bit-fields are not members and cannot be // initialized. if (F->isUnnamedBitfield()) continue; // If we're not generating the implicit copy/move constructor, then we'll // handle anonymous struct/union fields based on their individual // indirect fields. if (F->isAnonymousStructOrUnion() && !Info.isImplicitCopyOrMove()) continue; if (CollectFieldInitializer(*this, Info, F)) HadError = true; continue; } // Beyond this point, we only consider default initialization. 
if (Info.isImplicitCopyOrMove()) continue; if (auto *F = dyn_cast(Mem)) { if (F->getType()->isIncompleteArrayType()) { assert(ClassDecl->hasFlexibleArrayMember() && "Incomplete array type is not valid"); continue; } // Initialize each field of an anonymous struct individually. if (CollectFieldInitializer(*this, Info, F->getAnonField(), F)) HadError = true; continue; } } unsigned NumInitializers = Info.AllToInit.size(); if (NumInitializers > 0) { Constructor->setNumCtorInitializers(NumInitializers); CXXCtorInitializer **baseOrMemberInitializers = new (Context) CXXCtorInitializer*[NumInitializers]; memcpy(baseOrMemberInitializers, Info.AllToInit.data(), NumInitializers * sizeof(CXXCtorInitializer*)); Constructor->setCtorInitializers(baseOrMemberInitializers); // Constructors implicitly reference the base and member // destructors. MarkBaseAndMemberDestructorsReferenced(Constructor->getLocation(), Constructor->getParent()); } return HadError; } static void PopulateKeysForFields(FieldDecl *Field, SmallVectorImpl &IdealInits) { if (const RecordType *RT = Field->getType()->getAs()) { const RecordDecl *RD = RT->getDecl(); if (RD->isAnonymousStructOrUnion()) { for (auto *Field : RD->fields()) PopulateKeysForFields(Field, IdealInits); return; } } IdealInits.push_back(Field->getCanonicalDecl()); } static const void *GetKeyForBase(ASTContext &Context, QualType BaseType) { return Context.getCanonicalType(BaseType).getTypePtr(); } static const void *GetKeyForMember(ASTContext &Context, CXXCtorInitializer *Member) { if (!Member->isAnyMemberInitializer()) return GetKeyForBase(Context, QualType(Member->getBaseClass(), 0)); return Member->getAnyMember()->getCanonicalDecl(); } static void DiagnoseBaseOrMemInitializerOrder( Sema &SemaRef, const CXXConstructorDecl *Constructor, ArrayRef Inits) { if (Constructor->getDeclContext()->isDependentContext()) return; // Don't check initializers order unless the warning is enabled at the // location of at least one initializer. bool ShouldCheckOrder = false; for (unsigned InitIndex = 0; InitIndex != Inits.size(); ++InitIndex) { CXXCtorInitializer *Init = Inits[InitIndex]; if (!SemaRef.Diags.isIgnored(diag::warn_initializer_out_of_order, Init->getSourceLocation())) { ShouldCheckOrder = true; break; } } if (!ShouldCheckOrder) return; // Build the list of bases and members in the order that they'll // actually be initialized. The explicit initializers should be in // this same order but may be missing things. SmallVector IdealInitKeys; const CXXRecordDecl *ClassDecl = Constructor->getParent(); // 1. Virtual bases. for (const auto &VBase : ClassDecl->vbases()) IdealInitKeys.push_back(GetKeyForBase(SemaRef.Context, VBase.getType())); // 2. Non-virtual bases. for (const auto &Base : ClassDecl->bases()) { if (Base.isVirtual()) continue; IdealInitKeys.push_back(GetKeyForBase(SemaRef.Context, Base.getType())); } // 3. Direct fields. for (auto *Field : ClassDecl->fields()) { if (Field->isUnnamedBitfield()) continue; PopulateKeysForFields(Field, IdealInitKeys); } unsigned NumIdealInits = IdealInitKeys.size(); unsigned IdealIndex = 0; CXXCtorInitializer *PrevInit = nullptr; for (unsigned InitIndex = 0; InitIndex != Inits.size(); ++InitIndex) { CXXCtorInitializer *Init = Inits[InitIndex]; const void *InitKey = GetKeyForMember(SemaRef.Context, Init); // Scan forward to try to find this initializer in the idealized // initializers list. 
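    // For example (illustrative only), with declaration order 'a' then 'b',
    //
    //   struct S {
    //     int a, b;
    //     S() : b(1), a(2) {}   // roughly: "field 'b' will be initialized
    //                           //  after field 'a'"
    //   };
    //
    // the written order does not match the idealized order, so the scan
    // below fails to find 'a' after 'b' and the warning fires.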
for (; IdealIndex != NumIdealInits; ++IdealIndex) if (InitKey == IdealInitKeys[IdealIndex]) break; // If we didn't find this initializer, it must be because we // scanned past it on a previous iteration. That can only // happen if we're out of order; emit a warning. if (IdealIndex == NumIdealInits && PrevInit) { Sema::SemaDiagnosticBuilder D = SemaRef.Diag(PrevInit->getSourceLocation(), diag::warn_initializer_out_of_order); if (PrevInit->isAnyMemberInitializer()) D << 0 << PrevInit->getAnyMember()->getDeclName(); else D << 1 << PrevInit->getTypeSourceInfo()->getType(); if (Init->isAnyMemberInitializer()) D << 0 << Init->getAnyMember()->getDeclName(); else D << 1 << Init->getTypeSourceInfo()->getType(); // Move back to the initializer's location in the ideal list. for (IdealIndex = 0; IdealIndex != NumIdealInits; ++IdealIndex) if (InitKey == IdealInitKeys[IdealIndex]) break; assert(IdealIndex < NumIdealInits && "initializer not found in initializer list"); } PrevInit = Init; } } namespace { bool CheckRedundantInit(Sema &S, CXXCtorInitializer *Init, CXXCtorInitializer *&PrevInit) { if (!PrevInit) { PrevInit = Init; return false; } if (FieldDecl *Field = Init->getAnyMember()) S.Diag(Init->getSourceLocation(), diag::err_multiple_mem_initialization) << Field->getDeclName() << Init->getSourceRange(); else { const Type *BaseClass = Init->getBaseClass(); assert(BaseClass && "neither field nor base"); S.Diag(Init->getSourceLocation(), diag::err_multiple_base_initialization) << QualType(BaseClass, 0) << Init->getSourceRange(); } S.Diag(PrevInit->getSourceLocation(), diag::note_previous_initializer) << 0 << PrevInit->getSourceRange(); return true; } typedef std::pair UnionEntry; typedef llvm::DenseMap RedundantUnionMap; bool CheckRedundantUnionInit(Sema &S, CXXCtorInitializer *Init, RedundantUnionMap &Unions) { FieldDecl *Field = Init->getAnyMember(); RecordDecl *Parent = Field->getParent(); NamedDecl *Child = Field; while (Parent->isAnonymousStructOrUnion() || Parent->isUnion()) { if (Parent->isUnion()) { UnionEntry &En = Unions[Parent]; if (En.first && En.first != Child) { S.Diag(Init->getSourceLocation(), diag::err_multiple_mem_union_initialization) << Field->getDeclName() << Init->getSourceRange(); S.Diag(En.second->getSourceLocation(), diag::note_previous_initializer) << 0 << En.second->getSourceRange(); return true; } if (!En.first) { En.first = Child; En.second = Init; } if (!Parent->isAnonymousStructOrUnion()) return false; } Child = Parent; Parent = cast(Parent->getDeclContext()); } return false; } } /// ActOnMemInitializers - Handle the member initializers for a constructor. void Sema::ActOnMemInitializers(Decl *ConstructorDecl, SourceLocation ColonLoc, ArrayRef MemInits, bool AnyErrors) { if (!ConstructorDecl) return; AdjustDeclIfTemplate(ConstructorDecl); CXXConstructorDecl *Constructor = dyn_cast(ConstructorDecl); if (!Constructor) { Diag(ColonLoc, diag::err_only_constructors_take_base_inits); return; } // Mapping for the duplicate initializers check. // For member initializers, this is keyed with a FieldDecl*. // For base initializers, this is keyed with a Type*. llvm::DenseMap Members; // Mapping for the inconsistent anonymous-union initializers check. RedundantUnionMap MemberUnions; bool HadError = false; for (unsigned i = 0; i < MemInits.size(); i++) { CXXCtorInitializer *Init = MemInits[i]; // Set the source order index. 
Init->setSourceOrder(i); if (Init->isAnyMemberInitializer()) { const void *Key = GetKeyForMember(Context, Init); if (CheckRedundantInit(*this, Init, Members[Key]) || CheckRedundantUnionInit(*this, Init, MemberUnions)) HadError = true; } else if (Init->isBaseInitializer()) { const void *Key = GetKeyForMember(Context, Init); if (CheckRedundantInit(*this, Init, Members[Key])) HadError = true; } else { assert(Init->isDelegatingInitializer()); // This must be the only initializer if (MemInits.size() != 1) { Diag(Init->getSourceLocation(), diag::err_delegating_initializer_alone) << Init->getSourceRange() << MemInits[i ? 0 : 1]->getSourceRange(); // We will treat this as being the only initializer. } SetDelegatingInitializer(Constructor, MemInits[i]); // Return immediately as the initializer is set. return; } } if (HadError) return; DiagnoseBaseOrMemInitializerOrder(*this, Constructor, MemInits); SetCtorInitializers(Constructor, AnyErrors, MemInits); DiagnoseUninitializedFields(*this, Constructor); } void Sema::MarkBaseAndMemberDestructorsReferenced(SourceLocation Location, CXXRecordDecl *ClassDecl) { // Ignore dependent contexts. Also ignore unions, since their members never // have destructors implicitly called. if (ClassDecl->isDependentContext() || ClassDecl->isUnion()) return; // FIXME: all the access-control diagnostics are positioned on the // field/base declaration. That's probably good; that said, the // user might reasonably want to know why the destructor is being // emitted, and we currently don't say. // Non-static data members. for (auto *Field : ClassDecl->fields()) { if (Field->isInvalidDecl()) continue; // Don't destroy incomplete or zero-length arrays. if (isIncompleteOrZeroLengthArrayType(Context, Field->getType())) continue; QualType FieldType = Context.getBaseElementType(Field->getType()); const RecordType* RT = FieldType->getAs(); if (!RT) continue; CXXRecordDecl *FieldClassDecl = cast(RT->getDecl()); if (FieldClassDecl->isInvalidDecl()) continue; if (FieldClassDecl->hasIrrelevantDestructor()) continue; // The destructor for an implicit anonymous union member is never invoked. if (FieldClassDecl->isUnion() && FieldClassDecl->isAnonymousStructOrUnion()) continue; CXXDestructorDecl *Dtor = LookupDestructor(FieldClassDecl); assert(Dtor && "No dtor found for FieldClassDecl!"); CheckDestructorAccess(Field->getLocation(), Dtor, PDiag(diag::err_access_dtor_field) << Field->getDeclName() << FieldType); MarkFunctionReferenced(Location, Dtor); DiagnoseUseOfDecl(Dtor, Location); } // We only potentially invoke the destructors of potentially constructed // subobjects. bool VisitVirtualBases = !ClassDecl->isAbstract(); llvm::SmallPtrSet DirectVirtualBases; // Bases. for (const auto &Base : ClassDecl->bases()) { // Bases are always records in a well-formed non-dependent class. const RecordType *RT = Base.getType()->getAs(); // Remember direct virtual bases. if (Base.isVirtual()) { if (!VisitVirtualBases) continue; DirectVirtualBases.insert(RT); } CXXRecordDecl *BaseClassDecl = cast(RT->getDecl()); // If our base class is invalid, we probably can't get its dtor anyway. 
if (BaseClassDecl->isInvalidDecl()) continue; if (BaseClassDecl->hasIrrelevantDestructor()) continue; CXXDestructorDecl *Dtor = LookupDestructor(BaseClassDecl); assert(Dtor && "No dtor found for BaseClassDecl!"); // FIXME: caret should be on the start of the class name CheckDestructorAccess(Base.getLocStart(), Dtor, PDiag(diag::err_access_dtor_base) << Base.getType() << Base.getSourceRange(), Context.getTypeDeclType(ClassDecl)); MarkFunctionReferenced(Location, Dtor); DiagnoseUseOfDecl(Dtor, Location); } if (!VisitVirtualBases) return; // Virtual bases. for (const auto &VBase : ClassDecl->vbases()) { // Bases are always records in a well-formed non-dependent class. const RecordType *RT = VBase.getType()->castAs(); // Ignore direct virtual bases. if (DirectVirtualBases.count(RT)) continue; CXXRecordDecl *BaseClassDecl = cast(RT->getDecl()); // If our base class is invalid, we probably can't get its dtor anyway. if (BaseClassDecl->isInvalidDecl()) continue; if (BaseClassDecl->hasIrrelevantDestructor()) continue; CXXDestructorDecl *Dtor = LookupDestructor(BaseClassDecl); assert(Dtor && "No dtor found for BaseClassDecl!"); if (CheckDestructorAccess( ClassDecl->getLocation(), Dtor, PDiag(diag::err_access_dtor_vbase) << Context.getTypeDeclType(ClassDecl) << VBase.getType(), Context.getTypeDeclType(ClassDecl)) == AR_accessible) { CheckDerivedToBaseConversion( Context.getTypeDeclType(ClassDecl), VBase.getType(), diag::err_access_dtor_vbase, 0, ClassDecl->getLocation(), SourceRange(), DeclarationName(), nullptr); } MarkFunctionReferenced(Location, Dtor); DiagnoseUseOfDecl(Dtor, Location); } } void Sema::ActOnDefaultCtorInitializers(Decl *CDtorDecl) { if (!CDtorDecl) return; if (CXXConstructorDecl *Constructor = dyn_cast(CDtorDecl)) { SetCtorInitializers(Constructor, /*AnyErrors=*/false); DiagnoseUninitializedFields(*this, Constructor); } } bool Sema::isAbstractType(SourceLocation Loc, QualType T) { if (!getLangOpts().CPlusPlus) return false; const auto *RD = Context.getBaseElementType(T)->getAsCXXRecordDecl(); if (!RD) return false; // FIXME: Per [temp.inst]p1, we are supposed to trigger instantiation of a // class template specialization here, but doing so breaks a lot of code. // We can't answer whether something is abstract until it has a // definition. If it's currently being defined, we'll walk back // over all the declarations when we have a full definition. const CXXRecordDecl *Def = RD->getDefinition(); if (!Def || Def->isBeingDefined()) return false; return RD->isAbstract(); } bool Sema::RequireNonAbstractType(SourceLocation Loc, QualType T, TypeDiagnoser &Diagnoser) { if (!isAbstractType(Loc, T)) return false; T = Context.getBaseElementType(T); Diagnoser.diagnose(*this, Loc, T); DiagnoseAbstractType(T->getAsCXXRecordDecl()); return true; } void Sema::DiagnoseAbstractType(const CXXRecordDecl *RD) { // Check if we've already emitted the list of pure virtual functions // for this class. if (PureVirtualClassDiagSet && PureVirtualClassDiagSet->count(RD)) return; // If the diagnostic is suppressed, don't emit the notes. We're only // going to emit them once, so try to attach them to a diagnostic we're // actually going to show. if (Diags.isLastDiagnosticIgnored()) return; CXXFinalOverriderMap FinalOverriders; RD->getFinalOverriders(FinalOverriders); // Keep a set of seen pure methods so we won't diagnose the same method // more than once. 
llvm::SmallPtrSet SeenPureMethods; for (CXXFinalOverriderMap::iterator M = FinalOverriders.begin(), MEnd = FinalOverriders.end(); M != MEnd; ++M) { for (OverridingMethods::iterator SO = M->second.begin(), SOEnd = M->second.end(); SO != SOEnd; ++SO) { // C++ [class.abstract]p4: // A class is abstract if it contains or inherits at least one // pure virtual function for which the final overrider is pure // virtual. // if (SO->second.size() != 1) continue; if (!SO->second.front().Method->isPure()) continue; if (!SeenPureMethods.insert(SO->second.front().Method).second) continue; Diag(SO->second.front().Method->getLocation(), diag::note_pure_virtual_function) << SO->second.front().Method->getDeclName() << RD->getDeclName(); } } if (!PureVirtualClassDiagSet) PureVirtualClassDiagSet.reset(new RecordDeclSetTy); PureVirtualClassDiagSet->insert(RD); } namespace { struct AbstractUsageInfo { Sema &S; CXXRecordDecl *Record; CanQualType AbstractType; bool Invalid; AbstractUsageInfo(Sema &S, CXXRecordDecl *Record) : S(S), Record(Record), AbstractType(S.Context.getCanonicalType( S.Context.getTypeDeclType(Record))), Invalid(false) {} void DiagnoseAbstractType() { if (Invalid) return; S.DiagnoseAbstractType(Record); Invalid = true; } void CheckType(const NamedDecl *D, TypeLoc TL, Sema::AbstractDiagSelID Sel); }; struct CheckAbstractUsage { AbstractUsageInfo &Info; const NamedDecl *Ctx; CheckAbstractUsage(AbstractUsageInfo &Info, const NamedDecl *Ctx) : Info(Info), Ctx(Ctx) {} void Visit(TypeLoc TL, Sema::AbstractDiagSelID Sel) { switch (TL.getTypeLocClass()) { #define ABSTRACT_TYPELOC(CLASS, PARENT) #define TYPELOC(CLASS, PARENT) \ case TypeLoc::CLASS: Check(TL.castAs(), Sel); break; #include "clang/AST/TypeLocNodes.def" } } void Check(FunctionProtoTypeLoc TL, Sema::AbstractDiagSelID Sel) { Visit(TL.getReturnLoc(), Sema::AbstractReturnType); for (unsigned I = 0, E = TL.getNumParams(); I != E; ++I) { if (!TL.getParam(I)) continue; TypeSourceInfo *TSI = TL.getParam(I)->getTypeSourceInfo(); if (TSI) Visit(TSI->getTypeLoc(), Sema::AbstractParamType); } } void Check(ArrayTypeLoc TL, Sema::AbstractDiagSelID Sel) { Visit(TL.getElementLoc(), Sema::AbstractArrayType); } void Check(TemplateSpecializationTypeLoc TL, Sema::AbstractDiagSelID Sel) { // Visit the type parameters from a permissive context. for (unsigned I = 0, E = TL.getNumArgs(); I != E; ++I) { TemplateArgumentLoc TAL = TL.getArgLoc(I); if (TAL.getArgument().getKind() == TemplateArgument::Type) if (TypeSourceInfo *TSI = TAL.getTypeSourceInfo()) Visit(TSI->getTypeLoc(), Sema::AbstractNone); // TODO: other template argument types? } } // Visit pointee types from a permissive context. #define CheckPolymorphic(Type) \ void Check(Type TL, Sema::AbstractDiagSelID Sel) { \ Visit(TL.getNextTypeLoc(), Sema::AbstractNone); \ } CheckPolymorphic(PointerTypeLoc) CheckPolymorphic(ReferenceTypeLoc) CheckPolymorphic(MemberPointerTypeLoc) CheckPolymorphic(BlockPointerTypeLoc) CheckPolymorphic(AtomicTypeLoc) /// Handle all the types we haven't given a more specific /// implementation for above. void Check(TypeLoc TL, Sema::AbstractDiagSelID Sel) { // Every other kind of type that we haven't called out already // that has an inner type is either (1) sugar or (2) contains that // inner type in some way as a subobject. if (TypeLoc Next = TL.getNextTypeLoc()) return Visit(Next, Sel); // If there's no inner type and we're in a permissive context, // don't diagnose. if (Sel == Sema::AbstractNone) return; // Check whether the type matches the abstract type. 
QualType T = TL.getType(); if (T->isArrayType()) { Sel = Sema::AbstractArrayType; T = Info.S.Context.getBaseElementType(T); } CanQualType CT = T->getCanonicalTypeUnqualified().getUnqualifiedType(); if (CT != Info.AbstractType) return; // It matched; do some magic. if (Sel == Sema::AbstractArrayType) { Info.S.Diag(Ctx->getLocation(), diag::err_array_of_abstract_type) << T << TL.getSourceRange(); } else { Info.S.Diag(Ctx->getLocation(), diag::err_abstract_type_in_decl) << Sel << T << TL.getSourceRange(); } Info.DiagnoseAbstractType(); } }; void AbstractUsageInfo::CheckType(const NamedDecl *D, TypeLoc TL, Sema::AbstractDiagSelID Sel) { CheckAbstractUsage(*this, D).Visit(TL, Sel); } } /// Check for invalid uses of an abstract type in a method declaration. static void CheckAbstractClassUsage(AbstractUsageInfo &Info, CXXMethodDecl *MD) { // No need to do the check on definitions, which require that // the return/param types be complete. if (MD->doesThisDeclarationHaveABody()) return; // For safety's sake, just ignore it if we don't have type source // information. This should never happen for non-implicit methods, // but... if (TypeSourceInfo *TSI = MD->getTypeSourceInfo()) Info.CheckType(MD, TSI->getTypeLoc(), Sema::AbstractNone); } /// Check for invalid uses of an abstract type within a class definition. static void CheckAbstractClassUsage(AbstractUsageInfo &Info, CXXRecordDecl *RD) { for (auto *D : RD->decls()) { if (D->isImplicit()) continue; // Methods and method templates. if (isa(D)) { CheckAbstractClassUsage(Info, cast(D)); } else if (isa(D)) { FunctionDecl *FD = cast(D)->getTemplatedDecl(); CheckAbstractClassUsage(Info, cast(FD)); // Fields and static variables. } else if (isa(D)) { FieldDecl *FD = cast(D); if (TypeSourceInfo *TSI = FD->getTypeSourceInfo()) Info.CheckType(FD, TSI->getTypeLoc(), Sema::AbstractFieldType); } else if (isa(D)) { VarDecl *VD = cast(D); if (TypeSourceInfo *TSI = VD->getTypeSourceInfo()) Info.CheckType(VD, TSI->getTypeLoc(), Sema::AbstractVariableType); // Nested classes and class templates. } else if (isa(D)) { CheckAbstractClassUsage(Info, cast(D)); } else if (isa(D)) { CheckAbstractClassUsage(Info, cast(D)->getTemplatedDecl()); } } } static void ReferenceDllExportedMethods(Sema &S, CXXRecordDecl *Class) { Attr *ClassAttr = getDLLAttr(Class); if (!ClassAttr) return; assert(ClassAttr->getKind() == attr::DLLExport); TemplateSpecializationKind TSK = Class->getTemplateSpecializationKind(); if (TSK == TSK_ExplicitInstantiationDeclaration) // Don't go any further if this is just an explicit instantiation // declaration. return; for (Decl *Member : Class->decls()) { auto *MD = dyn_cast(Member); if (!MD) continue; if (Member->getAttr()) { if (MD->isUserProvided()) { // Instantiate non-default class member functions ... // .. except for certain kinds of template specializations. if (TSK == TSK_ImplicitInstantiation && !ClassAttr->isInherited()) continue; S.MarkFunctionReferenced(Class->getLocation(), MD); // The function will be passed to the consumer when its definition is // encountered. } else if (!MD->isTrivial() || MD->isExplicitlyDefaulted() || MD->isCopyAssignmentOperator() || MD->isMoveAssignmentOperator()) { // Synthesize and instantiate non-trivial implicit methods, explicitly // defaulted methods, and the copy and move assignment operators. The // latter are exported even if they are trivial, because the address of // an operator can be taken and should compare equal across libraries. 
DiagnosticErrorTrap Trap(S.Diags); S.MarkFunctionReferenced(Class->getLocation(), MD); if (Trap.hasErrorOccurred()) { S.Diag(ClassAttr->getLocation(), diag::note_due_to_dllexported_class) << Class->getName() << !S.getLangOpts().CPlusPlus11; break; } // There is no later point when we will see the definition of this // function, so pass it to the consumer now. S.Consumer.HandleTopLevelDecl(DeclGroupRef(MD)); } } } } static void checkForMultipleExportedDefaultConstructors(Sema &S, CXXRecordDecl *Class) { // Only the MS ABI has default constructor closures, so we don't need to do // this semantic checking anywhere else. if (!S.Context.getTargetInfo().getCXXABI().isMicrosoft()) return; CXXConstructorDecl *LastExportedDefaultCtor = nullptr; for (Decl *Member : Class->decls()) { // Look for exported default constructors. auto *CD = dyn_cast(Member); if (!CD || !CD->isDefaultConstructor()) continue; auto *Attr = CD->getAttr(); if (!Attr) continue; // If the class is non-dependent, mark the default arguments as ODR-used so // that we can properly codegen the constructor closure. if (!Class->isDependentContext()) { for (ParmVarDecl *PD : CD->parameters()) { (void)S.CheckCXXDefaultArgExpr(Attr->getLocation(), CD, PD); S.DiscardCleanupsInEvaluationContext(); } } if (LastExportedDefaultCtor) { S.Diag(LastExportedDefaultCtor->getLocation(), diag::err_attribute_dll_ambiguous_default_ctor) << Class; S.Diag(CD->getLocation(), diag::note_entity_declared_at) << CD->getDeclName(); return; } LastExportedDefaultCtor = CD; } } /// \brief Check class-level dllimport/dllexport attribute. void Sema::checkClassLevelDLLAttribute(CXXRecordDecl *Class) { Attr *ClassAttr = getDLLAttr(Class); // MSVC inherits DLL attributes to partial class template specializations. if (Context.getTargetInfo().getCXXABI().isMicrosoft() && !ClassAttr) { if (auto *Spec = dyn_cast(Class)) { if (Attr *TemplateAttr = getDLLAttr(Spec->getSpecializedTemplate()->getTemplatedDecl())) { auto *A = cast(TemplateAttr->clone(getASTContext())); A->setInherited(true); ClassAttr = A; } } } if (!ClassAttr) return; if (!Class->isExternallyVisible()) { Diag(Class->getLocation(), diag::err_attribute_dll_not_extern) << Class << ClassAttr; return; } if (Context.getTargetInfo().getCXXABI().isMicrosoft() && !ClassAttr->isInherited()) { // Diagnose dll attributes on members of class with dll attribute. for (Decl *Member : Class->decls()) { if (!isa(Member) && !isa(Member)) continue; InheritableAttr *MemberAttr = getDLLAttr(Member); if (!MemberAttr || MemberAttr->isInherited() || Member->isInvalidDecl()) continue; Diag(MemberAttr->getLocation(), diag::err_attribute_dll_member_of_dll_class) << MemberAttr << ClassAttr; Diag(ClassAttr->getLocation(), diag::note_previous_attribute); Member->setInvalidDecl(); } } if (Class->getDescribedClassTemplate()) // Don't inherit dll attribute until the template is instantiated. return; // The class is either imported or exported. const bool ClassExported = ClassAttr->getKind() == attr::DLLExport; TemplateSpecializationKind TSK = Class->getTemplateSpecializationKind(); // Ignore explicit dllexport on explicit class template instantiation declarations. if (ClassExported && !ClassAttr->isInherited() && TSK == TSK_ExplicitInstantiationDeclaration) { Class->dropAttr(); return; } // Force declaration of implicit members so they can inherit the attribute. ForceDeclarationOfImplicitMembers(Class); // FIXME: MSVC's docs say all bases must be exportable, but this doesn't // seem to be true in practice? 
for (Decl *Member : Class->decls()) { VarDecl *VD = dyn_cast(Member); CXXMethodDecl *MD = dyn_cast(Member); // Only methods and static fields inherit the attributes. if (!VD && !MD) continue; if (MD) { // Don't process deleted methods. if (MD->isDeleted()) continue; if (MD->isInlined()) { // MinGW does not import or export inline methods. if (!Context.getTargetInfo().getCXXABI().isMicrosoft() && !Context.getTargetInfo().getTriple().isWindowsItaniumEnvironment()) continue; // MSVC versions before 2015 don't export the move assignment operators // and move constructor, so don't attempt to import/export them if // we have a definition. auto *Ctor = dyn_cast(MD); if ((MD->isMoveAssignmentOperator() || (Ctor && Ctor->isMoveConstructor())) && !getLangOpts().isCompatibleWithMSVC(LangOptions::MSVC2015)) continue; // MSVC2015 doesn't export trivial defaulted x-tor but copy assign // operator is exported anyway. if (getLangOpts().isCompatibleWithMSVC(LangOptions::MSVC2015) && (Ctor || isa(MD)) && MD->isTrivial()) continue; } } if (!cast(Member)->isExternallyVisible()) continue; if (!getDLLAttr(Member)) { auto *NewAttr = cast(ClassAttr->clone(getASTContext())); NewAttr->setInherited(true); Member->addAttr(NewAttr); } } if (ClassExported) DelayedDllExportClasses.push_back(Class); } /// \brief Perform propagation of DLL attributes from a derived class to a /// templated base class for MS compatibility. void Sema::propagateDLLAttrToBaseClassTemplate( CXXRecordDecl *Class, Attr *ClassAttr, ClassTemplateSpecializationDecl *BaseTemplateSpec, SourceLocation BaseLoc) { if (getDLLAttr( BaseTemplateSpec->getSpecializedTemplate()->getTemplatedDecl())) { // If the base class template has a DLL attribute, don't try to change it. return; } auto TSK = BaseTemplateSpec->getSpecializationKind(); if (!getDLLAttr(BaseTemplateSpec) && (TSK == TSK_Undeclared || TSK == TSK_ExplicitInstantiationDeclaration || TSK == TSK_ImplicitInstantiation)) { // The template hasn't been instantiated yet (or it has, but only as an // explicit instantiation declaration or implicit instantiation, which means // we haven't codegenned any members yet), so propagate the attribute. auto *NewAttr = cast(ClassAttr->clone(getASTContext())); NewAttr->setInherited(true); BaseTemplateSpec->addAttr(NewAttr); // If the template is already instantiated, checkDLLAttributeRedeclaration() // needs to be run again to work see the new attribute. Otherwise this will // get run whenever the template is instantiated. if (TSK != TSK_Undeclared) checkClassLevelDLLAttribute(BaseTemplateSpec); return; } if (getDLLAttr(BaseTemplateSpec)) { // The template has already been specialized or instantiated with an // attribute, explicitly or through propagation. We should not try to change // it. return; } // The template was previously instantiated or explicitly specialized without // a dll attribute, It's too late for us to add an attribute, so warn that // this is unsupported. 
  Diag(BaseLoc, diag::warn_attribute_dll_instantiated_base_class)
      << BaseTemplateSpec->isExplicitSpecialization();
  Diag(ClassAttr->getLocation(), diag::note_attribute);
  if (BaseTemplateSpec->isExplicitSpecialization()) {
    Diag(BaseTemplateSpec->getLocation(),
         diag::note_template_class_explicit_specialization_was_here)
        << BaseTemplateSpec;
  } else {
    Diag(BaseTemplateSpec->getPointOfInstantiation(),
         diag::note_template_class_instantiation_was_here)
        << BaseTemplateSpec;
  }
}

static void DefineImplicitSpecialMember(Sema &S, CXXMethodDecl *MD,
                                        SourceLocation DefaultLoc) {
  switch (S.getSpecialMember(MD)) {
  case Sema::CXXDefaultConstructor:
    S.DefineImplicitDefaultConstructor(DefaultLoc,
                                       cast<CXXConstructorDecl>(MD));
    break;
  case Sema::CXXCopyConstructor:
    S.DefineImplicitCopyConstructor(DefaultLoc, cast<CXXConstructorDecl>(MD));
    break;
  case Sema::CXXCopyAssignment:
    S.DefineImplicitCopyAssignment(DefaultLoc, MD);
    break;
  case Sema::CXXDestructor:
    S.DefineImplicitDestructor(DefaultLoc, cast<CXXDestructorDecl>(MD));
    break;
  case Sema::CXXMoveConstructor:
    S.DefineImplicitMoveConstructor(DefaultLoc, cast<CXXConstructorDecl>(MD));
    break;
  case Sema::CXXMoveAssignment:
    S.DefineImplicitMoveAssignment(DefaultLoc, MD);
    break;
  case Sema::CXXInvalid:
    llvm_unreachable("Invalid special member.");
  }
}

+/// Determine whether a type is permitted to be passed or returned in
+/// registers, per C++ [class.temporary]p3.
+static bool computeCanPassInRegisters(Sema &S, CXXRecordDecl *D) {
+  if (D->isDependentType() || D->isInvalidDecl())
+    return false;
+
+  // Per C++ [class.temporary]p3, the relevant condition is:
+  //   each copy constructor, move constructor, and destructor of X is
+  //   either trivial or deleted, and X has at least one non-deleted copy
+  //   or move constructor
+  bool HasNonDeletedCopyOrMove = false;
+
+  if (D->needsImplicitCopyConstructor() &&
+      !D->defaultedCopyConstructorIsDeleted()) {
+    if (!D->hasTrivialCopyConstructor())
+      return false;
+    HasNonDeletedCopyOrMove = true;
+  }
+
+  if (S.getLangOpts().CPlusPlus11 && D->needsImplicitMoveConstructor() &&
+      !D->defaultedMoveConstructorIsDeleted()) {
+    if (!D->hasTrivialMoveConstructor())
+      return false;
+    HasNonDeletedCopyOrMove = true;
+  }
+
+  if (D->needsImplicitDestructor() && !D->defaultedDestructorIsDeleted() &&
+      !D->hasTrivialDestructor())
+    return false;
+
+  for (const CXXMethodDecl *MD : D->methods()) {
+    if (MD->isDeleted())
+      continue;
+
+    auto *CD = dyn_cast<CXXConstructorDecl>(MD);
+    if (CD && CD->isCopyOrMoveConstructor())
+      HasNonDeletedCopyOrMove = true;
+    else if (!isa<CXXDestructorDecl>(MD))
+      continue;
+
+    if (!MD->isTrivial())
+      return false;
+  }
+
+  return HasNonDeletedCopyOrMove;
+}
+
/// \brief Perform semantic checks on a class definition that has been
/// completing, introducing implicitly-declared members, checking for
/// abstract types, etc.
void Sema::CheckCompletedCXXClass(CXXRecordDecl *Record) {
  if (!Record)
    return;

  if (Record->isAbstract() && !Record->isInvalidDecl()) {
    AbstractUsageInfo Info(*this, Record);
    CheckAbstractClassUsage(Info, Record);
  }

  // If this is not an aggregate type and has no user-declared constructor,
  // complain about any non-static data members of reference or const scalar
  // type, since they will never get initializers.
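  // For instance (an illustrative sketch, not taken from the original
  // source), a non-aggregate such as
  //   struct S { virtual void f(); const int k; int &r; };
  // leaves 'k' and 'r' with no way to ever be initialized, so the loop below
  // warns once for the class and then notes each such member.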
if (!Record->isInvalidDecl() && !Record->isDependentType() && !Record->isAggregate() && !Record->hasUserDeclaredConstructor() && !Record->isLambda()) { bool Complained = false; for (const auto *F : Record->fields()) { if (F->hasInClassInitializer() || F->isUnnamedBitfield()) continue; if (F->getType()->isReferenceType() || (F->getType().isConstQualified() && F->getType()->isScalarType())) { if (!Complained) { Diag(Record->getLocation(), diag::warn_no_constructor_for_refconst) << Record->getTagKind() << Record; Complained = true; } Diag(F->getLocation(), diag::note_refconst_member_not_initialized) << F->getType()->isReferenceType() << F->getDeclName(); } } } if (Record->getIdentifier()) { // C++ [class.mem]p13: // If T is the name of a class, then each of the following shall have a // name different from T: // - every member of every anonymous union that is a member of class T. // // C++ [class.mem]p14: // In addition, if class T has a user-declared constructor (12.1), every // non-static data member of class T shall have a name different from T. DeclContext::lookup_result R = Record->lookup(Record->getDeclName()); for (DeclContext::lookup_iterator I = R.begin(), E = R.end(); I != E; ++I) { NamedDecl *D = *I; if ((isa(D) && Record->hasUserDeclaredConstructor()) || isa(D)) { Diag(D->getLocation(), diag::err_member_name_of_class) << D->getDeclName(); break; } } } // Warn if the class has virtual methods but non-virtual public destructor. if (Record->isPolymorphic() && !Record->isDependentType()) { CXXDestructorDecl *dtor = Record->getDestructor(); if ((!dtor || (!dtor->isVirtual() && dtor->getAccess() == AS_public)) && !Record->hasAttr()) Diag(dtor ? dtor->getLocation() : Record->getLocation(), diag::warn_non_virtual_dtor) << Context.getRecordType(Record); } if (Record->isAbstract()) { if (FinalAttr *FA = Record->getAttr()) { Diag(Record->getLocation(), diag::warn_abstract_final_class) << FA->isSpelledAsSealed(); DiagnoseAbstractType(Record); } } bool HasMethodWithOverrideControl = false, HasOverridingMethodWithoutOverrideControl = false; if (!Record->isDependentType()) { for (auto *M : Record->methods()) { // See if a method overloads virtual methods in a base // class without overriding any. if (!M->isStatic()) DiagnoseHiddenVirtualMethods(M); if (M->hasAttr()) HasMethodWithOverrideControl = true; else if (M->size_overridden_methods() > 0) HasOverridingMethodWithoutOverrideControl = true; // Check whether the explicitly-defaulted special members are valid. if (!M->isInvalidDecl() && M->isExplicitlyDefaulted()) CheckExplicitlyDefaultedSpecialMember(M); // For an explicitly defaulted or deleted special member, we defer // determining triviality until the class is complete. That time is now! CXXSpecialMember CSM = getSpecialMember(M); if (!M->isImplicit() && !M->isUserProvided()) { if (CSM != CXXInvalid) { M->setTrivial(SpecialMemberIsTrivial(M, CSM)); // Inform the class that we've finished declaring this member. Record->finishedDefaultedOrDeletedMember(M); } } if (!M->isInvalidDecl() && M->isExplicitlyDefaulted() && M->hasAttr()) { if (getLangOpts().isCompatibleWithMSVC(LangOptions::MSVC2015) && M->isTrivial() && (CSM == CXXDefaultConstructor || CSM == CXXCopyConstructor || CSM == CXXDestructor)) M->dropAttr(); if (M->hasAttr()) { DefineImplicitSpecialMember(*this, M, M->getLocation()); ActOnFinishInlineFunctionDef(M); } } } } if (HasMethodWithOverrideControl && HasOverridingMethodWithoutOverrideControl) { // At least one method has the 'override' control declared. 
    // Diagnose all other overridden methods which do not have 'override' specified on them.
    for (auto *M : Record->methods())
      DiagnoseAbsenceOfOverrideControl(M);
  }

  // ms_struct is a request to use the same ABI rules as MSVC. Check
  // whether this class uses any C++ features that are implemented
  // completely differently in MSVC, and if so, emit a diagnostic.
  // That diagnostic defaults to an error, but we allow projects to
  // map it down to a warning (or ignore it). It's a fairly common
  // practice among users of the ms_struct pragma to mass-annotate
  // headers, sweeping up a bunch of types that the project doesn't
  // really rely on MSVC-compatible layout for. We must therefore
  // support "ms_struct except for C++ stuff" as a secondary ABI.
  if (Record->isMsStruct(Context) &&
      (Record->isPolymorphic() || Record->getNumBases())) {
    Diag(Record->getLocation(), diag::warn_cxx_ms_struct);
  }

  checkClassLevelDLLAttribute(Record);
+
+  Record->setCanPassInRegisters(computeCanPassInRegisters(*this, Record));
}

/// Look up the special member function that would be called by a special
/// member function for a subobject of class type.
///
/// \param Class The class type of the subobject.
/// \param CSM The kind of special member function.
/// \param FieldQuals If the subobject is a field, its cv-qualifiers.
/// \param ConstRHS True if this is a copy operation with a const object
///        on its RHS, that is, if the argument to the outer special member
///        function is 'const' and this is not a field marked 'mutable'.
static Sema::SpecialMemberOverloadResult lookupCallFromSpecialMember(
    Sema &S, CXXRecordDecl *Class, Sema::CXXSpecialMember CSM,
    unsigned FieldQuals, bool ConstRHS) {
  unsigned LHSQuals = 0;
  if (CSM == Sema::CXXCopyAssignment || CSM == Sema::CXXMoveAssignment)
    LHSQuals = FieldQuals;

  unsigned RHSQuals = FieldQuals;
  if (CSM == Sema::CXXDefaultConstructor || CSM == Sema::CXXDestructor)
    RHSQuals = 0;
  else if (ConstRHS)
    RHSQuals |= Qualifiers::Const;

  return S.LookupSpecialMember(Class, CSM,
                               RHSQuals & Qualifiers::Const,
                               RHSQuals & Qualifiers::Volatile,
                               false,
                               LHSQuals & Qualifiers::Const,
                               LHSQuals & Qualifiers::Volatile);
}

class Sema::InheritedConstructorInfo {
  Sema &S;
  SourceLocation UseLoc;

  /// A mapping from the base classes through which the constructor was
  /// inherited to the using shadow declaration in that base class (or a null
  /// pointer if the constructor was declared in that base class).
  llvm::DenseMap<CXXRecordDecl *, ConstructorUsingShadowDecl *>
      InheritedFromBases;

public:
  InheritedConstructorInfo(Sema &S, SourceLocation UseLoc,
                           ConstructorUsingShadowDecl *Shadow)
      : S(S), UseLoc(UseLoc) {
    bool DiagnosedMultipleConstructedBases = false;
    CXXRecordDecl *ConstructedBase = nullptr;
    UsingDecl *ConstructedBaseUsing = nullptr;

    // Find the set of such base class subobjects and check that there's a
    // unique constructed subobject.
    for (auto *D : Shadow->redecls()) {
      auto *DShadow = cast<ConstructorUsingShadowDecl>(D);
      auto *DNominatedBase = DShadow->getNominatedBaseClass();
      auto *DConstructedBase = DShadow->getConstructedBaseClass();

      InheritedFromBases.insert(
          std::make_pair(DNominatedBase->getCanonicalDecl(),
                         DShadow->getNominatedBaseClassShadowDecl()));
      if (DShadow->constructsVirtualBase())
        InheritedFromBases.insert(
            std::make_pair(DConstructedBase->getCanonicalDecl(),
                           DShadow->getConstructedBaseClassShadowDecl()));
      else
        assert(DNominatedBase == DConstructedBase);

      // [class.inhctor.init]p2:
      //   If the constructor was inherited from multiple base class subobjects
      //   of type B, the program is ill-formed.
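      // An illustrative sketch (modeled on the standard's example, not taken
      // from the original source):
      //   struct A  { A(int); };
      //   struct B  : A { using A::A; };
      //   struct C1 : B { using B::B; };
      //   struct C2 : B { using B::B; };
      //   struct D1 : C1, C2 { using C1::C1; using C2::C2; };
      //   D1 d(0); // ill-formed: the inherited constructor would construct
      //            // two distinct B subobjects, the situation the
      //            // ConstructedBase bookkeeping below guards against.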
if (!ConstructedBase) { ConstructedBase = DConstructedBase; ConstructedBaseUsing = D->getUsingDecl(); } else if (ConstructedBase != DConstructedBase && !Shadow->isInvalidDecl()) { if (!DiagnosedMultipleConstructedBases) { S.Diag(UseLoc, diag::err_ambiguous_inherited_constructor) << Shadow->getTargetDecl(); S.Diag(ConstructedBaseUsing->getLocation(), diag::note_ambiguous_inherited_constructor_using) << ConstructedBase; DiagnosedMultipleConstructedBases = true; } S.Diag(D->getUsingDecl()->getLocation(), diag::note_ambiguous_inherited_constructor_using) << DConstructedBase; } } if (DiagnosedMultipleConstructedBases) Shadow->setInvalidDecl(); } /// Find the constructor to use for inherited construction of a base class, /// and whether that base class constructor inherits the constructor from a /// virtual base class (in which case it won't actually invoke it). std::pair findConstructorForBase(CXXRecordDecl *Base, CXXConstructorDecl *Ctor) const { auto It = InheritedFromBases.find(Base->getCanonicalDecl()); if (It == InheritedFromBases.end()) return std::make_pair(nullptr, false); // This is an intermediary class. if (It->second) return std::make_pair( S.findInheritingConstructor(UseLoc, Ctor, It->second), It->second->constructsVirtualBase()); // This is the base class from which the constructor was inherited. return std::make_pair(Ctor, false); } }; /// Is the special member function which would be selected to perform the /// specified operation on the specified class type a constexpr constructor? static bool specialMemberIsConstexpr(Sema &S, CXXRecordDecl *ClassDecl, Sema::CXXSpecialMember CSM, unsigned Quals, bool ConstRHS, CXXConstructorDecl *InheritedCtor = nullptr, Sema::InheritedConstructorInfo *Inherited = nullptr) { // If we're inheriting a constructor, see if we need to call it for this base // class. if (InheritedCtor) { assert(CSM == Sema::CXXDefaultConstructor); auto BaseCtor = Inherited->findConstructorForBase(ClassDecl, InheritedCtor).first; if (BaseCtor) return BaseCtor->isConstexpr(); } if (CSM == Sema::CXXDefaultConstructor) return ClassDecl->hasConstexprDefaultConstructor(); Sema::SpecialMemberOverloadResult SMOR = lookupCallFromSpecialMember(S, ClassDecl, CSM, Quals, ConstRHS); if (!SMOR.getMethod()) // A constructor we wouldn't select can't be "involved in initializing" // anything. return true; return SMOR.getMethod()->isConstexpr(); } /// Determine whether the specified special member function would be constexpr /// if it were implicitly defined. static bool defaultedSpecialMemberIsConstexpr( Sema &S, CXXRecordDecl *ClassDecl, Sema::CXXSpecialMember CSM, bool ConstArg, CXXConstructorDecl *InheritedCtor = nullptr, Sema::InheritedConstructorInfo *Inherited = nullptr) { if (!S.getLangOpts().CPlusPlus11) return false; // C++11 [dcl.constexpr]p4: // In the definition of a constexpr constructor [...] bool Ctor = true; switch (CSM) { case Sema::CXXDefaultConstructor: if (Inherited) break; // Since default constructor lookup is essentially trivial (and cannot // involve, for instance, template instantiation), we compute whether a // defaulted default constructor is constexpr directly within CXXRecordDecl. // // This is important for performance; we need to know whether the default // constructor is constexpr to determine whether the type is a literal type. return ClassDecl->defaultedDefaultConstructorIsConstexpr(); case Sema::CXXCopyConstructor: case Sema::CXXMoveConstructor: // For copy or move constructors, we need to perform overload resolution. 
break; case Sema::CXXCopyAssignment: case Sema::CXXMoveAssignment: if (!S.getLangOpts().CPlusPlus14) return false; // In C++1y, we need to perform overload resolution. Ctor = false; break; case Sema::CXXDestructor: case Sema::CXXInvalid: return false; } // -- if the class is a non-empty union, or for each non-empty anonymous // union member of a non-union class, exactly one non-static data member // shall be initialized; [DR1359] // // If we squint, this is guaranteed, since exactly one non-static data member // will be initialized (if the constructor isn't deleted), we just don't know // which one. if (Ctor && ClassDecl->isUnion()) return CSM == Sema::CXXDefaultConstructor ? ClassDecl->hasInClassInitializer() || !ClassDecl->hasVariantMembers() : true; // -- the class shall not have any virtual base classes; if (Ctor && ClassDecl->getNumVBases()) return false; // C++1y [class.copy]p26: // -- [the class] is a literal type, and if (!Ctor && !ClassDecl->isLiteral()) return false; // -- every constructor involved in initializing [...] base class // sub-objects shall be a constexpr constructor; // -- the assignment operator selected to copy/move each direct base // class is a constexpr function, and for (const auto &B : ClassDecl->bases()) { const RecordType *BaseType = B.getType()->getAs(); if (!BaseType) continue; CXXRecordDecl *BaseClassDecl = cast(BaseType->getDecl()); if (!specialMemberIsConstexpr(S, BaseClassDecl, CSM, 0, ConstArg, InheritedCtor, Inherited)) return false; } // -- every constructor involved in initializing non-static data members // [...] shall be a constexpr constructor; // -- every non-static data member and base class sub-object shall be // initialized // -- for each non-static data member of X that is of class type (or array // thereof), the assignment operator selected to copy/move that member is // a constexpr function for (const auto *F : ClassDecl->fields()) { if (F->isInvalidDecl()) continue; if (CSM == Sema::CXXDefaultConstructor && F->hasInClassInitializer()) continue; QualType BaseType = S.Context.getBaseElementType(F->getType()); if (const RecordType *RecordTy = BaseType->getAs()) { CXXRecordDecl *FieldRecDecl = cast(RecordTy->getDecl()); if (!specialMemberIsConstexpr(S, FieldRecDecl, CSM, BaseType.getCVRQualifiers(), ConstArg && !F->isMutable())) return false; } else if (CSM == Sema::CXXDefaultConstructor) { return false; } } // All OK, it's constexpr! return true; } static Sema::ImplicitExceptionSpecification ComputeDefaultedSpecialMemberExceptionSpec( Sema &S, SourceLocation Loc, CXXMethodDecl *MD, Sema::CXXSpecialMember CSM, Sema::InheritedConstructorInfo *ICI); static Sema::ImplicitExceptionSpecification computeImplicitExceptionSpec(Sema &S, SourceLocation Loc, CXXMethodDecl *MD) { auto CSM = S.getSpecialMember(MD); if (CSM != Sema::CXXInvalid) return ComputeDefaultedSpecialMemberExceptionSpec(S, Loc, MD, CSM, nullptr); auto *CD = cast(MD); assert(CD->getInheritedConstructor() && "only special members have implicit exception specs"); Sema::InheritedConstructorInfo ICI( S, Loc, CD->getInheritedConstructor().getShadowDecl()); return ComputeDefaultedSpecialMemberExceptionSpec( S, Loc, CD, Sema::CXXDefaultConstructor, &ICI); } static FunctionProtoType::ExtProtoInfo getImplicitMethodEPI(Sema &S, CXXMethodDecl *MD) { FunctionProtoType::ExtProtoInfo EPI; // Build an exception specification pointing back at this member. 
EPI.ExceptionSpec.Type = EST_Unevaluated; EPI.ExceptionSpec.SourceDecl = MD; // Set the calling convention to the default for C++ instance methods. EPI.ExtInfo = EPI.ExtInfo.withCallingConv( S.Context.getDefaultCallingConvention(/*IsVariadic=*/false, /*IsCXXMethod=*/true)); return EPI; } void Sema::EvaluateImplicitExceptionSpec(SourceLocation Loc, CXXMethodDecl *MD) { const FunctionProtoType *FPT = MD->getType()->castAs(); if (FPT->getExceptionSpecType() != EST_Unevaluated) return; // Evaluate the exception specification. auto IES = computeImplicitExceptionSpec(*this, Loc, MD); auto ESI = IES.getExceptionSpec(); // Update the type of the special member to use it. UpdateExceptionSpec(MD, ESI); // A user-provided destructor can be defined outside the class. When that // happens, be sure to update the exception specification on both // declarations. const FunctionProtoType *CanonicalFPT = MD->getCanonicalDecl()->getType()->castAs(); if (CanonicalFPT->getExceptionSpecType() == EST_Unevaluated) UpdateExceptionSpec(MD->getCanonicalDecl(), ESI); } void Sema::CheckExplicitlyDefaultedSpecialMember(CXXMethodDecl *MD) { CXXRecordDecl *RD = MD->getParent(); CXXSpecialMember CSM = getSpecialMember(MD); assert(MD->isExplicitlyDefaulted() && CSM != CXXInvalid && "not an explicitly-defaulted special member"); // Whether this was the first-declared instance of the constructor. // This affects whether we implicitly add an exception spec and constexpr. bool First = MD == MD->getCanonicalDecl(); bool HadError = false; // C++11 [dcl.fct.def.default]p1: // A function that is explicitly defaulted shall // -- be a special member function (checked elsewhere), // -- have the same type (except for ref-qualifiers, and except that a // copy operation can take a non-const reference) as an implicit // declaration, and // -- not have default arguments. unsigned ExpectedParams = 1; if (CSM == CXXDefaultConstructor || CSM == CXXDestructor) ExpectedParams = 0; if (MD->getNumParams() != ExpectedParams) { // This also checks for default arguments: a copy or move constructor with a // default argument is classified as a default constructor, and assignment // operations and destructors can't have default arguments. Diag(MD->getLocation(), diag::err_defaulted_special_member_params) << CSM << MD->getSourceRange(); HadError = true; } else if (MD->isVariadic()) { Diag(MD->getLocation(), diag::err_defaulted_special_member_variadic) << CSM << MD->getSourceRange(); HadError = true; } const FunctionProtoType *Type = MD->getType()->getAs(); bool CanHaveConstParam = false; if (CSM == CXXCopyConstructor) CanHaveConstParam = RD->implicitCopyConstructorHasConstParam(); else if (CSM == CXXCopyAssignment) CanHaveConstParam = RD->implicitCopyAssignmentHasConstParam(); QualType ReturnType = Context.VoidTy; if (CSM == CXXCopyAssignment || CSM == CXXMoveAssignment) { // Check for return type matching. ReturnType = Type->getReturnType(); QualType ExpectedReturnType = Context.getLValueReferenceType(Context.getTypeDeclType(RD)); if (!Context.hasSameType(ReturnType, ExpectedReturnType)) { Diag(MD->getLocation(), diag::err_defaulted_special_member_return_type) << (CSM == CXXMoveAssignment) << ExpectedReturnType; HadError = true; } // A defaulted special member cannot have cv-qualifiers. if (Type->getTypeQuals()) { Diag(MD->getLocation(), diag::err_defaulted_special_member_quals) << (CSM == CXXMoveAssignment) << getLangOpts().CPlusPlus14; HadError = true; } } // Check for parameter type matching. QualType ArgType = ExpectedParams ? 
Type->getParamType(0) : QualType(); bool HasConstParam = false; if (ExpectedParams && ArgType->isReferenceType()) { // Argument must be reference to possibly-const T. QualType ReferentType = ArgType->getPointeeType(); HasConstParam = ReferentType.isConstQualified(); if (ReferentType.isVolatileQualified()) { Diag(MD->getLocation(), diag::err_defaulted_special_member_volatile_param) << CSM; HadError = true; } if (HasConstParam && !CanHaveConstParam) { if (CSM == CXXCopyConstructor || CSM == CXXCopyAssignment) { Diag(MD->getLocation(), diag::err_defaulted_special_member_copy_const_param) << (CSM == CXXCopyAssignment); // FIXME: Explain why this special member can't be const. } else { Diag(MD->getLocation(), diag::err_defaulted_special_member_move_const_param) << (CSM == CXXMoveAssignment); } HadError = true; } } else if (ExpectedParams) { // A copy assignment operator can take its argument by value, but a // defaulted one cannot. assert(CSM == CXXCopyAssignment && "unexpected non-ref argument"); Diag(MD->getLocation(), diag::err_defaulted_copy_assign_not_ref); HadError = true; } // C++11 [dcl.fct.def.default]p2: // An explicitly-defaulted function may be declared constexpr only if it // would have been implicitly declared as constexpr, // Do not apply this rule to members of class templates, since core issue 1358 // makes such functions always instantiate to constexpr functions. For // functions which cannot be constexpr (for non-constructors in C++11 and for // destructors in C++1y), this is checked elsewhere. bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, RD, CSM, HasConstParam); if ((getLangOpts().CPlusPlus14 ? !isa(MD) : isa(MD)) && MD->isConstexpr() && !Constexpr && MD->getTemplatedKind() == FunctionDecl::TK_NonTemplate) { Diag(MD->getLocStart(), diag::err_incorrect_defaulted_constexpr) << CSM; // FIXME: Explain why the special member can't be constexpr. HadError = true; } // and may have an explicit exception-specification only if it is compatible // with the exception-specification on the implicit declaration. if (Type->hasExceptionSpec()) { // Delay the check if this is the first declaration of the special member, // since we may not have parsed some necessary in-class initializers yet. if (First) { // If the exception specification needs to be instantiated, do so now, // before we clobber it with an EST_Unevaluated specification below. if (Type->getExceptionSpecType() == EST_Uninstantiated) { InstantiateExceptionSpec(MD->getLocStart(), MD); Type = MD->getType()->getAs(); } DelayedDefaultedMemberExceptionSpecs.push_back(std::make_pair(MD, Type)); } else CheckExplicitlyDefaultedMemberExceptionSpec(MD, Type); } // If a function is explicitly defaulted on its first declaration, if (First) { // -- it is implicitly considered to be constexpr if the implicit // definition would be, MD->setConstexpr(Constexpr); // -- it is implicitly considered to have the same exception-specification // as if it had been implicitly declared, FunctionProtoType::ExtProtoInfo EPI = Type->getExtProtoInfo(); EPI.ExceptionSpec.Type = EST_Unevaluated; EPI.ExceptionSpec.SourceDecl = MD; MD->setType(Context.getFunctionType(ReturnType, llvm::makeArrayRef(&ArgType, ExpectedParams), EPI)); } if (ShouldDeleteSpecialMember(MD, CSM)) { if (First) { SetDeclDeleted(MD, MD->getLocation()); } else { // C++11 [dcl.fct.def.default]p4: // [For a] user-provided explicitly-defaulted function [...] if such a // function is implicitly defined as deleted, the program is ill-formed. 
Diag(MD->getLocation(), diag::err_out_of_line_default_deletes) << CSM; ShouldDeleteSpecialMember(MD, CSM, nullptr, /*Diagnose*/true); HadError = true; } } if (HadError) MD->setInvalidDecl(); } /// Check whether the exception specification provided for an /// explicitly-defaulted special member matches the exception specification /// that would have been generated for an implicit special member, per /// C++11 [dcl.fct.def.default]p2. void Sema::CheckExplicitlyDefaultedMemberExceptionSpec( CXXMethodDecl *MD, const FunctionProtoType *SpecifiedType) { // If the exception specification was explicitly specified but hadn't been // parsed when the method was defaulted, grab it now. if (SpecifiedType->getExceptionSpecType() == EST_Unparsed) SpecifiedType = MD->getTypeSourceInfo()->getType()->castAs(); // Compute the implicit exception specification. CallingConv CC = Context.getDefaultCallingConvention(/*IsVariadic=*/false, /*IsCXXMethod=*/true); FunctionProtoType::ExtProtoInfo EPI(CC); auto IES = computeImplicitExceptionSpec(*this, MD->getLocation(), MD); EPI.ExceptionSpec = IES.getExceptionSpec(); const FunctionProtoType *ImplicitType = cast( Context.getFunctionType(Context.VoidTy, None, EPI)); // Ensure that it matches. CheckEquivalentExceptionSpec( PDiag(diag::err_incorrect_defaulted_exception_spec) << getSpecialMember(MD), PDiag(), ImplicitType, SourceLocation(), SpecifiedType, MD->getLocation()); } void Sema::CheckDelayedMemberExceptionSpecs() { decltype(DelayedExceptionSpecChecks) Checks; decltype(DelayedDefaultedMemberExceptionSpecs) Specs; std::swap(Checks, DelayedExceptionSpecChecks); std::swap(Specs, DelayedDefaultedMemberExceptionSpecs); // Perform any deferred checking of exception specifications for virtual // destructors. for (auto &Check : Checks) CheckOverridingFunctionExceptionSpec(Check.first, Check.second); // Check that any explicitly-defaulted methods have exception specifications // compatible with their implicit exception specifications. for (auto &Spec : Specs) CheckExplicitlyDefaultedMemberExceptionSpec(Spec.first, Spec.second); } namespace { /// CRTP base class for visiting operations performed by a special member /// function (or inherited constructor). template struct SpecialMemberVisitor { Sema &S; CXXMethodDecl *MD; Sema::CXXSpecialMember CSM; Sema::InheritedConstructorInfo *ICI; // Properties of the special member, computed for convenience. bool IsConstructor = false, IsAssignment = false, ConstArg = false; SpecialMemberVisitor(Sema &S, CXXMethodDecl *MD, Sema::CXXSpecialMember CSM, Sema::InheritedConstructorInfo *ICI) : S(S), MD(MD), CSM(CSM), ICI(ICI) { switch (CSM) { case Sema::CXXDefaultConstructor: case Sema::CXXCopyConstructor: case Sema::CXXMoveConstructor: IsConstructor = true; break; case Sema::CXXCopyAssignment: case Sema::CXXMoveAssignment: IsAssignment = true; break; case Sema::CXXDestructor: break; case Sema::CXXInvalid: llvm_unreachable("invalid special member kind"); } if (MD->getNumParams()) { if (const ReferenceType *RT = MD->getParamDecl(0)->getType()->getAs()) ConstArg = RT->getPointeeType().isConstQualified(); } } Derived &getDerived() { return static_cast(*this); } /// Is this a "move" special member? bool isMove() const { return CSM == Sema::CXXMoveConstructor || CSM == Sema::CXXMoveAssignment; } /// Look up the corresponding special member in the given class. 
Sema::SpecialMemberOverloadResult lookupIn(CXXRecordDecl *Class, unsigned Quals, bool IsMutable) { return lookupCallFromSpecialMember(S, Class, CSM, Quals, ConstArg && !IsMutable); } /// Look up the constructor for the specified base class to see if it's /// overridden due to this being an inherited constructor. Sema::SpecialMemberOverloadResult lookupInheritedCtor(CXXRecordDecl *Class) { if (!ICI) return {}; assert(CSM == Sema::CXXDefaultConstructor); auto *BaseCtor = cast(MD)->getInheritedConstructor().getConstructor(); if (auto *MD = ICI->findConstructorForBase(Class, BaseCtor).first) return MD; return {}; } /// A base or member subobject. typedef llvm::PointerUnion Subobject; /// Get the location to use for a subobject in diagnostics. static SourceLocation getSubobjectLoc(Subobject Subobj) { // FIXME: For an indirect virtual base, the direct base leading to // the indirect virtual base would be a more useful choice. if (auto *B = Subobj.dyn_cast()) return B->getBaseTypeLoc(); else return Subobj.get()->getLocation(); } enum BasesToVisit { /// Visit all non-virtual (direct) bases. VisitNonVirtualBases, /// Visit all direct bases, virtual or not. VisitDirectBases, /// Visit all non-virtual bases, and all virtual bases if the class /// is not abstract. VisitPotentiallyConstructedBases, /// Visit all direct or virtual bases. VisitAllBases }; // Visit the bases and members of the class. bool visit(BasesToVisit Bases) { CXXRecordDecl *RD = MD->getParent(); if (Bases == VisitPotentiallyConstructedBases) Bases = RD->isAbstract() ? VisitNonVirtualBases : VisitAllBases; for (auto &B : RD->bases()) if ((Bases == VisitDirectBases || !B.isVirtual()) && getDerived().visitBase(&B)) return true; if (Bases == VisitAllBases) for (auto &B : RD->vbases()) if (getDerived().visitBase(&B)) return true; for (auto *F : RD->fields()) if (!F->isInvalidDecl() && !F->isUnnamedBitfield() && getDerived().visitField(F)) return true; return false; } }; } namespace { struct SpecialMemberDeletionInfo : SpecialMemberVisitor { bool Diagnose; SourceLocation Loc; bool AllFieldsAreConst; SpecialMemberDeletionInfo(Sema &S, CXXMethodDecl *MD, Sema::CXXSpecialMember CSM, Sema::InheritedConstructorInfo *ICI, bool Diagnose) : SpecialMemberVisitor(S, MD, CSM, ICI), Diagnose(Diagnose), Loc(MD->getLocation()), AllFieldsAreConst(true) {} bool inUnion() const { return MD->getParent()->isUnion(); } Sema::CXXSpecialMember getEffectiveCSM() { return ICI ? Sema::CXXInvalid : CSM; } bool visitBase(CXXBaseSpecifier *Base) { return shouldDeleteForBase(Base); } bool visitField(FieldDecl *Field) { return shouldDeleteForField(Field); } bool shouldDeleteForBase(CXXBaseSpecifier *Base); bool shouldDeleteForField(FieldDecl *FD); bool shouldDeleteForAllConstMembers(); bool shouldDeleteForClassSubobject(CXXRecordDecl *Class, Subobject Subobj, unsigned Quals); bool shouldDeleteForSubobjectCall(Subobject Subobj, Sema::SpecialMemberOverloadResult SMOR, bool IsDtorCallInCtor); bool isAccessible(Subobject Subobj, CXXMethodDecl *D); }; } /// Is the given special member inaccessible when used on the given /// sub-object. bool SpecialMemberDeletionInfo::isAccessible(Subobject Subobj, CXXMethodDecl *target) { /// If we're operating on a base class, the object type is the /// type of this special member. 
QualType objectTy; AccessSpecifier access = target->getAccess(); if (CXXBaseSpecifier *base = Subobj.dyn_cast()) { objectTy = S.Context.getTypeDeclType(MD->getParent()); access = CXXRecordDecl::MergeAccess(base->getAccessSpecifier(), access); // If we're operating on a field, the object type is the type of the field. } else { objectTy = S.Context.getTypeDeclType(target->getParent()); } return S.isSpecialMemberAccessibleForDeletion(target, access, objectTy); } /// Check whether we should delete a special member due to the implicit /// definition containing a call to a special member of a subobject. bool SpecialMemberDeletionInfo::shouldDeleteForSubobjectCall( Subobject Subobj, Sema::SpecialMemberOverloadResult SMOR, bool IsDtorCallInCtor) { CXXMethodDecl *Decl = SMOR.getMethod(); FieldDecl *Field = Subobj.dyn_cast(); int DiagKind = -1; if (SMOR.getKind() == Sema::SpecialMemberOverloadResult::NoMemberOrDeleted) DiagKind = !Decl ? 0 : 1; else if (SMOR.getKind() == Sema::SpecialMemberOverloadResult::Ambiguous) DiagKind = 2; else if (!isAccessible(Subobj, Decl)) DiagKind = 3; else if (!IsDtorCallInCtor && Field && Field->getParent()->isUnion() && !Decl->isTrivial()) { // A member of a union must have a trivial corresponding special member. // As a weird special case, a destructor call from a union's constructor // must be accessible and non-deleted, but need not be trivial. Such a // destructor is never actually called, but is semantically checked as // if it were. DiagKind = 4; } if (DiagKind == -1) return false; if (Diagnose) { if (Field) { S.Diag(Field->getLocation(), diag::note_deleted_special_member_class_subobject) << getEffectiveCSM() << MD->getParent() << /*IsField*/true << Field << DiagKind << IsDtorCallInCtor; } else { CXXBaseSpecifier *Base = Subobj.get(); S.Diag(Base->getLocStart(), diag::note_deleted_special_member_class_subobject) << getEffectiveCSM() << MD->getParent() << /*IsField*/false << Base->getType() << DiagKind << IsDtorCallInCtor; } if (DiagKind == 1) S.NoteDeletedFunction(Decl); // FIXME: Explain inaccessibility if DiagKind == 3. } return true; } /// Check whether we should delete a special member function due to having a /// direct or virtual base class or non-static data member of class type M. bool SpecialMemberDeletionInfo::shouldDeleteForClassSubobject( CXXRecordDecl *Class, Subobject Subobj, unsigned Quals) { FieldDecl *Field = Subobj.dyn_cast(); bool IsMutable = Field && Field->isMutable(); // C++11 [class.ctor]p5: // -- any direct or virtual base class, or non-static data member with no // brace-or-equal-initializer, has class type M (or array thereof) and // either M has no default constructor or overload resolution as applied // to M's default constructor results in an ambiguity or in a function // that is deleted or inaccessible // C++11 [class.copy]p11, C++11 [class.copy]p23: // -- a direct or virtual base class B that cannot be copied/moved because // overload resolution, as applied to B's corresponding special member, // results in an ambiguity or a function that is deleted or inaccessible // from the defaulted special member // C++11 [class.dtor]p5: // -- any direct or virtual base class [...] 
has a type with a destructor // that is deleted or inaccessible if (!(CSM == Sema::CXXDefaultConstructor && Field && Field->hasInClassInitializer()) && shouldDeleteForSubobjectCall(Subobj, lookupIn(Class, Quals, IsMutable), false)) return true; // C++11 [class.ctor]p5, C++11 [class.copy]p11: // -- any direct or virtual base class or non-static data member has a // type with a destructor that is deleted or inaccessible if (IsConstructor) { Sema::SpecialMemberOverloadResult SMOR = S.LookupSpecialMember(Class, Sema::CXXDestructor, false, false, false, false, false); if (shouldDeleteForSubobjectCall(Subobj, SMOR, true)) return true; } return false; } /// Check whether we should delete a special member function due to the class /// having a particular direct or virtual base class. bool SpecialMemberDeletionInfo::shouldDeleteForBase(CXXBaseSpecifier *Base) { CXXRecordDecl *BaseClass = Base->getType()->getAsCXXRecordDecl(); // If program is correct, BaseClass cannot be null, but if it is, the error // must be reported elsewhere. if (!BaseClass) return false; // If we have an inheriting constructor, check whether we're calling an // inherited constructor instead of a default constructor. Sema::SpecialMemberOverloadResult SMOR = lookupInheritedCtor(BaseClass); if (auto *BaseCtor = SMOR.getMethod()) { // Note that we do not check access along this path; other than that, // this is the same as shouldDeleteForSubobjectCall(Base, BaseCtor, false); // FIXME: Check that the base has a usable destructor! Sink this into // shouldDeleteForClassSubobject. if (BaseCtor->isDeleted() && Diagnose) { S.Diag(Base->getLocStart(), diag::note_deleted_special_member_class_subobject) << getEffectiveCSM() << MD->getParent() << /*IsField*/false << Base->getType() << /*Deleted*/1 << /*IsDtorCallInCtor*/false; S.NoteDeletedFunction(BaseCtor); } return BaseCtor->isDeleted(); } return shouldDeleteForClassSubobject(BaseClass, Base, 0); } /// Check whether we should delete a special member function due to the class /// having a particular non-static data member. bool SpecialMemberDeletionInfo::shouldDeleteForField(FieldDecl *FD) { QualType FieldType = S.Context.getBaseElementType(FD->getType()); CXXRecordDecl *FieldRecord = FieldType->getAsCXXRecordDecl(); if (CSM == Sema::CXXDefaultConstructor) { // For a default constructor, all references must be initialized in-class // and, if a union, it must have a non-const member. if (FieldType->isReferenceType() && !FD->hasInClassInitializer()) { if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_default_ctor_uninit_field) << !!ICI << MD->getParent() << FD << FieldType << /*Reference*/0; return true; } // C++11 [class.ctor]p5: any non-variant non-static data member of // const-qualified type (or array thereof) with no // brace-or-equal-initializer does not have a user-provided default // constructor. if (!inUnion() && FieldType.isConstQualified() && !FD->hasInClassInitializer() && (!FieldRecord || !FieldRecord->hasUserProvidedDefaultConstructor())) { if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_default_ctor_uninit_field) << !!ICI << MD->getParent() << FD << FD->getType() << /*Const*/1; return true; } if (inUnion() && !FieldType.isConstQualified()) AllFieldsAreConst = false; } else if (CSM == Sema::CXXCopyConstructor) { // For a copy constructor, data members must not be of rvalue reference // type. 
if (FieldType->isRValueReferenceType()) { if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_copy_ctor_rvalue_reference) << MD->getParent() << FD << FieldType; return true; } } else if (IsAssignment) { // For an assignment operator, data members must not be of reference type. if (FieldType->isReferenceType()) { if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_assign_field) << isMove() << MD->getParent() << FD << FieldType << /*Reference*/0; return true; } if (!FieldRecord && FieldType.isConstQualified()) { // C++11 [class.copy]p23: // -- a non-static data member of const non-class type (or array thereof) if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_assign_field) << isMove() << MD->getParent() << FD << FD->getType() << /*Const*/1; return true; } } if (FieldRecord) { // Some additional restrictions exist on the variant members. if (!inUnion() && FieldRecord->isUnion() && FieldRecord->isAnonymousStructOrUnion()) { bool AllVariantFieldsAreConst = true; // FIXME: Handle anonymous unions declared within anonymous unions. for (auto *UI : FieldRecord->fields()) { QualType UnionFieldType = S.Context.getBaseElementType(UI->getType()); if (!UnionFieldType.isConstQualified()) AllVariantFieldsAreConst = false; CXXRecordDecl *UnionFieldRecord = UnionFieldType->getAsCXXRecordDecl(); if (UnionFieldRecord && shouldDeleteForClassSubobject(UnionFieldRecord, UI, UnionFieldType.getCVRQualifiers())) return true; } // At least one member in each anonymous union must be non-const if (CSM == Sema::CXXDefaultConstructor && AllVariantFieldsAreConst && !FieldRecord->field_empty()) { if (Diagnose) S.Diag(FieldRecord->getLocation(), diag::note_deleted_default_ctor_all_const) << !!ICI << MD->getParent() << /*anonymous union*/1; return true; } // Don't check the implicit member of the anonymous union type. // This is technically non-conformant, but sanity demands it. return false; } if (shouldDeleteForClassSubobject(FieldRecord, FD, FieldType.getCVRQualifiers())) return true; } return false; } /// C++11 [class.ctor] p5: /// A defaulted default constructor for a class X is defined as deleted if /// X is a union and all of its variant members are of const-qualified type. bool SpecialMemberDeletionInfo::shouldDeleteForAllConstMembers() { // This is a silly definition, because it gives an empty union a deleted // default constructor. Don't do that. if (CSM == Sema::CXXDefaultConstructor && inUnion() && AllFieldsAreConst) { bool AnyFields = false; for (auto *F : MD->getParent()->fields()) if ((AnyFields = !F->isUnnamedBitfield())) break; if (!AnyFields) return false; if (Diagnose) S.Diag(MD->getParent()->getLocation(), diag::note_deleted_default_ctor_all_const) << !!ICI << MD->getParent() << /*not anonymous union*/0; return true; } return false; } /// Determine whether a defaulted special member function should be defined as /// deleted, as specified in C++11 [class.ctor]p5, C++11 [class.copy]p11, /// C++11 [class.copy]p23, and C++11 [class.dtor]p5. bool Sema::ShouldDeleteSpecialMember(CXXMethodDecl *MD, CXXSpecialMember CSM, InheritedConstructorInfo *ICI, bool Diagnose) { if (MD->isInvalidDecl()) return false; CXXRecordDecl *RD = MD->getParent(); assert(!RD->isDependentType() && "do deletion after instantiation"); if (!LangOpts.CPlusPlus11 || RD->isInvalidDecl()) return false; // C++11 [expr.lambda.prim]p19: // The closure type associated with a lambda-expression has a // deleted (8.4.3) default constructor and a deleted copy // assignment operator. 
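// Editor's illustrative sketch (not part of this patch) of the rule quoted
// above, under C++11 through C++17 rules:
//
//   auto L = [] {};
//   decltype(L) M;  // error: the closure type's default constructor is deleted
//   L = L;          // error: the closure type's copy assignment operator is deleted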
if (RD->isLambda() && (CSM == CXXDefaultConstructor || CSM == CXXCopyAssignment)) { if (Diagnose) Diag(RD->getLocation(), diag::note_lambda_decl); return true; } // For an anonymous struct or union, the copy and assignment special members // will never be used, so skip the check. For an anonymous union declared at // namespace scope, the constructor and destructor are used. if (CSM != CXXDefaultConstructor && CSM != CXXDestructor && RD->isAnonymousStructOrUnion()) return false; // C++11 [class.copy]p7, p18: // If the class definition declares a move constructor or move assignment // operator, an implicitly declared copy constructor or copy assignment // operator is defined as deleted. if (MD->isImplicit() && (CSM == CXXCopyConstructor || CSM == CXXCopyAssignment)) { CXXMethodDecl *UserDeclaredMove = nullptr; // In Microsoft mode up to MSVC 2013, a user-declared move only causes the // deletion of the corresponding copy operation, not both copy operations. // MSVC 2015 has adopted the standards conforming behavior. bool DeletesOnlyMatchingCopy = getLangOpts().MSVCCompat && !getLangOpts().isCompatibleWithMSVC(LangOptions::MSVC2015); if (RD->hasUserDeclaredMoveConstructor() && (!DeletesOnlyMatchingCopy || CSM == CXXCopyConstructor)) { if (!Diagnose) return true; // Find any user-declared move constructor. for (auto *I : RD->ctors()) { if (I->isMoveConstructor()) { UserDeclaredMove = I; break; } } assert(UserDeclaredMove); } else if (RD->hasUserDeclaredMoveAssignment() && (!DeletesOnlyMatchingCopy || CSM == CXXCopyAssignment)) { if (!Diagnose) return true; // Find any user-declared move assignment operator. for (auto *I : RD->methods()) { if (I->isMoveAssignmentOperator()) { UserDeclaredMove = I; break; } } assert(UserDeclaredMove); } if (UserDeclaredMove) { Diag(UserDeclaredMove->getLocation(), diag::note_deleted_copy_user_declared_move) << (CSM == CXXCopyAssignment) << RD << UserDeclaredMove->isMoveAssignmentOperator(); return true; } } // Do access control from the special member function ContextRAII MethodContext(*this, MD); // C++11 [class.dtor]p5: // -- for a virtual destructor, lookup of the non-array deallocation function // results in an ambiguity or in a function that is deleted or inaccessible if (CSM == CXXDestructor && MD->isVirtual()) { FunctionDecl *OperatorDelete = nullptr; DeclarationName Name = Context.DeclarationNames.getCXXOperatorName(OO_Delete); if (FindDeallocationFunction(MD->getLocation(), MD->getParent(), Name, OperatorDelete, /*Diagnose*/false)) { if (Diagnose) Diag(RD->getLocation(), diag::note_deleted_dtor_no_operator_delete); return true; } } SpecialMemberDeletionInfo SMI(*this, MD, CSM, ICI, Diagnose); // Per DR1611, do not consider virtual bases of constructors of abstract // classes, since we are not going to construct them. // Per DR1658, do not consider virtual bases of destructors of abstract // classes either. // Per DR2180, for assignment operators we only assign (and thus only // consider) direct bases. if (SMI.visit(SMI.IsAssignment ? SMI.VisitDirectBases : SMI.VisitPotentiallyConstructedBases)) return true; if (SMI.shouldDeleteForAllConstMembers()) return true; if (getLangOpts().CUDA) { // We should delete the special member in CUDA mode if target inference // failed. return inferCUDATargetForImplicitSpecialMember(RD, CSM, MD, SMI.ConstArg, Diagnose); } return false; } /// Perform lookup for a special member of the specified kind, and determine /// whether it is trivial. If the triviality can be determined without the /// lookup, skip it. 
This is intended for use when determining whether a /// special member of a containing object is trivial, and thus does not ever /// perform overload resolution for default constructors. /// /// If \p Selected is not \c NULL, \c *Selected will be filled in with the /// member that was most likely to be intended to be trivial, if any. static bool findTrivialSpecialMember(Sema &S, CXXRecordDecl *RD, Sema::CXXSpecialMember CSM, unsigned Quals, bool ConstRHS, CXXMethodDecl **Selected) { if (Selected) *Selected = nullptr; switch (CSM) { case Sema::CXXInvalid: llvm_unreachable("not a special member"); case Sema::CXXDefaultConstructor: // C++11 [class.ctor]p5: // A default constructor is trivial if: // - all the [direct subobjects] have trivial default constructors // // Note, no overload resolution is performed in this case. if (RD->hasTrivialDefaultConstructor()) return true; if (Selected) { // If there's a default constructor which could have been trivial, dig it // out. Otherwise, if there's any user-provided default constructor, point // to that as an example of why there's not a trivial one. CXXConstructorDecl *DefCtor = nullptr; if (RD->needsImplicitDefaultConstructor()) S.DeclareImplicitDefaultConstructor(RD); for (auto *CI : RD->ctors()) { if (!CI->isDefaultConstructor()) continue; DefCtor = CI; if (!DefCtor->isUserProvided()) break; } *Selected = DefCtor; } return false; case Sema::CXXDestructor: // C++11 [class.dtor]p5: // A destructor is trivial if: // - all the direct [subobjects] have trivial destructors if (RD->hasTrivialDestructor()) return true; if (Selected) { if (RD->needsImplicitDestructor()) S.DeclareImplicitDestructor(RD); *Selected = RD->getDestructor(); } return false; case Sema::CXXCopyConstructor: // C++11 [class.copy]p12: // A copy constructor is trivial if: // - the constructor selected to copy each direct [subobject] is trivial if (RD->hasTrivialCopyConstructor()) { if (Quals == Qualifiers::Const) // We must either select the trivial copy constructor or reach an // ambiguity; no need to actually perform overload resolution. return true; } else if (!Selected) { return false; } // In C++98, we are not supposed to perform overload resolution here, but we // treat that as a language defect, as suggested on cxx-abi-dev, to treat // cases like B as having a non-trivial copy constructor: // struct A { template A(T&); }; // struct B { mutable A a; }; goto NeedOverloadResolution; case Sema::CXXCopyAssignment: // C++11 [class.copy]p25: // A copy assignment operator is trivial if: // - the assignment operator selected to copy each direct [subobject] is // trivial if (RD->hasTrivialCopyAssignment()) { if (Quals == Qualifiers::Const) return true; } else if (!Selected) { return false; } // In C++98, we are not supposed to perform overload resolution here, but we // treat that as a language defect. goto NeedOverloadResolution; case Sema::CXXMoveConstructor: case Sema::CXXMoveAssignment: NeedOverloadResolution: Sema::SpecialMemberOverloadResult SMOR = lookupCallFromSpecialMember(S, RD, CSM, Quals, ConstRHS); // The standard doesn't describe how to behave if the lookup is ambiguous. // We treat it as not making the member non-trivial, just like the standard // mandates for the default constructor. This should rarely matter, because // the member will also be deleted. 
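// Editor's illustrative sketch (not part of this patch); type names are
// hypothetical. An ambiguity in the subobject's special member lookup leaves
// the containing class's member deleted rather than non-trivial:
//
//   struct A {
//     A(const A &, int = 0);
//     A(const A &, double = 0.0);
//   };
//   struct B { A a; };
//   // Overload resolution for copying 'a' is ambiguous, so B's implicitly
//   // declared copy constructor is defined as deleted.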
if (SMOR.getKind() == Sema::SpecialMemberOverloadResult::Ambiguous) return true; if (!SMOR.getMethod()) { assert(SMOR.getKind() == Sema::SpecialMemberOverloadResult::NoMemberOrDeleted); return false; } // We deliberately don't check if we found a deleted special member. We're // not supposed to! if (Selected) *Selected = SMOR.getMethod(); return SMOR.getMethod()->isTrivial(); } llvm_unreachable("unknown special method kind"); } static CXXConstructorDecl *findUserDeclaredCtor(CXXRecordDecl *RD) { for (auto *CI : RD->ctors()) if (!CI->isImplicit()) return CI; // Look for constructor templates. typedef CXXRecordDecl::specific_decl_iterator tmpl_iter; for (tmpl_iter TI(RD->decls_begin()), TE(RD->decls_end()); TI != TE; ++TI) { if (CXXConstructorDecl *CD = dyn_cast(TI->getTemplatedDecl())) return CD; } return nullptr; } /// The kind of subobject we are checking for triviality. The values of this /// enumeration are used in diagnostics. enum TrivialSubobjectKind { /// The subobject is a base class. TSK_BaseClass, /// The subobject is a non-static data member. TSK_Field, /// The object is actually the complete object. TSK_CompleteObject }; /// Check whether the special member selected for a given type would be trivial. static bool checkTrivialSubobjectCall(Sema &S, SourceLocation SubobjLoc, QualType SubType, bool ConstRHS, Sema::CXXSpecialMember CSM, TrivialSubobjectKind Kind, bool Diagnose) { CXXRecordDecl *SubRD = SubType->getAsCXXRecordDecl(); if (!SubRD) return true; CXXMethodDecl *Selected; if (findTrivialSpecialMember(S, SubRD, CSM, SubType.getCVRQualifiers(), ConstRHS, Diagnose ? &Selected : nullptr)) return true; if (Diagnose) { if (ConstRHS) SubType.addConst(); if (!Selected && CSM == Sema::CXXDefaultConstructor) { S.Diag(SubobjLoc, diag::note_nontrivial_no_def_ctor) << Kind << SubType.getUnqualifiedType(); if (CXXConstructorDecl *CD = findUserDeclaredCtor(SubRD)) S.Diag(CD->getLocation(), diag::note_user_declared_ctor); } else if (!Selected) S.Diag(SubobjLoc, diag::note_nontrivial_no_copy) << Kind << SubType.getUnqualifiedType() << CSM << SubType; else if (Selected->isUserProvided()) { if (Kind == TSK_CompleteObject) S.Diag(Selected->getLocation(), diag::note_nontrivial_user_provided) << Kind << SubType.getUnqualifiedType() << CSM; else { S.Diag(SubobjLoc, diag::note_nontrivial_user_provided) << Kind << SubType.getUnqualifiedType() << CSM; S.Diag(Selected->getLocation(), diag::note_declared_at); } } else { if (Kind != TSK_CompleteObject) S.Diag(SubobjLoc, diag::note_nontrivial_subobject) << Kind << SubType.getUnqualifiedType() << CSM; // Explain why the defaulted or deleted special member isn't trivial. S.SpecialMemberIsTrivial(Selected, CSM, Diagnose); } } return false; } /// Check whether the members of a class type allow a special member to be /// trivial. static bool checkTrivialClassMembers(Sema &S, CXXRecordDecl *RD, Sema::CXXSpecialMember CSM, bool ConstArg, bool Diagnose) { for (const auto *FI : RD->fields()) { if (FI->isInvalidDecl() || FI->isUnnamedBitfield()) continue; QualType FieldType = S.Context.getBaseElementType(FI->getType()); // Pretend anonymous struct or union members are members of this class. if (FI->isAnonymousStructOrUnion()) { if (!checkTrivialClassMembers(S, FieldType->getAsCXXRecordDecl(), CSM, ConstArg, Diagnose)) return false; continue; } // C++11 [class.ctor]p5: // A default constructor is trivial if [...] 
// -- no non-static data member of its class has a // brace-or-equal-initializer if (CSM == Sema::CXXDefaultConstructor && FI->hasInClassInitializer()) { if (Diagnose) S.Diag(FI->getLocation(), diag::note_nontrivial_in_class_init) << FI; return false; } // Objective C ARC 4.3.5: // [...] nontrivally ownership-qualified types are [...] not trivially // default constructible, copy constructible, move constructible, copy // assignable, move assignable, or destructible [...] if (FieldType.hasNonTrivialObjCLifetime()) { if (Diagnose) S.Diag(FI->getLocation(), diag::note_nontrivial_objc_ownership) << RD << FieldType.getObjCLifetime(); return false; } bool ConstRHS = ConstArg && !FI->isMutable(); if (!checkTrivialSubobjectCall(S, FI->getLocation(), FieldType, ConstRHS, CSM, TSK_Field, Diagnose)) return false; } return true; } /// Diagnose why the specified class does not have a trivial special member of /// the given kind. void Sema::DiagnoseNontrivial(const CXXRecordDecl *RD, CXXSpecialMember CSM) { QualType Ty = Context.getRecordType(RD); bool ConstArg = (CSM == CXXCopyConstructor || CSM == CXXCopyAssignment); checkTrivialSubobjectCall(*this, RD->getLocation(), Ty, ConstArg, CSM, TSK_CompleteObject, /*Diagnose*/true); } /// Determine whether a defaulted or deleted special member function is trivial, /// as specified in C++11 [class.ctor]p5, C++11 [class.copy]p12, /// C++11 [class.copy]p25, and C++11 [class.dtor]p5. bool Sema::SpecialMemberIsTrivial(CXXMethodDecl *MD, CXXSpecialMember CSM, bool Diagnose) { assert(!MD->isUserProvided() && CSM != CXXInvalid && "not special enough"); CXXRecordDecl *RD = MD->getParent(); bool ConstArg = false; // C++11 [class.copy]p12, p25: [DR1593] // A [special member] is trivial if [...] its parameter-type-list is // equivalent to the parameter-type-list of an implicit declaration [...] switch (CSM) { case CXXDefaultConstructor: case CXXDestructor: // Trivial default constructors and destructors cannot have parameters. break; case CXXCopyConstructor: case CXXCopyAssignment: { // Trivial copy operations always have const, non-volatile parameter types. ConstArg = true; const ParmVarDecl *Param0 = MD->getParamDecl(0); const ReferenceType *RT = Param0->getType()->getAs(); if (!RT || RT->getPointeeType().getCVRQualifiers() != Qualifiers::Const) { if (Diagnose) Diag(Param0->getLocation(), diag::note_nontrivial_param_type) << Param0->getSourceRange() << Param0->getType() << Context.getLValueReferenceType( Context.getRecordType(RD).withConst()); return false; } break; } case CXXMoveConstructor: case CXXMoveAssignment: { // Trivial move operations always have non-cv-qualified parameters. 
const ParmVarDecl *Param0 = MD->getParamDecl(0); const RValueReferenceType *RT = Param0->getType()->getAs(); if (!RT || RT->getPointeeType().getCVRQualifiers()) { if (Diagnose) Diag(Param0->getLocation(), diag::note_nontrivial_param_type) << Param0->getSourceRange() << Param0->getType() << Context.getRValueReferenceType(Context.getRecordType(RD)); return false; } break; } case CXXInvalid: llvm_unreachable("not a special member"); } if (MD->getMinRequiredArguments() < MD->getNumParams()) { if (Diagnose) Diag(MD->getParamDecl(MD->getMinRequiredArguments())->getLocation(), diag::note_nontrivial_default_arg) << MD->getParamDecl(MD->getMinRequiredArguments())->getSourceRange(); return false; } if (MD->isVariadic()) { if (Diagnose) Diag(MD->getLocation(), diag::note_nontrivial_variadic); return false; } // C++11 [class.ctor]p5, C++11 [class.dtor]p5: // A copy/move [constructor or assignment operator] is trivial if // -- the [member] selected to copy/move each direct base class subobject // is trivial // // C++11 [class.copy]p12, C++11 [class.copy]p25: // A [default constructor or destructor] is trivial if // -- all the direct base classes have trivial [default constructors or // destructors] for (const auto &BI : RD->bases()) if (!checkTrivialSubobjectCall(*this, BI.getLocStart(), BI.getType(), ConstArg, CSM, TSK_BaseClass, Diagnose)) return false; // C++11 [class.ctor]p5, C++11 [class.dtor]p5: // A copy/move [constructor or assignment operator] for a class X is // trivial if // -- for each non-static data member of X that is of class type (or array // thereof), the constructor selected to copy/move that member is // trivial // // C++11 [class.copy]p12, C++11 [class.copy]p25: // A [default constructor or destructor] is trivial if // -- for all of the non-static data members of its class that are of class // type (or array thereof), each such class has a trivial [default // constructor or destructor] if (!checkTrivialClassMembers(*this, RD, CSM, ConstArg, Diagnose)) return false; // C++11 [class.dtor]p5: // A destructor is trivial if [...] // -- the destructor is not virtual if (CSM == CXXDestructor && MD->isVirtual()) { if (Diagnose) Diag(MD->getLocation(), diag::note_nontrivial_virtual_dtor) << RD; return false; } // C++11 [class.ctor]p5, C++11 [class.copy]p12, C++11 [class.copy]p25: // A [special member] for class X is trivial if [...] // -- class X has no virtual functions and no virtual base classes if (CSM != CXXDestructor && MD->getParent()->isDynamicClass()) { if (!Diagnose) return false; if (RD->getNumVBases()) { // Check for virtual bases. We already know that the corresponding // member in all bases is trivial, so vbases must all be direct. CXXBaseSpecifier &BS = *RD->vbases_begin(); assert(BS.isVirtual()); Diag(BS.getLocStart(), diag::note_nontrivial_has_virtual) << RD << 1; return false; } // Must have a virtual method. for (const auto *MI : RD->methods()) { if (MI->isVirtual()) { SourceLocation MLoc = MI->getLocStart(); Diag(MLoc, diag::note_nontrivial_has_virtual) << RD << 0; return false; } } llvm_unreachable("dynamic class with no vbases and no virtual functions"); } // Looks like it's trivial! 
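// Editor's illustrative sketch (not part of this patch) of the checks above;
// type names are hypothetical and <type_traits>/<string> are assumed:
//
//   struct Trivial    { int n; };             // all special members are trivial
//   struct HasVirtual { virtual void f(); };  // dynamic class: copy/move/destructor
//                                             // are not trivial
//   struct HasString  { std::string s; };     // member's special members are
//                                             // user-provided: not trivially copyable
//   static_assert(std::is_trivially_copyable<Trivial>::value, "");
//   static_assert(!std::is_trivially_copyable<HasVirtual>::value, "");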
return true; } namespace { struct FindHiddenVirtualMethod { Sema *S; CXXMethodDecl *Method; llvm::SmallPtrSet OverridenAndUsingBaseMethods; SmallVector OverloadedMethods; private: /// Check whether any of the most overridden methods from MD is in Methods. static bool CheckMostOverridenMethods( const CXXMethodDecl *MD, const llvm::SmallPtrSetImpl &Methods) { if (MD->size_overridden_methods() == 0) return Methods.count(MD->getCanonicalDecl()); for (CXXMethodDecl::method_iterator I = MD->begin_overridden_methods(), E = MD->end_overridden_methods(); I != E; ++I) if (CheckMostOverridenMethods(*I, Methods)) return true; return false; } public: /// Member lookup function that determines whether a given C++ /// method overloads virtual methods in a base class without overriding any, /// to be used with CXXRecordDecl::lookupInBases(). bool operator()(const CXXBaseSpecifier *Specifier, CXXBasePath &Path) { RecordDecl *BaseRecord = Specifier->getType()->getAs()->getDecl(); DeclarationName Name = Method->getDeclName(); assert(Name.getNameKind() == DeclarationName::Identifier); bool foundSameNameMethod = false; SmallVector overloadedMethods; for (Path.Decls = BaseRecord->lookup(Name); !Path.Decls.empty(); Path.Decls = Path.Decls.slice(1)) { NamedDecl *D = Path.Decls.front(); if (CXXMethodDecl *MD = dyn_cast(D)) { MD = MD->getCanonicalDecl(); foundSameNameMethod = true; // Interested only in hidden virtual methods. if (!MD->isVirtual()) continue; // If the method we are checking overrides a method from its base, // don't warn about the other overloaded methods. Clang deviates from // GCC by only diagnosing overloads of inherited virtual functions that // do not override any other virtual functions in the base. GCC's // -Woverloaded-virtual diagnoses any derived function hiding a virtual // function from a base class. These cases may be better served by a // warning (not specific to virtual functions) on call sites when the // call would select a different function from the base class, were it // visible. // See FIXME in test/SemaCXX/warn-overload-virtual.cpp for an example. if (!S->IsOverload(Method, MD, false)) return true; // Collect the overload only if it's hidden. if (!CheckMostOverridenMethods(MD, OverridenAndUsingBaseMethods)) overloadedMethods.push_back(MD); } } if (foundSameNameMethod) OverloadedMethods.append(overloadedMethods.begin(), overloadedMethods.end()); return foundSameNameMethod; } }; } // end anonymous namespace /// \brief Add the most overridden methods from MD to Methods. static void AddMostOverridenMethods(const CXXMethodDecl *MD, llvm::SmallPtrSetImpl& Methods) { if (MD->size_overridden_methods() == 0) Methods.insert(MD->getCanonicalDecl()); for (CXXMethodDecl::method_iterator I = MD->begin_overridden_methods(), E = MD->end_overridden_methods(); I != E; ++I) AddMostOverridenMethods(*I, Methods); } /// \brief Check if a method overloads virtual methods in a base class without /// overriding any. void Sema::FindHiddenVirtualMethods(CXXMethodDecl *MD, SmallVectorImpl &OverloadedMethods) { if (!MD->getDeclName().isIdentifier()) return; CXXBasePaths Paths(/*FindAmbiguities=*/true, // true to look in all bases. /*bool RecordPaths=*/false, /*bool DetectVirtual=*/false); FindHiddenVirtualMethod FHVM; FHVM.Method = MD; FHVM.S = this; // Keep the base methods that were overridden or introduced in the subclass // by 'using' in a set. A base method not in this set is hidden.
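// Editor's illustrative sketch (not part of this patch) of the warning this
// machinery drives; type names are hypothetical:
//
//   struct Base { virtual void f(int); };
//   struct Derived : Base {
//     void f(const char *);  // -Woverloaded-virtual: hides 'Base::f(int)' and
//                            // overrides nothing
//   };
//   struct Derived2 : Base {
//     void f(int) override;  // overrides the base virtual function, so (unlike
//     void f(const char *);  // GCC) Clang does not warn about this overload
//   };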
CXXRecordDecl *DC = MD->getParent(); DeclContext::lookup_result R = DC->lookup(MD->getDeclName()); for (DeclContext::lookup_iterator I = R.begin(), E = R.end(); I != E; ++I) { NamedDecl *ND = *I; if (UsingShadowDecl *shad = dyn_cast(*I)) ND = shad->getTargetDecl(); if (CXXMethodDecl *MD = dyn_cast(ND)) AddMostOverridenMethods(MD, FHVM.OverridenAndUsingBaseMethods); } if (DC->lookupInBases(FHVM, Paths)) OverloadedMethods = FHVM.OverloadedMethods; } void Sema::NoteHiddenVirtualMethods(CXXMethodDecl *MD, SmallVectorImpl &OverloadedMethods) { for (unsigned i = 0, e = OverloadedMethods.size(); i != e; ++i) { CXXMethodDecl *overloadedMD = OverloadedMethods[i]; PartialDiagnostic PD = PDiag( diag::note_hidden_overloaded_virtual_declared_here) << overloadedMD; HandleFunctionTypeMismatch(PD, MD->getType(), overloadedMD->getType()); Diag(overloadedMD->getLocation(), PD); } } /// \brief Diagnose methods which overload virtual methods in a base class /// without overriding any. void Sema::DiagnoseHiddenVirtualMethods(CXXMethodDecl *MD) { if (MD->isInvalidDecl()) return; if (Diags.isIgnored(diag::warn_overloaded_virtual, MD->getLocation())) return; SmallVector OverloadedMethods; FindHiddenVirtualMethods(MD, OverloadedMethods); if (!OverloadedMethods.empty()) { Diag(MD->getLocation(), diag::warn_overloaded_virtual) << MD << (OverloadedMethods.size() > 1); NoteHiddenVirtualMethods(MD, OverloadedMethods); } } void Sema::ActOnFinishCXXMemberSpecification(Scope* S, SourceLocation RLoc, Decl *TagDecl, SourceLocation LBrac, SourceLocation RBrac, AttributeList *AttrList) { if (!TagDecl) return; AdjustDeclIfTemplate(TagDecl); for (const AttributeList* l = AttrList; l; l = l->getNext()) { if (l->getKind() != AttributeList::AT_Visibility) continue; l->setInvalid(); Diag(l->getLoc(), diag::warn_attribute_after_definition_ignored) << l->getName(); } ActOnFields(S, RLoc, TagDecl, llvm::makeArrayRef( // strict aliasing violation! reinterpret_cast(FieldCollector->getCurFields()), FieldCollector->getCurNumFields()), LBrac, RBrac, AttrList); - CheckCompletedCXXClass( - dyn_cast_or_null(TagDecl)); + CheckCompletedCXXClass(dyn_cast_or_null(TagDecl)); } /// AddImplicitlyDeclaredMembersToClass - Adds any implicitly-declared /// special functions, such as the default constructor, copy /// constructor, or destructor, to the given C++ class (C++ /// [special]p1). This routine can only be executed just before the /// definition of the class is complete. void Sema::AddImplicitlyDeclaredMembersToClass(CXXRecordDecl *ClassDecl) { if (ClassDecl->needsImplicitDefaultConstructor()) { ++ASTContext::NumImplicitDefaultConstructors; if (ClassDecl->hasInheritedConstructor()) DeclareImplicitDefaultConstructor(ClassDecl); } if (ClassDecl->needsImplicitCopyConstructor()) { ++ASTContext::NumImplicitCopyConstructors; // If the properties or semantics of the copy constructor couldn't be // determined while the class was being declared, force a declaration // of it now. if (ClassDecl->needsOverloadResolutionForCopyConstructor() || ClassDecl->hasInheritedConstructor()) DeclareImplicitCopyConstructor(ClassDecl); // For the MS ABI we need to know whether the copy ctor is deleted. A // prerequisite for deleting the implicit copy ctor is that the class has a // move ctor or move assignment that is either user-declared or whose // semantics are inherited from a subobject. FIXME: We should provide a more // direct way for CodeGen to ask whether the constructor was deleted. 
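// Editor's illustrative sketch (not part of this patch) of why the deletedness
// of the implicit copy constructor matters here; the type name is hypothetical:
//
//   struct M {
//     M() = default;
//     M(M &&) = default;  // user-declared move constructor
//   };
//   M a;
//   M b = a;  // error: M's implicitly-declared copy constructor is defined as
//             // deleted (C++11 [class.copy]p7)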
else if (Context.getTargetInfo().getCXXABI().isMicrosoft() && (ClassDecl->hasUserDeclaredMoveConstructor() || ClassDecl->needsOverloadResolutionForMoveConstructor() || ClassDecl->hasUserDeclaredMoveAssignment() || ClassDecl->needsOverloadResolutionForMoveAssignment())) DeclareImplicitCopyConstructor(ClassDecl); } if (getLangOpts().CPlusPlus11 && ClassDecl->needsImplicitMoveConstructor()) { ++ASTContext::NumImplicitMoveConstructors; if (ClassDecl->needsOverloadResolutionForMoveConstructor() || ClassDecl->hasInheritedConstructor()) DeclareImplicitMoveConstructor(ClassDecl); } if (ClassDecl->needsImplicitCopyAssignment()) { ++ASTContext::NumImplicitCopyAssignmentOperators; // If we have a dynamic class, then the copy assignment operator may be // virtual, so we have to declare it immediately. This ensures that, e.g., // it shows up in the right place in the vtable and that we diagnose // problems with the implicit exception specification. if (ClassDecl->isDynamicClass() || ClassDecl->needsOverloadResolutionForCopyAssignment() || ClassDecl->hasInheritedAssignment()) DeclareImplicitCopyAssignment(ClassDecl); } if (getLangOpts().CPlusPlus11 && ClassDecl->needsImplicitMoveAssignment()) { ++ASTContext::NumImplicitMoveAssignmentOperators; // Likewise for the move assignment operator. if (ClassDecl->isDynamicClass() || ClassDecl->needsOverloadResolutionForMoveAssignment() || ClassDecl->hasInheritedAssignment()) DeclareImplicitMoveAssignment(ClassDecl); } if (ClassDecl->needsImplicitDestructor()) { ++ASTContext::NumImplicitDestructors; // If we have a dynamic class, then the destructor may be virtual, so we // have to declare the destructor immediately. This ensures that, e.g., it // shows up in the right place in the vtable and that we diagnose problems // with the implicit exception specification. if (ClassDecl->isDynamicClass() || ClassDecl->needsOverloadResolutionForDestructor()) DeclareImplicitDestructor(ClassDecl); } } unsigned Sema::ActOnReenterTemplateScope(Scope *S, Decl *D) { if (!D) return 0; // The order of template parameters is not important here. All names // get added to the same scope. SmallVector ParameterLists; if (TemplateDecl *TD = dyn_cast(D)) D = TD->getTemplatedDecl(); if (auto *PSD = dyn_cast(D)) ParameterLists.push_back(PSD->getTemplateParameters()); if (DeclaratorDecl *DD = dyn_cast(D)) { for (unsigned i = 0; i < DD->getNumTemplateParameterLists(); ++i) ParameterLists.push_back(DD->getTemplateParameterList(i)); if (FunctionDecl *FD = dyn_cast(D)) { if (FunctionTemplateDecl *FTD = FD->getDescribedFunctionTemplate()) ParameterLists.push_back(FTD->getTemplateParameters()); } } if (TagDecl *TD = dyn_cast(D)) { for (unsigned i = 0; i < TD->getNumTemplateParameterLists(); ++i) ParameterLists.push_back(TD->getTemplateParameterList(i)); if (CXXRecordDecl *RD = dyn_cast(TD)) { if (ClassTemplateDecl *CTD = RD->getDescribedClassTemplate()) ParameterLists.push_back(CTD->getTemplateParameters()); } } unsigned Count = 0; for (TemplateParameterList *Params : ParameterLists) { if (Params->size() > 0) // Ignore explicit specializations; they don't contribute to the template // depth. 
++Count; for (NamedDecl *Param : *Params) { if (Param->getDeclName()) { S->AddDecl(Param); IdResolver.AddDecl(Param); } } } return Count; } void Sema::ActOnStartDelayedMemberDeclarations(Scope *S, Decl *RecordD) { if (!RecordD) return; AdjustDeclIfTemplate(RecordD); CXXRecordDecl *Record = cast(RecordD); PushDeclContext(S, Record); } void Sema::ActOnFinishDelayedMemberDeclarations(Scope *S, Decl *RecordD) { if (!RecordD) return; PopDeclContext(); } /// This is used to implement the constant expression evaluation part of the /// attribute enable_if extension. There is nothing in standard C++ which would /// require reentering parameters. void Sema::ActOnReenterCXXMethodParameter(Scope *S, ParmVarDecl *Param) { if (!Param) return; S->AddDecl(Param); if (Param->getDeclName()) IdResolver.AddDecl(Param); } /// ActOnStartDelayedCXXMethodDeclaration - We have completed /// parsing a top-level (non-nested) C++ class, and we are now /// parsing those parts of the given Method declaration that could /// not be parsed earlier (C++ [class.mem]p2), such as default /// arguments. This action should enter the scope of the given /// Method declaration as if we had just parsed the qualified method /// name. However, it should not bring the parameters into scope; /// that will be performed by ActOnDelayedCXXMethodParameter. void Sema::ActOnStartDelayedCXXMethodDeclaration(Scope *S, Decl *MethodD) { } /// ActOnDelayedCXXMethodParameter - We've already started a delayed /// C++ method declaration. We're (re-)introducing the given /// function parameter into scope for use in parsing later parts of /// the method declaration. For example, we could see an /// ActOnParamDefaultArgument event for this parameter. void Sema::ActOnDelayedCXXMethodParameter(Scope *S, Decl *ParamD) { if (!ParamD) return; ParmVarDecl *Param = cast(ParamD); // If this parameter has an unparsed default argument, clear it out // to make way for the parsed default argument. if (Param->hasUnparsedDefaultArg()) Param->setDefaultArg(nullptr); S->AddDecl(Param); if (Param->getDeclName()) IdResolver.AddDecl(Param); } /// ActOnFinishDelayedCXXMethodDeclaration - We have finished /// processing the delayed method declaration for Method. The method /// declaration is now considered finished. There may be a separate /// ActOnStartOfFunctionDef action later (not necessarily /// immediately!) for this method, if it was also defined inside the /// class body. void Sema::ActOnFinishDelayedCXXMethodDeclaration(Scope *S, Decl *MethodD) { if (!MethodD) return; AdjustDeclIfTemplate(MethodD); FunctionDecl *Method = cast(MethodD); // Now that we have our default arguments, check the constructor // again. It could produce additional diagnostics or affect whether // the class has implicitly-declared destructors, among other // things. if (CXXConstructorDecl *Constructor = dyn_cast(Method)) CheckConstructor(Constructor); // Check the default arguments, which we may have added. if (!Method->isInvalidDecl()) CheckCXXDefaultArguments(Method); } /// CheckConstructorDeclarator - Called by ActOnDeclarator to check /// the well-formedness of the constructor declarator @p D with type @p /// R. If there are any errors in the declarator, this routine will /// emit diagnostics and set the invalid bit to true. In any case, the type /// will be updated to reflect a well-formed type for the constructor and /// returned. 
QualType Sema::CheckConstructorDeclarator(Declarator &D, QualType R, StorageClass &SC) { bool isVirtual = D.getDeclSpec().isVirtualSpecified(); // C++ [class.ctor]p3: // A constructor shall not be virtual (10.3) or static (9.4). A // constructor can be invoked for a const, volatile or const // volatile object. A constructor shall not be declared const, // volatile, or const volatile (9.3.2). if (isVirtual) { if (!D.isInvalidType()) Diag(D.getIdentifierLoc(), diag::err_constructor_cannot_be) << "virtual" << SourceRange(D.getDeclSpec().getVirtualSpecLoc()) << SourceRange(D.getIdentifierLoc()); D.setInvalidType(); } if (SC == SC_Static) { if (!D.isInvalidType()) Diag(D.getIdentifierLoc(), diag::err_constructor_cannot_be) << "static" << SourceRange(D.getDeclSpec().getStorageClassSpecLoc()) << SourceRange(D.getIdentifierLoc()); D.setInvalidType(); SC = SC_None; } if (unsigned TypeQuals = D.getDeclSpec().getTypeQualifiers()) { diagnoseIgnoredQualifiers( diag::err_constructor_return_type, TypeQuals, SourceLocation(), D.getDeclSpec().getConstSpecLoc(), D.getDeclSpec().getVolatileSpecLoc(), D.getDeclSpec().getRestrictSpecLoc(), D.getDeclSpec().getAtomicSpecLoc()); D.setInvalidType(); } DeclaratorChunk::FunctionTypeInfo &FTI = D.getFunctionTypeInfo(); if (FTI.TypeQuals != 0) { if (FTI.TypeQuals & Qualifiers::Const) Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_constructor) << "const" << SourceRange(D.getIdentifierLoc()); if (FTI.TypeQuals & Qualifiers::Volatile) Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_constructor) << "volatile" << SourceRange(D.getIdentifierLoc()); if (FTI.TypeQuals & Qualifiers::Restrict) Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_constructor) << "restrict" << SourceRange(D.getIdentifierLoc()); D.setInvalidType(); } // C++0x [class.ctor]p4: // A constructor shall not be declared with a ref-qualifier. if (FTI.hasRefQualifier()) { Diag(FTI.getRefQualifierLoc(), diag::err_ref_qualifier_constructor) << FTI.RefQualifierIsLValueRef << FixItHint::CreateRemoval(FTI.getRefQualifierLoc()); D.setInvalidType(); } // Rebuild the function type "R" without any type qualifiers (in // case any of the errors above fired) and with "void" as the // return type, since constructors don't have return types. const FunctionProtoType *Proto = R->getAs(); if (Proto->getReturnType() == Context.VoidTy && !D.isInvalidType()) return R; FunctionProtoType::ExtProtoInfo EPI = Proto->getExtProtoInfo(); EPI.TypeQuals = 0; EPI.RefQualifier = RQ_None; return Context.getFunctionType(Context.VoidTy, Proto->getParamTypes(), EPI); } /// CheckConstructor - Checks a fully-formed constructor for /// well-formedness, issuing any diagnostics required. Returns true if /// the constructor declarator is invalid. void Sema::CheckConstructor(CXXConstructorDecl *Constructor) { CXXRecordDecl *ClassDecl = dyn_cast(Constructor->getDeclContext()); if (!ClassDecl) return Constructor->setInvalidDecl(); // C++ [class.copy]p3: // A declaration of a constructor for a class X is ill-formed if // its first parameter is of type (optionally cv-qualified) X and // either there are no other parameters or else all other // parameters have default arguments. 
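// Editor's illustrative sketch (not part of this patch) of the rule quoted
// above; type names are hypothetical:
//
//   struct X {
//     X(X);           // error: copy constructor must pass its first argument by
//                     // reference; the fix-it suggests 'X(const X &)'
//   };
//   struct Y {
//     Y(Y, int = 0);  // likewise ill-formed: every other parameter has a
//                     // default argument
//   };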
if (!Constructor->isInvalidDecl() && ((Constructor->getNumParams() == 1) || (Constructor->getNumParams() > 1 && Constructor->getParamDecl(1)->hasDefaultArg())) && Constructor->getTemplateSpecializationKind() != TSK_ImplicitInstantiation) { QualType ParamType = Constructor->getParamDecl(0)->getType(); QualType ClassTy = Context.getTagDeclType(ClassDecl); if (Context.getCanonicalType(ParamType).getUnqualifiedType() == ClassTy) { SourceLocation ParamLoc = Constructor->getParamDecl(0)->getLocation(); const char *ConstRef = Constructor->getParamDecl(0)->getIdentifier() ? "const &" : " const &"; Diag(ParamLoc, diag::err_constructor_byvalue_arg) << FixItHint::CreateInsertion(ParamLoc, ConstRef); // FIXME: Rather than making the constructor invalid, we should endeavor // to fix the type. Constructor->setInvalidDecl(); } } } /// CheckDestructor - Checks a fully-formed destructor definition for /// well-formedness, issuing any diagnostics required. Returns true /// on error. bool Sema::CheckDestructor(CXXDestructorDecl *Destructor) { CXXRecordDecl *RD = Destructor->getParent(); if (!Destructor->getOperatorDelete() && Destructor->isVirtual()) { SourceLocation Loc; if (!Destructor->isImplicit()) Loc = Destructor->getLocation(); else Loc = RD->getLocation(); // If we have a virtual destructor, look up the deallocation function if (FunctionDecl *OperatorDelete = FindDeallocationFunctionForDestructor(Loc, RD)) { MarkFunctionReferenced(Loc, OperatorDelete); Destructor->setOperatorDelete(OperatorDelete); } } return false; } /// CheckDestructorDeclarator - Called by ActOnDeclarator to check /// the well-formedness of the destructor declarator @p D with type @p /// R. If there are any errors in the declarator, this routine will /// emit diagnostics and set the declarator to invalid. Even if this happens, /// the type will be updated to reflect a well-formed type for the destructor and /// returned. QualType Sema::CheckDestructorDeclarator(Declarator &D, QualType R, StorageClass& SC) { // C++ [class.dtor]p1: // [...] A typedef-name that names a class is a class-name // (7.1.3); however, a typedef-name that names a class shall not // be used as the identifier in the declarator for a destructor // declaration. QualType DeclaratorType = GetTypeFromParser(D.getName().DestructorName); if (const TypedefType *TT = DeclaratorType->getAs()) Diag(D.getIdentifierLoc(), diag::err_destructor_typedef_name) << DeclaratorType << isa(TT->getDecl()); else if (const TemplateSpecializationType *TST = DeclaratorType->getAs()) if (TST->isTypeAlias()) Diag(D.getIdentifierLoc(), diag::err_destructor_typedef_name) << DeclaratorType << 1; // C++ [class.dtor]p2: // A destructor is used to destroy objects of its class type. A // destructor takes no parameters, and no return type can be // specified for it (not even void). The address of a destructor // shall not be taken. A destructor shall not be static. A // destructor can be invoked for a const, volatile or const // volatile object. A destructor shall not be declared const, // volatile or const volatile (9.3.2).
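// Editor's illustrative sketch (not part of this patch) of the declarator
// forms rejected below; type names are hypothetical:
//
//   struct A { static ~A(); };  // error: destructor cannot be declared 'static'
//   struct B { ~B(int); };      // error: destructor cannot have parameters
//   struct C { int ~C(); };     // error: destructor cannot have a return type
//   struct D { ~D() const; };   // error: destructor cannot be declared 'const'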
if (SC == SC_Static) { if (!D.isInvalidType()) Diag(D.getIdentifierLoc(), diag::err_destructor_cannot_be) << "static" << SourceRange(D.getDeclSpec().getStorageClassSpecLoc()) << SourceRange(D.getIdentifierLoc()) << FixItHint::CreateRemoval(D.getDeclSpec().getStorageClassSpecLoc()); SC = SC_None; } if (!D.isInvalidType()) { // Destructors don't have return types, but the parser will // happily parse something like: // // class X { // float ~X(); // }; // // The return type will be eliminated later. if (D.getDeclSpec().hasTypeSpecifier()) Diag(D.getIdentifierLoc(), diag::err_destructor_return_type) << SourceRange(D.getDeclSpec().getTypeSpecTypeLoc()) << SourceRange(D.getIdentifierLoc()); else if (unsigned TypeQuals = D.getDeclSpec().getTypeQualifiers()) { diagnoseIgnoredQualifiers(diag::err_destructor_return_type, TypeQuals, SourceLocation(), D.getDeclSpec().getConstSpecLoc(), D.getDeclSpec().getVolatileSpecLoc(), D.getDeclSpec().getRestrictSpecLoc(), D.getDeclSpec().getAtomicSpecLoc()); D.setInvalidType(); } } DeclaratorChunk::FunctionTypeInfo &FTI = D.getFunctionTypeInfo(); if (FTI.TypeQuals != 0 && !D.isInvalidType()) { if (FTI.TypeQuals & Qualifiers::Const) Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_destructor) << "const" << SourceRange(D.getIdentifierLoc()); if (FTI.TypeQuals & Qualifiers::Volatile) Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_destructor) << "volatile" << SourceRange(D.getIdentifierLoc()); if (FTI.TypeQuals & Qualifiers::Restrict) Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_destructor) << "restrict" << SourceRange(D.getIdentifierLoc()); D.setInvalidType(); } // C++0x [class.dtor]p2: // A destructor shall not be declared with a ref-qualifier. if (FTI.hasRefQualifier()) { Diag(FTI.getRefQualifierLoc(), diag::err_ref_qualifier_destructor) << FTI.RefQualifierIsLValueRef << FixItHint::CreateRemoval(FTI.getRefQualifierLoc()); D.setInvalidType(); } // Make sure we don't have any parameters. if (FTIHasNonVoidParameters(FTI)) { Diag(D.getIdentifierLoc(), diag::err_destructor_with_params); // Delete the parameters. FTI.freeParams(); D.setInvalidType(); } // Make sure the destructor isn't variadic. if (FTI.isVariadic) { Diag(D.getIdentifierLoc(), diag::err_destructor_variadic); D.setInvalidType(); } // Rebuild the function type "R" without any type qualifiers or // parameters (in case any of the errors above fired) and with // "void" as the return type, since destructors don't have return // types. if (!D.isInvalidType()) return R; const FunctionProtoType *Proto = R->getAs(); FunctionProtoType::ExtProtoInfo EPI = Proto->getExtProtoInfo(); EPI.Variadic = false; EPI.TypeQuals = 0; EPI.RefQualifier = RQ_None; return Context.getFunctionType(Context.VoidTy, None, EPI); } static void extendLeft(SourceRange &R, SourceRange Before) { if (Before.isInvalid()) return; R.setBegin(Before.getBegin()); if (R.getEnd().isInvalid()) R.setEnd(Before.getEnd()); } static void extendRight(SourceRange &R, SourceRange After) { if (After.isInvalid()) return; if (R.getBegin().isInvalid()) R.setBegin(After.getBegin()); R.setEnd(After.getEnd()); } /// CheckConversionDeclarator - Called by ActOnDeclarator to check the /// well-formednes of the conversion function declarator @p D with /// type @p R. If there are any errors in the declarator, this routine /// will emit diagnostics and return true. Otherwise, it will return /// false. Either way, the type @p R will be updated to reflect a /// well-formed type for the conversion operator. 
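// Editor's illustrative sketch (not part of this patch) of the declarator
// forms this routine accepts and rejects; type names are hypothetical:
//
//   struct S {
//     operator int() const;            // OK: no parameters, no declared return type
//     explicit operator bool() const;  // OK in C++11 and later; diagnosed as an
//                                      // extension in C++98 mode
//   };
//   struct T {
//     int operator float();  // error: conversion function cannot have a return type
//     operator double(int);  // error: conversion function cannot have any parameters
//   };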
void Sema::CheckConversionDeclarator(Declarator &D, QualType &R, StorageClass& SC) { // C++ [class.conv.fct]p1: // Neither parameter types nor return type can be specified. The // type of a conversion function (8.3.5) is "function taking no // parameter returning conversion-type-id." if (SC == SC_Static) { if (!D.isInvalidType()) Diag(D.getIdentifierLoc(), diag::err_conv_function_not_member) << SourceRange(D.getDeclSpec().getStorageClassSpecLoc()) << D.getName().getSourceRange(); D.setInvalidType(); SC = SC_None; } TypeSourceInfo *ConvTSI = nullptr; QualType ConvType = GetTypeFromParser(D.getName().ConversionFunctionId, &ConvTSI); if (D.getDeclSpec().hasTypeSpecifier() && !D.isInvalidType()) { // Conversion functions don't have return types, but the parser will // happily parse something like: // // class X { // float operator bool(); // }; // // The return type will be changed later anyway. Diag(D.getIdentifierLoc(), diag::err_conv_function_return_type) << SourceRange(D.getDeclSpec().getTypeSpecTypeLoc()) << SourceRange(D.getIdentifierLoc()); D.setInvalidType(); } const FunctionProtoType *Proto = R->getAs(); // Make sure we don't have any parameters. if (Proto->getNumParams() > 0) { Diag(D.getIdentifierLoc(), diag::err_conv_function_with_params); // Delete the parameters. D.getFunctionTypeInfo().freeParams(); D.setInvalidType(); } else if (Proto->isVariadic()) { Diag(D.getIdentifierLoc(), diag::err_conv_function_variadic); D.setInvalidType(); } // Diagnose "&operator bool()" and other such nonsense. This // is actually a gcc extension which we don't support. if (Proto->getReturnType() != ConvType) { bool NeedsTypedef = false; SourceRange Before, After; // Walk the chunks and extract information on them for our diagnostic. bool PastFunctionChunk = false; for (auto &Chunk : D.type_objects()) { switch (Chunk.Kind) { case DeclaratorChunk::Function: if (!PastFunctionChunk) { if (Chunk.Fun.HasTrailingReturnType) { TypeSourceInfo *TRT = nullptr; GetTypeFromParser(Chunk.Fun.getTrailingReturnType(), &TRT); if (TRT) extendRight(After, TRT->getTypeLoc().getSourceRange()); } PastFunctionChunk = true; break; } // Fall through. case DeclaratorChunk::Array: NeedsTypedef = true; extendRight(After, Chunk.getSourceRange()); break; case DeclaratorChunk::Pointer: case DeclaratorChunk::BlockPointer: case DeclaratorChunk::Reference: case DeclaratorChunk::MemberPointer: case DeclaratorChunk::Pipe: extendLeft(Before, Chunk.getSourceRange()); break; case DeclaratorChunk::Paren: extendLeft(Before, Chunk.Loc); extendRight(After, Chunk.EndLoc); break; } } SourceLocation Loc = Before.isValid() ? Before.getBegin() : After.isValid() ? After.getBegin() : D.getIdentifierLoc(); auto &&DB = Diag(Loc, diag::err_conv_function_with_complex_decl); DB << Before << After; if (!NeedsTypedef) { DB << /*don't need a typedef*/0; // If we can provide a correct fix-it hint, do so. if (After.isInvalid() && ConvTSI) { SourceLocation InsertLoc = getLocForEndOfToken(ConvTSI->getTypeLoc().getLocEnd()); DB << FixItHint::CreateInsertion(InsertLoc, " ") << FixItHint::CreateInsertionFromRange( InsertLoc, CharSourceRange::getTokenRange(Before)) << FixItHint::CreateRemoval(Before); } } else if (!Proto->getReturnType()->isDependentType()) { DB << /*typedef*/1 << Proto->getReturnType(); } else if (getLangOpts().CPlusPlus11) { DB << /*alias template*/2 << Proto->getReturnType(); } else { DB << /*might not be fixable*/3; } // Recover by incorporating the other type chunks into the result type. 
// Note, this does *not* change the name of the function. This is compatible // with the GCC extension: // struct S { &operator int(); } s; // int &r = s.operator int(); // ok in GCC // S::operator int&() {} // error in GCC, function name is 'operator int'. ConvType = Proto->getReturnType(); } // C++ [class.conv.fct]p4: // The conversion-type-id shall not represent a function type nor // an array type. if (ConvType->isArrayType()) { Diag(D.getIdentifierLoc(), diag::err_conv_function_to_array); ConvType = Context.getPointerType(ConvType); D.setInvalidType(); } else if (ConvType->isFunctionType()) { Diag(D.getIdentifierLoc(), diag::err_conv_function_to_function); ConvType = Context.getPointerType(ConvType); D.setInvalidType(); } // Rebuild the function type "R" without any parameters (in case any // of the errors above fired) and with the conversion type as the // return type. if (D.isInvalidType()) R = Context.getFunctionType(ConvType, None, Proto->getExtProtoInfo()); // C++0x explicit conversion operators. if (D.getDeclSpec().isExplicitSpecified()) Diag(D.getDeclSpec().getExplicitSpecLoc(), getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_explicit_conversion_functions : diag::ext_explicit_conversion_functions) << SourceRange(D.getDeclSpec().getExplicitSpecLoc()); } /// ActOnConversionDeclarator - Called by ActOnDeclarator to complete /// the declaration of the given C++ conversion function. This routine /// is responsible for recording the conversion function in the C++ /// class, if possible. Decl *Sema::ActOnConversionDeclarator(CXXConversionDecl *Conversion) { assert(Conversion && "Expected to receive a conversion function declaration"); CXXRecordDecl *ClassDecl = cast(Conversion->getDeclContext()); // Make sure we aren't redeclaring the conversion function. QualType ConvType = Context.getCanonicalType(Conversion->getConversionType()); // C++ [class.conv.fct]p1: // [...] A conversion function is never used to convert a // (possibly cv-qualified) object to the (possibly cv-qualified) // same object type (or a reference to it), to a (possibly // cv-qualified) base class of that type (or a reference to it), // or to (possibly cv-qualified) void. // FIXME: Suppress this warning if the conversion function ends up being a // virtual function that overrides a virtual function in a base class. QualType ClassType = Context.getCanonicalType(Context.getTypeDeclType(ClassDecl)); if (const ReferenceType *ConvTypeRef = ConvType->getAs()) ConvType = ConvTypeRef->getPointeeType(); if (Conversion->getTemplateSpecializationKind() != TSK_Undeclared && Conversion->getTemplateSpecializationKind() != TSK_ExplicitSpecialization) /* Suppress diagnostics for instantiations. */; else if (ConvType->isRecordType()) { ConvType = Context.getCanonicalType(ConvType).getUnqualifiedType(); if (ConvType == ClassType) Diag(Conversion->getLocation(), diag::warn_conv_to_self_not_used) << ClassType; else if (IsDerivedFrom(Conversion->getLocation(), ClassType, ConvType)) Diag(Conversion->getLocation(), diag::warn_conv_to_base_not_used) << ClassType << ConvType; } else if (ConvType->isVoidType()) { Diag(Conversion->getLocation(), diag::warn_conv_to_void_not_used) << ClassType << ConvType; } if (FunctionTemplateDecl *ConversionTemplate = Conversion->getDescribedFunctionTemplate()) return ConversionTemplate; return Conversion; } namespace { /// Utility class to accumulate and print a diagnostic listing the invalid /// specifier(s) on a declaration. 
struct BadSpecifierDiagnoser { BadSpecifierDiagnoser(Sema &S, SourceLocation Loc, unsigned DiagID) : S(S), Diagnostic(S.Diag(Loc, DiagID)) {} ~BadSpecifierDiagnoser() { Diagnostic << Specifiers; } template void check(SourceLocation SpecLoc, T Spec) { return check(SpecLoc, DeclSpec::getSpecifierName(Spec)); } void check(SourceLocation SpecLoc, DeclSpec::TST Spec) { return check(SpecLoc, DeclSpec::getSpecifierName(Spec, S.getPrintingPolicy())); } void check(SourceLocation SpecLoc, const char *Spec) { if (SpecLoc.isInvalid()) return; Diagnostic << SourceRange(SpecLoc, SpecLoc); if (!Specifiers.empty()) Specifiers += " "; Specifiers += Spec; } Sema &S; Sema::SemaDiagnosticBuilder Diagnostic; std::string Specifiers; }; } /// Check the validity of a declarator that we parsed for a deduction-guide. /// These aren't actually declarators in the grammar, so we need to check that /// the user didn't specify any pieces that are not part of the deduction-guide /// grammar. void Sema::CheckDeductionGuideDeclarator(Declarator &D, QualType &R, StorageClass &SC) { TemplateName GuidedTemplate = D.getName().TemplateName.get().get(); TemplateDecl *GuidedTemplateDecl = GuidedTemplate.getAsTemplateDecl(); assert(GuidedTemplateDecl && "missing template decl for deduction guide"); // C++ [temp.deduct.guide]p3: // A deduction-guide shall be declared in the same scope as the // corresponding class template. if (!CurContext->getRedeclContext()->Equals( GuidedTemplateDecl->getDeclContext()->getRedeclContext())) { Diag(D.getIdentifierLoc(), diag::err_deduction_guide_wrong_scope) << GuidedTemplateDecl; Diag(GuidedTemplateDecl->getLocation(), diag::note_template_decl_here); } auto &DS = D.getMutableDeclSpec(); // We leave 'friend' and 'virtual' to be rejected in the normal way. if (DS.hasTypeSpecifier() || DS.getTypeQualifiers() || DS.getStorageClassSpecLoc().isValid() || DS.isInlineSpecified() || DS.isNoreturnSpecified() || DS.isConstexprSpecified() || DS.isConceptSpecified()) { BadSpecifierDiagnoser Diagnoser( *this, D.getIdentifierLoc(), diag::err_deduction_guide_invalid_specifier); Diagnoser.check(DS.getStorageClassSpecLoc(), DS.getStorageClassSpec()); DS.ClearStorageClassSpecs(); SC = SC_None; // 'explicit' is permitted. Diagnoser.check(DS.getInlineSpecLoc(), "inline"); Diagnoser.check(DS.getNoreturnSpecLoc(), "_Noreturn"); Diagnoser.check(DS.getConstexprSpecLoc(), "constexpr"); Diagnoser.check(DS.getConceptSpecLoc(), "concept"); DS.ClearConstexprSpec(); DS.ClearConceptSpec(); Diagnoser.check(DS.getConstSpecLoc(), "const"); Diagnoser.check(DS.getRestrictSpecLoc(), "__restrict"); Diagnoser.check(DS.getVolatileSpecLoc(), "volatile"); Diagnoser.check(DS.getAtomicSpecLoc(), "_Atomic"); Diagnoser.check(DS.getUnalignedSpecLoc(), "__unaligned"); DS.ClearTypeQualifiers(); Diagnoser.check(DS.getTypeSpecComplexLoc(), DS.getTypeSpecComplex()); Diagnoser.check(DS.getTypeSpecSignLoc(), DS.getTypeSpecSign()); Diagnoser.check(DS.getTypeSpecWidthLoc(), DS.getTypeSpecWidth()); Diagnoser.check(DS.getTypeSpecTypeLoc(), DS.getTypeSpecType()); DS.ClearTypeSpecType(); } if (D.isInvalidType()) return; // Check that the declarator is simple enough.
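// Editor's illustrative sketch (not part of this patch) of the checks below,
// under -std=c++1z; names are hypothetical:
//
//   template <typename T> struct Wrapper { Wrapper(T); };
//   template <typename T> Wrapper(T *) -> Wrapper<T>;  // OK: same scope as 'Wrapper',
//                                                      // trailing return type names a
//                                                      // specialization of it
//   inline Wrapper(int) -> Wrapper<int>;  // error: 'inline' is not a valid specifier
//                                         // on a deduction guide
//   Wrapper w(42);  // deduces Wrapper<int> from the guide implied by the constructor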
bool FoundFunction = false; for (const DeclaratorChunk &Chunk : llvm::reverse(D.type_objects())) { if (Chunk.Kind == DeclaratorChunk::Paren) continue; if (Chunk.Kind != DeclaratorChunk::Function || FoundFunction) { Diag(D.getDeclSpec().getLocStart(), diag::err_deduction_guide_with_complex_decl) << D.getSourceRange(); break; } if (!Chunk.Fun.hasTrailingReturnType()) { Diag(D.getName().getLocStart(), diag::err_deduction_guide_no_trailing_return_type); break; } // Check that the return type is written as a specialization of // the template specified as the deduction-guide's name. ParsedType TrailingReturnType = Chunk.Fun.getTrailingReturnType(); TypeSourceInfo *TSI = nullptr; QualType RetTy = GetTypeFromParser(TrailingReturnType, &TSI); assert(TSI && "deduction guide has valid type but invalid return type?"); bool AcceptableReturnType = false; bool MightInstantiateToSpecialization = false; if (auto RetTST = TSI->getTypeLoc().getAs()) { TemplateName SpecifiedName = RetTST.getTypePtr()->getTemplateName(); bool TemplateMatches = Context.hasSameTemplateName(SpecifiedName, GuidedTemplate); if (SpecifiedName.getKind() == TemplateName::Template && TemplateMatches) AcceptableReturnType = true; else { // This could still instantiate to the right type, unless we know it // names the wrong class template. auto *TD = SpecifiedName.getAsTemplateDecl(); MightInstantiateToSpecialization = !(TD && isa(TD) && !TemplateMatches); } } else if (!RetTy.hasQualifiers() && RetTy->isDependentType()) { MightInstantiateToSpecialization = true; } if (!AcceptableReturnType) { Diag(TSI->getTypeLoc().getLocStart(), diag::err_deduction_guide_bad_trailing_return_type) << GuidedTemplate << TSI->getType() << MightInstantiateToSpecialization << TSI->getTypeLoc().getSourceRange(); } // Keep going to check that we don't have any inner declarator pieces (we // could still have a function returning a pointer to a function). FoundFunction = true; } if (D.isFunctionDefinition()) Diag(D.getIdentifierLoc(), diag::err_deduction_guide_defines_function); } //===----------------------------------------------------------------------===// // Namespace Handling //===----------------------------------------------------------------------===// /// \brief Diagnose a mismatch in 'inline' qualifiers when a namespace is /// reopened. static void DiagnoseNamespaceInlineMismatch(Sema &S, SourceLocation KeywordLoc, SourceLocation Loc, IdentifierInfo *II, bool *IsInline, NamespaceDecl *PrevNS) { assert(*IsInline != PrevNS->isInline()); // HACK: Work around a bug in libstdc++4.6's , where // std::__atomic[0,1,2] are defined as non-inline namespaces, then reopened as // inline namespaces, with the intention of bringing names into namespace std. // // We support this just well enough to get that case working; this is not // sufficient to support reopening namespaces as inline in general. if (*IsInline && II && II->getName().startswith("__atomic") && S.getSourceManager().isInSystemHeader(Loc)) { // Mark all prior declarations of the namespace as inline. for (NamespaceDecl *NS = PrevNS->getMostRecentDecl(); NS; NS = NS->getPreviousDecl()) NS->setInline(*IsInline); // Patch up the lookup table for the containing namespace. This isn't really // correct, but it's good enough for this particular case. for (auto *I : PrevNS->decls()) if (auto *ND = dyn_cast(I)) PrevNS->getParent()->makeDeclVisibleInContext(ND); return; } if (PrevNS->isInline()) // The user probably just forgot the 'inline', so suggest that it // be added back. 
S.Diag(Loc, diag::warn_inline_namespace_reopened_noninline) << FixItHint::CreateInsertion(KeywordLoc, "inline "); else S.Diag(Loc, diag::err_inline_namespace_mismatch); S.Diag(PrevNS->getLocation(), diag::note_previous_definition); *IsInline = PrevNS->isInline(); } /// ActOnStartNamespaceDef - This is called at the start of a namespace /// definition. Decl *Sema::ActOnStartNamespaceDef(Scope *NamespcScope, SourceLocation InlineLoc, SourceLocation NamespaceLoc, SourceLocation IdentLoc, IdentifierInfo *II, SourceLocation LBrace, AttributeList *AttrList, UsingDirectiveDecl *&UD) { SourceLocation StartLoc = InlineLoc.isValid() ? InlineLoc : NamespaceLoc; // For anonymous namespace, take the location of the left brace. SourceLocation Loc = II ? IdentLoc : LBrace; bool IsInline = InlineLoc.isValid(); bool IsInvalid = false; bool IsStd = false; bool AddToKnown = false; Scope *DeclRegionScope = NamespcScope->getParent(); NamespaceDecl *PrevNS = nullptr; if (II) { // C++ [namespace.def]p2: // The identifier in an original-namespace-definition shall not // have been previously defined in the declarative region in // which the original-namespace-definition appears. The // identifier in an original-namespace-definition is the name of // the namespace. Subsequently in that declarative region, it is // treated as an original-namespace-name. // // Since namespace names are unique in their scope, and we don't // look through using directives, just look for any ordinary names // as if by qualified name lookup. LookupResult R(*this, II, IdentLoc, LookupOrdinaryName, ForRedeclaration); LookupQualifiedName(R, CurContext->getRedeclContext()); NamedDecl *PrevDecl = R.isSingleResult() ? R.getRepresentativeDecl() : nullptr; PrevNS = dyn_cast_or_null(PrevDecl); if (PrevNS) { // This is an extended namespace definition. if (IsInline != PrevNS->isInline()) DiagnoseNamespaceInlineMismatch(*this, NamespaceLoc, Loc, II, &IsInline, PrevNS); } else if (PrevDecl) { // This is an invalid name redefinition. Diag(Loc, diag::err_redefinition_different_kind) << II; Diag(PrevDecl->getLocation(), diag::note_previous_definition); IsInvalid = true; // Continue on to push Namespc as current DeclContext and return it. } else if (II->isStr("std") && CurContext->getRedeclContext()->isTranslationUnit()) { // This is the first "real" definition of the namespace "std", so update // our cache of the "std" namespace to point at this definition. PrevNS = getStdNamespace(); IsStd = true; AddToKnown = !IsInline; } else { // We've seen this namespace for the first time. AddToKnown = !IsInline; } } else { // Anonymous namespaces. // Determine whether the parent already has an anonymous namespace. DeclContext *Parent = CurContext->getRedeclContext(); if (TranslationUnitDecl *TU = dyn_cast(Parent)) { PrevNS = TU->getAnonymousNamespace(); } else { NamespaceDecl *ND = cast(Parent); PrevNS = ND->getAnonymousNamespace(); } if (PrevNS && IsInline != PrevNS->isInline()) DiagnoseNamespaceInlineMismatch(*this, NamespaceLoc, NamespaceLoc, II, &IsInline, PrevNS); } NamespaceDecl *Namespc = NamespaceDecl::Create(Context, CurContext, IsInline, StartLoc, Loc, II, PrevNS); if (IsInvalid) Namespc->setInvalidDecl(); ProcessDeclAttributeList(DeclRegionScope, Namespc, AttrList); AddPragmaAttributes(DeclRegionScope, Namespc); // FIXME: Should we be merging attributes? 
if (const VisibilityAttr *Attr = Namespc->getAttr()) PushNamespaceVisibilityAttr(Attr, Loc); if (IsStd) StdNamespace = Namespc; if (AddToKnown) KnownNamespaces[Namespc] = false; if (II) { PushOnScopeChains(Namespc, DeclRegionScope); } else { // Link the anonymous namespace into its parent. DeclContext *Parent = CurContext->getRedeclContext(); if (TranslationUnitDecl *TU = dyn_cast(Parent)) { TU->setAnonymousNamespace(Namespc); } else { cast(Parent)->setAnonymousNamespace(Namespc); } CurContext->addDecl(Namespc); // C++ [namespace.unnamed]p1. An unnamed-namespace-definition // behaves as if it were replaced by // namespace unique { /* empty body */ } // using namespace unique; // namespace unique { namespace-body } // where all occurrences of 'unique' in a translation unit are // replaced by the same identifier and this identifier differs // from all other identifiers in the entire program. // We just create the namespace with an empty name and then add an // implicit using declaration, just like the standard suggests. // // CodeGen enforces the "universally unique" aspect by giving all // declarations semantically contained within an anonymous // namespace internal linkage. if (!PrevNS) { UD = UsingDirectiveDecl::Create(Context, Parent, /* 'using' */ LBrace, /* 'namespace' */ SourceLocation(), /* qualifier */ NestedNameSpecifierLoc(), /* identifier */ SourceLocation(), Namespc, /* Ancestor */ Parent); UD->setImplicit(); Parent->addDecl(UD); } } ActOnDocumentableDecl(Namespc); // Although we could have an invalid decl (i.e. the namespace name is a // redefinition), push it as current DeclContext and try to continue parsing. // FIXME: We should be able to push Namespc here, so that the each DeclContext // for the namespace has the declarations that showed up in that particular // namespace definition. PushDeclContext(NamespcScope, Namespc); return Namespc; } /// getNamespaceDecl - Returns the namespace a decl represents. If the decl /// is a namespace alias, returns the namespace it points to. static inline NamespaceDecl *getNamespaceDecl(NamedDecl *D) { if (NamespaceAliasDecl *AD = dyn_cast_or_null(D)) return AD->getNamespace(); return dyn_cast_or_null(D); } /// ActOnFinishNamespaceDef - This callback is called after a namespace is /// exited. Decl is the DeclTy returned by ActOnStartNamespaceDef. void Sema::ActOnFinishNamespaceDef(Decl *Dcl, SourceLocation RBrace) { NamespaceDecl *Namespc = dyn_cast_or_null(Dcl); assert(Namespc && "Invalid parameter, expected NamespaceDecl"); Namespc->setRBraceLoc(RBrace); PopDeclContext(); if (Namespc->hasAttr()) PopPragmaVisibility(true, RBrace); } CXXRecordDecl *Sema::getStdBadAlloc() const { return cast_or_null( StdBadAlloc.get(Context.getExternalSource())); } EnumDecl *Sema::getStdAlignValT() const { return cast_or_null(StdAlignValT.get(Context.getExternalSource())); } NamespaceDecl *Sema::getStdNamespace() const { return cast_or_null( StdNamespace.get(Context.getExternalSource())); } NamespaceDecl *Sema::lookupStdExperimentalNamespace() { if (!StdExperimentalNamespaceCache) { if (auto Std = getStdNamespace()) { LookupResult Result(*this, &PP.getIdentifierTable().get("experimental"), SourceLocation(), LookupNamespaceName); if (!LookupQualifiedName(Result, Std) || !(StdExperimentalNamespaceCache = Result.getAsSingle())) Result.suppressDiagnostics(); } } return StdExperimentalNamespaceCache; } /// \brief Retrieve the special "std" namespace, which may require us to /// implicitly define the namespace. 
NamespaceDecl *Sema::getOrCreateStdNamespace() { if (!StdNamespace) { // The "std" namespace has not yet been defined, so build one implicitly. StdNamespace = NamespaceDecl::Create(Context, Context.getTranslationUnitDecl(), /*Inline=*/false, SourceLocation(), SourceLocation(), &PP.getIdentifierTable().get("std"), /*PrevDecl=*/nullptr); getStdNamespace()->setImplicit(true); } return getStdNamespace(); } bool Sema::isStdInitializerList(QualType Ty, QualType *Element) { assert(getLangOpts().CPlusPlus && "Looking for std::initializer_list outside of C++."); // We're looking for implicit instantiations of // template class std::initializer_list. if (!StdNamespace) // If we haven't seen namespace std yet, this can't be it. return false; ClassTemplateDecl *Template = nullptr; const TemplateArgument *Arguments = nullptr; if (const RecordType *RT = Ty->getAs()) { ClassTemplateSpecializationDecl *Specialization = dyn_cast(RT->getDecl()); if (!Specialization) return false; Template = Specialization->getSpecializedTemplate(); Arguments = Specialization->getTemplateArgs().data(); } else if (const TemplateSpecializationType *TST = Ty->getAs()) { Template = dyn_cast_or_null( TST->getTemplateName().getAsTemplateDecl()); Arguments = TST->getArgs(); } if (!Template) return false; if (!StdInitializerList) { // Haven't recognized std::initializer_list yet, maybe this is it. CXXRecordDecl *TemplateClass = Template->getTemplatedDecl(); if (TemplateClass->getIdentifier() != &PP.getIdentifierTable().get("initializer_list") || !getStdNamespace()->InEnclosingNamespaceSetOf( TemplateClass->getDeclContext())) return false; // This is a template called std::initializer_list, but is it the right // template? TemplateParameterList *Params = Template->getTemplateParameters(); if (Params->getMinRequiredArguments() != 1) return false; if (!isa(Params->getParam(0))) return false; // It's the right template. StdInitializerList = Template; } if (Template->getCanonicalDecl() != StdInitializerList->getCanonicalDecl()) return false; // This is an instance of std::initializer_list. Find the argument type. if (Element) *Element = Arguments[0].getAsType(); return true; } static ClassTemplateDecl *LookupStdInitializerList(Sema &S, SourceLocation Loc){ NamespaceDecl *Std = S.getStdNamespace(); if (!Std) { S.Diag(Loc, diag::err_implied_std_initializer_list_not_found); return nullptr; } LookupResult Result(S, &S.PP.getIdentifierTable().get("initializer_list"), Loc, Sema::LookupOrdinaryName); if (!S.LookupQualifiedName(Result, Std)) { S.Diag(Loc, diag::err_implied_std_initializer_list_not_found); return nullptr; } ClassTemplateDecl *Template = Result.getAsSingle(); if (!Template) { Result.suppressDiagnostics(); // We found something weird. Complain about the first thing we found. NamedDecl *Found = *Result.begin(); S.Diag(Found->getLocation(), diag::err_malformed_std_initializer_list); return nullptr; } // We found some template called std::initializer_list. Now verify that it's // correct. 
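  // For reference, the shape being verified roughly matches the usual library
  // declaration (sketch):
  //
  //   namespace std {
  //     template <class E> class initializer_list { /* ... */ };
  //   }
  //
  // i.e. a single type template parameter; anything else is treated as a
  // malformed std::initializer_list below.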
TemplateParameterList *Params = Template->getTemplateParameters(); if (Params->getMinRequiredArguments() != 1 || !isa(Params->getParam(0))) { S.Diag(Template->getLocation(), diag::err_malformed_std_initializer_list); return nullptr; } return Template; } QualType Sema::BuildStdInitializerList(QualType Element, SourceLocation Loc) { if (!StdInitializerList) { StdInitializerList = LookupStdInitializerList(*this, Loc); if (!StdInitializerList) return QualType(); } TemplateArgumentListInfo Args(Loc, Loc); Args.addArgument(TemplateArgumentLoc(TemplateArgument(Element), Context.getTrivialTypeSourceInfo(Element, Loc))); return Context.getCanonicalType( CheckTemplateIdType(TemplateName(StdInitializerList), Loc, Args)); } bool Sema::isInitListConstructor(const FunctionDecl *Ctor) { // C++ [dcl.init.list]p2: // A constructor is an initializer-list constructor if its first parameter // is of type std::initializer_list or reference to possibly cv-qualified // std::initializer_list for some type E, and either there are no other // parameters or else all other parameters have default arguments. if (Ctor->getNumParams() < 1 || (Ctor->getNumParams() > 1 && !Ctor->getParamDecl(1)->hasDefaultArg())) return false; QualType ArgType = Ctor->getParamDecl(0)->getType(); if (const ReferenceType *RT = ArgType->getAs()) ArgType = RT->getPointeeType().getUnqualifiedType(); return isStdInitializerList(ArgType, nullptr); } /// \brief Determine whether a using statement is in a context where it will be /// apply in all contexts. static bool IsUsingDirectiveInToplevelContext(DeclContext *CurContext) { switch (CurContext->getDeclKind()) { case Decl::TranslationUnit: return true; case Decl::LinkageSpec: return IsUsingDirectiveInToplevelContext(CurContext->getParent()); default: return false; } } namespace { // Callback to only accept typo corrections that are namespaces. class NamespaceValidatorCCC : public CorrectionCandidateCallback { public: bool ValidateCandidate(const TypoCorrection &candidate) override { if (NamedDecl *ND = candidate.getCorrectionDecl()) return isa(ND) || isa(ND); return false; } }; } static bool TryNamespaceTypoCorrection(Sema &S, LookupResult &R, Scope *Sc, CXXScopeSpec &SS, SourceLocation IdentLoc, IdentifierInfo *Ident) { R.clear(); if (TypoCorrection Corrected = S.CorrectTypo(R.getLookupNameInfo(), R.getLookupKind(), Sc, &SS, llvm::make_unique(), Sema::CTK_ErrorRecovery)) { if (DeclContext *DC = S.computeDeclContext(SS, false)) { std::string CorrectedStr(Corrected.getAsString(S.getLangOpts())); bool DroppedSpecifier = Corrected.WillReplaceSpecifier() && Ident->getName().equals(CorrectedStr); S.diagnoseTypo(Corrected, S.PDiag(diag::err_using_directive_member_suggest) << Ident << DC << DroppedSpecifier << SS.getRange(), S.PDiag(diag::note_namespace_defined_here)); } else { S.diagnoseTypo(Corrected, S.PDiag(diag::err_using_directive_suggest) << Ident, S.PDiag(diag::note_namespace_defined_here)); } R.addDecl(Corrected.getFoundDecl()); return true; } return false; } Decl *Sema::ActOnUsingDirective(Scope *S, SourceLocation UsingLoc, SourceLocation NamespcLoc, CXXScopeSpec &SS, SourceLocation IdentLoc, IdentifierInfo *NamespcName, AttributeList *AttrList) { assert(!SS.isInvalid() && "Invalid CXXScopeSpec."); assert(NamespcName && "Invalid NamespcName."); assert(IdentLoc.isValid() && "Invalid NamespceName location."); // This can only happen along a recovery path. 
while (S->isTemplateParamScope()) S = S->getParent(); assert(S->getFlags() & Scope::DeclScope && "Invalid Scope."); UsingDirectiveDecl *UDir = nullptr; NestedNameSpecifier *Qualifier = nullptr; if (SS.isSet()) Qualifier = SS.getScopeRep(); // Lookup namespace name. LookupResult R(*this, NamespcName, IdentLoc, LookupNamespaceName); LookupParsedName(R, S, &SS); if (R.isAmbiguous()) return nullptr; if (R.empty()) { R.clear(); // Allow "using namespace std;" or "using namespace ::std;" even if // "std" hasn't been defined yet, for GCC compatibility. if ((!Qualifier || Qualifier->getKind() == NestedNameSpecifier::Global) && NamespcName->isStr("std")) { Diag(IdentLoc, diag::ext_using_undefined_std); R.addDecl(getOrCreateStdNamespace()); R.resolveKind(); } // Otherwise, attempt typo correction. else TryNamespaceTypoCorrection(*this, R, S, SS, IdentLoc, NamespcName); } if (!R.empty()) { NamedDecl *Named = R.getRepresentativeDecl(); NamespaceDecl *NS = R.getAsSingle(); assert(NS && "expected namespace decl"); // The use of a nested name specifier may trigger deprecation warnings. DiagnoseUseOfDecl(Named, IdentLoc); // C++ [namespace.udir]p1: // A using-directive specifies that the names in the nominated // namespace can be used in the scope in which the // using-directive appears after the using-directive. During // unqualified name lookup (3.4.1), the names appear as if they // were declared in the nearest enclosing namespace which // contains both the using-directive and the nominated // namespace. [Note: in this context, "contains" means "contains // directly or indirectly". ] // Find enclosing context containing both using-directive and // nominated namespace. DeclContext *CommonAncestor = cast(NS); while (CommonAncestor && !CommonAncestor->Encloses(CurContext)) CommonAncestor = CommonAncestor->getParent(); UDir = UsingDirectiveDecl::Create(Context, CurContext, UsingLoc, NamespcLoc, SS.getWithLocInContext(Context), IdentLoc, Named, CommonAncestor); if (IsUsingDirectiveInToplevelContext(CurContext) && !SourceMgr.isInMainFile(SourceMgr.getExpansionLoc(IdentLoc))) { Diag(IdentLoc, diag::warn_using_directive_in_header); } PushUsingDirective(S, UDir); } else { Diag(IdentLoc, diag::err_expected_namespace_name) << SS.getRange(); } if (UDir) ProcessDeclAttributeList(S, UDir, AttrList); return UDir; } void Sema::PushUsingDirective(Scope *S, UsingDirectiveDecl *UDir) { // If the scope has an associated entity and the using directive is at // namespace or translation unit scope, add the UsingDirectiveDecl into // its lookup structure so qualified name lookup can find it. DeclContext *Ctx = S->getEntity(); if (Ctx && !Ctx->isFunctionOrMethod()) Ctx->addDecl(UDir); else // Otherwise, it is at block scope. The using-directives will affect lookup // only to the end of the scope. 
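    // A minimal sketch of the block-scope case (names are invented):
    //
    //   namespace math { double pi(); }
    //   void f() {
    //     using namespace math;   // names from 'math' are visible only until
    //     double x = pi();        // the closing brace of this block
    //   }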
S->PushUsingDirective(UDir); } Decl *Sema::ActOnUsingDeclaration(Scope *S, AccessSpecifier AS, SourceLocation UsingLoc, SourceLocation TypenameLoc, CXXScopeSpec &SS, UnqualifiedId &Name, SourceLocation EllipsisLoc, AttributeList *AttrList) { assert(S->getFlags() & Scope::DeclScope && "Invalid Scope."); if (SS.isEmpty()) { Diag(Name.getLocStart(), diag::err_using_requires_qualname); return nullptr; } switch (Name.getKind()) { case UnqualifiedId::IK_ImplicitSelfParam: case UnqualifiedId::IK_Identifier: case UnqualifiedId::IK_OperatorFunctionId: case UnqualifiedId::IK_LiteralOperatorId: case UnqualifiedId::IK_ConversionFunctionId: break; case UnqualifiedId::IK_ConstructorName: case UnqualifiedId::IK_ConstructorTemplateId: // C++11 inheriting constructors. Diag(Name.getLocStart(), getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_using_decl_constructor : diag::err_using_decl_constructor) << SS.getRange(); if (getLangOpts().CPlusPlus11) break; return nullptr; case UnqualifiedId::IK_DestructorName: Diag(Name.getLocStart(), diag::err_using_decl_destructor) << SS.getRange(); return nullptr; case UnqualifiedId::IK_TemplateId: Diag(Name.getLocStart(), diag::err_using_decl_template_id) << SourceRange(Name.TemplateId->LAngleLoc, Name.TemplateId->RAngleLoc); return nullptr; case UnqualifiedId::IK_DeductionGuideName: llvm_unreachable("cannot parse qualified deduction guide name"); } DeclarationNameInfo TargetNameInfo = GetNameFromUnqualifiedId(Name); DeclarationName TargetName = TargetNameInfo.getName(); if (!TargetName) return nullptr; // Warn about access declarations. if (UsingLoc.isInvalid()) { Diag(Name.getLocStart(), getLangOpts().CPlusPlus11 ? diag::err_access_decl : diag::warn_access_decl_deprecated) << FixItHint::CreateInsertion(SS.getRange().getBegin(), "using "); } if (EllipsisLoc.isInvalid()) { if (DiagnoseUnexpandedParameterPack(SS, UPPC_UsingDeclaration) || DiagnoseUnexpandedParameterPack(TargetNameInfo, UPPC_UsingDeclaration)) return nullptr; } else { if (!SS.getScopeRep()->containsUnexpandedParameterPack() && !TargetNameInfo.containsUnexpandedParameterPack()) { Diag(EllipsisLoc, diag::err_pack_expansion_without_parameter_packs) << SourceRange(SS.getBeginLoc(), TargetNameInfo.getEndLoc()); EllipsisLoc = SourceLocation(); } } NamedDecl *UD = BuildUsingDeclaration(S, AS, UsingLoc, TypenameLoc.isValid(), TypenameLoc, SS, TargetNameInfo, EllipsisLoc, AttrList, /*IsInstantiation*/false); if (UD) PushOnScopeChains(UD, S, /*AddToContext*/ false); return UD; } /// \brief Determine whether a using declaration considers the given /// declarations as "equivalent", e.g., if they are redeclarations of /// the same entity or are both typedefs of the same type. static bool IsEquivalentForUsingDecl(ASTContext &Context, NamedDecl *D1, NamedDecl *D2) { if (D1->getCanonicalDecl() == D2->getCanonicalDecl()) return true; if (TypedefNameDecl *TD1 = dyn_cast(D1)) if (TypedefNameDecl *TD2 = dyn_cast(D2)) return Context.hasSameType(TD1->getUnderlyingType(), TD2->getUnderlyingType()); return false; } /// Determines whether to create a using shadow decl for a particular /// decl, given the set of decls existing prior to this using lookup. bool Sema::CheckUsingShadowDecl(UsingDecl *Using, NamedDecl *Orig, const LookupResult &Previous, UsingShadowDecl *&PrevShadow) { // Diagnose finding a decl which is not from a base class of the // current class. We do this now because there are cases where this // function will silently decide not to build a shadow decl, which // will pre-empt further diagnostics. 
// // We don't need to do this in C++11 because we do the check once on // the qualifier. // // FIXME: diagnose the following if we care enough: // struct A { int foo; }; // struct B : A { using A::foo; }; // template struct C : A {}; // template struct D : C { using B::foo; } // <--- // This is invalid (during instantiation) in C++03 because B::foo // resolves to the using decl in B, which is not a base class of D. // We can't diagnose it immediately because C is an unknown // specialization. The UsingShadowDecl in D then points directly // to A::foo, which will look well-formed when we instantiate. // The right solution is to not collapse the shadow-decl chain. if (!getLangOpts().CPlusPlus11 && CurContext->isRecord()) { DeclContext *OrigDC = Orig->getDeclContext(); // Handle enums and anonymous structs. if (isa(OrigDC)) OrigDC = OrigDC->getParent(); CXXRecordDecl *OrigRec = cast(OrigDC); while (OrigRec->isAnonymousStructOrUnion()) OrigRec = cast(OrigRec->getDeclContext()); if (cast(CurContext)->isProvablyNotDerivedFrom(OrigRec)) { if (OrigDC == CurContext) { Diag(Using->getLocation(), diag::err_using_decl_nested_name_specifier_is_current_class) << Using->getQualifierLoc().getSourceRange(); Diag(Orig->getLocation(), diag::note_using_decl_target); Using->setInvalidDecl(); return true; } Diag(Using->getQualifierLoc().getBeginLoc(), diag::err_using_decl_nested_name_specifier_is_not_base_class) << Using->getQualifier() << cast(CurContext) << Using->getQualifierLoc().getSourceRange(); Diag(Orig->getLocation(), diag::note_using_decl_target); Using->setInvalidDecl(); return true; } } if (Previous.empty()) return false; NamedDecl *Target = Orig; if (isa(Target)) Target = cast(Target)->getTargetDecl(); // If the target happens to be one of the previous declarations, we // don't have a conflict. // // FIXME: but we might be increasing its access, in which case we // should redeclare it. NamedDecl *NonTag = nullptr, *Tag = nullptr; bool FoundEquivalentDecl = false; for (LookupResult::iterator I = Previous.begin(), E = Previous.end(); I != E; ++I) { NamedDecl *D = (*I)->getUnderlyingDecl(); // We can have UsingDecls in our Previous results because we use the same // LookupResult for checking whether the UsingDecl itself is a valid // redeclaration. if (isa(D) || isa(D)) continue; if (IsEquivalentForUsingDecl(Context, D, Target)) { if (UsingShadowDecl *Shadow = dyn_cast(*I)) PrevShadow = Shadow; FoundEquivalentDecl = true; } else if (isEquivalentInternalLinkageDeclaration(D, Target)) { // We don't conflict with an existing using shadow decl of an equivalent // declaration, but we're not a redeclaration of it. FoundEquivalentDecl = true; } if (isVisible(D)) (isa(D) ? Tag : NonTag) = D; } if (FoundEquivalentDecl) return false; if (FunctionDecl *FD = Target->getAsFunction()) { NamedDecl *OldDecl = nullptr; switch (CheckOverload(nullptr, FD, Previous, OldDecl, /*IsForUsingDecl*/ true)) { case Ovl_Overload: return false; case Ovl_NonFunction: Diag(Using->getLocation(), diag::err_using_decl_conflict); break; // We found a decl with the exact signature. case Ovl_Match: // If we're in a record, we want to hide the target, so we // return true (without a diagnostic) to tell the caller not to // build a shadow decl. if (CurContext->isRecord()) return true; // If we're not in a record, this is an error. 
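      // A minimal sketch of the non-member conflict diagnosed here (names are
      // invented):
      //
      //   namespace A { void f(int); }
      //   void f(int);
      //   using A::f;   // conflicts with the ::f(int) already declared here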
Diag(Using->getLocation(), diag::err_using_decl_conflict); break; } Diag(Target->getLocation(), diag::note_using_decl_target); Diag(OldDecl->getLocation(), diag::note_using_decl_conflict); Using->setInvalidDecl(); return true; } // Target is not a function. if (isa(Target)) { // No conflict between a tag and a non-tag. if (!Tag) return false; Diag(Using->getLocation(), diag::err_using_decl_conflict); Diag(Target->getLocation(), diag::note_using_decl_target); Diag(Tag->getLocation(), diag::note_using_decl_conflict); Using->setInvalidDecl(); return true; } // No conflict between a tag and a non-tag. if (!NonTag) return false; Diag(Using->getLocation(), diag::err_using_decl_conflict); Diag(Target->getLocation(), diag::note_using_decl_target); Diag(NonTag->getLocation(), diag::note_using_decl_conflict); Using->setInvalidDecl(); return true; } /// Determine whether a direct base class is a virtual base class. static bool isVirtualDirectBase(CXXRecordDecl *Derived, CXXRecordDecl *Base) { if (!Derived->getNumVBases()) return false; for (auto &B : Derived->bases()) if (B.getType()->getAsCXXRecordDecl() == Base) return B.isVirtual(); llvm_unreachable("not a direct base class"); } /// Builds a shadow declaration corresponding to a 'using' declaration. UsingShadowDecl *Sema::BuildUsingShadowDecl(Scope *S, UsingDecl *UD, NamedDecl *Orig, UsingShadowDecl *PrevDecl) { // If we resolved to another shadow declaration, just coalesce them. NamedDecl *Target = Orig; if (isa(Target)) { Target = cast(Target)->getTargetDecl(); assert(!isa(Target) && "nested shadow declaration"); } NamedDecl *NonTemplateTarget = Target; if (auto *TargetTD = dyn_cast(Target)) NonTemplateTarget = TargetTD->getTemplatedDecl(); UsingShadowDecl *Shadow; if (isa(NonTemplateTarget)) { bool IsVirtualBase = isVirtualDirectBase(cast(CurContext), UD->getQualifier()->getAsRecordDecl()); Shadow = ConstructorUsingShadowDecl::Create( Context, CurContext, UD->getLocation(), UD, Orig, IsVirtualBase); } else { Shadow = UsingShadowDecl::Create(Context, CurContext, UD->getLocation(), UD, Target); } UD->addShadowDecl(Shadow); Shadow->setAccess(UD->getAccess()); if (Orig->isInvalidDecl() || UD->isInvalidDecl()) Shadow->setInvalidDecl(); Shadow->setPreviousDecl(PrevDecl); if (S) PushOnScopeChains(Shadow, S); else CurContext->addDecl(Shadow); return Shadow; } /// Hides a using shadow declaration. This is required by the current /// using-decl implementation when a resolvable using declaration in a /// class is followed by a declaration which would hide or override /// one or more of the using decl's targets; for example: /// /// struct Base { void foo(int); }; /// struct Derived : Base { /// using Base::foo; /// void foo(int); /// }; /// /// The governing language is C++03 [namespace.udecl]p12: /// /// When a using-declaration brings names from a base class into a /// derived class scope, member functions in the derived class /// override and/or hide member functions with the same name and /// parameter types in a base class (rather than conflicting). /// /// There are two ways to implement this: /// (1) optimistically create shadow decls when they're not hidden /// by existing declarations, or /// (2) don't create any shadow decls (or at least don't make them /// visible) until we've fully parsed/instantiated the class. /// The problem with (1) is that we might have to retroactively remove /// a shadow decl, which requires several O(n) operations because the /// decl structures are (very reasonably) not designed for removal. 
/// (2) avoids this but is very fiddly and phase-dependent. void Sema::HideUsingShadowDecl(Scope *S, UsingShadowDecl *Shadow) { if (Shadow->getDeclName().getNameKind() == DeclarationName::CXXConversionFunctionName) cast(Shadow->getDeclContext())->removeConversion(Shadow); // Remove it from the DeclContext... Shadow->getDeclContext()->removeDecl(Shadow); // ...and the scope, if applicable... if (S) { S->RemoveDecl(Shadow); IdResolver.RemoveDecl(Shadow); } // ...and the using decl. Shadow->getUsingDecl()->removeShadowDecl(Shadow); // TODO: complain somehow if Shadow was used. It shouldn't // be possible for this to happen, because...? } /// Find the base specifier for a base class with the given type. static CXXBaseSpecifier *findDirectBaseWithType(CXXRecordDecl *Derived, QualType DesiredBase, bool &AnyDependentBases) { // Check whether the named type is a direct base class. CanQualType CanonicalDesiredBase = DesiredBase->getCanonicalTypeUnqualified(); for (auto &Base : Derived->bases()) { CanQualType BaseType = Base.getType()->getCanonicalTypeUnqualified(); if (CanonicalDesiredBase == BaseType) return &Base; if (BaseType->isDependentType()) AnyDependentBases = true; } return nullptr; } namespace { class UsingValidatorCCC : public CorrectionCandidateCallback { public: UsingValidatorCCC(bool HasTypenameKeyword, bool IsInstantiation, NestedNameSpecifier *NNS, CXXRecordDecl *RequireMemberOf) : HasTypenameKeyword(HasTypenameKeyword), IsInstantiation(IsInstantiation), OldNNS(NNS), RequireMemberOf(RequireMemberOf) {} bool ValidateCandidate(const TypoCorrection &Candidate) override { NamedDecl *ND = Candidate.getCorrectionDecl(); // Keywords are not valid here. if (!ND || isa(ND)) return false; // Completely unqualified names are invalid for a 'using' declaration. if (Candidate.WillReplaceSpecifier() && !Candidate.getCorrectionSpecifier()) return false; // FIXME: Don't correct to a name that CheckUsingDeclRedeclaration would // reject. if (RequireMemberOf) { auto *FoundRecord = dyn_cast(ND); if (FoundRecord && FoundRecord->isInjectedClassName()) { // No-one ever wants a using-declaration to name an injected-class-name // of a base class, unless they're declaring an inheriting constructor. ASTContext &Ctx = ND->getASTContext(); if (!Ctx.getLangOpts().CPlusPlus11) return false; QualType FoundType = Ctx.getRecordType(FoundRecord); // Check that the injected-class-name is named as a member of its own // type; we don't want to suggest 'using Derived::Base;', since that // means something else. NestedNameSpecifier *Specifier = Candidate.WillReplaceSpecifier() ? Candidate.getCorrectionSpecifier() : OldNNS; if (!Specifier->getAsType() || !Ctx.hasSameType(QualType(Specifier->getAsType(), 0), FoundType)) return false; // Check that this inheriting constructor declaration actually names a // direct base class of the current class. bool AnyDependentBases = false; if (!findDirectBaseWithType(RequireMemberOf, Ctx.getRecordType(FoundRecord), AnyDependentBases) && !AnyDependentBases) return false; } else { auto *RD = dyn_cast(ND->getDeclContext()); if (!RD || RequireMemberOf->isProvablyNotDerivedFrom(RD)) return false; // FIXME: Check that the base class member is accessible? 
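        // A minimal sketch of the distinction drawn above (names invented):
        //
        //   struct Base { };
        //   struct Derived : Base {
        //     using Base::Base;   // names the injected-class-name as a member
        //                         // of its own type: an inheriting constructor
        //   };
        //
        // whereas a corrected spelling like 'using Derived::Base;' would name
        // the injected-class-name as a member of Derived, which is not an
        // acceptable correction here.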
} } else { auto *FoundRecord = dyn_cast(ND); if (FoundRecord && FoundRecord->isInjectedClassName()) return false; } if (isa(ND)) return HasTypenameKeyword || !IsInstantiation; return !HasTypenameKeyword; } private: bool HasTypenameKeyword; bool IsInstantiation; NestedNameSpecifier *OldNNS; CXXRecordDecl *RequireMemberOf; }; } // end anonymous namespace /// Builds a using declaration. /// /// \param IsInstantiation - Whether this call arises from an /// instantiation of an unresolved using declaration. We treat /// the lookup differently for these declarations. NamedDecl *Sema::BuildUsingDeclaration(Scope *S, AccessSpecifier AS, SourceLocation UsingLoc, bool HasTypenameKeyword, SourceLocation TypenameLoc, CXXScopeSpec &SS, DeclarationNameInfo NameInfo, SourceLocation EllipsisLoc, AttributeList *AttrList, bool IsInstantiation) { assert(!SS.isInvalid() && "Invalid CXXScopeSpec."); SourceLocation IdentLoc = NameInfo.getLoc(); assert(IdentLoc.isValid() && "Invalid TargetName location."); // FIXME: We ignore attributes for now. // For an inheriting constructor declaration, the name of the using // declaration is the name of a constructor in this class, not in the // base class. DeclarationNameInfo UsingName = NameInfo; if (UsingName.getName().getNameKind() == DeclarationName::CXXConstructorName) if (auto *RD = dyn_cast(CurContext)) UsingName.setName(Context.DeclarationNames.getCXXConstructorName( Context.getCanonicalType(Context.getRecordType(RD)))); // Do the redeclaration lookup in the current scope. LookupResult Previous(*this, UsingName, LookupUsingDeclName, ForRedeclaration); Previous.setHideTags(false); if (S) { LookupName(Previous, S); // It is really dumb that we have to do this. LookupResult::Filter F = Previous.makeFilter(); while (F.hasNext()) { NamedDecl *D = F.next(); if (!isDeclInScope(D, CurContext, S)) F.erase(); // If we found a local extern declaration that's not ordinarily visible, // and this declaration is being added to a non-block scope, ignore it. // We're only checking for scope conflicts here, not also for violations // of the linkage rules. else if (!CurContext->isFunctionOrMethod() && D->isLocalExternDecl() && !(D->getIdentifierNamespace() & Decl::IDNS_Ordinary)) F.erase(); } F.done(); } else { assert(IsInstantiation && "no scope in non-instantiation"); if (CurContext->isRecord()) LookupQualifiedName(Previous, CurContext); else { // No redeclaration check is needed here; in non-member contexts we // diagnosed all possible conflicts with other using-declarations when // building the template: // // For a dependent non-type using declaration, the only valid case is // if we instantiate to a single enumerator. We check for conflicts // between shadow declarations we introduce, and we check in the template // definition for conflicts between a non-type using declaration and any // other declaration, which together covers all cases. // // A dependent typename using declaration will never successfully // instantiate, since it will always name a class member, so we reject // that in the template definition. } } // Check for invalid redeclarations. if (CheckUsingDeclRedeclaration(UsingLoc, HasTypenameKeyword, SS, IdentLoc, Previous)) return nullptr; // Check for bad qualifiers. 
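  // A minimal sketch of a qualifier rejected by the check below (names are
  // invented):
  //
  //   struct S { static int value; };
  //   using S::value;   // ill-formed at namespace scope: a using-declaration
  //                     // for a class member must be a member-declaration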
if (CheckUsingDeclQualifier(UsingLoc, HasTypenameKeyword, SS, NameInfo, IdentLoc)) return nullptr; DeclContext *LookupContext = computeDeclContext(SS); NamedDecl *D; NestedNameSpecifierLoc QualifierLoc = SS.getWithLocInContext(Context); if (!LookupContext || EllipsisLoc.isValid()) { if (HasTypenameKeyword) { // FIXME: not all declaration name kinds are legal here D = UnresolvedUsingTypenameDecl::Create(Context, CurContext, UsingLoc, TypenameLoc, QualifierLoc, IdentLoc, NameInfo.getName(), EllipsisLoc); } else { D = UnresolvedUsingValueDecl::Create(Context, CurContext, UsingLoc, QualifierLoc, NameInfo, EllipsisLoc); } D->setAccess(AS); CurContext->addDecl(D); return D; } auto Build = [&](bool Invalid) { UsingDecl *UD = UsingDecl::Create(Context, CurContext, UsingLoc, QualifierLoc, UsingName, HasTypenameKeyword); UD->setAccess(AS); CurContext->addDecl(UD); UD->setInvalidDecl(Invalid); return UD; }; auto BuildInvalid = [&]{ return Build(true); }; auto BuildValid = [&]{ return Build(false); }; if (RequireCompleteDeclContext(SS, LookupContext)) return BuildInvalid(); // Look up the target name. LookupResult R(*this, NameInfo, LookupOrdinaryName); // Unlike most lookups, we don't always want to hide tag // declarations: tag names are visible through the using declaration // even if hidden by ordinary names, *except* in a dependent context // where it's important for the sanity of two-phase lookup. if (!IsInstantiation) R.setHideTags(false); // For the purposes of this lookup, we have a base object type // equal to that of the current context. if (CurContext->isRecord()) { R.setBaseObjectType( Context.getTypeDeclType(cast(CurContext))); } LookupQualifiedName(R, LookupContext); // Try to correct typos if possible. If constructor name lookup finds no // results, that means the named class has no explicit constructors, and we // suppressed declaring implicit ones (probably because it's dependent or // invalid). if (R.empty() && NameInfo.getName().getNameKind() != DeclarationName::CXXConstructorName) { // HACK: Work around a bug in libstdc++'s detection of ::gets. Sometimes // it will believe that glibc provides a ::gets in cases where it does not, // and will try to pull it into namespace std with a using-declaration. // Just ignore the using-declaration in that case. auto *II = NameInfo.getName().getAsIdentifierInfo(); if (getLangOpts().CPlusPlus14 && II && II->isStr("gets") && CurContext->isStdNamespace() && isa(LookupContext) && getSourceManager().isInSystemHeader(UsingLoc)) return nullptr; if (TypoCorrection Corrected = CorrectTypo( R.getLookupNameInfo(), R.getLookupKind(), S, &SS, llvm::make_unique( HasTypenameKeyword, IsInstantiation, SS.getScopeRep(), dyn_cast(CurContext)), CTK_ErrorRecovery)) { // We reject candidates where DroppedSpecifier == true, hence the // literal '0' below. diagnoseTypo(Corrected, PDiag(diag::err_no_member_suggest) << NameInfo.getName() << LookupContext << 0 << SS.getRange()); // If we picked a correction with no attached Decl we can't do anything // useful with it, bail out. NamedDecl *ND = Corrected.getCorrectionDecl(); if (!ND) return BuildInvalid(); // If we corrected to an inheriting constructor, handle it as one. auto *RD = dyn_cast(ND); if (RD && RD->isInjectedClassName()) { // The parent of the injected class name is the class itself. RD = cast(RD->getParent()); // Fix up the information we'll use to build the using declaration. 
if (Corrected.WillReplaceSpecifier()) { NestedNameSpecifierLocBuilder Builder; Builder.MakeTrivial(Context, Corrected.getCorrectionSpecifier(), QualifierLoc.getSourceRange()); QualifierLoc = Builder.getWithLocInContext(Context); } // In this case, the name we introduce is the name of a derived class // constructor. auto *CurClass = cast(CurContext); UsingName.setName(Context.DeclarationNames.getCXXConstructorName( Context.getCanonicalType(Context.getRecordType(CurClass)))); UsingName.setNamedTypeInfo(nullptr); for (auto *Ctor : LookupConstructors(RD)) R.addDecl(Ctor); R.resolveKind(); } else { // FIXME: Pick up all the declarations if we found an overloaded // function. UsingName.setName(ND->getDeclName()); R.addDecl(ND); } } else { Diag(IdentLoc, diag::err_no_member) << NameInfo.getName() << LookupContext << SS.getRange(); return BuildInvalid(); } } if (R.isAmbiguous()) return BuildInvalid(); if (HasTypenameKeyword) { // If we asked for a typename and got a non-type decl, error out. if (!R.getAsSingle()) { Diag(IdentLoc, diag::err_using_typename_non_type); for (LookupResult::iterator I = R.begin(), E = R.end(); I != E; ++I) Diag((*I)->getUnderlyingDecl()->getLocation(), diag::note_using_decl_target); return BuildInvalid(); } } else { // If we asked for a non-typename and we got a type, error out, // but only if this is an instantiation of an unresolved using // decl. Otherwise just silently find the type name. if (IsInstantiation && R.getAsSingle()) { Diag(IdentLoc, diag::err_using_dependent_value_is_type); Diag(R.getFoundDecl()->getLocation(), diag::note_using_decl_target); return BuildInvalid(); } } // C++14 [namespace.udecl]p6: // A using-declaration shall not name a namespace. if (R.getAsSingle()) { Diag(IdentLoc, diag::err_using_decl_can_not_refer_to_namespace) << SS.getRange(); return BuildInvalid(); } // C++14 [namespace.udecl]p7: // A using-declaration shall not name a scoped enumerator. if (auto *ED = R.getAsSingle()) { if (cast(ED->getDeclContext())->isScoped()) { Diag(IdentLoc, diag::err_using_decl_can_not_refer_to_scoped_enum) << SS.getRange(); return BuildInvalid(); } } UsingDecl *UD = BuildValid(); // Some additional rules apply to inheriting constructors. if (UsingName.getName().getNameKind() == DeclarationName::CXXConstructorName) { // Suppress access diagnostics; the access check is instead performed at the // point of use for an inheriting constructor. R.suppressDiagnostics(); if (CheckInheritingConstructorUsingDecl(UD)) return UD; } for (LookupResult::iterator I = R.begin(), E = R.end(); I != E; ++I) { UsingShadowDecl *PrevDecl = nullptr; if (!CheckUsingShadowDecl(UD, *I, Previous, PrevDecl)) BuildUsingShadowDecl(S, UD, *I, PrevDecl); } return UD; } NamedDecl *Sema::BuildUsingPackDecl(NamedDecl *InstantiatedFrom, ArrayRef Expansions) { assert(isa(InstantiatedFrom) || isa(InstantiatedFrom) || isa(InstantiatedFrom)); auto *UPD = UsingPackDecl::Create(Context, CurContext, InstantiatedFrom, Expansions); UPD->setAccess(InstantiatedFrom->getAccess()); CurContext->addDecl(UPD); return UPD; } /// Additional checks for a using declaration referring to a constructor name. bool Sema::CheckInheritingConstructorUsingDecl(UsingDecl *UD) { assert(!UD->hasTypename() && "expecting a constructor name"); const Type *SourceType = UD->getQualifier()->getAsType(); assert(SourceType && "Using decl naming constructor doesn't have type in scope spec."); CXXRecordDecl *TargetClass = cast(CurContext); // Check whether the named type is a direct base class. 
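  // A minimal sketch of the direct-base requirement (names are invented):
  //
  //   struct A { A(int); };
  //   struct B : A { using A::A; };   // OK, A is a direct base of B
  //   struct C : B { using A::A; };   // rejected below: A is not a direct
  //                                   // base class of C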
bool AnyDependentBases = false; auto *Base = findDirectBaseWithType(TargetClass, QualType(SourceType, 0), AnyDependentBases); if (!Base && !AnyDependentBases) { Diag(UD->getUsingLoc(), diag::err_using_decl_constructor_not_in_direct_base) << UD->getNameInfo().getSourceRange() << QualType(SourceType, 0) << TargetClass; UD->setInvalidDecl(); return true; } if (Base) Base->setInheritConstructors(); return false; } /// Checks that the given using declaration is not an invalid /// redeclaration. Note that this is checking only for the using decl /// itself, not for any ill-formedness among the UsingShadowDecls. bool Sema::CheckUsingDeclRedeclaration(SourceLocation UsingLoc, bool HasTypenameKeyword, const CXXScopeSpec &SS, SourceLocation NameLoc, const LookupResult &Prev) { NestedNameSpecifier *Qual = SS.getScopeRep(); // C++03 [namespace.udecl]p8: // C++0x [namespace.udecl]p10: // A using-declaration is a declaration and can therefore be used // repeatedly where (and only where) multiple declarations are // allowed. // // That's in non-member contexts. if (!CurContext->getRedeclContext()->isRecord()) { // A dependent qualifier outside a class can only ever resolve to an // enumeration type. Therefore it conflicts with any other non-type // declaration in the same scope. // FIXME: How should we check for dependent type-type conflicts at block // scope? if (Qual->isDependent() && !HasTypenameKeyword) { for (auto *D : Prev) { if (!isa(D) && !isa(D) && !isa(D)) { bool OldCouldBeEnumerator = isa(D) || isa(D); Diag(NameLoc, OldCouldBeEnumerator ? diag::err_redefinition : diag::err_redefinition_different_kind) << Prev.getLookupName(); Diag(D->getLocation(), diag::note_previous_definition); return true; } } } return false; } for (LookupResult::iterator I = Prev.begin(), E = Prev.end(); I != E; ++I) { NamedDecl *D = *I; bool DTypename; NestedNameSpecifier *DQual; if (UsingDecl *UD = dyn_cast(D)) { DTypename = UD->hasTypename(); DQual = UD->getQualifier(); } else if (UnresolvedUsingValueDecl *UD = dyn_cast(D)) { DTypename = false; DQual = UD->getQualifier(); } else if (UnresolvedUsingTypenameDecl *UD = dyn_cast(D)) { DTypename = true; DQual = UD->getQualifier(); } else continue; // using decls differ if one says 'typename' and the other doesn't. // FIXME: non-dependent using decls? if (HasTypenameKeyword != DTypename) continue; // using decls differ if they name different scopes (but note that // template instantiation can cause this check to trigger when it // didn't before instantiation). if (Context.getCanonicalNestedNameSpecifier(Qual) != Context.getCanonicalNestedNameSpecifier(DQual)) continue; Diag(NameLoc, diag::err_using_decl_redeclaration) << SS.getRange(); Diag(D->getLocation(), diag::note_using_decl) << 1; return true; } return false; } /// Checks that the given nested-name qualifier used in a using decl /// in the current context is appropriately related to the current /// scope. If an error is found, diagnoses it and returns true. bool Sema::CheckUsingDeclQualifier(SourceLocation UsingLoc, bool HasTypename, const CXXScopeSpec &SS, const DeclarationNameInfo &NameInfo, SourceLocation NameLoc) { DeclContext *NamedContext = computeDeclContext(SS); if (!CurContext->isRecord()) { // C++03 [namespace.udecl]p3: // C++0x [namespace.udecl]p8: // A using-declaration for a class member shall be a member-declaration. // If we weren't able to compute a valid scope, it might validly be a // dependent class scope or a dependent enumeration unscoped scope. 
If // we have a 'typename' keyword, the scope must resolve to a class type. if ((HasTypename && !NamedContext) || (NamedContext && NamedContext->getRedeclContext()->isRecord())) { auto *RD = NamedContext ? cast(NamedContext->getRedeclContext()) : nullptr; if (RD && RequireCompleteDeclContext(const_cast(SS), RD)) RD = nullptr; Diag(NameLoc, diag::err_using_decl_can_not_refer_to_class_member) << SS.getRange(); // If we have a complete, non-dependent source type, try to suggest a // way to get the same effect. if (!RD) return true; // Find what this using-declaration was referring to. LookupResult R(*this, NameInfo, LookupOrdinaryName); R.setHideTags(false); R.suppressDiagnostics(); LookupQualifiedName(R, RD); if (R.getAsSingle()) { if (getLangOpts().CPlusPlus11) { // Convert 'using X::Y;' to 'using Y = X::Y;'. Diag(SS.getBeginLoc(), diag::note_using_decl_class_member_workaround) << 0 // alias declaration << FixItHint::CreateInsertion(SS.getBeginLoc(), NameInfo.getName().getAsString() + " = "); } else { // Convert 'using X::Y;' to 'typedef X::Y Y;'. SourceLocation InsertLoc = getLocForEndOfToken(NameInfo.getLocEnd()); Diag(InsertLoc, diag::note_using_decl_class_member_workaround) << 1 // typedef declaration << FixItHint::CreateReplacement(UsingLoc, "typedef") << FixItHint::CreateInsertion( InsertLoc, " " + NameInfo.getName().getAsString()); } } else if (R.getAsSingle()) { // Don't provide a fixit outside C++11 mode; we don't want to suggest // repeating the type of the static data member here. FixItHint FixIt; if (getLangOpts().CPlusPlus11) { // Convert 'using X::Y;' to 'auto &Y = X::Y;'. FixIt = FixItHint::CreateReplacement( UsingLoc, "auto &" + NameInfo.getName().getAsString() + " = "); } Diag(UsingLoc, diag::note_using_decl_class_member_workaround) << 2 // reference declaration << FixIt; } else if (R.getAsSingle()) { // Don't provide a fixit outside C++11 mode; we don't want to suggest // repeating the type of the enumeration here, and we can't do so if // the type is anonymous. FixItHint FixIt; if (getLangOpts().CPlusPlus11) { // Convert 'using X::Y;' to 'auto &Y = X::Y;'. FixIt = FixItHint::CreateReplacement( UsingLoc, "constexpr auto " + NameInfo.getName().getAsString() + " = "); } Diag(UsingLoc, diag::note_using_decl_class_member_workaround) << (getLangOpts().CPlusPlus11 ? 4 : 3) // const[expr] variable << FixIt; } return true; } // Otherwise, this might be valid. return false; } // The current scope is a record. // If the named context is dependent, we can't decide much. if (!NamedContext) { // FIXME: in C++0x, we can diagnose if we can prove that the // nested-name-specifier does not refer to a base class, which is // still possible in some cases. // Otherwise we have to conservatively report that things might be // okay. return false; } if (!NamedContext->isRecord()) { // Ideally this would point at the last name in the specifier, // but we don't have that level of source info. Diag(SS.getRange().getBegin(), diag::err_using_decl_nested_name_specifier_is_not_class) << SS.getScopeRep() << SS.getRange(); return true; } if (!NamedContext->isDependentContext() && RequireCompleteDeclContext(const_cast(SS), NamedContext)) return true; if (getLangOpts().CPlusPlus11) { // C++11 [namespace.udecl]p3: // In a using-declaration used as a member-declaration, the // nested-name-specifier shall name a base class of the class // being defined. 
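    // A minimal sketch (names are invented):
    //
    //   struct A { int x; };
    //   struct B { int y; };
    //   struct C : A {
    //     using A::x;   // OK, A is a base class of C
    //     using B::y;   // rejected below: B is not a base class of C
    //   };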
if (cast(CurContext)->isProvablyNotDerivedFrom( cast(NamedContext))) { if (CurContext == NamedContext) { Diag(NameLoc, diag::err_using_decl_nested_name_specifier_is_current_class) << SS.getRange(); return true; } if (!cast(NamedContext)->isInvalidDecl()) { Diag(SS.getRange().getBegin(), diag::err_using_decl_nested_name_specifier_is_not_base_class) << SS.getScopeRep() << cast(CurContext) << SS.getRange(); } return true; } return false; } // C++03 [namespace.udecl]p4: // A using-declaration used as a member-declaration shall refer // to a member of a base class of the class being defined [etc.]. // Salient point: SS doesn't have to name a base class as long as // lookup only finds members from base classes. Therefore we can // diagnose here only if we can prove that that can't happen, // i.e. if the class hierarchies provably don't intersect. // TODO: it would be nice if "definitely valid" results were cached // in the UsingDecl and UsingShadowDecl so that these checks didn't // need to be repeated. llvm::SmallPtrSet Bases; auto Collect = [&Bases](const CXXRecordDecl *Base) { Bases.insert(Base); return true; }; // Collect all bases. Return false if we find a dependent base. if (!cast(CurContext)->forallBases(Collect)) return false; // Returns true if the base is dependent or is one of the accumulated base // classes. auto IsNotBase = [&Bases](const CXXRecordDecl *Base) { return !Bases.count(Base); }; // Return false if the class has a dependent base or if it or one // of its bases is present in the base set of the current context. if (Bases.count(cast(NamedContext)) || !cast(NamedContext)->forallBases(IsNotBase)) return false; Diag(SS.getRange().getBegin(), diag::err_using_decl_nested_name_specifier_is_not_base_class) << SS.getScopeRep() << cast(CurContext) << SS.getRange(); return true; } Decl *Sema::ActOnAliasDeclaration(Scope *S, AccessSpecifier AS, MultiTemplateParamsArg TemplateParamLists, SourceLocation UsingLoc, UnqualifiedId &Name, AttributeList *AttrList, TypeResult Type, Decl *DeclFromDeclSpec) { // Skip up to the relevant declaration scope. while (S->isTemplateParamScope()) S = S->getParent(); assert((S->getFlags() & Scope::DeclScope) && "got alias-declaration outside of declaration scope"); if (Type.isInvalid()) return nullptr; bool Invalid = false; DeclarationNameInfo NameInfo = GetNameFromUnqualifiedId(Name); TypeSourceInfo *TInfo = nullptr; GetTypeFromParser(Type.get(), &TInfo); if (DiagnoseClassNameShadow(CurContext, NameInfo)) return nullptr; if (DiagnoseUnexpandedParameterPack(Name.StartLocation, TInfo, UPPC_DeclarationType)) { Invalid = true; TInfo = Context.getTrivialTypeSourceInfo(Context.IntTy, TInfo->getTypeLoc().getBeginLoc()); } LookupResult Previous(*this, NameInfo, LookupOrdinaryName, ForRedeclaration); LookupName(Previous, S); // Warn about shadowing the name of a template parameter. 
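  // A minimal sketch of the shadowing handled below (names are invented):
  //
  //   template <typename T> struct Wrap {
  //     using T = int;   // diagnosed: 'T' shadows the template parameter
  //   };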
if (Previous.isSingleResult() && Previous.getFoundDecl()->isTemplateParameter()) { DiagnoseTemplateParameterShadow(Name.StartLocation,Previous.getFoundDecl()); Previous.clear(); } assert(Name.Kind == UnqualifiedId::IK_Identifier && "name in alias declaration must be an identifier"); TypeAliasDecl *NewTD = TypeAliasDecl::Create(Context, CurContext, UsingLoc, Name.StartLocation, Name.Identifier, TInfo); NewTD->setAccess(AS); if (Invalid) NewTD->setInvalidDecl(); ProcessDeclAttributeList(S, NewTD, AttrList); AddPragmaAttributes(S, NewTD); CheckTypedefForVariablyModifiedType(S, NewTD); Invalid |= NewTD->isInvalidDecl(); bool Redeclaration = false; NamedDecl *NewND; if (TemplateParamLists.size()) { TypeAliasTemplateDecl *OldDecl = nullptr; TemplateParameterList *OldTemplateParams = nullptr; if (TemplateParamLists.size() != 1) { Diag(UsingLoc, diag::err_alias_template_extra_headers) << SourceRange(TemplateParamLists[1]->getTemplateLoc(), TemplateParamLists[TemplateParamLists.size()-1]->getRAngleLoc()); } TemplateParameterList *TemplateParams = TemplateParamLists[0]; // Check that we can declare a template here. if (CheckTemplateDeclScope(S, TemplateParams)) return nullptr; // Only consider previous declarations in the same scope. FilterLookupForScope(Previous, CurContext, S, /*ConsiderLinkage*/false, /*ExplicitInstantiationOrSpecialization*/false); if (!Previous.empty()) { Redeclaration = true; OldDecl = Previous.getAsSingle(); if (!OldDecl && !Invalid) { Diag(UsingLoc, diag::err_redefinition_different_kind) << Name.Identifier; NamedDecl *OldD = Previous.getRepresentativeDecl(); if (OldD->getLocation().isValid()) Diag(OldD->getLocation(), diag::note_previous_definition); Invalid = true; } if (!Invalid && OldDecl && !OldDecl->isInvalidDecl()) { if (TemplateParameterListsAreEqual(TemplateParams, OldDecl->getTemplateParameters(), /*Complain=*/true, TPL_TemplateMatch)) OldTemplateParams = OldDecl->getTemplateParameters(); else Invalid = true; TypeAliasDecl *OldTD = OldDecl->getTemplatedDecl(); if (!Invalid && !Context.hasSameType(OldTD->getUnderlyingType(), NewTD->getUnderlyingType())) { // FIXME: The C++0x standard does not clearly say this is ill-formed, // but we can't reasonably accept it. Diag(NewTD->getLocation(), diag::err_redefinition_different_typedef) << 2 << NewTD->getUnderlyingType() << OldTD->getUnderlyingType(); if (OldTD->getLocation().isValid()) Diag(OldTD->getLocation(), diag::note_previous_definition); Invalid = true; } } } // Merge any previous default template arguments into our parameters, // and check the parameter list. if (CheckTemplateParameterList(TemplateParams, OldTemplateParams, TPC_TypeAliasTemplate)) return nullptr; TypeAliasTemplateDecl *NewDecl = TypeAliasTemplateDecl::Create(Context, CurContext, UsingLoc, Name.Identifier, TemplateParams, NewTD); NewTD->setDescribedAliasTemplate(NewDecl); NewDecl->setAccess(AS); if (Invalid) NewDecl->setInvalidDecl(); else if (OldDecl) NewDecl->setPreviousDecl(OldDecl); NewND = NewDecl; } else { if (auto *TD = dyn_cast_or_null(DeclFromDeclSpec)) { setTagNameForLinkagePurposes(TD, NewTD); handleTagNumbering(TD, S); } ActOnTypedefNameDecl(S, CurContext, NewTD, Previous, Redeclaration); NewND = NewTD; } PushOnScopeChains(NewND, S); ActOnDocumentableDecl(NewND); return NewND; } Decl *Sema::ActOnNamespaceAliasDef(Scope *S, SourceLocation NamespaceLoc, SourceLocation AliasLoc, IdentifierInfo *Alias, CXXScopeSpec &SS, SourceLocation IdentLoc, IdentifierInfo *Ident) { // Lookup the namespace name. 
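  // A minimal sketch of the declarations handled here (names are invented):
  //
  //   namespace deeply { namespace nested { namespace impl { void f(); } } }
  //   namespace impl = deeply::nested::impl;   // OK
  //   namespace impl = deeply::nested;         // rejected below: redefinition
  //                                            // as an alias for a different
  //                                            // namespace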
LookupResult R(*this, Ident, IdentLoc, LookupNamespaceName); LookupParsedName(R, S, &SS); if (R.isAmbiguous()) return nullptr; if (R.empty()) { if (!TryNamespaceTypoCorrection(*this, R, S, SS, IdentLoc, Ident)) { Diag(IdentLoc, diag::err_expected_namespace_name) << SS.getRange(); return nullptr; } } assert(!R.isAmbiguous() && !R.empty()); NamedDecl *ND = R.getRepresentativeDecl(); // Check if we have a previous declaration with the same name. LookupResult PrevR(*this, Alias, AliasLoc, LookupOrdinaryName, ForRedeclaration); LookupName(PrevR, S); // Check we're not shadowing a template parameter. if (PrevR.isSingleResult() && PrevR.getFoundDecl()->isTemplateParameter()) { DiagnoseTemplateParameterShadow(AliasLoc, PrevR.getFoundDecl()); PrevR.clear(); } // Filter out any other lookup result from an enclosing scope. FilterLookupForScope(PrevR, CurContext, S, /*ConsiderLinkage*/false, /*AllowInlineNamespace*/false); // Find the previous declaration and check that we can redeclare it. NamespaceAliasDecl *Prev = nullptr; if (PrevR.isSingleResult()) { NamedDecl *PrevDecl = PrevR.getRepresentativeDecl(); if (NamespaceAliasDecl *AD = dyn_cast(PrevDecl)) { // We already have an alias with the same name that points to the same // namespace; check that it matches. if (AD->getNamespace()->Equals(getNamespaceDecl(ND))) { Prev = AD; } else if (isVisible(PrevDecl)) { Diag(AliasLoc, diag::err_redefinition_different_namespace_alias) << Alias; Diag(AD->getLocation(), diag::note_previous_namespace_alias) << AD->getNamespace(); return nullptr; } } else if (isVisible(PrevDecl)) { unsigned DiagID = isa(PrevDecl->getUnderlyingDecl()) ? diag::err_redefinition : diag::err_redefinition_different_kind; Diag(AliasLoc, DiagID) << Alias; Diag(PrevDecl->getLocation(), diag::note_previous_definition); return nullptr; } } // The use of a nested name specifier may trigger deprecation warnings. DiagnoseUseOfDecl(ND, IdentLoc); NamespaceAliasDecl *AliasDecl = NamespaceAliasDecl::Create(Context, CurContext, NamespaceLoc, AliasLoc, Alias, SS.getWithLocInContext(Context), IdentLoc, ND); if (Prev) AliasDecl->setPreviousDecl(Prev); PushOnScopeChains(AliasDecl, S); return AliasDecl; } namespace { struct SpecialMemberExceptionSpecInfo : SpecialMemberVisitor { SourceLocation Loc; Sema::ImplicitExceptionSpecification ExceptSpec; SpecialMemberExceptionSpecInfo(Sema &S, CXXMethodDecl *MD, Sema::CXXSpecialMember CSM, Sema::InheritedConstructorInfo *ICI, SourceLocation Loc) : SpecialMemberVisitor(S, MD, CSM, ICI), Loc(Loc), ExceptSpec(S) {} bool visitBase(CXXBaseSpecifier *Base); bool visitField(FieldDecl *FD); void visitClassSubobject(CXXRecordDecl *Class, Subobject Subobj, unsigned Quals); void visitSubobjectCall(Subobject Subobj, Sema::SpecialMemberOverloadResult SMOR); }; } bool SpecialMemberExceptionSpecInfo::visitBase(CXXBaseSpecifier *Base) { auto *RT = Base->getType()->getAs(); if (!RT) return false; auto *BaseClass = cast(RT->getDecl()); Sema::SpecialMemberOverloadResult SMOR = lookupInheritedCtor(BaseClass); if (auto *BaseCtor = SMOR.getMethod()) { visitSubobjectCall(Base, BaseCtor); return false; } visitClassSubobject(BaseClass, Base, 0); return false; } bool SpecialMemberExceptionSpecInfo::visitField(FieldDecl *FD) { if (CSM == Sema::CXXDefaultConstructor && FD->hasInClassInitializer()) { Expr *E = FD->getInClassInitializer(); if (!E) // FIXME: It's a little wasteful to build and throw away a // CXXDefaultInitExpr here. 
// FIXME: We should have a single context note pointing at Loc, and // this location should be MD->getLocation() instead, since that's // the location where we actually use the default init expression. E = S.BuildCXXDefaultInitExpr(Loc, FD).get(); if (E) ExceptSpec.CalledExpr(E); } else if (auto *RT = S.Context.getBaseElementType(FD->getType()) ->getAs()) { visitClassSubobject(cast(RT->getDecl()), FD, FD->getType().getCVRQualifiers()); } return false; } void SpecialMemberExceptionSpecInfo::visitClassSubobject(CXXRecordDecl *Class, Subobject Subobj, unsigned Quals) { FieldDecl *Field = Subobj.dyn_cast(); bool IsMutable = Field && Field->isMutable(); visitSubobjectCall(Subobj, lookupIn(Class, Quals, IsMutable)); } void SpecialMemberExceptionSpecInfo::visitSubobjectCall( Subobject Subobj, Sema::SpecialMemberOverloadResult SMOR) { // Note, if lookup fails, it doesn't matter what exception specification we // choose because the special member will be deleted. if (CXXMethodDecl *MD = SMOR.getMethod()) ExceptSpec.CalledDecl(getSubobjectLoc(Subobj), MD); } static Sema::ImplicitExceptionSpecification ComputeDefaultedSpecialMemberExceptionSpec( Sema &S, SourceLocation Loc, CXXMethodDecl *MD, Sema::CXXSpecialMember CSM, Sema::InheritedConstructorInfo *ICI) { CXXRecordDecl *ClassDecl = MD->getParent(); // C++ [except.spec]p14: // An implicitly declared special member function (Clause 12) shall have an // exception-specification. [...] SpecialMemberExceptionSpecInfo Info(S, MD, CSM, ICI, Loc); if (ClassDecl->isInvalidDecl()) return Info.ExceptSpec; // C++1z [except.spec]p7: // [Look for exceptions thrown by] a constructor selected [...] to // initialize a potentially constructed subobject, // C++1z [except.spec]p8: // The exception specification for an implicitly-declared destructor, or a // destructor without a noexcept-specifier, is potentially-throwing if and // only if any of the destructors for any of its potentially constructed // subojects is potentially throwing. // FIXME: We respect the first rule but ignore the "potentially constructed" // in the second rule to resolve a core issue (no number yet) that would have // us reject: // struct A { virtual void f() = 0; virtual ~A() noexcept(false) = 0; }; // struct B : A {}; // struct C : B { void f(); }; // ... due to giving B::~B() a non-throwing exception specification. Info.visit(Info.IsConstructor ? Info.VisitPotentiallyConstructedBases : Info.VisitAllBases); return Info.ExceptSpec; } namespace { /// RAII object to register a special member as being currently declared. struct DeclaringSpecialMember { Sema &S; Sema::SpecialMemberDecl D; Sema::ContextRAII SavedContext; bool WasAlreadyBeingDeclared; DeclaringSpecialMember(Sema &S, CXXRecordDecl *RD, Sema::CXXSpecialMember CSM) : S(S), D(RD, CSM), SavedContext(S, RD) { WasAlreadyBeingDeclared = !S.SpecialMembersBeingDeclared.insert(D).second; if (WasAlreadyBeingDeclared) // This almost never happens, but if it does, ensure that our cache // doesn't contain a stale result. S.SpecialMemberCache.clear(); else { // Register a note to be produced if we encounter an error while // declaring the special member. Sema::CodeSynthesisContext Ctx; Ctx.Kind = Sema::CodeSynthesisContext::DeclaringSpecialMember; // FIXME: We don't have a location to use here. 
Using the class's // location maintains the fiction that we declare all special members // with the class, but (1) it's not clear that lying about that helps our // users understand what's going on, and (2) there may be outer contexts // on the stack (some of which are relevant) and printing them exposes // our lies. Ctx.PointOfInstantiation = RD->getLocation(); Ctx.Entity = RD; Ctx.SpecialMember = CSM; S.pushCodeSynthesisContext(Ctx); } } ~DeclaringSpecialMember() { if (!WasAlreadyBeingDeclared) { S.SpecialMembersBeingDeclared.erase(D); S.popCodeSynthesisContext(); } } /// \brief Are we already trying to declare this special member? bool isAlreadyBeingDeclared() const { return WasAlreadyBeingDeclared; } }; } void Sema::CheckImplicitSpecialMemberDeclaration(Scope *S, FunctionDecl *FD) { // Look up any existing declarations, but don't trigger declaration of all // implicit special members with this name. DeclarationName Name = FD->getDeclName(); LookupResult R(*this, Name, SourceLocation(), LookupOrdinaryName, ForRedeclaration); for (auto *D : FD->getParent()->lookup(Name)) if (auto *Acceptable = R.getAcceptableDecl(D)) R.addDecl(Acceptable); R.resolveKind(); R.suppressDiagnostics(); CheckFunctionDeclaration(S, FD, R, /*IsMemberSpecialization*/false); } CXXConstructorDecl *Sema::DeclareImplicitDefaultConstructor( CXXRecordDecl *ClassDecl) { // C++ [class.ctor]p5: // A default constructor for a class X is a constructor of class X // that can be called without an argument. If there is no // user-declared constructor for class X, a default constructor is // implicitly declared. An implicitly-declared default constructor // is an inline public member of its class. assert(ClassDecl->needsImplicitDefaultConstructor() && "Should not build implicit default constructor!"); DeclaringSpecialMember DSM(*this, ClassDecl, CXXDefaultConstructor); if (DSM.isAlreadyBeingDeclared()) return nullptr; bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl, CXXDefaultConstructor, false); // Create the actual constructor declaration. CanQualType ClassType = Context.getCanonicalType(Context.getTypeDeclType(ClassDecl)); SourceLocation ClassLoc = ClassDecl->getLocation(); DeclarationName Name = Context.DeclarationNames.getCXXConstructorName(ClassType); DeclarationNameInfo NameInfo(Name, ClassLoc); CXXConstructorDecl *DefaultCon = CXXConstructorDecl::Create( Context, ClassDecl, ClassLoc, NameInfo, /*Type*/QualType(), /*TInfo=*/nullptr, /*isExplicit=*/false, /*isInline=*/true, /*isImplicitlyDeclared=*/true, Constexpr); DefaultCon->setAccess(AS_public); DefaultCon->setDefaulted(); if (getLangOpts().CUDA) { inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXDefaultConstructor, DefaultCon, /* ConstRHS */ false, /* Diagnose */ false); } // Build an exception specification pointing back at this constructor. FunctionProtoType::ExtProtoInfo EPI = getImplicitMethodEPI(*this, DefaultCon); DefaultCon->setType(Context.getFunctionType(Context.VoidTy, None, EPI)); // We don't need to use SpecialMemberIsTrivial here; triviality for default // constructors is easy to compute. DefaultCon->setTrivial(ClassDecl->hasTrivialDefaultConstructor()); // Note that we have declared this constructor. 
++ASTContext::NumImplicitDefaultConstructorsDeclared; Scope *S = getScopeForContext(ClassDecl); CheckImplicitSpecialMemberDeclaration(S, DefaultCon); if (ShouldDeleteSpecialMember(DefaultCon, CXXDefaultConstructor)) SetDeclDeleted(DefaultCon, ClassLoc); if (S) PushOnScopeChains(DefaultCon, S, false); ClassDecl->addDecl(DefaultCon); return DefaultCon; } void Sema::DefineImplicitDefaultConstructor(SourceLocation CurrentLocation, CXXConstructorDecl *Constructor) { assert((Constructor->isDefaulted() && Constructor->isDefaultConstructor() && !Constructor->doesThisDeclarationHaveABody() && !Constructor->isDeleted()) && "DefineImplicitDefaultConstructor - call it for implicit default ctor"); if (Constructor->willHaveBody() || Constructor->isInvalidDecl()) return; CXXRecordDecl *ClassDecl = Constructor->getParent(); assert(ClassDecl && "DefineImplicitDefaultConstructor - invalid constructor"); SynthesizedFunctionScope Scope(*this, Constructor); // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, Constructor->getType()->castAs()); MarkVTableUsed(CurrentLocation, ClassDecl); // Add a context note for diagnostics produced after this point. Scope.addContextNote(CurrentLocation); if (SetCtorInitializers(Constructor, /*AnyErrors=*/false)) { Constructor->setInvalidDecl(); return; } SourceLocation Loc = Constructor->getLocEnd().isValid() ? Constructor->getLocEnd() : Constructor->getLocation(); Constructor->setBody(new (Context) CompoundStmt(Loc)); Constructor->markUsed(Context); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(Constructor); } DiagnoseUninitializedFields(*this, Constructor); } void Sema::ActOnFinishDelayedMemberInitializers(Decl *D) { // Perform any delayed checks on exception specifications. CheckDelayedMemberExceptionSpecs(); } /// Find or create the fake constructor we synthesize to model constructing an /// object of a derived class via a constructor of a base class. CXXConstructorDecl * Sema::findInheritingConstructor(SourceLocation Loc, CXXConstructorDecl *BaseCtor, ConstructorUsingShadowDecl *Shadow) { CXXRecordDecl *Derived = Shadow->getParent(); SourceLocation UsingLoc = Shadow->getLocation(); // FIXME: Add a new kind of DeclarationName for an inherited constructor. // For now we use the name of the base class constructor as a member of the // derived class to indicate a (fake) inherited constructor name. DeclarationName Name = BaseCtor->getDeclName(); // Check to see if we already have a fake constructor for this inherited // constructor call. for (NamedDecl *Ctor : Derived->lookup(Name)) if (declaresSameEntity(cast(Ctor) ->getInheritedConstructor() .getConstructor(), BaseCtor)) return cast(Ctor); DeclarationNameInfo NameInfo(Name, UsingLoc); TypeSourceInfo *TInfo = Context.getTrivialTypeSourceInfo(BaseCtor->getType(), UsingLoc); FunctionProtoTypeLoc ProtoLoc = TInfo->getTypeLoc().IgnoreParens().castAs(); // Check the inherited constructor is valid and find the list of base classes // from which it was inherited. 
InheritedConstructorInfo ICI(*this, Loc, Shadow); bool Constexpr = BaseCtor->isConstexpr() && defaultedSpecialMemberIsConstexpr(*this, Derived, CXXDefaultConstructor, false, BaseCtor, &ICI); CXXConstructorDecl *DerivedCtor = CXXConstructorDecl::Create( Context, Derived, UsingLoc, NameInfo, TInfo->getType(), TInfo, BaseCtor->isExplicit(), /*Inline=*/true, /*ImplicitlyDeclared=*/true, Constexpr, InheritedConstructor(Shadow, BaseCtor)); if (Shadow->isInvalidDecl()) DerivedCtor->setInvalidDecl(); // Build an unevaluated exception specification for this fake constructor. const FunctionProtoType *FPT = TInfo->getType()->castAs(); FunctionProtoType::ExtProtoInfo EPI = FPT->getExtProtoInfo(); EPI.ExceptionSpec.Type = EST_Unevaluated; EPI.ExceptionSpec.SourceDecl = DerivedCtor; DerivedCtor->setType(Context.getFunctionType(FPT->getReturnType(), FPT->getParamTypes(), EPI)); // Build the parameter declarations. SmallVector ParamDecls; for (unsigned I = 0, N = FPT->getNumParams(); I != N; ++I) { TypeSourceInfo *TInfo = Context.getTrivialTypeSourceInfo(FPT->getParamType(I), UsingLoc); ParmVarDecl *PD = ParmVarDecl::Create( Context, DerivedCtor, UsingLoc, UsingLoc, /*IdentifierInfo=*/nullptr, FPT->getParamType(I), TInfo, SC_None, /*DefaultArg=*/nullptr); PD->setScopeInfo(0, I); PD->setImplicit(); // Ensure attributes are propagated onto parameters (this matters for // format, pass_object_size, ...). mergeDeclAttributes(PD, BaseCtor->getParamDecl(I)); ParamDecls.push_back(PD); ProtoLoc.setParam(I, PD); } // Set up the new constructor. assert(!BaseCtor->isDeleted() && "should not use deleted constructor"); DerivedCtor->setAccess(BaseCtor->getAccess()); DerivedCtor->setParams(ParamDecls); Derived->addDecl(DerivedCtor); if (ShouldDeleteSpecialMember(DerivedCtor, CXXDefaultConstructor, &ICI)) SetDeclDeleted(DerivedCtor, UsingLoc); return DerivedCtor; } void Sema::NoteDeletedInheritingConstructor(CXXConstructorDecl *Ctor) { InheritedConstructorInfo ICI(*this, Ctor->getLocation(), Ctor->getInheritedConstructor().getShadowDecl()); ShouldDeleteSpecialMember(Ctor, CXXDefaultConstructor, &ICI, /*Diagnose*/true); } void Sema::DefineInheritingConstructor(SourceLocation CurrentLocation, CXXConstructorDecl *Constructor) { CXXRecordDecl *ClassDecl = Constructor->getParent(); assert(Constructor->getInheritedConstructor() && !Constructor->doesThisDeclarationHaveABody() && !Constructor->isDeleted()); if (Constructor->willHaveBody() || Constructor->isInvalidDecl()) return; // Initializations are performed "as if by a defaulted default constructor", // so enter the appropriate scope. SynthesizedFunctionScope Scope(*this, Constructor); // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, Constructor->getType()->castAs()); MarkVTableUsed(CurrentLocation, ClassDecl); // Add a context note for diagnostics produced after this point. 
Scope.addContextNote(CurrentLocation); ConstructorUsingShadowDecl *Shadow = Constructor->getInheritedConstructor().getShadowDecl(); CXXConstructorDecl *InheritedCtor = Constructor->getInheritedConstructor().getConstructor(); // [class.inhctor.init]p1: // initialization proceeds as if a defaulted default constructor is used to // initialize the D object and each base class subobject from which the // constructor was inherited InheritedConstructorInfo ICI(*this, CurrentLocation, Shadow); CXXRecordDecl *RD = Shadow->getParent(); SourceLocation InitLoc = Shadow->getLocation(); // Build explicit initializers for all base classes from which the // constructor was inherited. SmallVector Inits; for (bool VBase : {false, true}) { for (CXXBaseSpecifier &B : VBase ? RD->vbases() : RD->bases()) { if (B.isVirtual() != VBase) continue; auto *BaseRD = B.getType()->getAsCXXRecordDecl(); if (!BaseRD) continue; auto BaseCtor = ICI.findConstructorForBase(BaseRD, InheritedCtor); if (!BaseCtor.first) continue; MarkFunctionReferenced(CurrentLocation, BaseCtor.first); ExprResult Init = new (Context) CXXInheritedCtorInitExpr( InitLoc, B.getType(), BaseCtor.first, VBase, BaseCtor.second); auto *TInfo = Context.getTrivialTypeSourceInfo(B.getType(), InitLoc); Inits.push_back(new (Context) CXXCtorInitializer( Context, TInfo, VBase, InitLoc, Init.get(), InitLoc, SourceLocation())); } } // We now proceed as if for a defaulted default constructor, with the relevant // initializers replaced. if (SetCtorInitializers(Constructor, /*AnyErrors*/false, Inits)) { Constructor->setInvalidDecl(); return; } Constructor->setBody(new (Context) CompoundStmt(InitLoc)); Constructor->markUsed(Context); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(Constructor); } DiagnoseUninitializedFields(*this, Constructor); } CXXDestructorDecl *Sema::DeclareImplicitDestructor(CXXRecordDecl *ClassDecl) { // C++ [class.dtor]p2: // If a class has no user-declared destructor, a destructor is // declared implicitly. An implicitly-declared destructor is an // inline public member of its class. assert(ClassDecl->needsImplicitDestructor()); DeclaringSpecialMember DSM(*this, ClassDecl, CXXDestructor); if (DSM.isAlreadyBeingDeclared()) return nullptr; // Create the actual destructor declaration. CanQualType ClassType = Context.getCanonicalType(Context.getTypeDeclType(ClassDecl)); SourceLocation ClassLoc = ClassDecl->getLocation(); DeclarationName Name = Context.DeclarationNames.getCXXDestructorName(ClassType); DeclarationNameInfo NameInfo(Name, ClassLoc); CXXDestructorDecl *Destructor = CXXDestructorDecl::Create(Context, ClassDecl, ClassLoc, NameInfo, QualType(), nullptr, /*isInline=*/true, /*isImplicitlyDeclared=*/true); Destructor->setAccess(AS_public); Destructor->setDefaulted(); if (getLangOpts().CUDA) { inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXDestructor, Destructor, /* ConstRHS */ false, /* Diagnose */ false); } // Build an exception specification pointing back at this destructor. FunctionProtoType::ExtProtoInfo EPI = getImplicitMethodEPI(*this, Destructor); Destructor->setType(Context.getFunctionType(Context.VoidTy, None, EPI)); // We don't need to use SpecialMemberIsTrivial here; triviality for // destructors is easy to compute. Destructor->setTrivial(ClassDecl->hasTrivialDestructor()); // Note that we have declared this destructor. 
++ASTContext::NumImplicitDestructorsDeclared; Scope *S = getScopeForContext(ClassDecl); CheckImplicitSpecialMemberDeclaration(S, Destructor); // We can't check whether an implicit destructor is deleted before we complete // the definition of the class, because its validity depends on the alignment // of the class. We'll check this from ActOnFields once the class is complete. if (ClassDecl->isCompleteDefinition() && ShouldDeleteSpecialMember(Destructor, CXXDestructor)) SetDeclDeleted(Destructor, ClassLoc); // Introduce this destructor into its scope. if (S) PushOnScopeChains(Destructor, S, false); ClassDecl->addDecl(Destructor); return Destructor; } void Sema::DefineImplicitDestructor(SourceLocation CurrentLocation, CXXDestructorDecl *Destructor) { assert((Destructor->isDefaulted() && !Destructor->doesThisDeclarationHaveABody() && !Destructor->isDeleted()) && "DefineImplicitDestructor - call it for implicit default dtor"); if (Destructor->willHaveBody() || Destructor->isInvalidDecl()) return; CXXRecordDecl *ClassDecl = Destructor->getParent(); assert(ClassDecl && "DefineImplicitDestructor - invalid destructor"); SynthesizedFunctionScope Scope(*this, Destructor); // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, Destructor->getType()->castAs()); MarkVTableUsed(CurrentLocation, ClassDecl); // Add a context note for diagnostics produced after this point. Scope.addContextNote(CurrentLocation); MarkBaseAndMemberDestructorsReferenced(Destructor->getLocation(), Destructor->getParent()); if (CheckDestructor(Destructor)) { Destructor->setInvalidDecl(); return; } SourceLocation Loc = Destructor->getLocEnd().isValid() ? Destructor->getLocEnd() : Destructor->getLocation(); Destructor->setBody(new (Context) CompoundStmt(Loc)); Destructor->markUsed(Context); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(Destructor); } } /// \brief Perform any semantic analysis which needs to be delayed until all /// pending class member declarations have been parsed. void Sema::ActOnFinishCXXMemberDecls() { // If the context is an invalid C++ class, just suppress these checks. if (CXXRecordDecl *Record = dyn_cast(CurContext)) { if (Record->isInvalidDecl()) { DelayedDefaultedMemberExceptionSpecs.clear(); DelayedExceptionSpecChecks.clear(); return; } checkForMultipleExportedDefaultConstructors(*this, Record); } } void Sema::ActOnFinishCXXNonNestedClass(Decl *D) { referenceDLLExportedClassMethods(); } void Sema::referenceDLLExportedClassMethods() { if (!DelayedDllExportClasses.empty()) { // Calling ReferenceDllExportedMethods might cause the current function to // be called again, so use a local copy of DelayedDllExportClasses. SmallVector WorkList; std::swap(DelayedDllExportClasses, WorkList); for (CXXRecordDecl *Class : WorkList) ReferenceDllExportedMethods(*this, Class); } } void Sema::AdjustDestructorExceptionSpec(CXXRecordDecl *ClassDecl, CXXDestructorDecl *Destructor) { assert(getLangOpts().CPlusPlus11 && "adjusting dtor exception specs was introduced in c++11"); // C++11 [class.dtor]p3: // A declaration of a destructor that does not have an exception- // specification is implicitly considered to have the same exception- // specification as an implicit declaration. const FunctionProtoType *DtorType = Destructor->getType()-> getAs(); if (DtorType->hasExceptionSpec()) return; // Replace the destructor's type, building off the existing one. 
Fortunately, // the only thing of interest in the destructor type is its extended info. // The return and arguments are fixed. FunctionProtoType::ExtProtoInfo EPI = DtorType->getExtProtoInfo(); EPI.ExceptionSpec.Type = EST_Unevaluated; EPI.ExceptionSpec.SourceDecl = Destructor; Destructor->setType(Context.getFunctionType(Context.VoidTy, None, EPI)); // FIXME: If the destructor has a body that could throw, and the newly created // spec doesn't allow exceptions, we should emit a warning, because this // change in behavior can break conforming C++03 programs at runtime. // However, we don't have a body or an exception specification yet, so it // needs to be done somewhere else. } namespace { /// \brief An abstract base class for all helper classes used in building the // copy/move operators. These classes serve as factory functions and help us // avoid using the same Expr* in the AST twice. class ExprBuilder { ExprBuilder(const ExprBuilder&) = delete; ExprBuilder &operator=(const ExprBuilder&) = delete; protected: static Expr *assertNotNull(Expr *E) { assert(E && "Expression construction must not fail."); return E; } public: ExprBuilder() {} virtual ~ExprBuilder() {} virtual Expr *build(Sema &S, SourceLocation Loc) const = 0; }; class RefBuilder: public ExprBuilder { VarDecl *Var; QualType VarType; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.BuildDeclRefExpr(Var, VarType, VK_LValue, Loc).get()); } RefBuilder(VarDecl *Var, QualType VarType) : Var(Var), VarType(VarType) {} }; class ThisBuilder: public ExprBuilder { public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.ActOnCXXThis(Loc).getAs()); } }; class CastBuilder: public ExprBuilder { const ExprBuilder &Builder; QualType Type; ExprValueKind Kind; const CXXCastPath &Path; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.ImpCastExprToType(Builder.build(S, Loc), Type, CK_UncheckedDerivedToBase, Kind, &Path).get()); } CastBuilder(const ExprBuilder &Builder, QualType Type, ExprValueKind Kind, const CXXCastPath &Path) : Builder(Builder), Type(Type), Kind(Kind), Path(Path) {} }; class DerefBuilder: public ExprBuilder { const ExprBuilder &Builder; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull( S.CreateBuiltinUnaryOp(Loc, UO_Deref, Builder.build(S, Loc)).get()); } DerefBuilder(const ExprBuilder &Builder) : Builder(Builder) {} }; class MemberBuilder: public ExprBuilder { const ExprBuilder &Builder; QualType Type; CXXScopeSpec SS; bool IsArrow; LookupResult &MemberLookup; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.BuildMemberReferenceExpr( Builder.build(S, Loc), Type, Loc, IsArrow, SS, SourceLocation(), nullptr, MemberLookup, nullptr, nullptr).get()); } MemberBuilder(const ExprBuilder &Builder, QualType Type, bool IsArrow, LookupResult &MemberLookup) : Builder(Builder), Type(Type), IsArrow(IsArrow), MemberLookup(MemberLookup) {} }; class MoveCastBuilder: public ExprBuilder { const ExprBuilder &Builder; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(CastForMoving(S, Builder.build(S, Loc))); } MoveCastBuilder(const ExprBuilder &Builder) : Builder(Builder) {} }; class LvalueConvBuilder: public ExprBuilder { const ExprBuilder &Builder; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull( S.DefaultLvalueConversion(Builder.build(S, Loc)).get()); } LvalueConvBuilder(const 
ExprBuilder &Builder) : Builder(Builder) {} }; class SubscriptBuilder: public ExprBuilder { const ExprBuilder &Base; const ExprBuilder &Index; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.CreateBuiltinArraySubscriptExpr( Base.build(S, Loc), Loc, Index.build(S, Loc), Loc).get()); } SubscriptBuilder(const ExprBuilder &Base, const ExprBuilder &Index) : Base(Base), Index(Index) {} }; } // end anonymous namespace /// When generating a defaulted copy or move assignment operator, if a field /// should be copied with __builtin_memcpy rather than via explicit assignments, /// do so. This optimization only applies for arrays of scalars, and for arrays /// of class type where the selected copy/move-assignment operator is trivial. static StmtResult buildMemcpyForAssignmentOp(Sema &S, SourceLocation Loc, QualType T, const ExprBuilder &ToB, const ExprBuilder &FromB) { // Compute the size of the memory buffer to be copied. QualType SizeType = S.Context.getSizeType(); llvm::APInt Size(S.Context.getTypeSize(SizeType), S.Context.getTypeSizeInChars(T).getQuantity()); // Take the address of the field references for "from" and "to". We // directly construct UnaryOperators here because semantic analysis // does not permit us to take the address of an xvalue. Expr *From = FromB.build(S, Loc); From = new (S.Context) UnaryOperator(From, UO_AddrOf, S.Context.getPointerType(From->getType()), VK_RValue, OK_Ordinary, Loc); Expr *To = ToB.build(S, Loc); To = new (S.Context) UnaryOperator(To, UO_AddrOf, S.Context.getPointerType(To->getType()), VK_RValue, OK_Ordinary, Loc); const Type *E = T->getBaseElementTypeUnsafe(); bool NeedsCollectableMemCpy = E->isRecordType() && E->getAs()->getDecl()->hasObjectMember(); // Create a reference to the __builtin_objc_memmove_collectable function StringRef MemCpyName = NeedsCollectableMemCpy ? "__builtin_objc_memmove_collectable" : "__builtin_memcpy"; LookupResult R(S, &S.Context.Idents.get(MemCpyName), Loc, Sema::LookupOrdinaryName); S.LookupName(R, S.TUScope, true); FunctionDecl *MemCpy = R.getAsSingle(); if (!MemCpy) // Something went horribly wrong earlier, and we will have complained // about it. return StmtError(); ExprResult MemCpyRef = S.BuildDeclRefExpr(MemCpy, S.Context.BuiltinFnTy, VK_RValue, Loc, nullptr); assert(MemCpyRef.isUsable() && "Builtin reference cannot fail"); Expr *CallArgs[] = { To, From, IntegerLiteral::Create(S.Context, Size, SizeType, Loc) }; ExprResult Call = S.ActOnCallExpr(/*Scope=*/nullptr, MemCpyRef.get(), Loc, CallArgs, Loc); assert(!Call.isInvalid() && "Call to __builtin_memcpy cannot fail!"); return Call.getAs(); } /// \brief Builds a statement that copies/moves the given entity from \p From to /// \c To. /// /// This routine is used to copy/move the members of a class with an /// implicitly-declared copy/move assignment operator. When the entities being /// copied are arrays, this routine builds for loops to copy them. /// /// \param S The Sema object used for type-checking. /// /// \param Loc The location where the implicit copy/move is being generated. /// /// \param T The type of the expressions being copied/moved. Both expressions /// must have this type. /// /// \param To The expression we are copying/moving to. /// /// \param From The expression we are copying/moving from. /// /// \param CopyingBaseSubobject Whether we're copying/moving a base subobject. /// Otherwise, it's a non-static member subobject. /// /// \param Copying Whether we're copying or moving. 
/// /// \param Depth Internal parameter recording the depth of the recursion. /// /// \returns A statement or a loop that copies the expressions, or StmtResult(0) /// if a memcpy should be used instead. static StmtResult buildSingleCopyAssignRecursively(Sema &S, SourceLocation Loc, QualType T, const ExprBuilder &To, const ExprBuilder &From, bool CopyingBaseSubobject, bool Copying, unsigned Depth = 0) { // C++11 [class.copy]p28: // Each subobject is assigned in the manner appropriate to its type: // // - if the subobject is of class type, as if by a call to operator= with // the subobject as the object expression and the corresponding // subobject of x as a single function argument (as if by explicit // qualification; that is, ignoring any possible virtual overriding // functions in more derived classes); // // C++03 [class.copy]p13: // - if the subobject is of class type, the copy assignment operator for // the class is used (as if by explicit qualification; that is, // ignoring any possible virtual overriding functions in more derived // classes); if (const RecordType *RecordTy = T->getAs()) { CXXRecordDecl *ClassDecl = cast(RecordTy->getDecl()); // Look for operator=. DeclarationName Name = S.Context.DeclarationNames.getCXXOperatorName(OO_Equal); LookupResult OpLookup(S, Name, Loc, Sema::LookupOrdinaryName); S.LookupQualifiedName(OpLookup, ClassDecl, false); // Prior to C++11, filter out any result that isn't a copy/move-assignment // operator. if (!S.getLangOpts().CPlusPlus11) { LookupResult::Filter F = OpLookup.makeFilter(); while (F.hasNext()) { NamedDecl *D = F.next(); if (CXXMethodDecl *Method = dyn_cast(D)) if (Method->isCopyAssignmentOperator() || (!Copying && Method->isMoveAssignmentOperator())) continue; F.erase(); } F.done(); } // Suppress the protected check (C++ [class.protected]) for each of the // assignment operators we found. This strange dance is required when // we're assigning via a base classes's copy-assignment operator. To // ensure that we're getting the right base class subobject (without // ambiguities), we need to cast "this" to that subobject type; to // ensure that we don't go through the virtual call mechanism, we need // to qualify the operator= name with the base class (see below). However, // this means that if the base class has a protected copy assignment // operator, the protected member access check will fail. So, we // rewrite "protected" access to "public" access in this case, since we // know by construction that we're calling from a derived class. if (CopyingBaseSubobject) { for (LookupResult::iterator L = OpLookup.begin(), LEnd = OpLookup.end(); L != LEnd; ++L) { if (L.getAccess() == AS_protected) L.setAccess(AS_public); } } // Create the nested-name-specifier that will be used to qualify the // reference to operator=; this is required to suppress the virtual // call mechanism. CXXScopeSpec SS; const Type *CanonicalT = S.Context.getCanonicalType(T.getTypePtr()); SS.MakeTrivial(S.Context, NestedNameSpecifier::Create(S.Context, nullptr, false, CanonicalT), Loc); // Create the reference to operator=. ExprResult OpEqualRef = S.BuildMemberReferenceExpr(To.build(S, Loc), T, Loc, /*isArrow=*/false, SS, /*TemplateKWLoc=*/SourceLocation(), /*FirstQualifierInScope=*/nullptr, OpLookup, /*TemplateArgs=*/nullptr, /*S*/nullptr, /*SuppressQualifierCheck=*/true); if (OpEqualRef.isInvalid()) return StmtError(); // Build the call to the assignment operator. 
Expr *FromInst = From.build(S, Loc); ExprResult Call = S.BuildCallToMemberFunction(/*Scope=*/nullptr, OpEqualRef.getAs(), Loc, FromInst, Loc); if (Call.isInvalid()) return StmtError(); // If we built a call to a trivial 'operator=' while copying an array, // bail out. We'll replace the whole shebang with a memcpy. CXXMemberCallExpr *CE = dyn_cast(Call.get()); if (CE && CE->getMethodDecl()->isTrivial() && Depth) return StmtResult((Stmt*)nullptr); // Convert to an expression-statement, and clean up any produced // temporaries. return S.ActOnExprStmt(Call); } // - if the subobject is of scalar type, the built-in assignment // operator is used. const ConstantArrayType *ArrayTy = S.Context.getAsConstantArrayType(T); if (!ArrayTy) { ExprResult Assignment = S.CreateBuiltinBinOp( Loc, BO_Assign, To.build(S, Loc), From.build(S, Loc)); if (Assignment.isInvalid()) return StmtError(); return S.ActOnExprStmt(Assignment); } // - if the subobject is an array, each element is assigned, in the // manner appropriate to the element type; // Construct a loop over the array bounds, e.g., // // for (__SIZE_TYPE__ i0 = 0; i0 != array-size; ++i0) // // that will copy each of the array elements. QualType SizeType = S.Context.getSizeType(); // Create the iteration variable. IdentifierInfo *IterationVarName = nullptr; { SmallString<8> Str; llvm::raw_svector_ostream OS(Str); OS << "__i" << Depth; IterationVarName = &S.Context.Idents.get(OS.str()); } VarDecl *IterationVar = VarDecl::Create(S.Context, S.CurContext, Loc, Loc, IterationVarName, SizeType, S.Context.getTrivialTypeSourceInfo(SizeType, Loc), SC_None); // Initialize the iteration variable to zero. llvm::APInt Zero(S.Context.getTypeSize(SizeType), 0); IterationVar->setInit(IntegerLiteral::Create(S.Context, Zero, SizeType, Loc)); // Creates a reference to the iteration variable. RefBuilder IterationVarRef(IterationVar, SizeType); LvalueConvBuilder IterationVarRefRVal(IterationVarRef); // Create the DeclStmt that holds the iteration variable. Stmt *InitStmt = new (S.Context) DeclStmt(DeclGroupRef(IterationVar),Loc,Loc); // Subscript the "from" and "to" expressions with the iteration variable. SubscriptBuilder FromIndexCopy(From, IterationVarRefRVal); MoveCastBuilder FromIndexMove(FromIndexCopy); const ExprBuilder *FromIndex; if (Copying) FromIndex = &FromIndexCopy; else FromIndex = &FromIndexMove; SubscriptBuilder ToIndex(To, IterationVarRefRVal); // Build the copy/move for an individual element of the array. StmtResult Copy = buildSingleCopyAssignRecursively(S, Loc, ArrayTy->getElementType(), ToIndex, *FromIndex, CopyingBaseSubobject, Copying, Depth + 1); // Bail out if copying fails or if we determined that we should use memcpy. if (Copy.isInvalid() || !Copy.get()) return Copy; // Create the comparison against the array bound. llvm::APInt Upper = ArrayTy->getSize().zextOrTrunc(S.Context.getTypeSize(SizeType)); Expr *Comparison = new (S.Context) BinaryOperator(IterationVarRefRVal.build(S, Loc), IntegerLiteral::Create(S.Context, Upper, SizeType, Loc), BO_NE, S.Context.BoolTy, VK_RValue, OK_Ordinary, Loc, FPOptions()); // Create the pre-increment of the iteration variable. Expr *Increment = new (S.Context) UnaryOperator(IterationVarRef.build(S, Loc), UO_PreInc, SizeType, VK_LValue, OK_Ordinary, Loc); // Construct the loop that copies all elements of this array. 
return S.ActOnForStmt( Loc, Loc, InitStmt, S.ActOnCondition(nullptr, Loc, Comparison, Sema::ConditionKind::Boolean), S.MakeFullDiscardedValueExpr(Increment), Loc, Copy.get()); } static StmtResult buildSingleCopyAssign(Sema &S, SourceLocation Loc, QualType T, const ExprBuilder &To, const ExprBuilder &From, bool CopyingBaseSubobject, bool Copying) { // Maybe we should use a memcpy? if (T->isArrayType() && !T.isConstQualified() && !T.isVolatileQualified() && T.isTriviallyCopyableType(S.Context)) return buildMemcpyForAssignmentOp(S, Loc, T, To, From); StmtResult Result(buildSingleCopyAssignRecursively(S, Loc, T, To, From, CopyingBaseSubobject, Copying, 0)); // If we ended up picking a trivial assignment operator for an array of a // non-trivially-copyable class type, just emit a memcpy. if (!Result.isInvalid() && !Result.get()) return buildMemcpyForAssignmentOp(S, Loc, T, To, From); return Result; } CXXMethodDecl *Sema::DeclareImplicitCopyAssignment(CXXRecordDecl *ClassDecl) { // Note: The following rules are largely analoguous to the copy // constructor rules. Note that virtual bases are not taken into account // for determining the argument type of the operator. Note also that // operators taking an object instead of a reference are allowed. assert(ClassDecl->needsImplicitCopyAssignment()); DeclaringSpecialMember DSM(*this, ClassDecl, CXXCopyAssignment); if (DSM.isAlreadyBeingDeclared()) return nullptr; QualType ArgType = Context.getTypeDeclType(ClassDecl); QualType RetType = Context.getLValueReferenceType(ArgType); bool Const = ClassDecl->implicitCopyAssignmentHasConstParam(); if (Const) ArgType = ArgType.withConst(); ArgType = Context.getLValueReferenceType(ArgType); bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl, CXXCopyAssignment, Const); // An implicitly-declared copy assignment operator is an inline public // member of its class. DeclarationName Name = Context.DeclarationNames.getCXXOperatorName(OO_Equal); SourceLocation ClassLoc = ClassDecl->getLocation(); DeclarationNameInfo NameInfo(Name, ClassLoc); CXXMethodDecl *CopyAssignment = CXXMethodDecl::Create(Context, ClassDecl, ClassLoc, NameInfo, QualType(), /*TInfo=*/nullptr, /*StorageClass=*/SC_None, /*isInline=*/true, Constexpr, SourceLocation()); CopyAssignment->setAccess(AS_public); CopyAssignment->setDefaulted(); CopyAssignment->setImplicit(); if (getLangOpts().CUDA) { inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXCopyAssignment, CopyAssignment, /* ConstRHS */ Const, /* Diagnose */ false); } // Build an exception specification pointing back at this member. FunctionProtoType::ExtProtoInfo EPI = getImplicitMethodEPI(*this, CopyAssignment); CopyAssignment->setType(Context.getFunctionType(RetType, ArgType, EPI)); // Add the parameter to the operator. ParmVarDecl *FromParam = ParmVarDecl::Create(Context, CopyAssignment, ClassLoc, ClassLoc, /*Id=*/nullptr, ArgType, /*TInfo=*/nullptr, SC_None, nullptr); CopyAssignment->setParams(FromParam); CopyAssignment->setTrivial( ClassDecl->needsOverloadResolutionForCopyAssignment() ? SpecialMemberIsTrivial(CopyAssignment, CXXCopyAssignment) : ClassDecl->hasTrivialCopyAssignment()); // Note that we have added this copy-assignment operator. 
++ASTContext::NumImplicitCopyAssignmentOperatorsDeclared; Scope *S = getScopeForContext(ClassDecl); CheckImplicitSpecialMemberDeclaration(S, CopyAssignment); if (ShouldDeleteSpecialMember(CopyAssignment, CXXCopyAssignment)) SetDeclDeleted(CopyAssignment, ClassLoc); if (S) PushOnScopeChains(CopyAssignment, S, false); ClassDecl->addDecl(CopyAssignment); return CopyAssignment; } /// Diagnose an implicit copy operation for a class which is odr-used, but /// which is deprecated because the class has a user-declared copy constructor, /// copy assignment operator, or destructor. static void diagnoseDeprecatedCopyOperation(Sema &S, CXXMethodDecl *CopyOp) { assert(CopyOp->isImplicit()); CXXRecordDecl *RD = CopyOp->getParent(); CXXMethodDecl *UserDeclaredOperation = nullptr; // In Microsoft mode, assignment operations don't affect constructors and // vice versa. if (RD->hasUserDeclaredDestructor()) { UserDeclaredOperation = RD->getDestructor(); } else if (!isa(CopyOp) && RD->hasUserDeclaredCopyConstructor() && !S.getLangOpts().MSVCCompat) { // Find any user-declared copy constructor. for (auto *I : RD->ctors()) { if (I->isCopyConstructor()) { UserDeclaredOperation = I; break; } } assert(UserDeclaredOperation); } else if (isa(CopyOp) && RD->hasUserDeclaredCopyAssignment() && !S.getLangOpts().MSVCCompat) { // Find any user-declared move assignment operator. for (auto *I : RD->methods()) { if (I->isCopyAssignmentOperator()) { UserDeclaredOperation = I; break; } } assert(UserDeclaredOperation); } if (UserDeclaredOperation) { S.Diag(UserDeclaredOperation->getLocation(), diag::warn_deprecated_copy_operation) << RD << /*copy assignment*/!isa(CopyOp) << /*destructor*/isa(UserDeclaredOperation); } } void Sema::DefineImplicitCopyAssignment(SourceLocation CurrentLocation, CXXMethodDecl *CopyAssignOperator) { assert((CopyAssignOperator->isDefaulted() && CopyAssignOperator->isOverloadedOperator() && CopyAssignOperator->getOverloadedOperator() == OO_Equal && !CopyAssignOperator->doesThisDeclarationHaveABody() && !CopyAssignOperator->isDeleted()) && "DefineImplicitCopyAssignment called for wrong function"); if (CopyAssignOperator->willHaveBody() || CopyAssignOperator->isInvalidDecl()) return; CXXRecordDecl *ClassDecl = CopyAssignOperator->getParent(); if (ClassDecl->isInvalidDecl()) { CopyAssignOperator->setInvalidDecl(); return; } SynthesizedFunctionScope Scope(*this, CopyAssignOperator); // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, CopyAssignOperator->getType()->castAs()); // Add a context note for diagnostics produced after this point. Scope.addContextNote(CurrentLocation); // C++11 [class.copy]p18: // The [definition of an implicitly declared copy assignment operator] is // deprecated if the class has a user-declared copy constructor or a // user-declared destructor. if (getLangOpts().CPlusPlus11 && CopyAssignOperator->isImplicit()) diagnoseDeprecatedCopyOperation(*this, CopyAssignOperator); // C++0x [class.copy]p30: // The implicitly-defined or explicitly-defaulted copy assignment operator // for a non-union class X performs memberwise copy assignment of its // subobjects. The direct base classes of X are assigned first, in the // order of their declaration in the base-specifier-list, and then the // immediate non-static data members of X are assigned, in the order in // which they were declared in the class definition. // The statements that form the synthesized function body. 
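  // Editorial illustration (not part of the original source): for a
  // hypothetical class such as
  //   struct Widget : Base { int Id; int Data[4]; };
  // the body assembled below is roughly equivalent to
  //   Widget &Widget::operator=(const Widget &Other) {
  //     Base::operator=(static_cast<const Base &>(Other));
  //     Id = Other.Id;
  //     for (unsigned I = 0; I != 4; ++I)  // arrays are assigned element-wise,
  //       Data[I] = Other.Data[I];         // or via memcpy when trivial
  //     return *this;
  //   }
  // 'Widget' and 'Base' are illustrative names only.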
SmallVector Statements; // The parameter for the "other" object, which we are copying from. ParmVarDecl *Other = CopyAssignOperator->getParamDecl(0); Qualifiers OtherQuals = Other->getType().getQualifiers(); QualType OtherRefType = Other->getType(); if (const LValueReferenceType *OtherRef = OtherRefType->getAs()) { OtherRefType = OtherRef->getPointeeType(); OtherQuals = OtherRefType.getQualifiers(); } // Our location for everything implicitly-generated. SourceLocation Loc = CopyAssignOperator->getLocEnd().isValid() ? CopyAssignOperator->getLocEnd() : CopyAssignOperator->getLocation(); // Builds a DeclRefExpr for the "other" object. RefBuilder OtherRef(Other, OtherRefType); // Builds the "this" pointer. ThisBuilder This; // Assign base classes. bool Invalid = false; for (auto &Base : ClassDecl->bases()) { // Form the assignment: // static_cast(this)->Base::operator=(static_cast(other)); QualType BaseType = Base.getType().getUnqualifiedType(); if (!BaseType->isRecordType()) { Invalid = true; continue; } CXXCastPath BasePath; BasePath.push_back(&Base); // Construct the "from" expression, which is an implicit cast to the // appropriately-qualified base type. CastBuilder From(OtherRef, Context.getQualifiedType(BaseType, OtherQuals), VK_LValue, BasePath); // Dereference "this". DerefBuilder DerefThis(This); CastBuilder To(DerefThis, Context.getCVRQualifiedType( BaseType, CopyAssignOperator->getTypeQualifiers()), VK_LValue, BasePath); // Build the copy. StmtResult Copy = buildSingleCopyAssign(*this, Loc, BaseType, To, From, /*CopyingBaseSubobject=*/true, /*Copying=*/true); if (Copy.isInvalid()) { CopyAssignOperator->setInvalidDecl(); return; } // Success! Record the copy. Statements.push_back(Copy.getAs()); } // Assign non-static members. for (auto *Field : ClassDecl->fields()) { // FIXME: We should form some kind of AST representation for the implied // memcpy in a union copy operation. if (Field->isUnnamedBitfield() || Field->getParent()->isUnion()) continue; if (Field->isInvalidDecl()) { Invalid = true; continue; } // Check for members of reference type; we can't copy those. if (Field->getType()->isReferenceType()) { Diag(ClassDecl->getLocation(), diag::err_uninitialized_member_for_assign) << Context.getTagDeclType(ClassDecl) << 0 << Field->getDeclName(); Diag(Field->getLocation(), diag::note_declared_at); Invalid = true; continue; } // Check for members of const-qualified, non-class type. QualType BaseType = Context.getBaseElementType(Field->getType()); if (!BaseType->getAs() && BaseType.isConstQualified()) { Diag(ClassDecl->getLocation(), diag::err_uninitialized_member_for_assign) << Context.getTagDeclType(ClassDecl) << 1 << Field->getDeclName(); Diag(Field->getLocation(), diag::note_declared_at); Invalid = true; continue; } // Suppress assigning zero-width bitfields. if (Field->isBitField() && Field->getBitWidthValue(Context) == 0) continue; QualType FieldType = Field->getType().getNonReferenceType(); if (FieldType->isIncompleteArrayType()) { assert(ClassDecl->hasFlexibleArrayMember() && "Incomplete array type is not valid"); continue; } // Build references to the field in the object we're copying from and to. CXXScopeSpec SS; // Intentionally empty LookupResult MemberLookup(*this, Field->getDeclName(), Loc, LookupMemberName); MemberLookup.addDecl(Field); MemberLookup.resolveKind(); MemberBuilder From(OtherRef, OtherRefType, /*IsArrow=*/false, MemberLookup); MemberBuilder To(This, getCurrentThisType(), /*IsArrow=*/true, MemberLookup); // Build the copy of this field. 
StmtResult Copy = buildSingleCopyAssign(*this, Loc, FieldType, To, From, /*CopyingBaseSubobject=*/false, /*Copying=*/true); if (Copy.isInvalid()) { CopyAssignOperator->setInvalidDecl(); return; } // Success! Record the copy. Statements.push_back(Copy.getAs()); } if (!Invalid) { // Add a "return *this;" ExprResult ThisObj = CreateBuiltinUnaryOp(Loc, UO_Deref, This.build(*this, Loc)); StmtResult Return = BuildReturnStmt(Loc, ThisObj.get()); if (Return.isInvalid()) Invalid = true; else Statements.push_back(Return.getAs()); } if (Invalid) { CopyAssignOperator->setInvalidDecl(); return; } StmtResult Body; { CompoundScopeRAII CompoundScope(*this); Body = ActOnCompoundStmt(Loc, Loc, Statements, /*isStmtExpr=*/false); assert(!Body.isInvalid() && "Compound statement creation cannot fail"); } CopyAssignOperator->setBody(Body.getAs()); CopyAssignOperator->markUsed(Context); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(CopyAssignOperator); } } CXXMethodDecl *Sema::DeclareImplicitMoveAssignment(CXXRecordDecl *ClassDecl) { assert(ClassDecl->needsImplicitMoveAssignment()); DeclaringSpecialMember DSM(*this, ClassDecl, CXXMoveAssignment); if (DSM.isAlreadyBeingDeclared()) return nullptr; // Note: The following rules are largely analoguous to the move // constructor rules. QualType ArgType = Context.getTypeDeclType(ClassDecl); QualType RetType = Context.getLValueReferenceType(ArgType); ArgType = Context.getRValueReferenceType(ArgType); bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl, CXXMoveAssignment, false); // An implicitly-declared move assignment operator is an inline public // member of its class. DeclarationName Name = Context.DeclarationNames.getCXXOperatorName(OO_Equal); SourceLocation ClassLoc = ClassDecl->getLocation(); DeclarationNameInfo NameInfo(Name, ClassLoc); CXXMethodDecl *MoveAssignment = CXXMethodDecl::Create(Context, ClassDecl, ClassLoc, NameInfo, QualType(), /*TInfo=*/nullptr, /*StorageClass=*/SC_None, /*isInline=*/true, Constexpr, SourceLocation()); MoveAssignment->setAccess(AS_public); MoveAssignment->setDefaulted(); MoveAssignment->setImplicit(); if (getLangOpts().CUDA) { inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXMoveAssignment, MoveAssignment, /* ConstRHS */ false, /* Diagnose */ false); } // Build an exception specification pointing back at this member. FunctionProtoType::ExtProtoInfo EPI = getImplicitMethodEPI(*this, MoveAssignment); MoveAssignment->setType(Context.getFunctionType(RetType, ArgType, EPI)); // Add the parameter to the operator. ParmVarDecl *FromParam = ParmVarDecl::Create(Context, MoveAssignment, ClassLoc, ClassLoc, /*Id=*/nullptr, ArgType, /*TInfo=*/nullptr, SC_None, nullptr); MoveAssignment->setParams(FromParam); MoveAssignment->setTrivial( ClassDecl->needsOverloadResolutionForMoveAssignment() ? SpecialMemberIsTrivial(MoveAssignment, CXXMoveAssignment) : ClassDecl->hasTrivialMoveAssignment()); // Note that we have added this copy-assignment operator. ++ASTContext::NumImplicitMoveAssignmentOperatorsDeclared; Scope *S = getScopeForContext(ClassDecl); CheckImplicitSpecialMemberDeclaration(S, MoveAssignment); if (ShouldDeleteSpecialMember(MoveAssignment, CXXMoveAssignment)) { ClassDecl->setImplicitMoveAssignmentIsDeleted(); SetDeclDeleted(MoveAssignment, ClassLoc); } if (S) PushOnScopeChains(MoveAssignment, S, false); ClassDecl->addDecl(MoveAssignment); return MoveAssignment; } /// Check if we're implicitly defining a move assignment operator for a class /// with virtual bases. 
/// Such a move assignment might move-assign the virtual
/// base multiple times.
static void checkMoveAssignmentForRepeatedMove(Sema &S, CXXRecordDecl *Class,
                                               SourceLocation CurrentLocation) {
  assert(!Class->isDependentContext() && "should not define dependent move");

  // Only a virtual base could get implicitly move-assigned multiple times.
  // Only a non-trivial move assignment can observe this. We only want to
  // diagnose if we implicitly define an assignment operator that assigns
  // two base classes, both of which move-assign the same virtual base.
  if (Class->getNumVBases() == 0 || Class->hasTrivialMoveAssignment() ||
      Class->getNumBases() < 2)
    return;

  llvm::SmallVector<CXXBaseSpecifier *, 16> Worklist;
  typedef llvm::DenseMap<CXXRecordDecl*, CXXBaseSpecifier*> VBaseMap;
  VBaseMap VBases;

  for (auto &BI : Class->bases()) {
    Worklist.push_back(&BI);

    while (!Worklist.empty()) {
      CXXBaseSpecifier *BaseSpec = Worklist.pop_back_val();
      CXXRecordDecl *Base = BaseSpec->getType()->getAsCXXRecordDecl();

      // If the base has no non-trivial move assignment operators,
      // we don't care about moves from it.
      if (!Base->hasNonTrivialMoveAssignment())
        continue;

      // If there's nothing virtual here, skip it.
      if (!BaseSpec->isVirtual() && !Base->getNumVBases())
        continue;

      // If we're not actually going to call a move assignment for this base,
      // or the selected move assignment is trivial, skip it.
      Sema::SpecialMemberOverloadResult SMOR =
          S.LookupSpecialMember(Base, Sema::CXXMoveAssignment,
                                /*ConstArg*/false, /*VolatileArg*/false,
                                /*RValueThis*/true, /*ConstThis*/false,
                                /*VolatileThis*/false);
      if (!SMOR.getMethod() || SMOR.getMethod()->isTrivial() ||
          !SMOR.getMethod()->isMoveAssignmentOperator())
        continue;

      if (BaseSpec->isVirtual()) {
        // We're going to move-assign this virtual base, and its move
        // assignment operator is not trivial. If this can happen for
        // multiple distinct direct bases of Class, diagnose it. (If it
        // only happens in one base, we'll diagnose it when synthesizing
        // that base class's move assignment operator.)
        CXXBaseSpecifier *&Existing =
            VBases.insert(std::make_pair(Base->getCanonicalDecl(), &BI))
                .first->second;
        if (Existing && Existing != &BI) {
          S.Diag(CurrentLocation, diag::warn_vbase_moved_multiple_times)
              << Class << Base;
          S.Diag(Existing->getLocStart(), diag::note_vbase_moved_here)
              << (Base->getCanonicalDecl() ==
                  Existing->getType()->getAsCXXRecordDecl()->getCanonicalDecl())
              << Base << Existing->getType()
              << Existing->getSourceRange();
          S.Diag(BI.getLocStart(), diag::note_vbase_moved_here)
              << (Base->getCanonicalDecl() ==
                  BI.getType()->getAsCXXRecordDecl()->getCanonicalDecl())
              << Base << BI.getType()
              << BaseSpec->getSourceRange();

          // Only diagnose each vbase once.
          Existing = nullptr;
        }
      } else {
        // Only walk over bases that have defaulted move assignment operators.
        // We assume that any user-provided move assignment operator handles
        // the multiple-moves-of-vbase case itself somehow.
        if (!SMOR.getMethod()->isDefaulted())
          continue;

        // We're going to move the base classes of Base. Add them to the list.
for (auto &BI : Base->bases()) Worklist.push_back(&BI); } } } } void Sema::DefineImplicitMoveAssignment(SourceLocation CurrentLocation, CXXMethodDecl *MoveAssignOperator) { assert((MoveAssignOperator->isDefaulted() && MoveAssignOperator->isOverloadedOperator() && MoveAssignOperator->getOverloadedOperator() == OO_Equal && !MoveAssignOperator->doesThisDeclarationHaveABody() && !MoveAssignOperator->isDeleted()) && "DefineImplicitMoveAssignment called for wrong function"); if (MoveAssignOperator->willHaveBody() || MoveAssignOperator->isInvalidDecl()) return; CXXRecordDecl *ClassDecl = MoveAssignOperator->getParent(); if (ClassDecl->isInvalidDecl()) { MoveAssignOperator->setInvalidDecl(); return; } // C++0x [class.copy]p28: // The implicitly-defined or move assignment operator for a non-union class // X performs memberwise move assignment of its subobjects. The direct base // classes of X are assigned first, in the order of their declaration in the // base-specifier-list, and then the immediate non-static data members of X // are assigned, in the order in which they were declared in the class // definition. // Issue a warning if our implicit move assignment operator will move // from a virtual base more than once. checkMoveAssignmentForRepeatedMove(*this, ClassDecl, CurrentLocation); SynthesizedFunctionScope Scope(*this, MoveAssignOperator); // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, MoveAssignOperator->getType()->castAs()); // Add a context note for diagnostics produced after this point. Scope.addContextNote(CurrentLocation); // The statements that form the synthesized function body. SmallVector Statements; // The parameter for the "other" object, which we are move from. ParmVarDecl *Other = MoveAssignOperator->getParamDecl(0); QualType OtherRefType = Other->getType()-> getAs()->getPointeeType(); assert(!OtherRefType.getQualifiers() && "Bad argument type of defaulted move assignment"); // Our location for everything implicitly-generated. SourceLocation Loc = MoveAssignOperator->getLocEnd().isValid() ? MoveAssignOperator->getLocEnd() : MoveAssignOperator->getLocation(); // Builds a reference to the "other" object. RefBuilder OtherRef(Other, OtherRefType); // Cast to rvalue. MoveCastBuilder MoveOther(OtherRef); // Builds the "this" pointer. ThisBuilder This; // Assign base classes. bool Invalid = false; for (auto &Base : ClassDecl->bases()) { // C++11 [class.copy]p28: // It is unspecified whether subobjects representing virtual base classes // are assigned more than once by the implicitly-defined copy assignment // operator. // FIXME: Do not assign to a vbase that will be assigned by some other base // class. For a move-assignment, this can result in the vbase being moved // multiple times. // Form the assignment: // static_cast(this)->Base::operator=(static_cast(other)); QualType BaseType = Base.getType().getUnqualifiedType(); if (!BaseType->isRecordType()) { Invalid = true; continue; } CXXCastPath BasePath; BasePath.push_back(&Base); // Construct the "from" expression, which is an implicit cast to the // appropriately-qualified base type. CastBuilder From(OtherRef, BaseType, VK_XValue, BasePath); // Dereference "this". DerefBuilder DerefThis(This); // Implicitly cast "this" to the appropriately-qualified base type. CastBuilder To(DerefThis, Context.getCVRQualifiedType( BaseType, MoveAssignOperator->getTypeQualifiers()), VK_LValue, BasePath); // Build the move. 
StmtResult Move = buildSingleCopyAssign(*this, Loc, BaseType, To, From, /*CopyingBaseSubobject=*/true, /*Copying=*/false); if (Move.isInvalid()) { MoveAssignOperator->setInvalidDecl(); return; } // Success! Record the move. Statements.push_back(Move.getAs()); } // Assign non-static members. for (auto *Field : ClassDecl->fields()) { // FIXME: We should form some kind of AST representation for the implied // memcpy in a union copy operation. if (Field->isUnnamedBitfield() || Field->getParent()->isUnion()) continue; if (Field->isInvalidDecl()) { Invalid = true; continue; } // Check for members of reference type; we can't move those. if (Field->getType()->isReferenceType()) { Diag(ClassDecl->getLocation(), diag::err_uninitialized_member_for_assign) << Context.getTagDeclType(ClassDecl) << 0 << Field->getDeclName(); Diag(Field->getLocation(), diag::note_declared_at); Invalid = true; continue; } // Check for members of const-qualified, non-class type. QualType BaseType = Context.getBaseElementType(Field->getType()); if (!BaseType->getAs() && BaseType.isConstQualified()) { Diag(ClassDecl->getLocation(), diag::err_uninitialized_member_for_assign) << Context.getTagDeclType(ClassDecl) << 1 << Field->getDeclName(); Diag(Field->getLocation(), diag::note_declared_at); Invalid = true; continue; } // Suppress assigning zero-width bitfields. if (Field->isBitField() && Field->getBitWidthValue(Context) == 0) continue; QualType FieldType = Field->getType().getNonReferenceType(); if (FieldType->isIncompleteArrayType()) { assert(ClassDecl->hasFlexibleArrayMember() && "Incomplete array type is not valid"); continue; } // Build references to the field in the object we're copying from and to. LookupResult MemberLookup(*this, Field->getDeclName(), Loc, LookupMemberName); MemberLookup.addDecl(Field); MemberLookup.resolveKind(); MemberBuilder From(MoveOther, OtherRefType, /*IsArrow=*/false, MemberLookup); MemberBuilder To(This, getCurrentThisType(), /*IsArrow=*/true, MemberLookup); assert(!From.build(*this, Loc)->isLValue() && // could be xvalue or prvalue "Member reference with rvalue base must be rvalue except for reference " "members, which aren't allowed for move assignment."); // Build the move of this field. StmtResult Move = buildSingleCopyAssign(*this, Loc, FieldType, To, From, /*CopyingBaseSubobject=*/false, /*Copying=*/false); if (Move.isInvalid()) { MoveAssignOperator->setInvalidDecl(); return; } // Success! Record the copy. Statements.push_back(Move.getAs()); } if (!Invalid) { // Add a "return *this;" ExprResult ThisObj = CreateBuiltinUnaryOp(Loc, UO_Deref, This.build(*this, Loc)); StmtResult Return = BuildReturnStmt(Loc, ThisObj.get()); if (Return.isInvalid()) Invalid = true; else Statements.push_back(Return.getAs()); } if (Invalid) { MoveAssignOperator->setInvalidDecl(); return; } StmtResult Body; { CompoundScopeRAII CompoundScope(*this); Body = ActOnCompoundStmt(Loc, Loc, Statements, /*isStmtExpr=*/false); assert(!Body.isInvalid() && "Compound statement creation cannot fail"); } MoveAssignOperator->setBody(Body.getAs()); MoveAssignOperator->markUsed(Context); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(MoveAssignOperator); } } CXXConstructorDecl *Sema::DeclareImplicitCopyConstructor( CXXRecordDecl *ClassDecl) { // C++ [class.copy]p4: // If the class definition does not explicitly declare a copy // constructor, one is declared implicitly. 
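  // Editorial illustration (not part of the original source): given a
  // hypothetical
  //   struct S { std::string Name; };
  // this routine declares the equivalent of
  //   S(const S &);   // inline public; the parameter is const because every
  //                   // subobject's copy constructor takes a const reference
  // If some subobject's copy constructor takes a non-const reference, the
  // implicitly declared parameter type is 'S &' instead.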
assert(ClassDecl->needsImplicitCopyConstructor()); DeclaringSpecialMember DSM(*this, ClassDecl, CXXCopyConstructor); if (DSM.isAlreadyBeingDeclared()) return nullptr; QualType ClassType = Context.getTypeDeclType(ClassDecl); QualType ArgType = ClassType; bool Const = ClassDecl->implicitCopyConstructorHasConstParam(); if (Const) ArgType = ArgType.withConst(); ArgType = Context.getLValueReferenceType(ArgType); bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl, CXXCopyConstructor, Const); DeclarationName Name = Context.DeclarationNames.getCXXConstructorName( Context.getCanonicalType(ClassType)); SourceLocation ClassLoc = ClassDecl->getLocation(); DeclarationNameInfo NameInfo(Name, ClassLoc); // An implicitly-declared copy constructor is an inline public // member of its class. CXXConstructorDecl *CopyConstructor = CXXConstructorDecl::Create( Context, ClassDecl, ClassLoc, NameInfo, QualType(), /*TInfo=*/nullptr, /*isExplicit=*/false, /*isInline=*/true, /*isImplicitlyDeclared=*/true, Constexpr); CopyConstructor->setAccess(AS_public); CopyConstructor->setDefaulted(); if (getLangOpts().CUDA) { inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXCopyConstructor, CopyConstructor, /* ConstRHS */ Const, /* Diagnose */ false); } // Build an exception specification pointing back at this member. FunctionProtoType::ExtProtoInfo EPI = getImplicitMethodEPI(*this, CopyConstructor); CopyConstructor->setType( Context.getFunctionType(Context.VoidTy, ArgType, EPI)); // Add the parameter to the constructor. ParmVarDecl *FromParam = ParmVarDecl::Create(Context, CopyConstructor, ClassLoc, ClassLoc, /*IdentifierInfo=*/nullptr, ArgType, /*TInfo=*/nullptr, SC_None, nullptr); CopyConstructor->setParams(FromParam); CopyConstructor->setTrivial( ClassDecl->needsOverloadResolutionForCopyConstructor() ? SpecialMemberIsTrivial(CopyConstructor, CXXCopyConstructor) : ClassDecl->hasTrivialCopyConstructor()); // Note that we have declared this constructor. ++ASTContext::NumImplicitCopyConstructorsDeclared; Scope *S = getScopeForContext(ClassDecl); CheckImplicitSpecialMemberDeclaration(S, CopyConstructor); - if (ShouldDeleteSpecialMember(CopyConstructor, CXXCopyConstructor)) + if (ShouldDeleteSpecialMember(CopyConstructor, CXXCopyConstructor)) { + ClassDecl->setImplicitCopyConstructorIsDeleted(); SetDeclDeleted(CopyConstructor, ClassLoc); + } if (S) PushOnScopeChains(CopyConstructor, S, false); ClassDecl->addDecl(CopyConstructor); return CopyConstructor; } void Sema::DefineImplicitCopyConstructor(SourceLocation CurrentLocation, CXXConstructorDecl *CopyConstructor) { assert((CopyConstructor->isDefaulted() && CopyConstructor->isCopyConstructor() && !CopyConstructor->doesThisDeclarationHaveABody() && !CopyConstructor->isDeleted()) && "DefineImplicitCopyConstructor - call it for implicit copy ctor"); if (CopyConstructor->willHaveBody() || CopyConstructor->isInvalidDecl()) return; CXXRecordDecl *ClassDecl = CopyConstructor->getParent(); assert(ClassDecl && "DefineImplicitCopyConstructor - invalid constructor"); SynthesizedFunctionScope Scope(*this, CopyConstructor); // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, CopyConstructor->getType()->castAs()); MarkVTableUsed(CurrentLocation, ClassDecl); // Add a context note for diagnostics produced after this point. 
Scope.addContextNote(CurrentLocation); // C++11 [class.copy]p7: // The [definition of an implicitly declared copy constructor] is // deprecated if the class has a user-declared copy assignment operator // or a user-declared destructor. if (getLangOpts().CPlusPlus11 && CopyConstructor->isImplicit()) diagnoseDeprecatedCopyOperation(*this, CopyConstructor); if (SetCtorInitializers(CopyConstructor, /*AnyErrors=*/false)) { CopyConstructor->setInvalidDecl(); } else { SourceLocation Loc = CopyConstructor->getLocEnd().isValid() ? CopyConstructor->getLocEnd() : CopyConstructor->getLocation(); Sema::CompoundScopeRAII CompoundScope(*this); CopyConstructor->setBody( ActOnCompoundStmt(Loc, Loc, None, /*isStmtExpr=*/false).getAs()); CopyConstructor->markUsed(Context); } if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(CopyConstructor); } } CXXConstructorDecl *Sema::DeclareImplicitMoveConstructor( CXXRecordDecl *ClassDecl) { assert(ClassDecl->needsImplicitMoveConstructor()); DeclaringSpecialMember DSM(*this, ClassDecl, CXXMoveConstructor); if (DSM.isAlreadyBeingDeclared()) return nullptr; QualType ClassType = Context.getTypeDeclType(ClassDecl); QualType ArgType = Context.getRValueReferenceType(ClassType); bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl, CXXMoveConstructor, false); DeclarationName Name = Context.DeclarationNames.getCXXConstructorName( Context.getCanonicalType(ClassType)); SourceLocation ClassLoc = ClassDecl->getLocation(); DeclarationNameInfo NameInfo(Name, ClassLoc); // C++11 [class.copy]p11: // An implicitly-declared copy/move constructor is an inline public // member of its class. CXXConstructorDecl *MoveConstructor = CXXConstructorDecl::Create( Context, ClassDecl, ClassLoc, NameInfo, QualType(), /*TInfo=*/nullptr, /*isExplicit=*/false, /*isInline=*/true, /*isImplicitlyDeclared=*/true, Constexpr); MoveConstructor->setAccess(AS_public); MoveConstructor->setDefaulted(); if (getLangOpts().CUDA) { inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXMoveConstructor, MoveConstructor, /* ConstRHS */ false, /* Diagnose */ false); } // Build an exception specification pointing back at this member. FunctionProtoType::ExtProtoInfo EPI = getImplicitMethodEPI(*this, MoveConstructor); MoveConstructor->setType( Context.getFunctionType(Context.VoidTy, ArgType, EPI)); // Add the parameter to the constructor. ParmVarDecl *FromParam = ParmVarDecl::Create(Context, MoveConstructor, ClassLoc, ClassLoc, /*IdentifierInfo=*/nullptr, ArgType, /*TInfo=*/nullptr, SC_None, nullptr); MoveConstructor->setParams(FromParam); MoveConstructor->setTrivial( ClassDecl->needsOverloadResolutionForMoveConstructor() ? SpecialMemberIsTrivial(MoveConstructor, CXXMoveConstructor) : ClassDecl->hasTrivialMoveConstructor()); // Note that we have declared this constructor. 
++ASTContext::NumImplicitMoveConstructorsDeclared; Scope *S = getScopeForContext(ClassDecl); CheckImplicitSpecialMemberDeclaration(S, MoveConstructor); if (ShouldDeleteSpecialMember(MoveConstructor, CXXMoveConstructor)) { ClassDecl->setImplicitMoveConstructorIsDeleted(); SetDeclDeleted(MoveConstructor, ClassLoc); } if (S) PushOnScopeChains(MoveConstructor, S, false); ClassDecl->addDecl(MoveConstructor); return MoveConstructor; } void Sema::DefineImplicitMoveConstructor(SourceLocation CurrentLocation, CXXConstructorDecl *MoveConstructor) { assert((MoveConstructor->isDefaulted() && MoveConstructor->isMoveConstructor() && !MoveConstructor->doesThisDeclarationHaveABody() && !MoveConstructor->isDeleted()) && "DefineImplicitMoveConstructor - call it for implicit move ctor"); if (MoveConstructor->willHaveBody() || MoveConstructor->isInvalidDecl()) return; CXXRecordDecl *ClassDecl = MoveConstructor->getParent(); assert(ClassDecl && "DefineImplicitMoveConstructor - invalid constructor"); SynthesizedFunctionScope Scope(*this, MoveConstructor); // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, MoveConstructor->getType()->castAs()); MarkVTableUsed(CurrentLocation, ClassDecl); // Add a context note for diagnostics produced after this point. Scope.addContextNote(CurrentLocation); if (SetCtorInitializers(MoveConstructor, /*AnyErrors=*/false)) { MoveConstructor->setInvalidDecl(); } else { SourceLocation Loc = MoveConstructor->getLocEnd().isValid() ? MoveConstructor->getLocEnd() : MoveConstructor->getLocation(); Sema::CompoundScopeRAII CompoundScope(*this); MoveConstructor->setBody(ActOnCompoundStmt( Loc, Loc, None, /*isStmtExpr=*/ false).getAs()); MoveConstructor->markUsed(Context); } if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(MoveConstructor); } } bool Sema::isImplicitlyDeleted(FunctionDecl *FD) { return FD->isDeleted() && FD->isDefaulted() && isa(FD); } void Sema::DefineImplicitLambdaToFunctionPointerConversion( SourceLocation CurrentLocation, CXXConversionDecl *Conv) { SynthesizedFunctionScope Scope(*this, Conv); CXXRecordDecl *Lambda = Conv->getParent(); CXXMethodDecl *CallOp = Lambda->getLambdaCallOperator(); // If we are defining a specialization of a conversion to function-ptr // cache the deduced template arguments for this specialization // so that we can use them to retrieve the corresponding call-operator // and static-invoker. const TemplateArgumentList *DeducedTemplateArgs = nullptr; // Retrieve the corresponding call-operator specialization. if (Lambda->isGenericLambda()) { assert(Conv->isFunctionTemplateSpecialization()); FunctionTemplateDecl *CallOpTemplate = CallOp->getDescribedFunctionTemplate(); DeducedTemplateArgs = Conv->getTemplateSpecializationArgs(); void *InsertPos = nullptr; FunctionDecl *CallOpSpec = CallOpTemplate->findSpecialization( DeducedTemplateArgs->asArray(), InsertPos); assert(CallOpSpec && "Conversion operator must have a corresponding call operator"); CallOp = cast(CallOpSpec); } // Mark the call operator referenced (and add to pending instantiations // if necessary). // For both the conversion and static-invoker template specializations // we construct their body's in this function, so no need to add them // to the PendingInstantiations. MarkFunctionReferenced(CurrentLocation, CallOp); // Retrieve the static invoker... CXXMethodDecl *Invoker = Lambda->getLambdaStaticInvoker(); // ... and get the corresponding specialization for a generic lambda. 
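  // Editorial illustration (not part of the original source): this is the
  // machinery behind conversions such as
  //   auto L = [](int X) { return X + 1; };
  //   int (*Fp)(int) = L;   // uses the conversion function defined here
  // For a generic lambda, e.g. [](auto X) { return X; }, the conversion
  // function and the static invoker are themselves templates, so we look up
  // the specialization matching the deduced arguments below.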
if (Lambda->isGenericLambda()) { assert(DeducedTemplateArgs && "Must have deduced template arguments from Conversion Operator"); FunctionTemplateDecl *InvokeTemplate = Invoker->getDescribedFunctionTemplate(); void *InsertPos = nullptr; FunctionDecl *InvokeSpec = InvokeTemplate->findSpecialization( DeducedTemplateArgs->asArray(), InsertPos); assert(InvokeSpec && "Must have a corresponding static invoker specialization"); Invoker = cast(InvokeSpec); } // Construct the body of the conversion function { return __invoke; }. Expr *FunctionRef = BuildDeclRefExpr(Invoker, Invoker->getType(), VK_LValue, Conv->getLocation()).get(); assert(FunctionRef && "Can't refer to __invoke function?"); Stmt *Return = BuildReturnStmt(Conv->getLocation(), FunctionRef).get(); Conv->setBody(new (Context) CompoundStmt(Context, Return, Conv->getLocation(), Conv->getLocation())); Conv->markUsed(Context); Conv->setReferenced(); // Fill in the __invoke function with a dummy implementation. IR generation // will fill in the actual details. Invoker->markUsed(Context); Invoker->setReferenced(); Invoker->setBody(new (Context) CompoundStmt(Conv->getLocation())); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(Conv); L->CompletedImplicitDefinition(Invoker); } } void Sema::DefineImplicitLambdaToBlockPointerConversion( SourceLocation CurrentLocation, CXXConversionDecl *Conv) { assert(!Conv->getParent()->isGenericLambda()); SynthesizedFunctionScope Scope(*this, Conv); // Copy-initialize the lambda object as needed to capture it. Expr *This = ActOnCXXThis(CurrentLocation).get(); Expr *DerefThis =CreateBuiltinUnaryOp(CurrentLocation, UO_Deref, This).get(); ExprResult BuildBlock = BuildBlockForLambdaConversion(CurrentLocation, Conv->getLocation(), Conv, DerefThis); // If we're not under ARC, make sure we still get the _Block_copy/autorelease // behavior. Note that only the general conversion function does this // (since it's unusable otherwise); in the case where we inline the // block literal, it has block literal lifetime semantics. if (!BuildBlock.isInvalid() && !getLangOpts().ObjCAutoRefCount) BuildBlock = ImplicitCastExpr::Create(Context, BuildBlock.get()->getType(), CK_CopyAndAutoreleaseBlockObject, BuildBlock.get(), nullptr, VK_RValue); if (BuildBlock.isInvalid()) { Diag(CurrentLocation, diag::note_lambda_to_block_conv); Conv->setInvalidDecl(); return; } // Create the return statement that returns the block from the conversion // function. StmtResult Return = BuildReturnStmt(Conv->getLocation(), BuildBlock.get()); if (Return.isInvalid()) { Diag(CurrentLocation, diag::note_lambda_to_block_conv); Conv->setInvalidDecl(); return; } // Set the body of the conversion function. Stmt *ReturnS = Return.get(); Conv->setBody(new (Context) CompoundStmt(Context, ReturnS, Conv->getLocation(), Conv->getLocation())); Conv->markUsed(Context); // We're done; notify the mutation listener, if any. if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(Conv); } } /// \brief Determine whether the given list arguments contains exactly one /// "real" (non-default) argument. 
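///
/// For illustration (hypothetical example, not from the original source):
/// for a copy constructor declared as
///   S(const S &Other, int Flags = 0);
/// a construct expression that supplies only 'Other' still has exactly one
/// "real" argument, even though a default-argument expression for 'Flags'
/// may also be present in the argument list.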
static bool hasOneRealArgument(MultiExprArg Args) {
  switch (Args.size()) {
  case 0:
    return false;

  default:
    if (!Args[1]->isDefaultArgument())
      return false;

    // fall through
  case 1:
    return !Args[0]->isDefaultArgument();
  }

  return false;
}

ExprResult
Sema::BuildCXXConstructExpr(SourceLocation ConstructLoc, QualType DeclInitType,
                            NamedDecl *FoundDecl,
                            CXXConstructorDecl *Constructor,
                            MultiExprArg ExprArgs,
                            bool HadMultipleCandidates,
                            bool IsListInitialization,
                            bool IsStdInitListInitialization,
                            bool RequiresZeroInit,
                            unsigned ConstructKind,
                            SourceRange ParenRange) {
  bool Elidable = false;

  // C++0x [class.copy]p34:
  //   When certain criteria are met, an implementation is allowed to
  //   omit the copy/move construction of a class object, even if the
  //   copy/move constructor and/or destructor for the object have
  //   side effects. [...]
  //     - when a temporary class object that has not been bound to a
  //       reference (12.2) would be copied/moved to a class object
  //       with the same cv-unqualified type, the copy/move operation
  //       can be omitted by constructing the temporary object
  //       directly into the target of the omitted copy/move
  if (ConstructKind == CXXConstructExpr::CK_Complete && Constructor &&
      Constructor->isCopyOrMoveConstructor() && hasOneRealArgument(ExprArgs)) {
    Expr *SubExpr = ExprArgs[0];
    Elidable = SubExpr->isTemporaryObject(
        Context, cast<CXXRecordDecl>(FoundDecl->getDeclContext()));
  }

  return BuildCXXConstructExpr(ConstructLoc, DeclInitType, FoundDecl,
                               Constructor, Elidable, ExprArgs,
                               HadMultipleCandidates, IsListInitialization,
                               IsStdInitListInitialization, RequiresZeroInit,
                               ConstructKind, ParenRange);
}

ExprResult
Sema::BuildCXXConstructExpr(SourceLocation ConstructLoc, QualType DeclInitType,
                            NamedDecl *FoundDecl,
                            CXXConstructorDecl *Constructor,
                            bool Elidable,
                            MultiExprArg ExprArgs,
                            bool HadMultipleCandidates,
                            bool IsListInitialization,
                            bool IsStdInitListInitialization,
                            bool RequiresZeroInit,
                            unsigned ConstructKind,
                            SourceRange ParenRange) {
  if (auto *Shadow = dyn_cast<ConstructorUsingShadowDecl>(FoundDecl)) {
    Constructor = findInheritingConstructor(ConstructLoc, Constructor, Shadow);
    if (DiagnoseUseOfDecl(Constructor, ConstructLoc))
      return ExprError();
  }

  return BuildCXXConstructExpr(
      ConstructLoc, DeclInitType, Constructor, Elidable, ExprArgs,
      HadMultipleCandidates, IsListInitialization, IsStdInitListInitialization,
      RequiresZeroInit, ConstructKind, ParenRange);
}

/// BuildCXXConstructExpr - Creates a complete call to a constructor,
/// including handling of its default argument expressions.
ExprResult
Sema::BuildCXXConstructExpr(SourceLocation ConstructLoc, QualType DeclInitType,
                            CXXConstructorDecl *Constructor, bool Elidable,
                            MultiExprArg ExprArgs,
                            bool HadMultipleCandidates,
                            bool IsListInitialization,
                            bool IsStdInitListInitialization,
                            bool RequiresZeroInit,
                            unsigned ConstructKind,
                            SourceRange ParenRange) {
  assert(declaresSameEntity(
             Constructor->getParent(),
             DeclInitType->getBaseElementTypeUnsafe()->getAsCXXRecordDecl()) &&
         "given constructor for wrong type");
  MarkFunctionReferenced(ConstructLoc, Constructor);
  if (getLangOpts().CUDA && !CheckCUDACall(ConstructLoc, Constructor))
    return ExprError();

  return CXXConstructExpr::Create(
      Context, DeclInitType, ConstructLoc, Constructor, Elidable, ExprArgs,
      HadMultipleCandidates, IsListInitialization, IsStdInitListInitialization,
      RequiresZeroInit,
      static_cast<CXXConstructExpr::ConstructionKind>(ConstructKind),
      ParenRange);
}

ExprResult Sema::BuildCXXDefaultInitExpr(SourceLocation Loc, FieldDecl *Field) {
  assert(Field->hasInClassInitializer());

  // If we already have the in-class initializer nothing needs to be done.
if (Field->getInClassInitializer()) return CXXDefaultInitExpr::Create(Context, Loc, Field); // If we might have already tried and failed to instantiate, don't try again. if (Field->isInvalidDecl()) return ExprError(); // Maybe we haven't instantiated the in-class initializer. Go check the // pattern FieldDecl to see if it has one. CXXRecordDecl *ParentRD = cast(Field->getParent()); if (isTemplateInstantiation(ParentRD->getTemplateSpecializationKind())) { CXXRecordDecl *ClassPattern = ParentRD->getTemplateInstantiationPattern(); DeclContext::lookup_result Lookup = ClassPattern->lookup(Field->getDeclName()); // Lookup can return at most two results: the pattern for the field, or the // injected class name of the parent record. No other member can have the // same name as the field. // In modules mode, lookup can return multiple results (coming from // different modules). assert((getLangOpts().Modules || (!Lookup.empty() && Lookup.size() <= 2)) && "more than two lookup results for field name"); FieldDecl *Pattern = dyn_cast(Lookup[0]); if (!Pattern) { assert(isa(Lookup[0]) && "cannot have other non-field member with same name"); for (auto L : Lookup) if (isa(L)) { Pattern = cast(L); break; } assert(Pattern && "We must have set the Pattern!"); } if (InstantiateInClassInitializer(Loc, Field, Pattern, getTemplateInstantiationArgs(Field))) { // Don't diagnose this again. Field->setInvalidDecl(); return ExprError(); } return CXXDefaultInitExpr::Create(Context, Loc, Field); } // DR1351: // If the brace-or-equal-initializer of a non-static data member // invokes a defaulted default constructor of its class or of an // enclosing class in a potentially evaluated subexpression, the // program is ill-formed. // // This resolution is unworkable: the exception specification of the // default constructor can be needed in an unevaluated context, in // particular, in the operand of a noexcept-expression, and we can be // unable to compute an exception specification for an enclosed class. // // Any attempt to resolve the exception specification of a defaulted default // constructor before the initializer is lexically complete will ultimately // come here at which point we can diagnose it. RecordDecl *OutermostClass = ParentRD->getOuterLexicalRecordContext(); Diag(Loc, diag::err_in_class_initializer_not_yet_parsed) << OutermostClass << Field; Diag(Field->getLocEnd(), diag::note_in_class_initializer_not_yet_parsed); // Recover by marking the field invalid, unless we're in a SFINAE context. if (!isSFINAEContext()) Field->setInvalidDecl(); return ExprError(); } void Sema::FinalizeVarWithDestructor(VarDecl *VD, const RecordType *Record) { if (VD->isInvalidDecl()) return; CXXRecordDecl *ClassDecl = cast(Record->getDecl()); if (ClassDecl->isInvalidDecl()) return; if (ClassDecl->hasIrrelevantDestructor()) return; if (ClassDecl->isDependentContext()) return; CXXDestructorDecl *Destructor = LookupDestructor(ClassDecl); MarkFunctionReferenced(VD->getLocation(), Destructor); CheckDestructorAccess(VD->getLocation(), Destructor, PDiag(diag::err_access_dtor_var) << VD->getDeclName() << VD->getType()); DiagnoseUseOfDecl(Destructor, VD->getLocation()); if (Destructor->isTrivial()) return; if (!VD->hasGlobalStorage()) return; // Emit warning for non-trivial dtor in global scope (a real global, // class-static, function-static). 
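  // Illustrative example (hypothetical, not from the original source): a
  // namespace-scope object such as
  //   std::string Banner = "hello";
  // has a non-trivial destructor that must run at program exit, which is what
  // the exit-time / global destructor warnings emitted below are about.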
Diag(VD->getLocation(), diag::warn_exit_time_destructor); // TODO: this should be re-enabled for static locals by !CXAAtExit if (!VD->isStaticLocal()) Diag(VD->getLocation(), diag::warn_global_destructor); } /// \brief Given a constructor and the set of arguments provided for the /// constructor, convert the arguments and add any required default arguments /// to form a proper call to this constructor. /// /// \returns true if an error occurred, false otherwise. bool Sema::CompleteConstructorCall(CXXConstructorDecl *Constructor, MultiExprArg ArgsPtr, SourceLocation Loc, SmallVectorImpl &ConvertedArgs, bool AllowExplicit, bool IsListInitialization) { // FIXME: This duplicates a lot of code from Sema::ConvertArgumentsForCall. unsigned NumArgs = ArgsPtr.size(); Expr **Args = ArgsPtr.data(); const FunctionProtoType *Proto = Constructor->getType()->getAs(); assert(Proto && "Constructor without a prototype?"); unsigned NumParams = Proto->getNumParams(); // If too few arguments are available, we'll fill in the rest with defaults. if (NumArgs < NumParams) ConvertedArgs.reserve(NumParams); else ConvertedArgs.reserve(NumArgs); VariadicCallType CallType = Proto->isVariadic() ? VariadicConstructor : VariadicDoesNotApply; SmallVector AllArgs; bool Invalid = GatherArgumentsForCall(Loc, Constructor, Proto, 0, llvm::makeArrayRef(Args, NumArgs), AllArgs, CallType, AllowExplicit, IsListInitialization); ConvertedArgs.append(AllArgs.begin(), AllArgs.end()); DiagnoseSentinelCalls(Constructor, Loc, AllArgs); CheckConstructorCall(Constructor, llvm::makeArrayRef(AllArgs.data(), AllArgs.size()), Proto, Loc); return Invalid; } static inline bool CheckOperatorNewDeleteDeclarationScope(Sema &SemaRef, const FunctionDecl *FnDecl) { const DeclContext *DC = FnDecl->getDeclContext()->getRedeclContext(); if (isa(DC)) { return SemaRef.Diag(FnDecl->getLocation(), diag::err_operator_new_delete_declared_in_namespace) << FnDecl->getDeclName(); } if (isa(DC) && FnDecl->getStorageClass() == SC_Static) { return SemaRef.Diag(FnDecl->getLocation(), diag::err_operator_new_delete_declared_static) << FnDecl->getDeclName(); } return false; } static inline bool CheckOperatorNewDeleteTypes(Sema &SemaRef, const FunctionDecl *FnDecl, CanQualType ExpectedResultType, CanQualType ExpectedFirstParamType, unsigned DependentParamTypeDiag, unsigned InvalidParamTypeDiag) { QualType ResultType = FnDecl->getType()->getAs()->getReturnType(); // Check that the result type is not dependent. if (ResultType->isDependentType()) return SemaRef.Diag(FnDecl->getLocation(), diag::err_operator_new_delete_dependent_result_type) << FnDecl->getDeclName() << ExpectedResultType; // Check that the result type is what we expect. if (SemaRef.Context.getCanonicalType(ResultType) != ExpectedResultType) return SemaRef.Diag(FnDecl->getLocation(), diag::err_operator_new_delete_invalid_result_type) << FnDecl->getDeclName() << ExpectedResultType; // A function template must have at least 2 parameters. if (FnDecl->getDescribedFunctionTemplate() && FnDecl->getNumParams() < 2) return SemaRef.Diag(FnDecl->getLocation(), diag::err_operator_new_delete_template_too_few_parameters) << FnDecl->getDeclName(); // The function decl must have at least 1 parameter. if (FnDecl->getNumParams() == 0) return SemaRef.Diag(FnDecl->getLocation(), diag::err_operator_new_delete_too_few_parameters) << FnDecl->getDeclName(); // Check the first parameter type is not dependent. 
QualType FirstParamType = FnDecl->getParamDecl(0)->getType(); if (FirstParamType->isDependentType()) return SemaRef.Diag(FnDecl->getLocation(), DependentParamTypeDiag) << FnDecl->getDeclName() << ExpectedFirstParamType; // Check that the first parameter type is what we expect. if (SemaRef.Context.getCanonicalType(FirstParamType).getUnqualifiedType() != ExpectedFirstParamType) return SemaRef.Diag(FnDecl->getLocation(), InvalidParamTypeDiag) << FnDecl->getDeclName() << ExpectedFirstParamType; return false; } static bool CheckOperatorNewDeclaration(Sema &SemaRef, const FunctionDecl *FnDecl) { // C++ [basic.stc.dynamic.allocation]p1: // A program is ill-formed if an allocation function is declared in a // namespace scope other than global scope or declared static in global // scope. if (CheckOperatorNewDeleteDeclarationScope(SemaRef, FnDecl)) return true; CanQualType SizeTy = SemaRef.Context.getCanonicalType(SemaRef.Context.getSizeType()); // C++ [basic.stc.dynamic.allocation]p1: // The return type shall be void*. The first parameter shall have type // std::size_t. if (CheckOperatorNewDeleteTypes(SemaRef, FnDecl, SemaRef.Context.VoidPtrTy, SizeTy, diag::err_operator_new_dependent_param_type, diag::err_operator_new_param_type)) return true; // C++ [basic.stc.dynamic.allocation]p1: // The first parameter shall not have an associated default argument. if (FnDecl->getParamDecl(0)->hasDefaultArg()) return SemaRef.Diag(FnDecl->getLocation(), diag::err_operator_new_default_arg) << FnDecl->getDeclName() << FnDecl->getParamDecl(0)->getDefaultArgRange(); return false; } static bool CheckOperatorDeleteDeclaration(Sema &SemaRef, FunctionDecl *FnDecl) { // C++ [basic.stc.dynamic.deallocation]p1: // A program is ill-formed if deallocation functions are declared in a // namespace scope other than global scope or declared static in global // scope. if (CheckOperatorNewDeleteDeclarationScope(SemaRef, FnDecl)) return true; // C++ [basic.stc.dynamic.deallocation]p2: // Each deallocation function shall return void and its first parameter // shall be void*. if (CheckOperatorNewDeleteTypes(SemaRef, FnDecl, SemaRef.Context.VoidTy, SemaRef.Context.VoidPtrTy, diag::err_operator_delete_dependent_param_type, diag::err_operator_delete_param_type)) return true; return false; } /// CheckOverloadedOperatorDeclaration - Check whether the declaration /// of this overloaded operator is well-formed. If so, returns false; /// otherwise, emits appropriate diagnostics and returns true. bool Sema::CheckOverloadedOperatorDeclaration(FunctionDecl *FnDecl) { assert(FnDecl && FnDecl->isOverloadedOperator() && "Expected an overloaded operator declaration"); OverloadedOperatorKind Op = FnDecl->getOverloadedOperator(); // C++ [over.oper]p5: // The allocation and deallocation functions, operator new, // operator new[], operator delete and operator delete[], are // described completely in 3.7.3. The attributes and restrictions // found in the rest of this subclause do not apply to them unless // explicitly stated in 3.7.3. if (Op == OO_Delete || Op == OO_Array_Delete) return CheckOperatorDeleteDeclaration(*this, FnDecl); if (Op == OO_New || Op == OO_Array_New) return CheckOperatorNewDeclaration(*this, FnDecl); // C++ [over.oper]p6: // An operator function shall either be a non-static member // function or be a non-member function and have at least one // parameter whose type is a class, a reference to a class, an // enumeration, or a reference to an enumeration. 
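  // For illustration (hypothetical declarations, not from the original
  // source): the rule quoted above accepts a non-member overload such as
  //   struct S {};
  //   S operator+(S, S);        // fine: has a class-type parameter
  // but rejects
  //   int operator+(int, int);  // no class/enum parameter, diagnosed below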
if (CXXMethodDecl *MethodDecl = dyn_cast(FnDecl)) { if (MethodDecl->isStatic()) return Diag(FnDecl->getLocation(), diag::err_operator_overload_static) << FnDecl->getDeclName(); } else { bool ClassOrEnumParam = false; for (auto Param : FnDecl->parameters()) { QualType ParamType = Param->getType().getNonReferenceType(); if (ParamType->isDependentType() || ParamType->isRecordType() || ParamType->isEnumeralType()) { ClassOrEnumParam = true; break; } } if (!ClassOrEnumParam) return Diag(FnDecl->getLocation(), diag::err_operator_overload_needs_class_or_enum) << FnDecl->getDeclName(); } // C++ [over.oper]p8: // An operator function cannot have default arguments (8.3.6), // except where explicitly stated below. // // Only the function-call operator allows default arguments // (C++ [over.call]p1). if (Op != OO_Call) { for (auto Param : FnDecl->parameters()) { if (Param->hasDefaultArg()) return Diag(Param->getLocation(), diag::err_operator_overload_default_arg) << FnDecl->getDeclName() << Param->getDefaultArgRange(); } } static const bool OperatorUses[NUM_OVERLOADED_OPERATORS][3] = { { false, false, false } #define OVERLOADED_OPERATOR(Name,Spelling,Token,Unary,Binary,MemberOnly) \ , { Unary, Binary, MemberOnly } #include "clang/Basic/OperatorKinds.def" }; bool CanBeUnaryOperator = OperatorUses[Op][0]; bool CanBeBinaryOperator = OperatorUses[Op][1]; bool MustBeMemberOperator = OperatorUses[Op][2]; // C++ [over.oper]p8: // [...] Operator functions cannot have more or fewer parameters // than the number required for the corresponding operator, as // described in the rest of this subclause. unsigned NumParams = FnDecl->getNumParams() + (isa(FnDecl)? 1 : 0); if (Op != OO_Call && ((NumParams == 1 && !CanBeUnaryOperator) || (NumParams == 2 && !CanBeBinaryOperator) || (NumParams < 1) || (NumParams > 2))) { // We have the wrong number of parameters. unsigned ErrorKind; if (CanBeUnaryOperator && CanBeBinaryOperator) { ErrorKind = 2; // 2 -> unary or binary. } else if (CanBeUnaryOperator) { ErrorKind = 0; // 0 -> unary } else { assert(CanBeBinaryOperator && "All non-call overloaded operators are unary or binary!"); ErrorKind = 1; // 1 -> binary } return Diag(FnDecl->getLocation(), diag::err_operator_overload_must_be) << FnDecl->getDeclName() << NumParams << ErrorKind; } // Overloaded operators other than operator() cannot be variadic. if (Op != OO_Call && FnDecl->getType()->getAs()->isVariadic()) { return Diag(FnDecl->getLocation(), diag::err_operator_overload_variadic) << FnDecl->getDeclName(); } // Some operators must be non-static member functions. if (MustBeMemberOperator && !isa(FnDecl)) { return Diag(FnDecl->getLocation(), diag::err_operator_overload_must_be_member) << FnDecl->getDeclName(); } // C++ [over.inc]p1: // The user-defined function called operator++ implements the // prefix and postfix ++ operator. If this function is a member // function with no parameters, or a non-member function with one // parameter of class or enumeration type, it defines the prefix // increment operator ++ for objects of that type. If the function // is a member function with one parameter (which shall be of type // int) or a non-member function with two parameters (the second // of which shall be of type int), it defines the postfix // increment operator ++ for objects of that type. 
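  // For illustration (hypothetical example): in
  //   struct X {
  //     X &operator++();    // prefix ++x
  //     X operator++(int);  // postfix x++; the dummy parameter must be 'int'
  //   };
  // a postfix operator declared with, say, a 'long' dummy parameter is
  // rejected by the check below.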
if ((Op == OO_PlusPlus || Op == OO_MinusMinus) && NumParams == 2) { ParmVarDecl *LastParam = FnDecl->getParamDecl(FnDecl->getNumParams() - 1); QualType ParamType = LastParam->getType(); if (!ParamType->isSpecificBuiltinType(BuiltinType::Int) && !ParamType->isDependentType()) return Diag(LastParam->getLocation(), diag::err_operator_overload_post_incdec_must_be_int) << LastParam->getType() << (Op == OO_MinusMinus); } return false; } static bool checkLiteralOperatorTemplateParameterList(Sema &SemaRef, FunctionTemplateDecl *TpDecl) { TemplateParameterList *TemplateParams = TpDecl->getTemplateParameters(); // Must have one or two template parameters. if (TemplateParams->size() == 1) { NonTypeTemplateParmDecl *PmDecl = dyn_cast(TemplateParams->getParam(0)); // The template parameter must be a char parameter pack. if (PmDecl && PmDecl->isTemplateParameterPack() && SemaRef.Context.hasSameType(PmDecl->getType(), SemaRef.Context.CharTy)) return false; } else if (TemplateParams->size() == 2) { TemplateTypeParmDecl *PmType = dyn_cast(TemplateParams->getParam(0)); NonTypeTemplateParmDecl *PmArgs = dyn_cast(TemplateParams->getParam(1)); // The second template parameter must be a parameter pack with the // first template parameter as its type. if (PmType && PmArgs && !PmType->isTemplateParameterPack() && PmArgs->isTemplateParameterPack()) { const TemplateTypeParmType *TArgs = PmArgs->getType()->getAs(); if (TArgs && TArgs->getDepth() == PmType->getDepth() && TArgs->getIndex() == PmType->getIndex()) { if (!SemaRef.inTemplateInstantiation()) SemaRef.Diag(TpDecl->getLocation(), diag::ext_string_literal_operator_template); return false; } } } SemaRef.Diag(TpDecl->getTemplateParameters()->getSourceRange().getBegin(), diag::err_literal_operator_template) << TpDecl->getTemplateParameters()->getSourceRange(); return true; } /// CheckLiteralOperatorDeclaration - Check whether the declaration /// of this literal operator function is well-formed. If so, returns /// false; otherwise, emits appropriate diagnostics and returns true. bool Sema::CheckLiteralOperatorDeclaration(FunctionDecl *FnDecl) { if (isa(FnDecl)) { Diag(FnDecl->getLocation(), diag::err_literal_operator_outside_namespace) << FnDecl->getDeclName(); return true; } if (FnDecl->isExternC()) { Diag(FnDecl->getLocation(), diag::err_literal_operator_extern_c); if (const LinkageSpecDecl *LSD = FnDecl->getDeclContext()->getExternCContext()) Diag(LSD->getExternLoc(), diag::note_extern_c_begins_here); return true; } // This might be the definition of a literal operator template. FunctionTemplateDecl *TpDecl = FnDecl->getDescribedFunctionTemplate(); // This might be a specialization of a literal operator template. if (!TpDecl) TpDecl = FnDecl->getPrimaryTemplate(); // template type operator "" name() and // template type operator "" name() are the only valid // template signatures, and the only valid signatures with no parameters. if (TpDecl) { if (FnDecl->param_size() != 0) { Diag(FnDecl->getLocation(), diag::err_literal_operator_template_with_params); return true; } if (checkLiteralOperatorTemplateParameterList(*this, TpDecl)) return true; } else if (FnDecl->param_size() == 1) { const ParmVarDecl *Param = FnDecl->getParamDecl(0); QualType ParamType = Param->getType().getUnqualifiedType(); // Only unsigned long long int, long double, any character type, and const // char * are allowed as the only parameters. 
if (ParamType->isSpecificBuiltinType(BuiltinType::ULongLong) || ParamType->isSpecificBuiltinType(BuiltinType::LongDouble) || Context.hasSameType(ParamType, Context.CharTy) || Context.hasSameType(ParamType, Context.WideCharTy) || Context.hasSameType(ParamType, Context.Char16Ty) || Context.hasSameType(ParamType, Context.Char32Ty)) { } else if (const PointerType *Ptr = ParamType->getAs()) { QualType InnerType = Ptr->getPointeeType(); // Pointer parameter must be a const char *. if (!(Context.hasSameType(InnerType.getUnqualifiedType(), Context.CharTy) && InnerType.isConstQualified() && !InnerType.isVolatileQualified())) { Diag(Param->getSourceRange().getBegin(), diag::err_literal_operator_param) << ParamType << "'const char *'" << Param->getSourceRange(); return true; } } else if (ParamType->isRealFloatingType()) { Diag(Param->getSourceRange().getBegin(), diag::err_literal_operator_param) << ParamType << Context.LongDoubleTy << Param->getSourceRange(); return true; } else if (ParamType->isIntegerType()) { Diag(Param->getSourceRange().getBegin(), diag::err_literal_operator_param) << ParamType << Context.UnsignedLongLongTy << Param->getSourceRange(); return true; } else { Diag(Param->getSourceRange().getBegin(), diag::err_literal_operator_invalid_param) << ParamType << Param->getSourceRange(); return true; } } else if (FnDecl->param_size() == 2) { FunctionDecl::param_iterator Param = FnDecl->param_begin(); // First, verify that the first parameter is correct. QualType FirstParamType = (*Param)->getType().getUnqualifiedType(); // Two parameter function must have a pointer to const as a // first parameter; let's strip those qualifiers. const PointerType *PT = FirstParamType->getAs(); if (!PT) { Diag((*Param)->getSourceRange().getBegin(), diag::err_literal_operator_param) << FirstParamType << "'const char *'" << (*Param)->getSourceRange(); return true; } QualType PointeeType = PT->getPointeeType(); // First parameter must be const if (!PointeeType.isConstQualified() || PointeeType.isVolatileQualified()) { Diag((*Param)->getSourceRange().getBegin(), diag::err_literal_operator_param) << FirstParamType << "'const char *'" << (*Param)->getSourceRange(); return true; } QualType InnerType = PointeeType.getUnqualifiedType(); // Only const char *, const wchar_t*, const char16_t*, and const char32_t* // are allowed as the first parameter to a two-parameter function if (!(Context.hasSameType(InnerType, Context.CharTy) || Context.hasSameType(InnerType, Context.WideCharTy) || Context.hasSameType(InnerType, Context.Char16Ty) || Context.hasSameType(InnerType, Context.Char32Ty))) { Diag((*Param)->getSourceRange().getBegin(), diag::err_literal_operator_param) << FirstParamType << "'const char *'" << (*Param)->getSourceRange(); return true; } // Move on to the second and final parameter. ++Param; // The second parameter must be a std::size_t. QualType SecondParamType = (*Param)->getType().getUnqualifiedType(); if (!Context.hasSameType(SecondParamType, Context.getSizeType())) { Diag((*Param)->getSourceRange().getBegin(), diag::err_literal_operator_param) << SecondParamType << Context.getSizeType() << (*Param)->getSourceRange(); return true; } } else { Diag(FnDecl->getLocation(), diag::err_literal_operator_bad_param_count); return true; } // Parameters are good. // A parameter-declaration-clause containing a default argument is not // equivalent to any of the permitted forms. 
for (auto Param : FnDecl->parameters()) { if (Param->hasDefaultArg()) { Diag(Param->getDefaultArgRange().getBegin(), diag::err_literal_operator_default_argument) << Param->getDefaultArgRange(); break; } } StringRef LiteralName = FnDecl->getDeclName().getCXXLiteralIdentifier()->getName(); if (LiteralName[0] != '_') { // C++11 [usrlit.suffix]p1: // Literal suffix identifiers that do not start with an underscore // are reserved for future standardization. Diag(FnDecl->getLocation(), diag::warn_user_literal_reserved) << StringLiteralParser::isValidUDSuffix(getLangOpts(), LiteralName); } return false; } /// ActOnStartLinkageSpecification - Parsed the beginning of a C++ /// linkage specification, including the language and (if present) /// the '{'. ExternLoc is the location of the 'extern', Lang is the /// language string literal. LBraceLoc, if valid, provides the location of /// the '{' brace. Otherwise, this linkage specification does not /// have any braces. Decl *Sema::ActOnStartLinkageSpecification(Scope *S, SourceLocation ExternLoc, Expr *LangStr, SourceLocation LBraceLoc) { StringLiteral *Lit = cast(LangStr); if (!Lit->isAscii()) { Diag(LangStr->getExprLoc(), diag::err_language_linkage_spec_not_ascii) << LangStr->getSourceRange(); return nullptr; } StringRef Lang = Lit->getString(); LinkageSpecDecl::LanguageIDs Language; if (Lang == "C") Language = LinkageSpecDecl::lang_c; else if (Lang == "C++") Language = LinkageSpecDecl::lang_cxx; else { Diag(LangStr->getExprLoc(), diag::err_language_linkage_spec_unknown) << LangStr->getSourceRange(); return nullptr; } // FIXME: Add all the various semantics of linkage specifications LinkageSpecDecl *D = LinkageSpecDecl::Create(Context, CurContext, ExternLoc, LangStr->getExprLoc(), Language, LBraceLoc.isValid()); CurContext->addDecl(D); PushDeclContext(S, D); return D; } /// ActOnFinishLinkageSpecification - Complete the definition of /// the C++ linkage specification LinkageSpec. If RBraceLoc is /// valid, it's the position of the closing '}' brace in a linkage /// specification that uses braces. Decl *Sema::ActOnFinishLinkageSpecification(Scope *S, Decl *LinkageSpec, SourceLocation RBraceLoc) { if (RBraceLoc.isValid()) { LinkageSpecDecl* LSDecl = cast(LinkageSpec); LSDecl->setRBraceLoc(RBraceLoc); } PopDeclContext(); return LinkageSpec; } Decl *Sema::ActOnEmptyDeclaration(Scope *S, AttributeList *AttrList, SourceLocation SemiLoc) { Decl *ED = EmptyDecl::Create(Context, CurContext, SemiLoc); // Attribute declarations appertain to empty declaration so we handle // them here. if (AttrList) ProcessDeclAttributeList(S, ED, AttrList); CurContext->addDecl(ED); return ED; } /// \brief Perform semantic analysis for the variable declaration that /// occurs within a C++ catch clause, returning the newly-created /// variable. VarDecl *Sema::BuildExceptionDeclaration(Scope *S, TypeSourceInfo *TInfo, SourceLocation StartLoc, SourceLocation Loc, IdentifierInfo *Name) { bool Invalid = false; QualType ExDeclType = TInfo->getType(); // Arrays and functions decay. if (ExDeclType->isArrayType()) ExDeclType = Context.getArrayDecayedType(ExDeclType); else if (ExDeclType->isFunctionType()) ExDeclType = Context.getPointerType(ExDeclType); // C++ 15.3p1: The exception-declaration shall not denote an incomplete type. // The exception-declaration shall not denote a pointer or reference to an // incomplete type, other than [cv] void*. // N2844 forbids rvalue references. 
if (!ExDeclType->isDependentType() && ExDeclType->isRValueReferenceType()) { Diag(Loc, diag::err_catch_rvalue_ref); Invalid = true; } if (ExDeclType->isVariablyModifiedType()) { Diag(Loc, diag::err_catch_variably_modified) << ExDeclType; Invalid = true; } QualType BaseType = ExDeclType; int Mode = 0; // 0 for direct type, 1 for pointer, 2 for reference unsigned DK = diag::err_catch_incomplete; if (const PointerType *Ptr = BaseType->getAs()) { BaseType = Ptr->getPointeeType(); Mode = 1; DK = diag::err_catch_incomplete_ptr; } else if (const ReferenceType *Ref = BaseType->getAs()) { // For the purpose of error recovery, we treat rvalue refs like lvalue refs. BaseType = Ref->getPointeeType(); Mode = 2; DK = diag::err_catch_incomplete_ref; } if (!Invalid && (Mode == 0 || !BaseType->isVoidType()) && !BaseType->isDependentType() && RequireCompleteType(Loc, BaseType, DK)) Invalid = true; if (!Invalid && !ExDeclType->isDependentType() && RequireNonAbstractType(Loc, ExDeclType, diag::err_abstract_type_in_decl, AbstractVariableType)) Invalid = true; // Only the non-fragile NeXT runtime currently supports C++ catches // of ObjC types, and no runtime supports catching ObjC types by value. if (!Invalid && getLangOpts().ObjC1) { QualType T = ExDeclType; if (const ReferenceType *RT = T->getAs()) T = RT->getPointeeType(); if (T->isObjCObjectType()) { Diag(Loc, diag::err_objc_object_catch); Invalid = true; } else if (T->isObjCObjectPointerType()) { // FIXME: should this be a test for macosx-fragile specifically? if (getLangOpts().ObjCRuntime.isFragile()) Diag(Loc, diag::warn_objc_pointer_cxx_catch_fragile); } } VarDecl *ExDecl = VarDecl::Create(Context, CurContext, StartLoc, Loc, Name, ExDeclType, TInfo, SC_None); ExDecl->setExceptionVariable(true); // In ARC, infer 'retaining' for variables of retainable type. if (getLangOpts().ObjCAutoRefCount && inferObjCARCLifetime(ExDecl)) Invalid = true; if (!Invalid && !ExDeclType->isDependentType()) { if (const RecordType *recordType = ExDeclType->getAs()) { // Insulate this from anything else we might currently be parsing. EnterExpressionEvaluationContext scope( *this, ExpressionEvaluationContext::PotentiallyEvaluated); // C++ [except.handle]p16: // The object declared in an exception-declaration or, if the // exception-declaration does not specify a name, a temporary (12.2) is // copy-initialized (8.5) from the exception object. [...] // The object is destroyed when the handler exits, after the destruction // of any automatic objects initialized within the handler. // // We just pretend to initialize the object with itself, then make sure // it can be destroyed later. QualType initType = Context.getExceptionObjectType(ExDeclType); InitializedEntity entity = InitializedEntity::InitializeVariable(ExDecl); InitializationKind initKind = InitializationKind::CreateCopy(Loc, SourceLocation()); Expr *opaqueValue = new (Context) OpaqueValueExpr(Loc, initType, VK_LValue, OK_Ordinary); InitializationSequence sequence(*this, entity, initKind, opaqueValue); ExprResult result = sequence.Perform(*this, entity, initKind, opaqueValue); if (result.isInvalid()) Invalid = true; else { // If the constructor used was non-trivial, set this as the // "initializer". CXXConstructExpr *construct = result.getAs(); if (!construct->getConstructor()->isTrivial()) { Expr *init = MaybeCreateExprWithCleanups(construct); ExDecl->setInit(init); } // And make sure it's destructable. 
FinalizeVarWithDestructor(ExDecl, recordType); } } } if (Invalid) ExDecl->setInvalidDecl(); return ExDecl; } /// ActOnExceptionDeclarator - Parsed the exception-declarator in a C++ catch /// handler. Decl *Sema::ActOnExceptionDeclarator(Scope *S, Declarator &D) { TypeSourceInfo *TInfo = GetTypeForDeclarator(D, S); bool Invalid = D.isInvalidType(); // Check for unexpanded parameter packs. if (DiagnoseUnexpandedParameterPack(D.getIdentifierLoc(), TInfo, UPPC_ExceptionType)) { TInfo = Context.getTrivialTypeSourceInfo(Context.IntTy, D.getIdentifierLoc()); Invalid = true; } IdentifierInfo *II = D.getIdentifier(); if (NamedDecl *PrevDecl = LookupSingleName(S, II, D.getIdentifierLoc(), LookupOrdinaryName, ForRedeclaration)) { // The scope should be freshly made just for us. There is just no way // it contains any previous declaration, except for function parameters in // a function-try-block's catch statement. assert(!S->isDeclScope(PrevDecl)); if (isDeclInScope(PrevDecl, CurContext, S)) { Diag(D.getIdentifierLoc(), diag::err_redefinition) << D.getIdentifier(); Diag(PrevDecl->getLocation(), diag::note_previous_definition); Invalid = true; } else if (PrevDecl->isTemplateParameter()) // Maybe we will complain about the shadowed template parameter. DiagnoseTemplateParameterShadow(D.getIdentifierLoc(), PrevDecl); } if (D.getCXXScopeSpec().isSet() && !Invalid) { Diag(D.getIdentifierLoc(), diag::err_qualified_catch_declarator) << D.getCXXScopeSpec().getRange(); Invalid = true; } VarDecl *ExDecl = BuildExceptionDeclaration(S, TInfo, D.getLocStart(), D.getIdentifierLoc(), D.getIdentifier()); if (Invalid) ExDecl->setInvalidDecl(); // Add the exception declaration into this scope. if (II) PushOnScopeChains(ExDecl, S); else CurContext->addDecl(ExDecl); ProcessDeclAttributes(S, ExDecl, D); return ExDecl; } Decl *Sema::ActOnStaticAssertDeclaration(SourceLocation StaticAssertLoc, Expr *AssertExpr, Expr *AssertMessageExpr, SourceLocation RParenLoc) { StringLiteral *AssertMessage = AssertMessageExpr ? cast(AssertMessageExpr) : nullptr; if (DiagnoseUnexpandedParameterPack(AssertExpr, UPPC_StaticAssertExpression)) return nullptr; return BuildStaticAssertDeclaration(StaticAssertLoc, AssertExpr, AssertMessage, RParenLoc, false); } Decl *Sema::BuildStaticAssertDeclaration(SourceLocation StaticAssertLoc, Expr *AssertExpr, StringLiteral *AssertMessage, SourceLocation RParenLoc, bool Failed) { assert(AssertExpr != nullptr && "Expected non-null condition"); if (!AssertExpr->isTypeDependent() && !AssertExpr->isValueDependent() && !Failed) { // In a static_assert-declaration, the constant-expression shall be a // constant expression that can be contextually converted to bool. 
ExprResult Converted = PerformContextuallyConvertToBool(AssertExpr); if (Converted.isInvalid()) Failed = true; llvm::APSInt Cond; if (!Failed && VerifyIntegerConstantExpression(Converted.get(), &Cond, diag::err_static_assert_expression_is_not_constant, /*AllowFold=*/false).isInvalid()) Failed = true; if (!Failed && !Cond) { SmallString<256> MsgBuffer; llvm::raw_svector_ostream Msg(MsgBuffer); if (AssertMessage) AssertMessage->printPretty(Msg, nullptr, getPrintingPolicy()); Diag(StaticAssertLoc, diag::err_static_assert_failed) << !AssertMessage << Msg.str() << AssertExpr->getSourceRange(); Failed = true; } } ExprResult FullAssertExpr = ActOnFinishFullExpr(AssertExpr, StaticAssertLoc, /*DiscardedValue*/false, /*IsConstexpr*/true); if (FullAssertExpr.isInvalid()) Failed = true; else AssertExpr = FullAssertExpr.get(); Decl *Decl = StaticAssertDecl::Create(Context, CurContext, StaticAssertLoc, AssertExpr, AssertMessage, RParenLoc, Failed); CurContext->addDecl(Decl); return Decl; } /// \brief Perform semantic analysis of the given friend type declaration. /// /// \returns A friend declaration that. FriendDecl *Sema::CheckFriendTypeDecl(SourceLocation LocStart, SourceLocation FriendLoc, TypeSourceInfo *TSInfo) { assert(TSInfo && "NULL TypeSourceInfo for friend type declaration"); QualType T = TSInfo->getType(); SourceRange TypeRange = TSInfo->getTypeLoc().getLocalSourceRange(); // C++03 [class.friend]p2: // An elaborated-type-specifier shall be used in a friend declaration // for a class.* // // * The class-key of the elaborated-type-specifier is required. if (!CodeSynthesisContexts.empty()) { // Do not complain about the form of friend template types during any kind // of code synthesis. For template instantiation, we will have complained // when the template was defined. } else { if (!T->isElaboratedTypeSpecifier()) { // If we evaluated the type to a record type, suggest putting // a tag in front. if (const RecordType *RT = T->getAs()) { RecordDecl *RD = RT->getDecl(); SmallString<16> InsertionText(" "); InsertionText += RD->getKindName(); Diag(TypeRange.getBegin(), getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_unelaborated_friend_type : diag::ext_unelaborated_friend_type) << (unsigned) RD->getTagKind() << T << FixItHint::CreateInsertion(getLocForEndOfToken(FriendLoc), InsertionText); } else { Diag(FriendLoc, getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_nonclass_type_friend : diag::ext_nonclass_type_friend) << T << TypeRange; } } else if (T->getAs()) { Diag(FriendLoc, getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_enum_friend : diag::ext_enum_friend) << T << TypeRange; } // C++11 [class.friend]p3: // A friend declaration that does not declare a function shall have one // of the following forms: // friend elaborated-type-specifier ; // friend simple-type-specifier ; // friend typename-specifier ; if (getLangOpts().CPlusPlus11 && LocStart != FriendLoc) Diag(FriendLoc, diag::err_friend_not_first_in_declaration) << T; } // If the type specifier in a friend declaration designates a (possibly // cv-qualified) class type, that class is declared as a friend; otherwise, // the friend declaration is ignored. return FriendDecl::Create(Context, CurContext, TSInfo->getTypeLoc().getLocStart(), TSInfo, FriendLoc); } /// Handle a friend tag declaration where the scope specifier was /// templated. 
Decl *Sema::ActOnTemplatedFriendTag(Scope *S, SourceLocation FriendLoc, unsigned TagSpec, SourceLocation TagLoc, CXXScopeSpec &SS, IdentifierInfo *Name, SourceLocation NameLoc, AttributeList *Attr, MultiTemplateParamsArg TempParamLists) { TagTypeKind Kind = TypeWithKeyword::getTagTypeKindForTypeSpec(TagSpec); bool IsMemberSpecialization = false; bool Invalid = false; if (TemplateParameterList *TemplateParams = MatchTemplateParametersToScopeSpecifier( TagLoc, NameLoc, SS, nullptr, TempParamLists, /*friend*/ true, IsMemberSpecialization, Invalid)) { if (TemplateParams->size() > 0) { // This is a declaration of a class template. if (Invalid) return nullptr; return CheckClassTemplate(S, TagSpec, TUK_Friend, TagLoc, SS, Name, NameLoc, Attr, TemplateParams, AS_public, /*ModulePrivateLoc=*/SourceLocation(), FriendLoc, TempParamLists.size() - 1, TempParamLists.data()).get(); } else { // The "template<>" header is extraneous. Diag(TemplateParams->getTemplateLoc(), diag::err_template_tag_noparams) << TypeWithKeyword::getTagTypeKindName(Kind) << Name; IsMemberSpecialization = true; } } if (Invalid) return nullptr; bool isAllExplicitSpecializations = true; for (unsigned I = TempParamLists.size(); I-- > 0; ) { if (TempParamLists[I]->size()) { isAllExplicitSpecializations = false; break; } } // FIXME: don't ignore attributes. // If it's explicit specializations all the way down, just forget // about the template header and build an appropriate non-templated // friend. TODO: for source fidelity, remember the headers. if (isAllExplicitSpecializations) { if (SS.isEmpty()) { bool Owned = false; bool IsDependent = false; return ActOnTag(S, TagSpec, TUK_Friend, TagLoc, SS, Name, NameLoc, Attr, AS_public, /*ModulePrivateLoc=*/SourceLocation(), MultiTemplateParamsArg(), Owned, IsDependent, /*ScopedEnumKWLoc=*/SourceLocation(), /*ScopedEnumUsesClassTag=*/false, /*UnderlyingType=*/TypeResult(), /*IsTypeSpecifier=*/false, /*IsTemplateParamOrArg=*/false); } NestedNameSpecifierLoc QualifierLoc = SS.getWithLocInContext(Context); ElaboratedTypeKeyword Keyword = TypeWithKeyword::getKeywordForTagTypeKind(Kind); QualType T = CheckTypenameType(Keyword, TagLoc, QualifierLoc, *Name, NameLoc); if (T.isNull()) return nullptr; TypeSourceInfo *TSI = Context.CreateTypeSourceInfo(T); if (isa(T)) { DependentNameTypeLoc TL = TSI->getTypeLoc().castAs(); TL.setElaboratedKeywordLoc(TagLoc); TL.setQualifierLoc(QualifierLoc); TL.setNameLoc(NameLoc); } else { ElaboratedTypeLoc TL = TSI->getTypeLoc().castAs(); TL.setElaboratedKeywordLoc(TagLoc); TL.setQualifierLoc(QualifierLoc); TL.getNamedTypeLoc().castAs().setNameLoc(NameLoc); } FriendDecl *Friend = FriendDecl::Create(Context, CurContext, NameLoc, TSI, FriendLoc, TempParamLists); Friend->setAccess(AS_public); CurContext->addDecl(Friend); return Friend; } assert(SS.isNotEmpty() && "valid templated tag with no SS and no direct?"); // Handle the case of a templated-scope friend class. e.g. // template class A::B; // FIXME: we don't support these right now. 
Diag(NameLoc, diag::warn_template_qualified_friend_unsupported) << SS.getScopeRep() << SS.getRange() << cast(CurContext); ElaboratedTypeKeyword ETK = TypeWithKeyword::getKeywordForTagTypeKind(Kind); QualType T = Context.getDependentNameType(ETK, SS.getScopeRep(), Name); TypeSourceInfo *TSI = Context.CreateTypeSourceInfo(T); DependentNameTypeLoc TL = TSI->getTypeLoc().castAs(); TL.setElaboratedKeywordLoc(TagLoc); TL.setQualifierLoc(SS.getWithLocInContext(Context)); TL.setNameLoc(NameLoc); FriendDecl *Friend = FriendDecl::Create(Context, CurContext, NameLoc, TSI, FriendLoc, TempParamLists); Friend->setAccess(AS_public); Friend->setUnsupportedFriend(true); CurContext->addDecl(Friend); return Friend; } /// Handle a friend type declaration. This works in tandem with /// ActOnTag. /// /// Notes on friend class templates: /// /// We generally treat friend class declarations as if they were /// declaring a class. So, for example, the elaborated type specifier /// in a friend declaration is required to obey the restrictions of a /// class-head (i.e. no typedefs in the scope chain), template /// parameters are required to match up with simple template-ids, &c. /// However, unlike when declaring a template specialization, it's /// okay to refer to a template specialization without an empty /// template parameter declaration, e.g. /// friend class A::B; /// We permit this as a special case; if there are any template /// parameters present at all, require proper matching, i.e. /// template <> template \ friend class A::B; Decl *Sema::ActOnFriendTypeDecl(Scope *S, const DeclSpec &DS, MultiTemplateParamsArg TempParams) { SourceLocation Loc = DS.getLocStart(); assert(DS.isFriendSpecified()); assert(DS.getStorageClassSpec() == DeclSpec::SCS_unspecified); // Try to convert the decl specifier to a type. This works for // friend templates because ActOnTag never produces a ClassTemplateDecl // for a TUK_Friend. Declarator TheDeclarator(DS, Declarator::MemberContext); TypeSourceInfo *TSI = GetTypeForDeclarator(TheDeclarator, S); QualType T = TSI->getType(); if (TheDeclarator.isInvalidType()) return nullptr; if (DiagnoseUnexpandedParameterPack(Loc, TSI, UPPC_FriendDeclaration)) return nullptr; // This is definitely an error in C++98. It's probably meant to // be forbidden in C++0x, too, but the specification is just // poorly written. // // The problem is with declarations like the following: // template friend A::foo; // where deciding whether a class C is a friend or not now hinges // on whether there exists an instantiation of A that causes // 'foo' to equal C. There are restrictions on class-heads // (which we declare (by fiat) elaborated friend declarations to // be) that makes this tractable. // // FIXME: handle "template <> friend class A;", which // is possibly well-formed? Who even knows? if (TempParams.size() && !T->isElaboratedTypeSpecifier()) { Diag(Loc, diag::err_tagless_friend_type_template) << DS.getSourceRange(); return nullptr; } // C++98 [class.friend]p1: A friend of a class is a function // or class that is not a member of the class . . . // This is fixed in DR77, which just barely didn't make the C++03 // deadline. It's also a very silly restriction that seriously // affects inner classes and which nobody else seems to implement; // thus we never diagnose it, not even in -pedantic. // // But note that we could warn about it: it's always useless to // friend one of your own members (it's not, however, worthless to // friend a member of an arbitrary specialization of your template). 
Decl *D; if (!TempParams.empty()) D = FriendTemplateDecl::Create(Context, CurContext, Loc, TempParams, TSI, DS.getFriendSpecLoc()); else D = CheckFriendTypeDecl(Loc, DS.getFriendSpecLoc(), TSI); if (!D) return nullptr; D->setAccess(AS_public); CurContext->addDecl(D); return D; } NamedDecl *Sema::ActOnFriendFunctionDecl(Scope *S, Declarator &D, MultiTemplateParamsArg TemplateParams) { const DeclSpec &DS = D.getDeclSpec(); assert(DS.isFriendSpecified()); assert(DS.getStorageClassSpec() == DeclSpec::SCS_unspecified); SourceLocation Loc = D.getIdentifierLoc(); TypeSourceInfo *TInfo = GetTypeForDeclarator(D, S); // C++ [class.friend]p1 // A friend of a class is a function or class.... // Note that this sees through typedefs, which is intended. // It *doesn't* see through dependent types, which is correct // according to [temp.arg.type]p3: // If a declaration acquires a function type through a // type dependent on a template-parameter and this causes // a declaration that does not use the syntactic form of a // function declarator to have a function type, the program // is ill-formed. if (!TInfo->getType()->isFunctionType()) { Diag(Loc, diag::err_unexpected_friend); // It might be worthwhile to try to recover by creating an // appropriate declaration. return nullptr; } // C++ [namespace.memdef]p3 // - If a friend declaration in a non-local class first declares a // class or function, the friend class or function is a member // of the innermost enclosing namespace. // - The name of the friend is not found by simple name lookup // until a matching declaration is provided in that namespace // scope (either before or after the class declaration granting // friendship). // - If a friend function is called, its name may be found by the // name lookup that considers functions from namespaces and // classes associated with the types of the function arguments. // - When looking for a prior declaration of a class or a function // declared as a friend, scopes outside the innermost enclosing // namespace scope are not considered. CXXScopeSpec &SS = D.getCXXScopeSpec(); DeclarationNameInfo NameInfo = GetNameForDeclarator(D); DeclarationName Name = NameInfo.getName(); assert(Name); // Check for unexpanded parameter packs. if (DiagnoseUnexpandedParameterPack(Loc, TInfo, UPPC_FriendDeclaration) || DiagnoseUnexpandedParameterPack(NameInfo, UPPC_FriendDeclaration) || DiagnoseUnexpandedParameterPack(SS, UPPC_FriendDeclaration)) return nullptr; // The context we found the declaration in, or in which we should // create the declaration. DeclContext *DC; Scope *DCScope = S; LookupResult Previous(*this, NameInfo, LookupOrdinaryName, ForRedeclaration); // There are five cases here. // - There's no scope specifier and we're in a local class. Only look // for functions declared in the immediately-enclosing block scope. // We recover from invalid scope qualifiers as if they just weren't there. FunctionDecl *FunctionContainingLocalClass = nullptr; if ((SS.isInvalid() || !SS.isSet()) && (FunctionContainingLocalClass = cast(CurContext)->isLocalClass())) { // C++11 [class.friend]p11: // If a friend declaration appears in a local class and the name // specified is an unqualified name, a prior declaration is // looked up without considering scopes that are outside the // innermost enclosing non-class scope. For a friend function // declaration, if there is no prior declaration, the program is // ill-formed. // Find the innermost enclosing non-class scope. 
This is the block // scope containing the local class definition (or for a nested class, // the outer local class). DCScope = S->getFnParent(); // Look up the function name in the scope. Previous.clear(LookupLocalFriendName); LookupName(Previous, S, /*AllowBuiltinCreation*/false); if (!Previous.empty()) { // All possible previous declarations must have the same context: // either they were declared at block scope or they are members of // one of the enclosing local classes. DC = Previous.getRepresentativeDecl()->getDeclContext(); } else { // This is ill-formed, but provide the context that we would have // declared the function in, if we were permitted to, for error recovery. DC = FunctionContainingLocalClass; } adjustContextForLocalExternDecl(DC); // C++ [class.friend]p6: // A function can be defined in a friend declaration of a class if and // only if the class is a non-local class (9.8), the function name is // unqualified, and the function has namespace scope. if (D.isFunctionDefinition()) { Diag(NameInfo.getBeginLoc(), diag::err_friend_def_in_local_class); } // - There's no scope specifier, in which case we just go to the // appropriate scope and look for a function or function template // there as appropriate. } else if (SS.isInvalid() || !SS.isSet()) { // C++11 [namespace.memdef]p3: // If the name in a friend declaration is neither qualified nor // a template-id and the declaration is a function or an // elaborated-type-specifier, the lookup to determine whether // the entity has been previously declared shall not consider // any scopes outside the innermost enclosing namespace. bool isTemplateId = D.getName().getKind() == UnqualifiedId::IK_TemplateId; // Find the appropriate context according to the above. DC = CurContext; // Skip class contexts. If someone can cite chapter and verse // for this behavior, that would be nice --- it's what GCC and // EDG do, and it seems like a reasonable intent, but the spec // really only says that checks for unqualified existing // declarations should stop at the nearest enclosing namespace, // not that they should only consider the nearest enclosing // namespace. while (DC->isRecord()) DC = DC->getParent(); DeclContext *LookupDC = DC; while (LookupDC->isTransparentContext()) LookupDC = LookupDC->getParent(); while (true) { LookupQualifiedName(Previous, LookupDC); if (!Previous.empty()) { DC = LookupDC; break; } if (isTemplateId) { if (isa(LookupDC)) break; } else { if (LookupDC->isFileContext()) break; } LookupDC = LookupDC->getParent(); } DCScope = getScopeForDeclContext(S, DC); // - There's a non-dependent scope specifier, in which case we // compute it and do a previous lookup there for a function // or function template. } else if (!SS.getScopeRep()->isDependent()) { DC = computeDeclContext(SS); if (!DC) return nullptr; if (RequireCompleteDeclContext(SS, DC)) return nullptr; LookupQualifiedName(Previous, DC); // Ignore things found implicitly in the wrong scope. // TODO: better diagnostics for this case. Suggesting the right // qualified scope would be nice... LookupResult::Filter F = Previous.makeFilter(); while (F.hasNext()) { NamedDecl *D = F.next(); if (!DC->InEnclosingNamespaceSetOf( D->getDeclContext()->getRedeclContext())) F.erase(); } F.done(); if (Previous.empty()) { D.setInvalidType(); Diag(Loc, diag::err_qualified_friend_not_found) << Name << TInfo->getType(); return nullptr; } // C++ [class.friend]p1: A friend of a class is a function or // class that is not a member of the class . . . 
if (DC->Equals(CurContext)) Diag(DS.getFriendSpecLoc(), getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_friend_is_member : diag::err_friend_is_member); if (D.isFunctionDefinition()) { // C++ [class.friend]p6: // A function can be defined in a friend declaration of a class if and // only if the class is a non-local class (9.8), the function name is // unqualified, and the function has namespace scope. SemaDiagnosticBuilder DB = Diag(SS.getRange().getBegin(), diag::err_qualified_friend_def); DB << SS.getScopeRep(); if (DC->isFileContext()) DB << FixItHint::CreateRemoval(SS.getRange()); SS.clear(); } // - There's a scope specifier that does not match any template // parameter lists, in which case we use some arbitrary context, // create a method or method template, and wait for instantiation. // - There's a scope specifier that does match some template // parameter lists, which we don't handle right now. } else { if (D.isFunctionDefinition()) { // C++ [class.friend]p6: // A function can be defined in a friend declaration of a class if and // only if the class is a non-local class (9.8), the function name is // unqualified, and the function has namespace scope. Diag(SS.getRange().getBegin(), diag::err_qualified_friend_def) << SS.getScopeRep(); } DC = CurContext; assert(isa(DC) && "friend declaration not in class?"); } if (!DC->isRecord()) { int DiagArg = -1; switch (D.getName().getKind()) { case UnqualifiedId::IK_ConstructorTemplateId: case UnqualifiedId::IK_ConstructorName: DiagArg = 0; break; case UnqualifiedId::IK_DestructorName: DiagArg = 1; break; case UnqualifiedId::IK_ConversionFunctionId: DiagArg = 2; break; case UnqualifiedId::IK_DeductionGuideName: DiagArg = 3; break; case UnqualifiedId::IK_Identifier: case UnqualifiedId::IK_ImplicitSelfParam: case UnqualifiedId::IK_LiteralOperatorId: case UnqualifiedId::IK_OperatorFunctionId: case UnqualifiedId::IK_TemplateId: break; } // This implies that it has to be an operator or function. if (DiagArg >= 0) { Diag(Loc, diag::err_introducing_special_friend) << DiagArg; return nullptr; } } // FIXME: This is an egregious hack to cope with cases where the scope stack // does not contain the declaration context, i.e., in an out-of-line // definition of a class. Scope FakeDCScope(S, Scope::DeclScope, Diags); if (!DCScope) { FakeDCScope.setEntity(DC); DCScope = &FakeDCScope; } bool AddToScope = true; NamedDecl *ND = ActOnFunctionDeclarator(DCScope, D, DC, TInfo, Previous, TemplateParams, AddToScope); if (!ND) return nullptr; assert(ND->getLexicalDeclContext() == CurContext); // If we performed typo correction, we might have added a scope specifier // and changed the decl context. DC = ND->getDeclContext(); // Add the function declaration to the appropriate lookup tables, // adjusting the redeclarations list as necessary. We don't // want to do this yet if the friending class is dependent. // // Also update the scope-based lookup if the target context's // lookup context is in lexical scope. 
if (!CurContext->isDependentContext()) { DC = DC->getRedeclContext(); DC->makeDeclVisibleInContext(ND); if (Scope *EnclosingScope = getScopeForDeclContext(S, DC)) PushOnScopeChains(ND, EnclosingScope, /*AddToContext=*/ false); } FriendDecl *FrD = FriendDecl::Create(Context, CurContext, D.getIdentifierLoc(), ND, DS.getFriendSpecLoc()); FrD->setAccess(AS_public); CurContext->addDecl(FrD); if (ND->isInvalidDecl()) { FrD->setInvalidDecl(); } else { if (DC->isRecord()) CheckFriendAccess(ND); FunctionDecl *FD; if (FunctionTemplateDecl *FTD = dyn_cast(ND)) FD = FTD->getTemplatedDecl(); else FD = cast(ND); // C++11 [dcl.fct.default]p4: If a friend declaration specifies a // default argument expression, that declaration shall be a definition // and shall be the only declaration of the function or function // template in the translation unit. if (functionDeclHasDefaultArgument(FD)) { // We can't look at FD->getPreviousDecl() because it may not have been set // if we're in a dependent context. If the function is known to be a // redeclaration, we will have narrowed Previous down to the right decl. if (D.isRedeclaration()) { Diag(FD->getLocation(), diag::err_friend_decl_with_def_arg_redeclared); Diag(Previous.getRepresentativeDecl()->getLocation(), diag::note_previous_declaration); } else if (!D.isFunctionDefinition()) Diag(FD->getLocation(), diag::err_friend_decl_with_def_arg_must_be_def); } // Mark templated-scope function declarations as unsupported. if (FD->getNumTemplateParameterLists() && SS.isValid()) { Diag(FD->getLocation(), diag::warn_template_qualified_friend_unsupported) << SS.getScopeRep() << SS.getRange() << cast(CurContext); FrD->setUnsupportedFriend(true); } } return ND; } void Sema::SetDeclDeleted(Decl *Dcl, SourceLocation DelLoc) { AdjustDeclIfTemplate(Dcl); FunctionDecl *Fn = dyn_cast_or_null(Dcl); if (!Fn) { Diag(DelLoc, diag::err_deleted_non_function); return; } // Deleted function does not have a body. Fn->setWillHaveBody(false); if (const FunctionDecl *Prev = Fn->getPreviousDecl()) { // Don't consider the implicit declaration we generate for explicit // specializations. FIXME: Do not generate these implicit declarations. if ((Prev->getTemplateSpecializationKind() != TSK_ExplicitSpecialization || Prev->getPreviousDecl()) && !Prev->isDefined()) { Diag(DelLoc, diag::err_deleted_decl_not_first); Diag(Prev->getLocation().isInvalid() ? DelLoc : Prev->getLocation(), Prev->isImplicit() ? diag::note_previous_implicit_declaration : diag::note_previous_declaration); } // If the declaration wasn't the first, we delete the function anyway for // recovery. Fn = Fn->getCanonicalDecl(); } // dllimport/dllexport cannot be deleted. if (const InheritableAttr *DLLAttr = getDLLAttr(Fn)) { Diag(Fn->getLocation(), diag::err_attribute_dll_deleted) << DLLAttr; Fn->setInvalidDecl(); } if (Fn->isDeleted()) return; // See if we're deleting a function which is already known to override a // non-deleted virtual function. if (CXXMethodDecl *MD = dyn_cast(Fn)) { bool IssuedDiagnostic = false; for (CXXMethodDecl::method_iterator I = MD->begin_overridden_methods(), E = MD->end_overridden_methods(); I != E; ++I) { if (!(*MD->begin_overridden_methods())->isDeleted()) { if (!IssuedDiagnostic) { Diag(DelLoc, diag::err_deleted_override) << MD->getDeclName(); IssuedDiagnostic = true; } Diag((*I)->getLocation(), diag::note_overridden_virtual_function); } } // If this function was implicitly deleted because it was defaulted, // explain why it was deleted. 
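    // For illustration (hypothetical example): given
    //   struct B { virtual void f(); };
    //   struct D : B { void f() = delete; };
    // the loop above diagnoses deleting an override of a non-deleted virtual
    // function; if the member was instead implicitly deleted because it was
    // explicitly defaulted, the call below explains why.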
if (IssuedDiagnostic && MD->isDefaulted()) ShouldDeleteSpecialMember(MD, getSpecialMember(MD), nullptr, /*Diagnose*/true); } // C++11 [basic.start.main]p3: // A program that defines main as deleted [...] is ill-formed. if (Fn->isMain()) Diag(DelLoc, diag::err_deleted_main); // C++11 [dcl.fct.def.delete]p4: // A deleted function is implicitly inline. Fn->setImplicitlyInline(); Fn->setDeletedAsWritten(); } void Sema::SetDeclDefaulted(Decl *Dcl, SourceLocation DefaultLoc) { CXXMethodDecl *MD = dyn_cast_or_null(Dcl); if (MD) { if (MD->getParent()->isDependentType()) { MD->setDefaulted(); MD->setExplicitlyDefaulted(); return; } CXXSpecialMember Member = getSpecialMember(MD); if (Member == CXXInvalid) { if (!MD->isInvalidDecl()) Diag(DefaultLoc, diag::err_default_special_members); return; } MD->setDefaulted(); MD->setExplicitlyDefaulted(); // Unset that we will have a body for this function. We might not, // if it turns out to be trivial, and we don't need this marking now // that we've marked it as defaulted. MD->setWillHaveBody(false); // If this definition appears within the record, do the checking when // the record is complete. const FunctionDecl *Primary = MD; if (const FunctionDecl *Pattern = MD->getTemplateInstantiationPattern()) // Ask the template instantiation pattern that actually had the // '= default' on it. Primary = Pattern; // If the method was defaulted on its first declaration, we will have // already performed the checking in CheckCompletedCXXClass. Such a // declaration doesn't trigger an implicit definition. if (Primary->getCanonicalDecl()->isDefaulted()) return; CheckExplicitlyDefaultedSpecialMember(MD); if (!MD->isInvalidDecl()) DefineImplicitSpecialMember(*this, MD, DefaultLoc); } else { Diag(DefaultLoc, diag::err_default_special_members); } } static void SearchForReturnInStmt(Sema &Self, Stmt *S) { for (Stmt *SubStmt : S->children()) { if (!SubStmt) continue; if (isa(SubStmt)) Self.Diag(SubStmt->getLocStart(), diag::err_return_in_constructor_handler); if (!isa(SubStmt)) SearchForReturnInStmt(Self, SubStmt); } } void Sema::DiagnoseReturnInConstructorExceptionHandler(CXXTryStmt *TryBlock) { for (unsigned I = 0, E = TryBlock->getNumHandlers(); I != E; ++I) { CXXCatchStmt *Handler = TryBlock->getHandler(I); SearchForReturnInStmt(*this, Handler); } } bool Sema::CheckOverridingFunctionAttributes(const CXXMethodDecl *New, const CXXMethodDecl *Old) { const FunctionType *NewFT = New->getType()->getAs(); const FunctionType *OldFT = Old->getType()->getAs(); CallingConv NewCC = NewFT->getCallConv(), OldCC = OldFT->getCallConv(); // If the calling conventions match, everything is fine if (NewCC == OldCC) return false; // If the calling conventions mismatch because the new function is static, // suppress the calling convention mismatch error; the error about static // function override (err_static_overrides_virtual from // Sema::CheckFunctionDeclaration) is more clear. 
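// Editorial illustration (not from the original source), assuming a 32-bit
// MSVC target where the default member calling convention differs:
//
//   struct B { virtual void f(); };      // implicitly __thiscall
//   struct D : B { static void f(); };   // static -> __cdecl; only the
//                                        // "static member cannot override"
//                                        // error is reported, not the
//                                        // calling-convention mismatch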
if (New->getStorageClass() == SC_Static) return false; Diag(New->getLocation(), diag::err_conflicting_overriding_cc_attributes) << New->getDeclName() << New->getType() << Old->getType(); Diag(Old->getLocation(), diag::note_overridden_virtual_function); return true; } bool Sema::CheckOverridingFunctionReturnType(const CXXMethodDecl *New, const CXXMethodDecl *Old) { QualType NewTy = New->getType()->getAs()->getReturnType(); QualType OldTy = Old->getType()->getAs()->getReturnType(); if (Context.hasSameType(NewTy, OldTy) || NewTy->isDependentType() || OldTy->isDependentType()) return false; // Check if the return types are covariant QualType NewClassTy, OldClassTy; /// Both types must be pointers or references to classes. if (const PointerType *NewPT = NewTy->getAs()) { if (const PointerType *OldPT = OldTy->getAs()) { NewClassTy = NewPT->getPointeeType(); OldClassTy = OldPT->getPointeeType(); } } else if (const ReferenceType *NewRT = NewTy->getAs()) { if (const ReferenceType *OldRT = OldTy->getAs()) { if (NewRT->getTypeClass() == OldRT->getTypeClass()) { NewClassTy = NewRT->getPointeeType(); OldClassTy = OldRT->getPointeeType(); } } } // The return types aren't either both pointers or references to a class type. if (NewClassTy.isNull()) { Diag(New->getLocation(), diag::err_different_return_type_for_overriding_virtual_function) << New->getDeclName() << NewTy << OldTy << New->getReturnTypeSourceRange(); Diag(Old->getLocation(), diag::note_overridden_virtual_function) << Old->getReturnTypeSourceRange(); return true; } if (!Context.hasSameUnqualifiedType(NewClassTy, OldClassTy)) { // C++14 [class.virtual]p8: // If the class type in the covariant return type of D::f differs from // that of B::f, the class type in the return type of D::f shall be // complete at the point of declaration of D::f or shall be the class // type D. if (const RecordType *RT = NewClassTy->getAs()) { if (!RT->isBeingDefined() && RequireCompleteType(New->getLocation(), NewClassTy, diag::err_covariant_return_incomplete, New->getDeclName())) return true; } // Check if the new class derives from the old class. if (!IsDerivedFrom(New->getLocation(), NewClassTy, OldClassTy)) { Diag(New->getLocation(), diag::err_covariant_return_not_derived) << New->getDeclName() << NewTy << OldTy << New->getReturnTypeSourceRange(); Diag(Old->getLocation(), diag::note_overridden_virtual_function) << Old->getReturnTypeSourceRange(); return true; } // Check if we the conversion from derived to base is valid. if (CheckDerivedToBaseConversion( NewClassTy, OldClassTy, diag::err_covariant_return_inaccessible_base, diag::err_covariant_return_ambiguous_derived_to_base_conv, New->getLocation(), New->getReturnTypeSourceRange(), New->getDeclName(), nullptr)) { // FIXME: this note won't trigger for delayed access control // diagnostics, and it's impossible to get an undelayed error // here from access control during the original parse because // the ParsingDeclSpec/ParsingDeclarator are still in scope. Diag(Old->getLocation(), diag::note_overridden_virtual_function) << Old->getReturnTypeSourceRange(); return true; } } // The qualifiers of the return types must be the same. 
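// Editorial illustration (not from the original source) of the covariance
// qualifier rules enforced below:
//
//   struct B { virtual B *f(); };
//   struct D : B { const D *f() override; };  // error: class type in the return
//                                             // type is more qualified than 'B'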
if (NewTy.getLocalCVRQualifiers() != OldTy.getLocalCVRQualifiers()) { Diag(New->getLocation(), diag::err_covariant_return_type_different_qualifications) << New->getDeclName() << NewTy << OldTy << New->getReturnTypeSourceRange(); Diag(Old->getLocation(), diag::note_overridden_virtual_function) << Old->getReturnTypeSourceRange(); return true; } // The new class type must have the same or less qualifiers as the old type. if (NewClassTy.isMoreQualifiedThan(OldClassTy)) { Diag(New->getLocation(), diag::err_covariant_return_type_class_type_more_qualified) << New->getDeclName() << NewTy << OldTy << New->getReturnTypeSourceRange(); Diag(Old->getLocation(), diag::note_overridden_virtual_function) << Old->getReturnTypeSourceRange(); return true; } return false; } /// \brief Mark the given method pure. /// /// \param Method the method to be marked pure. /// /// \param InitRange the source range that covers the "0" initializer. bool Sema::CheckPureMethod(CXXMethodDecl *Method, SourceRange InitRange) { SourceLocation EndLoc = InitRange.getEnd(); if (EndLoc.isValid()) Method->setRangeEnd(EndLoc); if (Method->isVirtual() || Method->getParent()->isDependentContext()) { Method->setPure(); return false; } if (!Method->isInvalidDecl()) Diag(Method->getLocation(), diag::err_non_virtual_pure) << Method->getDeclName() << InitRange; return true; } void Sema::ActOnPureSpecifier(Decl *D, SourceLocation ZeroLoc) { if (D->getFriendObjectKind()) Diag(D->getLocation(), diag::err_pure_friend); else if (auto *M = dyn_cast(D)) CheckPureMethod(M, ZeroLoc); else Diag(D->getLocation(), diag::err_illegal_initializer); } /// \brief Determine whether the given declaration is a static data member. static bool isStaticDataMember(const Decl *D) { if (const VarDecl *Var = dyn_cast_or_null(D)) return Var->isStaticDataMember(); return false; } /// ActOnCXXEnterDeclInitializer - Invoked when we are about to parse /// an initializer for the out-of-line declaration 'Dcl'. The scope /// is a fresh scope pushed for just this purpose. /// /// After this method is called, according to [C++ 3.4.1p13], if 'Dcl' is a /// static data member of class X, names should be looked up in the scope of /// class X. void Sema::ActOnCXXEnterDeclInitializer(Scope *S, Decl *D) { // If there is no declaration, there was an error parsing it. if (!D || D->isInvalidDecl()) return; // We will always have a nested name specifier here, but this declaration // might not be out of line if the specifier names the current namespace: // extern int n; // int ::n = 0; if (D->isOutOfLine()) EnterDeclaratorContext(S, D->getDeclContext()); // If we are parsing the initializer for a static data member, push a // new expression evaluation context that is associated with this static // data member. if (isStaticDataMember(D)) PushExpressionEvaluationContext( ExpressionEvaluationContext::PotentiallyEvaluated, D); } /// ActOnCXXExitDeclInitializer - Invoked after we are finished parsing an /// initializer for the out-of-line declaration 'D'. void Sema::ActOnCXXExitDeclInitializer(Scope *S, Decl *D) { // If there is no declaration, there was an error parsing it. if (!D || D->isInvalidDecl()) return; if (isStaticDataMember(D)) PopExpressionEvaluationContext(); if (D->isOutOfLine()) ExitDeclaratorContext(S); } /// ActOnCXXConditionDeclarationExpr - Parsed a condition declaration of a /// C++ if/switch/while/for statement. 
/// e.g: "if (int x = f()) {...}" DeclResult Sema::ActOnCXXConditionDeclaration(Scope *S, Declarator &D) { // C++ 6.4p2: // The declarator shall not specify a function or an array. // The type-specifier-seq shall not contain typedef and shall not declare a // new class or enumeration. assert(D.getDeclSpec().getStorageClassSpec() != DeclSpec::SCS_typedef && "Parser allowed 'typedef' as storage class of condition decl."); Decl *Dcl = ActOnDeclarator(S, D); if (!Dcl) return true; if (isa(Dcl)) { // The declarator shall not specify a function. Diag(Dcl->getLocation(), diag::err_invalid_use_of_function_type) << D.getSourceRange(); return true; } return Dcl; } void Sema::LoadExternalVTableUses() { if (!ExternalSource) return; SmallVector VTables; ExternalSource->ReadUsedVTables(VTables); SmallVector NewUses; for (unsigned I = 0, N = VTables.size(); I != N; ++I) { llvm::DenseMap::iterator Pos = VTablesUsed.find(VTables[I].Record); // Even if a definition wasn't required before, it may be required now. if (Pos != VTablesUsed.end()) { if (!Pos->second && VTables[I].DefinitionRequired) Pos->second = true; continue; } VTablesUsed[VTables[I].Record] = VTables[I].DefinitionRequired; NewUses.push_back(VTableUse(VTables[I].Record, VTables[I].Location)); } VTableUses.insert(VTableUses.begin(), NewUses.begin(), NewUses.end()); } void Sema::MarkVTableUsed(SourceLocation Loc, CXXRecordDecl *Class, bool DefinitionRequired) { // Ignore any vtable uses in unevaluated operands or for classes that do // not have a vtable. if (!Class->isDynamicClass() || Class->isDependentContext() || CurContext->isDependentContext() || isUnevaluatedContext()) return; // Try to insert this class into the map. LoadExternalVTableUses(); Class = cast(Class->getCanonicalDecl()); std::pair::iterator, bool> Pos = VTablesUsed.insert(std::make_pair(Class, DefinitionRequired)); if (!Pos.second) { // If we already had an entry, check to see if we are promoting this vtable // to require a definition. If so, we need to reappend to the VTableUses // list, since we may have already processed the first entry. if (DefinitionRequired && !Pos.first->second) { Pos.first->second = true; } else { // Otherwise, we can early exit. return; } } else { // The Microsoft ABI requires that we perform the destructor body // checks (i.e. operator delete() lookup) when the vtable is marked used, as // the deleting destructor is emitted with the vtable, not with the // destructor definition as in the Itanium ABI. if (Context.getTargetInfo().getCXXABI().isMicrosoft()) { CXXDestructorDecl *DD = Class->getDestructor(); if (DD && DD->isVirtual() && !DD->isDeleted()) { if (Class->hasUserDeclaredDestructor() && !DD->isDefined()) { // If this is an out-of-line declaration, marking it referenced will // not do anything. Manually call CheckDestructor to look up operator // delete(). ContextRAII SavedContext(*this, DD); CheckDestructor(DD); } else { MarkFunctionReferenced(Loc, Class->getDestructor()); } } } } // Local classes need to have their virtual members marked // immediately. For all other classes, we mark their virtual members // at the end of the translation unit. 
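// Editorial illustration (not from the original source): a function-local
// class cannot have its vtable provided by another translation unit, so its
// virtual members are marked immediately, e.g.
//
//   void g() {
//     struct Local { virtual void f() {} };
//     Local l;        // vtable for 'Local' is emitted together with 'g'
//   }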
if (Class->isLocalClass()) MarkVirtualMembersReferenced(Loc, Class); else VTableUses.push_back(std::make_pair(Class, Loc)); } bool Sema::DefineUsedVTables() { LoadExternalVTableUses(); if (VTableUses.empty()) return false; // Note: The VTableUses vector could grow as a result of marking // the members of a class as "used", so we check the size each // time through the loop and prefer indices (which are stable) to // iterators (which are not). bool DefinedAnything = false; for (unsigned I = 0; I != VTableUses.size(); ++I) { CXXRecordDecl *Class = VTableUses[I].first->getDefinition(); if (!Class) continue; TemplateSpecializationKind ClassTSK = Class->getTemplateSpecializationKind(); SourceLocation Loc = VTableUses[I].second; bool DefineVTable = true; // If this class has a key function, but that key function is // defined in another translation unit, we don't need to emit the // vtable even though we're using it. const CXXMethodDecl *KeyFunction = Context.getCurrentKeyFunction(Class); if (KeyFunction && !KeyFunction->hasBody()) { // The key function is in another translation unit. DefineVTable = false; TemplateSpecializationKind TSK = KeyFunction->getTemplateSpecializationKind(); assert(TSK != TSK_ExplicitInstantiationDefinition && TSK != TSK_ImplicitInstantiation && "Instantiations don't have key functions"); (void)TSK; } else if (!KeyFunction) { // If we have a class with no key function that is the subject // of an explicit instantiation declaration, suppress the // vtable; it will live with the explicit instantiation // definition. bool IsExplicitInstantiationDeclaration = ClassTSK == TSK_ExplicitInstantiationDeclaration; for (auto R : Class->redecls()) { TemplateSpecializationKind TSK = cast(R)->getTemplateSpecializationKind(); if (TSK == TSK_ExplicitInstantiationDeclaration) IsExplicitInstantiationDeclaration = true; else if (TSK == TSK_ExplicitInstantiationDefinition) { IsExplicitInstantiationDeclaration = false; break; } } if (IsExplicitInstantiationDeclaration) DefineVTable = false; } // The exception specifications for all virtual members may be needed even // if we are not providing an authoritative form of the vtable in this TU. // We may choose to emit it available_externally anyway. if (!DefineVTable) { MarkVirtualMemberExceptionSpecsNeeded(Loc, Class); continue; } // Mark all of the virtual members of this class as referenced, so // that we can build a vtable. Then, tell the AST consumer that a // vtable for this class is required. DefinedAnything = true; MarkVirtualMembersReferenced(Loc, Class); CXXRecordDecl *Canonical = cast(Class->getCanonicalDecl()); if (VTablesUsed[Canonical]) Consumer.HandleVTable(Class); // Warn if we're emitting a weak vtable. The vtable will be weak if there is // no key function or the key function is inlined. Don't warn in C++ ABIs // that lack key functions, since the user won't be able to make one. if (Context.getTargetInfo().getCXXABI().hasKeyFunctions() && Class->isExternallyVisible() && ClassTSK != TSK_ImplicitInstantiation) { const FunctionDecl *KeyFunctionDef = nullptr; if (!KeyFunction || (KeyFunction->hasBody(KeyFunctionDef) && KeyFunctionDef->isInlined())) { Diag(Class->getLocation(), ClassTSK == TSK_ExplicitInstantiationDefinition ? 
diag::warn_weak_template_vtable : diag::warn_weak_vtable) << Class; } } } VTableUses.clear(); return DefinedAnything; } void Sema::MarkVirtualMemberExceptionSpecsNeeded(SourceLocation Loc, const CXXRecordDecl *RD) { for (const auto *I : RD->methods()) if (I->isVirtual() && !I->isPure()) ResolveExceptionSpec(Loc, I->getType()->castAs()); } void Sema::MarkVirtualMembersReferenced(SourceLocation Loc, const CXXRecordDecl *RD) { // Mark all functions which will appear in RD's vtable as used. CXXFinalOverriderMap FinalOverriders; RD->getFinalOverriders(FinalOverriders); for (CXXFinalOverriderMap::const_iterator I = FinalOverriders.begin(), E = FinalOverriders.end(); I != E; ++I) { for (OverridingMethods::const_iterator OI = I->second.begin(), OE = I->second.end(); OI != OE; ++OI) { assert(OI->second.size() > 0 && "no final overrider"); CXXMethodDecl *Overrider = OI->second.front().Method; // C++ [basic.def.odr]p2: // [...] A virtual member function is used if it is not pure. [...] if (!Overrider->isPure()) MarkFunctionReferenced(Loc, Overrider); } } // Only classes that have virtual bases need a VTT. if (RD->getNumVBases() == 0) return; for (const auto &I : RD->bases()) { const CXXRecordDecl *Base = cast(I.getType()->getAs()->getDecl()); if (Base->getNumVBases() == 0) continue; MarkVirtualMembersReferenced(Loc, Base); } } /// SetIvarInitializers - This routine builds initialization ASTs for the /// Objective-C implementation whose ivars need be initialized. void Sema::SetIvarInitializers(ObjCImplementationDecl *ObjCImplementation) { if (!getLangOpts().CPlusPlus) return; if (ObjCInterfaceDecl *OID = ObjCImplementation->getClassInterface()) { SmallVector ivars; CollectIvarsToConstructOrDestruct(OID, ivars); if (ivars.empty()) return; SmallVector AllToInit; for (unsigned i = 0; i < ivars.size(); i++) { FieldDecl *Field = ivars[i]; if (Field->isInvalidDecl()) continue; CXXCtorInitializer *Member; InitializedEntity InitEntity = InitializedEntity::InitializeMember(Field); InitializationKind InitKind = InitializationKind::CreateDefault(ObjCImplementation->getLocation()); InitializationSequence InitSeq(*this, InitEntity, InitKind, None); ExprResult MemberInit = InitSeq.Perform(*this, InitEntity, InitKind, None); MemberInit = MaybeCreateExprWithCleanups(MemberInit); // Note, MemberInit could actually come back empty if no initialization // is required (e.g., because it would call a trivial default constructor) if (!MemberInit.get() || MemberInit.isInvalid()) continue; Member = new (Context) CXXCtorInitializer(Context, Field, SourceLocation(), SourceLocation(), MemberInit.getAs(), SourceLocation()); AllToInit.push_back(Member); // Be sure that the destructor is accessible and is marked as referenced. 
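// Editorial illustration (not from the original source): for an ivar of C++
// class type, the destructor must be accessible from the @implementation, e.g.
//
//   class Widget { ~Widget(); };              // private destructor
//   @interface I : NSObject { Widget w; }     // C++ ivar
//   @end
//   @implementation I                         // error: ivar 'w' has an
//   @end                                      // inaccessible destructor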
if (const RecordType *RecordTy = Context.getBaseElementType(Field->getType()) ->getAs()) { CXXRecordDecl *RD = cast(RecordTy->getDecl()); if (CXXDestructorDecl *Destructor = LookupDestructor(RD)) { MarkFunctionReferenced(Field->getLocation(), Destructor); CheckDestructorAccess(Field->getLocation(), Destructor, PDiag(diag::err_access_dtor_ivar) << Context.getBaseElementType(Field->getType())); } } } ObjCImplementation->setIvarInitializers(Context, AllToInit.data(), AllToInit.size()); } } static void DelegatingCycleHelper(CXXConstructorDecl* Ctor, llvm::SmallSet &Valid, llvm::SmallSet &Invalid, llvm::SmallSet &Current, Sema &S) { if (Ctor->isInvalidDecl()) return; CXXConstructorDecl *Target = Ctor->getTargetConstructor(); // Target may not be determinable yet, for instance if this is a dependent // call in an uninstantiated template. if (Target) { const FunctionDecl *FNTarget = nullptr; (void)Target->hasBody(FNTarget); Target = const_cast( cast_or_null(FNTarget)); } CXXConstructorDecl *Canonical = Ctor->getCanonicalDecl(), // Avoid dereferencing a null pointer here. *TCanonical = Target? Target->getCanonicalDecl() : nullptr; if (!Current.insert(Canonical).second) return; // We know that beyond here, we aren't chaining into a cycle. if (!Target || !Target->isDelegatingConstructor() || Target->isInvalidDecl() || Valid.count(TCanonical)) { Valid.insert(Current.begin(), Current.end()); Current.clear(); // We've hit a cycle. } else if (TCanonical == Canonical || Invalid.count(TCanonical) || Current.count(TCanonical)) { // If we haven't diagnosed this cycle yet, do so now. if (!Invalid.count(TCanonical)) { S.Diag((*Ctor->init_begin())->getSourceLocation(), diag::warn_delegating_ctor_cycle) << Ctor; // Don't add a note for a function delegating directly to itself. if (TCanonical != Canonical) S.Diag(Target->getLocation(), diag::note_it_delegates_to); CXXConstructorDecl *C = Target; while (C->getCanonicalDecl() != Canonical) { const FunctionDecl *FNTarget = nullptr; (void)C->getTargetConstructor()->hasBody(FNTarget); assert(FNTarget && "Ctor cycle through bodiless function"); C = const_cast( cast(FNTarget)); S.Diag(C->getLocation(), diag::note_which_delegates_to); } } Invalid.insert(Current.begin(), Current.end()); Current.clear(); } else { DelegatingCycleHelper(Target, Valid, Invalid, Current, S); } } void Sema::CheckDelegatingCtorCycles() { llvm::SmallSet Valid, Invalid, Current; for (DelegatingCtorDeclsType::iterator I = DelegatingCtorDecls.begin(ExternalSource), E = DelegatingCtorDecls.end(); I != E; ++I) DelegatingCycleHelper(*I, Valid, Invalid, Current, *this); for (llvm::SmallSet::iterator CI = Invalid.begin(), CE = Invalid.end(); CI != CE; ++CI) (*CI)->setInvalidDecl(); } namespace { /// \brief AST visitor that finds references to the 'this' expression. 
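/// e.g. (editorial illustration, not from the original source):
///
///   struct S {
///     static auto f() -> decltype(this);   // error: 'this' cannot be used in
///   };                                     // a static member function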
class FindCXXThisExpr : public RecursiveASTVisitor<FindCXXThisExpr> {
  Sema &S;

public:
  explicit FindCXXThisExpr(Sema &S) : S(S) { }

  bool VisitCXXThisExpr(CXXThisExpr *E) {
    S.Diag(E->getLocation(), diag::err_this_static_member_func)
      << E->isImplicit();
    return false;
  }
};
}

bool Sema::checkThisInStaticMemberFunctionType(CXXMethodDecl *Method) {
  TypeSourceInfo *TSInfo = Method->getTypeSourceInfo();
  if (!TSInfo)
    return false;

  TypeLoc TL = TSInfo->getTypeLoc();
  FunctionProtoTypeLoc ProtoTL = TL.getAs<FunctionProtoTypeLoc>();
  if (!ProtoTL)
    return false;

  // C++11 [expr.prim.general]p3:
  //   [The expression this] shall not appear before the optional
  //   cv-qualifier-seq and it shall not appear within the declaration of a
  //   static member function (although its type and value category are defined
  //   within a static member function as they are within a non-static member
  //   function). [ Note: this is because declaration matching does not occur
  //   until the complete declarator is known. - end note ]
  const FunctionProtoType *Proto = ProtoTL.getTypePtr();
  FindCXXThisExpr Finder(*this);

  // If the return type came after the cv-qualifier-seq, check it now.
  if (Proto->hasTrailingReturn() &&
      !Finder.TraverseTypeLoc(ProtoTL.getReturnLoc()))
    return true;

  // Check the exception specification.
  if (checkThisInStaticMemberFunctionExceptionSpec(Method))
    return true;

  return checkThisInStaticMemberFunctionAttributes(Method);
}

bool Sema::checkThisInStaticMemberFunctionExceptionSpec(CXXMethodDecl *Method) {
  TypeSourceInfo *TSInfo = Method->getTypeSourceInfo();
  if (!TSInfo)
    return false;

  TypeLoc TL = TSInfo->getTypeLoc();
  FunctionProtoTypeLoc ProtoTL = TL.getAs<FunctionProtoTypeLoc>();
  if (!ProtoTL)
    return false;

  const FunctionProtoType *Proto = ProtoTL.getTypePtr();
  FindCXXThisExpr Finder(*this);

  switch (Proto->getExceptionSpecType()) {
  case EST_Unparsed:
  case EST_Uninstantiated:
  case EST_Unevaluated:
  case EST_BasicNoexcept:
  case EST_DynamicNone:
  case EST_MSAny:
  case EST_None:
    break;

  case EST_ComputedNoexcept:
    if (!Finder.TraverseStmt(Proto->getNoexceptExpr()))
      return true;
    LLVM_FALLTHROUGH;

  case EST_Dynamic:
    for (const auto &E : Proto->exceptions()) {
      if (!Finder.TraverseType(E))
        return true;
    }
    break;
  }

  return false;
}

bool Sema::checkThisInStaticMemberFunctionAttributes(CXXMethodDecl *Method) {
  FindCXXThisExpr Finder(*this);

  // Check attributes.
  for (const auto *A : Method->attrs()) {
    // FIXME: This should be emitted by tblgen.
Expr *Arg = nullptr; ArrayRef Args; if (const auto *G = dyn_cast(A)) Arg = G->getArg(); else if (const auto *G = dyn_cast(A)) Arg = G->getArg(); else if (const auto *AA = dyn_cast(A)) Args = llvm::makeArrayRef(AA->args_begin(), AA->args_size()); else if (const auto *AB = dyn_cast(A)) Args = llvm::makeArrayRef(AB->args_begin(), AB->args_size()); else if (const auto *ETLF = dyn_cast(A)) { Arg = ETLF->getSuccessValue(); Args = llvm::makeArrayRef(ETLF->args_begin(), ETLF->args_size()); } else if (const auto *STLF = dyn_cast(A)) { Arg = STLF->getSuccessValue(); Args = llvm::makeArrayRef(STLF->args_begin(), STLF->args_size()); } else if (const auto *LR = dyn_cast(A)) Arg = LR->getArg(); else if (const auto *LE = dyn_cast(A)) Args = llvm::makeArrayRef(LE->args_begin(), LE->args_size()); else if (const auto *RC = dyn_cast(A)) Args = llvm::makeArrayRef(RC->args_begin(), RC->args_size()); else if (const auto *AC = dyn_cast(A)) Args = llvm::makeArrayRef(AC->args_begin(), AC->args_size()); else if (const auto *AC = dyn_cast(A)) Args = llvm::makeArrayRef(AC->args_begin(), AC->args_size()); else if (const auto *RC = dyn_cast(A)) Args = llvm::makeArrayRef(RC->args_begin(), RC->args_size()); if (Arg && !Finder.TraverseStmt(Arg)) return true; for (unsigned I = 0, N = Args.size(); I != N; ++I) { if (!Finder.TraverseStmt(Args[I])) return true; } } return false; } void Sema::checkExceptionSpecification( bool IsTopLevel, ExceptionSpecificationType EST, ArrayRef DynamicExceptions, ArrayRef DynamicExceptionRanges, Expr *NoexceptExpr, SmallVectorImpl &Exceptions, FunctionProtoType::ExceptionSpecInfo &ESI) { Exceptions.clear(); ESI.Type = EST; if (EST == EST_Dynamic) { Exceptions.reserve(DynamicExceptions.size()); for (unsigned ei = 0, ee = DynamicExceptions.size(); ei != ee; ++ei) { // FIXME: Preserve type source info. QualType ET = GetTypeFromParser(DynamicExceptions[ei]); if (IsTopLevel) { SmallVector Unexpanded; collectUnexpandedParameterPacks(ET, Unexpanded); if (!Unexpanded.empty()) { DiagnoseUnexpandedParameterPacks( DynamicExceptionRanges[ei].getBegin(), UPPC_ExceptionType, Unexpanded); continue; } } // Check that the type is valid for an exception spec, and // drop it if not. if (!CheckSpecifiedExceptionType(ET, DynamicExceptionRanges[ei])) Exceptions.push_back(ET); } ESI.Exceptions = Exceptions; return; } if (EST == EST_ComputedNoexcept) { // If an error occurred, there's no expression here. if (NoexceptExpr) { assert((NoexceptExpr->isTypeDependent() || NoexceptExpr->getType()->getCanonicalTypeUnqualified() == Context.BoolTy) && "Parser should have made sure that the expression is boolean"); if (IsTopLevel && NoexceptExpr && DiagnoseUnexpandedParameterPack(NoexceptExpr)) { ESI.Type = EST_BasicNoexcept; return; } if (!NoexceptExpr->isValueDependent()) NoexceptExpr = VerifyIntegerConstantExpression(NoexceptExpr, nullptr, diag::err_noexcept_needs_constant_expression, /*AllowFold*/ false).get(); ESI.NoexceptExpr = NoexceptExpr; } return; } } void Sema::actOnDelayedExceptionSpecification(Decl *MethodD, ExceptionSpecificationType EST, SourceRange SpecificationRange, ArrayRef DynamicExceptions, ArrayRef DynamicExceptionRanges, Expr *NoexceptExpr) { if (!MethodD) return; // Dig out the method we're referring to. if (FunctionTemplateDecl *FunTmpl = dyn_cast(MethodD)) MethodD = FunTmpl->getTemplatedDecl(); CXXMethodDecl *Method = dyn_cast(MethodD); if (!Method) return; // Check the exception specification. 
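// Editorial illustration (not from the original source): exception
// specifications of class members are parsed only once the class is complete,
// which is why they arrive here through a delayed callback; they may therefore
// refer to members declared further down, e.g.
//
//   struct S {
//     void f() noexcept(noexcept(g()));   // 'g' is not yet declared here
//     void g() noexcept;
//   };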
llvm::SmallVector Exceptions; FunctionProtoType::ExceptionSpecInfo ESI; checkExceptionSpecification(/*IsTopLevel*/true, EST, DynamicExceptions, DynamicExceptionRanges, NoexceptExpr, Exceptions, ESI); // Update the exception specification on the function type. Context.adjustExceptionSpec(Method, ESI, /*AsWritten*/true); if (Method->isStatic()) checkThisInStaticMemberFunctionExceptionSpec(Method); if (Method->isVirtual()) { // Check overrides, which we previously had to delay. for (CXXMethodDecl::method_iterator O = Method->begin_overridden_methods(), OEnd = Method->end_overridden_methods(); O != OEnd; ++O) CheckOverridingFunctionExceptionSpec(Method, *O); } } /// HandleMSProperty - Analyze a __delcspec(property) field of a C++ class. /// MSPropertyDecl *Sema::HandleMSProperty(Scope *S, RecordDecl *Record, SourceLocation DeclStart, Declarator &D, Expr *BitWidth, InClassInitStyle InitStyle, AccessSpecifier AS, AttributeList *MSPropertyAttr) { IdentifierInfo *II = D.getIdentifier(); if (!II) { Diag(DeclStart, diag::err_anonymous_property); return nullptr; } SourceLocation Loc = D.getIdentifierLoc(); TypeSourceInfo *TInfo = GetTypeForDeclarator(D, S); QualType T = TInfo->getType(); if (getLangOpts().CPlusPlus) { CheckExtraCXXDefaultArguments(D); if (DiagnoseUnexpandedParameterPack(D.getIdentifierLoc(), TInfo, UPPC_DataMemberType)) { D.setInvalidType(); T = Context.IntTy; TInfo = Context.getTrivialTypeSourceInfo(T, Loc); } } DiagnoseFunctionSpecifiers(D.getDeclSpec()); if (D.getDeclSpec().isInlineSpecified()) Diag(D.getDeclSpec().getInlineSpecLoc(), diag::err_inline_non_function) << getLangOpts().CPlusPlus1z; if (DeclSpec::TSCS TSCS = D.getDeclSpec().getThreadStorageClassSpec()) Diag(D.getDeclSpec().getThreadStorageClassSpecLoc(), diag::err_invalid_thread) << DeclSpec::getSpecifierName(TSCS); // Check to see if this name was declared as a member previously NamedDecl *PrevDecl = nullptr; LookupResult Previous(*this, II, Loc, LookupMemberName, ForRedeclaration); LookupName(Previous, S); switch (Previous.getResultKind()) { case LookupResult::Found: case LookupResult::FoundUnresolvedValue: PrevDecl = Previous.getAsSingle(); break; case LookupResult::FoundOverloaded: PrevDecl = Previous.getRepresentativeDecl(); break; case LookupResult::NotFound: case LookupResult::NotFoundInCurrentInstantiation: case LookupResult::Ambiguous: break; } if (PrevDecl && PrevDecl->isTemplateParameter()) { // Maybe we will complain about the shadowed template parameter. DiagnoseTemplateParameterShadow(D.getIdentifierLoc(), PrevDecl); // Just pretend that we didn't see the previous declaration. PrevDecl = nullptr; } if (PrevDecl && !isDeclInScope(PrevDecl, Record, S)) PrevDecl = nullptr; SourceLocation TSSL = D.getLocStart(); const AttributeList::PropertyData &Data = MSPropertyAttr->getPropertyData(); MSPropertyDecl *NewPD = MSPropertyDecl::Create( Context, Record, Loc, II, T, TInfo, TSSL, Data.GetterId, Data.SetterId); ProcessDeclAttributes(TUScope, NewPD, D); NewPD->setAccess(AS); if (NewPD->isInvalidDecl()) Record->setInvalidDecl(); if (D.getDeclSpec().isModulePrivateSpecified()) NewPD->setModulePrivate(); if (NewPD->isInvalidDecl() && PrevDecl) { // Don't introduce NewFD into scope; there's already something // with the same name in the same scope. 
} else if (II) { PushOnScopeChains(NewPD, S); } else Record->addDecl(NewPD); return NewPD; } diff --git a/lib/Sema/SemaObjCProperty.cpp b/lib/Sema/SemaObjCProperty.cpp index e1e85dfd5e55..bfb0071a54f9 100644 --- a/lib/Sema/SemaObjCProperty.cpp +++ b/lib/Sema/SemaObjCProperty.cpp @@ -1,2681 +1,2681 @@ //===--- SemaObjCProperty.cpp - Semantic Analysis for ObjC @property ------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements semantic analysis for Objective C @property and // @synthesize declarations. // //===----------------------------------------------------------------------===// #include "clang/Sema/SemaInternal.h" #include "clang/AST/ASTMutationListener.h" #include "clang/AST/DeclObjC.h" #include "clang/AST/ExprCXX.h" #include "clang/AST/ExprObjC.h" #include "clang/Basic/SourceManager.h" #include "clang/Lex/Lexer.h" #include "clang/Lex/Preprocessor.h" #include "clang/Sema/Initialization.h" #include "llvm/ADT/DenseSet.h" #include "llvm/ADT/SmallString.h" using namespace clang; //===----------------------------------------------------------------------===// // Grammar actions. //===----------------------------------------------------------------------===// /// getImpliedARCOwnership - Given a set of property attributes and a /// type, infer an expected lifetime. The type's ownership qualification /// is not considered. /// /// Returns OCL_None if the attributes as stated do not imply an ownership. /// Never returns OCL_Autoreleasing. static Qualifiers::ObjCLifetime getImpliedARCOwnership( ObjCPropertyDecl::PropertyAttributeKind attrs, QualType type) { // retain, strong, copy, weak, and unsafe_unretained are only legal // on properties of retainable pointer type. if (attrs & (ObjCPropertyDecl::OBJC_PR_retain | ObjCPropertyDecl::OBJC_PR_strong | ObjCPropertyDecl::OBJC_PR_copy)) { return Qualifiers::OCL_Strong; } else if (attrs & ObjCPropertyDecl::OBJC_PR_weak) { return Qualifiers::OCL_Weak; } else if (attrs & ObjCPropertyDecl::OBJC_PR_unsafe_unretained) { return Qualifiers::OCL_ExplicitNone; } // assign can appear on other types, so we have to check the // property type. if (attrs & ObjCPropertyDecl::OBJC_PR_assign && type->isObjCRetainableType()) { return Qualifiers::OCL_ExplicitNone; } return Qualifiers::OCL_None; } /// Check the internal consistency of a property declaration with /// an explicit ownership qualifier. static void checkPropertyDeclWithOwnership(Sema &S, ObjCPropertyDecl *property) { if (property->isInvalidDecl()) return; ObjCPropertyDecl::PropertyAttributeKind propertyKind = property->getPropertyAttributes(); Qualifiers::ObjCLifetime propertyLifetime = property->getType().getObjCLifetime(); assert(propertyLifetime != Qualifiers::OCL_None); Qualifiers::ObjCLifetime expectedLifetime = getImpliedARCOwnership(propertyKind, property->getType()); if (!expectedLifetime) { // We have a lifetime qualifier but no dominating property // attribute. That's okay, but restore reasonable invariants by // setting the property attribute according to the lifetime // qualifier. 
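// Editorial illustration (not from the original source), conceptually: a
// property whose type carries an explicit lifetime qualifier but whose
// attributes name no ownership picks up the matching attribute, e.g.
//
//   @property (nonatomic) __weak id delegate;   // treated as if 'weak' were written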
ObjCPropertyDecl::PropertyAttributeKind attr; if (propertyLifetime == Qualifiers::OCL_Strong) { attr = ObjCPropertyDecl::OBJC_PR_strong; } else if (propertyLifetime == Qualifiers::OCL_Weak) { attr = ObjCPropertyDecl::OBJC_PR_weak; } else { assert(propertyLifetime == Qualifiers::OCL_ExplicitNone); attr = ObjCPropertyDecl::OBJC_PR_unsafe_unretained; } property->setPropertyAttributes(attr); return; } if (propertyLifetime == expectedLifetime) return; property->setInvalidDecl(); S.Diag(property->getLocation(), diag::err_arc_inconsistent_property_ownership) << property->getDeclName() << expectedLifetime << propertyLifetime; } /// \brief Check this Objective-C property against a property declared in the /// given protocol. static void CheckPropertyAgainstProtocol(Sema &S, ObjCPropertyDecl *Prop, ObjCProtocolDecl *Proto, llvm::SmallPtrSetImpl &Known) { // Have we seen this protocol before? if (!Known.insert(Proto).second) return; // Look for a property with the same name. DeclContext::lookup_result R = Proto->lookup(Prop->getDeclName()); for (unsigned I = 0, N = R.size(); I != N; ++I) { if (ObjCPropertyDecl *ProtoProp = dyn_cast(R[I])) { S.DiagnosePropertyMismatch(Prop, ProtoProp, Proto->getIdentifier(), true); return; } } // Check this property against any protocols we inherit. for (auto *P : Proto->protocols()) CheckPropertyAgainstProtocol(S, Prop, P, Known); } static unsigned deducePropertyOwnershipFromType(Sema &S, QualType T) { // In GC mode, just look for the __weak qualifier. if (S.getLangOpts().getGC() != LangOptions::NonGC) { if (T.isObjCGCWeak()) return ObjCDeclSpec::DQ_PR_weak; // In ARC/MRC, look for an explicit ownership qualifier. // For some reason, this only applies to __weak. } else if (auto ownership = T.getObjCLifetime()) { switch (ownership) { case Qualifiers::OCL_Weak: return ObjCDeclSpec::DQ_PR_weak; case Qualifiers::OCL_Strong: return ObjCDeclSpec::DQ_PR_strong; case Qualifiers::OCL_ExplicitNone: return ObjCDeclSpec::DQ_PR_unsafe_unretained; case Qualifiers::OCL_Autoreleasing: case Qualifiers::OCL_None: return 0; } llvm_unreachable("bad qualifier"); } return 0; } static const unsigned OwnershipMask = (ObjCPropertyDecl::OBJC_PR_assign | ObjCPropertyDecl::OBJC_PR_retain | ObjCPropertyDecl::OBJC_PR_copy | ObjCPropertyDecl::OBJC_PR_weak | ObjCPropertyDecl::OBJC_PR_strong | ObjCPropertyDecl::OBJC_PR_unsafe_unretained); static unsigned getOwnershipRule(unsigned attr) { unsigned result = attr & OwnershipMask; // From an ownership perspective, assign and unsafe_unretained are // identical; make sure one also implies the other. if (result & (ObjCPropertyDecl::OBJC_PR_assign | ObjCPropertyDecl::OBJC_PR_unsafe_unretained)) { result |= ObjCPropertyDecl::OBJC_PR_assign | ObjCPropertyDecl::OBJC_PR_unsafe_unretained; } return result; } Decl *Sema::ActOnProperty(Scope *S, SourceLocation AtLoc, SourceLocation LParenLoc, FieldDeclarator &FD, ObjCDeclSpec &ODS, Selector GetterSel, Selector SetterSel, tok::ObjCKeywordKind MethodImplKind, DeclContext *lexicalDC) { unsigned Attributes = ODS.getPropertyAttributes(); FD.D.setObjCWeakProperty((Attributes & ObjCDeclSpec::DQ_PR_weak) != 0); TypeSourceInfo *TSI = GetTypeForDeclarator(FD.D, S); QualType T = TSI->getType(); if (!getOwnershipRule(Attributes)) { Attributes |= deducePropertyOwnershipFromType(*this, T); } bool isReadWrite = ((Attributes & ObjCDeclSpec::DQ_PR_readwrite) || // default is readwrite! !(Attributes & ObjCDeclSpec::DQ_PR_readonly)); // Proceed with constructing the ObjCPropertyDecls. 
ObjCContainerDecl *ClassDecl = cast(CurContext); ObjCPropertyDecl *Res = nullptr; if (ObjCCategoryDecl *CDecl = dyn_cast(ClassDecl)) { if (CDecl->IsClassExtension()) { Res = HandlePropertyInClassExtension(S, AtLoc, LParenLoc, FD, GetterSel, ODS.getGetterNameLoc(), SetterSel, ODS.getSetterNameLoc(), isReadWrite, Attributes, ODS.getPropertyAttributes(), T, TSI, MethodImplKind); if (!Res) return nullptr; } } if (!Res) { Res = CreatePropertyDecl(S, ClassDecl, AtLoc, LParenLoc, FD, GetterSel, ODS.getGetterNameLoc(), SetterSel, ODS.getSetterNameLoc(), isReadWrite, Attributes, ODS.getPropertyAttributes(), T, TSI, MethodImplKind); if (lexicalDC) Res->setLexicalDeclContext(lexicalDC); } // Validate the attributes on the @property. CheckObjCPropertyAttributes(Res, AtLoc, Attributes, (isa(ClassDecl) || isa(ClassDecl))); // Check consistency if the type has explicit ownership qualification. if (Res->getType().getObjCLifetime()) checkPropertyDeclWithOwnership(*this, Res); llvm::SmallPtrSet KnownProtos; if (ObjCInterfaceDecl *IFace = dyn_cast(ClassDecl)) { // For a class, compare the property against a property in our superclass. bool FoundInSuper = false; ObjCInterfaceDecl *CurrentInterfaceDecl = IFace; while (ObjCInterfaceDecl *Super = CurrentInterfaceDecl->getSuperClass()) { DeclContext::lookup_result R = Super->lookup(Res->getDeclName()); for (unsigned I = 0, N = R.size(); I != N; ++I) { if (ObjCPropertyDecl *SuperProp = dyn_cast(R[I])) { DiagnosePropertyMismatch(Res, SuperProp, Super->getIdentifier(), false); FoundInSuper = true; break; } } if (FoundInSuper) break; else CurrentInterfaceDecl = Super; } if (FoundInSuper) { // Also compare the property against a property in our protocols. for (auto *P : CurrentInterfaceDecl->protocols()) { CheckPropertyAgainstProtocol(*this, Res, P, KnownProtos); } } else { // Slower path: look in all protocols we referenced. for (auto *P : IFace->all_referenced_protocols()) { CheckPropertyAgainstProtocol(*this, Res, P, KnownProtos); } } } else if (ObjCCategoryDecl *Cat = dyn_cast(ClassDecl)) { // We don't check if class extension. Because properties in class extension // are meant to override some of the attributes and checking has already done // when property in class extension is constructed. 
if (!Cat->IsClassExtension()) for (auto *P : Cat->protocols()) CheckPropertyAgainstProtocol(*this, Res, P, KnownProtos); } else { ObjCProtocolDecl *Proto = cast(ClassDecl); for (auto *P : Proto->protocols()) CheckPropertyAgainstProtocol(*this, Res, P, KnownProtos); } ActOnDocumentableDecl(Res); return Res; } static ObjCPropertyDecl::PropertyAttributeKind makePropertyAttributesAsWritten(unsigned Attributes) { unsigned attributesAsWritten = 0; if (Attributes & ObjCDeclSpec::DQ_PR_readonly) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_readonly; if (Attributes & ObjCDeclSpec::DQ_PR_readwrite) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_readwrite; if (Attributes & ObjCDeclSpec::DQ_PR_getter) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_getter; if (Attributes & ObjCDeclSpec::DQ_PR_setter) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_setter; if (Attributes & ObjCDeclSpec::DQ_PR_assign) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_assign; if (Attributes & ObjCDeclSpec::DQ_PR_retain) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_retain; if (Attributes & ObjCDeclSpec::DQ_PR_strong) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_strong; if (Attributes & ObjCDeclSpec::DQ_PR_weak) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_weak; if (Attributes & ObjCDeclSpec::DQ_PR_copy) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_copy; if (Attributes & ObjCDeclSpec::DQ_PR_unsafe_unretained) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_unsafe_unretained; if (Attributes & ObjCDeclSpec::DQ_PR_nonatomic) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_nonatomic; if (Attributes & ObjCDeclSpec::DQ_PR_atomic) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_atomic; if (Attributes & ObjCDeclSpec::DQ_PR_class) attributesAsWritten |= ObjCPropertyDecl::OBJC_PR_class; return (ObjCPropertyDecl::PropertyAttributeKind)attributesAsWritten; } static bool LocPropertyAttribute( ASTContext &Context, const char *attrName, SourceLocation LParenLoc, SourceLocation &Loc) { if (LParenLoc.isMacroID()) return false; SourceManager &SM = Context.getSourceManager(); std::pair locInfo = SM.getDecomposedLoc(LParenLoc); // Try to load the file buffer. bool invalidTemp = false; StringRef file = SM.getBufferData(locInfo.first, &invalidTemp); if (invalidTemp) return false; const char *tokenBegin = file.data() + locInfo.second; // Lex from the start of the given location. Lexer lexer(SM.getLocForStartOfFile(locInfo.first), Context.getLangOpts(), file.begin(), tokenBegin, file.end()); Token Tok; do { lexer.LexFromRawLexer(Tok); if (Tok.is(tok::raw_identifier) && Tok.getRawIdentifier() == attrName) { Loc = Tok.getLocation(); return true; } } while (Tok.isNot(tok::r_paren)); return false; } /// Check for a mismatch in the atomicity of the given properties. static void checkAtomicPropertyMismatch(Sema &S, ObjCPropertyDecl *OldProperty, ObjCPropertyDecl *NewProperty, bool PropagateAtomicity) { // If the atomicity of both matches, we're done. bool OldIsAtomic = (OldProperty->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_nonatomic) == 0; bool NewIsAtomic = (NewProperty->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_nonatomic) == 0; if (OldIsAtomic == NewIsAtomic) return; // Determine whether the given property is readonly and implicitly // atomic. auto isImplicitlyReadonlyAtomic = [](ObjCPropertyDecl *Property) -> bool { // Is it readonly? auto Attrs = Property->getPropertyAttributes(); if ((Attrs & ObjCPropertyDecl::OBJC_PR_readonly) == 0) return false; // Is it nonatomic? 
if (Attrs & ObjCPropertyDecl::OBJC_PR_nonatomic) return false; // Was 'atomic' specified directly? if (Property->getPropertyAttributesAsWritten() & ObjCPropertyDecl::OBJC_PR_atomic) return false; return true; }; // If we're allowed to propagate atomicity, and the new property did // not specify atomicity at all, propagate. const unsigned AtomicityMask = (ObjCPropertyDecl::OBJC_PR_atomic | ObjCPropertyDecl::OBJC_PR_nonatomic); if (PropagateAtomicity && ((NewProperty->getPropertyAttributesAsWritten() & AtomicityMask) == 0)) { unsigned Attrs = NewProperty->getPropertyAttributes(); Attrs = Attrs & ~AtomicityMask; if (OldIsAtomic) Attrs |= ObjCPropertyDecl::OBJC_PR_atomic; else Attrs |= ObjCPropertyDecl::OBJC_PR_nonatomic; NewProperty->overwritePropertyAttributes(Attrs); return; } // One of the properties is atomic; if it's a readonly property, and // 'atomic' wasn't explicitly specified, we're okay. if ((OldIsAtomic && isImplicitlyReadonlyAtomic(OldProperty)) || (NewIsAtomic && isImplicitlyReadonlyAtomic(NewProperty))) return; // Diagnose the conflict. const IdentifierInfo *OldContextName; auto *OldDC = OldProperty->getDeclContext(); if (auto Category = dyn_cast(OldDC)) OldContextName = Category->getClassInterface()->getIdentifier(); else OldContextName = cast(OldDC)->getIdentifier(); S.Diag(NewProperty->getLocation(), diag::warn_property_attribute) << NewProperty->getDeclName() << "atomic" << OldContextName; S.Diag(OldProperty->getLocation(), diag::note_property_declare); } ObjCPropertyDecl * Sema::HandlePropertyInClassExtension(Scope *S, SourceLocation AtLoc, SourceLocation LParenLoc, FieldDeclarator &FD, Selector GetterSel, SourceLocation GetterNameLoc, Selector SetterSel, SourceLocation SetterNameLoc, const bool isReadWrite, unsigned &Attributes, const unsigned AttributesAsWritten, QualType T, TypeSourceInfo *TSI, tok::ObjCKeywordKind MethodImplKind) { ObjCCategoryDecl *CDecl = cast(CurContext); // Diagnose if this property is already in continuation class. DeclContext *DC = CurContext; IdentifierInfo *PropertyId = FD.D.getIdentifier(); ObjCInterfaceDecl *CCPrimary = CDecl->getClassInterface(); // We need to look in the @interface to see if the @property was // already declared. if (!CCPrimary) { Diag(CDecl->getLocation(), diag::err_continuation_class); return nullptr; } bool isClassProperty = (AttributesAsWritten & ObjCDeclSpec::DQ_PR_class) || (Attributes & ObjCDeclSpec::DQ_PR_class); // Find the property in the extended class's primary class or // extensions. ObjCPropertyDecl *PIDecl = CCPrimary->FindPropertyVisibleInPrimaryClass( PropertyId, ObjCPropertyDecl::getQueryKind(isClassProperty)); // If we found a property in an extension, complain. if (PIDecl && isa(PIDecl->getDeclContext())) { Diag(AtLoc, diag::err_duplicate_property); Diag(PIDecl->getLocation(), diag::note_property_declare); return nullptr; } // Check for consistency with the previous declaration, if there is one. if (PIDecl) { // A readonly property declared in the primary class can be refined // by adding a readwrite property within an extension. // Anything else is an error. if (!(PIDecl->isReadOnly() && isReadWrite)) { // Tailor the diagnostics for the common case where a readwrite // property is declared both in the @interface and the continuation. // This is a common error where the user often intended the original // declaration to be readonly. unsigned diag = (Attributes & ObjCDeclSpec::DQ_PR_readwrite) && (PIDecl->getPropertyAttributesAsWritten() & ObjCPropertyDecl::OBJC_PR_readwrite) ? 
diag::err_use_continuation_class_redeclaration_readwrite : diag::err_use_continuation_class; Diag(AtLoc, diag) << CCPrimary->getDeclName(); Diag(PIDecl->getLocation(), diag::note_property_declare); return nullptr; } // Check for consistency of getters. if (PIDecl->getGetterName() != GetterSel) { // If the getter was written explicitly, complain. if (AttributesAsWritten & ObjCDeclSpec::DQ_PR_getter) { Diag(AtLoc, diag::warn_property_redecl_getter_mismatch) << PIDecl->getGetterName() << GetterSel; Diag(PIDecl->getLocation(), diag::note_property_declare); } // Always adopt the getter from the original declaration. GetterSel = PIDecl->getGetterName(); Attributes |= ObjCDeclSpec::DQ_PR_getter; } // Check consistency of ownership. unsigned ExistingOwnership = getOwnershipRule(PIDecl->getPropertyAttributes()); unsigned NewOwnership = getOwnershipRule(Attributes); if (ExistingOwnership && NewOwnership != ExistingOwnership) { // If the ownership was written explicitly, complain. if (getOwnershipRule(AttributesAsWritten)) { Diag(AtLoc, diag::warn_property_attr_mismatch); Diag(PIDecl->getLocation(), diag::note_property_declare); } // Take the ownership from the original property. Attributes = (Attributes & ~OwnershipMask) | ExistingOwnership; } // If the redeclaration is 'weak' but the original property is not, if ((Attributes & ObjCPropertyDecl::OBJC_PR_weak) && !(PIDecl->getPropertyAttributesAsWritten() & ObjCPropertyDecl::OBJC_PR_weak) && PIDecl->getType()->getAs() && PIDecl->getType().getObjCLifetime() == Qualifiers::OCL_None) { Diag(AtLoc, diag::warn_property_implicitly_mismatched); Diag(PIDecl->getLocation(), diag::note_property_declare); } } // Create a new ObjCPropertyDecl with the DeclContext being // the class extension. ObjCPropertyDecl *PDecl = CreatePropertyDecl(S, CDecl, AtLoc, LParenLoc, FD, GetterSel, GetterNameLoc, SetterSel, SetterNameLoc, isReadWrite, Attributes, AttributesAsWritten, T, TSI, MethodImplKind, DC); // If there was no declaration of a property with the same name in // the primary class, we're done. if (!PIDecl) { ProcessPropertyDecl(PDecl); return PDecl; } if (!Context.hasSameType(PIDecl->getType(), PDecl->getType())) { bool IncompatibleObjC = false; QualType ConvertedType; // Relax the strict type matching for property type in continuation class. // Allow property object type of continuation class to be different as long // as it narrows the object type in its primary class property. Note that // this conversion is safe only because the wider type is for a 'readonly' // property in primary class and 'narrowed' type for a 'readwrite' property // in continuation class. QualType PrimaryClassPropertyT = Context.getCanonicalType(PIDecl->getType()); QualType ClassExtPropertyT = Context.getCanonicalType(PDecl->getType()); if (!isa(PrimaryClassPropertyT) || !isa(ClassExtPropertyT) || (!isObjCPointerConversion(ClassExtPropertyT, PrimaryClassPropertyT, ConvertedType, IncompatibleObjC)) || IncompatibleObjC) { Diag(AtLoc, diag::err_type_mismatch_continuation_class) << PDecl->getType(); Diag(PIDecl->getLocation(), diag::note_property_declare); return nullptr; } } // Check that atomicity of property in class extension matches the previous // declaration. checkAtomicPropertyMismatch(*this, PIDecl, PDecl, true); // Make sure getter/setter are appropriately synthesized. 
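// Editorial illustration (not from the original source) of the class-extension
// refinement handled above:
//
//   @interface Foo
//   @property (readonly) NSString *name;    // primary class: readonly
//   @end
//
//   @interface Foo ()                       // class extension
//   @property (readwrite) NSString *name;   // OK: refines the property to readwrite
//   @end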
ProcessPropertyDecl(PDecl); return PDecl; } ObjCPropertyDecl *Sema::CreatePropertyDecl(Scope *S, ObjCContainerDecl *CDecl, SourceLocation AtLoc, SourceLocation LParenLoc, FieldDeclarator &FD, Selector GetterSel, SourceLocation GetterNameLoc, Selector SetterSel, SourceLocation SetterNameLoc, const bool isReadWrite, const unsigned Attributes, const unsigned AttributesAsWritten, QualType T, TypeSourceInfo *TInfo, tok::ObjCKeywordKind MethodImplKind, DeclContext *lexicalDC){ IdentifierInfo *PropertyId = FD.D.getIdentifier(); // Property defaults to 'assign' if it is readwrite, unless this is ARC // and the type is retainable. bool isAssign; if (Attributes & (ObjCDeclSpec::DQ_PR_assign | ObjCDeclSpec::DQ_PR_unsafe_unretained)) { isAssign = true; } else if (getOwnershipRule(Attributes) || !isReadWrite) { isAssign = false; } else { isAssign = (!getLangOpts().ObjCAutoRefCount || !T->isObjCRetainableType()); } // Issue a warning if property is 'assign' as default and its // object, which is gc'able conforms to NSCopying protocol if (getLangOpts().getGC() != LangOptions::NonGC && isAssign && !(Attributes & ObjCDeclSpec::DQ_PR_assign)) { if (const ObjCObjectPointerType *ObjPtrTy = T->getAs()) { ObjCInterfaceDecl *IDecl = ObjPtrTy->getObjectType()->getInterface(); if (IDecl) if (ObjCProtocolDecl* PNSCopying = LookupProtocol(&Context.Idents.get("NSCopying"), AtLoc)) if (IDecl->ClassImplementsProtocol(PNSCopying, true)) Diag(AtLoc, diag::warn_implements_nscopying) << PropertyId; } } if (T->isObjCObjectType()) { SourceLocation StarLoc = TInfo->getTypeLoc().getLocEnd(); StarLoc = getLocForEndOfToken(StarLoc); Diag(FD.D.getIdentifierLoc(), diag::err_statically_allocated_object) << FixItHint::CreateInsertion(StarLoc, "*"); T = Context.getObjCObjectPointerType(T); SourceLocation TLoc = TInfo->getTypeLoc().getLocStart(); TInfo = Context.getTrivialTypeSourceInfo(T, TLoc); } DeclContext *DC = cast(CDecl); ObjCPropertyDecl *PDecl = ObjCPropertyDecl::Create(Context, DC, FD.D.getIdentifierLoc(), PropertyId, AtLoc, LParenLoc, T, TInfo); bool isClassProperty = (AttributesAsWritten & ObjCDeclSpec::DQ_PR_class) || (Attributes & ObjCDeclSpec::DQ_PR_class); // Class property and instance property can have the same name. if (ObjCPropertyDecl *prevDecl = ObjCPropertyDecl::findPropertyDecl( DC, PropertyId, ObjCPropertyDecl::getQueryKind(isClassProperty))) { Diag(PDecl->getLocation(), diag::err_duplicate_property); Diag(prevDecl->getLocation(), diag::note_property_declare); PDecl->setInvalidDecl(); } else { DC->addDecl(PDecl); if (lexicalDC) PDecl->setLexicalDeclContext(lexicalDC); } if (T->isArrayType() || T->isFunctionType()) { Diag(AtLoc, diag::err_property_type) << T; PDecl->setInvalidDecl(); } ProcessDeclAttributes(S, PDecl, FD.D); // Regardless of setter/getter attribute, we save the default getter/setter // selector names in anticipation of declaration of setter/getter methods. 
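// Editorial illustration (not from the original source): the selectors recorded
// below default to 'name' and 'setName:' unless getter=/setter= are given, e.g.
//
//   @property (copy) NSString *title;          // getter 'title',    setter 'setTitle:'
//   @property (getter=isHidden) BOOL hidden;   // getter 'isHidden', setter 'setHidden:'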
PDecl->setGetterName(GetterSel, GetterNameLoc); PDecl->setSetterName(SetterSel, SetterNameLoc); PDecl->setPropertyAttributesAsWritten( makePropertyAttributesAsWritten(AttributesAsWritten)); if (Attributes & ObjCDeclSpec::DQ_PR_readonly) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_readonly); if (Attributes & ObjCDeclSpec::DQ_PR_getter) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_getter); if (Attributes & ObjCDeclSpec::DQ_PR_setter) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_setter); if (isReadWrite) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_readwrite); if (Attributes & ObjCDeclSpec::DQ_PR_retain) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_retain); if (Attributes & ObjCDeclSpec::DQ_PR_strong) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_strong); if (Attributes & ObjCDeclSpec::DQ_PR_weak) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_weak); if (Attributes & ObjCDeclSpec::DQ_PR_copy) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_copy); if (Attributes & ObjCDeclSpec::DQ_PR_unsafe_unretained) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_unsafe_unretained); if (isAssign) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_assign); // In the semantic attributes, one of nonatomic or atomic is always set. if (Attributes & ObjCDeclSpec::DQ_PR_nonatomic) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_nonatomic); else PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_atomic); // 'unsafe_unretained' is alias for 'assign'. if (Attributes & ObjCDeclSpec::DQ_PR_unsafe_unretained) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_assign); if (isAssign) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_unsafe_unretained); if (MethodImplKind == tok::objc_required) PDecl->setPropertyImplementation(ObjCPropertyDecl::Required); else if (MethodImplKind == tok::objc_optional) PDecl->setPropertyImplementation(ObjCPropertyDecl::Optional); if (Attributes & ObjCDeclSpec::DQ_PR_nullability) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_nullability); if (Attributes & ObjCDeclSpec::DQ_PR_null_resettable) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_null_resettable); if (Attributes & ObjCDeclSpec::DQ_PR_class) PDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_class); return PDecl; } static void checkARCPropertyImpl(Sema &S, SourceLocation propertyImplLoc, ObjCPropertyDecl *property, ObjCIvarDecl *ivar) { if (property->isInvalidDecl() || ivar->isInvalidDecl()) return; QualType ivarType = ivar->getType(); Qualifiers::ObjCLifetime ivarLifetime = ivarType.getObjCLifetime(); // The lifetime implied by the property's attributes. Qualifiers::ObjCLifetime propertyLifetime = getImpliedARCOwnership(property->getPropertyAttributes(), property->getType()); // We're fine if they match. if (propertyLifetime == ivarLifetime) return; // None isn't a valid lifetime for an object ivar in ARC, and // __autoreleasing is never valid; don't diagnose twice. if ((ivarLifetime == Qualifiers::OCL_None && S.getLangOpts().ObjCAutoRefCount) || ivarLifetime == Qualifiers::OCL_Autoreleasing) return; // If the ivar is private, and it's implicitly __unsafe_unretained // becaues of its type, then pretend it was actually implicitly // __strong. This is only sound because we're processing the // property implementation before parsing any method bodies. 
if (ivarLifetime == Qualifiers::OCL_ExplicitNone && propertyLifetime == Qualifiers::OCL_Strong && ivar->getAccessControl() == ObjCIvarDecl::Private) { SplitQualType split = ivarType.split(); if (split.Quals.hasObjCLifetime()) { assert(ivarType->isObjCARCImplicitlyUnretainedType()); split.Quals.setObjCLifetime(Qualifiers::OCL_Strong); ivarType = S.Context.getQualifiedType(split); ivar->setType(ivarType); return; } } switch (propertyLifetime) { case Qualifiers::OCL_Strong: S.Diag(ivar->getLocation(), diag::err_arc_strong_property_ownership) << property->getDeclName() << ivar->getDeclName() << ivarLifetime; break; case Qualifiers::OCL_Weak: S.Diag(ivar->getLocation(), diag::err_weak_property) << property->getDeclName() << ivar->getDeclName(); break; case Qualifiers::OCL_ExplicitNone: S.Diag(ivar->getLocation(), diag::err_arc_assign_property_ownership) << property->getDeclName() << ivar->getDeclName() << ((property->getPropertyAttributesAsWritten() & ObjCPropertyDecl::OBJC_PR_assign) != 0); break; case Qualifiers::OCL_Autoreleasing: llvm_unreachable("properties cannot be autoreleasing"); case Qualifiers::OCL_None: // Any other property should be ignored. return; } S.Diag(property->getLocation(), diag::note_property_declare); if (propertyImplLoc.isValid()) S.Diag(propertyImplLoc, diag::note_property_synthesize); } /// setImpliedPropertyAttributeForReadOnlyProperty - /// This routine evaludates life-time attributes for a 'readonly' /// property with no known lifetime of its own, using backing /// 'ivar's attribute, if any. If no backing 'ivar', property's /// life-time is assumed 'strong'. static void setImpliedPropertyAttributeForReadOnlyProperty( ObjCPropertyDecl *property, ObjCIvarDecl *ivar) { Qualifiers::ObjCLifetime propertyLifetime = getImpliedARCOwnership(property->getPropertyAttributes(), property->getType()); if (propertyLifetime != Qualifiers::OCL_None) return; if (!ivar) { // if no backing ivar, make property 'strong'. property->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_strong); return; } // property assumes owenership of backing ivar. QualType ivarType = ivar->getType(); Qualifiers::ObjCLifetime ivarLifetime = ivarType.getObjCLifetime(); if (ivarLifetime == Qualifiers::OCL_Strong) property->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_strong); else if (ivarLifetime == Qualifiers::OCL_Weak) property->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_weak); } static bool isIncompatiblePropertyAttribute(unsigned Attr1, unsigned Attr2, ObjCPropertyDecl::PropertyAttributeKind Kind) { return (Attr1 & Kind) != (Attr2 & Kind); } static bool areIncompatiblePropertyAttributes(unsigned Attr1, unsigned Attr2, unsigned Kinds) { return ((Attr1 & Kinds) != 0) != ((Attr2 & Kinds) != 0); } /// SelectPropertyForSynthesisFromProtocols - Finds the most appropriate /// property declaration that should be synthesised in all of the inherited /// protocols. It also diagnoses properties declared in inherited protocols with /// mismatched types or attributes, since any of them can be candidate for /// synthesis. 
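///
/// e.g. (editorial illustration, not from the original source):
///
///   @protocol P1  @property (readonly)  id value;  @end
///   @protocol P2  @property (readwrite) id value;  @end
///   @interface I : NSObject <P1, P2>  @end
///   @implementation I
///   @synthesize value;   // the readwrite declaration from P2 is selected
///   @end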
static ObjCPropertyDecl * SelectPropertyForSynthesisFromProtocols(Sema &S, SourceLocation AtLoc, ObjCInterfaceDecl *ClassDecl, ObjCPropertyDecl *Property) { assert(isa(Property->getDeclContext()) && "Expected a property from a protocol"); ObjCInterfaceDecl::ProtocolPropertySet ProtocolSet; ObjCInterfaceDecl::PropertyDeclOrder Properties; for (const auto *PI : ClassDecl->all_referenced_protocols()) { if (const ObjCProtocolDecl *PDecl = PI->getDefinition()) PDecl->collectInheritedProtocolProperties(Property, ProtocolSet, Properties); } if (ObjCInterfaceDecl *SDecl = ClassDecl->getSuperClass()) { while (SDecl) { for (const auto *PI : SDecl->all_referenced_protocols()) { if (const ObjCProtocolDecl *PDecl = PI->getDefinition()) PDecl->collectInheritedProtocolProperties(Property, ProtocolSet, Properties); } SDecl = SDecl->getSuperClass(); } } if (Properties.empty()) return Property; ObjCPropertyDecl *OriginalProperty = Property; size_t SelectedIndex = 0; for (const auto &Prop : llvm::enumerate(Properties)) { // Select the 'readwrite' property if such property exists. if (Property->isReadOnly() && !Prop.value()->isReadOnly()) { Property = Prop.value(); SelectedIndex = Prop.index(); } } if (Property != OriginalProperty) { // Check that the old property is compatible with the new one. Properties[SelectedIndex] = OriginalProperty; } QualType RHSType = S.Context.getCanonicalType(Property->getType()); - unsigned OriginalAttributes = Property->getPropertyAttributes(); + unsigned OriginalAttributes = Property->getPropertyAttributesAsWritten(); enum MismatchKind { IncompatibleType = 0, HasNoExpectedAttribute, HasUnexpectedAttribute, DifferentGetter, DifferentSetter }; // Represents a property from another protocol that conflicts with the // selected declaration. struct MismatchingProperty { const ObjCPropertyDecl *Prop; MismatchKind Kind; StringRef AttributeName; }; SmallVector Mismatches; for (ObjCPropertyDecl *Prop : Properties) { // Verify the property attributes. - unsigned Attr = Prop->getPropertyAttributes(); + unsigned Attr = Prop->getPropertyAttributesAsWritten(); if (Attr != OriginalAttributes) { auto Diag = [&](bool OriginalHasAttribute, StringRef AttributeName) { MismatchKind Kind = OriginalHasAttribute ? 
HasNoExpectedAttribute : HasUnexpectedAttribute; Mismatches.push_back({Prop, Kind, AttributeName}); }; if (isIncompatiblePropertyAttribute(OriginalAttributes, Attr, ObjCPropertyDecl::OBJC_PR_copy)) { Diag(OriginalAttributes & ObjCPropertyDecl::OBJC_PR_copy, "copy"); continue; } if (areIncompatiblePropertyAttributes( OriginalAttributes, Attr, ObjCPropertyDecl::OBJC_PR_retain | ObjCPropertyDecl::OBJC_PR_strong)) { Diag(OriginalAttributes & (ObjCPropertyDecl::OBJC_PR_retain | ObjCPropertyDecl::OBJC_PR_strong), "retain (or strong)"); continue; } if (isIncompatiblePropertyAttribute(OriginalAttributes, Attr, ObjCPropertyDecl::OBJC_PR_atomic)) { Diag(OriginalAttributes & ObjCPropertyDecl::OBJC_PR_atomic, "atomic"); continue; } } if (Property->getGetterName() != Prop->getGetterName()) { Mismatches.push_back({Prop, DifferentGetter, ""}); continue; } if (!Property->isReadOnly() && !Prop->isReadOnly() && Property->getSetterName() != Prop->getSetterName()) { Mismatches.push_back({Prop, DifferentSetter, ""}); continue; } QualType LHSType = S.Context.getCanonicalType(Prop->getType()); if (!S.Context.propertyTypesAreCompatible(LHSType, RHSType)) { bool IncompatibleObjC = false; QualType ConvertedType; if (!S.isObjCPointerConversion(RHSType, LHSType, ConvertedType, IncompatibleObjC) || IncompatibleObjC) { Mismatches.push_back({Prop, IncompatibleType, ""}); continue; } } } if (Mismatches.empty()) return Property; // Diagnose incompability. { bool HasIncompatibleAttributes = false; for (const auto &Note : Mismatches) HasIncompatibleAttributes = Note.Kind != IncompatibleType ? true : HasIncompatibleAttributes; // Promote the warning to an error if there are incompatible attributes or // incompatible types together with readwrite/readonly incompatibility. auto Diag = S.Diag(Property->getLocation(), Property != OriginalProperty || HasIncompatibleAttributes ? diag::err_protocol_property_mismatch : diag::warn_protocol_property_mismatch); Diag << Mismatches[0].Kind; switch (Mismatches[0].Kind) { case IncompatibleType: Diag << Property->getType(); break; case HasNoExpectedAttribute: case HasUnexpectedAttribute: Diag << Mismatches[0].AttributeName; break; case DifferentGetter: Diag << Property->getGetterName(); break; case DifferentSetter: Diag << Property->getSetterName(); break; } } for (const auto &Note : Mismatches) { auto Diag = S.Diag(Note.Prop->getLocation(), diag::note_protocol_property_declare) << Note.Kind; switch (Note.Kind) { case IncompatibleType: Diag << Note.Prop->getType(); break; case HasNoExpectedAttribute: case HasUnexpectedAttribute: Diag << Note.AttributeName; break; case DifferentGetter: Diag << Note.Prop->getGetterName(); break; case DifferentSetter: Diag << Note.Prop->getSetterName(); break; } } if (AtLoc.isValid()) S.Diag(AtLoc, diag::note_property_synthesize); return Property; } /// Determine whether any storage attributes were written on the property. static bool hasWrittenStorageAttribute(ObjCPropertyDecl *Prop, ObjCPropertyQueryKind QueryKind) { if (Prop->getPropertyAttributesAsWritten() & OwnershipMask) return true; // If this is a readwrite property in a class extension that refines // a readonly property in the original class definition, check it as // well. // If it's a readonly property, we're not interested. if (Prop->isReadOnly()) return false; // Is it declared in an extension? auto Category = dyn_cast(Prop->getDeclContext()); if (!Category || !Category->IsClassExtension()) return false; // Find the corresponding property in the primary class definition. 
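  // Illustrative example (hypothetical declarations): a class extension may
  // redeclare a readonly property as readwrite without restating ownership,
  // in which case the ownership written on the primary declaration counts:
  //
  //   @interface MyClass : NSObject
  //   @property (readonly, copy) NSString *title;   // ownership written here
  //   @end
  //
  //   @interface MyClass ()
  //   @property (readwrite) NSString *title;        // refining declaration
  //   @end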
auto OrigClass = Category->getClassInterface(); for (auto Found : OrigClass->lookup(Prop->getDeclName())) { if (ObjCPropertyDecl *OrigProp = dyn_cast(Found)) return OrigProp->getPropertyAttributesAsWritten() & OwnershipMask; } // Look through all of the protocols. for (const auto *Proto : OrigClass->all_referenced_protocols()) { if (ObjCPropertyDecl *OrigProp = Proto->FindPropertyDeclaration( Prop->getIdentifier(), QueryKind)) return OrigProp->getPropertyAttributesAsWritten() & OwnershipMask; } return false; } /// ActOnPropertyImplDecl - This routine performs semantic checks and /// builds the AST node for a property implementation declaration; declared /// as \@synthesize or \@dynamic. /// Decl *Sema::ActOnPropertyImplDecl(Scope *S, SourceLocation AtLoc, SourceLocation PropertyLoc, bool Synthesize, IdentifierInfo *PropertyId, IdentifierInfo *PropertyIvar, SourceLocation PropertyIvarLoc, ObjCPropertyQueryKind QueryKind) { ObjCContainerDecl *ClassImpDecl = dyn_cast(CurContext); // Make sure we have a context for the property implementation declaration. if (!ClassImpDecl) { Diag(AtLoc, diag::err_missing_property_context); return nullptr; } if (PropertyIvarLoc.isInvalid()) PropertyIvarLoc = PropertyLoc; SourceLocation PropertyDiagLoc = PropertyLoc; if (PropertyDiagLoc.isInvalid()) PropertyDiagLoc = ClassImpDecl->getLocStart(); ObjCPropertyDecl *property = nullptr; ObjCInterfaceDecl *IDecl = nullptr; // Find the class or category class where this property must have // a declaration. ObjCImplementationDecl *IC = nullptr; ObjCCategoryImplDecl *CatImplClass = nullptr; if ((IC = dyn_cast(ClassImpDecl))) { IDecl = IC->getClassInterface(); // We always synthesize an interface for an implementation // without an interface decl. So, IDecl is always non-zero. assert(IDecl && "ActOnPropertyImplDecl - @implementation without @interface"); // Look for this property declaration in the @implementation's @interface property = IDecl->FindPropertyDeclaration(PropertyId, QueryKind); if (!property) { Diag(PropertyLoc, diag::err_bad_property_decl) << IDecl->getDeclName(); return nullptr; } if (property->isClassProperty() && Synthesize) { Diag(PropertyLoc, diag::err_synthesize_on_class_property) << PropertyId; return nullptr; } unsigned PIkind = property->getPropertyAttributesAsWritten(); if ((PIkind & (ObjCPropertyDecl::OBJC_PR_atomic | ObjCPropertyDecl::OBJC_PR_nonatomic) ) == 0) { if (AtLoc.isValid()) Diag(AtLoc, diag::warn_implicit_atomic_property); else Diag(IC->getLocation(), diag::warn_auto_implicit_atomic_property); Diag(property->getLocation(), diag::note_property_declare); } if (const ObjCCategoryDecl *CD = dyn_cast(property->getDeclContext())) { if (!CD->IsClassExtension()) { Diag(PropertyLoc, diag::err_category_property) << CD->getDeclName(); Diag(property->getLocation(), diag::note_property_declare); return nullptr; } } if (Synthesize&& (PIkind & ObjCPropertyDecl::OBJC_PR_readonly) && property->hasAttr() && !AtLoc.isValid()) { bool ReadWriteProperty = false; // Search into the class extensions and see if 'readonly property is // redeclared 'readwrite', then no warning is to be issued. 
      for (auto *Ext : IDecl->known_extensions()) {
        DeclContext::lookup_result R = Ext->lookup(property->getDeclName());
        if (!R.empty())
          if (ObjCPropertyDecl *ExtProp = dyn_cast<ObjCPropertyDecl>(R[0])) {
            PIkind = ExtProp->getPropertyAttributesAsWritten();
            if (PIkind & ObjCPropertyDecl::OBJC_PR_readwrite) {
              ReadWriteProperty = true;
              break;
            }
          }
      }

      if (!ReadWriteProperty) {
        Diag(property->getLocation(),
             diag::warn_auto_readonly_iboutlet_property)
            << property;
        SourceLocation readonlyLoc;
        if (LocPropertyAttribute(Context, "readonly",
                                 property->getLParenLoc(), readonlyLoc)) {
          SourceLocation endLoc =
            readonlyLoc.getLocWithOffset(strlen("readonly")-1);
          SourceRange ReadonlySourceRange(readonlyLoc, endLoc);
          Diag(property->getLocation(),
               diag::note_auto_readonly_iboutlet_fixup_suggest) <<
          FixItHint::CreateReplacement(ReadonlySourceRange, "readwrite");
        }
      }
    }
    if (Synthesize && isa<ObjCProtocolDecl>(property->getDeclContext()))
      property = SelectPropertyForSynthesisFromProtocols(*this, AtLoc, IDecl,
                                                         property);

  } else if ((CatImplClass = dyn_cast<ObjCCategoryImplDecl>(ClassImpDecl))) {
    if (Synthesize) {
      Diag(AtLoc, diag::err_synthesize_category_decl);
      return nullptr;
    }
    IDecl = CatImplClass->getClassInterface();
    if (!IDecl) {
      Diag(AtLoc, diag::err_missing_property_interface);
      return nullptr;
    }
    ObjCCategoryDecl *Category =
    IDecl->FindCategoryDeclaration(CatImplClass->getIdentifier());

    // If category for this implementation not found, it is an error which
    // has already been reported earlier.
    if (!Category)
      return nullptr;
    // Look for this property declaration in @implementation's category
    property = Category->FindPropertyDeclaration(PropertyId, QueryKind);
    if (!property) {
      Diag(PropertyLoc, diag::err_bad_category_property_decl)
      << Category->getDeclName();
      return nullptr;
    }
  } else {
    Diag(AtLoc, diag::err_bad_property_context);
    return nullptr;
  }

  ObjCIvarDecl *Ivar = nullptr;
  bool CompleteTypeErr = false;
  bool compat = true;
  // Check that we have a valid, previously declared ivar for @synthesize
  if (Synthesize) {
    // @synthesize
    if (!PropertyIvar)
      PropertyIvar = PropertyId;
    // Check that this is a previously declared 'ivar' in 'IDecl' interface
    ObjCInterfaceDecl *ClassDeclared;
    Ivar = IDecl->lookupInstanceVariable(PropertyIvar, ClassDeclared);
    QualType PropType = property->getType();
    QualType PropertyIvarType = PropType.getNonReferenceType();

    if (RequireCompleteType(PropertyDiagLoc, PropertyIvarType,
                            diag::err_incomplete_synthesized_property,
                            property->getDeclName())) {
      Diag(property->getLocation(), diag::note_property_declare);
      CompleteTypeErr = true;
    }

    if (getLangOpts().ObjCAutoRefCount &&
        (property->getPropertyAttributesAsWritten() &
         ObjCPropertyDecl::OBJC_PR_readonly) &&
        PropertyIvarType->isObjCRetainableType()) {
      setImpliedPropertyAttributeForReadOnlyProperty(property, Ivar);
    }

    ObjCPropertyDecl::PropertyAttributeKind kind
      = property->getPropertyAttributes();

    bool isARCWeak = false;
    if (kind & ObjCPropertyDecl::OBJC_PR_weak) {
      // Add GC __weak to the ivar type if the property is weak.
      if (getLangOpts().getGC() != LangOptions::NonGC) {
        assert(!getLangOpts().ObjCAutoRefCount);
        if (PropertyIvarType.isObjCGCStrong()) {
          Diag(PropertyDiagLoc, diag::err_gc_weak_property_strong_type);
          Diag(property->getLocation(), diag::note_property_declare);
        } else {
          PropertyIvarType =
            Context.getObjCGCQualType(PropertyIvarType, Qualifiers::Weak);
        }

      // Otherwise, check whether ARC __weak is enabled and works with
      // the property type.
      } else {
        if (!getLangOpts().ObjCWeak) {
          // Only complain here when synthesizing an ivar.
          if (!Ivar) {
            Diag(PropertyDiagLoc,
                 getLangOpts().ObjCWeakRuntime ?
diag::err_synthesizing_arc_weak_property_disabled : diag::err_synthesizing_arc_weak_property_no_runtime); Diag(property->getLocation(), diag::note_property_declare); } CompleteTypeErr = true; // suppress later diagnostics about the ivar } else { isARCWeak = true; if (const ObjCObjectPointerType *ObjT = PropertyIvarType->getAs()) { const ObjCInterfaceDecl *ObjI = ObjT->getInterfaceDecl(); if (ObjI && ObjI->isArcWeakrefUnavailable()) { Diag(property->getLocation(), diag::err_arc_weak_unavailable_property) << PropertyIvarType; Diag(ClassImpDecl->getLocation(), diag::note_implemented_by_class) << ClassImpDecl->getName(); } } } } } if (AtLoc.isInvalid()) { // Check when default synthesizing a property that there is // an ivar matching property name and issue warning; since this // is the most common case of not using an ivar used for backing // property in non-default synthesis case. ObjCInterfaceDecl *ClassDeclared=nullptr; ObjCIvarDecl *originalIvar = IDecl->lookupInstanceVariable(property->getIdentifier(), ClassDeclared); if (originalIvar) { Diag(PropertyDiagLoc, diag::warn_autosynthesis_property_ivar_match) << PropertyId << (Ivar == nullptr) << PropertyIvar << originalIvar->getIdentifier(); Diag(property->getLocation(), diag::note_property_declare); Diag(originalIvar->getLocation(), diag::note_ivar_decl); } } if (!Ivar) { // In ARC, give the ivar a lifetime qualifier based on the // property attributes. if ((getLangOpts().ObjCAutoRefCount || isARCWeak) && !PropertyIvarType.getObjCLifetime() && PropertyIvarType->isObjCRetainableType()) { // It's an error if we have to do this and the user didn't // explicitly write an ownership attribute on the property. if (!hasWrittenStorageAttribute(property, QueryKind) && !(kind & ObjCPropertyDecl::OBJC_PR_strong)) { Diag(PropertyDiagLoc, diag::err_arc_objc_property_default_assign_on_object); Diag(property->getLocation(), diag::note_property_declare); } else { Qualifiers::ObjCLifetime lifetime = getImpliedARCOwnership(kind, PropertyIvarType); assert(lifetime && "no lifetime for property?"); Qualifiers qs; qs.addObjCLifetime(lifetime); PropertyIvarType = Context.getQualifiedType(PropertyIvarType, qs); } } Ivar = ObjCIvarDecl::Create(Context, ClassImpDecl, PropertyIvarLoc,PropertyIvarLoc, PropertyIvar, PropertyIvarType, /*Dinfo=*/nullptr, ObjCIvarDecl::Private, (Expr *)nullptr, true); if (RequireNonAbstractType(PropertyIvarLoc, PropertyIvarType, diag::err_abstract_type_in_decl, AbstractSynthesizedIvarType)) { Diag(property->getLocation(), diag::note_property_declare); // An abstract type is as bad as an incomplete type. CompleteTypeErr = true; } if (CompleteTypeErr) Ivar->setInvalidDecl(); ClassImpDecl->addDecl(Ivar); IDecl->makeDeclVisibleInContext(Ivar); if (getLangOpts().ObjCRuntime.isFragile()) Diag(PropertyDiagLoc, diag::err_missing_property_ivar_decl) << PropertyId; // Note! I deliberately want it to fall thru so, we have a // a property implementation and to avoid future warnings. } else if (getLangOpts().ObjCRuntime.isNonFragile() && !declaresSameEntity(ClassDeclared, IDecl)) { Diag(PropertyDiagLoc, diag::err_ivar_in_superclass_use) << property->getDeclName() << Ivar->getDeclName() << ClassDeclared->getDeclName(); Diag(Ivar->getLocation(), diag::note_previous_access_declaration) << Ivar << Ivar->getName(); // Note! I deliberately want it to fall thru so more errors are caught. } property->setPropertyIvarDecl(Ivar); QualType IvarType = Context.getCanonicalType(Ivar->getType()); // Check that type of property and its ivar are type compatible. 
if (!Context.hasSameType(PropertyIvarType, IvarType)) { if (isa(PropertyIvarType) && isa(IvarType)) compat = Context.canAssignObjCInterfaces( PropertyIvarType->getAs(), IvarType->getAs()); else { compat = (CheckAssignmentConstraints(PropertyIvarLoc, PropertyIvarType, IvarType) == Compatible); } if (!compat) { Diag(PropertyDiagLoc, diag::err_property_ivar_type) << property->getDeclName() << PropType << Ivar->getDeclName() << IvarType; Diag(Ivar->getLocation(), diag::note_ivar_decl); // Note! I deliberately want it to fall thru so, we have a // a property implementation and to avoid future warnings. } else { // FIXME! Rules for properties are somewhat different that those // for assignments. Use a new routine to consolidate all cases; // specifically for property redeclarations as well as for ivars. QualType lhsType =Context.getCanonicalType(PropertyIvarType).getUnqualifiedType(); QualType rhsType =Context.getCanonicalType(IvarType).getUnqualifiedType(); if (lhsType != rhsType && lhsType->isArithmeticType()) { Diag(PropertyDiagLoc, diag::err_property_ivar_type) << property->getDeclName() << PropType << Ivar->getDeclName() << IvarType; Diag(Ivar->getLocation(), diag::note_ivar_decl); // Fall thru - see previous comment } } // __weak is explicit. So it works on Canonical type. if ((PropType.isObjCGCWeak() && !IvarType.isObjCGCWeak() && getLangOpts().getGC() != LangOptions::NonGC)) { Diag(PropertyDiagLoc, diag::err_weak_property) << property->getDeclName() << Ivar->getDeclName(); Diag(Ivar->getLocation(), diag::note_ivar_decl); // Fall thru - see previous comment } // Fall thru - see previous comment if ((property->getType()->isObjCObjectPointerType() || PropType.isObjCGCStrong()) && IvarType.isObjCGCWeak() && getLangOpts().getGC() != LangOptions::NonGC) { Diag(PropertyDiagLoc, diag::err_strong_property) << property->getDeclName() << Ivar->getDeclName(); // Fall thru - see previous comment } } if (getLangOpts().ObjCAutoRefCount || isARCWeak || Ivar->getType().getObjCLifetime()) checkARCPropertyImpl(*this, PropertyLoc, property, Ivar); } else if (PropertyIvar) // @dynamic Diag(PropertyDiagLoc, diag::err_dynamic_property_ivar_decl); assert (property && "ActOnPropertyImplDecl - property declaration missing"); ObjCPropertyImplDecl *PIDecl = ObjCPropertyImplDecl::Create(Context, CurContext, AtLoc, PropertyLoc, property, (Synthesize ? ObjCPropertyImplDecl::Synthesize : ObjCPropertyImplDecl::Dynamic), Ivar, PropertyIvarLoc); if (CompleteTypeErr || !compat) PIDecl->setInvalidDecl(); if (ObjCMethodDecl *getterMethod = property->getGetterMethodDecl()) { getterMethod->createImplicitParams(Context, IDecl); if (getLangOpts().CPlusPlus && Synthesize && !CompleteTypeErr && Ivar->getType()->isRecordType()) { // For Objective-C++, need to synthesize the AST for the IVAR object to be // returned by the getter as it must conform to C++'s copy-return rules. // FIXME. Eventually we want to do this for Objective-C as well. 
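      // Illustrative example (hypothetical property type, added for clarity):
      // for a property whose type is a C++ class with a non-trivial copy
      // constructor, e.g.
      //
      //   @property std::string name;
      //
      // the synthesized getter must return a copy of the ivar, so the copy
      // initialization built here may invoke that copy constructor.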
SynthesizedFunctionScope Scope(*this, getterMethod); ImplicitParamDecl *SelfDecl = getterMethod->getSelfDecl(); DeclRefExpr *SelfExpr = new (Context) DeclRefExpr(SelfDecl, false, SelfDecl->getType(), VK_LValue, PropertyDiagLoc); MarkDeclRefReferenced(SelfExpr); Expr *LoadSelfExpr = ImplicitCastExpr::Create(Context, SelfDecl->getType(), CK_LValueToRValue, SelfExpr, nullptr, VK_RValue); Expr *IvarRefExpr = new (Context) ObjCIvarRefExpr(Ivar, Ivar->getUsageType(SelfDecl->getType()), PropertyDiagLoc, Ivar->getLocation(), LoadSelfExpr, true, true); ExprResult Res = PerformCopyInitialization( InitializedEntity::InitializeResult(PropertyDiagLoc, getterMethod->getReturnType(), /*NRVO=*/false), PropertyDiagLoc, IvarRefExpr); if (!Res.isInvalid()) { Expr *ResExpr = Res.getAs(); if (ResExpr) ResExpr = MaybeCreateExprWithCleanups(ResExpr); PIDecl->setGetterCXXConstructor(ResExpr); } } if (property->hasAttr() && !getterMethod->hasAttr()) { Diag(getterMethod->getLocation(), diag::warn_property_getter_owning_mismatch); Diag(property->getLocation(), diag::note_property_declare); } if (getLangOpts().ObjCAutoRefCount && Synthesize) switch (getterMethod->getMethodFamily()) { case OMF_retain: case OMF_retainCount: case OMF_release: case OMF_autorelease: Diag(getterMethod->getLocation(), diag::err_arc_illegal_method_def) << 1 << getterMethod->getSelector(); break; default: break; } } if (ObjCMethodDecl *setterMethod = property->getSetterMethodDecl()) { setterMethod->createImplicitParams(Context, IDecl); if (getLangOpts().CPlusPlus && Synthesize && !CompleteTypeErr && Ivar->getType()->isRecordType()) { // FIXME. Eventually we want to do this for Objective-C as well. SynthesizedFunctionScope Scope(*this, setterMethod); ImplicitParamDecl *SelfDecl = setterMethod->getSelfDecl(); DeclRefExpr *SelfExpr = new (Context) DeclRefExpr(SelfDecl, false, SelfDecl->getType(), VK_LValue, PropertyDiagLoc); MarkDeclRefReferenced(SelfExpr); Expr *LoadSelfExpr = ImplicitCastExpr::Create(Context, SelfDecl->getType(), CK_LValueToRValue, SelfExpr, nullptr, VK_RValue); Expr *lhs = new (Context) ObjCIvarRefExpr(Ivar, Ivar->getUsageType(SelfDecl->getType()), PropertyDiagLoc, Ivar->getLocation(), LoadSelfExpr, true, true); ObjCMethodDecl::param_iterator P = setterMethod->param_begin(); ParmVarDecl *Param = (*P); QualType T = Param->getType().getNonReferenceType(); DeclRefExpr *rhs = new (Context) DeclRefExpr(Param, false, T, VK_LValue, PropertyDiagLoc); MarkDeclRefReferenced(rhs); ExprResult Res = BuildBinOp(S, PropertyDiagLoc, BO_Assign, lhs, rhs); if (property->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_atomic) { Expr *callExpr = Res.getAs(); if (const CXXOperatorCallExpr *CXXCE = dyn_cast_or_null(callExpr)) if (const FunctionDecl *FuncDecl = CXXCE->getDirectCallee()) if (!FuncDecl->isTrivial()) if (property->getType()->isReferenceType()) { Diag(PropertyDiagLoc, diag::err_atomic_property_nontrivial_assign_op) << property->getType(); Diag(FuncDecl->getLocStart(), diag::note_callee_decl) << FuncDecl; } } PIDecl->setSetterCXXAssignment(Res.getAs()); } } if (IC) { if (Synthesize) if (ObjCPropertyImplDecl *PPIDecl = IC->FindPropertyImplIvarDecl(PropertyIvar)) { Diag(PropertyLoc, diag::err_duplicate_ivar_use) << PropertyId << PPIDecl->getPropertyDecl()->getIdentifier() << PropertyIvar; Diag(PPIDecl->getLocation(), diag::note_previous_use); } if (ObjCPropertyImplDecl *PPIDecl = IC->FindPropertyImplDecl(PropertyId, QueryKind)) { Diag(PropertyLoc, diag::err_property_implemented) << PropertyId; Diag(PPIDecl->getLocation(), 
               diag::note_previous_declaration);
        return nullptr;
      }
    IC->addPropertyImplementation(PIDecl);
    if (getLangOpts().ObjCDefaultSynthProperties &&
        getLangOpts().ObjCRuntime.isNonFragile() &&
        !IDecl->isObjCRequiresPropertyDefs()) {
      // Diagnose if an ivar was lazily synthesized due to a previous
      // use and if 1) property is @dynamic or 2) property is synthesized
      // but it requires an ivar of different name.
      ObjCInterfaceDecl *ClassDeclared = nullptr;
      ObjCIvarDecl *Ivar = nullptr;
      if (!Synthesize)
        Ivar = IDecl->lookupInstanceVariable(PropertyId, ClassDeclared);
      else {
        if (PropertyIvar && PropertyIvar != PropertyId)
          Ivar = IDecl->lookupInstanceVariable(PropertyId, ClassDeclared);
      }
      // Issue diagnostics only if Ivar belongs to current class.
      if (Ivar && Ivar->getSynthesize() &&
          declaresSameEntity(IC->getClassInterface(), ClassDeclared)) {
        Diag(Ivar->getLocation(), diag::err_undeclared_var_use)
        << PropertyId;
        Ivar->setInvalidDecl();
      }
    }
  } else {
    if (Synthesize)
      if (ObjCPropertyImplDecl *PPIDecl =
          CatImplClass->FindPropertyImplIvarDecl(PropertyIvar)) {
        Diag(PropertyDiagLoc, diag::err_duplicate_ivar_use)
        << PropertyId << PPIDecl->getPropertyDecl()->getIdentifier()
        << PropertyIvar;
        Diag(PPIDecl->getLocation(), diag::note_previous_use);
      }

    if (ObjCPropertyImplDecl *PPIDecl =
        CatImplClass->FindPropertyImplDecl(PropertyId, QueryKind)) {
      Diag(PropertyDiagLoc, diag::err_property_implemented) << PropertyId;
      Diag(PPIDecl->getLocation(), diag::note_previous_declaration);
      return nullptr;
    }
    CatImplClass->addPropertyImplementation(PIDecl);
  }

  return PIDecl;
}

//===----------------------------------------------------------------------===//
// Helper methods.
//===----------------------------------------------------------------------===//

/// DiagnosePropertyMismatch - Compares two properties for their
/// attributes and types and warns on a variety of inconsistencies.
///
void
Sema::DiagnosePropertyMismatch(ObjCPropertyDecl *Property,
                               ObjCPropertyDecl *SuperProperty,
                               const IdentifierInfo *inheritedName,
                               bool OverridingProtocolProperty) {
  ObjCPropertyDecl::PropertyAttributeKind CAttr =
    Property->getPropertyAttributes();
  ObjCPropertyDecl::PropertyAttributeKind SAttr =
    SuperProperty->getPropertyAttributes();

  // We allow readonly properties without an explicit ownership
  // (assign/unsafe_unretained/weak/retain/strong/copy) in super class
  // to be overridden by a property with any explicit ownership in the subclass.
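  //
  // Illustrative example (hypothetical declarations): no attribute-mismatch
  // warning is issued for
  //
  //   @interface Base : NSObject
  //   @property (readonly) NSString *name;          // no explicit ownership
  //   @end
  //
  //   @interface Sub : Base
  //   @property (readonly, copy) NSString *name;    // adds explicit ownership
  //   @end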
if (!OverridingProtocolProperty && !getOwnershipRule(SAttr) && getOwnershipRule(CAttr)) ; else { if ((CAttr & ObjCPropertyDecl::OBJC_PR_readonly) && (SAttr & ObjCPropertyDecl::OBJC_PR_readwrite)) Diag(Property->getLocation(), diag::warn_readonly_property) << Property->getDeclName() << inheritedName; if ((CAttr & ObjCPropertyDecl::OBJC_PR_copy) != (SAttr & ObjCPropertyDecl::OBJC_PR_copy)) Diag(Property->getLocation(), diag::warn_property_attribute) << Property->getDeclName() << "copy" << inheritedName; else if (!(SAttr & ObjCPropertyDecl::OBJC_PR_readonly)){ unsigned CAttrRetain = (CAttr & (ObjCPropertyDecl::OBJC_PR_retain | ObjCPropertyDecl::OBJC_PR_strong)); unsigned SAttrRetain = (SAttr & (ObjCPropertyDecl::OBJC_PR_retain | ObjCPropertyDecl::OBJC_PR_strong)); bool CStrong = (CAttrRetain != 0); bool SStrong = (SAttrRetain != 0); if (CStrong != SStrong) Diag(Property->getLocation(), diag::warn_property_attribute) << Property->getDeclName() << "retain (or strong)" << inheritedName; } } // Check for nonatomic; note that nonatomic is effectively // meaningless for readonly properties, so don't diagnose if the // atomic property is 'readonly'. checkAtomicPropertyMismatch(*this, SuperProperty, Property, false); if (Property->getSetterName() != SuperProperty->getSetterName()) { Diag(Property->getLocation(), diag::warn_property_attribute) << Property->getDeclName() << "setter" << inheritedName; Diag(SuperProperty->getLocation(), diag::note_property_declare); } if (Property->getGetterName() != SuperProperty->getGetterName()) { Diag(Property->getLocation(), diag::warn_property_attribute) << Property->getDeclName() << "getter" << inheritedName; Diag(SuperProperty->getLocation(), diag::note_property_declare); } QualType LHSType = Context.getCanonicalType(SuperProperty->getType()); QualType RHSType = Context.getCanonicalType(Property->getType()); if (!Context.propertyTypesAreCompatible(LHSType, RHSType)) { // Do cases not handled in above. // FIXME. For future support of covariant property types, revisit this. 
bool IncompatibleObjC = false; QualType ConvertedType; if (!isObjCPointerConversion(RHSType, LHSType, ConvertedType, IncompatibleObjC) || IncompatibleObjC) { Diag(Property->getLocation(), diag::warn_property_types_are_incompatible) << Property->getType() << SuperProperty->getType() << inheritedName; Diag(SuperProperty->getLocation(), diag::note_property_declare); } } } bool Sema::DiagnosePropertyAccessorMismatch(ObjCPropertyDecl *property, ObjCMethodDecl *GetterMethod, SourceLocation Loc) { if (!GetterMethod) return false; QualType GetterType = GetterMethod->getReturnType().getNonReferenceType(); QualType PropertyRValueType = property->getType().getNonReferenceType().getAtomicUnqualifiedType(); bool compat = Context.hasSameType(PropertyRValueType, GetterType); if (!compat) { const ObjCObjectPointerType *propertyObjCPtr = nullptr; const ObjCObjectPointerType *getterObjCPtr = nullptr; if ((propertyObjCPtr = PropertyRValueType->getAs()) && (getterObjCPtr = GetterType->getAs())) compat = Context.canAssignObjCInterfaces(getterObjCPtr, propertyObjCPtr); else if (CheckAssignmentConstraints(Loc, GetterType, PropertyRValueType) != Compatible) { Diag(Loc, diag::err_property_accessor_type) << property->getDeclName() << PropertyRValueType << GetterMethod->getSelector() << GetterType; Diag(GetterMethod->getLocation(), diag::note_declared_at); return true; } else { compat = true; QualType lhsType = Context.getCanonicalType(PropertyRValueType); QualType rhsType =Context.getCanonicalType(GetterType).getUnqualifiedType(); if (lhsType != rhsType && lhsType->isArithmeticType()) compat = false; } } if (!compat) { Diag(Loc, diag::warn_accessor_property_type_mismatch) << property->getDeclName() << GetterMethod->getSelector(); Diag(GetterMethod->getLocation(), diag::note_declared_at); return true; } return false; } /// CollectImmediateProperties - This routine collects all properties in /// the class and its conforming protocols; but not those in its super class. static void CollectImmediateProperties(ObjCContainerDecl *CDecl, ObjCContainerDecl::PropertyMap &PropMap, ObjCContainerDecl::PropertyMap &SuperPropMap, bool CollectClassPropsOnly = false, bool IncludeProtocols = true) { if (ObjCInterfaceDecl *IDecl = dyn_cast(CDecl)) { for (auto *Prop : IDecl->properties()) { if (CollectClassPropsOnly && !Prop->isClassProperty()) continue; PropMap[std::make_pair(Prop->getIdentifier(), Prop->isClassProperty())] = Prop; } // Collect the properties from visible extensions. for (auto *Ext : IDecl->visible_extensions()) CollectImmediateProperties(Ext, PropMap, SuperPropMap, CollectClassPropsOnly, IncludeProtocols); if (IncludeProtocols) { // Scan through class's protocols. for (auto *PI : IDecl->all_referenced_protocols()) CollectImmediateProperties(PI, PropMap, SuperPropMap, CollectClassPropsOnly); } } if (ObjCCategoryDecl *CATDecl = dyn_cast(CDecl)) { for (auto *Prop : CATDecl->properties()) { if (CollectClassPropsOnly && !Prop->isClassProperty()) continue; PropMap[std::make_pair(Prop->getIdentifier(), Prop->isClassProperty())] = Prop; } if (IncludeProtocols) { // Scan through class's protocols. 
for (auto *PI : CATDecl->protocols()) CollectImmediateProperties(PI, PropMap, SuperPropMap, CollectClassPropsOnly); } } else if (ObjCProtocolDecl *PDecl = dyn_cast(CDecl)) { for (auto *Prop : PDecl->properties()) { if (CollectClassPropsOnly && !Prop->isClassProperty()) continue; ObjCPropertyDecl *PropertyFromSuper = SuperPropMap[std::make_pair(Prop->getIdentifier(), Prop->isClassProperty())]; // Exclude property for protocols which conform to class's super-class, // as super-class has to implement the property. if (!PropertyFromSuper || PropertyFromSuper->getIdentifier() != Prop->getIdentifier()) { ObjCPropertyDecl *&PropEntry = PropMap[std::make_pair(Prop->getIdentifier(), Prop->isClassProperty())]; if (!PropEntry) PropEntry = Prop; } } // Scan through protocol's protocols. for (auto *PI : PDecl->protocols()) CollectImmediateProperties(PI, PropMap, SuperPropMap, CollectClassPropsOnly); } } /// CollectSuperClassPropertyImplementations - This routine collects list of /// properties to be implemented in super class(s) and also coming from their /// conforming protocols. static void CollectSuperClassPropertyImplementations(ObjCInterfaceDecl *CDecl, ObjCInterfaceDecl::PropertyMap &PropMap) { if (ObjCInterfaceDecl *SDecl = CDecl->getSuperClass()) { ObjCInterfaceDecl::PropertyDeclOrder PO; while (SDecl) { SDecl->collectPropertiesToImplement(PropMap, PO); SDecl = SDecl->getSuperClass(); } } } /// IvarBacksCurrentMethodAccessor - This routine returns 'true' if 'IV' is /// an ivar synthesized for 'Method' and 'Method' is a property accessor /// declared in class 'IFace'. bool Sema::IvarBacksCurrentMethodAccessor(ObjCInterfaceDecl *IFace, ObjCMethodDecl *Method, ObjCIvarDecl *IV) { if (!IV->getSynthesize()) return false; ObjCMethodDecl *IMD = IFace->lookupMethod(Method->getSelector(), Method->isInstanceMethod()); if (!IMD || !IMD->isPropertyAccessor()) return false; // look up a property declaration whose one of its accessors is implemented // by this method. for (const auto *Property : IFace->instance_properties()) { if ((Property->getGetterName() == IMD->getSelector() || Property->getSetterName() == IMD->getSelector()) && (Property->getPropertyIvarDecl() == IV)) return true; } // Also look up property declaration in class extension whose one of its // accessors is implemented by this method. for (const auto *Ext : IFace->known_extensions()) for (const auto *Property : Ext->instance_properties()) if ((Property->getGetterName() == IMD->getSelector() || Property->getSetterName() == IMD->getSelector()) && (Property->getPropertyIvarDecl() == IV)) return true; return false; } static bool SuperClassImplementsProperty(ObjCInterfaceDecl *IDecl, ObjCPropertyDecl *Prop) { bool SuperClassImplementsGetter = false; bool SuperClassImplementsSetter = false; if (Prop->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_readonly) SuperClassImplementsSetter = true; while (IDecl->getSuperClass()) { ObjCInterfaceDecl *SDecl = IDecl->getSuperClass(); if (!SuperClassImplementsGetter && SDecl->getInstanceMethod(Prop->getGetterName())) SuperClassImplementsGetter = true; if (!SuperClassImplementsSetter && SDecl->getInstanceMethod(Prop->getSetterName())) SuperClassImplementsSetter = true; if (SuperClassImplementsGetter && SuperClassImplementsSetter) return true; IDecl = IDecl->getSuperClass(); } return false; } /// \brief Default synthesizes all properties which must be synthesized /// in class's \@implementation. 
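///
/// For example (hypothetical declarations), given
///
///   @interface MyClass : NSObject
///   @property (nonatomic, copy) NSString *name;
///   @end
///
///   @implementation MyClass
///   @end
///
/// the property 'name' is default-synthesized here as if
/// '@synthesize name = _name;' had been written.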
void Sema::DefaultSynthesizeProperties(Scope *S, ObjCImplDecl *IMPDecl, ObjCInterfaceDecl *IDecl, SourceLocation AtEnd) { ObjCInterfaceDecl::PropertyMap PropMap; ObjCInterfaceDecl::PropertyDeclOrder PropertyOrder; IDecl->collectPropertiesToImplement(PropMap, PropertyOrder); if (PropMap.empty()) return; ObjCInterfaceDecl::PropertyMap SuperPropMap; CollectSuperClassPropertyImplementations(IDecl, SuperPropMap); for (unsigned i = 0, e = PropertyOrder.size(); i != e; i++) { ObjCPropertyDecl *Prop = PropertyOrder[i]; // Is there a matching property synthesize/dynamic? if (Prop->isInvalidDecl() || Prop->isClassProperty() || Prop->getPropertyImplementation() == ObjCPropertyDecl::Optional) continue; // Property may have been synthesized by user. if (IMPDecl->FindPropertyImplDecl( Prop->getIdentifier(), Prop->getQueryKind())) continue; if (IMPDecl->getInstanceMethod(Prop->getGetterName())) { if (Prop->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_readonly) continue; if (IMPDecl->getInstanceMethod(Prop->getSetterName())) continue; } if (ObjCPropertyImplDecl *PID = IMPDecl->FindPropertyImplIvarDecl(Prop->getIdentifier())) { Diag(Prop->getLocation(), diag::warn_no_autosynthesis_shared_ivar_property) << Prop->getIdentifier(); if (PID->getLocation().isValid()) Diag(PID->getLocation(), diag::note_property_synthesize); continue; } ObjCPropertyDecl *PropInSuperClass = SuperPropMap[std::make_pair(Prop->getIdentifier(), Prop->isClassProperty())]; if (ObjCProtocolDecl *Proto = dyn_cast(Prop->getDeclContext())) { // We won't auto-synthesize properties declared in protocols. // Suppress the warning if class's superclass implements property's // getter and implements property's setter (if readwrite property). // Or, if property is going to be implemented in its super class. if (!SuperClassImplementsProperty(IDecl, Prop) && !PropInSuperClass) { Diag(IMPDecl->getLocation(), diag::warn_auto_synthesizing_protocol_property) << Prop << Proto; Diag(Prop->getLocation(), diag::note_property_declare); std::string FixIt = (Twine("@synthesize ") + Prop->getName() + ";\n\n").str(); Diag(AtEnd, diag::note_add_synthesize_directive) << FixItHint::CreateInsertion(AtEnd, FixIt); } continue; } // If property to be implemented in the super class, ignore. if (PropInSuperClass) { if ((Prop->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_readwrite) && (PropInSuperClass->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_readonly) && !IMPDecl->getInstanceMethod(Prop->getSetterName()) && !IDecl->HasUserDeclaredSetterMethod(Prop)) { Diag(Prop->getLocation(), diag::warn_no_autosynthesis_property) << Prop->getIdentifier(); Diag(PropInSuperClass->getLocation(), diag::note_property_declare); } else { Diag(Prop->getLocation(), diag::warn_autosynthesis_property_in_superclass) << Prop->getIdentifier(); Diag(PropInSuperClass->getLocation(), diag::note_property_declare); Diag(IMPDecl->getLocation(), diag::note_while_in_implementation); } continue; } // We use invalid SourceLocations for the synthesized ivars since they // aren't really synthesized at a particular location; they just exist. // Saying that they are located at the @implementation isn't really going // to help users. 
ObjCPropertyImplDecl *PIDecl = dyn_cast_or_null( ActOnPropertyImplDecl(S, SourceLocation(), SourceLocation(), true, /* property = */ Prop->getIdentifier(), /* ivar = */ Prop->getDefaultSynthIvarName(Context), Prop->getLocation(), Prop->getQueryKind())); if (PIDecl) { Diag(Prop->getLocation(), diag::warn_missing_explicit_synthesis); Diag(IMPDecl->getLocation(), diag::note_while_in_implementation); } } } void Sema::DefaultSynthesizeProperties(Scope *S, Decl *D, SourceLocation AtEnd) { if (!LangOpts.ObjCDefaultSynthProperties || LangOpts.ObjCRuntime.isFragile()) return; ObjCImplementationDecl *IC=dyn_cast_or_null(D); if (!IC) return; if (ObjCInterfaceDecl* IDecl = IC->getClassInterface()) if (!IDecl->isObjCRequiresPropertyDefs()) DefaultSynthesizeProperties(S, IC, IDecl, AtEnd); } static void DiagnoseUnimplementedAccessor( Sema &S, ObjCInterfaceDecl *PrimaryClass, Selector Method, ObjCImplDecl *IMPDecl, ObjCContainerDecl *CDecl, ObjCCategoryDecl *C, ObjCPropertyDecl *Prop, llvm::SmallPtrSet &SMap) { // Check to see if we have a corresponding selector in SMap and with the // right method type. auto I = std::find_if(SMap.begin(), SMap.end(), [&](const ObjCMethodDecl *x) { return x->getSelector() == Method && x->isClassMethod() == Prop->isClassProperty(); }); // When reporting on missing property setter/getter implementation in // categories, do not report when they are declared in primary class, // class's protocol, or one of it super classes. This is because, // the class is going to implement them. if (I == SMap.end() && (PrimaryClass == nullptr || !PrimaryClass->lookupPropertyAccessor(Method, C, Prop->isClassProperty()))) { unsigned diag = isa(CDecl) ? (Prop->isClassProperty() ? diag::warn_impl_required_in_category_for_class_property : diag::warn_setter_getter_impl_required_in_category) : (Prop->isClassProperty() ? diag::warn_impl_required_for_class_property : diag::warn_setter_getter_impl_required); S.Diag(IMPDecl->getLocation(), diag) << Prop->getDeclName() << Method; S.Diag(Prop->getLocation(), diag::note_property_declare); if (S.LangOpts.ObjCDefaultSynthProperties && S.LangOpts.ObjCRuntime.isNonFragile()) if (ObjCInterfaceDecl *ID = dyn_cast(CDecl)) if (const ObjCInterfaceDecl *RID = ID->isObjCRequiresPropertyDefs()) S.Diag(RID->getLocation(), diag::note_suppressed_class_declare); } } void Sema::DiagnoseUnimplementedProperties(Scope *S, ObjCImplDecl* IMPDecl, ObjCContainerDecl *CDecl, bool SynthesizeProperties) { ObjCContainerDecl::PropertyMap PropMap; ObjCInterfaceDecl *IDecl = dyn_cast(CDecl); // Since we don't synthesize class properties, we should emit diagnose even // if SynthesizeProperties is true. ObjCContainerDecl::PropertyMap NoNeedToImplPropMap; // Gather properties which need not be implemented in this class // or category. if (!IDecl) if (ObjCCategoryDecl *C = dyn_cast(CDecl)) { // For categories, no need to implement properties declared in // its primary class (and its super classes) if property is // declared in one of those containers. if ((IDecl = C->getClassInterface())) { ObjCInterfaceDecl::PropertyDeclOrder PO; IDecl->collectPropertiesToImplement(NoNeedToImplPropMap, PO); } } if (IDecl) CollectSuperClassPropertyImplementations(IDecl, NoNeedToImplPropMap); // When SynthesizeProperties is true, we only check class properties. 
CollectImmediateProperties(CDecl, PropMap, NoNeedToImplPropMap, SynthesizeProperties/*CollectClassPropsOnly*/); // Scan the @interface to see if any of the protocols it adopts // require an explicit implementation, via attribute // 'objc_protocol_requires_explicit_implementation'. if (IDecl) { std::unique_ptr LazyMap; for (auto *PDecl : IDecl->all_referenced_protocols()) { if (!PDecl->hasAttr()) continue; // Lazily construct a set of all the properties in the @interface // of the class, without looking at the superclass. We cannot // use the call to CollectImmediateProperties() above as that // utilizes information from the super class's properties as well // as scans the adopted protocols. This work only triggers for protocols // with the attribute, which is very rare, and only occurs when // analyzing the @implementation. if (!LazyMap) { ObjCContainerDecl::PropertyMap NoNeedToImplPropMap; LazyMap.reset(new ObjCContainerDecl::PropertyMap()); CollectImmediateProperties(CDecl, *LazyMap, NoNeedToImplPropMap, /* CollectClassPropsOnly */ false, /* IncludeProtocols */ false); } // Add the properties of 'PDecl' to the list of properties that // need to be implemented. for (auto *PropDecl : PDecl->properties()) { if ((*LazyMap)[std::make_pair(PropDecl->getIdentifier(), PropDecl->isClassProperty())]) continue; PropMap[std::make_pair(PropDecl->getIdentifier(), PropDecl->isClassProperty())] = PropDecl; } } } if (PropMap.empty()) return; llvm::DenseSet PropImplMap; for (const auto *I : IMPDecl->property_impls()) PropImplMap.insert(I->getPropertyDecl()); llvm::SmallPtrSet InsMap; // Collect property accessors implemented in current implementation. for (const auto *I : IMPDecl->methods()) InsMap.insert(I); ObjCCategoryDecl *C = dyn_cast(CDecl); ObjCInterfaceDecl *PrimaryClass = nullptr; if (C && !C->IsClassExtension()) if ((PrimaryClass = C->getClassInterface())) // Report unimplemented properties in the category as well. if (ObjCImplDecl *IMP = PrimaryClass->getImplementation()) { // When reporting on missing setter/getters, do not report when // setter/getter is implemented in category's primary class // implementation. for (const auto *I : IMP->methods()) InsMap.insert(I); } for (ObjCContainerDecl::PropertyMap::iterator P = PropMap.begin(), E = PropMap.end(); P != E; ++P) { ObjCPropertyDecl *Prop = P->second; // Is there a matching property synthesize/dynamic? if (Prop->isInvalidDecl() || Prop->getPropertyImplementation() == ObjCPropertyDecl::Optional || PropImplMap.count(Prop) || Prop->getAvailability() == AR_Unavailable) continue; // Diagnose unimplemented getters and setters. DiagnoseUnimplementedAccessor(*this, PrimaryClass, Prop->getGetterName(), IMPDecl, CDecl, C, Prop, InsMap); if (!Prop->isReadOnly()) DiagnoseUnimplementedAccessor(*this, PrimaryClass, Prop->getSetterName(), IMPDecl, CDecl, C, Prop, InsMap); } } void Sema::diagnoseNullResettableSynthesizedSetters(const ObjCImplDecl *impDecl) { for (const auto *propertyImpl : impDecl->property_impls()) { const auto *property = propertyImpl->getPropertyDecl(); // Warn about null_resettable properties with synthesized setters, // because the setter won't properly handle nil. 
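    //
    // Illustrative example (hypothetical declaration): for
    //
    //   @property (null_resettable, copy) NSString *name;
    //
    // a hand-written setter is expected to reset the property to a default
    // value when passed nil; a default-synthesized setter would simply store
    // nil, so warn when neither accessor is implemented explicitly.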
if (propertyImpl->getPropertyImplementation() == ObjCPropertyImplDecl::Synthesize && (property->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_null_resettable) && property->getGetterMethodDecl() && property->getSetterMethodDecl()) { auto *getterMethod = property->getGetterMethodDecl(); auto *setterMethod = property->getSetterMethodDecl(); if (!impDecl->getInstanceMethod(setterMethod->getSelector()) && !impDecl->getInstanceMethod(getterMethod->getSelector())) { SourceLocation loc = propertyImpl->getLocation(); if (loc.isInvalid()) loc = impDecl->getLocStart(); Diag(loc, diag::warn_null_resettable_setter) << setterMethod->getSelector() << property->getDeclName(); } } } } void Sema::AtomicPropertySetterGetterRules (ObjCImplDecl* IMPDecl, ObjCInterfaceDecl* IDecl) { // Rules apply in non-GC mode only if (getLangOpts().getGC() != LangOptions::NonGC) return; ObjCContainerDecl::PropertyMap PM; for (auto *Prop : IDecl->properties()) PM[std::make_pair(Prop->getIdentifier(), Prop->isClassProperty())] = Prop; for (const auto *Ext : IDecl->known_extensions()) for (auto *Prop : Ext->properties()) PM[std::make_pair(Prop->getIdentifier(), Prop->isClassProperty())] = Prop; for (ObjCContainerDecl::PropertyMap::iterator I = PM.begin(), E = PM.end(); I != E; ++I) { const ObjCPropertyDecl *Property = I->second; ObjCMethodDecl *GetterMethod = nullptr; ObjCMethodDecl *SetterMethod = nullptr; bool LookedUpGetterSetter = false; unsigned Attributes = Property->getPropertyAttributes(); unsigned AttributesAsWritten = Property->getPropertyAttributesAsWritten(); if (!(AttributesAsWritten & ObjCPropertyDecl::OBJC_PR_atomic) && !(AttributesAsWritten & ObjCPropertyDecl::OBJC_PR_nonatomic)) { GetterMethod = Property->isClassProperty() ? IMPDecl->getClassMethod(Property->getGetterName()) : IMPDecl->getInstanceMethod(Property->getGetterName()); SetterMethod = Property->isClassProperty() ? IMPDecl->getClassMethod(Property->getSetterName()) : IMPDecl->getInstanceMethod(Property->getSetterName()); LookedUpGetterSetter = true; if (GetterMethod) { Diag(GetterMethod->getLocation(), diag::warn_default_atomic_custom_getter_setter) << Property->getIdentifier() << 0; Diag(Property->getLocation(), diag::note_property_declare); } if (SetterMethod) { Diag(SetterMethod->getLocation(), diag::warn_default_atomic_custom_getter_setter) << Property->getIdentifier() << 1; Diag(Property->getLocation(), diag::note_property_declare); } } // We only care about readwrite atomic property. if ((Attributes & ObjCPropertyDecl::OBJC_PR_nonatomic) || !(Attributes & ObjCPropertyDecl::OBJC_PR_readwrite)) continue; if (const ObjCPropertyImplDecl *PIDecl = IMPDecl->FindPropertyImplDecl( Property->getIdentifier(), Property->getQueryKind())) { if (PIDecl->getPropertyImplementation() == ObjCPropertyImplDecl::Dynamic) continue; if (!LookedUpGetterSetter) { GetterMethod = Property->isClassProperty() ? IMPDecl->getClassMethod(Property->getGetterName()) : IMPDecl->getInstanceMethod(Property->getGetterName()); SetterMethod = Property->isClassProperty() ? IMPDecl->getClassMethod(Property->getSetterName()) : IMPDecl->getInstanceMethod(Property->getSetterName()); } if ((GetterMethod && !SetterMethod) || (!GetterMethod && SetterMethod)) { SourceLocation MethodLoc = (GetterMethod ? GetterMethod->getLocation() : SetterMethod->getLocation()); Diag(MethodLoc, diag::warn_atomic_property_rule) << Property->getIdentifier() << (GetterMethod != nullptr) << (SetterMethod != nullptr); // fixit stuff. 
if (Property->getLParenLoc().isValid() && !(AttributesAsWritten & ObjCPropertyDecl::OBJC_PR_atomic)) { // @property () ... case. SourceLocation AfterLParen = getLocForEndOfToken(Property->getLParenLoc()); StringRef NonatomicStr = AttributesAsWritten? "nonatomic, " : "nonatomic"; Diag(Property->getLocation(), diag::note_atomic_property_fixup_suggest) << FixItHint::CreateInsertion(AfterLParen, NonatomicStr); } else if (Property->getLParenLoc().isInvalid()) { //@property id etc. SourceLocation startLoc = Property->getTypeSourceInfo()->getTypeLoc().getBeginLoc(); Diag(Property->getLocation(), diag::note_atomic_property_fixup_suggest) << FixItHint::CreateInsertion(startLoc, "(nonatomic) "); } else Diag(MethodLoc, diag::note_atomic_property_fixup_suggest); Diag(Property->getLocation(), diag::note_property_declare); } } } } void Sema::DiagnoseOwningPropertyGetterSynthesis(const ObjCImplementationDecl *D) { if (getLangOpts().getGC() == LangOptions::GCOnly) return; for (const auto *PID : D->property_impls()) { const ObjCPropertyDecl *PD = PID->getPropertyDecl(); if (PD && !PD->hasAttr() && !PD->isClassProperty() && !D->getInstanceMethod(PD->getGetterName())) { ObjCMethodDecl *method = PD->getGetterMethodDecl(); if (!method) continue; ObjCMethodFamily family = method->getMethodFamily(); if (family == OMF_alloc || family == OMF_copy || family == OMF_mutableCopy || family == OMF_new) { if (getLangOpts().ObjCAutoRefCount) Diag(PD->getLocation(), diag::err_cocoa_naming_owned_rule); else Diag(PD->getLocation(), diag::warn_cocoa_naming_owned_rule); // Look for a getter explicitly declared alongside the property. // If we find one, use its location for the note. SourceLocation noteLoc = PD->getLocation(); SourceLocation fixItLoc; for (auto *getterRedecl : method->redecls()) { if (getterRedecl->isImplicit()) continue; if (getterRedecl->getDeclContext() != PD->getDeclContext()) continue; noteLoc = getterRedecl->getLocation(); fixItLoc = getterRedecl->getLocEnd(); } Preprocessor &PP = getPreprocessor(); TokenValue tokens[] = { tok::kw___attribute, tok::l_paren, tok::l_paren, PP.getIdentifierInfo("objc_method_family"), tok::l_paren, PP.getIdentifierInfo("none"), tok::r_paren, tok::r_paren, tok::r_paren }; StringRef spelling = "__attribute__((objc_method_family(none)))"; StringRef macroName = PP.getLastMacroWithSpelling(noteLoc, tokens); if (!macroName.empty()) spelling = macroName; auto noteDiag = Diag(noteLoc, diag::note_cocoa_naming_declare_family) << method->getDeclName() << spelling; if (fixItLoc.isValid()) { SmallString<64> fixItText(" "); fixItText += spelling; noteDiag << FixItHint::CreateInsertion(fixItLoc, fixItText); } } } } } void Sema::DiagnoseMissingDesignatedInitOverrides( const ObjCImplementationDecl *ImplD, const ObjCInterfaceDecl *IFD) { assert(IFD->hasDesignatedInitializers()); const ObjCInterfaceDecl *SuperD = IFD->getSuperClass(); if (!SuperD) return; SelectorSet InitSelSet; for (const auto *I : ImplD->instance_methods()) if (I->getMethodFamily() == OMF_init) InitSelSet.insert(I->getSelector()); SmallVector DesignatedInits; SuperD->getDesignatedInitializers(DesignatedInits); for (SmallVector::iterator I = DesignatedInits.begin(), E = DesignatedInits.end(); I != E; ++I) { const ObjCMethodDecl *MD = *I; if (!InitSelSet.count(MD->getSelector())) { bool Ignore = false; if (auto *IMD = IFD->getInstanceMethod(MD->getSelector())) { Ignore = IMD->isUnavailable(); } if (!Ignore) { Diag(ImplD->getLocation(), diag::warn_objc_implementation_missing_designated_init_override) << MD->getSelector(); 
Diag(MD->getLocation(), diag::note_objc_designated_init_marked_here); } } } } /// AddPropertyAttrs - Propagates attributes from a property to the /// implicitly-declared getter or setter for that property. static void AddPropertyAttrs(Sema &S, ObjCMethodDecl *PropertyMethod, ObjCPropertyDecl *Property) { // Should we just clone all attributes over? for (const auto *A : Property->attrs()) { if (isa(A) || isa(A) || isa(A)) PropertyMethod->addAttr(A->clone(S.Context)); } } /// ProcessPropertyDecl - Make sure that any user-defined setter/getter methods /// have the property type and issue diagnostics if they don't. /// Also synthesize a getter/setter method if none exist (and update the /// appropriate lookup tables. void Sema::ProcessPropertyDecl(ObjCPropertyDecl *property) { ObjCMethodDecl *GetterMethod, *SetterMethod; ObjCContainerDecl *CD = cast(property->getDeclContext()); if (CD->isInvalidDecl()) return; bool IsClassProperty = property->isClassProperty(); GetterMethod = IsClassProperty ? CD->getClassMethod(property->getGetterName()) : CD->getInstanceMethod(property->getGetterName()); // if setter or getter is not found in class extension, it might be // in the primary class. if (!GetterMethod) if (const ObjCCategoryDecl *CatDecl = dyn_cast(CD)) if (CatDecl->IsClassExtension()) GetterMethod = IsClassProperty ? CatDecl->getClassInterface()-> getClassMethod(property->getGetterName()) : CatDecl->getClassInterface()-> getInstanceMethod(property->getGetterName()); SetterMethod = IsClassProperty ? CD->getClassMethod(property->getSetterName()) : CD->getInstanceMethod(property->getSetterName()); if (!SetterMethod) if (const ObjCCategoryDecl *CatDecl = dyn_cast(CD)) if (CatDecl->IsClassExtension()) SetterMethod = IsClassProperty ? CatDecl->getClassInterface()-> getClassMethod(property->getSetterName()) : CatDecl->getClassInterface()-> getInstanceMethod(property->getSetterName()); DiagnosePropertyAccessorMismatch(property, GetterMethod, property->getLocation()); if (!property->isReadOnly() && SetterMethod) { if (Context.getCanonicalType(SetterMethod->getReturnType()) != Context.VoidTy) Diag(SetterMethod->getLocation(), diag::err_setter_type_void); if (SetterMethod->param_size() != 1 || !Context.hasSameUnqualifiedType( (*SetterMethod->param_begin())->getType().getNonReferenceType(), property->getType().getNonReferenceType())) { Diag(property->getLocation(), diag::warn_accessor_property_type_mismatch) << property->getDeclName() << SetterMethod->getSelector(); Diag(SetterMethod->getLocation(), diag::note_declared_at); } } // Synthesize getter/setter methods if none exist. // Find the default getter and if one not found, add one. // FIXME: The synthesized property we set here is misleading. We almost always // synthesize these methods unless the user explicitly provided prototypes // (which is odd, but allowed). Sema should be typechecking that the // declarations jive in that situation (which it is not currently). if (!GetterMethod) { // No instance/class method of same name as property getter name was found. // Declare a getter method and add it to the list of methods // for this class. SourceLocation Loc = property->getLocation(); // The getter returns the declared property type with all qualifiers // removed. QualType resultTy = property->getType().getAtomicUnqualifiedType(); // If the property is null_resettable, the getter returns nonnull. 
if (property->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_null_resettable) { QualType modifiedTy = resultTy; if (auto nullability = AttributedType::stripOuterNullability(modifiedTy)) { if (*nullability == NullabilityKind::Unspecified) resultTy = Context.getAttributedType(AttributedType::attr_nonnull, modifiedTy, modifiedTy); } } GetterMethod = ObjCMethodDecl::Create(Context, Loc, Loc, property->getGetterName(), resultTy, nullptr, CD, !IsClassProperty, /*isVariadic=*/false, /*isPropertyAccessor=*/true, /*isImplicitlyDeclared=*/true, /*isDefined=*/false, (property->getPropertyImplementation() == ObjCPropertyDecl::Optional) ? ObjCMethodDecl::Optional : ObjCMethodDecl::Required); CD->addDecl(GetterMethod); AddPropertyAttrs(*this, GetterMethod, property); if (property->hasAttr()) GetterMethod->addAttr(NSReturnsNotRetainedAttr::CreateImplicit(Context, Loc)); if (property->hasAttr()) GetterMethod->addAttr( ObjCReturnsInnerPointerAttr::CreateImplicit(Context, Loc)); if (const SectionAttr *SA = property->getAttr()) GetterMethod->addAttr( SectionAttr::CreateImplicit(Context, SectionAttr::GNU_section, SA->getName(), Loc)); if (getLangOpts().ObjCAutoRefCount) CheckARCMethodDecl(GetterMethod); } else // A user declared getter will be synthesize when @synthesize of // the property with the same name is seen in the @implementation GetterMethod->setPropertyAccessor(true); property->setGetterMethodDecl(GetterMethod); // Skip setter if property is read-only. if (!property->isReadOnly()) { // Find the default setter and if one not found, add one. if (!SetterMethod) { // No instance/class method of same name as property setter name was // found. // Declare a setter method and add it to the list of methods // for this class. SourceLocation Loc = property->getLocation(); SetterMethod = ObjCMethodDecl::Create(Context, Loc, Loc, property->getSetterName(), Context.VoidTy, nullptr, CD, !IsClassProperty, /*isVariadic=*/false, /*isPropertyAccessor=*/true, /*isImplicitlyDeclared=*/true, /*isDefined=*/false, (property->getPropertyImplementation() == ObjCPropertyDecl::Optional) ? ObjCMethodDecl::Optional : ObjCMethodDecl::Required); // Remove all qualifiers from the setter's parameter type. QualType paramTy = property->getType().getUnqualifiedType().getAtomicUnqualifiedType(); // If the property is null_resettable, the setter accepts a // nullable value. if (property->getPropertyAttributes() & ObjCPropertyDecl::OBJC_PR_null_resettable) { QualType modifiedTy = paramTy; if (auto nullability = AttributedType::stripOuterNullability(modifiedTy)){ if (*nullability == NullabilityKind::Unspecified) paramTy = Context.getAttributedType(AttributedType::attr_nullable, modifiedTy, modifiedTy); } } // Invent the arguments for the setter. We don't bother making a // nice name for the argument. ParmVarDecl *Argument = ParmVarDecl::Create(Context, SetterMethod, Loc, Loc, property->getIdentifier(), paramTy, /*TInfo=*/nullptr, SC_None, nullptr); SetterMethod->setMethodParams(Context, Argument, None); AddPropertyAttrs(*this, SetterMethod, property); CD->addDecl(SetterMethod); if (const SectionAttr *SA = property->getAttr()) SetterMethod->addAttr( SectionAttr::CreateImplicit(Context, SectionAttr::GNU_section, SA->getName(), Loc)); // It's possible for the user to have set a very odd custom // setter selector that causes it to have a method family. 
if (getLangOpts().ObjCAutoRefCount) CheckARCMethodDecl(SetterMethod); } else // A user declared setter will be synthesize when @synthesize of // the property with the same name is seen in the @implementation SetterMethod->setPropertyAccessor(true); property->setSetterMethodDecl(SetterMethod); } // Add any synthesized methods to the global pool. This allows us to // handle the following, which is supported by GCC (and part of the design). // // @interface Foo // @property double bar; // @end // // void thisIsUnfortunate() { // id foo; // double bar = [foo bar]; // } // if (!IsClassProperty) { if (GetterMethod) AddInstanceMethodToGlobalPool(GetterMethod); if (SetterMethod) AddInstanceMethodToGlobalPool(SetterMethod); } else { if (GetterMethod) AddFactoryMethodToGlobalPool(GetterMethod); if (SetterMethod) AddFactoryMethodToGlobalPool(SetterMethod); } ObjCInterfaceDecl *CurrentClass = dyn_cast(CD); if (!CurrentClass) { if (ObjCCategoryDecl *Cat = dyn_cast(CD)) CurrentClass = Cat->getClassInterface(); else if (ObjCImplDecl *Impl = dyn_cast(CD)) CurrentClass = Impl->getClassInterface(); } if (GetterMethod) CheckObjCMethodOverrides(GetterMethod, CurrentClass, Sema::RTC_Unknown); if (SetterMethod) CheckObjCMethodOverrides(SetterMethod, CurrentClass, Sema::RTC_Unknown); } void Sema::CheckObjCPropertyAttributes(Decl *PDecl, SourceLocation Loc, unsigned &Attributes, bool propertyInPrimaryClass) { // FIXME: Improve the reported location. if (!PDecl || PDecl->isInvalidDecl()) return; if ((Attributes & ObjCDeclSpec::DQ_PR_readonly) && (Attributes & ObjCDeclSpec::DQ_PR_readwrite)) Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "readonly" << "readwrite"; ObjCPropertyDecl *PropertyDecl = cast(PDecl); QualType PropertyTy = PropertyDecl->getType(); // Check for copy or retain on non-object types. if ((Attributes & (ObjCDeclSpec::DQ_PR_weak | ObjCDeclSpec::DQ_PR_copy | ObjCDeclSpec::DQ_PR_retain | ObjCDeclSpec::DQ_PR_strong)) && !PropertyTy->isObjCRetainableType() && !PropertyDecl->hasAttr()) { Diag(Loc, diag::err_objc_property_requires_object) << (Attributes & ObjCDeclSpec::DQ_PR_weak ? "weak" : Attributes & ObjCDeclSpec::DQ_PR_copy ? "copy" : "retain (or strong)"); Attributes &= ~(ObjCDeclSpec::DQ_PR_weak | ObjCDeclSpec::DQ_PR_copy | ObjCDeclSpec::DQ_PR_retain | ObjCDeclSpec::DQ_PR_strong); PropertyDecl->setInvalidDecl(); } // Check for more than one of { assign, copy, retain }. 
if (Attributes & ObjCDeclSpec::DQ_PR_assign) { if (Attributes & ObjCDeclSpec::DQ_PR_copy) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "assign" << "copy"; Attributes &= ~ObjCDeclSpec::DQ_PR_copy; } if (Attributes & ObjCDeclSpec::DQ_PR_retain) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "assign" << "retain"; Attributes &= ~ObjCDeclSpec::DQ_PR_retain; } if (Attributes & ObjCDeclSpec::DQ_PR_strong) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "assign" << "strong"; Attributes &= ~ObjCDeclSpec::DQ_PR_strong; } if (getLangOpts().ObjCAutoRefCount && (Attributes & ObjCDeclSpec::DQ_PR_weak)) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "assign" << "weak"; Attributes &= ~ObjCDeclSpec::DQ_PR_weak; } if (PropertyDecl->hasAttr()) Diag(Loc, diag::warn_iboutletcollection_property_assign); } else if (Attributes & ObjCDeclSpec::DQ_PR_unsafe_unretained) { if (Attributes & ObjCDeclSpec::DQ_PR_copy) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "unsafe_unretained" << "copy"; Attributes &= ~ObjCDeclSpec::DQ_PR_copy; } if (Attributes & ObjCDeclSpec::DQ_PR_retain) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "unsafe_unretained" << "retain"; Attributes &= ~ObjCDeclSpec::DQ_PR_retain; } if (Attributes & ObjCDeclSpec::DQ_PR_strong) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "unsafe_unretained" << "strong"; Attributes &= ~ObjCDeclSpec::DQ_PR_strong; } if (getLangOpts().ObjCAutoRefCount && (Attributes & ObjCDeclSpec::DQ_PR_weak)) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "unsafe_unretained" << "weak"; Attributes &= ~ObjCDeclSpec::DQ_PR_weak; } } else if (Attributes & ObjCDeclSpec::DQ_PR_copy) { if (Attributes & ObjCDeclSpec::DQ_PR_retain) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "copy" << "retain"; Attributes &= ~ObjCDeclSpec::DQ_PR_retain; } if (Attributes & ObjCDeclSpec::DQ_PR_strong) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "copy" << "strong"; Attributes &= ~ObjCDeclSpec::DQ_PR_strong; } if (Attributes & ObjCDeclSpec::DQ_PR_weak) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "copy" << "weak"; Attributes &= ~ObjCDeclSpec::DQ_PR_weak; } } else if ((Attributes & ObjCDeclSpec::DQ_PR_retain) && (Attributes & ObjCDeclSpec::DQ_PR_weak)) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "retain" << "weak"; Attributes &= ~ObjCDeclSpec::DQ_PR_retain; } else if ((Attributes & ObjCDeclSpec::DQ_PR_strong) && (Attributes & ObjCDeclSpec::DQ_PR_weak)) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "strong" << "weak"; Attributes &= ~ObjCDeclSpec::DQ_PR_weak; } if (Attributes & ObjCDeclSpec::DQ_PR_weak) { // 'weak' and 'nonnull' are mutually exclusive. if (auto nullability = PropertyTy->getNullability(Context)) { if (*nullability == NullabilityKind::NonNull) Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "nonnull" << "weak"; } } if ((Attributes & ObjCDeclSpec::DQ_PR_atomic) && (Attributes & ObjCDeclSpec::DQ_PR_nonatomic)) { Diag(Loc, diag::err_objc_property_attr_mutually_exclusive) << "atomic" << "nonatomic"; Attributes &= ~ObjCDeclSpec::DQ_PR_atomic; } // Warn if user supplied no assignment attribute, property is // readwrite, and this is an object type. 
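// Illustrative aside (standalone sketch, not part of the surrounding
// function): the cascade above reports each mutually-exclusive pair of
// property attributes once and then clears one of the two bits, so that later
// checks operate on a self-consistent attribute mask. All names below are
// hypothetical stand-ins for the ObjCDeclSpec::DQ_PR_* flags.
#include <cstdio>

namespace sketch {
enum PropAttr : unsigned {
  PA_assign = 1u << 0,
  PA_copy   = 1u << 1,
  PA_retain = 1u << 2,
  PA_weak   = 1u << 3
};

// Report the conflict once, then drop one of the two bits; the code above
// chooses which bit to drop case by case.
inline void diagnoseExclusive(unsigned &Attrs, PropAttr Keep, PropAttr Drop,
                              const char *KeepName, const char *DropName) {
  if ((Attrs & Keep) && (Attrs & Drop)) {
    std::printf("error: property attributes '%s' and '%s' are mutually "
                "exclusive\n", KeepName, DropName);
    Attrs &= ~static_cast<unsigned>(Drop);
  }
}
} // namespace sketch

// Usage: unsigned A = sketch::PA_assign | sketch::PA_copy;
//        sketch::diagnoseExclusive(A, sketch::PA_assign, sketch::PA_copy,
//                                  "assign", "copy"); // clears PA_copy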
if (!getOwnershipRule(Attributes) && PropertyTy->isObjCRetainableType()) { if (Attributes & ObjCDeclSpec::DQ_PR_readonly) { // do nothing } else if (getLangOpts().ObjCAutoRefCount) { // With arc, @property definitions should default to strong when // not specified. PropertyDecl->setPropertyAttributes(ObjCPropertyDecl::OBJC_PR_strong); } else if (PropertyTy->isObjCObjectPointerType()) { bool isAnyClassTy = (PropertyTy->isObjCClassType() || PropertyTy->isObjCQualifiedClassType()); // In non-gc, non-arc mode, 'Class' is treated as a 'void *' no need to // issue any warning. if (isAnyClassTy && getLangOpts().getGC() == LangOptions::NonGC) ; else if (propertyInPrimaryClass) { // Don't issue warning on property with no life time in class // extension as it is inherited from property in primary class. // Skip this warning in gc-only mode. if (getLangOpts().getGC() != LangOptions::GCOnly) Diag(Loc, diag::warn_objc_property_no_assignment_attribute); // If non-gc code warn that this is likely inappropriate. if (getLangOpts().getGC() == LangOptions::NonGC) Diag(Loc, diag::warn_objc_property_default_assign_on_object); } } // FIXME: Implement warning dependent on NSCopying being // implemented. See also: // // (please trim this list while you are at it). } if (!(Attributes & ObjCDeclSpec::DQ_PR_copy) &&!(Attributes & ObjCDeclSpec::DQ_PR_readonly) && getLangOpts().getGC() == LangOptions::GCOnly && PropertyTy->isBlockPointerType()) Diag(Loc, diag::warn_objc_property_copy_missing_on_block); else if ((Attributes & ObjCDeclSpec::DQ_PR_retain) && !(Attributes & ObjCDeclSpec::DQ_PR_readonly) && !(Attributes & ObjCDeclSpec::DQ_PR_strong) && PropertyTy->isBlockPointerType()) Diag(Loc, diag::warn_objc_property_retain_of_block); if ((Attributes & ObjCDeclSpec::DQ_PR_readonly) && (Attributes & ObjCDeclSpec::DQ_PR_setter)) Diag(Loc, diag::warn_objc_readonly_property_has_setter); } diff --git a/lib/Serialization/ASTReaderDecl.cpp b/lib/Serialization/ASTReaderDecl.cpp index abed2586561a..085341571ced 100644 --- a/lib/Serialization/ASTReaderDecl.cpp +++ b/lib/Serialization/ASTReaderDecl.cpp @@ -1,4196 +1,4202 @@ //===--- ASTReaderDecl.cpp - Decl Deserialization ---------------*- C++ -*-===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements the ASTReader::ReadDeclRecord method, which is the // entrypoint for loading a decl. 
// //===----------------------------------------------------------------------===// #include "ASTCommon.h" #include "ASTReaderInternals.h" #include "clang/AST/ASTContext.h" #include "clang/AST/DeclCXX.h" #include "clang/AST/DeclGroup.h" #include "clang/AST/DeclTemplate.h" #include "clang/AST/DeclVisitor.h" #include "clang/AST/Expr.h" #include "clang/Sema/IdentifierResolver.h" #include "clang/Sema/SemaDiagnostic.h" #include "clang/Serialization/ASTReader.h" #include "llvm/Support/SaveAndRestore.h" using namespace clang; using namespace clang::serialization; //===----------------------------------------------------------------------===// // Declaration deserialization //===----------------------------------------------------------------------===// namespace clang { class ASTDeclReader : public DeclVisitor { ASTReader &Reader; ASTRecordReader &Record; ASTReader::RecordLocation Loc; const DeclID ThisDeclID; const SourceLocation ThisDeclLoc; typedef ASTReader::RecordData RecordData; TypeID TypeIDForTypeDecl; unsigned AnonymousDeclNumber; GlobalDeclID NamedDeclForTagDecl; IdentifierInfo *TypedefNameForLinkage; bool HasPendingBody; ///\brief A flag to carry the information for a decl from the entity is /// used. We use it to delay the marking of the canonical decl as used until /// the entire declaration is deserialized and merged. bool IsDeclMarkedUsed; uint64_t GetCurrentCursorOffset(); uint64_t ReadLocalOffset() { uint64_t LocalOffset = Record.readInt(); assert(LocalOffset < Loc.Offset && "offset point after current record"); return LocalOffset ? Loc.Offset - LocalOffset : 0; } uint64_t ReadGlobalOffset() { uint64_t Local = ReadLocalOffset(); return Local ? Record.getGlobalBitOffset(Local) : 0; } SourceLocation ReadSourceLocation() { return Record.readSourceLocation(); } SourceRange ReadSourceRange() { return Record.readSourceRange(); } TypeSourceInfo *GetTypeSourceInfo() { return Record.getTypeSourceInfo(); } serialization::DeclID ReadDeclID() { return Record.readDeclID(); } std::string ReadString() { return Record.readString(); } void ReadDeclIDList(SmallVectorImpl &IDs) { for (unsigned I = 0, Size = Record.readInt(); I != Size; ++I) IDs.push_back(ReadDeclID()); } Decl *ReadDecl() { return Record.readDecl(); } template T *ReadDeclAs() { return Record.readDeclAs(); } void ReadQualifierInfo(QualifierInfo &Info) { Record.readQualifierInfo(Info); } void ReadDeclarationNameLoc(DeclarationNameLoc &DNLoc, DeclarationName Name) { Record.readDeclarationNameLoc(DNLoc, Name); } serialization::SubmoduleID readSubmoduleID() { if (Record.getIdx() == Record.size()) return 0; return Record.getGlobalSubmoduleID(Record.readInt()); } Module *readModule() { return Record.getSubmodule(readSubmoduleID()); } void ReadCXXRecordDefinition(CXXRecordDecl *D, bool Update); void ReadCXXDefinitionData(struct CXXRecordDecl::DefinitionData &Data, const CXXRecordDecl *D); void MergeDefinitionData(CXXRecordDecl *D, struct CXXRecordDecl::DefinitionData &&NewDD); void ReadObjCDefinitionData(struct ObjCInterfaceDecl::DefinitionData &Data); void MergeDefinitionData(ObjCInterfaceDecl *D, struct ObjCInterfaceDecl::DefinitionData &&NewDD); void ReadObjCDefinitionData(struct ObjCProtocolDecl::DefinitionData &Data); void MergeDefinitionData(ObjCProtocolDecl *D, struct ObjCProtocolDecl::DefinitionData &&NewDD); static NamedDecl *getAnonymousDeclForMerging(ASTReader &Reader, DeclContext *DC, unsigned Index); static void setAnonymousDeclForMerging(ASTReader &Reader, DeclContext *DC, unsigned Index, NamedDecl *D); /// Results from 
loading a RedeclarableDecl. class RedeclarableResult { Decl *MergeWith; GlobalDeclID FirstID; bool IsKeyDecl; public: RedeclarableResult(Decl *MergeWith, GlobalDeclID FirstID, bool IsKeyDecl) : MergeWith(MergeWith), FirstID(FirstID), IsKeyDecl(IsKeyDecl) {} /// \brief Retrieve the first ID. GlobalDeclID getFirstID() const { return FirstID; } /// \brief Is this declaration a key declaration? bool isKeyDecl() const { return IsKeyDecl; } /// \brief Get a known declaration that this should be merged with, if /// any. Decl *getKnownMergeTarget() const { return MergeWith; } }; /// \brief Class used to capture the result of searching for an existing /// declaration of a specific kind and name, along with the ability /// to update the place where this result was found (the declaration /// chain hanging off an identifier or the DeclContext we searched in) /// if requested. class FindExistingResult { ASTReader &Reader; NamedDecl *New; NamedDecl *Existing; bool AddResult; unsigned AnonymousDeclNumber; IdentifierInfo *TypedefNameForLinkage; void operator=(FindExistingResult &&) = delete; public: FindExistingResult(ASTReader &Reader) : Reader(Reader), New(nullptr), Existing(nullptr), AddResult(false), AnonymousDeclNumber(0), TypedefNameForLinkage(nullptr) {} FindExistingResult(ASTReader &Reader, NamedDecl *New, NamedDecl *Existing, unsigned AnonymousDeclNumber, IdentifierInfo *TypedefNameForLinkage) : Reader(Reader), New(New), Existing(Existing), AddResult(true), AnonymousDeclNumber(AnonymousDeclNumber), TypedefNameForLinkage(TypedefNameForLinkage) {} FindExistingResult(FindExistingResult &&Other) : Reader(Other.Reader), New(Other.New), Existing(Other.Existing), AddResult(Other.AddResult), AnonymousDeclNumber(Other.AnonymousDeclNumber), TypedefNameForLinkage(Other.TypedefNameForLinkage) { Other.AddResult = false; } ~FindExistingResult(); /// \brief Suppress the addition of this result into the known set of /// names. void suppress() { AddResult = false; } operator NamedDecl*() const { return Existing; } template operator T*() const { return dyn_cast_or_null(Existing); } }; static DeclContext *getPrimaryContextForMerging(ASTReader &Reader, DeclContext *DC); FindExistingResult findExisting(NamedDecl *D); public: ASTDeclReader(ASTReader &Reader, ASTRecordReader &Record, ASTReader::RecordLocation Loc, DeclID thisDeclID, SourceLocation ThisDeclLoc) : Reader(Reader), Record(Record), Loc(Loc), ThisDeclID(thisDeclID), ThisDeclLoc(ThisDeclLoc), TypeIDForTypeDecl(0), NamedDeclForTagDecl(0), TypedefNameForLinkage(nullptr), HasPendingBody(false), IsDeclMarkedUsed(false) {} template static void AddLazySpecializations(T *D, SmallVectorImpl& IDs) { if (IDs.empty()) return; // FIXME: We should avoid this pattern of getting the ASTContext. 
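// Illustrative aside (standalone sketch, not part of this function): the rest
// of AddLazySpecializations appends the previously recorded specialization IDs
// and keeps the list duplicate-free with the usual sort + unique + erase
// idiom, roughly like the hypothetical helper below (FakeDeclID stands in for
// serialization::DeclID).
#include <algorithm>
#include <cstdint>
#include <vector>

using FakeDeclID = std::uint32_t; // hypothetical stand-in

static void mergeLazyIDs(std::vector<FakeDeclID> &IDs,
                         const std::vector<FakeDeclID> &Old) {
  IDs.insert(IDs.end(), Old.begin(), Old.end()); // keep what was already known
  std::sort(IDs.begin(), IDs.end());             // duplicates become adjacent
  IDs.erase(std::unique(IDs.begin(), IDs.end()), IDs.end()); // and are dropped
}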
ASTContext &C = D->getASTContext(); auto *&LazySpecializations = D->getCommonPtr()->LazySpecializations; if (auto &Old = LazySpecializations) { IDs.insert(IDs.end(), Old + 1, Old + 1 + Old[0]); std::sort(IDs.begin(), IDs.end()); IDs.erase(std::unique(IDs.begin(), IDs.end()), IDs.end()); } auto *Result = new (C) serialization::DeclID[1 + IDs.size()]; *Result = IDs.size(); std::copy(IDs.begin(), IDs.end(), Result + 1); LazySpecializations = Result; } template static Decl *getMostRecentDeclImpl(Redeclarable *D); static Decl *getMostRecentDeclImpl(...); static Decl *getMostRecentDecl(Decl *D); template static void attachPreviousDeclImpl(ASTReader &Reader, Redeclarable *D, Decl *Previous, Decl *Canon); static void attachPreviousDeclImpl(ASTReader &Reader, ...); static void attachPreviousDecl(ASTReader &Reader, Decl *D, Decl *Previous, Decl *Canon); template static void attachLatestDeclImpl(Redeclarable *D, Decl *Latest); static void attachLatestDeclImpl(...); static void attachLatestDecl(Decl *D, Decl *latest); template static void markIncompleteDeclChainImpl(Redeclarable *D); static void markIncompleteDeclChainImpl(...); /// \brief Determine whether this declaration has a pending body. bool hasPendingBody() const { return HasPendingBody; } void ReadFunctionDefinition(FunctionDecl *FD); void Visit(Decl *D); void UpdateDecl(Decl *D, llvm::SmallVectorImpl&); static void setNextObjCCategory(ObjCCategoryDecl *Cat, ObjCCategoryDecl *Next) { Cat->NextClassCategory = Next; } void VisitDecl(Decl *D); void VisitPragmaCommentDecl(PragmaCommentDecl *D); void VisitPragmaDetectMismatchDecl(PragmaDetectMismatchDecl *D); void VisitTranslationUnitDecl(TranslationUnitDecl *TU); void VisitNamedDecl(NamedDecl *ND); void VisitLabelDecl(LabelDecl *LD); void VisitNamespaceDecl(NamespaceDecl *D); void VisitUsingDirectiveDecl(UsingDirectiveDecl *D); void VisitNamespaceAliasDecl(NamespaceAliasDecl *D); void VisitTypeDecl(TypeDecl *TD); RedeclarableResult VisitTypedefNameDecl(TypedefNameDecl *TD); void VisitTypedefDecl(TypedefDecl *TD); void VisitTypeAliasDecl(TypeAliasDecl *TD); void VisitUnresolvedUsingTypenameDecl(UnresolvedUsingTypenameDecl *D); RedeclarableResult VisitTagDecl(TagDecl *TD); void VisitEnumDecl(EnumDecl *ED); RedeclarableResult VisitRecordDeclImpl(RecordDecl *RD); void VisitRecordDecl(RecordDecl *RD) { VisitRecordDeclImpl(RD); } RedeclarableResult VisitCXXRecordDeclImpl(CXXRecordDecl *D); void VisitCXXRecordDecl(CXXRecordDecl *D) { VisitCXXRecordDeclImpl(D); } RedeclarableResult VisitClassTemplateSpecializationDeclImpl( ClassTemplateSpecializationDecl *D); void VisitClassTemplateSpecializationDecl( ClassTemplateSpecializationDecl *D) { VisitClassTemplateSpecializationDeclImpl(D); } void VisitClassTemplatePartialSpecializationDecl( ClassTemplatePartialSpecializationDecl *D); void VisitClassScopeFunctionSpecializationDecl( ClassScopeFunctionSpecializationDecl *D); RedeclarableResult VisitVarTemplateSpecializationDeclImpl(VarTemplateSpecializationDecl *D); void VisitVarTemplateSpecializationDecl(VarTemplateSpecializationDecl *D) { VisitVarTemplateSpecializationDeclImpl(D); } void VisitVarTemplatePartialSpecializationDecl( VarTemplatePartialSpecializationDecl *D); void VisitTemplateTypeParmDecl(TemplateTypeParmDecl *D); void VisitValueDecl(ValueDecl *VD); void VisitEnumConstantDecl(EnumConstantDecl *ECD); void VisitUnresolvedUsingValueDecl(UnresolvedUsingValueDecl *D); void VisitDeclaratorDecl(DeclaratorDecl *DD); void VisitFunctionDecl(FunctionDecl *FD); void 
VisitCXXDeductionGuideDecl(CXXDeductionGuideDecl *GD); void VisitCXXMethodDecl(CXXMethodDecl *D); void VisitCXXConstructorDecl(CXXConstructorDecl *D); void VisitCXXDestructorDecl(CXXDestructorDecl *D); void VisitCXXConversionDecl(CXXConversionDecl *D); void VisitFieldDecl(FieldDecl *FD); void VisitMSPropertyDecl(MSPropertyDecl *FD); void VisitIndirectFieldDecl(IndirectFieldDecl *FD); RedeclarableResult VisitVarDeclImpl(VarDecl *D); void VisitVarDecl(VarDecl *VD) { VisitVarDeclImpl(VD); } void VisitImplicitParamDecl(ImplicitParamDecl *PD); void VisitParmVarDecl(ParmVarDecl *PD); void VisitDecompositionDecl(DecompositionDecl *DD); void VisitBindingDecl(BindingDecl *BD); void VisitNonTypeTemplateParmDecl(NonTypeTemplateParmDecl *D); DeclID VisitTemplateDecl(TemplateDecl *D); RedeclarableResult VisitRedeclarableTemplateDecl(RedeclarableTemplateDecl *D); void VisitClassTemplateDecl(ClassTemplateDecl *D); void VisitBuiltinTemplateDecl(BuiltinTemplateDecl *D); void VisitVarTemplateDecl(VarTemplateDecl *D); void VisitFunctionTemplateDecl(FunctionTemplateDecl *D); void VisitTemplateTemplateParmDecl(TemplateTemplateParmDecl *D); void VisitTypeAliasTemplateDecl(TypeAliasTemplateDecl *D); void VisitUsingDecl(UsingDecl *D); void VisitUsingPackDecl(UsingPackDecl *D); void VisitUsingShadowDecl(UsingShadowDecl *D); void VisitConstructorUsingShadowDecl(ConstructorUsingShadowDecl *D); void VisitLinkageSpecDecl(LinkageSpecDecl *D); void VisitExportDecl(ExportDecl *D); void VisitFileScopeAsmDecl(FileScopeAsmDecl *AD); void VisitImportDecl(ImportDecl *D); void VisitAccessSpecDecl(AccessSpecDecl *D); void VisitFriendDecl(FriendDecl *D); void VisitFriendTemplateDecl(FriendTemplateDecl *D); void VisitStaticAssertDecl(StaticAssertDecl *D); void VisitBlockDecl(BlockDecl *BD); void VisitCapturedDecl(CapturedDecl *CD); void VisitEmptyDecl(EmptyDecl *D); std::pair VisitDeclContext(DeclContext *DC); template RedeclarableResult VisitRedeclarable(Redeclarable *D); template void mergeRedeclarable(Redeclarable *D, RedeclarableResult &Redecl, DeclID TemplatePatternID = 0); template void mergeRedeclarable(Redeclarable *D, T *Existing, RedeclarableResult &Redecl, DeclID TemplatePatternID = 0); template void mergeMergeable(Mergeable *D); void mergeTemplatePattern(RedeclarableTemplateDecl *D, RedeclarableTemplateDecl *Existing, DeclID DsID, bool IsKeyDecl); ObjCTypeParamList *ReadObjCTypeParamList(); // FIXME: Reorder according to DeclNodes.td? void VisitObjCMethodDecl(ObjCMethodDecl *D); void VisitObjCTypeParamDecl(ObjCTypeParamDecl *D); void VisitObjCContainerDecl(ObjCContainerDecl *D); void VisitObjCInterfaceDecl(ObjCInterfaceDecl *D); void VisitObjCIvarDecl(ObjCIvarDecl *D); void VisitObjCProtocolDecl(ObjCProtocolDecl *D); void VisitObjCAtDefsFieldDecl(ObjCAtDefsFieldDecl *D); void VisitObjCCategoryDecl(ObjCCategoryDecl *D); void VisitObjCImplDecl(ObjCImplDecl *D); void VisitObjCCategoryImplDecl(ObjCCategoryImplDecl *D); void VisitObjCImplementationDecl(ObjCImplementationDecl *D); void VisitObjCCompatibleAliasDecl(ObjCCompatibleAliasDecl *D); void VisitObjCPropertyDecl(ObjCPropertyDecl *D); void VisitObjCPropertyImplDecl(ObjCPropertyImplDecl *D); void VisitOMPThreadPrivateDecl(OMPThreadPrivateDecl *D); void VisitOMPDeclareReductionDecl(OMPDeclareReductionDecl *D); void VisitOMPCapturedExprDecl(OMPCapturedExprDecl *D); }; } // end namespace clang namespace { /// Iterator over the redeclarations of a declaration that have already /// been merged into the same redeclaration chain. 
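// Illustrative aside (standalone sketch): the class template that follows
// visits every declaration of a merged redeclaration chain exactly once,
// starting anywhere in the chain. It walks backwards through the
// previous-declaration links, and when it reaches the first declaration it
// jumps to the most recent one and keeps walking until it is about to revisit
// its starting point. Modelling the chain as indices 0 (first) .. N-1 (most
// recent), the same visiting order looks like this hypothetical function
// (assumes 0 < N and Start < N):
#include <cstddef>
#include <vector>

static std::vector<std::size_t> visitMergedChain(std::size_t N,
                                                 std::size_t Start) {
  std::vector<std::size_t> Order;
  std::size_t Cur = Start;
  do {
    Order.push_back(Cur);
    Cur = (Cur == 0) ? N - 1 : Cur - 1; // previous, wrapping first -> newest
  } while (Cur != Start);
  return Order; // each of the N redeclarations appears exactly once
}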
template class MergedRedeclIterator { DeclT *Start, *Canonical, *Current; public: MergedRedeclIterator() : Current(nullptr) {} MergedRedeclIterator(DeclT *Start) : Start(Start), Canonical(nullptr), Current(Start) {} DeclT *operator*() { return Current; } MergedRedeclIterator &operator++() { if (Current->isFirstDecl()) { Canonical = Current; Current = Current->getMostRecentDecl(); } else Current = Current->getPreviousDecl(); // If we started in the merged portion, we'll reach our start position // eventually. Otherwise, we'll never reach it, but the second declaration // we reached was the canonical declaration, so stop when we see that one // again. if (Current == Start || Current == Canonical) Current = nullptr; return *this; } friend bool operator!=(const MergedRedeclIterator &A, const MergedRedeclIterator &B) { return A.Current != B.Current; } }; } // end anonymous namespace template static llvm::iterator_range> merged_redecls(DeclT *D) { return llvm::make_range(MergedRedeclIterator(D), MergedRedeclIterator()); } uint64_t ASTDeclReader::GetCurrentCursorOffset() { return Loc.F->DeclsCursor.GetCurrentBitNo() + Loc.F->GlobalBitOffset; } void ASTDeclReader::ReadFunctionDefinition(FunctionDecl *FD) { if (Record.readInt()) Reader.BodySource[FD] = Loc.F->Kind == ModuleKind::MK_MainFile; if (auto *CD = dyn_cast(FD)) { CD->NumCtorInitializers = Record.readInt(); if (CD->NumCtorInitializers) CD->CtorInitializers = ReadGlobalOffset(); } // Store the offset of the body so we can lazily load it later. Reader.PendingBodies[FD] = GetCurrentCursorOffset(); HasPendingBody = true; } void ASTDeclReader::Visit(Decl *D) { DeclVisitor::Visit(D); // At this point we have deserialized and merged the decl and it is safe to // update its canonical decl to signal that the entire entity is used. D->getCanonicalDecl()->Used |= IsDeclMarkedUsed; IsDeclMarkedUsed = false; if (DeclaratorDecl *DD = dyn_cast(D)) { if (DD->DeclInfo) { DeclaratorDecl::ExtInfo *Info = DD->DeclInfo.get(); Info->TInfo = GetTypeSourceInfo(); } else { DD->DeclInfo = GetTypeSourceInfo(); } } if (TypeDecl *TD = dyn_cast(D)) { // We have a fully initialized TypeDecl. Read its type now. TD->setTypeForDecl(Reader.GetType(TypeIDForTypeDecl).getTypePtrOrNull()); // If this is a tag declaration with a typedef name for linkage, it's safe // to load that typedef now. if (NamedDeclForTagDecl) cast(D)->TypedefNameDeclOrQualifier = cast(Reader.GetDecl(NamedDeclForTagDecl)); } else if (ObjCInterfaceDecl *ID = dyn_cast(D)) { // if we have a fully initialized TypeDecl, we can safely read its type now. ID->TypeForDecl = Reader.GetType(TypeIDForTypeDecl).getTypePtrOrNull(); } else if (FunctionDecl *FD = dyn_cast(D)) { // FunctionDecl's body was written last after all other Stmts/Exprs. // We only read it if FD doesn't already have a body (e.g., from another // module). // FIXME: Can we diagnose ODR violations somehow? if (Record.readInt()) ReadFunctionDefinition(FD); } } void ASTDeclReader::VisitDecl(Decl *D) { if (D->isTemplateParameter() || D->isTemplateParameterPack() || isa(D)) { // We don't want to deserialize the DeclContext of a template // parameter or of a parameter of a function template immediately. These // entities might be used in the formulation of its DeclContext (for // example, a function parameter can be used in decltype() in trailing // return type of the function). Use the translation unit DeclContext as a // placeholder. 
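// Illustrative aside (standalone sketch): the branch below records the real
// context IDs as pending work, parks the declaration in a placeholder context
// (the translation unit), and lets the reader patch things up once the
// referenced contexts have themselves been deserialized. The
// record-now / fix-up-later pattern, with hypothetical types:
#include <utility>
#include <vector>

struct FakeDecl { int ContextID = 0; };          // hypothetical
constexpr int PlaceholderContext = -1;           // e.g. the translation unit

struct PendingContextFixups {
  std::vector<std::pair<FakeDecl *, int>> Work;  // (decl, real context id)

  void defer(FakeDecl &D, int RealContextID) {
    D.ContextID = PlaceholderContext;            // safe placeholder for now
    Work.emplace_back(&D, RealContextID);
  }
  void flush() {                                 // run once deps are loaded
    for (auto &Item : Work)
      Item.first->ContextID = Item.second;
    Work.clear();
  }
};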
    GlobalDeclID SemaDCIDForTemplateParmDecl = ReadDeclID();
    GlobalDeclID LexicalDCIDForTemplateParmDecl = ReadDeclID();
    if (!LexicalDCIDForTemplateParmDecl)
      LexicalDCIDForTemplateParmDecl = SemaDCIDForTemplateParmDecl;
    Reader.addPendingDeclContextInfo(D,
                                     SemaDCIDForTemplateParmDecl,
                                     LexicalDCIDForTemplateParmDecl);
    D->setDeclContext(Reader.getContext().getTranslationUnitDecl());
  } else {
    DeclContext *SemaDC = ReadDeclAs<DeclContext>();
    DeclContext *LexicalDC = ReadDeclAs<DeclContext>();
    if (!LexicalDC)
      LexicalDC = SemaDC;
    DeclContext *MergedSemaDC = Reader.MergedDeclContexts.lookup(SemaDC);
    // Avoid calling setLexicalDeclContext() directly because it uses
    // Decl::getASTContext() internally which is unsafe during deserialization.
    D->setDeclContextsImpl(MergedSemaDC ? MergedSemaDC : SemaDC, LexicalDC,
                           Reader.getContext());
  }
  D->setLocation(ThisDeclLoc);
  D->setInvalidDecl(Record.readInt());
  if (Record.readInt()) { // hasAttrs
    AttrVec Attrs;
    Record.readAttributes(Attrs);
    // Avoid calling setAttrs() directly because it uses Decl::getASTContext()
    // internally which is unsafe during deserialization.
    D->setAttrsImpl(Attrs, Reader.getContext());
  }
  D->setImplicit(Record.readInt());
  D->Used = Record.readInt();
  IsDeclMarkedUsed |= D->Used;
  D->setReferenced(Record.readInt());
  D->setTopLevelDeclInObjCContainer(Record.readInt());
  D->setAccess((AccessSpecifier)Record.readInt());
  D->FromASTFile = true;
  bool ModulePrivate = Record.readInt();

  // Determine whether this declaration is part of a (sub)module. If so, it
  // may not yet be visible.
  if (unsigned SubmoduleID = readSubmoduleID()) {
    // Store the owning submodule ID in the declaration.
    D->setModuleOwnershipKind(
        ModulePrivate ? Decl::ModuleOwnershipKind::ModulePrivate
                      : Decl::ModuleOwnershipKind::VisibleWhenImported);
    D->setOwningModuleID(SubmoduleID);

    if (ModulePrivate) {
      // Module-private declarations are never visible, so there is no work to
      // do.
    } else if (Reader.getContext().getLangOpts().ModulesLocalVisibility) {
      // If local visibility is being tracked, this declaration will become
      // hidden and visible as the owning module does.
    } else if (Module *Owner = Reader.getSubmodule(SubmoduleID)) {
      // Mark the declaration as visible when its owning module becomes visible.
if (Owner->NameVisibility == Module::AllVisible) D->setVisibleDespiteOwningModule(); else Reader.HiddenNamesMap[Owner].push_back(D); } } else if (ModulePrivate) { D->setModuleOwnershipKind(Decl::ModuleOwnershipKind::ModulePrivate); } } void ASTDeclReader::VisitPragmaCommentDecl(PragmaCommentDecl *D) { VisitDecl(D); D->setLocation(ReadSourceLocation()); D->CommentKind = (PragmaMSCommentKind)Record.readInt(); std::string Arg = ReadString(); memcpy(D->getTrailingObjects(), Arg.data(), Arg.size()); D->getTrailingObjects()[Arg.size()] = '\0'; } void ASTDeclReader::VisitPragmaDetectMismatchDecl(PragmaDetectMismatchDecl *D) { VisitDecl(D); D->setLocation(ReadSourceLocation()); std::string Name = ReadString(); memcpy(D->getTrailingObjects(), Name.data(), Name.size()); D->getTrailingObjects()[Name.size()] = '\0'; D->ValueStart = Name.size() + 1; std::string Value = ReadString(); memcpy(D->getTrailingObjects() + D->ValueStart, Value.data(), Value.size()); D->getTrailingObjects()[D->ValueStart + Value.size()] = '\0'; } void ASTDeclReader::VisitTranslationUnitDecl(TranslationUnitDecl *TU) { llvm_unreachable("Translation units are not serialized"); } void ASTDeclReader::VisitNamedDecl(NamedDecl *ND) { VisitDecl(ND); ND->setDeclName(Record.readDeclarationName()); AnonymousDeclNumber = Record.readInt(); } void ASTDeclReader::VisitTypeDecl(TypeDecl *TD) { VisitNamedDecl(TD); TD->setLocStart(ReadSourceLocation()); // Delay type reading until after we have fully initialized the decl. TypeIDForTypeDecl = Record.getGlobalTypeID(Record.readInt()); } ASTDeclReader::RedeclarableResult ASTDeclReader::VisitTypedefNameDecl(TypedefNameDecl *TD) { RedeclarableResult Redecl = VisitRedeclarable(TD); VisitTypeDecl(TD); TypeSourceInfo *TInfo = GetTypeSourceInfo(); if (Record.readInt()) { // isModed QualType modedT = Record.readType(); TD->setModedTypeSourceInfo(TInfo, modedT); } else TD->setTypeSourceInfo(TInfo); // Read and discard the declaration for which this is a typedef name for // linkage, if it exists. We cannot rely on our type to pull in this decl, // because it might have been merged with a type from another module and // thus might not refer to our version of the declaration. ReadDecl(); return Redecl; } void ASTDeclReader::VisitTypedefDecl(TypedefDecl *TD) { RedeclarableResult Redecl = VisitTypedefNameDecl(TD); mergeRedeclarable(TD, Redecl); } void ASTDeclReader::VisitTypeAliasDecl(TypeAliasDecl *TD) { RedeclarableResult Redecl = VisitTypedefNameDecl(TD); if (auto *Template = ReadDeclAs()) // Merged when we merge the template. 
TD->setDescribedAliasTemplate(Template); else mergeRedeclarable(TD, Redecl); } ASTDeclReader::RedeclarableResult ASTDeclReader::VisitTagDecl(TagDecl *TD) { RedeclarableResult Redecl = VisitRedeclarable(TD); VisitTypeDecl(TD); TD->IdentifierNamespace = Record.readInt(); TD->setTagKind((TagDecl::TagKind)Record.readInt()); if (!isa(TD)) TD->setCompleteDefinition(Record.readInt()); TD->setEmbeddedInDeclarator(Record.readInt()); TD->setFreeStanding(Record.readInt()); TD->setCompleteDefinitionRequired(Record.readInt()); TD->setBraceRange(ReadSourceRange()); switch (Record.readInt()) { case 0: break; case 1: { // ExtInfo TagDecl::ExtInfo *Info = new (Reader.getContext()) TagDecl::ExtInfo(); ReadQualifierInfo(*Info); TD->TypedefNameDeclOrQualifier = Info; break; } case 2: // TypedefNameForAnonDecl NamedDeclForTagDecl = ReadDeclID(); TypedefNameForLinkage = Record.getIdentifierInfo(); break; default: llvm_unreachable("unexpected tag info kind"); } if (!isa(TD)) mergeRedeclarable(TD, Redecl); return Redecl; } void ASTDeclReader::VisitEnumDecl(EnumDecl *ED) { VisitTagDecl(ED); if (TypeSourceInfo *TI = GetTypeSourceInfo()) ED->setIntegerTypeSourceInfo(TI); else ED->setIntegerType(Record.readType()); ED->setPromotionType(Record.readType()); ED->setNumPositiveBits(Record.readInt()); ED->setNumNegativeBits(Record.readInt()); ED->IsScoped = Record.readInt(); ED->IsScopedUsingClassTag = Record.readInt(); ED->IsFixed = Record.readInt(); // If this is a definition subject to the ODR, and we already have a // definition, merge this one into it. if (ED->IsCompleteDefinition && Reader.getContext().getLangOpts().Modules && Reader.getContext().getLangOpts().CPlusPlus) { EnumDecl *&OldDef = Reader.EnumDefinitions[ED->getCanonicalDecl()]; if (!OldDef) { // This is the first time we've seen an imported definition. Look for a // local definition before deciding that we are the first definition. 
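// Illustrative aside (standalone sketch): the loop just below scans the
// already-merged redeclarations for a local complete definition before the
// imported one is registered. Overall the policy is "the first complete
// definition wins; later imported definitions are demoted and remember which
// definition they were merged into". A hypothetical reduction of that policy:
#include <unordered_map>

struct FakeEnumDef {                 // hypothetical stand-in for EnumDecl
  int CanonicalID;
  bool IsCompleteDefinition;
  const FakeEnumDef *MergedInto = nullptr;
};

static std::unordered_map<int, FakeEnumDef *> RegisteredDefs;

static void mergeDefinition(FakeEnumDef &ED) {
  FakeEnumDef *&Old = RegisteredDefs[ED.CanonicalID];
  if (!Old) {
    Old = &ED;                       // first complete definition wins
    return;
  }
  ED.IsCompleteDefinition = false;   // demote the newcomer
  ED.MergedInto = Old;               // remember which definition to use
}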
for (auto *D : merged_redecls(ED->getCanonicalDecl())) { if (!D->isFromASTFile() && D->isCompleteDefinition()) { OldDef = D; break; } } } if (OldDef) { Reader.MergedDeclContexts.insert(std::make_pair(ED, OldDef)); ED->IsCompleteDefinition = false; Reader.mergeDefinitionVisibility(OldDef, ED); } else { OldDef = ED; } } if (EnumDecl *InstED = ReadDeclAs()) { TemplateSpecializationKind TSK = (TemplateSpecializationKind)Record.readInt(); SourceLocation POI = ReadSourceLocation(); ED->setInstantiationOfMemberEnum(Reader.getContext(), InstED, TSK); ED->getMemberSpecializationInfo()->setPointOfInstantiation(POI); } } ASTDeclReader::RedeclarableResult ASTDeclReader::VisitRecordDeclImpl(RecordDecl *RD) { RedeclarableResult Redecl = VisitTagDecl(RD); RD->setHasFlexibleArrayMember(Record.readInt()); RD->setAnonymousStructOrUnion(Record.readInt()); RD->setHasObjectMember(Record.readInt()); RD->setHasVolatileMember(Record.readInt()); return Redecl; } void ASTDeclReader::VisitValueDecl(ValueDecl *VD) { VisitNamedDecl(VD); VD->setType(Record.readType()); } void ASTDeclReader::VisitEnumConstantDecl(EnumConstantDecl *ECD) { VisitValueDecl(ECD); if (Record.readInt()) ECD->setInitExpr(Record.readExpr()); ECD->setInitVal(Record.readAPSInt()); mergeMergeable(ECD); } void ASTDeclReader::VisitDeclaratorDecl(DeclaratorDecl *DD) { VisitValueDecl(DD); DD->setInnerLocStart(ReadSourceLocation()); if (Record.readInt()) { // hasExtInfo DeclaratorDecl::ExtInfo *Info = new (Reader.getContext()) DeclaratorDecl::ExtInfo(); ReadQualifierInfo(*Info); DD->DeclInfo = Info; } } void ASTDeclReader::VisitFunctionDecl(FunctionDecl *FD) { RedeclarableResult Redecl = VisitRedeclarable(FD); VisitDeclaratorDecl(FD); ReadDeclarationNameLoc(FD->DNLoc, FD->getDeclName()); FD->IdentifierNamespace = Record.readInt(); // FunctionDecl's body is handled last at ASTDeclReader::Visit, // after everything else is read. FD->SClass = (StorageClass)Record.readInt(); FD->IsInline = Record.readInt(); FD->IsInlineSpecified = Record.readInt(); FD->IsExplicitSpecified = Record.readInt(); FD->IsVirtualAsWritten = Record.readInt(); FD->IsPure = Record.readInt(); FD->HasInheritedPrototype = Record.readInt(); FD->HasWrittenPrototype = Record.readInt(); FD->IsDeleted = Record.readInt(); FD->IsTrivial = Record.readInt(); FD->IsDefaulted = Record.readInt(); FD->IsExplicitlyDefaulted = Record.readInt(); FD->HasImplicitReturnZero = Record.readInt(); FD->IsConstexpr = Record.readInt(); FD->UsesSEHTry = Record.readInt(); FD->HasSkippedBody = Record.readInt(); FD->IsLateTemplateParsed = Record.readInt(); FD->setCachedLinkage(Linkage(Record.readInt())); FD->EndRangeLoc = ReadSourceLocation(); switch ((FunctionDecl::TemplatedKind)Record.readInt()) { case FunctionDecl::TK_NonTemplate: mergeRedeclarable(FD, Redecl); break; case FunctionDecl::TK_FunctionTemplate: // Merged when we merge the template. FD->setDescribedFunctionTemplate(ReadDeclAs()); break; case FunctionDecl::TK_MemberSpecialization: { FunctionDecl *InstFD = ReadDeclAs(); TemplateSpecializationKind TSK = (TemplateSpecializationKind)Record.readInt(); SourceLocation POI = ReadSourceLocation(); FD->setInstantiationOfMemberFunction(Reader.getContext(), InstFD, TSK); FD->getMemberSpecializationInfo()->setPointOfInstantiation(POI); mergeRedeclarable(FD, Redecl); break; } case FunctionDecl::TK_FunctionTemplateSpecialization: { FunctionTemplateDecl *Template = ReadDeclAs(); TemplateSpecializationKind TSK = (TemplateSpecializationKind)Record.readInt(); // Template arguments. 
SmallVector TemplArgs; Record.readTemplateArgumentList(TemplArgs, /*Canonicalize*/ true); // Template args as written. SmallVector TemplArgLocs; SourceLocation LAngleLoc, RAngleLoc; bool HasTemplateArgumentsAsWritten = Record.readInt(); if (HasTemplateArgumentsAsWritten) { unsigned NumTemplateArgLocs = Record.readInt(); TemplArgLocs.reserve(NumTemplateArgLocs); for (unsigned i=0; i != NumTemplateArgLocs; ++i) TemplArgLocs.push_back(Record.readTemplateArgumentLoc()); LAngleLoc = ReadSourceLocation(); RAngleLoc = ReadSourceLocation(); } SourceLocation POI = ReadSourceLocation(); ASTContext &C = Reader.getContext(); TemplateArgumentList *TemplArgList = TemplateArgumentList::CreateCopy(C, TemplArgs); TemplateArgumentListInfo TemplArgsInfo(LAngleLoc, RAngleLoc); for (unsigned i=0, e = TemplArgLocs.size(); i != e; ++i) TemplArgsInfo.addArgument(TemplArgLocs[i]); FunctionTemplateSpecializationInfo *FTInfo = FunctionTemplateSpecializationInfo::Create(C, FD, Template, TSK, TemplArgList, HasTemplateArgumentsAsWritten ? &TemplArgsInfo : nullptr, POI); FD->TemplateOrSpecialization = FTInfo; if (FD->isCanonicalDecl()) { // if canonical add to template's set. // The template that contains the specializations set. It's not safe to // use getCanonicalDecl on Template since it may still be initializing. FunctionTemplateDecl *CanonTemplate = ReadDeclAs(); // Get the InsertPos by FindNodeOrInsertPos() instead of calling // InsertNode(FTInfo) directly to avoid the getASTContext() call in // FunctionTemplateSpecializationInfo's Profile(). // We avoid getASTContext because a decl in the parent hierarchy may // be initializing. llvm::FoldingSetNodeID ID; FunctionTemplateSpecializationInfo::Profile(ID, TemplArgs, C); void *InsertPos = nullptr; FunctionTemplateDecl::Common *CommonPtr = CanonTemplate->getCommonPtr(); FunctionTemplateSpecializationInfo *ExistingInfo = CommonPtr->Specializations.FindNodeOrInsertPos(ID, InsertPos); if (InsertPos) CommonPtr->Specializations.InsertNode(FTInfo, InsertPos); else { assert(Reader.getContext().getLangOpts().Modules && "already deserialized this template specialization"); mergeRedeclarable(FD, ExistingInfo->Function, Redecl); } } break; } case FunctionDecl::TK_DependentFunctionTemplateSpecialization: { // Templates. UnresolvedSet<8> TemplDecls; unsigned NumTemplates = Record.readInt(); while (NumTemplates--) TemplDecls.addDecl(ReadDeclAs()); // Templates args. TemplateArgumentListInfo TemplArgs; unsigned NumArgs = Record.readInt(); while (NumArgs--) TemplArgs.addArgument(Record.readTemplateArgumentLoc()); TemplArgs.setLAngleLoc(ReadSourceLocation()); TemplArgs.setRAngleLoc(ReadSourceLocation()); FD->setDependentTemplateSpecialization(Reader.getContext(), TemplDecls, TemplArgs); // These are not merged; we don't need to merge redeclarations of dependent // template friends. break; } } // Read in the parameters. unsigned NumParams = Record.readInt(); SmallVector Params; Params.reserve(NumParams); for (unsigned I = 0; I != NumParams; ++I) Params.push_back(ReadDeclAs()); FD->setParams(Reader.getContext(), Params); } void ASTDeclReader::VisitObjCMethodDecl(ObjCMethodDecl *MD) { VisitNamedDecl(MD); if (Record.readInt()) { // Load the body on-demand. Most clients won't care, because method // definitions rarely show up in headers. 
Reader.PendingBodies[MD] = GetCurrentCursorOffset(); HasPendingBody = true; MD->setSelfDecl(ReadDeclAs()); MD->setCmdDecl(ReadDeclAs()); } MD->setInstanceMethod(Record.readInt()); MD->setVariadic(Record.readInt()); MD->setPropertyAccessor(Record.readInt()); MD->setDefined(Record.readInt()); MD->IsOverriding = Record.readInt(); MD->HasSkippedBody = Record.readInt(); MD->IsRedeclaration = Record.readInt(); MD->HasRedeclaration = Record.readInt(); if (MD->HasRedeclaration) Reader.getContext().setObjCMethodRedeclaration(MD, ReadDeclAs()); MD->setDeclImplementation((ObjCMethodDecl::ImplementationControl)Record.readInt()); MD->setObjCDeclQualifier((Decl::ObjCDeclQualifier)Record.readInt()); MD->SetRelatedResultType(Record.readInt()); MD->setReturnType(Record.readType()); MD->setReturnTypeSourceInfo(GetTypeSourceInfo()); MD->DeclEndLoc = ReadSourceLocation(); unsigned NumParams = Record.readInt(); SmallVector Params; Params.reserve(NumParams); for (unsigned I = 0; I != NumParams; ++I) Params.push_back(ReadDeclAs()); MD->SelLocsKind = Record.readInt(); unsigned NumStoredSelLocs = Record.readInt(); SmallVector SelLocs; SelLocs.reserve(NumStoredSelLocs); for (unsigned i = 0; i != NumStoredSelLocs; ++i) SelLocs.push_back(ReadSourceLocation()); MD->setParamsAndSelLocs(Reader.getContext(), Params, SelLocs); } void ASTDeclReader::VisitObjCTypeParamDecl(ObjCTypeParamDecl *D) { VisitTypedefNameDecl(D); D->Variance = Record.readInt(); D->Index = Record.readInt(); D->VarianceLoc = ReadSourceLocation(); D->ColonLoc = ReadSourceLocation(); } void ASTDeclReader::VisitObjCContainerDecl(ObjCContainerDecl *CD) { VisitNamedDecl(CD); CD->setAtStartLoc(ReadSourceLocation()); CD->setAtEndRange(ReadSourceRange()); } ObjCTypeParamList *ASTDeclReader::ReadObjCTypeParamList() { unsigned numParams = Record.readInt(); if (numParams == 0) return nullptr; SmallVector typeParams; typeParams.reserve(numParams); for (unsigned i = 0; i != numParams; ++i) { auto typeParam = ReadDeclAs(); if (!typeParam) return nullptr; typeParams.push_back(typeParam); } SourceLocation lAngleLoc = ReadSourceLocation(); SourceLocation rAngleLoc = ReadSourceLocation(); return ObjCTypeParamList::create(Reader.getContext(), lAngleLoc, typeParams, rAngleLoc); } void ASTDeclReader::ReadObjCDefinitionData( struct ObjCInterfaceDecl::DefinitionData &Data) { // Read the superclass. Data.SuperClassTInfo = GetTypeSourceInfo(); Data.EndLoc = ReadSourceLocation(); Data.HasDesignatedInitializers = Record.readInt(); // Read the directly referenced protocols and their SourceLocations. unsigned NumProtocols = Record.readInt(); SmallVector Protocols; Protocols.reserve(NumProtocols); for (unsigned I = 0; I != NumProtocols; ++I) Protocols.push_back(ReadDeclAs()); SmallVector ProtoLocs; ProtoLocs.reserve(NumProtocols); for (unsigned I = 0; I != NumProtocols; ++I) ProtoLocs.push_back(ReadSourceLocation()); Data.ReferencedProtocols.set(Protocols.data(), NumProtocols, ProtoLocs.data(), Reader.getContext()); // Read the transitive closure of protocols referenced by this class. NumProtocols = Record.readInt(); Protocols.clear(); Protocols.reserve(NumProtocols); for (unsigned I = 0; I != NumProtocols; ++I) Protocols.push_back(ReadDeclAs()); Data.AllReferencedProtocols.set(Protocols.data(), NumProtocols, Reader.getContext()); } void ASTDeclReader::MergeDefinitionData(ObjCInterfaceDecl *D, struct ObjCInterfaceDecl::DefinitionData &&NewDD) { // FIXME: odr checking? 
} void ASTDeclReader::VisitObjCInterfaceDecl(ObjCInterfaceDecl *ID) { RedeclarableResult Redecl = VisitRedeclarable(ID); VisitObjCContainerDecl(ID); TypeIDForTypeDecl = Record.getGlobalTypeID(Record.readInt()); mergeRedeclarable(ID, Redecl); ID->TypeParamList = ReadObjCTypeParamList(); if (Record.readInt()) { // Read the definition. ID->allocateDefinitionData(); ReadObjCDefinitionData(ID->data()); ObjCInterfaceDecl *Canon = ID->getCanonicalDecl(); if (Canon->Data.getPointer()) { // If we already have a definition, keep the definition invariant and // merge the data. MergeDefinitionData(Canon, std::move(ID->data())); ID->Data = Canon->Data; } else { // Set the definition data of the canonical declaration, so other // redeclarations will see it. ID->getCanonicalDecl()->Data = ID->Data; // We will rebuild this list lazily. ID->setIvarList(nullptr); } // Note that we have deserialized a definition. Reader.PendingDefinitions.insert(ID); // Note that we've loaded this Objective-C class. Reader.ObjCClassesLoaded.push_back(ID); } else { ID->Data = ID->getCanonicalDecl()->Data; } } void ASTDeclReader::VisitObjCIvarDecl(ObjCIvarDecl *IVD) { VisitFieldDecl(IVD); IVD->setAccessControl((ObjCIvarDecl::AccessControl)Record.readInt()); // This field will be built lazily. IVD->setNextIvar(nullptr); bool synth = Record.readInt(); IVD->setSynthesize(synth); } void ASTDeclReader::ReadObjCDefinitionData( struct ObjCProtocolDecl::DefinitionData &Data) { unsigned NumProtoRefs = Record.readInt(); SmallVector ProtoRefs; ProtoRefs.reserve(NumProtoRefs); for (unsigned I = 0; I != NumProtoRefs; ++I) ProtoRefs.push_back(ReadDeclAs()); SmallVector ProtoLocs; ProtoLocs.reserve(NumProtoRefs); for (unsigned I = 0; I != NumProtoRefs; ++I) ProtoLocs.push_back(ReadSourceLocation()); Data.ReferencedProtocols.set(ProtoRefs.data(), NumProtoRefs, ProtoLocs.data(), Reader.getContext()); } void ASTDeclReader::MergeDefinitionData(ObjCProtocolDecl *D, struct ObjCProtocolDecl::DefinitionData &&NewDD) { // FIXME: odr checking? } void ASTDeclReader::VisitObjCProtocolDecl(ObjCProtocolDecl *PD) { RedeclarableResult Redecl = VisitRedeclarable(PD); VisitObjCContainerDecl(PD); mergeRedeclarable(PD, Redecl); if (Record.readInt()) { // Read the definition. PD->allocateDefinitionData(); ReadObjCDefinitionData(PD->data()); ObjCProtocolDecl *Canon = PD->getCanonicalDecl(); if (Canon->Data.getPointer()) { // If we already have a definition, keep the definition invariant and // merge the data. MergeDefinitionData(Canon, std::move(PD->data())); PD->Data = Canon->Data; } else { // Set the definition data of the canonical declaration, so other // redeclarations will see it. PD->getCanonicalDecl()->Data = PD->Data; } // Note that we have deserialized a definition. Reader.PendingDefinitions.insert(PD); } else { PD->Data = PD->getCanonicalDecl()->Data; } } void ASTDeclReader::VisitObjCAtDefsFieldDecl(ObjCAtDefsFieldDecl *FD) { VisitFieldDecl(FD); } void ASTDeclReader::VisitObjCCategoryDecl(ObjCCategoryDecl *CD) { VisitObjCContainerDecl(CD); CD->setCategoryNameLoc(ReadSourceLocation()); CD->setIvarLBraceLoc(ReadSourceLocation()); CD->setIvarRBraceLoc(ReadSourceLocation()); // Note that this category has been deserialized. We do this before // deserializing the interface declaration, so that it will consider this /// category. 
Reader.CategoriesDeserialized.insert(CD); CD->ClassInterface = ReadDeclAs(); CD->TypeParamList = ReadObjCTypeParamList(); unsigned NumProtoRefs = Record.readInt(); SmallVector ProtoRefs; ProtoRefs.reserve(NumProtoRefs); for (unsigned I = 0; I != NumProtoRefs; ++I) ProtoRefs.push_back(ReadDeclAs()); SmallVector ProtoLocs; ProtoLocs.reserve(NumProtoRefs); for (unsigned I = 0; I != NumProtoRefs; ++I) ProtoLocs.push_back(ReadSourceLocation()); CD->setProtocolList(ProtoRefs.data(), NumProtoRefs, ProtoLocs.data(), Reader.getContext()); } void ASTDeclReader::VisitObjCCompatibleAliasDecl(ObjCCompatibleAliasDecl *CAD) { VisitNamedDecl(CAD); CAD->setClassInterface(ReadDeclAs()); } void ASTDeclReader::VisitObjCPropertyDecl(ObjCPropertyDecl *D) { VisitNamedDecl(D); D->setAtLoc(ReadSourceLocation()); D->setLParenLoc(ReadSourceLocation()); QualType T = Record.readType(); TypeSourceInfo *TSI = GetTypeSourceInfo(); D->setType(T, TSI); D->setPropertyAttributes( (ObjCPropertyDecl::PropertyAttributeKind)Record.readInt()); D->setPropertyAttributesAsWritten( (ObjCPropertyDecl::PropertyAttributeKind)Record.readInt()); D->setPropertyImplementation( (ObjCPropertyDecl::PropertyControl)Record.readInt()); DeclarationName GetterName = Record.readDeclarationName(); SourceLocation GetterLoc = ReadSourceLocation(); D->setGetterName(GetterName.getObjCSelector(), GetterLoc); DeclarationName SetterName = Record.readDeclarationName(); SourceLocation SetterLoc = ReadSourceLocation(); D->setSetterName(SetterName.getObjCSelector(), SetterLoc); D->setGetterMethodDecl(ReadDeclAs()); D->setSetterMethodDecl(ReadDeclAs()); D->setPropertyIvarDecl(ReadDeclAs()); } void ASTDeclReader::VisitObjCImplDecl(ObjCImplDecl *D) { VisitObjCContainerDecl(D); D->setClassInterface(ReadDeclAs()); } void ASTDeclReader::VisitObjCCategoryImplDecl(ObjCCategoryImplDecl *D) { VisitObjCImplDecl(D); D->CategoryNameLoc = ReadSourceLocation(); } void ASTDeclReader::VisitObjCImplementationDecl(ObjCImplementationDecl *D) { VisitObjCImplDecl(D); D->setSuperClass(ReadDeclAs()); D->SuperLoc = ReadSourceLocation(); D->setIvarLBraceLoc(ReadSourceLocation()); D->setIvarRBraceLoc(ReadSourceLocation()); D->setHasNonZeroConstructors(Record.readInt()); D->setHasDestructors(Record.readInt()); D->NumIvarInitializers = Record.readInt(); if (D->NumIvarInitializers) D->IvarInitializers = ReadGlobalOffset(); } void ASTDeclReader::VisitObjCPropertyImplDecl(ObjCPropertyImplDecl *D) { VisitDecl(D); D->setAtLoc(ReadSourceLocation()); D->setPropertyDecl(ReadDeclAs()); D->PropertyIvarDecl = ReadDeclAs(); D->IvarLoc = ReadSourceLocation(); D->setGetterCXXConstructor(Record.readExpr()); D->setSetterCXXAssignment(Record.readExpr()); } void ASTDeclReader::VisitFieldDecl(FieldDecl *FD) { VisitDeclaratorDecl(FD); FD->Mutable = Record.readInt(); if (int BitWidthOrInitializer = Record.readInt()) { FD->InitStorage.setInt( static_cast(BitWidthOrInitializer - 1)); if (FD->InitStorage.getInt() == FieldDecl::ISK_CapturedVLAType) { // Read captured variable length array. 
FD->InitStorage.setPointer(Record.readType().getAsOpaquePtr()); } else { FD->InitStorage.setPointer(Record.readExpr()); } } if (!FD->getDeclName()) { if (FieldDecl *Tmpl = ReadDeclAs()) Reader.getContext().setInstantiatedFromUnnamedFieldDecl(FD, Tmpl); } mergeMergeable(FD); } void ASTDeclReader::VisitMSPropertyDecl(MSPropertyDecl *PD) { VisitDeclaratorDecl(PD); PD->GetterId = Record.getIdentifierInfo(); PD->SetterId = Record.getIdentifierInfo(); } void ASTDeclReader::VisitIndirectFieldDecl(IndirectFieldDecl *FD) { VisitValueDecl(FD); FD->ChainingSize = Record.readInt(); assert(FD->ChainingSize >= 2 && "Anonymous chaining must be >= 2"); FD->Chaining = new (Reader.getContext())NamedDecl*[FD->ChainingSize]; for (unsigned I = 0; I != FD->ChainingSize; ++I) FD->Chaining[I] = ReadDeclAs(); mergeMergeable(FD); } ASTDeclReader::RedeclarableResult ASTDeclReader::VisitVarDeclImpl(VarDecl *VD) { RedeclarableResult Redecl = VisitRedeclarable(VD); VisitDeclaratorDecl(VD); VD->VarDeclBits.SClass = (StorageClass)Record.readInt(); VD->VarDeclBits.TSCSpec = Record.readInt(); VD->VarDeclBits.InitStyle = Record.readInt(); if (!isa(VD)) { VD->NonParmVarDeclBits.IsThisDeclarationADemotedDefinition = Record.readInt(); VD->NonParmVarDeclBits.ExceptionVar = Record.readInt(); VD->NonParmVarDeclBits.NRVOVariable = Record.readInt(); VD->NonParmVarDeclBits.CXXForRangeDecl = Record.readInt(); VD->NonParmVarDeclBits.ARCPseudoStrong = Record.readInt(); VD->NonParmVarDeclBits.IsInline = Record.readInt(); VD->NonParmVarDeclBits.IsInlineSpecified = Record.readInt(); VD->NonParmVarDeclBits.IsConstexpr = Record.readInt(); VD->NonParmVarDeclBits.IsInitCapture = Record.readInt(); VD->NonParmVarDeclBits.PreviousDeclInSameBlockScope = Record.readInt(); VD->NonParmVarDeclBits.ImplicitParamKind = Record.readInt(); } Linkage VarLinkage = Linkage(Record.readInt()); VD->setCachedLinkage(VarLinkage); // Reconstruct the one piece of the IdentifierNamespace that we need. if (VD->getStorageClass() == SC_Extern && VarLinkage != NoLinkage && VD->getLexicalDeclContext()->isFunctionOrMethod()) VD->setLocalExternDecl(); if (uint64_t Val = Record.readInt()) { VD->setInit(Record.readExpr()); if (Val > 1) { // IsInitKnownICE = 1, IsInitNotICE = 2, IsInitICE = 3 EvaluatedStmt *Eval = VD->ensureEvaluatedStmt(); Eval->CheckedICE = true; Eval->IsICE = Val == 3; } } enum VarKind { VarNotTemplate = 0, VarTemplate, StaticDataMemberSpecialization }; switch ((VarKind)Record.readInt()) { case VarNotTemplate: // Only true variables (not parameters or implicit parameters) can be // merged; the other kinds are not really redeclarable at all. if (!isa(VD) && !isa(VD) && !isa(VD)) mergeRedeclarable(VD, Redecl); break; case VarTemplate: // Merged when we merge the template. VD->setDescribedVarTemplate(ReadDeclAs()); break; case StaticDataMemberSpecialization: { // HasMemberSpecializationInfo. 
VarDecl *Tmpl = ReadDeclAs(); TemplateSpecializationKind TSK = (TemplateSpecializationKind)Record.readInt(); SourceLocation POI = ReadSourceLocation(); Reader.getContext().setInstantiatedFromStaticDataMember(VD, Tmpl, TSK,POI); mergeRedeclarable(VD, Redecl); break; } } return Redecl; } void ASTDeclReader::VisitImplicitParamDecl(ImplicitParamDecl *PD) { VisitVarDecl(PD); } void ASTDeclReader::VisitParmVarDecl(ParmVarDecl *PD) { VisitVarDecl(PD); unsigned isObjCMethodParam = Record.readInt(); unsigned scopeDepth = Record.readInt(); unsigned scopeIndex = Record.readInt(); unsigned declQualifier = Record.readInt(); if (isObjCMethodParam) { assert(scopeDepth == 0); PD->setObjCMethodScopeInfo(scopeIndex); PD->ParmVarDeclBits.ScopeDepthOrObjCQuals = declQualifier; } else { PD->setScopeInfo(scopeDepth, scopeIndex); } PD->ParmVarDeclBits.IsKNRPromoted = Record.readInt(); PD->ParmVarDeclBits.HasInheritedDefaultArg = Record.readInt(); if (Record.readInt()) // hasUninstantiatedDefaultArg. PD->setUninstantiatedDefaultArg(Record.readExpr()); // FIXME: If this is a redeclaration of a function from another module, handle // inheritance of default arguments. } void ASTDeclReader::VisitDecompositionDecl(DecompositionDecl *DD) { VisitVarDecl(DD); BindingDecl **BDs = DD->getTrailingObjects(); for (unsigned I = 0; I != DD->NumBindings; ++I) BDs[I] = ReadDeclAs(); } void ASTDeclReader::VisitBindingDecl(BindingDecl *BD) { VisitValueDecl(BD); BD->Binding = Record.readExpr(); } void ASTDeclReader::VisitFileScopeAsmDecl(FileScopeAsmDecl *AD) { VisitDecl(AD); AD->setAsmString(cast(Record.readExpr())); AD->setRParenLoc(ReadSourceLocation()); } void ASTDeclReader::VisitBlockDecl(BlockDecl *BD) { VisitDecl(BD); BD->setBody(cast_or_null(Record.readStmt())); BD->setSignatureAsWritten(GetTypeSourceInfo()); unsigned NumParams = Record.readInt(); SmallVector Params; Params.reserve(NumParams); for (unsigned I = 0; I != NumParams; ++I) Params.push_back(ReadDeclAs()); BD->setParams(Params); BD->setIsVariadic(Record.readInt()); BD->setBlockMissingReturnType(Record.readInt()); BD->setIsConversionFromLambda(Record.readInt()); bool capturesCXXThis = Record.readInt(); unsigned numCaptures = Record.readInt(); SmallVector captures; captures.reserve(numCaptures); for (unsigned i = 0; i != numCaptures; ++i) { VarDecl *decl = ReadDeclAs(); unsigned flags = Record.readInt(); bool byRef = (flags & 1); bool nested = (flags & 2); Expr *copyExpr = ((flags & 4) ? Record.readExpr() : nullptr); captures.push_back(BlockDecl::Capture(decl, byRef, nested, copyExpr)); } BD->setCaptures(Reader.getContext(), captures, capturesCXXThis); } void ASTDeclReader::VisitCapturedDecl(CapturedDecl *CD) { VisitDecl(CD); unsigned ContextParamPos = Record.readInt(); CD->setNothrow(Record.readInt() != 0); // Body is set by VisitCapturedStmt. 
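// Illustrative aside (standalone sketch): each block capture read above is
// stored as a variable reference plus a small flag word, with bit 0 =
// captured by reference, bit 1 = nested capture, and bit 2 = "a copy
// expression follows in the record". Decoding such a packed flag word looks
// like this (names hypothetical):
#include <cstdint>

struct DecodedCapture {              // hypothetical mirror of the three bits
  bool ByRef;
  bool Nested;
  bool HasCopyExpr;                  // tells the reader to pull one more expr
};

static DecodedCapture decodeCaptureFlags(std::uint32_t Flags) {
  return DecodedCapture{/*ByRef=*/(Flags & 1) != 0,
                        /*Nested=*/(Flags & 2) != 0,
                        /*HasCopyExpr=*/(Flags & 4) != 0};
}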
for (unsigned I = 0; I < CD->NumParams; ++I) { if (I != ContextParamPos) CD->setParam(I, ReadDeclAs()); else CD->setContextParam(I, ReadDeclAs()); } } void ASTDeclReader::VisitLinkageSpecDecl(LinkageSpecDecl *D) { VisitDecl(D); D->setLanguage((LinkageSpecDecl::LanguageIDs)Record.readInt()); D->setExternLoc(ReadSourceLocation()); D->setRBraceLoc(ReadSourceLocation()); } void ASTDeclReader::VisitExportDecl(ExportDecl *D) { VisitDecl(D); D->RBraceLoc = ReadSourceLocation(); } void ASTDeclReader::VisitLabelDecl(LabelDecl *D) { VisitNamedDecl(D); D->setLocStart(ReadSourceLocation()); } void ASTDeclReader::VisitNamespaceDecl(NamespaceDecl *D) { RedeclarableResult Redecl = VisitRedeclarable(D); VisitNamedDecl(D); D->setInline(Record.readInt()); D->LocStart = ReadSourceLocation(); D->RBraceLoc = ReadSourceLocation(); // Defer loading the anonymous namespace until we've finished merging // this namespace; loading it might load a later declaration of the // same namespace, and we have an invariant that older declarations // get merged before newer ones try to merge. GlobalDeclID AnonNamespace = 0; if (Redecl.getFirstID() == ThisDeclID) { AnonNamespace = ReadDeclID(); } else { // Link this namespace back to the first declaration, which has already // been deserialized. D->AnonOrFirstNamespaceAndInline.setPointer(D->getFirstDecl()); } mergeRedeclarable(D, Redecl); if (AnonNamespace) { // Each module has its own anonymous namespace, which is disjoint from // any other module's anonymous namespaces, so don't attach the anonymous // namespace at all. NamespaceDecl *Anon = cast(Reader.GetDecl(AnonNamespace)); if (!Record.isModule()) D->setAnonymousNamespace(Anon); } } void ASTDeclReader::VisitNamespaceAliasDecl(NamespaceAliasDecl *D) { RedeclarableResult Redecl = VisitRedeclarable(D); VisitNamedDecl(D); D->NamespaceLoc = ReadSourceLocation(); D->IdentLoc = ReadSourceLocation(); D->QualifierLoc = Record.readNestedNameSpecifierLoc(); D->Namespace = ReadDeclAs(); mergeRedeclarable(D, Redecl); } void ASTDeclReader::VisitUsingDecl(UsingDecl *D) { VisitNamedDecl(D); D->setUsingLoc(ReadSourceLocation()); D->QualifierLoc = Record.readNestedNameSpecifierLoc(); ReadDeclarationNameLoc(D->DNLoc, D->getDeclName()); D->FirstUsingShadow.setPointer(ReadDeclAs()); D->setTypename(Record.readInt()); if (NamedDecl *Pattern = ReadDeclAs()) Reader.getContext().setInstantiatedFromUsingDecl(D, Pattern); mergeMergeable(D); } void ASTDeclReader::VisitUsingPackDecl(UsingPackDecl *D) { VisitNamedDecl(D); D->InstantiatedFrom = ReadDeclAs(); NamedDecl **Expansions = D->getTrailingObjects(); for (unsigned I = 0; I != D->NumExpansions; ++I) Expansions[I] = ReadDeclAs(); mergeMergeable(D); } void ASTDeclReader::VisitUsingShadowDecl(UsingShadowDecl *D) { RedeclarableResult Redecl = VisitRedeclarable(D); VisitNamedDecl(D); D->setTargetDecl(ReadDeclAs()); D->UsingOrNextShadow = ReadDeclAs(); UsingShadowDecl *Pattern = ReadDeclAs(); if (Pattern) Reader.getContext().setInstantiatedFromUsingShadowDecl(D, Pattern); mergeRedeclarable(D, Redecl); } void ASTDeclReader::VisitConstructorUsingShadowDecl( ConstructorUsingShadowDecl *D) { VisitUsingShadowDecl(D); D->NominatedBaseClassShadowDecl = ReadDeclAs(); D->ConstructedBaseClassShadowDecl = ReadDeclAs(); D->IsVirtual = Record.readInt(); } void ASTDeclReader::VisitUsingDirectiveDecl(UsingDirectiveDecl *D) { VisitNamedDecl(D); D->UsingLoc = ReadSourceLocation(); D->NamespaceLoc = ReadSourceLocation(); D->QualifierLoc = Record.readNestedNameSpecifierLoc(); D->NominatedNamespace = ReadDeclAs(); 
D->CommonAncestor = ReadDeclAs(); } void ASTDeclReader::VisitUnresolvedUsingValueDecl(UnresolvedUsingValueDecl *D) { VisitValueDecl(D); D->setUsingLoc(ReadSourceLocation()); D->QualifierLoc = Record.readNestedNameSpecifierLoc(); ReadDeclarationNameLoc(D->DNLoc, D->getDeclName()); D->EllipsisLoc = ReadSourceLocation(); mergeMergeable(D); } void ASTDeclReader::VisitUnresolvedUsingTypenameDecl( UnresolvedUsingTypenameDecl *D) { VisitTypeDecl(D); D->TypenameLocation = ReadSourceLocation(); D->QualifierLoc = Record.readNestedNameSpecifierLoc(); D->EllipsisLoc = ReadSourceLocation(); mergeMergeable(D); } void ASTDeclReader::ReadCXXDefinitionData( struct CXXRecordDecl::DefinitionData &Data, const CXXRecordDecl *D) { // Note: the caller has deserialized the IsLambda bit already. Data.UserDeclaredConstructor = Record.readInt(); Data.UserDeclaredSpecialMembers = Record.readInt(); Data.Aggregate = Record.readInt(); Data.PlainOldData = Record.readInt(); Data.Empty = Record.readInt(); Data.Polymorphic = Record.readInt(); Data.Abstract = Record.readInt(); Data.IsStandardLayout = Record.readInt(); Data.HasNoNonEmptyBases = Record.readInt(); Data.HasPrivateFields = Record.readInt(); Data.HasProtectedFields = Record.readInt(); Data.HasPublicFields = Record.readInt(); Data.HasMutableFields = Record.readInt(); Data.HasVariantMembers = Record.readInt(); Data.HasOnlyCMembers = Record.readInt(); Data.HasInClassInitializer = Record.readInt(); Data.HasUninitializedReferenceMember = Record.readInt(); Data.HasUninitializedFields = Record.readInt(); Data.HasInheritedConstructor = Record.readInt(); Data.HasInheritedAssignment = Record.readInt(); + Data.NeedOverloadResolutionForCopyConstructor = Record.readInt(); Data.NeedOverloadResolutionForMoveConstructor = Record.readInt(); Data.NeedOverloadResolutionForMoveAssignment = Record.readInt(); Data.NeedOverloadResolutionForDestructor = Record.readInt(); + Data.DefaultedCopyConstructorIsDeleted = Record.readInt(); Data.DefaultedMoveConstructorIsDeleted = Record.readInt(); Data.DefaultedMoveAssignmentIsDeleted = Record.readInt(); Data.DefaultedDestructorIsDeleted = Record.readInt(); Data.HasTrivialSpecialMembers = Record.readInt(); Data.DeclaredNonTrivialSpecialMembers = Record.readInt(); Data.HasIrrelevantDestructor = Record.readInt(); Data.HasConstexprNonCopyMoveConstructor = Record.readInt(); Data.HasDefaultedDefaultConstructor = Record.readInt(); + Data.CanPassInRegisters = Record.readInt(); Data.DefaultedDefaultConstructorIsConstexpr = Record.readInt(); Data.HasConstexprDefaultConstructor = Record.readInt(); Data.HasNonLiteralTypeFieldsOrBases = Record.readInt(); Data.ComputedVisibleConversions = Record.readInt(); Data.UserProvidedDefaultConstructor = Record.readInt(); Data.DeclaredSpecialMembers = Record.readInt(); Data.ImplicitCopyConstructorCanHaveConstParamForVBase = Record.readInt(); Data.ImplicitCopyConstructorCanHaveConstParamForNonVBase = Record.readInt(); Data.ImplicitCopyAssignmentHasConstParam = Record.readInt(); Data.HasDeclaredCopyConstructorWithConstParam = Record.readInt(); Data.HasDeclaredCopyAssignmentWithConstParam = Record.readInt(); Data.ODRHash = Record.readInt(); Data.HasODRHash = true; if (Record.readInt()) { Reader.BodySource[D] = Loc.F->Kind == ModuleKind::MK_MainFile ? 
ExternalASTSource::EK_Never : ExternalASTSource::EK_Always; } Data.NumBases = Record.readInt(); if (Data.NumBases) Data.Bases = ReadGlobalOffset(); Data.NumVBases = Record.readInt(); if (Data.NumVBases) Data.VBases = ReadGlobalOffset(); Record.readUnresolvedSet(Data.Conversions); Record.readUnresolvedSet(Data.VisibleConversions); assert(Data.Definition && "Data.Definition should be already set!"); Data.FirstFriend = ReadDeclID(); if (Data.IsLambda) { typedef LambdaCapture Capture; CXXRecordDecl::LambdaDefinitionData &Lambda = static_cast(Data); Lambda.Dependent = Record.readInt(); Lambda.IsGenericLambda = Record.readInt(); Lambda.CaptureDefault = Record.readInt(); Lambda.NumCaptures = Record.readInt(); Lambda.NumExplicitCaptures = Record.readInt(); Lambda.ManglingNumber = Record.readInt(); Lambda.ContextDecl = ReadDeclID(); Lambda.Captures = (Capture *)Reader.getContext().Allocate( sizeof(Capture) * Lambda.NumCaptures); Capture *ToCapture = Lambda.Captures; Lambda.MethodTyInfo = GetTypeSourceInfo(); for (unsigned I = 0, N = Lambda.NumCaptures; I != N; ++I) { SourceLocation Loc = ReadSourceLocation(); bool IsImplicit = Record.readInt(); LambdaCaptureKind Kind = static_cast(Record.readInt()); switch (Kind) { case LCK_StarThis: case LCK_This: case LCK_VLAType: *ToCapture++ = Capture(Loc, IsImplicit, Kind, nullptr,SourceLocation()); break; case LCK_ByCopy: case LCK_ByRef: VarDecl *Var = ReadDeclAs(); SourceLocation EllipsisLoc = ReadSourceLocation(); *ToCapture++ = Capture(Loc, IsImplicit, Kind, Var, EllipsisLoc); break; } } } } void ASTDeclReader::MergeDefinitionData( CXXRecordDecl *D, struct CXXRecordDecl::DefinitionData &&MergeDD) { assert(D->DefinitionData && "merging class definition into non-definition"); auto &DD = *D->DefinitionData; if (DD.Definition != MergeDD.Definition) { // Track that we merged the definitions. Reader.MergedDeclContexts.insert(std::make_pair(MergeDD.Definition, DD.Definition)); Reader.PendingDefinitions.erase(MergeDD.Definition); MergeDD.Definition->IsCompleteDefinition = false; Reader.mergeDefinitionVisibility(DD.Definition, MergeDD.Definition); assert(Reader.Lookups.find(MergeDD.Definition) == Reader.Lookups.end() && "already loaded pending lookups for merged definition"); } auto PFDI = Reader.PendingFakeDefinitionData.find(&DD); if (PFDI != Reader.PendingFakeDefinitionData.end() && PFDI->second == ASTReader::PendingFakeDefinitionKind::Fake) { // We faked up this definition data because we found a class for which we'd // not yet loaded the definition. Replace it with the real thing now. assert(!DD.IsLambda && !MergeDD.IsLambda && "faked up lambda definition?"); PFDI->second = ASTReader::PendingFakeDefinitionKind::FakeLoaded; // Don't change which declaration is the definition; that is required // to be invariant once we select it. auto *Def = DD.Definition; DD = std::move(MergeDD); DD.Definition = Def; return; } // FIXME: Move this out into a .def file? 
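  // MATCH_FIELD treats any disagreement between the two definitions as an ODR
  // violation and then ORs the values together; OR_FIELD only accumulates the
  // bit without a consistency check.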
bool DetectedOdrViolation = false; #define OR_FIELD(Field) DD.Field |= MergeDD.Field; #define MATCH_FIELD(Field) \ DetectedOdrViolation |= DD.Field != MergeDD.Field; \ OR_FIELD(Field) MATCH_FIELD(UserDeclaredConstructor) MATCH_FIELD(UserDeclaredSpecialMembers) MATCH_FIELD(Aggregate) MATCH_FIELD(PlainOldData) MATCH_FIELD(Empty) MATCH_FIELD(Polymorphic) MATCH_FIELD(Abstract) MATCH_FIELD(IsStandardLayout) MATCH_FIELD(HasNoNonEmptyBases) MATCH_FIELD(HasPrivateFields) MATCH_FIELD(HasProtectedFields) MATCH_FIELD(HasPublicFields) MATCH_FIELD(HasMutableFields) MATCH_FIELD(HasVariantMembers) MATCH_FIELD(HasOnlyCMembers) MATCH_FIELD(HasInClassInitializer) MATCH_FIELD(HasUninitializedReferenceMember) MATCH_FIELD(HasUninitializedFields) MATCH_FIELD(HasInheritedConstructor) MATCH_FIELD(HasInheritedAssignment) + MATCH_FIELD(NeedOverloadResolutionForCopyConstructor) MATCH_FIELD(NeedOverloadResolutionForMoveConstructor) MATCH_FIELD(NeedOverloadResolutionForMoveAssignment) MATCH_FIELD(NeedOverloadResolutionForDestructor) + MATCH_FIELD(DefaultedCopyConstructorIsDeleted) MATCH_FIELD(DefaultedMoveConstructorIsDeleted) MATCH_FIELD(DefaultedMoveAssignmentIsDeleted) MATCH_FIELD(DefaultedDestructorIsDeleted) OR_FIELD(HasTrivialSpecialMembers) OR_FIELD(DeclaredNonTrivialSpecialMembers) MATCH_FIELD(HasIrrelevantDestructor) OR_FIELD(HasConstexprNonCopyMoveConstructor) OR_FIELD(HasDefaultedDefaultConstructor) + MATCH_FIELD(CanPassInRegisters) MATCH_FIELD(DefaultedDefaultConstructorIsConstexpr) OR_FIELD(HasConstexprDefaultConstructor) MATCH_FIELD(HasNonLiteralTypeFieldsOrBases) // ComputedVisibleConversions is handled below. MATCH_FIELD(UserProvidedDefaultConstructor) OR_FIELD(DeclaredSpecialMembers) MATCH_FIELD(ImplicitCopyConstructorCanHaveConstParamForVBase) MATCH_FIELD(ImplicitCopyConstructorCanHaveConstParamForNonVBase) MATCH_FIELD(ImplicitCopyAssignmentHasConstParam) OR_FIELD(HasDeclaredCopyConstructorWithConstParam) OR_FIELD(HasDeclaredCopyAssignmentWithConstParam) MATCH_FIELD(IsLambda) #undef OR_FIELD #undef MATCH_FIELD if (DD.NumBases != MergeDD.NumBases || DD.NumVBases != MergeDD.NumVBases) DetectedOdrViolation = true; // FIXME: Issue a diagnostic if the base classes don't match when we come // to lazily load them. // FIXME: Issue a diagnostic if the list of conversion functions doesn't // match when we come to lazily load them. if (MergeDD.ComputedVisibleConversions && !DD.ComputedVisibleConversions) { DD.VisibleConversions = std::move(MergeDD.VisibleConversions); DD.ComputedVisibleConversions = true; } // FIXME: Issue a diagnostic if FirstFriend doesn't match when we come to // lazily load it. if (DD.IsLambda) { // FIXME: ODR-checking for merging lambdas (this happens, for instance, // when they occur within the body of a function template specialization). } if (D->getODRHash() != MergeDD.ODRHash) { DetectedOdrViolation = true; } if (DetectedOdrViolation) Reader.PendingOdrMergeFailures[DD.Definition].push_back(MergeDD.Definition); } void ASTDeclReader::ReadCXXRecordDefinition(CXXRecordDecl *D, bool Update) { struct CXXRecordDecl::DefinitionData *DD; ASTContext &C = Reader.getContext(); // Determine whether this is a lambda closure type, so that we can // allocate the appropriate DefinitionData structure. bool IsLambda = Record.readInt(); if (IsLambda) DD = new (C) CXXRecordDecl::LambdaDefinitionData(D, nullptr, false, false, LCD_None); else DD = new (C) struct CXXRecordDecl::DefinitionData(D); ReadCXXDefinitionData(*DD, D); // We might already have a definition for this record. 
This can happen either // because we're reading an update record, or because we've already done some // merging. Either way, just merge into it. CXXRecordDecl *Canon = D->getCanonicalDecl(); if (Canon->DefinitionData) { MergeDefinitionData(Canon, std::move(*DD)); D->DefinitionData = Canon->DefinitionData; return; } // Mark this declaration as being a definition. D->IsCompleteDefinition = true; D->DefinitionData = DD; // If this is not the first declaration or is an update record, we can have // other redeclarations already. Make a note that we need to propagate the // DefinitionData pointer onto them. if (Update || Canon != D) { Canon->DefinitionData = D->DefinitionData; Reader.PendingDefinitions.insert(D); } } ASTDeclReader::RedeclarableResult ASTDeclReader::VisitCXXRecordDeclImpl(CXXRecordDecl *D) { RedeclarableResult Redecl = VisitRecordDeclImpl(D); ASTContext &C = Reader.getContext(); enum CXXRecKind { CXXRecNotTemplate = 0, CXXRecTemplate, CXXRecMemberSpecialization }; switch ((CXXRecKind)Record.readInt()) { case CXXRecNotTemplate: // Merged when we merge the folding set entry in the primary template. if (!isa(D)) mergeRedeclarable(D, Redecl); break; case CXXRecTemplate: { // Merged when we merge the template. ClassTemplateDecl *Template = ReadDeclAs(); D->TemplateOrInstantiation = Template; if (!Template->getTemplatedDecl()) { // We've not actually loaded the ClassTemplateDecl yet, because we're // currently being loaded as its pattern. Rely on it to set up our // TypeForDecl (see VisitClassTemplateDecl). // // Beware: we do not yet know our canonical declaration, and may still // get merged once the surrounding class template has got off the ground. TypeIDForTypeDecl = 0; } break; } case CXXRecMemberSpecialization: { CXXRecordDecl *RD = ReadDeclAs(); TemplateSpecializationKind TSK = (TemplateSpecializationKind)Record.readInt(); SourceLocation POI = ReadSourceLocation(); MemberSpecializationInfo *MSI = new (C) MemberSpecializationInfo(RD, TSK); MSI->setPointOfInstantiation(POI); D->TemplateOrInstantiation = MSI; mergeRedeclarable(D, Redecl); break; } } bool WasDefinition = Record.readInt(); if (WasDefinition) ReadCXXRecordDefinition(D, /*Update*/false); else // Propagate DefinitionData pointer from the canonical declaration. D->DefinitionData = D->getCanonicalDecl()->DefinitionData; // Lazily load the key function to avoid deserializing every method so we can // compute it. if (WasDefinition) { DeclID KeyFn = ReadDeclID(); if (KeyFn && D->IsCompleteDefinition) // FIXME: This is wrong for the ARM ABI, where some other module may have // made this function no longer be a key function. We need an update // record or similar for that case. C.KeyFunctions[D] = KeyFn; } return Redecl; } void ASTDeclReader::VisitCXXDeductionGuideDecl(CXXDeductionGuideDecl *D) { VisitFunctionDecl(D); } void ASTDeclReader::VisitCXXMethodDecl(CXXMethodDecl *D) { VisitFunctionDecl(D); unsigned NumOverridenMethods = Record.readInt(); if (D->isCanonicalDecl()) { while (NumOverridenMethods--) { // Avoid invariant checking of CXXMethodDecl::addOverriddenMethod, // MD may be initializing. if (CXXMethodDecl *MD = ReadDeclAs()) Reader.getContext().addOverriddenMethod(D, MD->getCanonicalDecl()); } } else { // We don't care about which declarations this used to override; we get // the relevant information from the canonical declaration. 
Record.skipInts(NumOverridenMethods); } } void ASTDeclReader::VisitCXXConstructorDecl(CXXConstructorDecl *D) { // We need the inherited constructor information to merge the declaration, // so we have to read it before we call VisitCXXMethodDecl. if (D->isInheritingConstructor()) { auto *Shadow = ReadDeclAs(); auto *Ctor = ReadDeclAs(); *D->getTrailingObjects() = InheritedConstructor(Shadow, Ctor); } VisitCXXMethodDecl(D); } void ASTDeclReader::VisitCXXDestructorDecl(CXXDestructorDecl *D) { VisitCXXMethodDecl(D); if (auto *OperatorDelete = ReadDeclAs()) { auto *Canon = cast(D->getCanonicalDecl()); // FIXME: Check consistency if we have an old and new operator delete. if (!Canon->OperatorDelete) Canon->OperatorDelete = OperatorDelete; } } void ASTDeclReader::VisitCXXConversionDecl(CXXConversionDecl *D) { VisitCXXMethodDecl(D); } void ASTDeclReader::VisitImportDecl(ImportDecl *D) { VisitDecl(D); D->ImportedAndComplete.setPointer(readModule()); D->ImportedAndComplete.setInt(Record.readInt()); SourceLocation *StoredLocs = D->getTrailingObjects(); for (unsigned I = 0, N = Record.back(); I != N; ++I) StoredLocs[I] = ReadSourceLocation(); Record.skipInts(1); // The number of stored source locations. } void ASTDeclReader::VisitAccessSpecDecl(AccessSpecDecl *D) { VisitDecl(D); D->setColonLoc(ReadSourceLocation()); } void ASTDeclReader::VisitFriendDecl(FriendDecl *D) { VisitDecl(D); if (Record.readInt()) // hasFriendDecl D->Friend = ReadDeclAs(); else D->Friend = GetTypeSourceInfo(); for (unsigned i = 0; i != D->NumTPLists; ++i) D->getTrailingObjects()[i] = Record.readTemplateParameterList(); D->NextFriend = ReadDeclID(); D->UnsupportedFriend = (Record.readInt() != 0); D->FriendLoc = ReadSourceLocation(); } void ASTDeclReader::VisitFriendTemplateDecl(FriendTemplateDecl *D) { VisitDecl(D); unsigned NumParams = Record.readInt(); D->NumParams = NumParams; D->Params = new TemplateParameterList*[NumParams]; for (unsigned i = 0; i != NumParams; ++i) D->Params[i] = Record.readTemplateParameterList(); if (Record.readInt()) // HasFriendDecl D->Friend = ReadDeclAs(); else D->Friend = GetTypeSourceInfo(); D->FriendLoc = ReadSourceLocation(); } DeclID ASTDeclReader::VisitTemplateDecl(TemplateDecl *D) { VisitNamedDecl(D); DeclID PatternID = ReadDeclID(); NamedDecl *TemplatedDecl = cast_or_null(Reader.GetDecl(PatternID)); TemplateParameterList *TemplateParams = Record.readTemplateParameterList(); // FIXME handle associated constraints D->init(TemplatedDecl, TemplateParams); return PatternID; } ASTDeclReader::RedeclarableResult ASTDeclReader::VisitRedeclarableTemplateDecl(RedeclarableTemplateDecl *D) { RedeclarableResult Redecl = VisitRedeclarable(D); // Make sure we've allocated the Common pointer first. We do this before // VisitTemplateDecl so that getCommonPtr() can be used during initialization. RedeclarableTemplateDecl *CanonD = D->getCanonicalDecl(); if (!CanonD->Common) { CanonD->Common = CanonD->newCommon(Reader.getContext()); Reader.PendingDefinitions.insert(CanonD); } D->Common = CanonD->Common; // If this is the first declaration of the template, fill in the information // for the 'common' pointer. 
if (ThisDeclID == Redecl.getFirstID()) { if (RedeclarableTemplateDecl *RTD = ReadDeclAs()) { assert(RTD->getKind() == D->getKind() && "InstantiatedFromMemberTemplate kind mismatch"); D->setInstantiatedFromMemberTemplate(RTD); if (Record.readInt()) D->setMemberSpecialization(); } } DeclID PatternID = VisitTemplateDecl(D); D->IdentifierNamespace = Record.readInt(); mergeRedeclarable(D, Redecl, PatternID); // If we merged the template with a prior declaration chain, merge the common // pointer. // FIXME: Actually merge here, don't just overwrite. D->Common = D->getCanonicalDecl()->Common; return Redecl; } void ASTDeclReader::VisitClassTemplateDecl(ClassTemplateDecl *D) { RedeclarableResult Redecl = VisitRedeclarableTemplateDecl(D); if (ThisDeclID == Redecl.getFirstID()) { // This ClassTemplateDecl owns a CommonPtr; read it to keep track of all of // the specializations. SmallVector SpecIDs; ReadDeclIDList(SpecIDs); ASTDeclReader::AddLazySpecializations(D, SpecIDs); } if (D->getTemplatedDecl()->TemplateOrInstantiation) { // We were loaded before our templated declaration was. We've not set up // its corresponding type yet (see VisitCXXRecordDeclImpl), so reconstruct // it now. Reader.getContext().getInjectedClassNameType( D->getTemplatedDecl(), D->getInjectedClassNameSpecialization()); } } void ASTDeclReader::VisitBuiltinTemplateDecl(BuiltinTemplateDecl *D) { llvm_unreachable("BuiltinTemplates are not serialized"); } /// TODO: Unify with ClassTemplateDecl version? /// May require unifying ClassTemplateDecl and /// VarTemplateDecl beyond TemplateDecl... void ASTDeclReader::VisitVarTemplateDecl(VarTemplateDecl *D) { RedeclarableResult Redecl = VisitRedeclarableTemplateDecl(D); if (ThisDeclID == Redecl.getFirstID()) { // This VarTemplateDecl owns a CommonPtr; read it to keep track of all of // the specializations. SmallVector SpecIDs; ReadDeclIDList(SpecIDs); ASTDeclReader::AddLazySpecializations(D, SpecIDs); } } ASTDeclReader::RedeclarableResult ASTDeclReader::VisitClassTemplateSpecializationDeclImpl( ClassTemplateSpecializationDecl *D) { RedeclarableResult Redecl = VisitCXXRecordDeclImpl(D); ASTContext &C = Reader.getContext(); if (Decl *InstD = ReadDecl()) { if (ClassTemplateDecl *CTD = dyn_cast(InstD)) { D->SpecializedTemplate = CTD; } else { SmallVector TemplArgs; Record.readTemplateArgumentList(TemplArgs); TemplateArgumentList *ArgList = TemplateArgumentList::CreateCopy(C, TemplArgs); ClassTemplateSpecializationDecl::SpecializedPartialSpecialization *PS = new (C) ClassTemplateSpecializationDecl:: SpecializedPartialSpecialization(); PS->PartialSpecialization = cast(InstD); PS->TemplateArgs = ArgList; D->SpecializedTemplate = PS; } } SmallVector TemplArgs; Record.readTemplateArgumentList(TemplArgs, /*Canonicalize*/ true); D->TemplateArgs = TemplateArgumentList::CreateCopy(C, TemplArgs); D->PointOfInstantiation = ReadSourceLocation(); D->SpecializationKind = (TemplateSpecializationKind)Record.readInt(); bool writtenAsCanonicalDecl = Record.readInt(); if (writtenAsCanonicalDecl) { ClassTemplateDecl *CanonPattern = ReadDeclAs(); if (D->isCanonicalDecl()) { // It's kept in the folding set. 
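      // The folding set may already contain an equivalent specialization loaded
      // from another module; GetOrInsertNode returns that existing node, and we
      // then merge this declaration into it below.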
// Set this as, or find, the canonical declaration for this specialization ClassTemplateSpecializationDecl *CanonSpec; if (ClassTemplatePartialSpecializationDecl *Partial = dyn_cast(D)) { CanonSpec = CanonPattern->getCommonPtr()->PartialSpecializations .GetOrInsertNode(Partial); } else { CanonSpec = CanonPattern->getCommonPtr()->Specializations.GetOrInsertNode(D); } // If there was already a canonical specialization, merge into it. if (CanonSpec != D) { mergeRedeclarable(D, CanonSpec, Redecl); // This declaration might be a definition. Merge with any existing // definition. if (auto *DDD = D->DefinitionData) { if (CanonSpec->DefinitionData) MergeDefinitionData(CanonSpec, std::move(*DDD)); else CanonSpec->DefinitionData = D->DefinitionData; } D->DefinitionData = CanonSpec->DefinitionData; } } } // Explicit info. if (TypeSourceInfo *TyInfo = GetTypeSourceInfo()) { ClassTemplateSpecializationDecl::ExplicitSpecializationInfo *ExplicitInfo = new (C) ClassTemplateSpecializationDecl::ExplicitSpecializationInfo; ExplicitInfo->TypeAsWritten = TyInfo; ExplicitInfo->ExternLoc = ReadSourceLocation(); ExplicitInfo->TemplateKeywordLoc = ReadSourceLocation(); D->ExplicitInfo = ExplicitInfo; } return Redecl; } void ASTDeclReader::VisitClassTemplatePartialSpecializationDecl( ClassTemplatePartialSpecializationDecl *D) { RedeclarableResult Redecl = VisitClassTemplateSpecializationDeclImpl(D); D->TemplateParams = Record.readTemplateParameterList(); D->ArgsAsWritten = Record.readASTTemplateArgumentListInfo(); // These are read/set from/to the first declaration. if (ThisDeclID == Redecl.getFirstID()) { D->InstantiatedFromMember.setPointer( ReadDeclAs()); D->InstantiatedFromMember.setInt(Record.readInt()); } } void ASTDeclReader::VisitClassScopeFunctionSpecializationDecl( ClassScopeFunctionSpecializationDecl *D) { VisitDecl(D); D->Specialization = ReadDeclAs(); } void ASTDeclReader::VisitFunctionTemplateDecl(FunctionTemplateDecl *D) { RedeclarableResult Redecl = VisitRedeclarableTemplateDecl(D); if (ThisDeclID == Redecl.getFirstID()) { // This FunctionTemplateDecl owns a CommonPtr; read it. SmallVector SpecIDs; ReadDeclIDList(SpecIDs); ASTDeclReader::AddLazySpecializations(D, SpecIDs); } } /// TODO: Unify with ClassTemplateSpecializationDecl version? /// May require unifying ClassTemplate(Partial)SpecializationDecl and /// VarTemplate(Partial)SpecializationDecl with a new data /// structure Template(Partial)SpecializationDecl, and /// using Template(Partial)SpecializationDecl as input type. ASTDeclReader::RedeclarableResult ASTDeclReader::VisitVarTemplateSpecializationDeclImpl( VarTemplateSpecializationDecl *D) { RedeclarableResult Redecl = VisitVarDeclImpl(D); ASTContext &C = Reader.getContext(); if (Decl *InstD = ReadDecl()) { if (VarTemplateDecl *VTD = dyn_cast(InstD)) { D->SpecializedTemplate = VTD; } else { SmallVector TemplArgs; Record.readTemplateArgumentList(TemplArgs); TemplateArgumentList *ArgList = TemplateArgumentList::CreateCopy( C, TemplArgs); VarTemplateSpecializationDecl::SpecializedPartialSpecialization *PS = new (C) VarTemplateSpecializationDecl::SpecializedPartialSpecialization(); PS->PartialSpecialization = cast(InstD); PS->TemplateArgs = ArgList; D->SpecializedTemplate = PS; } } // Explicit info. 
if (TypeSourceInfo *TyInfo = GetTypeSourceInfo()) { VarTemplateSpecializationDecl::ExplicitSpecializationInfo *ExplicitInfo = new (C) VarTemplateSpecializationDecl::ExplicitSpecializationInfo; ExplicitInfo->TypeAsWritten = TyInfo; ExplicitInfo->ExternLoc = ReadSourceLocation(); ExplicitInfo->TemplateKeywordLoc = ReadSourceLocation(); D->ExplicitInfo = ExplicitInfo; } SmallVector TemplArgs; Record.readTemplateArgumentList(TemplArgs, /*Canonicalize*/ true); D->TemplateArgs = TemplateArgumentList::CreateCopy(C, TemplArgs); D->PointOfInstantiation = ReadSourceLocation(); D->SpecializationKind = (TemplateSpecializationKind)Record.readInt(); bool writtenAsCanonicalDecl = Record.readInt(); if (writtenAsCanonicalDecl) { VarTemplateDecl *CanonPattern = ReadDeclAs(); if (D->isCanonicalDecl()) { // It's kept in the folding set. // FIXME: If it's already present, merge it. if (VarTemplatePartialSpecializationDecl *Partial = dyn_cast(D)) { CanonPattern->getCommonPtr()->PartialSpecializations .GetOrInsertNode(Partial); } else { CanonPattern->getCommonPtr()->Specializations.GetOrInsertNode(D); } } } return Redecl; } /// TODO: Unify with ClassTemplatePartialSpecializationDecl version? /// May require unifying ClassTemplate(Partial)SpecializationDecl and /// VarTemplate(Partial)SpecializationDecl with a new data /// structure Template(Partial)SpecializationDecl, and /// using Template(Partial)SpecializationDecl as input type. void ASTDeclReader::VisitVarTemplatePartialSpecializationDecl( VarTemplatePartialSpecializationDecl *D) { RedeclarableResult Redecl = VisitVarTemplateSpecializationDeclImpl(D); D->TemplateParams = Record.readTemplateParameterList(); D->ArgsAsWritten = Record.readASTTemplateArgumentListInfo(); // These are read/set from/to the first declaration. if (ThisDeclID == Redecl.getFirstID()) { D->InstantiatedFromMember.setPointer( ReadDeclAs()); D->InstantiatedFromMember.setInt(Record.readInt()); } } void ASTDeclReader::VisitTemplateTypeParmDecl(TemplateTypeParmDecl *D) { VisitTypeDecl(D); D->setDeclaredWithTypename(Record.readInt()); if (Record.readInt()) D->setDefaultArgument(GetTypeSourceInfo()); } void ASTDeclReader::VisitNonTypeTemplateParmDecl(NonTypeTemplateParmDecl *D) { VisitDeclaratorDecl(D); // TemplateParmPosition. D->setDepth(Record.readInt()); D->setPosition(Record.readInt()); if (D->isExpandedParameterPack()) { auto TypesAndInfos = D->getTrailingObjects>(); for (unsigned I = 0, N = D->getNumExpansionTypes(); I != N; ++I) { new (&TypesAndInfos[I].first) QualType(Record.readType()); TypesAndInfos[I].second = GetTypeSourceInfo(); } } else { // Rest of NonTypeTemplateParmDecl. D->ParameterPack = Record.readInt(); if (Record.readInt()) D->setDefaultArgument(Record.readExpr()); } } void ASTDeclReader::VisitTemplateTemplateParmDecl(TemplateTemplateParmDecl *D) { VisitTemplateDecl(D); // TemplateParmPosition. D->setDepth(Record.readInt()); D->setPosition(Record.readInt()); if (D->isExpandedParameterPack()) { TemplateParameterList **Data = D->getTrailingObjects(); for (unsigned I = 0, N = D->getNumExpansionTemplateParameters(); I != N; ++I) Data[I] = Record.readTemplateParameterList(); } else { // Rest of TemplateTemplateParmDecl. 
D->ParameterPack = Record.readInt(); if (Record.readInt()) D->setDefaultArgument(Reader.getContext(), Record.readTemplateArgumentLoc()); } } void ASTDeclReader::VisitTypeAliasTemplateDecl(TypeAliasTemplateDecl *D) { VisitRedeclarableTemplateDecl(D); } void ASTDeclReader::VisitStaticAssertDecl(StaticAssertDecl *D) { VisitDecl(D); D->AssertExprAndFailed.setPointer(Record.readExpr()); D->AssertExprAndFailed.setInt(Record.readInt()); D->Message = cast_or_null(Record.readExpr()); D->RParenLoc = ReadSourceLocation(); } void ASTDeclReader::VisitEmptyDecl(EmptyDecl *D) { VisitDecl(D); } std::pair ASTDeclReader::VisitDeclContext(DeclContext *DC) { uint64_t LexicalOffset = ReadLocalOffset(); uint64_t VisibleOffset = ReadLocalOffset(); return std::make_pair(LexicalOffset, VisibleOffset); } template ASTDeclReader::RedeclarableResult ASTDeclReader::VisitRedeclarable(Redeclarable *D) { DeclID FirstDeclID = ReadDeclID(); Decl *MergeWith = nullptr; bool IsKeyDecl = ThisDeclID == FirstDeclID; bool IsFirstLocalDecl = false; uint64_t RedeclOffset = 0; // 0 indicates that this declaration was the only declaration of its entity, // and is used for space optimization. if (FirstDeclID == 0) { FirstDeclID = ThisDeclID; IsKeyDecl = true; IsFirstLocalDecl = true; } else if (unsigned N = Record.readInt()) { // This declaration was the first local declaration, but may have imported // other declarations. IsKeyDecl = N == 1; IsFirstLocalDecl = true; // We have some declarations that must be before us in our redeclaration // chain. Read them now, and remember that we ought to merge with one of // them. // FIXME: Provide a known merge target to the second and subsequent such // declaration. for (unsigned I = 0; I != N - 1; ++I) MergeWith = ReadDecl(); RedeclOffset = ReadLocalOffset(); } else { // This declaration was not the first local declaration. Read the first // local declaration now, to trigger the import of other redeclarations. (void)ReadDecl(); } T *FirstDecl = cast_or_null(Reader.GetDecl(FirstDeclID)); if (FirstDecl != D) { // We delay loading of the redeclaration chain to avoid deeply nested calls. // We temporarily set the first (canonical) declaration as the previous one // which is the one that matters and mark the real previous DeclID to be // loaded & attached later on. D->RedeclLink = Redeclarable::PreviousDeclLink(FirstDecl); D->First = FirstDecl->getCanonicalDecl(); } T *DAsT = static_cast(D); // Note that we need to load local redeclarations of this decl and build a // decl chain for them. This must happen *after* we perform the preloading // above; this ensures that the redeclaration chain is built in the correct // order. if (IsFirstLocalDecl) Reader.PendingDeclChains.push_back(std::make_pair(DAsT, RedeclOffset)); return RedeclarableResult(MergeWith, FirstDeclID, IsKeyDecl); } /// \brief Attempts to merge the given declaration (D) with another declaration /// of the same entity. template void ASTDeclReader::mergeRedeclarable(Redeclarable *DBase, RedeclarableResult &Redecl, DeclID TemplatePatternID) { // If modules are not available, there is no reason to perform this merge. if (!Reader.getContext().getLangOpts().Modules) return; // If we're not the canonical declaration, we don't need to merge. if (!DBase->isFirstDecl()) return; T *D = static_cast(DBase); if (auto *Existing = Redecl.getKnownMergeTarget()) // We already know of an existing declaration we should merge with. 
mergeRedeclarable(D, cast(Existing), Redecl, TemplatePatternID); else if (FindExistingResult ExistingRes = findExisting(D)) if (T *Existing = ExistingRes) mergeRedeclarable(D, Existing, Redecl, TemplatePatternID); } /// \brief "Cast" to type T, asserting if we don't have an implicit conversion. /// We use this to put code in a template that will only be valid for certain /// instantiations. template static T assert_cast(T t) { return t; } template static T assert_cast(...) { llvm_unreachable("bad assert_cast"); } /// \brief Merge together the pattern declarations from two template /// declarations. void ASTDeclReader::mergeTemplatePattern(RedeclarableTemplateDecl *D, RedeclarableTemplateDecl *Existing, DeclID DsID, bool IsKeyDecl) { auto *DPattern = D->getTemplatedDecl(); auto *ExistingPattern = Existing->getTemplatedDecl(); RedeclarableResult Result(/*MergeWith*/ ExistingPattern, DPattern->getCanonicalDecl()->getGlobalID(), IsKeyDecl); if (auto *DClass = dyn_cast(DPattern)) { // Merge with any existing definition. // FIXME: This is duplicated in several places. Refactor. auto *ExistingClass = cast(ExistingPattern)->getCanonicalDecl(); if (auto *DDD = DClass->DefinitionData) { if (ExistingClass->DefinitionData) { MergeDefinitionData(ExistingClass, std::move(*DDD)); } else { ExistingClass->DefinitionData = DClass->DefinitionData; // We may have skipped this before because we thought that DClass // was the canonical declaration. Reader.PendingDefinitions.insert(DClass); } } DClass->DefinitionData = ExistingClass->DefinitionData; return mergeRedeclarable(DClass, cast(ExistingPattern), Result); } if (auto *DFunction = dyn_cast(DPattern)) return mergeRedeclarable(DFunction, cast(ExistingPattern), Result); if (auto *DVar = dyn_cast(DPattern)) return mergeRedeclarable(DVar, cast(ExistingPattern), Result); if (auto *DAlias = dyn_cast(DPattern)) return mergeRedeclarable(DAlias, cast(ExistingPattern), Result); llvm_unreachable("merged an unknown kind of redeclarable template"); } /// \brief Attempts to merge the given declaration (D) with another declaration /// of the same entity. template void ASTDeclReader::mergeRedeclarable(Redeclarable *DBase, T *Existing, RedeclarableResult &Redecl, DeclID TemplatePatternID) { T *D = static_cast(DBase); T *ExistingCanon = Existing->getCanonicalDecl(); T *DCanon = D->getCanonicalDecl(); if (ExistingCanon != DCanon) { assert(DCanon->getGlobalID() == Redecl.getFirstID() && "already merged this declaration"); // Have our redeclaration link point back at the canonical declaration // of the existing declaration, so that this declaration has the // appropriate canonical declaration. D->RedeclLink = Redeclarable::PreviousDeclLink(ExistingCanon); D->First = ExistingCanon; ExistingCanon->Used |= D->Used; D->Used = false; // When we merge a namespace, update its pointer to the first namespace. // We cannot have loaded any redeclarations of this declaration yet, so // there's nothing else that needs to be updated. if (auto *Namespace = dyn_cast(D)) Namespace->AnonOrFirstNamespaceAndInline.setPointer( assert_cast(ExistingCanon)); // When we merge a template, merge its pattern. if (auto *DTemplate = dyn_cast(D)) mergeTemplatePattern( DTemplate, assert_cast(ExistingCanon), TemplatePatternID, Redecl.isKeyDecl()); // If this declaration is a key declaration, make a note of that. 
if (Redecl.isKeyDecl()) Reader.KeyDecls[ExistingCanon].push_back(Redecl.getFirstID()); } } /// \brief Attempts to merge the given declaration (D) with another declaration /// of the same entity, for the case where the entity is not actually /// redeclarable. This happens, for instance, when merging the fields of /// identical class definitions from two different modules. template void ASTDeclReader::mergeMergeable(Mergeable *D) { // If modules are not available, there is no reason to perform this merge. if (!Reader.getContext().getLangOpts().Modules) return; // ODR-based merging is only performed in C++. In C, identically-named things // in different translation units are not redeclarations (but may still have // compatible types). if (!Reader.getContext().getLangOpts().CPlusPlus) return; if (FindExistingResult ExistingRes = findExisting(static_cast(D))) if (T *Existing = ExistingRes) Reader.getContext().setPrimaryMergedDecl(static_cast(D), Existing->getCanonicalDecl()); } void ASTDeclReader::VisitOMPThreadPrivateDecl(OMPThreadPrivateDecl *D) { VisitDecl(D); unsigned NumVars = D->varlist_size(); SmallVector Vars; Vars.reserve(NumVars); for (unsigned i = 0; i != NumVars; ++i) { Vars.push_back(Record.readExpr()); } D->setVars(Vars); } void ASTDeclReader::VisitOMPDeclareReductionDecl(OMPDeclareReductionDecl *D) { VisitValueDecl(D); D->setLocation(ReadSourceLocation()); D->setCombiner(Record.readExpr()); D->setInitializer(Record.readExpr()); D->PrevDeclInScope = ReadDeclID(); } void ASTDeclReader::VisitOMPCapturedExprDecl(OMPCapturedExprDecl *D) { VisitVarDecl(D); } //===----------------------------------------------------------------------===// // Attribute Reading //===----------------------------------------------------------------------===// /// \brief Reads attributes from the current stream position. void ASTReader::ReadAttributes(ASTRecordReader &Record, AttrVec &Attrs) { for (unsigned i = 0, e = Record.readInt(); i != e; ++i) { Attr *New = nullptr; attr::Kind Kind = (attr::Kind)Record.readInt(); SourceRange Range = Record.readSourceRange(); ASTContext &Context = getContext(); #include "clang/Serialization/AttrPCHRead.inc" assert(New && "Unable to decode attribute?"); Attrs.push_back(New); } } //===----------------------------------------------------------------------===// // ASTReader Implementation //===----------------------------------------------------------------------===// /// \brief Note that we have loaded the declaration with the given /// Index. /// /// This routine notes that this declaration has already been loaded, /// so that future GetDecl calls will return this declaration rather /// than trying to load a new declaration. inline void ASTReader::LoadedDecl(unsigned Index, Decl *D) { assert(!DeclsLoaded[Index] && "Decl loaded twice?"); DeclsLoaded[Index] = D; } /// \brief Determine whether the consumer will be interested in seeing /// this declaration (via HandleTopLevelDecl). /// /// This routine should return true for anything that might affect /// code generation, e.g., inline function definitions, Objective-C /// declarations with metadata, etc. static bool isConsumerInterestedIn(ASTContext &Ctx, Decl *D, bool HasBody) { // An ObjCMethodDecl is never considered as "interesting" because its // implementation container always is. // An ImportDecl or VarDecl imported from a module will get emitted when // we import the relevant module. 
if ((isa(D) || isa(D)) && D->getImportedOwningModule() && Ctx.DeclMustBeEmitted(D)) return false; if (isa(D) || isa(D) || isa(D) || isa(D) || isa(D) || isa(D)) return true; if (isa(D) || isa(D)) return !D->getDeclContext()->isFunctionOrMethod(); if (VarDecl *Var = dyn_cast(D)) return Var->isFileVarDecl() && Var->isThisDeclarationADefinition() == VarDecl::Definition; if (FunctionDecl *Func = dyn_cast(D)) return Func->doesThisDeclarationHaveABody() || HasBody; if (auto *ES = D->getASTContext().getExternalSource()) if (ES->hasExternalDefinitions(D) == ExternalASTSource::EK_Never) return true; return false; } /// \brief Get the correct cursor and offset for loading a declaration. ASTReader::RecordLocation ASTReader::DeclCursorForID(DeclID ID, SourceLocation &Loc) { GlobalDeclMapType::iterator I = GlobalDeclMap.find(ID); assert(I != GlobalDeclMap.end() && "Corrupted global declaration map"); ModuleFile *M = I->second; const DeclOffset &DOffs = M->DeclOffsets[ID - M->BaseDeclID - NUM_PREDEF_DECL_IDS]; Loc = TranslateSourceLocation(*M, DOffs.getLocation()); return RecordLocation(M, DOffs.BitOffset); } ASTReader::RecordLocation ASTReader::getLocalBitOffset(uint64_t GlobalOffset) { ContinuousRangeMap::iterator I = GlobalBitOffsetsMap.find(GlobalOffset); assert(I != GlobalBitOffsetsMap.end() && "Corrupted global bit offsets map"); return RecordLocation(I->second, GlobalOffset - I->second->GlobalBitOffset); } uint64_t ASTReader::getGlobalBitOffset(ModuleFile &M, uint32_t LocalOffset) { return LocalOffset + M.GlobalBitOffset; } static bool isSameTemplateParameterList(const TemplateParameterList *X, const TemplateParameterList *Y); /// \brief Determine whether two template parameters are similar enough /// that they may be used in declarations of the same template. static bool isSameTemplateParameter(const NamedDecl *X, const NamedDecl *Y) { if (X->getKind() != Y->getKind()) return false; if (const TemplateTypeParmDecl *TX = dyn_cast(X)) { const TemplateTypeParmDecl *TY = cast(Y); return TX->isParameterPack() == TY->isParameterPack(); } if (const NonTypeTemplateParmDecl *TX = dyn_cast(X)) { const NonTypeTemplateParmDecl *TY = cast(Y); return TX->isParameterPack() == TY->isParameterPack() && TX->getASTContext().hasSameType(TX->getType(), TY->getType()); } const TemplateTemplateParmDecl *TX = cast(X); const TemplateTemplateParmDecl *TY = cast(Y); return TX->isParameterPack() == TY->isParameterPack() && isSameTemplateParameterList(TX->getTemplateParameters(), TY->getTemplateParameters()); } static NamespaceDecl *getNamespace(const NestedNameSpecifier *X) { if (auto *NS = X->getAsNamespace()) return NS; if (auto *NAS = X->getAsNamespaceAlias()) return NAS->getNamespace(); return nullptr; } static bool isSameQualifier(const NestedNameSpecifier *X, const NestedNameSpecifier *Y) { if (auto *NSX = getNamespace(X)) { auto *NSY = getNamespace(Y); if (!NSY || NSX->getCanonicalDecl() != NSY->getCanonicalDecl()) return false; } else if (X->getKind() != Y->getKind()) return false; // FIXME: For namespaces and types, we're permitted to check that the entity // is named via the same tokens. We should probably do so. switch (X->getKind()) { case NestedNameSpecifier::Identifier: if (X->getAsIdentifier() != Y->getAsIdentifier()) return false; break; case NestedNameSpecifier::Namespace: case NestedNameSpecifier::NamespaceAlias: // We've already checked that we named the same namespace. 
break; case NestedNameSpecifier::TypeSpec: case NestedNameSpecifier::TypeSpecWithTemplate: if (X->getAsType()->getCanonicalTypeInternal() != Y->getAsType()->getCanonicalTypeInternal()) return false; break; case NestedNameSpecifier::Global: case NestedNameSpecifier::Super: return true; } // Recurse into earlier portion of NNS, if any. auto *PX = X->getPrefix(); auto *PY = Y->getPrefix(); if (PX && PY) return isSameQualifier(PX, PY); return !PX && !PY; } /// \brief Determine whether two template parameter lists are similar enough /// that they may be used in declarations of the same template. static bool isSameTemplateParameterList(const TemplateParameterList *X, const TemplateParameterList *Y) { if (X->size() != Y->size()) return false; for (unsigned I = 0, N = X->size(); I != N; ++I) if (!isSameTemplateParameter(X->getParam(I), Y->getParam(I))) return false; return true; } /// Determine whether the attributes we can overload on are identical for A and /// B. Will ignore any overloadable attrs represented in the type of A and B. static bool hasSameOverloadableAttrs(const FunctionDecl *A, const FunctionDecl *B) { // Note that pass_object_size attributes are represented in the function's // ExtParameterInfo, so we don't need to check them here. SmallVector AEnableIfs; // Since this is an equality check, we can ignore that enable_if attrs show up // in reverse order. for (const auto *EIA : A->specific_attrs()) AEnableIfs.push_back(EIA); SmallVector BEnableIfs; for (const auto *EIA : B->specific_attrs()) BEnableIfs.push_back(EIA); // Two very common cases: either we have 0 enable_if attrs, or we have an // unequal number of enable_if attrs. if (AEnableIfs.empty() && BEnableIfs.empty()) return true; if (AEnableIfs.size() != BEnableIfs.size()) return false; llvm::FoldingSetNodeID Cand1ID, Cand2ID; for (unsigned I = 0, E = AEnableIfs.size(); I != E; ++I) { Cand1ID.clear(); Cand2ID.clear(); AEnableIfs[I]->getCond()->Profile(Cand1ID, A->getASTContext(), true); BEnableIfs[I]->getCond()->Profile(Cand2ID, B->getASTContext(), true); if (Cand1ID != Cand2ID) return false; } return true; } /// \brief Determine whether the two declarations refer to the same entity. static bool isSameEntity(NamedDecl *X, NamedDecl *Y) { assert(X->getDeclName() == Y->getDeclName() && "Declaration name mismatch!"); if (X == Y) return true; // Must be in the same context. if (!X->getDeclContext()->getRedeclContext()->Equals( Y->getDeclContext()->getRedeclContext())) return false; // Two typedefs refer to the same entity if they have the same underlying // type. if (TypedefNameDecl *TypedefX = dyn_cast(X)) if (TypedefNameDecl *TypedefY = dyn_cast(Y)) return X->getASTContext().hasSameType(TypedefX->getUnderlyingType(), TypedefY->getUnderlyingType()); // Must have the same kind. if (X->getKind() != Y->getKind()) return false; // Objective-C classes and protocols with the same name always match. if (isa(X) || isa(X)) return true; if (isa(X)) { // No need to handle these here: we merge them when adding them to the // template. return false; } // Compatible tags match. if (TagDecl *TagX = dyn_cast(X)) { TagDecl *TagY = cast(Y); return (TagX->getTagKind() == TagY->getTagKind()) || ((TagX->getTagKind() == TTK_Struct || TagX->getTagKind() == TTK_Class || TagX->getTagKind() == TTK_Interface) && (TagY->getTagKind() == TTK_Struct || TagY->getTagKind() == TTK_Class || TagY->getTagKind() == TTK_Interface)); } // Functions with the same type and linkage match. 
// FIXME: This needs to cope with merging of prototyped/non-prototyped // functions, etc. if (FunctionDecl *FuncX = dyn_cast(X)) { FunctionDecl *FuncY = cast(Y); if (CXXConstructorDecl *CtorX = dyn_cast(X)) { CXXConstructorDecl *CtorY = cast(Y); if (CtorX->getInheritedConstructor() && !isSameEntity(CtorX->getInheritedConstructor().getConstructor(), CtorY->getInheritedConstructor().getConstructor())) return false; } ASTContext &C = FuncX->getASTContext(); if (!C.hasSameType(FuncX->getType(), FuncY->getType())) { // We can get functions with different types on the redecl chain in C++17 // if they have differing exception specifications and at least one of // the excpetion specs is unresolved. // FIXME: Do we need to check for C++14 deduced return types here too? auto *XFPT = FuncX->getType()->getAs(); auto *YFPT = FuncY->getType()->getAs(); if (C.getLangOpts().CPlusPlus1z && XFPT && YFPT && (isUnresolvedExceptionSpec(XFPT->getExceptionSpecType()) || isUnresolvedExceptionSpec(YFPT->getExceptionSpecType())) && C.hasSameFunctionTypeIgnoringExceptionSpec(FuncX->getType(), FuncY->getType())) return true; return false; } return FuncX->getLinkageInternal() == FuncY->getLinkageInternal() && hasSameOverloadableAttrs(FuncX, FuncY); } // Variables with the same type and linkage match. if (VarDecl *VarX = dyn_cast(X)) { VarDecl *VarY = cast(Y); if (VarX->getLinkageInternal() == VarY->getLinkageInternal()) { ASTContext &C = VarX->getASTContext(); if (C.hasSameType(VarX->getType(), VarY->getType())) return true; // We can get decls with different types on the redecl chain. Eg. // template struct S { static T Var[]; }; // #1 // template T S::Var[sizeof(T)]; // #2 // Only? happens when completing an incomplete array type. In this case // when comparing #1 and #2 we should go through their element type. const ArrayType *VarXTy = C.getAsArrayType(VarX->getType()); const ArrayType *VarYTy = C.getAsArrayType(VarY->getType()); if (!VarXTy || !VarYTy) return false; if (VarXTy->isIncompleteArrayType() || VarYTy->isIncompleteArrayType()) return C.hasSameType(VarXTy->getElementType(), VarYTy->getElementType()); } return false; } // Namespaces with the same name and inlinedness match. if (NamespaceDecl *NamespaceX = dyn_cast(X)) { NamespaceDecl *NamespaceY = cast(Y); return NamespaceX->isInline() == NamespaceY->isInline(); } // Identical template names and kinds match if their template parameter lists // and patterns match. if (TemplateDecl *TemplateX = dyn_cast(X)) { TemplateDecl *TemplateY = cast(Y); return isSameEntity(TemplateX->getTemplatedDecl(), TemplateY->getTemplatedDecl()) && isSameTemplateParameterList(TemplateX->getTemplateParameters(), TemplateY->getTemplateParameters()); } // Fields with the same name and the same type match. if (FieldDecl *FDX = dyn_cast(X)) { FieldDecl *FDY = cast(Y); // FIXME: Also check the bitwidth is odr-equivalent, if any. return X->getASTContext().hasSameType(FDX->getType(), FDY->getType()); } // Indirect fields with the same target field match. if (auto *IFDX = dyn_cast(X)) { auto *IFDY = cast(Y); return IFDX->getAnonField()->getCanonicalDecl() == IFDY->getAnonField()->getCanonicalDecl(); } // Enumerators with the same name match. if (isa(X)) // FIXME: Also check the value is odr-equivalent. return true; // Using shadow declarations with the same target match. if (UsingShadowDecl *USX = dyn_cast(X)) { UsingShadowDecl *USY = cast(Y); return USX->getTargetDecl() == USY->getTargetDecl(); } // Using declarations with the same qualifier match. 
(We already know that // the name matches.) if (auto *UX = dyn_cast(X)) { auto *UY = cast(Y); return isSameQualifier(UX->getQualifier(), UY->getQualifier()) && UX->hasTypename() == UY->hasTypename() && UX->isAccessDeclaration() == UY->isAccessDeclaration(); } if (auto *UX = dyn_cast(X)) { auto *UY = cast(Y); return isSameQualifier(UX->getQualifier(), UY->getQualifier()) && UX->isAccessDeclaration() == UY->isAccessDeclaration(); } if (auto *UX = dyn_cast(X)) return isSameQualifier( UX->getQualifier(), cast(Y)->getQualifier()); // Namespace alias definitions with the same target match. if (auto *NAX = dyn_cast(X)) { auto *NAY = cast(Y); return NAX->getNamespace()->Equals(NAY->getNamespace()); } return false; } /// Find the context in which we should search for previous declarations when /// looking for declarations to merge. DeclContext *ASTDeclReader::getPrimaryContextForMerging(ASTReader &Reader, DeclContext *DC) { if (NamespaceDecl *ND = dyn_cast(DC)) return ND->getOriginalNamespace(); if (CXXRecordDecl *RD = dyn_cast(DC)) { // Try to dig out the definition. auto *DD = RD->DefinitionData; if (!DD) DD = RD->getCanonicalDecl()->DefinitionData; // If there's no definition yet, then DC's definition is added by an update // record, but we've not yet loaded that update record. In this case, we // commit to DC being the canonical definition now, and will fix this when // we load the update record. if (!DD) { DD = new (Reader.getContext()) struct CXXRecordDecl::DefinitionData(RD); RD->IsCompleteDefinition = true; RD->DefinitionData = DD; RD->getCanonicalDecl()->DefinitionData = DD; // Track that we did this horrible thing so that we can fix it later. Reader.PendingFakeDefinitionData.insert( std::make_pair(DD, ASTReader::PendingFakeDefinitionKind::Fake)); } return DD->Definition; } if (EnumDecl *ED = dyn_cast(DC)) return ED->getASTContext().getLangOpts().CPlusPlus? ED->getDefinition() : nullptr; // We can see the TU here only if we have no Sema object. In that case, // there's no TU scope to look in, so using the DC alone is sufficient. if (auto *TU = dyn_cast(DC)) return TU; return nullptr; } ASTDeclReader::FindExistingResult::~FindExistingResult() { // Record that we had a typedef name for linkage whether or not we merge // with that declaration. if (TypedefNameForLinkage) { DeclContext *DC = New->getDeclContext()->getRedeclContext(); Reader.ImportedTypedefNamesForLinkage.insert( std::make_pair(std::make_pair(DC, TypedefNameForLinkage), New)); return; } if (!AddResult || Existing) return; DeclarationName Name = New->getDeclName(); DeclContext *DC = New->getDeclContext()->getRedeclContext(); if (needsAnonymousDeclarationNumber(New)) { setAnonymousDeclForMerging(Reader, New->getLexicalDeclContext(), AnonymousDeclNumber, New); } else if (DC->isTranslationUnit() && !Reader.getContext().getLangOpts().CPlusPlus) { if (Reader.getIdResolver().tryAddTopLevelDecl(New, Name)) Reader.PendingFakeLookupResults[Name.getAsIdentifierInfo()] .push_back(New); } else if (DeclContext *MergeDC = getPrimaryContextForMerging(Reader, DC)) { // Add the declaration to its redeclaration context so later merging // lookups will find it. MergeDC->makeDeclVisibleInContextImpl(New, /*Internal*/true); } } /// Find the declaration that should be merged into, given the declaration found /// by name lookup. If we're merging an anonymous declaration within a typedef, /// we need a matching typedef, and we merge with the type inside it. 
static NamedDecl *getDeclForMerging(NamedDecl *Found, bool IsTypedefNameForLinkage) { if (!IsTypedefNameForLinkage) return Found; // If we found a typedef declaration that gives a name to some other // declaration, then we want that inner declaration. Declarations from // AST files are handled via ImportedTypedefNamesForLinkage. if (Found->isFromASTFile()) return nullptr; if (auto *TND = dyn_cast(Found)) return TND->getAnonDeclWithTypedefName(/*AnyRedecl*/true); return nullptr; } NamedDecl *ASTDeclReader::getAnonymousDeclForMerging(ASTReader &Reader, DeclContext *DC, unsigned Index) { // If the lexical context has been merged, look into the now-canonical // definition. if (auto *Merged = Reader.MergedDeclContexts.lookup(DC)) DC = Merged; // If we've seen this before, return the canonical declaration. auto &Previous = Reader.AnonymousDeclarationsForMerging[DC]; if (Index < Previous.size() && Previous[Index]) return Previous[Index]; // If this is the first time, but we have parsed a declaration of the context, // build the anonymous declaration list from the parsed declaration. if (!cast(DC)->isFromASTFile()) { numberAnonymousDeclsWithin(DC, [&](NamedDecl *ND, unsigned Number) { if (Previous.size() == Number) Previous.push_back(cast(ND->getCanonicalDecl())); else Previous[Number] = cast(ND->getCanonicalDecl()); }); } return Index < Previous.size() ? Previous[Index] : nullptr; } void ASTDeclReader::setAnonymousDeclForMerging(ASTReader &Reader, DeclContext *DC, unsigned Index, NamedDecl *D) { if (auto *Merged = Reader.MergedDeclContexts.lookup(DC)) DC = Merged; auto &Previous = Reader.AnonymousDeclarationsForMerging[DC]; if (Index >= Previous.size()) Previous.resize(Index + 1); if (!Previous[Index]) Previous[Index] = D; } ASTDeclReader::FindExistingResult ASTDeclReader::findExisting(NamedDecl *D) { DeclarationName Name = TypedefNameForLinkage ? TypedefNameForLinkage : D->getDeclName(); if (!Name && !needsAnonymousDeclarationNumber(D)) { // Don't bother trying to find unnamed declarations that are in // unmergeable contexts. FindExistingResult Result(Reader, D, /*Existing=*/nullptr, AnonymousDeclNumber, TypedefNameForLinkage); Result.suppress(); return Result; } DeclContext *DC = D->getDeclContext()->getRedeclContext(); if (TypedefNameForLinkage) { auto It = Reader.ImportedTypedefNamesForLinkage.find( std::make_pair(DC, TypedefNameForLinkage)); if (It != Reader.ImportedTypedefNamesForLinkage.end()) if (isSameEntity(It->second, D)) return FindExistingResult(Reader, D, It->second, AnonymousDeclNumber, TypedefNameForLinkage); // Go on to check in other places in case an existing typedef name // was not imported. } if (needsAnonymousDeclarationNumber(D)) { // This is an anonymous declaration that we may need to merge. Look it up // in its context by number. if (auto *Existing = getAnonymousDeclForMerging( Reader, D->getLexicalDeclContext(), AnonymousDeclNumber)) if (isSameEntity(Existing, D)) return FindExistingResult(Reader, D, Existing, AnonymousDeclNumber, TypedefNameForLinkage); } else if (DC->isTranslationUnit() && !Reader.getContext().getLangOpts().CPlusPlus) { IdentifierResolver &IdResolver = Reader.getIdResolver(); // Temporarily consider the identifier to be up-to-date. We don't want to // cause additional lookups here. 
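    // The RAII helper below clears the identifier's out-of-date bit for the
    // duration of the scan and restores the previous state on destruction.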
class UpToDateIdentifierRAII { IdentifierInfo *II; bool WasOutToDate; public: explicit UpToDateIdentifierRAII(IdentifierInfo *II) : II(II), WasOutToDate(false) { if (II) { WasOutToDate = II->isOutOfDate(); if (WasOutToDate) II->setOutOfDate(false); } } ~UpToDateIdentifierRAII() { if (WasOutToDate) II->setOutOfDate(true); } } UpToDate(Name.getAsIdentifierInfo()); for (IdentifierResolver::iterator I = IdResolver.begin(Name), IEnd = IdResolver.end(); I != IEnd; ++I) { if (NamedDecl *Existing = getDeclForMerging(*I, TypedefNameForLinkage)) if (isSameEntity(Existing, D)) return FindExistingResult(Reader, D, Existing, AnonymousDeclNumber, TypedefNameForLinkage); } } else if (DeclContext *MergeDC = getPrimaryContextForMerging(Reader, DC)) { DeclContext::lookup_result R = MergeDC->noload_lookup(Name); for (DeclContext::lookup_iterator I = R.begin(), E = R.end(); I != E; ++I) { if (NamedDecl *Existing = getDeclForMerging(*I, TypedefNameForLinkage)) if (isSameEntity(Existing, D)) return FindExistingResult(Reader, D, Existing, AnonymousDeclNumber, TypedefNameForLinkage); } } else { // Not in a mergeable context. return FindExistingResult(Reader); } // If this declaration is from a merged context, make a note that we need to // check that the canonical definition of that context contains the decl. // // FIXME: We should do something similar if we merge two definitions of the // same template specialization into the same CXXRecordDecl. auto MergedDCIt = Reader.MergedDeclContexts.find(D->getLexicalDeclContext()); if (MergedDCIt != Reader.MergedDeclContexts.end() && MergedDCIt->second == D->getDeclContext()) Reader.PendingOdrMergeChecks.push_back(D); return FindExistingResult(Reader, D, /*Existing=*/nullptr, AnonymousDeclNumber, TypedefNameForLinkage); } template Decl *ASTDeclReader::getMostRecentDeclImpl(Redeclarable *D) { return D->RedeclLink.getLatestNotUpdated(); } Decl *ASTDeclReader::getMostRecentDeclImpl(...) { llvm_unreachable("getMostRecentDecl on non-redeclarable declaration"); } Decl *ASTDeclReader::getMostRecentDecl(Decl *D) { assert(D); switch (D->getKind()) { #define ABSTRACT_DECL(TYPE) #define DECL(TYPE, BASE) \ case Decl::TYPE: \ return getMostRecentDeclImpl(cast(D)); #include "clang/AST/DeclNodes.inc" } llvm_unreachable("unknown decl kind"); } Decl *ASTReader::getMostRecentExistingDecl(Decl *D) { return ASTDeclReader::getMostRecentDecl(D->getCanonicalDecl()); } template void ASTDeclReader::attachPreviousDeclImpl(ASTReader &Reader, Redeclarable *D, Decl *Previous, Decl *Canon) { D->RedeclLink.setPrevious(cast(Previous)); D->First = cast(Previous)->First; } namespace clang { template<> void ASTDeclReader::attachPreviousDeclImpl(ASTReader &Reader, Redeclarable *D, Decl *Previous, Decl *Canon) { VarDecl *VD = static_cast(D); VarDecl *PrevVD = cast(Previous); D->RedeclLink.setPrevious(PrevVD); D->First = PrevVD->First; // We should keep at most one definition on the chain. // FIXME: Cache the definition once we've found it. Building a chain with // N definitions currently takes O(N^2) time here. 
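  // If this declaration is itself a definition and an earlier definition is
  // already on the chain, merge their visibility and demote this one back to a
  // declaration (see demoteThisDefinitionToDeclaration below).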
if (VD->isThisDeclarationADefinition() == VarDecl::Definition) { for (VarDecl *CurD = PrevVD; CurD; CurD = CurD->getPreviousDecl()) { if (CurD->isThisDeclarationADefinition() == VarDecl::Definition) { Reader.mergeDefinitionVisibility(CurD, VD); VD->demoteThisDefinitionToDeclaration(); break; } } } } template<> void ASTDeclReader::attachPreviousDeclImpl(ASTReader &Reader, Redeclarable *D, Decl *Previous, Decl *Canon) { FunctionDecl *FD = static_cast(D); FunctionDecl *PrevFD = cast(Previous); FD->RedeclLink.setPrevious(PrevFD); FD->First = PrevFD->First; // If the previous declaration is an inline function declaration, then this // declaration is too. if (PrevFD->IsInline != FD->IsInline) { // FIXME: [dcl.fct.spec]p4: // If a function with external linkage is declared inline in one // translation unit, it shall be declared inline in all translation // units in which it appears. // // Be careful of this case: // // module A: // template struct X { void f(); }; // template inline void X::f() {} // // module B instantiates the declaration of X::f // module C instantiates the definition of X::f // // If module B and C are merged, we do not have a violation of this rule. FD->IsInline = true; } // If we need to propagate an exception specification along the redecl // chain, make a note of that so that we can do so later. auto *FPT = FD->getType()->getAs(); auto *PrevFPT = PrevFD->getType()->getAs(); if (FPT && PrevFPT) { bool IsUnresolved = isUnresolvedExceptionSpec(FPT->getExceptionSpecType()); bool WasUnresolved = isUnresolvedExceptionSpec(PrevFPT->getExceptionSpecType()); if (IsUnresolved != WasUnresolved) Reader.PendingExceptionSpecUpdates.insert( std::make_pair(Canon, IsUnresolved ? PrevFD : FD)); } } } // end namespace clang void ASTDeclReader::attachPreviousDeclImpl(ASTReader &Reader, ...) { llvm_unreachable("attachPreviousDecl on non-redeclarable declaration"); } /// Inherit the default template argument from \p From to \p To. Returns /// \c false if there is no default template for \p From. template static bool inheritDefaultTemplateArgument(ASTContext &Context, ParmDecl *From, Decl *ToD) { auto *To = cast(ToD); if (!From->hasDefaultArgument()) return false; To->setInheritedDefaultArgument(Context, From); return true; } static void inheritDefaultTemplateArguments(ASTContext &Context, TemplateDecl *From, TemplateDecl *To) { auto *FromTP = From->getTemplateParameters(); auto *ToTP = To->getTemplateParameters(); assert(FromTP->size() == ToTP->size() && "merged mismatched templates?"); for (unsigned I = 0, N = FromTP->size(); I != N; ++I) { NamedDecl *FromParam = FromTP->getParam(N - I - 1); if (FromParam->isParameterPack()) continue; NamedDecl *ToParam = ToTP->getParam(N - I - 1); if (auto *FTTP = dyn_cast(FromParam)) { if (!inheritDefaultTemplateArgument(Context, FTTP, ToParam)) break; } else if (auto *FNTTP = dyn_cast(FromParam)) { if (!inheritDefaultTemplateArgument(Context, FNTTP, ToParam)) break; } else { if (!inheritDefaultTemplateArgument( Context, cast(FromParam), ToParam)) break; } } } void ASTDeclReader::attachPreviousDecl(ASTReader &Reader, Decl *D, Decl *Previous, Decl *Canon) { assert(D && Previous); switch (D->getKind()) { #define ABSTRACT_DECL(TYPE) #define DECL(TYPE, BASE) \ case Decl::TYPE: \ attachPreviousDeclImpl(Reader, cast(D), Previous, Canon); \ break; #include "clang/AST/DeclNodes.inc" } // If the declaration was visible in one module, a redeclaration of it in // another module remains visible even if it wouldn't be visible by itself. 
  //
  // FIXME: In this case, the declaration should only be visible if a module
  // that makes it visible has been imported.
  D->IdentifierNamespace |=
      Previous->IdentifierNamespace &
      (Decl::IDNS_Ordinary | Decl::IDNS_Tag | Decl::IDNS_Type);

  // If the declaration declares a template, it may inherit default arguments
  // from the previous declaration.
  if (TemplateDecl *TD = dyn_cast<TemplateDecl>(D))
    inheritDefaultTemplateArguments(Reader.getContext(),
                                    cast<TemplateDecl>(Previous), TD);
}

template<typename DeclT>
void ASTDeclReader::attachLatestDeclImpl(Redeclarable<DeclT> *D, Decl *Latest) {
  D->RedeclLink.setLatest(cast<DeclT>(Latest));
}
void ASTDeclReader::attachLatestDeclImpl(...) {
  llvm_unreachable("attachLatestDecl on non-redeclarable declaration");
}

void ASTDeclReader::attachLatestDecl(Decl *D, Decl *Latest) {
  assert(D && Latest);

  switch (D->getKind()) {
#define ABSTRACT_DECL(TYPE)
#define DECL(TYPE, BASE)                               \
  case Decl::TYPE:                                     \
    attachLatestDeclImpl(cast<TYPE##Decl>(D), Latest); \
    break;
#include "clang/AST/DeclNodes.inc"
  }
}

template<typename DeclT>
void ASTDeclReader::markIncompleteDeclChainImpl(Redeclarable<DeclT> *D) {
  D->RedeclLink.markIncomplete();
}
void ASTDeclReader::markIncompleteDeclChainImpl(...) {
  llvm_unreachable("markIncompleteDeclChain on non-redeclarable declaration");
}

void ASTReader::markIncompleteDeclChain(Decl *D) {
  switch (D->getKind()) {
#define ABSTRACT_DECL(TYPE)
#define DECL(TYPE, BASE)                                             \
  case Decl::TYPE:                                                   \
    ASTDeclReader::markIncompleteDeclChainImpl(cast<TYPE##Decl>(D)); \
    break;
#include "clang/AST/DeclNodes.inc"
  }
}

/// \brief Read the declaration at the given offset from the AST file.
Decl *ASTReader::ReadDeclRecord(DeclID ID) {
  unsigned Index = ID - NUM_PREDEF_DECL_IDS;
  SourceLocation DeclLoc;
  RecordLocation Loc = DeclCursorForID(ID, DeclLoc);
  llvm::BitstreamCursor &DeclsCursor = Loc.F->DeclsCursor;
  // Keep track of where we are in the stream, then jump back there
  // after reading this declaration.
  SavedStreamPosition SavedPosition(DeclsCursor);

  ReadingKindTracker ReadingKind(Read_Decl, *this);

  // Note that we are loading a declaration record.
Deserializing ADecl(this); DeclsCursor.JumpToBit(Loc.Offset); ASTRecordReader Record(*this, *Loc.F); ASTDeclReader Reader(*this, Record, Loc, ID, DeclLoc); unsigned Code = DeclsCursor.ReadCode(); ASTContext &Context = getContext(); Decl *D = nullptr; switch ((DeclCode)Record.readRecord(DeclsCursor, Code)) { case DECL_CONTEXT_LEXICAL: case DECL_CONTEXT_VISIBLE: llvm_unreachable("Record cannot be de-serialized with ReadDeclRecord"); case DECL_TYPEDEF: D = TypedefDecl::CreateDeserialized(Context, ID); break; case DECL_TYPEALIAS: D = TypeAliasDecl::CreateDeserialized(Context, ID); break; case DECL_ENUM: D = EnumDecl::CreateDeserialized(Context, ID); break; case DECL_RECORD: D = RecordDecl::CreateDeserialized(Context, ID); break; case DECL_ENUM_CONSTANT: D = EnumConstantDecl::CreateDeserialized(Context, ID); break; case DECL_FUNCTION: D = FunctionDecl::CreateDeserialized(Context, ID); break; case DECL_LINKAGE_SPEC: D = LinkageSpecDecl::CreateDeserialized(Context, ID); break; case DECL_EXPORT: D = ExportDecl::CreateDeserialized(Context, ID); break; case DECL_LABEL: D = LabelDecl::CreateDeserialized(Context, ID); break; case DECL_NAMESPACE: D = NamespaceDecl::CreateDeserialized(Context, ID); break; case DECL_NAMESPACE_ALIAS: D = NamespaceAliasDecl::CreateDeserialized(Context, ID); break; case DECL_USING: D = UsingDecl::CreateDeserialized(Context, ID); break; case DECL_USING_PACK: D = UsingPackDecl::CreateDeserialized(Context, ID, Record.readInt()); break; case DECL_USING_SHADOW: D = UsingShadowDecl::CreateDeserialized(Context, ID); break; case DECL_CONSTRUCTOR_USING_SHADOW: D = ConstructorUsingShadowDecl::CreateDeserialized(Context, ID); break; case DECL_USING_DIRECTIVE: D = UsingDirectiveDecl::CreateDeserialized(Context, ID); break; case DECL_UNRESOLVED_USING_VALUE: D = UnresolvedUsingValueDecl::CreateDeserialized(Context, ID); break; case DECL_UNRESOLVED_USING_TYPENAME: D = UnresolvedUsingTypenameDecl::CreateDeserialized(Context, ID); break; case DECL_CXX_RECORD: D = CXXRecordDecl::CreateDeserialized(Context, ID); break; case DECL_CXX_DEDUCTION_GUIDE: D = CXXDeductionGuideDecl::CreateDeserialized(Context, ID); break; case DECL_CXX_METHOD: D = CXXMethodDecl::CreateDeserialized(Context, ID); break; case DECL_CXX_CONSTRUCTOR: D = CXXConstructorDecl::CreateDeserialized(Context, ID, false); break; case DECL_CXX_INHERITED_CONSTRUCTOR: D = CXXConstructorDecl::CreateDeserialized(Context, ID, true); break; case DECL_CXX_DESTRUCTOR: D = CXXDestructorDecl::CreateDeserialized(Context, ID); break; case DECL_CXX_CONVERSION: D = CXXConversionDecl::CreateDeserialized(Context, ID); break; case DECL_ACCESS_SPEC: D = AccessSpecDecl::CreateDeserialized(Context, ID); break; case DECL_FRIEND: D = FriendDecl::CreateDeserialized(Context, ID, Record.readInt()); break; case DECL_FRIEND_TEMPLATE: D = FriendTemplateDecl::CreateDeserialized(Context, ID); break; case DECL_CLASS_TEMPLATE: D = ClassTemplateDecl::CreateDeserialized(Context, ID); break; case DECL_CLASS_TEMPLATE_SPECIALIZATION: D = ClassTemplateSpecializationDecl::CreateDeserialized(Context, ID); break; case DECL_CLASS_TEMPLATE_PARTIAL_SPECIALIZATION: D = ClassTemplatePartialSpecializationDecl::CreateDeserialized(Context, ID); break; case DECL_VAR_TEMPLATE: D = VarTemplateDecl::CreateDeserialized(Context, ID); break; case DECL_VAR_TEMPLATE_SPECIALIZATION: D = VarTemplateSpecializationDecl::CreateDeserialized(Context, ID); break; case DECL_VAR_TEMPLATE_PARTIAL_SPECIALIZATION: D = VarTemplatePartialSpecializationDecl::CreateDeserialized(Context, ID); break; case 
DECL_CLASS_SCOPE_FUNCTION_SPECIALIZATION: D = ClassScopeFunctionSpecializationDecl::CreateDeserialized(Context, ID); break; case DECL_FUNCTION_TEMPLATE: D = FunctionTemplateDecl::CreateDeserialized(Context, ID); break; case DECL_TEMPLATE_TYPE_PARM: D = TemplateTypeParmDecl::CreateDeserialized(Context, ID); break; case DECL_NON_TYPE_TEMPLATE_PARM: D = NonTypeTemplateParmDecl::CreateDeserialized(Context, ID); break; case DECL_EXPANDED_NON_TYPE_TEMPLATE_PARM_PACK: D = NonTypeTemplateParmDecl::CreateDeserialized(Context, ID, Record.readInt()); break; case DECL_TEMPLATE_TEMPLATE_PARM: D = TemplateTemplateParmDecl::CreateDeserialized(Context, ID); break; case DECL_EXPANDED_TEMPLATE_TEMPLATE_PARM_PACK: D = TemplateTemplateParmDecl::CreateDeserialized(Context, ID, Record.readInt()); break; case DECL_TYPE_ALIAS_TEMPLATE: D = TypeAliasTemplateDecl::CreateDeserialized(Context, ID); break; case DECL_STATIC_ASSERT: D = StaticAssertDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_METHOD: D = ObjCMethodDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_INTERFACE: D = ObjCInterfaceDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_IVAR: D = ObjCIvarDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_PROTOCOL: D = ObjCProtocolDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_AT_DEFS_FIELD: D = ObjCAtDefsFieldDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_CATEGORY: D = ObjCCategoryDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_CATEGORY_IMPL: D = ObjCCategoryImplDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_IMPLEMENTATION: D = ObjCImplementationDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_COMPATIBLE_ALIAS: D = ObjCCompatibleAliasDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_PROPERTY: D = ObjCPropertyDecl::CreateDeserialized(Context, ID); break; case DECL_OBJC_PROPERTY_IMPL: D = ObjCPropertyImplDecl::CreateDeserialized(Context, ID); break; case DECL_FIELD: D = FieldDecl::CreateDeserialized(Context, ID); break; case DECL_INDIRECTFIELD: D = IndirectFieldDecl::CreateDeserialized(Context, ID); break; case DECL_VAR: D = VarDecl::CreateDeserialized(Context, ID); break; case DECL_IMPLICIT_PARAM: D = ImplicitParamDecl::CreateDeserialized(Context, ID); break; case DECL_PARM_VAR: D = ParmVarDecl::CreateDeserialized(Context, ID); break; case DECL_DECOMPOSITION: D = DecompositionDecl::CreateDeserialized(Context, ID, Record.readInt()); break; case DECL_BINDING: D = BindingDecl::CreateDeserialized(Context, ID); break; case DECL_FILE_SCOPE_ASM: D = FileScopeAsmDecl::CreateDeserialized(Context, ID); break; case DECL_BLOCK: D = BlockDecl::CreateDeserialized(Context, ID); break; case DECL_MS_PROPERTY: D = MSPropertyDecl::CreateDeserialized(Context, ID); break; case DECL_CAPTURED: D = CapturedDecl::CreateDeserialized(Context, ID, Record.readInt()); break; case DECL_CXX_BASE_SPECIFIERS: Error("attempt to read a C++ base-specifier record as a declaration"); return nullptr; case DECL_CXX_CTOR_INITIALIZERS: Error("attempt to read a C++ ctor initializer record as a declaration"); return nullptr; case DECL_IMPORT: // Note: last entry of the ImportDecl record is the number of stored source // locations. 
    D = ImportDecl::CreateDeserialized(Context, ID, Record.back());
    break;
  case DECL_OMP_THREADPRIVATE:
    D = OMPThreadPrivateDecl::CreateDeserialized(Context, ID, Record.readInt());
    break;
  case DECL_OMP_DECLARE_REDUCTION:
    D = OMPDeclareReductionDecl::CreateDeserialized(Context, ID);
    break;
  case DECL_OMP_CAPTUREDEXPR:
    D = OMPCapturedExprDecl::CreateDeserialized(Context, ID);
    break;
  case DECL_PRAGMA_COMMENT:
    D = PragmaCommentDecl::CreateDeserialized(Context, ID, Record.readInt());
    break;
  case DECL_PRAGMA_DETECT_MISMATCH:
    D = PragmaDetectMismatchDecl::CreateDeserialized(Context, ID,
                                                     Record.readInt());
    break;
  case DECL_EMPTY:
    D = EmptyDecl::CreateDeserialized(Context, ID);
    break;
  case DECL_OBJC_TYPE_PARAM:
    D = ObjCTypeParamDecl::CreateDeserialized(Context, ID);
    break;
  }

  assert(D && "Unknown declaration reading AST file");

  LoadedDecl(Index, D);
  // Set the DeclContext before doing any deserialization, to make sure internal
  // calls to Decl::getASTContext() by Decl's methods will find the
  // TranslationUnitDecl without crashing.
  D->setDeclContext(Context.getTranslationUnitDecl());
  Reader.Visit(D);

  // If this declaration is also a declaration context, get the
  // offsets for its tables of lexical and visible declarations.
  if (DeclContext *DC = dyn_cast<DeclContext>(D)) {
    std::pair<uint64_t, uint64_t> Offsets = Reader.VisitDeclContext(DC);
    if (Offsets.first &&
        ReadLexicalDeclContextStorage(*Loc.F, DeclsCursor, Offsets.first, DC))
      return nullptr;
    if (Offsets.second &&
        ReadVisibleDeclContextStorage(*Loc.F, DeclsCursor, Offsets.second, ID))
      return nullptr;
  }
  assert(Record.getIdx() == Record.size());

  // Load any relevant update records.
  PendingUpdateRecords.push_back(
      PendingUpdateRecord(ID, D, /*JustLoaded=*/true));

  // Load the categories after recursive loading is finished.
  if (ObjCInterfaceDecl *Class = dyn_cast<ObjCInterfaceDecl>(D))
    // If we already have a definition when deserializing the ObjCInterfaceDecl,
    // we put the Decl in PendingDefinitions so we can pull the categories here.
    if (Class->isThisDeclarationADefinition() ||
        PendingDefinitions.count(Class))
      loadObjCCategories(ID, Class);

  // If we have deserialized a declaration that has a definition the
  // AST consumer might need to know about, queue it.
  // We don't pass it to the consumer immediately because we may be in recursive
  // loading, and some declarations may still be initializing.
  PotentiallyInterestingDecls.push_back(
      InterestingDecl(D, Reader.hasPendingBody()));

  return D;
}

void ASTReader::PassInterestingDeclsToConsumer() {
  assert(Consumer);

  if (PassingDeclsToConsumer)
    return;

  // Guard variable to avoid recursively redoing the process of passing
  // decls to consumer.
  SaveAndRestore<bool> GuardPassingDeclsToConsumer(PassingDeclsToConsumer,
                                                   true);

  // Ensure that we've loaded all potentially-interesting declarations
  // that need to be eagerly loaded.
  for (auto ID : EagerlyDeserializedDecls)
    GetDecl(ID);
  EagerlyDeserializedDecls.clear();

  while (!PotentiallyInterestingDecls.empty()) {
    InterestingDecl D = PotentiallyInterestingDecls.front();
    PotentiallyInterestingDecls.pop_front();
    if (isConsumerInterestedIn(getContext(), D.getDecl(), D.hasPendingBody()))
      PassInterestingDeclToConsumer(D.getDecl());
  }
}

void ASTReader::loadDeclUpdateRecords(PendingUpdateRecord &Record) {
  // The declaration may have been modified by files later in the chain.
  // If this is the case, read the record containing the updates from each file
  // and pass it to ASTDeclReader to make the modifications.
serialization::GlobalDeclID ID = Record.ID; Decl *D = Record.D; ProcessingUpdatesRAIIObj ProcessingUpdates(*this); DeclUpdateOffsetsMap::iterator UpdI = DeclUpdateOffsets.find(ID); llvm::SmallVector PendingLazySpecializationIDs; if (UpdI != DeclUpdateOffsets.end()) { auto UpdateOffsets = std::move(UpdI->second); DeclUpdateOffsets.erase(UpdI); // Check if this decl was interesting to the consumer. If we just loaded // the declaration, then we know it was interesting and we skip the call // to isConsumerInterestedIn because it is unsafe to call in the // current ASTReader state. bool WasInteresting = Record.JustLoaded || isConsumerInterestedIn(getContext(), D, false); for (auto &FileAndOffset : UpdateOffsets) { ModuleFile *F = FileAndOffset.first; uint64_t Offset = FileAndOffset.second; llvm::BitstreamCursor &Cursor = F->DeclsCursor; SavedStreamPosition SavedPosition(Cursor); Cursor.JumpToBit(Offset); unsigned Code = Cursor.ReadCode(); ASTRecordReader Record(*this, *F); unsigned RecCode = Record.readRecord(Cursor, Code); (void)RecCode; assert(RecCode == DECL_UPDATES && "Expected DECL_UPDATES record!"); ASTDeclReader Reader(*this, Record, RecordLocation(F, Offset), ID, SourceLocation()); Reader.UpdateDecl(D, PendingLazySpecializationIDs); // We might have made this declaration interesting. If so, remember that // we need to hand it off to the consumer. if (!WasInteresting && isConsumerInterestedIn(getContext(), D, Reader.hasPendingBody())) { PotentiallyInterestingDecls.push_back( InterestingDecl(D, Reader.hasPendingBody())); WasInteresting = true; } } } // Add the lazy specializations to the template. assert((PendingLazySpecializationIDs.empty() || isa(D) || isa(D) || isa(D)) && "Must not have pending specializations"); if (auto *CTD = dyn_cast(D)) ASTDeclReader::AddLazySpecializations(CTD, PendingLazySpecializationIDs); else if (auto *FTD = dyn_cast(D)) ASTDeclReader::AddLazySpecializations(FTD, PendingLazySpecializationIDs); else if (auto *VTD = dyn_cast(D)) ASTDeclReader::AddLazySpecializations(VTD, PendingLazySpecializationIDs); PendingLazySpecializationIDs.clear(); // Load the pending visible updates for this decl context, if it has any. auto I = PendingVisibleUpdates.find(ID); if (I != PendingVisibleUpdates.end()) { auto VisibleUpdates = std::move(I->second); PendingVisibleUpdates.erase(I); auto *DC = cast(D)->getPrimaryContext(); for (const PendingVisibleUpdate &Update : VisibleUpdates) Lookups[DC].Table.add( Update.Mod, Update.Data, reader::ASTDeclContextNameLookupTrait(*this, *Update.Mod)); DC->setHasExternalVisibleStorage(true); } } void ASTReader::loadPendingDeclChain(Decl *FirstLocal, uint64_t LocalOffset) { // Attach FirstLocal to the end of the decl chain. Decl *CanonDecl = FirstLocal->getCanonicalDecl(); if (FirstLocal != CanonDecl) { Decl *PrevMostRecent = ASTDeclReader::getMostRecentDecl(CanonDecl); ASTDeclReader::attachPreviousDecl( *this, FirstLocal, PrevMostRecent ? PrevMostRecent : CanonDecl, CanonDecl); } if (!LocalOffset) { ASTDeclReader::attachLatestDecl(CanonDecl, FirstLocal); return; } // Load the list of other redeclarations from this module file. 
ModuleFile *M = getOwningModuleFile(FirstLocal); assert(M && "imported decl from no module file"); llvm::BitstreamCursor &Cursor = M->DeclsCursor; SavedStreamPosition SavedPosition(Cursor); Cursor.JumpToBit(LocalOffset); RecordData Record; unsigned Code = Cursor.ReadCode(); unsigned RecCode = Cursor.readRecord(Code, Record); (void)RecCode; assert(RecCode == LOCAL_REDECLARATIONS && "expected LOCAL_REDECLARATIONS record!"); // FIXME: We have several different dispatches on decl kind here; maybe // we should instead generate one loop per kind and dispatch up-front? Decl *MostRecent = FirstLocal; for (unsigned I = 0, N = Record.size(); I != N; ++I) { auto *D = GetLocalDecl(*M, Record[N - I - 1]); ASTDeclReader::attachPreviousDecl(*this, D, MostRecent, CanonDecl); MostRecent = D; } ASTDeclReader::attachLatestDecl(CanonDecl, MostRecent); } namespace { /// \brief Given an ObjC interface, goes through the modules and links to the /// interface all the categories for it. class ObjCCategoriesVisitor { ASTReader &Reader; ObjCInterfaceDecl *Interface; llvm::SmallPtrSetImpl &Deserialized; ObjCCategoryDecl *Tail; llvm::DenseMap NameCategoryMap; serialization::GlobalDeclID InterfaceID; unsigned PreviousGeneration; void add(ObjCCategoryDecl *Cat) { // Only process each category once. if (!Deserialized.erase(Cat)) return; // Check for duplicate categories. if (Cat->getDeclName()) { ObjCCategoryDecl *&Existing = NameCategoryMap[Cat->getDeclName()]; if (Existing && Reader.getOwningModuleFile(Existing) != Reader.getOwningModuleFile(Cat)) { // FIXME: We should not warn for duplicates in diamond: // // MT // // / \ // // ML MR // // \ / // // MB // // // If there are duplicates in ML/MR, there will be warning when // creating MB *and* when importing MB. We should not warn when // importing. Reader.Diag(Cat->getLocation(), diag::warn_dup_category_def) << Interface->getDeclName() << Cat->getDeclName(); Reader.Diag(Existing->getLocation(), diag::note_previous_definition); } else if (!Existing) { // Record this category. Existing = Cat; } } // Add this category to the end of the chain. if (Tail) ASTDeclReader::setNextObjCCategory(Tail, Cat); else Interface->setCategoryListRaw(Cat); Tail = Cat; } public: ObjCCategoriesVisitor(ASTReader &Reader, ObjCInterfaceDecl *Interface, llvm::SmallPtrSetImpl &Deserialized, serialization::GlobalDeclID InterfaceID, unsigned PreviousGeneration) : Reader(Reader), Interface(Interface), Deserialized(Deserialized), Tail(nullptr), InterfaceID(InterfaceID), PreviousGeneration(PreviousGeneration) { // Populate the name -> category map with the set of known categories. for (auto *Cat : Interface->known_categories()) { if (Cat->getDeclName()) NameCategoryMap[Cat->getDeclName()] = Cat; // Keep track of the tail of the category list. Tail = Cat; } } bool operator()(ModuleFile &M) { // If we've loaded all of the category information we care about from // this module file, we're done. if (M.Generation <= PreviousGeneration) return true; // Map global ID of the definition down to the local ID used in this // module file. If there is no such mapping, we'll find nothing here // (or in any module it imports). DeclID LocalID = Reader.mapGlobalIDToModuleFileGlobalID(M, InterfaceID); if (!LocalID) return true; // Perform a binary search to find the local redeclarations for this // declaration (if any). 
    const ObjCCategoriesInfo Compare = { LocalID, 0 };
    const ObjCCategoriesInfo *Result
      = std::lower_bound(M.ObjCCategoriesMap,
                         M.ObjCCategoriesMap + M.LocalNumObjCCategoriesInMap,
                         Compare);
    if (Result == M.ObjCCategoriesMap + M.LocalNumObjCCategoriesInMap ||
        Result->DefinitionID != LocalID) {
      // We didn't find anything. If the class definition is in this module
      // file, then the module files it depends on cannot have any categories,
      // so suppress further lookup.
      return Reader.isDeclIDFromModule(InterfaceID, M);
    }

    // We found something. Dig out all of the categories.
    unsigned Offset = Result->Offset;
    unsigned N = M.ObjCCategories[Offset];
    M.ObjCCategories[Offset++] = 0; // Don't try to deserialize again
    for (unsigned I = 0; I != N; ++I)
      add(cast_or_null<ObjCCategoryDecl>(
            Reader.GetLocalDecl(M, M.ObjCCategories[Offset++])));
    return true;
  }
};

} // end anonymous namespace

void ASTReader::loadObjCCategories(serialization::GlobalDeclID ID,
                                   ObjCInterfaceDecl *D,
                                   unsigned PreviousGeneration) {
  ObjCCategoriesVisitor Visitor(*this, D, CategoriesDeserialized, ID,
                                PreviousGeneration);
  ModuleMgr.visit(Visitor);
}

template<typename DeclT, typename Fn>
static void forAllLaterRedecls(DeclT *D, Fn F) {
  F(D);

  // Check whether we've already merged D into its redeclaration chain.
  // MostRecent may or may not be nullptr if D has not been merged. If
  // not, walk the merged redecl chain and see if it's there.
  auto *MostRecent = D->getMostRecentDecl();
  bool Found = false;
  for (auto *Redecl = MostRecent; Redecl && !Found;
       Redecl = Redecl->getPreviousDecl())
    Found = (Redecl == D);

  // If this declaration is merged, apply the functor to all later decls.
  if (Found) {
    for (auto *Redecl = MostRecent; Redecl != D;
         Redecl = Redecl->getPreviousDecl())
      F(Redecl);
  }
}

void ASTDeclReader::UpdateDecl(Decl *D,
   llvm::SmallVectorImpl<serialization::DeclID> &PendingLazySpecializationIDs) {
  while (Record.getIdx() < Record.size()) {
    switch ((DeclUpdateKind)Record.readInt()) {
    case UPD_CXX_ADDED_IMPLICIT_MEMBER: {
      auto *RD = cast<CXXRecordDecl>(D);
      // FIXME: If we also have an update record for instantiating the
      // definition of D, we need that to happen before we get here.
      Decl *MD = Record.readDecl();
      assert(MD && "couldn't read decl from update record");
      // FIXME: We should call addHiddenDecl instead, to add the member
      // to its DeclContext.
      RD->addedMember(MD);
      break;
    }

    case UPD_CXX_ADDED_TEMPLATE_SPECIALIZATION:
      // It will be added to the template's lazy specialization set.
      PendingLazySpecializationIDs.push_back(ReadDeclID());
      break;

    case UPD_CXX_ADDED_ANONYMOUS_NAMESPACE: {
      NamespaceDecl *Anon = ReadDeclAs<NamespaceDecl>();

      // Each module has its own anonymous namespace, which is disjoint from
      // any other module's anonymous namespaces, so don't attach the anonymous
      // namespace at all.
      if (!Record.isModule()) {
        if (TranslationUnitDecl *TU = dyn_cast<TranslationUnitDecl>(D))
          TU->setAnonymousNamespace(Anon);
        else
          cast<NamespaceDecl>(D)->setAnonymousNamespace(Anon);
      }
      break;
    }

    case UPD_CXX_INSTANTIATED_STATIC_DATA_MEMBER: {
      VarDecl *VD = cast<VarDecl>(D);
      VD->getMemberSpecializationInfo()->setPointOfInstantiation(
          ReadSourceLocation());
      uint64_t Val = Record.readInt();
      if (Val && !VD->getInit()) {
        VD->setInit(Record.readExpr());
        if (Val > 1) { // IsInitKnownICE = 1, IsInitNotICE = 2, IsInitICE = 3
          EvaluatedStmt *Eval = VD->ensureEvaluatedStmt();
          Eval->CheckedICE = true;
          Eval->IsICE = Val == 3;
        }
      }
      break;
    }

    case UPD_CXX_INSTANTIATED_DEFAULT_ARGUMENT: {
      auto Param = cast<ParmVarDecl>(D);

      // We have to read the default argument regardless of whether we use it
      // so that hypothetical further update records aren't messed up.
      // TODO: Add a function to skip over the next expr record.
      auto DefaultArg = Record.readExpr();

      // Only apply the update if the parameter still has an uninstantiated
      // default argument.
      if (Param->hasUninstantiatedDefaultArg())
        Param->setDefaultArg(DefaultArg);
      break;
    }

    case UPD_CXX_INSTANTIATED_DEFAULT_MEMBER_INITIALIZER: {
      auto FD = cast<FieldDecl>(D);
      auto DefaultInit = Record.readExpr();

      // Only apply the update if the field still has an uninstantiated
      // default member initializer.
      if (FD->hasInClassInitializer() && !FD->getInClassInitializer()) {
        if (DefaultInit)
          FD->setInClassInitializer(DefaultInit);
        else
          // Instantiation failed. We can get here if we serialized an AST for
          // an invalid program.
          FD->removeInClassInitializer();
      }
      break;
    }

    case UPD_CXX_ADDED_FUNCTION_DEFINITION: {
      FunctionDecl *FD = cast<FunctionDecl>(D);
      if (Reader.PendingBodies[FD]) {
        // FIXME: Maybe check for ODR violations.
        // It's safe to stop now because this update record is always last.
        return;
      }

      if (Record.readInt()) {
        // Maintain AST consistency: any later redeclarations of this function
        // are inline if this one is. (We might have merged another declaration
        // into this one.)
        forAllLaterRedecls(FD, [](FunctionDecl *FD) {
          FD->setImplicitlyInline();
        });
      }
      FD->setInnerLocStart(ReadSourceLocation());
      ReadFunctionDefinition(FD);
      assert(Record.getIdx() == Record.size() && "lazy body must be last");
      break;
    }

    case UPD_CXX_INSTANTIATED_CLASS_DEFINITION: {
      auto *RD = cast<CXXRecordDecl>(D);
      auto *OldDD = RD->getCanonicalDecl()->DefinitionData;
      bool HadRealDefinition =
          OldDD && (OldDD->Definition != RD ||
                    !Reader.PendingFakeDefinitionData.count(OldDD));
      ReadCXXRecordDefinition(RD, /*Update*/true);

      // Visible update is handled separately.
      uint64_t LexicalOffset = ReadLocalOffset();
      if (!HadRealDefinition && LexicalOffset) {
        Record.readLexicalDeclContextStorage(LexicalOffset, RD);
        Reader.PendingFakeDefinitionData.erase(OldDD);
      }

      auto TSK = (TemplateSpecializationKind)Record.readInt();
      SourceLocation POI = ReadSourceLocation();
      if (MemberSpecializationInfo *MSInfo =
              RD->getMemberSpecializationInfo()) {
        MSInfo->setTemplateSpecializationKind(TSK);
        MSInfo->setPointOfInstantiation(POI);
      } else {
        ClassTemplateSpecializationDecl *Spec =
            cast<ClassTemplateSpecializationDecl>(RD);
        Spec->setTemplateSpecializationKind(TSK);
        Spec->setPointOfInstantiation(POI);

        if (Record.readInt()) {
          auto PartialSpec =
              ReadDeclAs<ClassTemplatePartialSpecializationDecl>();
          SmallVector<TemplateArgument, 8> TemplArgs;
          Record.readTemplateArgumentList(TemplArgs);
          auto *TemplArgList = TemplateArgumentList::CreateCopy(
              Reader.getContext(), TemplArgs);

          // FIXME: If we already have a partial specialization set,
          // check that it matches.
          if (!Spec->getSpecializedTemplateOrPartial()
                   .is<ClassTemplatePartialSpecializationDecl *>())
            Spec->setInstantiationOf(PartialSpec, TemplArgList);
        }
      }

      RD->setTagKind((TagTypeKind)Record.readInt());
      RD->setLocation(ReadSourceLocation());
      RD->setLocStart(ReadSourceLocation());
      RD->setBraceRange(ReadSourceRange());

      if (Record.readInt()) {
        AttrVec Attrs;
        Record.readAttributes(Attrs);
        // If the declaration already has attributes, we assume that some other
        // AST file already loaded them.
        if (!D->hasAttrs())
          D->setAttrsImpl(Attrs, Reader.getContext());
      }
      break;
    }

    case UPD_CXX_RESOLVED_DTOR_DELETE: {
      // Set the 'operator delete' directly to avoid emitting another update
      // record.
      auto *Del = ReadDeclAs<FunctionDecl>();
      auto *First = cast<CXXDestructorDecl>(D->getCanonicalDecl());
      // FIXME: Check consistency if we have an old and new operator delete.
if (!First->OperatorDelete) First->OperatorDelete = Del; break; } case UPD_CXX_RESOLVED_EXCEPTION_SPEC: { FunctionProtoType::ExceptionSpecInfo ESI; SmallVector ExceptionStorage; Record.readExceptionSpec(ExceptionStorage, ESI); // Update this declaration's exception specification, if needed. auto *FD = cast(D); auto *FPT = FD->getType()->castAs(); // FIXME: If the exception specification is already present, check that it // matches. if (isUnresolvedExceptionSpec(FPT->getExceptionSpecType())) { FD->setType(Reader.getContext().getFunctionType( FPT->getReturnType(), FPT->getParamTypes(), FPT->getExtProtoInfo().withExceptionSpec(ESI))); // When we get to the end of deserializing, see if there are other decls // that we need to propagate this exception specification onto. Reader.PendingExceptionSpecUpdates.insert( std::make_pair(FD->getCanonicalDecl(), FD)); } break; } case UPD_CXX_DEDUCED_RETURN_TYPE: { // FIXME: Also do this when merging redecls. QualType DeducedResultType = Record.readType(); for (auto *Redecl : merged_redecls(D)) { // FIXME: If the return type is already deduced, check that it matches. FunctionDecl *FD = cast(Redecl); Reader.getContext().adjustDeducedFunctionResultType(FD, DeducedResultType); } break; } case UPD_DECL_MARKED_USED: { // Maintain AST consistency: any later redeclarations are used too. D->markUsed(Reader.getContext()); break; } case UPD_MANGLING_NUMBER: Reader.getContext().setManglingNumber(cast(D), Record.readInt()); break; case UPD_STATIC_LOCAL_NUMBER: Reader.getContext().setStaticLocalNumber(cast(D), Record.readInt()); break; case UPD_DECL_MARKED_OPENMP_THREADPRIVATE: D->addAttr(OMPThreadPrivateDeclAttr::CreateImplicit(Reader.getContext(), ReadSourceRange())); break; case UPD_DECL_EXPORTED: { unsigned SubmoduleID = readSubmoduleID(); auto *Exported = cast(D); if (auto *TD = dyn_cast(Exported)) Exported = TD->getDefinition(); Module *Owner = SubmoduleID ? Reader.getSubmodule(SubmoduleID) : nullptr; if (Reader.getContext().getLangOpts().ModulesLocalVisibility) { Reader.getContext().mergeDefinitionIntoModule(cast(Exported), Owner); Reader.PendingMergedDefinitionsToDeduplicate.insert( cast(Exported)); } else if (Owner && Owner->NameVisibility != Module::AllVisible) { // If Owner is made visible at some later point, make this declaration // visible too. Reader.HiddenNamesMap[Owner].push_back(Exported); } else { // The declaration is now visible. Exported->setVisibleDespiteOwningModule(); } break; } case UPD_DECL_MARKED_OPENMP_DECLARETARGET: case UPD_ADDED_ATTR_TO_RECORD: AttrVec Attrs; Record.readAttributes(Attrs); assert(Attrs.size() == 1); D->addAttr(Attrs[0]); break; } } } diff --git a/lib/Serialization/ASTWriter.cpp b/lib/Serialization/ASTWriter.cpp index a875e627bdfb..128e53b91b1d 100644 --- a/lib/Serialization/ASTWriter.cpp +++ b/lib/Serialization/ASTWriter.cpp @@ -1,6293 +1,6296 @@ //===--- ASTWriter.cpp - AST File Writer ------------------------*- C++ -*-===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file defines the ASTWriter class, which writes AST files. 
// //===----------------------------------------------------------------------===// #include "clang/Serialization/ASTWriter.h" #include "ASTCommon.h" #include "ASTReaderInternals.h" #include "MultiOnDiskHashTable.h" #include "clang/AST/ASTContext.h" #include "clang/AST/ASTUnresolvedSet.h" #include "clang/AST/Decl.h" #include "clang/AST/DeclCXX.h" #include "clang/AST/DeclContextInternals.h" #include "clang/AST/DeclFriend.h" #include "clang/AST/DeclTemplate.h" #include "clang/AST/Expr.h" #include "clang/AST/ExprCXX.h" #include "clang/AST/LambdaCapture.h" #include "clang/AST/NestedNameSpecifier.h" #include "clang/AST/RawCommentList.h" #include "clang/AST/TemplateName.h" #include "clang/AST/Type.h" #include "clang/AST/TypeLocVisitor.h" #include "clang/Basic/DiagnosticOptions.h" #include "clang/Basic/FileManager.h" #include "clang/Basic/FileSystemOptions.h" #include "clang/Basic/LLVM.h" #include "clang/Basic/LangOptions.h" #include "clang/Basic/MemoryBufferCache.h" #include "clang/Basic/Module.h" #include "clang/Basic/ObjCRuntime.h" #include "clang/Basic/SourceManager.h" #include "clang/Basic/SourceManagerInternals.h" #include "clang/Basic/TargetInfo.h" #include "clang/Basic/TargetOptions.h" #include "clang/Basic/Version.h" #include "clang/Basic/VersionTuple.h" #include "clang/Lex/HeaderSearch.h" #include "clang/Lex/HeaderSearchOptions.h" #include "clang/Lex/MacroInfo.h" #include "clang/Lex/ModuleMap.h" #include "clang/Lex/PreprocessingRecord.h" #include "clang/Lex/Preprocessor.h" #include "clang/Lex/PreprocessorOptions.h" #include "clang/Lex/Token.h" #include "clang/Sema/IdentifierResolver.h" #include "clang/Sema/ObjCMethodList.h" #include "clang/Sema/Sema.h" #include "clang/Sema/Weak.h" #include "clang/Serialization/ASTReader.h" #include "clang/Serialization/Module.h" #include "clang/Serialization/ModuleFileExtension.h" #include "clang/Serialization/SerializationDiagnostic.h" #include "llvm/ADT/APFloat.h" #include "llvm/ADT/APInt.h" #include "llvm/ADT/Hashing.h" #include "llvm/ADT/IntrusiveRefCntPtr.h" #include "llvm/ADT/Optional.h" #include "llvm/ADT/STLExtras.h" #include "llvm/ADT/SmallSet.h" #include "llvm/ADT/SmallString.h" #include "llvm/ADT/StringExtras.h" #include "llvm/Bitcode/BitCodes.h" #include "llvm/Bitcode/BitstreamWriter.h" #include "llvm/Support/Casting.h" #include "llvm/Support/Compression.h" #include "llvm/Support/EndianStream.h" #include "llvm/Support/Error.h" #include "llvm/Support/ErrorHandling.h" #include "llvm/Support/MemoryBuffer.h" #include "llvm/Support/OnDiskHashTable.h" #include "llvm/Support/Path.h" #include "llvm/Support/Process.h" #include "llvm/Support/SHA1.h" #include "llvm/Support/raw_ostream.h" #include #include #include #include #include #include #include #include #include #include using namespace clang; using namespace clang::serialization; template static StringRef bytes(const std::vector &v) { if (v.empty()) return StringRef(); return StringRef(reinterpret_cast(&v[0]), sizeof(T) * v.size()); } template static StringRef bytes(const SmallVectorImpl &v) { return StringRef(reinterpret_cast(v.data()), sizeof(T) * v.size()); } //===----------------------------------------------------------------------===// // Type serialization //===----------------------------------------------------------------------===// namespace clang { class ASTTypeWriter { ASTWriter &Writer; ASTRecordWriter Record; /// \brief Type code that corresponds to the record generated. TypeCode Code; /// \brief Abbreviation to use for the record, if any. 
unsigned AbbrevToUse; public: ASTTypeWriter(ASTWriter &Writer, ASTWriter::RecordDataImpl &Record) : Writer(Writer), Record(Writer, Record), Code((TypeCode)0), AbbrevToUse(0) { } uint64_t Emit() { return Record.Emit(Code, AbbrevToUse); } void Visit(QualType T) { if (T.hasLocalNonFastQualifiers()) { Qualifiers Qs = T.getLocalQualifiers(); Record.AddTypeRef(T.getLocalUnqualifiedType()); Record.push_back(Qs.getAsOpaqueValue()); Code = TYPE_EXT_QUAL; AbbrevToUse = Writer.TypeExtQualAbbrev; } else { switch (T->getTypeClass()) { // For all of the concrete, non-dependent types, call the // appropriate visitor function. #define TYPE(Class, Base) \ case Type::Class: Visit##Class##Type(cast(T)); break; #define ABSTRACT_TYPE(Class, Base) #include "clang/AST/TypeNodes.def" } } } void VisitArrayType(const ArrayType *T); void VisitFunctionType(const FunctionType *T); void VisitTagType(const TagType *T); #define TYPE(Class, Base) void Visit##Class##Type(const Class##Type *T); #define ABSTRACT_TYPE(Class, Base) #include "clang/AST/TypeNodes.def" }; } // end namespace clang void ASTTypeWriter::VisitBuiltinType(const BuiltinType *T) { llvm_unreachable("Built-in types are never serialized"); } void ASTTypeWriter::VisitComplexType(const ComplexType *T) { Record.AddTypeRef(T->getElementType()); Code = TYPE_COMPLEX; } void ASTTypeWriter::VisitPointerType(const PointerType *T) { Record.AddTypeRef(T->getPointeeType()); Code = TYPE_POINTER; } void ASTTypeWriter::VisitDecayedType(const DecayedType *T) { Record.AddTypeRef(T->getOriginalType()); Code = TYPE_DECAYED; } void ASTTypeWriter::VisitAdjustedType(const AdjustedType *T) { Record.AddTypeRef(T->getOriginalType()); Record.AddTypeRef(T->getAdjustedType()); Code = TYPE_ADJUSTED; } void ASTTypeWriter::VisitBlockPointerType(const BlockPointerType *T) { Record.AddTypeRef(T->getPointeeType()); Code = TYPE_BLOCK_POINTER; } void ASTTypeWriter::VisitLValueReferenceType(const LValueReferenceType *T) { Record.AddTypeRef(T->getPointeeTypeAsWritten()); Record.push_back(T->isSpelledAsLValue()); Code = TYPE_LVALUE_REFERENCE; } void ASTTypeWriter::VisitRValueReferenceType(const RValueReferenceType *T) { Record.AddTypeRef(T->getPointeeTypeAsWritten()); Code = TYPE_RVALUE_REFERENCE; } void ASTTypeWriter::VisitMemberPointerType(const MemberPointerType *T) { Record.AddTypeRef(T->getPointeeType()); Record.AddTypeRef(QualType(T->getClass(), 0)); Code = TYPE_MEMBER_POINTER; } void ASTTypeWriter::VisitArrayType(const ArrayType *T) { Record.AddTypeRef(T->getElementType()); Record.push_back(T->getSizeModifier()); // FIXME: stable values Record.push_back(T->getIndexTypeCVRQualifiers()); // FIXME: stable values } void ASTTypeWriter::VisitConstantArrayType(const ConstantArrayType *T) { VisitArrayType(T); Record.AddAPInt(T->getSize()); Code = TYPE_CONSTANT_ARRAY; } void ASTTypeWriter::VisitIncompleteArrayType(const IncompleteArrayType *T) { VisitArrayType(T); Code = TYPE_INCOMPLETE_ARRAY; } void ASTTypeWriter::VisitVariableArrayType(const VariableArrayType *T) { VisitArrayType(T); Record.AddSourceLocation(T->getLBracketLoc()); Record.AddSourceLocation(T->getRBracketLoc()); Record.AddStmt(T->getSizeExpr()); Code = TYPE_VARIABLE_ARRAY; } void ASTTypeWriter::VisitVectorType(const VectorType *T) { Record.AddTypeRef(T->getElementType()); Record.push_back(T->getNumElements()); Record.push_back(T->getVectorKind()); Code = TYPE_VECTOR; } void ASTTypeWriter::VisitExtVectorType(const ExtVectorType *T) { VisitVectorType(T); Code = TYPE_EXT_VECTOR; } void ASTTypeWriter::VisitFunctionType(const 
FunctionType *T) { Record.AddTypeRef(T->getReturnType()); FunctionType::ExtInfo C = T->getExtInfo(); Record.push_back(C.getNoReturn()); Record.push_back(C.getHasRegParm()); Record.push_back(C.getRegParm()); // FIXME: need to stabilize encoding of calling convention... Record.push_back(C.getCC()); Record.push_back(C.getProducesResult()); Record.push_back(C.getNoCallerSavedRegs()); if (C.getHasRegParm() || C.getRegParm() || C.getProducesResult()) AbbrevToUse = 0; } void ASTTypeWriter::VisitFunctionNoProtoType(const FunctionNoProtoType *T) { VisitFunctionType(T); Code = TYPE_FUNCTION_NO_PROTO; } static void addExceptionSpec(const FunctionProtoType *T, ASTRecordWriter &Record) { Record.push_back(T->getExceptionSpecType()); if (T->getExceptionSpecType() == EST_Dynamic) { Record.push_back(T->getNumExceptions()); for (unsigned I = 0, N = T->getNumExceptions(); I != N; ++I) Record.AddTypeRef(T->getExceptionType(I)); } else if (T->getExceptionSpecType() == EST_ComputedNoexcept) { Record.AddStmt(T->getNoexceptExpr()); } else if (T->getExceptionSpecType() == EST_Uninstantiated) { Record.AddDeclRef(T->getExceptionSpecDecl()); Record.AddDeclRef(T->getExceptionSpecTemplate()); } else if (T->getExceptionSpecType() == EST_Unevaluated) { Record.AddDeclRef(T->getExceptionSpecDecl()); } } void ASTTypeWriter::VisitFunctionProtoType(const FunctionProtoType *T) { VisitFunctionType(T); Record.push_back(T->isVariadic()); Record.push_back(T->hasTrailingReturn()); Record.push_back(T->getTypeQuals()); Record.push_back(static_cast(T->getRefQualifier())); addExceptionSpec(T, Record); Record.push_back(T->getNumParams()); for (unsigned I = 0, N = T->getNumParams(); I != N; ++I) Record.AddTypeRef(T->getParamType(I)); if (T->hasExtParameterInfos()) { for (unsigned I = 0, N = T->getNumParams(); I != N; ++I) Record.push_back(T->getExtParameterInfo(I).getOpaqueValue()); } if (T->isVariadic() || T->hasTrailingReturn() || T->getTypeQuals() || T->getRefQualifier() || T->getExceptionSpecType() != EST_None || T->hasExtParameterInfos()) AbbrevToUse = 0; Code = TYPE_FUNCTION_PROTO; } void ASTTypeWriter::VisitUnresolvedUsingType(const UnresolvedUsingType *T) { Record.AddDeclRef(T->getDecl()); Code = TYPE_UNRESOLVED_USING; } void ASTTypeWriter::VisitTypedefType(const TypedefType *T) { Record.AddDeclRef(T->getDecl()); assert(!T->isCanonicalUnqualified() && "Invalid typedef ?"); Record.AddTypeRef(T->getCanonicalTypeInternal()); Code = TYPE_TYPEDEF; } void ASTTypeWriter::VisitTypeOfExprType(const TypeOfExprType *T) { Record.AddStmt(T->getUnderlyingExpr()); Code = TYPE_TYPEOF_EXPR; } void ASTTypeWriter::VisitTypeOfType(const TypeOfType *T) { Record.AddTypeRef(T->getUnderlyingType()); Code = TYPE_TYPEOF; } void ASTTypeWriter::VisitDecltypeType(const DecltypeType *T) { Record.AddTypeRef(T->getUnderlyingType()); Record.AddStmt(T->getUnderlyingExpr()); Code = TYPE_DECLTYPE; } void ASTTypeWriter::VisitUnaryTransformType(const UnaryTransformType *T) { Record.AddTypeRef(T->getBaseType()); Record.AddTypeRef(T->getUnderlyingType()); Record.push_back(T->getUTTKind()); Code = TYPE_UNARY_TRANSFORM; } void ASTTypeWriter::VisitAutoType(const AutoType *T) { Record.AddTypeRef(T->getDeducedType()); Record.push_back((unsigned)T->getKeyword()); if (T->getDeducedType().isNull()) Record.push_back(T->isDependentType()); Code = TYPE_AUTO; } void ASTTypeWriter::VisitDeducedTemplateSpecializationType( const DeducedTemplateSpecializationType *T) { Record.AddTemplateName(T->getTemplateName()); Record.AddTypeRef(T->getDeducedType()); if 
(T->getDeducedType().isNull()) Record.push_back(T->isDependentType()); Code = TYPE_DEDUCED_TEMPLATE_SPECIALIZATION; } void ASTTypeWriter::VisitTagType(const TagType *T) { Record.push_back(T->isDependentType()); Record.AddDeclRef(T->getDecl()->getCanonicalDecl()); assert(!T->isBeingDefined() && "Cannot serialize in the middle of a type definition"); } void ASTTypeWriter::VisitRecordType(const RecordType *T) { VisitTagType(T); Code = TYPE_RECORD; } void ASTTypeWriter::VisitEnumType(const EnumType *T) { VisitTagType(T); Code = TYPE_ENUM; } void ASTTypeWriter::VisitAttributedType(const AttributedType *T) { Record.AddTypeRef(T->getModifiedType()); Record.AddTypeRef(T->getEquivalentType()); Record.push_back(T->getAttrKind()); Code = TYPE_ATTRIBUTED; } void ASTTypeWriter::VisitSubstTemplateTypeParmType( const SubstTemplateTypeParmType *T) { Record.AddTypeRef(QualType(T->getReplacedParameter(), 0)); Record.AddTypeRef(T->getReplacementType()); Code = TYPE_SUBST_TEMPLATE_TYPE_PARM; } void ASTTypeWriter::VisitSubstTemplateTypeParmPackType( const SubstTemplateTypeParmPackType *T) { Record.AddTypeRef(QualType(T->getReplacedParameter(), 0)); Record.AddTemplateArgument(T->getArgumentPack()); Code = TYPE_SUBST_TEMPLATE_TYPE_PARM_PACK; } void ASTTypeWriter::VisitTemplateSpecializationType( const TemplateSpecializationType *T) { Record.push_back(T->isDependentType()); Record.AddTemplateName(T->getTemplateName()); Record.push_back(T->getNumArgs()); for (const auto &ArgI : *T) Record.AddTemplateArgument(ArgI); Record.AddTypeRef(T->isTypeAlias() ? T->getAliasedType() : T->isCanonicalUnqualified() ? QualType() : T->getCanonicalTypeInternal()); Code = TYPE_TEMPLATE_SPECIALIZATION; } void ASTTypeWriter::VisitDependentSizedArrayType(const DependentSizedArrayType *T) { VisitArrayType(T); Record.AddStmt(T->getSizeExpr()); Record.AddSourceRange(T->getBracketsRange()); Code = TYPE_DEPENDENT_SIZED_ARRAY; } void ASTTypeWriter::VisitDependentSizedExtVectorType( const DependentSizedExtVectorType *T) { Record.AddTypeRef(T->getElementType()); Record.AddStmt(T->getSizeExpr()); Record.AddSourceLocation(T->getAttributeLoc()); Code = TYPE_DEPENDENT_SIZED_EXT_VECTOR; } void ASTTypeWriter::VisitTemplateTypeParmType(const TemplateTypeParmType *T) { Record.push_back(T->getDepth()); Record.push_back(T->getIndex()); Record.push_back(T->isParameterPack()); Record.AddDeclRef(T->getDecl()); Code = TYPE_TEMPLATE_TYPE_PARM; } void ASTTypeWriter::VisitDependentNameType(const DependentNameType *T) { Record.push_back(T->getKeyword()); Record.AddNestedNameSpecifier(T->getQualifier()); Record.AddIdentifierRef(T->getIdentifier()); Record.AddTypeRef( T->isCanonicalUnqualified() ? 
QualType() : T->getCanonicalTypeInternal()); Code = TYPE_DEPENDENT_NAME; } void ASTTypeWriter::VisitDependentTemplateSpecializationType( const DependentTemplateSpecializationType *T) { Record.push_back(T->getKeyword()); Record.AddNestedNameSpecifier(T->getQualifier()); Record.AddIdentifierRef(T->getIdentifier()); Record.push_back(T->getNumArgs()); for (const auto &I : *T) Record.AddTemplateArgument(I); Code = TYPE_DEPENDENT_TEMPLATE_SPECIALIZATION; } void ASTTypeWriter::VisitPackExpansionType(const PackExpansionType *T) { Record.AddTypeRef(T->getPattern()); if (Optional NumExpansions = T->getNumExpansions()) Record.push_back(*NumExpansions + 1); else Record.push_back(0); Code = TYPE_PACK_EXPANSION; } void ASTTypeWriter::VisitParenType(const ParenType *T) { Record.AddTypeRef(T->getInnerType()); Code = TYPE_PAREN; } void ASTTypeWriter::VisitElaboratedType(const ElaboratedType *T) { Record.push_back(T->getKeyword()); Record.AddNestedNameSpecifier(T->getQualifier()); Record.AddTypeRef(T->getNamedType()); Code = TYPE_ELABORATED; } void ASTTypeWriter::VisitInjectedClassNameType(const InjectedClassNameType *T) { Record.AddDeclRef(T->getDecl()->getCanonicalDecl()); Record.AddTypeRef(T->getInjectedSpecializationType()); Code = TYPE_INJECTED_CLASS_NAME; } void ASTTypeWriter::VisitObjCInterfaceType(const ObjCInterfaceType *T) { Record.AddDeclRef(T->getDecl()->getCanonicalDecl()); Code = TYPE_OBJC_INTERFACE; } void ASTTypeWriter::VisitObjCTypeParamType(const ObjCTypeParamType *T) { Record.AddDeclRef(T->getDecl()); Record.push_back(T->getNumProtocols()); for (const auto *I : T->quals()) Record.AddDeclRef(I); Code = TYPE_OBJC_TYPE_PARAM; } void ASTTypeWriter::VisitObjCObjectType(const ObjCObjectType *T) { Record.AddTypeRef(T->getBaseType()); Record.push_back(T->getTypeArgsAsWritten().size()); for (auto TypeArg : T->getTypeArgsAsWritten()) Record.AddTypeRef(TypeArg); Record.push_back(T->getNumProtocols()); for (const auto *I : T->quals()) Record.AddDeclRef(I); Record.push_back(T->isKindOfTypeAsWritten()); Code = TYPE_OBJC_OBJECT; } void ASTTypeWriter::VisitObjCObjectPointerType(const ObjCObjectPointerType *T) { Record.AddTypeRef(T->getPointeeType()); Code = TYPE_OBJC_OBJECT_POINTER; } void ASTTypeWriter::VisitAtomicType(const AtomicType *T) { Record.AddTypeRef(T->getValueType()); Code = TYPE_ATOMIC; } void ASTTypeWriter::VisitPipeType(const PipeType *T) { Record.AddTypeRef(T->getElementType()); Record.push_back(T->isReadOnly()); Code = TYPE_PIPE; } namespace { class TypeLocWriter : public TypeLocVisitor { ASTRecordWriter &Record; public: TypeLocWriter(ASTRecordWriter &Record) : Record(Record) { } #define ABSTRACT_TYPELOC(CLASS, PARENT) #define TYPELOC(CLASS, PARENT) \ void Visit##CLASS##TypeLoc(CLASS##TypeLoc TyLoc); #include "clang/AST/TypeLocNodes.def" void VisitArrayTypeLoc(ArrayTypeLoc TyLoc); void VisitFunctionTypeLoc(FunctionTypeLoc TyLoc); }; } // end anonymous namespace void TypeLocWriter::VisitQualifiedTypeLoc(QualifiedTypeLoc TL) { // nothing to do } void TypeLocWriter::VisitBuiltinTypeLoc(BuiltinTypeLoc TL) { Record.AddSourceLocation(TL.getBuiltinLoc()); if (TL.needsExtraLocalData()) { Record.push_back(TL.getWrittenTypeSpec()); Record.push_back(TL.getWrittenSignSpec()); Record.push_back(TL.getWrittenWidthSpec()); Record.push_back(TL.hasModeAttr()); } } void TypeLocWriter::VisitComplexTypeLoc(ComplexTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitPointerTypeLoc(PointerTypeLoc TL) { Record.AddSourceLocation(TL.getStarLoc()); } void 
TypeLocWriter::VisitDecayedTypeLoc(DecayedTypeLoc TL) { // nothing to do } void TypeLocWriter::VisitAdjustedTypeLoc(AdjustedTypeLoc TL) { // nothing to do } void TypeLocWriter::VisitBlockPointerTypeLoc(BlockPointerTypeLoc TL) { Record.AddSourceLocation(TL.getCaretLoc()); } void TypeLocWriter::VisitLValueReferenceTypeLoc(LValueReferenceTypeLoc TL) { Record.AddSourceLocation(TL.getAmpLoc()); } void TypeLocWriter::VisitRValueReferenceTypeLoc(RValueReferenceTypeLoc TL) { Record.AddSourceLocation(TL.getAmpAmpLoc()); } void TypeLocWriter::VisitMemberPointerTypeLoc(MemberPointerTypeLoc TL) { Record.AddSourceLocation(TL.getStarLoc()); Record.AddTypeSourceInfo(TL.getClassTInfo()); } void TypeLocWriter::VisitArrayTypeLoc(ArrayTypeLoc TL) { Record.AddSourceLocation(TL.getLBracketLoc()); Record.AddSourceLocation(TL.getRBracketLoc()); Record.push_back(TL.getSizeExpr() ? 1 : 0); if (TL.getSizeExpr()) Record.AddStmt(TL.getSizeExpr()); } void TypeLocWriter::VisitConstantArrayTypeLoc(ConstantArrayTypeLoc TL) { VisitArrayTypeLoc(TL); } void TypeLocWriter::VisitIncompleteArrayTypeLoc(IncompleteArrayTypeLoc TL) { VisitArrayTypeLoc(TL); } void TypeLocWriter::VisitVariableArrayTypeLoc(VariableArrayTypeLoc TL) { VisitArrayTypeLoc(TL); } void TypeLocWriter::VisitDependentSizedArrayTypeLoc( DependentSizedArrayTypeLoc TL) { VisitArrayTypeLoc(TL); } void TypeLocWriter::VisitDependentSizedExtVectorTypeLoc( DependentSizedExtVectorTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitVectorTypeLoc(VectorTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitExtVectorTypeLoc(ExtVectorTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitFunctionTypeLoc(FunctionTypeLoc TL) { Record.AddSourceLocation(TL.getLocalRangeBegin()); Record.AddSourceLocation(TL.getLParenLoc()); Record.AddSourceLocation(TL.getRParenLoc()); Record.AddSourceRange(TL.getExceptionSpecRange()); Record.AddSourceLocation(TL.getLocalRangeEnd()); for (unsigned i = 0, e = TL.getNumParams(); i != e; ++i) Record.AddDeclRef(TL.getParam(i)); } void TypeLocWriter::VisitFunctionProtoTypeLoc(FunctionProtoTypeLoc TL) { VisitFunctionTypeLoc(TL); } void TypeLocWriter::VisitFunctionNoProtoTypeLoc(FunctionNoProtoTypeLoc TL) { VisitFunctionTypeLoc(TL); } void TypeLocWriter::VisitUnresolvedUsingTypeLoc(UnresolvedUsingTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitTypedefTypeLoc(TypedefTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitObjCTypeParamTypeLoc(ObjCTypeParamTypeLoc TL) { if (TL.getNumProtocols()) { Record.AddSourceLocation(TL.getProtocolLAngleLoc()); Record.AddSourceLocation(TL.getProtocolRAngleLoc()); } for (unsigned i = 0, e = TL.getNumProtocols(); i != e; ++i) Record.AddSourceLocation(TL.getProtocolLoc(i)); } void TypeLocWriter::VisitTypeOfExprTypeLoc(TypeOfExprTypeLoc TL) { Record.AddSourceLocation(TL.getTypeofLoc()); Record.AddSourceLocation(TL.getLParenLoc()); Record.AddSourceLocation(TL.getRParenLoc()); } void TypeLocWriter::VisitTypeOfTypeLoc(TypeOfTypeLoc TL) { Record.AddSourceLocation(TL.getTypeofLoc()); Record.AddSourceLocation(TL.getLParenLoc()); Record.AddSourceLocation(TL.getRParenLoc()); Record.AddTypeSourceInfo(TL.getUnderlyingTInfo()); } void TypeLocWriter::VisitDecltypeTypeLoc(DecltypeTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitUnaryTransformTypeLoc(UnaryTransformTypeLoc TL) { Record.AddSourceLocation(TL.getKWLoc()); 
Record.AddSourceLocation(TL.getLParenLoc()); Record.AddSourceLocation(TL.getRParenLoc()); Record.AddTypeSourceInfo(TL.getUnderlyingTInfo()); } void TypeLocWriter::VisitAutoTypeLoc(AutoTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitDeducedTemplateSpecializationTypeLoc( DeducedTemplateSpecializationTypeLoc TL) { Record.AddSourceLocation(TL.getTemplateNameLoc()); } void TypeLocWriter::VisitRecordTypeLoc(RecordTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitEnumTypeLoc(EnumTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitAttributedTypeLoc(AttributedTypeLoc TL) { Record.AddSourceLocation(TL.getAttrNameLoc()); if (TL.hasAttrOperand()) { SourceRange range = TL.getAttrOperandParensRange(); Record.AddSourceLocation(range.getBegin()); Record.AddSourceLocation(range.getEnd()); } if (TL.hasAttrExprOperand()) { Expr *operand = TL.getAttrExprOperand(); Record.push_back(operand ? 1 : 0); if (operand) Record.AddStmt(operand); } else if (TL.hasAttrEnumOperand()) { Record.AddSourceLocation(TL.getAttrEnumOperandLoc()); } } void TypeLocWriter::VisitTemplateTypeParmTypeLoc(TemplateTypeParmTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitSubstTemplateTypeParmTypeLoc( SubstTemplateTypeParmTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitSubstTemplateTypeParmPackTypeLoc( SubstTemplateTypeParmPackTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitTemplateSpecializationTypeLoc( TemplateSpecializationTypeLoc TL) { Record.AddSourceLocation(TL.getTemplateKeywordLoc()); Record.AddSourceLocation(TL.getTemplateNameLoc()); Record.AddSourceLocation(TL.getLAngleLoc()); Record.AddSourceLocation(TL.getRAngleLoc()); for (unsigned i = 0, e = TL.getNumArgs(); i != e; ++i) Record.AddTemplateArgumentLocInfo(TL.getArgLoc(i).getArgument().getKind(), TL.getArgLoc(i).getLocInfo()); } void TypeLocWriter::VisitParenTypeLoc(ParenTypeLoc TL) { Record.AddSourceLocation(TL.getLParenLoc()); Record.AddSourceLocation(TL.getRParenLoc()); } void TypeLocWriter::VisitElaboratedTypeLoc(ElaboratedTypeLoc TL) { Record.AddSourceLocation(TL.getElaboratedKeywordLoc()); Record.AddNestedNameSpecifierLoc(TL.getQualifierLoc()); } void TypeLocWriter::VisitInjectedClassNameTypeLoc(InjectedClassNameTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitDependentNameTypeLoc(DependentNameTypeLoc TL) { Record.AddSourceLocation(TL.getElaboratedKeywordLoc()); Record.AddNestedNameSpecifierLoc(TL.getQualifierLoc()); Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitDependentTemplateSpecializationTypeLoc( DependentTemplateSpecializationTypeLoc TL) { Record.AddSourceLocation(TL.getElaboratedKeywordLoc()); Record.AddNestedNameSpecifierLoc(TL.getQualifierLoc()); Record.AddSourceLocation(TL.getTemplateKeywordLoc()); Record.AddSourceLocation(TL.getTemplateNameLoc()); Record.AddSourceLocation(TL.getLAngleLoc()); Record.AddSourceLocation(TL.getRAngleLoc()); for (unsigned I = 0, E = TL.getNumArgs(); I != E; ++I) Record.AddTemplateArgumentLocInfo(TL.getArgLoc(I).getArgument().getKind(), TL.getArgLoc(I).getLocInfo()); } void TypeLocWriter::VisitPackExpansionTypeLoc(PackExpansionTypeLoc TL) { Record.AddSourceLocation(TL.getEllipsisLoc()); } void TypeLocWriter::VisitObjCInterfaceTypeLoc(ObjCInterfaceTypeLoc TL) { Record.AddSourceLocation(TL.getNameLoc()); } void TypeLocWriter::VisitObjCObjectTypeLoc(ObjCObjectTypeLoc TL) { 
Record.push_back(TL.hasBaseTypeAsWritten()); Record.AddSourceLocation(TL.getTypeArgsLAngleLoc()); Record.AddSourceLocation(TL.getTypeArgsRAngleLoc()); for (unsigned i = 0, e = TL.getNumTypeArgs(); i != e; ++i) Record.AddTypeSourceInfo(TL.getTypeArgTInfo(i)); Record.AddSourceLocation(TL.getProtocolLAngleLoc()); Record.AddSourceLocation(TL.getProtocolRAngleLoc()); for (unsigned i = 0, e = TL.getNumProtocols(); i != e; ++i) Record.AddSourceLocation(TL.getProtocolLoc(i)); } void TypeLocWriter::VisitObjCObjectPointerTypeLoc(ObjCObjectPointerTypeLoc TL) { Record.AddSourceLocation(TL.getStarLoc()); } void TypeLocWriter::VisitAtomicTypeLoc(AtomicTypeLoc TL) { Record.AddSourceLocation(TL.getKWLoc()); Record.AddSourceLocation(TL.getLParenLoc()); Record.AddSourceLocation(TL.getRParenLoc()); } void TypeLocWriter::VisitPipeTypeLoc(PipeTypeLoc TL) { Record.AddSourceLocation(TL.getKWLoc()); } void ASTWriter::WriteTypeAbbrevs() { using namespace llvm; std::shared_ptr Abv; // Abbreviation for TYPE_EXT_QUAL Abv = std::make_shared(); Abv->Add(BitCodeAbbrevOp(serialization::TYPE_EXT_QUAL)); Abv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // Type Abv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 3)); // Quals TypeExtQualAbbrev = Stream.EmitAbbrev(std::move(Abv)); // Abbreviation for TYPE_FUNCTION_PROTO Abv = std::make_shared(); Abv->Add(BitCodeAbbrevOp(serialization::TYPE_FUNCTION_PROTO)); // FunctionType Abv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // ReturnType Abv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // NoReturn Abv->Add(BitCodeAbbrevOp(0)); // HasRegParm Abv->Add(BitCodeAbbrevOp(0)); // RegParm Abv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 4)); // CC Abv->Add(BitCodeAbbrevOp(0)); // ProducesResult Abv->Add(BitCodeAbbrevOp(0)); // NoCallerSavedRegs // FunctionProtoType Abv->Add(BitCodeAbbrevOp(0)); // IsVariadic Abv->Add(BitCodeAbbrevOp(0)); // HasTrailingReturn Abv->Add(BitCodeAbbrevOp(0)); // TypeQuals Abv->Add(BitCodeAbbrevOp(0)); // RefQualifier Abv->Add(BitCodeAbbrevOp(EST_None)); // ExceptionSpec Abv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // NumParams Abv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array)); Abv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // Params TypeFunctionProtoAbbrev = Stream.EmitAbbrev(std::move(Abv)); } //===----------------------------------------------------------------------===// // ASTWriter Implementation //===----------------------------------------------------------------------===// static void EmitBlockID(unsigned ID, const char *Name, llvm::BitstreamWriter &Stream, ASTWriter::RecordDataImpl &Record) { Record.clear(); Record.push_back(ID); Stream.EmitRecord(llvm::bitc::BLOCKINFO_CODE_SETBID, Record); // Emit the block name if present. 
if (!Name || Name[0] == 0) return; Record.clear(); while (*Name) Record.push_back(*Name++); Stream.EmitRecord(llvm::bitc::BLOCKINFO_CODE_BLOCKNAME, Record); } static void EmitRecordID(unsigned ID, const char *Name, llvm::BitstreamWriter &Stream, ASTWriter::RecordDataImpl &Record) { Record.clear(); Record.push_back(ID); while (*Name) Record.push_back(*Name++); Stream.EmitRecord(llvm::bitc::BLOCKINFO_CODE_SETRECORDNAME, Record); } static void AddStmtsExprs(llvm::BitstreamWriter &Stream, ASTWriter::RecordDataImpl &Record) { #define RECORD(X) EmitRecordID(X, #X, Stream, Record) RECORD(STMT_STOP); RECORD(STMT_NULL_PTR); RECORD(STMT_REF_PTR); RECORD(STMT_NULL); RECORD(STMT_COMPOUND); RECORD(STMT_CASE); RECORD(STMT_DEFAULT); RECORD(STMT_LABEL); RECORD(STMT_ATTRIBUTED); RECORD(STMT_IF); RECORD(STMT_SWITCH); RECORD(STMT_WHILE); RECORD(STMT_DO); RECORD(STMT_FOR); RECORD(STMT_GOTO); RECORD(STMT_INDIRECT_GOTO); RECORD(STMT_CONTINUE); RECORD(STMT_BREAK); RECORD(STMT_RETURN); RECORD(STMT_DECL); RECORD(STMT_GCCASM); RECORD(STMT_MSASM); RECORD(EXPR_PREDEFINED); RECORD(EXPR_DECL_REF); RECORD(EXPR_INTEGER_LITERAL); RECORD(EXPR_FLOATING_LITERAL); RECORD(EXPR_IMAGINARY_LITERAL); RECORD(EXPR_STRING_LITERAL); RECORD(EXPR_CHARACTER_LITERAL); RECORD(EXPR_PAREN); RECORD(EXPR_PAREN_LIST); RECORD(EXPR_UNARY_OPERATOR); RECORD(EXPR_SIZEOF_ALIGN_OF); RECORD(EXPR_ARRAY_SUBSCRIPT); RECORD(EXPR_CALL); RECORD(EXPR_MEMBER); RECORD(EXPR_BINARY_OPERATOR); RECORD(EXPR_COMPOUND_ASSIGN_OPERATOR); RECORD(EXPR_CONDITIONAL_OPERATOR); RECORD(EXPR_IMPLICIT_CAST); RECORD(EXPR_CSTYLE_CAST); RECORD(EXPR_COMPOUND_LITERAL); RECORD(EXPR_EXT_VECTOR_ELEMENT); RECORD(EXPR_INIT_LIST); RECORD(EXPR_DESIGNATED_INIT); RECORD(EXPR_DESIGNATED_INIT_UPDATE); RECORD(EXPR_IMPLICIT_VALUE_INIT); RECORD(EXPR_NO_INIT); RECORD(EXPR_VA_ARG); RECORD(EXPR_ADDR_LABEL); RECORD(EXPR_STMT); RECORD(EXPR_CHOOSE); RECORD(EXPR_GNU_NULL); RECORD(EXPR_SHUFFLE_VECTOR); RECORD(EXPR_BLOCK); RECORD(EXPR_GENERIC_SELECTION); RECORD(EXPR_OBJC_STRING_LITERAL); RECORD(EXPR_OBJC_BOXED_EXPRESSION); RECORD(EXPR_OBJC_ARRAY_LITERAL); RECORD(EXPR_OBJC_DICTIONARY_LITERAL); RECORD(EXPR_OBJC_ENCODE); RECORD(EXPR_OBJC_SELECTOR_EXPR); RECORD(EXPR_OBJC_PROTOCOL_EXPR); RECORD(EXPR_OBJC_IVAR_REF_EXPR); RECORD(EXPR_OBJC_PROPERTY_REF_EXPR); RECORD(EXPR_OBJC_KVC_REF_EXPR); RECORD(EXPR_OBJC_MESSAGE_EXPR); RECORD(STMT_OBJC_FOR_COLLECTION); RECORD(STMT_OBJC_CATCH); RECORD(STMT_OBJC_FINALLY); RECORD(STMT_OBJC_AT_TRY); RECORD(STMT_OBJC_AT_SYNCHRONIZED); RECORD(STMT_OBJC_AT_THROW); RECORD(EXPR_OBJC_BOOL_LITERAL); RECORD(STMT_CXX_CATCH); RECORD(STMT_CXX_TRY); RECORD(STMT_CXX_FOR_RANGE); RECORD(EXPR_CXX_OPERATOR_CALL); RECORD(EXPR_CXX_MEMBER_CALL); RECORD(EXPR_CXX_CONSTRUCT); RECORD(EXPR_CXX_TEMPORARY_OBJECT); RECORD(EXPR_CXX_STATIC_CAST); RECORD(EXPR_CXX_DYNAMIC_CAST); RECORD(EXPR_CXX_REINTERPRET_CAST); RECORD(EXPR_CXX_CONST_CAST); RECORD(EXPR_CXX_FUNCTIONAL_CAST); RECORD(EXPR_USER_DEFINED_LITERAL); RECORD(EXPR_CXX_STD_INITIALIZER_LIST); RECORD(EXPR_CXX_BOOL_LITERAL); RECORD(EXPR_CXX_NULL_PTR_LITERAL); RECORD(EXPR_CXX_TYPEID_EXPR); RECORD(EXPR_CXX_TYPEID_TYPE); RECORD(EXPR_CXX_THIS); RECORD(EXPR_CXX_THROW); RECORD(EXPR_CXX_DEFAULT_ARG); RECORD(EXPR_CXX_DEFAULT_INIT); RECORD(EXPR_CXX_BIND_TEMPORARY); RECORD(EXPR_CXX_SCALAR_VALUE_INIT); RECORD(EXPR_CXX_NEW); RECORD(EXPR_CXX_DELETE); RECORD(EXPR_CXX_PSEUDO_DESTRUCTOR); RECORD(EXPR_EXPR_WITH_CLEANUPS); RECORD(EXPR_CXX_DEPENDENT_SCOPE_MEMBER); RECORD(EXPR_CXX_DEPENDENT_SCOPE_DECL_REF); RECORD(EXPR_CXX_UNRESOLVED_CONSTRUCT); 
RECORD(EXPR_CXX_UNRESOLVED_MEMBER); RECORD(EXPR_CXX_UNRESOLVED_LOOKUP); RECORD(EXPR_CXX_EXPRESSION_TRAIT); RECORD(EXPR_CXX_NOEXCEPT); RECORD(EXPR_OPAQUE_VALUE); RECORD(EXPR_BINARY_CONDITIONAL_OPERATOR); RECORD(EXPR_TYPE_TRAIT); RECORD(EXPR_ARRAY_TYPE_TRAIT); RECORD(EXPR_PACK_EXPANSION); RECORD(EXPR_SIZEOF_PACK); RECORD(EXPR_SUBST_NON_TYPE_TEMPLATE_PARM); RECORD(EXPR_SUBST_NON_TYPE_TEMPLATE_PARM_PACK); RECORD(EXPR_FUNCTION_PARM_PACK); RECORD(EXPR_MATERIALIZE_TEMPORARY); RECORD(EXPR_CUDA_KERNEL_CALL); RECORD(EXPR_CXX_UUIDOF_EXPR); RECORD(EXPR_CXX_UUIDOF_TYPE); RECORD(EXPR_LAMBDA); #undef RECORD } void ASTWriter::WriteBlockInfoBlock() { RecordData Record; Stream.EnterBlockInfoBlock(); #define BLOCK(X) EmitBlockID(X ## _ID, #X, Stream, Record) #define RECORD(X) EmitRecordID(X, #X, Stream, Record) // Control Block. BLOCK(CONTROL_BLOCK); RECORD(METADATA); RECORD(MODULE_NAME); RECORD(MODULE_DIRECTORY); RECORD(MODULE_MAP_FILE); RECORD(IMPORTS); RECORD(ORIGINAL_FILE); RECORD(ORIGINAL_PCH_DIR); RECORD(ORIGINAL_FILE_ID); RECORD(INPUT_FILE_OFFSETS); BLOCK(OPTIONS_BLOCK); RECORD(LANGUAGE_OPTIONS); RECORD(TARGET_OPTIONS); RECORD(FILE_SYSTEM_OPTIONS); RECORD(HEADER_SEARCH_OPTIONS); RECORD(PREPROCESSOR_OPTIONS); BLOCK(INPUT_FILES_BLOCK); RECORD(INPUT_FILE); // AST Top-Level Block. BLOCK(AST_BLOCK); RECORD(TYPE_OFFSET); RECORD(DECL_OFFSET); RECORD(IDENTIFIER_OFFSET); RECORD(IDENTIFIER_TABLE); RECORD(EAGERLY_DESERIALIZED_DECLS); RECORD(MODULAR_CODEGEN_DECLS); RECORD(SPECIAL_TYPES); RECORD(STATISTICS); RECORD(TENTATIVE_DEFINITIONS); RECORD(SELECTOR_OFFSETS); RECORD(METHOD_POOL); RECORD(PP_COUNTER_VALUE); RECORD(SOURCE_LOCATION_OFFSETS); RECORD(SOURCE_LOCATION_PRELOADS); RECORD(EXT_VECTOR_DECLS); RECORD(UNUSED_FILESCOPED_DECLS); RECORD(PPD_ENTITIES_OFFSETS); RECORD(VTABLE_USES); RECORD(REFERENCED_SELECTOR_POOL); RECORD(TU_UPDATE_LEXICAL); RECORD(SEMA_DECL_REFS); RECORD(WEAK_UNDECLARED_IDENTIFIERS); RECORD(PENDING_IMPLICIT_INSTANTIATIONS); RECORD(UPDATE_VISIBLE); RECORD(DECL_UPDATE_OFFSETS); RECORD(DECL_UPDATES); RECORD(CUDA_SPECIAL_DECL_REFS); RECORD(HEADER_SEARCH_TABLE); RECORD(FP_PRAGMA_OPTIONS); RECORD(OPENCL_EXTENSIONS); RECORD(OPENCL_EXTENSION_TYPES); RECORD(OPENCL_EXTENSION_DECLS); RECORD(DELEGATING_CTORS); RECORD(KNOWN_NAMESPACES); RECORD(MODULE_OFFSET_MAP); RECORD(SOURCE_MANAGER_LINE_TABLE); RECORD(OBJC_CATEGORIES_MAP); RECORD(FILE_SORTED_DECLS); RECORD(IMPORTED_MODULES); RECORD(OBJC_CATEGORIES); RECORD(MACRO_OFFSET); RECORD(INTERESTING_IDENTIFIERS); RECORD(UNDEFINED_BUT_USED); RECORD(LATE_PARSED_TEMPLATE); RECORD(OPTIMIZE_PRAGMA_OPTIONS); RECORD(MSSTRUCT_PRAGMA_OPTIONS); RECORD(POINTERS_TO_MEMBERS_PRAGMA_OPTIONS); RECORD(UNUSED_LOCAL_TYPEDEF_NAME_CANDIDATES); RECORD(DELETE_EXPRS_TO_ANALYZE); RECORD(CUDA_PRAGMA_FORCE_HOST_DEVICE_DEPTH); RECORD(PP_CONDITIONAL_STACK); // SourceManager Block. BLOCK(SOURCE_MANAGER_BLOCK); RECORD(SM_SLOC_FILE_ENTRY); RECORD(SM_SLOC_BUFFER_ENTRY); RECORD(SM_SLOC_BUFFER_BLOB); RECORD(SM_SLOC_BUFFER_BLOB_COMPRESSED); RECORD(SM_SLOC_EXPANSION_ENTRY); // Preprocessor Block. BLOCK(PREPROCESSOR_BLOCK); RECORD(PP_MACRO_DIRECTIVE_HISTORY); RECORD(PP_MACRO_FUNCTION_LIKE); RECORD(PP_MACRO_OBJECT_LIKE); RECORD(PP_MODULE_MACRO); RECORD(PP_TOKEN); // Submodule Block. 
BLOCK(SUBMODULE_BLOCK); RECORD(SUBMODULE_METADATA); RECORD(SUBMODULE_DEFINITION); RECORD(SUBMODULE_UMBRELLA_HEADER); RECORD(SUBMODULE_HEADER); RECORD(SUBMODULE_TOPHEADER); RECORD(SUBMODULE_UMBRELLA_DIR); RECORD(SUBMODULE_IMPORTS); RECORD(SUBMODULE_EXPORTS); RECORD(SUBMODULE_REQUIRES); RECORD(SUBMODULE_EXCLUDED_HEADER); RECORD(SUBMODULE_LINK_LIBRARY); RECORD(SUBMODULE_CONFIG_MACRO); RECORD(SUBMODULE_CONFLICT); RECORD(SUBMODULE_PRIVATE_HEADER); RECORD(SUBMODULE_TEXTUAL_HEADER); RECORD(SUBMODULE_PRIVATE_TEXTUAL_HEADER); RECORD(SUBMODULE_INITIALIZERS); // Comments Block. BLOCK(COMMENTS_BLOCK); RECORD(COMMENTS_RAW_COMMENT); // Decls and Types block. BLOCK(DECLTYPES_BLOCK); RECORD(TYPE_EXT_QUAL); RECORD(TYPE_COMPLEX); RECORD(TYPE_POINTER); RECORD(TYPE_BLOCK_POINTER); RECORD(TYPE_LVALUE_REFERENCE); RECORD(TYPE_RVALUE_REFERENCE); RECORD(TYPE_MEMBER_POINTER); RECORD(TYPE_CONSTANT_ARRAY); RECORD(TYPE_INCOMPLETE_ARRAY); RECORD(TYPE_VARIABLE_ARRAY); RECORD(TYPE_VECTOR); RECORD(TYPE_EXT_VECTOR); RECORD(TYPE_FUNCTION_NO_PROTO); RECORD(TYPE_FUNCTION_PROTO); RECORD(TYPE_TYPEDEF); RECORD(TYPE_TYPEOF_EXPR); RECORD(TYPE_TYPEOF); RECORD(TYPE_RECORD); RECORD(TYPE_ENUM); RECORD(TYPE_OBJC_INTERFACE); RECORD(TYPE_OBJC_OBJECT_POINTER); RECORD(TYPE_DECLTYPE); RECORD(TYPE_ELABORATED); RECORD(TYPE_SUBST_TEMPLATE_TYPE_PARM); RECORD(TYPE_UNRESOLVED_USING); RECORD(TYPE_INJECTED_CLASS_NAME); RECORD(TYPE_OBJC_OBJECT); RECORD(TYPE_TEMPLATE_TYPE_PARM); RECORD(TYPE_TEMPLATE_SPECIALIZATION); RECORD(TYPE_DEPENDENT_NAME); RECORD(TYPE_DEPENDENT_TEMPLATE_SPECIALIZATION); RECORD(TYPE_DEPENDENT_SIZED_ARRAY); RECORD(TYPE_PAREN); RECORD(TYPE_PACK_EXPANSION); RECORD(TYPE_ATTRIBUTED); RECORD(TYPE_SUBST_TEMPLATE_TYPE_PARM_PACK); RECORD(TYPE_AUTO); RECORD(TYPE_UNARY_TRANSFORM); RECORD(TYPE_ATOMIC); RECORD(TYPE_DECAYED); RECORD(TYPE_ADJUSTED); RECORD(TYPE_OBJC_TYPE_PARAM); RECORD(LOCAL_REDECLARATIONS); RECORD(DECL_TYPEDEF); RECORD(DECL_TYPEALIAS); RECORD(DECL_ENUM); RECORD(DECL_RECORD); RECORD(DECL_ENUM_CONSTANT); RECORD(DECL_FUNCTION); RECORD(DECL_OBJC_METHOD); RECORD(DECL_OBJC_INTERFACE); RECORD(DECL_OBJC_PROTOCOL); RECORD(DECL_OBJC_IVAR); RECORD(DECL_OBJC_AT_DEFS_FIELD); RECORD(DECL_OBJC_CATEGORY); RECORD(DECL_OBJC_CATEGORY_IMPL); RECORD(DECL_OBJC_IMPLEMENTATION); RECORD(DECL_OBJC_COMPATIBLE_ALIAS); RECORD(DECL_OBJC_PROPERTY); RECORD(DECL_OBJC_PROPERTY_IMPL); RECORD(DECL_FIELD); RECORD(DECL_MS_PROPERTY); RECORD(DECL_VAR); RECORD(DECL_IMPLICIT_PARAM); RECORD(DECL_PARM_VAR); RECORD(DECL_FILE_SCOPE_ASM); RECORD(DECL_BLOCK); RECORD(DECL_CONTEXT_LEXICAL); RECORD(DECL_CONTEXT_VISIBLE); RECORD(DECL_NAMESPACE); RECORD(DECL_NAMESPACE_ALIAS); RECORD(DECL_USING); RECORD(DECL_USING_SHADOW); RECORD(DECL_USING_DIRECTIVE); RECORD(DECL_UNRESOLVED_USING_VALUE); RECORD(DECL_UNRESOLVED_USING_TYPENAME); RECORD(DECL_LINKAGE_SPEC); RECORD(DECL_CXX_RECORD); RECORD(DECL_CXX_METHOD); RECORD(DECL_CXX_CONSTRUCTOR); RECORD(DECL_CXX_INHERITED_CONSTRUCTOR); RECORD(DECL_CXX_DESTRUCTOR); RECORD(DECL_CXX_CONVERSION); RECORD(DECL_ACCESS_SPEC); RECORD(DECL_FRIEND); RECORD(DECL_FRIEND_TEMPLATE); RECORD(DECL_CLASS_TEMPLATE); RECORD(DECL_CLASS_TEMPLATE_SPECIALIZATION); RECORD(DECL_CLASS_TEMPLATE_PARTIAL_SPECIALIZATION); RECORD(DECL_VAR_TEMPLATE); RECORD(DECL_VAR_TEMPLATE_SPECIALIZATION); RECORD(DECL_VAR_TEMPLATE_PARTIAL_SPECIALIZATION); RECORD(DECL_FUNCTION_TEMPLATE); RECORD(DECL_TEMPLATE_TYPE_PARM); RECORD(DECL_NON_TYPE_TEMPLATE_PARM); RECORD(DECL_TEMPLATE_TEMPLATE_PARM); RECORD(DECL_TYPE_ALIAS_TEMPLATE); RECORD(DECL_STATIC_ASSERT); RECORD(DECL_CXX_BASE_SPECIFIERS); 
  RECORD(DECL_CXX_CTOR_INITIALIZERS);
  RECORD(DECL_INDIRECTFIELD);
  RECORD(DECL_EXPANDED_NON_TYPE_TEMPLATE_PARM_PACK);
  RECORD(DECL_EXPANDED_TEMPLATE_TEMPLATE_PARM_PACK);
  RECORD(DECL_CLASS_SCOPE_FUNCTION_SPECIALIZATION);
  RECORD(DECL_IMPORT);
  RECORD(DECL_OMP_THREADPRIVATE);
  RECORD(DECL_EMPTY);
  RECORD(DECL_OBJC_TYPE_PARAM);
  RECORD(DECL_OMP_CAPTUREDEXPR);
  RECORD(DECL_PRAGMA_COMMENT);
  RECORD(DECL_PRAGMA_DETECT_MISMATCH);
  RECORD(DECL_OMP_DECLARE_REDUCTION);

  // Statements and Exprs can occur in the Decls and Types block.
  AddStmtsExprs(Stream, Record);

  BLOCK(PREPROCESSOR_DETAIL_BLOCK);
  RECORD(PPD_MACRO_EXPANSION);
  RECORD(PPD_MACRO_DEFINITION);
  RECORD(PPD_INCLUSION_DIRECTIVE);

  // Decls and Types block.
  BLOCK(EXTENSION_BLOCK);
  RECORD(EXTENSION_METADATA);

  BLOCK(UNHASHED_CONTROL_BLOCK);
  RECORD(SIGNATURE);
  RECORD(DIAGNOSTIC_OPTIONS);
  RECORD(DIAG_PRAGMA_MAPPINGS);

#undef RECORD
#undef BLOCK
  Stream.ExitBlock();
}

/// \brief Prepares a path for being written to an AST file by converting it
/// to an absolute path and removing nested './'s.
///
/// \return \c true if the path was changed.
static bool cleanPathForOutput(FileManager &FileMgr,
                               SmallVectorImpl<char> &Path) {
  bool Changed = FileMgr.makeAbsolutePath(Path);
  return Changed | llvm::sys::path::remove_dots(Path);
}

/// \brief Adjusts the given filename to only write out the portion of the
/// filename that is not part of the system root directory.
///
/// \param Filename the file name to adjust.
///
/// \param BaseDir When non-NULL, the PCH file is a relocatable AST file and
/// the returned filename will be adjusted by this root directory.
///
/// \returns either the original filename (if it needs no adjustment) or the
/// adjusted filename (which points into the @p Filename parameter).
static const char *
adjustFilenameForRelocatableAST(const char *Filename, StringRef BaseDir) {
  assert(Filename && "No file name to adjust?");

  if (BaseDir.empty())
    return Filename;

  // Verify that the filename and the system root have the same prefix.
  unsigned Pos = 0;
  for (; Filename[Pos] && Pos < BaseDir.size(); ++Pos)
    if (Filename[Pos] != BaseDir[Pos])
      return Filename; // Prefixes don't match.

  // We hit the end of the filename before we hit the end of the system root.
  if (!Filename[Pos])
    return Filename;

  // If there's not a path separator at the end of the base directory nor
  // immediately after it, then this isn't within the base directory.
  if (!llvm::sys::path::is_separator(Filename[Pos])) {
    if (!llvm::sys::path::is_separator(BaseDir.back()))
      return Filename;
  } else {
    // If the file name has a '/' at the current position, skip over the '/'.
    // We distinguish relative paths from absolute paths by the
    // absence of '/' at the beginning of relative paths.
    //
    // FIXME: This is wrong. We distinguish them by asking if the path is
    // absolute, which isn't the same thing. And there might be multiple '/'s
    // in a row. Use a better mechanism to indicate whether we have emitted an
    // absolute or relative path.
    ++Pos;
  }

  return Filename + Pos;
}

ASTFileSignature ASTWriter::createSignature(StringRef Bytes) {
  // Calculate the hash till start of UNHASHED_CONTROL_BLOCK.
  llvm::SHA1 Hasher;
  Hasher.update(ArrayRef<uint8_t>(Bytes.bytes_begin(), Bytes.size()));
  auto Hash = Hasher.result();

  // Convert to an array [5*i32].
  ASTFileSignature Signature;
  auto LShift = [&](unsigned char Val, unsigned Shift) {
    return (uint32_t)Val << Shift;
  };
  for (int I = 0; I != 5; ++I)
    Signature[I] = LShift(Hash[I * 4 + 0], 24) | LShift(Hash[I * 4 + 1], 16) |
                   LShift(Hash[I * 4 + 2], 8) | LShift(Hash[I * 4 + 3], 0);

  return Signature;
}

ASTFileSignature ASTWriter::writeUnhashedControlBlock(Preprocessor &PP,
                                                      ASTContext &Context) {
  // Flush first to prepare the PCM hash (signature).
  Stream.FlushToWord();
  auto StartOfUnhashedControl = Stream.GetCurrentBitNo() >> 3;

  // Enter the block and prepare to write records.
  RecordData Record;
  Stream.EnterSubblock(UNHASHED_CONTROL_BLOCK_ID, 5);

  // For implicit modules, write the hash of the PCM as its signature.
  ASTFileSignature Signature;
  if (WritingModule &&
      PP.getHeaderSearchInfo().getHeaderSearchOpts().ModulesHashContent) {
    Signature = createSignature(StringRef(Buffer.begin(), StartOfUnhashedControl));
    Record.append(Signature.begin(), Signature.end());
    Stream.EmitRecord(SIGNATURE, Record);
    Record.clear();
  }

  // Diagnostic options.
  const auto &Diags = Context.getDiagnostics();
  const DiagnosticOptions &DiagOpts = Diags.getDiagnosticOptions();
#define DIAGOPT(Name, Bits, Default) Record.push_back(DiagOpts.Name);
#define ENUM_DIAGOPT(Name, Type, Bits, Default) \
  Record.push_back(static_cast<unsigned>(DiagOpts.get##Name()));
#include "clang/Basic/DiagnosticOptions.def"
  Record.push_back(DiagOpts.Warnings.size());
  for (unsigned I = 0, N = DiagOpts.Warnings.size(); I != N; ++I)
    AddString(DiagOpts.Warnings[I], Record);
  Record.push_back(DiagOpts.Remarks.size());
  for (unsigned I = 0, N = DiagOpts.Remarks.size(); I != N; ++I)
    AddString(DiagOpts.Remarks[I], Record);
  // Note: we don't serialize the log or serialization file names, because they
  // are generally transient files and will almost always be overridden.
  Stream.EmitRecord(DIAGNOSTIC_OPTIONS, Record);

  // Write out the diagnostic/pragma mappings.
  WritePragmaDiagnosticMappings(Diags, /* IsModule = */ WritingModule);

  // Leave the options block.
  Stream.ExitBlock();
  return Signature;
}

/// \brief Write the control block.
void ASTWriter::WriteControlBlock(Preprocessor &PP, ASTContext &Context,
                                  StringRef isysroot,
                                  const std::string &OutputFile) {
  using namespace llvm;
  Stream.EnterSubblock(CONTROL_BLOCK_ID, 5);
  RecordData Record;

  // Metadata
  auto MetadataAbbrev = std::make_shared<BitCodeAbbrev>();
  MetadataAbbrev->Add(BitCodeAbbrevOp(METADATA));
  MetadataAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 16)); // Major
  MetadataAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 16)); // Minor
  MetadataAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 16)); // Clang maj.
  MetadataAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 16)); // Clang min.
MetadataAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // Relocatable MetadataAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // Timestamps MetadataAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // Errors MetadataAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // SVN branch/tag unsigned MetadataAbbrevCode = Stream.EmitAbbrev(std::move(MetadataAbbrev)); assert((!WritingModule || isysroot.empty()) && "writing module as a relocatable PCH?"); { RecordData::value_type Record[] = {METADATA, VERSION_MAJOR, VERSION_MINOR, CLANG_VERSION_MAJOR, CLANG_VERSION_MINOR, !isysroot.empty(), IncludeTimestamps, ASTHasCompilerErrors}; Stream.EmitRecordWithBlob(MetadataAbbrevCode, Record, getClangFullRepositoryVersion()); } if (WritingModule) { // Module name auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(MODULE_NAME)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned AbbrevCode = Stream.EmitAbbrev(std::move(Abbrev)); RecordData::value_type Record[] = {MODULE_NAME}; Stream.EmitRecordWithBlob(AbbrevCode, Record, WritingModule->Name); } if (WritingModule && WritingModule->Directory) { SmallString<128> BaseDir(WritingModule->Directory->getName()); cleanPathForOutput(Context.getSourceManager().getFileManager(), BaseDir); // If the home of the module is the current working directory, then we // want to pick up the cwd of the build process loading the module, not // our cwd, when we load this module. if (!PP.getHeaderSearchInfo() .getHeaderSearchOpts() .ModuleMapFileHomeIsCwd || WritingModule->Directory->getName() != StringRef(".")) { // Module directory. auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(MODULE_DIRECTORY)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Directory unsigned AbbrevCode = Stream.EmitAbbrev(std::move(Abbrev)); RecordData::value_type Record[] = {MODULE_DIRECTORY}; Stream.EmitRecordWithBlob(AbbrevCode, Record, BaseDir); } // Write out all other paths relative to the base directory if possible. BaseDirectory.assign(BaseDir.begin(), BaseDir.end()); } else if (!isysroot.empty()) { // Write out paths relative to the sysroot if possible. BaseDirectory = isysroot; } // Module map file if (WritingModule && WritingModule->Kind == Module::ModuleMapModule) { Record.clear(); auto &Map = PP.getHeaderSearchInfo().getModuleMap(); AddPath(WritingModule->PresumedModuleMapFile.empty() ? Map.getModuleMapFileForUniquing(WritingModule)->getName() : StringRef(WritingModule->PresumedModuleMapFile), Record); // Additional module map files. if (auto *AdditionalModMaps = Map.getAdditionalModuleMapFiles(WritingModule)) { Record.push_back(AdditionalModMaps->size()); for (const FileEntry *F : *AdditionalModMaps) AddPath(F->getName(), Record); } else { Record.push_back(0); } Stream.EmitRecord(MODULE_MAP_FILE, Record); } // Imports if (Chain) { serialization::ModuleManager &Mgr = Chain->getModuleManager(); Record.clear(); for (ModuleFile &M : Mgr) { // Skip modules that weren't directly imported. if (!M.isDirectlyImported()) continue; Record.push_back((unsigned)M.Kind); // FIXME: Stable encoding AddSourceLocation(M.ImportLoc, Record); // If we have calculated signature, there is no need to store // the size or timestamp. Record.push_back(M.Signature ? 0 : M.File->getSize()); Record.push_back(M.Signature ? 0 : getTimestampForOutput(M.File)); for (auto I : M.Signature) Record.push_back(I); AddPath(M.FileName, Record); } Stream.EmitRecord(IMPORTS, Record); } // Write the options block. 
Stream.EnterSubblock(OPTIONS_BLOCK_ID, 4); // Language options. Record.clear(); const LangOptions &LangOpts = Context.getLangOpts(); #define LANGOPT(Name, Bits, Default, Description) \ Record.push_back(LangOpts.Name); #define ENUM_LANGOPT(Name, Type, Bits, Default, Description) \ Record.push_back(static_cast(LangOpts.get##Name())); #include "clang/Basic/LangOptions.def" #define SANITIZER(NAME, ID) \ Record.push_back(LangOpts.Sanitize.has(SanitizerKind::ID)); #include "clang/Basic/Sanitizers.def" Record.push_back(LangOpts.ModuleFeatures.size()); for (StringRef Feature : LangOpts.ModuleFeatures) AddString(Feature, Record); Record.push_back((unsigned) LangOpts.ObjCRuntime.getKind()); AddVersionTuple(LangOpts.ObjCRuntime.getVersion(), Record); AddString(LangOpts.CurrentModule, Record); // Comment options. Record.push_back(LangOpts.CommentOpts.BlockCommandNames.size()); for (const auto &I : LangOpts.CommentOpts.BlockCommandNames) { AddString(I, Record); } Record.push_back(LangOpts.CommentOpts.ParseAllComments); // OpenMP offloading options. Record.push_back(LangOpts.OMPTargetTriples.size()); for (auto &T : LangOpts.OMPTargetTriples) AddString(T.getTriple(), Record); AddString(LangOpts.OMPHostIRFile, Record); Stream.EmitRecord(LANGUAGE_OPTIONS, Record); // Target options. Record.clear(); const TargetInfo &Target = Context.getTargetInfo(); const TargetOptions &TargetOpts = Target.getTargetOpts(); AddString(TargetOpts.Triple, Record); AddString(TargetOpts.CPU, Record); AddString(TargetOpts.ABI, Record); Record.push_back(TargetOpts.FeaturesAsWritten.size()); for (unsigned I = 0, N = TargetOpts.FeaturesAsWritten.size(); I != N; ++I) { AddString(TargetOpts.FeaturesAsWritten[I], Record); } Record.push_back(TargetOpts.Features.size()); for (unsigned I = 0, N = TargetOpts.Features.size(); I != N; ++I) { AddString(TargetOpts.Features[I], Record); } Stream.EmitRecord(TARGET_OPTIONS, Record); // File system options. Record.clear(); const FileSystemOptions &FSOpts = Context.getSourceManager().getFileManager().getFileSystemOpts(); AddString(FSOpts.WorkingDir, Record); Stream.EmitRecord(FILE_SYSTEM_OPTIONS, Record); // Header search options. Record.clear(); const HeaderSearchOptions &HSOpts = PP.getHeaderSearchInfo().getHeaderSearchOpts(); AddString(HSOpts.Sysroot, Record); // Include entries. Record.push_back(HSOpts.UserEntries.size()); for (unsigned I = 0, N = HSOpts.UserEntries.size(); I != N; ++I) { const HeaderSearchOptions::Entry &Entry = HSOpts.UserEntries[I]; AddString(Entry.Path, Record); Record.push_back(static_cast(Entry.Group)); Record.push_back(Entry.IsFramework); Record.push_back(Entry.IgnoreSysRoot); } // System header prefixes. Record.push_back(HSOpts.SystemHeaderPrefixes.size()); for (unsigned I = 0, N = HSOpts.SystemHeaderPrefixes.size(); I != N; ++I) { AddString(HSOpts.SystemHeaderPrefixes[I].Prefix, Record); Record.push_back(HSOpts.SystemHeaderPrefixes[I].IsSystemHeader); } AddString(HSOpts.ResourceDir, Record); AddString(HSOpts.ModuleCachePath, Record); AddString(HSOpts.ModuleUserBuildPath, Record); Record.push_back(HSOpts.DisableModuleHash); Record.push_back(HSOpts.ImplicitModuleMaps); Record.push_back(HSOpts.ModuleMapFileHomeIsCwd); Record.push_back(HSOpts.UseBuiltinIncludes); Record.push_back(HSOpts.UseStandardSystemIncludes); Record.push_back(HSOpts.UseStandardCXXIncludes); Record.push_back(HSOpts.UseLibcxx); // Write out the specific module cache path that contains the module files. 
AddString(PP.getHeaderSearchInfo().getModuleCachePath(), Record); Stream.EmitRecord(HEADER_SEARCH_OPTIONS, Record); // Preprocessor options. Record.clear(); const PreprocessorOptions &PPOpts = PP.getPreprocessorOpts(); // Macro definitions. Record.push_back(PPOpts.Macros.size()); for (unsigned I = 0, N = PPOpts.Macros.size(); I != N; ++I) { AddString(PPOpts.Macros[I].first, Record); Record.push_back(PPOpts.Macros[I].second); } // Includes Record.push_back(PPOpts.Includes.size()); for (unsigned I = 0, N = PPOpts.Includes.size(); I != N; ++I) AddString(PPOpts.Includes[I], Record); // Macro includes Record.push_back(PPOpts.MacroIncludes.size()); for (unsigned I = 0, N = PPOpts.MacroIncludes.size(); I != N; ++I) AddString(PPOpts.MacroIncludes[I], Record); Record.push_back(PPOpts.UsePredefines); // Detailed record is important since it is used for the module cache hash. Record.push_back(PPOpts.DetailedRecord); AddString(PPOpts.ImplicitPCHInclude, Record); AddString(PPOpts.ImplicitPTHInclude, Record); Record.push_back(static_cast(PPOpts.ObjCXXARCStandardLibrary)); Stream.EmitRecord(PREPROCESSOR_OPTIONS, Record); // Leave the options block. Stream.ExitBlock(); // Original file name and file ID SourceManager &SM = Context.getSourceManager(); if (const FileEntry *MainFile = SM.getFileEntryForID(SM.getMainFileID())) { auto FileAbbrev = std::make_shared(); FileAbbrev->Add(BitCodeAbbrevOp(ORIGINAL_FILE)); FileAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // File ID FileAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // File name unsigned FileAbbrevCode = Stream.EmitAbbrev(std::move(FileAbbrev)); Record.clear(); Record.push_back(ORIGINAL_FILE); Record.push_back(SM.getMainFileID().getOpaqueValue()); EmitRecordWithPath(FileAbbrevCode, Record, MainFile->getName()); } Record.clear(); Record.push_back(SM.getMainFileID().getOpaqueValue()); Stream.EmitRecord(ORIGINAL_FILE_ID, Record); // Original PCH directory if (!OutputFile.empty() && OutputFile != "-") { auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(ORIGINAL_PCH_DIR)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // File name unsigned AbbrevCode = Stream.EmitAbbrev(std::move(Abbrev)); SmallString<128> OutputPath(OutputFile); SM.getFileManager().makeAbsolutePath(OutputPath); StringRef origDir = llvm::sys::path::parent_path(OutputPath); RecordData::value_type Record[] = {ORIGINAL_PCH_DIR}; Stream.EmitRecordWithBlob(AbbrevCode, Record, origDir); } WriteInputFiles(Context.SourceMgr, PP.getHeaderSearchInfo().getHeaderSearchOpts(), PP.getLangOpts().Modules); Stream.ExitBlock(); } namespace { /// \brief An input file. struct InputFileEntry { const FileEntry *File; bool IsSystemFile; bool IsTransient; bool BufferOverridden; bool IsTopLevelModuleMap; }; } // end anonymous namespace void ASTWriter::WriteInputFiles(SourceManager &SourceMgr, HeaderSearchOptions &HSOpts, bool Modules) { using namespace llvm; Stream.EnterSubblock(INPUT_FILES_BLOCK_ID, 4); // Create input-file abbreviation. 
auto IFAbbrev = std::make_shared(); IFAbbrev->Add(BitCodeAbbrevOp(INPUT_FILE)); IFAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // ID IFAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 12)); // Size IFAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 32)); // Modification time IFAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // Overridden IFAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // Transient IFAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // Module map IFAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // File name unsigned IFAbbrevCode = Stream.EmitAbbrev(std::move(IFAbbrev)); // Get all ContentCache objects for files, sorted by whether the file is a // system one or not. System files go at the back, users files at the front. std::deque SortedFiles; for (unsigned I = 1, N = SourceMgr.local_sloc_entry_size(); I != N; ++I) { // Get this source location entry. const SrcMgr::SLocEntry *SLoc = &SourceMgr.getLocalSLocEntry(I); assert(&SourceMgr.getSLocEntry(FileID::get(I)) == SLoc); // We only care about file entries that were not overridden. if (!SLoc->isFile()) continue; const SrcMgr::FileInfo &File = SLoc->getFile(); const SrcMgr::ContentCache *Cache = File.getContentCache(); if (!Cache->OrigEntry) continue; InputFileEntry Entry; Entry.File = Cache->OrigEntry; Entry.IsSystemFile = Cache->IsSystemFile; Entry.IsTransient = Cache->IsTransient; Entry.BufferOverridden = Cache->BufferOverridden; Entry.IsTopLevelModuleMap = isModuleMap(File.getFileCharacteristic()) && File.getIncludeLoc().isInvalid(); if (Cache->IsSystemFile) SortedFiles.push_back(Entry); else SortedFiles.push_front(Entry); } unsigned UserFilesNum = 0; // Write out all of the input files. std::vector InputFileOffsets; for (const auto &Entry : SortedFiles) { uint32_t &InputFileID = InputFileIDs[Entry.File]; if (InputFileID != 0) continue; // already recorded this file. // Record this entry's offset. InputFileOffsets.push_back(Stream.GetCurrentBitNo()); InputFileID = InputFileOffsets.size(); if (!Entry.IsSystemFile) ++UserFilesNum; // Emit size/modification time for this file. // And whether this file was overridden. RecordData::value_type Record[] = { INPUT_FILE, InputFileOffsets.size(), (uint64_t)Entry.File->getSize(), (uint64_t)getTimestampForOutput(Entry.File), Entry.BufferOverridden, Entry.IsTransient, Entry.IsTopLevelModuleMap}; EmitRecordWithPath(IFAbbrevCode, Record, Entry.File->getName()); } Stream.ExitBlock(); // Create input file offsets abbreviation. auto OffsetsAbbrev = std::make_shared(); OffsetsAbbrev->Add(BitCodeAbbrevOp(INPUT_FILE_OFFSETS)); OffsetsAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // # input files OffsetsAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // # non-system // input files OffsetsAbbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Array unsigned OffsetsAbbrevCode = Stream.EmitAbbrev(std::move(OffsetsAbbrev)); // Write input file offsets. RecordData::value_type Record[] = {INPUT_FILE_OFFSETS, InputFileOffsets.size(), UserFilesNum}; Stream.EmitRecordWithBlob(OffsetsAbbrevCode, Record, bytes(InputFileOffsets)); } //===----------------------------------------------------------------------===// // Source Manager Serialization //===----------------------------------------------------------------------===// /// \brief Create an abbreviation for the SLocEntry that refers to a /// file. 
static unsigned CreateSLocFileAbbrev(llvm::BitstreamWriter &Stream) {
  using namespace llvm;

  auto Abbrev = std::make_shared<BitCodeAbbrev>();
  Abbrev->Add(BitCodeAbbrevOp(SM_SLOC_FILE_ENTRY));
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // Offset
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // Include location
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 3)); // Characteristic
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // Line directives
  // FileEntry fields.
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // Input File ID
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // NumCreatedFIDs
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 24));  // FirstDeclIndex
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // NumDecls
  return Stream.EmitAbbrev(std::move(Abbrev));
}

/// \brief Create an abbreviation for the SLocEntry that refers to a
/// buffer.
static unsigned CreateSLocBufferAbbrev(llvm::BitstreamWriter &Stream) {
  using namespace llvm;

  auto Abbrev = std::make_shared<BitCodeAbbrev>();
  Abbrev->Add(BitCodeAbbrevOp(SM_SLOC_BUFFER_ENTRY));
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // Offset
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // Include location
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 3)); // Characteristic
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // Line directives
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob));     // Buffer name blob
  return Stream.EmitAbbrev(std::move(Abbrev));
}

/// \brief Create an abbreviation for the SLocEntry that refers to a
/// buffer's blob.
static unsigned CreateSLocBufferBlobAbbrev(llvm::BitstreamWriter &Stream,
                                           bool Compressed) {
  using namespace llvm;

  auto Abbrev = std::make_shared<BitCodeAbbrev>();
  Abbrev->Add(BitCodeAbbrevOp(Compressed ? SM_SLOC_BUFFER_BLOB_COMPRESSED
                                         : SM_SLOC_BUFFER_BLOB));
  if (Compressed)
    Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // Uncompressed size
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob));     // Blob
  return Stream.EmitAbbrev(std::move(Abbrev));
}

/// \brief Create an abbreviation for the SLocEntry that refers to a macro
/// expansion.
static unsigned CreateSLocExpansionAbbrev(llvm::BitstreamWriter &Stream) {
  using namespace llvm;

  auto Abbrev = std::make_shared<BitCodeAbbrev>();
  Abbrev->Add(BitCodeAbbrevOp(SM_SLOC_EXPANSION_ENTRY));
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // Offset
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // Spelling location
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // Start location
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // End location
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // Token length
  return Stream.EmitAbbrev(std::move(Abbrev));
}

namespace {

  // Trait used for the on-disk hash table of header search information.
  class HeaderFileInfoTrait {
    ASTWriter &Writer;

    // Keep track of the framework names we've used during serialization.
SmallVector FrameworkStringData; llvm::StringMap FrameworkNameOffset; public: HeaderFileInfoTrait(ASTWriter &Writer) : Writer(Writer) {} struct key_type { StringRef Filename; off_t Size; time_t ModTime; }; typedef const key_type &key_type_ref; using UnresolvedModule = llvm::PointerIntPair; struct data_type { const HeaderFileInfo &HFI; ArrayRef KnownHeaders; UnresolvedModule Unresolved; }; typedef const data_type &data_type_ref; typedef unsigned hash_value_type; typedef unsigned offset_type; hash_value_type ComputeHash(key_type_ref key) { // The hash is based only on size/time of the file, so that the reader can // match even when symlinking or excess path elements ("foo/../", "../") // change the form of the name. However, complete path is still the key. return llvm::hash_combine(key.Size, key.ModTime); } std::pair EmitKeyDataLength(raw_ostream& Out, key_type_ref key, data_type_ref Data) { using namespace llvm::support; endian::Writer LE(Out); unsigned KeyLen = key.Filename.size() + 1 + 8 + 8; LE.write(KeyLen); unsigned DataLen = 1 + 2 + 4 + 4; for (auto ModInfo : Data.KnownHeaders) if (Writer.getLocalOrImportedSubmoduleID(ModInfo.getModule())) DataLen += 4; if (Data.Unresolved.getPointer()) DataLen += 4; LE.write(DataLen); return std::make_pair(KeyLen, DataLen); } void EmitKey(raw_ostream& Out, key_type_ref key, unsigned KeyLen) { using namespace llvm::support; endian::Writer LE(Out); LE.write(key.Size); KeyLen -= 8; LE.write(key.ModTime); KeyLen -= 8; Out.write(key.Filename.data(), KeyLen); } void EmitData(raw_ostream &Out, key_type_ref key, data_type_ref Data, unsigned DataLen) { using namespace llvm::support; endian::Writer LE(Out); uint64_t Start = Out.tell(); (void)Start; unsigned char Flags = (Data.HFI.isImport << 5) | (Data.HFI.isPragmaOnce << 4) | (Data.HFI.DirInfo << 1) | Data.HFI.IndexHeaderMapHeader; LE.write(Flags); LE.write(Data.HFI.NumIncludes); if (!Data.HFI.ControllingMacro) LE.write(Data.HFI.ControllingMacroID); else LE.write(Writer.getIdentifierRef(Data.HFI.ControllingMacro)); unsigned Offset = 0; if (!Data.HFI.Framework.empty()) { // If this header refers into a framework, save the framework name. llvm::StringMap::iterator Pos = FrameworkNameOffset.find(Data.HFI.Framework); if (Pos == FrameworkNameOffset.end()) { Offset = FrameworkStringData.size() + 1; FrameworkStringData.append(Data.HFI.Framework.begin(), Data.HFI.Framework.end()); FrameworkStringData.push_back(0); FrameworkNameOffset[Data.HFI.Framework] = Offset; } else Offset = Pos->second; } LE.write(Offset); auto EmitModule = [&](Module *M, ModuleMap::ModuleHeaderRole Role) { if (uint32_t ModID = Writer.getLocalOrImportedSubmoduleID(M)) { uint32_t Value = (ModID << 2) | (unsigned)Role; assert((Value >> 2) == ModID && "overflow in header module info"); LE.write(Value); } }; // FIXME: If the header is excluded, we should write out some // record of that fact. for (auto ModInfo : Data.KnownHeaders) EmitModule(ModInfo.getModule(), ModInfo.getRole()); if (Data.Unresolved.getPointer()) EmitModule(Data.Unresolved.getPointer(), Data.Unresolved.getInt()); assert(Out.tell() - Start == DataLen && "Wrong data length"); } const char *strings_begin() const { return FrameworkStringData.begin(); } const char *strings_end() const { return FrameworkStringData.end(); } }; } // end anonymous namespace /// \brief Write the header search block for the list of files that /// /// \param HS The header search structure to save. 
void ASTWriter::WriteHeaderSearch(const HeaderSearch &HS) { HeaderFileInfoTrait GeneratorTrait(*this); llvm::OnDiskChainedHashTableGenerator Generator; SmallVector SavedStrings; unsigned NumHeaderSearchEntries = 0; // Find all unresolved headers for the current module. We generally will // have resolved them before we get here, but not necessarily: we might be // compiling a preprocessed module, where there is no requirement for the // original files to exist any more. const HeaderFileInfo Empty; // So we can take a reference. if (WritingModule) { llvm::SmallVector Worklist(1, WritingModule); while (!Worklist.empty()) { Module *M = Worklist.pop_back_val(); if (!M->isAvailable()) continue; // Map to disk files where possible, to pick up any missing stat // information. This also means we don't need to check the unresolved // headers list when emitting resolved headers in the first loop below. // FIXME: It'd be preferable to avoid doing this if we were given // sufficient stat information in the module map. HS.getModuleMap().resolveHeaderDirectives(M); // If the file didn't exist, we can still create a module if we were given // enough information in the module map. for (auto U : M->MissingHeaders) { // Check that we were given enough information to build a module // without this file existing on disk. if (!U.Size || (!U.ModTime && IncludeTimestamps)) { PP->Diag(U.FileNameLoc, diag::err_module_no_size_mtime_for_header) << WritingModule->getFullModuleName() << U.Size.hasValue() << U.FileName; continue; } // Form the effective relative pathname for the file. SmallString<128> Filename(M->Directory->getName()); llvm::sys::path::append(Filename, U.FileName); PreparePathForOutput(Filename); StringRef FilenameDup = strdup(Filename.c_str()); SavedStrings.push_back(FilenameDup.data()); HeaderFileInfoTrait::key_type Key = { FilenameDup, *U.Size, IncludeTimestamps ? *U.ModTime : 0 }; HeaderFileInfoTrait::data_type Data = { Empty, {}, {M, ModuleMap::headerKindToRole(U.Kind)} }; // FIXME: Deal with cases where there are multiple unresolved header // directives in different submodules for the same header. Generator.insert(Key, Data, GeneratorTrait); ++NumHeaderSearchEntries; } Worklist.append(M->submodule_begin(), M->submodule_end()); } } SmallVector FilesByUID; HS.getFileMgr().GetUniqueIDMapping(FilesByUID); if (FilesByUID.size() > HS.header_file_size()) FilesByUID.resize(HS.header_file_size()); for (unsigned UID = 0, LastUID = FilesByUID.size(); UID != LastUID; ++UID) { const FileEntry *File = FilesByUID[UID]; if (!File) continue; // Get the file info. This will load info from the external source if // necessary. Skip emitting this file if we have no information on it // as a header file (in which case HFI will be null) or if it hasn't // changed since it was loaded. Also skip it if it's for a modular header // from a different module; in that case, we rely on the module(s) // containing the header to provide this information. const HeaderFileInfo *HFI = HS.getExistingFileInfo(File, /*WantExternal*/!Chain); if (!HFI || (HFI->isModuleHeader && !HFI->isCompilingModuleHeader)) continue; // Massage the file path into an appropriate form. StringRef Filename = File->getName(); SmallString<128> FilenameTmp(Filename); if (PreparePathForOutput(FilenameTmp)) { // If we performed any translation on the file name at all, we need to // save this string, since the generator will refer to it later. 
Filename = StringRef(strdup(FilenameTmp.c_str())); SavedStrings.push_back(Filename.data()); } HeaderFileInfoTrait::key_type Key = { Filename, File->getSize(), getTimestampForOutput(File) }; HeaderFileInfoTrait::data_type Data = { *HFI, HS.getModuleMap().findAllModulesForHeader(File), {} }; Generator.insert(Key, Data, GeneratorTrait); ++NumHeaderSearchEntries; } // Create the on-disk hash table in a buffer. SmallString<4096> TableData; uint32_t BucketOffset; { using namespace llvm::support; llvm::raw_svector_ostream Out(TableData); // Make sure that no bucket is at offset 0 endian::Writer(Out).write(0); BucketOffset = Generator.Emit(Out, GeneratorTrait); } // Create a blob abbreviation using namespace llvm; auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(HEADER_SEARCH_TABLE)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); unsigned TableAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); // Write the header search table RecordData::value_type Record[] = {HEADER_SEARCH_TABLE, BucketOffset, NumHeaderSearchEntries, TableData.size()}; TableData.append(GeneratorTrait.strings_begin(),GeneratorTrait.strings_end()); Stream.EmitRecordWithBlob(TableAbbrev, Record, TableData); // Free all of the strings we had to duplicate. for (unsigned I = 0, N = SavedStrings.size(); I != N; ++I) free(const_cast(SavedStrings[I])); } static void emitBlob(llvm::BitstreamWriter &Stream, StringRef Blob, unsigned SLocBufferBlobCompressedAbbrv, unsigned SLocBufferBlobAbbrv) { typedef ASTWriter::RecordData::value_type RecordDataType; // Compress the buffer if possible. We expect that almost all PCM // consumers will not want its contents. SmallString<0> CompressedBuffer; if (llvm::zlib::isAvailable()) { llvm::Error E = llvm::zlib::compress(Blob.drop_back(1), CompressedBuffer); if (!E) { RecordDataType Record[] = {SM_SLOC_BUFFER_BLOB_COMPRESSED, Blob.size() - 1}; Stream.EmitRecordWithBlob(SLocBufferBlobCompressedAbbrv, Record, CompressedBuffer); return; } llvm::consumeError(std::move(E)); } RecordDataType Record[] = {SM_SLOC_BUFFER_BLOB}; Stream.EmitRecordWithBlob(SLocBufferBlobAbbrv, Record, Blob); } /// \brief Writes the block containing the serialized form of the /// source manager. /// /// TODO: We should probably use an on-disk hash table (stored in a /// blob), indexed based on the file name, so that we only create /// entries for files that we actually need. In the common case (no /// errors), we probably won't have to create file entries for any of /// the files in the AST. void ASTWriter::WriteSourceManagerBlock(SourceManager &SourceMgr, const Preprocessor &PP) { RecordData Record; // Enter the source manager block. Stream.EnterSubblock(SOURCE_MANAGER_BLOCK_ID, 4); // Abbreviations for the various kinds of source-location entries. unsigned SLocFileAbbrv = CreateSLocFileAbbrev(Stream); unsigned SLocBufferAbbrv = CreateSLocBufferAbbrev(Stream); unsigned SLocBufferBlobAbbrv = CreateSLocBufferBlobAbbrev(Stream, false); unsigned SLocBufferBlobCompressedAbbrv = CreateSLocBufferBlobAbbrev(Stream, true); unsigned SLocExpansionAbbrv = CreateSLocExpansionAbbrev(Stream); // Write out the source location entry table. We skip the first // entry, which is always the same dummy entry. 
std::vector SLocEntryOffsets; RecordData PreloadSLocs; SLocEntryOffsets.reserve(SourceMgr.local_sloc_entry_size() - 1); for (unsigned I = 1, N = SourceMgr.local_sloc_entry_size(); I != N; ++I) { // Get this source location entry. const SrcMgr::SLocEntry *SLoc = &SourceMgr.getLocalSLocEntry(I); FileID FID = FileID::get(I); assert(&SourceMgr.getSLocEntry(FID) == SLoc); // Record the offset of this source-location entry. SLocEntryOffsets.push_back(Stream.GetCurrentBitNo()); // Figure out which record code to use. unsigned Code; if (SLoc->isFile()) { const SrcMgr::ContentCache *Cache = SLoc->getFile().getContentCache(); if (Cache->OrigEntry) { Code = SM_SLOC_FILE_ENTRY; } else Code = SM_SLOC_BUFFER_ENTRY; } else Code = SM_SLOC_EXPANSION_ENTRY; Record.clear(); Record.push_back(Code); // Starting offset of this entry within this module, so skip the dummy. Record.push_back(SLoc->getOffset() - 2); if (SLoc->isFile()) { const SrcMgr::FileInfo &File = SLoc->getFile(); AddSourceLocation(File.getIncludeLoc(), Record); Record.push_back(File.getFileCharacteristic()); // FIXME: stable encoding Record.push_back(File.hasLineDirectives()); const SrcMgr::ContentCache *Content = File.getContentCache(); bool EmitBlob = false; if (Content->OrigEntry) { assert(Content->OrigEntry == Content->ContentsEntry && "Writing to AST an overridden file is not supported"); // The source location entry is a file. Emit input file ID. assert(InputFileIDs[Content->OrigEntry] != 0 && "Missed file entry"); Record.push_back(InputFileIDs[Content->OrigEntry]); Record.push_back(File.NumCreatedFIDs); FileDeclIDsTy::iterator FDI = FileDeclIDs.find(FID); if (FDI != FileDeclIDs.end()) { Record.push_back(FDI->second->FirstDeclIndex); Record.push_back(FDI->second->DeclIDs.size()); } else { Record.push_back(0); Record.push_back(0); } Stream.EmitRecordWithAbbrev(SLocFileAbbrv, Record); if (Content->BufferOverridden || Content->IsTransient) EmitBlob = true; } else { // The source location entry is a buffer. The blob associated // with this entry contains the contents of the buffer. // We add one to the size so that we capture the trailing NULL // that is required by llvm::MemoryBuffer::getMemBuffer (on // the reader side). const llvm::MemoryBuffer *Buffer = Content->getBuffer(PP.getDiagnostics(), PP.getSourceManager()); StringRef Name = Buffer->getBufferIdentifier(); Stream.EmitRecordWithBlob(SLocBufferAbbrv, Record, StringRef(Name.data(), Name.size() + 1)); EmitBlob = true; if (Name == "") PreloadSLocs.push_back(SLocEntryOffsets.size()); } if (EmitBlob) { // Include the implicit terminating null character in the on-disk buffer // if we're writing it uncompressed. const llvm::MemoryBuffer *Buffer = Content->getBuffer(PP.getDiagnostics(), PP.getSourceManager()); StringRef Blob(Buffer->getBufferStart(), Buffer->getBufferSize() + 1); emitBlob(Stream, Blob, SLocBufferBlobCompressedAbbrv, SLocBufferBlobAbbrv); } } else { // The source location entry is a macro expansion. const SrcMgr::ExpansionInfo &Expansion = SLoc->getExpansion(); AddSourceLocation(Expansion.getSpellingLoc(), Record); AddSourceLocation(Expansion.getExpansionLocStart(), Record); AddSourceLocation(Expansion.isMacroArgExpansion() ? SourceLocation() : Expansion.getExpansionLocEnd(), Record); // Compute the token length for this macro expansion. 
unsigned NextOffset = SourceMgr.getNextLocalOffset(); if (I + 1 != N) NextOffset = SourceMgr.getLocalSLocEntry(I + 1).getOffset(); Record.push_back(NextOffset - SLoc->getOffset() - 1); Stream.EmitRecordWithAbbrev(SLocExpansionAbbrv, Record); } } Stream.ExitBlock(); if (SLocEntryOffsets.empty()) return; // Write the source-location offsets table into the AST block. This // table is used for lazily loading source-location information. using namespace llvm; auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SOURCE_LOCATION_OFFSETS)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 16)); // # of slocs Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 16)); // total size Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // offsets unsigned SLocOffsetsAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); { RecordData::value_type Record[] = { SOURCE_LOCATION_OFFSETS, SLocEntryOffsets.size(), SourceMgr.getNextLocalOffset() - 1 /* skip dummy */}; Stream.EmitRecordWithBlob(SLocOffsetsAbbrev, Record, bytes(SLocEntryOffsets)); } // Write the source location entry preloads array, telling the AST // reader which source locations entries it should load eagerly. Stream.EmitRecord(SOURCE_LOCATION_PRELOADS, PreloadSLocs); // Write the line table. It depends on remapping working, so it must come // after the source location offsets. if (SourceMgr.hasLineTable()) { LineTableInfo &LineTable = SourceMgr.getLineTable(); Record.clear(); // Emit the needed file names. llvm::DenseMap FilenameMap; for (const auto &L : LineTable) { if (L.first.ID < 0) continue; for (auto &LE : L.second) { if (FilenameMap.insert(std::make_pair(LE.FilenameID, FilenameMap.size())).second) AddPath(LineTable.getFilename(LE.FilenameID), Record); } } Record.push_back(0); // Emit the line entries for (const auto &L : LineTable) { // Only emit entries for local files. if (L.first.ID < 0) continue; // Emit the file ID Record.push_back(L.first.ID); // Emit the line entries Record.push_back(L.second.size()); for (const auto &LE : L.second) { Record.push_back(LE.FileOffset); Record.push_back(LE.LineNo); Record.push_back(FilenameMap[LE.FilenameID]); Record.push_back((unsigned)LE.FileKind); Record.push_back(LE.IncludeOffset); } } Stream.EmitRecord(SOURCE_MANAGER_LINE_TABLE, Record); } } //===----------------------------------------------------------------------===// // Preprocessor Serialization //===----------------------------------------------------------------------===// static bool shouldIgnoreMacro(MacroDirective *MD, bool IsModule, const Preprocessor &PP) { if (MacroInfo *MI = MD->getMacroInfo()) if (MI->isBuiltinMacro()) return true; if (IsModule) { SourceLocation Loc = MD->getLocation(); if (Loc.isInvalid()) return true; if (PP.getSourceManager().getFileID(Loc) == PP.getPredefinesFileID()) return true; } return false; } /// \brief Writes the block containing the serialized form of the /// preprocessor. /// void ASTWriter::WritePreprocessor(const Preprocessor &PP, bool IsModule) { PreprocessingRecord *PPRec = PP.getPreprocessingRecord(); if (PPRec) WritePreprocessorDetail(*PPRec); RecordData Record; RecordData ModuleMacroRecord; // If the preprocessor __COUNTER__ value has been bumped, remember it. 
if (PP.getCounterValue() != 0) { RecordData::value_type Record[] = {PP.getCounterValue()}; Stream.EmitRecord(PP_COUNTER_VALUE, Record); } if (PP.isRecordingPreamble() && PP.hasRecordedPreamble()) { assert(!IsModule); for (const auto &Cond : PP.getPreambleConditionalStack()) { AddSourceLocation(Cond.IfLoc, Record); Record.push_back(Cond.WasSkipping); Record.push_back(Cond.FoundNonSkip); Record.push_back(Cond.FoundElse); } Stream.EmitRecord(PP_CONDITIONAL_STACK, Record); Record.clear(); } // Enter the preprocessor block. Stream.EnterSubblock(PREPROCESSOR_BLOCK_ID, 3); // If the AST file contains __DATE__ or __TIME__ emit a warning about this. // FIXME: Include a location for the use, and say which one was used. if (PP.SawDateOrTime()) PP.Diag(SourceLocation(), diag::warn_module_uses_date_time) << IsModule; // Loop over all the macro directives that are live at the end of the file, // emitting each to the PP section. // Construct the list of identifiers with macro directives that need to be // serialized. SmallVector MacroIdentifiers; for (auto &Id : PP.getIdentifierTable()) if (Id.second->hadMacroDefinition() && (!Id.second->isFromAST() || Id.second->hasChangedSinceDeserialization())) MacroIdentifiers.push_back(Id.second); // Sort the set of macro definitions that need to be serialized by the // name of the macro, to provide a stable ordering. std::sort(MacroIdentifiers.begin(), MacroIdentifiers.end(), llvm::less_ptr()); // Emit the macro directives as a list and associate the offset with the // identifier they belong to. for (const IdentifierInfo *Name : MacroIdentifiers) { MacroDirective *MD = PP.getLocalMacroDirectiveHistory(Name); auto StartOffset = Stream.GetCurrentBitNo(); // Emit the macro directives in reverse source order. for (; MD; MD = MD->getPrevious()) { // Once we hit an ignored macro, we're done: the rest of the chain // will all be ignored macros. if (shouldIgnoreMacro(MD, IsModule, PP)) break; AddSourceLocation(MD->getLocation(), Record); Record.push_back(MD->getKind()); if (auto *DefMD = dyn_cast(MD)) { Record.push_back(getMacroRef(DefMD->getInfo(), Name)); } else if (auto *VisMD = dyn_cast(MD)) { Record.push_back(VisMD->isPublic()); } } // Write out any exported module macros. bool EmittedModuleMacros = false; // We write out exported module macros for PCH as well. auto Leafs = PP.getLeafModuleMacros(Name); SmallVector Worklist(Leafs.begin(), Leafs.end()); llvm::DenseMap Visits; while (!Worklist.empty()) { auto *Macro = Worklist.pop_back_val(); // Emit a record indicating this submodule exports this macro. ModuleMacroRecord.push_back( getSubmoduleID(Macro->getOwningModule())); ModuleMacroRecord.push_back(getMacroRef(Macro->getMacroInfo(), Name)); for (auto *M : Macro->overrides()) ModuleMacroRecord.push_back(getSubmoduleID(M->getOwningModule())); Stream.EmitRecord(PP_MODULE_MACRO, ModuleMacroRecord); ModuleMacroRecord.clear(); // Enqueue overridden macros once we've visited all their ancestors. for (auto *M : Macro->overrides()) if (++Visits[M] == M->getNumOverridingMacros()) Worklist.push_back(M); EmittedModuleMacros = true; } if (Record.empty() && !EmittedModuleMacros) continue; IdentMacroDirectivesOffsetMap[Name] = StartOffset; Stream.EmitRecord(PP_MACRO_DIRECTIVE_HISTORY, Record); Record.clear(); } /// \brief Offsets of each of the macros into the bitstream, indexed by /// the local macro ID /// /// For each identifier that is associated with a macro, this map /// provides the offset into the bitstream where that macro is /// defined. 
std::vector MacroOffsets; for (unsigned I = 0, N = MacroInfosToEmit.size(); I != N; ++I) { const IdentifierInfo *Name = MacroInfosToEmit[I].Name; MacroInfo *MI = MacroInfosToEmit[I].MI; MacroID ID = MacroInfosToEmit[I].ID; if (ID < FirstMacroID) { assert(0 && "Loaded MacroInfo entered MacroInfosToEmit ?"); continue; } // Record the local offset of this macro. unsigned Index = ID - FirstMacroID; if (Index == MacroOffsets.size()) MacroOffsets.push_back(Stream.GetCurrentBitNo()); else { if (Index > MacroOffsets.size()) MacroOffsets.resize(Index + 1); MacroOffsets[Index] = Stream.GetCurrentBitNo(); } AddIdentifierRef(Name, Record); AddSourceLocation(MI->getDefinitionLoc(), Record); AddSourceLocation(MI->getDefinitionEndLoc(), Record); Record.push_back(MI->isUsed()); Record.push_back(MI->isUsedForHeaderGuard()); unsigned Code; if (MI->isObjectLike()) { Code = PP_MACRO_OBJECT_LIKE; } else { Code = PP_MACRO_FUNCTION_LIKE; Record.push_back(MI->isC99Varargs()); Record.push_back(MI->isGNUVarargs()); Record.push_back(MI->hasCommaPasting()); Record.push_back(MI->getNumParams()); for (const IdentifierInfo *Param : MI->params()) AddIdentifierRef(Param, Record); } // If we have a detailed preprocessing record, record the macro definition // ID that corresponds to this macro. if (PPRec) Record.push_back(MacroDefinitions[PPRec->findMacroDefinition(MI)]); Stream.EmitRecord(Code, Record); Record.clear(); // Emit the tokens array. for (unsigned TokNo = 0, e = MI->getNumTokens(); TokNo != e; ++TokNo) { // Note that we know that the preprocessor does not have any annotation // tokens in it because they are created by the parser, and thus can't // be in a macro definition. const Token &Tok = MI->getReplacementToken(TokNo); AddToken(Tok, Record); Stream.EmitRecord(PP_TOKEN, Record); Record.clear(); } ++NumMacros; } Stream.ExitBlock(); // Write the offsets table for macro IDs. using namespace llvm; auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(MACRO_OFFSET)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // # of macros Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // first ID Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); unsigned MacroOffsetAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); { RecordData::value_type Record[] = {MACRO_OFFSET, MacroOffsets.size(), FirstMacroID - NUM_PREDEF_MACRO_IDS}; Stream.EmitRecordWithBlob(MacroOffsetAbbrev, Record, bytes(MacroOffsets)); } } void ASTWriter::WritePreprocessorDetail(PreprocessingRecord &PPRec) { if (PPRec.local_begin() == PPRec.local_end()) return; SmallVector PreprocessedEntityOffsets; // Enter the preprocessor block. Stream.EnterSubblock(PREPROCESSOR_DETAIL_BLOCK_ID, 3); // If the preprocessor has a preprocessing record, emit it. unsigned NumPreprocessingRecords = 0; using namespace llvm; // Set up the abbreviation for unsigned InclusionAbbrev = 0; { auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(PPD_INCLUSION_DIRECTIVE)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // filename length Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // in quotes Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 2)); // kind Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // imported module Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); InclusionAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); } unsigned FirstPreprocessorEntityID = (Chain ? 
PPRec.getNumLoadedPreprocessedEntities() : 0) + NUM_PREDEF_PP_ENTITY_IDS; unsigned NextPreprocessorEntityID = FirstPreprocessorEntityID; RecordData Record; for (PreprocessingRecord::iterator E = PPRec.local_begin(), EEnd = PPRec.local_end(); E != EEnd; (void)++E, ++NumPreprocessingRecords, ++NextPreprocessorEntityID) { Record.clear(); PreprocessedEntityOffsets.push_back( PPEntityOffset((*E)->getSourceRange(), Stream.GetCurrentBitNo())); if (auto *MD = dyn_cast(*E)) { // Record this macro definition's ID. MacroDefinitions[MD] = NextPreprocessorEntityID; AddIdentifierRef(MD->getName(), Record); Stream.EmitRecord(PPD_MACRO_DEFINITION, Record); continue; } if (auto *ME = dyn_cast(*E)) { Record.push_back(ME->isBuiltinMacro()); if (ME->isBuiltinMacro()) AddIdentifierRef(ME->getName(), Record); else Record.push_back(MacroDefinitions[ME->getDefinition()]); Stream.EmitRecord(PPD_MACRO_EXPANSION, Record); continue; } if (auto *ID = dyn_cast(*E)) { Record.push_back(PPD_INCLUSION_DIRECTIVE); Record.push_back(ID->getFileName().size()); Record.push_back(ID->wasInQuotes()); Record.push_back(static_cast(ID->getKind())); Record.push_back(ID->importedModule()); SmallString<64> Buffer; Buffer += ID->getFileName(); // Check that the FileEntry is not null because it was not resolved and // we create a PCH even with compiler errors. if (ID->getFile()) Buffer += ID->getFile()->getName(); Stream.EmitRecordWithBlob(InclusionAbbrev, Record, Buffer); continue; } llvm_unreachable("Unhandled PreprocessedEntity in ASTWriter"); } Stream.ExitBlock(); // Write the offsets table for the preprocessing record. if (NumPreprocessingRecords > 0) { assert(PreprocessedEntityOffsets.size() == NumPreprocessingRecords); // Write the offsets table for identifier IDs. using namespace llvm; auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(PPD_ENTITIES_OFFSETS)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // first pp entity Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); unsigned PPEOffsetAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); RecordData::value_type Record[] = {PPD_ENTITIES_OFFSETS, FirstPreprocessorEntityID - NUM_PREDEF_PP_ENTITY_IDS}; Stream.EmitRecordWithBlob(PPEOffsetAbbrev, Record, bytes(PreprocessedEntityOffsets)); } } unsigned ASTWriter::getLocalOrImportedSubmoduleID(Module *Mod) { if (!Mod) return 0; llvm::DenseMap::iterator Known = SubmoduleIDs.find(Mod); if (Known != SubmoduleIDs.end()) return Known->second; auto *Top = Mod->getTopLevelModule(); if (Top != WritingModule && (getLangOpts().CompilingPCH || !Top->fullModuleNameIs(StringRef(getLangOpts().CurrentModule)))) return 0; return SubmoduleIDs[Mod] = NextSubmoduleID++; } unsigned ASTWriter::getSubmoduleID(Module *Mod) { // FIXME: This can easily happen, if we have a reference to a submodule that // did not result in us loading a module file for that submodule. For // instance, a cross-top-level-module 'conflict' declaration will hit this. unsigned ID = getLocalOrImportedSubmoduleID(Mod); assert((ID || !Mod) && "asked for module ID for non-local, non-imported module"); return ID; } /// \brief Compute the number of modules within the given tree (including the /// given module). static unsigned getNumberOfModules(Module *Mod) { unsigned ChildModules = 0; for (auto Sub = Mod->submodule_begin(), SubEnd = Mod->submodule_end(); Sub != SubEnd; ++Sub) ChildModules += getNumberOfModules(*Sub); return ChildModules + 1; } void ASTWriter::WriteSubmodules(Module *WritingModule) { // Enter the submodule description block. 
Stream.EnterSubblock(SUBMODULE_BLOCK_ID, /*bits for abbreviations*/5); // Write the abbreviations needed for the submodules block. using namespace llvm; auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_DEFINITION)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // ID Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // Parent Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // IsFramework Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // IsExplicit Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // IsSystem Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // IsExternC Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // InferSubmodules... Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // InferExplicit... Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // InferExportWild... Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // ConfigMacrosExh... Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned DefinitionAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_UMBRELLA_HEADER)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned UmbrellaAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_HEADER)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned HeaderAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_TOPHEADER)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned TopHeaderAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_UMBRELLA_DIR)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned UmbrellaDirAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_REQUIRES)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // State Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Feature unsigned RequiresAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_EXCLUDED_HEADER)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned ExcludedHeaderAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_TEXTUAL_HEADER)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned TextualHeaderAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_PRIVATE_HEADER)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned PrivateHeaderAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_PRIVATE_TEXTUAL_HEADER)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned PrivateTextualHeaderAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_LINK_LIBRARY)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // IsFramework Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Name unsigned LinkLibraryAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_CONFIG_MACRO)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Macro name unsigned ConfigMacroAbbrev = 
Stream.EmitAbbrev(std::move(Abbrev)); Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SUBMODULE_CONFLICT)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // Other module Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // Message unsigned ConflictAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); // Write the submodule metadata block. RecordData::value_type Record[] = { getNumberOfModules(WritingModule), FirstSubmoduleID - NUM_PREDEF_SUBMODULE_IDS, (unsigned)WritingModule->Kind}; Stream.EmitRecord(SUBMODULE_METADATA, Record); // Write all of the submodules. std::queue Q; Q.push(WritingModule); while (!Q.empty()) { Module *Mod = Q.front(); Q.pop(); unsigned ID = getSubmoduleID(Mod); uint64_t ParentID = 0; if (Mod->Parent) { assert(SubmoduleIDs[Mod->Parent] && "Submodule parent not written?"); ParentID = SubmoduleIDs[Mod->Parent]; } // Emit the definition of the block. { RecordData::value_type Record[] = {SUBMODULE_DEFINITION, ID, ParentID, Mod->IsFramework, Mod->IsExplicit, Mod->IsSystem, Mod->IsExternC, Mod->InferSubmodules, Mod->InferExplicitSubmodules, Mod->InferExportWildcard, Mod->ConfigMacrosExhaustive}; Stream.EmitRecordWithBlob(DefinitionAbbrev, Record, Mod->Name); } // Emit the requirements. for (const auto &R : Mod->Requirements) { RecordData::value_type Record[] = {SUBMODULE_REQUIRES, R.second}; Stream.EmitRecordWithBlob(RequiresAbbrev, Record, R.first); } // Emit the umbrella header, if there is one. if (auto UmbrellaHeader = Mod->getUmbrellaHeader()) { RecordData::value_type Record[] = {SUBMODULE_UMBRELLA_HEADER}; Stream.EmitRecordWithBlob(UmbrellaAbbrev, Record, UmbrellaHeader.NameAsWritten); } else if (auto UmbrellaDir = Mod->getUmbrellaDir()) { RecordData::value_type Record[] = {SUBMODULE_UMBRELLA_DIR}; Stream.EmitRecordWithBlob(UmbrellaDirAbbrev, Record, UmbrellaDir.NameAsWritten); } // Emit the headers. struct { unsigned RecordKind; unsigned Abbrev; Module::HeaderKind HeaderKind; } HeaderLists[] = { {SUBMODULE_HEADER, HeaderAbbrev, Module::HK_Normal}, {SUBMODULE_TEXTUAL_HEADER, TextualHeaderAbbrev, Module::HK_Textual}, {SUBMODULE_PRIVATE_HEADER, PrivateHeaderAbbrev, Module::HK_Private}, {SUBMODULE_PRIVATE_TEXTUAL_HEADER, PrivateTextualHeaderAbbrev, Module::HK_PrivateTextual}, {SUBMODULE_EXCLUDED_HEADER, ExcludedHeaderAbbrev, Module::HK_Excluded} }; for (auto &HL : HeaderLists) { RecordData::value_type Record[] = {HL.RecordKind}; for (auto &H : Mod->Headers[HL.HeaderKind]) Stream.EmitRecordWithBlob(HL.Abbrev, Record, H.NameAsWritten); } // Emit the top headers. { auto TopHeaders = Mod->getTopHeaders(PP->getFileManager()); RecordData::value_type Record[] = {SUBMODULE_TOPHEADER}; for (auto *H : TopHeaders) Stream.EmitRecordWithBlob(TopHeaderAbbrev, Record, H->getName()); } // Emit the imports. if (!Mod->Imports.empty()) { RecordData Record; for (auto *I : Mod->Imports) Record.push_back(getSubmoduleID(I)); Stream.EmitRecord(SUBMODULE_IMPORTS, Record); } // Emit the exports. if (!Mod->Exports.empty()) { RecordData Record; for (const auto &E : Mod->Exports) { // FIXME: This may fail; we don't require that all exported modules // are local or imported. Record.push_back(getSubmoduleID(E.getPointer())); Record.push_back(E.getInt()); } Stream.EmitRecord(SUBMODULE_EXPORTS, Record); } //FIXME: How do we emit the 'use'd modules? They may not be submodules. // Might be unnecessary as use declarations are only used to build the // module itself. // Emit the link libraries. 
for (const auto &LL : Mod->LinkLibraries) { RecordData::value_type Record[] = {SUBMODULE_LINK_LIBRARY, LL.IsFramework}; Stream.EmitRecordWithBlob(LinkLibraryAbbrev, Record, LL.Library); } // Emit the conflicts. for (const auto &C : Mod->Conflicts) { // FIXME: This may fail; we don't require that all conflicting modules // are local or imported. RecordData::value_type Record[] = {SUBMODULE_CONFLICT, getSubmoduleID(C.Other)}; Stream.EmitRecordWithBlob(ConflictAbbrev, Record, C.Message); } // Emit the configuration macros. for (const auto &CM : Mod->ConfigMacros) { RecordData::value_type Record[] = {SUBMODULE_CONFIG_MACRO}; Stream.EmitRecordWithBlob(ConfigMacroAbbrev, Record, CM); } // Emit the initializers, if any. RecordData Inits; for (Decl *D : Context->getModuleInitializers(Mod)) Inits.push_back(GetDeclRef(D)); if (!Inits.empty()) Stream.EmitRecord(SUBMODULE_INITIALIZERS, Inits); // Queue up the submodules of this module. for (auto *M : Mod->submodules()) Q.push(M); } Stream.ExitBlock(); assert((NextSubmoduleID - FirstSubmoduleID == getNumberOfModules(WritingModule)) && "Wrong # of submodules; found a reference to a non-local, " "non-imported submodule?"); } void ASTWriter::WritePragmaDiagnosticMappings(const DiagnosticsEngine &Diag, bool isModule) { llvm::SmallDenseMap DiagStateIDMap; unsigned CurrID = 0; RecordData Record; auto EncodeDiagStateFlags = [](const DiagnosticsEngine::DiagState *DS) -> unsigned { unsigned Result = (unsigned)DS->ExtBehavior; for (unsigned Val : {(unsigned)DS->IgnoreAllWarnings, (unsigned)DS->EnableAllWarnings, (unsigned)DS->WarningsAsErrors, (unsigned)DS->ErrorsAsFatal, (unsigned)DS->SuppressSystemWarnings}) Result = (Result << 1) | Val; return Result; }; unsigned Flags = EncodeDiagStateFlags(Diag.DiagStatesByLoc.FirstDiagState); Record.push_back(Flags); auto AddDiagState = [&](const DiagnosticsEngine::DiagState *State, bool IncludeNonPragmaStates) { // Ensure that the diagnostic state wasn't modified since it was created. // We will not correctly round-trip this information otherwise. assert(Flags == EncodeDiagStateFlags(State) && "diag state flags vary in single AST file"); unsigned &DiagStateID = DiagStateIDMap[State]; Record.push_back(DiagStateID); if (DiagStateID == 0) { DiagStateID = ++CurrID; // Add a placeholder for the number of mappings. auto SizeIdx = Record.size(); Record.emplace_back(); for (const auto &I : *State) { if (I.second.isPragma() || IncludeNonPragmaStates) { Record.push_back(I.first); Record.push_back(I.second.serialize()); } } // Update the placeholder. Record[SizeIdx] = (Record.size() - SizeIdx) / 2; } }; AddDiagState(Diag.DiagStatesByLoc.FirstDiagState, isModule); // Reserve a spot for the number of locations with state transitions. auto NumLocationsIdx = Record.size(); Record.emplace_back(); // Emit the state transitions. unsigned NumLocations = 0; for (auto &FileIDAndFile : Diag.DiagStatesByLoc.Files) { if (!FileIDAndFile.first.isValid() || !FileIDAndFile.second.HasLocalTransitions) continue; ++NumLocations; AddSourceLocation(Diag.SourceMgr->getLocForStartOfFile(FileIDAndFile.first), Record); Record.push_back(FileIDAndFile.second.StateTransitions.size()); for (auto &StatePoint : FileIDAndFile.second.StateTransitions) { Record.push_back(StatePoint.Offset); AddDiagState(StatePoint.State, false); } } // Backpatch the number of locations. Record[NumLocationsIdx] = NumLocations; // Emit CurDiagStateLoc. Do it last in order to match source order. 
  //
  // This also protects against a hypothetical corner case with simulating
  // -Werror settings for implicit modules in the ASTReader, where reading
  // CurDiagState out of context could change whether warning pragmas are
  // treated as errors.
  AddSourceLocation(Diag.DiagStatesByLoc.CurDiagStateLoc, Record);
  AddDiagState(Diag.DiagStatesByLoc.CurDiagState, false);

  Stream.EmitRecord(DIAG_PRAGMA_MAPPINGS, Record);
}

//===----------------------------------------------------------------------===//
// Type Serialization
//===----------------------------------------------------------------------===//

/// \brief Write the representation of a type to the AST stream.
void ASTWriter::WriteType(QualType T) {
  TypeIdx &IdxRef = TypeIdxs[T];
  if (IdxRef.getIndex() == 0) // we haven't seen this type before.
    IdxRef = TypeIdx(NextTypeID++);
  TypeIdx Idx = IdxRef;

  assert(Idx.getIndex() >= FirstTypeID && "Re-writing a type from a prior AST");

  RecordData Record;

  // Emit the type's representation.
  ASTTypeWriter W(*this, Record);
  W.Visit(T);
  uint64_t Offset = W.Emit();

  // Record the offset for this type.
  unsigned Index = Idx.getIndex() - FirstTypeID;
  if (TypeOffsets.size() == Index)
    TypeOffsets.push_back(Offset);
  else if (TypeOffsets.size() < Index) {
    TypeOffsets.resize(Index + 1);
    TypeOffsets[Index] = Offset;
  } else {
    llvm_unreachable("Types emitted in wrong order");
  }
}

//===----------------------------------------------------------------------===//
// Declaration Serialization
//===----------------------------------------------------------------------===//

/// \brief Write the block containing all of the declaration IDs
/// lexically declared within the given DeclContext.
///
/// \returns the offset of the DECL_CONTEXT_LEXICAL block within the
/// bitstream, or 0 if no block was written.
uint64_t ASTWriter::WriteDeclContextLexicalBlock(ASTContext &Context,
                                                 DeclContext *DC) {
  if (DC->decls_empty())
    return 0;

  uint64_t Offset = Stream.GetCurrentBitNo();
  SmallVector<uint32_t, 128> KindDeclPairs;
  for (const auto *D : DC->decls()) {
    KindDeclPairs.push_back(D->getKind());
    KindDeclPairs.push_back(GetDeclRef(D));
  }

  ++NumLexicalDeclContexts;
  RecordData::value_type Record[] = {DECL_CONTEXT_LEXICAL};
  Stream.EmitRecordWithBlob(DeclContextLexicalAbbrev, Record,
                            bytes(KindDeclPairs));
  return Offset;
}

void ASTWriter::WriteTypeDeclOffsets() {
  using namespace llvm;

  // Write the type offsets array
  auto Abbrev = std::make_shared<BitCodeAbbrev>();
  Abbrev->Add(BitCodeAbbrevOp(TYPE_OFFSET));
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // # of types
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // base type index
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // types block
  unsigned TypeOffsetAbbrev = Stream.EmitAbbrev(std::move(Abbrev));
  {
    RecordData::value_type Record[] = {TYPE_OFFSET, TypeOffsets.size(),
                                       FirstTypeID - NUM_PREDEF_TYPE_IDS};
    Stream.EmitRecordWithBlob(TypeOffsetAbbrev, Record, bytes(TypeOffsets));
  }

  // Write the declaration offsets array
  Abbrev = std::make_shared<BitCodeAbbrev>();
  Abbrev->Add(BitCodeAbbrevOp(DECL_OFFSET));
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // # of declarations
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // base decl ID
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); // declarations block
  unsigned DeclOffsetAbbrev = Stream.EmitAbbrev(std::move(Abbrev));
  {
    RecordData::value_type Record[] = {DECL_OFFSET, DeclOffsets.size(),
                                       FirstDeclID - NUM_PREDEF_DECL_IDS};
    Stream.EmitRecordWithBlob(DeclOffsetAbbrev, Record, bytes(DeclOffsets));
  }
}

void ASTWriter::WriteFileDeclIDsMap() {
  using namespace llvm;

  SmallVector<std::pair<FileID, DeclIDInFileInfo *>, 64> SortedFileDeclIDs(
      FileDeclIDs.begin(), FileDeclIDs.end());
  std::sort(SortedFileDeclIDs.begin(), SortedFileDeclIDs.end(),
            llvm::less_first());

  // Join the vectors of DeclIDs from all files.
  SmallVector<DeclID, 256> FileGroupedDeclIDs;
  for (auto &FileDeclEntry : SortedFileDeclIDs) {
    DeclIDInFileInfo &Info = *FileDeclEntry.second;
    Info.FirstDeclIndex = FileGroupedDeclIDs.size();
    for (auto &LocDeclEntry : Info.DeclIDs)
      FileGroupedDeclIDs.push_back(LocDeclEntry.second);
  }

  auto Abbrev = std::make_shared<BitCodeAbbrev>();
  Abbrev->Add(BitCodeAbbrevOp(FILE_SORTED_DECLS));
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32));
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob));
  unsigned AbbrevCode = Stream.EmitAbbrev(std::move(Abbrev));
  RecordData::value_type Record[] = {FILE_SORTED_DECLS,
                                     FileGroupedDeclIDs.size()};
  Stream.EmitRecordWithBlob(AbbrevCode, Record, bytes(FileGroupedDeclIDs));
}

void ASTWriter::WriteComments() {
  Stream.EnterSubblock(COMMENTS_BLOCK_ID, 3);
  ArrayRef<RawComment *> RawComments = Context->Comments.getComments();
  RecordData Record;
  for (const auto *I : RawComments) {
    Record.clear();
    AddSourceRange(I->getSourceRange(), Record);
    Record.push_back(I->getKind());
    Record.push_back(I->isTrailingComment());
    Record.push_back(I->isAlmostTrailingComment());
    Stream.EmitRecord(COMMENTS_RAW_COMMENT, Record);
  }
  Stream.ExitBlock();
}

//===----------------------------------------------------------------------===//
// Global Method Pool and Selector Serialization
//===----------------------------------------------------------------------===//

namespace {

// Trait used for the on-disk hash table used in the method pool.
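// The trait below feeds llvm::OnDiskChainedHashTableGenerator: each key is a
// selector (its argument count followed by the identifier IDs of its slots),
// and each value carries the selector ID together with the instance and
// factory method lists, so the reader can look methods up lazily.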
class ASTMethodPoolTrait { ASTWriter &Writer; public: typedef Selector key_type; typedef key_type key_type_ref; struct data_type { SelectorID ID; ObjCMethodList Instance, Factory; }; typedef const data_type& data_type_ref; typedef unsigned hash_value_type; typedef unsigned offset_type; explicit ASTMethodPoolTrait(ASTWriter &Writer) : Writer(Writer) { } static hash_value_type ComputeHash(Selector Sel) { return serialization::ComputeHash(Sel); } std::pair EmitKeyDataLength(raw_ostream& Out, Selector Sel, data_type_ref Methods) { using namespace llvm::support; endian::Writer LE(Out); unsigned KeyLen = 2 + (Sel.getNumArgs()? Sel.getNumArgs() * 4 : 4); LE.write(KeyLen); unsigned DataLen = 4 + 2 + 2; // 2 bytes for each of the method counts for (const ObjCMethodList *Method = &Methods.Instance; Method; Method = Method->getNext()) if (Method->getMethod()) DataLen += 4; for (const ObjCMethodList *Method = &Methods.Factory; Method; Method = Method->getNext()) if (Method->getMethod()) DataLen += 4; LE.write(DataLen); return std::make_pair(KeyLen, DataLen); } void EmitKey(raw_ostream& Out, Selector Sel, unsigned) { using namespace llvm::support; endian::Writer LE(Out); uint64_t Start = Out.tell(); assert((Start >> 32) == 0 && "Selector key offset too large"); Writer.SetSelectorOffset(Sel, Start); unsigned N = Sel.getNumArgs(); LE.write(N); if (N == 0) N = 1; for (unsigned I = 0; I != N; ++I) LE.write( Writer.getIdentifierRef(Sel.getIdentifierInfoForSlot(I))); } void EmitData(raw_ostream& Out, key_type_ref, data_type_ref Methods, unsigned DataLen) { using namespace llvm::support; endian::Writer LE(Out); uint64_t Start = Out.tell(); (void)Start; LE.write(Methods.ID); unsigned NumInstanceMethods = 0; for (const ObjCMethodList *Method = &Methods.Instance; Method; Method = Method->getNext()) if (Method->getMethod()) ++NumInstanceMethods; unsigned NumFactoryMethods = 0; for (const ObjCMethodList *Method = &Methods.Factory; Method; Method = Method->getNext()) if (Method->getMethod()) ++NumFactoryMethods; unsigned InstanceBits = Methods.Instance.getBits(); assert(InstanceBits < 4); unsigned InstanceHasMoreThanOneDeclBit = Methods.Instance.hasMoreThanOneDecl(); unsigned FullInstanceBits = (NumInstanceMethods << 3) | (InstanceHasMoreThanOneDeclBit << 2) | InstanceBits; unsigned FactoryBits = Methods.Factory.getBits(); assert(FactoryBits < 4); unsigned FactoryHasMoreThanOneDeclBit = Methods.Factory.hasMoreThanOneDecl(); unsigned FullFactoryBits = (NumFactoryMethods << 3) | (FactoryHasMoreThanOneDeclBit << 2) | FactoryBits; LE.write(FullInstanceBits); LE.write(FullFactoryBits); for (const ObjCMethodList *Method = &Methods.Instance; Method; Method = Method->getNext()) if (Method->getMethod()) LE.write(Writer.getDeclID(Method->getMethod())); for (const ObjCMethodList *Method = &Methods.Factory; Method; Method = Method->getNext()) if (Method->getMethod()) LE.write(Writer.getDeclID(Method->getMethod())); assert(Out.tell() - Start == DataLen && "Data length is wrong"); } }; } // end anonymous namespace /// \brief Write ObjC data: selectors and the method pool. /// /// The method pool contains both instance and factory methods, stored /// in an on-disk hash table indexed by the selector. The hash table also /// contains an empty entry for every other selector known to Sema. void ASTWriter::WriteSelectors(Sema &SemaRef) { using namespace llvm; // Do we have to do anything at all? 
if (SemaRef.MethodPool.empty() && SelectorIDs.empty()) return; unsigned NumTableEntries = 0; // Create and write out the blob that contains selectors and the method pool. { llvm::OnDiskChainedHashTableGenerator Generator; ASTMethodPoolTrait Trait(*this); // Create the on-disk hash table representation. We walk through every // selector we've seen and look it up in the method pool. SelectorOffsets.resize(NextSelectorID - FirstSelectorID); for (auto &SelectorAndID : SelectorIDs) { Selector S = SelectorAndID.first; SelectorID ID = SelectorAndID.second; Sema::GlobalMethodPool::iterator F = SemaRef.MethodPool.find(S); ASTMethodPoolTrait::data_type Data = { ID, ObjCMethodList(), ObjCMethodList() }; if (F != SemaRef.MethodPool.end()) { Data.Instance = F->second.first; Data.Factory = F->second.second; } // Only write this selector if it's not in an existing AST or something // changed. if (Chain && ID < FirstSelectorID) { // Selector already exists. Did it change? bool changed = false; for (ObjCMethodList *M = &Data.Instance; !changed && M && M->getMethod(); M = M->getNext()) { if (!M->getMethod()->isFromASTFile()) changed = true; } for (ObjCMethodList *M = &Data.Factory; !changed && M && M->getMethod(); M = M->getNext()) { if (!M->getMethod()->isFromASTFile()) changed = true; } if (!changed) continue; } else if (Data.Instance.getMethod() || Data.Factory.getMethod()) { // A new method pool entry. ++NumTableEntries; } Generator.insert(S, Data, Trait); } // Create the on-disk hash table in a buffer. SmallString<4096> MethodPool; uint32_t BucketOffset; { using namespace llvm::support; ASTMethodPoolTrait Trait(*this); llvm::raw_svector_ostream Out(MethodPool); // Make sure that no bucket is at offset 0 endian::Writer(Out).write(0); BucketOffset = Generator.Emit(Out, Trait); } // Create a blob abbreviation auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(METHOD_POOL)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); unsigned MethodPoolAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); // Write the method pool { RecordData::value_type Record[] = {METHOD_POOL, BucketOffset, NumTableEntries}; Stream.EmitRecordWithBlob(MethodPoolAbbrev, Record, MethodPool); } // Create a blob abbreviation for the selector table offsets. Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(SELECTOR_OFFSETS)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // size Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // first ID Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); unsigned SelectorOffsetAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); // Write the selector offsets table. { RecordData::value_type Record[] = { SELECTOR_OFFSETS, SelectorOffsets.size(), FirstSelectorID - NUM_PREDEF_SELECTOR_IDS}; Stream.EmitRecordWithBlob(SelectorOffsetAbbrev, Record, bytes(SelectorOffsets)); } } } /// \brief Write the selectors referenced in @selector expression into AST file. void ASTWriter::WriteReferencedSelectorsPool(Sema &SemaRef) { using namespace llvm; if (SemaRef.ReferencedSelectors.empty()) return; RecordData Record; ASTRecordWriter Writer(*this, Record); // Note: this writes out all references even for a dependent AST. But it is // very tricky to fix, and given that @selector shouldn't really appear in // headers, probably not worth it. It's not a correctness issue. 
for (auto &SelectorAndLocation : SemaRef.ReferencedSelectors) { Selector Sel = SelectorAndLocation.first; SourceLocation Loc = SelectorAndLocation.second; Writer.AddSelectorRef(Sel); Writer.AddSourceLocation(Loc); } Writer.Emit(REFERENCED_SELECTOR_POOL); } //===----------------------------------------------------------------------===// // Identifier Table Serialization //===----------------------------------------------------------------------===// /// Determine the declaration that should be put into the name lookup table to /// represent the given declaration in this module. This is usually D itself, /// but if D was imported and merged into a local declaration, we want the most /// recent local declaration instead. The chosen declaration will be the most /// recent declaration in any module that imports this one. static NamedDecl *getDeclForLocalLookup(const LangOptions &LangOpts, NamedDecl *D) { if (!LangOpts.Modules || !D->isFromASTFile()) return D; if (Decl *Redecl = D->getPreviousDecl()) { // For Redeclarable decls, a prior declaration might be local. for (; Redecl; Redecl = Redecl->getPreviousDecl()) { // If we find a local decl, we're done. if (!Redecl->isFromASTFile()) { // Exception: in very rare cases (for injected-class-names), not all // redeclarations are in the same semantic context. Skip ones in a // different context. They don't go in this lookup table at all. if (!Redecl->getDeclContext()->getRedeclContext()->Equals( D->getDeclContext()->getRedeclContext())) continue; return cast(Redecl); } // If we find a decl from a (chained-)PCH stop since we won't find a // local one. if (Redecl->getOwningModuleID() == 0) break; } } else if (Decl *First = D->getCanonicalDecl()) { // For Mergeable decls, the first decl might be local. if (!First->isFromASTFile()) return cast(First); } // All declarations are imported. Our most recent declaration will also be // the most recent one in anyone who imports us. return D; } namespace { class ASTIdentifierTableTrait { ASTWriter &Writer; Preprocessor &PP; IdentifierResolver &IdResolver; bool IsModule; bool NeedDecls; ASTWriter::RecordData *InterestingIdentifierOffsets; /// \brief Determines whether this is an "interesting" identifier that needs a /// full IdentifierInfo structure written into the hash table. Notably, this /// doesn't check whether the name has macros defined; use PublicMacroIterator /// to check that. bool isInterestingIdentifier(const IdentifierInfo *II, uint64_t MacroOffset) { if (MacroOffset || II->isPoisoned() || (IsModule ? 
II->hasRevertedBuiltin() : II->getObjCOrBuiltinID()) || II->hasRevertedTokenIDToIdentifier() || (NeedDecls && II->getFETokenInfo())) return true; return false; } public: typedef IdentifierInfo* key_type; typedef key_type key_type_ref; typedef IdentID data_type; typedef data_type data_type_ref; typedef unsigned hash_value_type; typedef unsigned offset_type; ASTIdentifierTableTrait(ASTWriter &Writer, Preprocessor &PP, IdentifierResolver &IdResolver, bool IsModule, ASTWriter::RecordData *InterestingIdentifierOffsets) : Writer(Writer), PP(PP), IdResolver(IdResolver), IsModule(IsModule), NeedDecls(!IsModule || !Writer.getLangOpts().CPlusPlus), InterestingIdentifierOffsets(InterestingIdentifierOffsets) {} bool needDecls() const { return NeedDecls; } static hash_value_type ComputeHash(const IdentifierInfo* II) { return llvm::HashString(II->getName()); } bool isInterestingIdentifier(const IdentifierInfo *II) { auto MacroOffset = Writer.getMacroDirectivesOffset(II); return isInterestingIdentifier(II, MacroOffset); } bool isInterestingNonMacroIdentifier(const IdentifierInfo *II) { return isInterestingIdentifier(II, 0); } std::pair EmitKeyDataLength(raw_ostream& Out, IdentifierInfo* II, IdentID ID) { unsigned KeyLen = II->getLength() + 1; unsigned DataLen = 4; // 4 bytes for the persistent ID << 1 auto MacroOffset = Writer.getMacroDirectivesOffset(II); if (isInterestingIdentifier(II, MacroOffset)) { DataLen += 2; // 2 bytes for builtin ID DataLen += 2; // 2 bytes for flags if (MacroOffset) DataLen += 4; // MacroDirectives offset. if (NeedDecls) { for (IdentifierResolver::iterator D = IdResolver.begin(II), DEnd = IdResolver.end(); D != DEnd; ++D) DataLen += 4; } } using namespace llvm::support; endian::Writer LE(Out); assert((uint16_t)DataLen == DataLen && (uint16_t)KeyLen == KeyLen); LE.write(DataLen); // We emit the key length after the data length so that every // string is preceded by a 16-bit length. This matches the PTH // format for storing identifiers. LE.write(KeyLen); return std::make_pair(KeyLen, DataLen); } void EmitKey(raw_ostream& Out, const IdentifierInfo* II, unsigned KeyLen) { // Record the location of the key data. This is used when generating // the mapping from persistent IDs to strings. Writer.SetIdentifierOffset(II, Out.tell()); // Emit the offset of the key/data length information to the interesting // identifiers table if necessary. 
if (InterestingIdentifierOffsets && isInterestingIdentifier(II)) InterestingIdentifierOffsets->push_back(Out.tell() - 4); Out.write(II->getNameStart(), KeyLen); } void EmitData(raw_ostream& Out, IdentifierInfo* II, IdentID ID, unsigned) { using namespace llvm::support; endian::Writer LE(Out); auto MacroOffset = Writer.getMacroDirectivesOffset(II); if (!isInterestingIdentifier(II, MacroOffset)) { LE.write(ID << 1); return; } LE.write((ID << 1) | 0x01); uint32_t Bits = (uint32_t)II->getObjCOrBuiltinID(); assert((Bits & 0xffff) == Bits && "ObjCOrBuiltinID too big for ASTReader."); LE.write(Bits); Bits = 0; bool HadMacroDefinition = MacroOffset != 0; Bits = (Bits << 1) | unsigned(HadMacroDefinition); Bits = (Bits << 1) | unsigned(II->isExtensionToken()); Bits = (Bits << 1) | unsigned(II->isPoisoned()); Bits = (Bits << 1) | unsigned(II->hasRevertedBuiltin()); Bits = (Bits << 1) | unsigned(II->hasRevertedTokenIDToIdentifier()); Bits = (Bits << 1) | unsigned(II->isCPlusPlusOperatorKeyword()); LE.write(Bits); if (HadMacroDefinition) LE.write(MacroOffset); if (NeedDecls) { // Emit the declaration IDs in reverse order, because the // IdentifierResolver provides the declarations as they would be // visible (e.g., the function "stat" would come before the struct // "stat"), but the ASTReader adds declarations to the end of the list // (so we need to see the struct "stat" before the function "stat"). // Only emit declarations that aren't from a chained PCH, though. SmallVector Decls(IdResolver.begin(II), IdResolver.end()); for (SmallVectorImpl::reverse_iterator D = Decls.rbegin(), DEnd = Decls.rend(); D != DEnd; ++D) LE.write( Writer.getDeclID(getDeclForLocalLookup(PP.getLangOpts(), *D))); } } }; } // end anonymous namespace /// \brief Write the identifier table into the AST file. /// /// The identifier table consists of a blob containing string data /// (the actual identifiers themselves) and a separate "offsets" index /// that maps identifier IDs to locations within the blob. void ASTWriter::WriteIdentifierTable(Preprocessor &PP, IdentifierResolver &IdResolver, bool IsModule) { using namespace llvm; RecordData InterestingIdents; // Create and write out the blob that contains the identifier // strings. { llvm::OnDiskChainedHashTableGenerator Generator; ASTIdentifierTableTrait Trait( *this, PP, IdResolver, IsModule, (getLangOpts().CPlusPlus && IsModule) ? &InterestingIdents : nullptr); // Look for any identifiers that were named while processing the // headers, but are otherwise not needed. We add these to the hash // table to enable checking of the predefines buffer in the case // where the user adds new macro definitions when building the AST // file. SmallVector IIs; for (const auto &ID : PP.getIdentifierTable()) IIs.push_back(ID.second); // Sort the identifiers lexicographically before getting them references so // that their order is stable. std::sort(IIs.begin(), IIs.end(), llvm::less_ptr()); for (const IdentifierInfo *II : IIs) if (Trait.isInterestingNonMacroIdentifier(II)) getIdentifierRef(II); // Create the on-disk hash table representation. We only store offsets // for identifiers that appear here for the first time. IdentifierOffsets.resize(NextIdentID - FirstIdentID); for (auto IdentIDPair : IdentifierIDs) { auto *II = const_cast(IdentIDPair.first); IdentID ID = IdentIDPair.second; assert(II && "NULL identifier in identifier table"); // Write out identifiers if either the ID is local or the identifier has // changed since it was loaded. 
if (ID >= FirstIdentID || !Chain || !II->isFromAST() || II->hasChangedSinceDeserialization() || (Trait.needDecls() && II->hasFETokenInfoChangedSinceDeserialization())) Generator.insert(II, ID, Trait); } // Create the on-disk hash table in a buffer. SmallString<4096> IdentifierTable; uint32_t BucketOffset; { using namespace llvm::support; llvm::raw_svector_ostream Out(IdentifierTable); // Make sure that no bucket is at offset 0 endian::Writer(Out).write(0); BucketOffset = Generator.Emit(Out, Trait); } // Create a blob abbreviation auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(IDENTIFIER_TABLE)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); unsigned IDTableAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); // Write the identifier table RecordData::value_type Record[] = {IDENTIFIER_TABLE, BucketOffset}; Stream.EmitRecordWithBlob(IDTableAbbrev, Record, IdentifierTable); } // Write the offsets table for identifier IDs. auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(IDENTIFIER_OFFSET)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // # of identifiers Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 32)); // first ID Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); unsigned IdentifierOffsetAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); #ifndef NDEBUG for (unsigned I = 0, N = IdentifierOffsets.size(); I != N; ++I) assert(IdentifierOffsets[I] && "Missing identifier offset?"); #endif RecordData::value_type Record[] = {IDENTIFIER_OFFSET, IdentifierOffsets.size(), FirstIdentID - NUM_PREDEF_IDENT_IDS}; Stream.EmitRecordWithBlob(IdentifierOffsetAbbrev, Record, bytes(IdentifierOffsets)); // In C++, write the list of interesting identifiers (those that are // defined as macros, poisoned, or similar unusual things). if (!InterestingIdents.empty()) Stream.EmitRecord(INTERESTING_IDENTIFIERS, InterestingIdents); } //===----------------------------------------------------------------------===// // DeclContext's Name Lookup Table Serialization //===----------------------------------------------------------------------===// namespace { // Trait used for the on-disk hash table used in the method pool. class ASTDeclContextNameLookupTrait { ASTWriter &Writer; llvm::SmallVector DeclIDs; public: typedef DeclarationNameKey key_type; typedef key_type key_type_ref; /// A start and end index into DeclIDs, representing a sequence of decls. 
typedef std::pair data_type; typedef const data_type& data_type_ref; typedef unsigned hash_value_type; typedef unsigned offset_type; explicit ASTDeclContextNameLookupTrait(ASTWriter &Writer) : Writer(Writer) { } template data_type getData(const Coll &Decls) { unsigned Start = DeclIDs.size(); for (NamedDecl *D : Decls) { DeclIDs.push_back( Writer.GetDeclRef(getDeclForLocalLookup(Writer.getLangOpts(), D))); } return std::make_pair(Start, DeclIDs.size()); } data_type ImportData(const reader::ASTDeclContextNameLookupTrait::data_type &FromReader) { unsigned Start = DeclIDs.size(); for (auto ID : FromReader) DeclIDs.push_back(ID); return std::make_pair(Start, DeclIDs.size()); } static bool EqualKey(key_type_ref a, key_type_ref b) { return a == b; } hash_value_type ComputeHash(DeclarationNameKey Name) { return Name.getHash(); } void EmitFileRef(raw_ostream &Out, ModuleFile *F) const { assert(Writer.hasChain() && "have reference to loaded module file but no chain?"); using namespace llvm::support; endian::Writer(Out) .write(Writer.getChain()->getModuleFileID(F)); } std::pair EmitKeyDataLength(raw_ostream &Out, DeclarationNameKey Name, data_type_ref Lookup) { using namespace llvm::support; endian::Writer LE(Out); unsigned KeyLen = 1; switch (Name.getKind()) { case DeclarationName::Identifier: case DeclarationName::ObjCZeroArgSelector: case DeclarationName::ObjCOneArgSelector: case DeclarationName::ObjCMultiArgSelector: case DeclarationName::CXXLiteralOperatorName: case DeclarationName::CXXDeductionGuideName: KeyLen += 4; break; case DeclarationName::CXXOperatorName: KeyLen += 1; break; case DeclarationName::CXXConstructorName: case DeclarationName::CXXDestructorName: case DeclarationName::CXXConversionFunctionName: case DeclarationName::CXXUsingDirective: break; } LE.write(KeyLen); // 4 bytes for each DeclID. 
unsigned DataLen = 4 * (Lookup.second - Lookup.first); assert(uint16_t(DataLen) == DataLen && "too many decls for serialized lookup result"); LE.write(DataLen); return std::make_pair(KeyLen, DataLen); } void EmitKey(raw_ostream &Out, DeclarationNameKey Name, unsigned) { using namespace llvm::support; endian::Writer LE(Out); LE.write(Name.getKind()); switch (Name.getKind()) { case DeclarationName::Identifier: case DeclarationName::CXXLiteralOperatorName: case DeclarationName::CXXDeductionGuideName: LE.write(Writer.getIdentifierRef(Name.getIdentifier())); return; case DeclarationName::ObjCZeroArgSelector: case DeclarationName::ObjCOneArgSelector: case DeclarationName::ObjCMultiArgSelector: LE.write(Writer.getSelectorRef(Name.getSelector())); return; case DeclarationName::CXXOperatorName: assert(Name.getOperatorKind() < NUM_OVERLOADED_OPERATORS && "Invalid operator?"); LE.write(Name.getOperatorKind()); return; case DeclarationName::CXXConstructorName: case DeclarationName::CXXDestructorName: case DeclarationName::CXXConversionFunctionName: case DeclarationName::CXXUsingDirective: return; } llvm_unreachable("Invalid name kind?"); } void EmitData(raw_ostream &Out, key_type_ref, data_type Lookup, unsigned DataLen) { using namespace llvm::support; endian::Writer LE(Out); uint64_t Start = Out.tell(); (void)Start; for (unsigned I = Lookup.first, N = Lookup.second; I != N; ++I) LE.write(DeclIDs[I]); assert(Out.tell() - Start == DataLen && "Data length is wrong"); } }; } // end anonymous namespace bool ASTWriter::isLookupResultExternal(StoredDeclsList &Result, DeclContext *DC) { return Result.hasExternalDecls() && DC->NeedToReconcileExternalVisibleStorage; } bool ASTWriter::isLookupResultEntirelyExternal(StoredDeclsList &Result, DeclContext *DC) { for (auto *D : Result.getLookupResult()) if (!getDeclForLocalLookup(getLangOpts(), D)->isFromASTFile()) return false; return true; } void ASTWriter::GenerateNameLookupTable(const DeclContext *ConstDC, llvm::SmallVectorImpl &LookupTable) { assert(!ConstDC->HasLazyLocalLexicalLookups && !ConstDC->HasLazyExternalLexicalLookups && "must call buildLookups first"); // FIXME: We need to build the lookups table, which is logically const. auto *DC = const_cast(ConstDC); assert(DC == DC->getPrimaryContext() && "only primary DC has lookup table"); // Create the on-disk hash table representation. MultiOnDiskHashTableGenerator Generator; ASTDeclContextNameLookupTrait Trait(*this); // The first step is to collect the declaration names which we need to // serialize into the name lookup table, and to collect them in a stable // order. SmallVector Names; // We also build up small sets of the constructor and conversion function // names which are visible. llvm::SmallSet ConstructorNameSet, ConversionNameSet; for (auto &Lookup : *DC->buildLookup()) { auto &Name = Lookup.first; auto &Result = Lookup.second; // If there are no local declarations in our lookup result, we // don't need to write an entry for the name at all. If we can't // write out a lookup set without performing more deserialization, // just skip this entry. if (isLookupResultExternal(Result, DC) && isLookupResultEntirelyExternal(Result, DC)) continue; // We also skip empty results. If any of the results could be external and // the currently available results are empty, then all of the results are // external and we skip it above. So the only way we get here with an empty // results is when no results could have been external *and* we have // external results. 
// // FIXME: While we might want to start emitting on-disk entries for negative // lookups into a decl context as an optimization, today we *have* to skip // them because there are names with empty lookup results in decl contexts // which we can't emit in any stable ordering: we lookup constructors and // conversion functions in the enclosing namespace scope creating empty // results for them. This in almost certainly a bug in Clang's name lookup, // but that is likely to be hard or impossible to fix and so we tolerate it // here by omitting lookups with empty results. if (Lookup.second.getLookupResult().empty()) continue; switch (Lookup.first.getNameKind()) { default: Names.push_back(Lookup.first); break; case DeclarationName::CXXConstructorName: assert(isa(DC) && "Cannot have a constructor name outside of a class!"); ConstructorNameSet.insert(Name); break; case DeclarationName::CXXConversionFunctionName: assert(isa(DC) && "Cannot have a conversion function name outside of a class!"); ConversionNameSet.insert(Name); break; } } // Sort the names into a stable order. std::sort(Names.begin(), Names.end()); if (auto *D = dyn_cast(DC)) { // We need to establish an ordering of constructor and conversion function // names, and they don't have an intrinsic ordering. // First we try the easy case by forming the current context's constructor // name and adding that name first. This is a very useful optimization to // avoid walking the lexical declarations in many cases, and it also // handles the only case where a constructor name can come from some other // lexical context -- when that name is an implicit constructor merged from // another declaration in the redecl chain. Any non-implicit constructor or // conversion function which doesn't occur in all the lexical contexts // would be an ODR violation. auto ImplicitCtorName = Context->DeclarationNames.getCXXConstructorName( Context->getCanonicalType(Context->getRecordType(D))); if (ConstructorNameSet.erase(ImplicitCtorName)) Names.push_back(ImplicitCtorName); // If we still have constructors or conversion functions, we walk all the // names in the decl and add the constructors and conversion functions // which are visible in the order they lexically occur within the context. if (!ConstructorNameSet.empty() || !ConversionNameSet.empty()) for (Decl *ChildD : cast(DC)->decls()) if (auto *ChildND = dyn_cast(ChildD)) { auto Name = ChildND->getDeclName(); switch (Name.getNameKind()) { default: continue; case DeclarationName::CXXConstructorName: if (ConstructorNameSet.erase(Name)) Names.push_back(Name); break; case DeclarationName::CXXConversionFunctionName: if (ConversionNameSet.erase(Name)) Names.push_back(Name); break; } if (ConstructorNameSet.empty() && ConversionNameSet.empty()) break; } assert(ConstructorNameSet.empty() && "Failed to find all of the visible " "constructors by walking all the " "lexical members of the context."); assert(ConversionNameSet.empty() && "Failed to find all of the visible " "conversion functions by walking all " "the lexical members of the context."); } // Next we need to do a lookup with each name into this decl context to fully // populate any results from external sources. We don't actually use the // results of these lookups because we only want to use the results after all // results have been loaded and the pointers into them will be stable. for (auto &Name : Names) DC->lookup(Name); // Now we need to insert the results for each name into the hash table. 
For // constructor names and conversion function names, we actually need to merge // all of the results for them into one list of results each and insert // those. SmallVector ConstructorDecls; SmallVector ConversionDecls; // Now loop over the names, either inserting them or appending for the two // special cases. for (auto &Name : Names) { DeclContext::lookup_result Result = DC->noload_lookup(Name); switch (Name.getNameKind()) { default: Generator.insert(Name, Trait.getData(Result), Trait); break; case DeclarationName::CXXConstructorName: ConstructorDecls.append(Result.begin(), Result.end()); break; case DeclarationName::CXXConversionFunctionName: ConversionDecls.append(Result.begin(), Result.end()); break; } } // Handle our two special cases if we ended up having any. We arbitrarily use // the first declaration's name here because the name itself isn't part of // the key, only the kind of name is used. if (!ConstructorDecls.empty()) Generator.insert(ConstructorDecls.front()->getDeclName(), Trait.getData(ConstructorDecls), Trait); if (!ConversionDecls.empty()) Generator.insert(ConversionDecls.front()->getDeclName(), Trait.getData(ConversionDecls), Trait); // Create the on-disk hash table. Also emit the existing imported and // merged table if there is one. auto *Lookups = Chain ? Chain->getLoadedLookupTables(DC) : nullptr; Generator.emit(LookupTable, Trait, Lookups ? &Lookups->Table : nullptr); } /// \brief Write the block containing all of the declaration IDs /// visible from the given DeclContext. /// /// \returns the offset of the DECL_CONTEXT_VISIBLE block within the /// bitstream, or 0 if no block was written. uint64_t ASTWriter::WriteDeclContextVisibleBlock(ASTContext &Context, DeclContext *DC) { // If we imported a key declaration of this namespace, write the visible // lookup results as an update record for it rather than including them // on this declaration. We will only look at key declarations on reload. if (isa(DC) && Chain && Chain->getKeyDeclaration(cast(DC))->isFromASTFile()) { // Only do this once, for the first local declaration of the namespace. for (auto *Prev = cast(DC)->getPreviousDecl(); Prev; Prev = Prev->getPreviousDecl()) if (!Prev->isFromASTFile()) return 0; // Note that we need to emit an update record for the primary context. UpdatedDeclContexts.insert(DC->getPrimaryContext()); // Make sure all visible decls are written. They will be recorded later. We // do this using a side data structure so we can sort the names into // a deterministic order. StoredDeclsMap *Map = DC->getPrimaryContext()->buildLookup(); SmallVector, 16> LookupResults; if (Map) { LookupResults.reserve(Map->size()); for (auto &Entry : *Map) LookupResults.push_back( std::make_pair(Entry.first, Entry.second.getLookupResult())); } std::sort(LookupResults.begin(), LookupResults.end(), llvm::less_first()); for (auto &NameAndResult : LookupResults) { DeclarationName Name = NameAndResult.first; DeclContext::lookup_result Result = NameAndResult.second; if (Name.getNameKind() == DeclarationName::CXXConstructorName || Name.getNameKind() == DeclarationName::CXXConversionFunctionName) { // We have to work around a name lookup bug here where negative lookup // results for these names get cached in namespace lookup tables (these // names should never be looked up in a namespace). 
assert(Result.empty() && "Cannot have a constructor or conversion " "function name in a namespace!"); continue; } for (NamedDecl *ND : Result) if (!ND->isFromASTFile()) GetDeclRef(ND); } return 0; } if (DC->getPrimaryContext() != DC) return 0; // Skip contexts which don't support name lookup. if (!DC->isLookupContext()) return 0; // If not in C++, we perform name lookup for the translation unit via the // IdentifierInfo chains, don't bother to build a visible-declarations table. if (DC->isTranslationUnit() && !Context.getLangOpts().CPlusPlus) return 0; // Serialize the contents of the mapping used for lookup. Note that, // although we have two very different code paths, the serialized // representation is the same for both cases: a declaration name, // followed by a size, followed by references to the visible // declarations that have that name. uint64_t Offset = Stream.GetCurrentBitNo(); StoredDeclsMap *Map = DC->buildLookup(); if (!Map || Map->empty()) return 0; // Create the on-disk hash table in a buffer. SmallString<4096> LookupTable; GenerateNameLookupTable(DC, LookupTable); // Write the lookup table RecordData::value_type Record[] = {DECL_CONTEXT_VISIBLE}; Stream.EmitRecordWithBlob(DeclContextVisibleLookupAbbrev, Record, LookupTable); ++NumVisibleDeclContexts; return Offset; } /// \brief Write an UPDATE_VISIBLE block for the given context. /// /// UPDATE_VISIBLE blocks contain the declarations that are added to an existing /// DeclContext in a dependent AST file. As such, they only exist for the TU /// (in C++), for namespaces, and for classes with forward-declared unscoped /// enumeration members (in C++11). void ASTWriter::WriteDeclContextVisibleUpdate(const DeclContext *DC) { StoredDeclsMap *Map = DC->getLookupPtr(); if (!Map || Map->empty()) return; // Create the on-disk hash table in a buffer. SmallString<4096> LookupTable; GenerateNameLookupTable(DC, LookupTable); // If we're updating a namespace, select a key declaration as the key for the // update record; those are the only ones that will be checked on reload. if (isa(DC)) DC = cast(Chain->getKeyDeclaration(cast(DC))); // Write the lookup table RecordData::value_type Record[] = {UPDATE_VISIBLE, getDeclID(cast(DC))}; Stream.EmitRecordWithBlob(UpdateVisibleAbbrev, Record, LookupTable); } /// \brief Write an FP_PRAGMA_OPTIONS block for the given FPOptions. void ASTWriter::WriteFPPragmaOptions(const FPOptions &Opts) { RecordData::value_type Record[] = {Opts.getInt()}; Stream.EmitRecord(FP_PRAGMA_OPTIONS, Record); } /// \brief Write an OPENCL_EXTENSIONS block for the given OpenCLOptions. void ASTWriter::WriteOpenCLExtensions(Sema &SemaRef) { if (!SemaRef.Context.getLangOpts().OpenCL) return; const OpenCLOptions &Opts = SemaRef.getOpenCLOptions(); RecordData Record; for (const auto &I:Opts.OptMap) { AddString(I.getKey(), Record); auto V = I.getValue(); Record.push_back(V.Supported ? 1 : 0); Record.push_back(V.Enabled ? 
                                       1 : 0);
    Record.push_back(V.Avail);
    Record.push_back(V.Core);
  }
  Stream.EmitRecord(OPENCL_EXTENSIONS, Record);
}

void ASTWriter::WriteOpenCLExtensionTypes(Sema &SemaRef) {
  if (!SemaRef.Context.getLangOpts().OpenCL)
    return;

  RecordData Record;
  for (const auto &I : SemaRef.OpenCLTypeExtMap) {
    Record.push_back(
        static_cast<unsigned>(getTypeID(I.first->getCanonicalTypeInternal())));
    Record.push_back(I.second.size());
    for (auto Ext : I.second)
      AddString(Ext, Record);
  }
  Stream.EmitRecord(OPENCL_EXTENSION_TYPES, Record);
}

void ASTWriter::WriteOpenCLExtensionDecls(Sema &SemaRef) {
  if (!SemaRef.Context.getLangOpts().OpenCL)
    return;

  RecordData Record;
  for (const auto &I : SemaRef.OpenCLDeclExtMap) {
    Record.push_back(getDeclID(I.first));
    Record.push_back(static_cast<unsigned>(I.second.size()));
    for (auto Ext : I.second)
      AddString(Ext, Record);
  }
  Stream.EmitRecord(OPENCL_EXTENSION_DECLS, Record);
}

void ASTWriter::WriteCUDAPragmas(Sema &SemaRef) {
  if (SemaRef.ForceCUDAHostDeviceDepth > 0) {
    RecordData::value_type Record[] = {SemaRef.ForceCUDAHostDeviceDepth};
    Stream.EmitRecord(CUDA_PRAGMA_FORCE_HOST_DEVICE_DEPTH, Record);
  }
}

void ASTWriter::WriteObjCCategories() {
  SmallVector<ObjCCategoriesInfo, 2> CategoriesMap;
  RecordData Categories;

  for (unsigned I = 0, N = ObjCClassesWithCategories.size(); I != N; ++I) {
    unsigned Size = 0;
    unsigned StartIndex = Categories.size();

    ObjCInterfaceDecl *Class = ObjCClassesWithCategories[I];

    // Allocate space for the size.
    Categories.push_back(0);

    // Add the categories.
    for (ObjCInterfaceDecl::known_categories_iterator
           Cat = Class->known_categories_begin(),
           CatEnd = Class->known_categories_end();
         Cat != CatEnd; ++Cat, ++Size) {
      assert(getDeclID(*Cat) != 0 && "Bogus category");
      AddDeclRef(*Cat, Categories);
    }

    // Update the size.
    Categories[StartIndex] = Size;

    // Record this interface -> category map.
    ObjCCategoriesInfo CatInfo = { getDeclID(Class), StartIndex };
    CategoriesMap.push_back(CatInfo);
  }

  // Sort the categories map by the definition ID, since the reader will be
  // performing binary searches on this information.
  llvm::array_pod_sort(CategoriesMap.begin(), CategoriesMap.end());

  // Emit the categories map.
  using namespace llvm;

  auto Abbrev = std::make_shared<BitCodeAbbrev>();
  Abbrev->Add(BitCodeAbbrevOp(OBJC_CATEGORIES_MAP));
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // # of entries
  Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob));
  unsigned AbbrevID = Stream.EmitAbbrev(std::move(Abbrev));

  RecordData::value_type Record[] = {OBJC_CATEGORIES_MAP, CategoriesMap.size()};
  Stream.EmitRecordWithBlob(AbbrevID, Record,
                            reinterpret_cast<char *>(CategoriesMap.data()),
                            CategoriesMap.size() * sizeof(ObjCCategoriesInfo));

  // Emit the category lists.
  Stream.EmitRecord(OBJC_CATEGORIES, Categories);
}

void ASTWriter::WriteLateParsedTemplates(Sema &SemaRef) {
  Sema::LateParsedTemplateMapT &LPTMap = SemaRef.LateParsedTemplateMap;

  if (LPTMap.empty())
    return;

  RecordData Record;
  for (auto &LPTMapEntry : LPTMap) {
    const FunctionDecl *FD = LPTMapEntry.first;
    LateParsedTemplate &LPT = *LPTMapEntry.second;
    AddDeclRef(FD, Record);
    AddDeclRef(LPT.D, Record);
    Record.push_back(LPT.Toks.size());

    for (const auto &Tok : LPT.Toks) {
      AddToken(Tok, Record);
    }
  }
  Stream.EmitRecord(LATE_PARSED_TEMPLATE, Record);
}

/// \brief Write the state of 'pragma clang optimize' at the end of the module.
void ASTWriter::WriteOptimizePragmaOptions(Sema &SemaRef) {
  RecordData Record;
  SourceLocation PragmaLoc = SemaRef.getOptimizeOffPragmaLocation();
  AddSourceLocation(PragmaLoc, Record);
  Stream.EmitRecord(OPTIMIZE_PRAGMA_OPTIONS, Record);
}

/// \brief Write the state of 'pragma ms_struct' at the end of the module.
void ASTWriter::WriteMSStructPragmaOptions(Sema &SemaRef) {
  RecordData Record;
  Record.push_back(SemaRef.MSStructPragmaOn ? PMSST_ON : PMSST_OFF);
  Stream.EmitRecord(MSSTRUCT_PRAGMA_OPTIONS, Record);
}

/// \brief Write the state of 'pragma pointers_to_members' at the end of the
/// module.
void ASTWriter::WriteMSPointersToMembersPragmaOptions(Sema &SemaRef) {
  RecordData Record;
  Record.push_back(SemaRef.MSPointerToMemberRepresentationMethod);
  AddSourceLocation(SemaRef.ImplicitMSInheritanceAttrLoc, Record);
  Stream.EmitRecord(POINTERS_TO_MEMBERS_PRAGMA_OPTIONS, Record);
}

/// \brief Write the state of 'pragma pack' at the end of the module.
void ASTWriter::WritePackPragmaOptions(Sema &SemaRef) {
  // Don't serialize pragma pack state for modules, since it should only take
  // effect on a per-submodule basis.
  if (WritingModule)
    return;

  RecordData Record;
  Record.push_back(SemaRef.PackStack.CurrentValue);
  AddSourceLocation(SemaRef.PackStack.CurrentPragmaLocation, Record);
  Record.push_back(SemaRef.PackStack.Stack.size());
  for (const auto &StackEntry : SemaRef.PackStack.Stack) {
    Record.push_back(StackEntry.Value);
    AddSourceLocation(StackEntry.PragmaLocation, Record);
    AddString(StackEntry.StackSlotLabel, Record);
  }
  Stream.EmitRecord(PACK_PRAGMA_OPTIONS, Record);
}

void ASTWriter::WriteModuleFileExtension(Sema &SemaRef,
                                         ModuleFileExtensionWriter &Writer) {
  // Enter the extension block.
  Stream.EnterSubblock(EXTENSION_BLOCK_ID, 4);

  // Emit the metadata record abbreviation.
  auto Abv = std::make_shared<llvm::BitCodeAbbrev>();
  Abv->Add(llvm::BitCodeAbbrevOp(EXTENSION_METADATA));
  Abv->Add(llvm::BitCodeAbbrevOp(llvm::BitCodeAbbrevOp::VBR, 6));
  Abv->Add(llvm::BitCodeAbbrevOp(llvm::BitCodeAbbrevOp::VBR, 6));
  Abv->Add(llvm::BitCodeAbbrevOp(llvm::BitCodeAbbrevOp::VBR, 6));
  Abv->Add(llvm::BitCodeAbbrevOp(llvm::BitCodeAbbrevOp::VBR, 6));
  Abv->Add(llvm::BitCodeAbbrevOp(llvm::BitCodeAbbrevOp::Blob));
  unsigned Abbrev = Stream.EmitAbbrev(std::move(Abv));

  // Emit the metadata record.
  RecordData Record;
  auto Metadata = Writer.getExtension()->getExtensionMetadata();
  Record.push_back(EXTENSION_METADATA);
  Record.push_back(Metadata.MajorVersion);
  Record.push_back(Metadata.MinorVersion);
  Record.push_back(Metadata.BlockName.size());
  Record.push_back(Metadata.UserInfo.size());
  SmallString<64> Buffer;
  Buffer += Metadata.BlockName;
  Buffer += Metadata.UserInfo;
  Stream.EmitRecordWithBlob(Abbrev, Record, Buffer);

  // Emit the contents of the extension block.
  Writer.writeExtensionContents(SemaRef, Stream);

  // Exit the extension block.
  Stream.ExitBlock();
}

//===----------------------------------------------------------------------===//
// General Serialization Routines
//===----------------------------------------------------------------------===//

/// \brief Emit the list of attributes to the specified record.
void ASTRecordWriter::AddAttributes(ArrayRef<const Attr *> Attrs) {
  auto &Record = *this;
  Record.push_back(Attrs.size());
  for (const auto *A : Attrs) {
    Record.push_back(A->getKind()); // FIXME: stable encoding, target attrs
    Record.AddSourceRange(A->getRange());

#include "clang/Serialization/AttrPCHWrite.inc"

  }
}

void ASTWriter::AddToken(const Token &Tok, RecordDataImpl &Record) {
  AddSourceLocation(Tok.getLocation(), Record);
  Record.push_back(Tok.getLength());

  // FIXME: When reading literal tokens, reconstruct the literal pointer
  // if it is needed.
  AddIdentifierRef(Tok.getIdentifierInfo(), Record);
  // FIXME: Should translate token kind to a stable encoding.
  Record.push_back(Tok.getKind());
  // FIXME: Should translate token flags to a stable encoding.
  Record.push_back(Tok.getFlags());
}

void ASTWriter::AddString(StringRef Str, RecordDataImpl &Record) {
  Record.push_back(Str.size());
  Record.insert(Record.end(), Str.begin(), Str.end());
}

bool ASTWriter::PreparePathForOutput(SmallVectorImpl<char> &Path) {
  assert(Context && "should have context when outputting path");

  bool Changed =
      cleanPathForOutput(Context->getSourceManager().getFileManager(), Path);

  // Remove a prefix to make the path relative, if relevant.
  const char *PathBegin = Path.data();
  const char *PathPtr =
      adjustFilenameForRelocatableAST(PathBegin, BaseDirectory);
  if (PathPtr != PathBegin) {
    Path.erase(Path.begin(), Path.begin() + (PathPtr - PathBegin));
    Changed = true;
  }

  return Changed;
}

void ASTWriter::AddPath(StringRef Path, RecordDataImpl &Record) {
  SmallString<128> FilePath(Path);
  PreparePathForOutput(FilePath);
  AddString(FilePath, Record);
}

void ASTWriter::EmitRecordWithPath(unsigned Abbrev, RecordDataRef Record,
                                   StringRef Path) {
  SmallString<128> FilePath(Path);
  PreparePathForOutput(FilePath);
  Stream.EmitRecordWithBlob(Abbrev, Record, FilePath);
}

void ASTWriter::AddVersionTuple(const VersionTuple &Version,
                                RecordDataImpl &Record) {
  Record.push_back(Version.getMajor());
  if (Optional<unsigned> Minor = Version.getMinor())
    Record.push_back(*Minor + 1);
  else
    Record.push_back(0);
  if (Optional<unsigned> Subminor = Version.getSubminor())
    Record.push_back(*Subminor + 1);
  else
    Record.push_back(0);
}

/// \brief Note that the identifier II occurs at the given offset
/// within the identifier table.
void ASTWriter::SetIdentifierOffset(const IdentifierInfo *II, uint32_t Offset) {
  IdentID ID = IdentifierIDs[II];
  // Only store offsets new to this AST file. Other identifier names are looked
  // up earlier in the chain and thus don't need an offset.
  if (ID >= FirstIdentID)
    IdentifierOffsets[ID - FirstIdentID] = Offset;
}

/// \brief Note that the selector Sel occurs at the given offset
/// within the method pool/selector table.
void ASTWriter::SetSelectorOffset(Selector Sel, uint32_t Offset) {
  unsigned ID = SelectorIDs[Sel];
  assert(ID && "Unknown selector");
  // Don't record offsets for selectors that are also available in a different
  // file.
  if (ID < FirstSelectorID)
    return;
  SelectorOffsets[ID - FirstSelectorID] = Offset;
}

ASTWriter::ASTWriter(llvm::BitstreamWriter &Stream,
                     SmallVectorImpl<char> &Buffer,
                     MemoryBufferCache &PCMCache,
                     ArrayRef<std::shared_ptr<ModuleFileExtension>> Extensions,
                     bool IncludeTimestamps)
    : Stream(Stream), Buffer(Buffer), PCMCache(PCMCache),
      IncludeTimestamps(IncludeTimestamps) {
  for (const auto &Ext : Extensions) {
    if (auto Writer = Ext->createExtensionWriter(*this))
      ModuleFileExtensionWriters.push_back(std::move(Writer));
  }
}

ASTWriter::~ASTWriter() {
  llvm::DeleteContainerSeconds(FileDeclIDs);
}

const LangOptions &ASTWriter::getLangOpts() const {
  assert(WritingAST && "can't determine lang opts when not writing AST");
  return Context->getLangOpts();
}

time_t ASTWriter::getTimestampForOutput(const FileEntry *E) const {
  return IncludeTimestamps ? E->getModificationTime() : 0;
}

ASTFileSignature ASTWriter::WriteAST(Sema &SemaRef,
                                     const std::string &OutputFile,
                                     Module *WritingModule, StringRef isysroot,
                                     bool hasErrors) {
  WritingAST = true;

  ASTHasCompilerErrors = hasErrors;

  // Emit the file header.
  Stream.Emit((unsigned)'C', 8);
  Stream.Emit((unsigned)'P', 8);
  Stream.Emit((unsigned)'C', 8);
  Stream.Emit((unsigned)'H', 8);

  WriteBlockInfoBlock();

  Context = &SemaRef.Context;
  PP = &SemaRef.PP;
  this->WritingModule = WritingModule;
  ASTFileSignature Signature =
      WriteASTCore(SemaRef, isysroot, OutputFile, WritingModule);
  Context = nullptr;
  PP = nullptr;
  this->WritingModule = nullptr;
  this->BaseDirectory.clear();

  WritingAST = false;
  if (SemaRef.Context.getLangOpts().ImplicitModules && WritingModule) {
    // Construct MemoryBuffer and update buffer manager.
    PCMCache.addBuffer(OutputFile,
                       llvm::MemoryBuffer::getMemBufferCopy(
                           StringRef(Buffer.begin(), Buffer.size())));
  }
  return Signature;
}

template<typename Vector>
static void AddLazyVectorDecls(ASTWriter &Writer, Vector &Vec,
                               ASTWriter::RecordData &Record) {
  for (typename Vector::iterator I = Vec.begin(nullptr, true), E = Vec.end();
       I != E; ++I) {
    Writer.AddDeclRef(*I, Record);
  }
}

ASTFileSignature ASTWriter::WriteASTCore(Sema &SemaRef, StringRef isysroot,
                                         const std::string &OutputFile,
                                         Module *WritingModule) {
  using namespace llvm;

  bool isModule = WritingModule != nullptr;

  // Make sure that the AST reader knows to finalize itself.
  if (Chain)
    Chain->finalizeForWriting();

  ASTContext &Context = SemaRef.Context;
  Preprocessor &PP = SemaRef.PP;

  // Set up predefined declaration IDs.
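  // Predefined declarations (the translation unit, the Objective-C builtins,
  // the va_list types, and so on) are assigned fixed IDs up front so that
  // every AST file refers to them consistently without serializing them.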
auto RegisterPredefDecl = [&] (Decl *D, PredefinedDeclIDs ID) { if (D) { assert(D->isCanonicalDecl() && "predefined decl is not canonical"); DeclIDs[D] = ID; } }; RegisterPredefDecl(Context.getTranslationUnitDecl(), PREDEF_DECL_TRANSLATION_UNIT_ID); RegisterPredefDecl(Context.ObjCIdDecl, PREDEF_DECL_OBJC_ID_ID); RegisterPredefDecl(Context.ObjCSelDecl, PREDEF_DECL_OBJC_SEL_ID); RegisterPredefDecl(Context.ObjCClassDecl, PREDEF_DECL_OBJC_CLASS_ID); RegisterPredefDecl(Context.ObjCProtocolClassDecl, PREDEF_DECL_OBJC_PROTOCOL_ID); RegisterPredefDecl(Context.Int128Decl, PREDEF_DECL_INT_128_ID); RegisterPredefDecl(Context.UInt128Decl, PREDEF_DECL_UNSIGNED_INT_128_ID); RegisterPredefDecl(Context.ObjCInstanceTypeDecl, PREDEF_DECL_OBJC_INSTANCETYPE_ID); RegisterPredefDecl(Context.BuiltinVaListDecl, PREDEF_DECL_BUILTIN_VA_LIST_ID); RegisterPredefDecl(Context.VaListTagDecl, PREDEF_DECL_VA_LIST_TAG); RegisterPredefDecl(Context.BuiltinMSVaListDecl, PREDEF_DECL_BUILTIN_MS_VA_LIST_ID); RegisterPredefDecl(Context.ExternCContext, PREDEF_DECL_EXTERN_C_CONTEXT_ID); RegisterPredefDecl(Context.MakeIntegerSeqDecl, PREDEF_DECL_MAKE_INTEGER_SEQ_ID); RegisterPredefDecl(Context.CFConstantStringTypeDecl, PREDEF_DECL_CF_CONSTANT_STRING_ID); RegisterPredefDecl(Context.CFConstantStringTagDecl, PREDEF_DECL_CF_CONSTANT_STRING_TAG_ID); RegisterPredefDecl(Context.TypePackElementDecl, PREDEF_DECL_TYPE_PACK_ELEMENT_ID); // Build a record containing all of the tentative definitions in this file, in // TentativeDefinitions order. Generally, this record will be empty for // headers. RecordData TentativeDefinitions; AddLazyVectorDecls(*this, SemaRef.TentativeDefinitions, TentativeDefinitions); // Build a record containing all of the file scoped decls in this file. RecordData UnusedFileScopedDecls; if (!isModule) AddLazyVectorDecls(*this, SemaRef.UnusedFileScopedDecls, UnusedFileScopedDecls); // Build a record containing all of the delegating constructors we still need // to resolve. RecordData DelegatingCtorDecls; if (!isModule) AddLazyVectorDecls(*this, SemaRef.DelegatingCtorDecls, DelegatingCtorDecls); // Write the set of weak, undeclared identifiers. We always write the // entire table, since later PCH files in a PCH chain are only interested in // the results at the end of the chain. RecordData WeakUndeclaredIdentifiers; for (auto &WeakUndeclaredIdentifier : SemaRef.WeakUndeclaredIdentifiers) { IdentifierInfo *II = WeakUndeclaredIdentifier.first; WeakInfo &WI = WeakUndeclaredIdentifier.second; AddIdentifierRef(II, WeakUndeclaredIdentifiers); AddIdentifierRef(WI.getAlias(), WeakUndeclaredIdentifiers); AddSourceLocation(WI.getLocation(), WeakUndeclaredIdentifiers); WeakUndeclaredIdentifiers.push_back(WI.getUsed()); } // Build a record containing all of the ext_vector declarations. RecordData ExtVectorDecls; AddLazyVectorDecls(*this, SemaRef.ExtVectorDecls, ExtVectorDecls); // Build a record containing all of the VTable uses information. RecordData VTableUses; if (!SemaRef.VTableUses.empty()) { for (unsigned I = 0, N = SemaRef.VTableUses.size(); I != N; ++I) { AddDeclRef(SemaRef.VTableUses[I].first, VTableUses); AddSourceLocation(SemaRef.VTableUses[I].second, VTableUses); VTableUses.push_back(SemaRef.VTablesUsed[SemaRef.VTableUses[I].first]); } } // Build a record containing all of the UnusedLocalTypedefNameCandidates. 
RecordData UnusedLocalTypedefNameCandidates; for (const TypedefNameDecl *TD : SemaRef.UnusedLocalTypedefNameCandidates) AddDeclRef(TD, UnusedLocalTypedefNameCandidates); // Build a record containing all of pending implicit instantiations. RecordData PendingInstantiations; for (const auto &I : SemaRef.PendingInstantiations) { AddDeclRef(I.first, PendingInstantiations); AddSourceLocation(I.second, PendingInstantiations); } assert(SemaRef.PendingLocalImplicitInstantiations.empty() && "There are local ones at end of translation unit!"); // Build a record containing some declaration references. RecordData SemaDeclRefs; if (SemaRef.StdNamespace || SemaRef.StdBadAlloc || SemaRef.StdAlignValT) { AddDeclRef(SemaRef.getStdNamespace(), SemaDeclRefs); AddDeclRef(SemaRef.getStdBadAlloc(), SemaDeclRefs); AddDeclRef(SemaRef.getStdAlignValT(), SemaDeclRefs); } RecordData CUDASpecialDeclRefs; if (Context.getcudaConfigureCallDecl()) { AddDeclRef(Context.getcudaConfigureCallDecl(), CUDASpecialDeclRefs); } // Build a record containing all of the known namespaces. RecordData KnownNamespaces; for (const auto &I : SemaRef.KnownNamespaces) { if (!I.second) AddDeclRef(I.first, KnownNamespaces); } // Build a record of all used, undefined objects that require definitions. RecordData UndefinedButUsed; SmallVector, 16> Undefined; SemaRef.getUndefinedButUsed(Undefined); for (const auto &I : Undefined) { AddDeclRef(I.first, UndefinedButUsed); AddSourceLocation(I.second, UndefinedButUsed); } // Build a record containing all delete-expressions that we would like to // analyze later in AST. RecordData DeleteExprsToAnalyze; for (const auto &DeleteExprsInfo : SemaRef.getMismatchingDeleteExpressions()) { AddDeclRef(DeleteExprsInfo.first, DeleteExprsToAnalyze); DeleteExprsToAnalyze.push_back(DeleteExprsInfo.second.size()); for (const auto &DeleteLoc : DeleteExprsInfo.second) { AddSourceLocation(DeleteLoc.first, DeleteExprsToAnalyze); DeleteExprsToAnalyze.push_back(DeleteLoc.second); } } // Write the control block WriteControlBlock(PP, Context, isysroot, OutputFile); // Write the remaining AST contents. Stream.EnterSubblock(AST_BLOCK_ID, 5); // This is so that older clang versions, before the introduction // of the control block, can read and reject the newer PCH format. { RecordData Record = {VERSION_MAJOR}; Stream.EmitRecord(METADATA_OLD_FORMAT, Record); } // Create a lexical update block containing all of the declarations in the // translation unit that do not come from other AST files. const TranslationUnitDecl *TU = Context.getTranslationUnitDecl(); SmallVector NewGlobalKindDeclPairs; for (const auto *D : TU->noload_decls()) { if (!D->isFromASTFile()) { NewGlobalKindDeclPairs.push_back(D->getKind()); NewGlobalKindDeclPairs.push_back(GetDeclRef(D)); } } auto Abv = std::make_shared(); Abv->Add(llvm::BitCodeAbbrevOp(TU_UPDATE_LEXICAL)); Abv->Add(llvm::BitCodeAbbrevOp(llvm::BitCodeAbbrevOp::Blob)); unsigned TuUpdateLexicalAbbrev = Stream.EmitAbbrev(std::move(Abv)); { RecordData::value_type Record[] = {TU_UPDATE_LEXICAL}; Stream.EmitRecordWithBlob(TuUpdateLexicalAbbrev, Record, bytes(NewGlobalKindDeclPairs)); } // And a visible updates block for the translation unit. Abv = std::make_shared(); Abv->Add(llvm::BitCodeAbbrevOp(UPDATE_VISIBLE)); Abv->Add(llvm::BitCodeAbbrevOp(llvm::BitCodeAbbrevOp::VBR, 6)); Abv->Add(llvm::BitCodeAbbrevOp(llvm::BitCodeAbbrevOp::Blob)); UpdateVisibleAbbrev = Stream.EmitAbbrev(std::move(Abv)); WriteDeclContextVisibleUpdate(TU); // If we have any extern "C" names, write out a visible update for them. 
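// --- Editorial sketch (not part of ASTWriter.cpp) ---------------------------
// The TU_UPDATE_LEXICAL record just above stores the translation unit's
// locally added declarations as one blob of 32-bit (DeclKind, DeclID) pairs
// (NewGlobalKindDeclPairs).  The stand-alone snippet below shows that layout
// and how a reader walks it; PackedDecl, packLexicalBlob and unpackLexicalBlob
// are invented names, not Clang API.
#include <cstddef>
#include <cstdint>
#include <vector>

struct PackedDecl {
  uint32_t Kind; // declaration kind, lets the reader pre-filter cheaply
  uint32_t ID;   // global declaration ID, deserialized lazily on demand
};

inline std::vector<uint32_t> packLexicalBlob(const std::vector<PackedDecl> &Decls) {
  std::vector<uint32_t> Blob;
  Blob.reserve(Decls.size() * 2);
  for (const PackedDecl &D : Decls) {
    Blob.push_back(D.Kind);
    Blob.push_back(D.ID);
  }
  return Blob;
}

inline std::vector<PackedDecl> unpackLexicalBlob(const std::vector<uint32_t> &Blob) {
  std::vector<PackedDecl> Decls;
  for (std::size_t I = 0; I + 1 < Blob.size(); I += 2)
    Decls.push_back({Blob[I], Blob[I + 1]});
  return Decls;
}
// ---------------------------------------------------------------------------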
if (Context.ExternCContext) WriteDeclContextVisibleUpdate(Context.ExternCContext); // If the translation unit has an anonymous namespace, and we don't already // have an update block for it, write it as an update block. // FIXME: Why do we not do this if there's already an update block? if (NamespaceDecl *NS = TU->getAnonymousNamespace()) { ASTWriter::UpdateRecord &Record = DeclUpdates[TU]; if (Record.empty()) Record.push_back(DeclUpdate(UPD_CXX_ADDED_ANONYMOUS_NAMESPACE, NS)); } // Add update records for all mangling numbers and static local numbers. // These aren't really update records, but this is a convenient way of // tagging this rare extra data onto the declarations. for (const auto &Number : Context.MangleNumbers) if (!Number.first->isFromASTFile()) DeclUpdates[Number.first].push_back(DeclUpdate(UPD_MANGLING_NUMBER, Number.second)); for (const auto &Number : Context.StaticLocalNumbers) if (!Number.first->isFromASTFile()) DeclUpdates[Number.first].push_back(DeclUpdate(UPD_STATIC_LOCAL_NUMBER, Number.second)); // Make sure visible decls, added to DeclContexts previously loaded from // an AST file, are registered for serialization. Likewise for template // specializations added to imported templates. for (const auto *I : DeclsToEmitEvenIfUnreferenced) { GetDeclRef(I); } // Make sure all decls associated with an identifier are registered for // serialization, if we're storing decls with identifiers. if (!WritingModule || !getLangOpts().CPlusPlus) { llvm::SmallVector IIs; for (const auto &ID : PP.getIdentifierTable()) { const IdentifierInfo *II = ID.second; if (!Chain || !II->isFromAST() || II->hasChangedSinceDeserialization()) IIs.push_back(II); } // Sort the identifiers to visit based on their name. std::sort(IIs.begin(), IIs.end(), llvm::less_ptr()); for (const IdentifierInfo *II : IIs) { for (IdentifierResolver::iterator D = SemaRef.IdResolver.begin(II), DEnd = SemaRef.IdResolver.end(); D != DEnd; ++D) { GetDeclRef(*D); } } } // For method pool in the module, if it contains an entry for a selector, // the entry should be complete, containing everything introduced by that // module and all modules it imports. It's possible that the entry is out of // date, so we need to pull in the new content here. // It's possible that updateOutOfDateSelector can update SelectorIDs. To be // safe, we copy all selectors out. llvm::SmallVector AllSelectors; for (auto &SelectorAndID : SelectorIDs) AllSelectors.push_back(SelectorAndID.first); for (auto &Selector : AllSelectors) SemaRef.updateOutOfDateSelector(Selector); // Form the record of special types. RecordData SpecialTypes; AddTypeRef(Context.getRawCFConstantStringType(), SpecialTypes); AddTypeRef(Context.getFILEType(), SpecialTypes); AddTypeRef(Context.getjmp_bufType(), SpecialTypes); AddTypeRef(Context.getsigjmp_bufType(), SpecialTypes); AddTypeRef(Context.ObjCIdRedefinitionType, SpecialTypes); AddTypeRef(Context.ObjCClassRedefinitionType, SpecialTypes); AddTypeRef(Context.ObjCSelRedefinitionType, SpecialTypes); AddTypeRef(Context.getucontext_tType(), SpecialTypes); if (Chain) { // Write the mapping information describing our module dependencies and how // each of those modules were mapped into our own offset/ID space, so that // the reader can build the appropriate mapping to its own offset/ID space. 
// The map consists solely of a blob with the following format: // *(module-name-len:i16 module-name:len*i8 // source-location-offset:i32 // identifier-id:i32 // preprocessed-entity-id:i32 // macro-definition-id:i32 // submodule-id:i32 // selector-id:i32 // declaration-id:i32 // c++-base-specifiers-id:i32 // type-id:i32) // auto Abbrev = std::make_shared(); Abbrev->Add(BitCodeAbbrevOp(MODULE_OFFSET_MAP)); Abbrev->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Blob)); unsigned ModuleOffsetMapAbbrev = Stream.EmitAbbrev(std::move(Abbrev)); SmallString<2048> Buffer; { llvm::raw_svector_ostream Out(Buffer); for (ModuleFile &M : Chain->ModuleMgr) { using namespace llvm::support; endian::Writer LE(Out); StringRef FileName = M.FileName; LE.write(FileName.size()); Out.write(FileName.data(), FileName.size()); // Note: if a base ID was uint max, it would not be possible to load // another module after it or have more than one entity inside it. uint32_t None = std::numeric_limits::max(); auto writeBaseIDOrNone = [&](uint32_t BaseID, bool ShouldWrite) { assert(BaseID < std::numeric_limits::max() && "base id too high"); if (ShouldWrite) LE.write(BaseID); else LE.write(None); }; // These values should be unique within a chain, since they will be read // as keys into ContinuousRangeMaps. writeBaseIDOrNone(M.SLocEntryBaseOffset, M.LocalNumSLocEntries); writeBaseIDOrNone(M.BaseIdentifierID, M.LocalNumIdentifiers); writeBaseIDOrNone(M.BaseMacroID, M.LocalNumMacros); writeBaseIDOrNone(M.BasePreprocessedEntityID, M.NumPreprocessedEntities); writeBaseIDOrNone(M.BaseSubmoduleID, M.LocalNumSubmodules); writeBaseIDOrNone(M.BaseSelectorID, M.LocalNumSelectors); writeBaseIDOrNone(M.BaseDeclID, M.LocalNumDecls); writeBaseIDOrNone(M.BaseTypeIndex, M.LocalNumTypes); } } RecordData::value_type Record[] = {MODULE_OFFSET_MAP}; Stream.EmitRecordWithBlob(ModuleOffsetMapAbbrev, Record, Buffer.data(), Buffer.size()); } RecordData DeclUpdatesOffsetsRecord; // Keep writing types, declarations, and declaration update records // until we've emitted all of them. Stream.EnterSubblock(DECLTYPES_BLOCK_ID, /*bits for abbreviations*/5); WriteTypeAbbrevs(); WriteDeclAbbrevs(); do { WriteDeclUpdatesBlocks(DeclUpdatesOffsetsRecord); while (!DeclTypesToEmit.empty()) { DeclOrType DOT = DeclTypesToEmit.front(); DeclTypesToEmit.pop(); if (DOT.isType()) WriteType(DOT.getType()); else WriteDecl(Context, DOT.getDecl()); } } while (!DeclUpdates.empty()); Stream.ExitBlock(); DoneWritingDeclsAndTypes = true; // These things can only be done once we've written out decls and types. WriteTypeDeclOffsets(); if (!DeclUpdatesOffsetsRecord.empty()) Stream.EmitRecord(DECL_UPDATE_OFFSETS, DeclUpdatesOffsetsRecord); WriteFileDeclIDsMap(); WriteSourceManagerBlock(Context.getSourceManager(), PP); WriteComments(); WritePreprocessor(PP, isModule); WriteHeaderSearch(PP.getHeaderSearchInfo()); WriteSelectors(SemaRef); WriteReferencedSelectorsPool(SemaRef); WriteLateParsedTemplates(SemaRef); WriteIdentifierTable(PP, SemaRef.IdResolver, isModule); WriteFPPragmaOptions(SemaRef.getFPOptions()); WriteOpenCLExtensions(SemaRef); WriteOpenCLExtensionTypes(SemaRef); WriteOpenCLExtensionDecls(SemaRef); WriteCUDAPragmas(SemaRef); // If we're emitting a module, write out the submodule information. if (WritingModule) WriteSubmodules(WritingModule); Stream.EmitRecord(SPECIAL_TYPES, SpecialTypes); // Write the record containing external, unnamed definitions. 
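// --- Editorial sketch (not part of ASTWriter.cpp) ---------------------------
// The MODULE_OFFSET_MAP blob assembled above is a sequence of fixed-layout
// entries: a 16-bit name length, the file name bytes, then one 32-bit base ID
// per ID space, with 0xFFFFFFFF meaning "this module contributes nothing to
// that space".  A minimal stand-alone encoder for that shape, assuming a
// little-endian on-disk byte order; all helper names here are invented.
#include <cstdint>
#include <limits>
#include <string>
#include <vector>

static const uint32_t NoBaseID = std::numeric_limits<uint32_t>::max();

static void writeLE16(std::vector<uint8_t> &Out, uint16_t V) {
  Out.push_back(uint8_t(V & 0xFF));
  Out.push_back(uint8_t(V >> 8));
}

static void writeLE32(std::vector<uint8_t> &Out, uint32_t V) {
  for (int Shift = 0; Shift < 32; Shift += 8)
    Out.push_back(uint8_t((V >> Shift) & 0xFF));
}

// One entry per module file in the chain, in load order.  BaseIDs holds the
// per-space bases (SLoc offset, identifier, macro, preprocessed entity,
// submodule, selector, decl, type); callers pass NoBaseID for an empty space.
static void appendOffsetMapEntry(std::vector<uint8_t> &Blob,
                                 const std::string &FileName,
                                 const std::vector<uint32_t> &BaseIDs) {
  writeLE16(Blob, static_cast<uint16_t>(FileName.size()));
  Blob.insert(Blob.end(), FileName.begin(), FileName.end());
  for (uint32_t Base : BaseIDs)
    writeLE32(Blob, Base);
}
// ---------------------------------------------------------------------------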
if (!EagerlyDeserializedDecls.empty()) Stream.EmitRecord(EAGERLY_DESERIALIZED_DECLS, EagerlyDeserializedDecls); if (!ModularCodegenDecls.empty()) Stream.EmitRecord(MODULAR_CODEGEN_DECLS, ModularCodegenDecls); // Write the record containing tentative definitions. if (!TentativeDefinitions.empty()) Stream.EmitRecord(TENTATIVE_DEFINITIONS, TentativeDefinitions); // Write the record containing unused file scoped decls. if (!UnusedFileScopedDecls.empty()) Stream.EmitRecord(UNUSED_FILESCOPED_DECLS, UnusedFileScopedDecls); // Write the record containing weak undeclared identifiers. if (!WeakUndeclaredIdentifiers.empty()) Stream.EmitRecord(WEAK_UNDECLARED_IDENTIFIERS, WeakUndeclaredIdentifiers); // Write the record containing ext_vector type names. if (!ExtVectorDecls.empty()) Stream.EmitRecord(EXT_VECTOR_DECLS, ExtVectorDecls); // Write the record containing VTable uses information. if (!VTableUses.empty()) Stream.EmitRecord(VTABLE_USES, VTableUses); // Write the record containing potentially unused local typedefs. if (!UnusedLocalTypedefNameCandidates.empty()) Stream.EmitRecord(UNUSED_LOCAL_TYPEDEF_NAME_CANDIDATES, UnusedLocalTypedefNameCandidates); // Write the record containing pending implicit instantiations. if (!PendingInstantiations.empty()) Stream.EmitRecord(PENDING_IMPLICIT_INSTANTIATIONS, PendingInstantiations); // Write the record containing declaration references of Sema. if (!SemaDeclRefs.empty()) Stream.EmitRecord(SEMA_DECL_REFS, SemaDeclRefs); // Write the record containing CUDA-specific declaration references. if (!CUDASpecialDeclRefs.empty()) Stream.EmitRecord(CUDA_SPECIAL_DECL_REFS, CUDASpecialDeclRefs); // Write the delegating constructors. if (!DelegatingCtorDecls.empty()) Stream.EmitRecord(DELEGATING_CTORS, DelegatingCtorDecls); // Write the known namespaces. if (!KnownNamespaces.empty()) Stream.EmitRecord(KNOWN_NAMESPACES, KnownNamespaces); // Write the undefined internal functions and variables, and inline functions. if (!UndefinedButUsed.empty()) Stream.EmitRecord(UNDEFINED_BUT_USED, UndefinedButUsed); if (!DeleteExprsToAnalyze.empty()) Stream.EmitRecord(DELETE_EXPRS_TO_ANALYZE, DeleteExprsToAnalyze); // Write the visible updates to DeclContexts. for (auto *DC : UpdatedDeclContexts) WriteDeclContextVisibleUpdate(DC); if (!WritingModule) { // Write the submodules that were imported, if any. struct ModuleInfo { uint64_t ID; Module *M; ModuleInfo(uint64_t ID, Module *M) : ID(ID), M(M) {} }; llvm::SmallVector Imports; for (const auto *I : Context.local_imports()) { assert(SubmoduleIDs.find(I->getImportedModule()) != SubmoduleIDs.end()); Imports.push_back(ModuleInfo(SubmoduleIDs[I->getImportedModule()], I->getImportedModule())); } if (!Imports.empty()) { auto Cmp = [](const ModuleInfo &A, const ModuleInfo &B) { return A.ID < B.ID; }; auto Eq = [](const ModuleInfo &A, const ModuleInfo &B) { return A.ID == B.ID; }; // Sort and deduplicate module IDs. std::sort(Imports.begin(), Imports.end(), Cmp); Imports.erase(std::unique(Imports.begin(), Imports.end(), Eq), Imports.end()); RecordData ImportedModules; for (const auto &Import : Imports) { ImportedModules.push_back(Import.ID); // FIXME: If the module has macros imported then later has declarations // imported, this location won't be the right one as a location for the // declaration imports. 
AddSourceLocation(PP.getModuleImportLoc(Import.M), ImportedModules); } Stream.EmitRecord(IMPORTED_MODULES, ImportedModules); } } WriteObjCCategories(); if(!WritingModule) { WriteOptimizePragmaOptions(SemaRef); WriteMSStructPragmaOptions(SemaRef); WriteMSPointersToMembersPragmaOptions(SemaRef); } WritePackPragmaOptions(SemaRef); // Some simple statistics RecordData::value_type Record[] = { NumStatements, NumMacros, NumLexicalDeclContexts, NumVisibleDeclContexts}; Stream.EmitRecord(STATISTICS, Record); Stream.ExitBlock(); // Write the module file extension blocks. for (const auto &ExtWriter : ModuleFileExtensionWriters) WriteModuleFileExtension(SemaRef, *ExtWriter); return writeUnhashedControlBlock(PP, Context); } void ASTWriter::WriteDeclUpdatesBlocks(RecordDataImpl &OffsetsRecord) { if (DeclUpdates.empty()) return; DeclUpdateMap LocalUpdates; LocalUpdates.swap(DeclUpdates); for (auto &DeclUpdate : LocalUpdates) { const Decl *D = DeclUpdate.first; bool HasUpdatedBody = false; RecordData RecordData; ASTRecordWriter Record(*this, RecordData); for (auto &Update : DeclUpdate.second) { DeclUpdateKind Kind = (DeclUpdateKind)Update.getKind(); // An updated body is emitted last, so that the reader doesn't need // to skip over the lazy body to reach statements for other records. if (Kind == UPD_CXX_ADDED_FUNCTION_DEFINITION) HasUpdatedBody = true; else Record.push_back(Kind); switch (Kind) { case UPD_CXX_ADDED_IMPLICIT_MEMBER: case UPD_CXX_ADDED_TEMPLATE_SPECIALIZATION: case UPD_CXX_ADDED_ANONYMOUS_NAMESPACE: assert(Update.getDecl() && "no decl to add?"); Record.push_back(GetDeclRef(Update.getDecl())); break; case UPD_CXX_ADDED_FUNCTION_DEFINITION: break; case UPD_CXX_INSTANTIATED_STATIC_DATA_MEMBER: { const VarDecl *VD = cast(D); Record.AddSourceLocation(Update.getLoc()); if (VD->getInit()) { Record.push_back(!VD->isInitKnownICE() ? 1 : (VD->isInitICE() ? 3 : 2)); Record.AddStmt(const_cast(VD->getInit())); } else { Record.push_back(0); } break; } case UPD_CXX_INSTANTIATED_DEFAULT_ARGUMENT: Record.AddStmt(const_cast( cast(Update.getDecl())->getDefaultArg())); break; case UPD_CXX_INSTANTIATED_DEFAULT_MEMBER_INITIALIZER: Record.AddStmt( cast(Update.getDecl())->getInClassInitializer()); break; case UPD_CXX_INSTANTIATED_CLASS_DEFINITION: { auto *RD = cast(D); UpdatedDeclContexts.insert(RD->getPrimaryContext()); Record.AddCXXDefinitionData(RD); Record.AddOffset(WriteDeclContextLexicalBlock( *Context, const_cast(RD))); // This state is sometimes updated by template instantiation, when we // switch from the specialization referring to the template declaration // to it referring to the template definition. if (auto *MSInfo = RD->getMemberSpecializationInfo()) { Record.push_back(MSInfo->getTemplateSpecializationKind()); Record.AddSourceLocation(MSInfo->getPointOfInstantiation()); } else { auto *Spec = cast(RD); Record.push_back(Spec->getTemplateSpecializationKind()); Record.AddSourceLocation(Spec->getPointOfInstantiation()); // The instantiation might have been resolved to a partial // specialization. If so, record which one. auto From = Spec->getInstantiatedFrom(); if (auto PartialSpec = From.dyn_cast()) { Record.push_back(true); Record.AddDeclRef(PartialSpec); Record.AddTemplateArgumentList( &Spec->getTemplateInstantiationArgs()); } else { Record.push_back(false); } } Record.push_back(RD->getTagKind()); Record.AddSourceLocation(RD->getLocation()); Record.AddSourceLocation(RD->getLocStart()); Record.AddSourceRange(RD->getBraceRange()); // Instantiation may change attributes; write them all out afresh. 
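// --- Editorial sketch (not part of ASTWriter.cpp) ---------------------------
// WriteDeclUpdatesBlocks above defers UPD_CXX_ADDED_FUNCTION_DEFINITION to the
// end of each update record, so a reader can walk the small fixed-size updates
// without first skipping over a potentially large serialized function body.
// The stand-alone snippet below shows the same "variable-length payload goes
// last" layout; Update and encodeUpdates are invented names, not Clang API.
#include <cstdint>
#include <vector>

struct Update {
  uint32_t Kind;
  std::vector<uint32_t> Payload;   // empty for most kinds
  bool IsLargeTrailingPayload;     // e.g. an added function body
};

inline std::vector<uint32_t> encodeUpdates(const std::vector<Update> &Updates) {
  std::vector<uint32_t> Record;
  const Update *Deferred = nullptr;
  for (const Update &U : Updates) {
    if (U.IsLargeTrailingPayload) { // remember it, emit after everything else
      Deferred = &U;
      continue;
    }
    Record.push_back(U.Kind);
    Record.insert(Record.end(), U.Payload.begin(), U.Payload.end());
  }
  if (Deferred) {                   // the large body lands at the very end
    Record.push_back(Deferred->Kind);
    Record.insert(Record.end(), Deferred->Payload.begin(),
                  Deferred->Payload.end());
  }
  return Record;
}
// ---------------------------------------------------------------------------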
Record.push_back(D->hasAttrs()); if (D->hasAttrs()) Record.AddAttributes(D->getAttrs()); // FIXME: Ensure we don't get here for explicit instantiations. break; } case UPD_CXX_RESOLVED_DTOR_DELETE: Record.AddDeclRef(Update.getDecl()); break; case UPD_CXX_RESOLVED_EXCEPTION_SPEC: addExceptionSpec( cast(D)->getType()->castAs(), Record); break; case UPD_CXX_DEDUCED_RETURN_TYPE: Record.push_back(GetOrCreateTypeID(Update.getType())); break; case UPD_DECL_MARKED_USED: break; case UPD_MANGLING_NUMBER: case UPD_STATIC_LOCAL_NUMBER: Record.push_back(Update.getNumber()); break; case UPD_DECL_MARKED_OPENMP_THREADPRIVATE: Record.AddSourceRange( D->getAttr()->getRange()); break; case UPD_DECL_MARKED_OPENMP_DECLARETARGET: Record.AddSourceRange( D->getAttr()->getRange()); break; case UPD_DECL_EXPORTED: Record.push_back(getSubmoduleID(Update.getModule())); break; case UPD_ADDED_ATTR_TO_RECORD: Record.AddAttributes(llvm::makeArrayRef(Update.getAttr())); break; } } if (HasUpdatedBody) { const auto *Def = cast(D); Record.push_back(UPD_CXX_ADDED_FUNCTION_DEFINITION); Record.push_back(Def->isInlined()); Record.AddSourceLocation(Def->getInnerLocStart()); Record.AddFunctionDefinition(Def); } OffsetsRecord.push_back(GetDeclRef(D)); OffsetsRecord.push_back(Record.Emit(DECL_UPDATES)); } } void ASTWriter::AddSourceLocation(SourceLocation Loc, RecordDataImpl &Record) { uint32_t Raw = Loc.getRawEncoding(); Record.push_back((Raw << 1) | (Raw >> 31)); } void ASTWriter::AddSourceRange(SourceRange Range, RecordDataImpl &Record) { AddSourceLocation(Range.getBegin(), Record); AddSourceLocation(Range.getEnd(), Record); } void ASTRecordWriter::AddAPInt(const llvm::APInt &Value) { Record->push_back(Value.getBitWidth()); const uint64_t *Words = Value.getRawData(); Record->append(Words, Words + Value.getNumWords()); } void ASTRecordWriter::AddAPSInt(const llvm::APSInt &Value) { Record->push_back(Value.isUnsigned()); AddAPInt(Value); } void ASTRecordWriter::AddAPFloat(const llvm::APFloat &Value) { AddAPInt(Value.bitcastToAPInt()); } void ASTWriter::AddIdentifierRef(const IdentifierInfo *II, RecordDataImpl &Record) { Record.push_back(getIdentifierRef(II)); } IdentID ASTWriter::getIdentifierRef(const IdentifierInfo *II) { if (!II) return 0; IdentID &ID = IdentifierIDs[II]; if (ID == 0) ID = NextIdentID++; return ID; } MacroID ASTWriter::getMacroRef(MacroInfo *MI, const IdentifierInfo *Name) { // Don't emit builtin macros like __LINE__ to the AST file unless they // have been redefined by the header (in which case they are not // isBuiltinMacro). if (!MI || MI->isBuiltinMacro()) return 0; MacroID &ID = MacroIDs[MI]; if (ID == 0) { ID = NextMacroID++; MacroInfoToEmitData Info = { Name, MI, ID }; MacroInfosToEmit.push_back(Info); } return ID; } MacroID ASTWriter::getMacroID(MacroInfo *MI) { if (!MI || MI->isBuiltinMacro()) return 0; assert(MacroIDs.find(MI) != MacroIDs.end() && "Macro not emitted!"); return MacroIDs[MI]; } uint64_t ASTWriter::getMacroDirectivesOffset(const IdentifierInfo *Name) { return IdentMacroDirectivesOffsetMap.lookup(Name); } void ASTRecordWriter::AddSelectorRef(const Selector SelRef) { Record->push_back(Writer->getSelectorRef(SelRef)); } SelectorID ASTWriter::getSelectorRef(Selector Sel) { if (Sel.getAsOpaquePtr() == nullptr) { return 0; } SelectorID SID = SelectorIDs[Sel]; if (SID == 0 && Chain) { // This might trigger a ReadSelector callback, which will set the ID for // this selector. 
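// --- Editorial sketch (not part of ASTWriter.cpp) ---------------------------
// AddSourceLocation above stores a location as its raw encoding rotated left
// by one bit.  The usual reading is that this moves the high "macro
// expansion" flag into the low bit so that ordinary file offsets stay small
// for the variable-width abbreviations used elsewhere in the stream.  The
// transform and its inverse, in isolation:
#include <cassert>
#include <cstdint>
#include <initializer_list>

inline uint32_t encodeRawLoc(uint32_t Raw) { return (Raw << 1) | (Raw >> 31); }
inline uint32_t decodeRawLoc(uint32_t Enc) { return (Enc >> 1) | (Enc << 31); }

inline void rawLocRoundTripCheck() {
  for (uint32_t Raw : {0u, 1u, 0x7FFFFFFFu, 0x80000000u, 0xDEADBEEFu})
    assert(decodeRawLoc(encodeRawLoc(Raw)) == Raw);
}
// ---------------------------------------------------------------------------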
Chain->LoadSelector(Sel); SID = SelectorIDs[Sel]; } if (SID == 0) { SID = NextSelectorID++; SelectorIDs[Sel] = SID; } return SID; } void ASTRecordWriter::AddCXXTemporary(const CXXTemporary *Temp) { AddDeclRef(Temp->getDestructor()); } void ASTRecordWriter::AddTemplateArgumentLocInfo( TemplateArgument::ArgKind Kind, const TemplateArgumentLocInfo &Arg) { switch (Kind) { case TemplateArgument::Expression: AddStmt(Arg.getAsExpr()); break; case TemplateArgument::Type: AddTypeSourceInfo(Arg.getAsTypeSourceInfo()); break; case TemplateArgument::Template: AddNestedNameSpecifierLoc(Arg.getTemplateQualifierLoc()); AddSourceLocation(Arg.getTemplateNameLoc()); break; case TemplateArgument::TemplateExpansion: AddNestedNameSpecifierLoc(Arg.getTemplateQualifierLoc()); AddSourceLocation(Arg.getTemplateNameLoc()); AddSourceLocation(Arg.getTemplateEllipsisLoc()); break; case TemplateArgument::Null: case TemplateArgument::Integral: case TemplateArgument::Declaration: case TemplateArgument::NullPtr: case TemplateArgument::Pack: // FIXME: Is this right? break; } } void ASTRecordWriter::AddTemplateArgumentLoc(const TemplateArgumentLoc &Arg) { AddTemplateArgument(Arg.getArgument()); if (Arg.getArgument().getKind() == TemplateArgument::Expression) { bool InfoHasSameExpr = Arg.getArgument().getAsExpr() == Arg.getLocInfo().getAsExpr(); Record->push_back(InfoHasSameExpr); if (InfoHasSameExpr) return; // Avoid storing the same expr twice. } AddTemplateArgumentLocInfo(Arg.getArgument().getKind(), Arg.getLocInfo()); } void ASTRecordWriter::AddTypeSourceInfo(TypeSourceInfo *TInfo) { if (!TInfo) { AddTypeRef(QualType()); return; } AddTypeLoc(TInfo->getTypeLoc()); } void ASTRecordWriter::AddTypeLoc(TypeLoc TL) { AddTypeRef(TL.getType()); TypeLocWriter TLW(*this); for (; !TL.isNull(); TL = TL.getNextTypeLoc()) TLW.Visit(TL); } void ASTWriter::AddTypeRef(QualType T, RecordDataImpl &Record) { Record.push_back(GetOrCreateTypeID(T)); } TypeID ASTWriter::GetOrCreateTypeID(QualType T) { assert(Context); return MakeTypeID(*Context, T, [&](QualType T) -> TypeIdx { if (T.isNull()) return TypeIdx(); assert(!T.getLocalFastQualifiers()); TypeIdx &Idx = TypeIdxs[T]; if (Idx.getIndex() == 0) { if (DoneWritingDeclsAndTypes) { assert(0 && "New type seen after serializing all the types to emit!"); return TypeIdx(); } // We haven't seen this type before. Assign it a new ID and put it // into the queue of types to emit. Idx = TypeIdx(NextTypeID++); DeclTypesToEmit.push(T); } return Idx; }); } TypeID ASTWriter::getTypeID(QualType T) const { assert(Context); return MakeTypeID(*Context, T, [&](QualType T) -> TypeIdx { if (T.isNull()) return TypeIdx(); assert(!T.getLocalFastQualifiers()); TypeIdxMap::const_iterator I = TypeIdxs.find(T); assert(I != TypeIdxs.end() && "Type not emitted!"); return I->second; }); } void ASTWriter::AddDeclRef(const Decl *D, RecordDataImpl &Record) { Record.push_back(GetDeclRef(D)); } DeclID ASTWriter::GetDeclRef(const Decl *D) { assert(WritingAST && "Cannot request a declaration ID before AST writing"); if (!D) { return 0; } // If D comes from an AST file, its declaration ID is already known and // fixed. if (D->isFromASTFile()) return D->getGlobalID(); assert(!(reinterpret_cast(D) & 0x01) && "Invalid decl pointer"); DeclID &ID = DeclIDs[D]; if (ID == 0) { if (DoneWritingDeclsAndTypes) { assert(0 && "New decl seen after serializing all the decls to emit!"); return 0; } // We haven't seen this declaration before. Give it a new ID and // enqueue it in the list of declarations to emit. 
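// --- Editorial sketch (not part of ASTWriter.cpp) ---------------------------
// getIdentifierRef, getMacroRef, getSelectorRef, GetOrCreateTypeID and
// GetDeclRef all follow the same scheme: ID 0 means "no entity", the first
// reference assigns the next free ID, and (for macros, types and decls) the
// entity is queued so it is emitted exactly once.  A minimal generic version
// of that scheme; LazyIDTable is an invented name and T must be hashable.
#include <cstdint>
#include <deque>
#include <unordered_map>

template <typename T>
class LazyIDTable {
  std::unordered_map<T, uint32_t> IDs;
  std::deque<T> ToEmit;        // work list drained by the serializer
  uint32_t NextID = 1;         // 0 is reserved for "no entity"

public:
  uint32_t getRef(const T &Entity) {
    uint32_t &ID = IDs[Entity]; // default-constructs to 0 on first lookup
    if (ID == 0) {
      ID = NextID++;
      ToEmit.push_back(Entity); // emit it later, exactly once
    }
    return ID;
  }
  bool hasPending() const { return !ToEmit.empty(); }
  T popPending() { T E = ToEmit.front(); ToEmit.pop_front(); return E; }
};
// ---------------------------------------------------------------------------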
ID = NextDeclID++; DeclTypesToEmit.push(const_cast(D)); } return ID; } DeclID ASTWriter::getDeclID(const Decl *D) { if (!D) return 0; // If D comes from an AST file, its declaration ID is already known and // fixed. if (D->isFromASTFile()) return D->getGlobalID(); assert(DeclIDs.find(D) != DeclIDs.end() && "Declaration not emitted!"); return DeclIDs[D]; } void ASTWriter::associateDeclWithFile(const Decl *D, DeclID ID) { assert(ID); assert(D); SourceLocation Loc = D->getLocation(); if (Loc.isInvalid()) return; // We only keep track of the file-level declarations of each file. if (!D->getLexicalDeclContext()->isFileContext()) return; // FIXME: ParmVarDecls that are part of a function type of a parameter of // a function/objc method, should not have TU as lexical context. if (isa(D)) return; SourceManager &SM = Context->getSourceManager(); SourceLocation FileLoc = SM.getFileLoc(Loc); assert(SM.isLocalSourceLocation(FileLoc)); FileID FID; unsigned Offset; std::tie(FID, Offset) = SM.getDecomposedLoc(FileLoc); if (FID.isInvalid()) return; assert(SM.getSLocEntry(FID).isFile()); DeclIDInFileInfo *&Info = FileDeclIDs[FID]; if (!Info) Info = new DeclIDInFileInfo(); std::pair LocDecl(Offset, ID); LocDeclIDsTy &Decls = Info->DeclIDs; if (Decls.empty() || Decls.back().first <= Offset) { Decls.push_back(LocDecl); return; } LocDeclIDsTy::iterator I = std::upper_bound(Decls.begin(), Decls.end(), LocDecl, llvm::less_first()); Decls.insert(I, LocDecl); } void ASTRecordWriter::AddDeclarationName(DeclarationName Name) { // FIXME: Emit a stable enum for NameKind. 0 = Identifier etc. Record->push_back(Name.getNameKind()); switch (Name.getNameKind()) { case DeclarationName::Identifier: AddIdentifierRef(Name.getAsIdentifierInfo()); break; case DeclarationName::ObjCZeroArgSelector: case DeclarationName::ObjCOneArgSelector: case DeclarationName::ObjCMultiArgSelector: AddSelectorRef(Name.getObjCSelector()); break; case DeclarationName::CXXConstructorName: case DeclarationName::CXXDestructorName: case DeclarationName::CXXConversionFunctionName: AddTypeRef(Name.getCXXNameType()); break; case DeclarationName::CXXDeductionGuideName: AddDeclRef(Name.getCXXDeductionGuideTemplate()); break; case DeclarationName::CXXOperatorName: Record->push_back(Name.getCXXOverloadedOperator()); break; case DeclarationName::CXXLiteralOperatorName: AddIdentifierRef(Name.getCXXLiteralIdentifier()); break; case DeclarationName::CXXUsingDirective: // No extra data to emit break; } } unsigned ASTWriter::getAnonymousDeclarationNumber(const NamedDecl *D) { assert(needsAnonymousDeclarationNumber(D) && "expected an anonymous declaration"); // Number the anonymous declarations within this context, if we've not // already done so. 
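// --- Editorial sketch (not part of ASTWriter.cpp) ---------------------------
// associateDeclWithFile above keeps, per FileID, a vector of (file offset,
// decl ID) pairs sorted by offset.  Most declarations arrive in source order,
// so the common case is a cheap push_back; only out-of-order arrivals pay for
// the binary search.  The same idea in stand-alone form (invented names):
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

using OffsetAndID = std::pair<unsigned, uint32_t>;

inline void insertSortedByOffset(std::vector<OffsetAndID> &Decls,
                                 unsigned Offset, uint32_t ID) {
  OffsetAndID Entry(Offset, ID);
  if (Decls.empty() || Decls.back().first <= Offset) {
    Decls.push_back(Entry);        // fast path: already in order
    return;
  }
  auto I = std::upper_bound(Decls.begin(), Decls.end(), Entry,
                            [](const OffsetAndID &A, const OffsetAndID &B) {
                              return A.first < B.first;
                            });
  Decls.insert(I, Entry);          // rare path: keep the vector sorted
}
// ---------------------------------------------------------------------------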
auto It = AnonymousDeclarationNumbers.find(D); if (It == AnonymousDeclarationNumbers.end()) { auto *DC = D->getLexicalDeclContext(); numberAnonymousDeclsWithin(DC, [&](const NamedDecl *ND, unsigned Number) { AnonymousDeclarationNumbers[ND] = Number; }); It = AnonymousDeclarationNumbers.find(D); assert(It != AnonymousDeclarationNumbers.end() && "declaration not found within its lexical context"); } return It->second; } void ASTRecordWriter::AddDeclarationNameLoc(const DeclarationNameLoc &DNLoc, DeclarationName Name) { switch (Name.getNameKind()) { case DeclarationName::CXXConstructorName: case DeclarationName::CXXDestructorName: case DeclarationName::CXXConversionFunctionName: AddTypeSourceInfo(DNLoc.NamedType.TInfo); break; case DeclarationName::CXXOperatorName: AddSourceLocation(SourceLocation::getFromRawEncoding( DNLoc.CXXOperatorName.BeginOpNameLoc)); AddSourceLocation( SourceLocation::getFromRawEncoding(DNLoc.CXXOperatorName.EndOpNameLoc)); break; case DeclarationName::CXXLiteralOperatorName: AddSourceLocation(SourceLocation::getFromRawEncoding( DNLoc.CXXLiteralOperatorName.OpNameLoc)); break; case DeclarationName::Identifier: case DeclarationName::ObjCZeroArgSelector: case DeclarationName::ObjCOneArgSelector: case DeclarationName::ObjCMultiArgSelector: case DeclarationName::CXXUsingDirective: case DeclarationName::CXXDeductionGuideName: break; } } void ASTRecordWriter::AddDeclarationNameInfo( const DeclarationNameInfo &NameInfo) { AddDeclarationName(NameInfo.getName()); AddSourceLocation(NameInfo.getLoc()); AddDeclarationNameLoc(NameInfo.getInfo(), NameInfo.getName()); } void ASTRecordWriter::AddQualifierInfo(const QualifierInfo &Info) { AddNestedNameSpecifierLoc(Info.QualifierLoc); Record->push_back(Info.NumTemplParamLists); for (unsigned i = 0, e = Info.NumTemplParamLists; i != e; ++i) AddTemplateParameterList(Info.TemplParamLists[i]); } void ASTRecordWriter::AddNestedNameSpecifier(NestedNameSpecifier *NNS) { // Nested name specifiers usually aren't too long. I think that 8 would // typically accommodate the vast majority. SmallVector NestedNames; // Push each of the NNS's onto a stack for serialization in reverse order. while (NNS) { NestedNames.push_back(NNS); NNS = NNS->getPrefix(); } Record->push_back(NestedNames.size()); while(!NestedNames.empty()) { NNS = NestedNames.pop_back_val(); NestedNameSpecifier::SpecifierKind Kind = NNS->getKind(); Record->push_back(Kind); switch (Kind) { case NestedNameSpecifier::Identifier: AddIdentifierRef(NNS->getAsIdentifier()); break; case NestedNameSpecifier::Namespace: AddDeclRef(NNS->getAsNamespace()); break; case NestedNameSpecifier::NamespaceAlias: AddDeclRef(NNS->getAsNamespaceAlias()); break; case NestedNameSpecifier::TypeSpec: case NestedNameSpecifier::TypeSpecWithTemplate: AddTypeRef(QualType(NNS->getAsType(), 0)); Record->push_back(Kind == NestedNameSpecifier::TypeSpecWithTemplate); break; case NestedNameSpecifier::Global: // Don't need to write an associated value. break; case NestedNameSpecifier::Super: AddDeclRef(NNS->getAsRecordDecl()); break; } } } void ASTRecordWriter::AddNestedNameSpecifierLoc(NestedNameSpecifierLoc NNS) { // Nested name specifiers usually aren't too long. I think that 8 would // typically accommodate the vast majority. SmallVector NestedNames; // Push each of the nested-name-specifiers's onto a stack for // serialization in reverse order. 
while (NNS) { NestedNames.push_back(NNS); NNS = NNS.getPrefix(); } Record->push_back(NestedNames.size()); while(!NestedNames.empty()) { NNS = NestedNames.pop_back_val(); NestedNameSpecifier::SpecifierKind Kind = NNS.getNestedNameSpecifier()->getKind(); Record->push_back(Kind); switch (Kind) { case NestedNameSpecifier::Identifier: AddIdentifierRef(NNS.getNestedNameSpecifier()->getAsIdentifier()); AddSourceRange(NNS.getLocalSourceRange()); break; case NestedNameSpecifier::Namespace: AddDeclRef(NNS.getNestedNameSpecifier()->getAsNamespace()); AddSourceRange(NNS.getLocalSourceRange()); break; case NestedNameSpecifier::NamespaceAlias: AddDeclRef(NNS.getNestedNameSpecifier()->getAsNamespaceAlias()); AddSourceRange(NNS.getLocalSourceRange()); break; case NestedNameSpecifier::TypeSpec: case NestedNameSpecifier::TypeSpecWithTemplate: Record->push_back(Kind == NestedNameSpecifier::TypeSpecWithTemplate); AddTypeLoc(NNS.getTypeLoc()); AddSourceLocation(NNS.getLocalSourceRange().getEnd()); break; case NestedNameSpecifier::Global: AddSourceLocation(NNS.getLocalSourceRange().getEnd()); break; case NestedNameSpecifier::Super: AddDeclRef(NNS.getNestedNameSpecifier()->getAsRecordDecl()); AddSourceRange(NNS.getLocalSourceRange()); break; } } } void ASTRecordWriter::AddTemplateName(TemplateName Name) { TemplateName::NameKind Kind = Name.getKind(); Record->push_back(Kind); switch (Kind) { case TemplateName::Template: AddDeclRef(Name.getAsTemplateDecl()); break; case TemplateName::OverloadedTemplate: { OverloadedTemplateStorage *OvT = Name.getAsOverloadedTemplate(); Record->push_back(OvT->size()); for (const auto &I : *OvT) AddDeclRef(I); break; } case TemplateName::QualifiedTemplate: { QualifiedTemplateName *QualT = Name.getAsQualifiedTemplateName(); AddNestedNameSpecifier(QualT->getQualifier()); Record->push_back(QualT->hasTemplateKeyword()); AddDeclRef(QualT->getTemplateDecl()); break; } case TemplateName::DependentTemplate: { DependentTemplateName *DepT = Name.getAsDependentTemplateName(); AddNestedNameSpecifier(DepT->getQualifier()); Record->push_back(DepT->isIdentifier()); if (DepT->isIdentifier()) AddIdentifierRef(DepT->getIdentifier()); else Record->push_back(DepT->getOperator()); break; } case TemplateName::SubstTemplateTemplateParm: { SubstTemplateTemplateParmStorage *subst = Name.getAsSubstTemplateTemplateParm(); AddDeclRef(subst->getParameter()); AddTemplateName(subst->getReplacement()); break; } case TemplateName::SubstTemplateTemplateParmPack: { SubstTemplateTemplateParmPackStorage *SubstPack = Name.getAsSubstTemplateTemplateParmPack(); AddDeclRef(SubstPack->getParameterPack()); AddTemplateArgument(SubstPack->getArgumentPack()); break; } } } void ASTRecordWriter::AddTemplateArgument(const TemplateArgument &Arg) { Record->push_back(Arg.getKind()); switch (Arg.getKind()) { case TemplateArgument::Null: break; case TemplateArgument::Type: AddTypeRef(Arg.getAsType()); break; case TemplateArgument::Declaration: AddDeclRef(Arg.getAsDecl()); AddTypeRef(Arg.getParamTypeForDecl()); break; case TemplateArgument::NullPtr: AddTypeRef(Arg.getNullPtrType()); break; case TemplateArgument::Integral: AddAPSInt(Arg.getAsIntegral()); AddTypeRef(Arg.getIntegralType()); break; case TemplateArgument::Template: AddTemplateName(Arg.getAsTemplateOrTemplatePattern()); break; case TemplateArgument::TemplateExpansion: AddTemplateName(Arg.getAsTemplateOrTemplatePattern()); if (Optional NumExpansions = Arg.getNumTemplateExpansions()) Record->push_back(*NumExpansions + 1); else Record->push_back(0); break; case 
TemplateArgument::Expression: AddStmt(Arg.getAsExpr()); break; case TemplateArgument::Pack: Record->push_back(Arg.pack_size()); for (const auto &P : Arg.pack_elements()) AddTemplateArgument(P); break; } } void ASTRecordWriter::AddTemplateParameterList( const TemplateParameterList *TemplateParams) { assert(TemplateParams && "No TemplateParams!"); AddSourceLocation(TemplateParams->getTemplateLoc()); AddSourceLocation(TemplateParams->getLAngleLoc()); AddSourceLocation(TemplateParams->getRAngleLoc()); // TODO: Concepts Record->push_back(TemplateParams->size()); for (const auto &P : *TemplateParams) AddDeclRef(P); } /// \brief Emit a template argument list. void ASTRecordWriter::AddTemplateArgumentList( const TemplateArgumentList *TemplateArgs) { assert(TemplateArgs && "No TemplateArgs!"); Record->push_back(TemplateArgs->size()); for (int i = 0, e = TemplateArgs->size(); i != e; ++i) AddTemplateArgument(TemplateArgs->get(i)); } void ASTRecordWriter::AddASTTemplateArgumentListInfo( const ASTTemplateArgumentListInfo *ASTTemplArgList) { assert(ASTTemplArgList && "No ASTTemplArgList!"); AddSourceLocation(ASTTemplArgList->LAngleLoc); AddSourceLocation(ASTTemplArgList->RAngleLoc); Record->push_back(ASTTemplArgList->NumTemplateArgs); const TemplateArgumentLoc *TemplArgs = ASTTemplArgList->getTemplateArgs(); for (int i = 0, e = ASTTemplArgList->NumTemplateArgs; i != e; ++i) AddTemplateArgumentLoc(TemplArgs[i]); } void ASTRecordWriter::AddUnresolvedSet(const ASTUnresolvedSet &Set) { Record->push_back(Set.size()); for (ASTUnresolvedSet::const_iterator I = Set.begin(), E = Set.end(); I != E; ++I) { AddDeclRef(I.getDecl()); Record->push_back(I.getAccess()); } } // FIXME: Move this out of the main ASTRecordWriter interface. void ASTRecordWriter::AddCXXBaseSpecifier(const CXXBaseSpecifier &Base) { Record->push_back(Base.isVirtual()); Record->push_back(Base.isBaseOfClass()); Record->push_back(Base.getAccessSpecifierAsWritten()); Record->push_back(Base.getInheritConstructors()); AddTypeSourceInfo(Base.getTypeSourceInfo()); AddSourceRange(Base.getSourceRange()); AddSourceLocation(Base.isPackExpansion()? Base.getEllipsisLoc() : SourceLocation()); } static uint64_t EmitCXXBaseSpecifiers(ASTWriter &W, ArrayRef Bases) { ASTWriter::RecordData Record; ASTRecordWriter Writer(W, Record); Writer.push_back(Bases.size()); for (auto &Base : Bases) Writer.AddCXXBaseSpecifier(Base); return Writer.Emit(serialization::DECL_CXX_BASE_SPECIFIERS); } // FIXME: Move this out of the main ASTRecordWriter interface. 
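// --- Editorial sketch (not part of ASTWriter.cpp) ---------------------------
// AddTemplateArgumentList, AddUnresolvedSet and EmitCXXBaseSpecifiers above
// all use the same length-prefixed layout: push the element count first, then
// each element, so the reader knows how many times to loop before it starts.
// Stand-alone writer/reader pair for that layout (invented names; the reader
// assumes well-formed input):
#include <cstddef>
#include <cstdint>
#include <vector>

inline void writeCounted(std::vector<uint32_t> &Record,
                         const std::vector<uint32_t> &Elements) {
  Record.push_back(static_cast<uint32_t>(Elements.size())); // count first
  for (uint32_t E : Elements)
    Record.push_back(E);                                    // then the payload
}

inline std::vector<uint32_t> readCounted(const std::vector<uint32_t> &Record,
                                         std::size_t &Cursor) {
  uint32_t Count = Record[Cursor++];
  std::vector<uint32_t> Elements(Record.begin() + Cursor,
                                 Record.begin() + Cursor + Count);
  Cursor += Count;
  return Elements;
}
// ---------------------------------------------------------------------------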
void ASTRecordWriter::AddCXXBaseSpecifiers(ArrayRef Bases) { AddOffset(EmitCXXBaseSpecifiers(*Writer, Bases)); } static uint64_t EmitCXXCtorInitializers(ASTWriter &W, ArrayRef CtorInits) { ASTWriter::RecordData Record; ASTRecordWriter Writer(W, Record); Writer.push_back(CtorInits.size()); for (auto *Init : CtorInits) { if (Init->isBaseInitializer()) { Writer.push_back(CTOR_INITIALIZER_BASE); Writer.AddTypeSourceInfo(Init->getTypeSourceInfo()); Writer.push_back(Init->isBaseVirtual()); } else if (Init->isDelegatingInitializer()) { Writer.push_back(CTOR_INITIALIZER_DELEGATING); Writer.AddTypeSourceInfo(Init->getTypeSourceInfo()); } else if (Init->isMemberInitializer()){ Writer.push_back(CTOR_INITIALIZER_MEMBER); Writer.AddDeclRef(Init->getMember()); } else { Writer.push_back(CTOR_INITIALIZER_INDIRECT_MEMBER); Writer.AddDeclRef(Init->getIndirectMember()); } Writer.AddSourceLocation(Init->getMemberLocation()); Writer.AddStmt(Init->getInit()); Writer.AddSourceLocation(Init->getLParenLoc()); Writer.AddSourceLocation(Init->getRParenLoc()); Writer.push_back(Init->isWritten()); if (Init->isWritten()) Writer.push_back(Init->getSourceOrder()); } return Writer.Emit(serialization::DECL_CXX_CTOR_INITIALIZERS); } // FIXME: Move this out of the main ASTRecordWriter interface. void ASTRecordWriter::AddCXXCtorInitializers( ArrayRef CtorInits) { AddOffset(EmitCXXCtorInitializers(*Writer, CtorInits)); } void ASTRecordWriter::AddCXXDefinitionData(const CXXRecordDecl *D) { auto &Data = D->data(); Record->push_back(Data.IsLambda); Record->push_back(Data.UserDeclaredConstructor); Record->push_back(Data.UserDeclaredSpecialMembers); Record->push_back(Data.Aggregate); Record->push_back(Data.PlainOldData); Record->push_back(Data.Empty); Record->push_back(Data.Polymorphic); Record->push_back(Data.Abstract); Record->push_back(Data.IsStandardLayout); Record->push_back(Data.HasNoNonEmptyBases); Record->push_back(Data.HasPrivateFields); Record->push_back(Data.HasProtectedFields); Record->push_back(Data.HasPublicFields); Record->push_back(Data.HasMutableFields); Record->push_back(Data.HasVariantMembers); Record->push_back(Data.HasOnlyCMembers); Record->push_back(Data.HasInClassInitializer); Record->push_back(Data.HasUninitializedReferenceMember); Record->push_back(Data.HasUninitializedFields); Record->push_back(Data.HasInheritedConstructor); Record->push_back(Data.HasInheritedAssignment); + Record->push_back(Data.NeedOverloadResolutionForCopyConstructor); Record->push_back(Data.NeedOverloadResolutionForMoveConstructor); Record->push_back(Data.NeedOverloadResolutionForMoveAssignment); Record->push_back(Data.NeedOverloadResolutionForDestructor); + Record->push_back(Data.DefaultedCopyConstructorIsDeleted); Record->push_back(Data.DefaultedMoveConstructorIsDeleted); Record->push_back(Data.DefaultedMoveAssignmentIsDeleted); Record->push_back(Data.DefaultedDestructorIsDeleted); Record->push_back(Data.HasTrivialSpecialMembers); Record->push_back(Data.DeclaredNonTrivialSpecialMembers); Record->push_back(Data.HasIrrelevantDestructor); Record->push_back(Data.HasConstexprNonCopyMoveConstructor); Record->push_back(Data.HasDefaultedDefaultConstructor); + Record->push_back(Data.CanPassInRegisters); Record->push_back(Data.DefaultedDefaultConstructorIsConstexpr); Record->push_back(Data.HasConstexprDefaultConstructor); Record->push_back(Data.HasNonLiteralTypeFieldsOrBases); Record->push_back(Data.ComputedVisibleConversions); Record->push_back(Data.UserProvidedDefaultConstructor); Record->push_back(Data.DeclaredSpecialMembers); 
Record->push_back(Data.ImplicitCopyConstructorCanHaveConstParamForVBase); Record->push_back(Data.ImplicitCopyConstructorCanHaveConstParamForNonVBase); Record->push_back(Data.ImplicitCopyAssignmentHasConstParam); Record->push_back(Data.HasDeclaredCopyConstructorWithConstParam); Record->push_back(Data.HasDeclaredCopyAssignmentWithConstParam); // getODRHash will compute the ODRHash if it has not been previously computed. Record->push_back(D->getODRHash()); bool ModulesDebugInfo = Writer->Context->getLangOpts().ModulesDebugInfo && Writer->WritingModule && !D->isDependentType(); Record->push_back(ModulesDebugInfo); if (ModulesDebugInfo) Writer->ModularCodegenDecls.push_back(Writer->GetDeclRef(D)); // IsLambda bit is already saved. Record->push_back(Data.NumBases); if (Data.NumBases > 0) AddCXXBaseSpecifiers(Data.bases()); // FIXME: Make VBases lazily computed when needed to avoid storing them. Record->push_back(Data.NumVBases); if (Data.NumVBases > 0) AddCXXBaseSpecifiers(Data.vbases()); AddUnresolvedSet(Data.Conversions.get(*Writer->Context)); AddUnresolvedSet(Data.VisibleConversions.get(*Writer->Context)); // Data.Definition is the owning decl, no need to write it. AddDeclRef(D->getFirstFriend()); // Add lambda-specific data. if (Data.IsLambda) { auto &Lambda = D->getLambdaData(); Record->push_back(Lambda.Dependent); Record->push_back(Lambda.IsGenericLambda); Record->push_back(Lambda.CaptureDefault); Record->push_back(Lambda.NumCaptures); Record->push_back(Lambda.NumExplicitCaptures); Record->push_back(Lambda.ManglingNumber); AddDeclRef(D->getLambdaContextDecl()); AddTypeSourceInfo(Lambda.MethodTyInfo); for (unsigned I = 0, N = Lambda.NumCaptures; I != N; ++I) { const LambdaCapture &Capture = Lambda.Captures[I]; AddSourceLocation(Capture.getLocation()); Record->push_back(Capture.isImplicit()); Record->push_back(Capture.getCaptureKind()); switch (Capture.getCaptureKind()) { case LCK_StarThis: case LCK_This: case LCK_VLAType: break; case LCK_ByCopy: case LCK_ByRef: VarDecl *Var = Capture.capturesVariable() ? Capture.getCapturedVar() : nullptr; AddDeclRef(Var); AddSourceLocation(Capture.isPackExpansion() ? Capture.getEllipsisLoc() : SourceLocation()); break; } } } } void ASTWriter::ReaderInitialized(ASTReader *Reader) { assert(Reader && "Cannot remove chain"); assert((!Chain || Chain == Reader) && "Cannot replace chain"); assert(FirstDeclID == NextDeclID && FirstTypeID == NextTypeID && FirstIdentID == NextIdentID && FirstMacroID == NextMacroID && FirstSubmoduleID == NextSubmoduleID && FirstSelectorID == NextSelectorID && "Setting chain after writing has started."); Chain = Reader; // Note, this will get called multiple times, once one the reader starts up // and again each time it's done reading a PCH or module. FirstDeclID = NUM_PREDEF_DECL_IDS + Chain->getTotalNumDecls(); FirstTypeID = NUM_PREDEF_TYPE_IDS + Chain->getTotalNumTypes(); FirstIdentID = NUM_PREDEF_IDENT_IDS + Chain->getTotalNumIdentifiers(); FirstMacroID = NUM_PREDEF_MACRO_IDS + Chain->getTotalNumMacros(); FirstSubmoduleID = NUM_PREDEF_SUBMODULE_IDS + Chain->getTotalNumSubmodules(); FirstSelectorID = NUM_PREDEF_SELECTOR_IDS + Chain->getTotalNumSelectors(); NextDeclID = FirstDeclID; NextTypeID = FirstTypeID; NextIdentID = FirstIdentID; NextMacroID = FirstMacroID; NextSelectorID = FirstSelectorID; NextSubmoduleID = FirstSubmoduleID; } void ASTWriter::IdentifierRead(IdentID ID, IdentifierInfo *II) { // Always keep the highest ID. See \p TypeRead() for more information. 
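// --- Editorial sketch (not part of ASTWriter.cpp) ---------------------------
// ReaderInitialized above makes the writer's counters start where the already
// loaded PCH/module chain left off: the first locally assigned ID in each
// space is "number of predefined IDs + total IDs owned by the chain".  The
// arithmetic in isolation; the struct name and the concrete numbers below are
// invented for illustration only.
#include <cassert>
#include <cstdint>

struct IDSpace {
  uint32_t NumPredefined;  // IDs reserved by the file format itself
  uint32_t NumInChain;     // IDs already owned by imported AST files
  uint32_t first() const { return NumPredefined + NumInChain; }
};

inline void idSpaceExample() {
  // e.g. 17 predefined decl IDs and 1200 decls imported from a PCH:
  IDSpace Decls{17, 1200};
  uint32_t NextDeclID = Decls.first(); // locally written decls start here
  assert(NextDeclID == 1217);
}
// ---------------------------------------------------------------------------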
IdentID &StoredID = IdentifierIDs[II]; if (ID > StoredID) StoredID = ID; } void ASTWriter::MacroRead(serialization::MacroID ID, MacroInfo *MI) { // Always keep the highest ID. See \p TypeRead() for more information. MacroID &StoredID = MacroIDs[MI]; if (ID > StoredID) StoredID = ID; } void ASTWriter::TypeRead(TypeIdx Idx, QualType T) { // Always take the highest-numbered type index. This copes with an interesting // case for chained AST writing where we schedule writing the type and then, // later, deserialize the type from another AST. In this case, we want to // keep the higher-numbered entry so that we can properly write it out to // the AST file. TypeIdx &StoredIdx = TypeIdxs[T]; if (Idx.getIndex() >= StoredIdx.getIndex()) StoredIdx = Idx; } void ASTWriter::SelectorRead(SelectorID ID, Selector S) { // Always keep the highest ID. See \p TypeRead() for more information. SelectorID &StoredID = SelectorIDs[S]; if (ID > StoredID) StoredID = ID; } void ASTWriter::MacroDefinitionRead(serialization::PreprocessedEntityID ID, MacroDefinitionRecord *MD) { assert(MacroDefinitions.find(MD) == MacroDefinitions.end()); MacroDefinitions[MD] = ID; } void ASTWriter::ModuleRead(serialization::SubmoduleID ID, Module *Mod) { assert(SubmoduleIDs.find(Mod) == SubmoduleIDs.end()); SubmoduleIDs[Mod] = ID; } void ASTWriter::CompletedTagDefinition(const TagDecl *D) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(D->isCompleteDefinition()); assert(!WritingAST && "Already writing the AST!"); if (auto *RD = dyn_cast(D)) { // We are interested when a PCH decl is modified. if (RD->isFromASTFile()) { // A forward reference was mutated into a definition. Rewrite it. // FIXME: This happens during template instantiation, should we // have created a new definition decl instead ? assert(isTemplateInstantiation(RD->getTemplateSpecializationKind()) && "completed a tag from another module but not by instantiation?"); DeclUpdates[RD].push_back( DeclUpdate(UPD_CXX_INSTANTIATED_CLASS_DEFINITION)); } } } static bool isImportedDeclContext(ASTReader *Chain, const Decl *D) { if (D->isFromASTFile()) return true; // The predefined __va_list_tag struct is imported if we imported any decls. // FIXME: This is a gross hack. return D == D->getASTContext().getVaListTagDecl(); } void ASTWriter::AddedVisibleDecl(const DeclContext *DC, const Decl *D) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(DC->isLookupContext() && "Should not add lookup results to non-lookup contexts!"); // TU is handled elsewhere. if (isa(DC)) return; // Namespaces are handled elsewhere, except for template instantiations of // FunctionTemplateDecls in namespaces. We are interested in cases where the // local instantiations are added to an imported context. Only happens when // adding ADL lookup candidates, for example templated friends. if (isa(DC) && D->getFriendObjectKind() == Decl::FOK_None && !isa(D)) return; // We're only interested in cases where a local declaration is added to an // imported context. if (D->isFromASTFile() || !isImportedDeclContext(Chain, cast(DC))) return; assert(DC == DC->getPrimaryContext() && "added to non-primary context"); assert(!getDefinitiveDeclContext(DC) && "DeclContext not definitive!"); assert(!WritingAST && "Already writing the AST!"); if (UpdatedDeclContexts.insert(DC) && !cast(DC)->isFromASTFile()) { // We're adding a visible declaration to a predefined decl context. Ensure // that we write out all of its lookup results so we don't get a nasty // surprise when we try to emit its lookup table. 
for (auto *Child : DC->decls()) DeclsToEmitEvenIfUnreferenced.push_back(Child); } DeclsToEmitEvenIfUnreferenced.push_back(D); } void ASTWriter::AddedCXXImplicitMember(const CXXRecordDecl *RD, const Decl *D) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(D->isImplicit()); // We're only interested in cases where a local declaration is added to an // imported context. if (D->isFromASTFile() || !isImportedDeclContext(Chain, RD)) return; if (!isa(D)) return; // A decl coming from PCH was modified. assert(RD->isCompleteDefinition()); assert(!WritingAST && "Already writing the AST!"); DeclUpdates[RD].push_back(DeclUpdate(UPD_CXX_ADDED_IMPLICIT_MEMBER, D)); } void ASTWriter::ResolvedExceptionSpec(const FunctionDecl *FD) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!DoneWritingDeclsAndTypes && "Already done writing updates!"); if (!Chain) return; Chain->forEachImportedKeyDecl(FD, [&](const Decl *D) { // If we don't already know the exception specification for this redecl // chain, add an update record for it. if (isUnresolvedExceptionSpec(cast(D) ->getType() ->castAs() ->getExceptionSpecType())) DeclUpdates[D].push_back(UPD_CXX_RESOLVED_EXCEPTION_SPEC); }); } void ASTWriter::DeducedReturnType(const FunctionDecl *FD, QualType ReturnType) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); if (!Chain) return; Chain->forEachImportedKeyDecl(FD, [&](const Decl *D) { DeclUpdates[D].push_back( DeclUpdate(UPD_CXX_DEDUCED_RETURN_TYPE, ReturnType)); }); } void ASTWriter::ResolvedOperatorDelete(const CXXDestructorDecl *DD, const FunctionDecl *Delete) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); assert(Delete && "Not given an operator delete"); if (!Chain) return; Chain->forEachImportedKeyDecl(DD, [&](const Decl *D) { DeclUpdates[D].push_back(DeclUpdate(UPD_CXX_RESOLVED_DTOR_DELETE, Delete)); }); } void ASTWriter::CompletedImplicitDefinition(const FunctionDecl *D) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); if (!D->isFromASTFile()) return; // Declaration not imported from PCH. // Implicit function decl from a PCH was defined. DeclUpdates[D].push_back(DeclUpdate(UPD_CXX_ADDED_FUNCTION_DEFINITION)); } void ASTWriter::FunctionDefinitionInstantiated(const FunctionDecl *D) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); if (!D->isFromASTFile()) return; DeclUpdates[D].push_back(DeclUpdate(UPD_CXX_ADDED_FUNCTION_DEFINITION)); } void ASTWriter::StaticDataMemberInstantiated(const VarDecl *D) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); if (!D->isFromASTFile()) return; // Since the actual instantiation is delayed, this really means that we need // to update the instantiation location. 
DeclUpdates[D].push_back( DeclUpdate(UPD_CXX_INSTANTIATED_STATIC_DATA_MEMBER, D->getMemberSpecializationInfo()->getPointOfInstantiation())); } void ASTWriter::DefaultArgumentInstantiated(const ParmVarDecl *D) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); if (!D->isFromASTFile()) return; DeclUpdates[D].push_back( DeclUpdate(UPD_CXX_INSTANTIATED_DEFAULT_ARGUMENT, D)); } void ASTWriter::DefaultMemberInitializerInstantiated(const FieldDecl *D) { assert(!WritingAST && "Already writing the AST!"); if (!D->isFromASTFile()) return; DeclUpdates[D].push_back( DeclUpdate(UPD_CXX_INSTANTIATED_DEFAULT_MEMBER_INITIALIZER, D)); } void ASTWriter::AddedObjCCategoryToInterface(const ObjCCategoryDecl *CatD, const ObjCInterfaceDecl *IFD) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); if (!IFD->isFromASTFile()) return; // Declaration not imported from PCH. assert(IFD->getDefinition() && "Category on a class without a definition?"); ObjCClassesWithCategories.insert( const_cast(IFD->getDefinition())); } void ASTWriter::DeclarationMarkedUsed(const Decl *D) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); // If there is *any* declaration of the entity that's not from an AST file, // we can skip writing the update record. We make sure that isUsed() triggers // completion of the redeclaration chain of the entity. for (auto Prev = D->getMostRecentDecl(); Prev; Prev = Prev->getPreviousDecl()) if (IsLocalDecl(Prev)) return; DeclUpdates[D].push_back(DeclUpdate(UPD_DECL_MARKED_USED)); } void ASTWriter::DeclarationMarkedOpenMPThreadPrivate(const Decl *D) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); if (!D->isFromASTFile()) return; DeclUpdates[D].push_back(DeclUpdate(UPD_DECL_MARKED_OPENMP_THREADPRIVATE)); } void ASTWriter::DeclarationMarkedOpenMPDeclareTarget(const Decl *D, const Attr *Attr) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); if (!D->isFromASTFile()) return; DeclUpdates[D].push_back( DeclUpdate(UPD_DECL_MARKED_OPENMP_DECLARETARGET, Attr)); } void ASTWriter::RedefinedHiddenDefinition(const NamedDecl *D, Module *M) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); assert(D->isHidden() && "expected a hidden declaration"); DeclUpdates[D].push_back(DeclUpdate(UPD_DECL_EXPORTED, M)); } void ASTWriter::AddedAttributeToRecord(const Attr *Attr, const RecordDecl *Record) { if (Chain && Chain->isProcessingUpdateRecords()) return; assert(!WritingAST && "Already writing the AST!"); if (!Record->isFromASTFile()) return; DeclUpdates[Record].push_back(DeclUpdate(UPD_ADDED_ATTR_TO_RECORD, Attr)); } void ASTWriter::AddedCXXTemplateSpecialization( const ClassTemplateDecl *TD, const ClassTemplateSpecializationDecl *D) { assert(!WritingAST && "Already writing the AST!"); if (!TD->getFirstDecl()->isFromASTFile()) return; if (Chain && Chain->isProcessingUpdateRecords()) return; DeclsToEmitEvenIfUnreferenced.push_back(D); } void ASTWriter::AddedCXXTemplateSpecialization( const VarTemplateDecl *TD, const VarTemplateSpecializationDecl *D) { assert(!WritingAST && "Already writing the AST!"); if (!TD->getFirstDecl()->isFromASTFile()) return; if (Chain && Chain->isProcessingUpdateRecords()) return; DeclsToEmitEvenIfUnreferenced.push_back(D); } void 
ASTWriter::AddedCXXTemplateSpecialization(const FunctionTemplateDecl *TD,
                                          const FunctionDecl *D) {
  assert(!WritingAST && "Already writing the AST!");
  if (!TD->getFirstDecl()->isFromASTFile())
    return;
  if (Chain && Chain->isProcessingUpdateRecords())
    return;

  DeclsToEmitEvenIfUnreferenced.push_back(D);
}
diff --git a/lib/StaticAnalyzer/Core/RegionStore.cpp b/lib/StaticAnalyzer/Core/RegionStore.cpp
index 28f78fa3ff5e..11902f66df91 100644
--- a/lib/StaticAnalyzer/Core/RegionStore.cpp
+++ b/lib/StaticAnalyzer/Core/RegionStore.cpp
@@ -1,2482 +1,2495 @@
//== RegionStore.cpp - Field-sensitive store model --------------*- C++ -*--==//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file defines a basic region store model. In this model, we do have field
// sensitivity. But we assume nothing about the heap shape. So recursive data
// structures are largely ignored. Basically we do 1-limiting analysis.
// Parameter pointers are assumed with no aliasing. Pointee objects of
// parameters are created lazily.
//
//===----------------------------------------------------------------------===//
#include "clang/AST/Attr.h"
#include "clang/AST/CharUnits.h"
#include "clang/Analysis/Analyses/LiveVariables.h"
#include "clang/Analysis/AnalysisContext.h"
#include "clang/Basic/TargetInfo.h"
#include "clang/StaticAnalyzer/Core/PathSensitive/AnalysisManager.h"
#include "clang/StaticAnalyzer/Core/PathSensitive/CallEvent.h"
#include "clang/StaticAnalyzer/Core/PathSensitive/MemRegion.h"
#include "clang/StaticAnalyzer/Core/PathSensitive/ProgramState.h"
#include "clang/StaticAnalyzer/Core/PathSensitive/ProgramStateTrait.h"
#include "clang/StaticAnalyzer/Core/PathSensitive/SubEngine.h"
#include "llvm/ADT/ImmutableMap.h"
#include "llvm/ADT/Optional.h"
#include "llvm/Support/raw_ostream.h"
#include <utility>

using namespace clang;
using namespace ento;

//===----------------------------------------------------------------------===//
// Representation of binding keys.
//===----------------------------------------------------------------------===//

namespace {
class BindingKey {
public:
  enum Kind { Default = 0x0, Direct = 0x1 };
private:
  enum { Symbolic = 0x2 };

  llvm::PointerIntPair<const MemRegion *, 2> P;
  uint64_t Data;

  /// Create a key for a binding to region \p r, which has a symbolic offset
  /// from region \p Base.
  explicit BindingKey(const SubRegion *r, const SubRegion *Base, Kind k)
    : P(r, k | Symbolic), Data(reinterpret_cast<uintptr_t>(Base)) {
    assert(r && Base && "Must have known regions.");
    assert(getConcreteOffsetRegion() == Base && "Failed to store base region");
  }

  /// Create a key for a binding at \p offset from base region \p r.
explicit BindingKey(const MemRegion *r, uint64_t offset, Kind k) : P(r, k), Data(offset) { assert(r && "Must have known regions."); assert(getOffset() == offset && "Failed to store offset"); assert((r == r->getBaseRegion() || isa(r)) && "Not a base"); } public: bool isDirect() const { return P.getInt() & Direct; } bool hasSymbolicOffset() const { return P.getInt() & Symbolic; } const MemRegion *getRegion() const { return P.getPointer(); } uint64_t getOffset() const { assert(!hasSymbolicOffset()); return Data; } const SubRegion *getConcreteOffsetRegion() const { assert(hasSymbolicOffset()); return reinterpret_cast(static_cast(Data)); } const MemRegion *getBaseRegion() const { if (hasSymbolicOffset()) return getConcreteOffsetRegion()->getBaseRegion(); return getRegion()->getBaseRegion(); } void Profile(llvm::FoldingSetNodeID& ID) const { ID.AddPointer(P.getOpaqueValue()); ID.AddInteger(Data); } static BindingKey Make(const MemRegion *R, Kind k); bool operator<(const BindingKey &X) const { if (P.getOpaqueValue() < X.P.getOpaqueValue()) return true; if (P.getOpaqueValue() > X.P.getOpaqueValue()) return false; return Data < X.Data; } bool operator==(const BindingKey &X) const { return P.getOpaqueValue() == X.P.getOpaqueValue() && Data == X.Data; } void dump() const; }; } // end anonymous namespace BindingKey BindingKey::Make(const MemRegion *R, Kind k) { const RegionOffset &RO = R->getAsOffset(); if (RO.hasSymbolicOffset()) return BindingKey(cast(R), cast(RO.getRegion()), k); return BindingKey(RO.getRegion(), RO.getOffset(), k); } namespace llvm { static inline raw_ostream &operator<<(raw_ostream &os, BindingKey K) { os << '(' << K.getRegion(); if (!K.hasSymbolicOffset()) os << ',' << K.getOffset(); os << ',' << (K.isDirect() ? "direct" : "default") << ')'; return os; } template struct isPodLike; template <> struct isPodLike { static const bool value = true; }; } // end llvm namespace LLVM_DUMP_METHOD void BindingKey::dump() const { llvm::errs() << *this; } //===----------------------------------------------------------------------===// // Actual Store type. 
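// --- Editorial sketch (not part of RegionStore.cpp) -------------------------
// BindingKey above packs its two tag bits (Direct and Symbolic) into the low
// bits of the region pointer via llvm::PointerIntPair, and overloads the
// 64-bit Data field: a concrete byte offset for ordinary keys, or the base
// region pointer when the offset is symbolic.  A stripped-down stand-alone
// equivalent of that packing; TagPtr is an invented name and it assumes the
// pointee is at least 4-byte aligned.
#include <cassert>
#include <cstdint>

class TagPtr {
  uintptr_t Bits; // pointer bits OR'ed with two low tag bits

public:
  TagPtr(const void *P, unsigned Tag)
      : Bits(reinterpret_cast<uintptr_t>(P) | Tag) {
    assert((reinterpret_cast<uintptr_t>(P) & 0x3) == 0 && "needs alignment");
    assert(Tag <= 0x3 && "only two tag bits available");
  }
  const void *pointer() const {
    return reinterpret_cast<const void *>(Bits & ~uintptr_t(0x3));
  }
  unsigned tag() const { return unsigned(Bits & 0x3); }
};
// Mirroring BindingKey: tag() & 0x1 means "direct binding"; tag() & 0x2 means
// the second field holds a base-region pointer instead of a byte offset.
// ---------------------------------------------------------------------------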
//===----------------------------------------------------------------------===// typedef llvm::ImmutableMap ClusterBindings; typedef llvm::ImmutableMapRef ClusterBindingsRef; typedef std::pair BindingPair; typedef llvm::ImmutableMap RegionBindings; namespace { class RegionBindingsRef : public llvm::ImmutableMapRef { ClusterBindings::Factory *CBFactory; public: typedef llvm::ImmutableMapRef ParentTy; RegionBindingsRef(ClusterBindings::Factory &CBFactory, const RegionBindings::TreeTy *T, RegionBindings::TreeTy::Factory *F) : llvm::ImmutableMapRef(T, F), CBFactory(&CBFactory) {} RegionBindingsRef(const ParentTy &P, ClusterBindings::Factory &CBFactory) : llvm::ImmutableMapRef(P), CBFactory(&CBFactory) {} RegionBindingsRef add(key_type_ref K, data_type_ref D) const { return RegionBindingsRef(static_cast(this)->add(K, D), *CBFactory); } RegionBindingsRef remove(key_type_ref K) const { return RegionBindingsRef(static_cast(this)->remove(K), *CBFactory); } RegionBindingsRef addBinding(BindingKey K, SVal V) const; RegionBindingsRef addBinding(const MemRegion *R, BindingKey::Kind k, SVal V) const; const SVal *lookup(BindingKey K) const; const SVal *lookup(const MemRegion *R, BindingKey::Kind k) const; using llvm::ImmutableMapRef::lookup; RegionBindingsRef removeBinding(BindingKey K); RegionBindingsRef removeBinding(const MemRegion *R, BindingKey::Kind k); RegionBindingsRef removeBinding(const MemRegion *R) { return removeBinding(R, BindingKey::Direct). removeBinding(R, BindingKey::Default); } Optional getDirectBinding(const MemRegion *R) const; /// getDefaultBinding - Returns an SVal* representing an optional default /// binding associated with a region and its subregions. Optional getDefaultBinding(const MemRegion *R) const; /// Return the internal tree as a Store. Store asStore() const { return asImmutableMap().getRootWithoutRetain(); } void dump(raw_ostream &OS, const char *nl) const { for (iterator I = begin(), E = end(); I != E; ++I) { const ClusterBindings &Cluster = I.getData(); for (ClusterBindings::iterator CI = Cluster.begin(), CE = Cluster.end(); CI != CE; ++CI) { OS << ' ' << CI.getKey() << " : " << CI.getData() << nl; } OS << nl; } } LLVM_DUMP_METHOD void dump() const { dump(llvm::errs(), "\n"); } }; } // end anonymous namespace typedef const RegionBindingsRef& RegionBindingsConstRef; Optional RegionBindingsRef::getDirectBinding(const MemRegion *R) const { return Optional::create(lookup(R, BindingKey::Direct)); } Optional RegionBindingsRef::getDefaultBinding(const MemRegion *R) const { if (R->isBoundable()) if (const TypedValueRegion *TR = dyn_cast(R)) if (TR->getValueType()->isUnionType()) return UnknownVal(); return Optional::create(lookup(R, BindingKey::Default)); } RegionBindingsRef RegionBindingsRef::addBinding(BindingKey K, SVal V) const { const MemRegion *Base = K.getBaseRegion(); const ClusterBindings *ExistingCluster = lookup(Base); ClusterBindings Cluster = (ExistingCluster ? 
*ExistingCluster : CBFactory->getEmptyMap()); ClusterBindings NewCluster = CBFactory->add(Cluster, K, V); return add(Base, NewCluster); } RegionBindingsRef RegionBindingsRef::addBinding(const MemRegion *R, BindingKey::Kind k, SVal V) const { return addBinding(BindingKey::Make(R, k), V); } const SVal *RegionBindingsRef::lookup(BindingKey K) const { const ClusterBindings *Cluster = lookup(K.getBaseRegion()); if (!Cluster) return nullptr; return Cluster->lookup(K); } const SVal *RegionBindingsRef::lookup(const MemRegion *R, BindingKey::Kind k) const { return lookup(BindingKey::Make(R, k)); } RegionBindingsRef RegionBindingsRef::removeBinding(BindingKey K) { const MemRegion *Base = K.getBaseRegion(); const ClusterBindings *Cluster = lookup(Base); if (!Cluster) return *this; ClusterBindings NewCluster = CBFactory->remove(*Cluster, K); if (NewCluster.isEmpty()) return remove(Base); return add(Base, NewCluster); } RegionBindingsRef RegionBindingsRef::removeBinding(const MemRegion *R, BindingKey::Kind k){ return removeBinding(BindingKey::Make(R, k)); } //===----------------------------------------------------------------------===// // Fine-grained control of RegionStoreManager. //===----------------------------------------------------------------------===// namespace { struct minimal_features_tag {}; struct maximal_features_tag {}; class RegionStoreFeatures { bool SupportsFields; public: RegionStoreFeatures(minimal_features_tag) : SupportsFields(false) {} RegionStoreFeatures(maximal_features_tag) : SupportsFields(true) {} void enableFields(bool t) { SupportsFields = t; } bool supportsFields() const { return SupportsFields; } }; } //===----------------------------------------------------------------------===// // Main RegionStore logic. //===----------------------------------------------------------------------===// namespace { class invalidateRegionsWorker; class RegionStoreManager : public StoreManager { public: const RegionStoreFeatures Features; RegionBindings::Factory RBFactory; mutable ClusterBindings::Factory CBFactory; typedef std::vector SValListTy; private: typedef llvm::DenseMap LazyBindingsMapTy; LazyBindingsMapTy LazyBindingsMap; /// The largest number of fields a struct can have and still be /// considered "small". /// /// This is currently used to decide whether or not it is worth "forcing" a /// LazyCompoundVal on bind. /// /// This is controlled by 'region-store-small-struct-limit' option. /// To disable all small-struct-dependent behavior, set the option to "0". unsigned SmallStructLimit; /// \brief A helper used to populate the work list with the given set of /// regions. void populateWorkList(invalidateRegionsWorker &W, ArrayRef Values, InvalidatedRegions *TopLevelRegions); public: RegionStoreManager(ProgramStateManager& mgr, const RegionStoreFeatures &f) : StoreManager(mgr), Features(f), RBFactory(mgr.getAllocator()), CBFactory(mgr.getAllocator()), SmallStructLimit(0) { if (SubEngine *Eng = StateMgr.getOwningEngine()) { AnalyzerOptions &Options = Eng->getAnalysisManager().options; SmallStructLimit = Options.getOptionAsInteger("region-store-small-struct-limit", 2); } } /// setImplicitDefaultValue - Set the default binding for the provided /// MemRegion to the value implicitly defined for compound literals when /// the value is not specified. RegionBindingsRef setImplicitDefaultValue(RegionBindingsConstRef B, const MemRegion *R, QualType T); /// ArrayToPointer - Emulates the "decay" of an array to a pointer /// type. 
  /// 'Array' represents the lvalue of the array being decayed
  /// to a pointer, and the returned SVal represents the decayed
  /// version of that lvalue (i.e., a pointer to the first element of
  /// the array). This is called by ExprEngine when evaluating
  /// casts from arrays to pointers.
  SVal ArrayToPointer(Loc Array, QualType ElementTy) override;

  StoreRef getInitialStore(const LocationContext *InitLoc) override {
    return StoreRef(RBFactory.getEmptyMap().getRootWithoutRetain(), *this);
  }

  //===-------------------------------------------------------------------===//
  // Binding values to regions.
  //===-------------------------------------------------------------------===//
  RegionBindingsRef invalidateGlobalRegion(MemRegion::Kind K,
                                           const Expr *Ex,
                                           unsigned Count,
                                           const LocationContext *LCtx,
                                           RegionBindingsRef B,
                                           InvalidatedRegions *Invalidated);

  StoreRef invalidateRegions(Store store,
                             ArrayRef<SVal> Values,
                             const Expr *E, unsigned Count,
                             const LocationContext *LCtx,
                             const CallEvent *Call,
                             InvalidatedSymbols &IS,
                             RegionAndSymbolInvalidationTraits &ITraits,
                             InvalidatedRegions *Invalidated,
                             InvalidatedRegions *InvalidatedTopLevel) override;

  bool scanReachableSymbols(Store S, const MemRegion *R,
                            ScanReachableSymbols &Callbacks) override;

  RegionBindingsRef removeSubRegionBindings(RegionBindingsConstRef B,
                                            const SubRegion *R);

public: // Part of public interface to class.

  StoreRef Bind(Store store, Loc LV, SVal V) override {
    return StoreRef(bind(getRegionBindings(store), LV, V).asStore(), *this);
  }

  RegionBindingsRef bind(RegionBindingsConstRef B, Loc LV, SVal V);

  // BindDefault is only used to initialize a region with a default value.
  StoreRef BindDefault(Store store, const MemRegion *R, SVal V) override {
+    // FIXME: The offsets of empty bases can be tricky because of
+    // the so-called "empty base class optimization".
+    // If a base class has been optimized out
+    // we should not try to create a binding, otherwise we should.
+    // Unfortunately, at the moment ASTRecordLayout doesn't expose
+    // the actual sizes of the empty bases
+    // and trying to infer them from offsets/alignments
+    // seems to be error-prone and non-trivial because of the trailing padding.
+    // As a temporary mitigation we don't create bindings for empty bases.
+    if (R->getKind() == MemRegion::CXXBaseObjectRegionKind &&
+        cast<CXXBaseObjectRegion>(R)->getDecl()->isEmpty())
+      return StoreRef(store, *this);
+
    RegionBindingsRef B = getRegionBindings(store);
    assert(!B.lookup(R, BindingKey::Direct));

    BindingKey Key = BindingKey::Make(R, BindingKey::Default);
    if (B.lookup(Key)) {
      const SubRegion *SR = cast<SubRegion>(R);
      assert(SR->getAsOffset().getOffset() ==
             SR->getSuperRegion()->getAsOffset().getOffset() &&
             "A default value must come from a super-region");
      B = removeSubRegionBindings(B, SR);
    } else {
      B = B.addBinding(Key, V);
    }

    return StoreRef(B.asImmutableMap().getRootWithoutRetain(), *this);
  }

  /// Attempt to extract the fields of \p LCV and bind them to the struct region
  /// \p R.
  ///
  /// This path is used when it seems advantageous to "force" loading the values
  /// within a LazyCompoundVal to bind memberwise to the struct region, rather
  /// than using a Default binding at the base of the entire region. This is a
  /// heuristic attempting to avoid building long chains of LazyCompoundVals.
  ///
  /// \returns The updated store bindings, or \c None if binding non-lazily
  /// would be too expensive.
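  ///
  /// For example (illustrative): with the default
  /// 'region-store-small-struct-limit' of 2, copying a two-scalar-field
  /// struct such as 'struct Point { int x, y; };' is bound field-by-field,
  /// while larger or non-scalar aggregates keep the single lazy binding.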
Optional tryBindSmallStruct(RegionBindingsConstRef B, const TypedValueRegion *R, const RecordDecl *RD, nonloc::LazyCompoundVal LCV); /// BindStruct - Bind a compound value to a structure. RegionBindingsRef bindStruct(RegionBindingsConstRef B, const TypedValueRegion* R, SVal V); /// BindVector - Bind a compound value to a vector. RegionBindingsRef bindVector(RegionBindingsConstRef B, const TypedValueRegion* R, SVal V); RegionBindingsRef bindArray(RegionBindingsConstRef B, const TypedValueRegion* R, SVal V); /// Clears out all bindings in the given region and assigns a new value /// as a Default binding. RegionBindingsRef bindAggregate(RegionBindingsConstRef B, const TypedRegion *R, SVal DefaultVal); /// \brief Create a new store with the specified binding removed. /// \param ST the original store, that is the basis for the new store. /// \param L the location whose binding should be removed. StoreRef killBinding(Store ST, Loc L) override; void incrementReferenceCount(Store store) override { getRegionBindings(store).manualRetain(); } /// If the StoreManager supports it, decrement the reference count of /// the specified Store object. If the reference count hits 0, the memory /// associated with the object is recycled. void decrementReferenceCount(Store store) override { getRegionBindings(store).manualRelease(); } bool includedInBindings(Store store, const MemRegion *region) const override; /// \brief Return the value bound to specified location in a given state. /// /// The high level logic for this method is this: /// getBinding (L) /// if L has binding /// return L's binding /// else if L is in killset /// return unknown /// else /// if L is on stack or heap /// return undefined /// else /// return symbolic SVal getBinding(Store S, Loc L, QualType T) override { return getBinding(getRegionBindings(S), L, T); } Optional getDefaultBinding(Store S, const MemRegion *R) override { RegionBindingsRef B = getRegionBindings(S); // Default bindings are always applied over a base region so look up the // base region's default binding, otherwise the lookup will fail when R // is at an offset from R->getBaseRegion(). return B.getDefaultBinding(R->getBaseRegion()); } SVal getBinding(RegionBindingsConstRef B, Loc L, QualType T = QualType()); SVal getBindingForElement(RegionBindingsConstRef B, const ElementRegion *R); SVal getBindingForField(RegionBindingsConstRef B, const FieldRegion *R); SVal getBindingForObjCIvar(RegionBindingsConstRef B, const ObjCIvarRegion *R); SVal getBindingForVar(RegionBindingsConstRef B, const VarRegion *R); SVal getBindingForLazySymbol(const TypedValueRegion *R); SVal getBindingForFieldOrElementCommon(RegionBindingsConstRef B, const TypedValueRegion *R, QualType Ty); SVal getLazyBinding(const SubRegion *LazyBindingRegion, RegionBindingsRef LazyBinding); /// Get bindings for the values in a struct and return a CompoundVal, used /// when doing struct copy: /// struct s x, y; /// x = y; /// y's value is retrieved by this method. SVal getBindingForStruct(RegionBindingsConstRef B, const TypedValueRegion *R); SVal getBindingForArray(RegionBindingsConstRef B, const TypedValueRegion *R); NonLoc createLazyBinding(RegionBindingsConstRef B, const TypedValueRegion *R); /// Used to lazily generate derived symbols for bindings that are defined /// implicitly by default bindings in a super region. /// /// Note that callers may need to specially handle LazyCompoundVals, which /// are returned as is in case the caller needs to treat them differently. 
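  ///
  /// For example (illustrative): after a struct 's' has been invalidated,
  /// its cluster holds one conjured default symbol; a later read of 's.x'
  /// yields a symbol derived from that default symbol rather than an
  /// unrelated fresh symbol.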
Optional getBindingForDerivedDefaultValue(RegionBindingsConstRef B, const MemRegion *superR, const TypedValueRegion *R, QualType Ty); /// Get the state and region whose binding this region \p R corresponds to. /// /// If there is no lazy binding for \p R, the returned value will have a null /// \c second. Note that a null pointer can represents a valid Store. std::pair findLazyBinding(RegionBindingsConstRef B, const SubRegion *R, const SubRegion *originalRegion); /// Returns the cached set of interesting SVals contained within a lazy /// binding. /// /// The precise value of "interesting" is determined for the purposes of /// RegionStore's internal analysis. It must always contain all regions and /// symbols, but may omit constants and other kinds of SVal. const SValListTy &getInterestingValues(nonloc::LazyCompoundVal LCV); //===------------------------------------------------------------------===// // State pruning. //===------------------------------------------------------------------===// /// removeDeadBindings - Scans the RegionStore of 'state' for dead values. /// It returns a new Store with these values removed. StoreRef removeDeadBindings(Store store, const StackFrameContext *LCtx, SymbolReaper& SymReaper) override; //===------------------------------------------------------------------===// // Region "extents". //===------------------------------------------------------------------===// // FIXME: This method will soon be eliminated; see the note in Store.h. DefinedOrUnknownSVal getSizeInElements(ProgramStateRef state, const MemRegion* R, QualType EleTy) override; //===------------------------------------------------------------------===// // Utility methods. //===------------------------------------------------------------------===// RegionBindingsRef getRegionBindings(Store store) const { return RegionBindingsRef(CBFactory, static_cast(store), RBFactory.getTreeFactory()); } void print(Store store, raw_ostream &Out, const char* nl, const char *sep) override; void iterBindings(Store store, BindingsHandler& f) override { RegionBindingsRef B = getRegionBindings(store); for (RegionBindingsRef::iterator I = B.begin(), E = B.end(); I != E; ++I) { const ClusterBindings &Cluster = I.getData(); for (ClusterBindings::iterator CI = Cluster.begin(), CE = Cluster.end(); CI != CE; ++CI) { const BindingKey &K = CI.getKey(); if (!K.isDirect()) continue; if (const SubRegion *R = dyn_cast(K.getRegion())) { // FIXME: Possibly incorporate the offset? if (!f.HandleBinding(*this, store, R, CI.getData())) return; } } } } }; } // end anonymous namespace //===----------------------------------------------------------------------===// // RegionStore creation. //===----------------------------------------------------------------------===// std::unique_ptr ento::CreateRegionStoreManager(ProgramStateManager &StMgr) { RegionStoreFeatures F = maximal_features_tag(); return llvm::make_unique(StMgr, F); } std::unique_ptr ento::CreateFieldsOnlyRegionStoreManager(ProgramStateManager &StMgr) { RegionStoreFeatures F = minimal_features_tag(); F.enableFields(true); return llvm::make_unique(StMgr, F); } //===----------------------------------------------------------------------===// // Region Cluster analysis. //===----------------------------------------------------------------------===// namespace { /// Used to determine which global regions are automatically included in the /// initial worklist of a ClusterAnalysis. enum GlobalsFilterKind { /// Don't include any global regions. 
GFK_None, /// Only include system globals. GFK_SystemOnly, /// Include all global regions. GFK_All }; template class ClusterAnalysis { protected: typedef llvm::DenseMap ClusterMap; typedef const MemRegion * WorkListElement; typedef SmallVector WorkList; llvm::SmallPtrSet Visited; WorkList WL; RegionStoreManager &RM; ASTContext &Ctx; SValBuilder &svalBuilder; RegionBindingsRef B; protected: const ClusterBindings *getCluster(const MemRegion *R) { return B.lookup(R); } /// Returns true if all clusters in the given memspace should be initially /// included in the cluster analysis. Subclasses may provide their /// own implementation. bool includeEntireMemorySpace(const MemRegion *Base) { return false; } public: ClusterAnalysis(RegionStoreManager &rm, ProgramStateManager &StateMgr, RegionBindingsRef b) : RM(rm), Ctx(StateMgr.getContext()), svalBuilder(StateMgr.getSValBuilder()), B(std::move(b)) {} RegionBindingsRef getRegionBindings() const { return B; } bool isVisited(const MemRegion *R) { return Visited.count(getCluster(R)); } void GenerateClusters() { // Scan the entire set of bindings and record the region clusters. for (RegionBindingsRef::iterator RI = B.begin(), RE = B.end(); RI != RE; ++RI){ const MemRegion *Base = RI.getKey(); const ClusterBindings &Cluster = RI.getData(); assert(!Cluster.isEmpty() && "Empty clusters should be removed"); static_cast(this)->VisitAddedToCluster(Base, Cluster); // If the base's memspace should be entirely invalidated, add the cluster // to the workspace up front. if (static_cast(this)->includeEntireMemorySpace(Base)) AddToWorkList(WorkListElement(Base), &Cluster); } } bool AddToWorkList(WorkListElement E, const ClusterBindings *C) { if (C && !Visited.insert(C).second) return false; WL.push_back(E); return true; } bool AddToWorkList(const MemRegion *R) { return static_cast(this)->AddToWorkList(R); } void RunWorkList() { while (!WL.empty()) { WorkListElement E = WL.pop_back_val(); const MemRegion *BaseR = E; static_cast(this)->VisitCluster(BaseR, getCluster(BaseR)); } } void VisitAddedToCluster(const MemRegion *baseR, const ClusterBindings &C) {} void VisitCluster(const MemRegion *baseR, const ClusterBindings *C) {} void VisitCluster(const MemRegion *BaseR, const ClusterBindings *C, bool Flag) { static_cast(this)->VisitCluster(BaseR, C); } }; } //===----------------------------------------------------------------------===// // Binding invalidation. 
//===----------------------------------------------------------------------===// bool RegionStoreManager::scanReachableSymbols(Store S, const MemRegion *R, ScanReachableSymbols &Callbacks) { assert(R == R->getBaseRegion() && "Should only be called for base regions"); RegionBindingsRef B = getRegionBindings(S); const ClusterBindings *Cluster = B.lookup(R); if (!Cluster) return true; for (ClusterBindings::iterator RI = Cluster->begin(), RE = Cluster->end(); RI != RE; ++RI) { if (!Callbacks.scan(RI.getData())) return false; } return true; } static inline bool isUnionField(const FieldRegion *FR) { return FR->getDecl()->getParent()->isUnion(); } typedef SmallVector FieldVector; static void getSymbolicOffsetFields(BindingKey K, FieldVector &Fields) { assert(K.hasSymbolicOffset() && "Not implemented for concrete offset keys"); const MemRegion *Base = K.getConcreteOffsetRegion(); const MemRegion *R = K.getRegion(); while (R != Base) { if (const FieldRegion *FR = dyn_cast(R)) if (!isUnionField(FR)) Fields.push_back(FR->getDecl()); R = cast(R)->getSuperRegion(); } } static bool isCompatibleWithFields(BindingKey K, const FieldVector &Fields) { assert(K.hasSymbolicOffset() && "Not implemented for concrete offset keys"); if (Fields.empty()) return true; FieldVector FieldsInBindingKey; getSymbolicOffsetFields(K, FieldsInBindingKey); ptrdiff_t Delta = FieldsInBindingKey.size() - Fields.size(); if (Delta >= 0) return std::equal(FieldsInBindingKey.begin() + Delta, FieldsInBindingKey.end(), Fields.begin()); else return std::equal(FieldsInBindingKey.begin(), FieldsInBindingKey.end(), Fields.begin() - Delta); } /// Collects all bindings in \p Cluster that may refer to bindings within /// \p Top. /// /// Each binding is a pair whose \c first is the key (a BindingKey) and whose /// \c second is the value (an SVal). /// /// The \p IncludeAllDefaultBindings parameter specifies whether to include /// default bindings that may extend beyond \p Top itself, e.g. if \p Top is /// an aggregate within a larger aggregate with a default binding. static void collectSubRegionBindings(SmallVectorImpl &Bindings, SValBuilder &SVB, const ClusterBindings &Cluster, const SubRegion *Top, BindingKey TopKey, bool IncludeAllDefaultBindings) { FieldVector FieldsInSymbolicSubregions; if (TopKey.hasSymbolicOffset()) { getSymbolicOffsetFields(TopKey, FieldsInSymbolicSubregions); Top = cast(TopKey.getConcreteOffsetRegion()); TopKey = BindingKey::Make(Top, BindingKey::Default); } // Find the length (in bits) of the region being invalidated. uint64_t Length = UINT64_MAX; SVal Extent = Top->getExtent(SVB); if (Optional ExtentCI = Extent.getAs()) { const llvm::APSInt &ExtentInt = ExtentCI->getValue(); assert(ExtentInt.isNonNegative() || ExtentInt.isUnsigned()); // Extents are in bytes but region offsets are in bits. Be careful! Length = ExtentInt.getLimitedValue() * SVB.getContext().getCharWidth(); } else if (const FieldRegion *FR = dyn_cast(Top)) { if (FR->getDecl()->isBitField()) Length = FR->getDecl()->getBitWidthValue(SVB.getContext()); } for (ClusterBindings::iterator I = Cluster.begin(), E = Cluster.end(); I != E; ++I) { BindingKey NextKey = I.getKey(); if (NextKey.getRegion() == TopKey.getRegion()) { // FIXME: This doesn't catch the case where we're really invalidating a // region with a symbolic offset. Example: // R: points[i].y // Next: points[0].x if (NextKey.getOffset() > TopKey.getOffset() && NextKey.getOffset() - TopKey.getOffset() < Length) { // Case 1: The next binding is inside the region we're invalidating. // Include it. 
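        // (Illustrative) here Top covers bits [O, O + Length) and NextKey
        // starts strictly inside that range, e.g. a field of a nested
        // aggregate that lies within the region being invalidated.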
Bindings.push_back(*I); } else if (NextKey.getOffset() == TopKey.getOffset()) { // Case 2: The next binding is at the same offset as the region we're // invalidating. In this case, we need to leave default bindings alone, // since they may be providing a default value for a regions beyond what // we're invalidating. // FIXME: This is probably incorrect; consider invalidating an outer // struct whose first field is bound to a LazyCompoundVal. if (IncludeAllDefaultBindings || NextKey.isDirect()) Bindings.push_back(*I); } } else if (NextKey.hasSymbolicOffset()) { const MemRegion *Base = NextKey.getConcreteOffsetRegion(); if (Top->isSubRegionOf(Base)) { // Case 3: The next key is symbolic and we just changed something within // its concrete region. We don't know if the binding is still valid, so // we'll be conservative and include it. if (IncludeAllDefaultBindings || NextKey.isDirect()) if (isCompatibleWithFields(NextKey, FieldsInSymbolicSubregions)) Bindings.push_back(*I); } else if (const SubRegion *BaseSR = dyn_cast(Base)) { // Case 4: The next key is symbolic, but we changed a known // super-region. In this case the binding is certainly included. if (Top == Base || BaseSR->isSubRegionOf(Top)) if (isCompatibleWithFields(NextKey, FieldsInSymbolicSubregions)) Bindings.push_back(*I); } } } } static void collectSubRegionBindings(SmallVectorImpl &Bindings, SValBuilder &SVB, const ClusterBindings &Cluster, const SubRegion *Top, bool IncludeAllDefaultBindings) { collectSubRegionBindings(Bindings, SVB, Cluster, Top, BindingKey::Make(Top, BindingKey::Default), IncludeAllDefaultBindings); } RegionBindingsRef RegionStoreManager::removeSubRegionBindings(RegionBindingsConstRef B, const SubRegion *Top) { BindingKey TopKey = BindingKey::Make(Top, BindingKey::Default); const MemRegion *ClusterHead = TopKey.getBaseRegion(); if (Top == ClusterHead) { // We can remove an entire cluster's bindings all in one go. return B.remove(Top); } const ClusterBindings *Cluster = B.lookup(ClusterHead); if (!Cluster) { // If we're invalidating a region with a symbolic offset, we need to make // sure we don't treat the base region as uninitialized anymore. if (TopKey.hasSymbolicOffset()) { const SubRegion *Concrete = TopKey.getConcreteOffsetRegion(); return B.addBinding(Concrete, BindingKey::Default, UnknownVal()); } return B; } SmallVector Bindings; collectSubRegionBindings(Bindings, svalBuilder, *Cluster, Top, TopKey, /*IncludeAllDefaultBindings=*/false); ClusterBindingsRef Result(*Cluster, CBFactory); for (SmallVectorImpl::const_iterator I = Bindings.begin(), E = Bindings.end(); I != E; ++I) Result = Result.remove(I->first); // If we're invalidating a region with a symbolic offset, we need to make sure // we don't treat the base region as uninitialized anymore. // FIXME: This isn't very precise; see the example in // collectSubRegionBindings. 
if (TopKey.hasSymbolicOffset()) { const SubRegion *Concrete = TopKey.getConcreteOffsetRegion(); Result = Result.add(BindingKey::Make(Concrete, BindingKey::Default), UnknownVal()); } if (Result.isEmpty()) return B.remove(ClusterHead); return B.add(ClusterHead, Result.asImmutableMap()); } namespace { class invalidateRegionsWorker : public ClusterAnalysis { const Expr *Ex; unsigned Count; const LocationContext *LCtx; InvalidatedSymbols &IS; RegionAndSymbolInvalidationTraits &ITraits; StoreManager::InvalidatedRegions *Regions; GlobalsFilterKind GlobalsFilter; public: invalidateRegionsWorker(RegionStoreManager &rm, ProgramStateManager &stateMgr, RegionBindingsRef b, const Expr *ex, unsigned count, const LocationContext *lctx, InvalidatedSymbols &is, RegionAndSymbolInvalidationTraits &ITraitsIn, StoreManager::InvalidatedRegions *r, GlobalsFilterKind GFK) : ClusterAnalysis(rm, stateMgr, b), Ex(ex), Count(count), LCtx(lctx), IS(is), ITraits(ITraitsIn), Regions(r), GlobalsFilter(GFK) {} void VisitCluster(const MemRegion *baseR, const ClusterBindings *C); void VisitBinding(SVal V); using ClusterAnalysis::AddToWorkList; bool AddToWorkList(const MemRegion *R); /// Returns true if all clusters in the memory space for \p Base should be /// be invalidated. bool includeEntireMemorySpace(const MemRegion *Base); /// Returns true if the memory space of the given region is one of the global /// regions specially included at the start of invalidation. bool isInitiallyIncludedGlobalRegion(const MemRegion *R); }; } bool invalidateRegionsWorker::AddToWorkList(const MemRegion *R) { bool doNotInvalidateSuperRegion = ITraits.hasTrait( R, RegionAndSymbolInvalidationTraits::TK_DoNotInvalidateSuperRegion); const MemRegion *BaseR = doNotInvalidateSuperRegion ? R : R->getBaseRegion(); return AddToWorkList(WorkListElement(BaseR), getCluster(BaseR)); } void invalidateRegionsWorker::VisitBinding(SVal V) { // A symbol? Mark it touched by the invalidation. if (SymbolRef Sym = V.getAsSymbol()) IS.insert(Sym); if (const MemRegion *R = V.getAsRegion()) { AddToWorkList(R); return; } // Is it a LazyCompoundVal? All references get invalidated as well. if (Optional LCS = V.getAs()) { const RegionStoreManager::SValListTy &Vals = RM.getInterestingValues(*LCS); for (RegionStoreManager::SValListTy::const_iterator I = Vals.begin(), E = Vals.end(); I != E; ++I) VisitBinding(*I); return; } } void invalidateRegionsWorker::VisitCluster(const MemRegion *baseR, const ClusterBindings *C) { bool PreserveRegionsContents = ITraits.hasTrait(baseR, RegionAndSymbolInvalidationTraits::TK_PreserveContents); if (C) { for (ClusterBindings::iterator I = C->begin(), E = C->end(); I != E; ++I) VisitBinding(I.getData()); // Invalidate regions contents. if (!PreserveRegionsContents) B = B.remove(baseR); } // BlockDataRegion? If so, invalidate captured variables that are passed // by reference. if (const BlockDataRegion *BR = dyn_cast(baseR)) { for (BlockDataRegion::referenced_vars_iterator BI = BR->referenced_vars_begin(), BE = BR->referenced_vars_end() ; BI != BE; ++BI) { const VarRegion *VR = BI.getCapturedRegion(); const VarDecl *VD = VR->getDecl(); if (VD->hasAttr() || !VD->hasLocalStorage()) { AddToWorkList(VR); } else if (Loc::isLocType(VR->getValueType())) { // Map the current bindings to a Store to retrieve the value // of the binding. If that binding itself is a region, we should // invalidate that region. This is because a block may capture // a pointer value, but the thing pointed by that pointer may // get invalidated. 
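        // (Illustrative) e.g. a block capturing 'int *p' by value cannot
        // change 'p' itself, but it may write through 'p', so the pointee
        // region is added to the worklist below.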
SVal V = RM.getBinding(B, loc::MemRegionVal(VR)); if (Optional L = V.getAs()) { if (const MemRegion *LR = L->getAsRegion()) AddToWorkList(LR); } } } return; } // Symbolic region? if (const SymbolicRegion *SR = dyn_cast(baseR)) IS.insert(SR->getSymbol()); // Nothing else should be done in the case when we preserve regions context. if (PreserveRegionsContents) return; // Otherwise, we have a normal data region. Record that we touched the region. if (Regions) Regions->push_back(baseR); if (isa(baseR) || isa(baseR)) { // Invalidate the region by setting its default value to // conjured symbol. The type of the symbol is irrelevant. DefinedOrUnknownSVal V = svalBuilder.conjureSymbolVal(baseR, Ex, LCtx, Ctx.IntTy, Count); B = B.addBinding(baseR, BindingKey::Default, V); return; } if (!baseR->isBoundable()) return; const TypedValueRegion *TR = cast(baseR); QualType T = TR->getValueType(); if (isInitiallyIncludedGlobalRegion(baseR)) { // If the region is a global and we are invalidating all globals, // erasing the entry is good enough. This causes all globals to be lazily // symbolicated from the same base symbol. return; } if (T->isStructureOrClassType()) { // Invalidate the region by setting its default value to // conjured symbol. The type of the symbol is irrelevant. DefinedOrUnknownSVal V = svalBuilder.conjureSymbolVal(baseR, Ex, LCtx, Ctx.IntTy, Count); B = B.addBinding(baseR, BindingKey::Default, V); return; } if (const ArrayType *AT = Ctx.getAsArrayType(T)) { bool doNotInvalidateSuperRegion = ITraits.hasTrait( baseR, RegionAndSymbolInvalidationTraits::TK_DoNotInvalidateSuperRegion); if (doNotInvalidateSuperRegion) { // We are not doing blank invalidation of the whole array region so we // have to manually invalidate each elements. Optional NumElements; // Compute lower and upper offsets for region within array. if (const ConstantArrayType *CAT = dyn_cast(AT)) NumElements = CAT->getSize().getZExtValue(); if (!NumElements) // We are not dealing with a constant size array goto conjure_default; QualType ElementTy = AT->getElementType(); uint64_t ElemSize = Ctx.getTypeSize(ElementTy); const RegionOffset &RO = baseR->getAsOffset(); const MemRegion *SuperR = baseR->getBaseRegion(); if (RO.hasSymbolicOffset()) { // If base region has a symbolic offset, // we revert to invalidating the super region. if (SuperR) AddToWorkList(SuperR); goto conjure_default; } uint64_t LowerOffset = RO.getOffset(); uint64_t UpperOffset = LowerOffset + *NumElements * ElemSize; bool UpperOverflow = UpperOffset < LowerOffset; // Invalidate regions which are within array boundaries, // or have a symbolic offset. if (!SuperR) goto conjure_default; const ClusterBindings *C = B.lookup(SuperR); if (!C) goto conjure_default; for (ClusterBindings::iterator I = C->begin(), E = C->end(); I != E; ++I) { const BindingKey &BK = I.getKey(); Optional ROffset = BK.hasSymbolicOffset() ? Optional() : BK.getOffset(); // Check offset is not symbolic and within array's boundaries. // Handles arrays of 0 elements and of 0-sized elements as well. if (!ROffset || ((*ROffset >= LowerOffset && *ROffset < UpperOffset) || (UpperOverflow && (*ROffset >= LowerOffset || *ROffset < UpperOffset)) || (LowerOffset == UpperOffset && *ROffset == LowerOffset))) { B = B.removeBinding(I.getKey()); // Bound symbolic regions need to be invalidated for dead symbol // detection. SVal V = I.getData(); const MemRegion *R = V.getAsRegion(); if (R && isa(R)) VisitBinding(V); } } } conjure_default: // Set the default value of the array to conjured symbol. 
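    // (Illustrative) a single default binding stands in for every element;
    // later reads of individual elements derive their values from this
    // conjured symbol rather than receiving independent bindings here.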
DefinedOrUnknownSVal V = svalBuilder.conjureSymbolVal(baseR, Ex, LCtx, AT->getElementType(), Count); B = B.addBinding(baseR, BindingKey::Default, V); return; } DefinedOrUnknownSVal V = svalBuilder.conjureSymbolVal(baseR, Ex, LCtx, T,Count); assert(SymbolManager::canSymbolicate(T) || V.isUnknown()); B = B.addBinding(baseR, BindingKey::Direct, V); } bool invalidateRegionsWorker::isInitiallyIncludedGlobalRegion( const MemRegion *R) { switch (GlobalsFilter) { case GFK_None: return false; case GFK_SystemOnly: return isa(R->getMemorySpace()); case GFK_All: return isa(R->getMemorySpace()); } llvm_unreachable("unknown globals filter"); } bool invalidateRegionsWorker::includeEntireMemorySpace(const MemRegion *Base) { if (isInitiallyIncludedGlobalRegion(Base)) return true; const MemSpaceRegion *MemSpace = Base->getMemorySpace(); return ITraits.hasTrait(MemSpace, RegionAndSymbolInvalidationTraits::TK_EntireMemSpace); } RegionBindingsRef RegionStoreManager::invalidateGlobalRegion(MemRegion::Kind K, const Expr *Ex, unsigned Count, const LocationContext *LCtx, RegionBindingsRef B, InvalidatedRegions *Invalidated) { // Bind the globals memory space to a new symbol that we will use to derive // the bindings for all globals. const GlobalsSpaceRegion *GS = MRMgr.getGlobalsRegion(K); SVal V = svalBuilder.conjureSymbolVal(/* SymbolTag = */ (const void*) GS, Ex, LCtx, /* type does not matter */ Ctx.IntTy, Count); B = B.removeBinding(GS) .addBinding(BindingKey::Make(GS, BindingKey::Default), V); // Even if there are no bindings in the global scope, we still need to // record that we touched it. if (Invalidated) Invalidated->push_back(GS); return B; } void RegionStoreManager::populateWorkList(invalidateRegionsWorker &W, ArrayRef Values, InvalidatedRegions *TopLevelRegions) { for (ArrayRef::iterator I = Values.begin(), E = Values.end(); I != E; ++I) { SVal V = *I; if (Optional LCS = V.getAs()) { const SValListTy &Vals = getInterestingValues(*LCS); for (SValListTy::const_iterator I = Vals.begin(), E = Vals.end(); I != E; ++I) { // Note: the last argument is false here because these are // non-top-level regions. if (const MemRegion *R = (*I).getAsRegion()) W.AddToWorkList(R); } continue; } if (const MemRegion *R = V.getAsRegion()) { if (TopLevelRegions) TopLevelRegions->push_back(R); W.AddToWorkList(R); continue; } } } StoreRef RegionStoreManager::invalidateRegions(Store store, ArrayRef Values, const Expr *Ex, unsigned Count, const LocationContext *LCtx, const CallEvent *Call, InvalidatedSymbols &IS, RegionAndSymbolInvalidationTraits &ITraits, InvalidatedRegions *TopLevelRegions, InvalidatedRegions *Invalidated) { GlobalsFilterKind GlobalsFilter; if (Call) { if (Call->isInSystemHeader()) GlobalsFilter = GFK_SystemOnly; else GlobalsFilter = GFK_All; } else { GlobalsFilter = GFK_None; } RegionBindingsRef B = getRegionBindings(store); invalidateRegionsWorker W(*this, StateMgr, B, Ex, Count, LCtx, IS, ITraits, Invalidated, GlobalsFilter); // Scan the bindings and generate the clusters. W.GenerateClusters(); // Add the regions to the worklist. populateWorkList(W, Values, TopLevelRegions); W.RunWorkList(); // Return the new bindings. B = W.getRegionBindings(); // For calls, determine which global regions should be invalidated and // invalidate them. (Note that function-static and immutable globals are never // invalidated by this.) // TODO: This could possibly be more precise with modules. 
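  // (Illustrative) e.g. a call declared in a system header invalidates only
  // the system-globals space (GFK_SystemOnly), while a call to an unknown
  // user function conservatively invalidates internal globals as well
  // (GFK_All), matching how GlobalsFilter was chosen above.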
switch (GlobalsFilter) { case GFK_All: B = invalidateGlobalRegion(MemRegion::GlobalInternalSpaceRegionKind, Ex, Count, LCtx, B, Invalidated); // FALLTHROUGH case GFK_SystemOnly: B = invalidateGlobalRegion(MemRegion::GlobalSystemSpaceRegionKind, Ex, Count, LCtx, B, Invalidated); // FALLTHROUGH case GFK_None: break; } return StoreRef(B.asStore(), *this); } //===----------------------------------------------------------------------===// // Extents for regions. //===----------------------------------------------------------------------===// DefinedOrUnknownSVal RegionStoreManager::getSizeInElements(ProgramStateRef state, const MemRegion *R, QualType EleTy) { SVal Size = cast(R)->getExtent(svalBuilder); const llvm::APSInt *SizeInt = svalBuilder.getKnownValue(state, Size); if (!SizeInt) return UnknownVal(); CharUnits RegionSize = CharUnits::fromQuantity(SizeInt->getSExtValue()); if (Ctx.getAsVariableArrayType(EleTy)) { // FIXME: We need to track extra state to properly record the size // of VLAs. Returning UnknownVal here, however, is a stop-gap so that // we don't have a divide-by-zero below. return UnknownVal(); } CharUnits EleSize = Ctx.getTypeSizeInChars(EleTy); // If a variable is reinterpreted as a type that doesn't fit into a larger // type evenly, round it down. // This is a signed value, since it's used in arithmetic with signed indices. return svalBuilder.makeIntVal(RegionSize / EleSize, false); } //===----------------------------------------------------------------------===// // Location and region casting. //===----------------------------------------------------------------------===// /// ArrayToPointer - Emulates the "decay" of an array to a pointer /// type. 'Array' represents the lvalue of the array being decayed /// to a pointer, and the returned SVal represents the decayed /// version of that lvalue (i.e., a pointer to the first element of /// the array). This is called by ExprEngine when evaluating casts /// from arrays to pointers. SVal RegionStoreManager::ArrayToPointer(Loc Array, QualType T) { if (Array.getAs()) return Array; if (!Array.getAs()) return UnknownVal(); const SubRegion *R = cast(Array.castAs().getRegion()); NonLoc ZeroIdx = svalBuilder.makeZeroArrayIndex(); return loc::MemRegionVal(MRMgr.getElementRegion(T, ZeroIdx, R, Ctx)); } //===----------------------------------------------------------------------===// // Loading values from regions. //===----------------------------------------------------------------------===// SVal RegionStoreManager::getBinding(RegionBindingsConstRef B, Loc L, QualType T) { assert(!L.getAs() && "location unknown"); assert(!L.getAs() && "location undefined"); // For access to concrete addresses, return UnknownVal. Checks // for null dereferences (and similar errors) are done by checkers, not // the Store. // FIXME: We can consider lazily symbolicating such memory, but we really // should defer this when we can reason easily about symbolicating arrays // of bytes. if (L.getAs()) { return UnknownVal(); } if (!L.getAs()) { return UnknownVal(); } const MemRegion *MR = L.castAs().getRegion(); if (isa(MR)) { return UnknownVal(); } if (isa(MR) || isa(MR) || isa(MR)) { if (T.isNull()) { if (const TypedRegion *TR = dyn_cast(MR)) T = TR->getLocationType(); else { const SymbolicRegion *SR = cast(MR); T = SR->getSymbol()->getType(); } } MR = GetElementZeroRegion(cast(MR), T); } // FIXME: Perhaps this method should just take a 'const MemRegion*' argument // instead of 'Loc', and have the other Loc cases handled at a higher level. 
const TypedValueRegion *R = cast(MR); QualType RTy = R->getValueType(); // FIXME: we do not yet model the parts of a complex type, so treat the // whole thing as "unknown". if (RTy->isAnyComplexType()) return UnknownVal(); // FIXME: We should eventually handle funny addressing. e.g.: // // int x = ...; // int *p = &x; // char *q = (char*) p; // char c = *q; // returns the first byte of 'x'. // // Such funny addressing will occur due to layering of regions. if (RTy->isStructureOrClassType()) return getBindingForStruct(B, R); // FIXME: Handle unions. if (RTy->isUnionType()) return createLazyBinding(B, R); if (RTy->isArrayType()) { if (RTy->isConstantArrayType()) return getBindingForArray(B, R); else return UnknownVal(); } // FIXME: handle Vector types. if (RTy->isVectorType()) return UnknownVal(); if (const FieldRegion* FR = dyn_cast(R)) return CastRetrievedVal(getBindingForField(B, FR), FR, T, false); if (const ElementRegion* ER = dyn_cast(R)) { // FIXME: Here we actually perform an implicit conversion from the loaded // value to the element type. Eventually we want to compose these values // more intelligently. For example, an 'element' can encompass multiple // bound regions (e.g., several bound bytes), or could be a subset of // a larger value. return CastRetrievedVal(getBindingForElement(B, ER), ER, T, false); } if (const ObjCIvarRegion *IVR = dyn_cast(R)) { // FIXME: Here we actually perform an implicit conversion from the loaded // value to the ivar type. What we should model is stores to ivars // that blow past the extent of the ivar. If the address of the ivar is // reinterpretted, it is possible we stored a different value that could // fit within the ivar. Either we need to cast these when storing them // or reinterpret them lazily (as we do here). return CastRetrievedVal(getBindingForObjCIvar(B, IVR), IVR, T, false); } if (const VarRegion *VR = dyn_cast(R)) { // FIXME: Here we actually perform an implicit conversion from the loaded // value to the variable type. What we should model is stores to variables // that blow past the extent of the variable. If the address of the // variable is reinterpretted, it is possible we stored a different value // that could fit within the variable. Either we need to cast these when // storing them or reinterpret them lazily (as we do here). return CastRetrievedVal(getBindingForVar(B, VR), VR, T, false); } const SVal *V = B.lookup(R, BindingKey::Direct); // Check if the region has a binding. if (V) return *V; // The location does not have a bound value. This means that it has // the value it had upon its creation and/or entry to the analyzed // function/method. These are either symbolic values or 'undefined'. if (R->hasStackNonParametersStorage()) { // All stack variables are considered to have undefined values // upon creation. All heap allocated blocks are considered to // have undefined values as well unless they are explicitly bound // to specific values. return UndefinedVal(); } // All other values are symbolic. return svalBuilder.getRegionValueSymbolVal(R); } static QualType getUnderlyingType(const SubRegion *R) { QualType RegionTy; if (const TypedValueRegion *TVR = dyn_cast(R)) RegionTy = TVR->getValueType(); if (const SymbolicRegion *SR = dyn_cast(R)) RegionTy = SR->getSymbol()->getType(); return RegionTy; } /// Checks to see if store \p B has a lazy binding for region \p R. /// /// If \p AllowSubregionBindings is \c false, a lazy binding will be rejected /// if there are additional bindings within \p R. 
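///
/// (Illustrative) A lazy binding typically comes from a whole-aggregate copy
/// such as 'S x = y;': rather than copying each field, the store records a
/// LazyCompoundVal that snapshots 'y' in the store it was copied from.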
/// /// Note that unlike RegionStoreManager::findLazyBinding, this will not search /// for lazy bindings for super-regions of \p R. static Optional getExistingLazyBinding(SValBuilder &SVB, RegionBindingsConstRef B, const SubRegion *R, bool AllowSubregionBindings) { Optional V = B.getDefaultBinding(R); if (!V) return None; Optional LCV = V->getAs(); if (!LCV) return None; // If the LCV is for a subregion, the types might not match, and we shouldn't // reuse the binding. QualType RegionTy = getUnderlyingType(R); if (!RegionTy.isNull() && !RegionTy->isVoidPointerType()) { QualType SourceRegionTy = LCV->getRegion()->getValueType(); if (!SVB.getContext().hasSameUnqualifiedType(RegionTy, SourceRegionTy)) return None; } if (!AllowSubregionBindings) { // If there are any other bindings within this region, we shouldn't reuse // the top-level binding. SmallVector Bindings; collectSubRegionBindings(Bindings, SVB, *B.lookup(R->getBaseRegion()), R, /*IncludeAllDefaultBindings=*/true); if (Bindings.size() > 1) return None; } return *LCV; } std::pair RegionStoreManager::findLazyBinding(RegionBindingsConstRef B, const SubRegion *R, const SubRegion *originalRegion) { if (originalRegion != R) { if (Optional V = getExistingLazyBinding(svalBuilder, B, R, true)) return std::make_pair(V->getStore(), V->getRegion()); } typedef std::pair StoreRegionPair; StoreRegionPair Result = StoreRegionPair(); if (const ElementRegion *ER = dyn_cast(R)) { Result = findLazyBinding(B, cast(ER->getSuperRegion()), originalRegion); if (Result.second) Result.second = MRMgr.getElementRegionWithSuper(ER, Result.second); } else if (const FieldRegion *FR = dyn_cast(R)) { Result = findLazyBinding(B, cast(FR->getSuperRegion()), originalRegion); if (Result.second) Result.second = MRMgr.getFieldRegionWithSuper(FR, Result.second); } else if (const CXXBaseObjectRegion *BaseReg = dyn_cast(R)) { // C++ base object region is another kind of region that we should blast // through to look for lazy compound value. It is like a field region. Result = findLazyBinding(B, cast(BaseReg->getSuperRegion()), originalRegion); if (Result.second) Result.second = MRMgr.getCXXBaseObjectRegionWithSuper(BaseReg, Result.second); } return Result; } SVal RegionStoreManager::getBindingForElement(RegionBindingsConstRef B, const ElementRegion* R) { // We do not currently model bindings of the CompoundLiteralregion. if (isa(R->getBaseRegion())) return UnknownVal(); // Check if the region has a binding. if (const Optional &V = B.getDirectBinding(R)) return *V; const MemRegion* superR = R->getSuperRegion(); // Check if the region is an element region of a string literal. if (const StringRegion *StrR=dyn_cast(superR)) { // FIXME: Handle loads from strings where the literal is treated as // an integer, e.g., *((unsigned int*)"hello") QualType T = Ctx.getAsArrayType(StrR->getValueType())->getElementType(); if (!Ctx.hasSameUnqualifiedType(T, R->getElementType())) return UnknownVal(); const StringLiteral *Str = StrR->getStringLiteral(); SVal Idx = R->getIndex(); if (Optional CI = Idx.getAs()) { int64_t i = CI->getValue().getSExtValue(); // Abort on string underrun. This can be possible by arbitrary // clients of getBindingForElement(). if (i < 0) return UndefinedVal(); int64_t length = Str->getLength(); // Technically, only i == length is guaranteed to be null. // However, such overflows should be caught before reaching this point; // the only time such an access would be made is if a string literal was // used to initialize a larger array. char c = (i >= length) ? 
'\0' : Str->getCodeUnit(i); return svalBuilder.makeIntVal(c, T); } } // Check for loads from a code text region. For such loads, just give up. if (isa(superR)) return UnknownVal(); // Handle the case where we are indexing into a larger scalar object. // For example, this handles: // int x = ... // char *y = &x; // return *y; // FIXME: This is a hack, and doesn't do anything really intelligent yet. const RegionRawOffset &O = R->getAsArrayOffset(); // If we cannot reason about the offset, return an unknown value. if (!O.getRegion()) return UnknownVal(); if (const TypedValueRegion *baseR = dyn_cast_or_null(O.getRegion())) { QualType baseT = baseR->getValueType(); if (baseT->isScalarType()) { QualType elemT = R->getElementType(); if (elemT->isScalarType()) { if (Ctx.getTypeSizeInChars(baseT) >= Ctx.getTypeSizeInChars(elemT)) { if (const Optional &V = B.getDirectBinding(superR)) { if (SymbolRef parentSym = V->getAsSymbol()) return svalBuilder.getDerivedRegionValueSymbolVal(parentSym, R); if (V->isUnknownOrUndef()) return *V; // Other cases: give up. We are indexing into a larger object // that has some value, but we don't know how to handle that yet. return UnknownVal(); } } } } } return getBindingForFieldOrElementCommon(B, R, R->getElementType()); } SVal RegionStoreManager::getBindingForField(RegionBindingsConstRef B, const FieldRegion* R) { // Check if the region has a binding. if (const Optional &V = B.getDirectBinding(R)) return *V; QualType Ty = R->getValueType(); return getBindingForFieldOrElementCommon(B, R, Ty); } Optional RegionStoreManager::getBindingForDerivedDefaultValue(RegionBindingsConstRef B, const MemRegion *superR, const TypedValueRegion *R, QualType Ty) { if (const Optional &D = B.getDefaultBinding(superR)) { const SVal &val = D.getValue(); if (SymbolRef parentSym = val.getAsSymbol()) return svalBuilder.getDerivedRegionValueSymbolVal(parentSym, R); if (val.isZeroConstant()) return svalBuilder.makeZeroVal(Ty); if (val.isUnknownOrUndef()) return val; // Lazy bindings are usually handled through getExistingLazyBinding(). // We should unify these two code paths at some point. if (val.getAs() || val.getAs()) return val; llvm_unreachable("Unknown default value"); } return None; } SVal RegionStoreManager::getLazyBinding(const SubRegion *LazyBindingRegion, RegionBindingsRef LazyBinding) { SVal Result; if (const ElementRegion *ER = dyn_cast(LazyBindingRegion)) Result = getBindingForElement(LazyBinding, ER); else Result = getBindingForField(LazyBinding, cast(LazyBindingRegion)); // FIXME: This is a hack to deal with RegionStore's inability to distinguish a // default value for /part/ of an aggregate from a default value for the // /entire/ aggregate. The most common case of this is when struct Outer // has as its first member a struct Inner, which is copied in from a stack // variable. In this case, even if the Outer's default value is symbolic, 0, // or unknown, it gets overridden by the Inner's default value of undefined. // // This is a general problem -- if the Inner is zero-initialized, the Outer // will now look zero-initialized. The proper way to solve this is with a // new version of RegionStore that tracks the extent of a binding as well // as the offset. // // This hack only takes care of the undefined case because that can very // quickly result in a warning. 
if (Result.isUndef()) Result = UnknownVal(); return Result; } SVal RegionStoreManager::getBindingForFieldOrElementCommon(RegionBindingsConstRef B, const TypedValueRegion *R, QualType Ty) { // At this point we have already checked in either getBindingForElement or // getBindingForField if 'R' has a direct binding. // Lazy binding? Store lazyBindingStore = nullptr; const SubRegion *lazyBindingRegion = nullptr; std::tie(lazyBindingStore, lazyBindingRegion) = findLazyBinding(B, R, R); if (lazyBindingRegion) return getLazyBinding(lazyBindingRegion, getRegionBindings(lazyBindingStore)); // Record whether or not we see a symbolic index. That can completely // be out of scope of our lookup. bool hasSymbolicIndex = false; // FIXME: This is a hack to deal with RegionStore's inability to distinguish a // default value for /part/ of an aggregate from a default value for the // /entire/ aggregate. The most common case of this is when struct Outer // has as its first member a struct Inner, which is copied in from a stack // variable. In this case, even if the Outer's default value is symbolic, 0, // or unknown, it gets overridden by the Inner's default value of undefined. // // This is a general problem -- if the Inner is zero-initialized, the Outer // will now look zero-initialized. The proper way to solve this is with a // new version of RegionStore that tracks the extent of a binding as well // as the offset. // // This hack only takes care of the undefined case because that can very // quickly result in a warning. bool hasPartialLazyBinding = false; const SubRegion *SR = dyn_cast(R); while (SR) { const MemRegion *Base = SR->getSuperRegion(); if (Optional D = getBindingForDerivedDefaultValue(B, Base, R, Ty)) { if (D->getAs()) { hasPartialLazyBinding = true; break; } return *D; } if (const ElementRegion *ER = dyn_cast(Base)) { NonLoc index = ER->getIndex(); if (!index.isConstant()) hasSymbolicIndex = true; } // If our super region is a field or element itself, walk up the region // hierarchy to see if there is a default value installed in an ancestor. SR = dyn_cast(Base); } if (R->hasStackNonParametersStorage()) { if (isa(R)) { // Currently we don't reason specially about Clang-style vectors. Check // if superR is a vector and if so return Unknown. if (const TypedValueRegion *typedSuperR = dyn_cast(R->getSuperRegion())) { if (typedSuperR->getValueType()->isVectorType()) return UnknownVal(); } } // FIXME: We also need to take ElementRegions with symbolic indexes into // account. This case handles both directly accessing an ElementRegion // with a symbolic offset, but also fields within an element with // a symbolic offset. if (hasSymbolicIndex) return UnknownVal(); if (!hasPartialLazyBinding) return UndefinedVal(); } // All other values are symbolic. return svalBuilder.getRegionValueSymbolVal(R); } SVal RegionStoreManager::getBindingForObjCIvar(RegionBindingsConstRef B, const ObjCIvarRegion* R) { // Check if the region has a binding. if (const Optional &V = B.getDirectBinding(R)) return *V; const MemRegion *superR = R->getSuperRegion(); // Check if the super region has a default binding. if (const Optional &V = B.getDefaultBinding(superR)) { if (SymbolRef parentSym = V->getAsSymbol()) return svalBuilder.getDerivedRegionValueSymbolVal(parentSym, R); // Other cases: give up. return UnknownVal(); } return getBindingForLazySymbol(R); } SVal RegionStoreManager::getBindingForVar(RegionBindingsConstRef B, const VarRegion *R) { // Check if the region has a binding. 
if (const Optional &V = B.getDirectBinding(R)) return *V; // Lazily derive a value for the VarRegion. const VarDecl *VD = R->getDecl(); const MemSpaceRegion *MS = R->getMemorySpace(); // Arguments are always symbolic. if (isa(MS)) return svalBuilder.getRegionValueSymbolVal(R); // Is 'VD' declared constant? If so, retrieve the constant value. if (VD->getType().isConstQualified()) if (const Expr *Init = VD->getInit()) if (Optional V = svalBuilder.getConstantVal(Init)) return *V; // This must come after the check for constants because closure-captured // constant variables may appear in UnknownSpaceRegion. if (isa(MS)) return svalBuilder.getRegionValueSymbolVal(R); if (isa(MS)) { QualType T = VD->getType(); // Function-scoped static variables are default-initialized to 0; if they // have an initializer, it would have been processed by now. // FIXME: This is only true when we're starting analysis from main(). // We're losing a lot of coverage here. if (isa(MS)) return svalBuilder.makeZeroVal(T); if (Optional V = getBindingForDerivedDefaultValue(B, MS, R, T)) { assert(!V->getAs()); return V.getValue(); } return svalBuilder.getRegionValueSymbolVal(R); } return UndefinedVal(); } SVal RegionStoreManager::getBindingForLazySymbol(const TypedValueRegion *R) { // All other values are symbolic. return svalBuilder.getRegionValueSymbolVal(R); } const RegionStoreManager::SValListTy & RegionStoreManager::getInterestingValues(nonloc::LazyCompoundVal LCV) { // First, check the cache. LazyBindingsMapTy::iterator I = LazyBindingsMap.find(LCV.getCVData()); if (I != LazyBindingsMap.end()) return I->second; // If we don't have a list of values cached, start constructing it. SValListTy List; const SubRegion *LazyR = LCV.getRegion(); RegionBindingsRef B = getRegionBindings(LCV.getStore()); // If this region had /no/ bindings at the time, there are no interesting // values to return. 
const ClusterBindings *Cluster = B.lookup(LazyR->getBaseRegion()); if (!Cluster) return (LazyBindingsMap[LCV.getCVData()] = std::move(List)); SmallVector Bindings; collectSubRegionBindings(Bindings, svalBuilder, *Cluster, LazyR, /*IncludeAllDefaultBindings=*/true); for (SmallVectorImpl::const_iterator I = Bindings.begin(), E = Bindings.end(); I != E; ++I) { SVal V = I->second; if (V.isUnknownOrUndef() || V.isConstant()) continue; if (Optional InnerLCV = V.getAs()) { const SValListTy &InnerList = getInterestingValues(*InnerLCV); List.insert(List.end(), InnerList.begin(), InnerList.end()); continue; } List.push_back(V); } return (LazyBindingsMap[LCV.getCVData()] = std::move(List)); } NonLoc RegionStoreManager::createLazyBinding(RegionBindingsConstRef B, const TypedValueRegion *R) { if (Optional V = getExistingLazyBinding(svalBuilder, B, R, false)) return *V; return svalBuilder.makeLazyCompoundVal(StoreRef(B.asStore(), *this), R); } static bool isRecordEmpty(const RecordDecl *RD) { if (!RD->field_empty()) return false; if (const CXXRecordDecl *CRD = dyn_cast(RD)) return CRD->getNumBases() == 0; return true; } SVal RegionStoreManager::getBindingForStruct(RegionBindingsConstRef B, const TypedValueRegion *R) { const RecordDecl *RD = R->getValueType()->castAs()->getDecl(); if (!RD->getDefinition() || isRecordEmpty(RD)) return UnknownVal(); return createLazyBinding(B, R); } SVal RegionStoreManager::getBindingForArray(RegionBindingsConstRef B, const TypedValueRegion *R) { assert(Ctx.getAsConstantArrayType(R->getValueType()) && "Only constant array types can have compound bindings."); return createLazyBinding(B, R); } bool RegionStoreManager::includedInBindings(Store store, const MemRegion *region) const { RegionBindingsRef B = getRegionBindings(store); region = region->getBaseRegion(); // Quick path: if the base is the head of a cluster, the region is live. if (B.lookup(region)) return true; // Slow path: if the region is the VALUE of any binding, it is live. for (RegionBindingsRef::iterator RI = B.begin(), RE = B.end(); RI != RE; ++RI) { const ClusterBindings &Cluster = RI.getData(); for (ClusterBindings::iterator CI = Cluster.begin(), CE = Cluster.end(); CI != CE; ++CI) { const SVal &D = CI.getData(); if (const MemRegion *R = D.getAsRegion()) if (R->getBaseRegion() == region) return true; } } return false; } //===----------------------------------------------------------------------===// // Binding values to regions. //===----------------------------------------------------------------------===// StoreRef RegionStoreManager::killBinding(Store ST, Loc L) { if (Optional LV = L.getAs()) if (const MemRegion* R = LV->getRegion()) return StoreRef(getRegionBindings(ST).removeBinding(R) .asImmutableMap() .getRootWithoutRetain(), *this); return StoreRef(ST, *this); } RegionBindingsRef RegionStoreManager::bind(RegionBindingsConstRef B, Loc L, SVal V) { if (L.getAs()) return B; // If we get here, the location should be a region. const MemRegion *R = L.castAs().getRegion(); // Check if the region is a struct region. if (const TypedValueRegion* TR = dyn_cast(R)) { QualType Ty = TR->getValueType(); if (Ty->isArrayType()) return bindArray(B, TR, V); if (Ty->isStructureOrClassType()) return bindStruct(B, TR, V); if (Ty->isVectorType()) return bindVector(B, TR, V); if (Ty->isUnionType()) return bindAggregate(B, TR, V); } if (const SymbolicRegion *SR = dyn_cast(R)) { // Binding directly to a symbolic region should be treated as binding // to element 0. 
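    // (Illustrative) e.g. for '*p = 1;' where 'p' is an untracked 'int *',
    // the destination is the SymbolicRegion for 'p'; it is modeled below as
    // the element-zero region of the pointee type.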
QualType T = SR->getSymbol()->getType(); if (T->isAnyPointerType() || T->isReferenceType()) T = T->getPointeeType(); R = GetElementZeroRegion(SR, T); } // Clear out bindings that may overlap with this binding. RegionBindingsRef NewB = removeSubRegionBindings(B, cast(R)); return NewB.addBinding(BindingKey::Make(R, BindingKey::Direct), V); } RegionBindingsRef RegionStoreManager::setImplicitDefaultValue(RegionBindingsConstRef B, const MemRegion *R, QualType T) { SVal V; if (Loc::isLocType(T)) V = svalBuilder.makeNull(); else if (T->isIntegralOrEnumerationType()) V = svalBuilder.makeZeroVal(T); else if (T->isStructureOrClassType() || T->isArrayType()) { // Set the default value to a zero constant when it is a structure // or array. The type doesn't really matter. V = svalBuilder.makeZeroVal(Ctx.IntTy); } else { // We can't represent values of this type, but we still need to set a value // to record that the region has been initialized. // If this assertion ever fires, a new case should be added above -- we // should know how to default-initialize any value we can symbolicate. assert(!SymbolManager::canSymbolicate(T) && "This type is representable"); V = UnknownVal(); } return B.addBinding(R, BindingKey::Default, V); } RegionBindingsRef RegionStoreManager::bindArray(RegionBindingsConstRef B, const TypedValueRegion* R, SVal Init) { const ArrayType *AT =cast(Ctx.getCanonicalType(R->getValueType())); QualType ElementTy = AT->getElementType(); Optional Size; if (const ConstantArrayType* CAT = dyn_cast(AT)) Size = CAT->getSize().getZExtValue(); // Check if the init expr is a string literal. if (Optional MRV = Init.getAs()) { const StringRegion *S = cast(MRV->getRegion()); // Treat the string as a lazy compound value. StoreRef store(B.asStore(), *this); nonloc::LazyCompoundVal LCV = svalBuilder.makeLazyCompoundVal(store, S) .castAs(); return bindAggregate(B, R, LCV); } // Handle lazy compound values. if (Init.getAs()) return bindAggregate(B, R, Init); if (Init.isUnknown()) return bindAggregate(B, R, UnknownVal()); // Remaining case: explicit compound values. const nonloc::CompoundVal& CV = Init.castAs(); nonloc::CompoundVal::iterator VI = CV.begin(), VE = CV.end(); uint64_t i = 0; RegionBindingsRef NewB(B); for (; Size.hasValue() ? i < Size.getValue() : true ; ++i, ++VI) { // The init list might be shorter than the array length. if (VI == VE) break; const NonLoc &Idx = svalBuilder.makeArrayIndex(i); const ElementRegion *ER = MRMgr.getElementRegion(ElementTy, Idx, R, Ctx); if (ElementTy->isStructureOrClassType()) NewB = bindStruct(NewB, ER, *VI); else if (ElementTy->isArrayType()) NewB = bindArray(NewB, ER, *VI); else NewB = bind(NewB, loc::MemRegionVal(ER), *VI); } // If the init list is shorter than the array length, set the // array default value. if (Size.hasValue() && i < Size.getValue()) NewB = setImplicitDefaultValue(NewB, R, ElementTy); return NewB; } RegionBindingsRef RegionStoreManager::bindVector(RegionBindingsConstRef B, const TypedValueRegion* R, SVal V) { QualType T = R->getValueType(); assert(T->isVectorType()); const VectorType *VT = T->getAs(); // Use getAs for typedefs. // Handle lazy compound values and symbolic values. if (V.getAs() || V.getAs()) return bindAggregate(B, R, V); // We may get non-CompoundVal accidentally due to imprecise cast logic or // that we are binding symbolic struct value. Kill the field values, and if // the value is symbolic go and bind it as a "default" binding. 
if (!V.getAs()) { return bindAggregate(B, R, UnknownVal()); } QualType ElemType = VT->getElementType(); nonloc::CompoundVal CV = V.castAs(); nonloc::CompoundVal::iterator VI = CV.begin(), VE = CV.end(); unsigned index = 0, numElements = VT->getNumElements(); RegionBindingsRef NewB(B); for ( ; index != numElements ; ++index) { if (VI == VE) break; NonLoc Idx = svalBuilder.makeArrayIndex(index); const ElementRegion *ER = MRMgr.getElementRegion(ElemType, Idx, R, Ctx); if (ElemType->isArrayType()) NewB = bindArray(NewB, ER, *VI); else if (ElemType->isStructureOrClassType()) NewB = bindStruct(NewB, ER, *VI); else NewB = bind(NewB, loc::MemRegionVal(ER), *VI); } return NewB; } Optional RegionStoreManager::tryBindSmallStruct(RegionBindingsConstRef B, const TypedValueRegion *R, const RecordDecl *RD, nonloc::LazyCompoundVal LCV) { FieldVector Fields; if (const CXXRecordDecl *Class = dyn_cast(RD)) if (Class->getNumBases() != 0 || Class->getNumVBases() != 0) return None; for (const auto *FD : RD->fields()) { if (FD->isUnnamedBitfield()) continue; // If there are too many fields, or if any of the fields are aggregates, // just use the LCV as a default binding. if (Fields.size() == SmallStructLimit) return None; QualType Ty = FD->getType(); if (!(Ty->isScalarType() || Ty->isReferenceType())) return None; Fields.push_back(FD); } RegionBindingsRef NewB = B; for (FieldVector::iterator I = Fields.begin(), E = Fields.end(); I != E; ++I){ const FieldRegion *SourceFR = MRMgr.getFieldRegion(*I, LCV.getRegion()); SVal V = getBindingForField(getRegionBindings(LCV.getStore()), SourceFR); const FieldRegion *DestFR = MRMgr.getFieldRegion(*I, R); NewB = bind(NewB, loc::MemRegionVal(DestFR), V); } return NewB; } RegionBindingsRef RegionStoreManager::bindStruct(RegionBindingsConstRef B, const TypedValueRegion* R, SVal V) { if (!Features.supportsFields()) return B; QualType T = R->getValueType(); assert(T->isStructureOrClassType()); const RecordType* RT = T->getAs(); const RecordDecl *RD = RT->getDecl(); if (!RD->isCompleteDefinition()) return B; // Handle lazy compound values and symbolic values. if (Optional LCV = V.getAs()) { if (Optional NewB = tryBindSmallStruct(B, R, RD, *LCV)) return *NewB; return bindAggregate(B, R, V); } if (V.getAs()) return bindAggregate(B, R, V); // We may get non-CompoundVal accidentally due to imprecise cast logic or // that we are binding symbolic struct value. Kill the field values, and if // the value is symbolic go and bind it as a "default" binding. if (V.isUnknown() || !V.getAs()) return bindAggregate(B, R, UnknownVal()); const nonloc::CompoundVal& CV = V.castAs(); nonloc::CompoundVal::iterator VI = CV.begin(), VE = CV.end(); RecordDecl::field_iterator FI, FE; RegionBindingsRef NewB(B); for (FI = RD->field_begin(), FE = RD->field_end(); FI != FE; ++FI) { if (VI == VE) break; // Skip any unnamed bitfields to stay in sync with the initializers. if (FI->isUnnamedBitfield()) continue; QualType FTy = FI->getType(); const FieldRegion* FR = MRMgr.getFieldRegion(*FI, R); if (FTy->isArrayType()) NewB = bindArray(NewB, FR, *VI); else if (FTy->isStructureOrClassType()) NewB = bindStruct(NewB, FR, *VI); else NewB = bind(NewB, loc::MemRegionVal(FR), *VI); ++VI; } // There may be fewer values in the initialize list than the fields of struct. 
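  // For illustration: given, say,
  //   struct S { int a, b, c; };
  //   S s = { 1 };
  // only 'a' gets an explicit binding from the initializer list; the default
  // binding added below stands in for the zero-initialization of the
  // remaining fields 'b' and 'c'.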
if (FI != FE) { NewB = NewB.addBinding(R, BindingKey::Default, svalBuilder.makeIntVal(0, false)); } return NewB; } RegionBindingsRef RegionStoreManager::bindAggregate(RegionBindingsConstRef B, const TypedRegion *R, SVal Val) { // Remove the old bindings, using 'R' as the root of all regions // we will invalidate. Then add the new binding. return removeSubRegionBindings(B, R).addBinding(R, BindingKey::Default, Val); } //===----------------------------------------------------------------------===// // State pruning. //===----------------------------------------------------------------------===// namespace { class removeDeadBindingsWorker : public ClusterAnalysis { SmallVector Postponed; SymbolReaper &SymReaper; const StackFrameContext *CurrentLCtx; public: removeDeadBindingsWorker(RegionStoreManager &rm, ProgramStateManager &stateMgr, RegionBindingsRef b, SymbolReaper &symReaper, const StackFrameContext *LCtx) : ClusterAnalysis(rm, stateMgr, b), SymReaper(symReaper), CurrentLCtx(LCtx) {} // Called by ClusterAnalysis. void VisitAddedToCluster(const MemRegion *baseR, const ClusterBindings &C); void VisitCluster(const MemRegion *baseR, const ClusterBindings *C); using ClusterAnalysis::VisitCluster; using ClusterAnalysis::AddToWorkList; bool AddToWorkList(const MemRegion *R); bool UpdatePostponed(); void VisitBinding(SVal V); }; } bool removeDeadBindingsWorker::AddToWorkList(const MemRegion *R) { const MemRegion *BaseR = R->getBaseRegion(); return AddToWorkList(WorkListElement(BaseR), getCluster(BaseR)); } void removeDeadBindingsWorker::VisitAddedToCluster(const MemRegion *baseR, const ClusterBindings &C) { if (const VarRegion *VR = dyn_cast(baseR)) { if (SymReaper.isLive(VR)) AddToWorkList(baseR, &C); return; } if (const SymbolicRegion *SR = dyn_cast(baseR)) { if (SymReaper.isLive(SR->getSymbol())) AddToWorkList(SR, &C); else Postponed.push_back(SR); return; } if (isa(baseR)) { AddToWorkList(baseR, &C); return; } // CXXThisRegion in the current or parent location context is live. if (const CXXThisRegion *TR = dyn_cast(baseR)) { const StackArgumentsSpaceRegion *StackReg = cast(TR->getSuperRegion()); const StackFrameContext *RegCtx = StackReg->getStackFrame(); if (CurrentLCtx && (RegCtx == CurrentLCtx || RegCtx->isParentOf(CurrentLCtx))) AddToWorkList(TR, &C); } } void removeDeadBindingsWorker::VisitCluster(const MemRegion *baseR, const ClusterBindings *C) { if (!C) return; // Mark the symbol for any SymbolicRegion with live bindings as live itself. // This means we should continue to track that symbol. if (const SymbolicRegion *SymR = dyn_cast(baseR)) SymReaper.markLive(SymR->getSymbol()); for (ClusterBindings::iterator I = C->begin(), E = C->end(); I != E; ++I) { // Element index of a binding key is live. SymReaper.markElementIndicesLive(I.getKey().getRegion()); VisitBinding(I.getData()); } } void removeDeadBindingsWorker::VisitBinding(SVal V) { // Is it a LazyCompoundVal? All referenced regions are live as well. if (Optional LCS = V.getAs()) { const RegionStoreManager::SValListTy &Vals = RM.getInterestingValues(*LCS); for (RegionStoreManager::SValListTy::const_iterator I = Vals.begin(), E = Vals.end(); I != E; ++I) VisitBinding(*I); return; } // If V is a region, then add it to the worklist. if (const MemRegion *R = V.getAsRegion()) { AddToWorkList(R); SymReaper.markLive(R); // All regions captured by a block are also live. 
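    // For illustration: if a block value like ^{ use(captured); } is still
    // reachable from the store, every region it captures has to stay live
    // too; the loop below pushes each captured region onto the worklist.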
if (const BlockDataRegion *BR = dyn_cast(R)) { BlockDataRegion::referenced_vars_iterator I = BR->referenced_vars_begin(), E = BR->referenced_vars_end(); for ( ; I != E; ++I) AddToWorkList(I.getCapturedRegion()); } } // Update the set of live symbols. for (SymExpr::symbol_iterator SI = V.symbol_begin(), SE = V.symbol_end(); SI!=SE; ++SI) SymReaper.markLive(*SI); } bool removeDeadBindingsWorker::UpdatePostponed() { // See if any postponed SymbolicRegions are actually live now, after // having done a scan. bool changed = false; for (SmallVectorImpl::iterator I = Postponed.begin(), E = Postponed.end() ; I != E ; ++I) { if (const SymbolicRegion *SR = *I) { if (SymReaper.isLive(SR->getSymbol())) { changed |= AddToWorkList(SR); *I = nullptr; } } } return changed; } StoreRef RegionStoreManager::removeDeadBindings(Store store, const StackFrameContext *LCtx, SymbolReaper& SymReaper) { RegionBindingsRef B = getRegionBindings(store); removeDeadBindingsWorker W(*this, StateMgr, B, SymReaper, LCtx); W.GenerateClusters(); // Enqueue the region roots onto the worklist. for (SymbolReaper::region_iterator I = SymReaper.region_begin(), E = SymReaper.region_end(); I != E; ++I) { W.AddToWorkList(*I); } do W.RunWorkList(); while (W.UpdatePostponed()); // We have now scanned the store, marking reachable regions and symbols // as live. We now remove all the regions that are dead from the store // as well as update DSymbols with the set symbols that are now dead. for (RegionBindingsRef::iterator I = B.begin(), E = B.end(); I != E; ++I) { const MemRegion *Base = I.getKey(); // If the cluster has been visited, we know the region has been marked. if (W.isVisited(Base)) continue; // Remove the dead entry. B = B.remove(Base); if (const SymbolicRegion *SymR = dyn_cast(Base)) SymReaper.maybeDead(SymR->getSymbol()); // Mark all non-live symbols that this binding references as dead. const ClusterBindings &Cluster = I.getData(); for (ClusterBindings::iterator CI = Cluster.begin(), CE = Cluster.end(); CI != CE; ++CI) { SVal X = CI.getData(); SymExpr::symbol_iterator SI = X.symbol_begin(), SE = X.symbol_end(); for (; SI != SE; ++SI) SymReaper.maybeDead(*SI); } } return StoreRef(B.asStore(), *this); } //===----------------------------------------------------------------------===// // Utility methods. //===----------------------------------------------------------------------===// void RegionStoreManager::print(Store store, raw_ostream &OS, const char* nl, const char *sep) { RegionBindingsRef B = getRegionBindings(store); OS << "Store (direct and default bindings), " << B.asStore() << " :" << nl; B.dump(OS, nl); } diff --git a/test/Analysis/ctor.mm b/test/Analysis/ctor.mm index 646229aac989..619e2cb0f044 100644 --- a/test/Analysis/ctor.mm +++ b/test/Analysis/ctor.mm @@ -1,706 +1,723 @@ // RUN: %clang_analyze_cc1 -analyzer-checker=core,debug.ExprInspection -fobjc-arc -analyzer-config c++-inlining=constructors -Wno-null-dereference -std=c++11 -verify %s #include "Inputs/system-header-simulator-cxx.h" void clang_analyzer_eval(bool); void clang_analyzer_checkInlined(bool); // A simplified version of std::move. 
template T &&move(T &obj) { return static_cast(obj); } struct Wrapper { __strong id obj; }; void test() { Wrapper w; // force a diagnostic *(char *)0 = 1; // expected-warning{{Dereference of null pointer}} } struct IntWrapper { int x; }; void testCopyConstructor() { IntWrapper a; a.x = 42; IntWrapper b(a); clang_analyzer_eval(b.x == 42); // expected-warning{{TRUE}} } struct NonPODIntWrapper { int x; virtual int get(); }; void testNonPODCopyConstructor() { NonPODIntWrapper a; a.x = 42; NonPODIntWrapper b(a); clang_analyzer_eval(b.x == 42); // expected-warning{{TRUE}} } namespace ConstructorVirtualCalls { class A { public: int *out1, *out2, *out3; virtual int get() { return 1; } A(int *out1) { *out1 = get(); } }; class B : public A { public: virtual int get() { return 2; } B(int *out1, int *out2) : A(out1) { *out2 = get(); } }; class C : public B { public: virtual int get() { return 3; } C(int *out1, int *out2, int *out3) : B(out1, out2) { *out3 = get(); } }; void test() { int a, b, c; C obj(&a, &b, &c); clang_analyzer_eval(a == 1); // expected-warning{{TRUE}} clang_analyzer_eval(b == 2); // expected-warning{{TRUE}} clang_analyzer_eval(c == 3); // expected-warning{{TRUE}} clang_analyzer_eval(obj.get() == 3); // expected-warning{{TRUE}} // Sanity check for devirtualization. A *base = &obj; clang_analyzer_eval(base->get() == 3); // expected-warning{{TRUE}} } } namespace TemporaryConstructor { class BoolWrapper { public: BoolWrapper() { clang_analyzer_checkInlined(true); // expected-warning{{TRUE}} value = true; } bool value; }; void test() { // PR13717 - Don't crash when a CXXTemporaryObjectExpr is inlined. if (BoolWrapper().value) return; } } namespace ConstructorUsedAsRValue { using TemporaryConstructor::BoolWrapper; bool extractValue(BoolWrapper b) { return b.value; } void test() { bool result = extractValue(BoolWrapper()); clang_analyzer_eval(result); // expected-warning{{TRUE}} } } namespace PODUninitialized { class POD { public: int x, y; }; class PODWrapper { public: POD p; }; class NonPOD { public: int x, y; NonPOD() {} NonPOD(const NonPOD &Other) : x(Other.x), y(Other.y) // expected-warning {{undefined}} { } NonPOD(NonPOD &&Other) : x(Other.x), y(Other.y) // expected-warning {{undefined}} { } NonPOD &operator=(const NonPOD &Other) { x = Other.x; y = Other.y; // expected-warning {{undefined}} return *this; } NonPOD &operator=(NonPOD &&Other) { x = Other.x; y = Other.y; // expected-warning {{undefined}} return *this; } }; class NonPODWrapper { public: class Inner { public: int x, y; Inner() {} Inner(const Inner &Other) : x(Other.x), y(Other.y) // expected-warning {{undefined}} { } Inner(Inner &&Other) : x(Other.x), y(Other.y) // expected-warning {{undefined}} { } Inner &operator=(const Inner &Other) { x = Other.x; // expected-warning {{undefined}} y = Other.y; return *this; } Inner &operator=(Inner &&Other) { x = Other.x; // expected-warning {{undefined}} y = Other.y; return *this; } }; Inner p; }; void testPOD() { POD p; p.x = 1; POD p2 = p; // no-warning clang_analyzer_eval(p2.x == 1); // expected-warning{{TRUE}} POD p3 = move(p); // no-warning clang_analyzer_eval(p3.x == 1); // expected-warning{{TRUE}} // Use rvalues as well. clang_analyzer_eval(POD(p3).x == 1); // expected-warning{{TRUE}} PODWrapper w; w.p.y = 1; PODWrapper w2 = w; // no-warning clang_analyzer_eval(w2.p.y == 1); // expected-warning{{TRUE}} PODWrapper w3 = move(w); // no-warning clang_analyzer_eval(w3.p.y == 1); // expected-warning{{TRUE}} // Use rvalues as well. 
clang_analyzer_eval(PODWrapper(w3).p.y == 1); // expected-warning{{TRUE}} } void testNonPOD() { NonPOD p; p.x = 1; NonPOD p2 = p; } void testNonPODMove() { NonPOD p; p.x = 1; NonPOD p2 = move(p); } void testNonPODWrapper() { NonPODWrapper w; w.p.y = 1; NonPODWrapper w2 = w; } void testNonPODWrapperMove() { NonPODWrapper w; w.p.y = 1; NonPODWrapper w2 = move(w); } // Not strictly about constructors, but trivial assignment operators should // essentially work the same way. namespace AssignmentOperator { void testPOD() { POD p; p.x = 1; POD p2; p2 = p; // no-warning clang_analyzer_eval(p2.x == 1); // expected-warning{{TRUE}} POD p3; p3 = move(p); // no-warning clang_analyzer_eval(p3.x == 1); // expected-warning{{TRUE}} PODWrapper w; w.p.y = 1; PODWrapper w2; w2 = w; // no-warning clang_analyzer_eval(w2.p.y == 1); // expected-warning{{TRUE}} PODWrapper w3; w3 = move(w); // no-warning clang_analyzer_eval(w3.p.y == 1); // expected-warning{{TRUE}} } void testReturnValue() { POD p; p.x = 1; POD p2; clang_analyzer_eval(&(p2 = p) == &p2); // expected-warning{{TRUE}} PODWrapper w; w.p.y = 1; PODWrapper w2; clang_analyzer_eval(&(w2 = w) == &w2); // expected-warning{{TRUE}} } void testNonPOD() { NonPOD p; p.x = 1; NonPOD p2; p2 = p; } void testNonPODMove() { NonPOD p; p.x = 1; NonPOD p2; p2 = move(p); } void testNonPODWrapper() { NonPODWrapper w; w.p.y = 1; NonPODWrapper w2; w2 = w; } void testNonPODWrapperMove() { NonPODWrapper w; w.p.y = 1; NonPODWrapper w2; w2 = move(w); } } } namespace ArrayMembers { struct Primitive { int values[3]; }; void testPrimitive() { Primitive a = { { 1, 2, 3 } }; clang_analyzer_eval(a.values[0] == 1); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1] == 2); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[2] == 3); // expected-warning{{TRUE}} Primitive b = a; clang_analyzer_eval(b.values[0] == 1); // expected-warning{{TRUE}} clang_analyzer_eval(b.values[1] == 2); // expected-warning{{TRUE}} clang_analyzer_eval(b.values[2] == 3); // expected-warning{{TRUE}} Primitive c; c = b; clang_analyzer_eval(c.values[0] == 1); // expected-warning{{TRUE}} clang_analyzer_eval(c.values[1] == 2); // expected-warning{{TRUE}} clang_analyzer_eval(c.values[2] == 3); // expected-warning{{TRUE}} } struct NestedPrimitive { int values[2][3]; }; void testNestedPrimitive() { NestedPrimitive a = { { { 0, 0, 0 }, { 1, 2, 3 } } }; clang_analyzer_eval(a.values[1][0] == 1); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1][1] == 2); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1][2] == 3); // expected-warning{{TRUE}} NestedPrimitive b = a; clang_analyzer_eval(b.values[1][0] == 1); // expected-warning{{TRUE}} clang_analyzer_eval(b.values[1][1] == 2); // expected-warning{{TRUE}} clang_analyzer_eval(b.values[1][2] == 3); // expected-warning{{TRUE}} NestedPrimitive c; c = b; clang_analyzer_eval(c.values[1][0] == 1); // expected-warning{{TRUE}} clang_analyzer_eval(c.values[1][1] == 2); // expected-warning{{TRUE}} clang_analyzer_eval(c.values[1][2] == 3); // expected-warning{{TRUE}} } struct POD { IntWrapper values[3]; }; void testPOD() { POD a = { { { 1 }, { 2 }, { 3 } } }; clang_analyzer_eval(a.values[0].x == 1); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1].x == 2); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[2].x == 3); // expected-warning{{TRUE}} POD b = a; clang_analyzer_eval(b.values[0].x == 1); // expected-warning{{TRUE}} clang_analyzer_eval(b.values[1].x == 2); // expected-warning{{TRUE}} clang_analyzer_eval(b.values[2].x == 3); // 
expected-warning{{TRUE}} POD c; c = b; clang_analyzer_eval(c.values[0].x == 1); // expected-warning{{TRUE}} clang_analyzer_eval(c.values[1].x == 2); // expected-warning{{TRUE}} clang_analyzer_eval(c.values[2].x == 3); // expected-warning{{TRUE}} } struct NestedPOD { IntWrapper values[2][3]; }; void testNestedPOD() { NestedPOD a = { { { { 0 }, { 0 }, { 0 } }, { { 1 }, { 2 }, { 3 } } } }; clang_analyzer_eval(a.values[1][0].x == 1); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1][1].x == 2); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1][2].x == 3); // expected-warning{{TRUE}} NestedPOD b = a; clang_analyzer_eval(b.values[1][0].x == 1); // expected-warning{{TRUE}} clang_analyzer_eval(b.values[1][1].x == 2); // expected-warning{{TRUE}} clang_analyzer_eval(b.values[1][2].x == 3); // expected-warning{{TRUE}} NestedPOD c; c = b; clang_analyzer_eval(c.values[1][0].x == 1); // expected-warning{{TRUE}} clang_analyzer_eval(c.values[1][1].x == 2); // expected-warning{{TRUE}} clang_analyzer_eval(c.values[1][2].x == 3); // expected-warning{{TRUE}} } struct NonPOD { NonPODIntWrapper values[3]; }; void testNonPOD() { NonPOD a; a.values[0].x = 1; a.values[1].x = 2; a.values[2].x = 3; clang_analyzer_eval(a.values[0].x == 1); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1].x == 2); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[2].x == 3); // expected-warning{{TRUE}} NonPOD b = a; clang_analyzer_eval(b.values[0].x == 1); // expected-warning{{UNKNOWN}} clang_analyzer_eval(b.values[1].x == 2); // expected-warning{{UNKNOWN}} clang_analyzer_eval(b.values[2].x == 3); // expected-warning{{UNKNOWN}} NonPOD c; c = b; clang_analyzer_eval(c.values[0].x == 1); // expected-warning{{UNKNOWN}} clang_analyzer_eval(c.values[1].x == 2); // expected-warning{{UNKNOWN}} clang_analyzer_eval(c.values[2].x == 3); // expected-warning{{UNKNOWN}} } struct NestedNonPOD { NonPODIntWrapper values[2][3]; }; void testNestedNonPOD() { NestedNonPOD a; a.values[0][0].x = 0; a.values[0][1].x = 0; a.values[0][2].x = 0; a.values[1][0].x = 1; a.values[1][1].x = 2; a.values[1][2].x = 3; clang_analyzer_eval(a.values[1][0].x == 1); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1][1].x == 2); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1][2].x == 3); // expected-warning{{TRUE}} NestedNonPOD b = a; clang_analyzer_eval(b.values[1][0].x == 1); // expected-warning{{UNKNOWN}} clang_analyzer_eval(b.values[1][1].x == 2); // expected-warning{{UNKNOWN}} clang_analyzer_eval(b.values[1][2].x == 3); // expected-warning{{UNKNOWN}} NestedNonPOD c; c = b; clang_analyzer_eval(c.values[1][0].x == 1); // expected-warning{{UNKNOWN}} clang_analyzer_eval(c.values[1][1].x == 2); // expected-warning{{UNKNOWN}} clang_analyzer_eval(c.values[1][2].x == 3); // expected-warning{{UNKNOWN}} } struct NonPODDefaulted { NonPODIntWrapper values[3]; NonPODDefaulted() = default; NonPODDefaulted(const NonPODDefaulted &) = default; NonPODDefaulted &operator=(const NonPODDefaulted &) = default; }; void testNonPODDefaulted() { NonPODDefaulted a; a.values[0].x = 1; a.values[1].x = 2; a.values[2].x = 3; clang_analyzer_eval(a.values[0].x == 1); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[1].x == 2); // expected-warning{{TRUE}} clang_analyzer_eval(a.values[2].x == 3); // expected-warning{{TRUE}} NonPODDefaulted b = a; clang_analyzer_eval(b.values[0].x == 1); // expected-warning{{UNKNOWN}} clang_analyzer_eval(b.values[1].x == 2); // expected-warning{{UNKNOWN}} clang_analyzer_eval(b.values[2].x == 3); // 
expected-warning{{UNKNOWN}} NonPODDefaulted c; c = b; clang_analyzer_eval(c.values[0].x == 1); // expected-warning{{UNKNOWN}} clang_analyzer_eval(c.values[1].x == 2); // expected-warning{{UNKNOWN}} clang_analyzer_eval(c.values[2].x == 3); // expected-warning{{UNKNOWN}} } }; namespace VirtualInheritance { int counter; struct base { base() { ++counter; } }; struct virtual_subclass : public virtual base { virtual_subclass() {} }; struct double_subclass : public virtual_subclass { double_subclass() {} }; void test() { counter = 0; double_subclass obj; clang_analyzer_eval(counter == 1); // expected-warning{{TRUE}} } struct double_virtual_subclass : public virtual virtual_subclass { double_virtual_subclass() {} }; void testVirtual() { counter = 0; double_virtual_subclass obj; clang_analyzer_eval(counter == 1); // expected-warning{{TRUE}} } } namespace ZeroInitialization { struct raw_pair { int p1; int p2; }; void testVarDecl() { raw_pair p{}; clang_analyzer_eval(p.p1 == 0); // expected-warning{{TRUE}} clang_analyzer_eval(p.p2 == 0); // expected-warning{{TRUE}} } void testTemporary() { clang_analyzer_eval(raw_pair().p1 == 0); // expected-warning{{TRUE}} clang_analyzer_eval(raw_pair().p2 == 0); // expected-warning{{TRUE}} } void testArray() { raw_pair p[2] = {}; clang_analyzer_eval(p[0].p1 == 0); // expected-warning{{TRUE}} clang_analyzer_eval(p[0].p2 == 0); // expected-warning{{TRUE}} clang_analyzer_eval(p[1].p1 == 0); // expected-warning{{TRUE}} clang_analyzer_eval(p[1].p2 == 0); // expected-warning{{TRUE}} } void testNew() { // FIXME: Pending proper implementation of constructors for 'new'. raw_pair *pp = new raw_pair(); clang_analyzer_eval(pp->p1 == 0); // expected-warning{{UNKNOWN}} clang_analyzer_eval(pp->p2 == 0); // expected-warning{{UNKNOWN}} } void testArrayNew() { // FIXME: Pending proper implementation of constructors for 'new[]'. raw_pair *p = new raw_pair[2](); clang_analyzer_eval(p[0].p1 == 0); // expected-warning{{UNKNOWN}} clang_analyzer_eval(p[0].p2 == 0); // expected-warning{{UNKNOWN}} clang_analyzer_eval(p[1].p1 == 0); // expected-warning{{UNKNOWN}} clang_analyzer_eval(p[1].p2 == 0); // expected-warning{{UNKNOWN}} } struct initializing_pair { public: int x; raw_pair y; initializing_pair() : x(), y() {} }; void testFieldInitializers() { initializing_pair p; clang_analyzer_eval(p.x == 0); // expected-warning{{TRUE}} clang_analyzer_eval(p.y.p1 == 0); // expected-warning{{TRUE}} clang_analyzer_eval(p.y.p2 == 0); // expected-warning{{TRUE}} } struct subclass : public raw_pair { subclass() = default; }; void testSubclass() { subclass p; clang_analyzer_eval(p.p1 == 0); // expected-warning{{garbage}} } struct initializing_subclass : public raw_pair { initializing_subclass() : raw_pair() {} }; void testInitializingSubclass() { initializing_subclass p; clang_analyzer_eval(p.p1 == 0); // expected-warning{{TRUE}} clang_analyzer_eval(p.p2 == 0); // expected-warning{{TRUE}} } struct pair_wrapper { pair_wrapper() : p() {} raw_pair p; }; struct virtual_subclass : public virtual pair_wrapper { virtual_subclass() {} }; struct double_virtual_subclass : public virtual_subclass { double_virtual_subclass() { // This previously caused a crash because the pair_wrapper subobject was // initialized twice. } }; class Empty { public: Empty(); }; class PairContainer : public Empty { raw_pair p; public: PairContainer() : Empty(), p() { // This previously caused a crash because the empty base class looked // like an initialization of 'p'. 
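    // For illustration: with the empty base optimization, the 'Empty' base
    // subobject has zero size and sits at the same offset as the member 'p',
    // which is how the two initializers could be confused.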
} PairContainer(int) : Empty(), p() { // Test inlining something else here. } }; class PairContainerContainer { int padding; PairContainer pc; public: PairContainerContainer() : pc(1) {} }; } namespace InitializerList { struct List { bool usedInitializerList; List() : usedInitializerList(false) {} List(std::initializer_list) : usedInitializerList(true) {} }; void testStatic() { List defaultCtor; clang_analyzer_eval(!defaultCtor.usedInitializerList); // expected-warning{{TRUE}} List list{1, 2}; clang_analyzer_eval(list.usedInitializerList); // expected-warning{{TRUE}} } void testDynamic() { List *list = new List{1, 2}; // FIXME: When we handle constructors with 'new', this will be TRUE. clang_analyzer_eval(list->usedInitializerList); // expected-warning{{UNKNOWN}} } } namespace PR19579 { class C {}; void f() { C(); int a; extern void use(int); use(a); // expected-warning{{uninitialized}} } void g() { struct S { C c; int i; }; // This order triggers the initialization of the inner "a" after the // constructor for "C" is run, which used to confuse the analyzer // (is "C()" the initialization of "a"?). struct S s = { C(), ({ int a, b = 0; 0; }) }; } } + +namespace NoCrashOnEmptyBaseOptimization { + struct NonEmptyBase { + int X; + explicit NonEmptyBase(int X) : X(X) {} + }; + + struct EmptyBase {}; + + struct S : NonEmptyBase, EmptyBase { + S() : NonEmptyBase(0), EmptyBase() {} + }; + + void testSCtorNoCrash() { + S s; + } +} diff --git a/test/CodeGenCXX/uncopyable-args.cpp b/test/CodeGenCXX/uncopyable-args.cpp index 307a5cf11b6b..ef7168cdaaf7 100644 --- a/test/CodeGenCXX/uncopyable-args.cpp +++ b/test/CodeGenCXX/uncopyable-args.cpp @@ -1,280 +1,351 @@ // RUN: %clang_cc1 -std=c++11 -triple x86_64-unknown-unknown -emit-llvm -o - %s | FileCheck %s -// RUN: %clang_cc1 -std=c++11 -triple x86_64-windows-msvc -emit-llvm -o - %s | FileCheck %s -check-prefix=WIN64 +// RUN: %clang_cc1 -std=c++11 -triple x86_64-windows-msvc -emit-llvm -o - %s -fms-compatibility -fms-compatibility-version=18 | FileCheck %s -check-prefix=WIN64 -check-prefix=WIN64-18 +// RUN: %clang_cc1 -std=c++11 -triple x86_64-windows-msvc -emit-llvm -o - %s -fms-compatibility -fms-compatibility-version=19 | FileCheck %s -check-prefix=WIN64 -check-prefix=WIN64-19 namespace trivial { // Trivial structs should be passed directly. struct A { void *p; }; void foo(A); void bar() { foo({}); } // CHECK-LABEL: define void @_ZN7trivial3barEv() // CHECK: alloca %"struct.trivial::A" // CHECK: load i8*, i8** // CHECK: call void @_ZN7trivial3fooENS_1AE(i8* %{{.*}}) // CHECK-LABEL: declare void @_ZN7trivial3fooENS_1AE(i8*) // WIN64-LABEL: declare void @"\01?foo@trivial@@YAXUA@1@@Z"(i64) } namespace default_ctor { struct A { A(); void *p; }; void foo(A); void bar() { // Core issue 1590. We can pass this type in registers, even though C++ // normally doesn't permit copies when using braced initialization. foo({}); } // CHECK-LABEL: define void @_ZN12default_ctor3barEv() // CHECK: alloca %"struct.default_ctor::A" // CHECK: call void @_Z{{.*}}C1Ev( // CHECK: load i8*, i8** // CHECK: call void @_ZN12default_ctor3fooENS_1AE(i8* %{{.*}}) // CHECK-LABEL: declare void @_ZN12default_ctor3fooENS_1AE(i8*) // WIN64-LABEL: declare void @"\01?foo@default_ctor@@YAXUA@1@@Z"(i64) } namespace move_ctor { // The presence of a move constructor implicitly deletes the trivial copy ctor // and means that we have to pass this struct by address. struct A { A(); A(A &&o); void *p; }; void foo(A); void bar() { foo({}); } -// FIXME: The copy ctor is implicitly deleted. 
-// CHECK-DISABLED-LABEL: define void @_ZN9move_ctor3barEv() -// CHECK-DISABLED: call void @_Z{{.*}}C1Ev( -// CHECK-DISABLED-NOT: call -// CHECK-DISABLED: call void @_ZN9move_ctor3fooENS_1AE(%"struct.move_ctor::A"* %{{.*}}) -// CHECK-DISABLED-LABEL: declare void @_ZN9move_ctor3fooENS_1AE(%"struct.move_ctor::A"*) +// CHECK-LABEL: define void @_ZN9move_ctor3barEv() +// CHECK: call void @_Z{{.*}}C1Ev( +// CHECK-NOT: call +// CHECK: call void @_ZN9move_ctor3fooENS_1AE(%"struct.move_ctor::A"* %{{.*}}) +// CHECK-LABEL: declare void @_ZN9move_ctor3fooENS_1AE(%"struct.move_ctor::A"*) // WIN64-LABEL: declare void @"\01?foo@move_ctor@@YAXUA@1@@Z"(%"struct.move_ctor::A"*) } namespace all_deleted { struct A { A(); A(const A &o) = delete; A(A &&o) = delete; void *p; }; void foo(A); void bar() { foo({}); } -// FIXME: The copy ctor is deleted. -// CHECK-DISABLED-LABEL: define void @_ZN11all_deleted3barEv() -// CHECK-DISABLED: call void @_Z{{.*}}C1Ev( -// CHECK-DISABLED-NOT: call -// CHECK-DISABLED: call void @_ZN11all_deleted3fooENS_1AE(%"struct.all_deleted::A"* %{{.*}}) -// CHECK-DISABLED-LABEL: declare void @_ZN11all_deleted3fooENS_1AE(%"struct.all_deleted::A"*) +// CHECK-LABEL: define void @_ZN11all_deleted3barEv() +// CHECK: call void @_Z{{.*}}C1Ev( +// CHECK-NOT: call +// CHECK: call void @_ZN11all_deleted3fooENS_1AE(%"struct.all_deleted::A"* %{{.*}}) +// CHECK-LABEL: declare void @_ZN11all_deleted3fooENS_1AE(%"struct.all_deleted::A"*) // WIN64-LABEL: declare void @"\01?foo@all_deleted@@YAXUA@1@@Z"(%"struct.all_deleted::A"*) } namespace implicitly_deleted { struct A { A(); A &operator=(A &&o); void *p; }; void foo(A); void bar() { foo({}); } -// FIXME: The copy and move ctors are implicitly deleted. -// CHECK-DISABLED-LABEL: define void @_ZN18implicitly_deleted3barEv() -// CHECK-DISABLED: call void @_Z{{.*}}C1Ev( -// CHECK-DISABLED-NOT: call -// CHECK-DISABLED: call void @_ZN18implicitly_deleted3fooENS_1AE(%"struct.implicitly_deleted::A"* %{{.*}}) -// CHECK-DISABLED-LABEL: declare void @_ZN18implicitly_deleted3fooENS_1AE(%"struct.implicitly_deleted::A"*) - -// WIN64-LABEL: declare void @"\01?foo@implicitly_deleted@@YAXUA@1@@Z"(%"struct.implicitly_deleted::A"*) +// CHECK-LABEL: define void @_ZN18implicitly_deleted3barEv() +// CHECK: call void @_Z{{.*}}C1Ev( +// CHECK-NOT: call +// CHECK: call void @_ZN18implicitly_deleted3fooENS_1AE(%"struct.implicitly_deleted::A"* %{{.*}}) +// CHECK-LABEL: declare void @_ZN18implicitly_deleted3fooENS_1AE(%"struct.implicitly_deleted::A"*) + +// In MSVC 2013, the copy ctor is not deleted by a move assignment. In MSVC 2015, it is. +// WIN64-18-LABEL: declare void @"\01?foo@implicitly_deleted@@YAXUA@1@@Z"(i64 +// WIN64-19-LABEL: declare void @"\01?foo@implicitly_deleted@@YAXUA@1@@Z"(%"struct.implicitly_deleted::A"*) } namespace one_deleted { struct A { A(); A(A &&o) = delete; void *p; }; void foo(A); void bar() { foo({}); } -// FIXME: The copy constructor is implicitly deleted. 
-// CHECK-DISABLED-LABEL: define void @_ZN11one_deleted3barEv() -// CHECK-DISABLED: call void @_Z{{.*}}C1Ev( -// CHECK-DISABLED-NOT: call -// CHECK-DISABLED: call void @_ZN11one_deleted3fooENS_1AE(%"struct.one_deleted::A"* %{{.*}}) -// CHECK-DISABLED-LABEL: declare void @_ZN11one_deleted3fooENS_1AE(%"struct.one_deleted::A"*) +// CHECK-LABEL: define void @_ZN11one_deleted3barEv() +// CHECK: call void @_Z{{.*}}C1Ev( +// CHECK-NOT: call +// CHECK: call void @_ZN11one_deleted3fooENS_1AE(%"struct.one_deleted::A"* %{{.*}}) +// CHECK-LABEL: declare void @_ZN11one_deleted3fooENS_1AE(%"struct.one_deleted::A"*) // WIN64-LABEL: declare void @"\01?foo@one_deleted@@YAXUA@1@@Z"(%"struct.one_deleted::A"*) } namespace copy_defaulted { struct A { A(); A(const A &o) = default; A(A &&o) = delete; void *p; }; void foo(A); void bar() { foo({}); } // CHECK-LABEL: define void @_ZN14copy_defaulted3barEv() // CHECK: call void @_Z{{.*}}C1Ev( // CHECK: load i8*, i8** // CHECK: call void @_ZN14copy_defaulted3fooENS_1AE(i8* %{{.*}}) // CHECK-LABEL: declare void @_ZN14copy_defaulted3fooENS_1AE(i8*) // WIN64-LABEL: declare void @"\01?foo@copy_defaulted@@YAXUA@1@@Z"(i64) } namespace move_defaulted { struct A { A(); A(const A &o) = delete; A(A &&o) = default; void *p; }; void foo(A); void bar() { foo({}); } // CHECK-LABEL: define void @_ZN14move_defaulted3barEv() // CHECK: call void @_Z{{.*}}C1Ev( // CHECK: load i8*, i8** // CHECK: call void @_ZN14move_defaulted3fooENS_1AE(i8* %{{.*}}) // CHECK-LABEL: declare void @_ZN14move_defaulted3fooENS_1AE(i8*) // WIN64-LABEL: declare void @"\01?foo@move_defaulted@@YAXUA@1@@Z"(%"struct.move_defaulted::A"*) } namespace trivial_defaulted { struct A { A(); A(const A &o) = default; void *p; }; void foo(A); void bar() { foo({}); } // CHECK-LABEL: define void @_ZN17trivial_defaulted3barEv() // CHECK: call void @_Z{{.*}}C1Ev( // CHECK: load i8*, i8** // CHECK: call void @_ZN17trivial_defaulted3fooENS_1AE(i8* %{{.*}}) // CHECK-LABEL: declare void @_ZN17trivial_defaulted3fooENS_1AE(i8*) // WIN64-LABEL: declare void @"\01?foo@trivial_defaulted@@YAXUA@1@@Z"(i64) } namespace two_copy_ctors { struct A { A(); A(const A &) = default; A(const A &, int = 0); void *p; }; struct B : A {}; void foo(B); void bar() { foo({}); } -// FIXME: This class has a non-trivial copy ctor and a trivial copy ctor. It's -// not clear whether we should pass by address or in registers. 
-// CHECK-DISABLED-LABEL: define void @_ZN14two_copy_ctors3barEv() -// CHECK-DISABLED: call void @_Z{{.*}}C1Ev( -// CHECK-DISABLED: call void @_ZN14two_copy_ctors3fooENS_1BE(%"struct.two_copy_ctors::B"* %{{.*}}) -// CHECK-DISABLED-LABEL: declare void @_ZN14two_copy_ctors3fooENS_1BE(%"struct.two_copy_ctors::B"*) +// CHECK-LABEL: define void @_ZN14two_copy_ctors3barEv() +// CHECK: call void @_Z{{.*}}C1Ev( +// CHECK: call void @_ZN14two_copy_ctors3fooENS_1BE(%"struct.two_copy_ctors::B"* %{{.*}}) +// CHECK-LABEL: declare void @_ZN14two_copy_ctors3fooENS_1BE(%"struct.two_copy_ctors::B"*) // WIN64-LABEL: declare void @"\01?foo@two_copy_ctors@@YAXUB@1@@Z"(%"struct.two_copy_ctors::B"*) } namespace definition_only { struct A { A(); A(A &&o); void *p; }; void *foo(A a) { return a.p; } +// CHECK-LABEL: define i8* @_ZN15definition_only3fooENS_1AE(%"struct.definition_only::A"* // WIN64-LABEL: define i8* @"\01?foo@definition_only@@YAPEAXUA@1@@Z"(%"struct.definition_only::A"* } namespace deleted_by_member { struct B { B(); B(B &&o); void *p; }; struct A { A(); B b; }; void *foo(A a) { return a.b.p; } +// CHECK-LABEL: define i8* @_ZN17deleted_by_member3fooENS_1AE(%"struct.deleted_by_member::A"* // WIN64-LABEL: define i8* @"\01?foo@deleted_by_member@@YAPEAXUA@1@@Z"(%"struct.deleted_by_member::A"* } namespace deleted_by_base { struct B { B(); B(B &&o); void *p; }; struct A : B { A(); }; void *foo(A a) { return a.p; } +// CHECK-LABEL: define i8* @_ZN15deleted_by_base3fooENS_1AE(%"struct.deleted_by_base::A"* // WIN64-LABEL: define i8* @"\01?foo@deleted_by_base@@YAPEAXUA@1@@Z"(%"struct.deleted_by_base::A"* } namespace deleted_by_member_copy { struct B { B(); B(const B &o) = delete; void *p; }; struct A { A(); B b; }; void *foo(A a) { return a.b.p; } +// CHECK-LABEL: define i8* @_ZN22deleted_by_member_copy3fooENS_1AE(%"struct.deleted_by_member_copy::A"* // WIN64-LABEL: define i8* @"\01?foo@deleted_by_member_copy@@YAPEAXUA@1@@Z"(%"struct.deleted_by_member_copy::A"* } namespace deleted_by_base_copy { struct B { B(); B(const B &o) = delete; void *p; }; struct A : B { A(); }; void *foo(A a) { return a.p; } +// CHECK-LABEL: define i8* @_ZN20deleted_by_base_copy3fooENS_1AE(%"struct.deleted_by_base_copy::A"* // WIN64-LABEL: define i8* @"\01?foo@deleted_by_base_copy@@YAPEAXUA@1@@Z"(%"struct.deleted_by_base_copy::A"* } namespace explicit_delete { struct A { A(); A(const A &o) = delete; void *p; }; +// CHECK-LABEL: define i8* @_ZN15explicit_delete3fooENS_1AE(%"struct.explicit_delete::A"* // WIN64-LABEL: define i8* @"\01?foo@explicit_delete@@YAPEAXUA@1@@Z"(%"struct.explicit_delete::A"* void *foo(A a) { return a.p; } } + +namespace implicitly_deleted_copy_ctor { +struct A { + // No move ctor due to copy assignment. + A &operator=(const A&); + // Deleted copy ctor due to rvalue ref member. + int &&ref; +}; +// CHECK-LABEL: define {{.*}} @_ZN28implicitly_deleted_copy_ctor3fooENS_1AE(%"struct.implicitly_deleted_copy_ctor::A"* +// WIN64-LABEL: define {{.*}} @"\01?foo@implicitly_deleted_copy_ctor@@YAAEAHUA@1@@Z"(%"struct.implicitly_deleted_copy_ctor::A"* +int &foo(A a) { return a.ref; } + +struct B { + // Passed direct: has non-deleted trivial copy ctor. 
+ B &operator=(const B&); + int &ref; +}; +int &foo(B b) { return b.ref; } +// CHECK-LABEL: define {{.*}} @_ZN28implicitly_deleted_copy_ctor3fooENS_1BE(i32* +// WIN64-LABEL: define {{.*}} @"\01?foo@implicitly_deleted_copy_ctor@@YAAEAHUB@1@@Z"(i64 + +struct X { X(const X&); }; +struct Y { Y(const Y&) = default; }; + +union C { + C &operator=(const C&); + // Passed indirect: copy ctor deleted due to variant member with nontrivial copy ctor. + X x; + int n; +}; +int foo(C c) { return c.n; } +// CHECK-LABEL: define {{.*}} @_ZN28implicitly_deleted_copy_ctor3fooENS_1CE(%"union.implicitly_deleted_copy_ctor::C"* +// WIN64-LABEL: define {{.*}} @"\01?foo@implicitly_deleted_copy_ctor@@YAHTC@1@@Z"(%"union.implicitly_deleted_copy_ctor::C"* + +struct D { + D &operator=(const D&); + // Passed indirect: copy ctor deleted due to variant member with nontrivial copy ctor. + union { + X x; + int n; + }; +}; +int foo(D d) { return d.n; } +// CHECK-LABEL: define {{.*}} @_ZN28implicitly_deleted_copy_ctor3fooENS_1DE(%"struct.implicitly_deleted_copy_ctor::D"* +// WIN64-LABEL: define {{.*}} @"\01?foo@implicitly_deleted_copy_ctor@@YAHUD@1@@Z"(%"struct.implicitly_deleted_copy_ctor::D"* + +union E { + // Passed direct: has non-deleted trivial copy ctor. + E &operator=(const E&); + Y y; + int n; +}; +int foo(E e) { return e.n; } +// CHECK-LABEL: define {{.*}} @_ZN28implicitly_deleted_copy_ctor3fooENS_1EE(i32 +// WIN64-LABEL: define {{.*}} @"\01?foo@implicitly_deleted_copy_ctor@@YAHTE@1@@Z"(i32 + +struct F { + // Passed direct: has non-deleted trivial copy ctor. + F &operator=(const F&); + union { + Y y; + int n; + }; +}; +int foo(F f) { return f.n; } +// CHECK-LABEL: define {{.*}} @_ZN28implicitly_deleted_copy_ctor3fooENS_1FE(i32 +// WIN64-LABEL: define {{.*}} @"\01?foo@implicitly_deleted_copy_ctor@@YAHUF@1@@Z"(i32 +} diff --git a/test/Driver/clang-translation.c b/test/Driver/clang-translation.c index 545951d5aa11..3b30f7af76dc 100644 --- a/test/Driver/clang-translation.c +++ b/test/Driver/clang-translation.c @@ -1,306 +1,310 @@ // RUN: %clang -target i386-unknown-unknown -### -S -O0 -Os %s -o %t.s -fverbose-asm -funwind-tables -fvisibility=hidden 2>&1 | FileCheck -check-prefix=I386 %s // I386: "-triple" "i386-unknown-unknown" // I386: "-S" // I386: "-disable-free" // I386: "-mrelocation-model" "static" // I386: "-mdisable-fp-elim" // I386: "-masm-verbose" // I386: "-munwind-tables" // I386: "-Os" // I386: "-fvisibility" // I386: "hidden" // I386: "-o" // I386: clang-translation // RUN: %clang -target i386-apple-darwin9 -### -S %s -o %t.s 2>&1 | \ // RUN: FileCheck -check-prefix=YONAH %s // RUN: %clang -target i386-apple-macosx10.11 -### -S %s -o %t.s 2>&1 | \ // RUN: FileCheck -check-prefix=YONAH %s // YONAH: "-target-cpu" // YONAH: "yonah" // RUN: %clang -target x86_64-apple-darwin9 -### -S %s -o %t.s 2>&1 | \ // RUN: FileCheck -check-prefix=CORE2 %s // RUN: %clang -target x86_64-apple-macosx10.11 -### -S %s -o %t.s 2>&1 | \ // RUN: FileCheck -check-prefix=CORE2 %s // CORE2: "-target-cpu" // CORE2: "core2" // RUN: %clang -target x86_64h-apple-darwin -### -S %s -o %t.s 2>&1 | \ // RUN: FileCheck -check-prefix=AVX2 %s // RUN: %clang -target x86_64h-apple-macosx10.12 -### -S %s -o %t.s 2>&1 | \ // RUN: FileCheck -check-prefix=AVX2 %s // AVX2: "-target-cpu" // AVX2: "core-avx2" // RUN: %clang -target i386-apple-macosx10.12 -### -S %s -o %t.s 2>&1 | \ // RUN: FileCheck -check-prefix=PENRYN %s // RUN: %clang -target x86_64-apple-macosx10.12 -### -S %s -o %t.s 2>&1 | \ // RUN: FileCheck -check-prefix=PENRYN %s // PENRYN: 
"-target-cpu" // PENRYN: "penryn" // RUN: %clang -target x86_64-apple-darwin10 -### -S %s -arch armv7 2>&1 | \ // RUN: FileCheck -check-prefix=ARMV7_DEFAULT %s // ARMV7_DEFAULT: clang // ARMV7_DEFAULT: "-cc1" // ARMV7_DEFAULT-NOT: "-msoft-float" // ARMV7_DEFAULT: "-mfloat-abi" "soft" // ARMV7_DEFAULT-NOT: "-msoft-float" // ARMV7_DEFAULT: "-x" "c" // RUN: %clang -target x86_64-apple-darwin10 -### -S %s -arch armv7 \ // RUN: -msoft-float 2>&1 | FileCheck -check-prefix=ARMV7_SOFTFLOAT %s // ARMV7_SOFTFLOAT: clang // ARMV7_SOFTFLOAT: "-cc1" // ARMV7_SOFTFLOAT: "-target-feature" // ARMV7_SOFTFLOAT: "-neon" // ARMV7_SOFTFLOAT: "-msoft-float" // ARMV7_SOFTFLOAT: "-mfloat-abi" "soft" // ARMV7_SOFTFLOAT: "-x" "c" // RUN: %clang -target x86_64-apple-darwin10 -### -S %s -arch armv7 \ // RUN: -mhard-float 2>&1 | FileCheck -check-prefix=ARMV7_HARDFLOAT %s // ARMV7_HARDFLOAT: clang // ARMV7_HARDFLOAT: "-cc1" // ARMV7_HARDFLOAT-NOT: "-msoft-float" // ARMV7_HARDFLOAT: "-mfloat-abi" "hard" // ARMV7_HARDFLOAT-NOT: "-msoft-float" // ARMV7_HARDFLOAT: "-x" "c" // RUN: %clang -target arm64-apple-ios10 -### -S %s -arch arm64 2>&1 | \ // RUN: FileCheck -check-prefix=ARM64-APPLE %s // ARM64-APPLE: -munwind-table +// RUN: %clang -target arm64-apple-ios10 -fno-exceptions -### -S %s -arch arm64 2>&1 | \ +// RUN: FileCheck -check-prefix=ARM64-APPLE-EXCEP %s +// ARM64-APPLE-EXCEP-NOT: -munwind-table + // RUN: %clang -target armv7k-apple-watchos4.0 -### -S %s -arch armv7k 2>&1 | \ // RUN: FileCheck -check-prefix=ARMV7K-APPLE %s // ARMV7K-APPLE: -munwind-table // RUN: %clang -target arm-linux -### -S %s -march=armv5e 2>&1 | \ // RUN: FileCheck -check-prefix=ARMV5E %s // ARMV5E: clang // ARMV5E: "-cc1" // ARMV5E: "-target-cpu" "arm1022e" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=G5 2>&1 | FileCheck -check-prefix=PPCG5 %s // PPCG5: clang // PPCG5: "-cc1" // PPCG5: "-target-cpu" "g5" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=power7 2>&1 | FileCheck -check-prefix=PPCPWR7 %s // PPCPWR7: clang // PPCPWR7: "-cc1" // PPCPWR7: "-target-cpu" "pwr7" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=power8 2>&1 | FileCheck -check-prefix=PPCPWR8 %s // PPCPWR8: clang // PPCPWR8: "-cc1" // PPCPWR8: "-target-cpu" "pwr8" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=a2q 2>&1 | FileCheck -check-prefix=PPCA2Q %s // PPCA2Q: clang // PPCA2Q: "-cc1" // PPCA2Q: "-target-cpu" "a2q" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=630 2>&1 | FileCheck -check-prefix=PPC630 %s // PPC630: clang // PPC630: "-cc1" // PPC630: "-target-cpu" "pwr3" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=power3 2>&1 | FileCheck -check-prefix=PPCPOWER3 %s // PPCPOWER3: clang // PPCPOWER3: "-cc1" // PPCPOWER3: "-target-cpu" "pwr3" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=pwr3 2>&1 | FileCheck -check-prefix=PPCPWR3 %s // PPCPWR3: clang // PPCPWR3: "-cc1" // PPCPWR3: "-target-cpu" "pwr3" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=power4 2>&1 | FileCheck -check-prefix=PPCPOWER4 %s // PPCPOWER4: clang // PPCPOWER4: "-cc1" // PPCPOWER4: "-target-cpu" "pwr4" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=pwr4 2>&1 | FileCheck -check-prefix=PPCPWR4 %s // PPCPWR4: clang // PPCPWR4: "-cc1" // PPCPWR4: "-target-cpu" "pwr4" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: 
-### -S %s -mcpu=power5 2>&1 | FileCheck -check-prefix=PPCPOWER5 %s // PPCPOWER5: clang // PPCPOWER5: "-cc1" // PPCPOWER5: "-target-cpu" "pwr5" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=pwr5 2>&1 | FileCheck -check-prefix=PPCPWR5 %s // PPCPWR5: clang // PPCPWR5: "-cc1" // PPCPWR5: "-target-cpu" "pwr5" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=power5x 2>&1 | FileCheck -check-prefix=PPCPOWER5X %s // PPCPOWER5X: clang // PPCPOWER5X: "-cc1" // PPCPOWER5X: "-target-cpu" "pwr5x" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=pwr5x 2>&1 | FileCheck -check-prefix=PPCPWR5X %s // PPCPWR5X: clang // PPCPWR5X: "-cc1" // PPCPWR5X: "-target-cpu" "pwr5x" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=power6 2>&1 | FileCheck -check-prefix=PPCPOWER6 %s // PPCPOWER6: clang // PPCPOWER6: "-cc1" // PPCPOWER6: "-target-cpu" "pwr6" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=pwr6 2>&1 | FileCheck -check-prefix=PPCPWR6 %s // PPCPWR6: clang // PPCPWR6: "-cc1" // PPCPWR6: "-target-cpu" "pwr6" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=power6x 2>&1 | FileCheck -check-prefix=PPCPOWER6X %s // PPCPOWER6X: clang // PPCPOWER6X: "-cc1" // PPCPOWER6X: "-target-cpu" "pwr6x" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=pwr6x 2>&1 | FileCheck -check-prefix=PPCPWR6X %s // PPCPWR6X: clang // PPCPWR6X: "-cc1" // PPCPWR6X: "-target-cpu" "pwr6x" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=power7 2>&1 | FileCheck -check-prefix=PPCPOWER7 %s // PPCPOWER7: clang // PPCPOWER7: "-cc1" // PPCPOWER7: "-target-cpu" "pwr7" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=powerpc 2>&1 | FileCheck -check-prefix=PPCPOWERPC %s // PPCPOWERPC: clang // PPCPOWERPC: "-cc1" // PPCPOWERPC: "-target-cpu" "ppc" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s -mcpu=powerpc64 2>&1 | FileCheck -check-prefix=PPCPOWERPC64 %s // PPCPOWERPC64: clang // PPCPOWERPC64: "-cc1" // PPCPOWERPC64: "-target-cpu" "ppc64" // RUN: %clang -target powerpc64-unknown-linux-gnu \ // RUN: -### -S %s 2>&1 | FileCheck -check-prefix=PPC64NS %s // PPC64NS: clang // PPC64NS: "-cc1" // PPC64NS: "-target-cpu" "ppc64" // RUN: %clang -target powerpc-fsl-linux -### -S %s \ // RUN: -mcpu=e500mc 2>&1 | FileCheck -check-prefix=PPCE500MC %s // PPCE500MC: clang // PPCE500MC: "-cc1" // PPCE500MC: "-target-cpu" "e500mc" // RUN: %clang -target powerpc64-fsl-linux -### -S \ // RUN: %s -mcpu=e5500 2>&1 | FileCheck -check-prefix=PPCE5500 %s // PPCE5500: clang // PPCE5500: "-cc1" // PPCE5500: "-target-cpu" "e5500" // RUN: %clang -target amd64-unknown-openbsd5.2 -### -S %s 2>&1 | \ // RUN: FileCheck -check-prefix=AMD64 %s // AMD64: clang // AMD64: "-cc1" // AMD64: "-triple" // AMD64: "amd64-unknown-openbsd5.2" // AMD64: "-munwind-tables" // RUN: %clang -target amd64--mingw32 -### -S %s 2>&1 | \ // RUN: FileCheck -check-prefix=AMD64-MINGW %s // AMD64-MINGW: clang // AMD64-MINGW: "-cc1" // AMD64-MINGW: "-triple" // AMD64-MINGW: "amd64--windows-gnu" // AMD64-MINGW: "-munwind-tables" // RUN: %clang -target i686-linux-android -### -S %s 2>&1 \ // RUN: --sysroot=%S/Inputs/basic_android_tree/sysroot \ // RUN: | FileCheck --check-prefix=ANDROID-X86 %s // ANDROID-X86: clang // ANDROID-X86: "-target-cpu" "i686" // ANDROID-X86: "-target-feature" "+ssse3" // RUN: %clang -target 
x86_64-linux-android -### -S %s 2>&1 \ // RUN: --sysroot=%S/Inputs/basic_android_tree/sysroot \ // RUN: | FileCheck --check-prefix=ANDROID-X86_64 %s // ANDROID-X86_64: clang // ANDROID-X86_64: "-target-cpu" "x86-64" // ANDROID-X86_64: "-target-feature" "+sse4.2" // ANDROID-X86_64: "-target-feature" "+popcnt" // RUN: %clang -target mips-linux-gnu -### -S %s 2>&1 | \ // RUN: FileCheck -check-prefix=MIPS %s // MIPS: clang // MIPS: "-cc1" // MIPS: "-target-cpu" "mips32r2" // MIPS: "-mfloat-abi" "hard" // RUN: %clang -target mipsel-linux-gnu -### -S %s 2>&1 | \ // RUN: FileCheck -check-prefix=MIPSEL %s // MIPSEL: clang // MIPSEL: "-cc1" // MIPSEL: "-target-cpu" "mips32r2" // MIPSEL: "-mfloat-abi" "hard" // RUN: %clang -target mipsel-linux-android -### -S %s 2>&1 | \ // RUN: FileCheck -check-prefix=MIPSEL-ANDROID %s // MIPSEL-ANDROID: clang // MIPSEL-ANDROID: "-cc1" // MIPSEL-ANDROID: "-target-cpu" "mips32" // MIPSEL-ANDROID: "-target-feature" "+fpxx" // MIPSEL-ANDROID: "-target-feature" "+nooddspreg" // MIPSEL-ANDROID: "-mfloat-abi" "hard" // RUN: %clang -target mipsel-linux-android -### -S %s -mcpu=mips32r6 2>&1 | \ // RUN: FileCheck -check-prefix=MIPSEL-ANDROID-R6 %s // MIPSEL-ANDROID-R6: clang // MIPSEL-ANDROID-R6: "-cc1" // MIPSEL-ANDROID-R6: "-target-cpu" "mips32r6" // MIPSEL-ANDROID-R6: "-target-feature" "+fp64" // MIPSEL-ANDROID-R6: "-target-feature" "+nooddspreg" // MIPSEL-ANDROID-R6: "-mfloat-abi" "hard" // RUN: %clang -target mips64-linux-gnu -### -S %s 2>&1 | \ // RUN: FileCheck -check-prefix=MIPS64 %s // MIPS64: clang // MIPS64: "-cc1" // MIPS64: "-target-cpu" "mips64r2" // MIPS64: "-mfloat-abi" "hard" // RUN: %clang -target mips64el-linux-gnu -### -S %s 2>&1 | \ // RUN: FileCheck -check-prefix=MIPS64EL %s // MIPS64EL: clang // MIPS64EL: "-cc1" // MIPS64EL: "-target-cpu" "mips64r2" // MIPS64EL: "-mfloat-abi" "hard" // RUN: %clang -target mips64el-linux-android -### -S %s 2>&1 | \ // RUN: FileCheck -check-prefix=MIPS64EL-ANDROID %s // MIPS64EL-ANDROID: clang // MIPS64EL-ANDROID: "-cc1" // MIPS64EL-ANDROID: "-target-cpu" "mips64r6" // MIPS64EL-ANDROID: "-mfloat-abi" "hard" diff --git a/test/Index/preamble-conditionals-crash.cpp b/test/Index/preamble-conditionals-crash.cpp new file mode 100644 index 000000000000..6b18c87d19f9 --- /dev/null +++ b/test/Index/preamble-conditionals-crash.cpp @@ -0,0 +1,12 @@ +#ifndef HEADER_GUARD + +#define FOO int aba; +FOO + +#endif +// RUN: env CINDEXTEST_EDITING=1 c-index-test -test-load-source-reparse 5 \ +// RUN: local -std=c++14 %s 2>&1 \ +// RUN: | FileCheck %s --implicit-check-not "libclang: crash detected" \ +// RUN: --implicit-check-not "error:" +// CHECK: macro expansion=FOO:3:9 Extent=[4:1 - 4:4] +// CHECK: VarDecl=aba:4:1 (Definition) Extent=[4:1 - 4:4] diff --git a/test/Index/preamble-conditionals.cpp b/test/Index/preamble-conditionals.cpp new file mode 100644 index 000000000000..81ef8265e829 --- /dev/null +++ b/test/Index/preamble-conditionals.cpp @@ -0,0 +1,8 @@ +// RUN: env CINDEXTEST_EDITING=1 c-index-test -test-load-source local %s 2>&1 \ +// RUN: | FileCheck %s --implicit-check-not "error:" +#ifndef FOO_H +#define FOO_H + +void foo(); + +#endif diff --git a/test/SemaObjC/arc-property-decl-attrs.m b/test/SemaObjC/arc-property-decl-attrs.m index ee48d310edc0..7393f58199f9 100644 --- a/test/SemaObjC/arc-property-decl-attrs.m +++ b/test/SemaObjC/arc-property-decl-attrs.m @@ -1,227 +1,254 @@ // RUN: %clang_cc1 -triple x86_64-apple-darwin11 -fobjc-runtime-has-weak -fsyntax-only -fobjc-arc -verify %s // rdar://9340606 @interface Foo { @public 
id __unsafe_unretained x; id __weak y; id __autoreleasing z; // expected-error {{instance variables cannot have __autoreleasing ownership}} } @property(strong) id x; @property(strong) id y; @property(strong) id z; @end @interface Bar { @public id __unsafe_unretained x; id __weak y; id __autoreleasing z; // expected-error {{instance variables cannot have __autoreleasing ownership}} } @property(retain) id x; @property(retain) id y; @property(retain) id z; @end @interface Bas { @public id __unsafe_unretained x; id __weak y; id __autoreleasing z; // expected-error {{instance variables cannot have __autoreleasing ownership}} } @property(copy) id x; @property(copy) id y; @property(copy) id z; @end // Errors should start about here :-) @interface Bat @property(strong) __unsafe_unretained id x; // expected-error {{strong property 'x' may not also be declared __unsafe_unretained}} @property(strong) __weak id y; // expected-error {{strong property 'y' may not also be declared __weak}} @property(strong) __autoreleasing id z; // expected-error {{strong property 'z' may not also be declared __autoreleasing}} @end @interface Bau @property(retain) __unsafe_unretained id x; // expected-error {{strong property 'x' may not also be declared __unsafe_unretained}} @property(retain) __weak id y; // expected-error {{strong property 'y' may not also be declared __weak}} @property(retain) __autoreleasing id z; // expected-error {{strong property 'z' may not also be declared __autoreleasing}} @end @interface Bav @property(copy) __unsafe_unretained id x; // expected-error {{strong property 'x' may not also be declared __unsafe_unretained}} @property(copy) __weak id y; // expected-error {{strong property 'y' may not also be declared __weak}} @property(copy) __autoreleasing id z; // expected-error {{strong property 'z' may not also be declared __autoreleasing}} @end @interface Bingo @property(assign) __unsafe_unretained id x; @property(assign) __weak id y; // expected-error {{unsafe_unretained property 'y' may not also be declared __weak}} @property(assign) __autoreleasing id z; // expected-error {{unsafe_unretained property 'z' may not also be declared __autoreleasing}} @end @interface Batman @property(unsafe_unretained) __unsafe_unretained id x; @property(unsafe_unretained) __weak id y; // expected-error {{unsafe_unretained property 'y' may not also be declared __weak}} @property(unsafe_unretained) __autoreleasing id z; // expected-error {{unsafe_unretained property 'z' may not also be declared __autoreleasing}} @end // rdar://9396329 @interface Super @property (readonly, retain) id foo; @property (readonly, weak) id fee; @property (readonly, strong) id frr; @end @interface Bugg : Super @property (readwrite) id foo; @property (readwrite) id fee; @property (readwrite) id frr; @end // rdar://20152386 // rdar://20383235 @interface NSObject @end #pragma clang assume_nonnull begin @interface I: NSObject @property(nonatomic, weak) id delegate; // Do not warn, nullable is inferred. @property(nonatomic, weak, readonly) id ROdelegate; // Do not warn, nullable is inferred. @property(nonatomic, weak, nonnull) id NonNulldelete; // expected-error {{property attributes 'nonnull' and 'weak' are mutually exclusive}} @property(nonatomic, weak, nullable) id Nullabledelete; // do not warn // strong cases. 
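// Note on the weak cases above: under assume_nonnull, a weak property is
// inferred nullable because a __weak reference is zeroed once its referent is
// deallocated, so pairing 'weak' with an explicit 'nonnull' is rejected.
// Strong properties keep the inferred nonnull, so the cases below are
// accepted without diagnostics.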
@property(nonatomic, strong) id stdelegate; // Do not warn @property(nonatomic, readonly) id stROdelegate; // Do not warn @property(nonatomic, strong, nonnull) id stNonNulldelete; // Do not warn @property(nonatomic, nullable) id stNullabledelete; // do not warn @end #pragma clang assume_nonnull end @interface J: NSObject @property(nonatomic, weak) id ddd; // Do not warn, nullable is inferred. @property(nonatomic, weak, nonnull) id delegate; // expected-error {{property attributes 'nonnull' and 'weak' are mutually exclusive}} @property(nonatomic, weak, nonnull, readonly) id ROdelegate; // expected-error {{property attributes 'nonnull' and 'weak' are mutually exclusive}} @end // rdar://problem/23931441 @protocol P @property(readonly, retain) id prop; @end __attribute__((objc_root_class)) @interface I2

@end @interface I2() @property (readwrite) id prop; @end @implementation I2 @synthesize prop; @end // rdar://31579994 // Verify that the all of the property declarations in inherited protocols are // compatible when synthesing a property from a protocol. @protocol CopyVsAssign1 @property (copy, nonatomic, readonly) id prop; // expected-error {{property with attribute 'copy' was selected for synthesis}} @end @protocol CopyVsAssign2 @property (assign, nonatomic, readonly) id prop; // expected-note {{it could also be property without attribute 'copy' declared here}} @end @interface CopyVsAssign: Foo @end @implementation CopyVsAssign @synthesize prop; // expected-note {{property synthesized here}} @end @protocol RetainVsNonRetain1 @property (readonly) id prop; // expected-error {{property without attribute 'retain (or strong)' was selected for synthesis}} @end @protocol RetainVsNonRetain2 @property (retain, readonly) id prop; // expected-note {{it could also be property with attribute 'retain (or strong)' declared here}} @end @interface RetainVsNonRetain: Foo @end @implementation RetainVsNonRetain @synthesize prop; // expected-note {{property synthesized here}} @end @protocol AtomicVsNonatomic1 @property (copy, nonatomic, readonly) id prop; // expected-error {{property without attribute 'atomic' was selected for synthesis}} @end @protocol AtomicVsNonatomic2 @property (copy, atomic, readonly) id prop; // expected-note {{it could also be property with attribute 'atomic' declared here}} @end @interface AtomicVsNonAtomic: Foo @end @implementation AtomicVsNonAtomic @synthesize prop; // expected-note {{property synthesized here}} @end @protocol Getter1 @property (copy, readonly) id prop; // expected-error {{property with getter 'prop' was selected for synthesis}} @end @protocol Getter2 @property (copy, getter=x, readonly) id prop; // expected-note {{it could also be property with getter 'x' declared here}} @end @interface GetterVsGetter: Foo @end @implementation GetterVsGetter @synthesize prop; // expected-note {{property synthesized here}} @end @protocol Setter1 @property (copy, readonly) id prop; @end @protocol Setter2 @property (copy, setter=setp:, readwrite) id prop; // expected-error {{property with setter 'setp:' was selected for synthesis}} @end @protocol Setter3 @property (copy, readwrite) id prop; // expected-note {{it could also be property with setter 'setProp:' declared here}} @end @interface SetterVsSetter: Foo @end @implementation SetterVsSetter @synthesize prop; // expected-note {{property synthesized here}} @end @protocol TypeVsAttribute1 @property (assign, atomic, readonly) int prop; // expected-error {{property of type 'int' was selected for synthesis}} @end @protocol TypeVsAttribute2 @property (assign, atomic, readonly) id prop; // expected-note {{it could also be property of type 'id' declared here}} @end @protocol TypeVsAttribute3 @property (copy, readonly) id prop; // expected-note {{it could also be property with attribute 'copy' declared here}} @end @interface TypeVsAttribute: Foo @end @implementation TypeVsAttribute @synthesize prop; // expected-note {{property synthesized here}} @end @protocol TypeVsSetter1 @property (assign, nonatomic, readonly) int prop; // expected-note {{it could also be property of type 'int' declared here}} @end @protocol TypeVsSetter2 @property (assign, nonatomic, readonly) id prop; // ok @end @protocol TypeVsSetter3 @property (assign, nonatomic, readwrite) id prop; // expected-error {{property of type 'id' was selected for synthesis}} @end 
@interface TypeVsSetter: Foo @end @implementation TypeVsSetter @synthesize prop; // expected-note {{property synthesized here}} @end + +@protocol AutoStrongProp + +@property (nonatomic, readonly) NSObject *prop; + +@end + +@protocol AutoStrongProp_Internal + +// This property gets the 'strong' attribute automatically. +@property (nonatomic, readwrite) NSObject *prop; + +@end + +@interface SynthesizeWithImplicitStrongNoError : NSObject +@end + +@interface SynthesizeWithImplicitStrongNoError () + +@end + +@implementation SynthesizeWithImplicitStrongNoError + +// no error, 'strong' is implicit in the 'readwrite' property. +@synthesize prop = _prop; + +@end diff --git a/unittests/ASTMatchers/ASTMatchersNarrowingTest.cpp b/unittests/ASTMatchers/ASTMatchersNarrowingTest.cpp index 6037127feb52..7bc8421bab2f 100644 --- a/unittests/ASTMatchers/ASTMatchersNarrowingTest.cpp +++ b/unittests/ASTMatchers/ASTMatchersNarrowingTest.cpp @@ -1,1978 +1,1987 @@ // unittests/ASTMatchers/ASTMatchersNarrowingTest.cpp - AST matcher unit tests// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// #include "ASTMatchersTest.h" #include "clang/AST/PrettyPrinter.h" #include "clang/ASTMatchers/ASTMatchFinder.h" #include "clang/ASTMatchers/ASTMatchers.h" #include "clang/Tooling/Tooling.h" #include "llvm/ADT/Triple.h" #include "llvm/Support/Host.h" #include "gtest/gtest.h" namespace clang { namespace ast_matchers { TEST(AllOf, AllOverloadsWork) { const char Program[] = "struct T { };" "int f(int, T*, int, int);" "void g(int x) { T t; f(x, &t, 3, 4); }"; EXPECT_TRUE(matches(Program, callExpr(allOf(callee(functionDecl(hasName("f"))), hasArgument(0, declRefExpr(to(varDecl()))))))); EXPECT_TRUE(matches(Program, callExpr(allOf(callee(functionDecl(hasName("f"))), hasArgument(0, declRefExpr(to(varDecl()))), hasArgument(1, hasType(pointsTo( recordDecl(hasName("T"))))))))); EXPECT_TRUE(matches(Program, callExpr(allOf(callee(functionDecl(hasName("f"))), hasArgument(0, declRefExpr(to(varDecl()))), hasArgument(1, hasType(pointsTo( recordDecl(hasName("T"))))), hasArgument(2, integerLiteral(equals(3))))))); EXPECT_TRUE(matches(Program, callExpr(allOf(callee(functionDecl(hasName("f"))), hasArgument(0, declRefExpr(to(varDecl()))), hasArgument(1, hasType(pointsTo( recordDecl(hasName("T"))))), hasArgument(2, integerLiteral(equals(3))), hasArgument(3, integerLiteral(equals(4))))))); } TEST(DeclarationMatcher, MatchHas) { DeclarationMatcher HasClassX = recordDecl(has(recordDecl(hasName("X")))); EXPECT_TRUE(matches("class Y { class X {}; };", HasClassX)); EXPECT_TRUE(matches("class X {};", HasClassX)); DeclarationMatcher YHasClassX = recordDecl(hasName("Y"), has(recordDecl(hasName("X")))); EXPECT_TRUE(matches("class Y { class X {}; };", YHasClassX)); EXPECT_TRUE(notMatches("class X {};", YHasClassX)); EXPECT_TRUE( notMatches("class Y { class Z { class X {}; }; };", YHasClassX)); } TEST(DeclarationMatcher, MatchHasRecursiveAllOf) { DeclarationMatcher Recursive = recordDecl( has(recordDecl( has(recordDecl(hasName("X"))), has(recordDecl(hasName("Y"))), hasName("Z"))), has(recordDecl( has(recordDecl(hasName("A"))), has(recordDecl(hasName("B"))), hasName("C"))), hasName("F")); EXPECT_TRUE(matches( "class F {" " class Z {" " class X {};" " class Y {};" " };" " class C {" " class A {};" " class B {};" " };" "};", Recursive)); EXPECT_TRUE(matches( "class F {" " 
class Z {" " class A {};" " class X {};" " class Y {};" " };" " class C {" " class X {};" " class A {};" " class B {};" " };" "};", Recursive)); EXPECT_TRUE(matches( "class O1 {" " class O2 {" " class F {" " class Z {" " class A {};" " class X {};" " class Y {};" " };" " class C {" " class X {};" " class A {};" " class B {};" " };" " };" " };" "};", Recursive)); } TEST(DeclarationMatcher, MatchHasRecursiveAnyOf) { DeclarationMatcher Recursive = recordDecl( anyOf( has(recordDecl( anyOf( has(recordDecl( hasName("X"))), has(recordDecl( hasName("Y"))), hasName("Z")))), has(recordDecl( anyOf( hasName("C"), has(recordDecl( hasName("A"))), has(recordDecl( hasName("B")))))), hasName("F"))); EXPECT_TRUE(matches("class F {};", Recursive)); EXPECT_TRUE(matches("class Z {};", Recursive)); EXPECT_TRUE(matches("class C {};", Recursive)); EXPECT_TRUE(matches("class M { class N { class X {}; }; };", Recursive)); EXPECT_TRUE(matches("class M { class N { class B {}; }; };", Recursive)); EXPECT_TRUE( matches("class O1 { class O2 {" " class M { class N { class B {}; }; }; " "}; };", Recursive)); } TEST(DeclarationMatcher, MatchNot) { DeclarationMatcher NotClassX = cxxRecordDecl( isDerivedFrom("Y"), unless(hasName("X"))); EXPECT_TRUE(notMatches("", NotClassX)); EXPECT_TRUE(notMatches("class Y {};", NotClassX)); EXPECT_TRUE(matches("class Y {}; class Z : public Y {};", NotClassX)); EXPECT_TRUE(notMatches("class Y {}; class X : public Y {};", NotClassX)); EXPECT_TRUE( notMatches("class Y {}; class Z {}; class X : public Y {};", NotClassX)); DeclarationMatcher ClassXHasNotClassY = recordDecl( hasName("X"), has(recordDecl(hasName("Z"))), unless( has(recordDecl(hasName("Y"))))); EXPECT_TRUE(matches("class X { class Z {}; };", ClassXHasNotClassY)); EXPECT_TRUE(notMatches("class X { class Y {}; class Z {}; };", ClassXHasNotClassY)); DeclarationMatcher NamedNotRecord = namedDecl(hasName("Foo"), unless(recordDecl())); EXPECT_TRUE(matches("void Foo(){}", NamedNotRecord)); EXPECT_TRUE(notMatches("struct Foo {};", NamedNotRecord)); } TEST(CastExpression, HasCastKind) { EXPECT_TRUE(matches("char *p = 0;", castExpr(hasCastKind(CK_NullToPointer)))); EXPECT_TRUE(notMatches("char *p = 0;", castExpr(hasCastKind(CK_DerivedToBase)))); EXPECT_TRUE(matches("char *p = 0;", implicitCastExpr(hasCastKind(CK_NullToPointer)))); } TEST(DeclarationMatcher, HasDescendant) { DeclarationMatcher ZDescendantClassX = recordDecl( hasDescendant(recordDecl(hasName("X"))), hasName("Z")); EXPECT_TRUE(matches("class Z { class X {}; };", ZDescendantClassX)); EXPECT_TRUE( matches("class Z { class Y { class X {}; }; };", ZDescendantClassX)); EXPECT_TRUE( matches("class Z { class A { class Y { class X {}; }; }; };", ZDescendantClassX)); EXPECT_TRUE( matches("class Z { class A { class B { class Y { class X {}; }; }; }; };", ZDescendantClassX)); EXPECT_TRUE(notMatches("class Z {};", ZDescendantClassX)); DeclarationMatcher ZDescendantClassXHasClassY = recordDecl( hasDescendant(recordDecl(has(recordDecl(hasName("Y"))), hasName("X"))), hasName("Z")); EXPECT_TRUE(matches("class Z { class X { class Y {}; }; };", ZDescendantClassXHasClassY)); EXPECT_TRUE( matches("class Z { class A { class B { class X { class Y {}; }; }; }; };", ZDescendantClassXHasClassY)); EXPECT_TRUE(notMatches( "class Z {" " class A {" " class B {" " class X {" " class C {" " class Y {};" " };" " };" " }; " " };" "};", ZDescendantClassXHasClassY)); DeclarationMatcher ZDescendantClassXDescendantClassY = recordDecl( hasDescendant(recordDecl(hasDescendant(recordDecl(hasName("Y"))), 
hasName("X"))), hasName("Z")); EXPECT_TRUE( matches("class Z { class A { class X { class B { class Y {}; }; }; }; };", ZDescendantClassXDescendantClassY)); EXPECT_TRUE(matches( "class Z {" " class A {" " class X {" " class B {" " class Y {};" " };" " class Y {};" " };" " };" "};", ZDescendantClassXDescendantClassY)); } TEST(DeclarationMatcher, HasDescendantMemoization) { DeclarationMatcher CannotMemoize = decl(hasDescendant(typeLoc().bind("x")), has(decl())); EXPECT_TRUE(matches("void f() { int i; }", CannotMemoize)); } TEST(DeclarationMatcher, HasDescendantMemoizationUsesRestrictKind) { auto Name = hasName("i"); auto VD = internal::Matcher(Name).dynCastTo(); auto RD = internal::Matcher(Name).dynCastTo(); // Matching VD first should not make a cache hit for RD. EXPECT_TRUE(notMatches("void f() { int i; }", decl(hasDescendant(VD), hasDescendant(RD)))); EXPECT_TRUE(notMatches("void f() { int i; }", decl(hasDescendant(RD), hasDescendant(VD)))); // Not matching RD first should not make a cache hit for VD either. EXPECT_TRUE(matches("void f() { int i; }", decl(anyOf(hasDescendant(RD), hasDescendant(VD))))); } TEST(DeclarationMatcher, HasAncestorMemoization) { // This triggers an hasAncestor with a TemplateArgument in the bound nodes. // That node can't be memoized so we have to check for it before trying to put // it on the cache. DeclarationMatcher CannotMemoize = classTemplateSpecializationDecl( hasAnyTemplateArgument(templateArgument().bind("targ")), forEach(fieldDecl(hasAncestor(forStmt())))); EXPECT_TRUE(notMatches("template struct S;" "template <> struct S{ int i; int j; };", CannotMemoize)); } TEST(DeclarationMatcher, HasAttr) { EXPECT_TRUE(matches("struct __attribute__((warn_unused)) X {};", decl(hasAttr(clang::attr::WarnUnused)))); EXPECT_FALSE(matches("struct X {};", decl(hasAttr(clang::attr::WarnUnused)))); } TEST(DeclarationMatcher, MatchAnyOf) { DeclarationMatcher YOrZDerivedFromX = cxxRecordDecl( anyOf(hasName("Y"), allOf(isDerivedFrom("X"), hasName("Z")))); EXPECT_TRUE(matches("class X {}; class Z : public X {};", YOrZDerivedFromX)); EXPECT_TRUE(matches("class Y {};", YOrZDerivedFromX)); EXPECT_TRUE( notMatches("class X {}; class W : public X {};", YOrZDerivedFromX)); EXPECT_TRUE(notMatches("class Z {};", YOrZDerivedFromX)); DeclarationMatcher XOrYOrZOrU = recordDecl(anyOf(hasName("X"), hasName("Y"), hasName("Z"), hasName("U"))); EXPECT_TRUE(matches("class X {};", XOrYOrZOrU)); EXPECT_TRUE(notMatches("class V {};", XOrYOrZOrU)); DeclarationMatcher XOrYOrZOrUOrV = recordDecl(anyOf(hasName("X"), hasName("Y"), hasName("Z"), hasName("U"), hasName("V"))); EXPECT_TRUE(matches("class X {};", XOrYOrZOrUOrV)); EXPECT_TRUE(matches("class Y {};", XOrYOrZOrUOrV)); EXPECT_TRUE(matches("class Z {};", XOrYOrZOrUOrV)); EXPECT_TRUE(matches("class U {};", XOrYOrZOrUOrV)); EXPECT_TRUE(matches("class V {};", XOrYOrZOrUOrV)); EXPECT_TRUE(notMatches("class A {};", XOrYOrZOrUOrV)); StatementMatcher MixedTypes = stmt(anyOf(ifStmt(), binaryOperator())); EXPECT_TRUE(matches("int F() { return 1 + 2; }", MixedTypes)); EXPECT_TRUE(matches("int F() { if (true) return 1; }", MixedTypes)); EXPECT_TRUE(notMatches("int F() { return 1; }", MixedTypes)); EXPECT_TRUE( matches("void f() try { } catch (int) { } catch (...) 
{ }", cxxCatchStmt(anyOf(hasDescendant(varDecl()), isCatchAll())))); } TEST(DeclarationMatcher, ClassIsDerived) { DeclarationMatcher IsDerivedFromX = cxxRecordDecl(isDerivedFrom("X")); EXPECT_TRUE(matches("class X {}; class Y : public X {};", IsDerivedFromX)); EXPECT_TRUE(notMatches("class X {};", IsDerivedFromX)); EXPECT_TRUE(notMatches("class X;", IsDerivedFromX)); EXPECT_TRUE(notMatches("class Y;", IsDerivedFromX)); EXPECT_TRUE(notMatches("", IsDerivedFromX)); DeclarationMatcher IsAX = cxxRecordDecl(isSameOrDerivedFrom("X")); EXPECT_TRUE(matches("class X {}; class Y : public X {};", IsAX)); EXPECT_TRUE(matches("class X {};", IsAX)); EXPECT_TRUE(matches("class X;", IsAX)); EXPECT_TRUE(notMatches("class Y;", IsAX)); EXPECT_TRUE(notMatches("", IsAX)); DeclarationMatcher ZIsDerivedFromX = cxxRecordDecl(hasName("Z"), isDerivedFrom("X")); EXPECT_TRUE( matches("class X {}; class Y : public X {}; class Z : public Y {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("class X {};" "template class Y : public X {};" "class Z : public Y {};", ZIsDerivedFromX)); EXPECT_TRUE(matches("class X {}; template class Z : public X {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("template class X {}; " "template class Z : public X {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("template class X {}; " "template class Z : public X {};", ZIsDerivedFromX)); EXPECT_TRUE( notMatches("template class A { class Z : public X {}; };", ZIsDerivedFromX)); EXPECT_TRUE( matches("template class A { public: class Z : public X {}; }; " "class X{}; void y() { A::Z z; }", ZIsDerivedFromX)); EXPECT_TRUE( matches("template class X {}; " "template class A { class Z : public X {}; };", ZIsDerivedFromX)); EXPECT_TRUE( notMatches("template class X> class A { " " class Z : public X {}; };", ZIsDerivedFromX)); EXPECT_TRUE( matches("template class X> class A { " " public: class Z : public X {}; }; " "template class X {}; void y() { A::Z z; }", ZIsDerivedFromX)); EXPECT_TRUE( notMatches("template class A { class Z : public X::D {}; };", ZIsDerivedFromX)); EXPECT_TRUE( matches("template class A { public: " " class Z : public X::D {}; }; " "class Y { public: class X {}; typedef X D; }; " "void y() { A::Z z; }", ZIsDerivedFromX)); EXPECT_TRUE( matches("class X {}; typedef X Y; class Z : public Y {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("template class Y { typedef typename T::U X; " " class Z : public X {}; };", ZIsDerivedFromX)); EXPECT_TRUE(matches("class X {}; class Z : public ::X {};", ZIsDerivedFromX)); EXPECT_TRUE( notMatches("template class X {}; " "template class A { class Z : public X::D {}; };", ZIsDerivedFromX)); EXPECT_TRUE( matches("template class X { public: typedef X D; }; " "template class A { public: " " class Z : public X::D {}; }; void y() { A::Z z; }", ZIsDerivedFromX)); EXPECT_TRUE( notMatches("template class A { class Z : public X::D::E {}; };", ZIsDerivedFromX)); EXPECT_TRUE( matches("class X {}; typedef X V; typedef V W; class Z : public W {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("class X {}; class Y : public X {}; " "typedef Y V; typedef V W; class Z : public W {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("template class X {}; " "template class A { class Z : public X {}; };", ZIsDerivedFromX)); EXPECT_TRUE( notMatches("template class D { typedef X A; typedef A B; " " typedef B C; class Z : public C {}; };", ZIsDerivedFromX)); EXPECT_TRUE( matches("class X {}; typedef X A; typedef A B; " "class Z : public B {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("class X {}; typedef X A; typedef A B; typedef B C; " "class Z : 
public C {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("class U {}; typedef U X; typedef X V; " "class Z : public V {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("class Base {}; typedef Base X; " "class Z : public Base {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("class Base {}; typedef Base Base2; typedef Base2 X; " "class Z : public Base {};", ZIsDerivedFromX)); EXPECT_TRUE( notMatches("class Base {}; class Base2 {}; typedef Base2 X; " "class Z : public Base {};", ZIsDerivedFromX)); EXPECT_TRUE( matches("class A {}; typedef A X; typedef A Y; " "class Z : public Y {};", ZIsDerivedFromX)); EXPECT_TRUE( notMatches("template class Z;" "template <> class Z {};" "template class Z : public Z {};", IsDerivedFromX)); EXPECT_TRUE( matches("template class X;" "template <> class X {};" "template class X : public X {};", IsDerivedFromX)); EXPECT_TRUE(matches( "class X {};" "template class Z;" "template <> class Z {};" "template class Z : public Z, public X {};", ZIsDerivedFromX)); EXPECT_TRUE( notMatches("template struct X;" "template struct X : public X {};", cxxRecordDecl(isDerivedFrom(recordDecl(hasName("Some")))))); EXPECT_TRUE(matches( "struct A {};" "template struct X;" "template struct X : public X {};" "template<> struct X<0> : public A {};" "struct B : public X<42> {};", cxxRecordDecl(hasName("B"), isDerivedFrom(recordDecl(hasName("A")))))); // FIXME: Once we have better matchers for template type matching, // get rid of the Variable(...) matching and match the right template // declarations directly. const char *RecursiveTemplateOneParameter = "class Base1 {}; class Base2 {};" "template class Z;" "template <> class Z : public Base1 {};" "template <> class Z : public Base2 {};" "template <> class Z : public Z {};" "template <> class Z : public Z {};" "template class Z : public Z, public Z {};" "void f() { Z z_float; Z z_double; Z z_char; }"; EXPECT_TRUE(matches( RecursiveTemplateOneParameter, varDecl(hasName("z_float"), hasInitializer(hasType(cxxRecordDecl(isDerivedFrom("Base1"))))))); EXPECT_TRUE(notMatches( RecursiveTemplateOneParameter, varDecl(hasName("z_float"), hasInitializer(hasType(cxxRecordDecl(isDerivedFrom("Base2"))))))); EXPECT_TRUE(matches( RecursiveTemplateOneParameter, varDecl(hasName("z_char"), hasInitializer(hasType(cxxRecordDecl(isDerivedFrom("Base1"), isDerivedFrom("Base2"))))))); const char *RecursiveTemplateTwoParameters = "class Base1 {}; class Base2 {};" "template class Z;" "template class Z : public Base1 {};" "template class Z : public Base2 {};" "template class Z : public Z {};" "template class Z : public Z {};" "template class Z : " " public Z, public Z {};" "void f() { Z z_float; Z z_double; " " Z z_char; }"; EXPECT_TRUE(matches( RecursiveTemplateTwoParameters, varDecl(hasName("z_float"), hasInitializer(hasType(cxxRecordDecl(isDerivedFrom("Base1"))))))); EXPECT_TRUE(notMatches( RecursiveTemplateTwoParameters, varDecl(hasName("z_float"), hasInitializer(hasType(cxxRecordDecl(isDerivedFrom("Base2"))))))); EXPECT_TRUE(matches( RecursiveTemplateTwoParameters, varDecl(hasName("z_char"), hasInitializer(hasType(cxxRecordDecl(isDerivedFrom("Base1"), isDerivedFrom("Base2"))))))); EXPECT_TRUE(matches( "namespace ns { class X {}; class Y : public X {}; }", cxxRecordDecl(isDerivedFrom("::ns::X")))); EXPECT_TRUE(notMatches( "class X {}; class Y : public X {};", cxxRecordDecl(isDerivedFrom("::ns::X")))); EXPECT_TRUE(matches( "class X {}; class Y : public X {};", cxxRecordDecl(isDerivedFrom(recordDecl(hasName("X")).bind("test"))))); EXPECT_TRUE(matches( "template class X {};" 
"template using Z = X;" "template class Y : Z {};", cxxRecordDecl(isDerivedFrom(namedDecl(hasName("X")))))); } TEST(DeclarationMatcher, IsLambda) { const auto IsLambda = cxxMethodDecl(ofClass(cxxRecordDecl(isLambda()))); EXPECT_TRUE(matches("auto x = []{};", IsLambda)); EXPECT_TRUE(notMatches("struct S { void operator()() const; };", IsLambda)); } TEST(Matcher, BindMatchedNodes) { DeclarationMatcher ClassX = has(recordDecl(hasName("::X")).bind("x")); EXPECT_TRUE(matchAndVerifyResultTrue("class X {};", ClassX, llvm::make_unique>("x"))); EXPECT_TRUE(matchAndVerifyResultFalse("class X {};", ClassX, llvm::make_unique>("other-id"))); TypeMatcher TypeAHasClassB = hasDeclaration( recordDecl(hasName("A"), has(recordDecl(hasName("B")).bind("b")))); EXPECT_TRUE(matchAndVerifyResultTrue("class A { public: A *a; class B {}; };", TypeAHasClassB, llvm::make_unique>("b"))); StatementMatcher MethodX = callExpr(callee(cxxMethodDecl(hasName("x")))).bind("x"); EXPECT_TRUE(matchAndVerifyResultTrue("class A { void x() { x(); } };", MethodX, llvm::make_unique>("x"))); } TEST(Matcher, BindTheSameNameInAlternatives) { StatementMatcher matcher = anyOf( binaryOperator(hasOperatorName("+"), hasLHS(expr().bind("x")), hasRHS(integerLiteral(equals(0)))), binaryOperator(hasOperatorName("+"), hasLHS(integerLiteral(equals(0))), hasRHS(expr().bind("x")))); EXPECT_TRUE(matchAndVerifyResultTrue( // The first branch of the matcher binds x to 0 but then fails. // The second branch binds x to f() and succeeds. "int f() { return 0 + f(); }", matcher, llvm::make_unique>("x"))); } TEST(Matcher, BindsIDForMemoizedResults) { // Using the same matcher in two match expressions will make memoization // kick in. DeclarationMatcher ClassX = recordDecl(hasName("X")).bind("x"); EXPECT_TRUE(matchAndVerifyResultTrue( "class A { class B { class X {}; }; };", DeclarationMatcher(anyOf( recordDecl(hasName("A"), hasDescendant(ClassX)), recordDecl(hasName("B"), hasDescendant(ClassX)))), llvm::make_unique>("x", 2))); } TEST(HasType, MatchesAsString) { EXPECT_TRUE( matches("class Y { public: void x(); }; void z() {Y* y; y->x(); }", cxxMemberCallExpr(on(hasType(asString("class Y *")))))); EXPECT_TRUE( matches("class X { void x(int x) {} };", cxxMethodDecl(hasParameter(0, hasType(asString("int")))))); EXPECT_TRUE(matches("namespace ns { struct A {}; } struct B { ns::A a; };", fieldDecl(hasType(asString("ns::A"))))); EXPECT_TRUE(matches("namespace { struct A {}; } struct B { A a; };", fieldDecl(hasType(asString("struct (anonymous namespace)::A"))))); } TEST(Matcher, HasOperatorNameForOverloadedOperatorCall) { StatementMatcher OpCallAndAnd = cxxOperatorCallExpr(hasOverloadedOperatorName("&&")); EXPECT_TRUE(matches("class Y { }; " "bool operator&&(Y x, Y y) { return true; }; " "Y a; Y b; bool c = a && b;", OpCallAndAnd)); StatementMatcher OpCallLessLess = cxxOperatorCallExpr(hasOverloadedOperatorName("<<")); EXPECT_TRUE(notMatches("class Y { }; " "bool operator&&(Y x, Y y) { return true; }; " "Y a; Y b; bool c = a && b;", OpCallLessLess)); StatementMatcher OpStarCall = cxxOperatorCallExpr(hasOverloadedOperatorName("*")); EXPECT_TRUE(matches("class Y; int operator*(Y &); void f(Y &y) { *y; }", OpStarCall)); DeclarationMatcher ClassWithOpStar = cxxRecordDecl(hasMethod(hasOverloadedOperatorName("*"))); EXPECT_TRUE(matches("class Y { int operator*(); };", ClassWithOpStar)); EXPECT_TRUE(notMatches("class Y { void myOperator(); };", ClassWithOpStar)) ; DeclarationMatcher AnyOpStar = functionDecl(hasOverloadedOperatorName("*")); EXPECT_TRUE(matches("class Y; 
int operator*(Y &);", AnyOpStar)); EXPECT_TRUE(matches("class Y { int operator*(); };", AnyOpStar)); } TEST(Matcher, NestedOverloadedOperatorCalls) { EXPECT_TRUE(matchAndVerifyResultTrue( "class Y { }; " "Y& operator&&(Y& x, Y& y) { return x; }; " "Y a; Y b; Y c; Y d = a && b && c;", cxxOperatorCallExpr(hasOverloadedOperatorName("&&")).bind("x"), llvm::make_unique>("x", 2))); EXPECT_TRUE(matches("class Y { }; " "Y& operator&&(Y& x, Y& y) { return x; }; " "Y a; Y b; Y c; Y d = a && b && c;", cxxOperatorCallExpr(hasParent(cxxOperatorCallExpr())))); EXPECT_TRUE( matches("class Y { }; " "Y& operator&&(Y& x, Y& y) { return x; }; " "Y a; Y b; Y c; Y d = a && b && c;", cxxOperatorCallExpr(hasDescendant(cxxOperatorCallExpr())))); } TEST(Matcher, VarDecl_Storage) { auto M = varDecl(hasName("X"), hasLocalStorage()); EXPECT_TRUE(matches("void f() { int X; }", M)); EXPECT_TRUE(notMatches("int X;", M)); EXPECT_TRUE(notMatches("void f() { static int X; }", M)); M = varDecl(hasName("X"), hasGlobalStorage()); EXPECT_TRUE(notMatches("void f() { int X; }", M)); EXPECT_TRUE(matches("int X;", M)); EXPECT_TRUE(matches("void f() { static int X; }", M)); } TEST(Matcher, VarDecl_StorageDuration) { std::string T = "void f() { int x; static int y; } int a;static int b;extern int c;"; EXPECT_TRUE(matches(T, varDecl(hasName("x"), hasAutomaticStorageDuration()))); EXPECT_TRUE( notMatches(T, varDecl(hasName("y"), hasAutomaticStorageDuration()))); EXPECT_TRUE( notMatches(T, varDecl(hasName("a"), hasAutomaticStorageDuration()))); EXPECT_TRUE(matches(T, varDecl(hasName("y"), hasStaticStorageDuration()))); EXPECT_TRUE(matches(T, varDecl(hasName("a"), hasStaticStorageDuration()))); EXPECT_TRUE(matches(T, varDecl(hasName("b"), hasStaticStorageDuration()))); EXPECT_TRUE(matches(T, varDecl(hasName("c"), hasStaticStorageDuration()))); EXPECT_TRUE(notMatches(T, varDecl(hasName("x"), hasStaticStorageDuration()))); // FIXME: It is really hard to test with thread_local itself because not all // targets support TLS, which causes this to be an error depending on what // platform the test is being run on. We do not have access to the TargetInfo // object to be able to test whether the platform supports TLS or not. 
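  // A sketch of the positive check, left as a comment because of the
  // target dependence described above (it assumes a TLS-capable target and
  // a C++11 test language):
  //   EXPECT_TRUE(matches("void f() { thread_local int z; }",
  //                       varDecl(hasName("z"), hasThreadStorageDuration())));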
EXPECT_TRUE(notMatches(T, varDecl(hasName("x"), hasThreadStorageDuration()))); EXPECT_TRUE(notMatches(T, varDecl(hasName("y"), hasThreadStorageDuration()))); EXPECT_TRUE(notMatches(T, varDecl(hasName("a"), hasThreadStorageDuration()))); } TEST(Matcher, FindsVarDeclInFunctionParameter) { EXPECT_TRUE(matches( "void f(int i) {}", varDecl(hasName("i")))); } TEST(UnaryExpressionOrTypeTraitExpression, MatchesCorrectType) { EXPECT_TRUE(matches("void x() { int a = sizeof(a); }", sizeOfExpr( hasArgumentOfType(asString("int"))))); EXPECT_TRUE(notMatches("void x() { int a = sizeof(a); }", sizeOfExpr( hasArgumentOfType(asString("float"))))); EXPECT_TRUE(matches( "struct A {}; void x() { A a; int b = sizeof(a); }", sizeOfExpr(hasArgumentOfType(hasDeclaration(recordDecl(hasName("A"))))))); EXPECT_TRUE(notMatches("void x() { int a = sizeof(a); }", sizeOfExpr( hasArgumentOfType(hasDeclaration(recordDecl(hasName("string"))))))); } TEST(IsInteger, MatchesIntegers) { EXPECT_TRUE(matches("int i = 0;", varDecl(hasType(isInteger())))); EXPECT_TRUE(matches( "long long i = 0; void f(long long) { }; void g() {f(i);}", callExpr(hasArgument(0, declRefExpr( to(varDecl(hasType(isInteger())))))))); } TEST(IsInteger, ReportsNoFalsePositives) { EXPECT_TRUE(notMatches("int *i;", varDecl(hasType(isInteger())))); EXPECT_TRUE(notMatches("struct T {}; T t; void f(T *) { }; void g() {f(&t);}", callExpr(hasArgument(0, declRefExpr( to(varDecl(hasType(isInteger())))))))); } TEST(IsSignedInteger, MatchesSignedIntegers) { EXPECT_TRUE(matches("int i = 0;", varDecl(hasType(isSignedInteger())))); EXPECT_TRUE(notMatches("unsigned i = 0;", varDecl(hasType(isSignedInteger())))); } TEST(IsUnsignedInteger, MatchesUnsignedIntegers) { EXPECT_TRUE(notMatches("int i = 0;", varDecl(hasType(isUnsignedInteger())))); EXPECT_TRUE(matches("unsigned i = 0;", varDecl(hasType(isUnsignedInteger())))); } TEST(IsAnyPointer, MatchesPointers) { EXPECT_TRUE(matches("int* i = nullptr;", varDecl(hasType(isAnyPointer())))); } TEST(IsAnyPointer, MatchesObjcPointer) { EXPECT_TRUE(matchesObjC("@interface Foo @end Foo *f;", varDecl(hasType(isAnyPointer())))); } TEST(IsAnyPointer, ReportsNoFalsePositives) { EXPECT_TRUE(notMatches("int i = 0;", varDecl(hasType(isAnyPointer())))); } TEST(IsAnyCharacter, MatchesCharacters) { EXPECT_TRUE(matches("char i = 0;", varDecl(hasType(isAnyCharacter())))); } TEST(IsAnyCharacter, ReportsNoFalsePositives) { EXPECT_TRUE(notMatches("int i;", varDecl(hasType(isAnyCharacter())))); } TEST(IsArrow, MatchesMemberVariablesViaArrow) { EXPECT_TRUE(matches("class Y { void x() { this->y; } int y; };", memberExpr(isArrow()))); EXPECT_TRUE(matches("class Y { void x() { y; } int y; };", memberExpr(isArrow()))); EXPECT_TRUE(notMatches("class Y { void x() { (*this).y; } int y; };", memberExpr(isArrow()))); } TEST(IsArrow, MatchesStaticMemberVariablesViaArrow) { EXPECT_TRUE(matches("class Y { void x() { this->y; } static int y; };", memberExpr(isArrow()))); EXPECT_TRUE(notMatches("class Y { void x() { y; } static int y; };", memberExpr(isArrow()))); EXPECT_TRUE(notMatches("class Y { void x() { (*this).y; } static int y; };", memberExpr(isArrow()))); } TEST(IsArrow, MatchesMemberCallsViaArrow) { EXPECT_TRUE(matches("class Y { void x() { this->x(); } };", memberExpr(isArrow()))); EXPECT_TRUE(matches("class Y { void x() { x(); } };", memberExpr(isArrow()))); EXPECT_TRUE(notMatches("class Y { void x() { Y y; y.x(); } };", memberExpr(isArrow()))); } TEST(ConversionDeclaration, IsExplicit) { EXPECT_TRUE(matches("struct S { explicit operator int(); };", 
cxxConversionDecl(isExplicit()))); EXPECT_TRUE(notMatches("struct S { operator int(); };", cxxConversionDecl(isExplicit()))); } TEST(Matcher, ArgumentCount) { StatementMatcher Call1Arg = callExpr(argumentCountIs(1)); EXPECT_TRUE(matches("void x(int) { x(0); }", Call1Arg)); EXPECT_TRUE(matches("class X { void x(int) { x(0); } };", Call1Arg)); EXPECT_TRUE(notMatches("void x(int, int) { x(0, 0); }", Call1Arg)); } TEST(Matcher, ParameterCount) { DeclarationMatcher Function1Arg = functionDecl(parameterCountIs(1)); EXPECT_TRUE(matches("void f(int i) {}", Function1Arg)); EXPECT_TRUE(matches("class X { void f(int i) {} };", Function1Arg)); EXPECT_TRUE(notMatches("void f() {}", Function1Arg)); EXPECT_TRUE(notMatches("void f(int i, int j, int k) {}", Function1Arg)); EXPECT_TRUE(matches("void f(int i, ...) {};", Function1Arg)); } TEST(Matcher, References) { DeclarationMatcher ReferenceClassX = varDecl( hasType(references(recordDecl(hasName("X"))))); EXPECT_TRUE(matches("class X {}; void y(X y) { X &x = y; }", ReferenceClassX)); EXPECT_TRUE( matches("class X {}; void y(X y) { const X &x = y; }", ReferenceClassX)); // The match here is on the implicit copy constructor code for // class X, not on code 'X x = y'. EXPECT_TRUE( matches("class X {}; void y(X y) { X x = y; }", ReferenceClassX)); EXPECT_TRUE( notMatches("class X {}; extern X x;", ReferenceClassX)); EXPECT_TRUE( notMatches("class X {}; void y(X *y) { X *&x = y; }", ReferenceClassX)); } TEST(QualType, hasLocalQualifiers) { EXPECT_TRUE(notMatches("typedef const int const_int; const_int i = 1;", varDecl(hasType(hasLocalQualifiers())))); EXPECT_TRUE(matches("int *const j = nullptr;", varDecl(hasType(hasLocalQualifiers())))); EXPECT_TRUE(matches("int *volatile k;", varDecl(hasType(hasLocalQualifiers())))); EXPECT_TRUE(notMatches("int m;", varDecl(hasType(hasLocalQualifiers())))); } TEST(IsExternC, MatchesExternCFunctionDeclarations) { EXPECT_TRUE(matches("extern \"C\" void f() {}", functionDecl(isExternC()))); EXPECT_TRUE(matches("extern \"C\" { void f() {} }", functionDecl(isExternC()))); EXPECT_TRUE(notMatches("void f() {}", functionDecl(isExternC()))); } TEST(IsExternC, MatchesExternCVariableDeclarations) { EXPECT_TRUE(matches("extern \"C\" int i;", varDecl(isExternC()))); EXPECT_TRUE(matches("extern \"C\" { int i; }", varDecl(isExternC()))); EXPECT_TRUE(notMatches("int i;", varDecl(isExternC()))); } TEST(IsStaticStorageClass, MatchesStaticDeclarations) { EXPECT_TRUE( matches("static void f() {}", functionDecl(isStaticStorageClass()))); EXPECT_TRUE(matches("static int i = 1;", varDecl(isStaticStorageClass()))); EXPECT_TRUE(notMatches("int i = 1;", varDecl(isStaticStorageClass()))); EXPECT_TRUE(notMatches("extern int i;", varDecl(isStaticStorageClass()))); EXPECT_TRUE(notMatches("void f() {}", functionDecl(isStaticStorageClass()))); } TEST(IsDefaulted, MatchesDefaultedFunctionDeclarations) { EXPECT_TRUE(notMatches("class A { ~A(); };", functionDecl(hasName("~A"), isDefaulted()))); EXPECT_TRUE(matches("class B { ~B() = default; };", functionDecl(hasName("~B"), isDefaulted()))); } TEST(IsDeleted, MatchesDeletedFunctionDeclarations) { EXPECT_TRUE( notMatches("void Func();", functionDecl(hasName("Func"), isDeleted()))); EXPECT_TRUE(matches("void Func() = delete;", functionDecl(hasName("Func"), isDeleted()))); } TEST(IsNoThrow, MatchesNoThrowFunctionDeclarations) { EXPECT_TRUE(notMatches("void f();", functionDecl(isNoThrow()))); EXPECT_TRUE(notMatches("void f() throw(int);", functionDecl(isNoThrow()))); EXPECT_TRUE( notMatches("void f() 
noexcept(false);", functionDecl(isNoThrow()))); EXPECT_TRUE(matches("void f() throw();", functionDecl(isNoThrow()))); EXPECT_TRUE(matches("void f() noexcept;", functionDecl(isNoThrow()))); EXPECT_TRUE(notMatches("void f();", functionProtoType(isNoThrow()))); EXPECT_TRUE(notMatches("void f() throw(int);", functionProtoType(isNoThrow()))); EXPECT_TRUE( notMatches("void f() noexcept(false);", functionProtoType(isNoThrow()))); EXPECT_TRUE(matches("void f() throw();", functionProtoType(isNoThrow()))); EXPECT_TRUE(matches("void f() noexcept;", functionProtoType(isNoThrow()))); } TEST(isConstexpr, MatchesConstexprDeclarations) { EXPECT_TRUE(matches("constexpr int foo = 42;", varDecl(hasName("foo"), isConstexpr()))); EXPECT_TRUE(matches("constexpr int bar();", functionDecl(hasName("bar"), isConstexpr()))); } TEST(TemplateArgumentCountIs, Matches) { EXPECT_TRUE( matches("template struct C {}; C c;", classTemplateSpecializationDecl(templateArgumentCountIs(1)))); EXPECT_TRUE( notMatches("template struct C {}; C c;", classTemplateSpecializationDecl(templateArgumentCountIs(2)))); EXPECT_TRUE(matches("template struct C {}; C c;", templateSpecializationType(templateArgumentCountIs(1)))); EXPECT_TRUE( notMatches("template struct C {}; C c;", templateSpecializationType(templateArgumentCountIs(2)))); } TEST(IsIntegral, Matches) { EXPECT_TRUE(matches("template struct C {}; C<42> c;", classTemplateSpecializationDecl( hasAnyTemplateArgument(isIntegral())))); EXPECT_TRUE(notMatches("template struct C {}; C c;", classTemplateSpecializationDecl(hasAnyTemplateArgument( templateArgument(isIntegral()))))); } TEST(EqualsIntegralValue, Matches) { EXPECT_TRUE(matches("template struct C {}; C<42> c;", classTemplateSpecializationDecl( hasAnyTemplateArgument(equalsIntegralValue("42"))))); EXPECT_TRUE(matches("template struct C {}; C<-42> c;", classTemplateSpecializationDecl( hasAnyTemplateArgument(equalsIntegralValue("-42"))))); EXPECT_TRUE(matches("template struct C {}; C<-0042> c;", classTemplateSpecializationDecl( hasAnyTemplateArgument(equalsIntegralValue("-34"))))); EXPECT_TRUE(notMatches("template struct C {}; C<42> c;", classTemplateSpecializationDecl(hasAnyTemplateArgument( equalsIntegralValue("0042"))))); } TEST(Matcher, MatchesAccessSpecDecls) { EXPECT_TRUE(matches("class C { public: int i; };", accessSpecDecl())); EXPECT_TRUE( matches("class C { public: int i; };", accessSpecDecl(isPublic()))); EXPECT_TRUE( notMatches("class C { public: int i; };", accessSpecDecl(isProtected()))); EXPECT_TRUE( notMatches("class C { public: int i; };", accessSpecDecl(isPrivate()))); EXPECT_TRUE(notMatches("class C { int i; };", accessSpecDecl())); } TEST(Matcher, MatchesFinal) { EXPECT_TRUE(matches("class X final {};", cxxRecordDecl(isFinal()))); EXPECT_TRUE(matches("class X { virtual void f() final; };", cxxMethodDecl(isFinal()))); EXPECT_TRUE(notMatches("class X {};", cxxRecordDecl(isFinal()))); EXPECT_TRUE( notMatches("class X { virtual void f(); };", cxxMethodDecl(isFinal()))); } TEST(Matcher, MatchesVirtualMethod) { EXPECT_TRUE(matches("class X { virtual int f(); };", cxxMethodDecl(isVirtual(), hasName("::X::f")))); EXPECT_TRUE(notMatches("class X { int f(); };", cxxMethodDecl(isVirtual()))); } TEST(Matcher, MatchesVirtualAsWrittenMethod) { EXPECT_TRUE(matches("class A { virtual int f(); };" "class B : public A { int f(); };", cxxMethodDecl(isVirtualAsWritten(), hasName("::A::f")))); EXPECT_TRUE( notMatches("class A { virtual int f(); };" "class B : public A { int f(); };", cxxMethodDecl(isVirtualAsWritten(), 
hasName("::B::f")))); } TEST(Matcher, MatchesPureMethod) { EXPECT_TRUE(matches("class X { virtual int f() = 0; };", cxxMethodDecl(isPure(), hasName("::X::f")))); EXPECT_TRUE(notMatches("class X { int f(); };", cxxMethodDecl(isPure()))); } TEST(Matcher, MatchesCopyAssignmentOperator) { EXPECT_TRUE(matches("class X { X &operator=(X); };", cxxMethodDecl(isCopyAssignmentOperator()))); EXPECT_TRUE(matches("class X { X &operator=(X &); };", cxxMethodDecl(isCopyAssignmentOperator()))); EXPECT_TRUE(matches("class X { X &operator=(const X &); };", cxxMethodDecl(isCopyAssignmentOperator()))); EXPECT_TRUE(matches("class X { X &operator=(volatile X &); };", cxxMethodDecl(isCopyAssignmentOperator()))); EXPECT_TRUE(matches("class X { X &operator=(const volatile X &); };", cxxMethodDecl(isCopyAssignmentOperator()))); EXPECT_TRUE(notMatches("class X { X &operator=(X &&); };", cxxMethodDecl(isCopyAssignmentOperator()))); } TEST(Matcher, MatchesMoveAssignmentOperator) { EXPECT_TRUE(notMatches("class X { X &operator=(X); };", cxxMethodDecl(isMoveAssignmentOperator()))); EXPECT_TRUE(matches("class X { X &operator=(X &&); };", cxxMethodDecl(isMoveAssignmentOperator()))); EXPECT_TRUE(matches("class X { X &operator=(const X &&); };", cxxMethodDecl(isMoveAssignmentOperator()))); EXPECT_TRUE(matches("class X { X &operator=(volatile X &&); };", cxxMethodDecl(isMoveAssignmentOperator()))); EXPECT_TRUE(matches("class X { X &operator=(const volatile X &&); };", cxxMethodDecl(isMoveAssignmentOperator()))); EXPECT_TRUE(notMatches("class X { X &operator=(X &); };", cxxMethodDecl(isMoveAssignmentOperator()))); } TEST(Matcher, MatchesConstMethod) { EXPECT_TRUE( matches("struct A { void foo() const; };", cxxMethodDecl(isConst()))); EXPECT_TRUE( notMatches("struct A { void foo(); };", cxxMethodDecl(isConst()))); } TEST(Matcher, MatchesOverridingMethod) { EXPECT_TRUE(matches("class X { virtual int f(); }; " "class Y : public X { int f(); };", cxxMethodDecl(isOverride(), hasName("::Y::f")))); EXPECT_TRUE(notMatches("class X { virtual int f(); }; " "class Y : public X { int f(); };", cxxMethodDecl(isOverride(), hasName("::X::f")))); EXPECT_TRUE(notMatches("class X { int f(); }; " "class Y : public X { int f(); };", cxxMethodDecl(isOverride()))); EXPECT_TRUE(notMatches("class X { int f(); int f(int); }; ", cxxMethodDecl(isOverride()))); EXPECT_TRUE( matches("template struct Y : Base { void f() override;};", cxxMethodDecl(isOverride(), hasName("::Y::f")))); } TEST(Matcher, ConstructorArgument) { StatementMatcher Constructor = cxxConstructExpr( hasArgument(0, declRefExpr(to(varDecl(hasName("y")))))); EXPECT_TRUE( matches("class X { public: X(int); }; void x() { int y; X x(y); }", Constructor)); EXPECT_TRUE( matches("class X { public: X(int); }; void x() { int y; X x = X(y); }", Constructor)); EXPECT_TRUE( matches("class X { public: X(int); }; void x() { int y; X x = y; }", Constructor)); EXPECT_TRUE( notMatches("class X { public: X(int); }; void x() { int z; X x(z); }", Constructor)); StatementMatcher WrongIndex = cxxConstructExpr( hasArgument(42, declRefExpr(to(varDecl(hasName("y")))))); EXPECT_TRUE( notMatches("class X { public: X(int); }; void x() { int y; X x(y); }", WrongIndex)); } TEST(Matcher, ConstructorArgumentCount) { StatementMatcher Constructor1Arg = cxxConstructExpr(argumentCountIs(1)); EXPECT_TRUE( matches("class X { public: X(int); }; void x() { X x(0); }", Constructor1Arg)); EXPECT_TRUE( matches("class X { public: X(int); }; void x() { X x = X(0); }", Constructor1Arg)); EXPECT_TRUE( matches("class X { public: 
X(int); }; void x() { X x = 0; }", Constructor1Arg)); EXPECT_TRUE( notMatches("class X { public: X(int, int); }; void x() { X x(0, 0); }", Constructor1Arg)); } TEST(Matcher, ConstructorListInitialization) { StatementMatcher ConstructorListInit = cxxConstructExpr(isListInitialization()); EXPECT_TRUE( matches("class X { public: X(int); }; void x() { X x{0}; }", ConstructorListInit)); EXPECT_FALSE( matches("class X { public: X(int); }; void x() { X x(0); }", ConstructorListInit)); } TEST(ConstructorDeclaration, IsImplicit) { // This one doesn't match because the constructor is not added by the // compiler (it is not needed). EXPECT_TRUE(notMatches("class Foo { };", cxxConstructorDecl(isImplicit()))); // The compiler added the implicit default constructor. EXPECT_TRUE(matches("class Foo { }; Foo* f = new Foo();", cxxConstructorDecl(isImplicit()))); EXPECT_TRUE(matches("class Foo { Foo(){} };", cxxConstructorDecl(unless(isImplicit())))); // The compiler added an implicit assignment operator. EXPECT_TRUE(matches("struct A { int x; } a = {0}, b = a; void f() { a = b; }", cxxMethodDecl(isImplicit(), hasName("operator=")))); } TEST(ConstructorDeclaration, IsExplicit) { EXPECT_TRUE(matches("struct S { explicit S(int); };", cxxConstructorDecl(isExplicit()))); EXPECT_TRUE(notMatches("struct S { S(int); };", cxxConstructorDecl(isExplicit()))); } TEST(ConstructorDeclaration, Kinds) { - EXPECT_TRUE(matches("struct S { S(); };", - cxxConstructorDecl(isDefaultConstructor()))); - EXPECT_TRUE(notMatches("struct S { S(); };", - cxxConstructorDecl(isCopyConstructor()))); - EXPECT_TRUE(notMatches("struct S { S(); };", - cxxConstructorDecl(isMoveConstructor()))); - - EXPECT_TRUE(notMatches("struct S { S(const S&); };", - cxxConstructorDecl(isDefaultConstructor()))); - EXPECT_TRUE(matches("struct S { S(const S&); };", - cxxConstructorDecl(isCopyConstructor()))); - EXPECT_TRUE(notMatches("struct S { S(const S&); };", - cxxConstructorDecl(isMoveConstructor()))); - - EXPECT_TRUE(notMatches("struct S { S(S&&); };", - cxxConstructorDecl(isDefaultConstructor()))); - EXPECT_TRUE(notMatches("struct S { S(S&&); };", - cxxConstructorDecl(isCopyConstructor()))); - EXPECT_TRUE(matches("struct S { S(S&&); };", - cxxConstructorDecl(isMoveConstructor()))); + EXPECT_TRUE(matches( + "struct S { S(); };", + cxxConstructorDecl(isDefaultConstructor(), unless(isImplicit())))); + EXPECT_TRUE(notMatches( + "struct S { S(); };", + cxxConstructorDecl(isCopyConstructor(), unless(isImplicit())))); + EXPECT_TRUE(notMatches( + "struct S { S(); };", + cxxConstructorDecl(isMoveConstructor(), unless(isImplicit())))); + + EXPECT_TRUE(notMatches( + "struct S { S(const S&); };", + cxxConstructorDecl(isDefaultConstructor(), unless(isImplicit())))); + EXPECT_TRUE(matches( + "struct S { S(const S&); };", + cxxConstructorDecl(isCopyConstructor(), unless(isImplicit())))); + EXPECT_TRUE(notMatches( + "struct S { S(const S&); };", + cxxConstructorDecl(isMoveConstructor(), unless(isImplicit())))); + + EXPECT_TRUE(notMatches( + "struct S { S(S&&); };", + cxxConstructorDecl(isDefaultConstructor(), unless(isImplicit())))); + EXPECT_TRUE(notMatches( + "struct S { S(S&&); };", + cxxConstructorDecl(isCopyConstructor(), unless(isImplicit())))); + EXPECT_TRUE(matches( + "struct S { S(S&&); };", + cxxConstructorDecl(isMoveConstructor(), unless(isImplicit())))); } TEST(ConstructorDeclaration, IsUserProvided) { EXPECT_TRUE(notMatches("struct S { int X = 0; };", cxxConstructorDecl(isUserProvided()))); EXPECT_TRUE(notMatches("struct S { S() = default; };", 
cxxConstructorDecl(isUserProvided()))); EXPECT_TRUE(notMatches("struct S { S() = delete; };", cxxConstructorDecl(isUserProvided()))); EXPECT_TRUE( matches("struct S { S(); };", cxxConstructorDecl(isUserProvided()))); EXPECT_TRUE(matches("struct S { S(); }; S::S(){}", cxxConstructorDecl(isUserProvided()))); } TEST(ConstructorDeclaration, IsDelegatingConstructor) { EXPECT_TRUE(notMatches("struct S { S(); S(int); int X; };", cxxConstructorDecl(isDelegatingConstructor()))); EXPECT_TRUE(notMatches("struct S { S(){} S(int X) : X(X) {} int X; };", cxxConstructorDecl(isDelegatingConstructor()))); EXPECT_TRUE(matches( "struct S { S() : S(0) {} S(int X) : X(X) {} int X; };", cxxConstructorDecl(isDelegatingConstructor(), parameterCountIs(0)))); EXPECT_TRUE(matches( "struct S { S(); S(int X); int X; }; S::S(int X) : S() {}", cxxConstructorDecl(isDelegatingConstructor(), parameterCountIs(1)))); } TEST(StringLiteral, HasSize) { StatementMatcher Literal = stringLiteral(hasSize(4)); EXPECT_TRUE(matches("const char *s = \"abcd\";", Literal)); // wide string EXPECT_TRUE(matches("const wchar_t *s = L\"abcd\";", Literal)); // with escaped characters EXPECT_TRUE(matches("const char *s = \"\x05\x06\x07\x08\";", Literal)); // no matching, too small EXPECT_TRUE(notMatches("const char *s = \"ab\";", Literal)); } TEST(Matcher, HasNameSupportsNamespaces) { EXPECT_TRUE(matches("namespace a { namespace b { class C; } }", recordDecl(hasName("a::b::C")))); EXPECT_TRUE(matches("namespace a { namespace b { class C; } }", recordDecl(hasName("::a::b::C")))); EXPECT_TRUE(matches("namespace a { namespace b { class C; } }", recordDecl(hasName("b::C")))); EXPECT_TRUE(matches("namespace a { namespace b { class C; } }", recordDecl(hasName("C")))); EXPECT_TRUE(notMatches("namespace a { namespace b { class C; } }", recordDecl(hasName("c::b::C")))); EXPECT_TRUE(notMatches("namespace a { namespace b { class C; } }", recordDecl(hasName("a::c::C")))); EXPECT_TRUE(notMatches("namespace a { namespace b { class C; } }", recordDecl(hasName("a::b::A")))); EXPECT_TRUE(notMatches("namespace a { namespace b { class C; } }", recordDecl(hasName("::C")))); EXPECT_TRUE(notMatches("namespace a { namespace b { class C; } }", recordDecl(hasName("::b::C")))); EXPECT_TRUE(notMatches("namespace a { namespace b { class C; } }", recordDecl(hasName("z::a::b::C")))); EXPECT_TRUE(notMatches("namespace a { namespace b { class C; } }", recordDecl(hasName("a+b::C")))); EXPECT_TRUE(notMatches("namespace a { namespace b { class AC; } }", recordDecl(hasName("C")))); } TEST(Matcher, HasNameSupportsOuterClasses) { EXPECT_TRUE( matches("class A { class B { class C; }; };", recordDecl(hasName("A::B::C")))); EXPECT_TRUE( matches("class A { class B { class C; }; };", recordDecl(hasName("::A::B::C")))); EXPECT_TRUE( matches("class A { class B { class C; }; };", recordDecl(hasName("B::C")))); EXPECT_TRUE( matches("class A { class B { class C; }; };", recordDecl(hasName("C")))); EXPECT_TRUE( notMatches("class A { class B { class C; }; };", recordDecl(hasName("c::B::C")))); EXPECT_TRUE( notMatches("class A { class B { class C; }; };", recordDecl(hasName("A::c::C")))); EXPECT_TRUE( notMatches("class A { class B { class C; }; };", recordDecl(hasName("A::B::A")))); EXPECT_TRUE( notMatches("class A { class B { class C; }; };", recordDecl(hasName("::C")))); EXPECT_TRUE( notMatches("class A { class B { class C; }; };", recordDecl(hasName("::B::C")))); EXPECT_TRUE(notMatches("class A { class B { class C; }; };", recordDecl(hasName("z::A::B::C")))); EXPECT_TRUE( notMatches("class A 
{ class B { class C; }; };", recordDecl(hasName("A+B::C")))); } TEST(Matcher, HasNameSupportsInlinedNamespaces) { std::string code = "namespace a { inline namespace b { class C; } }"; EXPECT_TRUE(matches(code, recordDecl(hasName("a::b::C")))); EXPECT_TRUE(matches(code, recordDecl(hasName("a::C")))); EXPECT_TRUE(matches(code, recordDecl(hasName("::a::b::C")))); EXPECT_TRUE(matches(code, recordDecl(hasName("::a::C")))); } TEST(Matcher, HasNameSupportsAnonymousNamespaces) { std::string code = "namespace a { namespace { class C; } }"; EXPECT_TRUE( matches(code, recordDecl(hasName("a::(anonymous namespace)::C")))); EXPECT_TRUE(matches(code, recordDecl(hasName("a::C")))); EXPECT_TRUE( matches(code, recordDecl(hasName("::a::(anonymous namespace)::C")))); EXPECT_TRUE(matches(code, recordDecl(hasName("::a::C")))); } TEST(Matcher, HasNameSupportsAnonymousOuterClasses) { EXPECT_TRUE(matches("class A { class { class C; } x; };", recordDecl(hasName("A::(anonymous class)::C")))); EXPECT_TRUE(matches("class A { class { class C; } x; };", recordDecl(hasName("::A::(anonymous class)::C")))); EXPECT_FALSE(matches("class A { class { class C; } x; };", recordDecl(hasName("::A::C")))); EXPECT_TRUE(matches("class A { struct { class C; } x; };", recordDecl(hasName("A::(anonymous struct)::C")))); EXPECT_TRUE(matches("class A { struct { class C; } x; };", recordDecl(hasName("::A::(anonymous struct)::C")))); EXPECT_FALSE(matches("class A { struct { class C; } x; };", recordDecl(hasName("::A::C")))); } TEST(Matcher, HasNameSupportsFunctionScope) { std::string code = "namespace a { void F(int a) { struct S { int m; }; int i; } }"; EXPECT_TRUE(matches(code, varDecl(hasName("i")))); EXPECT_FALSE(matches(code, varDecl(hasName("F()::i")))); EXPECT_TRUE(matches(code, fieldDecl(hasName("m")))); EXPECT_TRUE(matches(code, fieldDecl(hasName("S::m")))); EXPECT_TRUE(matches(code, fieldDecl(hasName("F(int)::S::m")))); EXPECT_TRUE(matches(code, fieldDecl(hasName("a::F(int)::S::m")))); EXPECT_TRUE(matches(code, fieldDecl(hasName("::a::F(int)::S::m")))); } TEST(Matcher, HasAnyName) { const std::string Code = "namespace a { namespace b { class C; } }"; EXPECT_TRUE(matches(Code, recordDecl(hasAnyName("XX", "a::b::C")))); EXPECT_TRUE(matches(Code, recordDecl(hasAnyName("a::b::C", "XX")))); EXPECT_TRUE(matches(Code, recordDecl(hasAnyName("XX::C", "a::b::C")))); EXPECT_TRUE(matches(Code, recordDecl(hasAnyName("XX", "C")))); EXPECT_TRUE(notMatches(Code, recordDecl(hasAnyName("::C", "::b::C")))); EXPECT_TRUE( matches(Code, recordDecl(hasAnyName("::C", "::b::C", "::a::b::C")))); std::vector Names = {"::C", "::b::C", "::a::b::C"}; EXPECT_TRUE(matches(Code, recordDecl(hasAnyName(Names)))); } TEST(Matcher, IsDefinition) { DeclarationMatcher DefinitionOfClassA = recordDecl(hasName("A"), isDefinition()); EXPECT_TRUE(matches("class A {};", DefinitionOfClassA)); EXPECT_TRUE(notMatches("class A;", DefinitionOfClassA)); DeclarationMatcher DefinitionOfVariableA = varDecl(hasName("a"), isDefinition()); EXPECT_TRUE(matches("int a;", DefinitionOfVariableA)); EXPECT_TRUE(notMatches("extern int a;", DefinitionOfVariableA)); DeclarationMatcher DefinitionOfMethodA = cxxMethodDecl(hasName("a"), isDefinition()); EXPECT_TRUE(matches("class A { void a() {} };", DefinitionOfMethodA)); EXPECT_TRUE(notMatches("class A { void a(); };", DefinitionOfMethodA)); } TEST(Matcher, HandlesNullQualTypes) { // FIXME: Add a Type matcher so we can replace uses of this // variable with Type(True()) const TypeMatcher AnyType = anything(); // We don't really care whether this 
matcher succeeds; we're testing that // it completes without crashing. EXPECT_TRUE(matches( "struct A { };" "template " "void f(T t) {" " T local_t(t /* this becomes a null QualType in the AST */);" "}" "void g() {" " f(0);" "}", expr(hasType(TypeMatcher( anyOf( TypeMatcher(hasDeclaration(anything())), pointsTo(AnyType), references(AnyType) // Other QualType matchers should go here. )))))); } TEST(StatementCountIs, FindsNoStatementsInAnEmptyCompoundStatement) { EXPECT_TRUE(matches("void f() { }", compoundStmt(statementCountIs(0)))); EXPECT_TRUE(notMatches("void f() {}", compoundStmt(statementCountIs(1)))); } TEST(StatementCountIs, AppearsToMatchOnlyOneCount) { EXPECT_TRUE(matches("void f() { 1; }", compoundStmt(statementCountIs(1)))); EXPECT_TRUE(notMatches("void f() { 1; }", compoundStmt(statementCountIs(0)))); EXPECT_TRUE(notMatches("void f() { 1; }", compoundStmt(statementCountIs(2)))); } TEST(StatementCountIs, WorksWithMultipleStatements) { EXPECT_TRUE(matches("void f() { 1; 2; 3; }", compoundStmt(statementCountIs(3)))); } TEST(StatementCountIs, WorksWithNestedCompoundStatements) { EXPECT_TRUE(matches("void f() { { 1; } { 1; 2; 3; 4; } }", compoundStmt(statementCountIs(1)))); EXPECT_TRUE(matches("void f() { { 1; } { 1; 2; 3; 4; } }", compoundStmt(statementCountIs(2)))); EXPECT_TRUE(notMatches("void f() { { 1; } { 1; 2; 3; 4; } }", compoundStmt(statementCountIs(3)))); EXPECT_TRUE(matches("void f() { { 1; } { 1; 2; 3; 4; } }", compoundStmt(statementCountIs(4)))); } TEST(Member, WorksInSimplestCase) { EXPECT_TRUE(matches("struct { int first; } s; int i(s.first);", memberExpr(member(hasName("first"))))); } TEST(Member, DoesNotMatchTheBaseExpression) { // Don't pick out the wrong part of the member expression, this should // be checking the member (name) only. 
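  // In "first.i" below, 'first' names the base object and 'i' names the
  // member, so member(hasName("first")) must not match.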
EXPECT_TRUE(notMatches("struct { int i; } first; int i(first.i);", memberExpr(member(hasName("first"))))); } TEST(Member, MatchesInMemberFunctionCall) { EXPECT_TRUE(matches("void f() {" " struct { void first() {}; } s;" " s.first();" "};", memberExpr(member(hasName("first"))))); } TEST(Member, MatchesMember) { EXPECT_TRUE(matches( "struct A { int i; }; void f() { A a; a.i = 2; }", memberExpr(hasDeclaration(fieldDecl(hasType(isInteger())))))); EXPECT_TRUE(notMatches( "struct A { float f; }; void f() { A a; a.f = 2.0f; }", memberExpr(hasDeclaration(fieldDecl(hasType(isInteger())))))); } TEST(Member, BitFields) { EXPECT_TRUE(matches("class C { int a : 2; int b; };", fieldDecl(isBitField(), hasName("a")))); EXPECT_TRUE(notMatches("class C { int a : 2; int b; };", fieldDecl(isBitField(), hasName("b")))); EXPECT_TRUE(matches("class C { int a : 2; int b : 4; };", fieldDecl(isBitField(), hasBitWidth(2), hasName("a")))); } TEST(Member, InClassInitializer) { EXPECT_TRUE( matches("class C { int a = 2; int b; };", fieldDecl(hasInClassInitializer(integerLiteral(equals(2))), hasName("a")))); EXPECT_TRUE( notMatches("class C { int a = 2; int b; };", fieldDecl(hasInClassInitializer(anything()), hasName("b")))); } TEST(Member, UnderstandsAccess) { EXPECT_TRUE(matches( "struct A { int i; };", fieldDecl(isPublic(), hasName("i")))); EXPECT_TRUE(notMatches( "struct A { int i; };", fieldDecl(isProtected(), hasName("i")))); EXPECT_TRUE(notMatches( "struct A { int i; };", fieldDecl(isPrivate(), hasName("i")))); EXPECT_TRUE(notMatches( "class A { int i; };", fieldDecl(isPublic(), hasName("i")))); EXPECT_TRUE(notMatches( "class A { int i; };", fieldDecl(isProtected(), hasName("i")))); EXPECT_TRUE(matches( "class A { int i; };", fieldDecl(isPrivate(), hasName("i")))); EXPECT_TRUE(notMatches( "class A { protected: int i; };", fieldDecl(isPublic(), hasName("i")))); EXPECT_TRUE(matches("class A { protected: int i; };", fieldDecl(isProtected(), hasName("i")))); EXPECT_TRUE(notMatches( "class A { protected: int i; };", fieldDecl(isPrivate(), hasName("i")))); // Non-member decls have the AccessSpecifier AS_none and thus aren't matched. 
EXPECT_TRUE(notMatches("int i;", varDecl(isPublic(), hasName("i")))); EXPECT_TRUE(notMatches("int i;", varDecl(isProtected(), hasName("i")))); EXPECT_TRUE(notMatches("int i;", varDecl(isPrivate(), hasName("i")))); } TEST(hasDynamicExceptionSpec, MatchesDynamicExceptionSpecifications) { EXPECT_TRUE(notMatches("void f();", functionDecl(hasDynamicExceptionSpec()))); EXPECT_TRUE(notMatches("void g() noexcept;", functionDecl(hasDynamicExceptionSpec()))); EXPECT_TRUE(notMatches("void h() noexcept(true);", functionDecl(hasDynamicExceptionSpec()))); EXPECT_TRUE(notMatches("void i() noexcept(false);", functionDecl(hasDynamicExceptionSpec()))); EXPECT_TRUE( matches("void j() throw();", functionDecl(hasDynamicExceptionSpec()))); EXPECT_TRUE( matches("void k() throw(int);", functionDecl(hasDynamicExceptionSpec()))); EXPECT_TRUE( matches("void l() throw(...);", functionDecl(hasDynamicExceptionSpec()))); EXPECT_TRUE(notMatches("void f();", functionProtoType(hasDynamicExceptionSpec()))); EXPECT_TRUE(notMatches("void g() noexcept;", functionProtoType(hasDynamicExceptionSpec()))); EXPECT_TRUE(notMatches("void h() noexcept(true);", functionProtoType(hasDynamicExceptionSpec()))); EXPECT_TRUE(notMatches("void i() noexcept(false);", functionProtoType(hasDynamicExceptionSpec()))); EXPECT_TRUE( matches("void j() throw();", functionProtoType(hasDynamicExceptionSpec()))); EXPECT_TRUE( matches("void k() throw(int);", functionProtoType(hasDynamicExceptionSpec()))); EXPECT_TRUE( matches("void l() throw(...);", functionProtoType(hasDynamicExceptionSpec()))); } TEST(HasObjectExpression, DoesNotMatchMember) { EXPECT_TRUE(notMatches( "class X {}; struct Z { X m; }; void f(Z z) { z.m; }", memberExpr(hasObjectExpression(hasType(recordDecl(hasName("X"))))))); } TEST(HasObjectExpression, MatchesBaseOfVariable) { EXPECT_TRUE(matches( "struct X { int m; }; void f(X x) { x.m; }", memberExpr(hasObjectExpression(hasType(recordDecl(hasName("X"))))))); EXPECT_TRUE(matches( "struct X { int m; }; void f(X* x) { x->m; }", memberExpr(hasObjectExpression( hasType(pointsTo(recordDecl(hasName("X")))))))); } TEST(HasObjectExpression, MatchesObjectExpressionOfImplicitlyFormedMemberExpression) { EXPECT_TRUE(matches( "class X {}; struct S { X m; void f() { this->m; } };", memberExpr(hasObjectExpression( hasType(pointsTo(recordDecl(hasName("S")))))))); EXPECT_TRUE(matches( "class X {}; struct S { X m; void f() { m; } };", memberExpr(hasObjectExpression( hasType(pointsTo(recordDecl(hasName("S")))))))); } TEST(Field, DoesNotMatchNonFieldMembers) { EXPECT_TRUE(notMatches("class X { void m(); };", fieldDecl(hasName("m")))); EXPECT_TRUE(notMatches("class X { class m {}; };", fieldDecl(hasName("m")))); EXPECT_TRUE(notMatches("class X { enum { m }; };", fieldDecl(hasName("m")))); EXPECT_TRUE(notMatches("class X { enum m {}; };", fieldDecl(hasName("m")))); } TEST(Field, MatchesField) { EXPECT_TRUE(matches("class X { int m; };", fieldDecl(hasName("m")))); } TEST(IsVolatileQualified, QualifiersMatch) { EXPECT_TRUE(matches("volatile int i = 42;", varDecl(hasType(isVolatileQualified())))); EXPECT_TRUE(notMatches("volatile int *i;", varDecl(hasType(isVolatileQualified())))); EXPECT_TRUE(matches("typedef volatile int v_int; v_int i = 42;", varDecl(hasType(isVolatileQualified())))); } TEST(IsConstQualified, MatchesConstInt) { EXPECT_TRUE(matches("const int i = 42;", varDecl(hasType(isConstQualified())))); } TEST(IsConstQualified, MatchesConstPointer) { EXPECT_TRUE(matches("int i = 42; int* const p(&i);", varDecl(hasType(isConstQualified())))); } 
TEST(IsConstQualified, MatchesThroughTypedef) {
  EXPECT_TRUE(matches("typedef const int const_int; const_int i = 42;",
                      varDecl(hasType(isConstQualified()))));
  EXPECT_TRUE(matches("typedef int* int_ptr; const int_ptr p(0);",
                      varDecl(hasType(isConstQualified()))));
}

TEST(IsConstQualified, DoesNotMatchInappropriately) {
  EXPECT_TRUE(notMatches("typedef int nonconst_int; nonconst_int i = 42;",
                         varDecl(hasType(isConstQualified()))));
  EXPECT_TRUE(notMatches("int const* p;",
                         varDecl(hasType(isConstQualified()))));
}

TEST(DeclCount, DeclCountIsCorrect) {
  EXPECT_TRUE(matches("void f() {int i,j;}", declStmt(declCountIs(2))));
  EXPECT_TRUE(notMatches("void f() {int i,j; int k;}",
                         declStmt(declCountIs(3))));
  EXPECT_TRUE(notMatches("void f() {int i,j, k, l;}",
                         declStmt(declCountIs(3))));
}

TEST(EachOf, TriggersForEachMatch) {
  EXPECT_TRUE(matchAndVerifyResultTrue(
      "class A { int a; int b; };",
      recordDecl(eachOf(has(fieldDecl(hasName("a")).bind("v")),
                        has(fieldDecl(hasName("b")).bind("v")))),
      llvm::make_unique<VerifyIdIsBoundTo<FieldDecl>>("v", 2)));
}

TEST(EachOf, BehavesLikeAnyOfUnlessBothMatch) {
  EXPECT_TRUE(matchAndVerifyResultTrue(
      "class A { int a; int c; };",
      recordDecl(eachOf(has(fieldDecl(hasName("a")).bind("v")),
                        has(fieldDecl(hasName("b")).bind("v")))),
      llvm::make_unique<VerifyIdIsBoundTo<FieldDecl>>("v", 1)));
  EXPECT_TRUE(matchAndVerifyResultTrue(
      "class A { int c; int b; };",
      recordDecl(eachOf(has(fieldDecl(hasName("a")).bind("v")),
                        has(fieldDecl(hasName("b")).bind("v")))),
      llvm::make_unique<VerifyIdIsBoundTo<FieldDecl>>("v", 1)));
  EXPECT_TRUE(notMatches(
      "class A { int c; int d; };",
      recordDecl(eachOf(has(fieldDecl(hasName("a")).bind("v")),
                        has(fieldDecl(hasName("b")).bind("v"))))));
}

TEST(IsTemplateInstantiation, MatchesImplicitClassTemplateInstantiation) {
  // Make sure that we can both match the class by name (::X) and by the type
  // the template was instantiated with (via a field).

  EXPECT_TRUE(matches(
      "template <typename T> class X {}; class A {}; X<A> x;",
      cxxRecordDecl(hasName("::X"), isTemplateInstantiation())));

  EXPECT_TRUE(matches(
      "template <typename T> class X { T t; }; class A {}; X<A> x;",
      cxxRecordDecl(isTemplateInstantiation(), hasDescendant(
          fieldDecl(hasType(recordDecl(hasName("A"))))))));
}

TEST(IsTemplateInstantiation, MatchesImplicitFunctionTemplateInstantiation) {
  EXPECT_TRUE(matches(
      "template <typename T> void f(T t) {} class A {}; void g() { f(A()); }",
      functionDecl(hasParameter(0, hasType(recordDecl(hasName("A")))),
                   isTemplateInstantiation())));
}

TEST(IsTemplateInstantiation, MatchesExplicitClassTemplateInstantiation) {
  EXPECT_TRUE(matches(
      "template <typename T> class X { T t; }; class A {};"
      "template class X<A>;",
      cxxRecordDecl(isTemplateInstantiation(), hasDescendant(
          fieldDecl(hasType(recordDecl(hasName("A"))))))));
}

TEST(IsTemplateInstantiation,
     MatchesInstantiationOfPartiallySpecializedClassTemplate) {
  EXPECT_TRUE(matches(
      "template <typename T> class X {};"
      "template <typename T> class X<T*> {}; class A {}; X<A*> x;",
      cxxRecordDecl(hasName("::X"), isTemplateInstantiation())));
}

TEST(IsTemplateInstantiation,
     MatchesInstantiationOfClassTemplateNestedInNonTemplate) {
  EXPECT_TRUE(matches(
      "class A {};"
      "class X {"
      "  template <typename U> class Y { U u; };"
      "  Y<A> y;"
      "};",
      cxxRecordDecl(hasName("::X::Y"), isTemplateInstantiation())));
}

TEST(IsTemplateInstantiation, DoesNotMatchInstantiationsInsideOfInstantiation) {
  // FIXME: Figure out whether this makes sense. It doesn't affect the
  // normal use case as long as the uppermost instantiation always is marked
  // as template instantiation, but it might be confusing as a predicate.
EXPECT_TRUE(matches( "class A {};" "template class X {" " template class Y { U u; };" " Y y;" "}; X x;", cxxRecordDecl(hasName("::X::Y"), unless(isTemplateInstantiation())))); } TEST(IsTemplateInstantiation, DoesNotMatchExplicitClassTemplateSpecialization) { EXPECT_TRUE(notMatches( "template class X {}; class A {};" "template <> class X {}; X x;", cxxRecordDecl(hasName("::X"), isTemplateInstantiation()))); } TEST(IsTemplateInstantiation, DoesNotMatchNonTemplate) { EXPECT_TRUE(notMatches( "class A {}; class Y { A a; };", cxxRecordDecl(isTemplateInstantiation()))); } TEST(IsInstantiated, MatchesInstantiation) { EXPECT_TRUE( matches("template class A { T i; }; class Y { A a; };", cxxRecordDecl(isInstantiated()))); } TEST(IsInstantiated, NotMatchesDefinition) { EXPECT_TRUE(notMatches("template class A { T i; };", cxxRecordDecl(isInstantiated()))); } TEST(IsInTemplateInstantiation, MatchesInstantiationStmt) { EXPECT_TRUE(matches("template struct A { A() { T i; } };" "class Y { A a; }; Y y;", declStmt(isInTemplateInstantiation()))); } TEST(IsInTemplateInstantiation, NotMatchesDefinitionStmt) { EXPECT_TRUE(notMatches("template struct A { void x() { T i; } };", declStmt(isInTemplateInstantiation()))); } TEST(IsInstantiated, MatchesFunctionInstantiation) { EXPECT_TRUE( matches("template void A(T t) { T i; } void x() { A(0); }", functionDecl(isInstantiated()))); } TEST(IsInstantiated, NotMatchesFunctionDefinition) { EXPECT_TRUE(notMatches("template void A(T t) { T i; }", varDecl(isInstantiated()))); } TEST(IsInTemplateInstantiation, MatchesFunctionInstantiationStmt) { EXPECT_TRUE( matches("template void A(T t) { T i; } void x() { A(0); }", declStmt(isInTemplateInstantiation()))); } TEST(IsInTemplateInstantiation, NotMatchesFunctionDefinitionStmt) { EXPECT_TRUE(notMatches("template void A(T t) { T i; }", declStmt(isInTemplateInstantiation()))); } TEST(IsInTemplateInstantiation, Sharing) { auto Matcher = binaryOperator(unless(isInTemplateInstantiation())); // FIXME: Node sharing is an implementation detail, exposing it is ugly // and makes the matcher behave in non-obvious ways. 
EXPECT_TRUE(notMatches( "int j; template void A(T t) { j += 42; } void x() { A(0); }", Matcher)); EXPECT_TRUE(matches( "int j; template void A(T t) { j += t; } void x() { A(0); }", Matcher)); } TEST(IsExplicitTemplateSpecialization, DoesNotMatchPrimaryTemplate) { EXPECT_TRUE(notMatches( "template class X {};", cxxRecordDecl(isExplicitTemplateSpecialization()))); EXPECT_TRUE(notMatches( "template void f(T t);", functionDecl(isExplicitTemplateSpecialization()))); } TEST(IsExplicitTemplateSpecialization, DoesNotMatchExplicitTemplateInstantiations) { EXPECT_TRUE(notMatches( "template class X {};" "template class X; extern template class X;", cxxRecordDecl(isExplicitTemplateSpecialization()))); EXPECT_TRUE(notMatches( "template void f(T t) {}" "template void f(int t); extern template void f(long t);", functionDecl(isExplicitTemplateSpecialization()))); } TEST(IsExplicitTemplateSpecialization, DoesNotMatchImplicitTemplateInstantiations) { EXPECT_TRUE(notMatches( "template class X {}; X x;", cxxRecordDecl(isExplicitTemplateSpecialization()))); EXPECT_TRUE(notMatches( "template void f(T t); void g() { f(10); }", functionDecl(isExplicitTemplateSpecialization()))); } TEST(IsExplicitTemplateSpecialization, MatchesExplicitTemplateSpecializations) { EXPECT_TRUE(matches( "template class X {};" "template<> class X {};", cxxRecordDecl(isExplicitTemplateSpecialization()))); EXPECT_TRUE(matches( "template void f(T t) {}" "template<> void f(int t) {}", functionDecl(isExplicitTemplateSpecialization()))); } TEST(TypeMatching, MatchesBool) { EXPECT_TRUE(matches("struct S { bool func(); };", cxxMethodDecl(returns(booleanType())))); EXPECT_TRUE(notMatches("struct S { void func(); };", cxxMethodDecl(returns(booleanType())))); } TEST(TypeMatching, MatchesVoid) { EXPECT_TRUE(matches("struct S { void func(); };", cxxMethodDecl(returns(voidType())))); } TEST(TypeMatching, MatchesRealFloats) { EXPECT_TRUE(matches("struct S { float func(); };", cxxMethodDecl(returns(realFloatingPointType())))); EXPECT_TRUE(notMatches("struct S { int func(); };", cxxMethodDecl(returns(realFloatingPointType())))); EXPECT_TRUE(matches("struct S { long double func(); };", cxxMethodDecl(returns(realFloatingPointType())))); } TEST(TypeMatching, MatchesArrayTypes) { EXPECT_TRUE(matches("int a[] = {2,3};", arrayType())); EXPECT_TRUE(matches("int a[42];", arrayType())); EXPECT_TRUE(matches("void f(int b) { int a[b]; }", arrayType())); EXPECT_TRUE(notMatches("struct A {}; A a[7];", arrayType(hasElementType(builtinType())))); EXPECT_TRUE(matches( "int const a[] = { 2, 3 };", qualType(arrayType(hasElementType(builtinType()))))); EXPECT_TRUE(matches( "int const a[] = { 2, 3 };", qualType(isConstQualified(), arrayType(hasElementType(builtinType()))))); EXPECT_TRUE(matches( "typedef const int T; T x[] = { 1, 2 };", qualType(isConstQualified(), arrayType()))); EXPECT_TRUE(notMatches( "int a[] = { 2, 3 };", qualType(isConstQualified(), arrayType(hasElementType(builtinType()))))); EXPECT_TRUE(notMatches( "int a[] = { 2, 3 };", qualType(arrayType(hasElementType(isConstQualified(), builtinType()))))); EXPECT_TRUE(notMatches( "int const a[] = { 2, 3 };", qualType(arrayType(hasElementType(builtinType())), unless(isConstQualified())))); EXPECT_TRUE(matches("int a[2];", constantArrayType(hasElementType(builtinType())))); EXPECT_TRUE(matches("const int a = 0;", qualType(isInteger()))); } TEST(TypeMatching, DecayedType) { EXPECT_TRUE(matches("void f(int i[]);", valueDecl(hasType(decayedType(hasDecayedType(pointerType())))))); EXPECT_TRUE(notMatches("int i[7];", 
decayedType())); } TEST(TypeMatching, MatchesComplexTypes) { EXPECT_TRUE(matches("_Complex float f;", complexType())); EXPECT_TRUE(matches( "_Complex float f;", complexType(hasElementType(builtinType())))); EXPECT_TRUE(notMatches( "_Complex float f;", complexType(hasElementType(isInteger())))); } TEST(NS, Anonymous) { EXPECT_TRUE(notMatches("namespace N {}", namespaceDecl(isAnonymous()))); EXPECT_TRUE(matches("namespace {}", namespaceDecl(isAnonymous()))); } TEST(EqualsBoundNodeMatcher, QualType) { EXPECT_TRUE(matches( "int i = 1;", varDecl(hasType(qualType().bind("type")), hasInitializer(ignoringParenImpCasts( hasType(qualType(equalsBoundNode("type")))))))); EXPECT_TRUE(notMatches("int i = 1.f;", varDecl(hasType(qualType().bind("type")), hasInitializer(ignoringParenImpCasts(hasType( qualType(equalsBoundNode("type")))))))); } TEST(EqualsBoundNodeMatcher, NonMatchingTypes) { EXPECT_TRUE(notMatches( "int i = 1;", varDecl(namedDecl(hasName("i")).bind("name"), hasInitializer(ignoringParenImpCasts( hasType(qualType(equalsBoundNode("type")))))))); } TEST(EqualsBoundNodeMatcher, Stmt) { EXPECT_TRUE( matches("void f() { if(true) {} }", stmt(allOf(ifStmt().bind("if"), hasParent(stmt(has(stmt(equalsBoundNode("if"))))))))); EXPECT_TRUE(notMatches( "void f() { if(true) { if (true) {} } }", stmt(allOf(ifStmt().bind("if"), has(stmt(equalsBoundNode("if"))))))); } TEST(EqualsBoundNodeMatcher, Decl) { EXPECT_TRUE(matches( "class X { class Y {}; };", decl(allOf(recordDecl(hasName("::X::Y")).bind("record"), hasParent(decl(has(decl(equalsBoundNode("record"))))))))); EXPECT_TRUE(notMatches("class X { class Y {}; };", decl(allOf(recordDecl(hasName("::X")).bind("record"), has(decl(equalsBoundNode("record"))))))); } TEST(EqualsBoundNodeMatcher, Type) { EXPECT_TRUE(matches( "class X { int a; int b; };", recordDecl( has(fieldDecl(hasName("a"), hasType(type().bind("t")))), has(fieldDecl(hasName("b"), hasType(type(equalsBoundNode("t")))))))); EXPECT_TRUE(notMatches( "class X { int a; double b; };", recordDecl( has(fieldDecl(hasName("a"), hasType(type().bind("t")))), has(fieldDecl(hasName("b"), hasType(type(equalsBoundNode("t")))))))); } TEST(EqualsBoundNodeMatcher, UsingForEachDescendant) { EXPECT_TRUE(matchAndVerifyResultTrue( "int f() {" " if (1) {" " int i = 9;" " }" " int j = 10;" " {" " float k = 9.0;" " }" " return 0;" "}", // Look for variable declarations within functions whose type is the same // as the function return type. functionDecl(returns(qualType().bind("type")), forEachDescendant(varDecl(hasType( qualType(equalsBoundNode("type")))).bind("decl"))), // Only i and j should match, not k. 
llvm::make_unique>("decl", 2))); } TEST(EqualsBoundNodeMatcher, FiltersMatchedCombinations) { EXPECT_TRUE(matchAndVerifyResultTrue( "void f() {" " int x;" " double d;" " x = d + x - d + x;" "}", functionDecl( hasName("f"), forEachDescendant(varDecl().bind("d")), forEachDescendant(declRefExpr(to(decl(equalsBoundNode("d")))))), llvm::make_unique>("d", 5))); } TEST(EqualsBoundNodeMatcher, UnlessDescendantsOfAncestorsMatch) { EXPECT_TRUE(matchAndVerifyResultTrue( "struct StringRef { int size() const; const char* data() const; };" "void f(StringRef v) {" " v.data();" "}", cxxMemberCallExpr( callee(cxxMethodDecl(hasName("data"))), on(declRefExpr(to( varDecl(hasType(recordDecl(hasName("StringRef")))).bind("var")))), unless(hasAncestor(stmt(hasDescendant(cxxMemberCallExpr( callee(cxxMethodDecl(anyOf(hasName("size"), hasName("length")))), on(declRefExpr(to(varDecl(equalsBoundNode("var"))))))))))) .bind("data"), llvm::make_unique>("data", 1))); EXPECT_FALSE(matches( "struct StringRef { int size() const; const char* data() const; };" "void f(StringRef v) {" " v.data();" " v.size();" "}", cxxMemberCallExpr( callee(cxxMethodDecl(hasName("data"))), on(declRefExpr(to( varDecl(hasType(recordDecl(hasName("StringRef")))).bind("var")))), unless(hasAncestor(stmt(hasDescendant(cxxMemberCallExpr( callee(cxxMethodDecl(anyOf(hasName("size"), hasName("length")))), on(declRefExpr(to(varDecl(equalsBoundNode("var"))))))))))) .bind("data"))); } TEST(NullPointerConstants, Basic) { EXPECT_TRUE(matches("#define NULL ((void *)0)\n" "void *v1 = NULL;", expr(nullPointerConstant()))); EXPECT_TRUE(matches("void *v2 = nullptr;", expr(nullPointerConstant()))); EXPECT_TRUE(matches("void *v3 = __null;", expr(nullPointerConstant()))); EXPECT_TRUE(matches("char *cp = (char *)0;", expr(nullPointerConstant()))); EXPECT_TRUE(matches("int *ip = 0;", expr(nullPointerConstant()))); EXPECT_TRUE(notMatches("int i = 0;", expr(nullPointerConstant()))); } TEST(HasExternalFormalLinkage, Basic) { EXPECT_TRUE(matches("int a = 0;", namedDecl(hasExternalFormalLinkage()))); EXPECT_TRUE( notMatches("static int a = 0;", namedDecl(hasExternalFormalLinkage()))); EXPECT_TRUE(notMatches("static void f(void) { int a = 0; }", namedDecl(hasExternalFormalLinkage()))); EXPECT_TRUE(matches("void f(void) { int a = 0; }", namedDecl(hasExternalFormalLinkage()))); // Despite having internal semantic linkage, the anonymous namespace member // has external linkage because the member has a unique name in all // translation units. EXPECT_TRUE(matches("namespace { int a = 0; }", namedDecl(hasExternalFormalLinkage()))); } } // namespace ast_matchers } // namespace clang diff --git a/unittests/Format/FormatTestComments.cpp b/unittests/Format/FormatTestComments.cpp index 7916e65e5114..f3c45fac34a9 100644 --- a/unittests/Format/FormatTestComments.cpp +++ b/unittests/Format/FormatTestComments.cpp @@ -1,2576 +1,2583 @@ //===- unittest/Format/FormatTestComments.cpp - Formatting unit tests -----===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. 
//
//===----------------------------------------------------------------------===//

#include "clang/Format/Format.h"

#include "../Tooling/ReplacementTest.h"
#include "FormatTestUtils.h"

#include "clang/Frontend/TextDiagnosticPrinter.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/MemoryBuffer.h"
#include "gtest/gtest.h"

#define DEBUG_TYPE "format-test"

using clang::tooling::ReplacementTest;

namespace clang {
namespace format {
namespace {

FormatStyle getGoogleStyle() { return getGoogleStyle(FormatStyle::LK_Cpp); }

class FormatTestComments : public ::testing::Test {
protected:
  enum StatusCheck { SC_ExpectComplete, SC_ExpectIncomplete, SC_DoNotCheck };

  std::string format(llvm::StringRef Code,
                     const FormatStyle &Style = getLLVMStyle(),
                     StatusCheck CheckComplete = SC_ExpectComplete) {
    DEBUG(llvm::errs() << "---\n");
    DEBUG(llvm::errs() << Code << "\n\n");
    std::vector<tooling::Range> Ranges(1, tooling::Range(0, Code.size()));
    FormattingAttemptStatus Status;
    tooling::Replacements Replaces =
        reformat(Style, Code, Ranges, "<stdin>", &Status);
    if (CheckComplete != SC_DoNotCheck) {
      bool ExpectedCompleteFormat = CheckComplete == SC_ExpectComplete;
      EXPECT_EQ(ExpectedCompleteFormat, Status.FormatComplete)
          << Code << "\n\n";
    }
    ReplacementCount = Replaces.size();
    auto Result = applyAllReplacements(Code, Replaces);
    EXPECT_TRUE(static_cast<bool>(Result));
    DEBUG(llvm::errs() << "\n" << *Result << "\n\n");
    return *Result;
  }

  FormatStyle getLLVMStyleWithColumns(unsigned ColumnLimit) {
    FormatStyle Style = getLLVMStyle();
    Style.ColumnLimit = ColumnLimit;
    return Style;
  }

  void verifyFormat(llvm::StringRef Code,
                    const FormatStyle &Style = getLLVMStyle()) {
    EXPECT_EQ(Code.str(), format(test::messUp(Code), Style));
  }

  void verifyGoogleFormat(llvm::StringRef Code) {
    verifyFormat(Code, getGoogleStyle());
  }

  /// \brief Verify that clang-format does not crash on the given input.
  void verifyNoCrash(llvm::StringRef Code,
                     const FormatStyle &Style = getLLVMStyle()) {
    format(Code, Style, SC_DoNotCheck);
  }

  int ReplacementCount;
};

//===----------------------------------------------------------------------===//
// Tests for comments.
//===----------------------------------------------------------------------===// TEST_F(FormatTestComments, UnderstandsSingleLineComments) { verifyFormat("//* */"); verifyFormat("// line 1\n" "// line 2\n" "void f() {}\n"); verifyFormat("void f() {\n" " // Doesn't do anything\n" "}"); verifyFormat("SomeObject\n" " // Calling someFunction on SomeObject\n" " .someFunction();"); verifyFormat("auto result = SomeObject\n" " // Calling someFunction on SomeObject\n" " .someFunction();"); verifyFormat("void f(int i, // some comment (probably for i)\n" " int j, // some comment (probably for j)\n" " int k); // some comment (probably for k)"); verifyFormat("void f(int i,\n" " // some comment (probably for j)\n" " int j,\n" " // some comment (probably for k)\n" " int k);"); verifyFormat("int i // This is a fancy variable\n" " = 5; // with nicely aligned comment."); verifyFormat("// Leading comment.\n" "int a; // Trailing comment."); verifyFormat("int a; // Trailing comment\n" " // on 2\n" " // or 3 lines.\n" "int b;"); verifyFormat("int a; // Trailing comment\n" "\n" "// Leading comment.\n" "int b;"); verifyFormat("int a; // Comment.\n" " // More details.\n" "int bbbb; // Another comment."); verifyFormat( "int aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa; // comment\n" "int bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb; // comment\n" "int cccccccccccccccccccccccccccccc; // comment\n" "int ddd; // looooooooooooooooooooooooong comment\n" "int aaaaaaaaaaaaaaaaaaaaaaa; // comment\n" "int bbbbbbbbbbbbbbbbbbbbb; // comment\n" "int ccccccccccccccccccc; // comment"); verifyFormat("#include \"a\" // comment\n" "#include \"a/b/c\" // comment"); verifyFormat("#include // comment\n" "#include // comment"); EXPECT_EQ("#include \"a\" // comment\n" "#include \"a/b/c\" // comment", format("#include \\\n" " \"a\" // comment\n" "#include \"a/b/c\" // comment")); verifyFormat("enum E {\n" " // comment\n" " VAL_A, // comment\n" " VAL_B\n" "};"); EXPECT_EQ("enum A {\n" " // line a\n" " a,\n" " b, // line b\n" "\n" " // line c\n" " c\n" "};", format("enum A {\n" " // line a\n" " a,\n" " b, // line b\n" "\n" " // line c\n" " c\n" "};", getLLVMStyleWithColumns(20))); EXPECT_EQ("enum A {\n" " a, // line 1\n" " // line 2\n" "};", format("enum A {\n" " a, // line 1\n" " // line 2\n" "};", getLLVMStyleWithColumns(20))); EXPECT_EQ("enum A {\n" " a, // line 1\n" " // line 2\n" "};", format("enum A {\n" " a, // line 1\n" " // line 2\n" "};", getLLVMStyleWithColumns(20))); EXPECT_EQ("enum A {\n" " a, // line 1\n" " // line 2\n" " b\n" "};", format("enum A {\n" " a, // line 1\n" " // line 2\n" " b\n" "};", getLLVMStyleWithColumns(20))); EXPECT_EQ("enum A {\n" " a, // line 1\n" " // line 2\n" " b\n" "};", format("enum A {\n" " a, // line 1\n" " // line 2\n" " b\n" "};", getLLVMStyleWithColumns(20))); verifyFormat( "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa =\n" " bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb; // Trailing comment"); verifyFormat("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa =\n" " // Comment inside a statement.\n" " bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb;"); verifyFormat("SomeFunction(a,\n" " // comment\n" " b + x);"); verifyFormat("SomeFunction(a, a,\n" " // comment\n" " b + x);"); verifyFormat( "bool aaaaaaaaaaaaa = // comment\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa || aaaaaaaaaaaaaaaaaaaaaaaaaaaa ||\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa || aaaaaaaaaaaaaaaaaaaaaaaaaaaaa;"); verifyFormat("int aaaa; // aaaaa\n" "int aa; // aaaaaaa", getLLVMStyleWithColumns(20)); EXPECT_EQ("void f() { // This does something ..\n" "}\n" "int a; // This is 
unrelated", format("void f() { // This does something ..\n" " }\n" "int a; // This is unrelated")); EXPECT_EQ("class C {\n" " void f() { // This does something ..\n" " } // awesome..\n" "\n" " int a; // This is unrelated\n" "};", format("class C{void f() { // This does something ..\n" " } // awesome..\n" " \n" "int a; // This is unrelated\n" "};")); EXPECT_EQ("int i; // single line trailing comment", format("int i;\\\n// single line trailing comment")); verifyGoogleFormat("int a; // Trailing comment."); verifyFormat("someFunction(anotherFunction( // Force break.\n" " parameter));"); verifyGoogleFormat("#endif // HEADER_GUARD"); verifyFormat("const char *test[] = {\n" " // A\n" " \"aaaa\",\n" " // B\n" " \"aaaaa\"};"); verifyGoogleFormat( "aaaaaaaaaaaaaaaaaaaaaaaaaa(\n" " aaaaaaaaaaaaaaaaaaaaaa); // 81_cols_with_this_comment"); EXPECT_EQ("D(a, {\n" " // test\n" " int a;\n" "});", format("D(a, {\n" "// test\n" "int a;\n" "});")); EXPECT_EQ("lineWith(); // comment\n" "// at start\n" "otherLine();", format("lineWith(); // comment\n" "// at start\n" "otherLine();")); EXPECT_EQ("lineWith(); // comment\n" "/*\n" " * at start */\n" "otherLine();", format("lineWith(); // comment\n" "/*\n" " * at start */\n" "otherLine();")); EXPECT_EQ("lineWith(); // comment\n" " // at start\n" "otherLine();", format("lineWith(); // comment\n" " // at start\n" "otherLine();")); EXPECT_EQ("lineWith(); // comment\n" "// at start\n" "otherLine(); // comment", format("lineWith(); // comment\n" "// at start\n" "otherLine(); // comment")); EXPECT_EQ("lineWith();\n" "// at start\n" "otherLine(); // comment", format("lineWith();\n" " // at start\n" "otherLine(); // comment")); EXPECT_EQ("// first\n" "// at start\n" "otherLine(); // comment", format("// first\n" " // at start\n" "otherLine(); // comment")); EXPECT_EQ("f();\n" "// first\n" "// at start\n" "otherLine(); // comment", format("f();\n" "// first\n" " // at start\n" "otherLine(); // comment")); verifyFormat("f(); // comment\n" "// first\n" "// at start\n" "otherLine();"); EXPECT_EQ("f(); // comment\n" "// first\n" "// at start\n" "otherLine();", format("f(); // comment\n" "// first\n" " // at start\n" "otherLine();")); EXPECT_EQ("f(); // comment\n" " // first\n" "// at start\n" "otherLine();", format("f(); // comment\n" " // first\n" "// at start\n" "otherLine();")); EXPECT_EQ("void f() {\n" " lineWith(); // comment\n" " // at start\n" "}", format("void f() {\n" " lineWith(); // comment\n" " // at start\n" "}")); EXPECT_EQ("int xy; // a\n" "int z; // b", format("int xy; // a\n" "int z; //b")); EXPECT_EQ("int xy; // a\n" "int z; // bb", format("int xy; // a\n" "int z; //bb", getLLVMStyleWithColumns(12))); verifyFormat("#define A \\\n" " int i; /* iiiiiiiiiiiiiiiiiiiii */ \\\n" " int jjjjjjjjjjjjjjjjjjjjjjjj; /* */", getLLVMStyleWithColumns(60)); verifyFormat( "#define A \\\n" " int i; /* iiiiiiiiiiiiiiiiiiiii */ \\\n" " int jjjjjjjjjjjjjjjjjjjjjjjj; /* */", getLLVMStyleWithColumns(61)); verifyFormat("if ( // This is some comment\n" " x + 3) {\n" "}"); EXPECT_EQ("if ( // This is some comment\n" " // spanning two lines\n" " x + 3) {\n" "}", format("if( // This is some comment\n" " // spanning two lines\n" " x + 3) {\n" "}")); verifyNoCrash("/\\\n/"); verifyNoCrash("/\\\n* */"); // The 0-character somehow makes the lexer return a proper comment. 
verifyNoCrash(StringRef("/*\\\0\n/", 6)); } TEST_F(FormatTestComments, KeepsParameterWithTrailingCommentsOnTheirOwnLine) { EXPECT_EQ("SomeFunction(a,\n" " b, // comment\n" " c);", format("SomeFunction(a,\n" " b, // comment\n" " c);")); EXPECT_EQ("SomeFunction(a, b,\n" " // comment\n" " c);", format("SomeFunction(a,\n" " b,\n" " // comment\n" " c);")); EXPECT_EQ("SomeFunction(a, b, // comment (unclear relation)\n" " c);", format("SomeFunction(a, b, // comment (unclear relation)\n" " c);")); EXPECT_EQ("SomeFunction(a, // comment\n" " b,\n" " c); // comment", format("SomeFunction(a, // comment\n" " b,\n" " c); // comment")); EXPECT_EQ("aaaaaaaaaa(aaaa(aaaa,\n" " aaaa), //\n" " aaaa, bbbbb);", format("aaaaaaaaaa(aaaa(aaaa,\n" "aaaa), //\n" "aaaa, bbbbb);")); } TEST_F(FormatTestComments, RemovesTrailingWhitespaceOfComments) { EXPECT_EQ("// comment", format("// comment ")); EXPECT_EQ("int aaaaaaa, bbbbbbb; // comment", format("int aaaaaaa, bbbbbbb; // comment ", getLLVMStyleWithColumns(33))); EXPECT_EQ("// comment\\\n", format("// comment\\\n \t \v \f ")); EXPECT_EQ("// comment \\\n", format("// comment \\\n \t \v \f ")); } TEST_F(FormatTestComments, UnderstandsBlockComments) { verifyFormat("f(/*noSpaceAfterParameterNamingComment=*/true);"); verifyFormat("void f() { g(/*aaa=*/x, /*bbb=*/!y, /*c=*/::c); }"); EXPECT_EQ("f(aaaaaaaaaaaaaaaaaaaaaaaaa, /* Trailing comment for aa... */\n" " bbbbbbbbbbbbbbbbbbbbbbbbb);", format("f(aaaaaaaaaaaaaaaaaaaaaaaaa , \\\n" "/* Trailing comment for aa... */\n" " bbbbbbbbbbbbbbbbbbbbbbbbb);")); EXPECT_EQ( "f(aaaaaaaaaaaaaaaaaaaaaaaaa,\n" " /* Leading comment for bb... */ bbbbbbbbbbbbbbbbbbbbbbbbb);", format("f(aaaaaaaaaaaaaaaaaaaaaaaaa , \n" "/* Leading comment for bb... */ bbbbbbbbbbbbbbbbbbbbbbbbb);")); EXPECT_EQ( "void aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(\n" " aaaaaaaaaaaaaaaaaa,\n" " aaaaaaaaaaaaaaaaaa) { /*aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa*/\n" "}", format("void aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(\n" " aaaaaaaaaaaaaaaaaa ,\n" " aaaaaaaaaaaaaaaaaa) { /*aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa*/\n" "}")); verifyFormat("f(/* aaaaaaaaaaaaaaaaaa = */\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa);"); FormatStyle NoBinPacking = getLLVMStyle(); NoBinPacking.BinPackParameters = false; verifyFormat("aaaaaaaa(/* parameter 1 */ aaaaaa,\n" " /* parameter 2 */ aaaaaa,\n" " /* parameter 3 */ aaaaaa,\n" " /* parameter 4 */ aaaaaa);", NoBinPacking); // Aligning block comments in macros. verifyGoogleFormat("#define A \\\n" " int i; /*a*/ \\\n" " int jjj; /*b*/"); } TEST_F(FormatTestComments, AlignsBlockComments) { EXPECT_EQ("/*\n" " * Really multi-line\n" " * comment.\n" " */\n" "void f() {}", format(" /*\n" " * Really multi-line\n" " * comment.\n" " */\n" " void f() {}")); EXPECT_EQ("class C {\n" " /*\n" " * Another multi-line\n" " * comment.\n" " */\n" " void f() {}\n" "};", format("class C {\n" "/*\n" " * Another multi-line\n" " * comment.\n" " */\n" "void f() {}\n" "};")); EXPECT_EQ("/*\n" " 1. This is a comment with non-trivial formatting.\n" " 1.1. We have to indent/outdent all lines equally\n" " 1.1.1. to keep the formatting.\n" " */", format(" /*\n" " 1. This is a comment with non-trivial formatting.\n" " 1.1. We have to indent/outdent all lines equally\n" " 1.1.1. to keep the formatting.\n" " */")); EXPECT_EQ("/*\n" "Don't try to outdent if there's not enough indentation.\n" "*/", format(" /*\n" " Don't try to outdent if there's not enough indentation.\n" " */")); EXPECT_EQ("int i; /* Comment with empty...\n" " *\n" " * line. 
*/", format("int i; /* Comment with empty...\n" " *\n" " * line. */")); EXPECT_EQ("int foobar = 0; /* comment */\n" "int bar = 0; /* multiline\n" " comment 1 */\n" "int baz = 0; /* multiline\n" " comment 2 */\n" "int bzz = 0; /* multiline\n" " comment 3 */", format("int foobar = 0; /* comment */\n" "int bar = 0; /* multiline\n" " comment 1 */\n" "int baz = 0; /* multiline\n" " comment 2 */\n" "int bzz = 0; /* multiline\n" " comment 3 */")); EXPECT_EQ("int foobar = 0; /* comment */\n" "int bar = 0; /* multiline\n" " comment */\n" "int baz = 0; /* multiline\n" "comment */", format("int foobar = 0; /* comment */\n" "int bar = 0; /* multiline\n" "comment */\n" "int baz = 0; /* multiline\n" "comment */")); } TEST_F(FormatTestComments, CommentReflowingCanBeTurnedOff) { FormatStyle Style = getLLVMStyleWithColumns(20); Style.ReflowComments = false; verifyFormat("// aaaaaaaaa aaaaaaaaaa aaaaaaaaaa", Style); verifyFormat("/* aaaaaaaaa aaaaaaaaaa aaaaaaaaaa */", Style); } TEST_F(FormatTestComments, CorrectlyHandlesLengthOfBlockComments) { EXPECT_EQ("double *x; /* aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa */", format("double *x; /* aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa */")); EXPECT_EQ( "void ffffffffffff(\n" " int aaaaaaaa, int bbbbbbbb,\n" " int cccccccccccc) { /*\n" " aaaaaaaaaa\n" " aaaaaaaaaaaaa\n" " bbbbbbbbbbbbbb\n" " bbbbbbbbbb\n" " */\n" "}", format("void ffffffffffff(int aaaaaaaa, int bbbbbbbb, int cccccccccccc)\n" "{ /*\n" " aaaaaaaaaa aaaaaaaaaaaaa\n" " bbbbbbbbbbbbbb bbbbbbbbbb\n" " */\n" "}", getLLVMStyleWithColumns(40))); } TEST_F(FormatTestComments, DontBreakNonTrailingBlockComments) { EXPECT_EQ("void ffffffffff(\n" " int aaaaa /* test */);", format("void ffffffffff(int aaaaa /* test */);", getLLVMStyleWithColumns(35))); } TEST_F(FormatTestComments, SplitsLongCxxComments) { EXPECT_EQ("// A comment that\n" "// doesn't fit on\n" "// one line", format("// A comment that doesn't fit on one line", getLLVMStyleWithColumns(20))); EXPECT_EQ("/// A comment that\n" "/// doesn't fit on\n" "/// one line", format("/// A comment that doesn't fit on one line", getLLVMStyleWithColumns(20))); EXPECT_EQ("//! A comment that\n" "//! doesn't fit on\n" "//! one line", format("//! A comment that doesn't fit on one line", getLLVMStyleWithColumns(20))); EXPECT_EQ("// a b c d\n" "// e f g\n" "// h i j k", format("// a b c d e f g h i j k", getLLVMStyleWithColumns(10))); EXPECT_EQ( "// a b c d\n" "// e f g\n" "// h i j k", format("\\\n// a b c d e f g h i j k", getLLVMStyleWithColumns(10))); EXPECT_EQ("if (true) // A comment that\n" " // doesn't fit on\n" " // one line", format("if (true) // A comment that doesn't fit on one line ", getLLVMStyleWithColumns(30))); EXPECT_EQ("// Don't_touch_leading_whitespace", format("// Don't_touch_leading_whitespace", getLLVMStyleWithColumns(20))); EXPECT_EQ("// Add leading\n" "// whitespace", format("//Add leading whitespace", getLLVMStyleWithColumns(20))); EXPECT_EQ("/// Add leading\n" "/// whitespace", format("///Add leading whitespace", getLLVMStyleWithColumns(20))); EXPECT_EQ("//! Add leading\n" "//! 
whitespace", format("//!Add leading whitespace", getLLVMStyleWithColumns(20))); EXPECT_EQ("// whitespace", format("//whitespace", getLLVMStyle())); EXPECT_EQ("// Even if it makes the line exceed the column\n" "// limit", format("//Even if it makes the line exceed the column limit", getLLVMStyleWithColumns(51))); EXPECT_EQ("//--But not here", format("//--But not here", getLLVMStyle())); EXPECT_EQ("/// line 1\n" "// add leading whitespace", format("/// line 1\n" "//add leading whitespace", getLLVMStyleWithColumns(30))); EXPECT_EQ("/// line 1\n" "/// line 2\n" "//! line 3\n" "//! line 4\n" "//! line 5\n" "// line 6\n" "// line 7", format("///line 1\n" "///line 2\n" "//! line 3\n" "//!line 4\n" "//!line 5\n" "// line 6\n" "//line 7", getLLVMStyleWithColumns(20))); EXPECT_EQ("// aa bb cc dd", format("// aa bb cc dd ", getLLVMStyleWithColumns(15))); EXPECT_EQ("// A comment before\n" "// a macro\n" "// definition\n" "#define a b", format("// A comment before a macro definition\n" "#define a b", getLLVMStyleWithColumns(20))); EXPECT_EQ("void ffffff(\n" " int aaaaaaaaa, // wwww\n" " int bbbbbbbbbb, // xxxxxxx\n" " // yyyyyyyyyy\n" " int c, int d, int e) {}", format("void ffffff(\n" " int aaaaaaaaa, // wwww\n" " int bbbbbbbbbb, // xxxxxxx yyyyyyyyyy\n" " int c, int d, int e) {}", getLLVMStyleWithColumns(40))); EXPECT_EQ("//\t aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", format("//\t aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", getLLVMStyleWithColumns(20))); EXPECT_EQ( "#define XXX // a b c d\n" " // e f g h", format("#define XXX // a b c d e f g h", getLLVMStyleWithColumns(22))); EXPECT_EQ( "#define XXX // q w e r\n" " // t y u i", format("#define XXX //q w e r t y u i", getLLVMStyleWithColumns(22))); EXPECT_EQ("{\n" " //\n" " //\\\n" " // long 1 2 3 4\n" " // 5\n" "}", format("{\n" " //\n" " //\\\n" " // long 1 2 3 4 5\n" "}", getLLVMStyleWithColumns(20))); } TEST_F(FormatTestComments, PreservesHangingIndentInCxxComments) { EXPECT_EQ("// A comment\n" "// that doesn't\n" "// fit on one\n" "// line", format("// A comment that doesn't fit on one line", getLLVMStyleWithColumns(20))); EXPECT_EQ("/// A comment\n" "/// that doesn't\n" "/// fit on one\n" "/// line", format("/// A comment that doesn't fit on one line", getLLVMStyleWithColumns(20))); } TEST_F(FormatTestComments, DontSplitLineCommentsWithEscapedNewlines) { EXPECT_EQ("// aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\\\n" "// aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\\\n" "// aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", format("// aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\\\n" "// aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\\\n" "// aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa")); EXPECT_EQ("int a; // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\\\n" " // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\\\n" " // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", format("int a; // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\\\n" " // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\\\n" " // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", getLLVMStyleWithColumns(50))); // FIXME: One day we might want to implement adjustment of leading whitespace // of the consecutive lines in this kind of comment: EXPECT_EQ("double\n" " a; // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\\\n" " // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\\\n" " // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", format("double a; // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\\\n" " // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\\\n" " // AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", getLLVMStyleWithColumns(49))); } TEST_F(FormatTestComments, DontSplitLineCommentsWithPragmas) { 
FormatStyle Pragmas = getLLVMStyleWithColumns(30); Pragmas.CommentPragmas = "^ IWYU pragma:"; EXPECT_EQ( "// IWYU pragma: aaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbb", format("// IWYU pragma: aaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbb", Pragmas)); EXPECT_EQ( "/* IWYU pragma: aaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbb */", format("/* IWYU pragma: aaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbb */", Pragmas)); } TEST_F(FormatTestComments, PriorityOfCommentBreaking) { EXPECT_EQ("if (xxx ==\n" " yyy && // aaaaaaaaaaaa bbbbbbbbb\n" " zzz)\n" " q();", format("if (xxx == yyy && // aaaaaaaaaaaa bbbbbbbbb\n" " zzz) q();", getLLVMStyleWithColumns(40))); EXPECT_EQ("if (xxxxxxxxxx ==\n" " yyy && // aaaaaa bbbbbbbb cccc\n" " zzz)\n" " q();", format("if (xxxxxxxxxx == yyy && // aaaaaa bbbbbbbb cccc\n" " zzz) q();", getLLVMStyleWithColumns(40))); EXPECT_EQ("if (xxxxxxxxxx &&\n" " yyy || // aaaaaa bbbbbbbb cccc\n" " zzz)\n" " q();", format("if (xxxxxxxxxx && yyy || // aaaaaa bbbbbbbb cccc\n" " zzz) q();", getLLVMStyleWithColumns(40))); EXPECT_EQ("fffffffff(\n" " &xxx, // aaaaaaaaaaaa bbbbbbbbbbb\n" " zzz);", format("fffffffff(&xxx, // aaaaaaaaaaaa bbbbbbbbbbb\n" " zzz);", getLLVMStyleWithColumns(40))); } TEST_F(FormatTestComments, MultiLineCommentsInDefines) { EXPECT_EQ("#define A(x) /* \\\n" " a comment \\\n" " inside */ \\\n" " f();", format("#define A(x) /* \\\n" " a comment \\\n" " inside */ \\\n" " f();", getLLVMStyleWithColumns(17))); EXPECT_EQ("#define A( \\\n" " x) /* \\\n" " a comment \\\n" " inside */ \\\n" " f();", format("#define A( \\\n" " x) /* \\\n" " a comment \\\n" " inside */ \\\n" " f();", getLLVMStyleWithColumns(17))); } TEST_F(FormatTestComments, ParsesCommentsAdjacentToPPDirectives) { EXPECT_EQ("namespace {}\n// Test\n#define A", format("namespace {}\n // Test\n#define A")); EXPECT_EQ("namespace {}\n/* Test */\n#define A", format("namespace {}\n /* Test */\n#define A")); EXPECT_EQ("namespace {}\n/* Test */ #define A", format("namespace {}\n /* Test */ #define A")); } TEST_F(FormatTestComments, KeepsLevelOfCommentBeforePPDirective) { // Keep the current level if the comment was originally not aligned with // the preprocessor directive. EXPECT_EQ("void f() {\n" " int i;\n" " /* comment */\n" "#ifdef A\n" " int j;\n" "}", format("void f() {\n" " int i;\n" " /* comment */\n" "#ifdef A\n" " int j;\n" "}")); EXPECT_EQ("void f() {\n" " int i;\n" " /* comment */\n" "\n" "#ifdef A\n" " int j;\n" "}", format("void f() {\n" " int i;\n" " /* comment */\n" "\n" "#ifdef A\n" " int j;\n" "}")); // Keep the current level if there is an empty line between the comment and // the preprocessor directive. EXPECT_EQ("void f() {\n" " int i;\n" " /* comment */\n" "\n" "#ifdef A\n" " int j;\n" "}", format("void f() {\n" " int i;\n" "/* comment */\n" "\n" "#ifdef A\n" " int j;\n" "}")); // Align with the preprocessor directive if the comment was originally aligned // with the preprocessor directive. EXPECT_EQ("void f() {\n" " int i;\n" "/* comment */\n" "#ifdef A\n" " int j;\n" "}", format("void f() {\n" " int i;\n" "/* comment */\n" "#ifdef A\n" " int j;\n" "}")); } TEST_F(FormatTestComments, SplitsLongLinesInComments) { EXPECT_EQ("/* This is a long\n" " * comment that\n" " * doesn't\n" " * fit on one line.\n" " */", format("/* " "This is a long " "comment that " "doesn't " "fit on one line. 
*/", getLLVMStyleWithColumns(20))); EXPECT_EQ( "/* a b c d\n" " * e f g\n" " * h i j k\n" " */", format("/* a b c d e f g h i j k */", getLLVMStyleWithColumns(10))); EXPECT_EQ( "/* a b c d\n" " * e f g\n" " * h i j k\n" " */", format("\\\n/* a b c d e f g h i j k */", getLLVMStyleWithColumns(10))); EXPECT_EQ("/*\n" "This is a long\n" "comment that doesn't\n" "fit on one line.\n" "*/", format("/*\n" "This is a long " "comment that doesn't " "fit on one line. \n" "*/", getLLVMStyleWithColumns(20))); EXPECT_EQ("/*\n" " * This is a long\n" " * comment that\n" " * doesn't fit on\n" " * one line.\n" " */", format("/* \n" " * This is a long " " comment that " " doesn't fit on " " one line. \n" " */", getLLVMStyleWithColumns(20))); EXPECT_EQ("/*\n" " * This_is_a_comment_with_words_that_dont_fit_on_one_line\n" " * so_it_should_be_broken\n" " * wherever_a_space_occurs\n" " */", format("/*\n" " * This_is_a_comment_with_words_that_dont_fit_on_one_line " " so_it_should_be_broken " " wherever_a_space_occurs \n" " */", getLLVMStyleWithColumns(20))); EXPECT_EQ("/*\n" " * This_comment_can_not_be_broken_into_lines\n" " */", format("/*\n" " * This_comment_can_not_be_broken_into_lines\n" " */", getLLVMStyleWithColumns(20))); EXPECT_EQ("{\n" " /*\n" " This is another\n" " long comment that\n" " doesn't fit on one\n" " line 1234567890\n" " */\n" "}", format("{\n" "/*\n" "This is another " " long comment that " " doesn't fit on one" " line 1234567890\n" "*/\n" "}", getLLVMStyleWithColumns(20))); EXPECT_EQ("{\n" " /*\n" " * This i s\n" " * another comment\n" " * t hat doesn' t\n" " * fit on one l i\n" " * n e\n" " */\n" "}", format("{\n" "/*\n" " * This i s" " another comment" " t hat doesn' t" " fit on one l i" " n e\n" " */\n" "}", getLLVMStyleWithColumns(20))); EXPECT_EQ("/*\n" " * This is a long\n" " * comment that\n" " * doesn't fit on\n" " * one line\n" " */", format(" /*\n" " * This is a long comment that doesn't fit on one line\n" " */", getLLVMStyleWithColumns(20))); EXPECT_EQ("{\n" " if (something) /* This is a\n" " long\n" " comment */\n" " ;\n" "}", format("{\n" " if (something) /* This is a long comment */\n" " ;\n" "}", getLLVMStyleWithColumns(30))); EXPECT_EQ("/* A comment before\n" " * a macro\n" " * definition */\n" "#define a b", format("/* A comment before a macro definition */\n" "#define a b", getLLVMStyleWithColumns(20))); EXPECT_EQ("/* some comment\n" " * a comment that\n" " * we break another\n" " * comment we have\n" " * to break a left\n" " * comment\n" " */", format(" /* some comment\n" " * a comment that we break\n" " * another comment we have to break\n" "* a left comment\n" " */", getLLVMStyleWithColumns(20))); EXPECT_EQ("/**\n" " * multiline block\n" " * comment\n" " *\n" " */", format("/**\n" " * multiline block comment\n" " *\n" " */", getLLVMStyleWithColumns(20))); EXPECT_EQ("/*\n" "\n" "\n" " */\n", format(" /* \n" " \n" " \n" " */\n")); EXPECT_EQ("/* a a */", format("/* a a */", getLLVMStyleWithColumns(15))); EXPECT_EQ("/* a a bc */", format("/* a a bc */", getLLVMStyleWithColumns(15))); EXPECT_EQ("/* aaa aaa\n" " * aaaaa */", format("/* aaa aaa aaaaa */", getLLVMStyleWithColumns(15))); EXPECT_EQ("/* aaa aaa\n" " * aaaaa */", format("/* aaa aaa aaaaa */", getLLVMStyleWithColumns(15))); } TEST_F(FormatTestComments, SplitsLongLinesInCommentsInPreprocessor) { EXPECT_EQ("#define X \\\n" " /* \\\n" " Test \\\n" " Macro comment \\\n" " with a long \\\n" " line \\\n" " */ \\\n" " A + B", format("#define X \\\n" " /*\n" " Test\n" " Macro comment with a long line\n" " */ \\\n" " A + B", 
getLLVMStyleWithColumns(20))); EXPECT_EQ("#define X \\\n" " /* Macro comment \\\n" " with a long \\\n" " line */ \\\n" " A + B", format("#define X \\\n" " /* Macro comment with a long\n" " line */ \\\n" " A + B", getLLVMStyleWithColumns(20))); EXPECT_EQ("#define X \\\n" " /* Macro comment \\\n" " * with a long \\\n" " * line */ \\\n" " A + B", format("#define X \\\n" " /* Macro comment with a long line */ \\\n" " A + B", getLLVMStyleWithColumns(20))); } TEST_F(FormatTestComments, KeepsTrailingPPCommentsAndSectionCommentsSeparate) { verifyFormat("#ifdef A // line about A\n" "// section comment\n" "#endif", getLLVMStyleWithColumns(80)); verifyFormat("#ifdef A // line 1 about A\n" " // line 2 about A\n" "// section comment\n" "#endif", getLLVMStyleWithColumns(80)); EXPECT_EQ("#ifdef A // line 1 about A\n" " // line 2 about A\n" "// section comment\n" "#endif", format("#ifdef A // line 1 about A\n" " // line 2 about A\n" "// section comment\n" "#endif", getLLVMStyleWithColumns(80))); verifyFormat("int f() {\n" " int i;\n" "#ifdef A // comment about A\n" " // section comment 1\n" " // section comment 2\n" " i = 2;\n" "#else // comment about #else\n" " // section comment 3\n" " i = 4;\n" "#endif\n" "}", getLLVMStyleWithColumns(80)); } TEST_F(FormatTestComments, AlignsPPElseEndifComments) { verifyFormat("#if A\n" "#else // A\n" "int iiii;\n" "#endif // B", getLLVMStyleWithColumns(20)); verifyFormat("#if A\n" "#else // A\n" "int iiii; // CC\n" "#endif // B", getLLVMStyleWithColumns(20)); EXPECT_EQ("#if A\n" "#else // A1\n" " // A2\n" "int ii;\n" "#endif // B", format("#if A\n" "#else // A1\n" " // A2\n" "int ii;\n" "#endif // B", getLLVMStyleWithColumns(20))); } TEST_F(FormatTestComments, CommentsInStaticInitializers) { EXPECT_EQ( "static SomeType type = {aaaaaaaaaaaaaaaaaaaa, /* comment */\n" " aaaaaaaaaaaaaaaaaaaa /* comment */,\n" " /* comment */ aaaaaaaaaaaaaaaaaaaa,\n" " aaaaaaaaaaaaaaaaaaaa, // comment\n" " aaaaaaaaaaaaaaaaaaaa};", format("static SomeType type = { aaaaaaaaaaaaaaaaaaaa , /* comment */\n" " aaaaaaaaaaaaaaaaaaaa /* comment */ ,\n" " /* comment */ aaaaaaaaaaaaaaaaaaaa ,\n" " aaaaaaaaaaaaaaaaaaaa , // comment\n" " aaaaaaaaaaaaaaaaaaaa };")); verifyFormat("static SomeType type = {aaaaaaaaaaa, // comment for aa...\n" " bbbbbbbbbbb, ccccccccccc};"); verifyFormat("static SomeType type = {aaaaaaaaaaa,\n" " // comment for bb....\n" " bbbbbbbbbbb, ccccccccccc};"); verifyGoogleFormat( "static SomeType type = {aaaaaaaaaaa, // comment for aa...\n" " bbbbbbbbbbb, ccccccccccc};"); verifyGoogleFormat("static SomeType type = {aaaaaaaaaaa,\n" " // comment for bb....\n" " bbbbbbbbbbb, ccccccccccc};"); verifyFormat("S s = {{a, b, c}, // Group #1\n" " {d, e, f}, // Group #2\n" " {g, h, i}}; // Group #3"); verifyFormat("S s = {{// Group #1\n" " a, b, c},\n" " {// Group #2\n" " d, e, f},\n" " {// Group #3\n" " g, h, i}};"); EXPECT_EQ("S s = {\n" " // Some comment\n" " a,\n" "\n" " // Comment after empty line\n" " b}", format("S s = {\n" " // Some comment\n" " a,\n" " \n" " // Comment after empty line\n" " b\n" "}")); EXPECT_EQ("S s = {\n" " /* Some comment */\n" " a,\n" "\n" " /* Comment after empty line */\n" " b}", format("S s = {\n" " /* Some comment */\n" " a,\n" " \n" " /* Comment after empty line */\n" " b\n" "}")); verifyFormat("const uint8_t aaaaaaaaaaaaaaaaaaaaaa[0] = {\n" " 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // comment\n" " 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // comment\n" " 0x00, 0x00, 0x00, 0x00}; // comment\n"); } TEST_F(FormatTestComments, LineCommentsAfterRightBrace) { EXPECT_EQ("if 
(true) { // comment about branch\n" " // comment about f\n" " f();\n" "}", format("if (true) { // comment about branch\n" " // comment about f\n" " f();\n" "}", getLLVMStyleWithColumns(80))); EXPECT_EQ("if (1) { // if line 1\n" " // if line 2\n" " // if line 3\n" " // f line 1\n" " // f line 2\n" " f();\n" "} else { // else line 1\n" " // else line 2\n" " // else line 3\n" " // g line 1\n" " g();\n" "}", format("if (1) { // if line 1\n" " // if line 2\n" " // if line 3\n" " // f line 1\n" " // f line 2\n" " f();\n" "} else { // else line 1\n" " // else line 2\n" " // else line 3\n" " // g line 1\n" " g();\n" "}")); EXPECT_EQ("do { // line 1\n" " // line 2\n" " // line 3\n" " f();\n" "} while (true);", format("do { // line 1\n" " // line 2\n" " // line 3\n" " f();\n" "} while (true);", getLLVMStyleWithColumns(80))); EXPECT_EQ("while (a < b) { // line 1\n" " // line 2\n" " // line 3\n" " f();\n" "}", format("while (a < b) {// line 1\n" " // line 2\n" " // line 3\n" " f();\n" "}", getLLVMStyleWithColumns(80))); } TEST_F(FormatTestComments, ReflowsComments) { // Break a long line and reflow with the full next line. EXPECT_EQ("// long long long\n" "// long long", format("// long long long long\n" "// long", getLLVMStyleWithColumns(20))); // Keep the trailing newline while reflowing. EXPECT_EQ("// long long long\n" "// long long\n", format("// long long long long\n" "// long\n", getLLVMStyleWithColumns(20))); // Break a long line and reflow with a part of the next line. EXPECT_EQ("// long long long\n" "// long long\n" "// long_long", format("// long long long long\n" "// long long_long", getLLVMStyleWithColumns(20))); // Break but do not reflow if the first word from the next line is too long. EXPECT_EQ("// long long long\n" "// long\n" "// long_long_long\n", format("// long long long long\n" "// long_long_long\n", getLLVMStyleWithColumns(20))); // Don't break or reflow short lines. verifyFormat("// long\n" "// long long long lo\n" "// long long long lo\n" "// long", getLLVMStyleWithColumns(20)); // Keep prefixes and decorations while reflowing. EXPECT_EQ("/// long long long\n" "/// long long\n", format("/// long long long long\n" "/// long\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("//! long long long\n" "//! long long\n", format("//! long long long long\n" "//! long\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("/* long long long\n" " * long long */", format("/* long long long long\n" " * long */", getLLVMStyleWithColumns(20))); EXPECT_EQ("///< long long long\n" "///< long long\n", format("///< long long long long\n" "///< long\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("//!< long long long\n" "//!< long long\n", format("//!< long long long long\n" "//!< long\n", getLLVMStyleWithColumns(20))); // Don't bring leading whitespace up while reflowing. EXPECT_EQ("/* long long long\n" " * long long long\n" " */", format("/* long long long long\n" " * long long\n" " */", getLLVMStyleWithColumns(20))); // Reflow the last line of a block comment with its trailing '*/'. EXPECT_EQ("/* long long long\n" " long long */", format("/* long long long long\n" " long */", getLLVMStyleWithColumns(20))); // Reflow two short lines; keep the postfix of the last one. EXPECT_EQ("/* long long long\n" " * long long long */", format("/* long long long long\n" " * long\n" " * long */", getLLVMStyleWithColumns(20))); // Put the postfix of the last short reflow line on a newline if it doesn't // fit. 
EXPECT_EQ("/* long long long\n" " * long long longg\n" " */", format("/* long long long long\n" " * long\n" " * longg */", getLLVMStyleWithColumns(20))); // Reflow lines with leading whitespace. EXPECT_EQ("{\n" " /*\n" " * long long long\n" " * long long long\n" " * long long long\n" " */\n" "}", format("{\n" "/*\n" " * long long long long\n" " * long\n" " * long long long long\n" " */\n" "}", getLLVMStyleWithColumns(20))); // Break single line block comments that are first in the line with ' *' // decoration. EXPECT_EQ("/* long long long\n" " * long */", format("/* long long long long */", getLLVMStyleWithColumns(20))); // Break single line block comment that are not first in the line with ' ' // decoration. EXPECT_EQ("int i; /* long long\n" " long */", format("int i; /* long long long */", getLLVMStyleWithColumns(20))); // Reflow a line that goes just over the column limit. EXPECT_EQ("// long long long\n" "// lon long", format("// long long long lon\n" "// long", getLLVMStyleWithColumns(20))); // Stop reflowing if the next line has a different indentation than the // previous line. EXPECT_EQ("// long long long\n" "// long\n" "// long long\n" "// long", format("// long long long long\n" "// long long\n" "// long", getLLVMStyleWithColumns(20))); // Reflow into the last part of a really long line that has been broken into // multiple lines. EXPECT_EQ("// long long long\n" "// long long long\n" "// long long long\n", format("// long long long long long long long long\n" "// long\n", getLLVMStyleWithColumns(20))); // Break the first line, then reflow the beginning of the second and third // line up. EXPECT_EQ("// long long long\n" "// lon1 lon2 lon2\n" "// lon2 lon3 lon3", format("// long long long lon1\n" "// lon2 lon2 lon2\n" "// lon3 lon3", getLLVMStyleWithColumns(20))); // Reflow the beginning of the second line, then break the rest. EXPECT_EQ("// long long long\n" "// lon1 lon2 lon2\n" "// lon2 lon2 lon2\n" "// lon3", format("// long long long lon1\n" "// lon2 lon2 lon2 lon2 lon2 lon3", getLLVMStyleWithColumns(20))); // Shrink the first line, then reflow the second line up. EXPECT_EQ("// long long long", format("// long long\n" "// long", getLLVMStyleWithColumns(20))); // Don't shrink leading whitespace. EXPECT_EQ("int i; /// a", format("int i; /// a", getLLVMStyleWithColumns(20))); // Shrink trailing whitespace if there is no postfix and reflow. EXPECT_EQ("// long long long\n" "// long long", format("// long long long long \n" "// long", getLLVMStyleWithColumns(20))); // Shrink trailing whitespace to a single one if there is postfix. EXPECT_EQ("/* long long long */", format("/* long long long */", getLLVMStyleWithColumns(20))); // Break a block comment postfix if exceeding the line limit. EXPECT_EQ("/* long\n" " */", format("/* long */", getLLVMStyleWithColumns(20))); // Reflow indented comments. EXPECT_EQ("{\n" " // long long long\n" " // long long\n" " int i; /* long lon\n" " g long\n" " */\n" "}", format("{\n" " // long long long long\n" " // long\n" " int i; /* long lon g\n" " long */\n" "}", getLLVMStyleWithColumns(20))); // Don't realign trailing comments after reflow has happened. EXPECT_EQ("// long long long\n" "// long long\n" "long i; // long", format("// long long long long\n" "// long\n" "long i; // long", getLLVMStyleWithColumns(20))); EXPECT_EQ("// long long long\n" "// longng long long\n" "// long lo", format("// long long long longng\n" "// long long long\n" "// lo", getLLVMStyleWithColumns(20))); // Reflow lines after a broken line. 
EXPECT_EQ("int a; // Trailing\n" " // comment on\n" " // 2 or 3\n" " // lines.\n", format("int a; // Trailing comment\n" " // on 2\n" " // or 3\n" " // lines.\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("/// This long line\n" "/// gets reflown.\n", format("/// This long line gets\n" "/// reflown.\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("//! This long line\n" "//! gets reflown.\n", format(" //! This long line gets\n" " //! reflown.\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("/* This long line\n" " * gets reflown.\n" " */\n", format("/* This long line gets\n" " * reflown.\n" " */\n", getLLVMStyleWithColumns(20))); // Reflow after indentation makes a line too long. EXPECT_EQ("{\n" " // long long long\n" " // lo long\n" "}\n", format("{\n" "// long long long lo\n" "// long\n" "}\n", getLLVMStyleWithColumns(20))); // Break and reflow multiple lines. EXPECT_EQ("/*\n" " * Reflow the end of\n" " * line by 11 22 33\n" " * 4.\n" " */\n", format("/*\n" " * Reflow the end of line\n" " * by\n" " * 11\n" " * 22\n" " * 33\n" " * 4.\n" " */\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("/// First line gets\n" "/// broken. Second\n" "/// line gets\n" "/// reflown and\n" "/// broken. Third\n" "/// gets reflown.\n", format("/// First line gets broken.\n" "/// Second line gets reflown and broken.\n" "/// Third gets reflown.\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("int i; // first long\n" " // long snd\n" " // long.\n", format("int i; // first long long\n" " // snd long.\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("{\n" " // first long line\n" " // line second\n" " // long line line\n" " // third long line\n" " // line\n" "}\n", format("{\n" " // first long line line\n" " // second long line line\n" " // third long line line\n" "}\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("int i; /* first line\n" " * second\n" " * line third\n" " * line\n" " */", format("int i; /* first line\n" " * second line\n" " * third line\n" " */", getLLVMStyleWithColumns(20))); // Reflow the last two lines of a section that starts with a line having // different indentation. EXPECT_EQ( "// long\n" "// long long long\n" "// long long", format("// long\n" "// long long long long\n" "// long", getLLVMStyleWithColumns(20))); // Keep the block comment endling '*/' while reflowing. EXPECT_EQ("/* Long long long\n" " * line short */\n", format("/* Long long long line\n" " * short */\n", getLLVMStyleWithColumns(20))); // Don't reflow between separate blocks of comments. EXPECT_EQ("/* First comment\n" " * block will */\n" "/* Snd\n" " */\n", format("/* First comment block\n" " * will */\n" "/* Snd\n" " */\n", getLLVMStyleWithColumns(20))); // Don't reflow across blank comment lines. EXPECT_EQ("int i; // This long\n" " // line gets\n" " // broken.\n" " //\n" " // keep.\n", format("int i; // This long line gets broken.\n" " // \n" " // keep.\n", getLLVMStyleWithColumns(20))); EXPECT_EQ("{\n" " /// long long long\n" " /// long long\n" " ///\n" " /// long\n" "}", format("{\n" " /// long long long long\n" " /// long\n" " ///\n" " /// long\n" "}", getLLVMStyleWithColumns(20))); EXPECT_EQ("//! long long long\n" "//! long\n" "\n" "//! long", format("//! long long long long\n" "\n" "//! 
long", getLLVMStyleWithColumns(20))); EXPECT_EQ("/* long long long\n" " long\n" "\n" " long */", format("/* long long long long\n" "\n" " long */", getLLVMStyleWithColumns(20))); EXPECT_EQ("/* long long long\n" " * long\n" " *\n" " * long */", format("/* long long long long\n" " *\n" " * long */", getLLVMStyleWithColumns(20))); // Don't reflow lines having content that is a single character. EXPECT_EQ("// long long long\n" "// long\n" "// l", format("// long long long long\n" "// l", getLLVMStyleWithColumns(20))); // Don't reflow lines starting with two punctuation characters. EXPECT_EQ("// long long long\n" "// long\n" "// ... --- ...", format( "// long long long long\n" "// ... --- ...", getLLVMStyleWithColumns(20))); // Don't reflow lines starting with '@'. EXPECT_EQ("// long long long\n" "// long\n" "// @param arg", format("// long long long long\n" "// @param arg", getLLVMStyleWithColumns(20))); // Don't reflow lines starting with 'TODO'. EXPECT_EQ("// long long long\n" "// long\n" "// TODO: long", format("// long long long long\n" "// TODO: long", getLLVMStyleWithColumns(20))); // Don't reflow lines starting with 'FIXME'. EXPECT_EQ("// long long long\n" "// long\n" "// FIXME: long", format("// long long long long\n" "// FIXME: long", getLLVMStyleWithColumns(20))); // Don't reflow lines starting with 'XXX'. EXPECT_EQ("// long long long\n" "// long\n" "// XXX: long", format("// long long long long\n" "// XXX: long", getLLVMStyleWithColumns(20))); // Don't reflow comment pragmas. EXPECT_EQ("// long long long\n" "// long\n" "// IWYU pragma:", format("// long long long long\n" "// IWYU pragma:", getLLVMStyleWithColumns(20))); EXPECT_EQ("/* long long long\n" " * long\n" " * IWYU pragma:\n" " */", format("/* long long long long\n" " * IWYU pragma:\n" " */", getLLVMStyleWithColumns(20))); // Reflow lines that have a non-punctuation character among their first 2 // characters. EXPECT_EQ("// long long long\n" "// long 'long'", format( "// long long long long\n" "// 'long'", getLLVMStyleWithColumns(20))); // Don't reflow between separate blocks of comments. EXPECT_EQ("/* First comment\n" " * block will */\n" "/* Snd\n" " */\n", format("/* First comment block\n" " * will */\n" "/* Snd\n" " */\n", getLLVMStyleWithColumns(20))); // Don't reflow lines having different indentation. EXPECT_EQ("// long long long\n" "// long\n" "// long", format("// long long long long\n" "// long", getLLVMStyleWithColumns(20))); // Don't reflow separate bullets in list EXPECT_EQ("// - long long long\n" "// long\n" "// - long", format("// - long long long long\n" "// - long", getLLVMStyleWithColumns(20))); EXPECT_EQ("// * long long long\n" "// long\n" "// * long", format("// * long long long long\n" "// * long", getLLVMStyleWithColumns(20))); EXPECT_EQ("// + long long long\n" "// long\n" "// + long", format("// + long long long long\n" "// + long", getLLVMStyleWithColumns(20))); EXPECT_EQ("// 1. long long long\n" "// long\n" "// 2. long", format("// 1. long long long long\n" "// 2. 
long", getLLVMStyleWithColumns(20))); EXPECT_EQ("// -# long long long\n" "// long\n" "// -# long", format("// -# long long long long\n" "// -# long", getLLVMStyleWithColumns(20))); EXPECT_EQ("// - long long long\n" "// long long long\n" "// - long", format("// - long long long long\n" "// long long\n" "// - long", getLLVMStyleWithColumns(20))); EXPECT_EQ("// - long long long\n" "// long long long\n" "// long\n" "// - long", format("// - long long long long\n" "// long long long\n" "// - long", getLLVMStyleWithColumns(20))); // Large number (>2 digits) are not list items EXPECT_EQ("// long long long\n" "// long 1024. long.", format("// long long long long\n" "// 1024. long.", getLLVMStyleWithColumns(20))); // Do not break before number, to avoid introducing a non-reflowable doxygen // list item. EXPECT_EQ("// long long\n" "// long 10. long.", format("// long long long 10.\n" "// long.", getLLVMStyleWithColumns(20))); // Don't break or reflow after implicit string literals. verifyFormat("#include // l l l\n" " // l", getLLVMStyleWithColumns(20)); // Don't break or reflow comments on import lines. EXPECT_EQ("#include \"t\" /* l l l\n" " * l */", format("#include \"t\" /* l l l\n" " * l */", getLLVMStyleWithColumns(20))); // Don't reflow between different trailing comment sections. EXPECT_EQ("int i; // long long\n" " // long\n" "int j; // long long\n" " // long\n", format("int i; // long long long\n" "int j; // long long long\n", getLLVMStyleWithColumns(20))); // Don't reflow if the first word on the next line is longer than the // available space at current line. EXPECT_EQ("int i; // trigger\n" " // reflow\n" " // longsec\n", format("int i; // trigger reflow\n" " // longsec\n", getLLVMStyleWithColumns(20))); // Keep empty comment lines. EXPECT_EQ("/**/", format(" /**/", getLLVMStyleWithColumns(20))); EXPECT_EQ("/* */", format(" /* */", getLLVMStyleWithColumns(20))); EXPECT_EQ("/* */", format(" /* */", getLLVMStyleWithColumns(20))); EXPECT_EQ("//", format(" // ", getLLVMStyleWithColumns(20))); EXPECT_EQ("///", format(" /// ", getLLVMStyleWithColumns(20))); } TEST_F(FormatTestComments, IgnoresIf0Contents) { EXPECT_EQ("#if 0\n" "}{)(&*(^%%#%@! fsadj f;ldjs ,:;| <<<>>>][)(][\n" "#endif\n" "void f() {}", format("#if 0\n" "}{)(&*(^%%#%@! 
fsadj f;ldjs ,:;| <<<>>>][)(][\n" "#endif\n" "void f( ) { }")); EXPECT_EQ("#if false\n" "void f( ) { }\n" "#endif\n" "void g() {}\n", format("#if false\n" "void f( ) { }\n" "#endif\n" "void g( ) { }\n")); EXPECT_EQ("enum E {\n" " One,\n" " Two,\n" "#if 0\n" "Three,\n" " Four,\n" "#endif\n" " Five\n" "};", format("enum E {\n" " One,Two,\n" "#if 0\n" "Three,\n" " Four,\n" "#endif\n" " Five};")); EXPECT_EQ("enum F {\n" " One,\n" "#if 1\n" " Two,\n" "#if 0\n" "Three,\n" " Four,\n" "#endif\n" " Five\n" "#endif\n" "};", format("enum F {\n" "One,\n" "#if 1\n" "Two,\n" "#if 0\n" "Three,\n" " Four,\n" "#endif\n" "Five\n" "#endif\n" "};")); EXPECT_EQ("enum G {\n" " One,\n" "#if 0\n" "Two,\n" "#else\n" " Three,\n" "#endif\n" " Four\n" "};", format("enum G {\n" "One,\n" "#if 0\n" "Two,\n" "#else\n" "Three,\n" "#endif\n" "Four\n" "};")); EXPECT_EQ("enum H {\n" " One,\n" "#if 0\n" "#ifdef Q\n" "Two,\n" "#else\n" "Three,\n" "#endif\n" "#endif\n" " Four\n" "};", format("enum H {\n" "One,\n" "#if 0\n" "#ifdef Q\n" "Two,\n" "#else\n" "Three,\n" "#endif\n" "#endif\n" "Four\n" "};")); EXPECT_EQ("enum I {\n" " One,\n" "#if /* test */ 0 || 1\n" "Two,\n" "Three,\n" "#endif\n" " Four\n" "};", format("enum I {\n" "One,\n" "#if /* test */ 0 || 1\n" "Two,\n" "Three,\n" "#endif\n" "Four\n" "};")); EXPECT_EQ("enum J {\n" " One,\n" "#if 0\n" "#if 0\n" "Two,\n" "#else\n" "Three,\n" "#endif\n" "Four,\n" "#endif\n" " Five\n" "};", format("enum J {\n" "One,\n" "#if 0\n" "#if 0\n" "Two,\n" "#else\n" "Three,\n" "#endif\n" "Four,\n" "#endif\n" "Five\n" "};")); // Ignore stuff in SWIG-blocks. EXPECT_EQ("#ifdef SWIG\n" "}{)(&*(^%%#%@! fsadj f;ldjs ,:;| <<<>>>][)(][\n" "#endif\n" "void f() {}", format("#ifdef SWIG\n" "}{)(&*(^%%#%@! fsadj f;ldjs ,:;| <<<>>>][)(][\n" "#endif\n" "void f( ) { }")); EXPECT_EQ("#ifndef SWIG\n" "void f() {}\n" "#endif", format("#ifndef SWIG\n" "void f( ) { }\n" "#endif")); } TEST_F(FormatTestComments, DontCrashOnBlockComments) { EXPECT_EQ( "int xxxxxxxxx; /* " "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\n" "zzzzzz\n" "0*/", format("int xxxxxxxxx; /* " "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy zzzzzz\n" "0*/")); } TEST_F(FormatTestComments, BlockCommentsInControlLoops) { verifyFormat("if (0) /* a comment in a strange place */ {\n" " f();\n" "}"); verifyFormat("if (0) /* a comment in a strange place */ {\n" " f();\n" "} /* another comment */ else /* comment #3 */ {\n" " g();\n" "}"); verifyFormat("while (0) /* a comment in a strange place */ {\n" " f();\n" "}"); verifyFormat("for (;;) /* a comment in a strange place */ {\n" " f();\n" "}"); verifyFormat("do /* a comment in a strange place */ {\n" " f();\n" "} /* another comment */ while (0);"); } TEST_F(FormatTestComments, BlockComments) { EXPECT_EQ("/* */ /* */ /* */\n/* */ /* */ /* */", format("/* *//* */ /* */\n/* *//* */ /* */")); EXPECT_EQ("/* */ a /* */ b;", format(" /* */ a/* */ b;")); EXPECT_EQ("#define A /*123*/ \\\n" " b\n" "/* */\n" "someCall(\n" " parameter);", format("#define A /*123*/ b\n" "/* */\n" "someCall(parameter);", getLLVMStyleWithColumns(15))); EXPECT_EQ("#define A\n" "/* */ someCall(\n" " parameter);", format("#define A\n" "/* */someCall(parameter);", getLLVMStyleWithColumns(15))); EXPECT_EQ("/*\n**\n*/", format("/*\n**\n*/")); EXPECT_EQ("/*\n" " *\n" " * aaaaaa\n" " * aaaaaa\n" " */", format("/*\n" "*\n" " * aaaaaa aaaaaa\n" "*/", getLLVMStyleWithColumns(10))); EXPECT_EQ("/*\n" "**\n" "* aaaaaa\n" "*aaaaaa\n" "*/", format("/*\n" "**\n" "* aaaaaa aaaaaa\n" "*/", getLLVMStyleWithColumns(10))); 
EXPECT_EQ("int aaaaaaaaaaaaaaaaaaaaaaaaaaaa =\n" " /* line 1\n" " bbbbbbbbbbbb */\n" " bbbbbbbbbbbbbbbbbbbbbbbbbbbb;", format("int aaaaaaaaaaaaaaaaaaaaaaaaaaaa =\n" " /* line 1\n" " bbbbbbbbbbbb */ bbbbbbbbbbbbbbbbbbbbbbbbbbbb;", getLLVMStyleWithColumns(50))); FormatStyle NoBinPacking = getLLVMStyle(); NoBinPacking.BinPackParameters = false; EXPECT_EQ("someFunction(1, /* comment 1 */\n" " 2, /* comment 2 */\n" " 3, /* comment 3 */\n" " aaaa,\n" " bbbb);", format("someFunction (1, /* comment 1 */\n" " 2, /* comment 2 */ \n" " 3, /* comment 3 */\n" "aaaa, bbbb );", NoBinPacking)); verifyFormat( "bool aaaaaaaaaaaaa = /* comment: */ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa ||\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaaa;"); EXPECT_EQ( "bool aaaaaaaaaaaaa = /* trailing comment */\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaa || aaaaaaaaaaaaaaaaaaaaaaaaa ||\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaaa || aaaaaaaaaaaaaaaaaaaaaaaaaa;", format( "bool aaaaaaaaaaaaa = /* trailing comment */\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaa||aaaaaaaaaaaaaaaaaaaaaaaaa ||\n" " aaaaaaaaaaaaaaaaaaaaaaaaaaaa || aaaaaaaaaaaaaaaaaaaaaaaaaa;")); EXPECT_EQ( "int aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa; /* comment */\n" "int bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb; /* comment */\n" "int cccccccccccccccccccccccccccccc; /* comment */\n", format("int aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa; /* comment */\n" "int bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb; /* comment */\n" "int cccccccccccccccccccccccccccccc; /* comment */\n")); verifyFormat("void f(int * /* unused */) {}"); EXPECT_EQ("/*\n" " **\n" " */", format("/*\n" " **\n" " */")); EXPECT_EQ("/*\n" " *q\n" " */", format("/*\n" " *q\n" " */")); EXPECT_EQ("/*\n" " * q\n" " */", format("/*\n" " * q\n" " */")); EXPECT_EQ("/*\n" " **/", format("/*\n" " **/")); EXPECT_EQ("/*\n" " ***/", format("/*\n" " ***/")); } TEST_F(FormatTestComments, BlockCommentsInMacros) { EXPECT_EQ("#define A \\\n" " { \\\n" " /* one line */ \\\n" " someCall();", format("#define A { \\\n" " /* one line */ \\\n" " someCall();", getLLVMStyleWithColumns(20))); EXPECT_EQ("#define A \\\n" " { \\\n" " /* previous */ \\\n" " /* one line */ \\\n" " someCall();", format("#define A { \\\n" " /* previous */ \\\n" " /* one line */ \\\n" " someCall();", getLLVMStyleWithColumns(20))); } TEST_F(FormatTestComments, BlockCommentsAtEndOfLine) { EXPECT_EQ("a = {\n" " 1111 /* */\n" "};", format("a = {1111 /* */\n" "};", getLLVMStyleWithColumns(15))); EXPECT_EQ("a = {\n" " 1111 /* */\n" "};", format("a = {1111 /* */\n" "};", getLLVMStyleWithColumns(15))); EXPECT_EQ("a = {\n" " 1111 /* a\n" " */\n" "};", format("a = {1111 /* a */\n" "};", getLLVMStyleWithColumns(15))); } TEST_F(FormatTestComments, IndentLineCommentsInStartOfBlockAtEndOfFile) { verifyFormat("{\n" " // a\n" " // b"); } TEST_F(FormatTestComments, AlignTrailingComments) { EXPECT_EQ("#define MACRO(V) \\\n" " V(Rt2) /* one more char */ \\\n" " V(Rs) /* than here */ \\\n" "/* comment 3 */\n", format("#define MACRO(V)\\\n" "V(Rt2) /* one more char */ \\\n" "V(Rs) /* than here */ \\\n" "/* comment 3 */\n", getLLVMStyleWithColumns(40))); EXPECT_EQ("int i = f(abc, // line 1\n" " d, // line 2\n" " // line 3\n" " b);", format("int i = f(abc, // line 1\n" " d, // line 2\n" " // line 3\n" " b);", getLLVMStyleWithColumns(40))); // Align newly broken trailing comments. 
EXPECT_EQ("int ab; // line\n" "int a; // long\n" " // long\n", format("int ab; // line\n" "int a; // long long\n", getLLVMStyleWithColumns(15))); EXPECT_EQ("int ab; // line\n" "int a; // long\n" " // long\n" " // long", format("int ab; // line\n" "int a; // long long\n" " // long", getLLVMStyleWithColumns(15))); EXPECT_EQ("int ab; // line\n" "int a; // long\n" " // long\n" "pt c; // long", format("int ab; // line\n" "int a; // long long\n" "pt c; // long", getLLVMStyleWithColumns(15))); EXPECT_EQ("int ab; // line\n" "int a; // long\n" " // long\n" "\n" "// long", format("int ab; // line\n" "int a; // long long\n" "\n" "// long", getLLVMStyleWithColumns(15))); // Don't align newly broken trailing comments if that would put them over the // column limit. EXPECT_EQ("int i, j; // line 1\n" "int k; // line longg\n" " // long", format("int i, j; // line 1\n" "int k; // line longg long", getLLVMStyleWithColumns(20))); + // Always align if ColumnLimit = 0 + EXPECT_EQ("int i, j; // line 1\n" + "int k; // line longg long", + format("int i, j; // line 1\n" + "int k; // line longg long", + getLLVMStyleWithColumns(0))); + // Align comment line sections aligned with the next token with the next // token. EXPECT_EQ("class A {\n" "public: // public comment\n" " // comment about a\n" " int a;\n" "};", format("class A {\n" "public: // public comment\n" " // comment about a\n" " int a;\n" "};", getLLVMStyleWithColumns(40))); EXPECT_EQ("class A {\n" "public: // public comment 1\n" " // public comment 2\n" " // comment 1 about a\n" " // comment 2 about a\n" " int a;\n" "};", format("class A {\n" "public: // public comment 1\n" " // public comment 2\n" " // comment 1 about a\n" " // comment 2 about a\n" " int a;\n" "};", getLLVMStyleWithColumns(40))); EXPECT_EQ("int f(int n) { // comment line 1 on f\n" " // comment line 2 on f\n" " // comment line 1 before return\n" " // comment line 2 before return\n" " return n; // comment line 1 on return\n" " // comment line 2 on return\n" " // comment line 1 after return\n" "}", format("int f(int n) { // comment line 1 on f\n" " // comment line 2 on f\n" " // comment line 1 before return\n" " // comment line 2 before return\n" " return n; // comment line 1 on return\n" " // comment line 2 on return\n" " // comment line 1 after return\n" "}", getLLVMStyleWithColumns(40))); EXPECT_EQ("int f(int n) {\n" " switch (n) { // comment line 1 on switch\n" " // comment line 2 on switch\n" " // comment line 1 before case 1\n" " // comment line 2 before case 1\n" " case 1: // comment line 1 on case 1\n" " // comment line 2 on case 1\n" " // comment line 1 before return 1\n" " // comment line 2 before return 1\n" " return 1; // comment line 1 on return 1\n" " // comment line 2 on return 1\n" " // comment line 1 before default\n" " // comment line 2 before default\n" " default: // comment line 1 on default\n" " // comment line 2 on default\n" " // comment line 1 before return 2\n" " return 2 * f(n - 1); // comment line 1 on return 2\n" " // comment line 2 on return 2\n" " // comment line 1 after return\n" " // comment line 2 after return\n" " }\n" "}", format("int f(int n) {\n" " switch (n) { // comment line 1 on switch\n" " // comment line 2 on switch\n" " // comment line 1 before case 1\n" " // comment line 2 before case 1\n" " case 1: // comment line 1 on case 1\n" " // comment line 2 on case 1\n" " // comment line 1 before return 1\n" " // comment line 2 before return 1\n" " return 1; // comment line 1 on return 1\n" " // comment line 2 on return 1\n" " // comment line 1 before 
default\n" " // comment line 2 before default\n" " default: // comment line 1 on default\n" " // comment line 2 on default\n" " // comment line 1 before return 2\n" " return 2 * f(n - 1); // comment line 1 on return 2\n" " // comment line 2 on return 2\n" " // comment line 1 after return\n" " // comment line 2 after return\n" " }\n" "}", getLLVMStyleWithColumns(80))); // If all the lines in a sequence of line comments are aligned with the next // token, the first line belongs to the previous token and the other lines // belong to the next token. EXPECT_EQ("int a; // line about a\n" "long b;", format("int a; // line about a\n" " long b;", getLLVMStyleWithColumns(80))); EXPECT_EQ("int a; // line about a\n" "// line about b\n" "long b;", format("int a; // line about a\n" " // line about b\n" " long b;", getLLVMStyleWithColumns(80))); EXPECT_EQ("int a; // line about a\n" "// line 1 about b\n" "// line 2 about b\n" "long b;", format("int a; // line about a\n" " // line 1 about b\n" " // line 2 about b\n" " long b;", getLLVMStyleWithColumns(80))); } TEST_F(FormatTestComments, AlignsBlockCommentDecorations) { EXPECT_EQ("/*\n" " */", format("/*\n" "*/", getLLVMStyle())); EXPECT_EQ("/*\n" " */", format("/*\n" " */", getLLVMStyle())); EXPECT_EQ("/*\n" " */", format("/*\n" " */", getLLVMStyle())); // Align a single line. EXPECT_EQ("/*\n" " * line */", format("/*\n" "* line */", getLLVMStyle())); EXPECT_EQ("/*\n" " * line */", format("/*\n" " * line */", getLLVMStyle())); EXPECT_EQ("/*\n" " * line */", format("/*\n" " * line */", getLLVMStyle())); EXPECT_EQ("/*\n" " * line */", format("/*\n" " * line */", getLLVMStyle())); EXPECT_EQ("/**\n" " * line */", format("/**\n" "* line */", getLLVMStyle())); EXPECT_EQ("/**\n" " * line */", format("/**\n" " * line */", getLLVMStyle())); EXPECT_EQ("/**\n" " * line */", format("/**\n" " * line */", getLLVMStyle())); EXPECT_EQ("/**\n" " * line */", format("/**\n" " * line */", getLLVMStyle())); EXPECT_EQ("/**\n" " * line */", format("/**\n" " * line */", getLLVMStyle())); // Align the end '*/' after a line. EXPECT_EQ("/*\n" " * line\n" " */", format("/*\n" "* line\n" "*/", getLLVMStyle())); EXPECT_EQ("/*\n" " * line\n" " */", format("/*\n" " * line\n" " */", getLLVMStyle())); EXPECT_EQ("/*\n" " * line\n" " */", format("/*\n" " * line\n" " */", getLLVMStyle())); // Align two lines. EXPECT_EQ("/* line 1\n" " * line 2 */", format("/* line 1\n" " * line 2 */", getLLVMStyle())); EXPECT_EQ("/* line 1\n" " * line 2 */", format("/* line 1\n" "* line 2 */", getLLVMStyle())); EXPECT_EQ("/* line 1\n" " * line 2 */", format("/* line 1\n" " * line 2 */", getLLVMStyle())); EXPECT_EQ("/* line 1\n" " * line 2 */", format("/* line 1\n" " * line 2 */", getLLVMStyle())); EXPECT_EQ("/* line 1\n" " * line 2 */", format("/* line 1\n" " * line 2 */", getLLVMStyle())); EXPECT_EQ("int i; /* line 1\n" " * line 2 */", format("int i; /* line 1\n" "* line 2 */", getLLVMStyle())); EXPECT_EQ("int i; /* line 1\n" " * line 2 */", format("int i; /* line 1\n" " * line 2 */", getLLVMStyle())); EXPECT_EQ("int i; /* line 1\n" " * line 2 */", format("int i; /* line 1\n" " * line 2 */", getLLVMStyle())); // Align several lines. 
EXPECT_EQ("/* line 1\n" " * line 2\n" " * line 3 */", format("/* line 1\n" " * line 2\n" "* line 3 */", getLLVMStyle())); EXPECT_EQ("/* line 1\n" " * line 2\n" " * line 3 */", format("/* line 1\n" " * line 2\n" "* line 3 */", getLLVMStyle())); EXPECT_EQ("/*\n" "** line 1\n" "** line 2\n" "*/", format("/*\n" "** line 1\n" " ** line 2\n" "*/", getLLVMStyle())); // Align with different indent after the decorations. EXPECT_EQ("/*\n" " * line 1\n" " * line 2\n" " * line 3\n" " * line 4\n" " */", format("/*\n" "* line 1\n" " * line 2\n" " * line 3\n" "* line 4\n" "*/", getLLVMStyle())); // Align empty or blank lines. EXPECT_EQ("/**\n" " *\n" " *\n" " *\n" " */", format("/**\n" "* \n" " * \n" " *\n" "*/", getLLVMStyle())); // Align while breaking and reflowing. EXPECT_EQ("/*\n" " * long long long\n" " * long long\n" " *\n" " * long */", format("/*\n" " * long long long long\n" " * long\n" " *\n" "* long */", getLLVMStyleWithColumns(20))); } } // end namespace } // end namespace format } // end namespace clang