Index: vendor/clang/dist/docs/LanguageExtensions.rst
===================================================================
--- vendor/clang/dist/docs/LanguageExtensions.rst	(revision 312705)
+++ vendor/clang/dist/docs/LanguageExtensions.rst	(revision 312706)
@@ -1,2274 +1,2314 @@
=========================
Clang Language Extensions
=========================

.. contents::
   :local:
   :depth: 1

.. toctree::
   :hidden:

   ObjectiveCLiterals
   BlockLanguageSpec
   Block-ABI-Apple
   AutomaticReferenceCounting

Introduction
============

This document describes the language extensions provided by Clang. In addition
to the language extensions listed here, Clang aims to support a broad range of
GCC extensions. Please see the `GCC manual
<https://gcc.gnu.org/onlinedocs/gcc/C-Extensions.html>`_ for more information
on these extensions.

.. _langext-feature_check:

Feature Checking Macros
=======================

Language extensions can be very useful, but only if you know you can depend on
them. In order to allow fine-grained feature checks, we support a set of
builtin function-like macros. This allows you to directly test for a feature
in your code without having to resort to something like autoconf or fragile
"compiler version checks".

``__has_builtin``
-----------------

This function-like macro takes a single identifier argument that is the name
of a builtin function. It evaluates to 1 if the builtin is supported or 0 if
not. It can be used like this:

.. code-block:: c++

  #ifndef __has_builtin         // Optional of course.
  #define __has_builtin(x) 0    // Compatibility with non-clang compilers.
  #endif

  ...
  #if __has_builtin(__builtin_trap)
    __builtin_trap();
  #else
    abort();
  #endif
  ...

.. _langext-__has_feature-__has_extension:

``__has_feature`` and ``__has_extension``
-----------------------------------------

These function-like macros take a single identifier argument that is the name
of a feature. ``__has_feature`` evaluates to 1 if the feature is both
supported by Clang and standardized in the current language standard or 0 if
not (but see :ref:`below <langext-has-feature-back-compat>`), while
``__has_extension`` evaluates to 1 if the feature is supported by Clang in the
current language (either as a language extension or a standard language
feature) or 0 if not. They can be used like this:

.. code-block:: c++

  #ifndef __has_feature         // Optional of course.
  #define __has_feature(x) 0    // Compatibility with non-clang compilers.
  #endif
  #ifndef __has_extension
  #define __has_extension __has_feature // Compatibility with pre-3.0 compilers.
  #endif

  ...
  #if __has_feature(cxx_rvalue_references)
  // This code will only be compiled with the -std=c++11 and -std=gnu++11
  // options, because rvalue references are only standardized in C++11.
  #endif

  #if __has_extension(cxx_rvalue_references)
  // This code will be compiled with the -std=c++11, -std=gnu++11, -std=c++98
  // and -std=gnu++98 options, because rvalue references are supported as a
  // language extension in C++98.
  #endif

.. _langext-has-feature-back-compat:

For backward compatibility, ``__has_feature`` can also be used to test
for support for non-standardized features, i.e. features not prefixed ``c_``,
``cxx_`` or ``objc_``.

Another use of ``__has_feature`` is to check for compiler features not related
to the language standard, such as :doc:`AddressSanitizer <AddressSanitizer>`.

If the ``-pedantic-errors`` option is given, ``__has_extension`` is equivalent
to ``__has_feature``.

The feature tag is described along with the language feature below.
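
As a concrete illustration, a translation unit can portably detect whether it
is being compiled with AddressSanitizer instrumentation (a minimal sketch; the
compatibility ``#define`` mirrors the idiom shown above, and
``address_sanitizer`` is the feature name used for this check):

.. code-block:: c++

  #ifndef __has_feature
  #define __has_feature(x) 0  // Compatibility with non-clang compilers.
  #endif

  #if __has_feature(address_sanitizer)
  // Code that must account for ASan's poisoned shadow memory.
  #else
  // Regular build.
  #endif
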

The feature name or extension name can also be specified with a preceding and
following ``__`` (double underscore) to avoid interference from a macro with
the same name. For instance, ``__cxx_rvalue_references__`` can be used instead
of ``cxx_rvalue_references``.

``__has_cpp_attribute``
-----------------------

This function-like macro takes a single argument that is the name of a
C++11-style attribute. The argument can either be a single identifier, or a
scoped identifier. If the attribute is supported, a nonzero value is returned.
If the attribute is a standards-based attribute, this macro returns a nonzero
value based on the year and month in which the attribute was voted into the
working draft. If the attribute is not supported by the current compilation
target, this macro evaluates to 0. It can be used like this:

.. code-block:: c++

  #ifndef __has_cpp_attribute         // Optional of course.
  #define __has_cpp_attribute(x) 0    // Compatibility with non-clang compilers.
  #endif

  ...
  #if __has_cpp_attribute(clang::fallthrough)
  #define FALLTHROUGH [[clang::fallthrough]]
  #else
  #define FALLTHROUGH
  #endif
  ...

The attribute identifier (but not scope) can also be specified with a
preceding and following ``__`` (double underscore) to avoid interference from
a macro with the same name. For instance, ``gnu::__const__`` can be used
instead of ``gnu::const``.

``__has_attribute``
-------------------

This function-like macro takes a single identifier argument that is the name
of a GNU-style attribute. It evaluates to 1 if the attribute is supported by
the current compilation target, or 0 if not. It can be used like this:

.. code-block:: c++

  #ifndef __has_attribute         // Optional of course.
  #define __has_attribute(x) 0    // Compatibility with non-clang compilers.
  #endif

  ...
  #if __has_attribute(always_inline)
  #define ALWAYS_INLINE __attribute__((always_inline))
  #else
  #define ALWAYS_INLINE
  #endif
  ...

The attribute name can also be specified with a preceding and following ``__``
(double underscore) to avoid interference from a macro with the same name. For
instance, ``__always_inline__`` can be used instead of ``always_inline``.

``__has_declspec_attribute``
----------------------------

This function-like macro takes a single identifier argument that is the name
of an attribute implemented as a Microsoft-style ``__declspec`` attribute. It
evaluates to 1 if the attribute is supported by the current compilation
target, or 0 if not. It can be used like this:

.. code-block:: c++

  #ifndef __has_declspec_attribute         // Optional of course.
  #define __has_declspec_attribute(x) 0    // Compatibility with non-clang compilers.
  #endif

  ...
  #if __has_declspec_attribute(dllexport)
  #define DLLEXPORT __declspec(dllexport)
  #else
  #define DLLEXPORT
  #endif
  ...

The attribute name can also be specified with a preceding and following ``__``
(double underscore) to avoid interference from a macro with the same name. For
instance, ``__dllexport__`` can be used instead of ``dllexport``.

``__is_identifier``
-------------------

This function-like macro takes a single identifier argument that might be
either a reserved word or a regular identifier. It evaluates to 1 if the
argument is just a regular identifier and not a reserved word, in the sense
that it can then be used as the name of a user-defined function or variable.
Otherwise it evaluates to 0. It can be used like this:

.. code-block:: c++

  ...
  #ifdef __is_identifier          // Compatibility with non-clang compilers.
    #if __is_identifier(__wchar_t)
      typedef wchar_t __wchar_t;
    #endif
  #endif

  __wchar_t WideCharacter;
  ...

Include File Checking Macros
============================

Not all development systems have the same include files. The
:ref:`langext-__has_include` and :ref:`langext-__has_include_next` macros
allow you to check for the existence of an include file before doing a
possibly failing ``#include`` directive. Include file checking macros must be
used as expressions in ``#if`` or ``#elif`` preprocessing directives.

.. _langext-__has_include:

``__has_include``
-----------------

This function-like macro takes a single file name string argument that is the
name of an include file. It evaluates to 1 if the file can be found using the
include paths, or 0 otherwise:

.. code-block:: c++

  // Note the two possible file name string formats.
  #if __has_include("myinclude.h") && __has_include(<stdint.h>)
  # include "myinclude.h"
  #endif

To test for this feature, use ``#if defined(__has_include)``:

.. code-block:: c++

  // To avoid problems with non-clang compilers not having this macro.
  #if defined(__has_include)
  #if __has_include("myinclude.h")
  # include "myinclude.h"
  #endif
  #endif

.. _langext-__has_include_next:

``__has_include_next``
----------------------

This function-like macro takes a single file name string argument that is the
name of an include file. It is like ``__has_include`` except that it looks for
the second instance of the given file found in the include paths. It evaluates
to 1 if the second instance of the file can be found using the include paths,
or 0 otherwise:

.. code-block:: c++

  // Note the two possible file name string formats.
  #if __has_include_next("myinclude.h") && __has_include_next(<stdint.h>)
  # include_next "myinclude.h"
  #endif

  // To avoid problems with non-clang compilers not having this macro.
  #if defined(__has_include_next)
  #if __has_include_next("myinclude.h")
  # include_next "myinclude.h"
  #endif
  #endif

Note that ``__has_include_next``, like the GNU extension ``#include_next``
directive, is intended for use in headers only, and will issue a warning if
used in the top-level compilation file. A warning will also be issued if an
absolute path is used in the file argument.

``__has_warning``
-----------------

This function-like macro takes a string literal that represents a command line
option for a warning and returns true if that is a valid warning option.

.. code-block:: c++

  #if __has_warning("-Wformat")
  ...
  #endif

Builtin Macros
==============

``__BASE_FILE__``
  Defined to a string that contains the name of the main input file passed to
  Clang.

``__COUNTER__``
  Defined to an integer value that starts at zero and is incremented each time
  the ``__COUNTER__`` macro is expanded.

``__INCLUDE_LEVEL__``
  Defined to an integral value that is the include depth of the file currently
  being translated. For the main file, this value is zero.

``__TIMESTAMP__``
  Defined to the date and time of the last modification of the current source
  file.

``__clang__``
  Defined when compiling with Clang.

``__clang_major__``
  Defined to the major marketing version number of Clang (e.g., the 2 in
  2.0.1). Note that marketing version numbers should not be used to check for
  language features, as different vendors use different numbering schemes.
  Instead, use the :ref:`langext-feature_check`.

``__clang_minor__``
  Defined to the minor version number of Clang (e.g., the 0 in 2.0.1). Note
  that marketing version numbers should not be used to check for language
  features, as different vendors use different numbering schemes.
  Instead, use the :ref:`langext-feature_check`.

``__clang_patchlevel__``
  Defined to the marketing patch level of Clang (e.g., the 1 in 2.0.1).

``__clang_version__``
  Defined to a string that captures the Clang marketing version, including the
  Subversion tag or revision number, e.g., "``1.5 (trunk 102332)``".

.. _langext-vectors:

Vectors and Extended Vectors
============================

Supports the GCC, OpenCL, AltiVec and NEON vector extensions.

OpenCL vector types are created using the ``ext_vector_type`` attribute. It
supports the ``V.xyzw`` syntax and other tidbits as seen in OpenCL. An example
is:

.. code-block:: c++

  typedef float float4 __attribute__((ext_vector_type(4)));
  typedef float float2 __attribute__((ext_vector_type(2)));

  float4 foo(float2 a, float2 b) {
    float4 c;
    c.xz = a;
    c.yw = b;
    return c;
  }

Query for this feature with ``__has_extension(attribute_ext_vector_type)``.

Giving the ``-faltivec`` option to clang enables support for AltiVec vector
syntax and functions. For example:

.. code-block:: c++

  vector float foo(vector int a) {
    vector int b;
    b = vec_add(a, a) + a;
    return (vector float)b;
  }

NEON vector types are created using ``neon_vector_type`` and
``neon_polyvector_type`` attributes. For example:

.. code-block:: c++

  typedef __attribute__((neon_vector_type(8))) int8_t int8x8_t;
  typedef __attribute__((neon_polyvector_type(16))) poly8_t poly8x16_t;

  int8x8_t foo(int8x8_t a) {
    int8x8_t v;
    v = a;
    return v;
  }

Vector Literals
---------------

Vector literals can be used to create vectors from a set of scalars, or
vectors. Either parentheses or braces form can be used. In the parentheses
form the number of literal values specified must be one, i.e. referring to a
scalar value, or must match the size of the vector type being created. If a
single scalar literal value is specified, the scalar literal value will be
replicated to all the components of the vector type. In the braces form any
number of literals can be specified. For example:

.. code-block:: c++

  typedef int v4si __attribute__((__vector_size__(16)));
  typedef float float4 __attribute__((ext_vector_type(4)));
  typedef float float2 __attribute__((ext_vector_type(2)));

  v4si vsi = (v4si){1, 2, 3, 4};
  float4 vf = (float4)(1.0f, 2.0f, 3.0f, 4.0f);
  vector int vi1 = (vector int)(1);    // vi1 will be (1, 1, 1, 1).
  vector int vi2 = (vector int){1};    // vi2 will be (1, 0, 0, 0).
  vector int vi3 = (vector int)(1, 2); // error
  vector int vi4 = (vector int){1, 2}; // vi4 will be (1, 2, 0, 0).
  vector int vi5 = (vector int)(1, 2, 3, 4);
  float4 vf2 = (float4)((float2)(1.0f, 2.0f), (float2)(3.0f, 4.0f));

Vector Operations
-----------------

The table below shows the support for each operation by vector extension. A
dash indicates that an operation is not accepted according to a corresponding
specification.

============================== ======= ======= ======= =======
         Operator              OpenCL  AltiVec   GCC    NEON
============================== ======= ======= ======= =======
[]                               yes     yes     yes     --
unary operators +, --            yes     yes     yes     --
++, --                           yes     yes     yes     --
+,--,*,/,%                       yes     yes     yes     --
bitwise operators &,|,^,~        yes     yes     yes     --
>>,<<                            yes     yes     yes     --
!, &&, ||                        yes     --      --      --
==, !=, >, <, >=, <=             yes     yes     --      --
=                                yes     yes     yes     yes
?:                               yes     --      --      --
sizeof                           yes     yes     yes     yes
C-style cast                     yes     yes     yes     no
reinterpret_cast                 yes     no      yes     no
static_cast                      yes     no      yes     no
const_cast                       no      no      no      no
============================== ======= ======= ======= =======

See also :ref:`langext-__builtin_shufflevector`,
:ref:`langext-__builtin_convertvector`.
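
As a small illustration of the table above, the following sketch (written
against the GCC ``__vector_size__`` style, which Clang accepts in both C and
C++) exercises subscripting, elementwise arithmetic, and shifts:

.. code-block:: c++

  typedef int v4si __attribute__((__vector_size__(16)));

  v4si muladd(v4si a, v4si b, v4si c) {
    v4si t = a * b + c;  // elementwise multiply and add
    t = t << 1;          // elementwise shift; the scalar is splatted
    t[0] = 0;            // subscripting selects a single element
    return t;
  }
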

Messages on ``deprecated`` and ``unavailable`` Attributes
=========================================================

An optional string message can be added to the ``deprecated`` and
``unavailable`` attributes. For example:

.. code-block:: c++

  void explode(void) __attribute__((deprecated("extremely unsafe, use 'combust' instead!!!")));

If the deprecated or unavailable declaration is used, the message will be
incorporated into the appropriate diagnostic:

.. code-block:: none

  harmless.c:4:3: warning: 'explode' is deprecated: extremely unsafe, use
        'combust' instead!!! [-Wdeprecated-declarations]
    explode();
    ^

Query for this feature with
``__has_extension(attribute_deprecated_with_message)`` and
``__has_extension(attribute_unavailable_with_message)``.

Attributes on Enumerators
=========================

Clang allows attributes to be written on individual enumerators. This allows
enumerators to be deprecated, made unavailable, etc. The attribute must appear
after the enumerator name and before any initializer, like so:

.. code-block:: c++

  enum OperationMode {
    OM_Invalid,
    OM_Normal,
    OM_Terrified __attribute__((deprecated)),
    OM_AbortOnError __attribute__((deprecated)) = 4
  };

Attributes on the ``enum`` declaration do not apply to individual enumerators.

Query for this feature with ``__has_extension(enumerator_attributes)``.

'User-Specified' System Frameworks
==================================

Clang provides a mechanism by which frameworks can be built in such a way that
they will always be treated as being "system frameworks", even if they are not
present in a system framework directory. This can be useful to system
framework developers who want to be able to test building other applications
with development builds of their framework, including the manner in which the
compiler changes warning behavior for system headers.

Framework developers can opt-in to this mechanism by creating a
"``.system_framework``" file at the top-level of their framework. That is, the
framework should have contents like:

.. code-block:: none

  .../TestFramework.framework
  .../TestFramework.framework/.system_framework
  .../TestFramework.framework/Headers
  .../TestFramework.framework/Headers/TestFramework.h
  ...

Clang will treat the presence of this file as an indicator that the framework
should be treated as a system framework, regardless of how it was found in the
framework search path. For consistency, we recommend that such files never be
included in installed versions of the framework.

Checks for Standard Language Features
=====================================

The ``__has_feature`` macro can be used to query if certain standard language
features are enabled. The ``__has_extension`` macro can be used to query if
language features are available as an extension when compiling for a standard
which does not provide them. The features which can be tested are listed here.

Since Clang 3.4, the C++ SD-6 feature test macros are also supported. These
are macros with names of the form ``__cpp_<feature_name>``, and are intended
to be a portable way to query the supported features of the compiler. See `the
C++ status page <https://clang.llvm.org/cxx_status.html>`_ for information on
the version of SD-6 supported by each Clang release, and the macros provided
by that revision of the recommendations.

C++98
-----

The features listed below are part of the C++98 standard. These features are
enabled by default when compiling C++ code.

C++ exceptions
^^^^^^^^^^^^^^

Use ``__has_feature(cxx_exceptions)`` to determine if C++ exceptions have been
enabled.
For example, compiling code with ``-fno-exceptions`` disables C++ exceptions.

C++ RTTI
^^^^^^^^

Use ``__has_feature(cxx_rtti)`` to determine if C++ RTTI has been enabled. For
example, compiling code with ``-fno-rtti`` disables the use of RTTI.

C++11
-----

The features listed below are part of the C++11 standard. As a result, all
these features are enabled with the ``-std=c++11`` or ``-std=gnu++11`` option
when compiling C++ code.

C++11 SFINAE includes access control
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_access_control_sfinae)`` or
``__has_extension(cxx_access_control_sfinae)`` to determine whether
access-control errors (e.g., calling a private constructor) are considered to
be template argument deduction errors (aka SFINAE errors), per `C++ DR1170
<http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#1170>`_.

C++11 alias templates
^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_alias_templates)`` or
``__has_extension(cxx_alias_templates)`` to determine if support for C++11's
alias declarations and alias templates is enabled.

C++11 alignment specifiers
^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_alignas)`` or ``__has_extension(cxx_alignas)`` to
determine if support for alignment specifiers using ``alignas`` is enabled.

Use ``__has_feature(cxx_alignof)`` or ``__has_extension(cxx_alignof)`` to
determine if support for the ``alignof`` keyword is enabled.

C++11 attributes
^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_attributes)`` or ``__has_extension(cxx_attributes)``
to determine if support for attribute parsing with C++11's square bracket
notation is enabled.

C++11 generalized constant expressions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_constexpr)`` to determine if support for generalized
constant expressions (e.g., ``constexpr``) is enabled.

C++11 ``decltype()``
^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_decltype)`` or ``__has_extension(cxx_decltype)`` to
determine if support for the ``decltype()`` specifier is enabled. C++11's
``decltype`` does not require type-completeness of a function call expression.
Use ``__has_feature(cxx_decltype_incomplete_return_types)`` or
``__has_extension(cxx_decltype_incomplete_return_types)`` to determine if
support for this feature is enabled.

C++11 default template arguments in function templates
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_default_function_template_args)`` or
``__has_extension(cxx_default_function_template_args)`` to determine if
support for default template arguments in function templates is enabled.

C++11 ``default``\ ed functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_defaulted_functions)`` or
``__has_extension(cxx_defaulted_functions)`` to determine if support for
defaulted function definitions (with ``= default``) is enabled.

C++11 delegating constructors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_delegating_constructors)`` to determine if support for
delegating constructors is enabled.

C++11 ``deleted`` functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_deleted_functions)`` or
``__has_extension(cxx_deleted_functions)`` to determine if support for deleted
function definitions (with ``= delete``) is enabled.

C++11 explicit conversion functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_explicit_conversions)`` to determine if support for
``explicit`` conversion functions is enabled.
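
Any of the checks in this section can be paired with a fallback for older
dialects. For example, a project-local ``DISALLOW_COPY`` macro (a hypothetical
sketch, not something Clang provides) might use the deleted-functions check
from above:

.. code-block:: c++

  #if __has_feature(cxx_deleted_functions) || __has_extension(cxx_deleted_functions)
  #define DISALLOW_COPY(T) T(const T &) = delete
  #else
  // Pre-C++11 idiom: declare but never define, typically in a private section.
  #define DISALLOW_COPY(T) T(const T &)
  #endif
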

C++11 generalized initializers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_generalized_initializers)`` to determine if support
for generalized initializers (using braced lists and
``std::initializer_list``) is enabled.

C++11 implicit move constructors/assignment operators
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_implicit_moves)`` to determine if Clang will
implicitly generate move constructors and move assignment operators where
needed.

C++11 inheriting constructors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_inheriting_constructors)`` to determine if support for
inheriting constructors is enabled.

C++11 inline namespaces
^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_inline_namespaces)`` or
``__has_extension(cxx_inline_namespaces)`` to determine if support for inline
namespaces is enabled.

C++11 lambdas
^^^^^^^^^^^^^

Use ``__has_feature(cxx_lambdas)`` or ``__has_extension(cxx_lambdas)`` to
determine if support for lambdas is enabled.

C++11 local and unnamed types as template arguments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_local_type_template_args)`` or
``__has_extension(cxx_local_type_template_args)`` to determine if support for
local and unnamed types as template arguments is enabled.

C++11 noexcept
^^^^^^^^^^^^^^

Use ``__has_feature(cxx_noexcept)`` or ``__has_extension(cxx_noexcept)`` to
determine if support for noexcept exception specifications is enabled.

C++11 in-class non-static data member initialization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_nonstatic_member_init)`` to determine whether in-class
initialization of non-static data members is enabled.

C++11 ``nullptr``
^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_nullptr)`` or ``__has_extension(cxx_nullptr)`` to
determine if support for ``nullptr`` is enabled.

C++11 ``override control``
^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_override_control)`` or
``__has_extension(cxx_override_control)`` to determine if support for the
override control keywords is enabled.

C++11 reference-qualified functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_reference_qualified_functions)`` or
``__has_extension(cxx_reference_qualified_functions)`` to determine if support
for reference-qualified functions (e.g., member functions with ``&`` or ``&&``
applied to ``*this``) is enabled.

C++11 range-based ``for`` loop
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_range_for)`` or ``__has_extension(cxx_range_for)`` to
determine if support for the range-based for loop is enabled.

C++11 raw string literals
^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_raw_string_literals)`` to determine if support for raw
string literals (e.g., ``R"x(foo\bar)x"``) is enabled.

C++11 rvalue references
^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_rvalue_references)`` or
``__has_extension(cxx_rvalue_references)`` to determine if support for rvalue
references is enabled.

C++11 ``static_assert()``
^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_static_assert)`` or
``__has_extension(cxx_static_assert)`` to determine if support for
compile-time assertions using ``static_assert`` is enabled.

C++11 ``thread_local``
^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_thread_local)`` to determine if support for
``thread_local`` variables is enabled.
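
For example, a per-thread counter could be guarded on this check (a minimal
sketch; the fallback variable is shared and would need external
synchronization):

.. code-block:: c++

  #if __has_feature(cxx_thread_local)
  thread_local unsigned call_count = 0;  // one instance per thread
  #else
  unsigned call_count = 0;               // shared; guard with a mutex
  #endif
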

C++11 type inference
^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_auto_type)`` or ``__has_extension(cxx_auto_type)`` to
determine if C++11 type inference is supported using the ``auto`` specifier.
If this is disabled, ``auto`` will instead be a storage class specifier, as in
C or C++98.

C++11 strongly typed enumerations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_strong_enums)`` or
``__has_extension(cxx_strong_enums)`` to determine if support for strongly
typed, scoped enumerations is enabled.

C++11 trailing return type
^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_trailing_return)`` or
``__has_extension(cxx_trailing_return)`` to determine if support for the
alternate function declaration syntax with trailing return type is enabled.

C++11 Unicode string literals
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_unicode_literals)`` to determine if support for
Unicode string literals is enabled.

C++11 unrestricted unions
^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_unrestricted_unions)`` to determine if support for
unrestricted unions is enabled.

C++11 user-defined literals
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_user_literals)`` to determine if support for
user-defined literals is enabled.

C++11 variadic templates
^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_variadic_templates)`` or
``__has_extension(cxx_variadic_templates)`` to determine if support for
variadic templates is enabled.

C++1y
-----

The features listed below are part of the committee draft for the C++1y
standard. As a result, all these features are enabled with the ``-std=c++1y``
or ``-std=gnu++1y`` option when compiling C++ code.

C++1y binary literals
^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_binary_literals)`` or
``__has_extension(cxx_binary_literals)`` to determine whether binary literals
(for instance, ``0b10010``) are recognized. Clang supports this feature as an
extension in all language modes.

C++1y contextual conversions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_contextual_conversions)`` or
``__has_extension(cxx_contextual_conversions)`` to determine if the C++1y
rules are used when performing an implicit conversion for an array bound in a
*new-expression*, the operand of a *delete-expression*, an integral constant
expression, or a condition in a ``switch`` statement.

C++1y decltype(auto)
^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_decltype_auto)`` or
``__has_extension(cxx_decltype_auto)`` to determine if support for the
``decltype(auto)`` placeholder type is enabled.

C++1y default initializers for aggregates
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_aggregate_nsdmi)`` or
``__has_extension(cxx_aggregate_nsdmi)`` to determine if support for default
initializers in aggregate members is enabled.

C++1y digit separators
^^^^^^^^^^^^^^^^^^^^^^

Use ``__cpp_digit_separators`` to determine if support for digit separators
using single quotes (for instance, ``10'000``) is enabled. At this time, there
is no corresponding ``__has_feature`` name.

C++1y generalized lambda capture
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_init_captures)`` or
``__has_extension(cxx_init_captures)`` to determine if support for lambda
captures with explicit initializers is enabled (for instance,
``[n(0)] { return ++n; }``).
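
As a usage sketch, a stateful counter lambda can be guarded on this check:

.. code-block:: c++

  #if __has_feature(cxx_init_captures) || __has_extension(cxx_init_captures)
  auto next = [n(0)]() mutable { return ++n; };  // n lives in the closure
  #endif
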

C++1y generic lambdas
^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_generic_lambdas)`` or
``__has_extension(cxx_generic_lambdas)`` to determine if support for generic
(polymorphic) lambdas is enabled (for instance,
``[] (auto x) { return x + 1; }``).

C++1y relaxed constexpr
^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_relaxed_constexpr)`` or
``__has_extension(cxx_relaxed_constexpr)`` to determine if variable
declarations, local variable modification, and control flow constructs are
permitted in ``constexpr`` functions.

C++1y return type deduction
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_return_type_deduction)`` or
``__has_extension(cxx_return_type_deduction)`` to determine if support for
return type deduction for functions (using ``auto`` as a return type) is
enabled.

C++1y runtime-sized arrays
^^^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_runtime_array)`` or
``__has_extension(cxx_runtime_array)`` to determine if support for arrays of
runtime bound (a restricted form of variable-length arrays) is enabled.
Clang's implementation of this feature is incomplete.

C++1y variable templates
^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(cxx_variable_templates)`` or
``__has_extension(cxx_variable_templates)`` to determine if support for
templated variable declarations is enabled.

C11
---

The features listed below are part of the C11 standard. As a result, all these
features are enabled with the ``-std=c11`` or ``-std=gnu11`` option when
compiling C code. Additionally, because these features are all
backward-compatible, they are available as extensions in all language modes.

C11 alignment specifiers
^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(c_alignas)`` or ``__has_extension(c_alignas)`` to
determine if support for alignment specifiers using ``_Alignas`` is enabled.

Use ``__has_feature(c_alignof)`` or ``__has_extension(c_alignof)`` to
determine if support for the ``_Alignof`` keyword is enabled.

C11 atomic operations
^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(c_atomic)`` or ``__has_extension(c_atomic)`` to determine
if support for atomic types using ``_Atomic`` is enabled. Clang also provides
:ref:`a set of builtins <langext-__c11_atomic>` which can be used to implement
the ``<stdatomic.h>`` operations on ``_Atomic`` types. Use
``__has_include(<stdatomic.h>)`` to determine if C11's ``<stdatomic.h>``
header is available.

Clang will use the system's ``<stdatomic.h>`` header when one is available,
and will otherwise use its own. When using its own, implementations of the
atomic operations are provided as macros. In the cases where C11 also requires
a real function, this header provides only the declaration of that function
(along with a shadowing macro implementation), and you must link to a library
which provides a definition of the function if you use it instead of the
macro.

C11 generic selections
^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(c_generic_selections)`` or
``__has_extension(c_generic_selections)`` to determine if support for generic
selections is enabled.

As an extension, the C11 generic selection expression is available in all
languages supported by Clang. The syntax is the same as that given in the C11
standard.

In C, type compatibility is decided according to the rules given in the
appropriate standard, but in C++, which lacks the type compatibility rules
used in C, types are considered compatible only if they are equivalent.

C11 ``_Static_assert()``
^^^^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(c_static_assert)`` or ``__has_extension(c_static_assert)``
to determine if support for compile-time assertions using ``_Static_assert``
is enabled.
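
To illustrate the two preceding checks, here is a small C sketch: a
type-generic absolute-value macro built on ``_Generic``, next to a
compile-time assumption check (the macro name is illustrative):

.. code-block:: c

  #include <math.h>
  #include <stdlib.h>

  // Dispatch on the argument's type using a C11 generic selection.
  #define my_abs(x) _Generic((x), float: fabsf, double: fabs, default: abs)(x)

  _Static_assert(sizeof(int) >= 4, "code below assumes at least a 32-bit int");
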

C11 ``_Thread_local``
^^^^^^^^^^^^^^^^^^^^^

Use ``__has_feature(c_thread_local)`` or ``__has_extension(c_thread_local)``
to determine if support for ``_Thread_local`` variables is enabled.

Modules
-------

Use ``__has_feature(modules)`` to determine if Modules have been enabled. For
example, compiling code with ``-fmodules`` enables the use of Modules. More
information can be found in Clang's `Modules documentation
<https://clang.llvm.org/docs/Modules.html>`_.

Checks for Type Trait Primitives
================================

Type trait primitives are special builtin constant expressions that can be
used by the standard C++ library to facilitate or simplify the implementation
of user-facing type traits in the ``<type_traits>`` header.

They are not intended to be used directly by user code because they are
implementation-defined and subject to change -- as such they're tied closely
to the supported set of system headers, currently:

* LLVM's own libc++
* GNU libstdc++
* The Microsoft standard C++ library

Clang supports the `GNU C++ type traits
<https://gcc.gnu.org/onlinedocs/gcc/Type-Traits.html>`_ and a subset of the
Microsoft Visual C++ type traits.

Feature detection is supported only for some of the primitives at present.
User code should not use these checks because they bear no direct relation to
the actual set of type traits supported by the C++ standard library.

For type trait ``__X``, ``__has_extension(X)`` indicates the presence of the
type trait primitive in the compiler. A simplistic usage example as might be
seen in standard C++ headers follows:

.. code-block:: c++

  #if __has_extension(is_convertible_to)
  template<typename From, typename To>
  struct is_convertible_to {
    static const bool value = __is_convertible_to(From, To);
  };
  #else
  // Emulate type trait for compatibility with other compilers.
  #endif

The following type trait primitives are supported by Clang:

* ``__has_nothrow_assign`` (GNU, Microsoft)
* ``__has_nothrow_copy`` (GNU, Microsoft)
* ``__has_nothrow_constructor`` (GNU, Microsoft)
* ``__has_trivial_assign`` (GNU, Microsoft)
* ``__has_trivial_copy`` (GNU, Microsoft)
* ``__has_trivial_constructor`` (GNU, Microsoft)
* ``__has_trivial_destructor`` (GNU, Microsoft)
* ``__has_virtual_destructor`` (GNU, Microsoft)
* ``__is_abstract`` (GNU, Microsoft)
* ``__is_base_of`` (GNU, Microsoft)
* ``__is_class`` (GNU, Microsoft)
* ``__is_convertible_to`` (Microsoft)
* ``__is_empty`` (GNU, Microsoft)
* ``__is_enum`` (GNU, Microsoft)
* ``__is_interface_class`` (Microsoft)
* ``__is_pod`` (GNU, Microsoft)
* ``__is_polymorphic`` (GNU, Microsoft)
* ``__is_union`` (GNU, Microsoft)
* ``__is_literal(type)``: Determines whether the given type is a literal type
* ``__is_final``: Determines whether the given type is declared with a
  ``final`` class-virt-specifier.
* ``__underlying_type(type)``: Retrieves the underlying type for a given
  ``enum`` type. This trait is required to implement the C++11 standard
  library.
* ``__is_trivially_assignable(totype, fromtype)``: Determines whether a value
  of type ``totype`` can be assigned to from a value of type ``fromtype`` such
  that no non-trivial functions are called as part of that assignment. This
  trait is required to implement the C++11 standard library.
* ``__is_trivially_constructible(type, argtypes...)``: Determines whether a
  value of type ``type`` can be direct-initialized with arguments of types
  ``argtypes...`` such that no non-trivial functions are called as part of
  that initialization. This trait is required to implement the C++11 standard
  library.
* ``__is_destructible`` (MSVC 2013)
* ``__is_nothrow_destructible`` (MSVC 2013)
* ``__is_nothrow_assignable`` (MSVC 2013, clang)
* ``__is_constructible`` (MSVC 2013, clang)
* ``__is_nothrow_constructible`` (MSVC 2013, clang)
* ``__is_assignable`` (MSVC 2015, clang)

Blocks
======

The syntax and high level language feature description is in
:doc:`BlockLanguageSpec`. Implementation and ABI details for the clang
implementation are in :doc:`Block-ABI-Apple`.

Query for this feature with ``__has_extension(blocks)``.

Objective-C Features
====================

Related result types
--------------------

According to Cocoa conventions, Objective-C methods with certain names
("``init``", "``alloc``", etc.) always return objects that are an instance of
the receiving class's type. Such methods are said to have a "related result
type", meaning that a message send to one of these methods will have the same
static type as an instance of the receiver class. For example, given the
following classes:

.. code-block:: objc

  @interface NSObject
  + (id)alloc;
  - (id)init;
  @end

  @interface NSArray : NSObject
  @end

and this common initialization pattern

.. code-block:: objc

  NSArray *array = [[NSArray alloc] init];

the type of the expression ``[NSArray alloc]`` is ``NSArray*`` because
``alloc`` implicitly has a related result type. Similarly, the type of the
expression ``[[NSArray alloc] init]`` is ``NSArray*``, since ``init`` has a
related result type and its receiver is known to have the type ``NSArray *``.
If neither ``alloc`` nor ``init`` had a related result type, the expressions
would have had type ``id``, as declared in the method signature.

A method with a related result type can be declared by using the type
``instancetype`` as its result type. ``instancetype`` is a contextual keyword
that is only permitted in the result type of an Objective-C method, e.g.

.. code-block:: objc

  @interface A
  + (instancetype)constructAnA;
  @end

The related result type can also be inferred for some methods. To determine
whether a method has an inferred related result type, the first word in the
camel-case selector (e.g., "``init``" in "``initWithObjects``") is considered,
and the method will have a related result type if its return type is
compatible with the type of its class and if:

* the first word is "``alloc``" or "``new``", and the method is a class
  method, or

* the first word is "``autorelease``", "``init``", "``retain``", or
  "``self``", and the method is an instance method.

If a method with a related result type is overridden by a subclass method, the
subclass method must also return a type that is compatible with the subclass
type. For example:

.. code-block:: objc

  @interface NSString : NSObject
  - (NSUnrelated *)init; // incorrect usage: NSUnrelated is not NSString or a
                         // superclass of NSString
  @end

Related result types only affect the type of a message send or property access
via the given method. In all other respects, a method with a related result
type is treated the same way as a method that returns ``id``.

Use ``__has_feature(objc_instancetype)`` to determine whether the
``instancetype`` contextual keyword is available.

Automatic reference counting
----------------------------

Clang provides support for :doc:`automated reference counting
<AutomaticReferenceCounting>` in Objective-C, which eliminates the need for
manual ``retain``/``release``/``autorelease`` message sends.

There are two feature macros associated with automatic reference counting:
``__has_feature(objc_arc)`` indicates the availability of automated reference
counting in general, while ``__has_feature(objc_arc_weak)`` indicates that
automated reference counting also includes support for ``__weak`` pointers to
Objective-C objects.

.. _objc-fixed-enum:

Enumerations with a fixed underlying type
-----------------------------------------

Clang provides support for C++11 enumerations with a fixed underlying type
within Objective-C. For example, one can write an enumeration type as:

.. code-block:: c++

  typedef enum : unsigned char { Red, Green, Blue } Color;

This specifies that the underlying type, which is used to store the
enumeration value, is ``unsigned char``.

Use ``__has_feature(objc_fixed_enum)`` to determine whether support for fixed
underlying types is available in Objective-C.

Interoperability with C++11 lambdas
-----------------------------------

Clang provides interoperability between C++11 lambdas and blocks-based APIs,
by permitting a lambda to be implicitly converted to a block pointer with the
corresponding signature. For example, consider an API such as ``NSArray``'s
array-sorting method:

.. code-block:: objc

  - (NSArray *)sortedArrayUsingComparator:(NSComparator)cmptr;

``NSComparator`` is simply a typedef for the block pointer
``NSComparisonResult (^)(id, id)``, and parameters of this type are generally
provided with block literals as arguments. However, one can also use a C++11
lambda so long as it provides the same signature (in this case, accepting two
parameters of type ``id`` and returning an ``NSComparisonResult``):

.. code-block:: objc

  NSArray *array = @[@"string 1", @"string 21", @"string 12", @"String 11",
                     @"String 02"];
  const NSStringCompareOptions comparisonOptions
    = NSCaseInsensitiveSearch | NSNumericSearch |
      NSWidthInsensitiveSearch | NSForcedOrderingSearch;
  NSLocale *currentLocale = [NSLocale currentLocale];
  NSArray *sorted
    = [array sortedArrayUsingComparator:[=](id s1, id s2) -> NSComparisonResult {
               NSRange string1Range = NSMakeRange(0, [s1 length]);
               return [s1 compare:s2 options:comparisonOptions
               range:string1Range locale:currentLocale];
       }];
  NSLog(@"sorted: %@", sorted);

This code relies on an implicit conversion from the type of the lambda
expression (an unnamed, local class type called the *closure type*) to the
corresponding block pointer type. The conversion itself is expressed by a
conversion operator in that closure type that produces a block pointer with
the same signature as the lambda itself, e.g.,

.. code-block:: objc

  operator NSComparisonResult (^)(id, id)() const;

This conversion function returns a new block that simply forwards the two
parameters to the lambda object (which it captures by copy), then returns the
result. The returned block is first copied (with ``Block_copy``) and then
autoreleased. As an optimization, if a lambda expression is immediately
converted to a block pointer (as in the first example, above), then the block
is not copied and autoreleased: rather, it is given the same lifetime as a
block literal written at that point in the program, which avoids the overhead
of copying a block to the heap in the common case.

The conversion from a lambda to a block pointer is only available in
Objective-C++, and not in C++ with blocks, due to its use of Objective-C
memory management (autorelease).

Object Literals and Subscripting
--------------------------------

Clang provides support for :doc:`Object Literals and Subscripting
<ObjectiveCLiterals>` in Objective-C, which simplifies common Objective-C
programming patterns, makes programs more concise, and improves the safety of
container creation.

There are several feature macros associated with object literals and
subscripting: ``__has_feature(objc_array_literals)`` tests the availability of
array literals; ``__has_feature(objc_dictionary_literals)`` tests the
availability of dictionary literals; ``__has_feature(objc_subscripting)``
tests the availability of object subscripting.

Objective-C Autosynthesis of Properties
---------------------------------------

Clang provides support for autosynthesis of declared properties. Using this
feature, clang provides default synthesis of those properties not declared
@dynamic and not having user-provided backing getter and setter methods.
``__has_feature(objc_default_synthesize_properties)`` checks for availability
of this feature in the version of clang being used.

.. _langext-objc-retain-release:

Objective-C retaining behavior attributes
-----------------------------------------

In Objective-C, functions and methods are generally assumed to follow the
Cocoa Memory Management conventions for ownership of object arguments and
return values. However, there are exceptions, and so Clang provides attributes
to allow these exceptions to be documented. These attributes are used by ARC
and the `static analyzer <https://clang-analyzer.llvm.org>`_. Some exceptions
may be better described using the ``objc_method_family`` attribute instead.

**Usage**: The ``ns_returns_retained``, ``ns_returns_not_retained``,
``ns_returns_autoreleased``, ``cf_returns_retained``, and
``cf_returns_not_retained`` attributes can be placed on methods and functions
that return Objective-C or CoreFoundation objects. They are commonly placed at
the end of a function prototype or method declaration:

.. code-block:: objc

  id foo() __attribute__((ns_returns_retained));

  - (NSString *)bar:(int)x __attribute__((ns_returns_retained));

The ``*_returns_retained`` attributes specify that the returned object has a
+1 retain count. The ``*_returns_not_retained`` attributes specify that the
return object has a +0 retain count, even if the normal convention for its
selector would be +1. ``ns_returns_autoreleased`` specifies that the returned
object is +0, but is guaranteed to live at least as long as the next flush of
an autorelease pool.

**Usage**: The ``ns_consumed`` and ``cf_consumed`` attributes can be placed on
a parameter declaration; they specify that the argument is expected to have a
+1 retain count, which will be balanced in some way by the function or method.
The ``ns_consumes_self`` attribute can only be placed on an Objective-C
method; it specifies that the method expects its ``self`` parameter to have a
+1 retain count, which it will balance in some way.

.. code-block:: objc

  void foo(__attribute__((ns_consumed)) NSString *string);

  - (void) bar __attribute__((ns_consumes_self));
  - (void) baz:(id) __attribute__((ns_consumed)) x;

Further examples of these attributes are available in the static analyzer's
`list of annotations for analysis
<https://clang-analyzer.llvm.org/annotations.html>`_.

Query for these features with ``__has_attribute(ns_consumed)``,
``__has_attribute(ns_returns_retained)``, etc.

Objective-C++ ABI: protocol-qualifier mangling of parameters
------------------------------------------------------------

Starting with LLVM 3.4, Clang produces a new mangling for parameters whose
type is a qualified-``id`` (e.g., ``id<Foo>``).
This mangling allows such parameters to be differentiated from those with the
regular unqualified ``id`` type. This was a non-backward compatible mangling
change to the ABI. This change allows proper overloading, and also prevents
mangling conflicts with template parameters of protocol-qualified type.

Query the presence of this new mangling with
``__has_feature(objc_protocol_qualifier_mangling)``.

.. _langext-overloading:

Initializer lists for complex numbers in C
==========================================

Clang supports an extension which allows the following in C:

.. code-block:: c++

  #include <math.h>
  #include <complex.h>
  complex float x = { 1.0f, INFINITY }; // Init to (1, Inf)

This construct is useful because there is no way to separately initialize the
real and imaginary parts of a complex variable in standard C, given that clang
does not support ``_Imaginary``. (Clang also supports the ``__real__`` and
``__imag__`` extensions from gcc, which help in some cases, but are not usable
in static initializers.)

Note that this extension does not allow eliding the braces; the meaning of the
following two lines is different:

.. code-block:: c++

  complex float x[] = { { 1.0f, 1.0f } }; // [0] = (1, 1)
  complex float x[] = { 1.0f, 1.0f };     // [0] = (1, 0), [1] = (1, 0)

This extension also works in C++ mode, as far as that goes, but does not apply
to the C++ ``std::complex``. (In C++11, list initialization allows the same
syntax to be used with ``std::complex`` with the same meaning.)

Builtin Functions
=================

Clang supports a number of builtin library functions with the same syntax as
GCC, including things like ``__builtin_nan``, ``__builtin_constant_p``,
``__builtin_choose_expr``, ``__builtin_types_compatible_p``,
``__builtin_assume_aligned``, ``__sync_fetch_and_add``, etc. In addition to
the GCC builtins, Clang supports a number of builtins that GCC does not, which
are listed here.

Please note that Clang does not and will not support all of the GCC builtins
for vector operations. Instead of using builtins, you should use the functions
defined in target-specific header files like ``<xmmintrin.h>``, which define
portable wrappers for these. Many of the Clang versions of these functions are
implemented directly in terms of :ref:`extended vector support
<langext-vectors>` instead of builtins, in order to reduce the number of
builtins that we need to implement.

``__builtin_assume``
--------------------

``__builtin_assume`` is used to provide the optimizer with a boolean invariant
that is defined to be true.

**Syntax**:

.. code-block:: c++

  __builtin_assume(bool)

**Example of Use**:

.. code-block:: c++

  int foo(int x) {
    __builtin_assume(x != 0);

    // The optimizer may short-circuit this check using the invariant.
    if (x == 0)
      return do_something();

    return do_something_else();
  }

**Description**:

The boolean argument to this function is defined to be true. The optimizer may
analyze the form of the expression provided as the argument and from it deduce
information used to optimize the program. If the condition is violated during
execution, the behavior is undefined. The argument itself is never evaluated,
so any side effects of the expression will be discarded.

Query for this feature with ``__has_builtin(__builtin_assume)``.

``__builtin_readcyclecounter``
------------------------------

``__builtin_readcyclecounter`` is used to access the cycle counter register
(or a similar low-latency, high-accuracy clock) on those targets that support
it.

**Syntax**:

.. code-block:: c++

  __builtin_readcyclecounter()

**Example of Use**:

.. code-block:: c++

  unsigned long long t0 = __builtin_readcyclecounter();
  do_something();
  unsigned long long t1 = __builtin_readcyclecounter();
  unsigned long long cycles_to_do_something = t1 - t0; // assuming no overflow

**Description**:

The ``__builtin_readcyclecounter()`` builtin returns the cycle counter value,
which may be either global or process/thread-specific depending on the target.
As the backing counters often overflow quickly (on the order of seconds) this
should only be used for timing small intervals. When not supported by the
target, the return value is always zero. This builtin takes no arguments and
produces an unsigned long long result.

Query for this feature with ``__has_builtin(__builtin_readcyclecounter)``.
Note that even if present, its use may depend on run-time privilege or other
OS controlled state.

.. _langext-__builtin_shufflevector:

``__builtin_shufflevector``
---------------------------

``__builtin_shufflevector`` is used to express generic vector
permutation/shuffle/swizzle operations. This builtin is also very important
for the implementation of various target-specific header files like
``<xmmintrin.h>``.

**Syntax**:

.. code-block:: c++

  __builtin_shufflevector(vec1, vec2, index1, index2, ...)

**Examples**:

.. code-block:: c++

  // identity operation - return 4-element vector v1.
  __builtin_shufflevector(v1, v1, 0, 1, 2, 3)

  // "Splat" element 0 of V1 into a 4-element result.
  __builtin_shufflevector(V1, V1, 0, 0, 0, 0)

  // Reverse 4-element vector V1.
  __builtin_shufflevector(V1, V1, 3, 2, 1, 0)

  // Concatenate every other element of 4-element vectors V1 and V2.
  __builtin_shufflevector(V1, V2, 0, 2, 4, 6)

  // Concatenate every other element of 8-element vectors V1 and V2.
  __builtin_shufflevector(V1, V2, 0, 2, 4, 6, 8, 10, 12, 14)

  // Shuffle v1 with some elements being undefined
  __builtin_shufflevector(v1, v1, 3, -1, 1, -1)

**Description**:

The first two arguments to ``__builtin_shufflevector`` are vectors that have
the same element type. The remaining arguments are a list of integers that
specify the element indices of the first two vectors that should be extracted
and returned in a new vector. These element indices are numbered sequentially
starting with the first vector, continuing into the second vector. Thus, if
``vec1`` is a 4-element vector, index 5 would refer to the second element of
``vec2``.

An index of -1 can be used to indicate that the corresponding element in the
returned vector is a don't care and can be optimized by the backend.

The result of ``__builtin_shufflevector`` is a vector with the same element
type as ``vec1``/``vec2`` but that has an element count equal to the number of
indices specified.

Query for this feature with ``__has_builtin(__builtin_shufflevector)``.

.. _langext-__builtin_convertvector:

``__builtin_convertvector``
---------------------------

``__builtin_convertvector`` is used to express generic vector type-conversion
operations. The input vector and the output vector type must have the same
number of elements.

**Syntax**:

.. code-block:: c++

  __builtin_convertvector(src_vec, dst_vec_type)

**Examples**:

.. code-block:: c++

  typedef double vector4double __attribute__((__vector_size__(32)));
  typedef float  vector4float  __attribute__((__vector_size__(16)));
  typedef short  vector4short  __attribute__((__vector_size__(8)));
  vector4float vf; vector4short vs;

  // convert from a vector of 4 floats to a vector of 4 doubles.
  __builtin_convertvector(vf, vector4double)
  // equivalent to:
  (vector4double) { (double) vf[0], (double) vf[1], (double) vf[2], (double) vf[3] }

  // convert from a vector of 4 shorts to a vector of 4 floats.
  __builtin_convertvector(vs, vector4float)
  // equivalent to:
  (vector4float) { (float) vs[0], (float) vs[1], (float) vs[2], (float) vs[3] }

**Description**:

The first argument to ``__builtin_convertvector`` is a vector, and the second
argument is a vector type with the same number of elements as the first
argument.

The result of ``__builtin_convertvector`` is a vector with the same element
type as the second argument, with a value defined in terms of the action of a
C-style cast applied to each element of the first argument.

Query for this feature with ``__has_builtin(__builtin_convertvector)``.

``__builtin_bitreverse``
------------------------

* ``__builtin_bitreverse8``
* ``__builtin_bitreverse16``
* ``__builtin_bitreverse32``
* ``__builtin_bitreverse64``

**Syntax**:

.. code-block:: c++

  __builtin_bitreverse32(x)

**Examples**:

.. code-block:: c++

  uint8_t rev_w = __builtin_bitreverse8(w);
  uint16_t rev_x = __builtin_bitreverse16(x);
  uint32_t rev_y = __builtin_bitreverse32(y);
  uint64_t rev_z = __builtin_bitreverse64(z);

**Description**:

The '``__builtin_bitreverse``' family of builtins is used to reverse the
bitpattern of an integer value; for example ``0b10110110`` becomes
``0b01101101``.

``__builtin_unreachable``
-------------------------

``__builtin_unreachable`` is used to indicate that a specific point in the
program cannot be reached, even if the compiler might otherwise think it can.
This is useful to improve optimization and eliminate certain warnings. For
example, without the ``__builtin_unreachable`` in the example below, the
compiler assumes that the inline asm can fall through and prints a "function
declared '``noreturn``' should not return" warning.

**Syntax**:

.. code-block:: c++

  __builtin_unreachable()

**Example of use**:

.. code-block:: c++

  void myabort(void) __attribute__((noreturn));
  void myabort(void) {
    asm("int3");
    __builtin_unreachable();
  }

**Description**:

The ``__builtin_unreachable()`` builtin has completely undefined behavior.
Since it has undefined behavior, it is a statement that it is never reached
and the optimizer can take advantage of this to produce better code. This
builtin takes no arguments and produces a void result.

Query for this feature with ``__has_builtin(__builtin_unreachable)``.

``__builtin_unpredictable``
---------------------------

``__builtin_unpredictable`` is used to indicate that a branch condition is
unpredictable by hardware mechanisms such as branch prediction logic.

**Syntax**:

.. code-block:: c++

  __builtin_unpredictable(long long)

**Example of use**:

.. code-block:: c++

  if (__builtin_unpredictable(x > 0)) {
     foo();
  }

**Description**:

The ``__builtin_unpredictable()`` builtin is expected to be used with control
flow conditions such as in ``if`` and ``switch`` statements.

Query for this feature with ``__has_builtin(__builtin_unpredictable)``.

``__sync_swap``
---------------

``__sync_swap`` is used to atomically swap integers or pointers in memory.

**Syntax**:

.. code-block:: c++

  type __sync_swap(type *ptr, type value, ...)

**Example of Use**:

.. code-block:: c++

  int old_value = __sync_swap(&value, new_value);

**Description**:

The ``__sync_swap()`` builtin extends the existing ``__sync_*()`` family of
atomic intrinsics to allow code to atomically swap the current value with the
new value.
More importantly, it helps developers write more efficient and correct code by
avoiding expensive loops around ``__sync_bool_compare_and_swap()`` or relying
on the platform-specific implementation details of
``__sync_lock_test_and_set()``. The ``__sync_swap()`` builtin is a full
barrier.

``__builtin_addressof``
-----------------------

``__builtin_addressof`` performs the functionality of the built-in ``&``
operator, ignoring any ``operator&`` overload. This is useful in constant
expressions in C++11, where there is no other way to take the address of an
object that overloads ``operator&``.

**Example of use**:

.. code-block:: c++

  template <typename T> constexpr T *addressof(T &value) {
    return __builtin_addressof(value);
  }

``__builtin_operator_new`` and ``__builtin_operator_delete``
------------------------------------------------------------

``__builtin_operator_new`` allocates memory just like a non-placement
non-class *new-expression*. This is exactly like directly calling the normal
non-placement ``::operator new``, except that it allows certain optimizations
that the C++ standard does not permit for a direct function call to
``::operator new`` (in particular, removing ``new`` / ``delete`` pairs and
merging allocations).

Likewise, ``__builtin_operator_delete`` deallocates memory just like a
non-class *delete-expression*, and is exactly like directly calling the normal
``::operator delete``, except that it permits optimizations. Only the unsized
form of ``__builtin_operator_delete`` is currently available.

These builtins are intended for use in the implementation of
``std::allocator`` and other similar allocation libraries, and are only
available in C++.

Multiprecision Arithmetic Builtins
----------------------------------

Clang provides a set of builtins which expose multiprecision arithmetic in a
manner amenable to C. They all have the following form:

.. code-block:: c

  unsigned x = ..., y = ..., carryin = ..., carryout;
  unsigned sum = __builtin_addc(x, y, carryin, &carryout);

Thus one can form a multiprecision addition chain in the following manner:

.. code-block:: c

  unsigned *x, *y, *z, carryin=0, carryout;
  z[0] = __builtin_addc(x[0], y[0], carryin, &carryout);
  carryin = carryout;
  z[1] = __builtin_addc(x[1], y[1], carryin, &carryout);
  carryin = carryout;
  z[2] = __builtin_addc(x[2], y[2], carryin, &carryout);
  carryin = carryout;
  z[3] = __builtin_addc(x[3], y[3], carryin, &carryout);

The complete list of builtins is:
Multiprecision Arithmetic Builtins
----------------------------------

Clang provides a set of builtins which expose multiprecision arithmetic in a
manner amenable to C. They all have the following form:

.. code-block:: c

  unsigned x = ..., y = ..., carryin = ..., carryout;
  unsigned sum = __builtin_addc(x, y, carryin, &carryout);

Thus one can form a multiprecision addition chain in the following manner:

.. code-block:: c

  unsigned *x, *y, *z, carryin=0, carryout;
  z[0] = __builtin_addc(x[0], y[0], carryin, &carryout);
  carryin = carryout;
  z[1] = __builtin_addc(x[1], y[1], carryin, &carryout);
  carryin = carryout;
  z[2] = __builtin_addc(x[2], y[2], carryin, &carryout);
  carryin = carryout;
  z[3] = __builtin_addc(x[3], y[3], carryin, &carryout);

The complete list of builtins is:

.. code-block:: c

  unsigned char      __builtin_addcb (unsigned char x, unsigned char y, unsigned char carryin, unsigned char *carryout);
  unsigned short     __builtin_addcs (unsigned short x, unsigned short y, unsigned short carryin, unsigned short *carryout);
  unsigned           __builtin_addc  (unsigned x, unsigned y, unsigned carryin, unsigned *carryout);
  unsigned long      __builtin_addcl (unsigned long x, unsigned long y, unsigned long carryin, unsigned long *carryout);
  unsigned long long __builtin_addcll(unsigned long long x, unsigned long long y, unsigned long long carryin, unsigned long long *carryout);
  unsigned char      __builtin_subcb (unsigned char x, unsigned char y, unsigned char carryin, unsigned char *carryout);
  unsigned short     __builtin_subcs (unsigned short x, unsigned short y, unsigned short carryin, unsigned short *carryout);
  unsigned           __builtin_subc  (unsigned x, unsigned y, unsigned carryin, unsigned *carryout);
  unsigned long      __builtin_subcl (unsigned long x, unsigned long y, unsigned long carryin, unsigned long *carryout);
  unsigned long long __builtin_subcll(unsigned long long x, unsigned long long y, unsigned long long carryin, unsigned long long *carryout);

Checked Arithmetic Builtins
---------------------------

Clang provides a set of builtins that implement checked arithmetic for
security-critical applications in a manner that is fast and easily expressible
in C. As an example of their usage:

.. code-block:: c

  errorcode_t security_critical_application(...) {
    unsigned x, y, result;
    ...
    if (__builtin_mul_overflow(x, y, &result))
      return kErrorCodeHackers;
    ...
    use_multiply(result);
    ...
  }

Clang provides the following checked arithmetic builtins:

.. code-block:: c

  bool __builtin_add_overflow   (type1 x, type2 y, type3 *sum);
  bool __builtin_sub_overflow   (type1 x, type2 y, type3 *diff);
  bool __builtin_mul_overflow   (type1 x, type2 y, type3 *prod);
  bool __builtin_uadd_overflow  (unsigned x, unsigned y, unsigned *sum);
  bool __builtin_uaddl_overflow (unsigned long x, unsigned long y, unsigned long *sum);
  bool __builtin_uaddll_overflow(unsigned long long x, unsigned long long y, unsigned long long *sum);
  bool __builtin_usub_overflow  (unsigned x, unsigned y, unsigned *diff);
  bool __builtin_usubl_overflow (unsigned long x, unsigned long y, unsigned long *diff);
  bool __builtin_usubll_overflow(unsigned long long x, unsigned long long y, unsigned long long *diff);
  bool __builtin_umul_overflow  (unsigned x, unsigned y, unsigned *prod);
  bool __builtin_umull_overflow (unsigned long x, unsigned long y, unsigned long *prod);
  bool __builtin_umulll_overflow(unsigned long long x, unsigned long long y, unsigned long long *prod);
  bool __builtin_sadd_overflow  (int x, int y, int *sum);
  bool __builtin_saddl_overflow (long x, long y, long *sum);
  bool __builtin_saddll_overflow(long long x, long long y, long long *sum);
  bool __builtin_ssub_overflow  (int x, int y, int *diff);
  bool __builtin_ssubl_overflow (long x, long y, long *diff);
  bool __builtin_ssubll_overflow(long long x, long long y, long long *diff);
  bool __builtin_smul_overflow  (int x, int y, int *prod);
  bool __builtin_smull_overflow (long x, long y, long *prod);
  bool __builtin_smulll_overflow(long long x, long long y, long long *prod);

Each builtin performs the specified mathematical operation on the first two
arguments and stores the result in the third argument. If possible, the
result will be equal to the mathematically-correct result and the builtin will
return 0. Otherwise, the builtin will return 1 and the result will be equal to
the unique value that is equivalent to the mathematically-correct result
modulo two raised to the *k* power, where *k* is the number of bits in the
result type. The behavior of these builtins is well-defined for all argument
values.

The first three builtins work generically for operands of any integer type,
including boolean types. The operands need not have the same type as each
other, or as the result. The other builtins may implicitly promote or
convert their operands before performing the operation.

Query for this feature with ``__has_builtin(__builtin_add_overflow)``, etc.
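For instance, a small self-contained sketch (the values are chosen purely for
illustration) showing the generic form with mixed operand and result types:

.. code-block:: c

  #include <stdbool.h>
  #include <stdio.h>

  int main(void) {
    long long a = 1LL << 40;
    int b = 7;
    short result;
    // Operand types (long long, int) and the result type (short) all
    // differ; the builtin reports whether the true sum fits in 'result'.
    bool overflowed = __builtin_add_overflow(a, b, &result);
    printf("overflowed = %d\n", overflowed);  // prints 1 here
    return 0;
  }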
Floating point builtins
---------------------------------------

``__builtin_canonicalize``
--------------------------

.. code-block:: c

  double __builtin_canonicalize(double);
  float __builtin_canonicalizef(float);
  long double __builtin_canonicalizel(long double);

Returns the platform-specific canonical encoding of a floating-point number.
This canonicalization is useful for implementing certain numeric primitives
such as ``frexp``. See `LLVM canonicalize intrinsic `_ for more information on
the semantics.

+String builtins
+---------------
+
+Clang provides constant expression evaluation support for builtin forms of
+the following functions from the C standard library headers ``<string.h>``
+and ``<wchar.h>``:
+
+* ``memchr``
+* ``memcmp``
+* ``strchr``
+* ``strcmp``
+* ``strlen``
+* ``strncmp``
+* ``wcschr``
+* ``wcscmp``
+* ``wcslen``
+* ``wcsncmp``
+* ``wmemchr``
+* ``wmemcmp``
+
+In each case, the builtin form has the name of the C library function prefixed
+by ``__builtin_``. Example:
+
+.. code-block:: c
+
+  void *p = __builtin_memchr("foobar", 'b', 5);
+
+In addition to the above, one further builtin is provided:
+
+.. code-block:: c
+
+  char *__builtin_char_memchr(const char *haystack, int needle, size_t size);
+
+``__builtin_char_memchr(a, b, c)`` is identical to
+``(char*)__builtin_memchr(a, b, c)`` except that its use is permitted within
+constant expressions in C++11 onwards (where a cast from ``void*`` to ``char*``
+is disallowed in general).
+
+Support for constant expression evaluation for the above builtins can be
+detected with ``__has_feature(cxx_constexpr_string_builtins)``.
+
.. _langext-__c11_atomic:

__c11_atomic builtins
---------------------

Clang provides a set of builtins which are intended to be used to implement
C11's ``<stdatomic.h>`` header. These builtins provide the semantics of the
``_explicit`` form of the corresponding C11 operation, and are named with a
``__c11_`` prefix. The supported operations, and the differences from
the corresponding C11 operations, are:

* ``__c11_atomic_init``
* ``__c11_atomic_thread_fence``
* ``__c11_atomic_signal_fence``
* ``__c11_atomic_is_lock_free`` (The argument is the size of the
  ``_Atomic(...)`` object, instead of its address)
* ``__c11_atomic_store``
* ``__c11_atomic_load``
* ``__c11_atomic_exchange``
* ``__c11_atomic_compare_exchange_strong``
* ``__c11_atomic_compare_exchange_weak``
* ``__c11_atomic_fetch_add``
* ``__c11_atomic_fetch_sub``
* ``__c11_atomic_fetch_and``
* ``__c11_atomic_fetch_or``
* ``__c11_atomic_fetch_xor``

The macros ``__ATOMIC_RELAXED``, ``__ATOMIC_CONSUME``, ``__ATOMIC_ACQUIRE``,
``__ATOMIC_RELEASE``, ``__ATOMIC_ACQ_REL``, and ``__ATOMIC_SEQ_CST`` are
provided, with values corresponding to the enumerators of C11's
``memory_order`` enumeration.

(Note that Clang additionally provides GCC-compatible ``__atomic_*``
builtins.)
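As an illustrative sketch of the intended use (``my_fetch_add_relaxed`` is a
hypothetical wrapper, not the actual ``<stdatomic.h>`` implementation):

.. code-block:: c

  // Hypothetical wrapper in the style of C11's atomic_fetch_add_explicit.
  int my_fetch_add_relaxed(_Atomic(int) *obj, int arg) {
    return __c11_atomic_fetch_add(obj, arg, __ATOMIC_RELAXED);
  }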
Low-level ARM exclusive memory builtins
---------------------------------------

Clang provides overloaded builtins giving direct access to the three key ARM
instructions for implementing atomic operations.

.. code-block:: c

  T __builtin_arm_ldrex(const volatile T *addr);
  T __builtin_arm_ldaex(const volatile T *addr);
  int __builtin_arm_strex(T val, volatile T *addr);
  int __builtin_arm_stlex(T val, volatile T *addr);
  void __builtin_arm_clrex(void);

The types ``T`` currently supported are:

* Integer types with width at most 64 bits (or 128 bits on AArch64).
* Floating-point types
* Pointer types.

Note that the compiler does not guarantee it will not insert stores which clear
the exclusive monitor in between an ``ldrex`` type operation and its paired
``strex``. In practice this is usually only a risk when the extra store is on
the same cache line as the variable being modified and Clang will only insert
stack stores on its own, so it is best not to use these operations on variables
with automatic storage duration.

Also, loads and stores may be implicit in code written between the ``ldrex``
and ``strex``. Clang will not necessarily mitigate the effects of these either,
so care should be exercised.

For these reasons the higher level atomic primitives should be preferred where
possible.
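For illustration only, a minimal sketch of an atomic increment built from
these primitives (the higher level atomics above remain the recommended
route):

.. code-block:: c

  // Atomically increment *counter and return the old value.
  // Retries until the paired strex succeeds (returns 0).
  int atomic_increment(volatile int *counter) {
    int old;
    do {
      old = __builtin_arm_ldrex(counter);
    } while (__builtin_arm_strex(old + 1, counter) != 0);
    return old;
  }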
Non-temporal load/store builtins
--------------------------------

Clang provides overloaded builtins allowing generation of non-temporal memory
accesses.

.. code-block:: c

  T __builtin_nontemporal_load(T *addr);
  void __builtin_nontemporal_store(T value, T *addr);

The types ``T`` currently supported are:

* Integer types.
* Floating-point types.
* Vector types.

Note that the compiler does not guarantee that non-temporal loads or stores
will be used.

C++ Coroutines support builtins
--------------------------------

.. warning::
  This is a work in progress. Compatibility across Clang/LLVM releases is not
  guaranteed.

Clang provides experimental builtins to support C++ Coroutines as defined by
http://wg21.link/P0057. The following four are intended to be used by the
standard library to implement the `std::experimental::coroutine_handle` type.

**Syntax**:

.. code-block:: c

  void  __builtin_coro_resume(void *addr);
  void  __builtin_coro_destroy(void *addr);
  bool  __builtin_coro_done(void *addr);
  void *__builtin_coro_promise(void *addr, int alignment, bool from_promise)

**Example of use**:

.. code-block:: c++

  template <> struct coroutine_handle<void> {
    void resume() const { __builtin_coro_resume(ptr); }
    void destroy() const { __builtin_coro_destroy(ptr); }
    bool done() const { return __builtin_coro_done(ptr); }
    // ...
  protected:
    void *ptr;
  };

  template <typename Promise> struct coroutine_handle : coroutine_handle<> {
    // ...
    Promise &promise() const {
      return *reinterpret_cast<Promise *>(
          __builtin_coro_promise(ptr, alignof(Promise), /*from-promise=*/false));
    }
    static coroutine_handle from_promise(Promise &promise) {
      coroutine_handle p;
      p.ptr = __builtin_coro_promise(&promise, alignof(Promise),
                                     /*from-promise=*/true);
      return p;
    }
  };

Other coroutine builtins are either for internal clang use or for use during
development of the coroutine feature. See `Coroutines in LLVM `_ for more
information on their semantics. Note that builtins matching the intrinsics
that take token as the first parameter (llvm.coro.begin, llvm.coro.alloc,
llvm.coro.free and llvm.coro.suspend) omit the token parameter and fill it in
with an appropriate value during emission.

**Syntax**:

.. code-block:: c

  size_t __builtin_coro_size()
  void  *__builtin_coro_frame()
  void  *__builtin_coro_free(void *coro_frame)

  void  *__builtin_coro_id(int align, void *promise, void *fnaddr, void *parts)
  bool   __builtin_coro_alloc()
  void  *__builtin_coro_begin(void *memory)
  void   __builtin_coro_end(void *coro_frame, bool unwind)
  char   __builtin_coro_suspend(bool final)
  bool   __builtin_coro_param(void *original, void *copy)

Note that there is no builtin matching the `llvm.coro.save` intrinsic. LLVM
will automatically insert one if the first argument to `llvm.coro.suspend` is
token `none`. If a user calls `__builtin_coro_suspend`, clang will insert
`token none` as the first argument to the intrinsic.

Non-standard C++11 Attributes
=============================

Clang's non-standard C++11 attributes live in the ``clang`` attribute
namespace.

Clang supports GCC's ``gnu`` attribute namespace. All GCC attributes which
are accepted with the ``__attribute__((foo))`` syntax are also accepted as
``[[gnu::foo]]``. This only extends to attributes which are specified by GCC
(see the list of `GCC function attributes `_, `GCC variable attributes `_,
and `GCC type attributes `_). As with the GCC implementation, these attributes
must appertain to the *declarator-id* in a declaration, which means they must
go either at the start of the declaration or immediately after the name being
declared.

For example, this applies the GNU ``unused`` attribute to ``a`` and ``f``, and
also applies the GNU ``noreturn`` attribute to ``f``.

.. code-block:: c++

  [[gnu::unused]] int a, f [[gnu::noreturn]] ();

Target-Specific Extensions
==========================

Clang supports some language features conditionally on some targets.

ARM/AArch64 Language Extensions
-------------------------------

Memory Barrier Intrinsics
^^^^^^^^^^^^^^^^^^^^^^^^^

Clang implements the ``__dmb``, ``__dsb`` and ``__isb`` intrinsics as defined
in the `ARM C Language Extensions Release 2.0 `_. Note that these intrinsics
are implemented as motion barriers that block reordering of memory accesses
and side effect instructions. Other instructions like simple arithmetic may be
reordered around the intrinsic. If you expect to have no reordering at all,
use inline assembly instead.

X86/X86-64 Language Extensions
------------------------------

The X86 backend has these language extensions:

Memory references to specified segments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Annotating a pointer with address space #256 causes it to be code generated
relative to the X86 GS segment register, address space #257 causes it to be
relative to the X86 FS segment, and address space #258 causes it to be
relative to the X86 SS segment. Note that this is a very low-level feature
that should only be used if you know what you're doing (for example in an OS
kernel).

Here is an example:

.. code-block:: c++

  #define GS_RELATIVE __attribute__((address_space(256)))
  int foo(int GS_RELATIVE *P) {
    return *P;
  }

Which compiles to (on X86-32):

.. code-block:: gas

  _foo:
          movl    4(%esp), %eax
          movl    %gs:(%eax), %eax
          ret

Extensions for Static Analysis
==============================

Clang supports additional attributes that are useful for documenting program
invariants and rules for static analysis tools, such as the `Clang Static
Analyzer `_. These attributes are documented in the analyzer's `list of
source-level annotations `_.

Extensions for Dynamic Analysis
===============================

Use ``__has_feature(address_sanitizer)`` to check if the code is being built
with :doc:`AddressSanitizer`.

Use ``__has_feature(thread_sanitizer)`` to check if the code is being built
with :doc:`ThreadSanitizer`.

Use ``__has_feature(memory_sanitizer)`` to check if the code is being built
with :doc:`MemorySanitizer`.

Use ``__has_feature(safe_stack)`` to check if the code is being built
with :doc:`SafeStack`.
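For example, one possible pattern for guarding sanitizer-specific code
(``BUILDING_WITH_ASAN`` is an illustrative project macro, not one defined by
Clang):

.. code-block:: c

  #if defined(__has_feature)
  #  if __has_feature(address_sanitizer)
  #    define BUILDING_WITH_ASAN 1  // hypothetical project-local macro
  #  endif
  #endif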
Extensions for selectively disabling optimization
=================================================

Clang provides a mechanism for selectively disabling optimizations in functions
and methods.

To disable optimizations in a single function definition, the GNU-style or
C++11 non-standard attribute ``optnone`` can be used.

.. code-block:: c++

  // The following functions will not be optimized.

  // GNU-style attribute
  __attribute__((optnone)) int foo() {
    // ... code
  }
  // C++11 attribute
  [[clang::optnone]] int bar() {
    // ... code
  }

To facilitate disabling optimization for a range of function definitions, a
range-based pragma is provided. Its syntax is ``#pragma clang optimize``
followed by ``off`` or ``on``.

All function definitions in the region between an ``off`` and the following
``on`` will be decorated with the ``optnone`` attribute unless doing so would
conflict with explicit attributes already present on the function (e.g. the
ones that control inlining).

.. code-block:: c++

  #pragma clang optimize off
  // This function will be decorated with optnone.
  int foo() {
    // ... code
  }

  // optnone conflicts with always_inline, so bar() will not be decorated.
  __attribute__((always_inline)) int bar() {
    // ... code
  }
  #pragma clang optimize on

If no ``on`` is found to close an ``off`` region, the end of the region is the
end of the compilation unit.

Note that a stray ``#pragma clang optimize on`` does not selectively enable
additional optimizations when compiling at low optimization levels. This
feature can only be used to selectively disable optimizations.

The pragma has an effect on functions only at the point of their definition;
for function templates, this means that the state of the pragma at the point
of an instantiation is not necessarily relevant. Consider the following
example:

.. code-block:: c++

  template <typename T> T twice(T t) {
    return 2 * t;
  }

  #pragma clang optimize off
  template <typename T> T thrice(T t) {
    return 3 * t;
  }

  int container(int a, int b) {
    return twice(a) + thrice(b);
  }
  #pragma clang optimize on

In this example, the definition of the template function ``twice`` is outside
the pragma region, whereas the definition of ``thrice`` is inside the region.
The ``container`` function is also in the region and will not be optimized,
but it causes the instantiation of ``twice`` and ``thrice`` with an ``int``
type; of these two instantiations, ``twice`` will be optimized (because its
definition was outside the region) and ``thrice`` will not be optimized.

Extensions for loop hint optimizations
======================================

The ``#pragma clang loop`` directive is used to specify hints for optimizing
the subsequent for, while, do-while, or C++11 range-based for loop. The
directive provides options for vectorization, interleaving, unrolling and
distribution. Loop hints can be specified before any loop and will be ignored
if the optimization is not safe to apply.

Vectorization and Interleaving
------------------------------

A vectorized loop performs multiple iterations of the original loop
in parallel using vector instructions. The instruction set of the target
processor determines which vector instructions are available and their vector
widths. This restricts the types of loops that can be vectorized.
The vectorizer automatically determines if the loop is safe and profitable to
vectorize. A vector instruction cost model is used to select the vector width.

Interleaving multiple loop iterations allows modern processors to further
improve instruction-level parallelism (ILP) using advanced hardware features,
such as multiple execution units and out-of-order execution. The vectorizer
uses a cost model that depends on the register pressure and generated code
size to select the interleaving count.

Vectorization is enabled by ``vectorize(enable)`` and interleaving is enabled
by ``interleave(enable)``. This is useful when compiling with ``-Os`` to
manually enable vectorization or interleaving.

.. code-block:: c++

  #pragma clang loop vectorize(enable)
  #pragma clang loop interleave(enable)
  for(...) {
    ...
  }

The vector width is specified by ``vectorize_width(_value_)`` and the
interleave count is specified by ``interleave_count(_value_)``, where
_value_ is a positive integer. This is useful for specifying the optimal
width/count of the set of target architectures supported by your application.

.. code-block:: c++

  #pragma clang loop vectorize_width(2)
  #pragma clang loop interleave_count(2)
  for(...) {
    ...
  }

Specifying a width/count of 1 disables the optimization, and is equivalent to
``vectorize(disable)`` or ``interleave(disable)``.

Loop Unrolling
--------------

Unrolling a loop reduces the loop control overhead and exposes more
opportunities for ILP. Loops can be fully or partially unrolled. Full unrolling
eliminates the loop and replaces it with an enumerated sequence of loop
iterations. Full unrolling is only possible if the loop trip count is known at
compile time. Partial unrolling replicates the loop body within the loop and
reduces the trip count.

If ``unroll(enable)`` is specified the unroller will attempt to fully unroll
the loop if the trip count is known at compile time. If the fully unrolled
code size is greater than an internal limit the loop will be partially
unrolled up to this limit. If the trip count is not known at compile time the
loop will be partially unrolled with a heuristically chosen unroll factor.

.. code-block:: c++

  #pragma clang loop unroll(enable)
  for(...) {
    ...
  }

If ``unroll(full)`` is specified the unroller will attempt to fully unroll the
loop if the trip count is known at compile time identically to
``unroll(enable)``. However, with ``unroll(full)`` the loop will not be
unrolled if the loop count is not known at compile time.

.. code-block:: c++

  #pragma clang loop unroll(full)
  for(...) {
    ...
  }

The unroll count can be specified explicitly with ``unroll_count(_value_)``
where _value_ is a positive integer. If this value is greater than the trip
count the loop will be fully unrolled. Otherwise the loop is partially
unrolled subject to the same code size limit as with ``unroll(enable)``.

.. code-block:: c++

  #pragma clang loop unroll_count(8)
  for(...) {
    ...
  }

Unrolling of a loop can be prevented by specifying ``unroll(disable)``.

Loop Distribution
-----------------

Loop Distribution allows splitting a loop into multiple loops. This is
beneficial for example when the entire loop cannot be vectorized but some of
the resulting loops can.

If ``distribute(enable)`` is specified and the loop has memory dependencies
that inhibit vectorization, the compiler will attempt to isolate the offending
operations into a new loop. This optimization is not enabled by default, only
loops marked with the pragma are considered.
.. code-block:: c++

  #pragma clang loop distribute(enable)
  for (i = 0; i < N; ++i) {
    S1: A[i + 1] = A[i] + B[i];
    S2: C[i] = D[i] * E[i];
  }

This loop will be split into two loops between statements S1 and S2. The
second loop containing S2 will be vectorized.

Loop Distribution is currently not enabled by default in the optimizer because
it can hurt performance in some cases. For example, instruction-level
parallelism could be reduced by sequentializing the execution of the
statements S1 and S2 above.

If Loop Distribution is turned on globally with
``-mllvm -enable-loop-distribution``, specifying ``distribute(disable)`` can
be used to disable it on a per-loop basis.

Additional Information
----------------------

For convenience, multiple loop hints can be specified on a single line.

.. code-block:: c++

  #pragma clang loop vectorize_width(4) interleave_count(8)
  for(...) {
    ...
  }

If an optimization cannot be applied, any hints that apply to it will be
ignored. For example, the hint ``vectorize_width(4)`` is ignored if the loop
is not proven safe to vectorize. To identify and diagnose optimization issues
use the `-Rpass`, `-Rpass-missed`, and `-Rpass-analysis` command line options.
See the user guide for details.

Index: vendor/clang/dist/include/clang/Basic/Builtins.def
===================================================================
--- vendor/clang/dist/include/clang/Basic/Builtins.def	(revision 312705)
+++ vendor/clang/dist/include/clang/Basic/Builtins.def	(revision 312706)
@@ -1,1410 +1,1411 @@
//===--- Builtins.def - Builtin function info database ----------*- C++ -*-===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file defines the standard builtin function database. Users of this file
// must define the BUILTIN macro to make use of this information.
//
//===----------------------------------------------------------------------===//

// FIXME: This should really be a .td file, but that requires modifying tblgen.
// Perhaps tblgen should have plugins.

// The first value provided to the macro specifies the function name of the
// builtin, and results in a clang::builtin::BIXX enum value for XX.

// The second value provided to the macro specifies the type of the function
// (result value, then each argument) as follows:
//  v -> void
//  b -> boolean
//  c -> char
//  s -> short
//  i -> int
//  h -> half
//  f -> float
//  d -> double
//  z -> size_t
//  w -> wchar_t
//  F -> constant CFString
//  G -> id
//  H -> SEL
//  M -> struct objc_super
//  a -> __builtin_va_list
//  A -> "reference" to __builtin_va_list
//  V -> Vector, followed by the number of elements and the base type.
//  E -> ext_vector, followed by the number of elements and the base type.
//  X -> _Complex, followed by the base type.
//  Y -> ptrdiff_t
//  P -> FILE
//  J -> jmp_buf
//  SJ -> sigjmp_buf
//  K -> ucontext_t
//  p -> pid_t
//  . -> "...". This may only occur at the end of the function list.
//
// Types may be prefixed with the following modifiers:
//  L   -> long (e.g. Li for 'long int')
//  LL  -> long long
//  LLL -> __int128_t (e.g. LLLi)
//  W   -> int64_t
//  S   -> signed
//  U   -> unsigned
//  I   -> Required to constant fold to an integer constant expression.
//
// Types may be postfixed with the following modifiers:
// * -> pointer (optionally followed by an address space number, if no address
//      space is specified then any address space will be accepted)
// & -> reference (optionally followed by an address space number)
// C -> const
// D -> volatile

// The third value provided to the macro specifies information about attributes
// of the function. These must be kept in sync with the predicates in the
// Builtin::Context class. Currently we have:
//  n -> nothrow
//  r -> noreturn
//  U -> pure
//  c -> const
//  t -> signature is meaningless, use custom typechecking
//  F -> this is a libc/libm function with a '__builtin_' prefix added.
//  f -> this is a libc/libm function without the '__builtin_' prefix. It can
//       be followed by ':headername:' to state which header this function
//       comes from.
//  h -> this function requires a specific header or an explicit declaration.
//  i -> this is a runtime library implemented function without the
//       '__builtin_' prefix. It will be implemented in compiler-rt or libgcc.
//  p:N: -> this is a printf-like function whose Nth argument is the format
//          string.
//  P:N: -> similar to the p:N: attribute, but the function is like vprintf
//          in that it accepts its arguments as a va_list rather than
//          through an ellipsis
//  s:N: -> this is a scanf-like function whose Nth argument is the format
//          string.
//  S:N: -> similar to the s:N: attribute, but the function is like vscanf
//          in that it accepts its arguments as a va_list rather than
//          through an ellipsis
//  e -> const, but only when -fmath-errno=0
//  j -> returns_twice (like setjmp)
//  u -> arguments are not evaluated for their side-effects
// FIXME: gcc has nonnull

#if defined(BUILTIN) && !defined(LIBBUILTIN)
#  define LIBBUILTIN(ID, TYPE, ATTRS, HEADER, BUILTIN_LANG) BUILTIN(ID, TYPE, ATTRS)
#endif

#if defined(BUILTIN) && !defined(LANGBUILTIN)
#  define LANGBUILTIN(ID, TYPE, ATTRS, BUILTIN_LANG) BUILTIN(ID, TYPE, ATTRS)
#endif

// Standard libc/libm functions:
BUILTIN(__builtin_atan2 , "ddd"  , "Fnc")
BUILTIN(__builtin_atan2f, "fff"  , "Fnc")
BUILTIN(__builtin_atan2l, "LdLdLd", "Fnc")
BUILTIN(__builtin_abs  , "ii"  , "ncF")
BUILTIN(__builtin_copysign, "ddd", "ncF")
BUILTIN(__builtin_copysignf, "fff", "ncF")
BUILTIN(__builtin_copysignl, "LdLdLd", "ncF")
BUILTIN(__builtin_fabs , "dd"  , "ncF")
BUILTIN(__builtin_fabsf, "ff"  , "ncF")
BUILTIN(__builtin_fabsl, "LdLd", "ncF")
BUILTIN(__builtin_fmod , "ddd"  , "Fnc")
BUILTIN(__builtin_fmodf, "fff"  , "Fnc")
BUILTIN(__builtin_fmodl, "LdLdLd", "Fnc")
BUILTIN(__builtin_frexp , "ddi*"  , "Fn")
BUILTIN(__builtin_frexpf, "ffi*"  , "Fn")
BUILTIN(__builtin_frexpl, "LdLdi*", "Fn")
BUILTIN(__builtin_huge_val, "d", "nc")
BUILTIN(__builtin_huge_valf, "f", "nc")
BUILTIN(__builtin_huge_vall, "Ld", "nc")
BUILTIN(__builtin_inf  , "d"   , "nc")
BUILTIN(__builtin_inff , "f"   , "nc")
BUILTIN(__builtin_infl , "Ld"  , "nc")
BUILTIN(__builtin_labs , "LiLi"  , "Fnc")
BUILTIN(__builtin_llabs, "LLiLLi", "Fnc")
BUILTIN(__builtin_ldexp , "ddi"  , "Fnc")
BUILTIN(__builtin_ldexpf, "ffi"  , "Fnc")
BUILTIN(__builtin_ldexpl, "LdLdi", "Fnc")
BUILTIN(__builtin_modf , "ddd*"  , "Fn")
BUILTIN(__builtin_modff, "fff*"  , "Fn")
BUILTIN(__builtin_modfl, "LdLdLd*", "Fn")
BUILTIN(__builtin_nan, "dcC*" , "ncF")
BUILTIN(__builtin_nanf, "fcC*" , "ncF")
BUILTIN(__builtin_nanl, "LdcC*", "ncF")
BUILTIN(__builtin_nans, "dcC*" , "ncF")
BUILTIN(__builtin_nansf, "fcC*" , "ncF")
BUILTIN(__builtin_nansl, "LdcC*", "ncF")
BUILTIN(__builtin_powi , "ddi"  , "Fnc")
BUILTIN(__builtin_powif, "ffi" ,
"Fnc") BUILTIN(__builtin_powil, "LdLdi", "Fnc") BUILTIN(__builtin_pow , "ddd" , "Fnc") BUILTIN(__builtin_powf, "fff" , "Fnc") BUILTIN(__builtin_powl, "LdLdLd", "Fnc") // Standard unary libc/libm functions with double/float/long double variants: BUILTIN(__builtin_acos , "dd" , "Fnc") BUILTIN(__builtin_acosf, "ff" , "Fnc") BUILTIN(__builtin_acosl, "LdLd", "Fnc") BUILTIN(__builtin_acosh , "dd" , "Fnc") BUILTIN(__builtin_acoshf, "ff" , "Fnc") BUILTIN(__builtin_acoshl, "LdLd", "Fnc") BUILTIN(__builtin_asin , "dd" , "Fnc") BUILTIN(__builtin_asinf, "ff" , "Fnc") BUILTIN(__builtin_asinl, "LdLd", "Fnc") BUILTIN(__builtin_asinh , "dd" , "Fnc") BUILTIN(__builtin_asinhf, "ff" , "Fnc") BUILTIN(__builtin_asinhl, "LdLd", "Fnc") BUILTIN(__builtin_atan , "dd" , "Fnc") BUILTIN(__builtin_atanf, "ff" , "Fnc") BUILTIN(__builtin_atanl, "LdLd", "Fnc") BUILTIN(__builtin_atanh , "dd", "Fnc") BUILTIN(__builtin_atanhf, "ff", "Fnc") BUILTIN(__builtin_atanhl, "LdLd", "Fnc") BUILTIN(__builtin_cbrt , "dd", "Fnc") BUILTIN(__builtin_cbrtf, "ff", "Fnc") BUILTIN(__builtin_cbrtl, "LdLd", "Fnc") BUILTIN(__builtin_ceil , "dd" , "Fnc") BUILTIN(__builtin_ceilf, "ff" , "Fnc") BUILTIN(__builtin_ceill, "LdLd", "Fnc") BUILTIN(__builtin_cos , "dd" , "Fnc") BUILTIN(__builtin_cosf, "ff" , "Fnc") BUILTIN(__builtin_cosh , "dd" , "Fnc") BUILTIN(__builtin_coshf, "ff" , "Fnc") BUILTIN(__builtin_coshl, "LdLd", "Fnc") BUILTIN(__builtin_cosl, "LdLd", "Fnc") BUILTIN(__builtin_erf , "dd", "Fnc") BUILTIN(__builtin_erff, "ff", "Fnc") BUILTIN(__builtin_erfl, "LdLd", "Fnc") BUILTIN(__builtin_erfc , "dd", "Fnc") BUILTIN(__builtin_erfcf, "ff", "Fnc") BUILTIN(__builtin_erfcl, "LdLd", "Fnc") BUILTIN(__builtin_exp , "dd" , "Fnc") BUILTIN(__builtin_expf, "ff" , "Fnc") BUILTIN(__builtin_expl, "LdLd", "Fnc") BUILTIN(__builtin_exp2 , "dd" , "Fnc") BUILTIN(__builtin_exp2f, "ff" , "Fnc") BUILTIN(__builtin_exp2l, "LdLd", "Fnc") BUILTIN(__builtin_expm1 , "dd", "Fnc") BUILTIN(__builtin_expm1f, "ff", "Fnc") BUILTIN(__builtin_expm1l, "LdLd", "Fnc") BUILTIN(__builtin_fdim, "ddd", "Fnc") BUILTIN(__builtin_fdimf, "fff", "Fnc") BUILTIN(__builtin_fdiml, "LdLdLd", "Fnc") BUILTIN(__builtin_floor , "dd" , "Fnc") BUILTIN(__builtin_floorf, "ff" , "Fnc") BUILTIN(__builtin_floorl, "LdLd", "Fnc") BUILTIN(__builtin_fma, "dddd", "Fnc") BUILTIN(__builtin_fmaf, "ffff", "Fnc") BUILTIN(__builtin_fmal, "LdLdLdLd", "Fnc") BUILTIN(__builtin_fmax, "ddd", "Fnc") BUILTIN(__builtin_fmaxf, "fff", "Fnc") BUILTIN(__builtin_fmaxl, "LdLdLd", "Fnc") BUILTIN(__builtin_fmin, "ddd", "Fnc") BUILTIN(__builtin_fminf, "fff", "Fnc") BUILTIN(__builtin_fminl, "LdLdLd", "Fnc") BUILTIN(__builtin_hypot , "ddd" , "Fnc") BUILTIN(__builtin_hypotf, "fff" , "Fnc") BUILTIN(__builtin_hypotl, "LdLdLd", "Fnc") BUILTIN(__builtin_ilogb , "id", "Fnc") BUILTIN(__builtin_ilogbf, "if", "Fnc") BUILTIN(__builtin_ilogbl, "iLd", "Fnc") BUILTIN(__builtin_lgamma , "dd", "Fnc") BUILTIN(__builtin_lgammaf, "ff", "Fnc") BUILTIN(__builtin_lgammal, "LdLd", "Fnc") BUILTIN(__builtin_llrint, "LLid", "Fnc") BUILTIN(__builtin_llrintf, "LLif", "Fnc") BUILTIN(__builtin_llrintl, "LLiLd", "Fnc") BUILTIN(__builtin_llround , "LLid", "Fnc") BUILTIN(__builtin_llroundf, "LLif", "Fnc") BUILTIN(__builtin_llroundl, "LLiLd", "Fnc") BUILTIN(__builtin_log , "dd" , "Fnc") BUILTIN(__builtin_log10 , "dd" , "Fnc") BUILTIN(__builtin_log10f, "ff" , "Fnc") BUILTIN(__builtin_log10l, "LdLd", "Fnc") BUILTIN(__builtin_log1p , "dd" , "Fnc") BUILTIN(__builtin_log1pf, "ff" , "Fnc") BUILTIN(__builtin_log1pl, "LdLd", "Fnc") BUILTIN(__builtin_log2, "dd" , "Fnc") 
BUILTIN(__builtin_log2f, "ff" , "Fnc") BUILTIN(__builtin_log2l, "LdLd" , "Fnc") BUILTIN(__builtin_logb , "dd", "Fnc") BUILTIN(__builtin_logbf, "ff", "Fnc") BUILTIN(__builtin_logbl, "LdLd", "Fnc") BUILTIN(__builtin_logf, "ff" , "Fnc") BUILTIN(__builtin_logl, "LdLd", "Fnc") BUILTIN(__builtin_lrint , "Lid", "Fnc") BUILTIN(__builtin_lrintf, "Lif", "Fnc") BUILTIN(__builtin_lrintl, "LiLd", "Fnc") BUILTIN(__builtin_lround , "Lid", "Fnc") BUILTIN(__builtin_lroundf, "Lif", "Fnc") BUILTIN(__builtin_lroundl, "LiLd", "Fnc") BUILTIN(__builtin_nearbyint , "dd", "Fnc") BUILTIN(__builtin_nearbyintf, "ff", "Fnc") BUILTIN(__builtin_nearbyintl, "LdLd", "Fnc") BUILTIN(__builtin_nextafter , "ddd", "Fnc") BUILTIN(__builtin_nextafterf, "fff", "Fnc") BUILTIN(__builtin_nextafterl, "LdLdLd", "Fnc") BUILTIN(__builtin_nexttoward , "ddLd", "Fnc") BUILTIN(__builtin_nexttowardf, "ffLd", "Fnc") BUILTIN(__builtin_nexttowardl, "LdLdLd", "Fnc") BUILTIN(__builtin_remainder , "ddd", "Fnc") BUILTIN(__builtin_remainderf, "fff", "Fnc") BUILTIN(__builtin_remainderl, "LdLdLd", "Fnc") BUILTIN(__builtin_remquo , "dddi*", "Fn") BUILTIN(__builtin_remquof, "fffi*", "Fn") BUILTIN(__builtin_remquol, "LdLdLdi*", "Fn") BUILTIN(__builtin_rint , "dd", "Fnc") BUILTIN(__builtin_rintf, "ff", "Fnc") BUILTIN(__builtin_rintl, "LdLd", "Fnc") BUILTIN(__builtin_round, "dd" , "Fnc") BUILTIN(__builtin_roundf, "ff" , "Fnc") BUILTIN(__builtin_roundl, "LdLd" , "Fnc") BUILTIN(__builtin_scalbln , "ddLi", "Fnc") BUILTIN(__builtin_scalblnf, "ffLi", "Fnc") BUILTIN(__builtin_scalblnl, "LdLdLi", "Fnc") BUILTIN(__builtin_scalbn , "ddi", "Fnc") BUILTIN(__builtin_scalbnf, "ffi", "Fnc") BUILTIN(__builtin_scalbnl, "LdLdi", "Fnc") BUILTIN(__builtin_sin , "dd" , "Fnc") BUILTIN(__builtin_sinf, "ff" , "Fnc") BUILTIN(__builtin_sinh , "dd" , "Fnc") BUILTIN(__builtin_sinhf, "ff" , "Fnc") BUILTIN(__builtin_sinhl, "LdLd", "Fnc") BUILTIN(__builtin_sinl, "LdLd", "Fnc") BUILTIN(__builtin_sqrt , "dd" , "Fnc") BUILTIN(__builtin_sqrtf, "ff" , "Fnc") BUILTIN(__builtin_sqrtl, "LdLd", "Fnc") BUILTIN(__builtin_tan , "dd" , "Fnc") BUILTIN(__builtin_tanf, "ff" , "Fnc") BUILTIN(__builtin_tanh , "dd" , "Fnc") BUILTIN(__builtin_tanhf, "ff" , "Fnc") BUILTIN(__builtin_tanhl, "LdLd", "Fnc") BUILTIN(__builtin_tanl, "LdLd", "Fnc") BUILTIN(__builtin_tgamma , "dd", "Fnc") BUILTIN(__builtin_tgammaf, "ff", "Fnc") BUILTIN(__builtin_tgammal, "LdLd", "Fnc") BUILTIN(__builtin_trunc , "dd", "Fnc") BUILTIN(__builtin_truncf, "ff", "Fnc") BUILTIN(__builtin_truncl, "LdLd", "Fnc") // C99 complex builtins BUILTIN(__builtin_cabs, "dXd", "Fnc") BUILTIN(__builtin_cabsf, "fXf", "Fnc") BUILTIN(__builtin_cabsl, "LdXLd", "Fnc") BUILTIN(__builtin_cacos, "XdXd", "Fnc") BUILTIN(__builtin_cacosf, "XfXf", "Fnc") BUILTIN(__builtin_cacosh, "XdXd", "Fnc") BUILTIN(__builtin_cacoshf, "XfXf", "Fnc") BUILTIN(__builtin_cacoshl, "XLdXLd", "Fnc") BUILTIN(__builtin_cacosl, "XLdXLd", "Fnc") BUILTIN(__builtin_carg, "dXd", "Fnc") BUILTIN(__builtin_cargf, "fXf", "Fnc") BUILTIN(__builtin_cargl, "LdXLd", "Fnc") BUILTIN(__builtin_casin, "XdXd", "Fnc") BUILTIN(__builtin_casinf, "XfXf", "Fnc") BUILTIN(__builtin_casinh, "XdXd", "Fnc") BUILTIN(__builtin_casinhf, "XfXf", "Fnc") BUILTIN(__builtin_casinhl, "XLdXLd", "Fnc") BUILTIN(__builtin_casinl, "XLdXLd", "Fnc") BUILTIN(__builtin_catan, "XdXd", "Fnc") BUILTIN(__builtin_catanf, "XfXf", "Fnc") BUILTIN(__builtin_catanh, "XdXd", "Fnc") BUILTIN(__builtin_catanhf, "XfXf", "Fnc") BUILTIN(__builtin_catanhl, "XLdXLd", "Fnc") BUILTIN(__builtin_catanl, "XLdXLd", "Fnc") BUILTIN(__builtin_ccos, "XdXd", 
"Fnc") BUILTIN(__builtin_ccosf, "XfXf", "Fnc") BUILTIN(__builtin_ccosl, "XLdXLd", "Fnc") BUILTIN(__builtin_ccosh, "XdXd", "Fnc") BUILTIN(__builtin_ccoshf, "XfXf", "Fnc") BUILTIN(__builtin_ccoshl, "XLdXLd", "Fnc") BUILTIN(__builtin_cexp, "XdXd", "Fnc") BUILTIN(__builtin_cexpf, "XfXf", "Fnc") BUILTIN(__builtin_cexpl, "XLdXLd", "Fnc") BUILTIN(__builtin_cimag, "dXd", "Fnc") BUILTIN(__builtin_cimagf, "fXf", "Fnc") BUILTIN(__builtin_cimagl, "LdXLd", "Fnc") BUILTIN(__builtin_conj, "XdXd", "Fnc") BUILTIN(__builtin_conjf, "XfXf", "Fnc") BUILTIN(__builtin_conjl, "XLdXLd", "Fnc") BUILTIN(__builtin_clog, "XdXd", "Fnc") BUILTIN(__builtin_clogf, "XfXf", "Fnc") BUILTIN(__builtin_clogl, "XLdXLd", "Fnc") BUILTIN(__builtin_cproj, "XdXd", "Fnc") BUILTIN(__builtin_cprojf, "XfXf", "Fnc") BUILTIN(__builtin_cprojl, "XLdXLd", "Fnc") BUILTIN(__builtin_cpow, "XdXdXd", "Fnc") BUILTIN(__builtin_cpowf, "XfXfXf", "Fnc") BUILTIN(__builtin_cpowl, "XLdXLdXLd", "Fnc") BUILTIN(__builtin_creal, "dXd", "Fnc") BUILTIN(__builtin_crealf, "fXf", "Fnc") BUILTIN(__builtin_creall, "LdXLd", "Fnc") BUILTIN(__builtin_csin, "XdXd", "Fnc") BUILTIN(__builtin_csinf, "XfXf", "Fnc") BUILTIN(__builtin_csinl, "XLdXLd", "Fnc") BUILTIN(__builtin_csinh, "XdXd", "Fnc") BUILTIN(__builtin_csinhf, "XfXf", "Fnc") BUILTIN(__builtin_csinhl, "XLdXLd", "Fnc") BUILTIN(__builtin_csqrt, "XdXd", "Fnc") BUILTIN(__builtin_csqrtf, "XfXf", "Fnc") BUILTIN(__builtin_csqrtl, "XLdXLd", "Fnc") BUILTIN(__builtin_ctan, "XdXd", "Fnc") BUILTIN(__builtin_ctanf, "XfXf", "Fnc") BUILTIN(__builtin_ctanl, "XLdXLd", "Fnc") BUILTIN(__builtin_ctanh, "XdXd", "Fnc") BUILTIN(__builtin_ctanhf, "XfXf", "Fnc") BUILTIN(__builtin_ctanhl, "XLdXLd", "Fnc") // FP Comparisons. BUILTIN(__builtin_isgreater , "i.", "Fnc") BUILTIN(__builtin_isgreaterequal, "i.", "Fnc") BUILTIN(__builtin_isless , "i.", "Fnc") BUILTIN(__builtin_islessequal , "i.", "Fnc") BUILTIN(__builtin_islessgreater , "i.", "Fnc") BUILTIN(__builtin_isunordered , "i.", "Fnc") // Unary FP classification BUILTIN(__builtin_fpclassify, "iiiiii.", "Fnc") BUILTIN(__builtin_isfinite, "i.", "Fnc") BUILTIN(__builtin_isinf, "i.", "Fnc") BUILTIN(__builtin_isinf_sign, "i.", "Fnc") BUILTIN(__builtin_isnan, "i.", "Fnc") BUILTIN(__builtin_isnormal, "i.", "Fnc") // FP signbit builtins BUILTIN(__builtin_signbit, "i.", "Fnc") BUILTIN(__builtin_signbitf, "if", "Fnc") BUILTIN(__builtin_signbitl, "iLd", "Fnc") // Special FP builtins. BUILTIN(__builtin_canonicalize, "dd", "nc") BUILTIN(__builtin_canonicalizef, "ff", "nc") BUILTIN(__builtin_canonicalizel, "LdLd", "nc") // Builtins for arithmetic. BUILTIN(__builtin_clzs , "iUs" , "nc") BUILTIN(__builtin_clz , "iUi" , "nc") BUILTIN(__builtin_clzl , "iULi" , "nc") BUILTIN(__builtin_clzll, "iULLi", "nc") // TODO: int clzimax(uintmax_t) BUILTIN(__builtin_ctzs , "iUs" , "nc") BUILTIN(__builtin_ctz , "iUi" , "nc") BUILTIN(__builtin_ctzl , "iULi" , "nc") BUILTIN(__builtin_ctzll, "iULLi", "nc") // TODO: int ctzimax(uintmax_t) BUILTIN(__builtin_ffs , "ii" , "Fnc") BUILTIN(__builtin_ffsl , "iLi" , "Fnc") BUILTIN(__builtin_ffsll, "iLLi", "Fnc") BUILTIN(__builtin_parity , "iUi" , "nc") BUILTIN(__builtin_parityl , "iULi" , "nc") BUILTIN(__builtin_parityll, "iULLi", "nc") BUILTIN(__builtin_popcount , "iUi" , "nc") BUILTIN(__builtin_popcountl , "iULi" , "nc") BUILTIN(__builtin_popcountll, "iULLi", "nc") // FIXME: These type signatures are not correct for targets with int != 32-bits // or with ULL != 64-bits. 
BUILTIN(__builtin_bswap16, "UsUs", "nc") BUILTIN(__builtin_bswap32, "UiUi", "nc") BUILTIN(__builtin_bswap64, "ULLiULLi", "nc") BUILTIN(__builtin_bitreverse8, "UcUc", "nc") BUILTIN(__builtin_bitreverse16, "UsUs", "nc") BUILTIN(__builtin_bitreverse32, "UiUi", "nc") BUILTIN(__builtin_bitreverse64, "ULLiULLi", "nc") // Random GCC builtins BUILTIN(__builtin_constant_p, "i.", "nctu") BUILTIN(__builtin_classify_type, "i.", "nctu") BUILTIN(__builtin___CFStringMakeConstantString, "FC*cC*", "nc") BUILTIN(__builtin___NSStringMakeConstantString, "FC*cC*", "nc") BUILTIN(__builtin_va_start, "vA.", "nt") BUILTIN(__builtin_va_end, "vA", "n") BUILTIN(__builtin_va_copy, "vAA", "n") BUILTIN(__builtin_stdarg_start, "vA.", "n") BUILTIN(__builtin_assume_aligned, "v*vC*z.", "nc") BUILTIN(__builtin_bcmp, "iv*v*z", "Fn") BUILTIN(__builtin_bcopy, "vv*v*z", "n") BUILTIN(__builtin_bzero, "vv*z", "nF") BUILTIN(__builtin_fprintf, "iP*cC*.", "Fp:1:") BUILTIN(__builtin_memchr, "v*vC*iz", "nF") BUILTIN(__builtin_memcmp, "ivC*vC*z", "nF") BUILTIN(__builtin_memcpy, "v*v*vC*z", "nF") BUILTIN(__builtin_memmove, "v*v*vC*z", "nF") BUILTIN(__builtin_mempcpy, "v*v*vC*z", "nF") BUILTIN(__builtin_memset, "v*v*iz", "nF") BUILTIN(__builtin_printf, "icC*.", "Fp:0:") BUILTIN(__builtin_stpcpy, "c*c*cC*", "nF") BUILTIN(__builtin_stpncpy, "c*c*cC*z", "nF") BUILTIN(__builtin_strcasecmp, "icC*cC*", "nF") BUILTIN(__builtin_strcat, "c*c*cC*", "nF") BUILTIN(__builtin_strchr, "c*cC*i", "nF") BUILTIN(__builtin_strcmp, "icC*cC*", "nF") BUILTIN(__builtin_strcpy, "c*c*cC*", "nF") BUILTIN(__builtin_strcspn, "zcC*cC*", "nF") BUILTIN(__builtin_strdup, "c*cC*", "nF") BUILTIN(__builtin_strlen, "zcC*", "nF") BUILTIN(__builtin_strncasecmp, "icC*cC*z", "nF") BUILTIN(__builtin_strncat, "c*c*cC*z", "nF") BUILTIN(__builtin_strncmp, "icC*cC*z", "nF") BUILTIN(__builtin_strncpy, "c*c*cC*z", "nF") BUILTIN(__builtin_strndup, "c*cC*z", "nF") BUILTIN(__builtin_strpbrk, "c*cC*cC*", "nF") BUILTIN(__builtin_strrchr, "c*cC*i", "nF") BUILTIN(__builtin_strspn, "zcC*cC*", "nF") BUILTIN(__builtin_strstr, "c*cC*cC*", "nF") BUILTIN(__builtin_wcschr, "w*wC*w", "nF") BUILTIN(__builtin_wcscmp, "iwC*wC*", "nF") BUILTIN(__builtin_wcslen, "zwC*", "nF") BUILTIN(__builtin_wcsncmp, "iwC*wC*z", "nF") BUILTIN(__builtin_wmemchr, "w*wC*wz", "nF") BUILTIN(__builtin_wmemcmp, "iwC*wC*z", "nF") BUILTIN(__builtin_return_address, "v*IUi", "n") BUILTIN(__builtin_extract_return_addr, "v*v*", "n") BUILTIN(__builtin_frame_address, "v*IUi", "n") BUILTIN(__builtin___clear_cache, "vc*c*", "n") BUILTIN(__builtin_flt_rounds, "i", "nc") BUILTIN(__builtin_setjmp, "iv**", "j") BUILTIN(__builtin_longjmp, "vv**i", "r") BUILTIN(__builtin_unwind_init, "v", "") BUILTIN(__builtin_eh_return_data_regno, "iIi", "nc") BUILTIN(__builtin_snprintf, "ic*zcC*.", "nFp:2:") BUILTIN(__builtin_vsprintf, "ic*cC*a", "nFP:1:") BUILTIN(__builtin_vsnprintf, "ic*zcC*a", "nFP:2:") BUILTIN(__builtin_thread_pointer, "v*", "nc") // GCC exception builtins BUILTIN(__builtin_eh_return, "vzv*", "r") // FIXME: Takes intptr_t, not size_t! 
BUILTIN(__builtin_frob_return_addr, "v*v*", "n") BUILTIN(__builtin_dwarf_cfa, "v*", "n") BUILTIN(__builtin_init_dwarf_reg_size_table, "vv*", "n") BUILTIN(__builtin_dwarf_sp_column, "Ui", "n") BUILTIN(__builtin_extend_pointer, "ULLiv*", "n") // _Unwind_Word == uint64_t // GCC Object size checking builtins BUILTIN(__builtin_object_size, "zvC*i", "nu") BUILTIN(__builtin___memcpy_chk, "v*v*vC*zz", "nF") BUILTIN(__builtin___memccpy_chk, "v*v*vC*izz", "nF") BUILTIN(__builtin___memmove_chk, "v*v*vC*zz", "nF") BUILTIN(__builtin___mempcpy_chk, "v*v*vC*zz", "nF") BUILTIN(__builtin___memset_chk, "v*v*izz", "nF") BUILTIN(__builtin___stpcpy_chk, "c*c*cC*z", "nF") BUILTIN(__builtin___strcat_chk, "c*c*cC*z", "nF") BUILTIN(__builtin___strcpy_chk, "c*c*cC*z", "nF") BUILTIN(__builtin___strlcat_chk, "zc*cC*zz", "nF") BUILTIN(__builtin___strlcpy_chk, "zc*cC*zz", "nF") BUILTIN(__builtin___strncat_chk, "c*c*cC*zz", "nF") BUILTIN(__builtin___strncpy_chk, "c*c*cC*zz", "nF") BUILTIN(__builtin___stpncpy_chk, "c*c*cC*zz", "nF") BUILTIN(__builtin___snprintf_chk, "ic*zizcC*.", "Fp:4:") BUILTIN(__builtin___sprintf_chk, "ic*izcC*.", "Fp:3:") BUILTIN(__builtin___vsnprintf_chk, "ic*zizcC*a", "FP:4:") BUILTIN(__builtin___vsprintf_chk, "ic*izcC*a", "FP:3:") BUILTIN(__builtin___fprintf_chk, "iP*icC*.", "Fp:2:") BUILTIN(__builtin___printf_chk, "iicC*.", "Fp:1:") BUILTIN(__builtin___vfprintf_chk, "iP*icC*a", "FP:2:") BUILTIN(__builtin___vprintf_chk, "iicC*a", "FP:1:") BUILTIN(__builtin_unpredictable, "LiLi" , "nc") BUILTIN(__builtin_expect, "LiLiLi" , "nc") BUILTIN(__builtin_prefetch, "vvC*.", "nc") BUILTIN(__builtin_readcyclecounter, "ULLi", "n") BUILTIN(__builtin_trap, "v", "nr") BUILTIN(__builtin_debugtrap, "v", "n") BUILTIN(__builtin_unreachable, "v", "nr") BUILTIN(__builtin_shufflevector, "v." , "nc") BUILTIN(__builtin_convertvector, "v." , "nct") BUILTIN(__builtin_alloca, "v*z" , "Fn") BUILTIN(__builtin_alloca_with_align, "v*zIz", "Fn") BUILTIN(__builtin_call_with_static_chain, "v.", "nt") // "Overloaded" Atomic operator builtins. These are overloaded to support data // types of i8, i16, i32, i64, and i128. The front-end sees calls to the // non-suffixed version of these (which has a bogus type) and transforms them to // the right overloaded version in Sema (plus casts). // FIXME: These assume that char -> i8, short -> i16, int -> i32, // long long -> i64. 
BUILTIN(__sync_fetch_and_add, "v.", "t") BUILTIN(__sync_fetch_and_add_1, "ccD*c.", "nt") BUILTIN(__sync_fetch_and_add_2, "ssD*s.", "nt") BUILTIN(__sync_fetch_and_add_4, "iiD*i.", "nt") BUILTIN(__sync_fetch_and_add_8, "LLiLLiD*LLi.", "nt") BUILTIN(__sync_fetch_and_add_16, "LLLiLLLiD*LLLi.", "nt") BUILTIN(__sync_fetch_and_sub, "v.", "t") BUILTIN(__sync_fetch_and_sub_1, "ccD*c.", "nt") BUILTIN(__sync_fetch_and_sub_2, "ssD*s.", "nt") BUILTIN(__sync_fetch_and_sub_4, "iiD*i.", "nt") BUILTIN(__sync_fetch_and_sub_8, "LLiLLiD*LLi.", "nt") BUILTIN(__sync_fetch_and_sub_16, "LLLiLLLiD*LLLi.", "nt") BUILTIN(__sync_fetch_and_or, "v.", "t") BUILTIN(__sync_fetch_and_or_1, "ccD*c.", "nt") BUILTIN(__sync_fetch_and_or_2, "ssD*s.", "nt") BUILTIN(__sync_fetch_and_or_4, "iiD*i.", "nt") BUILTIN(__sync_fetch_and_or_8, "LLiLLiD*LLi.", "nt") BUILTIN(__sync_fetch_and_or_16, "LLLiLLLiD*LLLi.", "nt") BUILTIN(__sync_fetch_and_and, "v.", "t") BUILTIN(__sync_fetch_and_and_1, "ccD*c.", "tn") BUILTIN(__sync_fetch_and_and_2, "ssD*s.", "tn") BUILTIN(__sync_fetch_and_and_4, "iiD*i.", "tn") BUILTIN(__sync_fetch_and_and_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_fetch_and_and_16, "LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_fetch_and_xor, "v.", "t") BUILTIN(__sync_fetch_and_xor_1, "ccD*c.", "tn") BUILTIN(__sync_fetch_and_xor_2, "ssD*s.", "tn") BUILTIN(__sync_fetch_and_xor_4, "iiD*i.", "tn") BUILTIN(__sync_fetch_and_xor_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_fetch_and_xor_16, "LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_fetch_and_nand, "v.", "t") BUILTIN(__sync_fetch_and_nand_1, "ccD*c.", "tn") BUILTIN(__sync_fetch_and_nand_2, "ssD*s.", "tn") BUILTIN(__sync_fetch_and_nand_4, "iiD*i.", "tn") BUILTIN(__sync_fetch_and_nand_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_fetch_and_nand_16, "LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_add_and_fetch, "v.", "t") BUILTIN(__sync_add_and_fetch_1, "ccD*c.", "tn") BUILTIN(__sync_add_and_fetch_2, "ssD*s.", "tn") BUILTIN(__sync_add_and_fetch_4, "iiD*i.", "tn") BUILTIN(__sync_add_and_fetch_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_add_and_fetch_16, "LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_sub_and_fetch, "v.", "t") BUILTIN(__sync_sub_and_fetch_1, "ccD*c.", "tn") BUILTIN(__sync_sub_and_fetch_2, "ssD*s.", "tn") BUILTIN(__sync_sub_and_fetch_4, "iiD*i.", "tn") BUILTIN(__sync_sub_and_fetch_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_sub_and_fetch_16, "LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_or_and_fetch, "v.", "t") BUILTIN(__sync_or_and_fetch_1, "ccD*c.", "tn") BUILTIN(__sync_or_and_fetch_2, "ssD*s.", "tn") BUILTIN(__sync_or_and_fetch_4, "iiD*i.", "tn") BUILTIN(__sync_or_and_fetch_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_or_and_fetch_16, "LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_and_and_fetch, "v.", "t") BUILTIN(__sync_and_and_fetch_1, "ccD*c.", "tn") BUILTIN(__sync_and_and_fetch_2, "ssD*s.", "tn") BUILTIN(__sync_and_and_fetch_4, "iiD*i.", "tn") BUILTIN(__sync_and_and_fetch_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_and_and_fetch_16, "LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_xor_and_fetch, "v.", "t") BUILTIN(__sync_xor_and_fetch_1, "ccD*c.", "tn") BUILTIN(__sync_xor_and_fetch_2, "ssD*s.", "tn") BUILTIN(__sync_xor_and_fetch_4, "iiD*i.", "tn") BUILTIN(__sync_xor_and_fetch_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_xor_and_fetch_16, "LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_nand_and_fetch, "v.", "t") BUILTIN(__sync_nand_and_fetch_1, "ccD*c.", "tn") BUILTIN(__sync_nand_and_fetch_2, "ssD*s.", "tn") BUILTIN(__sync_nand_and_fetch_4, "iiD*i.", "tn") BUILTIN(__sync_nand_and_fetch_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_nand_and_fetch_16, 
"LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_bool_compare_and_swap, "v.", "t") BUILTIN(__sync_bool_compare_and_swap_1, "bcD*cc.", "tn") BUILTIN(__sync_bool_compare_and_swap_2, "bsD*ss.", "tn") BUILTIN(__sync_bool_compare_and_swap_4, "biD*ii.", "tn") BUILTIN(__sync_bool_compare_and_swap_8, "bLLiD*LLiLLi.", "tn") BUILTIN(__sync_bool_compare_and_swap_16, "bLLLiD*LLLiLLLi.", "tn") BUILTIN(__sync_val_compare_and_swap, "v.", "t") BUILTIN(__sync_val_compare_and_swap_1, "ccD*cc.", "tn") BUILTIN(__sync_val_compare_and_swap_2, "ssD*ss.", "tn") BUILTIN(__sync_val_compare_and_swap_4, "iiD*ii.", "tn") BUILTIN(__sync_val_compare_and_swap_8, "LLiLLiD*LLiLLi.", "tn") BUILTIN(__sync_val_compare_and_swap_16, "LLLiLLLiD*LLLiLLLi.", "tn") BUILTIN(__sync_lock_test_and_set, "v.", "t") BUILTIN(__sync_lock_test_and_set_1, "ccD*c.", "tn") BUILTIN(__sync_lock_test_and_set_2, "ssD*s.", "tn") BUILTIN(__sync_lock_test_and_set_4, "iiD*i.", "tn") BUILTIN(__sync_lock_test_and_set_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_lock_test_and_set_16, "LLLiLLLiD*LLLi.", "tn") BUILTIN(__sync_lock_release, "v.", "t") BUILTIN(__sync_lock_release_1, "vcD*.", "tn") BUILTIN(__sync_lock_release_2, "vsD*.", "tn") BUILTIN(__sync_lock_release_4, "viD*.", "tn") BUILTIN(__sync_lock_release_8, "vLLiD*.", "tn") BUILTIN(__sync_lock_release_16, "vLLLiD*.", "tn") BUILTIN(__sync_swap, "v.", "t") BUILTIN(__sync_swap_1, "ccD*c.", "tn") BUILTIN(__sync_swap_2, "ssD*s.", "tn") BUILTIN(__sync_swap_4, "iiD*i.", "tn") BUILTIN(__sync_swap_8, "LLiLLiD*LLi.", "tn") BUILTIN(__sync_swap_16, "LLLiLLLiD*LLLi.", "tn") // Some of our atomics builtins are handled by AtomicExpr rather than // as normal builtin CallExprs. This macro is used for such builtins. #ifndef ATOMIC_BUILTIN #define ATOMIC_BUILTIN(ID, TYPE, ATTRS) BUILTIN(ID, TYPE, ATTRS) #endif // C11 _Atomic operations for . ATOMIC_BUILTIN(__c11_atomic_init, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_load, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_store, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_exchange, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_compare_exchange_strong, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_compare_exchange_weak, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_fetch_add, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_fetch_sub, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_fetch_and, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_fetch_or, "v.", "t") ATOMIC_BUILTIN(__c11_atomic_fetch_xor, "v.", "t") BUILTIN(__c11_atomic_thread_fence, "vi", "n") BUILTIN(__c11_atomic_signal_fence, "vi", "n") BUILTIN(__c11_atomic_is_lock_free, "iz", "n") // GNU atomic builtins. 
ATOMIC_BUILTIN(__atomic_load, "v.", "t") ATOMIC_BUILTIN(__atomic_load_n, "v.", "t") ATOMIC_BUILTIN(__atomic_store, "v.", "t") ATOMIC_BUILTIN(__atomic_store_n, "v.", "t") ATOMIC_BUILTIN(__atomic_exchange, "v.", "t") ATOMIC_BUILTIN(__atomic_exchange_n, "v.", "t") ATOMIC_BUILTIN(__atomic_compare_exchange, "v.", "t") ATOMIC_BUILTIN(__atomic_compare_exchange_n, "v.", "t") ATOMIC_BUILTIN(__atomic_fetch_add, "v.", "t") ATOMIC_BUILTIN(__atomic_fetch_sub, "v.", "t") ATOMIC_BUILTIN(__atomic_fetch_and, "v.", "t") ATOMIC_BUILTIN(__atomic_fetch_or, "v.", "t") ATOMIC_BUILTIN(__atomic_fetch_xor, "v.", "t") ATOMIC_BUILTIN(__atomic_fetch_nand, "v.", "t") ATOMIC_BUILTIN(__atomic_add_fetch, "v.", "t") ATOMIC_BUILTIN(__atomic_sub_fetch, "v.", "t") ATOMIC_BUILTIN(__atomic_and_fetch, "v.", "t") ATOMIC_BUILTIN(__atomic_or_fetch, "v.", "t") ATOMIC_BUILTIN(__atomic_xor_fetch, "v.", "t") ATOMIC_BUILTIN(__atomic_nand_fetch, "v.", "t") BUILTIN(__atomic_test_and_set, "bvD*i", "n") BUILTIN(__atomic_clear, "vvD*i", "n") BUILTIN(__atomic_thread_fence, "vi", "n") BUILTIN(__atomic_signal_fence, "vi", "n") BUILTIN(__atomic_always_lock_free, "izvCD*", "n") BUILTIN(__atomic_is_lock_free, "izvCD*", "n") #undef ATOMIC_BUILTIN // Non-overloaded atomic builtins. BUILTIN(__sync_synchronize, "v", "n") // GCC does not support these, they are a Clang extension. BUILTIN(__sync_fetch_and_min, "iiD*i", "n") BUILTIN(__sync_fetch_and_max, "iiD*i", "n") BUILTIN(__sync_fetch_and_umin, "UiUiD*Ui", "n") BUILTIN(__sync_fetch_and_umax, "UiUiD*Ui", "n") // Random libc builtins. BUILTIN(__builtin_abort, "v", "Fnr") BUILTIN(__builtin_index, "c*cC*i", "Fn") BUILTIN(__builtin_rindex, "c*cC*i", "Fn") // Microsoft builtins. These are only active with -fms-extensions. LANGBUILTIN(_alloca, "v*z", "n", ALL_MS_LANGUAGES) LANGBUILTIN(__assume, "vb", "n", ALL_MS_LANGUAGES) LIBBUILTIN(_byteswap_ushort, "UsUs", "fnc", "stdlib.h", ALL_MS_LANGUAGES) LIBBUILTIN(_byteswap_ulong, "ULiULi", "fnc", "stdlib.h", ALL_MS_LANGUAGES) LIBBUILTIN(_byteswap_uint64, "ULLiULLi", "fnc", "stdlib.h", ALL_MS_LANGUAGES) LANGBUILTIN(__debugbreak, "v", "n", ALL_MS_LANGUAGES) LANGBUILTIN(__exception_code, "ULi", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_exception_code, "ULi", "n", ALL_MS_LANGUAGES) LANGBUILTIN(__exception_info, "v*", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_exception_info, "v*", "n", ALL_MS_LANGUAGES) LANGBUILTIN(__abnormal_termination, "i", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_abnormal_termination, "i", "n", ALL_MS_LANGUAGES) LANGBUILTIN(__GetExceptionInfo, "v*.", "ntu", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedAnd8, "ccD*c", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedAnd16, "ssD*s", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedAnd, "LiLiD*Li", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedCompareExchange8, "ccD*cc", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedCompareExchange16, "ssD*ss", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedCompareExchange, "LiLiD*LiLi", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedCompareExchange64, "LLiLLiD*LLiLLi", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedCompareExchangePointer, "v*v*D*v*v*", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedDecrement16, "ssD*", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedDecrement, "LiLiD*", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedExchange, "LiLiD*Li", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedExchange8, "ccD*c", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedExchange16, "ssD*s", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedExchangeAdd8, "ccD*c", "n", ALL_MS_LANGUAGES) 
LANGBUILTIN(_InterlockedExchangeAdd16, "ssD*s", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedExchangeAdd, "LiLiD*Li", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedExchangePointer, "v*v*D*v*", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedExchangeSub8, "ccD*c", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedExchangeSub16, "ssD*s", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedExchangeSub, "LiLiD*Li", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedIncrement16, "ssD*", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedIncrement, "LiLiD*", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedOr8, "ccD*c", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedOr16, "ssD*s", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedOr, "LiLiD*Li", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedXor8, "ccD*c", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedXor16, "ssD*s", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_InterlockedXor, "LiLiD*Li", "n", ALL_MS_LANGUAGES) LANGBUILTIN(__noop, "i.", "n", ALL_MS_LANGUAGES) LANGBUILTIN(__popcnt16, "UsUs", "nc", ALL_MS_LANGUAGES) LANGBUILTIN(__popcnt, "UiUi", "nc", ALL_MS_LANGUAGES) LANGBUILTIN(__popcnt64, "ULLiULLi", "nc", ALL_MS_LANGUAGES) LANGBUILTIN(__readfsdword, "ULiULi", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_ReturnAddress, "v*", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_rotl8, "UcUcUc", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_rotl16, "UsUsUc", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_rotl, "UiUii", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_lrotl, "ULiULii", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_rotl64, "ULLiULLii", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_rotr8, "UcUcUc", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_rotr16, "UsUsUc", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_rotr, "UiUii", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_lrotr, "ULiULii", "n", ALL_MS_LANGUAGES) LANGBUILTIN(_rotr64, "ULLiULLii", "n", ALL_MS_LANGUAGES) LANGBUILTIN(__va_start, "vc**.", "nt", ALL_MS_LANGUAGES) // Microsoft library builtins. 
LIBBUILTIN(_setjmpex, "iJ", "fj", "setjmpex.h", ALL_MS_LANGUAGES) // C99 library functions // C99 stdlib.h LIBBUILTIN(abort, "v", "fr", "stdlib.h", ALL_LANGUAGES) LIBBUILTIN(calloc, "v*zz", "f", "stdlib.h", ALL_LANGUAGES) LIBBUILTIN(exit, "vi", "fr", "stdlib.h", ALL_LANGUAGES) LIBBUILTIN(_Exit, "vi", "fr", "stdlib.h", ALL_LANGUAGES) LIBBUILTIN(malloc, "v*z", "f", "stdlib.h", ALL_LANGUAGES) LIBBUILTIN(realloc, "v*v*z", "f", "stdlib.h", ALL_LANGUAGES) // C99 string.h LIBBUILTIN(memcpy, "v*v*vC*z", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(memcmp, "ivC*vC*z", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(memmove, "v*v*vC*z", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strcpy, "c*c*cC*", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strncpy, "c*c*cC*z", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strcmp, "icC*cC*", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strncmp, "icC*cC*z", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strcat, "c*c*cC*", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strncat, "c*c*cC*z", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strxfrm, "zc*cC*z", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(memchr, "v*vC*iz", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strchr, "c*cC*i", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strcspn, "zcC*cC*", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strpbrk, "c*cC*cC*", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strrchr, "c*cC*i", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strspn, "zcC*cC*", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strstr, "c*cC*cC*", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strtok, "c*c*cC*", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(memset, "v*v*iz", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strerror, "c*i", "f", "string.h", ALL_LANGUAGES) LIBBUILTIN(strlen, "zcC*", "f", "string.h", ALL_LANGUAGES) // C99 stdio.h LIBBUILTIN(printf, "icC*.", "fp:0:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(fprintf, "iP*cC*.", "fp:1:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(snprintf, "ic*zcC*.", "fp:2:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(sprintf, "ic*cC*.", "fp:1:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(vprintf, "icC*a", "fP:0:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(vfprintf, "iP*cC*a", "fP:1:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(vsnprintf, "ic*zcC*a", "fP:2:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(vsprintf, "ic*cC*a", "fP:1:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(scanf, "icC*R.", "fs:0:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(fscanf, "iP*RcC*R.", "fs:1:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(sscanf, "icC*RcC*R.", "fs:1:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(vscanf, "icC*Ra", "fS:0:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(vfscanf, "iP*RcC*Ra", "fS:1:", "stdio.h", ALL_LANGUAGES) LIBBUILTIN(vsscanf, "icC*RcC*Ra", "fS:1:", "stdio.h", ALL_LANGUAGES) // C99 ctype.h LIBBUILTIN(isalnum, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(isalpha, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(isblank, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(iscntrl, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(isdigit, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(isgraph, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(islower, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(isprint, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(ispunct, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(isspace, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(isupper, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(isxdigit, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(tolower, "ii", "fnU", "ctype.h", ALL_LANGUAGES) LIBBUILTIN(toupper, "ii", "fnU", "ctype.h", 
ALL_LANGUAGES) // C99 wchar.h // FIXME: This list is incomplete. We should cover at least the functions that // take format strings. LIBBUILTIN(wcschr, "w*wC*w", "f", "wchar.h", ALL_LANGUAGES) LIBBUILTIN(wcscmp, "iwC*wC*", "f", "wchar.h", ALL_LANGUAGES) LIBBUILTIN(wcslen, "zwC*", "f", "wchar.h", ALL_LANGUAGES) LIBBUILTIN(wcsncmp, "iwC*wC*z", "f", "wchar.h", ALL_LANGUAGES) LIBBUILTIN(wmemchr, "w*wC*wz", "f", "wchar.h", ALL_LANGUAGES) LIBBUILTIN(wmemcmp, "iwC*wC*z", "f", "wchar.h", ALL_LANGUAGES) // C99 // In some systems setjmp is a macro that expands to _setjmp. We undefine // it here to avoid having two identical LIBBUILTIN entries. #undef setjmp LIBBUILTIN(setjmp, "iJ", "fj", "setjmp.h", ALL_LANGUAGES) LIBBUILTIN(longjmp, "vJi", "fr", "setjmp.h", ALL_LANGUAGES) // Non-C library functions, active in GNU mode only. // Functions with (returns_twice) attribute (marked as "j") are still active in // all languages, because losing this attribute would result in miscompilation // when these functions are used in non-GNU mode. PR16138. LIBBUILTIN(alloca, "v*z", "f", "stdlib.h", ALL_GNU_LANGUAGES) // POSIX string.h LIBBUILTIN(stpcpy, "c*c*cC*", "f", "string.h", ALL_GNU_LANGUAGES) LIBBUILTIN(stpncpy, "c*c*cC*z", "f", "string.h", ALL_GNU_LANGUAGES) LIBBUILTIN(strdup, "c*cC*", "f", "string.h", ALL_GNU_LANGUAGES) LIBBUILTIN(strndup, "c*cC*z", "f", "string.h", ALL_GNU_LANGUAGES) // POSIX strings.h LIBBUILTIN(index, "c*cC*i", "f", "strings.h", ALL_GNU_LANGUAGES) LIBBUILTIN(rindex, "c*cC*i", "f", "strings.h", ALL_GNU_LANGUAGES) LIBBUILTIN(bzero, "vv*z", "f", "strings.h", ALL_GNU_LANGUAGES) // In some systems str[n]casecmp is a macro that expands to _str[n]icmp. // We undefine them here to avoid using the wrong names. #undef strcasecmp #undef strncasecmp LIBBUILTIN(strcasecmp, "icC*cC*", "f", "strings.h", ALL_GNU_LANGUAGES) LIBBUILTIN(strncasecmp, "icC*cC*z", "f", "strings.h", ALL_GNU_LANGUAGES) // POSIX unistd.h LIBBUILTIN(_exit, "vi", "fr", "unistd.h", ALL_GNU_LANGUAGES) LIBBUILTIN(vfork, "p", "fj", "unistd.h", ALL_LANGUAGES) // POSIX setjmp.h LIBBUILTIN(_setjmp, "iJ", "fj", "setjmp.h", ALL_LANGUAGES) LIBBUILTIN(__sigsetjmp, "iSJi", "fj", "setjmp.h", ALL_LANGUAGES) LIBBUILTIN(sigsetjmp, "iSJi", "fj", "setjmp.h", ALL_LANGUAGES) LIBBUILTIN(setjmp_syscall, "iJ", "fj", "setjmp.h", ALL_LANGUAGES) LIBBUILTIN(savectx, "iJ", "fj", "setjmp.h", ALL_LANGUAGES) LIBBUILTIN(qsetjmp, "iJ", "fj", "setjmp.h", ALL_LANGUAGES) LIBBUILTIN(getcontext, "iK*", "fj", "setjmp.h", ALL_LANGUAGES) LIBBUILTIN(_longjmp, "vJi", "fr", "setjmp.h", ALL_GNU_LANGUAGES) LIBBUILTIN(siglongjmp, "vSJi", "fr", "setjmp.h", ALL_GNU_LANGUAGES) // non-standard but very common LIBBUILTIN(strlcpy, "zc*cC*z", "f", "string.h", ALL_GNU_LANGUAGES) LIBBUILTIN(strlcat, "zc*cC*z", "f", "string.h", ALL_GNU_LANGUAGES) // id objc_msgSend(id, SEL, ...) LIBBUILTIN(objc_msgSend, "GGH.", "f", "objc/message.h", OBJC_LANG) // long double objc_msgSend_fpret(id self, SEL op, ...) LIBBUILTIN(objc_msgSend_fpret, "LdGH.", "f", "objc/message.h", OBJC_LANG) // _Complex long double objc_msgSend_fp2ret(id self, SEL op, ...) LIBBUILTIN(objc_msgSend_fp2ret, "XLdGH.", "f", "objc/message.h", OBJC_LANG) // void objc_msgSend_stret (id, SEL, ...) LIBBUILTIN(objc_msgSend_stret, "vGH.", "f", "objc/message.h", OBJC_LANG) // id objc_msgSendSuper(struct objc_super *super, SEL op, ...) LIBBUILTIN(objc_msgSendSuper, "GM*H.", "f", "objc/message.h", OBJC_LANG) // void objc_msgSendSuper_stret(struct objc_super *super, SEL op, ...)
LIBBUILTIN(objc_msgSendSuper_stret, "vM*H.", "f", "objc/message.h", OBJC_LANG) // id objc_getClass(const char *name) LIBBUILTIN(objc_getClass, "GcC*", "f", "objc/runtime.h", OBJC_LANG) // id objc_getMetaClass(const char *name) LIBBUILTIN(objc_getMetaClass, "GcC*", "f", "objc/runtime.h", OBJC_LANG) // void objc_enumerationMutation(id) LIBBUILTIN(objc_enumerationMutation, "vG", "f", "objc/runtime.h", OBJC_LANG) // id objc_read_weak(id *location) LIBBUILTIN(objc_read_weak, "GG*", "f", "objc/objc-auto.h", OBJC_LANG) // id objc_assign_weak(id value, id *location) LIBBUILTIN(objc_assign_weak, "GGG*", "f", "objc/objc-auto.h", OBJC_LANG) // id objc_assign_ivar(id value, id dest, ptrdiff_t offset) LIBBUILTIN(objc_assign_ivar, "GGGY", "f", "objc/objc-auto.h", OBJC_LANG) // id objc_assign_global(id val, id *dest) LIBBUILTIN(objc_assign_global, "GGG*", "f", "objc/objc-auto.h", OBJC_LANG) // id objc_assign_strongCast(id val, id *dest) LIBBUILTIN(objc_assign_strongCast, "GGG*", "f", "objc/objc-auto.h", OBJC_LANG) // id objc_exception_extract(void *localExceptionData) LIBBUILTIN(objc_exception_extract, "Gv*", "f", "objc/objc-exception.h", OBJC_LANG) // void objc_exception_try_enter(void *localExceptionData) LIBBUILTIN(objc_exception_try_enter, "vv*", "f", "objc/objc-exception.h", OBJC_LANG) // void objc_exception_try_exit(void *localExceptionData) LIBBUILTIN(objc_exception_try_exit, "vv*", "f", "objc/objc-exception.h", OBJC_LANG) // int objc_exception_match(Class exceptionClass, id exception) LIBBUILTIN(objc_exception_match, "iGG", "f", "objc/objc-exception.h", OBJC_LANG) // void objc_exception_throw(id exception) LIBBUILTIN(objc_exception_throw, "vG", "f", "objc/objc-exception.h", OBJC_LANG) // int objc_sync_enter(id obj) LIBBUILTIN(objc_sync_enter, "iG", "f", "objc/objc-sync.h", OBJC_LANG) // int objc_sync_exit(id obj) LIBBUILTIN(objc_sync_exit, "iG", "f", "objc/objc-sync.h", OBJC_LANG) BUILTIN(__builtin_objc_memmove_collectable, "v*v*vC*z", "nF") // void NSLog(NSString *fmt, ...) LIBBUILTIN(NSLog, "vG.", "fp:0:", "Foundation/NSObjCRuntime.h", OBJC_LANG) // void NSLogv(NSString *fmt, va_list args) LIBBUILTIN(NSLogv, "vGa", "fP:0:", "Foundation/NSObjCRuntime.h", OBJC_LANG) // Builtin math library functions LIBBUILTIN(atan2, "ddd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(atan2f, "fff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(atan2l, "LdLdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(abs, "ii", "fnc", "stdlib.h", ALL_LANGUAGES) LIBBUILTIN(labs, "LiLi", "fnc", "stdlib.h", ALL_LANGUAGES) LIBBUILTIN(llabs, "LLiLLi", "fnc", "stdlib.h", ALL_LANGUAGES) LIBBUILTIN(copysign, "ddd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(copysignf, "fff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(copysignl, "LdLdLd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fabs, "dd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fabsf, "ff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fabsl, "LdLd", "fnc", "math.h", ALL_LANGUAGES)
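// Aside (editorial note, not part of Builtins.def): the attribute strings in this block are decoded by the legend at the top of the file, which this hunk elides: "f" marks a plain libc/libm function, "n" nothrow, "c" const, and "e" const-only-when-math-errno-is-disabled. // That is why fabs above is "fnc" (it can never set errno, so calls to it can always be folded or CSE'd), while fmod below is "fne" (treating it as const is only safe under -fno-math-errno).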
// Some systems define finitef as an alias of _finitef. #if defined (finitef) #undef finitef #endif LIBBUILTIN(finite, "id", "fnc", "math.h", GNU_LANG) LIBBUILTIN(finitef, "if", "fnc", "math.h", GNU_LANG) LIBBUILTIN(finitel, "iLd", "fnc", "math.h", GNU_LANG) // glibc's math.h generates calls to __finite LIBBUILTIN(__finite, "id", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(__finitef, "if", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(__finitel, "iLd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fmod, "ddd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(fmodf, "fff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(fmodl, "LdLdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(frexp, "ddi*", "fn", "math.h", ALL_LANGUAGES) LIBBUILTIN(frexpf, "ffi*", "fn", "math.h", ALL_LANGUAGES) LIBBUILTIN(frexpl, "LdLdi*", "fn", "math.h", ALL_LANGUAGES) LIBBUILTIN(ldexp, "ddi", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(ldexpf, "ffi", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(ldexpl, "LdLdi", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(modf, "ddd*", "fn", "math.h", ALL_LANGUAGES) LIBBUILTIN(modff, "fff*", "fn", "math.h", ALL_LANGUAGES) LIBBUILTIN(modfl, "LdLdLd*", "fn", "math.h", ALL_LANGUAGES) LIBBUILTIN(nan, "dcC*", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(nanf, "fcC*", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(nanl, "LdcC*", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(pow, "ddd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(powf, "fff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(powl, "LdLdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(acos, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(acosf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(acosl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(acosh, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(acoshf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(acoshl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(asin, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(asinf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(asinl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(asinh, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(asinhf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(asinhl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(atan, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(atanf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(atanl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(atanh, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(atanhf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(atanhl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(cbrt, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(cbrtf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(cbrtl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(ceil, "dd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(ceilf, "ff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(ceill, "LdLd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(cos, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(cosf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(cosl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(cosh, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(coshf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(coshl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(erf, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(erff, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(erfl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(erfc, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(erfcf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(erfcl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(exp, "dd", "fne", "math.h", ALL_LANGUAGES)
LIBBUILTIN(expf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(expl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(exp2, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(exp2f, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(exp2l, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(expm1, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(expm1f, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(expm1l, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(fdim, "ddd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(fdimf, "fff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(fdiml, "LdLdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(floor, "dd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(floorf, "ff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(floorl, "LdLd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fma, "dddd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(fmaf, "ffff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(fmal, "LdLdLdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(fmax, "ddd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fmaxf, "fff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fmaxl, "LdLdLd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fmin, "ddd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fminf, "fff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(fminl, "LdLdLd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(hypot, "ddd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(hypotf, "fff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(hypotl, "LdLdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(ilogb, "id", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(ilogbf, "if", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(ilogbl, "iLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(lgamma, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(lgammaf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(lgammal, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(llrint, "LLid", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(llrintf, "LLif", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(llrintl, "LLiLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(llround, "LLid", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(llroundf, "LLif", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(llroundl, "LLiLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(logf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(logl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log10, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log10f, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log10l, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log1p, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log1pf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log1pl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log2, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log2f, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(log2l, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(logb, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(logbf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(logbl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(lrint, "Lid", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(lrintf, "Lif", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(lrintl, "LiLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(lround, "Lid", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(lroundf, "Lif", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(lroundl, "LiLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(nearbyint, "dd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(nearbyintf, "ff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(nearbyintl, "LdLd", 
"fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(nextafter, "ddd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(nextafterf, "fff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(nextafterl, "LdLdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(nexttoward, "ddLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(nexttowardf, "ffLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(nexttowardl, "LdLdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(remainder, "ddd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(remainderf, "fff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(remainderl, "LdLdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(rint, "dd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(rintf, "ff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(rintl, "LdLd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(round, "dd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(roundf, "ff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(roundl, "LdLd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(scalbln, "ddLi", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(scalblnf, "ffLi", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(scalblnl, "LdLdLi", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(scalbn, "ddi", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(scalbnf, "ffi", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(scalbnl, "LdLdi", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(sin, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(sinf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(sinl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(sinh, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(sinhf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(sinhl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(sqrt, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(sqrtf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(sqrtl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(tan, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(tanf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(tanl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(tanh, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(tanhf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(tanhl, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(tgamma, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(tgammaf, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(tgammal, "LdLd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(trunc, "dd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(truncf, "ff", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(truncl, "LdLd", "fnc", "math.h", ALL_LANGUAGES) LIBBUILTIN(cabs, "dXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cabsf, "fXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cabsl, "LdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cacos, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cacosf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cacosl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cacosh, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cacoshf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cacoshl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(carg, "dXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cargf, "fXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cargl, "LdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(casin, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(casinf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(casinl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(casinh, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(casinhf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) 
LIBBUILTIN(casinhl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(catan, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(catanf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(catanl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(catanh, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(catanhf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(catanhl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ccos, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ccosf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ccosl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ccosh, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ccoshf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ccoshl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cexp, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cexpf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cexpl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cimag, "dXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cimagf, "fXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cimagl, "LdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(conj, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(conjf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(conjl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(clog, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(clogf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(clogl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cproj, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cprojf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cprojl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cpow, "XdXdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cpowf, "XfXfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(cpowl, "XLdXLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(creal, "dXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(crealf, "fXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(creall, "LdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(csin, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(csinf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(csinl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(csinh, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(csinhf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(csinhl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(csqrt, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(csqrtf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(csqrtl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ctan, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ctanf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ctanl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ctanh, "XdXd", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ctanhf, "XfXf", "fnc", "complex.h", ALL_LANGUAGES) LIBBUILTIN(ctanhl, "XLdXLd", "fnc", "complex.h", ALL_LANGUAGES) // __sinpi and friends are OS X specific library functions, but otherwise much // like the standard (non-complex) sin (etc). 
LIBBUILTIN(__sinpi, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(__sinpif, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(__cospi, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(__cospif, "ff", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(__tanpi, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(__tanpif, "ff", "fne", "math.h", ALL_LANGUAGES) // Similarly, __exp10 is OS X only LIBBUILTIN(__exp10, "dd", "fne", "math.h", ALL_LANGUAGES) LIBBUILTIN(__exp10f, "ff", "fne", "math.h", ALL_LANGUAGES) // Blocks runtime builtin functions LIBBUILTIN(_Block_object_assign, "vv*vC*iC", "f", "Blocks.h", ALL_LANGUAGES) LIBBUILTIN(_Block_object_dispose, "vvC*iC", "f", "Blocks.h", ALL_LANGUAGES) // FIXME: Also declare NSConcreteGlobalBlock and NSConcreteStackBlock. // Annotation function BUILTIN(__builtin_annotation, "v.", "tn") // Invariants BUILTIN(__builtin_assume, "vb", "n") // Multiprecision Arithmetic Builtins. BUILTIN(__builtin_addcb, "UcUcCUcCUcCUc*", "n") BUILTIN(__builtin_addcs, "UsUsCUsCUsCUs*", "n") BUILTIN(__builtin_addc, "UiUiCUiCUiCUi*", "n") BUILTIN(__builtin_addcl, "ULiULiCULiCULiCULi*", "n") BUILTIN(__builtin_addcll, "ULLiULLiCULLiCULLiCULLi*", "n") BUILTIN(__builtin_subcb, "UcUcCUcCUcCUc*", "n") BUILTIN(__builtin_subcs, "UsUsCUsCUsCUs*", "n") BUILTIN(__builtin_subc, "UiUiCUiCUiCUi*", "n") BUILTIN(__builtin_subcl, "ULiULiCULiCULiCULi*", "n") BUILTIN(__builtin_subcll, "ULLiULLiCULLiCULLiCULLi*", "n") // Checked Arithmetic Builtins for Security. BUILTIN(__builtin_add_overflow, "v.", "nt") BUILTIN(__builtin_sub_overflow, "v.", "nt") BUILTIN(__builtin_mul_overflow, "v.", "nt") BUILTIN(__builtin_uadd_overflow, "bUiCUiCUi*", "n") BUILTIN(__builtin_uaddl_overflow, "bULiCULiCULi*", "n") BUILTIN(__builtin_uaddll_overflow, "bULLiCULLiCULLi*", "n") BUILTIN(__builtin_usub_overflow, "bUiCUiCUi*", "n") BUILTIN(__builtin_usubl_overflow, "bULiCULiCULi*", "n") BUILTIN(__builtin_usubll_overflow, "bULLiCULLiCULLi*", "n") BUILTIN(__builtin_umul_overflow, "bUiCUiCUi*", "n") BUILTIN(__builtin_umull_overflow, "bULiCULiCULi*", "n") BUILTIN(__builtin_umulll_overflow, "bULLiCULLiCULLi*", "n") BUILTIN(__builtin_sadd_overflow, "bSiCSiCSi*", "n") BUILTIN(__builtin_saddl_overflow, "bSLiCSLiCSLi*", "n") BUILTIN(__builtin_saddll_overflow, "bSLLiCSLLiCSLLi*", "n") BUILTIN(__builtin_ssub_overflow, "bSiCSiCSi*", "n") BUILTIN(__builtin_ssubl_overflow, "bSLiCSLiCSLi*", "n") BUILTIN(__builtin_ssubll_overflow, "bSLLiCSLLiCSLLi*", "n") BUILTIN(__builtin_smul_overflow, "bSiCSiCSi*", "n") BUILTIN(__builtin_smull_overflow, "bSLiCSLiCSLi*", "n") BUILTIN(__builtin_smulll_overflow, "bSLLiCSLLiCSLLi*", "n") // Clang builtins (not available in GCC). BUILTIN(__builtin_addressof, "v*v&", "nct") BUILTIN(__builtin_operator_new, "v*z", "c") BUILTIN(__builtin_operator_delete, "vv*", "n") +BUILTIN(__builtin_char_memchr, "c*cC*iz", "n") // Safestack builtins BUILTIN(__builtin___get_unsafe_stack_start, "v*", "Fn") BUILTIN(__builtin___get_unsafe_stack_ptr, "v*", "Fn") // Nontemporal loads/stores builtins BUILTIN(__builtin_nontemporal_store, "v.", "t") BUILTIN(__builtin_nontemporal_load, "v.", "t")
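// Aside (editorial note, not part of Builtins.def): the type strings are positional encodings decoded by the same elided legend: the first letter is the return type and the rest are parameter types, with prefix modifiers (U = unsigned, L = long, LL = long long, S = signed) and postfix modifiers (* = pointer, C = const), and a trailing "." for an ellipsis. // For example, BUILTIN(__builtin_addc, "UiUiCUiCUiCUi*", "n") above declares the nothrow prototype: unsigned int __builtin_addc(unsigned int x, unsigned int y, unsigned int carryin, unsigned int *carryout);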
// Coroutine intrinsics. BUILTIN(__builtin_coro_resume, "vv*", "") BUILTIN(__builtin_coro_destroy, "vv*", "") BUILTIN(__builtin_coro_done, "bv*", "n") BUILTIN(__builtin_coro_promise, "v*v*IiIb", "n") BUILTIN(__builtin_coro_size, "z", "n") BUILTIN(__builtin_coro_frame, "v*", "n") BUILTIN(__builtin_coro_free, "v*v*", "n") BUILTIN(__builtin_coro_id, "v*Iiv*v*v*", "n") BUILTIN(__builtin_coro_alloc, "b", "n") BUILTIN(__builtin_coro_begin, "v*v*", "n") BUILTIN(__builtin_coro_end, "vv*Ib", "n") BUILTIN(__builtin_coro_suspend, "cIb", "n") BUILTIN(__builtin_coro_param, "bv*v*", "n") // OpenCL v2.0 s6.13.16, s9.17.3.5 - Pipe functions. // We need the generic prototype, since the packet type could be anything. LANGBUILTIN(read_pipe, "i.", "tn", OCLC20_LANG) LANGBUILTIN(write_pipe, "i.", "tn", OCLC20_LANG) LANGBUILTIN(reserve_read_pipe, "i.", "tn", OCLC20_LANG) LANGBUILTIN(reserve_write_pipe, "i.", "tn", OCLC20_LANG) LANGBUILTIN(commit_write_pipe, "v.", "tn", OCLC20_LANG) LANGBUILTIN(commit_read_pipe, "v.", "tn", OCLC20_LANG) LANGBUILTIN(sub_group_reserve_read_pipe, "i.", "tn", OCLC20_LANG) LANGBUILTIN(sub_group_reserve_write_pipe, "i.", "tn", OCLC20_LANG) LANGBUILTIN(sub_group_commit_read_pipe, "v.", "tn", OCLC20_LANG) LANGBUILTIN(sub_group_commit_write_pipe, "v.", "tn", OCLC20_LANG) LANGBUILTIN(work_group_reserve_read_pipe, "i.", "tn", OCLC20_LANG) LANGBUILTIN(work_group_reserve_write_pipe, "i.", "tn", OCLC20_LANG) LANGBUILTIN(work_group_commit_read_pipe, "v.", "tn", OCLC20_LANG) LANGBUILTIN(work_group_commit_write_pipe, "v.", "tn", OCLC20_LANG) LANGBUILTIN(get_pipe_num_packets, "Ui.", "tn", OCLC20_LANG) LANGBUILTIN(get_pipe_max_packets, "Ui.", "tn", OCLC20_LANG) // OpenCL v2.0 s6.13.17 - Enqueue kernel functions. // A custom builtin check lets us perform special checking of the passed block arguments. LANGBUILTIN(enqueue_kernel, "i.", "tn", OCLC20_LANG) LANGBUILTIN(get_kernel_work_group_size, "i.", "tn", OCLC20_LANG) LANGBUILTIN(get_kernel_preferred_work_group_size_multiple, "i.", "tn", OCLC20_LANG) // OpenCL v2.0 s6.13.9 - Address space qualifier functions. LANGBUILTIN(to_global, "v*v*", "tn", OCLC20_LANG) LANGBUILTIN(to_local, "v*v*", "tn", OCLC20_LANG) LANGBUILTIN(to_private, "v*v*", "tn", OCLC20_LANG) // Builtins for os_log/os_trace BUILTIN(__builtin_os_log_format_buffer_size, "zcC*.", "p:0:nut") BUILTIN(__builtin_os_log_format, "v*v*cC*.", "p:0:nt") #undef BUILTIN #undef LIBBUILTIN #undef LANGBUILTIN
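The one functional change to Builtins.def in this import is the __builtin_char_memchr entry added above; its type string "c*cC*iz" decodes to char *(const char *, int, size_t). The ExprConstant.cpp hunk that follows teaches the constant evaluator to fold calls to it. A minimal sketch of what that enables, modeled on the constexpr std::char_traits<char>::find use case the builtin exists for:

constexpr const char *find(const char *s, __SIZE_TYPE__ n, char c) { return __builtin_char_memchr(s, c, n); } static_assert(*find("clang", 5, 'g') == 'g', ""); // folds in constant expressions, unlike plain memchr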
Index: vendor/clang/dist/lib/AST/ExprConstant.cpp =================================================================== --- vendor/clang/dist/lib/AST/ExprConstant.cpp (revision 312705) +++ vendor/clang/dist/lib/AST/ExprConstant.cpp (revision 312706) @@ -1,10531 +1,10533 @@ //===--- ExprConstant.cpp - Expression Constant Evaluator -----------------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements the Expr constant evaluator. // // Constant expression evaluation produces four main results: // // * A success/failure flag indicating whether constant folding was successful. // This is the 'bool' return value used by most of the code in this file. A // 'false' return value indicates that constant folding has failed, and any // appropriate diagnostic has already been produced. // // * An evaluated result, valid only if constant folding has not failed. // // * A flag indicating if evaluation encountered (unevaluated) side-effects. // These arise in cases such as (sideEffect(), 0) and (sideEffect() || 1), // where it is possible to determine the evaluated result regardless. // // * A set of notes indicating why the evaluation was not a constant expression // (under the C++11 / C++1y rules only, at the moment), or, if folding failed // too, why the expression could not be folded. // // If we are checking for a potential constant expression, failure to constant // fold a potential constant sub-expression will be indicated by a 'false' // return value (the expression could not be folded) and no diagnostic (the // expression is not necessarily non-constant). // //===----------------------------------------------------------------------===// #include "clang/AST/APValue.h" #include "clang/AST/ASTContext.h" #include "clang/AST/ASTDiagnostic.h" #include "clang/AST/ASTLambda.h" #include "clang/AST/CharUnits.h" #include "clang/AST/Expr.h" #include "clang/AST/RecordLayout.h" #include "clang/AST/StmtVisitor.h" #include "clang/AST/TypeLoc.h" #include "clang/Basic/Builtins.h" #include "clang/Basic/TargetInfo.h" #include "llvm/Support/raw_ostream.h" #include <cstring> #include <functional> using namespace clang; using llvm::APSInt; using llvm::APFloat; static bool IsGlobalLValue(APValue::LValueBase B); namespace { struct LValue; struct CallStackFrame; struct EvalInfo; static QualType getType(APValue::LValueBase B) { if (!B) return QualType(); if (const ValueDecl *D = B.dyn_cast<const ValueDecl*>()) return D->getType(); const Expr *Base = B.get<const Expr*>(); // For a materialized temporary, the type of the temporary we materialized // may not be the type of the expression. if (const MaterializeTemporaryExpr *MTE = dyn_cast<MaterializeTemporaryExpr>(Base)) { SmallVector<const Expr *, 2> CommaLHSs; SmallVector<SubobjectAdjustment, 2> Adjustments; const Expr *Temp = MTE->GetTemporaryExpr(); const Expr *Inner = Temp->skipRValueSubobjectAdjustments(CommaLHSs, Adjustments); // Keep any cv-qualifiers from the reference if we generated a temporary // for it directly. Otherwise use the type after adjustment. if (!Adjustments.empty()) return Inner->getType(); } return Base->getType(); } /// Get an LValue path entry, which is known to not be an array index, as a /// field or base class. static APValue::BaseOrMemberType getAsBaseOrMember(APValue::LValuePathEntry E) { APValue::BaseOrMemberType Value; Value.setFromOpaqueValue(E.BaseOrMember); return Value; } /// Get an LValue path entry, which is known to not be an array index, as a /// field declaration. static const FieldDecl *getAsField(APValue::LValuePathEntry E) { return dyn_cast<FieldDecl>(getAsBaseOrMember(E).getPointer()); } /// Get an LValue path entry, which is known to not be an array index, as a /// base class declaration. static const CXXRecordDecl *getAsBaseClass(APValue::LValuePathEntry E) { return dyn_cast<CXXRecordDecl>(getAsBaseOrMember(E).getPointer()); } /// Determine whether this LValue path entry for a base class names a virtual /// base class. static bool isVirtualBaseClass(APValue::LValuePathEntry E) { return getAsBaseOrMember(E).getInt(); } /// Given a CallExpr, try to get the alloc_size attribute. May return null. static const AllocSizeAttr *getAllocSizeAttr(const CallExpr *CE) { const FunctionDecl *Callee = CE->getDirectCallee(); return Callee ? Callee->getAttr<AllocSizeAttr>() : nullptr; }
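// Aside (illustrative, not part of this file): alloc_size is what lets the evaluator reason about otherwise-opaque allocation calls. Given a hypothetical allocator declared as // void *my_alloc(__SIZE_TYPE__ n) __attribute__((alloc_size(1))); // the helpers above and below allow an expression such as __builtin_object_size(my_alloc(32), 0) to fold to 32 even though the allocator's body is never seen (the attribute's argument index is 1-based).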
/// Attempts to unwrap a CallExpr (with an alloc_size attribute) from an Expr. /// This will look through a single cast. /// /// Returns null if we couldn't unwrap a function with alloc_size. static const CallExpr *tryUnwrapAllocSizeCall(const Expr *E) { if (!E->getType()->isPointerType()) return nullptr; E = E->IgnoreParens(); // If we're doing a variable assignment from e.g. malloc(N), there will // probably be a cast of some kind. Ignore it. if (const auto *Cast = dyn_cast<CastExpr>(E)) E = Cast->getSubExpr()->IgnoreParens(); if (const auto *CE = dyn_cast<CallExpr>(E)) return getAllocSizeAttr(CE) ? CE : nullptr; return nullptr; } /// Determines whether the given Base contains a call to a function /// with the alloc_size attribute. static bool isBaseAnAllocSizeCall(APValue::LValueBase Base) { const auto *E = Base.dyn_cast<const Expr *>(); return E && E->getType()->isPointerType() && tryUnwrapAllocSizeCall(E); } /// Find the path length and type of the most-derived subobject in the given /// path, and find the size of the containing array, if any. static unsigned findMostDerivedSubobject(ASTContext &Ctx, APValue::LValueBase Base, ArrayRef<APValue::LValuePathEntry> Path, uint64_t &ArraySize, QualType &Type, bool &IsArray) { // This only accepts LValueBases from APValues, and APValues don't support // arrays that lack size info. assert(!isBaseAnAllocSizeCall(Base) && "Unsized arrays shouldn't appear here"); unsigned MostDerivedLength = 0; Type = getType(Base); for (unsigned I = 0, N = Path.size(); I != N; ++I) { if (Type->isArrayType()) { const ConstantArrayType *CAT = cast<ConstantArrayType>(Ctx.getAsArrayType(Type)); Type = CAT->getElementType(); ArraySize = CAT->getSize().getZExtValue(); MostDerivedLength = I + 1; IsArray = true; } else if (Type->isAnyComplexType()) { const ComplexType *CT = Type->castAs<ComplexType>(); Type = CT->getElementType(); ArraySize = 2; MostDerivedLength = I + 1; IsArray = true; } else if (const FieldDecl *FD = getAsField(Path[I])) { Type = FD->getType(); ArraySize = 0; MostDerivedLength = I + 1; IsArray = false; } else { // Path[I] describes a base class. ArraySize = 0; IsArray = false; } } return MostDerivedLength; } // The order of this enum is important for diagnostics. enum CheckSubobjectKind { CSK_Base, CSK_Derived, CSK_Field, CSK_ArrayToPointer, CSK_ArrayIndex, CSK_This, CSK_Real, CSK_Imag }; /// A path from a glvalue to a subobject of that glvalue. struct SubobjectDesignator { /// True if the subobject was named in a manner not supported by C++11. Such /// lvalues can still be folded, but they are not core constant expressions /// and we cannot perform lvalue-to-rvalue conversions on them. unsigned Invalid : 1; /// Is this a pointer one past the end of an object? unsigned IsOnePastTheEnd : 1; /// Indicator of whether the first entry is an unsized array. unsigned FirstEntryIsAnUnsizedArray : 1; /// Indicator of whether the most-derived object is an array element. unsigned MostDerivedIsArrayElement : 1; /// The length of the path to the most-derived object of which this is a /// subobject. unsigned MostDerivedPathLength : 28; /// The size of the array of which the most-derived object is an element. /// This will always be 0 if the most-derived object is not an array /// element. 0 is not an indicator of whether or not the most-derived object /// is an array, however, because 0-length arrays are allowed. /// /// If the current array is an unsized array, the value of this is /// undefined. uint64_t MostDerivedArraySize; /// The type of the most derived object referred to by this address.
QualType MostDerivedType; typedef APValue::LValuePathEntry PathEntry; /// The entries on the path from the glvalue to the designated subobject. SmallVector<PathEntry, 8> Entries; SubobjectDesignator() : Invalid(true) {} explicit SubobjectDesignator(QualType T) : Invalid(false), IsOnePastTheEnd(false), FirstEntryIsAnUnsizedArray(false), MostDerivedIsArrayElement(false), MostDerivedPathLength(0), MostDerivedArraySize(0), MostDerivedType(T) {} SubobjectDesignator(ASTContext &Ctx, const APValue &V) : Invalid(!V.isLValue() || !V.hasLValuePath()), IsOnePastTheEnd(false), FirstEntryIsAnUnsizedArray(false), MostDerivedIsArrayElement(false), MostDerivedPathLength(0), MostDerivedArraySize(0) { assert(V.isLValue() && "Non-LValue used to make an LValue designator?"); if (!Invalid) { IsOnePastTheEnd = V.isLValueOnePastTheEnd(); ArrayRef<PathEntry> VEntries = V.getLValuePath(); Entries.insert(Entries.end(), VEntries.begin(), VEntries.end()); if (V.getLValueBase()) { bool IsArray = false; MostDerivedPathLength = findMostDerivedSubobject( Ctx, V.getLValueBase(), V.getLValuePath(), MostDerivedArraySize, MostDerivedType, IsArray); MostDerivedIsArrayElement = IsArray; } } } void setInvalid() { Invalid = true; Entries.clear(); } /// Determine whether the most derived subobject is an array without a /// known bound. bool isMostDerivedAnUnsizedArray() const { assert(!Invalid && "Calling this makes no sense on invalid designators"); return Entries.size() == 1 && FirstEntryIsAnUnsizedArray; } /// Determine what the most derived array's size is. Results in an assertion /// failure if the most derived array lacks a size. uint64_t getMostDerivedArraySize() const { assert(!isMostDerivedAnUnsizedArray() && "Unsized array has no size"); return MostDerivedArraySize; } /// Determine whether this is a one-past-the-end pointer. bool isOnePastTheEnd() const { assert(!Invalid); if (IsOnePastTheEnd) return true; if (!isMostDerivedAnUnsizedArray() && MostDerivedIsArrayElement && Entries[MostDerivedPathLength - 1].ArrayIndex == MostDerivedArraySize) return true; return false; } /// Check that this refers to a valid subobject. bool isValidSubobject() const { if (Invalid) return false; return !isOnePastTheEnd(); } /// Check that this refers to a valid subobject, and if not, produce a /// relevant diagnostic and set the designator as invalid. bool checkSubobject(EvalInfo &Info, const Expr *E, CheckSubobjectKind CSK); /// Update this designator to refer to the first element within this array. void addArrayUnchecked(const ConstantArrayType *CAT) { PathEntry Entry; Entry.ArrayIndex = 0; Entries.push_back(Entry); // This is a most-derived object. MostDerivedType = CAT->getElementType(); MostDerivedIsArrayElement = true; MostDerivedArraySize = CAT->getSize().getZExtValue(); MostDerivedPathLength = Entries.size(); } /// Update this designator to refer to the first element within the array of /// elements of type T. This is an array of unknown size. void addUnsizedArrayUnchecked(QualType ElemTy) { PathEntry Entry; Entry.ArrayIndex = 0; Entries.push_back(Entry); MostDerivedType = ElemTy; MostDerivedIsArrayElement = true; // The value in MostDerivedArraySize is undefined in this case. So, set it // to an arbitrary value that's likely to loudly break things if it's // used. MostDerivedArraySize = std::numeric_limits<uint64_t>::max() / 2; MostDerivedPathLength = Entries.size(); } /// Update this designator to refer to the given base or member of this /// object.
void addDeclUnchecked(const Decl *D, bool Virtual = false) { PathEntry Entry; APValue::BaseOrMemberType Value(D, Virtual); Entry.BaseOrMember = Value.getOpaqueValue(); Entries.push_back(Entry); // If this isn't a base class, it's a new most-derived object. if (const FieldDecl *FD = dyn_cast<FieldDecl>(D)) { MostDerivedType = FD->getType(); MostDerivedIsArrayElement = false; MostDerivedArraySize = 0; MostDerivedPathLength = Entries.size(); } } /// Update this designator to refer to the given complex component. void addComplexUnchecked(QualType EltTy, bool Imag) { PathEntry Entry; Entry.ArrayIndex = Imag; Entries.push_back(Entry); // This is technically a most-derived object, though in practice this // is unlikely to matter. MostDerivedType = EltTy; MostDerivedIsArrayElement = true; MostDerivedArraySize = 2; MostDerivedPathLength = Entries.size(); } void diagnosePointerArithmetic(EvalInfo &Info, const Expr *E, uint64_t N); /// Add N to the address of this subobject. void adjustIndex(EvalInfo &Info, const Expr *E, uint64_t N) { if (Invalid) return; if (isMostDerivedAnUnsizedArray()) { // Can't verify -- trust that the user is doing the right thing (or if // not, trust that the caller will catch the bad behavior). Entries.back().ArrayIndex += N; return; } if (MostDerivedPathLength == Entries.size() && MostDerivedIsArrayElement) { Entries.back().ArrayIndex += N; if (Entries.back().ArrayIndex > getMostDerivedArraySize()) { diagnosePointerArithmetic(Info, E, Entries.back().ArrayIndex); setInvalid(); } return; } // [expr.add]p4: For the purposes of these operators, a pointer to a // nonarray object behaves the same as a pointer to the first element of // an array of length one with the type of the object as its element type. if (IsOnePastTheEnd && N == (uint64_t)-1) IsOnePastTheEnd = false; else if (!IsOnePastTheEnd && N == 1) IsOnePastTheEnd = true; else if (N != 0) { diagnosePointerArithmetic(Info, E, uint64_t(IsOnePastTheEnd) + N); setInvalid(); } } }; /// A stack frame in the constexpr call stack. struct CallStackFrame { EvalInfo &Info; /// Parent - The caller of this stack frame. CallStackFrame *Caller; /// Callee - The function which was called. const FunctionDecl *Callee; /// This - The binding for the this pointer in this call, if any. const LValue *This; /// Arguments - Parameter bindings for this function call, indexed by /// parameters' function scope indices. APValue *Arguments; // Note that we intentionally use std::map here so that references to // values are stable. typedef std::map<const void*, APValue> MapTy; typedef MapTy::const_iterator temp_iterator; /// Temporaries - Temporary lvalues materialized within this stack frame. MapTy Temporaries; /// CallLoc - The location of the call expression for this call. SourceLocation CallLoc; /// Index - The call index of this call. unsigned Index; CallStackFrame(EvalInfo &Info, SourceLocation CallLoc, const FunctionDecl *Callee, const LValue *This, APValue *Arguments); ~CallStackFrame(); APValue *getTemporary(const void *Key) { MapTy::iterator I = Temporaries.find(Key); return I == Temporaries.end() ? nullptr : &I->second; } APValue &createTemporary(const void *Key, bool IsLifetimeExtended); };
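// Aside (editorial sketch, not part of this file): in source terms, SubobjectDesignator::adjustIndex above implements the [expr.add]p4 rule it quotes roughly as follows: // constexpr int x = 0; // constexpr const int *p = &x + 1; // OK: IsOnePastTheEnd becomes true // constexpr const int *q = &x + 2; // rejected via diagnosePointerArithmetic // constexpr int arr[3] = {1, 2, 3}; // constexpr const int *r = arr + 3; // OK: ArrayIndex equals the array bound // constexpr int bad = *r; // rejected: read past the end of arr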
/// Temporarily override 'this'. class ThisOverrideRAII { public: ThisOverrideRAII(CallStackFrame &Frame, const LValue *NewThis, bool Enable) : Frame(Frame), OldThis(Frame.This) { if (Enable) Frame.This = NewThis; } ~ThisOverrideRAII() { Frame.This = OldThis; } private: CallStackFrame &Frame; const LValue *OldThis; }; /// A partial diagnostic which we might know in advance that we are not going /// to emit. class OptionalDiagnostic { PartialDiagnostic *Diag; public: explicit OptionalDiagnostic(PartialDiagnostic *Diag = nullptr) : Diag(Diag) {} template<typename T> OptionalDiagnostic &operator<<(const T &v) { if (Diag) *Diag << v; return *this; } OptionalDiagnostic &operator<<(const APSInt &I) { if (Diag) { SmallVector<char, 32> Buffer; I.toString(Buffer); *Diag << StringRef(Buffer.data(), Buffer.size()); } return *this; } OptionalDiagnostic &operator<<(const APFloat &F) { if (Diag) { // FIXME: Force the precision of the source value down so we don't // print digits which are usually useless (we don't really care here if // we truncate a digit by accident in edge cases). Ideally, // APFloat::toString would automatically print the shortest // representation which rounds to the correct value, but it's a bit // tricky to implement. unsigned precision = llvm::APFloat::semanticsPrecision(F.getSemantics()); precision = (precision * 59 + 195) / 196; SmallVector<char, 32> Buffer; F.toString(Buffer, precision); *Diag << StringRef(Buffer.data(), Buffer.size()); } return *this; } }; /// A cleanup, and a flag indicating whether it is lifetime-extended. class Cleanup { llvm::PointerIntPair<APValue*, 1, bool> Value; public: Cleanup(APValue *Val, bool IsLifetimeExtended) : Value(Val, IsLifetimeExtended) {} bool isLifetimeExtended() const { return Value.getInt(); } void endLifetime() { *Value.getPointer() = APValue(); } }; /// EvalInfo - This is a private struct used by the evaluator to capture /// information about a subexpression as it is folded. It retains information /// about the AST context, but also maintains information about the folded /// expression. /// /// If an expression could be evaluated, it is still possible it is not a C /// "integer constant expression" or constant expression. If not, this struct /// captures information about how and why not. /// /// One bit of information passed *into* the request for constant folding /// indicates whether the subexpression is "evaluated" or not according to C /// rules. For example, the RHS of (0 && foo()) is not evaluated. We can /// evaluate the expression regardless of what the RHS is, but C only allows /// certain things in certain situations. struct LLVM_ALIGNAS(/*alignof(uint64_t)*/ 8) EvalInfo { ASTContext &Ctx; /// EvalStatus - Contains information about the evaluation. Expr::EvalStatus &EvalStatus; /// CurrentCall - The top of the constexpr call stack. CallStackFrame *CurrentCall; /// CallStackDepth - The number of calls in the call stack right now. unsigned CallStackDepth; /// NextCallIndex - The next call index to assign. unsigned NextCallIndex; /// StepsLeft - The remaining number of evaluation steps we're permitted /// to perform. This is essentially a limit for the number of statements /// we will evaluate. unsigned StepsLeft; /// BottomFrame - The frame in which evaluation started. This must be /// initialized after CurrentCall and CallStackDepth. CallStackFrame BottomFrame; /// A stack of values whose lifetimes end at the end of some surrounding /// evaluation frame. llvm::SmallVector<Cleanup, 16> CleanupStack; /// EvaluatingDecl - This is the declaration whose initializer is being /// evaluated, if any.
APValue::LValueBase EvaluatingDecl; /// EvaluatingDeclValue - This is the value being constructed for the /// declaration whose initializer is being evaluated, if any. APValue *EvaluatingDeclValue; /// The current array initialization index, if we're performing array /// initialization. uint64_t ArrayInitIndex = -1; /// HasActiveDiagnostic - Was the previous diagnostic stored? If so, further /// notes attached to it will also be stored, otherwise they will not be. bool HasActiveDiagnostic; /// \brief Have we emitted a diagnostic explaining why we couldn't constant /// fold (not just why it's not strictly a constant expression)? bool HasFoldFailureDiagnostic; /// \brief Whether or not we're currently speculatively evaluating. bool IsSpeculativelyEvaluating; enum EvaluationMode { /// Evaluate as a constant expression. Stop if we find that the expression /// is not a constant expression. EM_ConstantExpression, /// Evaluate as a potential constant expression. Keep going if we hit a /// construct that we can't evaluate yet (because we don't yet know the /// value of something) but stop if we hit something that could never be /// a constant expression. EM_PotentialConstantExpression, /// Fold the expression to a constant. Stop if we hit a side-effect that /// we can't model. EM_ConstantFold, /// Evaluate the expression looking for integer overflow and similar /// issues. Don't worry about side-effects, and try to visit all /// subexpressions. EM_EvaluateForOverflow, /// Evaluate in any way we know how. Don't worry about side-effects that /// can't be modeled. EM_IgnoreSideEffects, /// Evaluate as a constant expression. Stop if we find that the expression /// is not a constant expression. Some expressions can be retried in the /// optimizer if we don't constant fold them here, but in an unevaluated /// context we try to fold them immediately since the optimizer never /// gets a chance to look at it. EM_ConstantExpressionUnevaluated, /// Evaluate as a potential constant expression. Keep going if we hit a /// construct that we can't evaluate yet (because we don't yet know the /// value of something) but stop if we hit something that could never be /// a constant expression. Some expressions can be retried in the /// optimizer if we don't constant fold them here, but in an unevaluated /// context we try to fold them immediately since the optimizer never /// gets a chance to look at it. EM_PotentialConstantExpressionUnevaluated, /// Evaluate as a constant expression. Continue evaluating if either: /// - We find a MemberExpr with a base that can't be evaluated. /// - We find a variable initialized with a call to a function that has /// the alloc_size attribute on it. /// In either case, the LValue returned shall have an invalid base; in the /// former, the base will be the invalid MemberExpr, in the latter, the /// base will be either the alloc_size CallExpr or a CastExpr wrapping /// said CallExpr. EM_OffsetFold, } EvalMode; /// Are we checking whether the expression is a potential constant /// expression? bool checkingPotentialConstantExpression() const { return EvalMode == EM_PotentialConstantExpression || EvalMode == EM_PotentialConstantExpressionUnevaluated; } /// Are we checking an expression for overflow? // FIXME: We should check for any kind of undefined or suspicious behavior // in such constructs, not just overflow. 
bool checkingForOverflow() { return EvalMode == EM_EvaluateForOverflow; } EvalInfo(const ASTContext &C, Expr::EvalStatus &S, EvaluationMode Mode) : Ctx(const_cast<ASTContext &>(C)), EvalStatus(S), CurrentCall(nullptr), CallStackDepth(0), NextCallIndex(1), StepsLeft(getLangOpts().ConstexprStepLimit), BottomFrame(*this, SourceLocation(), nullptr, nullptr, nullptr), EvaluatingDecl((const ValueDecl *)nullptr), EvaluatingDeclValue(nullptr), HasActiveDiagnostic(false), HasFoldFailureDiagnostic(false), IsSpeculativelyEvaluating(false), EvalMode(Mode) {} void setEvaluatingDecl(APValue::LValueBase Base, APValue &Value) { EvaluatingDecl = Base; EvaluatingDeclValue = &Value; } const LangOptions &getLangOpts() const { return Ctx.getLangOpts(); } bool CheckCallLimit(SourceLocation Loc) { // Don't perform any constexpr calls (other than the call we're checking) // when checking a potential constant expression. if (checkingPotentialConstantExpression() && CallStackDepth > 1) return false; if (NextCallIndex == 0) { // NextCallIndex has wrapped around. FFDiag(Loc, diag::note_constexpr_call_limit_exceeded); return false; } if (CallStackDepth <= getLangOpts().ConstexprCallDepth) return true; FFDiag(Loc, diag::note_constexpr_depth_limit_exceeded) << getLangOpts().ConstexprCallDepth; return false; } CallStackFrame *getCallFrame(unsigned CallIndex) { assert(CallIndex && "no call index in getCallFrame"); // We will eventually hit BottomFrame, which has Index 1, so Frame can't // be null in this loop. CallStackFrame *Frame = CurrentCall; while (Frame->Index > CallIndex) Frame = Frame->Caller; return (Frame->Index == CallIndex) ? Frame : nullptr; } bool nextStep(const Stmt *S) { if (!StepsLeft) { FFDiag(S->getLocStart(), diag::note_constexpr_step_limit_exceeded); return false; } --StepsLeft; return true; } private: /// Add a diagnostic to the diagnostics list. PartialDiagnostic &addDiag(SourceLocation Loc, diag::kind DiagId) { PartialDiagnostic PD(DiagId, Ctx.getDiagAllocator()); EvalStatus.Diag->push_back(std::make_pair(Loc, PD)); return EvalStatus.Diag->back().second; } /// Add notes containing a call stack to the current point of evaluation. void addCallStack(unsigned Limit); private: OptionalDiagnostic Diag(SourceLocation Loc, diag::kind DiagId, unsigned ExtraNotes, bool IsCCEDiag) { if (EvalStatus.Diag) { // If we have a prior diagnostic, it will be noting that the expression // isn't a constant expression. This diagnostic is more important, // unless we require this evaluation to produce a constant expression. // // FIXME: We might want to show both diagnostics to the user in // EM_ConstantFold mode. if (!EvalStatus.Diag->empty()) { switch (EvalMode) { case EM_ConstantFold: case EM_IgnoreSideEffects: case EM_EvaluateForOverflow: if (!HasFoldFailureDiagnostic) break; // We've already failed to fold something. Keep that diagnostic.
case EM_ConstantExpression: case EM_PotentialConstantExpression: case EM_ConstantExpressionUnevaluated: case EM_PotentialConstantExpressionUnevaluated: case EM_OffsetFold: HasActiveDiagnostic = false; return OptionalDiagnostic(); } } unsigned CallStackNotes = CallStackDepth - 1; unsigned Limit = Ctx.getDiagnostics().getConstexprBacktraceLimit(); if (Limit) CallStackNotes = std::min(CallStackNotes, Limit + 1); if (checkingPotentialConstantExpression()) CallStackNotes = 0; HasActiveDiagnostic = true; HasFoldFailureDiagnostic = !IsCCEDiag; EvalStatus.Diag->clear(); EvalStatus.Diag->reserve(1 + ExtraNotes + CallStackNotes); addDiag(Loc, DiagId); if (!checkingPotentialConstantExpression()) addCallStack(Limit); return OptionalDiagnostic(&(*EvalStatus.Diag)[0].second); } HasActiveDiagnostic = false; return OptionalDiagnostic(); } public: // Diagnose that the evaluation could not be folded (FF => FoldFailure) OptionalDiagnostic FFDiag(SourceLocation Loc, diag::kind DiagId = diag::note_invalid_subexpr_in_const_expr, unsigned ExtraNotes = 0) { return Diag(Loc, DiagId, ExtraNotes, false); } OptionalDiagnostic FFDiag(const Expr *E, diag::kind DiagId = diag::note_invalid_subexpr_in_const_expr, unsigned ExtraNotes = 0) { if (EvalStatus.Diag) return Diag(E->getExprLoc(), DiagId, ExtraNotes, /*IsCCEDiag*/false); HasActiveDiagnostic = false; return OptionalDiagnostic(); } /// Diagnose that the evaluation does not produce a C++11 core constant /// expression. /// /// FIXME: Stop evaluating if we're in EM_ConstantExpression or /// EM_PotentialConstantExpression mode and we produce one of these. OptionalDiagnostic CCEDiag(SourceLocation Loc, diag::kind DiagId = diag::note_invalid_subexpr_in_const_expr, unsigned ExtraNotes = 0) { // Don't override a previous diagnostic. Don't bother collecting // diagnostics if we're evaluating for overflow. if (!EvalStatus.Diag || !EvalStatus.Diag->empty()) { HasActiveDiagnostic = false; return OptionalDiagnostic(); } return Diag(Loc, DiagId, ExtraNotes, true); } OptionalDiagnostic CCEDiag(const Expr *E, diag::kind DiagId = diag::note_invalid_subexpr_in_const_expr, unsigned ExtraNotes = 0) { return CCEDiag(E->getExprLoc(), DiagId, ExtraNotes); } /// Add a note to a prior diagnostic. OptionalDiagnostic Note(SourceLocation Loc, diag::kind DiagId) { if (!HasActiveDiagnostic) return OptionalDiagnostic(); return OptionalDiagnostic(&addDiag(Loc, DiagId)); } /// Add a stack of notes to a prior diagnostic. void addNotes(ArrayRef<PartialDiagnosticAt> Diags) { if (HasActiveDiagnostic) { EvalStatus.Diag->insert(EvalStatus.Diag->end(), Diags.begin(), Diags.end()); } } /// Should we continue evaluation after encountering a side-effect that we /// couldn't model? bool keepEvaluatingAfterSideEffect() { switch (EvalMode) { case EM_PotentialConstantExpression: case EM_PotentialConstantExpressionUnevaluated: case EM_EvaluateForOverflow: case EM_IgnoreSideEffects: return true; case EM_ConstantExpression: case EM_ConstantExpressionUnevaluated: case EM_ConstantFold: case EM_OffsetFold: return false; } llvm_unreachable("Missed EvalMode case"); } /// Note that we have had a side-effect, and determine whether we should /// keep evaluating. bool noteSideEffect() { EvalStatus.HasSideEffects = true; return keepEvaluatingAfterSideEffect(); } /// Should we continue evaluation after encountering undefined behavior?
bool keepEvaluatingAfterUndefinedBehavior() { switch (EvalMode) { case EM_EvaluateForOverflow: case EM_IgnoreSideEffects: case EM_ConstantFold: case EM_OffsetFold: return true; case EM_PotentialConstantExpression: case EM_PotentialConstantExpressionUnevaluated: case EM_ConstantExpression: case EM_ConstantExpressionUnevaluated: return false; } llvm_unreachable("Missed EvalMode case"); } /// Note that we hit something that was technically undefined behavior, but /// that we can evaluate past it (such as signed overflow or floating-point /// division by zero.) bool noteUndefinedBehavior() { EvalStatus.HasUndefinedBehavior = true; return keepEvaluatingAfterUndefinedBehavior(); } /// Should we continue evaluation as much as possible after encountering a /// construct which can't be reduced to a value? bool keepEvaluatingAfterFailure() { if (!StepsLeft) return false; switch (EvalMode) { case EM_PotentialConstantExpression: case EM_PotentialConstantExpressionUnevaluated: case EM_EvaluateForOverflow: return true; case EM_ConstantExpression: case EM_ConstantExpressionUnevaluated: case EM_ConstantFold: case EM_IgnoreSideEffects: case EM_OffsetFold: return false; } llvm_unreachable("Missed EvalMode case"); } /// Notes that we failed to evaluate an expression that other expressions /// directly depend on, and determine if we should keep evaluating. This /// should only be called if we actually intend to keep evaluating. /// /// Call noteSideEffect() instead if we may be able to ignore the value that /// we failed to evaluate, e.g. if we failed to evaluate Foo() in: /// /// (Foo(), 1) // use noteSideEffect /// (Foo() || true) // use noteSideEffect /// Foo() + 1 // use noteFailure LLVM_NODISCARD bool noteFailure() { // Failure when evaluating some expression often means there is some // subexpression whose evaluation was skipped. Therefore, (because we // don't track whether we skipped an expression when unwinding after an // evaluation failure) every evaluation failure that bubbles up from a // subexpression implies that a side-effect has potentially happened. We // skip setting the HasSideEffects flag to true until we decide to // continue evaluating after that point, which happens here. bool KeepGoing = keepEvaluatingAfterFailure(); EvalStatus.HasSideEffects |= KeepGoing; return KeepGoing; } bool allowInvalidBaseExpr() const { return EvalMode == EM_OffsetFold; } class ArrayInitLoopIndex { EvalInfo &Info; uint64_t OuterIndex; public: ArrayInitLoopIndex(EvalInfo &Info) : Info(Info), OuterIndex(Info.ArrayInitIndex) { Info.ArrayInitIndex = 0; } ~ArrayInitLoopIndex() { Info.ArrayInitIndex = OuterIndex; } operator uint64_t&() { return Info.ArrayInitIndex; } }; }; /// Object used to treat all foldable expressions as constant expressions. 
struct FoldConstant { EvalInfo &Info; bool Enabled; bool HadNoPriorDiags; EvalInfo::EvaluationMode OldMode; explicit FoldConstant(EvalInfo &Info, bool Enabled) : Info(Info), Enabled(Enabled), HadNoPriorDiags(Info.EvalStatus.Diag && Info.EvalStatus.Diag->empty() && !Info.EvalStatus.HasSideEffects), OldMode(Info.EvalMode) { if (Enabled && (Info.EvalMode == EvalInfo::EM_ConstantExpression || Info.EvalMode == EvalInfo::EM_ConstantExpressionUnevaluated)) Info.EvalMode = EvalInfo::EM_ConstantFold; } void keepDiagnostics() { Enabled = false; } ~FoldConstant() { if (Enabled && HadNoPriorDiags && !Info.EvalStatus.Diag->empty() && !Info.EvalStatus.HasSideEffects) Info.EvalStatus.Diag->clear(); Info.EvalMode = OldMode; } }; /// RAII object used to treat the current evaluation as the correct pointer /// offset fold for the current EvalMode struct FoldOffsetRAII { EvalInfo &Info; EvalInfo::EvaluationMode OldMode; explicit FoldOffsetRAII(EvalInfo &Info) : Info(Info), OldMode(Info.EvalMode) { if (!Info.checkingPotentialConstantExpression()) Info.EvalMode = EvalInfo::EM_OffsetFold; } ~FoldOffsetRAII() { Info.EvalMode = OldMode; } }; /// RAII object used to optionally suppress diagnostics and side-effects from /// a speculative evaluation. class SpeculativeEvaluationRAII { /// Pair of EvalInfo, and a bit that stores whether or not we were /// speculatively evaluating when we created this RAII. llvm::PointerIntPair<EvalInfo *, 1, bool> InfoAndOldSpecEval; Expr::EvalStatus Old; void moveFromAndCancel(SpeculativeEvaluationRAII &&Other) { InfoAndOldSpecEval = Other.InfoAndOldSpecEval; Old = Other.Old; Other.InfoAndOldSpecEval.setPointer(nullptr); } void maybeRestoreState() { EvalInfo *Info = InfoAndOldSpecEval.getPointer(); if (!Info) return; Info->EvalStatus = Old; Info->IsSpeculativelyEvaluating = InfoAndOldSpecEval.getInt(); } public: SpeculativeEvaluationRAII() = default; SpeculativeEvaluationRAII( EvalInfo &Info, SmallVectorImpl<PartialDiagnosticAt> *NewDiag = nullptr) : InfoAndOldSpecEval(&Info, Info.IsSpeculativelyEvaluating), Old(Info.EvalStatus) { Info.EvalStatus.Diag = NewDiag; Info.IsSpeculativelyEvaluating = true; } SpeculativeEvaluationRAII(const SpeculativeEvaluationRAII &Other) = delete; SpeculativeEvaluationRAII(SpeculativeEvaluationRAII &&Other) { moveFromAndCancel(std::move(Other)); } SpeculativeEvaluationRAII &operator=(SpeculativeEvaluationRAII &&Other) { maybeRestoreState(); moveFromAndCancel(std::move(Other)); return *this; } ~SpeculativeEvaluationRAII() { maybeRestoreState(); } }; /// RAII object wrapping a full-expression or block scope, and handling /// the ending of the lifetime of temporaries created within it. template<bool IsFullExpression> class ScopeRAII { EvalInfo &Info; unsigned OldStackSize; public: ScopeRAII(EvalInfo &Info) : Info(Info), OldStackSize(Info.CleanupStack.size()) {} ~ScopeRAII() { // Body moved to a static method to encourage the compiler to inline away // instances of this class. cleanup(Info, OldStackSize); } private: static void cleanup(EvalInfo &Info, unsigned OldStackSize) { unsigned NewEnd = OldStackSize; for (unsigned I = OldStackSize, N = Info.CleanupStack.size(); I != N; ++I) { if (IsFullExpression && Info.CleanupStack[I].isLifetimeExtended()) { // Full-expression cleanup of a lifetime-extended temporary: nothing // to do, just move this cleanup to the right place in the stack. std::swap(Info.CleanupStack[I], Info.CleanupStack[NewEnd]); ++NewEnd; } else { // End the lifetime of the object.
        Info.CleanupStack[I].endLifetime();
      }
    }
    Info.CleanupStack.erase(Info.CleanupStack.begin() + NewEnd,
                            Info.CleanupStack.end());
  }
};
typedef ScopeRAII<false> BlockScopeRAII;
typedef ScopeRAII<true> FullExpressionRAII;
}

bool SubobjectDesignator::checkSubobject(EvalInfo &Info, const Expr *E,
                                         CheckSubobjectKind CSK) {
  if (Invalid)
    return false;
  if (isOnePastTheEnd()) {
    Info.CCEDiag(E, diag::note_constexpr_past_end_subobject)
      << CSK;
    setInvalid();
    return false;
  }
  return true;
}

void SubobjectDesignator::diagnosePointerArithmetic(EvalInfo &Info,
                                                    const Expr *E, uint64_t N) {
  // If we're complaining, we must be able to statically determine the size of
  // the most derived array.
  if (MostDerivedPathLength == Entries.size() && MostDerivedIsArrayElement)
    Info.CCEDiag(E, diag::note_constexpr_array_index)
      << static_cast<int>(N) << /*array*/ 0
      << static_cast<unsigned>(getMostDerivedArraySize());
  else
    Info.CCEDiag(E, diag::note_constexpr_array_index)
      << static_cast<int>(N) << /*non-array*/ 1;
  setInvalid();
}

CallStackFrame::CallStackFrame(EvalInfo &Info, SourceLocation CallLoc,
                               const FunctionDecl *Callee, const LValue *This,
                               APValue *Arguments)
    : Info(Info), Caller(Info.CurrentCall), Callee(Callee), This(This),
      Arguments(Arguments), CallLoc(CallLoc), Index(Info.NextCallIndex++) {
  Info.CurrentCall = this;
  ++Info.CallStackDepth;
}

CallStackFrame::~CallStackFrame() {
  assert(Info.CurrentCall == this && "calls retired out of order");
  --Info.CallStackDepth;
  Info.CurrentCall = Caller;
}

APValue &CallStackFrame::createTemporary(const void *Key,
                                         bool IsLifetimeExtended) {
  APValue &Result = Temporaries[Key];
  assert(Result.isUninit() && "temporary created multiple times");
  Info.CleanupStack.push_back(Cleanup(&Result, IsLifetimeExtended));
  return Result;
}

static void describeCall(CallStackFrame *Frame, raw_ostream &Out);

void EvalInfo::addCallStack(unsigned Limit) {
  // Determine which calls to skip, if any.
  unsigned ActiveCalls = CallStackDepth - 1;
  unsigned SkipStart = ActiveCalls, SkipEnd = SkipStart;
  if (Limit && Limit < ActiveCalls) {
    SkipStart = Limit / 2 + Limit % 2;
    SkipEnd = ActiveCalls - Limit / 2;
  }

  // Walk the call stack and add the diagnostics.
  unsigned CallIdx = 0;
  for (CallStackFrame *Frame = CurrentCall; Frame != &BottomFrame;
       Frame = Frame->Caller, ++CallIdx) {
    // Skip this call?
    if (CallIdx >= SkipStart && CallIdx < SkipEnd) {
      if (CallIdx == SkipStart) {
        // Note that we're skipping calls.
        addDiag(Frame->CallLoc, diag::note_constexpr_calls_suppressed)
          << unsigned(ActiveCalls - Limit);
      }
      continue;
    }

    // Use a different note for an inheriting constructor, because from the
    // user's perspective it's not really a function at all.
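    // (An inheriting constructor comes from a using-declaration, e.g.
    //  `struct B : A { using A::A; };`; the note names the inheriting class
    //  rather than pretending there was a call to a function named B.)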
    if (auto *CD = dyn_cast_or_null<CXXConstructorDecl>(Frame->Callee)) {
      if (CD->isInheritingConstructor()) {
        addDiag(Frame->CallLoc, diag::note_constexpr_inherited_ctor_call_here)
          << CD->getParent();
        continue;
      }
    }

    SmallVector<char, 128> Buffer;
    llvm::raw_svector_ostream Out(Buffer);
    describeCall(Frame, Out);
    addDiag(Frame->CallLoc, diag::note_constexpr_call_here) << Out.str();
  }
}

namespace {
struct ComplexValue {
private:
  bool IsInt;

public:
  APSInt IntReal, IntImag;
  APFloat FloatReal, FloatImag;

  ComplexValue() : FloatReal(APFloat::Bogus()), FloatImag(APFloat::Bogus()) {}

  void makeComplexFloat() { IsInt = false; }
  bool isComplexFloat() const { return !IsInt; }
  APFloat &getComplexFloatReal() { return FloatReal; }
  APFloat &getComplexFloatImag() { return FloatImag; }

  void makeComplexInt() { IsInt = true; }
  bool isComplexInt() const { return IsInt; }
  APSInt &getComplexIntReal() { return IntReal; }
  APSInt &getComplexIntImag() { return IntImag; }

  void moveInto(APValue &v) const {
    if (isComplexFloat())
      v = APValue(FloatReal, FloatImag);
    else
      v = APValue(IntReal, IntImag);
  }
  void setFrom(const APValue &v) {
    assert(v.isComplexFloat() || v.isComplexInt());
    if (v.isComplexFloat()) {
      makeComplexFloat();
      FloatReal = v.getComplexFloatReal();
      FloatImag = v.getComplexFloatImag();
    } else {
      makeComplexInt();
      IntReal = v.getComplexIntReal();
      IntImag = v.getComplexIntImag();
    }
  }
};

struct LValue {
  APValue::LValueBase Base;
  CharUnits Offset;
  unsigned InvalidBase : 1;
  unsigned CallIndex : 31;
  SubobjectDesignator Designator;
  bool IsNullPtr;

  const APValue::LValueBase getLValueBase() const { return Base; }
  CharUnits &getLValueOffset() { return Offset; }
  const CharUnits &getLValueOffset() const { return Offset; }
  unsigned getLValueCallIndex() const { return CallIndex; }
  SubobjectDesignator &getLValueDesignator() { return Designator; }
  const SubobjectDesignator &getLValueDesignator() const { return Designator; }
  bool isNullPointer() const { return IsNullPtr; }

  void moveInto(APValue &V) const {
    if (Designator.Invalid)
      V = APValue(Base, Offset, APValue::NoLValuePath(), CallIndex,
                  IsNullPtr);
    else {
      assert(!InvalidBase && "APValues can't handle invalid LValue bases");
      assert(!Designator.FirstEntryIsAnUnsizedArray &&
             "Unsized array with a valid base?");
      V = APValue(Base, Offset, Designator.Entries,
                  Designator.IsOnePastTheEnd, CallIndex, IsNullPtr);
    }
  }
  void setFrom(ASTContext &Ctx, const APValue &V) {
    assert(V.isLValue() && "Setting LValue from a non-LValue?");
    Base = V.getLValueBase();
    Offset = V.getLValueOffset();
    InvalidBase = false;
    CallIndex = V.getLValueCallIndex();
    Designator = SubobjectDesignator(Ctx, V);
    IsNullPtr = V.isNullPointer();
  }

  void set(APValue::LValueBase B, unsigned I = 0, bool BInvalid = false,
           bool IsNullPtr_ = false, uint64_t Offset_ = 0) {
#ifndef NDEBUG
    // We only allow a few types of invalid bases. Enforce that here.
    if (BInvalid) {
      const auto *E = B.get<const Expr *>();
      assert((isa<MemberExpr>(E) || tryUnwrapAllocSizeCall(E)) &&
             "Unexpected type of invalid base");
    }
#endif

    Base = B;
    Offset = CharUnits::fromQuantity(Offset_);
    InvalidBase = BInvalid;
    CallIndex = I;
    Designator = SubobjectDesignator(getType(B));
    IsNullPtr = IsNullPtr_;
  }

  void setInvalid(APValue::LValueBase B, unsigned I = 0) {
    set(B, I, true);
  }

  // Check that this LValue is not based on a null pointer. If it is, produce
  // a diagnostic and mark the designator as invalid.
  bool checkNullPointer(EvalInfo &Info, const Expr *E,
                        CheckSubobjectKind CSK) {
    if (Designator.Invalid)
      return false;
    if (IsNullPtr) {
      Info.CCEDiag(E, diag::note_constexpr_null_subobject)
        << CSK;
      Designator.setInvalid();
      return false;
    }
    return true;
  }

  // Check this LValue refers to an object. If not, set the designator to be
  // invalid and emit a diagnostic.
  bool checkSubobject(EvalInfo &Info, const Expr *E, CheckSubobjectKind CSK) {
    return (CSK == CSK_ArrayToPointer || checkNullPointer(Info, E, CSK)) &&
           Designator.checkSubobject(Info, E, CSK);
  }

  void addDecl(EvalInfo &Info, const Expr *E,
               const Decl *D, bool Virtual = false) {
    if (checkSubobject(Info, E, isa<FieldDecl>(D) ? CSK_Field : CSK_Base))
      Designator.addDeclUnchecked(D, Virtual);
  }
  void addUnsizedArray(EvalInfo &Info, QualType ElemTy) {
    assert(Designator.Entries.empty() && getType(Base)->isPointerType());
    assert(isBaseAnAllocSizeCall(Base) &&
           "Only alloc_size bases can have unsized arrays");
    Designator.FirstEntryIsAnUnsizedArray = true;
    Designator.addUnsizedArrayUnchecked(ElemTy);
  }
  void addArray(EvalInfo &Info, const Expr *E, const ConstantArrayType *CAT) {
    if (checkSubobject(Info, E, CSK_ArrayToPointer))
      Designator.addArrayUnchecked(CAT);
  }
  void addComplex(EvalInfo &Info, const Expr *E, QualType EltTy, bool Imag) {
    if (checkSubobject(Info, E, Imag ? CSK_Imag : CSK_Real))
      Designator.addComplexUnchecked(EltTy, Imag);
  }
  void clearIsNullPointer() {
    IsNullPtr = false;
  }
  void adjustOffsetAndIndex(EvalInfo &Info, const Expr *E, uint64_t Index,
                            CharUnits ElementSize) {
    // Compute the new offset in the appropriate width.
    Offset += Index * ElementSize;
    if (Index && checkNullPointer(Info, E, CSK_ArrayIndex))
      Designator.adjustIndex(Info, E, Index);
    if (Index)
      clearIsNullPointer();
  }
  void adjustOffset(CharUnits N) {
    Offset += N;
    if (N.getQuantity())
      clearIsNullPointer();
  }
};

struct MemberPtr {
  MemberPtr() {}
  explicit MemberPtr(const ValueDecl *Decl) :
    DeclAndIsDerivedMember(Decl, false), Path() {}

  /// The member or (direct or indirect) field referred to by this member
  /// pointer, or 0 if this is a null member pointer.
  const ValueDecl *getDecl() const {
    return DeclAndIsDerivedMember.getPointer();
  }
  /// Is this actually a member of some type derived from the relevant class?
  bool isDerivedMember() const {
    return DeclAndIsDerivedMember.getInt();
  }
  /// Get the class which the declaration actually lives in.
  const CXXRecordDecl *getContainingRecord() const {
    return cast<CXXRecordDecl>(
        DeclAndIsDerivedMember.getPointer()->getDeclContext());
  }

  void moveInto(APValue &V) const {
    V = APValue(getDecl(), isDerivedMember(), Path);
  }
  void setFrom(const APValue &V) {
    assert(V.isMemberPointer());
    DeclAndIsDerivedMember.setPointer(V.getMemberPointerDecl());
    DeclAndIsDerivedMember.setInt(V.isMemberPointerToDerivedMember());
    Path.clear();
    ArrayRef<const CXXRecordDecl*> P = V.getMemberPointerPath();
    Path.insert(Path.end(), P.begin(), P.end());
  }

  /// DeclAndIsDerivedMember - The member declaration, and a flag indicating
  /// whether the member is a member of some class derived from the class type
  /// of the member pointer.
  llvm::PointerIntPair<const ValueDecl*, 1, bool> DeclAndIsDerivedMember;
  /// Path - The path of base/derived classes from the member declaration's
  /// class (exclusive) to the class type of the member pointer (inclusive).
  SmallVector<const CXXRecordDecl*, 4> Path;

  /// Perform a cast towards the class of the Decl (either up or down the
  /// hierarchy).
  bool castBack(const CXXRecordDecl *Class) {
    assert(!Path.empty());
    const CXXRecordDecl *Expected;
    if (Path.size() >= 2)
      Expected = Path[Path.size() - 2];
    else
      Expected = getContainingRecord();
    if (Expected->getCanonicalDecl() != Class->getCanonicalDecl()) {
      // C++11 [expr.static.cast]p12: In a conversion from (D::*) to (B::*),
      // if B does not contain the original member and is not a base or
      // derived class of the class containing the original member, the result
      // of the cast is undefined.
      // C++11 [conv.mem]p2 does not cover this case for a cast from (B::*) to
      // (D::*). We consider that to be a language defect.
      return false;
    }
    Path.pop_back();
    return true;
  }
  /// Perform a base-to-derived member pointer cast.
  bool castToDerived(const CXXRecordDecl *Derived) {
    if (!getDecl())
      return true;
    if (!isDerivedMember()) {
      Path.push_back(Derived);
      return true;
    }
    if (!castBack(Derived))
      return false;
    if (Path.empty())
      DeclAndIsDerivedMember.setInt(false);
    return true;
  }
  /// Perform a derived-to-base member pointer cast.
  bool castToBase(const CXXRecordDecl *Base) {
    if (!getDecl())
      return true;
    if (Path.empty())
      DeclAndIsDerivedMember.setInt(true);
    if (isDerivedMember()) {
      Path.push_back(Base);
      return true;
    }
    return castBack(Base);
  }
};

/// Compare two member pointers, which are assumed to be of the same type.
static bool operator==(const MemberPtr &LHS, const MemberPtr &RHS) {
  if (!LHS.getDecl() || !RHS.getDecl())
    return !LHS.getDecl() && !RHS.getDecl();
  if (LHS.getDecl()->getCanonicalDecl() != RHS.getDecl()->getCanonicalDecl())
    return false;
  return LHS.Path == RHS.Path;
}
}

static bool Evaluate(APValue &Result, EvalInfo &Info, const Expr *E);
static bool EvaluateInPlace(APValue &Result, EvalInfo &Info,
                            const LValue &This, const Expr *E,
                            bool AllowNonLiteralTypes = false);
static bool EvaluateLValue(const Expr *E, LValue &Result, EvalInfo &Info);
static bool EvaluatePointer(const Expr *E, LValue &Result, EvalInfo &Info);
static bool EvaluateMemberPointer(const Expr *E, MemberPtr &Result,
                                  EvalInfo &Info);
static bool EvaluateTemporary(const Expr *E, LValue &Result, EvalInfo &Info);
static bool EvaluateInteger(const Expr *E, APSInt &Result, EvalInfo &Info);
static bool EvaluateIntegerOrLValue(const Expr *E, APValue &Result,
                                    EvalInfo &Info);
static bool EvaluateFloat(const Expr *E, APFloat &Result, EvalInfo &Info);
static bool EvaluateComplex(const Expr *E, ComplexValue &Res, EvalInfo &Info);
static bool EvaluateAtomic(const Expr *E, APValue &Result, EvalInfo &Info);
static bool EvaluateAsRValue(EvalInfo &Info, const Expr *E, APValue &Result);

//===----------------------------------------------------------------------===//
// Misc utilities
//===----------------------------------------------------------------------===//

/// Produce a string describing the given constexpr call.
static void describeCall(CallStackFrame *Frame, raw_ostream &Out) {
  unsigned ArgIndex = 0;
  bool IsMemberCall = isa<CXXMethodDecl>(Frame->Callee) &&
                      !isa<CXXConstructorDecl>(Frame->Callee) &&
                      cast<CXXMethodDecl>(Frame->Callee)->isInstance();

  if (!IsMemberCall)
    Out << *Frame->Callee << '(';

  if (Frame->This && IsMemberCall) {
    APValue Val;
    Frame->This->moveInto(Val);
    Val.printPretty(Out, Frame->Info.Ctx,
                    Frame->This->Designator.MostDerivedType);
    // FIXME: Add parens around Val if needed.
    Out << "->" << *Frame->Callee << '(';
    IsMemberCall = false;
  }

  for (FunctionDecl::param_const_iterator I = Frame->Callee->param_begin(),
       E = Frame->Callee->param_end(); I != E; ++I, ++ArgIndex) {
    if (ArgIndex > (unsigned)IsMemberCall)
      Out << ", ";

    const ParmVarDecl *Param = *I;
    const APValue &Arg = Frame->Arguments[ArgIndex];
    Arg.printPretty(Out, Frame->Info.Ctx, Param->getType());

    if (ArgIndex == 0 && IsMemberCall)
      Out << "->" << *Frame->Callee << '(';
  }

  Out << ')';
}

/// Evaluate an expression to see if it had side-effects, and discard its
/// result.
/// \return \c true if the caller should keep evaluating.
static bool EvaluateIgnoredValue(EvalInfo &Info, const Expr *E) {
  APValue Scratch;
  if (!Evaluate(Scratch, Info, E))
    // We don't need the value, but we might have skipped a side effect here.
    return Info.noteSideEffect();
  return true;
}

/// Sign- or zero-extend a value to 64 bits. If it's already 64 bits, just
/// return its existing value.
static int64_t getExtValue(const APSInt &Value) {
  return Value.isSigned() ? Value.getSExtValue()
                          : static_cast<int64_t>(Value.getZExtValue());
}

/// Should this call expression be treated as a string literal?
static bool IsStringLiteralCall(const CallExpr *E) {
  unsigned Builtin = E->getBuiltinCallee();
  return (Builtin == Builtin::BI__builtin___CFStringMakeConstantString ||
          Builtin == Builtin::BI__builtin___NSStringMakeConstantString);
}

static bool IsGlobalLValue(APValue::LValueBase B) {
  // C++11 [expr.const]p3 An address constant expression is a prvalue core
  // constant expression of pointer type that evaluates to...

  // ... a null pointer value, or a prvalue core constant expression of type
  // std::nullptr_t.
  if (!B) return true;

  if (const ValueDecl *D = B.dyn_cast<const ValueDecl*>()) {
    // ... the address of an object with static storage duration,
    if (const VarDecl *VD = dyn_cast<VarDecl>(D))
      return VD->hasGlobalStorage();
    // ... the address of a function,
    return isa<FunctionDecl>(D);
  }

  const Expr *E = B.get<const Expr*>();
  switch (E->getStmtClass()) {
  default:
    return false;
  case Expr::CompoundLiteralExprClass: {
    const CompoundLiteralExpr *CLE = cast<CompoundLiteralExpr>(E);
    return CLE->isFileScope() && CLE->isLValue();
  }
  case Expr::MaterializeTemporaryExprClass:
    // A materialized temporary might have been lifetime-extended to static
    // storage duration.
    return cast<MaterializeTemporaryExpr>(E)->getStorageDuration() == SD_Static;
  // A string literal has static storage duration.
  case Expr::StringLiteralClass:
  case Expr::PredefinedExprClass:
  case Expr::ObjCStringLiteralClass:
  case Expr::ObjCEncodeExprClass:
  case Expr::CXXTypeidExprClass:
  case Expr::CXXUuidofExprClass:
    return true;
  case Expr::CallExprClass:
    return IsStringLiteralCall(cast<CallExpr>(E));
  // For GCC compatibility, &&label has static storage duration.
  case Expr::AddrLabelExprClass:
    return true;
  // A Block literal expression may be used as the initialization value for
  // Block variables at global or local static scope.
  case Expr::BlockExprClass:
    return !cast<BlockExpr>(E)->getBlockDecl()->hasCaptures();
  case Expr::ImplicitValueInitExprClass:
    // FIXME:
    // We can never form an lvalue with an implicit value initialization as its
    // base through expression evaluation, so these only appear in one case: the
    // implicit variable declaration we invent when checking whether a constexpr
    // constructor can produce a constant expression. We must assume that such
    // an expression might be a global lvalue.
    return true;
  }
}

static void NoteLValueLocation(EvalInfo &Info, APValue::LValueBase Base) {
  assert(Base && "no location for a null lvalue");
  const ValueDecl *VD = Base.dyn_cast<const ValueDecl*>();
  if (VD)
    Info.Note(VD->getLocation(), diag::note_declared_at);
  else
    Info.Note(Base.get<const Expr*>()->getExprLoc(),
              diag::note_constexpr_temporary_here);
}

/// Check that this reference or pointer core constant expression is a valid
/// value for an address or reference constant expression. Return true if we
/// can fold this expression, whether or not it's a constant expression.
static bool CheckLValueConstantExpression(EvalInfo &Info, SourceLocation Loc,
                                          QualType Type, const LValue &LVal) {
  bool IsReferenceType = Type->isReferenceType();

  APValue::LValueBase Base = LVal.getLValueBase();
  const SubobjectDesignator &Designator = LVal.getLValueDesignator();

  // Check that the object is a global. Note that the fake 'this' object we
  // manufacture when checking potential constant expressions is conservatively
  // assumed to be global here.
  if (!IsGlobalLValue(Base)) {
    if (Info.getLangOpts().CPlusPlus11) {
      const ValueDecl *VD = Base.dyn_cast<const ValueDecl*>();
      Info.FFDiag(Loc, diag::note_constexpr_non_global, 1)
        << IsReferenceType << !Designator.Entries.empty()
        << !!VD << VD;
      NoteLValueLocation(Info, Base);
    } else {
      Info.FFDiag(Loc);
    }
    // Don't allow references to temporaries to escape.
    return false;
  }
  assert((Info.checkingPotentialConstantExpression() ||
          LVal.getLValueCallIndex() == 0) &&
         "have call index for global lvalue");

  if (const ValueDecl *VD = Base.dyn_cast<const ValueDecl*>()) {
    if (const VarDecl *Var = dyn_cast<VarDecl>(VD)) {
      // Check if this is a thread-local variable.
      if (Var->getTLSKind())
        return false;

      // A dllimport variable never acts like a constant.
      if (Var->hasAttr<DLLImportAttr>())
        return false;
    }
    if (const auto *FD = dyn_cast<FunctionDecl>(VD)) {
      // __declspec(dllimport) must be handled very carefully:
      // We must never initialize an expression with the thunk in C++.
      // Doing otherwise would allow the same id-expression to yield
      // different addresses for the same function in different translation
      // units. However, this means that we must dynamically initialize the
      // expression with the contents of the import address table at runtime.
      //
      // The C language has no notion of ODR; furthermore, it has no notion of
      // dynamic initialization. This means that we are permitted to
      // perform initialization with the address of the thunk.
      if (Info.getLangOpts().CPlusPlus && FD->hasAttr<DLLImportAttr>())
        return false;
    }
  }

  // Allow address constant expressions to be past-the-end pointers. This is
  // an extension: the standard requires them to point to an object.
  if (!IsReferenceType)
    return true;

  // A reference constant expression must refer to an object.
  if (!Base) {
    // FIXME: diagnostic
    Info.CCEDiag(Loc);
    return true;
  }

  // Does this refer one past the end of some object?
  if (!Designator.Invalid && Designator.isOnePastTheEnd()) {
    const ValueDecl *VD = Base.dyn_cast<const ValueDecl*>();
    Info.FFDiag(Loc, diag::note_constexpr_past_end, 1)
      << !Designator.Entries.empty() << !!VD << VD;
    NoteLValueLocation(Info, Base);
  }

  return true;
}

/// Check that this core constant expression is of literal type, and if not,
/// produce an appropriate diagnostic.
static bool CheckLiteralType(EvalInfo &Info, const Expr *E,
                             const LValue *This = nullptr) {
  if (!E->isRValue() || E->getType()->isLiteralType(Info.Ctx))
    return true;

  // C++1y: A constant initializer for an object o [...] may also invoke
  // constexpr constructors for o and its subobjects even if those objects
  // are of non-literal class types.
  //
  // C++11 missed this detail for aggregates, so classes like this:
  //
  //   struct foo_t { union { int i; volatile int j; } u; };
  //
  // are not (obviously) initializable like so:
  //
  //   __attribute__((__require_constant_initialization__))
  //   static const foo_t x = {{0}};
  //
  // because "i" is a subobject with non-literal initialization (due to the
  // volatile member of the union). See:
  //   http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1677
  // Therefore, we use the C++1y behavior.
  if (This && Info.EvaluatingDecl == This->getLValueBase())
    return true;

  // Prvalue constant expressions must be of literal types.
  if (Info.getLangOpts().CPlusPlus11)
    Info.FFDiag(E, diag::note_constexpr_nonliteral)
      << E->getType();
  else
    Info.FFDiag(E, diag::note_invalid_subexpr_in_const_expr);
  return false;
}

/// Check that this core constant expression value is a valid value for a
/// constant expression. If not, report an appropriate diagnostic. Does not
/// check that the expression is of literal type.
static bool CheckConstantExpression(EvalInfo &Info, SourceLocation DiagLoc,
                                    QualType Type, const APValue &Value) {
  if (Value.isUninit()) {
    Info.FFDiag(DiagLoc, diag::note_constexpr_uninitialized)
      << true << Type;
    return false;
  }

  // We allow _Atomic(T) to be initialized from anything that T can be
  // initialized from.
  if (const AtomicType *AT = Type->getAs<AtomicType>())
    Type = AT->getValueType();

  // Core issue 1454: For a literal constant expression of array or class type,
  // each subobject of its value shall have been initialized by a constant
  // expression.
  if (Value.isArray()) {
    QualType EltTy = Type->castAsArrayTypeUnsafe()->getElementType();
    for (unsigned I = 0, N = Value.getArrayInitializedElts(); I != N; ++I) {
      if (!CheckConstantExpression(Info, DiagLoc, EltTy,
                                   Value.getArrayInitializedElt(I)))
        return false;
    }
    if (!Value.hasArrayFiller())
      return true;
    return CheckConstantExpression(Info, DiagLoc, EltTy,
                                   Value.getArrayFiller());
  }
  if (Value.isUnion() && Value.getUnionField()) {
    return CheckConstantExpression(Info, DiagLoc,
                                   Value.getUnionField()->getType(),
                                   Value.getUnionValue());
  }
  if (Value.isStruct()) {
    RecordDecl *RD = Type->castAs<RecordType>()->getDecl();
    if (const CXXRecordDecl *CD = dyn_cast<CXXRecordDecl>(RD)) {
      unsigned BaseIndex = 0;
      for (CXXRecordDecl::base_class_const_iterator I = CD->bases_begin(),
             End = CD->bases_end(); I != End; ++I, ++BaseIndex) {
        if (!CheckConstantExpression(Info, DiagLoc, I->getType(),
                                     Value.getStructBase(BaseIndex)))
          return false;
      }
    }
    for (const auto *I : RD->fields()) {
      if (!CheckConstantExpression(Info, DiagLoc, I->getType(),
                                   Value.getStructField(I->getFieldIndex())))
        return false;
    }
  }

  if (Value.isLValue()) {
    LValue LVal;
    LVal.setFrom(Info.Ctx, Value);
    return CheckLValueConstantExpression(Info, DiagLoc, Type, LVal);
  }

  // Everything else is fine.
  return true;
}

static const ValueDecl *GetLValueBaseDecl(const LValue &LVal) {
  return LVal.Base.dyn_cast<const ValueDecl*>();
}

static bool IsLiteralLValue(const LValue &Value) {
  if (Value.CallIndex)
    return false;
  const Expr *E = Value.Base.dyn_cast<const Expr*>();
  return E && !isa<MaterializeTemporaryExpr>(E);
}

static bool IsWeakLValue(const LValue &Value) {
  const ValueDecl *Decl = GetLValueBaseDecl(Value);
  return Decl && Decl->isWeak();
}

static bool isZeroSized(const LValue &Value) {
  const ValueDecl *Decl = GetLValueBaseDecl(Value);
  if (Decl && isa<VarDecl>(Decl)) {
    QualType Ty = Decl->getType();
    if (Ty->isArrayType())
      return Ty->isIncompleteType() ||
             Decl->getASTContext().getTypeSize(Ty) == 0;
  }
  return false;
}

static bool EvalPointerValueAsBool(const APValue &Value, bool &Result) {
  // A null base expression indicates a null pointer. These are always
  // evaluatable, and they are false unless the offset is zero.
  if (!Value.getLValueBase()) {
    Result = !Value.getLValueOffset().isZero();
    return true;
  }

  // We have a non-null base. These are generally known to be true, but if it's
  // a weak declaration it can be null at runtime.
  Result = true;
  const ValueDecl *Decl = Value.getLValueBase().dyn_cast<const ValueDecl*>();
  return !Decl || !Decl->isWeak();
}

static bool HandleConversionToBool(const APValue &Val, bool &Result) {
  switch (Val.getKind()) {
  case APValue::Uninitialized:
    return false;
  case APValue::Int:
    Result = Val.getInt().getBoolValue();
    return true;
  case APValue::Float:
    Result = !Val.getFloat().isZero();
    return true;
  case APValue::ComplexInt:
    Result = Val.getComplexIntReal().getBoolValue() ||
             Val.getComplexIntImag().getBoolValue();
    return true;
  case APValue::ComplexFloat:
    Result = !Val.getComplexFloatReal().isZero() ||
             !Val.getComplexFloatImag().isZero();
    return true;
  case APValue::LValue:
    return EvalPointerValueAsBool(Val, Result);
  case APValue::MemberPointer:
    Result = Val.getMemberPointerDecl();
    return true;
  case APValue::Vector:
  case APValue::Array:
  case APValue::Struct:
  case APValue::Union:
  case APValue::AddrLabelDiff:
    return false;
  }

  llvm_unreachable("unknown APValue kind");
}

static bool EvaluateAsBooleanCondition(const Expr *E, bool &Result,
                                       EvalInfo &Info) {
  assert(E->isRValue() && "missing lvalue-to-rvalue conv in bool condition");
  APValue Val;
  if (!Evaluate(Val, Info, E))
    return false;
  return HandleConversionToBool(Val, Result);
}

template<typename T>
static bool HandleOverflow(EvalInfo &Info, const Expr *E,
                           const T &SrcValue, QualType DestType) {
  Info.CCEDiag(E, diag::note_constexpr_overflow)
    << SrcValue << DestType;
  return Info.noteUndefinedBehavior();
}

static bool HandleFloatToIntCast(EvalInfo &Info, const Expr *E,
                                 QualType SrcType, const APFloat &Value,
                                 QualType DestType, APSInt &Result) {
  unsigned DestWidth = Info.Ctx.getIntWidth(DestType);
  // Determine whether we are converting to unsigned or signed.
  bool DestSigned = DestType->isSignedIntegerOrEnumerationType();

  Result = APSInt(DestWidth, !DestSigned);
  bool ignored;
  if (Value.convertToInteger(Result, llvm::APFloat::rmTowardZero, &ignored)
      & APFloat::opInvalidOp)
    return HandleOverflow(Info, E, Value, DestType);
  return true;
}

static bool HandleFloatToFloatCast(EvalInfo &Info, const Expr *E,
                                   QualType SrcType, QualType DestType,
                                   APFloat &Result) {
  APFloat Value = Result;
  bool ignored;
  if (Result.convert(Info.Ctx.getFloatTypeSemantics(DestType),
                     APFloat::rmNearestTiesToEven, &ignored)
      & APFloat::opOverflow)
    return HandleOverflow(Info, E, Value, DestType);
  return true;
}

static APSInt HandleIntToIntCast(EvalInfo &Info, const Expr *E,
                                 QualType DestType, QualType SrcType,
                                 const APSInt &Value) {
  unsigned DestWidth = Info.Ctx.getIntWidth(DestType);
  APSInt Result = Value;
  // Figure out if this is a truncate, extend or noop cast.
  // If the input is signed, do a sign extend, noop, or truncate.
  Result = Result.extOrTrunc(DestWidth);
  Result.setIsUnsigned(DestType->isUnsignedIntegerOrEnumerationType());
  return Result;
}

static bool HandleIntToFloatCast(EvalInfo &Info, const Expr *E,
                                 QualType SrcType, const APSInt &Value,
                                 QualType DestType, APFloat &Result) {
  Result = APFloat(Info.Ctx.getFloatTypeSemantics(DestType), 1);
  if (Result.convertFromAPInt(Value, Value.isSigned(),
                              APFloat::rmNearestTiesToEven)
      & APFloat::opOverflow)
    return HandleOverflow(Info, E, Value, DestType);
  return true;
}

static bool truncateBitfieldValue(EvalInfo &Info, const Expr *E,
                                  APValue &Value, const FieldDecl *FD) {
  assert(FD->isBitField() && "truncateBitfieldValue on non-bitfield");

  if (!Value.isInt()) {
    // Trying to store a pointer-cast-to-integer into a bitfield.
    // FIXME: In this case, we should provide the diagnostic for casting
    // a pointer to an integer.
    assert(Value.isLValue() && "integral value neither int nor lvalue?");
    Info.FFDiag(E);
    return false;
  }

  APSInt &Int = Value.getInt();
  unsigned OldBitWidth = Int.getBitWidth();
  unsigned NewBitWidth = FD->getBitWidthValue(Info.Ctx);
  if (NewBitWidth < OldBitWidth)
    Int = Int.trunc(NewBitWidth).extend(OldBitWidth);
  return true;
}

static bool EvalAndBitcastToAPInt(EvalInfo &Info, const Expr *E,
                                  llvm::APInt &Res) {
  APValue SVal;
  if (!Evaluate(SVal, Info, E))
    return false;
  if (SVal.isInt()) {
    Res = SVal.getInt();
    return true;
  }
  if (SVal.isFloat()) {
    Res = SVal.getFloat().bitcastToAPInt();
    return true;
  }
  if (SVal.isVector()) {
    QualType VecTy = E->getType();
    unsigned VecSize = Info.Ctx.getTypeSize(VecTy);
    QualType EltTy = VecTy->castAs<VectorType>()->getElementType();
    unsigned EltSize = Info.Ctx.getTypeSize(EltTy);
    bool BigEndian = Info.Ctx.getTargetInfo().isBigEndian();
    Res = llvm::APInt::getNullValue(VecSize);
    for (unsigned i = 0; i < SVal.getVectorLength(); i++) {
      APValue &Elt = SVal.getVectorElt(i);
      llvm::APInt EltAsInt;
      if (Elt.isInt()) {
        EltAsInt = Elt.getInt();
      } else if (Elt.isFloat()) {
        EltAsInt = Elt.getFloat().bitcastToAPInt();
      } else {
        // Don't try to handle vectors of anything other than int or float
        // (not sure if it's possible to hit this case).
        Info.FFDiag(E, diag::note_invalid_subexpr_in_const_expr);
        return false;
      }
      unsigned BaseEltSize = EltAsInt.getBitWidth();
      if (BigEndian)
        Res |= EltAsInt.zextOrTrunc(VecSize).rotr(i*EltSize+BaseEltSize);
      else
        Res |= EltAsInt.zextOrTrunc(VecSize).rotl(i*EltSize);
    }
    return true;
  }
  // Give up if the input isn't an int, float, or vector. For example, we
  // reject "(v4i16)(intptr_t)&a".
  Info.FFDiag(E, diag::note_invalid_subexpr_in_const_expr);
  return false;
}

/// Perform the given integer operation, which is known to need at most
/// BitWidth bits, and check for overflow in the original type (if that type
/// was not an unsigned type).
template<typename Operation>
static bool CheckedIntArithmetic(EvalInfo &Info, const Expr *E,
                                 const APSInt &LHS, const APSInt &RHS,
                                 unsigned BitWidth, Operation Op,
                                 APSInt &Result) {
  if (LHS.isUnsigned()) {
    Result = Op(LHS, RHS);
    return true;
  }

  APSInt Value(Op(LHS.extend(BitWidth), RHS.extend(BitWidth)), false);
  Result = Value.trunc(LHS.getBitWidth());
  if (Result.extend(BitWidth) != Value) {
    if (Info.checkingForOverflow())
      Info.Ctx.getDiagnostics().Report(E->getExprLoc(),
                                       diag::warn_integer_constant_overflow)
          << Result.toString(10) << E->getType();
    else
      return HandleOverflow(Info, E, Value, E->getType());
  }
  return true;
}

/// Perform the given binary integer operation.
static bool handleIntIntBinOp(EvalInfo &Info, const Expr *E, const APSInt &LHS,
                              BinaryOperatorKind Opcode, APSInt RHS,
                              APSInt &Result) {
  switch (Opcode) {
  default:
    Info.FFDiag(E);
    return false;
  case BO_Mul:
    return CheckedIntArithmetic(Info, E, LHS, RHS, LHS.getBitWidth() * 2,
                                std::multiplies<APSInt>(), Result);
  case BO_Add:
    return CheckedIntArithmetic(Info, E, LHS, RHS, LHS.getBitWidth() + 1,
                                std::plus<APSInt>(), Result);
  case BO_Sub:
    return CheckedIntArithmetic(Info, E, LHS, RHS, LHS.getBitWidth() + 1,
                                std::minus<APSInt>(), Result);
  case BO_And: Result = LHS & RHS; return true;
  case BO_Xor: Result = LHS ^ RHS; return true;
  case BO_Or:  Result = LHS | RHS; return true;
  case BO_Div:
  case BO_Rem:
    if (RHS == 0) {
      Info.FFDiag(E, diag::note_expr_divide_by_zero);
      return false;
    }
    Result = (Opcode == BO_Rem ? LHS % RHS : LHS / RHS);
    // Check for overflow case: INT_MIN / -1 or INT_MIN % -1. APSInt supports
    // this operation and gives the two's complement result.
    if (RHS.isNegative() && RHS.isAllOnesValue() &&
        LHS.isSigned() && LHS.isMinSignedValue())
      return HandleOverflow(Info, E, -LHS.extend(LHS.getBitWidth() + 1),
                            E->getType());
    return true;
  case BO_Shl: {
    if (Info.getLangOpts().OpenCL)
      // OpenCL 6.3j: shift values are effectively % word size of LHS.
      RHS &= APSInt(llvm::APInt(RHS.getBitWidth(),
                    static_cast<uint64_t>(LHS.getBitWidth() - 1)),
                    RHS.isUnsigned());
    else if (RHS.isSigned() && RHS.isNegative()) {
      // During constant-folding, a negative shift is an opposite shift. Such
      // a shift is not a constant expression.
      Info.CCEDiag(E, diag::note_constexpr_negative_shift) << RHS;
      RHS = -RHS;
      goto shift_right;
    }
  shift_left:
    // C++11 [expr.shift]p1: Shift width must be less than the bit width of
    // the shifted type.
    unsigned SA = (unsigned) RHS.getLimitedValue(LHS.getBitWidth()-1);
    if (SA != RHS) {
      Info.CCEDiag(E, diag::note_constexpr_large_shift)
        << RHS << E->getType() << LHS.getBitWidth();
    } else if (LHS.isSigned()) {
      // C++11 [expr.shift]p2: A signed left shift must have a non-negative
      // operand, and must not overflow the corresponding unsigned type.
      if (LHS.isNegative())
        Info.CCEDiag(E, diag::note_constexpr_lshift_of_negative) << LHS;
      else if (LHS.countLeadingZeros() < SA)
        Info.CCEDiag(E, diag::note_constexpr_lshift_discards);
    }
    Result = LHS << SA;
    return true;
  }
  case BO_Shr: {
    if (Info.getLangOpts().OpenCL)
      // OpenCL 6.3j: shift values are effectively % word size of LHS.
      RHS &= APSInt(llvm::APInt(RHS.getBitWidth(),
                    static_cast<uint64_t>(LHS.getBitWidth() - 1)),
                    RHS.isUnsigned());
    else if (RHS.isSigned() && RHS.isNegative()) {
      // During constant-folding, a negative shift is an opposite shift. Such a
      // shift is not a constant expression.
      Info.CCEDiag(E, diag::note_constexpr_negative_shift) << RHS;
      RHS = -RHS;
      goto shift_left;
    }
  shift_right:
    // C++11 [expr.shift]p1: Shift width must be less than the bit width of the
    // shifted type.
    unsigned SA = (unsigned) RHS.getLimitedValue(LHS.getBitWidth()-1);
    if (SA != RHS)
      Info.CCEDiag(E, diag::note_constexpr_large_shift)
        << RHS << E->getType() << LHS.getBitWidth();
    Result = LHS >> SA;
    return true;
  }

  case BO_LT: Result = LHS < RHS; return true;
  case BO_GT: Result = LHS > RHS; return true;
  case BO_LE: Result = LHS <= RHS; return true;
  case BO_GE: Result = LHS >= RHS; return true;
  case BO_EQ: Result = LHS == RHS; return true;
  case BO_NE: Result = LHS != RHS; return true;
  }
}

/// Perform the given binary floating-point operation, in-place, on LHS.
static bool handleFloatFloatBinOp(EvalInfo &Info, const Expr *E,
                                  APFloat &LHS, BinaryOperatorKind Opcode,
                                  const APFloat &RHS) {
  switch (Opcode) {
  default:
    Info.FFDiag(E);
    return false;
  case BO_Mul:
    LHS.multiply(RHS, APFloat::rmNearestTiesToEven);
    break;
  case BO_Add:
    LHS.add(RHS, APFloat::rmNearestTiesToEven);
    break;
  case BO_Sub:
    LHS.subtract(RHS, APFloat::rmNearestTiesToEven);
    break;
  case BO_Div:
    LHS.divide(RHS, APFloat::rmNearestTiesToEven);
    break;
  }

  if (LHS.isInfinity() || LHS.isNaN()) {
    Info.CCEDiag(E, diag::note_constexpr_float_arithmetic) << LHS.isNaN();
    return Info.noteUndefinedBehavior();
  }
  return true;
}

/// Cast an lvalue referring to a base subobject to a derived class, by
/// truncating the lvalue's path to the given length.
static bool CastToDerivedClass(EvalInfo &Info, const Expr *E, LValue &Result,
                               const RecordDecl *TruncatedType,
                               unsigned TruncatedElements) {
  SubobjectDesignator &D = Result.Designator;

  // Check we actually point to a derived class object.
  if (TruncatedElements == D.Entries.size())
    return true;
  assert(TruncatedElements >= D.MostDerivedPathLength &&
         "not casting to a derived class");
  if (!Result.checkSubobject(Info, E, CSK_Derived))
    return false;

  // Truncate the path to the subobject, and remove any derived-to-base
  // offsets.
  const RecordDecl *RD = TruncatedType;
  for (unsigned I = TruncatedElements, N = D.Entries.size(); I != N; ++I) {
    if (RD->isInvalidDecl())
      return false;
    const ASTRecordLayout &Layout = Info.Ctx.getASTRecordLayout(RD);
    const CXXRecordDecl *Base = getAsBaseClass(D.Entries[I]);
    if (isVirtualBaseClass(D.Entries[I]))
      Result.Offset -= Layout.getVBaseClassOffset(Base);
    else
      Result.Offset -= Layout.getBaseClassOffset(Base);
    RD = Base;
  }
  D.Entries.resize(TruncatedElements);
  return true;
}

static bool HandleLValueDirectBase(EvalInfo &Info, const Expr *E, LValue &Obj,
                                   const CXXRecordDecl *Derived,
                                   const CXXRecordDecl *Base,
                                   const ASTRecordLayout *RL = nullptr) {
  if (!RL) {
    if (Derived->isInvalidDecl()) return false;
    RL = &Info.Ctx.getASTRecordLayout(Derived);
  }

  Obj.getLValueOffset() += RL->getBaseClassOffset(Base);
  Obj.addDecl(Info, E, Base, /*Virtual*/ false);
  return true;
}

static bool HandleLValueBase(EvalInfo &Info, const Expr *E, LValue &Obj,
                             const CXXRecordDecl *DerivedDecl,
                             const CXXBaseSpecifier *Base) {
  const CXXRecordDecl *BaseDecl = Base->getType()->getAsCXXRecordDecl();

  if (!Base->isVirtual())
    return HandleLValueDirectBase(Info, E, Obj, DerivedDecl, BaseDecl);

  SubobjectDesignator &D = Obj.Designator;
  if (D.Invalid)
    return false;

  // Extract most-derived object and corresponding type.
  DerivedDecl = D.MostDerivedType->getAsCXXRecordDecl();
  if (!CastToDerivedClass(Info, E, Obj, DerivedDecl, D.MostDerivedPathLength))
    return false;

  // Find the virtual base class.
  if (DerivedDecl->isInvalidDecl()) return false;
  const ASTRecordLayout &Layout = Info.Ctx.getASTRecordLayout(DerivedDecl);
  Obj.getLValueOffset() += Layout.getVBaseClassOffset(BaseDecl);
  Obj.addDecl(Info, E, BaseDecl, /*Virtual*/ true);
  return true;
}

static bool HandleLValueBasePath(EvalInfo &Info, const CastExpr *E,
                                 QualType Type, LValue &Result) {
  for (CastExpr::path_const_iterator PathI = E->path_begin(),
                                     PathE = E->path_end();
       PathI != PathE; ++PathI) {
    if (!HandleLValueBase(Info, E, Result, Type->getAsCXXRecordDecl(),
                          *PathI))
      return false;
    Type = (*PathI)->getType();
  }
  return true;
}

/// Update LVal to refer to the given field, which must be a member of the type
/// currently described by LVal.
static bool HandleLValueMember(EvalInfo &Info, const Expr *E, LValue &LVal,
                               const FieldDecl *FD,
                               const ASTRecordLayout *RL = nullptr) {
  if (!RL) {
    if (FD->getParent()->isInvalidDecl()) return false;
    RL = &Info.Ctx.getASTRecordLayout(FD->getParent());
  }

  unsigned I = FD->getFieldIndex();
  LVal.adjustOffset(Info.Ctx.toCharUnitsFromBits(RL->getFieldOffset(I)));
  LVal.addDecl(Info, E, FD);
  return true;
}

/// Update LVal to refer to the given indirect field.
static bool HandleLValueIndirectMember(EvalInfo &Info, const Expr *E,
                                       LValue &LVal,
                                       const IndirectFieldDecl *IFD) {
  for (const auto *C : IFD->chain())
    if (!HandleLValueMember(Info, E, LVal, cast<FieldDecl>(C)))
      return false;
  return true;
}

/// Get the size of the given type in char units.
static bool HandleSizeof(EvalInfo &Info, SourceLocation Loc,
                         QualType Type, CharUnits &Size) {
  // sizeof(void), __alignof__(void), sizeof(function) = 1 as a gcc
  // extension.
  if (Type->isVoidType() || Type->isFunctionType()) {
    Size = CharUnits::One();
    return true;
  }

  if (Type->isDependentType()) {
    Info.FFDiag(Loc);
    return false;
  }

  if (!Type->isConstantSizeType()) {
    // sizeof(vla) is not a constantexpr: C99 6.5.3.4p2.
    // FIXME: Better diagnostic.
    Info.FFDiag(Loc);
    return false;
  }

  Size = Info.Ctx.getTypeSizeInChars(Type);
  return true;
}

/// Update a pointer value to model pointer arithmetic.
/// \param Info - Information about the ongoing evaluation.
/// \param E - The expression being evaluated, for diagnostic purposes.
/// \param LVal - The pointer value to be updated.
/// \param EltTy - The pointee type represented by LVal.
/// \param Adjustment - The adjustment, in objects of type EltTy, to add.
static bool HandleLValueArrayAdjustment(EvalInfo &Info, const Expr *E,
                                        LValue &LVal, QualType EltTy,
                                        int64_t Adjustment) {
  CharUnits SizeOfPointee;
  if (!HandleSizeof(Info, E->getExprLoc(), EltTy, SizeOfPointee))
    return false;

  LVal.adjustOffsetAndIndex(Info, E, Adjustment, SizeOfPointee);
  return true;
}

/// Update an lvalue to refer to a component of a complex number.
/// \param Info - Information about the ongoing evaluation.
/// \param LVal - The lvalue to be updated.
/// \param EltTy - The complex number's component type.
/// \param Imag - False for the real component, true for the imaginary.
static bool HandleLValueComplexElement(EvalInfo &Info, const Expr *E,
                                       LValue &LVal, QualType EltTy,
                                       bool Imag) {
  if (Imag) {
    CharUnits SizeOfComponent;
    if (!HandleSizeof(Info, E->getExprLoc(), EltTy, SizeOfComponent))
      return false;
    LVal.Offset += SizeOfComponent;
  }
  LVal.addComplex(Info, E, EltTy, Imag);
  return true;
}

/// Try to evaluate the initializer for a variable declaration.
///
/// \param Info Information about the ongoing evaluation.
/// \param E An expression to be used when printing diagnostics.
/// \param VD The variable whose initializer should be obtained.
/// \param Frame The frame in which the variable was created. Must be null
/// if this variable is not local to the evaluation.
/// \param Result Filled in with a pointer to the value of the variable.
static bool evaluateVarDeclInit(EvalInfo &Info, const Expr *E,
                                const VarDecl *VD, CallStackFrame *Frame,
                                APValue *&Result) {

  // If this is a parameter to an active constexpr function call, perform
  // argument substitution.
  if (const ParmVarDecl *PVD = dyn_cast<ParmVarDecl>(VD)) {
    // Assume arguments of a potential constant expression are unknown
    // constant expressions.
    if (Info.checkingPotentialConstantExpression())
      return false;
    if (!Frame || !Frame->Arguments) {
      Info.FFDiag(E, diag::note_invalid_subexpr_in_const_expr);
      return false;
    }
    Result = &Frame->Arguments[PVD->getFunctionScopeIndex()];
    return true;
  }

  // If this is a local variable, dig out its value.
  if (Frame) {
    Result = Frame->getTemporary(VD);
    if (!Result) {
      // Assume variables referenced within a lambda's call operator that were
      // not declared within the call operator are captures and during checking
      // of a potential constant expression, assume they are unknown constant
      // expressions.
      assert(isLambdaCallOperator(Frame->Callee) &&
             (VD->getDeclContext() != Frame->Callee || VD->isInitCapture()) &&
             "missing value for local variable");
      if (Info.checkingPotentialConstantExpression())
        return false;
      // FIXME: implement capture evaluation during constant expr evaluation.
      Info.FFDiag(E->getLocStart(),
                  diag::note_unimplemented_constexpr_lambda_feature_ast)
          << "captures not currently allowed";
      return false;
    }
    return true;
  }

  // Dig out the initializer, and use the declaration which it's attached to.
  const Expr *Init = VD->getAnyInitializer(VD);
  if (!Init || Init->isValueDependent()) {
    // If we're checking a potential constant expression, the variable could be
    // initialized later.
    if (!Info.checkingPotentialConstantExpression())
      Info.FFDiag(E, diag::note_invalid_subexpr_in_const_expr);
    return false;
  }

  // If we're currently evaluating the initializer of this declaration, use
  // that in-flight value.
  if (Info.EvaluatingDecl.dyn_cast<const ValueDecl*>() == VD) {
    Result = Info.EvaluatingDeclValue;
    return true;
  }

  // Never evaluate the initializer of a weak variable. We can't be sure that
  // this is the definition which will be used.
  if (VD->isWeak()) {
    Info.FFDiag(E, diag::note_invalid_subexpr_in_const_expr);
    return false;
  }

  // Check that we can fold the initializer. In C++, we will have already done
  // this in the cases where it matters for conformance.
  SmallVector<PartialDiagnosticAt, 8> Notes;
  if (!VD->evaluateValue(Notes)) {
    Info.FFDiag(E, diag::note_constexpr_var_init_non_constant,
                Notes.size() + 1) << VD;
    Info.Note(VD->getLocation(), diag::note_declared_at);
    Info.addNotes(Notes);
    return false;
  } else if (!VD->checkInitIsICE()) {
    Info.CCEDiag(E, diag::note_constexpr_var_init_non_constant,
                 Notes.size() + 1) << VD;
    Info.Note(VD->getLocation(), diag::note_declared_at);
    Info.addNotes(Notes);
  }

  Result = VD->getEvaluatedValue();
  return true;
}

static bool IsConstNonVolatile(QualType T) {
  Qualifiers Quals = T.getQualifiers();
  return Quals.hasConst() && !Quals.hasVolatile();
}

/// Get the base index of the given base class within an APValue representing
/// the given derived class.
static unsigned getBaseIndex(const CXXRecordDecl *Derived,
                             const CXXRecordDecl *Base) {
  Base = Base->getCanonicalDecl();
  unsigned Index = 0;
  for (CXXRecordDecl::base_class_const_iterator I = Derived->bases_begin(),
         E = Derived->bases_end(); I != E; ++I, ++Index) {
    if (I->getType()->getAsCXXRecordDecl()->getCanonicalDecl() == Base)
      return Index;
  }

  llvm_unreachable("base class missing from derived class's bases list");
}

/// Extract the value of a character from a string literal.
static APSInt extractStringLiteralCharacter(EvalInfo &Info, const Expr *Lit,
                                            uint64_t Index) {
  // FIXME: Support ObjCEncodeExpr, MakeStringConstant
  if (auto PE = dyn_cast<PredefinedExpr>(Lit))
    Lit = PE->getFunctionName();
  const StringLiteral *S = cast<StringLiteral>(Lit);
  const ConstantArrayType *CAT =
      Info.Ctx.getAsConstantArrayType(S->getType());
  assert(CAT && "string literal isn't an array");
  QualType CharType = CAT->getElementType();
  assert(CharType->isIntegerType() && "unexpected character type");

  APSInt Value(S->getCharByteWidth() * Info.Ctx.getCharWidth(),
               CharType->isUnsignedIntegerType());
  if (Index < S->getLength())
    Value = S->getCodeUnit(Index);
  return Value;
}

// Expand a string literal into an array of characters.
static void expandStringLiteral(EvalInfo &Info, const Expr *Lit,
                                APValue &Result) {
  const StringLiteral *S = cast<StringLiteral>(Lit);
  const ConstantArrayType *CAT =
      Info.Ctx.getAsConstantArrayType(S->getType());
  assert(CAT && "string literal isn't an array");
  QualType CharType = CAT->getElementType();
  assert(CharType->isIntegerType() && "unexpected character type");

  unsigned Elts = CAT->getSize().getZExtValue();
  Result = APValue(APValue::UninitArray(),
                   std::min(S->getLength(), Elts), Elts);
  APSInt Value(S->getCharByteWidth() * Info.Ctx.getCharWidth(),
               CharType->isUnsignedIntegerType());
  if (Result.hasArrayFiller())
    Result.getArrayFiller() = APValue(Value);
  for (unsigned I = 0, N = Result.getArrayInitializedElts(); I != N; ++I) {
    Value = S->getCodeUnit(I);
    Result.getArrayInitializedElt(I) = APValue(Value);
  }
}

// Expand an array so that it has more than Index filled elements.
static void expandArray(APValue &Array, unsigned Index) {
  unsigned Size = Array.getArraySize();
  assert(Index < Size);

  // Always at least double the number of elements for which we store a value.
  unsigned OldElts = Array.getArrayInitializedElts();
  unsigned NewElts = std::max(Index+1, OldElts * 2);
  NewElts = std::min(Size, std::max(NewElts, 8u));

  // Copy the data across.
  APValue NewValue(APValue::UninitArray(), NewElts, Size);
  for (unsigned I = 0; I != OldElts; ++I)
    NewValue.getArrayInitializedElt(I).swap(Array.getArrayInitializedElt(I));
  for (unsigned I = OldElts; I != NewElts; ++I)
    NewValue.getArrayInitializedElt(I) = Array.getArrayFiller();
  if (NewValue.hasArrayFiller())
    NewValue.getArrayFiller() = Array.getArrayFiller();
  Array.swap(NewValue);
}

/// Determine whether a type would actually be read by an lvalue-to-rvalue
/// conversion. If it's of class type, we may assume that the copy operation
/// is trivial. Note that this is never true for a union type with fields
/// (because the copy always "reads" the active member) and always true for
/// a non-class type.
static bool isReadByLvalueToRvalueConversion(QualType T) {
  CXXRecordDecl *RD = T->getBaseElementTypeUnsafe()->getAsCXXRecordDecl();
  if (!RD || (RD->isUnion() && !RD->field_empty()))
    return true;
  if (RD->isEmpty())
    return false;

  for (auto *Field : RD->fields())
    if (isReadByLvalueToRvalueConversion(Field->getType()))
      return true;

  for (auto &BaseSpec : RD->bases())
    if (isReadByLvalueToRvalueConversion(BaseSpec.getType()))
      return true;

  return false;
}

/// Diagnose an attempt to read from any unreadable field within the specified
/// type, which might be a class type.
static bool diagnoseUnreadableFields(EvalInfo &Info, const Expr *E,
                                     QualType T) {
  CXXRecordDecl *RD = T->getBaseElementTypeUnsafe()->getAsCXXRecordDecl();
  if (!RD)
    return false;

  if (!RD->hasMutableFields())
    return false;

  for (auto *Field : RD->fields()) {
    // If we're actually going to read this field in some way, then it can't
    // be mutable. If we're in a union, then assigning to a mutable field
    // (even an empty one) can change the active member, so that's not OK.
    // FIXME: Add core issue number for the union case.
    if (Field->isMutable() &&
        (RD->isUnion() || isReadByLvalueToRvalueConversion(Field->getType()))) {
      Info.FFDiag(E, diag::note_constexpr_ltor_mutable, 1) << Field;
      Info.Note(Field->getLocation(), diag::note_declared_at);
      return true;
    }

    if (diagnoseUnreadableFields(Info, E, Field->getType()))
      return true;
  }

  for (auto &BaseSpec : RD->bases())
    if (diagnoseUnreadableFields(Info, E, BaseSpec.getType()))
      return true;

  // All mutable fields were empty, and thus not actually read.
  return false;
}

/// Kinds of access we can perform on an object, for diagnostics.
enum AccessKinds {
  AK_Read,
  AK_Assign,
  AK_Increment,
  AK_Decrement
};

namespace {
/// A handle to a complete object (an object that is not a subobject of
/// another object).
struct CompleteObject {
  /// The value of the complete object.
  APValue *Value;
  /// The type of the complete object.
  QualType Type;

  CompleteObject() : Value(nullptr) {}
  CompleteObject(APValue *Value, QualType Type)
      : Value(Value), Type(Type) {
    assert(Value && "missing value for complete object");
  }

  explicit operator bool() const { return Value; }
};
} // end anonymous namespace

/// Find the designated sub-object of an rvalue.
template<typename SubobjectHandler>
typename SubobjectHandler::result_type
findSubobject(EvalInfo &Info, const Expr *E, const CompleteObject &Obj,
              const SubobjectDesignator &Sub, SubobjectHandler &handler) {
  if (Sub.Invalid)
    // A diagnostic will have already been produced.
    return handler.failed();
  if (Sub.isOnePastTheEnd()) {
    if (Info.getLangOpts().CPlusPlus11)
      Info.FFDiag(E, diag::note_constexpr_access_past_end)
        << handler.AccessKind;
    else
      Info.FFDiag(E);
    return handler.failed();
  }

  APValue *O = Obj.Value;
  QualType ObjType = Obj.Type;
  const FieldDecl *LastField = nullptr;

  // Walk the designator's path to find the subobject.
  for (unsigned I = 0, N = Sub.Entries.size(); /**/; ++I) {
    if (O->isUninit()) {
      if (!Info.checkingPotentialConstantExpression())
        Info.FFDiag(E, diag::note_constexpr_access_uninit)
          << handler.AccessKind;
      return handler.failed();
    }
    if (I == N) {
      // If we are reading an object of class type, there may still be more
      // things we need to check: if there are any mutable subobjects, we
      // cannot perform this read. (This only happens when performing a trivial
      // copy or assignment.)
      if (ObjType->isRecordType() && handler.AccessKind == AK_Read &&
          diagnoseUnreadableFields(Info, E, ObjType))
        return handler.failed();

      if (!handler.found(*O, ObjType))
        return false;

      // If we modified a bit-field, truncate it to the right width.
      if (handler.AccessKind != AK_Read &&
          LastField && LastField->isBitField() &&
          !truncateBitfieldValue(Info, E, *O, LastField))
        return false;

      return true;
    }

    LastField = nullptr;
    if (ObjType->isArrayType()) {
      // Next subobject is an array element.
      const ConstantArrayType *CAT = Info.Ctx.getAsConstantArrayType(ObjType);
      assert(CAT && "vla in literal type?");
      uint64_t Index = Sub.Entries[I].ArrayIndex;
      if (CAT->getSize().ule(Index)) {
        // Note, it should not be possible to form a pointer with a valid
        // designator which points more than one past the end of the array.
        if (Info.getLangOpts().CPlusPlus11)
          Info.FFDiag(E, diag::note_constexpr_access_past_end)
            << handler.AccessKind;
        else
          Info.FFDiag(E);
        return handler.failed();
      }

      ObjType = CAT->getElementType();

      // An array object is represented as either an Array APValue or as an
      // LValue which refers to a string literal.
      if (O->isLValue()) {
        assert(I == N - 1 && "extracting subobject of character?");
        assert(!O->hasLValuePath() || O->getLValuePath().empty());
        if (handler.AccessKind != AK_Read)
          expandStringLiteral(Info, O->getLValueBase().get<const Expr *>(),
                              *O);
        else
          return handler.foundString(*O, ObjType, Index);
      }

      if (O->getArrayInitializedElts() > Index)
        O = &O->getArrayInitializedElt(Index);
      else if (handler.AccessKind != AK_Read) {
        expandArray(*O, Index);
        O = &O->getArrayInitializedElt(Index);
      } else
        O = &O->getArrayFiller();
    } else if (ObjType->isAnyComplexType()) {
      // Next subobject is a complex number.
      uint64_t Index = Sub.Entries[I].ArrayIndex;
      if (Index > 1) {
        if (Info.getLangOpts().CPlusPlus11)
          Info.FFDiag(E, diag::note_constexpr_access_past_end)
            << handler.AccessKind;
        else
          Info.FFDiag(E);
        return handler.failed();
      }

      bool WasConstQualified = ObjType.isConstQualified();
      ObjType = ObjType->castAs<ComplexType>()->getElementType();
      if (WasConstQualified)
        ObjType.addConst();

      assert(I == N - 1 && "extracting subobject of scalar?");
      if (O->isComplexInt()) {
        return handler.found(Index ? O->getComplexIntImag()
                                   : O->getComplexIntReal(), ObjType);
      } else {
        assert(O->isComplexFloat());
        return handler.found(Index ? O->getComplexFloatImag()
                                   : O->getComplexFloatReal(), ObjType);
      }
    } else if (const FieldDecl *Field = getAsField(Sub.Entries[I])) {
      if (Field->isMutable() && handler.AccessKind == AK_Read) {
        Info.FFDiag(E, diag::note_constexpr_ltor_mutable, 1)
          << Field;
        Info.Note(Field->getLocation(), diag::note_declared_at);
        return handler.failed();
      }

      // Next subobject is a class, struct or union field.
      RecordDecl *RD = ObjType->castAs<RecordType>()->getDecl();
      if (RD->isUnion()) {
        const FieldDecl *UnionField = O->getUnionField();
        if (!UnionField ||
            UnionField->getCanonicalDecl() != Field->getCanonicalDecl()) {
          Info.FFDiag(E, diag::note_constexpr_access_inactive_union_member)
            << handler.AccessKind << Field << !UnionField << UnionField;
          return handler.failed();
        }
        O = &O->getUnionValue();
      } else
        O = &O->getStructField(Field->getFieldIndex());

      bool WasConstQualified = ObjType.isConstQualified();
      ObjType = Field->getType();
      if (WasConstQualified && !Field->isMutable())
        ObjType.addConst();

      if (ObjType.isVolatileQualified()) {
        if (Info.getLangOpts().CPlusPlus) {
          // FIXME: Include a description of the path to the volatile
          // subobject.
          Info.FFDiag(E, diag::note_constexpr_access_volatile_obj, 1)
            << handler.AccessKind << 2 << Field;
          Info.Note(Field->getLocation(), diag::note_declared_at);
        } else {
          Info.FFDiag(E, diag::note_invalid_subexpr_in_const_expr);
        }
        return handler.failed();
      }

      LastField = Field;
    } else {
      // Next subobject is a base class.
      const CXXRecordDecl *Derived = ObjType->getAsCXXRecordDecl();
      const CXXRecordDecl *Base = getAsBaseClass(Sub.Entries[I]);
      O = &O->getStructBase(getBaseIndex(Derived, Base));

      bool WasConstQualified = ObjType.isConstQualified();
      ObjType = Info.Ctx.getRecordType(Base);
      if (WasConstQualified)
        ObjType.addConst();
    }
  }
}

namespace {
struct ExtractSubobjectHandler {
  EvalInfo &Info;
  APValue &Result;

  static const AccessKinds AccessKind = AK_Read;

  typedef bool result_type;
  bool failed() { return false; }
  bool found(APValue &Subobj, QualType SubobjType) {
    Result = Subobj;
    return true;
  }
  bool found(APSInt &Value, QualType SubobjType) {
    Result = APValue(Value);
    return true;
  }
  bool found(APFloat &Value, QualType SubobjType) {
    Result = APValue(Value);
    return true;
  }
  bool foundString(APValue &Subobj, QualType SubobjType, uint64_t Character) {
    Result = APValue(extractStringLiteralCharacter(
        Info, Subobj.getLValueBase().get<const Expr *>(), Character));
    return true;
  }
};
} // end anonymous namespace

const AccessKinds ExtractSubobjectHandler::AccessKind;

/// Extract the designated sub-object of an rvalue.
static bool extractSubobject(EvalInfo &Info, const Expr *E,
                             const CompleteObject &Obj,
                             const SubobjectDesignator &Sub,
                             APValue &Result) {
  ExtractSubobjectHandler Handler = { Info, Result };
  return findSubobject(Info, E, Obj, Sub, Handler);
}

namespace {
struct ModifySubobjectHandler {
  EvalInfo &Info;
  APValue &NewVal;
  const Expr *E;

  typedef bool result_type;
  static const AccessKinds AccessKind = AK_Assign;

  bool checkConst(QualType QT) {
    // Assigning to a const object has undefined behavior.
    if (QT.isConstQualified()) {
      Info.FFDiag(E, diag::note_constexpr_modify_const_type) << QT;
      return false;
    }
    return true;
  }

  bool failed() { return false; }
  bool found(APValue &Subobj, QualType SubobjType) {
    if (!checkConst(SubobjType))
      return false;
    // We've been given ownership of NewVal, so just swap it in.
    Subobj.swap(NewVal);
    return true;
  }
  bool found(APSInt &Value, QualType SubobjType) {
    if (!checkConst(SubobjType))
      return false;
    if (!NewVal.isInt()) {
      // Maybe trying to write a cast pointer value into a complex?
      Info.FFDiag(E);
      return false;
    }
    Value = NewVal.getInt();
    return true;
  }
  bool found(APFloat &Value, QualType SubobjType) {
    if (!checkConst(SubobjType))
      return false;
    Value = NewVal.getFloat();
    return true;
  }
  bool foundString(APValue &Subobj, QualType SubobjType, uint64_t Character) {
    llvm_unreachable("shouldn't encounter string elements with ExpandArrays");
  }
};
} // end anonymous namespace

const AccessKinds ModifySubobjectHandler::AccessKind;

/// Update the designated sub-object of an rvalue to the given value.
static bool modifySubobject(EvalInfo &Info, const Expr *E,
                            const CompleteObject &Obj,
                            const SubobjectDesignator &Sub,
                            APValue &NewVal) {
  ModifySubobjectHandler Handler = { Info, NewVal, E };
  return findSubobject(Info, E, Obj, Sub, Handler);
}

/// Find the position where two subobject designators diverge, or equivalently
/// the length of the common initial subsequence.
static unsigned FindDesignatorMismatch(QualType ObjType,
                                       const SubobjectDesignator &A,
                                       const SubobjectDesignator &B,
                                       bool &WasArrayIndex) {
  unsigned I = 0, N = std::min(A.Entries.size(), B.Entries.size());
  for (/**/; I != N; ++I) {
    if (!ObjType.isNull() &&
        (ObjType->isArrayType() || ObjType->isAnyComplexType())) {
      // Next subobject is an array element.
      if (A.Entries[I].ArrayIndex != B.Entries[I].ArrayIndex) {
        WasArrayIndex = true;
        return I;
      }
      if (ObjType->isAnyComplexType())
        ObjType = ObjType->castAs<ComplexType>()->getElementType();
      else
        ObjType = ObjType->castAsArrayTypeUnsafe()->getElementType();
    } else {
      if (A.Entries[I].BaseOrMember != B.Entries[I].BaseOrMember) {
        WasArrayIndex = false;
        return I;
      }
      if (const FieldDecl *FD = getAsField(A.Entries[I]))
        // Next subobject is a field.
        ObjType = FD->getType();
      else
        // Next subobject is a base class.
        ObjType = QualType();
    }
  }
  WasArrayIndex = false;
  return I;
}

/// Determine whether the given subobject designators refer to elements of the
/// same array object.
static bool AreElementsOfSameArray(QualType ObjType,
                                   const SubobjectDesignator &A,
                                   const SubobjectDesignator &B) {
  if (A.Entries.size() != B.Entries.size())
    return false;

  bool IsArray = A.MostDerivedIsArrayElement;
  if (IsArray && A.MostDerivedPathLength != A.Entries.size())
    // A is a subobject of the array element.
    return false;

  // If A (and B) designates an array element, the last entry will be the array
  // index. That doesn't have to match. Otherwise, we're in the 'implicit array
  // of length 1' case, and the entire path must match.
  bool WasArrayIndex;
  unsigned CommonLength = FindDesignatorMismatch(ObjType, A, B, WasArrayIndex);
  return CommonLength >= A.Entries.size() - IsArray;
}

/// Find the complete object to which an LValue refers.
static CompleteObject findCompleteObject(EvalInfo &Info, const Expr *E,
                                         AccessKinds AK, const LValue &LVal,
                                         QualType LValType) {
  if (!LVal.Base) {
    Info.FFDiag(E, diag::note_constexpr_access_null) << AK;
    return CompleteObject();
  }

  CallStackFrame *Frame = nullptr;
  if (LVal.CallIndex) {
    Frame = Info.getCallFrame(LVal.CallIndex);
    if (!Frame) {
      Info.FFDiag(E, diag::note_constexpr_lifetime_ended, 1)
        << AK << LVal.Base.is<const ValueDecl*>();
      NoteLValueLocation(Info, LVal.Base);
      return CompleteObject();
    }
  }

  // C++11 DR1311: An lvalue-to-rvalue conversion on a volatile-qualified type
  // is not a constant expression (even if the object is non-volatile). We also
  // apply this rule to C++98, in order to conform to the expected 'volatile'
  // semantics.
  if (LValType.isVolatileQualified()) {
    if (Info.getLangOpts().CPlusPlus)
      Info.FFDiag(E, diag::note_constexpr_access_volatile_type)
        << AK << LValType;
    else
      Info.FFDiag(E);
    return CompleteObject();
  }

  // Compute value storage location and type of base object.
  APValue *BaseVal = nullptr;
  QualType BaseType = getType(LVal.Base);

  if (const ValueDecl *D = LVal.Base.dyn_cast<const ValueDecl*>()) {
    // In C++98, const, non-volatile integers initialized with ICEs are ICEs.
    // In C++11, constexpr, non-volatile variables initialized with constant
    // expressions are constant expressions too. Inside constexpr functions,
    // parameters are constant expressions even if they're non-const.
    // In C++1y, objects local to a constant expression (those with a Frame)
    // are both readable and writable inside constant expressions.
    // In C, such things can also be folded, although they are not ICEs.
    const VarDecl *VD = dyn_cast<VarDecl>(D);
    if (VD) {
      if (const VarDecl *VDef = VD->getDefinition(Info.Ctx))
        VD = VDef;
    }
    if (!VD || VD->isInvalidDecl()) {
      Info.FFDiag(E);
      return CompleteObject();
    }

    // Accesses of volatile-qualified objects are not allowed.
    if (BaseType.isVolatileQualified()) {
      if (Info.getLangOpts().CPlusPlus) {
        Info.FFDiag(E, diag::note_constexpr_access_volatile_obj, 1)
          << AK << 1 << VD;
        Info.Note(VD->getLocation(), diag::note_declared_at);
      } else {
        Info.FFDiag(E);
      }
      return CompleteObject();
    }

    // Unless we're looking at a local variable or argument in a constexpr
    // call, the variable we're reading must be const.
    if (!Frame) {
      if (Info.getLangOpts().CPlusPlus14 &&
          VD == Info.EvaluatingDecl.dyn_cast<const ValueDecl *>()) {
        // OK, we can read and modify an object if we're in the process of
        // evaluating its initializer, because its lifetime began in this
        // evaluation.
      } else if (AK != AK_Read) {
        // All the remaining cases only permit reading.
        Info.FFDiag(E, diag::note_constexpr_modify_global);
        return CompleteObject();
      } else if (VD->isConstexpr()) {
        // OK, we can read this variable.
      } else if (BaseType->isIntegralOrEnumerationType()) {
        // In OpenCL if a variable is in constant address space it is a const
        // value.
        if (!(BaseType.isConstQualified() ||
              (Info.getLangOpts().OpenCL &&
               BaseType.getAddressSpace() == LangAS::opencl_constant))) {
          if (Info.getLangOpts().CPlusPlus) {
            Info.FFDiag(E, diag::note_constexpr_ltor_non_const_int, 1) << VD;
            Info.Note(VD->getLocation(), diag::note_declared_at);
          } else {
            Info.FFDiag(E);
          }
          return CompleteObject();
        }
      } else if (BaseType->isFloatingType() && BaseType.isConstQualified()) {
        // We support folding of const floating-point types, in order to make
        // static const data members of such types (supported as an extension)
        // more useful.
        if (Info.getLangOpts().CPlusPlus11) {
          Info.CCEDiag(E, diag::note_constexpr_ltor_non_constexpr, 1) << VD;
          Info.Note(VD->getLocation(), diag::note_declared_at);
        } else {
          Info.CCEDiag(E);
        }
      } else if (BaseType.isConstQualified() && VD->hasDefinition(Info.Ctx)) {
        Info.CCEDiag(E, diag::note_constexpr_ltor_non_constexpr) << VD;
        // Keep evaluating to see what we can do.
      } else {
        // FIXME: Allow folding of values of any literal type in all languages.
        if (Info.checkingPotentialConstantExpression() &&
            VD->getType().isConstQualified() && !VD->hasDefinition(Info.Ctx)) {
          // The definition of this variable could be constexpr. We can't
          // access it right now, but may be able to in future.
        } else if (Info.getLangOpts().CPlusPlus11) {
          Info.FFDiag(E, diag::note_constexpr_ltor_non_constexpr, 1) << VD;
          Info.Note(VD->getLocation(), diag::note_declared_at);
        } else {
          Info.FFDiag(E);
        }
        return CompleteObject();
      }
    }

    if (!evaluateVarDeclInit(Info, E, VD, Frame, BaseVal))
      return CompleteObject();
  } else {
    const Expr *Base = LVal.Base.dyn_cast<const Expr*>();

    if (!Frame) {
      if (const MaterializeTemporaryExpr *MTE =
              dyn_cast<MaterializeTemporaryExpr>(Base)) {
        assert(MTE->getStorageDuration() == SD_Static &&
               "should have a frame for a non-global materialized temporary");

        // Per C++1y [expr.const]p2:
        //  an lvalue-to-rvalue conversion [is not allowed unless it applies to]
        //   - a [...] glvalue of integral or enumeration type that refers to
        //     a non-volatile const object [...]
        //   [...]
        //   - a [...] glvalue of literal type that refers to a non-volatile
        //     object whose lifetime began within the evaluation of e.
        //
        // C++11 misses the 'began within the evaluation of e' check and
        // instead allows all temporaries, including things like:
        //   int &&r = 1;
        //   int x = ++r;
        //   constexpr int k = r;
        // Therefore we use the C++1y rules in C++11 too.
        const ValueDecl *VD = Info.EvaluatingDecl.dyn_cast<const ValueDecl*>();
        const ValueDecl *ED = MTE->getExtendingDecl();
        if (!(BaseType.isConstQualified() &&
              BaseType->isIntegralOrEnumerationType()) &&
            !(VD && VD->getCanonicalDecl() == ED->getCanonicalDecl())) {
          Info.FFDiag(E, diag::note_constexpr_access_static_temporary, 1) << AK;
          Info.Note(MTE->getExprLoc(), diag::note_constexpr_temporary_here);
          return CompleteObject();
        }

        BaseVal = Info.Ctx.getMaterializedTemporaryValue(MTE, false);
        assert(BaseVal && "got reference to unevaluated temporary");
      } else {
        Info.FFDiag(E);
        return CompleteObject();
      }
    } else {
      BaseVal = Frame->getTemporary(Base);
      assert(BaseVal && "missing value for temporary");
    }

    // Volatile temporary objects cannot be accessed in constant expressions.
    if (BaseType.isVolatileQualified()) {
      if (Info.getLangOpts().CPlusPlus) {
        Info.FFDiag(E, diag::note_constexpr_access_volatile_obj, 1)
          << AK << 0;
        Info.Note(Base->getExprLoc(), diag::note_constexpr_temporary_here);
      } else {
        Info.FFDiag(E);
      }
      return CompleteObject();
    }
  }

  // During the construction of an object, it is not yet 'const'.
  // FIXME: We don't set up EvaluatingDecl for local variables or temporaries,
  // and this doesn't do quite the right thing for const subobjects of the
  // object under construction.
  if (LVal.getLValueBase() == Info.EvaluatingDecl) {
    BaseType = Info.Ctx.getCanonicalType(BaseType);
    BaseType.removeLocalConst();
  }

  // In C++1y, we can't safely access any mutable state when we might be
  // evaluating after an unmodeled side effect.
  //
  // FIXME: Not all local state is mutable. Allow local constant subobjects
  // to be read here (but take care with 'mutable' fields).
  if ((Frame && Info.getLangOpts().CPlusPlus14 &&
       Info.EvalStatus.HasSideEffects) ||
      (AK != AK_Read && Info.IsSpeculativelyEvaluating))
    return CompleteObject();

  return CompleteObject(BaseVal, BaseType);
}
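// Editor's illustration (not part of the original source): findCompleteObject
// is where accesses to volatile objects and out-of-lifetime temporaries are
// rejected. For instance:
//
//   volatile const int v = 1;
//   constexpr int k = v;  // not a constant expression: volatile read
//
// fails via the note_constexpr_access_volatile_type check near the top of the
// function.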
/// \brief Perform an lvalue-to-rvalue conversion on the given glvalue. This
/// can also be used for 'lvalue-to-lvalue' conversions for looking up the
/// glvalue referred to by an entity of reference type.
///
/// \param Info - Information about the ongoing evaluation.
/// \param Conv - The expression for which we are performing the conversion.
///               Used for diagnostics.
/// \param Type - The type of the glvalue (before stripping cv-qualifiers in
///               the case of a non-class type).
/// \param LVal - The glvalue on which we are attempting to perform this
///               action.
/// \param RVal - The produced value will be placed here.
static bool handleLValueToRValueConversion(EvalInfo &Info, const Expr *Conv,
                                           QualType Type, const LValue &LVal,
                                           APValue &RVal) {
  if (LVal.Designator.Invalid)
    return false;

  // Check for special cases where there is no existing APValue to look at.
  const Expr *Base = LVal.Base.dyn_cast<const Expr*>();
  if (Base && !LVal.CallIndex && !Type.isVolatileQualified()) {
    if (const CompoundLiteralExpr *CLE = dyn_cast<CompoundLiteralExpr>(Base)) {
      // In C99, a CompoundLiteralExpr is an lvalue, and we defer evaluating
      // the initializer until now for such expressions. Such an expression
      // can't be an ICE in C, so this only matters for fold.
      if (Type.isVolatileQualified()) {
        Info.FFDiag(Conv);
        return false;
      }
      APValue Lit;
      if (!Evaluate(Lit, Info, CLE->getInitializer()))
        return false;
      CompleteObject LitObj(&Lit, Base->getType());
      return extractSubobject(Info, Conv, LitObj, LVal.Designator, RVal);
    } else if (isa<StringLiteral>(Base) || isa<PredefinedExpr>(Base)) {
      // We represent a string literal array as an lvalue pointing at the
      // corresponding expression, rather than building an array of chars.
      // FIXME: Support ObjCEncodeExpr, MakeStringConstant
      APValue Str(Base, CharUnits::Zero(), APValue::NoLValuePath(), 0);
      CompleteObject StrObj(&Str, Base->getType());
      return extractSubobject(Info, Conv, StrObj, LVal.Designator, RVal);
    }
  }

  CompleteObject Obj = findCompleteObject(Info, Conv, AK_Read, LVal, Type);
  return Obj && extractSubobject(Info, Conv, Obj, LVal.Designator, RVal);
}

/// Perform an assignment of Val to LVal. Takes ownership of Val.
static bool handleAssignment(EvalInfo &Info, const Expr *E, const LValue &LVal,
                             QualType LValType, APValue &Val) {
  if (LVal.Designator.Invalid)
    return false;

  if (!Info.getLangOpts().CPlusPlus14) {
    Info.FFDiag(E);
    return false;
  }

  CompleteObject Obj = findCompleteObject(Info, E, AK_Assign, LVal, LValType);
  return Obj && modifySubobject(Info, E, Obj, LVal.Designator, Val);
}

static bool isOverflowingIntegerType(ASTContext &Ctx, QualType T) {
  return T->isSignedIntegerType() &&
         Ctx.getIntWidth(T) >= Ctx.getIntWidth(Ctx.IntTy);
}
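// Editor's illustration (not part of the original source): handleAssignment
// implements the C++1y (C++14) relaxation that permits mutation of objects
// whose lifetime began within the evaluation:
//
//   constexpr int f() { int x = 1; x = 5; return x; }
//   static_assert(f() == 5, "");  // OK in C++14; rejected in C++11 above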
namespace {
struct CompoundAssignSubobjectHandler {
  EvalInfo &Info;
  const Expr *E;
  QualType PromotedLHSType;
  BinaryOperatorKind Opcode;
  const APValue &RHS;

  static const AccessKinds AccessKind = AK_Assign;

  typedef bool result_type;

  bool checkConst(QualType QT) {
    // Assigning to a const object has undefined behavior.
    if (QT.isConstQualified()) {
      Info.FFDiag(E, diag::note_constexpr_modify_const_type) << QT;
      return false;
    }
    return true;
  }

  bool failed() { return false; }
  bool found(APValue &Subobj, QualType SubobjType) {
    switch (Subobj.getKind()) {
    case APValue::Int:
      return found(Subobj.getInt(), SubobjType);
    case APValue::Float:
      return found(Subobj.getFloat(), SubobjType);
    case APValue::ComplexInt:
    case APValue::ComplexFloat:
      // FIXME: Implement complex compound assignment.
      Info.FFDiag(E);
      return false;
    case APValue::LValue:
      return foundPointer(Subobj, SubobjType);
    default:
      // FIXME: can this happen?
      Info.FFDiag(E);
      return false;
    }
  }
  bool found(APSInt &Value, QualType SubobjType) {
    if (!checkConst(SubobjType))
      return false;

    if (!SubobjType->isIntegerType() || !RHS.isInt()) {
      // We don't support compound assignment on integer-cast-to-pointer
      // values.
      Info.FFDiag(E);
      return false;
    }

    APSInt LHS = HandleIntToIntCast(Info, E, PromotedLHSType,
                                    SubobjType, Value);
    if (!handleIntIntBinOp(Info, E, LHS, Opcode, RHS.getInt(), LHS))
      return false;
    Value = HandleIntToIntCast(Info, E, SubobjType, PromotedLHSType, LHS);
    return true;
  }
  bool found(APFloat &Value, QualType SubobjType) {
    return checkConst(SubobjType) &&
           HandleFloatToFloatCast(Info, E, SubobjType, PromotedLHSType,
                                  Value) &&
           handleFloatFloatBinOp(Info, E, Value, Opcode, RHS.getFloat()) &&
           HandleFloatToFloatCast(Info, E, PromotedLHSType, SubobjType, Value);
  }
  bool foundPointer(APValue &Subobj, QualType SubobjType) {
    if (!checkConst(SubobjType))
      return false;

    QualType PointeeType;
    if (const PointerType *PT = SubobjType->getAs<PointerType>())
      PointeeType = PT->getPointeeType();

    if (PointeeType.isNull() || !RHS.isInt() ||
        (Opcode != BO_Add && Opcode != BO_Sub)) {
      Info.FFDiag(E);
      return false;
    }

    int64_t Offset = getExtValue(RHS.getInt());
    if (Opcode == BO_Sub)
      Offset = -Offset;

    LValue LVal;
    LVal.setFrom(Info.Ctx, Subobj);
    if (!HandleLValueArrayAdjustment(Info, E, LVal, PointeeType, Offset))
      return false;
    LVal.moveInto(Subobj);
    return true;
  }
  bool foundString(APValue &Subobj, QualType SubobjType, uint64_t Character) {
    llvm_unreachable("shouldn't encounter string elements here");
  }
};
} // end anonymous namespace

const AccessKinds CompoundAssignSubobjectHandler::AccessKind;

/// Perform a compound assignment of LVal <op>= RVal.
static bool handleCompoundAssignment(
    EvalInfo &Info, const Expr *E,
    const LValue &LVal, QualType LValType, QualType PromotedLValType,
    BinaryOperatorKind Opcode, const APValue &RVal) {
  if (LVal.Designator.Invalid)
    return false;

  if (!Info.getLangOpts().CPlusPlus14) {
    Info.FFDiag(E);
    return false;
  }

  CompleteObject Obj = findCompleteObject(Info, E, AK_Assign, LVal, LValType);
  CompoundAssignSubobjectHandler Handler = { Info, E, PromotedLValType, Opcode,
                                             RVal };
  return Obj && findSubobject(Info, E, Obj, LVal.Designator, Handler);
}

namespace {
struct IncDecSubobjectHandler {
  EvalInfo &Info;
  const Expr *E;
  AccessKinds AccessKind;
  APValue *Old;

  typedef bool result_type;

  bool checkConst(QualType QT) {
    // Assigning to a const object has undefined behavior.
    if (QT.isConstQualified()) {
      Info.FFDiag(E, diag::note_constexpr_modify_const_type) << QT;
      return false;
    }
    return true;
  }

  bool failed() { return false; }
  bool found(APValue &Subobj, QualType SubobjType) {
    // Stash the old value. Also clear Old, so we don't clobber it later
    // if we're post-incrementing a complex.
    if (Old) {
      *Old = Subobj;
      Old = nullptr;
    }

    switch (Subobj.getKind()) {
    case APValue::Int:
      return found(Subobj.getInt(), SubobjType);
    case APValue::Float:
      return found(Subobj.getFloat(), SubobjType);
    case APValue::ComplexInt:
      return found(Subobj.getComplexIntReal(),
                   SubobjType->castAs<ComplexType>()->getElementType()
                     .withCVRQualifiers(SubobjType.getCVRQualifiers()));
    case APValue::ComplexFloat:
      return found(Subobj.getComplexFloatReal(),
                   SubobjType->castAs<ComplexType>()->getElementType()
                     .withCVRQualifiers(SubobjType.getCVRQualifiers()));
    case APValue::LValue:
      return foundPointer(Subobj, SubobjType);
    default:
      // FIXME: can this happen?
      Info.FFDiag(E);
      return false;
    }
  }
  bool found(APSInt &Value, QualType SubobjType) {
    if (!checkConst(SubobjType))
      return false;

    if (!SubobjType->isIntegerType()) {
      // We don't support increment / decrement on integer-cast-to-pointer
      // values.
      Info.FFDiag(E);
      return false;
    }

    if (Old) *Old = APValue(Value);

    // bool arithmetic promotes to int, and the conversion back to bool
    // doesn't reduce mod 2^n, so special-case it.
    if (SubobjType->isBooleanType()) {
      if (AccessKind == AK_Increment)
        Value = 1;
      else
        Value = !Value;
      return true;
    }

    bool WasNegative = Value.isNegative();
    if (AccessKind == AK_Increment) {
      ++Value;

      if (!WasNegative && Value.isNegative() &&
          isOverflowingIntegerType(Info.Ctx, SubobjType)) {
        APSInt ActualValue(Value, /*IsUnsigned*/true);
        return HandleOverflow(Info, E, ActualValue, SubobjType);
      }
    } else {
      --Value;

      if (WasNegative && !Value.isNegative() &&
          isOverflowingIntegerType(Info.Ctx, SubobjType)) {
        unsigned BitWidth = Value.getBitWidth();
        APSInt ActualValue(Value.sext(BitWidth + 1), /*IsUnsigned*/false);
        ActualValue.setBit(BitWidth);
        return HandleOverflow(Info, E, ActualValue, SubobjType);
      }
    }
    return true;
  }
  bool found(APFloat &Value, QualType SubobjType) {
    if (!checkConst(SubobjType))
      return false;

    if (Old) *Old = APValue(Value);

    APFloat One(Value.getSemantics(), 1);
    if (AccessKind == AK_Increment)
      Value.add(One, APFloat::rmNearestTiesToEven);
    else
      Value.subtract(One, APFloat::rmNearestTiesToEven);
    return true;
  }
  bool foundPointer(APValue &Subobj, QualType SubobjType) {
    if (!checkConst(SubobjType))
      return false;

    QualType PointeeType;
    if (const PointerType *PT = SubobjType->getAs<PointerType>())
      PointeeType = PT->getPointeeType();
    else {
      Info.FFDiag(E);
      return false;
    }

    LValue LVal;
    LVal.setFrom(Info.Ctx, Subobj);
    if (!HandleLValueArrayAdjustment(Info, E, LVal, PointeeType,
                                     AccessKind == AK_Increment ? 1 : -1))
      return false;
    LVal.moveInto(Subobj);
    return true;
  }
  bool foundString(APValue &Subobj, QualType SubobjType, uint64_t Character) {
    llvm_unreachable("shouldn't encounter string elements here");
  }
};
} // end anonymous namespace

/// Perform an increment or decrement on LVal.
static bool handleIncDec(EvalInfo &Info, const Expr *E, const LValue &LVal,
                         QualType LValType, bool IsIncrement, APValue *Old) {
  if (LVal.Designator.Invalid)
    return false;

  if (!Info.getLangOpts().CPlusPlus14) {
    Info.FFDiag(E);
    return false;
  }

  AccessKinds AK = IsIncrement ? AK_Increment : AK_Decrement;
  CompleteObject Obj = findCompleteObject(Info, E, AK, LVal, LValType);
  IncDecSubobjectHandler Handler = { Info, E, AK, Old };
  return Obj && findSubobject(Info, E, Obj, LVal.Designator, Handler);
}

/// Build an lvalue for the object argument of a member function call.
static bool EvaluateObjectArgument(EvalInfo &Info, const Expr *Object,
                                   LValue &This) {
  if (Object->getType()->isPointerType())
    return EvaluatePointer(Object, This, Info);

  if (Object->isGLValue())
    return EvaluateLValue(Object, This, Info);

  if (Object->getType()->isLiteralType(Info.Ctx))
    return EvaluateTemporary(Object, This, Info);

  Info.FFDiag(Object, diag::note_constexpr_nonliteral) << Object->getType();
  return false;
}
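// Editor's illustration (not part of the original source): the boolean
// special case in IncDecSubobjectHandler matches the language rule that '++'
// on bool sets it to true rather than wrapping the promoted int mod 2^n:
//
//   constexpr bool f() { bool b = true; ++b; return b; }  // b stays true
//
// ('++' on bool was deprecated but still well-defined when this code was
// written.)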
/// HandleMemberPointerAccess - Evaluate a member access operation and build an
/// lvalue referring to the result.
///
/// \param Info - Information about the ongoing evaluation.
/// \param LV - An lvalue referring to the base of the member pointer.
/// \param RHS - The member pointer expression.
/// \param IncludeMember - Specifies whether the member itself is included in
///        the resulting LValue subobject designator. This is not possible when
///        creating a bound member function.
/// \return The field or method declaration to which the member pointer refers,
///         or 0 if evaluation fails.
static const ValueDecl *HandleMemberPointerAccess(EvalInfo &Info,
                                                  QualType LVType,
                                                  LValue &LV,
                                                  const Expr *RHS,
                                                  bool IncludeMember = true) {
  MemberPtr MemPtr;
  if (!EvaluateMemberPointer(RHS, MemPtr, Info))
    return nullptr;

  // C++11 [expr.mptr.oper]p6: If the second operand is the null pointer to
  // member value, the behavior is undefined.
  if (!MemPtr.getDecl()) {
    // FIXME: Specific diagnostic.
    Info.FFDiag(RHS);
    return nullptr;
  }

  if (MemPtr.isDerivedMember()) {
    // This is a member of some derived class. Truncate LV appropriately.
    // The end of the derived-to-base path for the base object must match the
    // derived-to-base path for the member pointer.
    if (LV.Designator.MostDerivedPathLength + MemPtr.Path.size() >
        LV.Designator.Entries.size()) {
      Info.FFDiag(RHS);
      return nullptr;
    }
    unsigned PathLengthToMember =
        LV.Designator.Entries.size() - MemPtr.Path.size();
    for (unsigned I = 0, N = MemPtr.Path.size(); I != N; ++I) {
      const CXXRecordDecl *LVDecl = getAsBaseClass(
          LV.Designator.Entries[PathLengthToMember + I]);
      const CXXRecordDecl *MPDecl = MemPtr.Path[I];
      if (LVDecl->getCanonicalDecl() != MPDecl->getCanonicalDecl()) {
        Info.FFDiag(RHS);
        return nullptr;
      }
    }

    // Truncate the lvalue to the appropriate derived class.
    if (!CastToDerivedClass(Info, RHS, LV, MemPtr.getContainingRecord(),
                            PathLengthToMember))
      return nullptr;
  } else if (!MemPtr.Path.empty()) {
    // Extend the LValue path with the member pointer's path.
    LV.Designator.Entries.reserve(LV.Designator.Entries.size() +
                                  MemPtr.Path.size() + IncludeMember);

    // Walk down to the appropriate base class.
    if (const PointerType *PT = LVType->getAs<PointerType>())
      LVType = PT->getPointeeType();
    const CXXRecordDecl *RD = LVType->getAsCXXRecordDecl();
    assert(RD && "member pointer access on non-class-type expression");
    // The first class in the path is that of the lvalue.
    for (unsigned I = 1, N = MemPtr.Path.size(); I != N; ++I) {
      const CXXRecordDecl *Base = MemPtr.Path[N - I - 1];
      if (!HandleLValueDirectBase(Info, RHS, LV, RD, Base))
        return nullptr;
      RD = Base;
    }
    // Finally cast to the class containing the member.
    if (!HandleLValueDirectBase(Info, RHS, LV, RD,
                                MemPtr.getContainingRecord()))
      return nullptr;
  }

  // Add the member. Note that we cannot build bound member functions here.
  if (IncludeMember) {
    if (const FieldDecl *FD = dyn_cast<FieldDecl>(MemPtr.getDecl())) {
      if (!HandleLValueMember(Info, RHS, LV, FD))
        return nullptr;
    } else if (const IndirectFieldDecl *IFD =
                 dyn_cast<IndirectFieldDecl>(MemPtr.getDecl())) {
      if (!HandleLValueIndirectMember(Info, RHS, LV, IFD))
        return nullptr;
    } else {
      llvm_unreachable("can't construct reference to bound member function");
    }
  }

  return MemPtr.getDecl();
}

static const ValueDecl *HandleMemberPointerAccess(EvalInfo &Info,
                                                  const BinaryOperator *BO,
                                                  LValue &LV,
                                                  bool IncludeMember = true) {
  assert(BO->getOpcode() == BO_PtrMemD || BO->getOpcode() == BO_PtrMemI);

  if (!EvaluateObjectArgument(Info, BO->getLHS(), LV)) {
    if (Info.noteFailure()) {
      MemberPtr MemPtr;
      EvaluateMemberPointer(BO->getRHS(), MemPtr, Info);
    }
    return nullptr;
  }

  return HandleMemberPointerAccess(Info, BO->getLHS()->getType(), LV,
                                   BO->getRHS(), IncludeMember);
}
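// Editor's illustration (not part of the original source): the '.*' / '->*'
// handling above supports constant evaluation of member-pointer accesses such
// as:
//
//   struct S { int x, y; };
//   constexpr S s = {1, 2};
//   constexpr int S::*pm = &S::y;
//   static_assert(s.*pm == 2, "");  // evaluated via HandleMemberPointerAccess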
/// HandleBaseToDerivedCast - Apply the given base-to-derived cast operation on
/// the provided lvalue, which currently refers to the base object.
static bool HandleBaseToDerivedCast(EvalInfo &Info, const CastExpr *E,
                                    LValue &Result) {
  SubobjectDesignator &D = Result.Designator;
  if (D.Invalid || !Result.checkNullPointer(Info, E, CSK_Derived))
    return false;

  QualType TargetQT = E->getType();
  if (const PointerType *PT = TargetQT->getAs<PointerType>())
    TargetQT = PT->getPointeeType();

  // Check this cast lands within the final derived-to-base subobject path.
  if (D.MostDerivedPathLength + E->path_size() > D.Entries.size()) {
    Info.CCEDiag(E, diag::note_constexpr_invalid_downcast)
      << D.MostDerivedType << TargetQT;
    return false;
  }

  // Check the type of the final cast. We don't need to check the path,
  // since a cast can only be formed if the path is unique.
  unsigned NewEntriesSize = D.Entries.size() - E->path_size();
  const CXXRecordDecl *TargetType = TargetQT->getAsCXXRecordDecl();
  const CXXRecordDecl *FinalType;
  if (NewEntriesSize == D.MostDerivedPathLength)
    FinalType = D.MostDerivedType->getAsCXXRecordDecl();
  else
    FinalType = getAsBaseClass(D.Entries[NewEntriesSize - 1]);
  if (FinalType->getCanonicalDecl() != TargetType->getCanonicalDecl()) {
    Info.CCEDiag(E, diag::note_constexpr_invalid_downcast)
      << D.MostDerivedType << TargetQT;
    return false;
  }

  // Truncate the lvalue to the appropriate derived class.
  return CastToDerivedClass(Info, E, Result, TargetType, NewEntriesSize);
}

namespace {
enum EvalStmtResult {
  /// Evaluation failed.
  ESR_Failed,
  /// Hit a 'return' statement.
  ESR_Returned,
  /// Evaluation succeeded.
  ESR_Succeeded,
  /// Hit a 'continue' statement.
  ESR_Continue,
  /// Hit a 'break' statement.
  ESR_Break,
  /// Still scanning for 'case' or 'default' statement.
  ESR_CaseNotFound
};
}

static bool EvaluateVarDecl(EvalInfo &Info, const VarDecl *VD) {
  // We don't need to evaluate the initializer for a static local.
  if (!VD->hasLocalStorage())
    return true;

  LValue Result;
  Result.set(VD, Info.CurrentCall->Index);
  APValue &Val = Info.CurrentCall->createTemporary(VD, true);

  const Expr *InitE = VD->getInit();
  if (!InitE) {
    Info.FFDiag(VD->getLocStart(), diag::note_constexpr_uninitialized)
      << false << VD->getType();
    Val = APValue();
    return false;
  }

  if (InitE->isValueDependent())
    return false;

  if (!EvaluateInPlace(Val, Info, Result, InitE)) {
    // Wipe out any partially-computed value, to allow tracking that this
    // evaluation failed.
    Val = APValue();
    return false;
  }

  return true;
}

static bool EvaluateDecl(EvalInfo &Info, const Decl *D) {
  bool OK = true;

  if (const VarDecl *VD = dyn_cast<VarDecl>(D))
    OK &= EvaluateVarDecl(Info, VD);

  if (const DecompositionDecl *DD = dyn_cast<DecompositionDecl>(D))
    for (auto *BD : DD->bindings())
      if (auto *VD = BD->getHoldingVar())
        OK &= EvaluateDecl(Info, VD);

  return OK;
}

/// Evaluate a condition (either a variable declaration or an expression).
static bool EvaluateCond(EvalInfo &Info, const VarDecl *CondDecl,
                         const Expr *Cond, bool &Result) {
  FullExpressionRAII Scope(Info);
  if (CondDecl && !EvaluateDecl(Info, CondDecl))
    return false;
  return EvaluateAsBooleanCondition(Cond, Result, Info);
}

namespace {
/// \brief A location where the result (returned value) of evaluating a
/// statement should be stored.
struct StmtResult {
  /// The APValue that should be filled in with the returned value.
  APValue &Value;
  /// The location containing the result, if any (used to support RVO).
  const LValue *Slot;
};
}

static EvalStmtResult EvaluateStmt(StmtResult &Result, EvalInfo &Info,
                                   const Stmt *S,
                                   const SwitchCase *SC = nullptr);
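// Editor's illustration (not part of the original source): EvaluateStmt and
// the EvalStmtResult codes above drive the evaluation of statement bodies, so
// a C++14 constexpr loop such as:
//
//   constexpr int sum(int n) {
//     int r = 0;
//     for (int i = 1; i <= n; ++i) r += i;
//     return r;
//   }
//   static_assert(sum(4) == 10, "");
//
// is executed by repeatedly evaluating the loop body until the condition
// fails, with the 'return' statement surfacing as ESR_Returned.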
/// Evaluate the body of a loop, and translate the result as appropriate.
static EvalStmtResult EvaluateLoopBody(StmtResult &Result, EvalInfo &Info,
                                       const Stmt *Body,
                                       const SwitchCase *Case = nullptr) {
  BlockScopeRAII Scope(Info);
  switch (EvalStmtResult ESR = EvaluateStmt(Result, Info, Body, Case)) {
  case ESR_Break:
    return ESR_Succeeded;
  case ESR_Succeeded:
  case ESR_Continue:
    return ESR_Continue;
  case ESR_Failed:
  case ESR_Returned:
  case ESR_CaseNotFound:
    return ESR;
  }
  llvm_unreachable("Invalid EvalStmtResult!");
}

/// Evaluate a switch statement.
static EvalStmtResult EvaluateSwitch(StmtResult &Result, EvalInfo &Info,
                                     const SwitchStmt *SS) {
  BlockScopeRAII Scope(Info);

  // Evaluate the switch condition.
  APSInt Value;
  {
    FullExpressionRAII Scope(Info);
    if (const Stmt *Init = SS->getInit()) {
      EvalStmtResult ESR = EvaluateStmt(Result, Info, Init);
      if (ESR != ESR_Succeeded)
        return ESR;
    }
    if (SS->getConditionVariable() &&
        !EvaluateDecl(Info, SS->getConditionVariable()))
      return ESR_Failed;
    if (!EvaluateInteger(SS->getCond(), Value, Info))
      return ESR_Failed;
  }

  // Find the switch case corresponding to the value of the condition.
  // FIXME: Cache this lookup.
  const SwitchCase *Found = nullptr;
  for (const SwitchCase *SC = SS->getSwitchCaseList(); SC;
       SC = SC->getNextSwitchCase()) {
    if (isa<DefaultStmt>(SC)) {
      Found = SC;
      continue;
    }

    const CaseStmt *CS = cast<CaseStmt>(SC);
    APSInt LHS = CS->getLHS()->EvaluateKnownConstInt(Info.Ctx);
    APSInt RHS = CS->getRHS() ? CS->getRHS()->EvaluateKnownConstInt(Info.Ctx)
                              : LHS;
    if (LHS <= Value && Value <= RHS) {
      Found = SC;
      break;
    }
  }

  if (!Found)
    return ESR_Succeeded;

  // Search the switch body for the switch case and evaluate it from there.
  switch (EvalStmtResult ESR = EvaluateStmt(Result, Info, SS->getBody(),
                                            Found)) {
  case ESR_Break:
    return ESR_Succeeded;
  case ESR_Succeeded:
  case ESR_Continue:
  case ESR_Failed:
  case ESR_Returned:
    return ESR;
  case ESR_CaseNotFound:
    // This can only happen if the switch case is nested within a statement
    // expression. We have no intention of supporting that.
    Info.FFDiag(Found->getLocStart(),
                diag::note_constexpr_stmt_expr_unsupported);
    return ESR_Failed;
  }
  llvm_unreachable("Invalid EvalStmtResult!");
}

// Evaluate a statement.
static EvalStmtResult EvaluateStmt(StmtResult &Result, EvalInfo &Info,
                                   const Stmt *S, const SwitchCase *Case) {
  if (!Info.nextStep(S))
    return ESR_Failed;

  // If we're hunting down a 'case' or 'default' label, recurse through
  // substatements until we hit the label.
  if (Case) {
    // FIXME: We don't start the lifetime of objects whose initialization we
    // jump over. However, such objects must be of class type with a trivial
    // default constructor that initialize all subobjects, so must be empty,
    // so this almost never matters.
    switch (S->getStmtClass()) {
    case Stmt::CompoundStmtClass:
      // FIXME: Precompute which substatement of a compound statement we
      // would jump to, and go straight there rather than performing a
      // linear scan each time.
    case Stmt::LabelStmtClass:
    case Stmt::AttributedStmtClass:
    case Stmt::DoStmtClass:
      break;

    case Stmt::CaseStmtClass:
    case Stmt::DefaultStmtClass:
      if (Case == S)
        Case = nullptr;
      break;

    case Stmt::IfStmtClass: {
      // FIXME: Precompute which side of an 'if' we would jump to, and go
      // straight there rather than scanning both sides.
      const IfStmt *IS = cast<IfStmt>(S);

      // Wrap the evaluation in a block scope, in case it's a DeclStmt
      // preceded by our switch label.
      BlockScopeRAII Scope(Info);
      EvalStmtResult ESR = EvaluateStmt(Result, Info, IS->getThen(), Case);
      if (ESR != ESR_CaseNotFound || !IS->getElse())
        return ESR;
      return EvaluateStmt(Result, Info, IS->getElse(), Case);
    }

    case Stmt::WhileStmtClass: {
      EvalStmtResult ESR =
          EvaluateLoopBody(Result, Info, cast<WhileStmt>(S)->getBody(), Case);
      if (ESR != ESR_Continue)
        return ESR;
      break;
    }

    case Stmt::ForStmtClass: {
      const ForStmt *FS = cast<ForStmt>(S);
      EvalStmtResult ESR =
          EvaluateLoopBody(Result, Info, FS->getBody(), Case);
      if (ESR != ESR_Continue)
        return ESR;
      if (FS->getInc()) {
        FullExpressionRAII IncScope(Info);
        if (!EvaluateIgnoredValue(Info, FS->getInc()))
          return ESR_Failed;
      }
      break;
    }

    case Stmt::DeclStmtClass:
      // FIXME: If the variable has initialization that can't be jumped over,
      // bail out of any immediately-surrounding compound-statement too.

    default:
      return ESR_CaseNotFound;
    }
  }

  switch (S->getStmtClass()) {
  default:
    if (const Expr *E = dyn_cast<Expr>(S)) {
      // Don't bother evaluating beyond an expression-statement which couldn't
      // be evaluated.
      FullExpressionRAII Scope(Info);
      if (!EvaluateIgnoredValue(Info, E))
        return ESR_Failed;
      return ESR_Succeeded;
    }

    Info.FFDiag(S->getLocStart());
    return ESR_Failed;

  case Stmt::NullStmtClass:
    return ESR_Succeeded;

  case Stmt::DeclStmtClass: {
    const DeclStmt *DS = cast<DeclStmt>(S);
    for (const auto *DclIt : DS->decls()) {
      // Each declaration initialization is its own full-expression.
      // FIXME: This isn't quite right; if we're performing aggregate
      // initialization, each braced subexpression is its own full-expression.
      FullExpressionRAII Scope(Info);
      if (!EvaluateDecl(Info, DclIt) && !Info.noteFailure())
        return ESR_Failed;
    }
    return ESR_Succeeded;
  }

  case Stmt::ReturnStmtClass: {
    const Expr *RetExpr = cast<ReturnStmt>(S)->getRetValue();
    FullExpressionRAII Scope(Info);
    if (RetExpr &&
        !(Result.Slot
              ? EvaluateInPlace(Result.Value, Info, *Result.Slot, RetExpr)
              : Evaluate(Result.Value, Info, RetExpr)))
      return ESR_Failed;
    return ESR_Returned;
  }

  case Stmt::CompoundStmtClass: {
    BlockScopeRAII Scope(Info);

    const CompoundStmt *CS = cast<CompoundStmt>(S);
    for (const auto *BI : CS->body()) {
      EvalStmtResult ESR = EvaluateStmt(Result, Info, BI, Case);
      if (ESR == ESR_Succeeded)
        Case = nullptr;
      else if (ESR != ESR_CaseNotFound)
        return ESR;
    }
    return Case ? ESR_CaseNotFound : ESR_Succeeded;
  }

  case Stmt::IfStmtClass: {
    const IfStmt *IS = cast<IfStmt>(S);

    // Evaluate the condition, as either a var decl or as an expression.
    BlockScopeRAII Scope(Info);
    if (const Stmt *Init = IS->getInit()) {
      EvalStmtResult ESR = EvaluateStmt(Result, Info, Init);
      if (ESR != ESR_Succeeded)
        return ESR;
    }
    bool Cond;
    if (!EvaluateCond(Info, IS->getConditionVariable(), IS->getCond(), Cond))
      return ESR_Failed;

    if (const Stmt *SubStmt = Cond ? IS->getThen() : IS->getElse()) {
      EvalStmtResult ESR = EvaluateStmt(Result, Info, SubStmt);
      if (ESR != ESR_Succeeded)
        return ESR;
    }
    return ESR_Succeeded;
  }

  case Stmt::WhileStmtClass: {
    const WhileStmt *WS = cast<WhileStmt>(S);
    while (true) {
      BlockScopeRAII Scope(Info);
      bool Continue;
      if (!EvaluateCond(Info, WS->getConditionVariable(), WS->getCond(),
                        Continue))
        return ESR_Failed;
      if (!Continue)
        break;

      EvalStmtResult ESR = EvaluateLoopBody(Result, Info, WS->getBody());
      if (ESR != ESR_Continue)
        return ESR;
    }
    return ESR_Succeeded;
  }

  case Stmt::DoStmtClass: {
    const DoStmt *DS = cast<DoStmt>(S);
    bool Continue;
    do {
      EvalStmtResult ESR = EvaluateLoopBody(Result, Info, DS->getBody(), Case);
      if (ESR != ESR_Continue)
        return ESR;
      Case = nullptr;

      FullExpressionRAII CondScope(Info);
      if (!EvaluateAsBooleanCondition(DS->getCond(), Continue, Info))
        return ESR_Failed;
    } while (Continue);
    return ESR_Succeeded;
  }

  case Stmt::ForStmtClass: {
    const ForStmt *FS = cast<ForStmt>(S);
    BlockScopeRAII Scope(Info);
    if (FS->getInit()) {
      EvalStmtResult ESR = EvaluateStmt(Result, Info, FS->getInit());
      if (ESR != ESR_Succeeded)
        return ESR;
    }
    while (true) {
      BlockScopeRAII Scope(Info);
      bool Continue = true;
      if (FS->getCond() && !EvaluateCond(Info, FS->getConditionVariable(),
                                         FS->getCond(), Continue))
        return ESR_Failed;
      if (!Continue)
        break;

      EvalStmtResult ESR = EvaluateLoopBody(Result, Info, FS->getBody());
      if (ESR != ESR_Continue)
        return ESR;

      if (FS->getInc()) {
        FullExpressionRAII IncScope(Info);
        if (!EvaluateIgnoredValue(Info, FS->getInc()))
          return ESR_Failed;
      }
    }
    return ESR_Succeeded;
  }

  case Stmt::CXXForRangeStmtClass: {
    const CXXForRangeStmt *FS = cast<CXXForRangeStmt>(S);
    BlockScopeRAII Scope(Info);

    // Initialize the __range variable.
    EvalStmtResult ESR = EvaluateStmt(Result, Info, FS->getRangeStmt());
    if (ESR != ESR_Succeeded)
      return ESR;

    // Create the __begin and __end iterators.
    ESR = EvaluateStmt(Result, Info, FS->getBeginStmt());
    if (ESR != ESR_Succeeded)
      return ESR;
    ESR = EvaluateStmt(Result, Info, FS->getEndStmt());
    if (ESR != ESR_Succeeded)
      return ESR;

    while (true) {
      // Condition: __begin != __end.
      {
        bool Continue = true;
        FullExpressionRAII CondExpr(Info);
        if (!EvaluateAsBooleanCondition(FS->getCond(), Continue, Info))
          return ESR_Failed;
        if (!Continue)
          break;
      }

      // User's variable declaration, initialized by *__begin.
      BlockScopeRAII InnerScope(Info);
      ESR = EvaluateStmt(Result, Info, FS->getLoopVarStmt());
      if (ESR != ESR_Succeeded)
        return ESR;

      // Loop body.
      ESR = EvaluateLoopBody(Result, Info, FS->getBody());
      if (ESR != ESR_Continue)
        return ESR;

      // Increment: ++__begin
      if (!EvaluateIgnoredValue(Info, FS->getInc()))
        return ESR_Failed;
    }

    return ESR_Succeeded;
  }

  case Stmt::SwitchStmtClass:
    return EvaluateSwitch(Result, Info, cast<SwitchStmt>(S));

  case Stmt::ContinueStmtClass:
    return ESR_Continue;

  case Stmt::BreakStmtClass:
    return ESR_Break;

  case Stmt::LabelStmtClass:
    return EvaluateStmt(Result, Info, cast<LabelStmt>(S)->getSubStmt(), Case);

  case Stmt::AttributedStmtClass:
    // As a general principle, C++11 attributes can be ignored without
    // any semantic impact.
    return EvaluateStmt(Result, Info, cast<AttributedStmt>(S)->getSubStmt(),
                        Case);

  case Stmt::CaseStmtClass:
  case Stmt::DefaultStmtClass:
    return EvaluateStmt(Result, Info, cast<SwitchCase>(S)->getSubStmt(), Case);
  }
}
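// Editor's illustration (not part of the original source): EvaluateSwitch
// above first evaluates the controlling expression, then scans the case list
// for a match, then evaluates the body starting at the matched label, e.g.:
//
//   constexpr int f(int k) {
//     switch (k) { case 1: return 10; case 2: return 20; default: return 0; }
//   }
//   static_assert(f(2) == 20, "");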
/// CheckTrivialDefaultConstructor - Check whether a constructor is a trivial
/// default constructor. If so, we'll fold it whether or not it's marked as
/// constexpr. If it is marked as constexpr, we will never implicitly define
/// it, so we need special handling.
static bool CheckTrivialDefaultConstructor(EvalInfo &Info, SourceLocation Loc,
                                           const CXXConstructorDecl *CD,
                                           bool IsValueInitialization) {
  if (!CD->isTrivial() || !CD->isDefaultConstructor())
    return false;

  // Value-initialization does not call a trivial default constructor, so such
  // a call is a core constant expression whether or not the constructor is
  // constexpr.
  if (!CD->isConstexpr() && !IsValueInitialization) {
    if (Info.getLangOpts().CPlusPlus11) {
      // FIXME: If DiagDecl is an implicitly-declared special member function,
      // we should be much more explicit about why it's not constexpr.
      Info.CCEDiag(Loc, diag::note_constexpr_invalid_function, 1)
        << /*IsConstexpr*/0 << /*IsConstructor*/1 << CD;
      Info.Note(CD->getLocation(), diag::note_declared_at);
    } else {
      Info.CCEDiag(Loc, diag::note_invalid_subexpr_in_const_expr);
    }
  }
  return true;
}

/// CheckConstexprFunction - Check that a function can be called in a constant
/// expression.
static bool CheckConstexprFunction(EvalInfo &Info, SourceLocation CallLoc,
                                   const FunctionDecl *Declaration,
                                   const FunctionDecl *Definition,
                                   const Stmt *Body) {
  // Potential constant expressions can contain calls to declared, but not yet
  // defined, constexpr functions.
  if (Info.checkingPotentialConstantExpression() && !Definition &&
      Declaration->isConstexpr())
    return false;

  // Bail out with no diagnostic if the function declaration itself is invalid.
  // We will have produced a relevant diagnostic while parsing it.
  if (Declaration->isInvalidDecl())
    return false;

  // Can we evaluate this function call?
  if (Definition && Definition->isConstexpr() &&
      !Definition->isInvalidDecl() && Body)
    return true;

  if (Info.getLangOpts().CPlusPlus11) {
    const FunctionDecl *DiagDecl = Definition ? Definition : Declaration;

    // If this function is not constexpr because it is an inherited
    // non-constexpr constructor, diagnose that directly.
    auto *CD = dyn_cast<CXXConstructorDecl>(DiagDecl);
    if (CD && CD->isInheritingConstructor()) {
      auto *Inherited = CD->getInheritedConstructor().getConstructor();
      if (!Inherited->isConstexpr())
        DiagDecl = CD = Inherited;
    }

    // FIXME: If DiagDecl is an implicitly-declared special member function
    // or an inheriting constructor, we should be much more explicit about why
    // it's not constexpr.
    if (CD && CD->isInheritingConstructor())
      Info.FFDiag(CallLoc, diag::note_constexpr_invalid_inhctor, 1)
        << CD->getInheritedConstructor().getConstructor()->getParent();
    else
      Info.FFDiag(CallLoc, diag::note_constexpr_invalid_function, 1)
        << DiagDecl->isConstexpr() << (bool)CD << DiagDecl;
    Info.Note(DiagDecl->getLocation(), diag::note_declared_at);
  } else {
    Info.FFDiag(CallLoc, diag::note_invalid_subexpr_in_const_expr);
  }
  return false;
}

/// Determine if a class has any fields that might need to be copied by a
/// trivial copy or move operation.
static bool hasFields(const CXXRecordDecl *RD) {
  if (!RD || RD->isEmpty())
    return false;
  for (auto *FD : RD->fields()) {
    if (FD->isUnnamedBitfield())
      continue;
    return true;
  }
  for (auto &Base : RD->bases())
    if (hasFields(Base.getType()->getAsCXXRecordDecl()))
      return true;

  return false;
}

namespace {
typedef SmallVector<APValue, 8> ArgVector;
}

/// EvaluateArgs - Evaluate the arguments to a function call.
static bool EvaluateArgs(ArrayRef<const Expr*> Args, ArgVector &ArgValues,
                         EvalInfo &Info) {
  bool Success = true;
  for (ArrayRef<const Expr*>::iterator I = Args.begin(), E = Args.end();
       I != E; ++I) {
    if (!Evaluate(ArgValues[I - Args.begin()], Info, *I)) {
      // If we're checking for a potential constant expression, evaluate all
      // initializers even if some of them fail.
      if (!Info.noteFailure())
        return false;
      Success = false;
    }
  }
  return Success;
}

/// Evaluate a function call.
static bool HandleFunctionCall(SourceLocation CallLoc,
                               const FunctionDecl *Callee, const LValue *This,
                               ArrayRef<const Expr*> Args, const Stmt *Body,
                               EvalInfo &Info, APValue &Result,
                               const LValue *ResultSlot) {
  ArgVector ArgValues(Args.size());
  if (!EvaluateArgs(Args, ArgValues, Info))
    return false;

  if (!Info.CheckCallLimit(CallLoc))
    return false;

  CallStackFrame Frame(Info, CallLoc, Callee, This, ArgValues.data());

  // For a trivial copy or move assignment, perform an APValue copy. This is
  // essential for unions, where the operations performed by the assignment
  // operator cannot be represented as statements.
  //
  // Skip this for non-union classes with no fields; in that case, the
  // defaulted copy/move does not actually read the object.
  const CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(Callee);
  if (MD && MD->isDefaulted() &&
      (MD->getParent()->isUnion() ||
       (MD->isTrivial() && hasFields(MD->getParent())))) {
    assert(This &&
           (MD->isCopyAssignmentOperator() || MD->isMoveAssignmentOperator()));
    LValue RHS;
    RHS.setFrom(Info.Ctx, ArgValues[0]);
    APValue RHSValue;
    if (!handleLValueToRValueConversion(Info, Args[0], Args[0]->getType(),
                                        RHS, RHSValue))
      return false;
    if (!handleAssignment(Info, Args[0], *This, MD->getThisType(Info.Ctx),
                          RHSValue))
      return false;
    This->moveInto(Result);
    return true;
  }

  StmtResult Ret = {Result, ResultSlot};
  EvalStmtResult ESR = EvaluateStmt(Ret, Info, Body);
  if (ESR == ESR_Succeeded) {
    if (Callee->getReturnType()->isVoidType())
      return true;
    Info.FFDiag(Callee->getLocEnd(), diag::note_constexpr_no_return);
  }
  return ESR == ESR_Returned;
}
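// Editor's illustration (not part of the original source): the APValue
// fast-path in HandleFunctionCall covers defaulted trivial assignment of
// unions, whose effect cannot be expressed as member-wise statements:
//
//   union U { int i; float f; };
//   constexpr U make() { U a{1}; U b{2}; b = a; return b; }  // C++14
//   static_assert(make().i == 1, "");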
/// Evaluate a constructor call.
static bool HandleConstructorCall(const Expr *E, const LValue &This,
                                  APValue *ArgValues,
                                  const CXXConstructorDecl *Definition,
                                  EvalInfo &Info, APValue &Result) {
  SourceLocation CallLoc = E->getExprLoc();
  if (!Info.CheckCallLimit(CallLoc))
    return false;

  const CXXRecordDecl *RD = Definition->getParent();
  if (RD->getNumVBases()) {
    Info.FFDiag(CallLoc, diag::note_constexpr_virtual_base) << RD;
    return false;
  }

  CallStackFrame Frame(Info, CallLoc, Definition, &This, ArgValues);

  // FIXME: Creating an APValue just to hold a nonexistent return value is
  // wasteful.
  APValue RetVal;
  StmtResult Ret = {RetVal, nullptr};

  // If it's a delegating constructor, delegate.
  if (Definition->isDelegatingConstructor()) {
    CXXConstructorDecl::init_const_iterator I = Definition->init_begin();
    {
      FullExpressionRAII InitScope(Info);
      if (!EvaluateInPlace(Result, Info, This, (*I)->getInit()))
        return false;
    }
    return EvaluateStmt(Ret, Info, Definition->getBody()) != ESR_Failed;
  }

  // For a trivial copy or move constructor, perform an APValue copy. This is
  // essential for unions (or classes with anonymous union members), where the
  // operations performed by the constructor cannot be represented by
  // ctor-initializers.
  //
  // Skip this for empty non-union classes; we should not perform an
  // lvalue-to-rvalue conversion on them because their copy constructor does
  // not actually read them.
  if (Definition->isDefaulted() && Definition->isCopyOrMoveConstructor() &&
      (Definition->getParent()->isUnion() ||
       (Definition->isTrivial() && hasFields(Definition->getParent())))) {
    LValue RHS;
    RHS.setFrom(Info.Ctx, ArgValues[0]);
    return handleLValueToRValueConversion(
        Info, E, Definition->getParamDecl(0)->getType().getNonReferenceType(),
        RHS, Result);
  }

  // Reserve space for the struct members.
  if (!RD->isUnion() && Result.isUninit())
    Result = APValue(APValue::UninitStruct(), RD->getNumBases(),
                     std::distance(RD->field_begin(), RD->field_end()));

  if (RD->isInvalidDecl()) return false;
  const ASTRecordLayout &Layout = Info.Ctx.getASTRecordLayout(RD);

  // A scope for temporaries lifetime-extended by reference members.
  BlockScopeRAII LifetimeExtendedScope(Info);

  bool Success = true;
  unsigned BasesSeen = 0;
#ifndef NDEBUG
  CXXRecordDecl::base_class_const_iterator BaseIt = RD->bases_begin();
#endif
  for (const auto *I : Definition->inits()) {
    LValue Subobject = This;
    APValue *Value = &Result;

    // Determine the subobject to initialize.
    FieldDecl *FD = nullptr;
    if (I->isBaseInitializer()) {
      QualType BaseType(I->getBaseClass(), 0);
#ifndef NDEBUG
      // Non-virtual base classes are initialized in the order in the class
      // definition. We have already checked for virtual base classes.
      assert(!BaseIt->isVirtual() && "virtual base for literal type");
      assert(Info.Ctx.hasSameType(BaseIt->getType(), BaseType) &&
             "base class initializers not in expected order");
      ++BaseIt;
#endif
      if (!HandleLValueDirectBase(Info, I->getInit(), Subobject, RD,
                                  BaseType->getAsCXXRecordDecl(), &Layout))
        return false;
      Value = &Result.getStructBase(BasesSeen++);
    } else if ((FD = I->getMember())) {
      if (!HandleLValueMember(Info, I->getInit(), Subobject, FD, &Layout))
        return false;
      if (RD->isUnion()) {
        Result = APValue(FD);
        Value = &Result.getUnionValue();
      } else {
        Value = &Result.getStructField(FD->getFieldIndex());
      }
    } else if (IndirectFieldDecl *IFD = I->getIndirectMember()) {
      // Walk the indirect field decl's chain to find the object to
      // initialize, and make sure we've initialized every step along it.
      for (auto *C : IFD->chain()) {
        FD = cast<FieldDecl>(C);
        CXXRecordDecl *CD = cast<CXXRecordDecl>(FD->getParent());
        // Switch the union field if it differs. This happens if we had
        // preceding zero-initialization, and we're now initializing a union
        // subobject other than the first.
        // FIXME: In this case, the values of the other subobjects are
        // specified, since zero-initialization sets all padding bits to zero.
        if (Value->isUninit() ||
            (Value->isUnion() && Value->getUnionField() != FD)) {
          if (CD->isUnion())
            *Value = APValue(FD);
          else
            *Value = APValue(APValue::UninitStruct(), CD->getNumBases(),
                             std::distance(CD->field_begin(),
                                           CD->field_end()));
        }
        if (!HandleLValueMember(Info, I->getInit(), Subobject, FD))
          return false;
        if (CD->isUnion())
          Value = &Value->getUnionValue();
        else
          Value = &Value->getStructField(FD->getFieldIndex());
      }
    } else {
      llvm_unreachable("unknown base initializer kind");
    }

    FullExpressionRAII InitScope(Info);
    if (!EvaluateInPlace(*Value, Info, Subobject, I->getInit()) ||
        (FD && FD->isBitField() &&
         !truncateBitfieldValue(Info, I->getInit(), *Value, FD))) {
      // If we're checking for a potential constant expression, evaluate all
      // initializers even if some of them fail.
      if (!Info.noteFailure())
        return false;
      Success = false;
    }
  }

  return Success &&
         EvaluateStmt(Ret, Info, Definition->getBody()) != ESR_Failed;
}

static bool HandleConstructorCall(const Expr *E, const LValue &This,
                                  ArrayRef<const Expr*> Args,
                                  const CXXConstructorDecl *Definition,
                                  EvalInfo &Info, APValue &Result) {
  ArgVector ArgValues(Args.size());
  if (!EvaluateArgs(Args, ArgValues, Info))
    return false;

  return HandleConstructorCall(E, This, ArgValues.data(), Definition,
                               Info, Result);
}

//===----------------------------------------------------------------------===//
// Generic Evaluation
//===----------------------------------------------------------------------===//
namespace {

template <class Derived>
class ExprEvaluatorBase
  : public ConstStmtVisitor<Derived, bool> {
private:
  Derived &getDerived() { return static_cast<Derived&>(*this); }
  bool DerivedSuccess(const APValue &V, const Expr *E) {
    return getDerived().Success(V, E);
  }
  bool DerivedZeroInitialization(const Expr *E) {
    return getDerived().ZeroInitialization(E);
  }

  // Check whether a conditional operator with a non-constant condition is a
  // potential constant expression. If neither arm is a potential constant
  // expression, then the conditional operator is not either.
  template<typename ConditionalOperator>
  void CheckPotentialConstantConditional(const ConditionalOperator *E) {
    assert(Info.checkingPotentialConstantExpression());

    // Speculatively evaluate both arms.
    SmallVector<PartialDiagnosticAt, 8> Diag;
    {
      SpeculativeEvaluationRAII Speculate(Info, &Diag);
      StmtVisitorTy::Visit(E->getFalseExpr());
      if (Diag.empty())
        return;
    }

    {
      SpeculativeEvaluationRAII Speculate(Info, &Diag);
      Diag.clear();
      StmtVisitorTy::Visit(E->getTrueExpr());
      if (Diag.empty())
        return;
    }

    Error(E, diag::note_constexpr_conditional_never_const);
  }

  template<typename ConditionalOperator>
  bool HandleConditionalOperator(const ConditionalOperator *E) {
    bool BoolResult;
    if (!EvaluateAsBooleanCondition(E->getCond(), BoolResult, Info)) {
      if (Info.checkingPotentialConstantExpression() && Info.noteFailure())
        CheckPotentialConstantConditional(E);
      return false;
    }

    Expr *EvalExpr = BoolResult ? E->getTrueExpr() : E->getFalseExpr();
    return StmtVisitorTy::Visit(EvalExpr);
  }

protected:
  EvalInfo &Info;
  typedef ConstStmtVisitor<Derived, bool> StmtVisitorTy;
  typedef ExprEvaluatorBase ExprEvaluatorBaseTy;

  OptionalDiagnostic CCEDiag(const Expr *E, diag::kind D) {
    return Info.CCEDiag(E, D);
  }

  bool ZeroInitialization(const Expr *E) { return Error(E); }

public:
  ExprEvaluatorBase(EvalInfo &Info) : Info(Info) {}

  EvalInfo &getEvalInfo() { return Info; }

  /// Report an evaluation error. This should only be called when an error is
  /// first discovered. When propagating an error, just return false.
  bool Error(const Expr *E, diag::kind D) {
    Info.FFDiag(E, D);
    return false;
  }
  bool Error(const Expr *E) {
    return Error(E, diag::note_invalid_subexpr_in_const_expr);
  }

  bool VisitStmt(const Stmt *) {
    llvm_unreachable("Expression evaluator should not be called on stmts");
  }
  bool VisitExpr(const Expr *E) {
    return Error(E);
  }

  bool VisitParenExpr(const ParenExpr *E)
    { return StmtVisitorTy::Visit(E->getSubExpr()); }
  bool VisitUnaryExtension(const UnaryOperator *E)
    { return StmtVisitorTy::Visit(E->getSubExpr()); }
  bool VisitUnaryPlus(const UnaryOperator *E)
    { return StmtVisitorTy::Visit(E->getSubExpr()); }
  bool VisitChooseExpr(const ChooseExpr *E)
    { return StmtVisitorTy::Visit(E->getChosenSubExpr()); }
  bool VisitGenericSelectionExpr(const GenericSelectionExpr *E)
    { return StmtVisitorTy::Visit(E->getResultExpr()); }
  bool VisitSubstNonTypeTemplateParmExpr(const SubstNonTypeTemplateParmExpr *E)
    { return StmtVisitorTy::Visit(E->getReplacement()); }
  bool VisitCXXDefaultArgExpr(const CXXDefaultArgExpr *E)
    { return StmtVisitorTy::Visit(E->getExpr()); }
  bool VisitCXXDefaultInitExpr(const CXXDefaultInitExpr *E) {
    // The initializer may not have been parsed yet, or might be erroneous.
    if (!E->getExpr())
      return Error(E);
    return StmtVisitorTy::Visit(E->getExpr());
  }
  // We cannot create any objects for which cleanups are required, so there is
  // nothing to do here; all cleanups must come from unevaluated
  // subexpressions.
  bool VisitExprWithCleanups(const ExprWithCleanups *E)
    { return StmtVisitorTy::Visit(E->getSubExpr()); }

  bool VisitCXXReinterpretCastExpr(const CXXReinterpretCastExpr *E) {
    CCEDiag(E, diag::note_constexpr_invalid_cast) << 0;
    return static_cast<Derived*>(this)->VisitCastExpr(E);
  }
  bool VisitCXXDynamicCastExpr(const CXXDynamicCastExpr *E) {
    CCEDiag(E, diag::note_constexpr_invalid_cast) << 1;
    return static_cast<Derived*>(this)->VisitCastExpr(E);
  }

  bool VisitBinaryOperator(const BinaryOperator *E) {
    switch (E->getOpcode()) {
    default:
      return Error(E);

    case BO_Comma:
      VisitIgnoredValue(E->getLHS());
      return StmtVisitorTy::Visit(E->getRHS());

    case BO_PtrMemD:
    case BO_PtrMemI: {
      LValue Obj;
      if (!HandleMemberPointerAccess(Info, E, Obj))
        return false;
      APValue Result;
      if (!handleLValueToRValueConversion(Info, E, E->getType(), Obj, Result))
        return false;
      return DerivedSuccess(Result, E);
    }
    }
  }

  bool VisitBinaryConditionalOperator(const BinaryConditionalOperator *E) {
    // Evaluate and cache the common expression. We treat it as a temporary,
    // even though it's not quite the same thing.
    if (!Evaluate(Info.CurrentCall->createTemporary(E->getOpaqueValue(), false),
                  Info, E->getCommon()))
      return false;

    return HandleConditionalOperator(E);
  }

  bool VisitConditionalOperator(const ConditionalOperator *E) {
    bool IsBcpCall = false;
    // If the condition (ignoring parens) is a __builtin_constant_p call,
    // the result is a constant expression if it can be folded without
    // side-effects. This is an important GNU extension. See GCC PR38377
    // for discussion.
    if (const CallExpr *CallCE =
          dyn_cast<CallExpr>(E->getCond()->IgnoreParenCasts()))
      if (CallCE->getBuiltinCallee() == Builtin::BI__builtin_constant_p)
        IsBcpCall = true;

    // Always assume __builtin_constant_p(...) ? ... : ... is a potential
    // constant expression; we can't check whether it's potentially foldable.
    if (Info.checkingPotentialConstantExpression() && IsBcpCall)
      return false;

    FoldConstant Fold(Info, IsBcpCall);
    if (!HandleConditionalOperator(E)) {
      Fold.keepDiagnostics();
      return false;
    }

    return true;
  }

  bool VisitOpaqueValueExpr(const OpaqueValueExpr *E) {
    if (APValue *Value = Info.CurrentCall->getTemporary(E))
      return DerivedSuccess(*Value, E);

    const Expr *Source = E->getSourceExpr();
    if (!Source)
      return Error(E);
    if (Source == E) { // sanity checking.
      assert(0 && "OpaqueValueExpr recursively refers to itself");
      return Error(E);
    }
    return StmtVisitorTy::Visit(Source);
  }

  bool VisitCallExpr(const CallExpr *E) {
    APValue Result;
    if (!handleCallExpr(E, Result, nullptr))
      return false;
    return DerivedSuccess(Result, E);
  }

  bool handleCallExpr(const CallExpr *E, APValue &Result,
                      const LValue *ResultSlot) {
    const Expr *Callee = E->getCallee()->IgnoreParens();
    QualType CalleeType = Callee->getType();

    const FunctionDecl *FD = nullptr;
    LValue *This = nullptr, ThisVal;
    auto Args = llvm::makeArrayRef(E->getArgs(), E->getNumArgs());
    bool HasQualifier = false;

    // Extract function decl and 'this' pointer from the callee.
    if (CalleeType->isSpecificBuiltinType(BuiltinType::BoundMember)) {
      const ValueDecl *Member = nullptr;
      if (const MemberExpr *ME = dyn_cast<MemberExpr>(Callee)) {
        // Explicit bound member calls, such as x.f() or p->g();
        if (!EvaluateObjectArgument(Info, ME->getBase(), ThisVal))
          return false;
        Member = ME->getMemberDecl();
        This = &ThisVal;
        HasQualifier = ME->hasQualifier();
      } else if (const BinaryOperator *BE = dyn_cast<BinaryOperator>(Callee)) {
        // Indirect bound member calls ('.*' or '->*').
        Member = HandleMemberPointerAccess(Info, BE, ThisVal, false);
        if (!Member)
          return false;
        This = &ThisVal;
      } else
        return Error(Callee);
      FD = dyn_cast<FunctionDecl>(Member);
      if (!FD)
        return Error(Callee);
    } else if (CalleeType->isFunctionPointerType()) {
      LValue Call;
      if (!EvaluatePointer(Callee, Call, Info))
        return false;

      if (!Call.getLValueOffset().isZero())
        return Error(Callee);
      FD = dyn_cast_or_null<FunctionDecl>(
          Call.getLValueBase().dyn_cast<const ValueDecl*>());
      if (!FD)
        return Error(Callee);
      // Don't call function pointers which have been cast to some other type.
      // Per DR (no number yet), the caller and callee can differ in noexcept.
      if (!Info.Ctx.hasSameFunctionTypeIgnoringExceptionSpec(
              CalleeType->getPointeeType(), FD->getType())) {
        return Error(E);
      }

      // Overloaded operator calls to member functions are represented as
      // normal calls with '*this' as the first argument.
      const CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(FD);
      if (MD && !MD->isStatic()) {
        // FIXME: When selecting an implicit conversion for an overloaded
        // operator delete, we sometimes try to evaluate calls to conversion
        // operators without a 'this' parameter!
        if (Args.empty())
          return Error(E);

        if (!EvaluateObjectArgument(Info, Args[0], ThisVal))
          return false;
        This = &ThisVal;
        Args = Args.slice(1);
      } else if (MD && MD->isLambdaStaticInvoker()) {
        // Map the static invoker for the lambda back to the call operator.
        // Conveniently, we don't have to slice out the 'this' argument (as is
        // being done for the non-static case), since a static member function
        // doesn't have an implicit argument passed in.
        const CXXRecordDecl *ClosureClass = MD->getParent();
        assert(
            ClosureClass->captures_begin() == ClosureClass->captures_end() &&
            "Number of captures must be zero for conversion to function-ptr");

        const CXXMethodDecl *LambdaCallOp =
            ClosureClass->getLambdaCallOperator();
        // Set 'FD', the function that will be called below, to the call
        // operator. If the closure object represents a generic lambda, find
        // the corresponding specialization of the call operator.
        if (ClosureClass->isGenericLambda()) {
          assert(MD->isFunctionTemplateSpecialization() &&
                 "A generic lambda's static-invoker function must be a "
                 "template specialization");
          const TemplateArgumentList *TAL = MD->getTemplateSpecializationArgs();
          FunctionTemplateDecl *CallOpTemplate =
              LambdaCallOp->getDescribedFunctionTemplate();
          void *InsertPos = nullptr;
          FunctionDecl *CorrespondingCallOpSpecialization =
              CallOpTemplate->findSpecialization(TAL->asArray(), InsertPos);
          assert(CorrespondingCallOpSpecialization &&
                 "We must always have a function call operator specialization "
                 "that corresponds to our static invoker specialization");
          FD = cast<CXXMethodDecl>(CorrespondingCallOpSpecialization);
        } else
          FD = LambdaCallOp;
      }

    } else
      return Error(E);

    if (This && !This->checkSubobject(Info, E, CSK_This))
      return false;

    // DR1358 allows virtual constexpr functions in some cases. Don't allow
    // calls to such functions in constant expressions.
    if (This && !HasQualifier &&
        isa<CXXMethodDecl>(FD) && cast<CXXMethodDecl>(FD)->isVirtual())
      return Error(E, diag::note_constexpr_virtual_call);

    const FunctionDecl *Definition = nullptr;
    Stmt *Body = FD->getBody(Definition);

    if (!CheckConstexprFunction(Info, E->getExprLoc(), FD, Definition, Body) ||
        !HandleFunctionCall(E->getExprLoc(), Definition, This, Args, Body,
                            Info, Result, ResultSlot))
      return false;

    return true;
  }

  bool VisitCompoundLiteralExpr(const CompoundLiteralExpr *E) {
    return StmtVisitorTy::Visit(E->getInitializer());
  }
  bool VisitInitListExpr(const InitListExpr *E) {
    if (E->getNumInits() == 0)
      return DerivedZeroInitialization(E);
    if (E->getNumInits() == 1)
      return StmtVisitorTy::Visit(E->getInit(0));
    return Error(E);
  }
  bool VisitImplicitValueInitExpr(const ImplicitValueInitExpr *E) {
    return DerivedZeroInitialization(E);
  }
  bool VisitCXXScalarValueInitExpr(const CXXScalarValueInitExpr *E) {
    return DerivedZeroInitialization(E);
  }
  bool VisitCXXNullPtrLiteralExpr(const CXXNullPtrLiteralExpr *E) {
    return DerivedZeroInitialization(E);
  }

  /// A member expression where the object is a prvalue is itself a prvalue.
  bool VisitMemberExpr(const MemberExpr *E) {
    assert(!E->isArrow() && "missing call to bound member function?");

    APValue Val;
    if (!Evaluate(Val, Info, E->getBase()))
      return false;

    QualType BaseTy = E->getBase()->getType();

    const FieldDecl *FD = dyn_cast<FieldDecl>(E->getMemberDecl());
    if (!FD) return Error(E);
    assert(!FD->getType()->isReferenceType() && "prvalue reference?");
    assert(BaseTy->castAs<RecordType>()->getDecl()->getCanonicalDecl() ==
           FD->getParent()->getCanonicalDecl() && "record / field mismatch");

    CompleteObject Obj(&Val, BaseTy);
    SubobjectDesignator Designator(BaseTy);
    Designator.addDeclUnchecked(FD);

    APValue Result;
    return extractSubobject(Info, E, Obj, Designator, Result) &&
           DerivedSuccess(Result, E);
  }

  bool VisitCastExpr(const CastExpr *E) {
    switch (E->getCastKind()) {
    default:
      break;

    case CK_AtomicToNonAtomic: {
      APValue AtomicVal;
      if (!EvaluateAtomic(E->getSubExpr(), AtomicVal, Info))
        return false;
      return DerivedSuccess(AtomicVal, E);
    }

    case CK_NoOp:
    case CK_UserDefinedConversion:
      return StmtVisitorTy::Visit(E->getSubExpr());

    case CK_LValueToRValue: {
      LValue LVal;
      if (!EvaluateLValue(E->getSubExpr(), LVal, Info))
        return false;
      APValue RVal;
      // Note, we use the subexpression's type in order to retain cv-qualifiers.
      if (!handleLValueToRValueConversion(Info, E, E->getSubExpr()->getType(),
                                          LVal, RVal))
        return false;
      return DerivedSuccess(RVal, E);
    }
    }

    return Error(E);
  }

  bool VisitUnaryPostInc(const UnaryOperator *UO) {
    return VisitUnaryPostIncDec(UO);
  }
  bool VisitUnaryPostDec(const UnaryOperator *UO) {
    return VisitUnaryPostIncDec(UO);
  }
  bool VisitUnaryPostIncDec(const UnaryOperator *UO) {
    if (!Info.getLangOpts().CPlusPlus14 && !Info.keepEvaluatingAfterFailure())
      return Error(UO);

    LValue LVal;
    if (!EvaluateLValue(UO->getSubExpr(), LVal, Info))
      return false;
    APValue RVal;
    if (!handleIncDec(this->Info, UO, LVal, UO->getSubExpr()->getType(),
                      UO->isIncrementOp(), &RVal))
      return false;
    return DerivedSuccess(RVal, UO);
  }

  bool VisitStmtExpr(const StmtExpr *E) {
    // We will have checked the full-expressions inside the statement
    // expression when they were completed, and don't need to check them again
    // now.
    if (Info.checkingForOverflow())
      return Error(E);

    BlockScopeRAII Scope(Info);
    const CompoundStmt *CS = E->getSubStmt();
    if (CS->body_empty())
      return true;

    for (CompoundStmt::const_body_iterator BI = CS->body_begin(),
                                           BE = CS->body_end();
         /**/; ++BI) {
      if (BI + 1 == BE) {
        const Expr *FinalExpr = dyn_cast<Expr>(*BI);
        if (!FinalExpr) {
          Info.FFDiag((*BI)->getLocStart(),
                      diag::note_constexpr_stmt_expr_unsupported);
          return false;
        }
        return this->Visit(FinalExpr);
      }

      APValue ReturnValue;
      StmtResult Result = { ReturnValue, nullptr };
      EvalStmtResult ESR = EvaluateStmt(Result, Info, *BI);
      if (ESR != ESR_Succeeded) {
        // FIXME: If the statement-expression terminated due to 'return',
        // 'break', or 'continue', it would be nice to propagate that to
        // the outer statement evaluation rather than bailing out.
        if (ESR != ESR_Failed)
          Info.FFDiag((*BI)->getLocStart(),
                      diag::note_constexpr_stmt_expr_unsupported);
        return false;
      }
    }

    llvm_unreachable("Return from function from the loop above.");
  }

  /// Visit a value which is evaluated, but whose value is ignored.
  void VisitIgnoredValue(const Expr *E) {
    EvaluateIgnoredValue(Info, E);
  }

  /// Potentially visit a MemberExpr's base expression.
  void VisitIgnoredBaseExpression(const Expr *E) {
    // While MSVC doesn't evaluate the base expression, it does diagnose the
    // presence of side-effecting behavior.
    if (Info.getLangOpts().MSVCCompat && !E->HasSideEffects(Info.Ctx))
      return;
    VisitIgnoredValue(E);
  }
};

}

//===----------------------------------------------------------------------===//
// Common base class for lvalue and temporary evaluation.
//===----------------------------------------------------------------------===//
namespace {
template<class Derived>
class LValueExprEvaluatorBase
  : public ExprEvaluatorBase<Derived> {
protected:
  LValue &Result;
  typedef LValueExprEvaluatorBase LValueExprEvaluatorBaseTy;
  typedef ExprEvaluatorBase<Derived> ExprEvaluatorBaseTy;

  bool Success(APValue::LValueBase B) {
    Result.set(B);
    return true;
  }

public:
  LValueExprEvaluatorBase(EvalInfo &Info, LValue &Result) :
    ExprEvaluatorBaseTy(Info), Result(Result) {}

  bool Success(const APValue &V, const Expr *E) {
    Result.setFrom(this->Info.Ctx, V);
    return true;
  }

  bool VisitMemberExpr(const MemberExpr *E) {
    // Handle non-static data members.
//===----------------------------------------------------------------------===//
// Common base class for lvalue and temporary evaluation.
//===----------------------------------------------------------------------===//

namespace {
template<class Derived>
class LValueExprEvaluatorBase
  : public ExprEvaluatorBase<Derived> {
protected:
  LValue &Result;
  typedef LValueExprEvaluatorBase LValueExprEvaluatorBaseTy;
  typedef ExprEvaluatorBase<Derived> ExprEvaluatorBaseTy;

  bool Success(APValue::LValueBase B) {
    Result.set(B);
    return true;
  }

public:
  LValueExprEvaluatorBase(EvalInfo &Info, LValue &Result)
      : ExprEvaluatorBaseTy(Info), Result(Result) {}

  bool Success(const APValue &V, const Expr *E) {
    Result.setFrom(this->Info.Ctx, V);
    return true;
  }

  bool VisitMemberExpr(const MemberExpr *E) {
    // Handle non-static data members.
    QualType BaseTy;
    bool EvalOK;
    if (E->isArrow()) {
      EvalOK = EvaluatePointer(E->getBase(), Result, this->Info);
      BaseTy = E->getBase()->getType()->castAs<PointerType>()->getPointeeType();
    } else if (E->getBase()->isRValue()) {
      assert(E->getBase()->getType()->isRecordType());
      EvalOK = EvaluateTemporary(E->getBase(), Result, this->Info);
      BaseTy = E->getBase()->getType();
    } else {
      EvalOK = this->Visit(E->getBase());
      BaseTy = E->getBase()->getType();
    }
    if (!EvalOK) {
      if (!this->Info.allowInvalidBaseExpr())
        return false;
      Result.setInvalid(E);
      return true;
    }

    const ValueDecl *MD = E->getMemberDecl();
    if (const FieldDecl *FD = dyn_cast<FieldDecl>(E->getMemberDecl())) {
      assert(BaseTy->getAs<RecordType>()->getDecl()->getCanonicalDecl() ==
             FD->getParent()->getCanonicalDecl() && "record / field mismatch");
      (void)BaseTy;
      if (!HandleLValueMember(this->Info, E, Result, FD))
        return false;
    } else if (const IndirectFieldDecl *IFD = dyn_cast<IndirectFieldDecl>(MD)) {
      if (!HandleLValueIndirectMember(this->Info, E, Result, IFD))
        return false;
    } else
      return this->Error(E);

    if (MD->getType()->isReferenceType()) {
      APValue RefValue;
      if (!handleLValueToRValueConversion(this->Info, E, MD->getType(), Result,
                                          RefValue))
        return false;
      return Success(RefValue, E);
    }
    return true;
  }

  bool VisitBinaryOperator(const BinaryOperator *E) {
    switch (E->getOpcode()) {
    default:
      return ExprEvaluatorBaseTy::VisitBinaryOperator(E);

    case BO_PtrMemD:
    case BO_PtrMemI:
      return HandleMemberPointerAccess(this->Info, E, Result);
    }
  }

  bool VisitCastExpr(const CastExpr *E) {
    switch (E->getCastKind()) {
    default:
      return ExprEvaluatorBaseTy::VisitCastExpr(E);

    case CK_DerivedToBase:
    case CK_UncheckedDerivedToBase:
      if (!this->Visit(E->getSubExpr()))
        return false;

      // Now figure out the necessary offset to add to the base LV to get from
      // the derived class to the base class.
      return HandleLValueBasePath(this->Info, E, E->getSubExpr()->getType(),
                                  Result);
    }
  }
};

}

//===----------------------------------------------------------------------===//
// LValue Evaluation
//
// This is used for evaluating lvalues (in C and C++), xvalues (in C++11),
// function designators (in C), decl references to void objects (in C), and
// temporaries (if building with -Wno-address-of-temporary).
//
// LValue evaluation produces values comprising a base expression of one of the
// following types:
// - Declarations
//  * VarDecl
//  * FunctionDecl
// - Literals
//  * CompoundLiteralExpr in C (and in global scope in C++)
//  * StringLiteral
//  * CXXTypeidExpr
//  * PredefinedExpr
//  * ObjCStringLiteralExpr
//  * ObjCEncodeExpr
//  * AddrLabelExpr
//  * BlockExpr
//  * CallExpr for a MakeStringConstant builtin
// - Locals and temporaries
//  * MaterializeTemporaryExpr
//  * Any Expr, with a CallIndex indicating the function in which the temporary
//    was evaluated, for cases where the MaterializeTemporaryExpr is missing
//    from the AST (FIXME).
//  * A MaterializeTemporaryExpr that has static storage duration, with no
//    CallIndex, for a lifetime-extended temporary.
// plus an offset in bytes.
//===----------------------------------------------------------------------===//
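// Illustration (editorial sketch, not part of the vendor sources): under the
// representation described above, evaluating the lvalue in
//
//   struct S { int a[3]; } s;
//   const int *p = &s.a[1];
//
// yields a base of 's' (a VarDecl), a designator recording the subobject
// path {a, [1]}, and a byte offset equal to sizeof(int).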
namespace {
class LValueExprEvaluator
  : public LValueExprEvaluatorBase<LValueExprEvaluator> {
public:
  LValueExprEvaluator(EvalInfo &Info, LValue &Result) :
    LValueExprEvaluatorBaseTy(Info, Result) {}

  bool VisitVarDecl(const Expr *E, const VarDecl *VD);
  bool VisitUnaryPreIncDec(const UnaryOperator *UO);

  bool VisitDeclRefExpr(const DeclRefExpr *E);
  bool VisitPredefinedExpr(const PredefinedExpr *E) { return Success(E); }
  bool VisitMaterializeTemporaryExpr(const MaterializeTemporaryExpr *E);
  bool VisitCompoundLiteralExpr(const CompoundLiteralExpr *E);
  bool VisitMemberExpr(const MemberExpr *E);
  bool VisitStringLiteral(const StringLiteral *E) { return Success(E); }
  bool VisitObjCEncodeExpr(const ObjCEncodeExpr *E) { return Success(E); }
  bool VisitCXXTypeidExpr(const CXXTypeidExpr *E);
  bool VisitCXXUuidofExpr(const CXXUuidofExpr *E);
  bool VisitArraySubscriptExpr(const ArraySubscriptExpr *E);
  bool VisitUnaryDeref(const UnaryOperator *E);
  bool VisitUnaryReal(const UnaryOperator *E);
  bool VisitUnaryImag(const UnaryOperator *E);
  bool VisitUnaryPreInc(const UnaryOperator *UO) {
    return VisitUnaryPreIncDec(UO);
  }
  bool VisitUnaryPreDec(const UnaryOperator *UO) {
    return VisitUnaryPreIncDec(UO);
  }
  bool VisitBinAssign(const BinaryOperator *BO);
  bool VisitCompoundAssignOperator(const CompoundAssignOperator *CAO);

  bool VisitCastExpr(const CastExpr *E) {
    switch (E->getCastKind()) {
    default:
      return LValueExprEvaluatorBaseTy::VisitCastExpr(E);

    case CK_LValueBitCast:
      this->CCEDiag(E, diag::note_constexpr_invalid_cast) << 2;
      if (!Visit(E->getSubExpr()))
        return false;
      Result.Designator.setInvalid();
      return true;

    case CK_BaseToDerived:
      if (!Visit(E->getSubExpr()))
        return false;
      return HandleBaseToDerivedCast(Info, E, Result);
    }
  }
};
} // end anonymous namespace

/// Evaluate an expression as an lvalue. This can be legitimately called on
/// expressions which are not glvalues, in three cases:
///  * function designators in C, and
///  * "extern void" objects
///  * @selector() expressions in Objective-C
static bool EvaluateLValue(const Expr *E, LValue &Result, EvalInfo &Info) {
  assert(E->isGLValue() || E->getType()->isFunctionType() ||
         E->getType()->isVoidType() || isa<ObjCSelectorExpr>(E));
  return LValueExprEvaluator(Info, Result).Visit(E);
}

bool LValueExprEvaluator::VisitDeclRefExpr(const DeclRefExpr *E) {
  if (const FunctionDecl *FD = dyn_cast<FunctionDecl>(E->getDecl()))
    return Success(FD);
  if (const VarDecl *VD = dyn_cast<VarDecl>(E->getDecl()))
    return VisitVarDecl(E, VD);
  if (const BindingDecl *BD = dyn_cast<BindingDecl>(E->getDecl()))
    return Visit(BD->getBinding());
  return Error(E);
}

bool LValueExprEvaluator::VisitVarDecl(const Expr *E, const VarDecl *VD) {
  CallStackFrame *Frame = nullptr;
  if (VD->hasLocalStorage() && Info.CurrentCall->Index > 1) {
    // Only if a local variable was declared in the function currently being
    // evaluated, do we expect to be able to find its value in the current
    // frame. (Otherwise it was likely declared in an enclosing context and
    // could either have a valid evaluatable value (for e.g. a constexpr
    // variable) or be ill-formed (and trigger an appropriate evaluation
    // diagnostic)).
    if (Info.CurrentCall->Callee &&
        Info.CurrentCall->Callee->Equals(VD->getDeclContext())) {
      Frame = Info.CurrentCall;
    }
  }

  if (!VD->getType()->isReferenceType()) {
    if (Frame) {
      Result.set(VD, Frame->Index);
      return true;
    }
    return Success(VD);
  }

  APValue *V;
  if (!evaluateVarDeclInit(Info, E, VD, Frame, V))
    return false;
  if (V->isUninit()) {
    if (!Info.checkingPotentialConstantExpression())
      Info.FFDiag(E, diag::note_constexpr_use_uninit_reference);
    return false;
  }
  return Success(*V, E);
}

bool LValueExprEvaluator::VisitMaterializeTemporaryExpr(
    const MaterializeTemporaryExpr *E) {
  // Walk through the expression to find the materialized temporary itself.
  SmallVector<const Expr *, 2> CommaLHSs;
  SmallVector<SubobjectAdjustment, 2> Adjustments;
  const Expr *Inner = E->GetTemporaryExpr()->
      skipRValueSubobjectAdjustments(CommaLHSs, Adjustments);

  // If we passed any comma operators, evaluate their LHSs.
  for (unsigned I = 0, N = CommaLHSs.size(); I != N; ++I)
    if (!EvaluateIgnoredValue(Info, CommaLHSs[I]))
      return false;

  // A materialized temporary with static storage duration can appear within
  // the result of a constant expression evaluation, so we need to preserve
  // its value for use outside this evaluation.
  APValue *Value;
  if (E->getStorageDuration() == SD_Static) {
    Value = Info.Ctx.getMaterializedTemporaryValue(E, true);
    *Value = APValue();
    Result.set(E);
  } else {
    Value = &Info.CurrentCall->
        createTemporary(E, E->getStorageDuration() == SD_Automatic);
    Result.set(E, Info.CurrentCall->Index);
  }

  QualType Type = Inner->getType();

  // Materialize the temporary itself.
  if (!EvaluateInPlace(*Value, Info, Result, Inner) ||
      (E->getStorageDuration() == SD_Static &&
       !CheckConstantExpression(Info, E->getExprLoc(), Type, *Value))) {
    *Value = APValue();
    return false;
  }

  // Adjust our lvalue to refer to the desired subobject.
  for (unsigned I = Adjustments.size(); I != 0; /**/) {
    --I;
    switch (Adjustments[I].Kind) {
    case SubobjectAdjustment::DerivedToBaseAdjustment:
      if (!HandleLValueBasePath(Info, Adjustments[I].DerivedToBase.BasePath,
                                Type, Result))
        return false;
      Type = Adjustments[I].DerivedToBase.BasePath->getType();
      break;

    case SubobjectAdjustment::FieldAdjustment:
      if (!HandleLValueMember(Info, E, Result, Adjustments[I].Field))
        return false;
      Type = Adjustments[I].Field->getType();
      break;

    case SubobjectAdjustment::MemberPointerAdjustment:
      if (!HandleMemberPointerAccess(this->Info, Type, Result,
                                     Adjustments[I].Ptr.RHS))
        return false;
      Type = Adjustments[I].Ptr.MPT->getPointeeType();
      break;
    }
  }

  return true;
}

bool
LValueExprEvaluator::VisitCompoundLiteralExpr(const CompoundLiteralExpr *E) {
  assert((!Info.getLangOpts().CPlusPlus || E->isFileScope()) &&
         "lvalue compound literal in c++?");
  // Defer visiting the literal until the lvalue-to-rvalue conversion. We can
  // only see this when folding in C, so there's no standard to follow here.
  return Success(E);
}

bool LValueExprEvaluator::VisitCXXTypeidExpr(const CXXTypeidExpr *E) {
  if (!E->isPotentiallyEvaluated())
    return Success(E);

  Info.FFDiag(E, diag::note_constexpr_typeid_polymorphic)
    << E->getExprOperand()->getType()
    << E->getExprOperand()->getSourceRange();
  return false;
}

bool LValueExprEvaluator::VisitCXXUuidofExpr(const CXXUuidofExpr *E) {
  return Success(E);
}
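// Illustration (editorial sketch, not part of the vendor sources):
// VisitMaterializeTemporaryExpr above distinguishes storage durations; for a
// lifetime-extended temporary such as
//
//   constexpr const int &r = 42;
//
// the temporary has static storage duration, so its value is stored via
// getMaterializedTemporaryValue and checked as a constant expression so that
// it can outlive this evaluation.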
bool LValueExprEvaluator::VisitMemberExpr(const MemberExpr *E) {
  // Handle static data members.
  if (const VarDecl *VD = dyn_cast<VarDecl>(E->getMemberDecl())) {
    VisitIgnoredBaseExpression(E->getBase());
    return VisitVarDecl(E, VD);
  }

  // Handle static member functions.
  if (const CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(E->getMemberDecl())) {
    if (MD->isStatic()) {
      VisitIgnoredBaseExpression(E->getBase());
      return Success(MD);
    }
  }

  // Handle non-static data members.
  return LValueExprEvaluatorBaseTy::VisitMemberExpr(E);
}

bool LValueExprEvaluator::VisitArraySubscriptExpr(const ArraySubscriptExpr *E) {
  // FIXME: Deal with vectors as array subscript bases.
  if (E->getBase()->getType()->isVectorType())
    return Error(E);

  if (!EvaluatePointer(E->getBase(), Result, Info))
    return false;

  APSInt Index;
  if (!EvaluateInteger(E->getIdx(), Index, Info))
    return false;

  return HandleLValueArrayAdjustment(Info, E, Result, E->getType(),
                                     getExtValue(Index));
}

bool LValueExprEvaluator::VisitUnaryDeref(const UnaryOperator *E) {
  return EvaluatePointer(E->getSubExpr(), Result, Info);
}

bool LValueExprEvaluator::VisitUnaryReal(const UnaryOperator *E) {
  if (!Visit(E->getSubExpr()))
    return false;

  // __real is a no-op on scalar lvalues.
  if (E->getSubExpr()->getType()->isAnyComplexType())
    HandleLValueComplexElement(Info, E, Result, E->getType(), false);
  return true;
}

bool LValueExprEvaluator::VisitUnaryImag(const UnaryOperator *E) {
  assert(E->getSubExpr()->getType()->isAnyComplexType() &&
         "lvalue __imag__ on scalar?");
  if (!Visit(E->getSubExpr()))
    return false;
  HandleLValueComplexElement(Info, E, Result, E->getType(), true);
  return true;
}

bool LValueExprEvaluator::VisitUnaryPreIncDec(const UnaryOperator *UO) {
  if (!Info.getLangOpts().CPlusPlus14 && !Info.keepEvaluatingAfterFailure())
    return Error(UO);

  if (!this->Visit(UO->getSubExpr()))
    return false;

  return handleIncDec(
      this->Info, UO, Result, UO->getSubExpr()->getType(),
      UO->isIncrementOp(), nullptr);
}

bool LValueExprEvaluator::VisitCompoundAssignOperator(
    const CompoundAssignOperator *CAO) {
  if (!Info.getLangOpts().CPlusPlus14 && !Info.keepEvaluatingAfterFailure())
    return Error(CAO);

  APValue RHS;

  // The overall lvalue result is the result of evaluating the LHS.
  if (!this->Visit(CAO->getLHS())) {
    if (Info.noteFailure())
      Evaluate(RHS, this->Info, CAO->getRHS());
    return false;
  }

  if (!Evaluate(RHS, this->Info, CAO->getRHS()))
    return false;

  return handleCompoundAssignment(
      this->Info, CAO,
      Result, CAO->getLHS()->getType(), CAO->getComputationLHSType(),
      CAO->getOpForCompoundAssignment(CAO->getOpcode()), RHS);
}

bool LValueExprEvaluator::VisitBinAssign(const BinaryOperator *E) {
  if (!Info.getLangOpts().CPlusPlus14 && !Info.keepEvaluatingAfterFailure())
    return Error(E);

  APValue NewVal;

  if (!this->Visit(E->getLHS())) {
    if (Info.noteFailure())
      Evaluate(NewVal, this->Info, E->getRHS());
    return false;
  }

  if (!Evaluate(NewVal, this->Info, E->getRHS()))
    return false;

  return handleAssignment(this->Info, E, Result, E->getLHS()->getType(),
                          NewVal);
}
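// Illustration (editorial sketch, not part of the vendor sources): the
// CPlusPlus14 guards in the assignment and increment visitors above reflect
// that mutation during constant evaluation is a C++14 feature, e.g.
//
//   constexpr int f() { int n = 1; n += 2; return ++n; }
//   static_assert(f() == 4, "");   // OK in C++14; ill-formed in C++11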
//===----------------------------------------------------------------------===//
// Pointer Evaluation
//===----------------------------------------------------------------------===//

/// \brief Attempts to compute the number of bytes available at the pointer
/// returned by a function with the alloc_size attribute. Returns true if we
/// were successful. Places an unsigned number into `Result`.
///
/// This expects the given CallExpr to be a call to a function with an
/// alloc_size attribute.
static bool getBytesReturnedByAllocSizeCall(const ASTContext &Ctx,
                                            const CallExpr *Call,
                                            llvm::APInt &Result) {
  const AllocSizeAttr *AllocSize = getAllocSizeAttr(Call);

  // alloc_size args are 1-indexed, 0 means not present.
  assert(AllocSize && AllocSize->getElemSizeParam() != 0);
  unsigned SizeArgNo = AllocSize->getElemSizeParam() - 1;
  unsigned BitsInSizeT = Ctx.getTypeSize(Ctx.getSizeType());
  if (Call->getNumArgs() <= SizeArgNo)
    return false;

  auto EvaluateAsSizeT = [&](const Expr *E, APSInt &Into) {
    if (!E->EvaluateAsInt(Into, Ctx, Expr::SE_AllowSideEffects))
      return false;
    if (Into.isNegative() || !Into.isIntN(BitsInSizeT))
      return false;
    Into = Into.zextOrSelf(BitsInSizeT);
    return true;
  };

  APSInt SizeOfElem;
  if (!EvaluateAsSizeT(Call->getArg(SizeArgNo), SizeOfElem))
    return false;

  if (!AllocSize->getNumElemsParam()) {
    Result = std::move(SizeOfElem);
    return true;
  }

  APSInt NumberOfElems;
  // Argument numbers start at 1
  unsigned NumArgNo = AllocSize->getNumElemsParam() - 1;
  if (!EvaluateAsSizeT(Call->getArg(NumArgNo), NumberOfElems))
    return false;

  bool Overflow;
  llvm::APInt BytesAvailable = SizeOfElem.umul_ov(NumberOfElems, Overflow);
  if (Overflow)
    return false;

  Result = std::move(BytesAvailable);
  return true;
}

/// \brief Convenience function. LVal's base must be a call to an alloc_size
/// function.
static bool getBytesReturnedByAllocSizeCall(const ASTContext &Ctx,
                                            const LValue &LVal,
                                            llvm::APInt &Result) {
  assert(isBaseAnAllocSizeCall(LVal.getLValueBase()) &&
         "Can't get the size of a non alloc_size function");
  const auto *Base = LVal.getLValueBase().get<const Expr *>();
  const CallExpr *CE = tryUnwrapAllocSizeCall(Base);
  return getBytesReturnedByAllocSizeCall(Ctx, CE, Result);
}
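// Illustration (editorial sketch; the declaration below is hypothetical and
// not part of the vendor sources): given
//
//   void *my_calloc(size_t n, size_t size)
//       __attribute__((alloc_size(1, 2)));
//
// a call my_calloc(n, 4) advertises n * 4 bytes; the umul_ov check above
// rejects element counts and sizes whose product overflows size_t.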
/// \brief Attempts to evaluate the given LValueBase as the result of a call to
/// a function with the alloc_size attribute. If it was possible to do so, this
/// function will return true, make Result's Base point to said function call,
/// and mark Result's Base as invalid.
static bool evaluateLValueAsAllocSize(EvalInfo &Info, APValue::LValueBase Base,
                                      LValue &Result) {
  if (!Info.allowInvalidBaseExpr() || Base.isNull())
    return false;

  // Because we do no form of static analysis, we only support const variables.
  //
  // Additionally, we can't support parameters, nor can we support static
  // variables (in the latter case, use-before-assign isn't UB; in the former,
  // we have no clue what they'll be assigned to).
  const auto *VD =
      dyn_cast_or_null<VarDecl>(Base.dyn_cast<const ValueDecl *>());
  if (!VD || !VD->isLocalVarDecl() || !VD->getType().isConstQualified())
    return false;

  const Expr *Init = VD->getAnyInitializer();
  if (!Init)
    return false;

  const Expr *E = Init->IgnoreParens();
  if (!tryUnwrapAllocSizeCall(E))
    return false;

  // Store E instead of E unwrapped so that the type of the LValue's base is
  // what the user wanted.
  Result.setInvalid(E);

  QualType Pointee = E->getType()->castAs<PointerType>()->getPointeeType();
  Result.addUnsizedArray(Info, Pointee);
  return true;
}

namespace {
class PointerExprEvaluator
  : public ExprEvaluatorBase<PointerExprEvaluator> {
  LValue &Result;

  bool Success(const Expr *E) {
    Result.set(E);
    return true;
  }

  bool visitNonBuiltinCallExpr(const CallExpr *E);
public:

  PointerExprEvaluator(EvalInfo &info, LValue &Result)
    : ExprEvaluatorBaseTy(info), Result(Result) {}

  bool Success(const APValue &V, const Expr *E) {
    Result.setFrom(Info.Ctx, V);
    return true;
  }
  bool ZeroInitialization(const Expr *E) {
    auto Offset = Info.Ctx.getTargetNullPointerValue(E->getType());
    Result.set((Expr*)nullptr, 0, false, true, Offset);
    return true;
  }

  bool VisitBinaryOperator(const BinaryOperator *E);
  bool VisitCastExpr(const CastExpr* E);
  bool VisitUnaryAddrOf(const UnaryOperator *E);
  bool VisitObjCStringLiteral(const ObjCStringLiteral *E)
      { return Success(E); }
  bool VisitObjCBoxedExpr(const ObjCBoxedExpr *E)
      { return Success(E); }
  bool VisitAddrLabelExpr(const AddrLabelExpr *E)
      { return Success(E); }
  bool VisitCallExpr(const CallExpr *E);
  bool VisitBuiltinCallExpr(const CallExpr *E, unsigned BuiltinOp);
  bool VisitBlockExpr(const BlockExpr *E) {
    if (!E->getBlockDecl()->hasCaptures())
      return Success(E);
    return Error(E);
  }
  bool VisitCXXThisExpr(const CXXThisExpr *E) {
    // Can't look at 'this' when checking a potential constant expression.
    if (Info.checkingPotentialConstantExpression())
      return false;
    if (!Info.CurrentCall->This) {
      if (Info.getLangOpts().CPlusPlus11)
        Info.FFDiag(E, diag::note_constexpr_this) << E->isImplicit();
      else
        Info.FFDiag(E);
      return false;
    }
    Result = *Info.CurrentCall->This;
    return true;
  }

  // FIXME: Missing: @protocol, @selector
};
} // end anonymous namespace

static bool EvaluatePointer(const Expr* E, LValue& Result, EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->hasPointerRepresentation());
  return PointerExprEvaluator(Info, Result).Visit(E);
}

bool PointerExprEvaluator::VisitBinaryOperator(const BinaryOperator *E) {
  if (E->getOpcode() != BO_Add &&
      E->getOpcode() != BO_Sub)
    return ExprEvaluatorBaseTy::VisitBinaryOperator(E);

  const Expr *PExp = E->getLHS();
  const Expr *IExp = E->getRHS();
  if (IExp->getType()->isPointerType())
    std::swap(PExp, IExp);

  bool EvalPtrOK = EvaluatePointer(PExp, Result, Info);
  if (!EvalPtrOK && !Info.noteFailure())
    return false;

  llvm::APSInt Offset;
  if (!EvaluateInteger(IExp, Offset, Info) || !EvalPtrOK)
    return false;

  int64_t AdditionalOffset = getExtValue(Offset);
  if (E->getOpcode() == BO_Sub)
    AdditionalOffset = -AdditionalOffset;

  QualType Pointee = PExp->getType()->castAs<PointerType>()->getPointeeType();
  return HandleLValueArrayAdjustment(Info, E, Result, Pointee,
                                     AdditionalOffset);
}

bool PointerExprEvaluator::VisitUnaryAddrOf(const UnaryOperator *E) {
  return EvaluateLValue(E->getSubExpr(), Result, Info);
}

bool PointerExprEvaluator::VisitCastExpr(const CastExpr* E) {
  const Expr* SubExpr = E->getSubExpr();

  switch (E->getCastKind()) {
  default:
    break;

  case CK_BitCast:
  case CK_CPointerToObjCPointerCast:
  case CK_BlockPointerToObjCPointerCast:
  case CK_AnyPointerToBlockPointerCast:
  case CK_AddressSpaceConversion:
    if (!Visit(SubExpr))
      return false;
    // Bitcasts to cv void* are static_casts, not reinterpret_casts, so are
    // permitted in constant expressions in C++11. Bitcasts from cv void* are
    // also static_casts, but we disallow them as a resolution to DR1312.
    if (!E->getType()->isVoidPointerType()) {
      Result.Designator.setInvalid();
      if (SubExpr->getType()->isVoidPointerType())
        CCEDiag(E, diag::note_constexpr_invalid_cast)
          << 3 << SubExpr->getType();
      else
        CCEDiag(E, diag::note_constexpr_invalid_cast) << 2;
    }
    if (E->getCastKind() == CK_AddressSpaceConversion && Result.IsNullPtr)
      ZeroInitialization(E);
    return true;

  case CK_DerivedToBase:
  case CK_UncheckedDerivedToBase:
    if (!EvaluatePointer(E->getSubExpr(), Result, Info))
      return false;
    if (!Result.Base && Result.Offset.isZero())
      return true;

    // Now figure out the necessary offset to add to the base LV to get from
    // the derived class to the base class.
    return HandleLValueBasePath(Info, E, E->getSubExpr()->getType()->
                                  castAs<PointerType>()->getPointeeType(),
                                Result);

  case CK_BaseToDerived:
    if (!Visit(E->getSubExpr()))
      return false;
    if (!Result.Base && Result.Offset.isZero())
      return true;
    return HandleBaseToDerivedCast(Info, E, Result);

  case CK_NullToPointer:
    VisitIgnoredValue(E->getSubExpr());
    return ZeroInitialization(E);

  case CK_IntegralToPointer: {
    CCEDiag(E, diag::note_constexpr_invalid_cast) << 2;

    APValue Value;
    if (!EvaluateIntegerOrLValue(SubExpr, Value, Info))
      break;

    if (Value.isInt()) {
      unsigned Size = Info.Ctx.getTypeSize(E->getType());
      uint64_t N = Value.getInt().extOrTrunc(Size).getZExtValue();
      Result.Base = (Expr*)nullptr;
      Result.InvalidBase = false;
      Result.Offset = CharUnits::fromQuantity(N);
      Result.CallIndex = 0;
      Result.Designator.setInvalid();
      Result.IsNullPtr = false;
      return true;
    } else {
      // Cast is of an lvalue, no need to change value.
      Result.setFrom(Info.Ctx, Value);
      return true;
    }
  }

  case CK_ArrayToPointerDecay:
    if (SubExpr->isGLValue()) {
      if (!EvaluateLValue(SubExpr, Result, Info))
        return false;
    } else {
      Result.set(SubExpr, Info.CurrentCall->Index);
      if (!EvaluateInPlace(Info.CurrentCall->createTemporary(SubExpr, false),
                           Info, Result, SubExpr))
        return false;
    }
    // The result is a pointer to the first element of the array.
    if (const ConstantArrayType *CAT
          = Info.Ctx.getAsConstantArrayType(SubExpr->getType()))
      Result.addArray(Info, E, CAT);
    else
      Result.Designator.setInvalid();
    return true;

  case CK_FunctionToPointerDecay:
    return EvaluateLValue(SubExpr, Result, Info);

  case CK_LValueToRValue: {
    LValue LVal;
    if (!EvaluateLValue(E->getSubExpr(), LVal, Info))
      return false;

    APValue RVal;
    // Note, we use the subexpression's type in order to retain cv-qualifiers.
    if (!handleLValueToRValueConversion(Info, E, E->getSubExpr()->getType(),
                                        LVal, RVal))
      return evaluateLValueAsAllocSize(Info, LVal.Base, Result);
    return Success(RVal, E);
  }
  }

  return ExprEvaluatorBaseTy::VisitCastExpr(E);
}

static CharUnits GetAlignOfType(EvalInfo &Info, QualType T) {
  // C++ [expr.alignof]p3:
  //     When alignof is applied to a reference type, the result is the
  //     alignment of the referenced type.
  if (const ReferenceType *Ref = T->getAs<ReferenceType>())
    T = Ref->getPointeeType();

  // __alignof is defined to return the preferred alignment.
  return Info.Ctx.toCharUnitsFromBits(
    Info.Ctx.getPreferredTypeAlign(T.getTypePtr()));
}

static CharUnits GetAlignOfExpr(EvalInfo &Info, const Expr *E) {
  E = E->IgnoreParens();

  // The kinds of expressions that we have special-case logic here for
  // should be kept up to date with the special checks for those
  // expressions in Sema.

  // alignof decl is always accepted, even if it doesn't make sense: we default
  // to 1 in those cases.
  if (const DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(E))
    return Info.Ctx.getDeclAlign(DRE->getDecl(),
                                 /*RefAsPointee*/true);

  if (const MemberExpr *ME = dyn_cast<MemberExpr>(E))
    return Info.Ctx.getDeclAlign(ME->getMemberDecl(),
                                 /*RefAsPointee*/true);

  return GetAlignOfType(Info, E->getType());
}

// To be clear: this happily visits unsupported builtins. Better name welcomed.
bool PointerExprEvaluator::visitNonBuiltinCallExpr(const CallExpr *E) {
  if (ExprEvaluatorBaseTy::VisitCallExpr(E))
    return true;

  if (!(Info.allowInvalidBaseExpr() && getAllocSizeAttr(E)))
    return false;

  Result.setInvalid(E);
  QualType PointeeTy = E->getType()->castAs<PointerType>()->getPointeeType();
  Result.addUnsizedArray(Info, PointeeTy);
  return true;
}

bool PointerExprEvaluator::VisitCallExpr(const CallExpr *E) {
  if (IsStringLiteralCall(E))
    return Success(E);

  if (unsigned BuiltinOp = E->getBuiltinCallee())
    return VisitBuiltinCallExpr(E, BuiltinOp);

  return visitNonBuiltinCallExpr(E);
}

bool PointerExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
                                                unsigned BuiltinOp) {
  switch (BuiltinOp) {
  case Builtin::BI__builtin_addressof:
    return EvaluateLValue(E->getArg(0), Result, Info);
  case Builtin::BI__builtin_assume_aligned: {
    // We need to be very careful here because: if the pointer does not have the
    // asserted alignment, then the behavior is undefined, and undefined
    // behavior is non-constant.
    if (!EvaluatePointer(E->getArg(0), Result, Info))
      return false;

    LValue OffsetResult(Result);
    APSInt Alignment;
    if (!EvaluateInteger(E->getArg(1), Alignment, Info))
      return false;
    CharUnits Align = CharUnits::fromQuantity(getExtValue(Alignment));

    if (E->getNumArgs() > 2) {
      APSInt Offset;
      if (!EvaluateInteger(E->getArg(2), Offset, Info))
        return false;

      int64_t AdditionalOffset = -getExtValue(Offset);
      OffsetResult.Offset += CharUnits::fromQuantity(AdditionalOffset);
    }

    // If there is a base object, then it must have the correct alignment.
    if (OffsetResult.Base) {
      CharUnits BaseAlignment;
      if (const ValueDecl *VD =
          OffsetResult.Base.dyn_cast<const ValueDecl*>()) {
        BaseAlignment = Info.Ctx.getDeclAlign(VD);
      } else {
        BaseAlignment =
          GetAlignOfExpr(Info, OffsetResult.Base.get<const Expr*>());
      }

      if (BaseAlignment < Align) {
        Result.Designator.setInvalid();
        // FIXME: Quantities here cast to integers because the plural modifier
        // does not work on APSInts yet.
        CCEDiag(E->getArg(0),
                diag::note_constexpr_baa_insufficient_alignment) << 0
          << (int) BaseAlignment.getQuantity()
          << (unsigned) getExtValue(Alignment);
        return false;
      }
    }

    // The offset must also have the correct alignment.
    if (OffsetResult.Offset.alignTo(Align) != OffsetResult.Offset) {
      Result.Designator.setInvalid();
      APSInt Offset(64, false);
      Offset = OffsetResult.Offset.getQuantity();

      if (OffsetResult.Base)
        CCEDiag(E->getArg(0),
                diag::note_constexpr_baa_insufficient_alignment) << 1
          << (int) getExtValue(Offset) << (unsigned) getExtValue(Alignment);
      else
        CCEDiag(E->getArg(0),
                diag::note_constexpr_baa_value_insufficient_alignment)
          << Offset << (unsigned) getExtValue(Alignment);

      return false;
    }

    return true;
  }

  case Builtin::BIstrchr:
  case Builtin::BIwcschr:
  case Builtin::BImemchr:
  case Builtin::BIwmemchr:
    if (Info.getLangOpts().CPlusPlus11)
      Info.CCEDiag(E, diag::note_constexpr_invalid_function)
        << /*isConstexpr*/0 << /*isConstructor*/0
        << (std::string("'") + Info.Ctx.BuiltinInfo.getName(BuiltinOp) + "'");
    else
      Info.CCEDiag(E, diag::note_invalid_subexpr_in_const_expr);
    // Fall through.
  case Builtin::BI__builtin_strchr:
  case Builtin::BI__builtin_wcschr:
  case Builtin::BI__builtin_memchr:
+ case Builtin::BI__builtin_char_memchr:
  case Builtin::BI__builtin_wmemchr: {
    if (!Visit(E->getArg(0)))
      return false;
    APSInt Desired;
    if (!EvaluateInteger(E->getArg(1), Desired, Info))
      return false;
    uint64_t MaxLength = uint64_t(-1);
    if (BuiltinOp != Builtin::BIstrchr &&
        BuiltinOp != Builtin::BIwcschr &&
        BuiltinOp != Builtin::BI__builtin_strchr &&
        BuiltinOp != Builtin::BI__builtin_wcschr) {
      APSInt N;
      if (!EvaluateInteger(E->getArg(2), N, Info))
        return false;
      MaxLength = N.getExtValue();
    }

    QualType CharTy = E->getArg(0)->getType()->getPointeeType();

    // Figure out what value we're actually looking for (after converting to
    // the corresponding unsigned type if necessary).
    uint64_t DesiredVal;
    bool StopAtNull = false;
    switch (BuiltinOp) {
    case Builtin::BIstrchr:
    case Builtin::BI__builtin_strchr:
      // strchr compares directly to the passed integer, and therefore
      // always fails if given an int that is not a char.
      if (!APSInt::isSameValue(HandleIntToIntCast(Info, E, CharTy,
                                                  E->getArg(1)->getType(),
                                                  Desired),
                               Desired))
        return ZeroInitialization(E);
      StopAtNull = true;
      // Fall through.
    case Builtin::BImemchr:
    case Builtin::BI__builtin_memchr:
+   case Builtin::BI__builtin_char_memchr:
      // memchr compares by converting both sides to unsigned char. That's also
      // correct for strchr if we get this far (to cope with plain char being
      // unsigned in the strchr case).
      DesiredVal = Desired.trunc(Info.Ctx.getCharWidth()).getZExtValue();
      break;

    case Builtin::BIwcschr:
    case Builtin::BI__builtin_wcschr:
      StopAtNull = true;
      // Fall through.
    case Builtin::BIwmemchr:
    case Builtin::BI__builtin_wmemchr:
      // wcschr and wmemchr are given a wchar_t to look for. Just use it.
      DesiredVal = Desired.getZExtValue();
      break;
    }

    for (; MaxLength; --MaxLength) {
      APValue Char;
      if (!handleLValueToRValueConversion(Info, E, CharTy, Result, Char) ||
          !Char.isInt())
        return false;
      if (Char.getInt().getZExtValue() == DesiredVal)
        return true;
      if (StopAtNull && !Char.getInt())
        break;
      if (!HandleLValueArrayAdjustment(Info, E, Result, CharTy, 1))
        return false;
    }
    // Not found: return nullptr.
    return ZeroInitialization(E);
  }

  default:
    return visitNonBuiltinCallExpr(E);
  }
}
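// Illustration (editorial sketch, not part of the vendor sources): the search
// loop above is what lets, e.g.,
//
//   constexpr const char *s = "hello";
//   static_assert(__builtin_strchr(s, 'l') == s + 2, "");
//   static_assert(__builtin_memchr(s, 'o', 5) == s + 4, "");
//
// fold during constant evaluation; a character not found before MaxLength
// (or before a terminating null, for the strchr/wcschr forms) yields a null
// pointer.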
//===----------------------------------------------------------------------===//
// Member Pointer Evaluation
//===----------------------------------------------------------------------===//

namespace {
class MemberPointerExprEvaluator
  : public ExprEvaluatorBase<MemberPointerExprEvaluator> {
  MemberPtr &Result;

  bool Success(const ValueDecl *D) {
    Result = MemberPtr(D);
    return true;
  }
public:

  MemberPointerExprEvaluator(EvalInfo &Info, MemberPtr &Result)
    : ExprEvaluatorBaseTy(Info), Result(Result) {}

  bool Success(const APValue &V, const Expr *E) {
    Result.setFrom(V);
    return true;
  }
  bool ZeroInitialization(const Expr *E) {
    return Success((const ValueDecl*)nullptr);
  }

  bool VisitCastExpr(const CastExpr *E);
  bool VisitUnaryAddrOf(const UnaryOperator *E);
};
} // end anonymous namespace

static bool EvaluateMemberPointer(const Expr *E, MemberPtr &Result,
                                  EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isMemberPointerType());
  return MemberPointerExprEvaluator(Info, Result).Visit(E);
}

bool MemberPointerExprEvaluator::VisitCastExpr(const CastExpr *E) {
  switch (E->getCastKind()) {
  default:
    return ExprEvaluatorBaseTy::VisitCastExpr(E);

  case CK_NullToMemberPointer:
    VisitIgnoredValue(E->getSubExpr());
    return ZeroInitialization(E);

  case CK_BaseToDerivedMemberPointer: {
    if (!Visit(E->getSubExpr()))
      return false;
    if (E->path_empty())
      return true;
    // Base-to-derived member pointer casts store the path in derived-to-base
    // order, so iterate backwards. The CXXBaseSpecifier also provides us with
    // the wrong end of the derived->base arc, so stagger the path by one class.
    typedef std::reverse_iterator<CastExpr::path_const_iterator> ReverseIter;
    for (ReverseIter PathI(E->path_end() - 1), PathE(E->path_begin());
         PathI != PathE; ++PathI) {
      assert(!(*PathI)->isVirtual() && "memptr cast through vbase");
      const CXXRecordDecl *Derived = (*PathI)->getType()->getAsCXXRecordDecl();
      if (!Result.castToDerived(Derived))
        return Error(E);
    }
    const Type *FinalTy = E->getType()->castAs<MemberPointerType>()->getClass();
    if (!Result.castToDerived(FinalTy->getAsCXXRecordDecl()))
      return Error(E);
    return true;
  }

  case CK_DerivedToBaseMemberPointer:
    if (!Visit(E->getSubExpr()))
      return false;
    for (CastExpr::path_const_iterator PathI = E->path_begin(),
         PathE = E->path_end(); PathI != PathE; ++PathI) {
      assert(!(*PathI)->isVirtual() && "memptr cast through vbase");
      const CXXRecordDecl *Base = (*PathI)->getType()->getAsCXXRecordDecl();
      if (!Result.castToBase(Base))
        return Error(E);
    }
    return true;
  }
}

bool MemberPointerExprEvaluator::VisitUnaryAddrOf(const UnaryOperator *E) {
  // C++11 [expr.unary.op]p3 has very strict rules on how the address of a
  // member can be formed.
  return Success(cast<DeclRefExpr>(E->getSubExpr())->getDecl());
}

//===----------------------------------------------------------------------===//
// Record Evaluation
//===----------------------------------------------------------------------===//

namespace {
  class RecordExprEvaluator
  : public ExprEvaluatorBase<RecordExprEvaluator> {
    const LValue &This;
    APValue &Result;
  public:

    RecordExprEvaluator(EvalInfo &info, const LValue &This, APValue &Result)
      : ExprEvaluatorBaseTy(info), This(This), Result(Result) {}

    bool Success(const APValue &V, const Expr *E) {
      Result = V;
      return true;
    }
    bool ZeroInitialization(const Expr *E) {
      return ZeroInitialization(E, E->getType());
    }
    bool ZeroInitialization(const Expr *E, QualType T);

    bool VisitCallExpr(const CallExpr *E) {
      return handleCallExpr(E, Result, &This);
    }
    bool VisitCastExpr(const CastExpr *E);
    bool VisitInitListExpr(const InitListExpr *E);
    bool VisitCXXConstructExpr(const CXXConstructExpr *E) {
      return VisitCXXConstructExpr(E, E->getType());
    }
    bool VisitLambdaExpr(const LambdaExpr *E);
    bool VisitCXXInheritedCtorInitExpr(const CXXInheritedCtorInitExpr *E);
    bool VisitCXXConstructExpr(const CXXConstructExpr *E, QualType T);
    bool VisitCXXStdInitializerListExpr(const CXXStdInitializerListExpr *E);
  };
}

/// Perform zero-initialization on an object of non-union class type.
/// C++11 [dcl.init]p5:
///  To zero-initialize an object or reference of type T means:
///  [...]
///  -- if T is a (possibly cv-qualified) non-union class type,
///     each non-static data member and each base-class subobject is
///     zero-initialized
static bool HandleClassZeroInitialization(EvalInfo &Info, const Expr *E,
                                          const RecordDecl *RD,
                                          const LValue &This, APValue &Result) {
  assert(!RD->isUnion() && "Expected non-union class type");
  const CXXRecordDecl *CD = dyn_cast<CXXRecordDecl>(RD);
  Result = APValue(APValue::UninitStruct(), CD ? CD->getNumBases() : 0,
                   std::distance(RD->field_begin(), RD->field_end()));

  if (RD->isInvalidDecl()) return false;
  const ASTRecordLayout &Layout = Info.Ctx.getASTRecordLayout(RD);

  if (CD) {
    unsigned Index = 0;
    for (CXXRecordDecl::base_class_const_iterator I = CD->bases_begin(),
           End = CD->bases_end(); I != End; ++I, ++Index) {
      const CXXRecordDecl *Base = I->getType()->getAsCXXRecordDecl();
      LValue Subobject = This;
      if (!HandleLValueDirectBase(Info, E, Subobject, CD, Base, &Layout))
        return false;
      if (!HandleClassZeroInitialization(Info, E, Base, Subobject,
                                         Result.getStructBase(Index)))
        return false;
    }
  }

  for (const auto *I : RD->fields()) {
    // -- if T is a reference type, no initialization is performed.
    if (I->getType()->isReferenceType())
      continue;

    LValue Subobject = This;
    if (!HandleLValueMember(Info, E, Subobject, I, &Layout))
      return false;

    ImplicitValueInitExpr VIE(I->getType());
    if (!EvaluateInPlace(
          Result.getStructField(I->getFieldIndex()), Info, Subobject, &VIE))
      return false;
  }

  return true;
}

bool RecordExprEvaluator::ZeroInitialization(const Expr *E, QualType T) {
  const RecordDecl *RD = T->castAs<RecordType>()->getDecl();
  if (RD->isInvalidDecl()) return false;
  if (RD->isUnion()) {
    // C++11 [dcl.init]p5: If T is a (possibly cv-qualified) union type, the
    // object's first non-static named data member is zero-initialized
    RecordDecl::field_iterator I = RD->field_begin();
    if (I == RD->field_end()) {
      Result = APValue((const FieldDecl*)nullptr);
      return true;
    }

    LValue Subobject = This;
    if (!HandleLValueMember(Info, E, Subobject, *I))
      return false;
    Result = APValue(*I);
    ImplicitValueInitExpr VIE(I->getType());
    return EvaluateInPlace(Result.getUnionValue(), Info, Subobject, &VIE);
  }

  if (isa<CXXRecordDecl>(RD) && cast<CXXRecordDecl>(RD)->getNumVBases()) {
    Info.FFDiag(E, diag::note_constexpr_virtual_base) << RD;
    return false;
  }

  return HandleClassZeroInitialization(Info, E, RD, This, Result);
}

bool RecordExprEvaluator::VisitCastExpr(const CastExpr *E) {
  switch (E->getCastKind()) {
  default:
    return ExprEvaluatorBaseTy::VisitCastExpr(E);

  case CK_ConstructorConversion:
    return Visit(E->getSubExpr());

  case CK_DerivedToBase:
  case CK_UncheckedDerivedToBase: {
    APValue DerivedObject;
    if (!Evaluate(DerivedObject, Info, E->getSubExpr()))
      return false;
    if (!DerivedObject.isStruct())
      return Error(E->getSubExpr());

    // Derived-to-base rvalue conversion: just slice off the derived part.
    APValue *Value = &DerivedObject;
    const CXXRecordDecl *RD = E->getSubExpr()->getType()->getAsCXXRecordDecl();
    for (CastExpr::path_const_iterator PathI = E->path_begin(),
         PathE = E->path_end(); PathI != PathE; ++PathI) {
      assert(!(*PathI)->isVirtual() && "record rvalue with virtual base");
      const CXXRecordDecl *Base = (*PathI)->getType()->getAsCXXRecordDecl();
      Value = &Value->getStructBase(getBaseIndex(RD, Base));
      RD = Base;
    }
    Result = *Value;
    return true;
  }
  }
}
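// Illustration (editorial sketch, not part of the vendor sources): per the
// [dcl.init]p5 rules implemented above, value-initializing
//
//   union U { int x; float y; };
//   struct S { U u; int arr[2]; };
//
// zero-initializes only U's first named member (x), and zero-initializes
// each base and member of S recursively.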
bool RecordExprEvaluator::VisitInitListExpr(const InitListExpr *E) {
  if (E->isTransparent())
    return Visit(E->getInit(0));

  const RecordDecl *RD = E->getType()->castAs<RecordType>()->getDecl();
  if (RD->isInvalidDecl()) return false;
  const ASTRecordLayout &Layout = Info.Ctx.getASTRecordLayout(RD);

  if (RD->isUnion()) {
    const FieldDecl *Field = E->getInitializedFieldInUnion();
    Result = APValue(Field);
    if (!Field)
      return true;

    // If the initializer list for a union does not contain any elements, the
    // first element of the union is value-initialized.
    // FIXME: The element should be initialized from an initializer list.
    //        Is this difference ever observable for initializer lists which
    //        we don't build?
    ImplicitValueInitExpr VIE(Field->getType());
    const Expr *InitExpr = E->getNumInits() ? E->getInit(0) : &VIE;

    LValue Subobject = This;
    if (!HandleLValueMember(Info, InitExpr, Subobject, Field, &Layout))
      return false;

    // Temporarily override This, in case there's a CXXDefaultInitExpr in here.
    ThisOverrideRAII ThisOverride(*Info.CurrentCall, &This,
                                  isa<CXXDefaultInitExpr>(InitExpr));

    return EvaluateInPlace(Result.getUnionValue(), Info, Subobject, InitExpr);
  }

  auto *CXXRD = dyn_cast<CXXRecordDecl>(RD);
  if (Result.isUninit())
    Result = APValue(APValue::UninitStruct(), CXXRD ? CXXRD->getNumBases() : 0,
                     std::distance(RD->field_begin(), RD->field_end()));
  unsigned ElementNo = 0;
  bool Success = true;

  // Initialize base classes.
  if (CXXRD) {
    for (const auto &Base : CXXRD->bases()) {
      assert(ElementNo < E->getNumInits() && "missing init for base class");
      const Expr *Init = E->getInit(ElementNo);

      LValue Subobject = This;
      if (!HandleLValueBase(Info, Init, Subobject, CXXRD, &Base))
        return false;

      APValue &FieldVal = Result.getStructBase(ElementNo);
      if (!EvaluateInPlace(FieldVal, Info, Subobject, Init)) {
        if (!Info.noteFailure())
          return false;
        Success = false;
      }
      ++ElementNo;
    }
  }

  // Initialize members.
  for (const auto *Field : RD->fields()) {
    // Anonymous bit-fields are not considered members of the class for
    // purposes of aggregate initialization.
    if (Field->isUnnamedBitfield())
      continue;

    LValue Subobject = This;

    bool HaveInit = ElementNo < E->getNumInits();

    // FIXME: Diagnostics here should point to the end of the initializer
    // list, not the start.
    if (!HandleLValueMember(Info, HaveInit ? E->getInit(ElementNo) : E,
                            Subobject, Field, &Layout))
      return false;

    // Perform an implicit value-initialization for members beyond the end of
    // the initializer list.
    ImplicitValueInitExpr VIE(HaveInit ? Info.Ctx.IntTy : Field->getType());
    const Expr *Init = HaveInit ? E->getInit(ElementNo++) : &VIE;

    // Temporarily override This, in case there's a CXXDefaultInitExpr in here.
    ThisOverrideRAII ThisOverride(*Info.CurrentCall, &This,
                                  isa<CXXDefaultInitExpr>(Init));

    APValue &FieldVal = Result.getStructField(Field->getFieldIndex());
    if (!EvaluateInPlace(FieldVal, Info, Subobject, Init) ||
        (Field->isBitField() && !truncateBitfieldValue(Info, Init,
                                                       FieldVal, Field))) {
      if (!Info.noteFailure())
        return false;
      Success = false;
    }
  }

  return Success;
}

bool RecordExprEvaluator::VisitCXXConstructExpr(const CXXConstructExpr *E,
                                                QualType T) {
  // Note that E's type is not necessarily the type of our class here; we might
  // be initializing an array element instead.
  const CXXConstructorDecl *FD = E->getConstructor();
  if (FD->isInvalidDecl() || FD->getParent()->isInvalidDecl()) return false;

  bool ZeroInit = E->requiresZeroInitialization();
  if (CheckTrivialDefaultConstructor(Info, E->getExprLoc(), FD, ZeroInit)) {
    // If we've already performed zero-initialization, we're already done.
    if (!Result.isUninit())
      return true;

    // We can get here in two different ways:
    //  1) We're performing value-initialization, and should zero-initialize
    //     the object, or
    //  2) We're performing default-initialization of an object with a trivial
    //     constexpr default constructor, in which case we should start the
    //     lifetimes of all the base subobjects (there can be no data member
    //     subobjects in this case) per [basic.life]p1.
    // Either way, ZeroInitialization is appropriate.
    return ZeroInitialization(E, T);
  }

  const FunctionDecl *Definition = nullptr;
  auto Body = FD->getBody(Definition);

  if (!CheckConstexprFunction(Info, E->getExprLoc(), FD, Definition, Body))
    return false;

  // Avoid materializing a temporary for an elidable copy/move constructor.
  if (E->isElidable() && !ZeroInit)
    if (const MaterializeTemporaryExpr *ME
          = dyn_cast<MaterializeTemporaryExpr>(E->getArg(0)))
      return Visit(ME->GetTemporaryExpr());

  if (ZeroInit && !ZeroInitialization(E, T))
    return false;

  auto Args = llvm::makeArrayRef(E->getArgs(), E->getNumArgs());
  return HandleConstructorCall(E, This, Args,
                               cast<CXXConstructorDecl>(Definition), Info,
                               Result);
}

bool RecordExprEvaluator::VisitCXXInheritedCtorInitExpr(
    const CXXInheritedCtorInitExpr *E) {
  if (!Info.CurrentCall) {
    assert(Info.checkingPotentialConstantExpression());
    return false;
  }

  const CXXConstructorDecl *FD = E->getConstructor();
  if (FD->isInvalidDecl() || FD->getParent()->isInvalidDecl())
    return false;

  const FunctionDecl *Definition = nullptr;
  auto Body = FD->getBody(Definition);

  if (!CheckConstexprFunction(Info, E->getExprLoc(), FD, Definition, Body))
    return false;

  return HandleConstructorCall(E, This, Info.CurrentCall->Arguments,
                               cast<CXXConstructorDecl>(Definition), Info,
                               Result);
}

bool RecordExprEvaluator::VisitCXXStdInitializerListExpr(
    const CXXStdInitializerListExpr *E) {
  const ConstantArrayType *ArrayType =
      Info.Ctx.getAsConstantArrayType(E->getSubExpr()->getType());

  LValue Array;
  if (!EvaluateLValue(E->getSubExpr(), Array, Info))
    return false;

  // Get a pointer to the first element of the array.
  Array.addArray(Info, E, ArrayType);

  // FIXME: Perform the checks on the field types in SemaInit.
  RecordDecl *Record = E->getType()->castAs<RecordType>()->getDecl();
  RecordDecl::field_iterator Field = Record->field_begin();
  if (Field == Record->field_end())
    return Error(E);

  // Start pointer.
  if (!Field->getType()->isPointerType() ||
      !Info.Ctx.hasSameType(Field->getType()->getPointeeType(),
                            ArrayType->getElementType()))
    return Error(E);

  // FIXME: What if the initializer_list type has base classes, etc?
  Result = APValue(APValue::UninitStruct(), 0, 2);
  Array.moveInto(Result.getStructField(0));

  if (++Field == Record->field_end())
    return Error(E);

  if (Field->getType()->isPointerType() &&
      Info.Ctx.hasSameType(Field->getType()->getPointeeType(),
                           ArrayType->getElementType())) {
    // End pointer.
    if (!HandleLValueArrayAdjustment(Info, E, Array,
                                     ArrayType->getElementType(),
                                     ArrayType->getSize().getZExtValue()))
      return false;
    Array.moveInto(Result.getStructField(1));
  } else if (Info.Ctx.hasSameType(Field->getType(), Info.Ctx.getSizeType()))
    // Length.
    Result.getStructField(1) = APValue(APSInt(ArrayType->getSize()));
  else
    return Error(E);

  if (++Field != Record->field_end())
    return Error(E);

  return true;
}

bool RecordExprEvaluator::VisitLambdaExpr(const LambdaExpr *E) {
  const CXXRecordDecl *ClosureClass = E->getLambdaClass();
  if (ClosureClass->isInvalidDecl()) return false;

  if (Info.checkingPotentialConstantExpression()) return true;
  if (E->capture_size()) {
    Info.FFDiag(E, diag::note_unimplemented_constexpr_lambda_feature_ast)
        << "can not evaluate lambda expressions with captures";
    return false;
  }
  // FIXME: Implement captures.
  Result = APValue(APValue::UninitStruct(), /*NumBases*/0, /*NumFields*/0);
  return true;
}

static bool EvaluateRecord(const Expr *E, const LValue &This,
                           APValue &Result, EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isRecordType() &&
         "can't evaluate expression as a record rvalue");
  return RecordExprEvaluator(Info, This, Result).Visit(E);
}

//===----------------------------------------------------------------------===//
// Temporary Evaluation
//
// Temporaries are represented in the AST as rvalues, but generally behave like
// lvalues. The full-object of which the temporary is a subobject is implicitly
// materialized so that a reference can bind to it.
//===----------------------------------------------------------------------===//
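// Illustration (editorial sketch, not part of the vendor sources): e.g. in
//
//   struct S { int x; };
//   constexpr int n = S{3}.x;
//
// the prvalue S{3} is evaluated as a temporary so that the member access
// has a complete object to read from.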
namespace {
class TemporaryExprEvaluator
  : public LValueExprEvaluatorBase<TemporaryExprEvaluator> {
public:
  TemporaryExprEvaluator(EvalInfo &Info, LValue &Result) :
    LValueExprEvaluatorBaseTy(Info, Result) {}

  /// Visit an expression which constructs the value of this temporary.
  bool VisitConstructExpr(const Expr *E) {
    Result.set(E, Info.CurrentCall->Index);
    return EvaluateInPlace(Info.CurrentCall->createTemporary(E, false),
                           Info, Result, E);
  }

  bool VisitCastExpr(const CastExpr *E) {
    switch (E->getCastKind()) {
    default:
      return LValueExprEvaluatorBaseTy::VisitCastExpr(E);

    case CK_ConstructorConversion:
      return VisitConstructExpr(E->getSubExpr());
    }
  }
  bool VisitInitListExpr(const InitListExpr *E) {
    return VisitConstructExpr(E);
  }
  bool VisitCXXConstructExpr(const CXXConstructExpr *E) {
    return VisitConstructExpr(E);
  }
  bool VisitCallExpr(const CallExpr *E) {
    return VisitConstructExpr(E);
  }
  bool VisitCXXStdInitializerListExpr(const CXXStdInitializerListExpr *E) {
    return VisitConstructExpr(E);
  }
  bool VisitLambdaExpr(const LambdaExpr *E) {
    return VisitConstructExpr(E);
  }
};
} // end anonymous namespace

/// Evaluate an expression of record type as a temporary.
static bool EvaluateTemporary(const Expr *E, LValue &Result, EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isRecordType());
  return TemporaryExprEvaluator(Info, Result).Visit(E);
}

//===----------------------------------------------------------------------===//
// Vector Evaluation
//===----------------------------------------------------------------------===//

namespace {
  class VectorExprEvaluator
  : public ExprEvaluatorBase<VectorExprEvaluator> {
    APValue &Result;
  public:

    VectorExprEvaluator(EvalInfo &info, APValue &Result)
      : ExprEvaluatorBaseTy(info), Result(Result) {}

    bool Success(ArrayRef<APValue> V, const Expr *E) {
      assert(V.size() == E->getType()->castAs<VectorType>()->getNumElements());
      // FIXME: remove this APValue copy.
      Result = APValue(V.data(), V.size());
      return true;
    }
    bool Success(const APValue &V, const Expr *E) {
      assert(V.isVector());
      Result = V;
      return true;
    }
    bool ZeroInitialization(const Expr *E);

    bool VisitUnaryReal(const UnaryOperator *E)
      { return Visit(E->getSubExpr()); }
    bool VisitCastExpr(const CastExpr* E);
    bool VisitInitListExpr(const InitListExpr *E);
    bool VisitUnaryImag(const UnaryOperator *E);
    // FIXME: Missing: unary -, unary ~, binary add/sub/mul/div,
    //                 binary comparisons, binary and/or/xor,
    //                 shufflevector, ExtVectorElementExpr
  };
} // end anonymous namespace

static bool EvaluateVector(const Expr* E, APValue& Result, EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isVectorType() &&"not a vector rvalue");
  return VectorExprEvaluator(Info, Result).Visit(E);
}

bool VectorExprEvaluator::VisitCastExpr(const CastExpr *E) {
  const VectorType *VTy = E->getType()->castAs<VectorType>();
  unsigned NElts = VTy->getNumElements();

  const Expr *SE = E->getSubExpr();
  QualType SETy = SE->getType();

  switch (E->getCastKind()) {
  case CK_VectorSplat: {
    APValue Val = APValue();
    if (SETy->isIntegerType()) {
      APSInt IntResult;
      if (!EvaluateInteger(SE, IntResult, Info))
        return false;
      Val = APValue(std::move(IntResult));
    } else if (SETy->isRealFloatingType()) {
      APFloat FloatResult(0.0);
      if (!EvaluateFloat(SE, FloatResult, Info))
        return false;
      Val = APValue(std::move(FloatResult));
    } else {
      return Error(E);
    }

    // Splat and create vector APValue.
    SmallVector<APValue, 4> Elts(NElts, Val);
    return Success(Elts, E);
  }
  case CK_BitCast: {
    // Evaluate the operand into an APInt we can extract from.
    llvm::APInt SValInt;
    if (!EvalAndBitcastToAPInt(Info, SE, SValInt))
      return false;

    // Extract the elements
    QualType EltTy = VTy->getElementType();
    unsigned EltSize = Info.Ctx.getTypeSize(EltTy);
    bool BigEndian = Info.Ctx.getTargetInfo().isBigEndian();
    SmallVector<APValue, 4> Elts;
    if (EltTy->isRealFloatingType()) {
      const llvm::fltSemantics &Sem = Info.Ctx.getFloatTypeSemantics(EltTy);
      unsigned FloatEltSize = EltSize;
      if (&Sem == &APFloat::x87DoubleExtended())
        FloatEltSize = 80;
      for (unsigned i = 0; i < NElts; i++) {
        llvm::APInt Elt;
        if (BigEndian)
          Elt = SValInt.rotl(i*EltSize+FloatEltSize).trunc(FloatEltSize);
        else
          Elt = SValInt.rotr(i*EltSize).trunc(FloatEltSize);
        Elts.push_back(APValue(APFloat(Sem, Elt)));
      }
    } else if (EltTy->isIntegerType()) {
      for (unsigned i = 0; i < NElts; i++) {
        llvm::APInt Elt;
        if (BigEndian)
          Elt = SValInt.rotl(i*EltSize+EltSize).zextOrTrunc(EltSize);
        else
          Elt = SValInt.rotr(i*EltSize).zextOrTrunc(EltSize);
        Elts.push_back(APValue(APSInt(Elt, EltTy->isSignedIntegerType())));
      }
    } else {
      return Error(E);
    }
    return Success(Elts, E);
  }
  default:
    return ExprEvaluatorBaseTy::VisitCastExpr(E);
  }
}

bool
VectorExprEvaluator::VisitInitListExpr(const InitListExpr *E) {
  const VectorType *VT = E->getType()->castAs<VectorType>();
  unsigned NumInits = E->getNumInits();
  unsigned NumElements = VT->getNumElements();

  QualType EltTy = VT->getElementType();
  SmallVector<APValue, 4> Elements;

  // The number of initializers can be less than the number of
  // vector elements. For OpenCL, this can be due to nested vector
  // initialization. For GCC compatibility, missing trailing elements
  // should be initialized with zeroes.
  unsigned CountInits = 0, CountElts = 0;
  while (CountElts < NumElements) {
    // Handle nested vector initialization.
    if (CountInits < NumInits
        && E->getInit(CountInits)->getType()->isVectorType()) {
      APValue v;
      if (!EvaluateVector(E->getInit(CountInits), v, Info))
        return Error(E);
      unsigned vlen = v.getVectorLength();
      for (unsigned j = 0; j < vlen; j++)
        Elements.push_back(v.getVectorElt(j));
      CountElts += vlen;
    } else if (EltTy->isIntegerType()) {
      llvm::APSInt sInt(32);
      if (CountInits < NumInits) {
        if (!EvaluateInteger(E->getInit(CountInits), sInt, Info))
          return false;
      } else // trailing integer zero.
        sInt = Info.Ctx.MakeIntValue(0, EltTy);
      Elements.push_back(APValue(sInt));
      CountElts++;
    } else {
      llvm::APFloat f(0.0);
      if (CountInits < NumInits) {
        if (!EvaluateFloat(E->getInit(CountInits), f, Info))
          return false;
      } else // trailing float zero.
        f = APFloat::getZero(Info.Ctx.getFloatTypeSemantics(EltTy));
      Elements.push_back(APValue(f));
      CountElts++;
    }
    CountInits++;
  }
  return Success(Elements, E);
}

bool
VectorExprEvaluator::ZeroInitialization(const Expr *E) {
  const VectorType *VT = E->getType()->getAs<VectorType>();
  QualType EltTy = VT->getElementType();
  APValue ZeroElement;
  if (EltTy->isIntegerType())
    ZeroElement = APValue(Info.Ctx.MakeIntValue(0, EltTy));
  else
    ZeroElement =
        APValue(APFloat::getZero(Info.Ctx.getFloatTypeSemantics(EltTy)));

  SmallVector<APValue, 4> Elements(VT->getNumElements(), ZeroElement);
  return Success(Elements, E);
}

bool VectorExprEvaluator::VisitUnaryImag(const UnaryOperator *E) {
  VisitIgnoredValue(E->getSubExpr());
  return ZeroInitialization(E);
}

//===----------------------------------------------------------------------===//
// Array Evaluation
//===----------------------------------------------------------------------===//

namespace {
  class ArrayExprEvaluator
  : public ExprEvaluatorBase<ArrayExprEvaluator> {
    const LValue &This;
    APValue &Result;
  public:

    ArrayExprEvaluator(EvalInfo &Info, const LValue &This, APValue &Result)
      : ExprEvaluatorBaseTy(Info), This(This), Result(Result) {}

    bool Success(const APValue &V, const Expr *E) {
      assert((V.isArray() || V.isLValue()) &&
             "expected array or string literal");
      Result = V;
      return true;
    }

    bool ZeroInitialization(const Expr *E) {
      const ConstantArrayType *CAT =
          Info.Ctx.getAsConstantArrayType(E->getType());
      if (!CAT)
        return Error(E);

      Result = APValue(APValue::UninitArray(), 0,
                       CAT->getSize().getZExtValue());
      if (!Result.hasArrayFiller()) return true;

      // Zero-initialize all elements.
      LValue Subobject = This;
      Subobject.addArray(Info, E, CAT);
      ImplicitValueInitExpr VIE(CAT->getElementType());
      return EvaluateInPlace(Result.getArrayFiller(), Info, Subobject, &VIE);
    }

    bool VisitCallExpr(const CallExpr *E) {
      return handleCallExpr(E, Result, &This);
    }
    bool VisitInitListExpr(const InitListExpr *E);
    bool VisitArrayInitLoopExpr(const ArrayInitLoopExpr *E);
    bool VisitCXXConstructExpr(const CXXConstructExpr *E);
    bool VisitCXXConstructExpr(const CXXConstructExpr *E,
                               const LValue &Subobject,
                               APValue *Value, QualType Type);
  };
} // end anonymous namespace

static bool EvaluateArray(const Expr *E, const LValue &This,
                          APValue &Result, EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isArrayType() && "not an array rvalue");
  return ArrayExprEvaluator(Info, This, Result).Visit(E);
}
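// Illustration (editorial sketch, not part of the vendor sources): the
// initializer-list handling below covers both the braced string-literal form
// of C++11 [dcl.init.string] and ordinary aggregate initialization, e.g.
//
//   constexpr char buf[6] = { "hello" };  // string-literal init in braces
//   constexpr int arr[4] = { 1, 2 };      // arr[2], arr[3] come from the
//                                         // filler (value-initialized to 0)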
bool ArrayExprEvaluator::VisitInitListExpr(const InitListExpr *E) {
  const ConstantArrayType *CAT = Info.Ctx.getAsConstantArrayType(E->getType());
  if (!CAT)
    return Error(E);

  // C++11 [dcl.init.string]p1: A char array [...] can be initialized by [...]
  // an appropriately-typed string literal enclosed in braces.
  if (E->isStringLiteralInit()) {
    LValue LV;
    if (!EvaluateLValue(E->getInit(0), LV, Info))
      return false;
    APValue Val;
    LV.moveInto(Val);
    return Success(Val, E);
  }

  bool Success = true;

  assert((!Result.isArray() || Result.getArrayInitializedElts() == 0) &&
         "zero-initialized array shouldn't have any initialized elts");
  APValue Filler;
  if (Result.isArray() && Result.hasArrayFiller())
    Filler = Result.getArrayFiller();

  unsigned NumEltsToInit = E->getNumInits();
  unsigned NumElts = CAT->getSize().getZExtValue();
  const Expr *FillerExpr = E->hasArrayFiller() ? E->getArrayFiller() : nullptr;

  // If the initializer might depend on the array index, run it for each
  // array element. For now, just whitelist non-class value-initialization.
  if (NumEltsToInit != NumElts && !isa<ImplicitValueInitExpr>(FillerExpr))
    NumEltsToInit = NumElts;

  Result = APValue(APValue::UninitArray(), NumEltsToInit, NumElts);

  // If the array was previously zero-initialized, preserve the
  // zero-initialized values.
  if (!Filler.isUninit()) {
    for (unsigned I = 0, E = Result.getArrayInitializedElts(); I != E; ++I)
      Result.getArrayInitializedElt(I) = Filler;
    if (Result.hasArrayFiller())
      Result.getArrayFiller() = Filler;
  }

  LValue Subobject = This;
  Subobject.addArray(Info, E, CAT);
  for (unsigned Index = 0; Index != NumEltsToInit; ++Index) {
    const Expr *Init =
        Index < E->getNumInits() ? E->getInit(Index) : FillerExpr;
    if (!EvaluateInPlace(Result.getArrayInitializedElt(Index),
                         Info, Subobject, Init) ||
        !HandleLValueArrayAdjustment(Info, Init, Subobject,
                                     CAT->getElementType(), 1)) {
      if (!Info.noteFailure())
        return false;
      Success = false;
    }
  }

  if (!Result.hasArrayFiller())
    return Success;

  // If we get here, we have a trivial filler, which we can just evaluate
  // once and splat over the rest of the array elements.
  assert(FillerExpr && "no array filler for incomplete init list");
  return EvaluateInPlace(Result.getArrayFiller(), Info, Subobject,
                         FillerExpr) && Success;
}

bool ArrayExprEvaluator::VisitArrayInitLoopExpr(const ArrayInitLoopExpr *E) {
  if (E->getCommonExpr() &&
      !Evaluate(Info.CurrentCall->createTemporary(E->getCommonExpr(), false),
                Info, E->getCommonExpr()->getSourceExpr()))
    return false;

  auto *CAT = cast<ConstantArrayType>(E->getType()->castAsArrayTypeUnsafe());

  uint64_t Elements = CAT->getSize().getZExtValue();
  Result = APValue(APValue::UninitArray(), Elements, Elements);

  LValue Subobject = This;
  Subobject.addArray(Info, E, CAT);

  bool Success = true;
  for (EvalInfo::ArrayInitLoopIndex Index(Info); Index != Elements; ++Index) {
    if (!EvaluateInPlace(Result.getArrayInitializedElt(Index),
                         Info, Subobject, E->getSubExpr()) ||
        !HandleLValueArrayAdjustment(Info, E, Subobject,
                                     CAT->getElementType(), 1)) {
      if (!Info.noteFailure())
        return false;
      Success = false;
    }
  }

  return Success;
}

bool ArrayExprEvaluator::VisitCXXConstructExpr(const CXXConstructExpr *E) {
  return VisitCXXConstructExpr(E, This, &Result, E->getType());
}

bool ArrayExprEvaluator::VisitCXXConstructExpr(const CXXConstructExpr *E,
                                               const LValue &Subobject,
                                               APValue *Value,
                                               QualType Type) {
  bool HadZeroInit = !Value->isUninit();

  if (const ConstantArrayType *CAT = Info.Ctx.getAsConstantArrayType(Type)) {
    unsigned N = CAT->getSize().getZExtValue();

    // Preserve the array filler if we had prior zero-initialization.
    APValue Filler =
      HadZeroInit && Value->hasArrayFiller() ? Value->getArrayFiller()
                                             : APValue();

    *Value = APValue(APValue::UninitArray(), N, N);

    if (HadZeroInit)
      for (unsigned I = 0; I != N; ++I)
        Value->getArrayInitializedElt(I) = Filler;

    // Initialize the elements.
    LValue ArrayElt = Subobject;
    ArrayElt.addArray(Info, E, CAT);
    for (unsigned I = 0; I != N; ++I)
      if (!VisitCXXConstructExpr(E, ArrayElt,
                                 &Value->getArrayInitializedElt(I),
                                 CAT->getElementType()) ||
          !HandleLValueArrayAdjustment(Info, E, ArrayElt,
                                       CAT->getElementType(), 1))
        return false;

    return true;
  }

  if (!Type->isRecordType())
    return Error(E);

  return RecordExprEvaluator(Info, Subobject, *Value)
             .VisitCXXConstructExpr(E, Type);
}

//===----------------------------------------------------------------------===//
// Integer Evaluation
//
// As a GNU extension, we support casting pointers to sufficiently-wide integer
// types and back in constant folding. Integer values are thus represented
// either as an integer-valued APValue, or as an lvalue-valued APValue.
//===----------------------------------------------------------------------===//
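// Illustration (editorial sketch, not part of the vendor sources): under the
// GNU extension described above, an expression such as
//
//   (intptr_t)"foo" + 1
//
// folds to an lvalue-valued APValue (base: the string literal, offset: 1)
// rather than to a concrete integer, and can round-trip back through a
// pointer cast.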
namespace {
class IntExprEvaluator
  : public ExprEvaluatorBase<IntExprEvaluator> {
  APValue &Result;
public:
  IntExprEvaluator(EvalInfo &info, APValue &result)
    : ExprEvaluatorBaseTy(info), Result(result) {}

  bool Success(const llvm::APSInt &SI, const Expr *E, APValue &Result) {
    assert(E->getType()->isIntegralOrEnumerationType() &&
           "Invalid evaluation result.");
    assert(SI.isSigned() == E->getType()->isSignedIntegerOrEnumerationType() &&
           "Invalid evaluation result.");
    assert(SI.getBitWidth() == Info.Ctx.getIntWidth(E->getType()) &&
           "Invalid evaluation result.");
    Result = APValue(SI);
    return true;
  }
  bool Success(const llvm::APSInt &SI, const Expr *E) {
    return Success(SI, E, Result);
  }

  bool Success(const llvm::APInt &I, const Expr *E, APValue &Result) {
    assert(E->getType()->isIntegralOrEnumerationType() &&
           "Invalid evaluation result.");
    assert(I.getBitWidth() == Info.Ctx.getIntWidth(E->getType()) &&
           "Invalid evaluation result.");
    Result = APValue(APSInt(I));
    Result.getInt().setIsUnsigned(
        E->getType()->isUnsignedIntegerOrEnumerationType());
    return true;
  }
  bool Success(const llvm::APInt &I, const Expr *E) {
    return Success(I, E, Result);
  }

  bool Success(uint64_t Value, const Expr *E, APValue &Result) {
    assert(E->getType()->isIntegralOrEnumerationType() &&
           "Invalid evaluation result.");
    Result = APValue(Info.Ctx.MakeIntValue(Value, E->getType()));
    return true;
  }
  bool Success(uint64_t Value, const Expr *E) {
    return Success(Value, E, Result);
  }

  bool Success(CharUnits Size, const Expr *E) {
    return Success(Size.getQuantity(), E);
  }

  bool Success(const APValue &V, const Expr *E) {
    if (V.isLValue() || V.isAddrLabelDiff()) {
      Result = V;
      return true;
    }
    return Success(V.getInt(), E);
  }

  bool ZeroInitialization(const Expr *E) { return Success(0, E); }

  //===--------------------------------------------------------------------===//
  //                            Visitor Methods
  //===--------------------------------------------------------------------===//

  bool VisitIntegerLiteral(const IntegerLiteral *E) {
    return Success(E->getValue(), E);
  }
  bool VisitCharacterLiteral(const CharacterLiteral *E) {
    return Success(E->getValue(), E);
  }

  bool CheckReferencedDecl(const Expr *E, const Decl *D);
  bool VisitDeclRefExpr(const DeclRefExpr *E) {
    if (CheckReferencedDecl(E, E->getDecl()))
      return true;

    return ExprEvaluatorBaseTy::VisitDeclRefExpr(E);
  }
  bool VisitMemberExpr(const MemberExpr *E) {
    if (CheckReferencedDecl(E, E->getMemberDecl())) {
      VisitIgnoredBaseExpression(E->getBase());
      return true;
    }

    return ExprEvaluatorBaseTy::VisitMemberExpr(E);
  }

  bool VisitCallExpr(const CallExpr *E);
  bool VisitBuiltinCallExpr(const CallExpr *E, unsigned BuiltinOp);
  bool VisitBinaryOperator(const BinaryOperator *E);
  bool VisitOffsetOfExpr(const OffsetOfExpr *E);
  bool VisitUnaryOperator(const UnaryOperator *E);

  bool VisitCastExpr(const CastExpr* E);
  bool VisitUnaryExprOrTypeTraitExpr(const UnaryExprOrTypeTraitExpr *E);

  bool VisitCXXBoolLiteralExpr(const CXXBoolLiteralExpr *E) {
    return Success(E->getValue(), E);
  }

  bool VisitObjCBoolLiteralExpr(const ObjCBoolLiteralExpr *E) {
    return Success(E->getValue(), E);
  }

  bool VisitArrayInitIndexExpr(const ArrayInitIndexExpr *E) {
    if (Info.ArrayInitIndex == uint64_t(-1)) {
      // We were asked to evaluate this subexpression independent of the
      // enclosing ArrayInitLoopExpr. We can't do that.
      Info.FFDiag(E);
      return false;
    }
    return Success(Info.ArrayInitIndex, E);
  }

  // Note, GNU defines __null as an integer, not a pointer.
  // Note, GNU defines __null as an integer, not a pointer.
  bool VisitGNUNullExpr(const GNUNullExpr *E) {
    return ZeroInitialization(E);
  }

  bool VisitTypeTraitExpr(const TypeTraitExpr *E) {
    return Success(E->getValue(), E);
  }

  bool VisitArrayTypeTraitExpr(const ArrayTypeTraitExpr *E) {
    return Success(E->getValue(), E);
  }

  bool VisitExpressionTraitExpr(const ExpressionTraitExpr *E) {
    return Success(E->getValue(), E);
  }

  bool VisitUnaryReal(const UnaryOperator *E);
  bool VisitUnaryImag(const UnaryOperator *E);

  bool VisitCXXNoexceptExpr(const CXXNoexceptExpr *E);
  bool VisitSizeOfPackExpr(const SizeOfPackExpr *E);

  // FIXME: Missing: array subscript of vector, member of vector
};
} // end anonymous namespace

/// EvaluateIntegerOrLValue - Evaluate an rvalue integral-typed expression, and
/// produce either the integer value or a pointer.
///
/// GCC has a heinous extension which folds casts between pointer types and
/// pointer-sized integral types. We support this by allowing the evaluation of
/// an integer rvalue to produce a pointer (represented as an lvalue) instead.
/// Some simple arithmetic on such values is supported (they are treated much
/// like char*).
static bool EvaluateIntegerOrLValue(const Expr *E, APValue &Result,
                                    EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isIntegralOrEnumerationType());
  return IntExprEvaluator(Info, Result).Visit(E);
}

static bool EvaluateInteger(const Expr *E, APSInt &Result, EvalInfo &Info) {
  APValue Val;
  if (!EvaluateIntegerOrLValue(E, Val, Info))
    return false;
  if (!Val.isInt()) {
    // FIXME: It would be better to produce the diagnostic for casting
    //        a pointer to an integer.
    Info.FFDiag(E, diag::note_invalid_subexpr_in_const_expr);
    return false;
  }
  Result = Val.getInt();
  return true;
}

/// Check whether the given declaration can be directly converted to an
/// integral rvalue. If not, no diagnostic is produced; there are other things
/// we can try.
bool IntExprEvaluator::CheckReferencedDecl(const Expr* E, const Decl* D) {
  // Enums are integer constant exprs.
  if (const EnumConstantDecl *ECD = dyn_cast<EnumConstantDecl>(D)) {
    // Check for signedness/width mismatches between E type and ECD value.
    bool SameSign = (ECD->getInitVal().isSigned()
                     == E->getType()->isSignedIntegerOrEnumerationType());
    bool SameWidth = (ECD->getInitVal().getBitWidth()
                      == Info.Ctx.getIntWidth(E->getType()));
    if (SameSign && SameWidth)
      return Success(ECD->getInitVal(), E);
    else {
      // Get rid of mismatch (otherwise Success assertions will fail)
      // by computing a new value matching the type of E.
      llvm::APSInt Val = ECD->getInitVal();
      if (!SameSign)
        Val.setIsSigned(!ECD->getInitVal().isSigned());
      if (!SameWidth)
        Val = Val.extOrTrunc(Info.Ctx.getIntWidth(E->getType()));
      return Success(Val, E);
    }
  }
  return false;
}

/// EvaluateBuiltinClassifyType - Evaluate __builtin_classify_type the same way
/// as GCC.
static int EvaluateBuiltinClassifyType(const CallExpr *E,
                                       const LangOptions &LangOpts) {

  // The following enum mimics the values returned by GCC.
  // FIXME: Does GCC differ between lvalue and rvalue references here?
  enum gcc_type_class {
    no_type_class = -1,
    void_type_class, integer_type_class, char_type_class,
    enumeral_type_class, boolean_type_class,
    pointer_type_class, reference_type_class, offset_type_class,
    real_type_class, complex_type_class,
    function_type_class, method_type_class,
    record_type_class, union_type_class,
    array_type_class, string_type_class,
    lang_type_class
  };

  // If no argument was supplied, default to "no_type_class". This isn't
  // ideal, however it is what gcc does.
  if (E->getNumArgs() == 0)
    return no_type_class;

  QualType CanTy = E->getArg(0)->getType().getCanonicalType();
  const BuiltinType *BT = dyn_cast<BuiltinType>(CanTy);

  switch (CanTy->getTypeClass()) {
#define TYPE(ID, BASE)
#define DEPENDENT_TYPE(ID, BASE) case Type::ID:
#define NON_CANONICAL_TYPE(ID, BASE) case Type::ID:
#define NON_CANONICAL_UNLESS_DEPENDENT_TYPE(ID, BASE) case Type::ID:
#include "clang/AST/TypeNodes.def"
    llvm_unreachable("CallExpr::isBuiltinClassifyType(): unimplemented type");

  case Type::Builtin:
    switch (BT->getKind()) {
#define BUILTIN_TYPE(ID, SINGLETON_ID)
#define SIGNED_TYPE(ID, SINGLETON_ID) \
    case BuiltinType::ID: return integer_type_class;
#define FLOATING_TYPE(ID, SINGLETON_ID) \
    case BuiltinType::ID: return real_type_class;
#define PLACEHOLDER_TYPE(ID, SINGLETON_ID) \
    case BuiltinType::ID: break;
#include "clang/AST/BuiltinTypes.def"
    case BuiltinType::Void:
      return void_type_class;

    case BuiltinType::Bool:
      return boolean_type_class;

    case BuiltinType::Char_U: // gcc doesn't appear to use char_type_class
    case BuiltinType::UChar:
    case BuiltinType::UShort:
    case BuiltinType::UInt:
    case BuiltinType::ULong:
    case BuiltinType::ULongLong:
    case BuiltinType::UInt128:
      return integer_type_class;

    case BuiltinType::NullPtr:
      return pointer_type_class;

    case BuiltinType::WChar_U:
    case BuiltinType::Char16:
    case BuiltinType::Char32:
    case BuiltinType::ObjCId:
    case BuiltinType::ObjCClass:
    case BuiltinType::ObjCSel:
#define IMAGE_TYPE(ImgType, Id, SingletonId, Access, Suffix) \
    case BuiltinType::Id:
#include "clang/Basic/OpenCLImageTypes.def"
    case BuiltinType::OCLSampler:
    case BuiltinType::OCLEvent:
    case BuiltinType::OCLClkEvent:
    case BuiltinType::OCLQueue:
    case BuiltinType::OCLNDRange:
    case BuiltinType::OCLReserveID:
    case BuiltinType::Dependent:
      llvm_unreachable("CallExpr::isBuiltinClassifyType(): unimplemented type");
    }

  case Type::Enum:
    return LangOpts.CPlusPlus ? enumeral_type_class : integer_type_class;
    break;

  case Type::Pointer:
    return pointer_type_class;
    break;

  case Type::MemberPointer:
    if (CanTy->isMemberDataPointerType())
      return offset_type_class;
    else {
      // We expect member pointers to be either data or function pointers,
      // nothing else.
      assert(CanTy->isMemberFunctionPointerType());
      return method_type_class;
    }

  case Type::Complex:
    return complex_type_class;

  case Type::FunctionNoProto:
  case Type::FunctionProto:
    return LangOpts.CPlusPlus ? function_type_class : pointer_type_class;

  case Type::Record:
    if (const RecordType *RT = CanTy->getAs<RecordType>()) {
      switch (RT->getDecl()->getTagKind()) {
      case TagTypeKind::TTK_Struct:
      case TagTypeKind::TTK_Class:
      case TagTypeKind::TTK_Interface:
        return record_type_class;

      case TagTypeKind::TTK_Enum:
        return LangOpts.CPlusPlus ? enumeral_type_class : integer_type_class;

      case TagTypeKind::TTK_Union:
        return union_type_class;
      }
    }
    llvm_unreachable("CallExpr::isBuiltinClassifyType(): unimplemented type");

  case Type::ConstantArray:
  case Type::VariableArray:
  case Type::IncompleteArray:
    return LangOpts.CPlusPlus ? array_type_class : pointer_type_class;

  case Type::BlockPointer:
  case Type::LValueReference:
  case Type::RValueReference:
  case Type::Vector:
  case Type::ExtVector:
  case Type::Auto:
  case Type::ObjCObject:
  case Type::ObjCInterface:
  case Type::ObjCObjectPointer:
  case Type::Pipe:
  case Type::Atomic:
    llvm_unreachable("CallExpr::isBuiltinClassifyType(): unimplemented type");
  }

  llvm_unreachable("CallExpr::isBuiltinClassifyType(): unimplemented type");
}
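// For reference, with the enum above this yields, for example:
//   __builtin_classify_type(0)         -> integer_type_class (1)
//   __builtin_classify_type(1.0)       -> real_type_class (8)
//   __builtin_classify_type((void *)0) -> pointer_type_class (5)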
/// EvaluateBuiltinConstantPForLValue - Determine the result of
/// __builtin_constant_p when applied to the given lvalue.
///
/// An lvalue is only "constant" if it is a pointer or reference to the first
/// character of a string literal.
template<typename LValue>
static bool EvaluateBuiltinConstantPForLValue(const LValue &LV) {
  const Expr *E = LV.getLValueBase().template dyn_cast<const Expr*>();
  return E && isa<StringLiteral>(E) && LV.getLValueOffset().isZero();
}

/// EvaluateBuiltinConstantP - Evaluate __builtin_constant_p as similarly to
/// GCC as we can manage.
static bool EvaluateBuiltinConstantP(ASTContext &Ctx, const Expr *Arg) {
  QualType ArgType = Arg->getType();

  // __builtin_constant_p always has one operand. The rules which gcc follows
  // are not precisely documented, but are as follows:
  //
  //  - If the operand is of integral, floating, complex or enumeration type,
  //    and can be folded to a known value of that type, it returns 1.
  //  - If the operand can be folded to a pointer to the first character
  //    of a string literal (or such a pointer cast to an integral type), it
  //    returns 1.
  //
  // Otherwise, it returns 0.
  //
  // FIXME: GCC also intends to return 1 for literals of aggregate types, but
  // its support for this does not currently work.
  if (ArgType->isIntegralOrEnumerationType()) {
    Expr::EvalResult Result;
    if (!Arg->EvaluateAsRValue(Result, Ctx) || Result.HasSideEffects)
      return false;

    APValue &V = Result.Val;
    if (V.getKind() == APValue::Int)
      return true;
    if (V.getKind() == APValue::LValue)
      return EvaluateBuiltinConstantPForLValue(V);
  } else if (ArgType->isFloatingType() || ArgType->isAnyComplexType()) {
    return Arg->isEvaluatable(Ctx);
  } else if (ArgType->isPointerType() || Arg->isGLValue()) {
    LValue LV;
    Expr::EvalStatus Status;
    EvalInfo Info(Ctx, Status, EvalInfo::EM_ConstantFold);
    if ((Arg->isGLValue() ? EvaluateLValue(Arg, LV, Info)
                          : EvaluatePointer(Arg, LV, Info)) &&
        !Status.HasSideEffects)
      return EvaluateBuiltinConstantPForLValue(LV);
  }

  // Anything else isn't considered to be sufficiently constant.
  return false;
}
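// Under these rules, for example, __builtin_constant_p(42) and
// __builtin_constant_p("abc") both fold to 1, while for a non-constant
// "extern int n;", __builtin_constant_p(n) folds to 0.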
/// Retrieves the "underlying object type" of the given expression,
/// as used by __builtin_object_size.
static QualType getObjectType(APValue::LValueBase B) {
  if (const ValueDecl *D = B.dyn_cast<const ValueDecl*>()) {
    if (const VarDecl *VD = dyn_cast<VarDecl>(D))
      return VD->getType();
  } else if (const Expr *E = B.get<const Expr*>()) {
    if (isa<CompoundLiteralExpr>(E))
      return E->getType();
  }
  return QualType();
}

/// A more selective version of E->IgnoreParenCasts for
/// tryEvaluateBuiltinObjectSize. This ignores some casts/parens that serve
/// only to change the type of E.
/// Ex. For E = `(short*)((char*)(&foo))`, returns `&foo`
///
/// Always returns an RValue with a pointer representation.
static const Expr *ignorePointerCastsAndParens(const Expr *E) {
  assert(E->isRValue() && E->getType()->hasPointerRepresentation());

  auto *NoParens = E->IgnoreParens();
  auto *Cast = dyn_cast<CastExpr>(NoParens);
  if (Cast == nullptr)
    return NoParens;

  // We only conservatively allow a few kinds of casts, because this code is
  // inherently a simple solution that seeks to support the common case.
  auto CastKind = Cast->getCastKind();
  if (CastKind != CK_NoOp && CastKind != CK_BitCast &&
      CastKind != CK_AddressSpaceConversion)
    return NoParens;

  auto *SubExpr = Cast->getSubExpr();
  if (!SubExpr->getType()->hasPointerRepresentation() || !SubExpr->isRValue())
    return NoParens;
  return ignorePointerCastsAndParens(SubExpr);
}

/// Checks to see if the given LValue's Designator is at the end of the
/// LValue's record layout. e.g.
///   struct { struct { int a, b; } fst, snd; } obj;
///   obj.fst   // no
///   obj.snd   // yes
///   obj.fst.a // no
///   obj.fst.b // no
///   obj.snd.a // no
///   obj.snd.b // yes
///
/// Please note: this function is specialized for how __builtin_object_size
/// views "objects".
///
/// If this encounters an invalid RecordDecl, it will always return true.
static bool isDesignatorAtObjectEnd(const ASTContext &Ctx, const LValue &LVal) {
  assert(!LVal.Designator.Invalid);

  auto IsLastOrInvalidFieldDecl = [&Ctx](const FieldDecl *FD, bool &Invalid) {
    const RecordDecl *Parent = FD->getParent();
    Invalid = Parent->isInvalidDecl();
    if (Invalid || Parent->isUnion())
      return true;
    const ASTRecordLayout &Layout = Ctx.getASTRecordLayout(Parent);
    return FD->getFieldIndex() + 1 == Layout.getFieldCount();
  };

  auto &Base = LVal.getLValueBase();
  if (auto *ME = dyn_cast_or_null<MemberExpr>(Base.dyn_cast<const Expr *>())) {
    if (auto *FD = dyn_cast<FieldDecl>(ME->getMemberDecl())) {
      bool Invalid;
      if (!IsLastOrInvalidFieldDecl(FD, Invalid))
        return Invalid;
    } else if (auto *IFD = dyn_cast<IndirectFieldDecl>(ME->getMemberDecl())) {
      for (auto *FD : IFD->chain()) {
        bool Invalid;
        if (!IsLastOrInvalidFieldDecl(cast<FieldDecl>(FD), Invalid))
          return Invalid;
      }
    }
  }

  unsigned I = 0;
  QualType BaseType = getType(Base);
  if (LVal.Designator.FirstEntryIsAnUnsizedArray) {
    assert(isBaseAnAllocSizeCall(Base) &&
           "Unsized array in non-alloc_size call?");
    // If this is an alloc_size base, we should ignore the initial array index
    ++I;
    BaseType = BaseType->castAs<PointerType>()->getPointeeType();
  }

  for (unsigned E = LVal.Designator.Entries.size(); I != E; ++I) {
    const auto &Entry = LVal.Designator.Entries[I];
    if (BaseType->isArrayType()) {
      // Because __builtin_object_size treats arrays as objects, we can ignore
      // the index iff this is the last array in the Designator.
      if (I + 1 == E)
        return true;
      const auto *CAT = cast<ConstantArrayType>(Ctx.getAsArrayType(BaseType));
      uint64_t Index = Entry.ArrayIndex;
      if (Index + 1 != CAT->getSize())
        return false;
      BaseType = CAT->getElementType();
    } else if (BaseType->isAnyComplexType()) {
      const auto *CT = BaseType->castAs<ComplexType>();
      uint64_t Index = Entry.ArrayIndex;
      if (Index != 1)
        return false;
      BaseType = CT->getElementType();
    } else if (auto *FD = getAsField(Entry)) {
      bool Invalid;
      if (!IsLastOrInvalidFieldDecl(FD, Invalid))
        return Invalid;
      BaseType = FD->getType();
    } else {
      assert(getAsBaseClass(Entry) && "Expecting cast to a base class");
      return false;
    }
  }
  return true;
}

/// Tests to see if the LValue has a user-specified designator (that isn't
/// necessarily valid). Note that this always returns 'true' if the LValue has
/// an unsized array as its first designator entry, because there's currently
/// no way to tell if the user typed *foo or foo[0].
static bool refersToCompleteObject(const LValue &LVal) {
  if (LVal.Designator.Invalid)
    return false;

  if (!LVal.Designator.Entries.empty())
    return LVal.Designator.isMostDerivedAnUnsizedArray();

  if (!LVal.InvalidBase)
    return true;

  // If `E` is a MemberExpr, then the first part of the designator is hiding in
  // the LValueBase.
  const auto *E = LVal.Base.dyn_cast<const Expr *>();
  return !E || !isa<MemberExpr>(E);
}

/// Attempts to detect a user writing into a piece of memory that's impossible
/// to figure out the size of by just using types.
static bool isUserWritingOffTheEnd(const ASTContext &Ctx, const LValue &LVal) {
  const SubobjectDesignator &Designator = LVal.Designator;
  // Notes:
  // - Users can only write off of the end when we have an invalid base.
  //   Invalid bases imply we don't know where the memory came from.
  // - We used to be a bit more aggressive here; we'd only be conservative if
  //   the array at the end was flexible, or if it had 0 or 1 elements. This
  //   broke some common standard library extensions (PR30346), but was
  //   otherwise seemingly fine. It may be useful to reintroduce this behavior
  //   with some sort of whitelist. OTOH, it seems that GCC is always
  //   conservative with the last element in structs (if it's an array), so
  //   our current behavior is more compatible than a whitelisting approach
  //   would be.
  return LVal.InvalidBase &&
         Designator.Entries.size() == Designator.MostDerivedPathLength &&
         Designator.MostDerivedIsArrayElement &&
         isDesignatorAtObjectEnd(Ctx, LVal);
}

/// Converts the given APInt to CharUnits, assuming the APInt is unsigned.
/// Fails if the conversion would cause loss of precision.
static bool convertUnsignedAPIntToCharUnits(const llvm::APInt &Int,
                                            CharUnits &Result) {
  auto CharUnitsMax = std::numeric_limits<CharUnits::QuantityType>::max();
  if (Int.ugt(CharUnitsMax))
    return false;
  Result = CharUnits::fromQuantity(Int.getZExtValue());
  return true;
}

/// Helper for tryEvaluateBuiltinObjectSize -- Given an LValue, this will
/// determine how many bytes exist from the beginning of the object to either
/// the end of the current subobject, or the end of the object itself,
/// depending on what the LValue looks like + the value of Type.
///
/// If this returns false, the value of Result is undefined.
static bool determineEndOffset(EvalInfo &Info, SourceLocation ExprLoc,
                               unsigned Type, const LValue &LVal,
                               CharUnits &EndOffset) {
  bool DetermineForCompleteObject = refersToCompleteObject(LVal);

  auto CheckedHandleSizeof = [&](QualType Ty, CharUnits &Result) {
    if (Ty.isNull() || Ty->isIncompleteType() || Ty->isFunctionType())
      return false;
    return HandleSizeof(Info, ExprLoc, Ty, Result);
  };

  // We want to evaluate the size of the entire object. This is a valid
  // fallback for when Type=1 and the designator is invalid, because we're
  // asked for an upper-bound.
  if (!(Type & 1) || LVal.Designator.Invalid || DetermineForCompleteObject) {
    // Type=3 wants a lower bound, so we can't fall back to this.
    if (Type == 3 && !DetermineForCompleteObject)
      return false;

    llvm::APInt APEndOffset;
    if (isBaseAnAllocSizeCall(LVal.getLValueBase()) &&
        getBytesReturnedByAllocSizeCall(Info.Ctx, LVal, APEndOffset))
      return convertUnsignedAPIntToCharUnits(APEndOffset, EndOffset);

    if (LVal.InvalidBase)
      return false;

    QualType BaseTy = getObjectType(LVal.getLValueBase());
    return CheckedHandleSizeof(BaseTy, EndOffset);
  }

  // We want to evaluate the size of a subobject.
  const SubobjectDesignator &Designator = LVal.Designator;

  // The following is a moderately common idiom in C:
  //
  // struct Foo { int a; char c[1]; };
  // struct Foo *F = (struct Foo *)malloc(sizeof(struct Foo) + strlen(Bar));
  // strcpy(&F->c[0], Bar);
  //
  // In order to not break too much legacy code, we need to support it.
  if (isUserWritingOffTheEnd(Info.Ctx, LVal)) {
    // If we can resolve this to an alloc_size call, we can hand that back,
    // because we know for certain how many bytes there are to write to.
    llvm::APInt APEndOffset;
    if (isBaseAnAllocSizeCall(LVal.getLValueBase()) &&
        getBytesReturnedByAllocSizeCall(Info.Ctx, LVal, APEndOffset))
      return convertUnsignedAPIntToCharUnits(APEndOffset, EndOffset);

    // If we cannot determine the size of the initial allocation, then we
    // can't give an accurate upper-bound. However, we are still able to give
    // conservative lower-bounds for Type=3.
    if (Type == 1)
      return false;
  }

  CharUnits BytesPerElem;
  if (!CheckedHandleSizeof(Designator.MostDerivedType, BytesPerElem))
    return false;

  // According to the GCC documentation, we want the size of the subobject
  // denoted by the pointer. But that's not quite right -- what we actually
  // want is the size of the immediately-enclosing array, if there is one.
  int64_t ElemsRemaining;
  if (Designator.MostDerivedIsArrayElement &&
      Designator.Entries.size() == Designator.MostDerivedPathLength) {
    uint64_t ArraySize = Designator.getMostDerivedArraySize();
    uint64_t ArrayIndex = Designator.Entries.back().ArrayIndex;
    ElemsRemaining = ArraySize <= ArrayIndex ? 0 : ArraySize - ArrayIndex;
  } else {
    ElemsRemaining = Designator.isOnePastTheEnd() ? 0 : 1;
  }

  EndOffset = LVal.getLValueOffset() + BytesPerElem * ElemsRemaining;
  return true;
}

/// \brief Tries to evaluate the __builtin_object_size for @p E. If
/// successful, returns true and stores the result in @p Size.
static bool tryEvaluateBuiltinObjectSize(const Expr *E, unsigned Type,
                                         EvalInfo &Info, uint64_t &Size) {
  // Determine the denoted object.
  LValue LVal;
  {
    // The operand of __builtin_object_size is never evaluated for
    // side-effects. If there are any, but we can determine the pointed-to
    // object anyway, then ignore the side-effects.
    SpeculativeEvaluationRAII SpeculativeEval(Info);
    FoldOffsetRAII Fold(Info);

    if (E->isGLValue()) {
      // It's possible for us to be given GLValues if we're called via
      // Expr::tryEvaluateObjectSize.
      APValue RVal;
      if (!EvaluateAsRValue(Info, E, RVal))
        return false;
      LVal.setFrom(Info.Ctx, RVal);
    } else if (!EvaluatePointer(ignorePointerCastsAndParens(E), LVal, Info))
      return false;
  }

  // If we point to before the start of the object, there are no accessible
  // bytes.
  if (LVal.getLValueOffset().isNegative()) {
    Size = 0;
    return true;
  }

  CharUnits EndOffset;
  if (!determineEndOffset(Info, E->getExprLoc(), Type, LVal, EndOffset))
    return false;

  // If we've fallen outside of the end offset, just pretend there's nothing
  // to write to/read from.
  if (EndOffset <= LVal.getLValueOffset())
    Size = 0;
  else
    Size = (EndOffset - LVal.getLValueOffset()).getQuantity();
  return true;
}
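// As a concrete illustration: for "char buf[10];",
// __builtin_object_size(buf + 2, 0) folds to 8. When the size cannot be
// determined, the caller below substitutes -1 for Type 0/1 (the maximum
// modes) or 0 for Type 2/3 (the minimum modes).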
bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
  if (unsigned BuiltinOp = E->getBuiltinCallee())
    return VisitBuiltinCallExpr(E, BuiltinOp);
  return ExprEvaluatorBaseTy::VisitCallExpr(E);
}

bool IntExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
                                            unsigned BuiltinOp) {
  switch (BuiltinOp) {
  default:
    return ExprEvaluatorBaseTy::VisitCallExpr(E);

  case Builtin::BI__builtin_object_size: {
    // The type was checked when we built the expression.
    unsigned Type =
        E->getArg(1)->EvaluateKnownConstInt(Info.Ctx).getZExtValue();
    assert(Type <= 3 && "unexpected type");

    uint64_t Size;
    if (tryEvaluateBuiltinObjectSize(E->getArg(0), Type, Info, Size))
      return Success(Size, E);

    if (E->getArg(0)->HasSideEffects(Info.Ctx))
      return Success((Type & 2) ? 0 : -1, E);

    // Expression had no side effects, but we couldn't statically determine
    // the size of the referenced object.
    switch (Info.EvalMode) {
    case EvalInfo::EM_ConstantExpression:
    case EvalInfo::EM_PotentialConstantExpression:
    case EvalInfo::EM_ConstantFold:
    case EvalInfo::EM_EvaluateForOverflow:
    case EvalInfo::EM_IgnoreSideEffects:
    case EvalInfo::EM_OffsetFold:
      // Leave it to IR generation.
      return Error(E);
    case EvalInfo::EM_ConstantExpressionUnevaluated:
    case EvalInfo::EM_PotentialConstantExpressionUnevaluated:
      // Reduce it to a constant now.
      return Success((Type & 2) ? 0 : -1, E);
    }

    llvm_unreachable("unexpected EvalMode");
  }

  case Builtin::BI__builtin_bswap16:
  case Builtin::BI__builtin_bswap32:
  case Builtin::BI__builtin_bswap64: {
    APSInt Val;
    if (!EvaluateInteger(E->getArg(0), Val, Info))
      return false;

    return Success(Val.byteSwap(), E);
  }

  case Builtin::BI__builtin_classify_type:
    return Success(EvaluateBuiltinClassifyType(E, Info.getLangOpts()), E);

  // FIXME: BI__builtin_clrsb
  // FIXME: BI__builtin_clrsbl
  // FIXME: BI__builtin_clrsbll

  case Builtin::BI__builtin_clz:
  case Builtin::BI__builtin_clzl:
  case Builtin::BI__builtin_clzll:
  case Builtin::BI__builtin_clzs: {
    APSInt Val;
    if (!EvaluateInteger(E->getArg(0), Val, Info))
      return false;
    if (!Val)
      return Error(E);

    return Success(Val.countLeadingZeros(), E);
  }

  case Builtin::BI__builtin_constant_p:
    return Success(EvaluateBuiltinConstantP(Info.Ctx, E->getArg(0)), E);

  case Builtin::BI__builtin_ctz:
  case Builtin::BI__builtin_ctzl:
  case Builtin::BI__builtin_ctzll:
  case Builtin::BI__builtin_ctzs: {
    APSInt Val;
    if (!EvaluateInteger(E->getArg(0), Val, Info))
      return false;
    if (!Val)
      return Error(E);

    return Success(Val.countTrailingZeros(), E);
  }

  case Builtin::BI__builtin_eh_return_data_regno: {
    int Operand = E->getArg(0)->EvaluateKnownConstInt(Info.Ctx).getZExtValue();
    Operand = Info.Ctx.getTargetInfo().getEHDataRegisterNumber(Operand);
    return Success(Operand, E);
  }

  case Builtin::BI__builtin_expect:
    return Visit(E->getArg(0));

  case Builtin::BI__builtin_ffs:
  case Builtin::BI__builtin_ffsl:
  case Builtin::BI__builtin_ffsll: {
    APSInt Val;
    if (!EvaluateInteger(E->getArg(0), Val, Info))
      return false;

    unsigned N = Val.countTrailingZeros();
    return Success(N == Val.getBitWidth() ? 0 : N + 1, E);
  }

  case Builtin::BI__builtin_fpclassify: {
    APFloat Val(0.0);
    if (!EvaluateFloat(E->getArg(5), Val, Info))
      return false;
    unsigned Arg;
    switch (Val.getCategory()) {
    case APFloat::fcNaN:
      Arg = 0;
      break;
    case APFloat::fcInfinity:
      Arg = 1;
      break;
    case APFloat::fcNormal:
      Arg = Val.isDenormal() ? 3 : 2;
      break;
    case APFloat::fcZero:
      Arg = 4;
      break;
    }
    return Visit(E->getArg(Arg));
  }

  case Builtin::BI__builtin_isinf_sign: {
    APFloat Val(0.0);
    return EvaluateFloat(E->getArg(0), Val, Info) &&
           Success(Val.isInfinity() ? (Val.isNegative() ? -1 : 1) : 0, E);
  }

  case Builtin::BI__builtin_isinf: {
    APFloat Val(0.0);
    return EvaluateFloat(E->getArg(0), Val, Info) &&
           Success(Val.isInfinity() ? 1 : 0, E);
  }

  case Builtin::BI__builtin_isfinite: {
    APFloat Val(0.0);
    return EvaluateFloat(E->getArg(0), Val, Info) &&
           Success(Val.isFinite() ? 1 : 0, E);
  }

  case Builtin::BI__builtin_isnan: {
    APFloat Val(0.0);
    return EvaluateFloat(E->getArg(0), Val, Info) &&
           Success(Val.isNaN() ? 1 : 0, E);
  }

  case Builtin::BI__builtin_isnormal: {
    APFloat Val(0.0);
    return EvaluateFloat(E->getArg(0), Val, Info) &&
           Success(Val.isNormal() ? 1 : 0, E);
  }

  case Builtin::BI__builtin_parity:
  case Builtin::BI__builtin_parityl:
  case Builtin::BI__builtin_parityll: {
    APSInt Val;
    if (!EvaluateInteger(E->getArg(0), Val, Info))
      return false;

    return Success(Val.countPopulation() % 2, E);
  }

  case Builtin::BI__builtin_popcount:
  case Builtin::BI__builtin_popcountl:
  case Builtin::BI__builtin_popcountll: {
    APSInt Val;
    if (!EvaluateInteger(E->getArg(0), Val, Info))
      return false;

    return Success(Val.countPopulation(), E);
  }
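  // As an extension, the strlen and string-comparison builtins below are
  // folded to constants; e.g. __builtin_strlen("ab\0cd") evaluates to 2,
  // because scanning stops at the first embedded NUL.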
  case Builtin::BIstrlen:
  case Builtin::BIwcslen:
    // A call to strlen is not a constant expression.
    if (Info.getLangOpts().CPlusPlus11)
      Info.CCEDiag(E, diag::note_constexpr_invalid_function)
          << /*isConstexpr*/0 << /*isConstructor*/0
          << (std::string("'") + Info.Ctx.BuiltinInfo.getName(BuiltinOp) + "'");
    else
      Info.CCEDiag(E, diag::note_invalid_subexpr_in_const_expr);
    // Fall through.
  case Builtin::BI__builtin_strlen:
  case Builtin::BI__builtin_wcslen: {
    // As an extension, we support __builtin_strlen() as a constant expression,
    // and support folding strlen() to a constant.
    LValue String;
    if (!EvaluatePointer(E->getArg(0), String, Info))
      return false;

    QualType CharTy = E->getArg(0)->getType()->getPointeeType();

    // Fast path: if it's a string literal, search the string value.
    if (const StringLiteral *S = dyn_cast_or_null<StringLiteral>(
            String.getLValueBase().dyn_cast<const Expr *>())) {
      // The string literal may have embedded null characters. Find the first
      // one and truncate there.
      StringRef Str = S->getBytes();
      int64_t Off = String.Offset.getQuantity();
      if (Off >= 0 && (uint64_t)Off <= (uint64_t)Str.size() &&
          S->getCharByteWidth() == 1 &&
          // FIXME: Add fast-path for wchar_t too.
          Info.Ctx.hasSameUnqualifiedType(CharTy, Info.Ctx.CharTy)) {
        Str = Str.substr(Off);

        StringRef::size_type Pos = Str.find(0);
        if (Pos != StringRef::npos)
          Str = Str.substr(0, Pos);

        return Success(Str.size(), E);
      }

      // Fall through to slow path to issue appropriate diagnostic.
    }

    // Slow path: scan the bytes of the string looking for the terminating 0.
    for (uint64_t Strlen = 0; /**/; ++Strlen) {
      APValue Char;
      if (!handleLValueToRValueConversion(Info, E, CharTy, String, Char) ||
          !Char.isInt())
        return false;
      if (!Char.getInt())
        return Success(Strlen, E);
      if (!HandleLValueArrayAdjustment(Info, E, String, CharTy, 1))
        return false;
    }
  }

  case Builtin::BIstrcmp:
  case Builtin::BIwcscmp:
  case Builtin::BIstrncmp:
  case Builtin::BIwcsncmp:
  case Builtin::BImemcmp:
  case Builtin::BIwmemcmp:
    // A call to these string functions is not a constant expression.
    if (Info.getLangOpts().CPlusPlus11)
      Info.CCEDiag(E, diag::note_constexpr_invalid_function)
          << /*isConstexpr*/0 << /*isConstructor*/0
          << (std::string("'") + Info.Ctx.BuiltinInfo.getName(BuiltinOp) + "'");
    else
      Info.CCEDiag(E, diag::note_invalid_subexpr_in_const_expr);
    // Fall through.
  case Builtin::BI__builtin_strcmp:
  case Builtin::BI__builtin_wcscmp:
  case Builtin::BI__builtin_strncmp:
  case Builtin::BI__builtin_wcsncmp:
  case Builtin::BI__builtin_memcmp:
  case Builtin::BI__builtin_wmemcmp: {
    LValue String1, String2;
    if (!EvaluatePointer(E->getArg(0), String1, Info) ||
        !EvaluatePointer(E->getArg(1), String2, Info))
      return false;

    QualType CharTy = E->getArg(0)->getType()->getPointeeType();

    uint64_t MaxLength = uint64_t(-1);
    if (BuiltinOp != Builtin::BIstrcmp &&
        BuiltinOp != Builtin::BIwcscmp &&
        BuiltinOp != Builtin::BI__builtin_strcmp &&
        BuiltinOp != Builtin::BI__builtin_wcscmp) {
      APSInt N;
      if (!EvaluateInteger(E->getArg(2), N, Info))
        return false;
      MaxLength = N.getExtValue();
    }
    bool StopAtNull = (BuiltinOp != Builtin::BImemcmp &&
                       BuiltinOp != Builtin::BIwmemcmp &&
                       BuiltinOp != Builtin::BI__builtin_memcmp &&
                       BuiltinOp != Builtin::BI__builtin_wmemcmp);
    for (; MaxLength; --MaxLength) {
      APValue Char1, Char2;
      if (!handleLValueToRValueConversion(Info, E, CharTy, String1, Char1) ||
          !handleLValueToRValueConversion(Info, E, CharTy, String2, Char2) ||
          !Char1.isInt() || !Char2.isInt())
        return false;
      if (Char1.getInt() != Char2.getInt())
        return Success(Char1.getInt() < Char2.getInt() ? -1 : 1, E);
      if (StopAtNull && !Char1.getInt())
        return Success(0, E);
      assert(!(StopAtNull && !Char2.getInt()));
      if (!HandleLValueArrayAdjustment(Info, E, String1, CharTy, 1) ||
          !HandleLValueArrayAdjustment(Info, E, String2, CharTy, 1))
        return false;
    }
    // We hit the strncmp / memcmp limit.
    return Success(0, E);
  }

  case Builtin::BI__atomic_always_lock_free:
  case Builtin::BI__atomic_is_lock_free:
  case Builtin::BI__c11_atomic_is_lock_free: {
    APSInt SizeVal;
    if (!EvaluateInteger(E->getArg(0), SizeVal, Info))
      return false;

    // For __atomic_is_lock_free(sizeof(_Atomic(T))), if the size is a power
    // of two less than the maximum inline atomic width, we know it is
    // lock-free.  If the size isn't a power of two, or greater than the
    // maximum alignment where we promote atomics, we know it is not
    // lock-free (at least not in the sense of atomic_is_lock_free).
    // Otherwise, the answer can only be determined at runtime; for example,
    // 16-byte atomics have lock-free implementations on some, but not all,
    // x86-64 processors.

    // Check power-of-two.
    CharUnits Size = CharUnits::fromQuantity(SizeVal.getZExtValue());
    if (Size.isPowerOfTwo()) {
      // Check against inlining width.
      unsigned InlineWidthBits =
          Info.Ctx.getTargetInfo().getMaxAtomicInlineWidth();
      if (Size <= Info.Ctx.toCharUnitsFromBits(InlineWidthBits)) {
        if (BuiltinOp == Builtin::BI__c11_atomic_is_lock_free ||
            Size == CharUnits::One() ||
            E->getArg(1)->isNullPointerConstant(Info.Ctx,
                                                Expr::NPC_NeverValueDependent))
          // OK, we will inline appropriately-aligned operations of this size,
          // and _Atomic(T) is appropriately-aligned.
          return Success(1, E);

        QualType PointeeType = E->getArg(1)->IgnoreImpCasts()->getType()->
            castAs<PointerType>()->getPointeeType();
        if (!PointeeType->isIncompleteType() &&
            Info.Ctx.getTypeAlignInChars(PointeeType) >= Size) {
          // OK, we will inline operations on this object.
          return Success(1, E);
        }
      }
    }

    return BuiltinOp == Builtin::BI__atomic_always_lock_free ?
        Success(0, E) : Error(E);
  }
  }
}

static bool HasSameBase(const LValue &A, const LValue &B) {
  if (!A.getLValueBase())
    return !B.getLValueBase();
  if (!B.getLValueBase())
    return false;

  if (A.getLValueBase().getOpaqueValue() !=
      B.getLValueBase().getOpaqueValue()) {
    const Decl *ADecl = GetLValueBaseDecl(A);
    if (!ADecl)
      return false;
    const Decl *BDecl = GetLValueBaseDecl(B);
    if (!BDecl || ADecl->getCanonicalDecl() != BDecl->getCanonicalDecl())
      return false;
  }

  return IsGlobalLValue(A.getLValueBase()) ||
         A.getLValueCallIndex() == B.getLValueCallIndex();
}

/// \brief Determine whether this is a pointer past the end of the complete
/// object referred to by the lvalue.
static bool isOnePastTheEndOfCompleteObject(const ASTContext &Ctx,
                                            const LValue &LV) {
  // A null pointer can be viewed as being "past the end" but we don't
  // choose to look at it that way here.
  if (!LV.getLValueBase())
    return false;

  // If the designator is valid and refers to a subobject, we're not pointing
  // past the end.
  if (!LV.getLValueDesignator().Invalid &&
      !LV.getLValueDesignator().isOnePastTheEnd())
    return false;

  // A pointer to an incomplete type might be past-the-end if the type's size
  // is zero. We cannot tell because the type is incomplete.
  QualType Ty = getType(LV.getLValueBase());
  if (Ty->isIncompleteType())
    return true;

  // We're a past-the-end pointer if we point to the byte after the object,
  // no matter what our type or path is.
  auto Size = Ctx.getTypeSizeInChars(Ty);
  return LV.getLValueOffset() == Size;
}

namespace {

/// \brief Data recursive integer evaluator of certain binary operators.
///
/// We use a data recursive algorithm for binary operators so that we are able
/// to handle extreme cases of chained binary operators without causing stack
/// overflow.
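///
/// For example, a pathological expression like "1 + 1 + 1 + ... + 1" with
/// tens of thousands of terms is processed iteratively through the job Queue
/// below, instead of recursing one stack frame per operator.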
class DataRecursiveIntBinOpEvaluator {
  struct EvalResult {
    APValue Val;
    bool Failed;

    EvalResult() : Failed(false) { }

    void swap(EvalResult &RHS) {
      Val.swap(RHS.Val);
      Failed = RHS.Failed;
      RHS.Failed = false;
    }
  };

  struct Job {
    const Expr *E;
    EvalResult LHSResult; // meaningful only for binary operator expression.
    enum { AnyExprKind, BinOpKind, BinOpVisitedLHSKind } Kind;

    Job() = default;
    Job(Job &&) = default;

    void startSpeculativeEval(EvalInfo &Info) {
      SpecEvalRAII = SpeculativeEvaluationRAII(Info);
    }

  private:
    SpeculativeEvaluationRAII SpecEvalRAII;
  };

  SmallVector<Job, 16> Queue;

  IntExprEvaluator &IntEval;
  EvalInfo &Info;
  APValue &FinalResult;

public:
  DataRecursiveIntBinOpEvaluator(IntExprEvaluator &IntEval, APValue &Result)
      : IntEval(IntEval), Info(IntEval.getEvalInfo()), FinalResult(Result) { }

  /// \brief True if \param E is a binary operator that we are going to handle
  /// data recursively.
  /// We handle binary operators that are comma, logical, or that have operands
  /// with integral or enumeration type.
  static bool shouldEnqueue(const BinaryOperator *E) {
    return E->getOpcode() == BO_Comma ||
           E->isLogicalOp() ||
           (E->isRValue() &&
            E->getType()->isIntegralOrEnumerationType() &&
            E->getLHS()->getType()->isIntegralOrEnumerationType() &&
            E->getRHS()->getType()->isIntegralOrEnumerationType());
  }

  bool Traverse(const BinaryOperator *E) {
    enqueue(E);
    EvalResult PrevResult;
    while (!Queue.empty())
      process(PrevResult);

    if (PrevResult.Failed)
      return false;

    FinalResult.swap(PrevResult.Val);
    return true;
  }

private:
  bool Success(uint64_t Value, const Expr *E, APValue &Result) {
    return IntEval.Success(Value, E, Result);
  }
  bool Success(const APSInt &Value, const Expr *E, APValue &Result) {
    return IntEval.Success(Value, E, Result);
  }
  bool Error(const Expr *E) {
    return IntEval.Error(E);
  }
  bool Error(const Expr *E, diag::kind D) {
    return IntEval.Error(E, D);
  }

  OptionalDiagnostic CCEDiag(const Expr *E, diag::kind D) {
    return Info.CCEDiag(E, D);
  }

  // \brief Returns true if visiting the RHS is necessary, false otherwise.
  bool VisitBinOpLHSOnly(EvalResult &LHSResult, const BinaryOperator *E,
                         bool &SuppressRHSDiags);

  bool VisitBinOp(const EvalResult &LHSResult, const EvalResult &RHSResult,
                  const BinaryOperator *E, APValue &Result);

  void EvaluateExpr(const Expr *E, EvalResult &Result) {
    Result.Failed = !Evaluate(Result.Val, Info, E);
    if (Result.Failed)
      Result.Val = APValue();
  }

  void process(EvalResult &Result);

  void enqueue(const Expr *E) {
    E = E->IgnoreParens();
    Queue.resize(Queue.size()+1);
    Queue.back().E = E;
    Queue.back().Kind = Job::AnyExprKind;
  }
};

}

bool DataRecursiveIntBinOpEvaluator::
       VisitBinOpLHSOnly(EvalResult &LHSResult, const BinaryOperator *E,
                         bool &SuppressRHSDiags) {
  if (E->getOpcode() == BO_Comma) {
    // Ignore LHS but note if we could not evaluate it.
    if (LHSResult.Failed)
      return Info.noteSideEffect();
    return true;
  }

  if (E->isLogicalOp()) {
    bool LHSAsBool;
    if (!LHSResult.Failed && HandleConversionToBool(LHSResult.Val, LHSAsBool)) {
      // We were able to evaluate the LHS, see if we can get away with not
      // evaluating the RHS: 0 && X -> 0, 1 || X -> 1
      if (LHSAsBool == (E->getOpcode() == BO_LOr)) {
        Success(LHSAsBool, E, LHSResult.Val);
        return false; // Ignore RHS
      }
    } else {
      LHSResult.Failed = true;

      // Since we weren't able to evaluate the left hand side, it
      // might have had side effects.
      if (!Info.noteSideEffect())
        return false;

      // We can't evaluate the LHS; however, sometimes the result
      // is determined by the RHS: X && 0 -> 0, X || 1 -> 1.
      // Don't ignore RHS and suppress diagnostics from this arm.
      SuppressRHSDiags = true;
    }

    return true;
  }

  assert(E->getLHS()->getType()->isIntegralOrEnumerationType() &&
         E->getRHS()->getType()->isIntegralOrEnumerationType());

  if (LHSResult.Failed && !Info.noteFailure())
    return false; // Ignore RHS;

  return true;
}

bool DataRecursiveIntBinOpEvaluator::
       VisitBinOp(const EvalResult &LHSResult, const EvalResult &RHSResult,
                  const BinaryOperator *E, APValue &Result) {
  if (E->getOpcode() == BO_Comma) {
    if (RHSResult.Failed)
      return false;
    Result = RHSResult.Val;
    return true;
  }

  if (E->isLogicalOp()) {
    bool lhsResult, rhsResult;
    bool LHSIsOK = HandleConversionToBool(LHSResult.Val, lhsResult);
    bool RHSIsOK = HandleConversionToBool(RHSResult.Val, rhsResult);

    if (LHSIsOK) {
      if (RHSIsOK) {
        if (E->getOpcode() == BO_LOr)
          return Success(lhsResult || rhsResult, E, Result);
        else
          return Success(lhsResult && rhsResult, E, Result);
      }
    } else {
      if (RHSIsOK) {
        // We can't evaluate the LHS; however, sometimes the result
        // is determined by the RHS: X && 0 -> 0, X || 1 -> 1.
        if (rhsResult == (E->getOpcode() == BO_LOr))
          return Success(rhsResult, E, Result);
      }
    }

    return false;
  }

  assert(E->getLHS()->getType()->isIntegralOrEnumerationType() &&
         E->getRHS()->getType()->isIntegralOrEnumerationType());

  if (LHSResult.Failed || RHSResult.Failed)
    return false;

  const APValue &LHSVal = LHSResult.Val;
  const APValue &RHSVal = RHSResult.Val;

  // Handle cases like (unsigned long)&a + 4.
  if (E->isAdditiveOp() && LHSVal.isLValue() && RHSVal.isInt()) {
    Result = LHSVal;
    CharUnits AdditionalOffset =
        CharUnits::fromQuantity(RHSVal.getInt().getZExtValue());
    if (E->getOpcode() == BO_Add)
      Result.getLValueOffset() += AdditionalOffset;
    else
      Result.getLValueOffset() -= AdditionalOffset;
    return true;
  }

  // Handle cases like 4 + (unsigned long)&a
  if (E->getOpcode() == BO_Add &&
      RHSVal.isLValue() && LHSVal.isInt()) {
    Result = RHSVal;
    Result.getLValueOffset() +=
        CharUnits::fromQuantity(LHSVal.getInt().getZExtValue());
    return true;
  }

  if (E->getOpcode() == BO_Sub && LHSVal.isLValue() && RHSVal.isLValue()) {
    // Handle (intptr_t)&&A - (intptr_t)&&B.
    if (!LHSVal.getLValueOffset().isZero() ||
        !RHSVal.getLValueOffset().isZero())
      return false;
    const Expr *LHSExpr = LHSVal.getLValueBase().dyn_cast<const Expr*>();
    const Expr *RHSExpr = RHSVal.getLValueBase().dyn_cast<const Expr*>();
    if (!LHSExpr || !RHSExpr)
      return false;
    const AddrLabelExpr *LHSAddrExpr = dyn_cast<AddrLabelExpr>(LHSExpr);
    const AddrLabelExpr *RHSAddrExpr = dyn_cast<AddrLabelExpr>(RHSExpr);
    if (!LHSAddrExpr || !RHSAddrExpr)
      return false;
    // Make sure both labels come from the same function.
    if (LHSAddrExpr->getLabel()->getDeclContext() !=
        RHSAddrExpr->getLabel()->getDeclContext())
      return false;
    Result = APValue(LHSAddrExpr, RHSAddrExpr);
    return true;
  }

  // All the remaining cases expect both operands to be an integer
  if (!LHSVal.isInt() || !RHSVal.isInt())
    return Error(E);

  // Set up the width and signedness manually, in case it can't be deduced
  // from the operation we're performing.
  // FIXME: Don't do this in the cases where we can deduce it.
  APSInt Value(Info.Ctx.getIntWidth(E->getType()),
               E->getType()->isUnsignedIntegerOrEnumerationType());
  if (!handleIntIntBinOp(Info, E, LHSVal.getInt(), E->getOpcode(),
                         RHSVal.getInt(), Value))
    return false;
  return Success(Value, E, Result);
}

void DataRecursiveIntBinOpEvaluator::process(EvalResult &Result) {
  Job &job = Queue.back();

  switch (job.Kind) {
  case Job::AnyExprKind: {
    if (const BinaryOperator *Bop = dyn_cast<BinaryOperator>(job.E)) {
      if (shouldEnqueue(Bop)) {
        job.Kind = Job::BinOpKind;
        enqueue(Bop->getLHS());
        return;
      }
    }

    EvaluateExpr(job.E, Result);
    Queue.pop_back();
    return;
  }

  case Job::BinOpKind: {
    const BinaryOperator *Bop = cast<BinaryOperator>(job.E);
    bool SuppressRHSDiags = false;
    if (!VisitBinOpLHSOnly(Result, Bop, SuppressRHSDiags)) {
      Queue.pop_back();
      return;
    }
    if (SuppressRHSDiags)
      job.startSpeculativeEval(Info);
    job.LHSResult.swap(Result);
    job.Kind = Job::BinOpVisitedLHSKind;
    enqueue(Bop->getRHS());
    return;
  }

  case Job::BinOpVisitedLHSKind: {
    const BinaryOperator *Bop = cast<BinaryOperator>(job.E);
    EvalResult RHS;
    RHS.swap(Result);
    Result.Failed = !VisitBinOp(job.LHSResult, RHS, Bop, Result.Val);
    Queue.pop_back();
    return;
  }
  }

  llvm_unreachable("Invalid Job::Kind!");
}

namespace {
/// Used when we determine that we should fail, but can keep evaluating prior
/// to noting that we had a failure.
class DelayedNoteFailureRAII {
  EvalInfo &Info;
  bool NoteFailure;

public:
  DelayedNoteFailureRAII(EvalInfo &Info, bool NoteFailure = true)
      : Info(Info), NoteFailure(NoteFailure) {}
  ~DelayedNoteFailureRAII() {
    if (NoteFailure) {
      bool ContinueAfterFailure = Info.noteFailure();
      (void)ContinueAfterFailure;
      assert(ContinueAfterFailure &&
             "Shouldn't have kept evaluating on failure.");
    }
  }
};
}

bool IntExprEvaluator::VisitBinaryOperator(const BinaryOperator *E) {
  // We don't call noteFailure immediately because the assignment happens
  // after we evaluate LHS and RHS.
  if (!Info.keepEvaluatingAfterFailure() && E->isAssignmentOp())
    return Error(E);

  DelayedNoteFailureRAII MaybeNoteFailureLater(Info, E->isAssignmentOp());
  if (DataRecursiveIntBinOpEvaluator::shouldEnqueue(E))
    return DataRecursiveIntBinOpEvaluator(*this, Result).Traverse(E);

  QualType LHSTy = E->getLHS()->getType();
  QualType RHSTy = E->getRHS()->getType();

  if (LHSTy->isAnyComplexType() || RHSTy->isAnyComplexType()) {
    ComplexValue LHS, RHS;
    bool LHSOK;
    if (E->isAssignmentOp()) {
      LValue LV;
      EvaluateLValue(E->getLHS(), LV, Info);
      LHSOK = false;
    } else if (LHSTy->isRealFloatingType()) {
      LHSOK = EvaluateFloat(E->getLHS(), LHS.FloatReal, Info);
      if (LHSOK) {
        LHS.makeComplexFloat();
        LHS.FloatImag = APFloat(LHS.FloatReal.getSemantics());
      }
    } else {
      LHSOK = EvaluateComplex(E->getLHS(), LHS, Info);
    }
    if (!LHSOK && !Info.noteFailure())
      return false;

    if (E->getRHS()->getType()->isRealFloatingType()) {
      if (!EvaluateFloat(E->getRHS(), RHS.FloatReal, Info) || !LHSOK)
        return false;
      RHS.makeComplexFloat();
      RHS.FloatImag = APFloat(RHS.FloatReal.getSemantics());
    } else if (!EvaluateComplex(E->getRHS(), RHS, Info) || !LHSOK)
      return false;

    if (LHS.isComplexFloat()) {
      APFloat::cmpResult CR_r =
          LHS.getComplexFloatReal().compare(RHS.getComplexFloatReal());
      APFloat::cmpResult CR_i =
          LHS.getComplexFloatImag().compare(RHS.getComplexFloatImag());

      if (E->getOpcode() == BO_EQ)
        return Success((CR_r == APFloat::cmpEqual &&
                        CR_i == APFloat::cmpEqual), E);
      else {
        assert(E->getOpcode() == BO_NE &&
               "Invalid complex comparison.");
        return Success(((CR_r == APFloat::cmpGreaterThan ||
                         CR_r == APFloat::cmpLessThan ||
                         CR_r == APFloat::cmpUnordered) ||
                        (CR_i == APFloat::cmpGreaterThan ||
                         CR_i == APFloat::cmpLessThan ||
                         CR_i == APFloat::cmpUnordered)), E);
      }
    } else {
      if (E->getOpcode() == BO_EQ)
        return Success((LHS.getComplexIntReal() == RHS.getComplexIntReal() &&
                        LHS.getComplexIntImag() == RHS.getComplexIntImag()), E);
      else {
        assert(E->getOpcode() == BO_NE &&
               "Invalid complex comparison.");
        return Success((LHS.getComplexIntReal() != RHS.getComplexIntReal() ||
                        LHS.getComplexIntImag() != RHS.getComplexIntImag()), E);
      }
    }
  }

  if (LHSTy->isRealFloatingType() &&
      RHSTy->isRealFloatingType()) {
    APFloat RHS(0.0), LHS(0.0);

    bool LHSOK = EvaluateFloat(E->getRHS(), RHS, Info);
    if (!LHSOK && !Info.noteFailure())
      return false;

    if (!EvaluateFloat(E->getLHS(), LHS, Info) || !LHSOK)
      return false;

    APFloat::cmpResult CR = LHS.compare(RHS);

    switch (E->getOpcode()) {
    default:
      llvm_unreachable("Invalid binary operator!");
    case BO_LT:
      return Success(CR == APFloat::cmpLessThan, E);
    case BO_GT:
      return Success(CR == APFloat::cmpGreaterThan, E);
    case BO_LE:
      return Success(CR == APFloat::cmpLessThan || CR == APFloat::cmpEqual, E);
    case BO_GE:
      return Success(CR == APFloat::cmpGreaterThan || CR == APFloat::cmpEqual,
                     E);
    case BO_EQ:
      return Success(CR == APFloat::cmpEqual, E);
    case BO_NE:
      return Success(CR == APFloat::cmpGreaterThan ||
                     CR == APFloat::cmpLessThan ||
                     CR == APFloat::cmpUnordered, E);
    }
  }

  if (LHSTy->isPointerType() && RHSTy->isPointerType()) {
    if (E->getOpcode() == BO_Sub || E->isComparisonOp()) {
      LValue LHSValue, RHSValue;

      bool LHSOK = EvaluatePointer(E->getLHS(), LHSValue, Info);
      if (!LHSOK && !Info.noteFailure())
        return false;

      if (!EvaluatePointer(E->getRHS(), RHSValue, Info) || !LHSOK)
        return false;

      // Reject differing bases from the normal codepath; we special-case
      // comparisons to null.
      if (!HasSameBase(LHSValue, RHSValue)) {
        if (E->getOpcode() == BO_Sub) {
          // Handle &&A - &&B.
          if (!LHSValue.Offset.isZero() || !RHSValue.Offset.isZero())
            return Error(E);
          const Expr *LHSExpr = LHSValue.Base.dyn_cast<const Expr*>();
          const Expr *RHSExpr = RHSValue.Base.dyn_cast<const Expr*>();
          if (!LHSExpr || !RHSExpr)
            return Error(E);
          const AddrLabelExpr *LHSAddrExpr = dyn_cast<AddrLabelExpr>(LHSExpr);
          const AddrLabelExpr *RHSAddrExpr = dyn_cast<AddrLabelExpr>(RHSExpr);
          if (!LHSAddrExpr || !RHSAddrExpr)
            return Error(E);
          // Make sure both labels come from the same function.
          if (LHSAddrExpr->getLabel()->getDeclContext() !=
              RHSAddrExpr->getLabel()->getDeclContext())
            return Error(E);
          return Success(APValue(LHSAddrExpr, RHSAddrExpr), E);
        }
        // Inequalities and subtractions between unrelated pointers have
        // unspecified or undefined behavior.
        if (!E->isEqualityOp())
          return Error(E);
        // A constant address may compare equal to the address of a symbol.
        // The one exception is that address of an object cannot compare equal
        // to a null pointer constant.
        if ((!LHSValue.Base && !LHSValue.Offset.isZero()) ||
            (!RHSValue.Base && !RHSValue.Offset.isZero()))
          return Error(E);
        // It's implementation-defined whether distinct literals will have
        // distinct addresses. In clang, the result of such a comparison is
        // unspecified, so it is not a constant expression. However, we do
        // know that the address of a literal will be non-null.
        if ((IsLiteralLValue(LHSValue) || IsLiteralLValue(RHSValue)) &&
            LHSValue.Base && RHSValue.Base)
          return Error(E);
        // We can't tell whether weak symbols will end up pointing to the same
        // object.
        if (IsWeakLValue(LHSValue) || IsWeakLValue(RHSValue))
          return Error(E);
        // We can't compare the address of the start of one object with the
        // past-the-end address of another object, per C++ DR1652.
        if ((LHSValue.Base && LHSValue.Offset.isZero() &&
             isOnePastTheEndOfCompleteObject(Info.Ctx, RHSValue)) ||
            (RHSValue.Base && RHSValue.Offset.isZero() &&
             isOnePastTheEndOfCompleteObject(Info.Ctx, LHSValue)))
          return Error(E);
        // We can't tell whether an object is at the same address as another
        // zero sized object.
        if ((RHSValue.Base && isZeroSized(LHSValue)) ||
            (LHSValue.Base && isZeroSized(RHSValue)))
          return Error(E);
        // Pointers with different bases cannot represent the same object.
        // (Note that clang defaults to -fmerge-all-constants, which can
        // lead to inconsistent results for comparisons involving the address
        // of a constant; this generally doesn't matter in practice.)
        return Success(E->getOpcode() == BO_NE, E);
      }

      const CharUnits &LHSOffset = LHSValue.getLValueOffset();
      const CharUnits &RHSOffset = RHSValue.getLValueOffset();

      SubobjectDesignator &LHSDesignator = LHSValue.getLValueDesignator();
      SubobjectDesignator &RHSDesignator = RHSValue.getLValueDesignator();

      if (E->getOpcode() == BO_Sub) {
        // C++11 [expr.add]p6:
        //   Unless both pointers point to elements of the same array object,
        //   or one past the last element of the array object, the behavior
        //   is undefined.
        if (!LHSDesignator.Invalid && !RHSDesignator.Invalid &&
            !AreElementsOfSameArray(getType(LHSValue.Base),
                                    LHSDesignator, RHSDesignator))
          CCEDiag(E, diag::note_constexpr_pointer_subtraction_not_same_array);

        QualType Type = E->getLHS()->getType();
        QualType ElementType = Type->getAs<PointerType>()->getPointeeType();

        CharUnits ElementSize;
        if (!HandleSizeof(Info, E->getExprLoc(), ElementType, ElementSize))
          return false;

        // As an extension, a type may have zero size (empty struct or union
        // in C, array of zero length). Pointer subtraction in such cases has
        // undefined behavior, so is not constant.
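        // (For instance, in GNU C an empty "struct {}" has size zero, so a
        // difference of two pointers to it is rejected here rather than
        // folded.)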
        if (ElementSize.isZero()) {
          Info.FFDiag(E, diag::note_constexpr_pointer_subtraction_zero_size)
              << ElementType;
          return false;
        }

        // FIXME: LLVM and GCC both compute LHSOffset - RHSOffset at runtime,
        // and produce incorrect results when it overflows. Such behavior
        // appears to be non-conforming, but is common, so perhaps we should
        // assume the standard intended for such cases to be undefined
        // behavior and check for them.

        // Compute (LHSOffset - RHSOffset) / Size carefully, checking for
        // overflow in the final conversion to ptrdiff_t.
        APSInt LHS(
            llvm::APInt(65, (int64_t)LHSOffset.getQuantity(), true), false);
        APSInt RHS(
            llvm::APInt(65, (int64_t)RHSOffset.getQuantity(), true), false);
        APSInt ElemSize(
            llvm::APInt(65, (int64_t)ElementSize.getQuantity(), true), false);
        APSInt TrueResult = (LHS - RHS) / ElemSize;
        APSInt Result = TrueResult.trunc(Info.Ctx.getIntWidth(E->getType()));

        if (Result.extend(65) != TrueResult &&
            !HandleOverflow(Info, E, TrueResult, E->getType()))
          return false;
        return Success(Result, E);
      }

      // C++11 [expr.rel]p3:
      //   Pointers to void (after pointer conversions) can be compared, with
      //   a result defined as follows: If both pointers represent the same
      //   address or are both the null pointer value, the result is true if
      //   the operator is <= or >= and false otherwise; otherwise the result
      //   is unspecified.
      // We interpret this as applying to pointers to *cv* void.
      if (LHSTy->isVoidPointerType() && LHSOffset != RHSOffset &&
          E->isRelationalOp())
        CCEDiag(E, diag::note_constexpr_void_comparison);

      // C++11 [expr.rel]p2:
      // - If two pointers point to non-static data members of the same
      //   object, or to subobjects or array elements of such members,
      //   recursively, the pointer to the later declared member compares
      //   greater provided the two members have the same access control and
      //   provided their class is not a union.
      //   [...]
      // - Otherwise pointer comparisons are unspecified.
      if (!LHSDesignator.Invalid && !RHSDesignator.Invalid &&
          E->isRelationalOp()) {
        bool WasArrayIndex;
        unsigned Mismatch = FindDesignatorMismatch(
            getType(LHSValue.Base), LHSDesignator, RHSDesignator,
            WasArrayIndex);
        // At the point where the designators diverge, the comparison has a
        // specified value if:
        //  - we are comparing array indices
        //  - we are comparing fields of a union, or fields with the same
        //    access
        // Otherwise, the result is unspecified and thus the comparison is not
        // a constant expression.
        if (!WasArrayIndex && Mismatch < LHSDesignator.Entries.size() &&
            Mismatch < RHSDesignator.Entries.size()) {
          const FieldDecl *LF = getAsField(LHSDesignator.Entries[Mismatch]);
          const FieldDecl *RF = getAsField(RHSDesignator.Entries[Mismatch]);
          if (!LF && !RF)
            CCEDiag(E, diag::note_constexpr_pointer_comparison_base_classes);
          else if (!LF)
            CCEDiag(E, diag::note_constexpr_pointer_comparison_base_field)
                << getAsBaseClass(LHSDesignator.Entries[Mismatch])
                << RF->getParent() << RF;
          else if (!RF)
            CCEDiag(E, diag::note_constexpr_pointer_comparison_base_field)
                << getAsBaseClass(RHSDesignator.Entries[Mismatch])
                << LF->getParent() << LF;
          else if (!LF->getParent()->isUnion() &&
                   LF->getAccess() != RF->getAccess())
            CCEDiag(E,
                    diag::note_constexpr_pointer_comparison_differing_access)
                << LF << LF->getAccess() << RF << RF->getAccess()
                << LF->getParent();
        }
      }

      // The comparison here must be unsigned, and performed with the same
      // width as the pointer.
      unsigned PtrSize = Info.Ctx.getTypeSize(LHSTy);
      uint64_t CompareLHS = LHSOffset.getQuantity();
      uint64_t CompareRHS = RHSOffset.getQuantity();
      assert(PtrSize <= 64 && "Unexpected pointer width");
      uint64_t Mask = ~0ULL >> (64 - PtrSize);
      CompareLHS &= Mask;
      CompareRHS &= Mask;

      // If there is a base and this is a relational operator, we can only
      // compare pointers within the object in question; otherwise, the result
      // depends on where the object is located in memory.
      if (!LHSValue.Base.isNull() && E->isRelationalOp()) {
        QualType BaseTy = getType(LHSValue.Base);
        if (BaseTy->isIncompleteType())
          return Error(E);
        CharUnits Size = Info.Ctx.getTypeSizeInChars(BaseTy);
        uint64_t OffsetLimit = Size.getQuantity();
        if (CompareLHS > OffsetLimit || CompareRHS > OffsetLimit)
          return Error(E);
      }

      switch (E->getOpcode()) {
      default:
        llvm_unreachable("missing comparison operator");
      case BO_LT: return Success(CompareLHS < CompareRHS, E);
      case BO_GT: return Success(CompareLHS > CompareRHS, E);
      case BO_LE: return Success(CompareLHS <= CompareRHS, E);
      case BO_GE: return Success(CompareLHS >= CompareRHS, E);
      case BO_EQ: return Success(CompareLHS == CompareRHS, E);
      case BO_NE: return Success(CompareLHS != CompareRHS, E);
      }
    }
  }

  if (LHSTy->isMemberPointerType()) {
    assert(E->isEqualityOp() && "unexpected member pointer operation");
    assert(RHSTy->isMemberPointerType() && "invalid comparison");

    MemberPtr LHSValue, RHSValue;

    bool LHSOK = EvaluateMemberPointer(E->getLHS(), LHSValue, Info);
    if (!LHSOK && !Info.noteFailure())
      return false;

    if (!EvaluateMemberPointer(E->getRHS(), RHSValue, Info) || !LHSOK)
      return false;

    // C++11 [expr.eq]p2:
    //   If both operands are null, they compare equal. Otherwise if only one
    //   is null, they compare unequal.
    if (!LHSValue.getDecl() || !RHSValue.getDecl()) {
      bool Equal = !LHSValue.getDecl() && !RHSValue.getDecl();
      return Success(E->getOpcode() == BO_EQ ? Equal : !Equal, E);
    }

    //   Otherwise if either is a pointer to a virtual member function, the
    //   result is unspecified.
    if (const CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(LHSValue.getDecl()))
      if (MD->isVirtual())
        CCEDiag(E, diag::note_constexpr_compare_virtual_mem_ptr) << MD;
    if (const CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(RHSValue.getDecl()))
      if (MD->isVirtual())
        CCEDiag(E, diag::note_constexpr_compare_virtual_mem_ptr) << MD;

    //   Otherwise they compare equal if and only if they would refer to the
    //   same member of the same most derived object or the same subobject if
    //   they were dereferenced with a hypothetical object of the associated
    //   class type.
    bool Equal = LHSValue == RHSValue;
    return Success(E->getOpcode() == BO_EQ ? Equal : !Equal, E);
  }

  if (LHSTy->isNullPtrType()) {
    assert(E->isComparisonOp() && "unexpected nullptr operation");
    assert(RHSTy->isNullPtrType() && "missing pointer conversion");
    // C++11 [expr.rel]p4, [expr.eq]p3: If two operands of type std::nullptr_t
    // are compared, the result is true if the operator is <=, >= or ==, and
    // false otherwise.
    BinaryOperator::Opcode Opcode = E->getOpcode();
    return Success(Opcode == BO_EQ || Opcode == BO_LE || Opcode == BO_GE, E);
  }

  assert((!LHSTy->isIntegralOrEnumerationType() ||
          !RHSTy->isIntegralOrEnumerationType()) &&
         "DataRecursiveIntBinOpEvaluator should have handled integral types");
  // We can't continue from here for non-integral types.
  return ExprEvaluatorBaseTy::VisitBinaryOperator(E);
}
/// VisitUnaryExprOrTypeTraitExpr - Evaluate a sizeof, alignof or vec_step
/// with a result as the expression's type.
bool IntExprEvaluator::VisitUnaryExprOrTypeTraitExpr(
    const UnaryExprOrTypeTraitExpr *E) {
  switch(E->getKind()) {
  case UETT_AlignOf: {
    if (E->isArgumentType())
      return Success(GetAlignOfType(Info, E->getArgumentType()), E);
    else
      return Success(GetAlignOfExpr(Info, E->getArgumentExpr()), E);
  }

  case UETT_VecStep: {
    QualType Ty = E->getTypeOfArgument();

    if (Ty->isVectorType()) {
      unsigned n = Ty->castAs<VectorType>()->getNumElements();

      // The vec_step built-in functions that take a 3-component
      // vector return 4. (OpenCL 1.1 spec 6.11.12)
      if (n == 3)
        n = 4;

      return Success(n, E);
    } else
      return Success(1, E);
  }

  case UETT_SizeOf: {
    QualType SrcTy = E->getTypeOfArgument();
    // C++ [expr.sizeof]p2: "When applied to a reference or a reference type,
    //   the result is the size of the referenced type."
    if (const ReferenceType *Ref = SrcTy->getAs<ReferenceType>())
      SrcTy = Ref->getPointeeType();

    CharUnits Sizeof;
    if (!HandleSizeof(Info, E->getExprLoc(), SrcTy, Sizeof))
      return false;
    return Success(Sizeof, E);
  }
  case UETT_OpenMPRequiredSimdAlign:
    assert(E->isArgumentType());
    return Success(
        Info.Ctx.toCharUnitsFromBits(
                    Info.Ctx.getOpenMPDefaultSimdAlign(E->getArgumentType()))
            .getQuantity(),
        E);
  }

  llvm_unreachable("unknown expr/type trait");
}

bool IntExprEvaluator::VisitOffsetOfExpr(const OffsetOfExpr *OOE) {
  CharUnits Result;
  unsigned n = OOE->getNumComponents();
  if (n == 0)
    return Error(OOE);
  QualType CurrentType = OOE->getTypeSourceInfo()->getType();
  for (unsigned i = 0; i != n; ++i) {
    OffsetOfNode ON = OOE->getComponent(i);
    switch (ON.getKind()) {
    case OffsetOfNode::Array: {
      const Expr *Idx = OOE->getIndexExpr(ON.getArrayExprIndex());
      APSInt IdxResult;
      if (!EvaluateInteger(Idx, IdxResult, Info))
        return false;
      const ArrayType *AT = Info.Ctx.getAsArrayType(CurrentType);
      if (!AT)
        return Error(OOE);
      CurrentType = AT->getElementType();
      CharUnits ElementSize = Info.Ctx.getTypeSizeInChars(CurrentType);
      Result += IdxResult.getSExtValue() * ElementSize;
      break;
    }

    case OffsetOfNode::Field: {
      FieldDecl *MemberDecl = ON.getField();
      const RecordType *RT = CurrentType->getAs<RecordType>();
      if (!RT)
        return Error(OOE);
      RecordDecl *RD = RT->getDecl();
      if (RD->isInvalidDecl()) return false;
      const ASTRecordLayout &RL = Info.Ctx.getASTRecordLayout(RD);
      unsigned i = MemberDecl->getFieldIndex();
      assert(i < RL.getFieldCount() && "offsetof field in wrong type");
      Result += Info.Ctx.toCharUnitsFromBits(RL.getFieldOffset(i));
      CurrentType = MemberDecl->getType().getNonReferenceType();
      break;
    }

    case OffsetOfNode::Identifier:
      llvm_unreachable("dependent __builtin_offsetof");

    case OffsetOfNode::Base: {
      CXXBaseSpecifier *BaseSpec = ON.getBase();
      if (BaseSpec->isVirtual())
        return Error(OOE);

      // Find the layout of the class whose base we are looking into.
      const RecordType *RT = CurrentType->getAs<RecordType>();
      if (!RT)
        return Error(OOE);
      RecordDecl *RD = RT->getDecl();
      if (RD->isInvalidDecl()) return false;
      const ASTRecordLayout &RL = Info.Ctx.getASTRecordLayout(RD);

      // Find the base class itself.
      CurrentType = BaseSpec->getType();
      const RecordType *BaseRT = CurrentType->getAs<RecordType>();
      if (!BaseRT)
        return Error(OOE);

      // Add the offset to the base.
      Result += RL.getBaseClassOffset(cast<CXXRecordDecl>(BaseRT->getDecl()));
      break;
    }
    }
  }
  return Success(Result, OOE);
}

bool IntExprEvaluator::VisitUnaryOperator(const UnaryOperator *E) {
  switch (E->getOpcode()) {
  default:
    // Address, indirect, pre/post inc/dec, etc are not valid constant exprs.
    // See C99 6.6p3.
    return Error(E);
  case UO_Extension:
    // FIXME: Should extension allow i-c-e extension expressions in its scope?
    // If so, we could clear the diagnostic ID.
bool IntExprEvaluator::VisitOffsetOfExpr(const OffsetOfExpr *OOE) {
  CharUnits Result;
  unsigned n = OOE->getNumComponents();
  if (n == 0)
    return Error(OOE);
  QualType CurrentType = OOE->getTypeSourceInfo()->getType();
  for (unsigned i = 0; i != n; ++i) {
    OffsetOfNode ON = OOE->getComponent(i);
    switch (ON.getKind()) {
    case OffsetOfNode::Array: {
      const Expr *Idx = OOE->getIndexExpr(ON.getArrayExprIndex());
      APSInt IdxResult;
      if (!EvaluateInteger(Idx, IdxResult, Info))
        return false;
      const ArrayType *AT = Info.Ctx.getAsArrayType(CurrentType);
      if (!AT)
        return Error(OOE);
      CurrentType = AT->getElementType();
      CharUnits ElementSize = Info.Ctx.getTypeSizeInChars(CurrentType);
      Result += IdxResult.getSExtValue() * ElementSize;
      break;
    }

    case OffsetOfNode::Field: {
      FieldDecl *MemberDecl = ON.getField();
      const RecordType *RT = CurrentType->getAs<RecordType>();
      if (!RT)
        return Error(OOE);
      RecordDecl *RD = RT->getDecl();
      if (RD->isInvalidDecl()) return false;
      const ASTRecordLayout &RL = Info.Ctx.getASTRecordLayout(RD);
      unsigned i = MemberDecl->getFieldIndex();
      assert(i < RL.getFieldCount() && "offsetof field in wrong type");
      Result += Info.Ctx.toCharUnitsFromBits(RL.getFieldOffset(i));
      CurrentType = MemberDecl->getType().getNonReferenceType();
      break;
    }

    case OffsetOfNode::Identifier:
      llvm_unreachable("dependent __builtin_offsetof");

    case OffsetOfNode::Base: {
      CXXBaseSpecifier *BaseSpec = ON.getBase();
      if (BaseSpec->isVirtual())
        return Error(OOE);

      // Find the layout of the class whose base we are looking into.
      const RecordType *RT = CurrentType->getAs<RecordType>();
      if (!RT)
        return Error(OOE);
      RecordDecl *RD = RT->getDecl();
      if (RD->isInvalidDecl()) return false;
      const ASTRecordLayout &RL = Info.Ctx.getASTRecordLayout(RD);

      // Find the base class itself.
      CurrentType = BaseSpec->getType();
      const RecordType *BaseRT = CurrentType->getAs<RecordType>();
      if (!BaseRT)
        return Error(OOE);

      // Add the offset to the base.
      Result += RL.getBaseClassOffset(cast<CXXRecordDecl>(BaseRT->getDecl()));
      break;
    }
    }
  }
  return Success(Result, OOE);
}
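// Hypothetical illustration of the component walk above:
//
//   struct B { int x; };
//   struct D : B { int a[4]; };
//
// __builtin_offsetof(D, x) visits a Base node (adding B's offset within D)
// and then a Field node for 'x'; __builtin_offsetof(D, a[2]) visits a Field
// node for 'a' and an Array node adding 2 * sizeof(int).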
bool IntExprEvaluator::VisitUnaryOperator(const UnaryOperator *E) {
  switch (E->getOpcode()) {
  default:
    // Address, indirect, pre/post inc/dec, etc are not valid constant exprs.
    // See C99 6.6p3.
    return Error(E);
  case UO_Extension:
    // FIXME: Should extension allow i-c-e extension expressions in its scope?
    // If so, we could clear the diagnostic ID.
    return Visit(E->getSubExpr());
  case UO_Plus:
    // The result is just the value.
    return Visit(E->getSubExpr());
  case UO_Minus: {
    if (!Visit(E->getSubExpr()))
      return false;
    if (!Result.isInt()) return Error(E);
    const APSInt &Value = Result.getInt();
    if (Value.isSigned() && Value.isMinSignedValue() &&
        !HandleOverflow(Info, E, -Value.extend(Value.getBitWidth() + 1),
                        E->getType()))
      return false;
    return Success(-Value, E);
  }
  case UO_Not: {
    if (!Visit(E->getSubExpr()))
      return false;
    if (!Result.isInt()) return Error(E);
    return Success(~Result.getInt(), E);
  }
  case UO_LNot: {
    bool bres;
    if (!EvaluateAsBooleanCondition(E->getSubExpr(), bres, Info))
      return false;
    return Success(!bres, E);
  }
  }
}

/// HandleCast - This is used to evaluate implicit or explicit casts where the
/// result type is integer.
bool IntExprEvaluator::VisitCastExpr(const CastExpr *E) {
  const Expr *SubExpr = E->getSubExpr();
  QualType DestType = E->getType();
  QualType SrcType = SubExpr->getType();

  switch (E->getCastKind()) {
  case CK_BaseToDerived:
  case CK_DerivedToBase:
  case CK_UncheckedDerivedToBase:
  case CK_Dynamic:
  case CK_ToUnion:
  case CK_ArrayToPointerDecay:
  case CK_FunctionToPointerDecay:
  case CK_NullToPointer:
  case CK_NullToMemberPointer:
  case CK_BaseToDerivedMemberPointer:
  case CK_DerivedToBaseMemberPointer:
  case CK_ReinterpretMemberPointer:
  case CK_ConstructorConversion:
  case CK_IntegralToPointer:
  case CK_ToVoid:
  case CK_VectorSplat:
  case CK_IntegralToFloating:
  case CK_FloatingCast:
  case CK_CPointerToObjCPointerCast:
  case CK_BlockPointerToObjCPointerCast:
  case CK_AnyPointerToBlockPointerCast:
  case CK_ObjCObjectLValueCast:
  case CK_FloatingRealToComplex:
  case CK_FloatingComplexToReal:
  case CK_FloatingComplexCast:
  case CK_FloatingComplexToIntegralComplex:
  case CK_IntegralRealToComplex:
  case CK_IntegralComplexCast:
  case CK_IntegralComplexToFloatingComplex:
  case CK_BuiltinFnToFnPtr:
  case CK_ZeroToOCLEvent:
  case CK_ZeroToOCLQueue:
  case CK_NonAtomicToAtomic:
  case CK_AddressSpaceConversion:
  case CK_IntToOCLSampler:
    llvm_unreachable("invalid cast kind for integral value");

  case CK_BitCast:
  case CK_Dependent:
  case CK_LValueBitCast:
  case CK_ARCProduceObject:
  case CK_ARCConsumeObject:
  case CK_ARCReclaimReturnedObject:
  case CK_ARCExtendBlockObject:
  case CK_CopyAndAutoreleaseBlockObject:
    return Error(E);

  case CK_UserDefinedConversion:
  case CK_LValueToRValue:
  case CK_AtomicToNonAtomic:
  case CK_NoOp:
    return ExprEvaluatorBaseTy::VisitCastExpr(E);

  case CK_MemberPointerToBoolean:
  case CK_PointerToBoolean:
  case CK_IntegralToBoolean:
  case CK_FloatingToBoolean:
  case CK_BooleanToSignedIntegral:
  case CK_FloatingComplexToBoolean:
  case CK_IntegralComplexToBoolean: {
    bool BoolResult;
    if (!EvaluateAsBooleanCondition(SubExpr, BoolResult, Info))
      return false;
    uint64_t IntResult = BoolResult;
    if (BoolResult && E->getCastKind() == CK_BooleanToSignedIntegral)
      IntResult = (uint64_t)-1;
    return Success(IntResult, E);
  }

  case CK_IntegralCast: {
    if (!Visit(SubExpr))
      return false;

    if (!Result.isInt()) {
      // Allow casts of address-of-label differences if they are no-ops
      // or narrowing.  (The narrowing case isn't actually guaranteed to
      // be constant-evaluatable except in some narrow cases which are hard
      // to detect here.  We let it through on the assumption the user knows
      // what they are doing.)
      if (Result.isAddrLabelDiff())
        return Info.Ctx.getTypeSize(DestType) <= Info.Ctx.getTypeSize(SrcType);
      // Only allow casts of lvalues if they are lossless.
      return Info.Ctx.getTypeSize(DestType) == Info.Ctx.getTypeSize(SrcType);
    }

    return Success(HandleIntToIntCast(Info, E, DestType, SrcType,
                                      Result.getInt()), E);
  }

  case CK_PointerToIntegral: {
    CCEDiag(E, diag::note_constexpr_invalid_cast) << 2;

    LValue LV;
    if (!EvaluatePointer(SubExpr, LV, Info))
      return false;

    if (LV.getLValueBase()) {
      // Only allow based lvalue casts if they are lossless.
      // FIXME: Allow a larger integer size than the pointer size, and allow
      //        narrowing back down to pointer width in subsequent integral
      //        casts.
      // FIXME: Check integer type's active bits, not its type size.
      if (Info.Ctx.getTypeSize(DestType) != Info.Ctx.getTypeSize(SrcType))
        return Error(E);

      LV.Designator.setInvalid();
      LV.moveInto(Result);
      return true;
    }

    uint64_t V;
    if (LV.isNullPointer())
      V = Info.Ctx.getTargetNullPointerValue(SrcType);
    else
      V = LV.getLValueOffset().getQuantity();

    APSInt AsInt = Info.Ctx.MakeIntValue(V, SrcType);
    return Success(HandleIntToIntCast(Info, E, DestType, SrcType, AsInt), E);
  }

  case CK_IntegralComplexToReal: {
    ComplexValue C;
    if (!EvaluateComplex(SubExpr, C, Info))
      return false;
    return Success(C.getComplexIntReal(), E);
  }

  case CK_FloatingToIntegral: {
    APFloat F(0.0);
    if (!EvaluateFloat(SubExpr, F, Info))
      return false;

    APSInt Value;
    if (!HandleFloatToIntCast(Info, E, SrcType, F, DestType, Value))
      return false;
    return Success(Value, E);
  }
  }

  llvm_unreachable("unknown cast resulting in integral value");
}
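// Note on CK_BooleanToSignedIntegral above: a true operand converts to
// all-ones rather than 1. This cast kind arises, for instance, where OpenCL
// represents a true boolean as -1 when converting to a signed integer, so a
// hypothetical true source value folds to -1 here.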
bool IntExprEvaluator::VisitUnaryReal(const UnaryOperator *E) {
  if (E->getSubExpr()->getType()->isAnyComplexType()) {
    ComplexValue LV;
    if (!EvaluateComplex(E->getSubExpr(), LV, Info))
      return false;
    if (!LV.isComplexInt())
      return Error(E);
    return Success(LV.getComplexIntReal(), E);
  }

  return Visit(E->getSubExpr());
}

bool IntExprEvaluator::VisitUnaryImag(const UnaryOperator *E) {
  if (E->getSubExpr()->getType()->isComplexIntegerType()) {
    ComplexValue LV;
    if (!EvaluateComplex(E->getSubExpr(), LV, Info))
      return false;
    if (!LV.isComplexInt())
      return Error(E);
    return Success(LV.getComplexIntImag(), E);
  }

  VisitIgnoredValue(E->getSubExpr());
  return Success(0, E);
}

bool IntExprEvaluator::VisitSizeOfPackExpr(const SizeOfPackExpr *E) {
  return Success(E->getPackLength(), E);
}

bool IntExprEvaluator::VisitCXXNoexceptExpr(const CXXNoexceptExpr *E) {
  return Success(E->getValue(), E);
}

//===----------------------------------------------------------------------===//
// Float Evaluation
//===----------------------------------------------------------------------===//

namespace {
class FloatExprEvaluator
  : public ExprEvaluatorBase<FloatExprEvaluator> {
  APFloat &Result;
public:
  FloatExprEvaluator(EvalInfo &info, APFloat &result)
    : ExprEvaluatorBaseTy(info), Result(result) {}

  bool Success(const APValue &V, const Expr *e) {
    Result = V.getFloat();
    return true;
  }

  bool ZeroInitialization(const Expr *E) {
    Result = APFloat::getZero(Info.Ctx.getFloatTypeSemantics(E->getType()));
    return true;
  }

  bool VisitCallExpr(const CallExpr *E);

  bool VisitUnaryOperator(const UnaryOperator *E);
  bool VisitBinaryOperator(const BinaryOperator *E);
  bool VisitFloatingLiteral(const FloatingLiteral *E);
  bool VisitCastExpr(const CastExpr *E);

  bool VisitUnaryReal(const UnaryOperator *E);
  bool VisitUnaryImag(const UnaryOperator *E);

  // FIXME: Missing: array subscript of vector, member of vector
};
} // end anonymous namespace

static bool EvaluateFloat(const Expr* E, APFloat& Result, EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isRealFloatingType());
  return FloatExprEvaluator(Info, Result).Visit(E);
}

static bool TryEvaluateBuiltinNaN(const ASTContext &Context,
                                  QualType ResultTy,
                                  const Expr *Arg,
                                  bool SNaN,
                                  llvm::APFloat &Result) {
  const StringLiteral *S = dyn_cast<StringLiteral>(Arg->IgnoreParenCasts());
  if (!S) return false;

  const llvm::fltSemantics &Sem = Context.getFloatTypeSemantics(ResultTy);

  llvm::APInt fill;

  // Treat empty strings as if they were zero.
  if (S->getString().empty())
    fill = llvm::APInt(32, 0);
  else if (S->getString().getAsInteger(0, fill))
    return false;

  if (Context.getTargetInfo().isNan2008()) {
    if (SNaN)
      Result = llvm::APFloat::getSNaN(Sem, false, &fill);
    else
      Result = llvm::APFloat::getQNaN(Sem, false, &fill);
  } else {
    // Prior to IEEE 754-2008, architectures were allowed to choose whether
    // the first bit of their significand was set for qNaN or sNaN. MIPS chose
    // a different encoding to what became a standard in 2008, and for pre-
    // 2008 revisions, MIPS interpreted sNaN-2008 as qNaN and qNaN-2008 as
    // sNaN. This is now known as "legacy NaN" encoding.
    if (SNaN)
      Result = llvm::APFloat::getQNaN(Sem, false, &fill);
    else
      Result = llvm::APFloat::getSNaN(Sem, false, &fill);
  }

  return true;
}
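// A hypothetical call reaching the code above:
//
//   double d = __builtin_nan("0x7ff");  // quiet NaN, payload 0x7ff
//
// The payload string is parsed with getAsInteger(0, fill), so decimal,
// 0x-hex and 0-octal forms are accepted; a string that fails to parse
// defeats constant folding entirely.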
bool FloatExprEvaluator::VisitCallExpr(const CallExpr *E) {
  switch (E->getBuiltinCallee()) {
  default:
    return ExprEvaluatorBaseTy::VisitCallExpr(E);

  case Builtin::BI__builtin_huge_val:
  case Builtin::BI__builtin_huge_valf:
  case Builtin::BI__builtin_huge_vall:
  case Builtin::BI__builtin_inf:
  case Builtin::BI__builtin_inff:
  case Builtin::BI__builtin_infl: {
    const llvm::fltSemantics &Sem =
      Info.Ctx.getFloatTypeSemantics(E->getType());
    Result = llvm::APFloat::getInf(Sem);
    return true;
  }

  case Builtin::BI__builtin_nans:
  case Builtin::BI__builtin_nansf:
  case Builtin::BI__builtin_nansl:
    if (!TryEvaluateBuiltinNaN(Info.Ctx, E->getType(), E->getArg(0),
                               true, Result))
      return Error(E);
    return true;

  case Builtin::BI__builtin_nan:
  case Builtin::BI__builtin_nanf:
  case Builtin::BI__builtin_nanl:
    // If this is __builtin_nan() turn this into a nan, otherwise we
    // can't constant fold it.
    if (!TryEvaluateBuiltinNaN(Info.Ctx, E->getType(), E->getArg(0),
                               false, Result))
      return Error(E);
    return true;

  case Builtin::BI__builtin_fabs:
  case Builtin::BI__builtin_fabsf:
  case Builtin::BI__builtin_fabsl:
    if (!EvaluateFloat(E->getArg(0), Result, Info))
      return false;

    if (Result.isNegative())
      Result.changeSign();
    return true;

  // FIXME: Builtin::BI__builtin_powi
  // FIXME: Builtin::BI__builtin_powif
  // FIXME: Builtin::BI__builtin_powil
  case Builtin::BI__builtin_copysign:
  case Builtin::BI__builtin_copysignf:
  case Builtin::BI__builtin_copysignl: {
    APFloat RHS(0.);
    if (!EvaluateFloat(E->getArg(0), Result, Info) ||
        !EvaluateFloat(E->getArg(1), RHS, Info))
      return false;
    Result.copySign(RHS);
    return true;
  }
  }
}
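// Constant-folding sketches for the builtins handled above (hypothetical):
//
//   __builtin_inf()               // +infinity in the type's semantics
//   __builtin_fabsf(-2.0f)        // 2.0f
//   __builtin_copysign(3.0, -0.)  // -3.0; the sign comes from the second
//                                 // argument, even for a negative zero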
bool FloatExprEvaluator::VisitUnaryReal(const UnaryOperator *E) {
  if (E->getSubExpr()->getType()->isAnyComplexType()) {
    ComplexValue CV;
    if (!EvaluateComplex(E->getSubExpr(), CV, Info))
      return false;
    Result = CV.FloatReal;
    return true;
  }

  return Visit(E->getSubExpr());
}

bool FloatExprEvaluator::VisitUnaryImag(const UnaryOperator *E) {
  if (E->getSubExpr()->getType()->isAnyComplexType()) {
    ComplexValue CV;
    if (!EvaluateComplex(E->getSubExpr(), CV, Info))
      return false;
    Result = CV.FloatImag;
    return true;
  }

  VisitIgnoredValue(E->getSubExpr());
  const llvm::fltSemantics &Sem = Info.Ctx.getFloatTypeSemantics(E->getType());
  Result = llvm::APFloat::getZero(Sem);
  return true;
}

bool FloatExprEvaluator::VisitUnaryOperator(const UnaryOperator *E) {
  switch (E->getOpcode()) {
  default: return Error(E);
  case UO_Plus:
    return EvaluateFloat(E->getSubExpr(), Result, Info);
  case UO_Minus:
    if (!EvaluateFloat(E->getSubExpr(), Result, Info))
      return false;
    Result.changeSign();
    return true;
  }
}

bool FloatExprEvaluator::VisitBinaryOperator(const BinaryOperator *E) {
  if (E->isPtrMemOp() || E->isAssignmentOp() || E->getOpcode() == BO_Comma)
    return ExprEvaluatorBaseTy::VisitBinaryOperator(E);

  APFloat RHS(0.0);
  bool LHSOK = EvaluateFloat(E->getLHS(), Result, Info);
  if (!LHSOK && !Info.noteFailure())
    return false;
  return EvaluateFloat(E->getRHS(), RHS, Info) && LHSOK &&
         handleFloatFloatBinOp(Info, E, Result, E->getOpcode(), RHS);
}

bool FloatExprEvaluator::VisitFloatingLiteral(const FloatingLiteral *E) {
  Result = E->getValue();
  return true;
}

bool FloatExprEvaluator::VisitCastExpr(const CastExpr *E) {
  const Expr* SubExpr = E->getSubExpr();

  switch (E->getCastKind()) {
  default:
    return ExprEvaluatorBaseTy::VisitCastExpr(E);

  case CK_IntegralToFloating: {
    APSInt IntResult;
    return EvaluateInteger(SubExpr, IntResult, Info) &&
           HandleIntToFloatCast(Info, E, SubExpr->getType(), IntResult,
                                E->getType(), Result);
  }

  case CK_FloatingCast: {
    if (!Visit(SubExpr))
      return false;
    return HandleFloatToFloatCast(Info, E, SubExpr->getType(), E->getType(),
                                  Result);
  }

  case CK_FloatingComplexToReal: {
    ComplexValue V;
    if (!EvaluateComplex(SubExpr, V, Info))
      return false;
    Result = V.getComplexFloatReal();
    return true;
  }
  }
}

//===----------------------------------------------------------------------===//
// Complex Evaluation (for float and integer)
//===----------------------------------------------------------------------===//

namespace {
class ComplexExprEvaluator
  : public ExprEvaluatorBase<ComplexExprEvaluator> {
  ComplexValue &Result;

public:
  ComplexExprEvaluator(EvalInfo &info, ComplexValue &Result)
    : ExprEvaluatorBaseTy(info), Result(Result) {}

  bool Success(const APValue &V, const Expr *e) {
    Result.setFrom(V);
    return true;
  }

  bool ZeroInitialization(const Expr *E);

  //===--------------------------------------------------------------------===//
  //                            Visitor Methods
  //===--------------------------------------------------------------------===//

  bool VisitImaginaryLiteral(const ImaginaryLiteral *E);
  bool VisitCastExpr(const CastExpr *E);
  bool VisitBinaryOperator(const BinaryOperator *E);
  bool VisitUnaryOperator(const UnaryOperator *E);
  bool VisitInitListExpr(const InitListExpr *E);
};
} // end anonymous namespace

static bool EvaluateComplex(const Expr *E, ComplexValue &Result,
                            EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isAnyComplexType());
  return ComplexExprEvaluator(Info, Result).Visit(E);
}

bool ComplexExprEvaluator::ZeroInitialization(const Expr *E) {
  QualType ElemTy = E->getType()->castAs<ComplexType>()->getElementType();
  if (ElemTy->isRealFloatingType()) {
    Result.makeComplexFloat();
    APFloat Zero = APFloat::getZero(Info.Ctx.getFloatTypeSemantics(ElemTy));
    Result.FloatReal = Zero;
    Result.FloatImag = Zero;
  } else {
    Result.makeComplexInt();
    APSInt Zero = Info.Ctx.MakeIntValue(0, ElemTy);
    Result.IntReal = Zero;
    Result.IntImag = Zero;
  }
  return true;
}

bool ComplexExprEvaluator::VisitImaginaryLiteral(const ImaginaryLiteral *E) {
  const Expr* SubExpr = E->getSubExpr();

  if (SubExpr->getType()->isRealFloatingType()) {
    Result.makeComplexFloat();
    APFloat &Imag = Result.FloatImag;
    if (!EvaluateFloat(SubExpr, Imag, Info))
      return false;

    Result.FloatReal = APFloat(Imag.getSemantics());
    return true;
  } else {
    assert(SubExpr->getType()->isIntegerType() &&
           "Unexpected imaginary literal.");

    Result.makeComplexInt();
    APSInt &Imag = Result.IntImag;
    if (!EvaluateInteger(SubExpr, Imag, Info))
      return false;

    Result.IntReal = APSInt(Imag.getBitWidth(), !Imag.isSigned());
    return true;
  }
}
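// E.g. the hypothetical imaginary literal '2.0i' takes the floating path
// above: the imaginary part evaluates to 2.0 and the real part is
// zero-initialized with matching float semantics, giving (0.0, 2.0).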
bool ComplexExprEvaluator::VisitCastExpr(const CastExpr *E) {
  switch (E->getCastKind()) {
  case CK_BitCast:
  case CK_BaseToDerived:
  case CK_DerivedToBase:
  case CK_UncheckedDerivedToBase:
  case CK_Dynamic:
  case CK_ToUnion:
  case CK_ArrayToPointerDecay:
  case CK_FunctionToPointerDecay:
  case CK_NullToPointer:
  case CK_NullToMemberPointer:
  case CK_BaseToDerivedMemberPointer:
  case CK_DerivedToBaseMemberPointer:
  case CK_MemberPointerToBoolean:
  case CK_ReinterpretMemberPointer:
  case CK_ConstructorConversion:
  case CK_IntegralToPointer:
  case CK_PointerToIntegral:
  case CK_PointerToBoolean:
  case CK_ToVoid:
  case CK_VectorSplat:
  case CK_IntegralCast:
  case CK_BooleanToSignedIntegral:
  case CK_IntegralToBoolean:
  case CK_IntegralToFloating:
  case CK_FloatingToIntegral:
  case CK_FloatingToBoolean:
  case CK_FloatingCast:
  case CK_CPointerToObjCPointerCast:
  case CK_BlockPointerToObjCPointerCast:
  case CK_AnyPointerToBlockPointerCast:
  case CK_ObjCObjectLValueCast:
  case CK_FloatingComplexToReal:
  case CK_FloatingComplexToBoolean:
  case CK_IntegralComplexToReal:
  case CK_IntegralComplexToBoolean:
  case CK_ARCProduceObject:
  case CK_ARCConsumeObject:
  case CK_ARCReclaimReturnedObject:
  case CK_ARCExtendBlockObject:
  case CK_CopyAndAutoreleaseBlockObject:
  case CK_BuiltinFnToFnPtr:
  case CK_ZeroToOCLEvent:
  case CK_ZeroToOCLQueue:
  case CK_NonAtomicToAtomic:
  case CK_AddressSpaceConversion:
  case CK_IntToOCLSampler:
    llvm_unreachable("invalid cast kind for complex value");

  case CK_LValueToRValue:
  case CK_AtomicToNonAtomic:
  case CK_NoOp:
    return ExprEvaluatorBaseTy::VisitCastExpr(E);

  case CK_Dependent:
  case CK_LValueBitCast:
  case CK_UserDefinedConversion:
    return Error(E);

  case CK_FloatingRealToComplex: {
    APFloat &Real = Result.FloatReal;
    if (!EvaluateFloat(E->getSubExpr(), Real, Info))
      return false;

    Result.makeComplexFloat();
    Result.FloatImag = APFloat(Real.getSemantics());
    return true;
  }

  case CK_FloatingComplexCast: {
    if (!Visit(E->getSubExpr()))
      return false;

    QualType To = E->getType()->getAs<ComplexType>()->getElementType();
    QualType From
      = E->getSubExpr()->getType()->getAs<ComplexType>()->getElementType();

    return HandleFloatToFloatCast(Info, E, From, To, Result.FloatReal) &&
           HandleFloatToFloatCast(Info, E, From, To, Result.FloatImag);
  }

  case CK_FloatingComplexToIntegralComplex: {
    if (!Visit(E->getSubExpr()))
      return false;

    QualType To = E->getType()->getAs<ComplexType>()->getElementType();
    QualType From
      = E->getSubExpr()->getType()->getAs<ComplexType>()->getElementType();
    Result.makeComplexInt();
    return HandleFloatToIntCast(Info, E, From, Result.FloatReal,
                                To, Result.IntReal) &&
           HandleFloatToIntCast(Info, E, From, Result.FloatImag,
                                To, Result.IntImag);
  }

  case CK_IntegralRealToComplex: {
    APSInt &Real = Result.IntReal;
    if (!EvaluateInteger(E->getSubExpr(), Real, Info))
      return false;

    Result.makeComplexInt();
    Result.IntImag = APSInt(Real.getBitWidth(), !Real.isSigned());
    return true;
  }

  case CK_IntegralComplexCast: {
    if (!Visit(E->getSubExpr()))
      return false;

    QualType To = E->getType()->getAs<ComplexType>()->getElementType();
    QualType From
      = E->getSubExpr()->getType()->getAs<ComplexType>()->getElementType();

    Result.IntReal = HandleIntToIntCast(Info, E, To, From, Result.IntReal);
    Result.IntImag = HandleIntToIntCast(Info, E, To, From, Result.IntImag);
    return true;
  }

  case CK_IntegralComplexToFloatingComplex: {
    if (!Visit(E->getSubExpr()))
      return false;

    QualType To = E->getType()->castAs<ComplexType>()->getElementType();
    QualType From
      = E->getSubExpr()->getType()->castAs<ComplexType>()->getElementType();
    Result.makeComplexFloat();
    return HandleIntToFloatCast(Info, E, From, Result.IntReal,
                                To, Result.FloatReal) &&
           HandleIntToFloatCast(Info, E, From, Result.IntImag,
                                To, Result.FloatImag);
  }
  }

  llvm_unreachable("unknown cast resulting in complex value");
}

bool ComplexExprEvaluator::VisitBinaryOperator(const BinaryOperator *E) {
  if (E->isPtrMemOp() || E->isAssignmentOp() || E->getOpcode() == BO_Comma)
    return ExprEvaluatorBaseTy::VisitBinaryOperator(E);

  // Track whether the LHS or RHS is real at the type system level. When this
  // is the case we can simplify our evaluation strategy.
  bool LHSReal = false, RHSReal = false;

  bool LHSOK;
  if (E->getLHS()->getType()->isRealFloatingType()) {
    LHSReal = true;
    APFloat &Real = Result.FloatReal;
    LHSOK = EvaluateFloat(E->getLHS(), Real, Info);
    if (LHSOK) {
      Result.makeComplexFloat();
      Result.FloatImag = APFloat(Real.getSemantics());
    }
  } else {
    LHSOK = Visit(E->getLHS());
  }
  if (!LHSOK && !Info.noteFailure())
    return false;

  ComplexValue RHS;
  if (E->getRHS()->getType()->isRealFloatingType()) {
    RHSReal = true;
    APFloat &Real = RHS.FloatReal;
    if (!EvaluateFloat(E->getRHS(), Real, Info) || !LHSOK)
      return false;
    RHS.makeComplexFloat();
    RHS.FloatImag = APFloat(Real.getSemantics());
  } else if (!EvaluateComplex(E->getRHS(), RHS, Info) || !LHSOK)
    return false;

  assert(!(LHSReal && RHSReal) &&
         "Cannot have both operands of a complex operation be real.");
  switch (E->getOpcode()) {
  default: return Error(E);
  case BO_Add:
    if (Result.isComplexFloat()) {
      Result.getComplexFloatReal().add(RHS.getComplexFloatReal(),
                                       APFloat::rmNearestTiesToEven);
      if (LHSReal)
        Result.getComplexFloatImag() = RHS.getComplexFloatImag();
      else if (!RHSReal)
        Result.getComplexFloatImag().add(RHS.getComplexFloatImag(),
                                         APFloat::rmNearestTiesToEven);
    } else {
      Result.getComplexIntReal() += RHS.getComplexIntReal();
      Result.getComplexIntImag() += RHS.getComplexIntImag();
    }
    break;
  case BO_Sub:
    if (Result.isComplexFloat()) {
      Result.getComplexFloatReal().subtract(RHS.getComplexFloatReal(),
                                            APFloat::rmNearestTiesToEven);
      if (LHSReal) {
        Result.getComplexFloatImag() = RHS.getComplexFloatImag();
        Result.getComplexFloatImag().changeSign();
      } else if (!RHSReal) {
        Result.getComplexFloatImag().subtract(RHS.getComplexFloatImag(),
                                              APFloat::rmNearestTiesToEven);
      }
    } else {
      Result.getComplexIntReal() -= RHS.getComplexIntReal();
      Result.getComplexIntImag() -= RHS.getComplexIntImag();
    }
    break;
  case BO_Mul:
    if (Result.isComplexFloat()) {
      // This is an implementation of complex multiplication according to the
      // constraints laid out in C11 Annex G. The implementation uses the
      // following naming scheme:
      //   (a + ib) * (c + id)
      ComplexValue LHS = Result;
      APFloat &A = LHS.getComplexFloatReal();
      APFloat &B = LHS.getComplexFloatImag();
      APFloat &C = RHS.getComplexFloatReal();
      APFloat &D = RHS.getComplexFloatImag();
      APFloat &ResR = Result.getComplexFloatReal();
      APFloat &ResI = Result.getComplexFloatImag();
      if (LHSReal) {
        assert(!RHSReal && "Cannot have two real operands for a complex op!");
        ResR = A * C;
        ResI = A * D;
      } else if (RHSReal) {
        ResR = C * A;
        ResI = C * B;
      } else {
        // In the fully general case, we need to handle NaNs and infinities
        // robustly.
        APFloat AC = A * C;
        APFloat BD = B * D;
        APFloat AD = A * D;
        APFloat BC = B * C;
        ResR = AC - BD;
        ResI = AD + BC;
        if (ResR.isNaN() && ResI.isNaN()) {
          bool Recalc = false;
          if (A.isInfinity() || B.isInfinity()) {
            A = APFloat::copySign(
                APFloat(A.getSemantics(), A.isInfinity() ? 1 : 0), A);
            B = APFloat::copySign(
                APFloat(B.getSemantics(), B.isInfinity() ? 1 : 0), B);
            if (C.isNaN())
              C = APFloat::copySign(APFloat(C.getSemantics()), C);
            if (D.isNaN())
              D = APFloat::copySign(APFloat(D.getSemantics()), D);
            Recalc = true;
          }
          if (C.isInfinity() || D.isInfinity()) {
            C = APFloat::copySign(
                APFloat(C.getSemantics(), C.isInfinity() ? 1 : 0), C);
            D = APFloat::copySign(
                APFloat(D.getSemantics(), D.isInfinity() ? 1 : 0), D);
            if (A.isNaN())
              A = APFloat::copySign(APFloat(A.getSemantics()), A);
            if (B.isNaN())
              B = APFloat::copySign(APFloat(B.getSemantics()), B);
            Recalc = true;
          }
          if (!Recalc && (AC.isInfinity() || BD.isInfinity() ||
                          AD.isInfinity() || BC.isInfinity())) {
            if (A.isNaN())
              A = APFloat::copySign(APFloat(A.getSemantics()), A);
            if (B.isNaN())
              B = APFloat::copySign(APFloat(B.getSemantics()), B);
            if (C.isNaN())
              C = APFloat::copySign(APFloat(C.getSemantics()), C);
            if (D.isNaN())
              D = APFloat::copySign(APFloat(D.getSemantics()), D);
            Recalc = true;
          }
          if (Recalc) {
            ResR = APFloat::getInf(A.getSemantics()) * (A * C - B * D);
            ResI = APFloat::getInf(A.getSemantics()) * (A * D + B * C);
          }
        }
      }
    } else {
      ComplexValue LHS = Result;
      Result.getComplexIntReal() =
        (LHS.getComplexIntReal() * RHS.getComplexIntReal() -
         LHS.getComplexIntImag() * RHS.getComplexIntImag());
      Result.getComplexIntImag() =
        (LHS.getComplexIntReal() * RHS.getComplexIntImag() +
         LHS.getComplexIntImag() * RHS.getComplexIntReal());
    }
    break;
  case BO_Div:
    if (Result.isComplexFloat()) {
      // This is an implementation of complex division according to the
      // constraints laid out in C11 Annex G. The implementation uses the
      // following naming scheme:
      //   (a + ib) / (c + id)
      ComplexValue LHS = Result;
      APFloat &A = LHS.getComplexFloatReal();
      APFloat &B = LHS.getComplexFloatImag();
      APFloat &C = RHS.getComplexFloatReal();
      APFloat &D = RHS.getComplexFloatImag();
      APFloat &ResR = Result.getComplexFloatReal();
      APFloat &ResI = Result.getComplexFloatImag();
      if (RHSReal) {
        ResR = A / C;
        ResI = B / C;
      } else {
        if (LHSReal) {
          // No real optimizations we can do here, stub out with zero.
          B = APFloat::getZero(A.getSemantics());
        }
        int DenomLogB = 0;
        APFloat MaxCD = maxnum(abs(C), abs(D));
        if (MaxCD.isFinite()) {
          DenomLogB = ilogb(MaxCD);
          C = scalbn(C, -DenomLogB, APFloat::rmNearestTiesToEven);
          D = scalbn(D, -DenomLogB, APFloat::rmNearestTiesToEven);
        }
        APFloat Denom = C * C + D * D;
        ResR = scalbn((A * C + B * D) / Denom, -DenomLogB,
                      APFloat::rmNearestTiesToEven);
        ResI = scalbn((B * C - A * D) / Denom, -DenomLogB,
                      APFloat::rmNearestTiesToEven);
        if (ResR.isNaN() && ResI.isNaN()) {
          if (Denom.isPosZero() && (!A.isNaN() || !B.isNaN())) {
            ResR = APFloat::getInf(ResR.getSemantics(), C.isNegative()) * A;
            ResI = APFloat::getInf(ResR.getSemantics(), C.isNegative()) * B;
          } else if ((A.isInfinity() || B.isInfinity()) && C.isFinite() &&
                     D.isFinite()) {
            A = APFloat::copySign(
                APFloat(A.getSemantics(), A.isInfinity() ? 1 : 0), A);
            B = APFloat::copySign(
                APFloat(B.getSemantics(), B.isInfinity() ? 1 : 0), B);
            ResR = APFloat::getInf(ResR.getSemantics()) * (A * C + B * D);
            ResI = APFloat::getInf(ResI.getSemantics()) * (B * C - A * D);
          } else if (MaxCD.isInfinity() && A.isFinite() && B.isFinite()) {
            C = APFloat::copySign(
                APFloat(C.getSemantics(), C.isInfinity() ? 1 : 0), C);
            D = APFloat::copySign(
                APFloat(D.getSemantics(), D.isInfinity() ? 1 : 0), D);
            ResR = APFloat::getZero(ResR.getSemantics()) * (A * C + B * D);
            ResI = APFloat::getZero(ResI.getSemantics()) * (B * C - A * D);
          }
        }
      }
    } else {
      if (RHS.getComplexIntReal() == 0 && RHS.getComplexIntImag() == 0)
        return Error(E, diag::note_expr_divide_by_zero);

      ComplexValue LHS = Result;
      APSInt Den = RHS.getComplexIntReal() * RHS.getComplexIntReal() +
        RHS.getComplexIntImag() * RHS.getComplexIntImag();
      Result.getComplexIntReal() =
        (LHS.getComplexIntReal() * RHS.getComplexIntReal() +
         LHS.getComplexIntImag() * RHS.getComplexIntImag()) / Den;
      Result.getComplexIntImag() =
        (LHS.getComplexIntImag() * RHS.getComplexIntReal() -
         LHS.getComplexIntReal() * RHS.getComplexIntImag()) / Den;
    }
    break;
  }

  return true;
}

bool ComplexExprEvaluator::VisitUnaryOperator(const UnaryOperator *E) {
  // Get the operand value into 'Result'.
  if (!Visit(E->getSubExpr()))
    return false;

  switch (E->getOpcode()) {
  default:
    return Error(E);
  case UO_Extension:
    return true;
  case UO_Plus:
    // The result is always just the subexpr.
    return true;
  case UO_Minus:
    if (Result.isComplexFloat()) {
      Result.getComplexFloatReal().changeSign();
      Result.getComplexFloatImag().changeSign();
    } else {
      Result.getComplexIntReal() = -Result.getComplexIntReal();
      Result.getComplexIntImag() = -Result.getComplexIntImag();
    }
    return true;
  case UO_Not:
    if (Result.isComplexFloat())
      Result.getComplexFloatImag().changeSign();
    else
      Result.getComplexIntImag() = -Result.getComplexIntImag();
    return true;
  }
}

bool ComplexExprEvaluator::VisitInitListExpr(const InitListExpr *E) {
  if (E->getNumInits() == 2) {
    if (E->getType()->isComplexType()) {
      Result.makeComplexFloat();
      if (!EvaluateFloat(E->getInit(0), Result.FloatReal, Info))
        return false;
      if (!EvaluateFloat(E->getInit(1), Result.FloatImag, Info))
        return false;
    } else {
      Result.makeComplexInt();
      if (!EvaluateInteger(E->getInit(0), Result.IntReal, Info))
        return false;
      if (!EvaluateInteger(E->getInit(1), Result.IntImag, Info))
        return false;
    }
    return true;
  }
  return ExprEvaluatorBaseTy::VisitInitListExpr(E);
}
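// E.g. a hypothetical '_Complex double z = {1.0, 2.0};' is evaluated by the
// two-element InitListExpr path above: init 0 supplies the real part and
// init 1 the imaginary part.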
//===----------------------------------------------------------------------===//
// Atomic expression evaluation, essentially just handling the
// NonAtomicToAtomic implicit conversion.
//===----------------------------------------------------------------------===//

namespace {
class AtomicExprEvaluator :
    public ExprEvaluatorBase<AtomicExprEvaluator> {
  APValue &Result;
public:
  AtomicExprEvaluator(EvalInfo &Info, APValue &Result)
      : ExprEvaluatorBaseTy(Info), Result(Result) {}

  bool Success(const APValue &V, const Expr *E) {
    Result = V;
    return true;
  }

  bool ZeroInitialization(const Expr *E) {
    ImplicitValueInitExpr VIE(
        E->getType()->castAs<AtomicType>()->getValueType());
    return Evaluate(Result, Info, &VIE);
  }

  bool VisitCastExpr(const CastExpr *E) {
    switch (E->getCastKind()) {
    default:
      return ExprEvaluatorBaseTy::VisitCastExpr(E);
    case CK_NonAtomicToAtomic:
      return Evaluate(Result, Info, E->getSubExpr());
    }
  }
};
} // end anonymous namespace

static bool EvaluateAtomic(const Expr *E, APValue &Result, EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isAtomicType());
  return AtomicExprEvaluator(Info, Result).Visit(E);
}

//===----------------------------------------------------------------------===//
// Void expression evaluation, primarily for a cast to void on the LHS of a
// comma operator
//===----------------------------------------------------------------------===//

namespace {
class VoidExprEvaluator
  : public ExprEvaluatorBase<VoidExprEvaluator> {
public:
  VoidExprEvaluator(EvalInfo &Info) : ExprEvaluatorBaseTy(Info) {}

  bool Success(const APValue &V, const Expr *e) { return true; }

  bool VisitCastExpr(const CastExpr *E) {
    switch (E->getCastKind()) {
    default:
      return ExprEvaluatorBaseTy::VisitCastExpr(E);
    case CK_ToVoid:
      VisitIgnoredValue(E->getSubExpr());
      return true;
    }
  }

  bool VisitCallExpr(const CallExpr *E) {
    switch (E->getBuiltinCallee()) {
    default:
      return ExprEvaluatorBaseTy::VisitCallExpr(E);
    case Builtin::BI__assume:
    case Builtin::BI__builtin_assume:
      // The argument is not evaluated!
      return true;
    }
  }
};
} // end anonymous namespace

static bool EvaluateVoid(const Expr *E, EvalInfo &Info) {
  assert(E->isRValue() && E->getType()->isVoidType());
  return VoidExprEvaluator(Info).Visit(E);
}
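// Sketch of the unevaluated-argument rule above (hypothetical input):
//
//   __builtin_assume(i++ > 0);  // evaluates as a constant void value;
//                               // 'i++ > 0' is never evaluated, so its
//                               // side effect does not block folding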
//===----------------------------------------------------------------------===//
// Top level Expr::EvaluateAsRValue method.
//===----------------------------------------------------------------------===//

static bool Evaluate(APValue &Result, EvalInfo &Info, const Expr *E) {
  // In C, function designators are not lvalues, but we evaluate them as if
  // they are.
  QualType T = E->getType();
  if (E->isGLValue() || T->isFunctionType()) {
    LValue LV;
    if (!EvaluateLValue(E, LV, Info))
      return false;
    LV.moveInto(Result);
  } else if (T->isVectorType()) {
    if (!EvaluateVector(E, Result, Info))
      return false;
  } else if (T->isIntegralOrEnumerationType()) {
    if (!IntExprEvaluator(Info, Result).Visit(E))
      return false;
  } else if (T->hasPointerRepresentation()) {
    LValue LV;
    if (!EvaluatePointer(E, LV, Info))
      return false;
    LV.moveInto(Result);
  } else if (T->isRealFloatingType()) {
    llvm::APFloat F(0.0);
    if (!EvaluateFloat(E, F, Info))
      return false;
    Result = APValue(F);
  } else if (T->isAnyComplexType()) {
    ComplexValue C;
    if (!EvaluateComplex(E, C, Info))
      return false;
    C.moveInto(Result);
  } else if (T->isMemberPointerType()) {
    MemberPtr P;
    if (!EvaluateMemberPointer(E, P, Info))
      return false;
    P.moveInto(Result);
    return true;
  } else if (T->isArrayType()) {
    LValue LV;
    LV.set(E, Info.CurrentCall->Index);
    APValue &Value = Info.CurrentCall->createTemporary(E, false);
    if (!EvaluateArray(E, LV, Value, Info))
      return false;
    Result = Value;
  } else if (T->isRecordType()) {
    LValue LV;
    LV.set(E, Info.CurrentCall->Index);
    APValue &Value = Info.CurrentCall->createTemporary(E, false);
    if (!EvaluateRecord(E, LV, Value, Info))
      return false;
    Result = Value;
  } else if (T->isVoidType()) {
    if (!Info.getLangOpts().CPlusPlus11)
      Info.CCEDiag(E, diag::note_constexpr_nonliteral)
        << E->getType();
    if (!EvaluateVoid(E, Info))
      return false;
  } else if (T->isAtomicType()) {
    if (!EvaluateAtomic(E, Result, Info))
      return false;
  } else if (Info.getLangOpts().CPlusPlus11) {
    Info.FFDiag(E, diag::note_constexpr_nonliteral) << E->getType();
    return false;
  } else {
    Info.FFDiag(E, diag::note_invalid_subexpr_in_const_expr);
    return false;
  }

  return true;
}

/// EvaluateInPlace - Evaluate an expression in-place in an APValue. In some
/// cases, the in-place evaluation is essential, since later initializers for
/// an object can indirectly refer to subobjects which were initialized
/// earlier.
static bool EvaluateInPlace(APValue &Result, EvalInfo &Info,
                            const LValue &This, const Expr *E,
                            bool AllowNonLiteralTypes) {
  assert(!E->isValueDependent());

  if (!AllowNonLiteralTypes && !CheckLiteralType(Info, E, &This))
    return false;

  if (E->isRValue()) {
    // Evaluate arrays and record types in-place, so that later initializers
    // can refer to earlier-initialized members of the object.
    if (E->getType()->isArrayType())
      return EvaluateArray(E, This, Result, Info);
    else if (E->getType()->isRecordType())
      return EvaluateRecord(E, This, Result, Info);
  }

  // For any other type, in-place evaluation is unimportant.
  return Evaluate(Result, Info, E);
}

/// EvaluateAsRValue - Try to evaluate this expression, performing an implicit
/// lvalue-to-rvalue cast if it is an lvalue.
static bool EvaluateAsRValue(EvalInfo &Info, const Expr *E, APValue &Result) {
  if (E->getType().isNull())
    return false;

  if (!CheckLiteralType(Info, E))
    return false;

  if (!::Evaluate(Result, Info, E))
    return false;

  if (E->isGLValue()) {
    LValue LV;
    LV.setFrom(Info.Ctx, Result);
    if (!handleLValueToRValueConversion(Info, E, E->getType(), LV, Result))
      return false;
  }

  // Check this core constant expression is a constant expression.
  return CheckConstantExpression(Info, E->getExprLoc(), E->getType(), Result);
}
static bool FastEvaluateAsRValue(const Expr *Exp, Expr::EvalResult &Result,
                                 const ASTContext &Ctx, bool &IsConst) {
  // Fast-path evaluations of integer literals, since we sometimes see files
  // containing vast quantities of these.
  if (const IntegerLiteral *L = dyn_cast<IntegerLiteral>(Exp)) {
    Result.Val = APValue(APSInt(L->getValue(),
                                L->getType()->isUnsignedIntegerType()));
    IsConst = true;
    return true;
  }

  // This case should be rare, but we need to check it before we check on
  // the type below.
  if (Exp->getType().isNull()) {
    IsConst = false;
    return true;
  }

  // FIXME: Evaluating values of large array and record types can cause
  // performance problems. Only do so in C++11 for now.
  if (Exp->isRValue() && (Exp->getType()->isArrayType() ||
                          Exp->getType()->isRecordType()) &&
      !Ctx.getLangOpts().CPlusPlus11) {
    IsConst = false;
    return true;
  }
  return false;
}
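// The fast path above turns a plain literal such as '42' directly into
// APValue(APSInt(42)) without constructing an EvalInfo, which matters for
// machine-generated files containing vast numbers of integer literals.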
/// EvaluateAsRValue - Return true if this is a constant which we can fold
/// using any crazy technique (that has nothing to do with language standards)
/// that we want to.  If this function returns true, it returns the folded
/// constant in Result. If this expression is a glvalue, an lvalue-to-rvalue
/// conversion will be applied to the result.
bool Expr::EvaluateAsRValue(EvalResult &Result, const ASTContext &Ctx) const {
  bool IsConst;
  if (FastEvaluateAsRValue(this, Result, Ctx, IsConst))
    return IsConst;

  EvalInfo Info(Ctx, Result, EvalInfo::EM_IgnoreSideEffects);
  return ::EvaluateAsRValue(Info, this, Result.Val);
}

bool Expr::EvaluateAsBooleanCondition(bool &Result,
                                      const ASTContext &Ctx) const {
  EvalResult Scratch;
  return EvaluateAsRValue(Scratch, Ctx) &&
         HandleConversionToBool(Scratch.Val, Result);
}

static bool hasUnacceptableSideEffect(Expr::EvalStatus &Result,
                                      Expr::SideEffectsKind SEK) {
  return (SEK < Expr::SE_AllowSideEffects && Result.HasSideEffects) ||
         (SEK < Expr::SE_AllowUndefinedBehavior && Result.HasUndefinedBehavior);
}

bool Expr::EvaluateAsInt(APSInt &Result, const ASTContext &Ctx,
                         SideEffectsKind AllowSideEffects) const {
  if (!getType()->isIntegralOrEnumerationType())
    return false;

  EvalResult ExprResult;
  if (!EvaluateAsRValue(ExprResult, Ctx) || !ExprResult.Val.isInt() ||
      hasUnacceptableSideEffect(ExprResult, AllowSideEffects))
    return false;

  Result = ExprResult.Val.getInt();
  return true;
}

bool Expr::EvaluateAsFloat(APFloat &Result, const ASTContext &Ctx,
                           SideEffectsKind AllowSideEffects) const {
  if (!getType()->isRealFloatingType())
    return false;

  EvalResult ExprResult;
  if (!EvaluateAsRValue(ExprResult, Ctx) || !ExprResult.Val.isFloat() ||
      hasUnacceptableSideEffect(ExprResult, AllowSideEffects))
    return false;

  Result = ExprResult.Val.getFloat();
  return true;
}

bool Expr::EvaluateAsLValue(EvalResult &Result, const ASTContext &Ctx) const {
  EvalInfo Info(Ctx, Result, EvalInfo::EM_ConstantFold);

  LValue LV;
  if (!EvaluateLValue(this, LV, Info) || Result.HasSideEffects ||
      !CheckLValueConstantExpression(Info, getExprLoc(),
                                     Ctx.getLValueReferenceType(getType()),
                                     LV))
    return false;

  LV.moveInto(Result.Val);
  return true;
}

bool Expr::EvaluateAsInitializer(APValue &Value, const ASTContext &Ctx,
                                 const VarDecl *VD,
                            SmallVectorImpl<PartialDiagnosticAt> &Notes) const {
  // FIXME: Evaluating initializers for large array and record types can cause
  // performance problems. Only do so in C++11 for now.
  if (isRValue() && (getType()->isArrayType() || getType()->isRecordType()) &&
      !Ctx.getLangOpts().CPlusPlus11)
    return false;

  Expr::EvalStatus EStatus;
  EStatus.Diag = &Notes;

  EvalInfo InitInfo(Ctx, EStatus, VD->isConstexpr()
                                      ? EvalInfo::EM_ConstantExpression
                                      : EvalInfo::EM_ConstantFold);
  InitInfo.setEvaluatingDecl(VD, Value);

  LValue LVal;
  LVal.set(VD);

  // C++11 [basic.start.init]p2:
  //  Variables with static storage duration or thread storage duration shall
  //  be zero-initialized before any other initialization takes place.
  // This behavior is not present in C.
  if (Ctx.getLangOpts().CPlusPlus && !VD->hasLocalStorage() &&
      !VD->getType()->isReferenceType()) {
    ImplicitValueInitExpr VIE(VD->getType());
    if (!EvaluateInPlace(Value, InitInfo, LVal, &VIE,
                         /*AllowNonLiteralTypes=*/true))
      return false;
  }

  if (!EvaluateInPlace(Value, InitInfo, LVal, this,
                       /*AllowNonLiteralTypes=*/true) ||
      EStatus.HasSideEffects)
    return false;

  return CheckConstantExpression(InitInfo, VD->getLocation(), VD->getType(),
                                 Value);
}

/// isEvaluatable - Call EvaluateAsRValue to see if this expression can be
/// constant folded, but discard the result.
bool Expr::isEvaluatable(const ASTContext &Ctx, SideEffectsKind SEK) const {
  EvalResult Result;
  return EvaluateAsRValue(Result, Ctx) &&
         !hasUnacceptableSideEffect(Result, SEK);
}

APSInt Expr::EvaluateKnownConstInt(const ASTContext &Ctx,
                    SmallVectorImpl<PartialDiagnosticAt> *Diag) const {
  EvalResult EvalResult;
  EvalResult.Diag = Diag;
  bool Result = EvaluateAsRValue(EvalResult, Ctx);
  (void)Result;
  assert(Result && "Could not evaluate expression");
  assert(EvalResult.Val.isInt() && "Expression did not evaluate to integer");

  return EvalResult.Val.getInt();
}

void Expr::EvaluateForOverflow(const ASTContext &Ctx) const {
  bool IsConst;
  EvalResult EvalResult;
  if (!FastEvaluateAsRValue(this, EvalResult, Ctx, IsConst)) {
    EvalInfo Info(Ctx, EvalResult, EvalInfo::EM_EvaluateForOverflow);
    (void)::EvaluateAsRValue(Info, this, EvalResult.Val);
  }
}

bool Expr::EvalResult::isGlobalLValue() const {
  assert(Val.isLValue());
  return IsGlobalLValue(Val.getLValueBase());
}
/// isIntegerConstantExpr - this recursive routine will test if an expression
/// is an integer constant expression.

/// FIXME: Pass up a reason why! Invalid operation in i-c-e, division by zero,
/// comma, etc

// CheckICE - This function does the fundamental ICE checking: the returned
// ICEDiag contains an ICEKind indicating whether the expression is an ICE,
// and a (possibly null) SourceLocation indicating the location of the problem.
//
// Note that to reduce code duplication, this helper does no evaluation
// itself; the caller checks whether the expression is evaluatable, and
// in the rare cases where CheckICE actually cares about the evaluated
// value, it calls into Evaluate.

namespace {

enum ICEKind {
  /// This expression is an ICE.
  IK_ICE,
  /// This expression is not an ICE, but if it isn't evaluated, it's
  /// a legal subexpression for an ICE. This return value is used to handle
  /// the comma operator in C99 mode, and non-constant subexpressions.
  IK_ICEIfUnevaluated,
  /// This expression is not an ICE, and is not a legal subexpression for one.
  IK_NotICE
};

struct ICEDiag {
  ICEKind Kind;
  SourceLocation Loc;

  ICEDiag(ICEKind IK, SourceLocation l) : Kind(IK), Loc(l) {}
};

}

static ICEDiag NoDiag() { return ICEDiag(IK_ICE, SourceLocation()); }

static ICEDiag Worst(ICEDiag A, ICEDiag B) { return A.Kind >= B.Kind ? A : B; }

static ICEDiag CheckEvalInICE(const Expr* E, const ASTContext &Ctx) {
  Expr::EvalResult EVResult;
  if (!E->EvaluateAsRValue(EVResult, Ctx) || EVResult.HasSideEffects ||
      !EVResult.Val.isInt())
    return ICEDiag(IK_NotICE, E->getLocStart());

  return NoDiag();
}

static ICEDiag CheckICE(const Expr* E, const ASTContext &Ctx) {
  assert(!E->isValueDependent() && "Should not see value dependent exprs!");
  if (!E->getType()->isIntegralOrEnumerationType())
    return ICEDiag(IK_NotICE, E->getLocStart());

  switch (E->getStmtClass()) {
#define ABSTRACT_STMT(Node)
#define STMT(Node, Base) case Expr::Node##Class:
#define EXPR(Node, Base)
#include "clang/AST/StmtNodes.inc"
  case Expr::PredefinedExprClass:
  case Expr::FloatingLiteralClass:
  case Expr::ImaginaryLiteralClass:
  case Expr::StringLiteralClass:
  case Expr::ArraySubscriptExprClass:
  case Expr::OMPArraySectionExprClass:
  case Expr::MemberExprClass:
  case Expr::CompoundAssignOperatorClass:
  case Expr::CompoundLiteralExprClass:
  case Expr::ExtVectorElementExprClass:
  case Expr::DesignatedInitExprClass:
  case Expr::ArrayInitLoopExprClass:
  case Expr::ArrayInitIndexExprClass:
  case Expr::NoInitExprClass:
  case Expr::DesignatedInitUpdateExprClass:
  case Expr::ImplicitValueInitExprClass:
  case Expr::ParenListExprClass:
  case Expr::VAArgExprClass:
  case Expr::AddrLabelExprClass:
  case Expr::StmtExprClass:
  case Expr::CXXMemberCallExprClass:
  case Expr::CUDAKernelCallExprClass:
  case Expr::CXXDynamicCastExprClass:
  case Expr::CXXTypeidExprClass:
  case Expr::CXXUuidofExprClass:
  case Expr::MSPropertyRefExprClass:
  case Expr::MSPropertySubscriptExprClass:
  case Expr::CXXNullPtrLiteralExprClass:
  case Expr::UserDefinedLiteralClass:
  case Expr::CXXThisExprClass:
  case Expr::CXXThrowExprClass:
  case Expr::CXXNewExprClass:
  case Expr::CXXDeleteExprClass:
  case Expr::CXXPseudoDestructorExprClass:
  case Expr::UnresolvedLookupExprClass:
  case Expr::TypoExprClass:
  case Expr::DependentScopeDeclRefExprClass:
  case Expr::CXXConstructExprClass:
  case Expr::CXXInheritedCtorInitExprClass:
  case Expr::CXXStdInitializerListExprClass:
  case Expr::CXXBindTemporaryExprClass:
  case Expr::ExprWithCleanupsClass:
  case Expr::CXXTemporaryObjectExprClass:
  case Expr::CXXUnresolvedConstructExprClass:
  case Expr::CXXDependentScopeMemberExprClass:
  case Expr::UnresolvedMemberExprClass:
  case Expr::ObjCStringLiteralClass:
  case Expr::ObjCBoxedExprClass:
  case Expr::ObjCArrayLiteralClass:
  case Expr::ObjCDictionaryLiteralClass:
  case Expr::ObjCEncodeExprClass:
  case Expr::ObjCMessageExprClass:
  case Expr::ObjCSelectorExprClass:
  case Expr::ObjCProtocolExprClass:
  case Expr::ObjCIvarRefExprClass:
  case Expr::ObjCPropertyRefExprClass:
  case Expr::ObjCSubscriptRefExprClass:
  case Expr::ObjCIsaExprClass:
  case Expr::ObjCAvailabilityCheckExprClass:
  case Expr::ShuffleVectorExprClass:
  case Expr::ConvertVectorExprClass:
  case Expr::BlockExprClass:
  case Expr::NoStmtClass:
  case Expr::OpaqueValueExprClass:
  case Expr::PackExpansionExprClass:
  case Expr::SubstNonTypeTemplateParmPackExprClass:
  case Expr::FunctionParmPackExprClass:
  case Expr::AsTypeExprClass:
  case Expr::ObjCIndirectCopyRestoreExprClass:
  case Expr::MaterializeTemporaryExprClass:
  case Expr::PseudoObjectExprClass:
  case Expr::AtomicExprClass:
  case Expr::LambdaExprClass:
  case Expr::CXXFoldExprClass:
  case Expr::CoawaitExprClass:
  case Expr::CoyieldExprClass:
    return ICEDiag(IK_NotICE, E->getLocStart());

  case Expr::InitListExprClass: {
    // C++03 [dcl.init]p13: If T is a scalar type, then a declaration of the
    // form "T x = { a };" is equivalent to "T x = a;".
    // Unless we're initializing a reference, T is a scalar as it is known to
    // be of integral or enumeration type.
    if (E->isRValue())
      if (cast<InitListExpr>(E)->getNumInits() == 1)
        return CheckICE(cast<InitListExpr>(E)->getInit(0), Ctx);
    return ICEDiag(IK_NotICE, E->getLocStart());
  }

  case Expr::SizeOfPackExprClass:
  case Expr::GNUNullExprClass:
    // GCC considers the GNU __null value to be an integral constant
    // expression.
    return NoDiag();

  case Expr::SubstNonTypeTemplateParmExprClass:
    return
      CheckICE(cast<SubstNonTypeTemplateParmExpr>(E)->getReplacement(), Ctx);

  case Expr::ParenExprClass:
    return CheckICE(cast<ParenExpr>(E)->getSubExpr(), Ctx);
  case Expr::GenericSelectionExprClass:
    return CheckICE(cast<GenericSelectionExpr>(E)->getResultExpr(), Ctx);
  case Expr::IntegerLiteralClass:
  case Expr::CharacterLiteralClass:
  case Expr::ObjCBoolLiteralExprClass:
  case Expr::CXXBoolLiteralExprClass:
  case Expr::CXXScalarValueInitExprClass:
  case Expr::TypeTraitExprClass:
  case Expr::ArrayTypeTraitExprClass:
  case Expr::ExpressionTraitExprClass:
  case Expr::CXXNoexceptExprClass:
    return NoDiag();
  case Expr::CallExprClass:
  case Expr::CXXOperatorCallExprClass: {
    // C99 6.6/3 allows function calls within unevaluated subexpressions of
    // constant expressions, but they can never be ICEs because an ICE cannot
    // contain an operand of (pointer to) function type.
    const CallExpr *CE = cast<CallExpr>(E);
    if (CE->getBuiltinCallee())
      return CheckEvalInICE(E, Ctx);
    return ICEDiag(IK_NotICE, E->getLocStart());
  }
  case Expr::DeclRefExprClass: {
    if (isa<EnumConstantDecl>(cast<DeclRefExpr>(E)->getDecl()))
      return NoDiag();
    const ValueDecl *D = dyn_cast<ValueDecl>(cast<DeclRefExpr>(E)->getDecl());
    if (Ctx.getLangOpts().CPlusPlus &&
        D && IsConstNonVolatile(D->getType())) {
      // Parameter variables are never constants.  Without this check,
      // getAnyInitializer() can find a default argument, which leads
      // to chaos.
      if (isa<ParmVarDecl>(D))
        return ICEDiag(IK_NotICE, cast<DeclRefExpr>(E)->getLocation());

      // C++ 7.1.5.1p2
      //   A variable of non-volatile const-qualified integral or enumeration
      //   type initialized by an ICE can be used in ICEs.
      if (const VarDecl *Dcl = dyn_cast<VarDecl>(D)) {
        if (!Dcl->getType()->isIntegralOrEnumerationType())
          return ICEDiag(IK_NotICE, cast<DeclRefExpr>(E)->getLocation());

        const VarDecl *VD;
        // Look for a declaration of this variable that has an initializer,
        // and check whether it is an ICE.
        if (Dcl->getAnyInitializer(VD) && VD->checkInitIsICE())
          return NoDiag();
        else
          return ICEDiag(IK_NotICE, cast<DeclRefExpr>(E)->getLocation());
      }
    }
    return ICEDiag(IK_NotICE, E->getLocStart());
  }
  case Expr::UnaryOperatorClass: {
    const UnaryOperator *Exp = cast<UnaryOperator>(E);
    switch (Exp->getOpcode()) {
    case UO_PostInc:
    case UO_PostDec:
    case UO_PreInc:
    case UO_PreDec:
    case UO_AddrOf:
    case UO_Deref:
    case UO_Coawait:
      // C99 6.6/3 allows increment and decrement within unevaluated
      // subexpressions of constant expressions, but they can never be ICEs
      // because an ICE cannot contain an lvalue operand.
      return ICEDiag(IK_NotICE, E->getLocStart());
    case UO_Extension:
    case UO_LNot:
    case UO_Plus:
    case UO_Minus:
    case UO_Not:
    case UO_Real:
    case UO_Imag:
      return CheckICE(Exp->getSubExpr(), Ctx);
    }

    // OffsetOf falls through here.
  }
  case Expr::OffsetOfExprClass: {
    // Note that per C99, offsetof must be an ICE. And AFAIK, using
    // EvaluateAsRValue matches the proposed gcc behavior for cases like
    // "offsetof(struct s{int x[4];}, x[1.0])".  This doesn't affect
    // compliance: we should warn earlier for offsetof expressions with
    // array subscripts that aren't ICEs, and if the array subscripts
    // are ICEs, the value of the offsetof must be an integer constant.
    return CheckEvalInICE(E, Ctx);
  }
  case Expr::UnaryExprOrTypeTraitExprClass: {
    const UnaryExprOrTypeTraitExpr *Exp = cast<UnaryExprOrTypeTraitExpr>(E);
    if ((Exp->getKind() ==  UETT_SizeOf) &&
        Exp->getTypeOfArgument()->isVariableArrayType())
      return ICEDiag(IK_NotICE, E->getLocStart());
    return NoDiag();
  }
  case Expr::BinaryOperatorClass: {
    const BinaryOperator *Exp = cast<BinaryOperator>(E);
    switch (Exp->getOpcode()) {
    case BO_PtrMemD:
    case BO_PtrMemI:
    case BO_Assign:
    case BO_MulAssign:
    case BO_DivAssign:
    case BO_RemAssign:
    case BO_AddAssign:
    case BO_SubAssign:
    case BO_ShlAssign:
    case BO_ShrAssign:
    case BO_AndAssign:
    case BO_XorAssign:
    case BO_OrAssign:
      // C99 6.6/3 allows assignments within unevaluated subexpressions of
      // constant expressions, but they can never be ICEs because an ICE
      // cannot contain an lvalue operand.
      return ICEDiag(IK_NotICE, E->getLocStart());

    case BO_Mul:
    case BO_Div:
    case BO_Rem:
    case BO_Add:
    case BO_Sub:
    case BO_Shl:
    case BO_Shr:
    case BO_LT:
    case BO_GT:
    case BO_LE:
    case BO_GE:
    case BO_EQ:
    case BO_NE:
    case BO_And:
    case BO_Xor:
    case BO_Or:
    case BO_Comma: {
      ICEDiag LHSResult = CheckICE(Exp->getLHS(), Ctx);
      ICEDiag RHSResult = CheckICE(Exp->getRHS(), Ctx);
      if (Exp->getOpcode() == BO_Div ||
          Exp->getOpcode() == BO_Rem) {
        // EvaluateAsRValue gives an error for undefined Div/Rem, so make
        // sure we don't evaluate one.
        if (LHSResult.Kind == IK_ICE && RHSResult.Kind == IK_ICE) {
          llvm::APSInt REval = Exp->getRHS()->EvaluateKnownConstInt(Ctx);
          if (REval == 0)
            return ICEDiag(IK_ICEIfUnevaluated, E->getLocStart());
          if (REval.isSigned() && REval.isAllOnesValue()) {
            llvm::APSInt LEval = Exp->getLHS()->EvaluateKnownConstInt(Ctx);
            if (LEval.isMinSignedValue())
              return ICEDiag(IK_ICEIfUnevaluated, E->getLocStart());
          }
        }
      }
      if (Exp->getOpcode() == BO_Comma) {
        if (Ctx.getLangOpts().C99) {
          // C99 6.6p3 introduces a strange edge case: comma can be in an ICE
          // if it isn't evaluated.
          if (LHSResult.Kind == IK_ICE && RHSResult.Kind == IK_ICE)
            return ICEDiag(IK_ICEIfUnevaluated, E->getLocStart());
        } else {
          // In both C89 and C++, commas in ICEs are illegal.
          return ICEDiag(IK_NotICE, E->getLocStart());
        }
      }
      return Worst(LHSResult, RHSResult);
    }
    case BO_LAnd:
    case BO_LOr: {
      ICEDiag LHSResult = CheckICE(Exp->getLHS(), Ctx);
      ICEDiag RHSResult = CheckICE(Exp->getRHS(), Ctx);
      if (LHSResult.Kind == IK_ICE && RHSResult.Kind == IK_ICEIfUnevaluated) {
        // Rare case where the RHS has a comma "side-effect"; we need
        // to actually check the condition to see whether the side
        // with the comma is evaluated.
        if ((Exp->getOpcode() == BO_LAnd) !=
            (Exp->getLHS()->EvaluateKnownConstInt(Ctx) == 0))
          return RHSResult;
        return NoDiag();
      }

      return Worst(LHSResult, RHSResult);
    }
    }
  }
  case Expr::ImplicitCastExprClass:
  case Expr::CStyleCastExprClass:
  case Expr::CXXFunctionalCastExprClass:
  case Expr::CXXStaticCastExprClass:
  case Expr::CXXReinterpretCastExprClass:
  case Expr::CXXConstCastExprClass:
  case Expr::ObjCBridgedCastExprClass: {
    const Expr *SubExpr = cast<CastExpr>(E)->getSubExpr();
    if (isa<ExplicitCastExpr>(E)) {
      if (const FloatingLiteral *FL
            = dyn_cast<FloatingLiteral>(SubExpr->IgnoreParenImpCasts())) {
        unsigned DestWidth = Ctx.getIntWidth(E->getType());
        bool DestSigned = E->getType()->isSignedIntegerOrEnumerationType();
        APSInt IgnoredVal(DestWidth, !DestSigned);
        bool Ignored;
        // If the value does not fit in the destination type, the behavior is
        // undefined, so we are not required to treat it as a constant
        // expression.
        if (FL->getValue().convertToInteger(IgnoredVal,
                                            llvm::APFloat::rmTowardZero,
                                            &Ignored) & APFloat::opInvalidOp)
          return ICEDiag(IK_NotICE, E->getLocStart());
        return NoDiag();
      }
    }
    switch (cast<CastExpr>(E)->getCastKind()) {
    case CK_LValueToRValue:
    case CK_AtomicToNonAtomic:
    case CK_NonAtomicToAtomic:
    case CK_NoOp:
    case CK_IntegralToBoolean:
    case CK_IntegralCast:
      return CheckICE(SubExpr, Ctx);
    default:
      return ICEDiag(IK_NotICE, E->getLocStart());
    }
  }
  case Expr::BinaryConditionalOperatorClass: {
    const BinaryConditionalOperator *Exp = cast<BinaryConditionalOperator>(E);
    ICEDiag CommonResult = CheckICE(Exp->getCommon(), Ctx);
    if (CommonResult.Kind == IK_NotICE) return CommonResult;
    ICEDiag FalseResult = CheckICE(Exp->getFalseExpr(), Ctx);
    if (FalseResult.Kind == IK_NotICE) return FalseResult;
    if (CommonResult.Kind == IK_ICEIfUnevaluated) return CommonResult;
    if (FalseResult.Kind == IK_ICEIfUnevaluated &&
        Exp->getCommon()->EvaluateKnownConstInt(Ctx) != 0) return NoDiag();
    return FalseResult;
  }
  case Expr::ConditionalOperatorClass: {
    const ConditionalOperator *Exp = cast<ConditionalOperator>(E);
    // If the condition (ignoring parens) is a __builtin_constant_p call,
    // then only the true side is actually considered in an integer constant
    // expression, and it is fully evaluated.  This is an important GNU
    // extension.  See GCC PR38377 for discussion.
    if (const CallExpr *CallCE
          = dyn_cast<CallExpr>(Exp->getCond()->IgnoreParenCasts()))
      if (CallCE->getBuiltinCallee() == Builtin::BI__builtin_constant_p)
        return CheckEvalInICE(E, Ctx);
    ICEDiag CondResult = CheckICE(Exp->getCond(), Ctx);
    if (CondResult.Kind == IK_NotICE)
      return CondResult;

    ICEDiag TrueResult = CheckICE(Exp->getTrueExpr(), Ctx);
    ICEDiag FalseResult = CheckICE(Exp->getFalseExpr(), Ctx);

    if (TrueResult.Kind == IK_NotICE)
      return TrueResult;
    if (FalseResult.Kind == IK_NotICE)
      return FalseResult;
    if (CondResult.Kind == IK_ICEIfUnevaluated)
      return CondResult;
    if (TrueResult.Kind == IK_ICE && FalseResult.Kind == IK_ICE)
      return NoDiag();
    // Rare case where the diagnostics depend on which side is evaluated
    // Note that if we get here, CondResult is 0, and at least one of
    // TrueResult and FalseResult is non-zero.
    if (Exp->getCond()->EvaluateKnownConstInt(Ctx) == 0)
      return FalseResult;
    return TrueResult;
  }
  case Expr::CXXDefaultArgExprClass:
    return CheckICE(cast<CXXDefaultArgExpr>(E)->getExpr(), Ctx);
  case Expr::CXXDefaultInitExprClass:
    return CheckICE(cast<CXXDefaultInitExpr>(E)->getExpr(), Ctx);
  case Expr::ChooseExprClass: {
    return CheckICE(cast<ChooseExpr>(E)->getChosenSubExpr(), Ctx);
  }
  }

  llvm_unreachable("Invalid StmtClass!");
}
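// Worked example of the lattice above (hypothetical C99 input):
//
//   int a[1 ? 3 : (1, 2)];
//
// CheckICE marks the comma branch IK_ICEIfUnevaluated, but the constant
// condition selects the true branch, so the whole subscript is still an
// ICE; Worst() propagates the most severe kind outward otherwise.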
/// Evaluate an expression as a C++11 integral constant expression.
static bool EvaluateCPlusPlus11IntegralConstantExpr(const ASTContext &Ctx,
                                                    const Expr *E,
                                                    llvm::APSInt *Value,
                                                    SourceLocation *Loc) {
  if (!E->getType()->isIntegralOrEnumerationType()) {
    if (Loc) *Loc = E->getExprLoc();
    return false;
  }

  APValue Result;
  if (!E->isCXX11ConstantExpr(Ctx, &Result, Loc))
    return false;

  if (!Result.isInt()) {
    if (Loc) *Loc = E->getExprLoc();
    return false;
  }

  if (Value) *Value = Result.getInt();
  return true;
}

bool Expr::isIntegerConstantExpr(const ASTContext &Ctx,
                                 SourceLocation *Loc) const {
  if (Ctx.getLangOpts().CPlusPlus11)
    return EvaluateCPlusPlus11IntegralConstantExpr(Ctx, this, nullptr, Loc);

  ICEDiag D = CheckICE(this, Ctx);
  if (D.Kind != IK_ICE) {
    if (Loc) *Loc = D.Loc;
    return false;
  }
  return true;
}

bool Expr::isIntegerConstantExpr(llvm::APSInt &Value, const ASTContext &Ctx,
                                 SourceLocation *Loc, bool isEvaluated) const {
  if (Ctx.getLangOpts().CPlusPlus11)
    return EvaluateCPlusPlus11IntegralConstantExpr(Ctx, this, &Value, Loc);

  if (!isIntegerConstantExpr(Ctx, Loc))
    return false;
  // The only possible side-effects here are due to UB discovered in the
  // evaluation (for instance, INT_MAX + 1). In such a case, we are still
  // required to treat the expression as an ICE, so we produce the folded
  // value.
  if (!EvaluateAsInt(Value, Ctx, SE_AllowSideEffects))
    llvm_unreachable("ICE cannot be evaluated!");
  return true;
}

bool Expr::isCXX98IntegralConstantExpr(const ASTContext &Ctx) const {
  return CheckICE(this, Ctx).Kind == IK_ICE;
}

bool Expr::isCXX11ConstantExpr(const ASTContext &Ctx, APValue *Result,
                               SourceLocation *Loc) const {
  // We support this checking in C++98 mode in order to diagnose compatibility
  // issues.
  assert(Ctx.getLangOpts().CPlusPlus);

  // Build evaluation settings.
  Expr::EvalStatus Status;
  SmallVector<PartialDiagnosticAt, 8> Diags;
  Status.Diag = &Diags;

  EvalInfo Info(Ctx, Status, EvalInfo::EM_ConstantExpression);
  APValue Scratch;
  bool IsConstExpr = ::EvaluateAsRValue(Info, this,
                                        Result ? *Result : Scratch);

  if (!Diags.empty()) {
    IsConstExpr = false;
    if (Loc) *Loc = Diags[0].first;
  } else if (!IsConstExpr) {
    // FIXME: This shouldn't happen.
    if (Loc) *Loc = getExprLoc();
  }

  return IsConstExpr;
}
bool Expr::EvaluateWithSubstitution(APValue &Value, ASTContext &Ctx,
                                    const FunctionDecl *Callee,
                                    ArrayRef<const Expr*> Args,
                                    const Expr *This) const {
  Expr::EvalStatus Status;
  EvalInfo Info(Ctx, Status, EvalInfo::EM_ConstantExpressionUnevaluated);

  LValue ThisVal;
  const LValue *ThisPtr = nullptr;
  if (This) {
#ifndef NDEBUG
    auto *MD = dyn_cast<CXXMethodDecl>(Callee);
    assert(MD && "Don't provide `this` for non-methods.");
    assert(!MD->isStatic() && "Don't provide `this` for static methods.");
#endif
    if (EvaluateObjectArgument(Info, This, ThisVal))
      ThisPtr = &ThisVal;
    if (Info.EvalStatus.HasSideEffects)
      return false;
  }

  ArgVector ArgValues(Args.size());
  for (ArrayRef<const Expr*>::iterator I = Args.begin(), E = Args.end();
       I != E; ++I) {
    if ((*I)->isValueDependent() ||
        !Evaluate(ArgValues[I - Args.begin()], Info, *I))
      // If evaluation fails, throw away the argument entirely.
      ArgValues[I - Args.begin()] = APValue();
    if (Info.EvalStatus.HasSideEffects)
      return false;
  }

  // Build fake call to Callee.
  CallStackFrame Frame(Info, Callee->getLocation(), Callee, ThisPtr,
                       ArgValues.data());
  return Evaluate(Value, Info, this) && !Info.EvalStatus.HasSideEffects;
}

bool Expr::isPotentialConstantExpr(const FunctionDecl *FD,
                                   SmallVectorImpl<
                                     PartialDiagnosticAt> &Diags) {
  // FIXME: It would be useful to check constexpr function templates, but at
  // the moment the constant expression evaluator cannot cope with the
  // non-rigorous ASTs which we build for dependent expressions.
  if (FD->isDependentContext())
    return true;

  Expr::EvalStatus Status;
  Status.Diag = &Diags;

  EvalInfo Info(FD->getASTContext(), Status,
                EvalInfo::EM_PotentialConstantExpression);

  const CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(FD);
  const CXXRecordDecl *RD = MD ? MD->getParent()->getCanonicalDecl() : nullptr;

  // Fabricate an arbitrary expression on the stack and pretend that it
  // is a temporary being used as the 'this' pointer.
  LValue This;
  ImplicitValueInitExpr VIE(RD ? Info.Ctx.getRecordType(RD) : Info.Ctx.IntTy);
  This.set(&VIE, Info.CurrentCall->Index);

  ArrayRef<const Expr*> Args;

  APValue Scratch;
  if (const CXXConstructorDecl *CD = dyn_cast<CXXConstructorDecl>(FD)) {
    // Evaluate the call as a constant initializer, to allow the construction
    // of objects of non-literal types.
    Info.setEvaluatingDecl(This.getLValueBase(), Scratch);
    HandleConstructorCall(&VIE, This, Args, CD, Info, Scratch);
  } else {
    SourceLocation Loc = FD->getLocation();
    HandleFunctionCall(Loc, FD, (MD && MD->isInstance()) ? &This : nullptr,
                       Args, FD->getBody(), Info, Scratch, nullptr);
  }

  return Diags.empty();
}

bool Expr::isPotentialConstantExprUnevaluated(Expr *E,
                                              const FunctionDecl *FD,
                                              SmallVectorImpl<
                                                PartialDiagnosticAt> &Diags) {
  Expr::EvalStatus Status;
  Status.Diag = &Diags;

  EvalInfo Info(FD->getASTContext(), Status,
                EvalInfo::EM_PotentialConstantExpressionUnevaluated);

  // Fabricate a call stack frame to give the arguments a plausible cover
  // story.
  ArrayRef<const Expr*> Args;
  ArgVector ArgValues(0);
  bool Success = EvaluateArgs(Args, ArgValues, Info);
  (void)Success;
  assert(Success &&
         "Failed to set up arguments for potential constant evaluation");
  CallStackFrame Frame(Info, SourceLocation(), FD, nullptr, ArgValues.data());

  APValue ResultScratch;
  Evaluate(ResultScratch, Info, E);
  return Diags.empty();
}

bool Expr::tryEvaluateObjectSize(uint64_t &Result, ASTContext &Ctx,
                                 unsigned Type) const {
  if (!getType()->isPointerType())
    return false;

  Expr::EvalStatus Status;
  EvalInfo Info(Ctx, Status, EvalInfo::EM_ConstantFold);
  return tryEvaluateBuiltinObjectSize(this, Type, Info, Result);
}
//
//===----------------------------------------------------------------------===//

#include "CGCXXABI.h"
#include "CGObjCRuntime.h"
#include "CGOpenCLRuntime.h"
#include "CodeGenFunction.h"
#include "CodeGenModule.h"
#include "TargetInfo.h"
#include "clang/AST/ASTContext.h"
#include "clang/AST/Decl.h"
#include "clang/Analysis/Analyses/OSLog.h"
#include "clang/Basic/TargetBuiltins.h"
#include "clang/Basic/TargetInfo.h"
#include "clang/CodeGen/CGFunctionInfo.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/MDBuilder.h"
#include <sstream>

using namespace clang;
using namespace CodeGen;
using namespace llvm;

static int64_t clamp(int64_t Value, int64_t Low, int64_t High) {
  return std::min(High, std::max(Low, Value));
}

/// getBuiltinLibFunction - Given a builtin id for a function like
/// "__builtin_fabsf", return a Function* for "fabsf".
llvm::Constant *CodeGenModule::getBuiltinLibFunction(const FunctionDecl *FD,
                                                     unsigned BuiltinID) {
  assert(Context.BuiltinInfo.isLibFunction(BuiltinID));

  // Get the name, skip over the __builtin_ prefix (if necessary).
  StringRef Name;
  GlobalDecl D(FD);

  // If the builtin has been declared explicitly with an assembler label,
  // use the mangled name. This differs from the plain label on platforms
  // that prefix labels.
  if (FD->hasAttr<AsmLabelAttr>())
    Name = getMangledName(D);
  else
    Name = Context.BuiltinInfo.getName(BuiltinID) + 10;

  llvm::FunctionType *Ty =
    cast<llvm::FunctionType>(getTypes().ConvertType(FD->getType()));

  return GetOrCreateLLVMFunction(Name, Ty, D, /*ForVTable=*/false);
}

/// Emit the conversions required to turn the given value into an
/// integer of the given size.
static Value *EmitToInt(CodeGenFunction &CGF, llvm::Value *V,
                        QualType T, llvm::IntegerType *IntType) {
  V = CGF.EmitToMemory(V, T);

  if (V->getType()->isPointerTy())
    return CGF.Builder.CreatePtrToInt(V, IntType);

  assert(V->getType() == IntType);
  return V;
}

static Value *EmitFromInt(CodeGenFunction &CGF, llvm::Value *V,
                          QualType T, llvm::Type *ResultType) {
  V = CGF.EmitFromMemory(V, T);

  if (ResultType->isPointerTy())
    return CGF.Builder.CreateIntToPtr(V, ResultType);

  assert(V->getType() == ResultType);
  return V;
}

/// Utility to insert an atomic instruction based on Intrinsic::ID
/// and the expression node.
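///
/// For illustration (a sketch, not taken from the diffed source): given
///
/// \code
///   int Counter;
///   int Old = __sync_fetch_and_add(&Counter, 5);
/// \endcode
///
/// the helper emits a sequentially-consistent read-modify-write, roughly
/// `%old = atomicrmw add i32* %counter, i32 5 seq_cst`, and EmitFromInt
/// converts the result back to the source-level type.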
static Value *MakeBinaryAtomicValue(CodeGenFunction &CGF, llvm::AtomicRMWInst::BinOp Kind, const CallExpr *E) { QualType T = E->getType(); assert(E->getArg(0)->getType()->isPointerType()); assert(CGF.getContext().hasSameUnqualifiedType(T, E->getArg(0)->getType()->getPointeeType())); assert(CGF.getContext().hasSameUnqualifiedType(T, E->getArg(1)->getType())); llvm::Value *DestPtr = CGF.EmitScalarExpr(E->getArg(0)); unsigned AddrSpace = DestPtr->getType()->getPointerAddressSpace(); llvm::IntegerType *IntType = llvm::IntegerType::get(CGF.getLLVMContext(), CGF.getContext().getTypeSize(T)); llvm::Type *IntPtrType = IntType->getPointerTo(AddrSpace); llvm::Value *Args[2]; Args[0] = CGF.Builder.CreateBitCast(DestPtr, IntPtrType); Args[1] = CGF.EmitScalarExpr(E->getArg(1)); llvm::Type *ValueType = Args[1]->getType(); Args[1] = EmitToInt(CGF, Args[1], T, IntType); llvm::Value *Result = CGF.Builder.CreateAtomicRMW( Kind, Args[0], Args[1], llvm::AtomicOrdering::SequentiallyConsistent); return EmitFromInt(CGF, Result, T, ValueType); } static Value *EmitNontemporalStore(CodeGenFunction &CGF, const CallExpr *E) { Value *Val = CGF.EmitScalarExpr(E->getArg(0)); Value *Address = CGF.EmitScalarExpr(E->getArg(1)); // Convert the type of the pointer to a pointer to the stored type. Val = CGF.EmitToMemory(Val, E->getArg(0)->getType()); Value *BC = CGF.Builder.CreateBitCast( Address, llvm::PointerType::getUnqual(Val->getType()), "cast"); LValue LV = CGF.MakeNaturalAlignAddrLValue(BC, E->getArg(0)->getType()); LV.setNontemporal(true); CGF.EmitStoreOfScalar(Val, LV, false); return nullptr; } static Value *EmitNontemporalLoad(CodeGenFunction &CGF, const CallExpr *E) { Value *Address = CGF.EmitScalarExpr(E->getArg(0)); LValue LV = CGF.MakeNaturalAlignAddrLValue(Address, E->getType()); LV.setNontemporal(true); return CGF.EmitLoadOfScalar(LV, E->getExprLoc()); } static RValue EmitBinaryAtomic(CodeGenFunction &CGF, llvm::AtomicRMWInst::BinOp Kind, const CallExpr *E) { return RValue::get(MakeBinaryAtomicValue(CGF, Kind, E)); } /// Utility to insert an atomic instruction based Instrinsic::ID and /// the expression node, where the return value is the result of the /// operation. static RValue EmitBinaryAtomicPost(CodeGenFunction &CGF, llvm::AtomicRMWInst::BinOp Kind, const CallExpr *E, Instruction::BinaryOps Op, bool Invert = false) { QualType T = E->getType(); assert(E->getArg(0)->getType()->isPointerType()); assert(CGF.getContext().hasSameUnqualifiedType(T, E->getArg(0)->getType()->getPointeeType())); assert(CGF.getContext().hasSameUnqualifiedType(T, E->getArg(1)->getType())); llvm::Value *DestPtr = CGF.EmitScalarExpr(E->getArg(0)); unsigned AddrSpace = DestPtr->getType()->getPointerAddressSpace(); llvm::IntegerType *IntType = llvm::IntegerType::get(CGF.getLLVMContext(), CGF.getContext().getTypeSize(T)); llvm::Type *IntPtrType = IntType->getPointerTo(AddrSpace); llvm::Value *Args[2]; Args[1] = CGF.EmitScalarExpr(E->getArg(1)); llvm::Type *ValueType = Args[1]->getType(); Args[1] = EmitToInt(CGF, Args[1], T, IntType); Args[0] = CGF.Builder.CreateBitCast(DestPtr, IntPtrType); llvm::Value *Result = CGF.Builder.CreateAtomicRMW( Kind, Args[0], Args[1], llvm::AtomicOrdering::SequentiallyConsistent); Result = CGF.Builder.CreateBinOp(Op, Result, Args[1]); if (Invert) Result = CGF.Builder.CreateBinOp(llvm::Instruction::Xor, Result, llvm::ConstantInt::get(IntType, -1)); Result = EmitFromInt(CGF, Result, T, ValueType); return RValue::get(Result); } /// @brief Utility to insert an atomic cmpxchg instruction. 
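///
/// For illustration (a sketch, not taken from the diffed source): both of
///
/// \code
///   int V = 0;
///   bool Ok  = __sync_bool_compare_and_swap(&V, 0, 1); // success flag
///   int  Old = __sync_val_compare_and_swap(&V, 1, 2);  // prior value
/// \endcode
///
/// funnel into this helper; ReturnBool picks whether the i1 success flag or
/// the old value is extracted from the cmpxchg result pair.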
/// /// @param CGF The current codegen function. /// @param E Builtin call expression to convert to cmpxchg. /// arg0 - address to operate on /// arg1 - value to compare with /// arg2 - new value /// @param ReturnBool Specifies whether to return success flag of /// cmpxchg result or the old value. /// /// @returns result of cmpxchg, according to ReturnBool static Value *MakeAtomicCmpXchgValue(CodeGenFunction &CGF, const CallExpr *E, bool ReturnBool) { QualType T = ReturnBool ? E->getArg(1)->getType() : E->getType(); llvm::Value *DestPtr = CGF.EmitScalarExpr(E->getArg(0)); unsigned AddrSpace = DestPtr->getType()->getPointerAddressSpace(); llvm::IntegerType *IntType = llvm::IntegerType::get( CGF.getLLVMContext(), CGF.getContext().getTypeSize(T)); llvm::Type *IntPtrType = IntType->getPointerTo(AddrSpace); Value *Args[3]; Args[0] = CGF.Builder.CreateBitCast(DestPtr, IntPtrType); Args[1] = CGF.EmitScalarExpr(E->getArg(1)); llvm::Type *ValueType = Args[1]->getType(); Args[1] = EmitToInt(CGF, Args[1], T, IntType); Args[2] = EmitToInt(CGF, CGF.EmitScalarExpr(E->getArg(2)), T, IntType); Value *Pair = CGF.Builder.CreateAtomicCmpXchg( Args[0], Args[1], Args[2], llvm::AtomicOrdering::SequentiallyConsistent, llvm::AtomicOrdering::SequentiallyConsistent); if (ReturnBool) // Extract boolean success flag and zext it to int. return CGF.Builder.CreateZExt(CGF.Builder.CreateExtractValue(Pair, 1), CGF.ConvertType(E->getType())); else // Extract old value and emit it using the same type as compare value. return EmitFromInt(CGF, CGF.Builder.CreateExtractValue(Pair, 0), T, ValueType); } // Emit a simple mangled intrinsic that has 1 argument and a return type // matching the argument type. static Value *emitUnaryBuiltin(CodeGenFunction &CGF, const CallExpr *E, unsigned IntrinsicID) { llvm::Value *Src0 = CGF.EmitScalarExpr(E->getArg(0)); Value *F = CGF.CGM.getIntrinsic(IntrinsicID, Src0->getType()); return CGF.Builder.CreateCall(F, Src0); } // Emit an intrinsic that has 2 operands of the same type as its result. static Value *emitBinaryBuiltin(CodeGenFunction &CGF, const CallExpr *E, unsigned IntrinsicID) { llvm::Value *Src0 = CGF.EmitScalarExpr(E->getArg(0)); llvm::Value *Src1 = CGF.EmitScalarExpr(E->getArg(1)); Value *F = CGF.CGM.getIntrinsic(IntrinsicID, Src0->getType()); return CGF.Builder.CreateCall(F, { Src0, Src1 }); } // Emit an intrinsic that has 3 operands of the same type as its result. static Value *emitTernaryBuiltin(CodeGenFunction &CGF, const CallExpr *E, unsigned IntrinsicID) { llvm::Value *Src0 = CGF.EmitScalarExpr(E->getArg(0)); llvm::Value *Src1 = CGF.EmitScalarExpr(E->getArg(1)); llvm::Value *Src2 = CGF.EmitScalarExpr(E->getArg(2)); Value *F = CGF.CGM.getIntrinsic(IntrinsicID, Src0->getType()); return CGF.Builder.CreateCall(F, { Src0, Src1, Src2 }); } // Emit an intrinsic that has 1 float or double operand, and 1 integer. static Value *emitFPIntBuiltin(CodeGenFunction &CGF, const CallExpr *E, unsigned IntrinsicID) { llvm::Value *Src0 = CGF.EmitScalarExpr(E->getArg(0)); llvm::Value *Src1 = CGF.EmitScalarExpr(E->getArg(1)); Value *F = CGF.CGM.getIntrinsic(IntrinsicID, Src0->getType()); return CGF.Builder.CreateCall(F, {Src0, Src1}); } /// EmitFAbs - Emit a call to @llvm.fabs(). static Value *EmitFAbs(CodeGenFunction &CGF, Value *V) { Value *F = CGF.CGM.getIntrinsic(Intrinsic::fabs, V->getType()); llvm::CallInst *Call = CGF.Builder.CreateCall(F, V); Call->setDoesNotAccessMemory(); return Call; } /// Emit the computation of the sign bit for a floating point value. Returns /// the i1 sign bit value. 
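///
/// The underlying idea in plain C++ (illustrative only; signBit is a
/// hypothetical name, and IEEE-754 doubles are assumed):
///
/// \code
///   #include <cstdint>
///   #include <cstring>
///   bool signBit(double D) {
///     uint64_t Bits;
///     std::memcpy(&Bits, &D, sizeof(Bits));   // the bitcast to i64 below
///     return static_cast<int64_t>(Bits) < 0;  // icmp slt i64 %bits, 0
///   }
/// \endcode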
static Value *EmitSignBit(CodeGenFunction &CGF, Value *V) { LLVMContext &C = CGF.CGM.getLLVMContext(); llvm::Type *Ty = V->getType(); int Width = Ty->getPrimitiveSizeInBits(); llvm::Type *IntTy = llvm::IntegerType::get(C, Width); V = CGF.Builder.CreateBitCast(V, IntTy); if (Ty->isPPC_FP128Ty()) { // We want the sign bit of the higher-order double. The bitcast we just // did works as if the double-double was stored to memory and then // read as an i128. The "store" will put the higher-order double in the // lower address in both little- and big-Endian modes, but the "load" // will treat those bits as a different part of the i128: the low bits in // little-Endian, the high bits in big-Endian. Therefore, on big-Endian // we need to shift the high bits down to the low before truncating. Width >>= 1; if (CGF.getTarget().isBigEndian()) { Value *ShiftCst = llvm::ConstantInt::get(IntTy, Width); V = CGF.Builder.CreateLShr(V, ShiftCst); } // We are truncating value in order to extract the higher-order // double, which we will be using to extract the sign from. IntTy = llvm::IntegerType::get(C, Width); V = CGF.Builder.CreateTrunc(V, IntTy); } Value *Zero = llvm::Constant::getNullValue(IntTy); return CGF.Builder.CreateICmpSLT(V, Zero); } static RValue emitLibraryCall(CodeGenFunction &CGF, const FunctionDecl *FD, const CallExpr *E, llvm::Constant *calleeValue) { CGCallee callee = CGCallee::forDirect(calleeValue, FD); return CGF.EmitCall(E->getCallee()->getType(), callee, E, ReturnValueSlot()); } /// \brief Emit a call to llvm.{sadd,uadd,ssub,usub,smul,umul}.with.overflow.* /// depending on IntrinsicID. /// /// \arg CGF The current codegen function. /// \arg IntrinsicID The ID for the Intrinsic we wish to generate. /// \arg X The first argument to the llvm.*.with.overflow.*. /// \arg Y The second argument to the llvm.*.with.overflow.*. /// \arg Carry The carry returned by the llvm.*.with.overflow.*. /// \returns The result (i.e. sum/product) returned by the intrinsic. static llvm::Value *EmitOverflowIntrinsic(CodeGenFunction &CGF, const llvm::Intrinsic::ID IntrinsicID, llvm::Value *X, llvm::Value *Y, llvm::Value *&Carry) { // Make sure we have integers of the same width. assert(X->getType() == Y->getType() && "Arguments must be the same type. (Did you forget to make sure both " "arguments have the same integer width?)"); llvm::Value *Callee = CGF.CGM.getIntrinsic(IntrinsicID, X->getType()); llvm::Value *Tmp = CGF.Builder.CreateCall(Callee, {X, Y}); Carry = CGF.Builder.CreateExtractValue(Tmp, 1); return CGF.Builder.CreateExtractValue(Tmp, 0); } static Value *emitRangedBuiltin(CodeGenFunction &CGF, unsigned IntrinsicID, int low, int high) { llvm::MDBuilder MDHelper(CGF.getLLVMContext()); llvm::MDNode *RNode = MDHelper.createRange(APInt(32, low), APInt(32, high)); Value *F = CGF.CGM.getIntrinsic(IntrinsicID, {}); llvm::Instruction *Call = CGF.Builder.CreateCall(F); Call->setMetadata(llvm::LLVMContext::MD_range, RNode); return Call; } namespace { struct WidthAndSignedness { unsigned Width; bool Signed; }; } static WidthAndSignedness getIntegerWidthAndSignedness(const clang::ASTContext &context, const clang::QualType Type) { assert(Type->isIntegerType() && "Given type is not an integer."); unsigned Width = Type->isBooleanType() ? 
1 : context.getTypeInfo(Type).Width;
  bool Signed = Type->isSignedIntegerType();
  return {Width, Signed};
}

// Given one or more integer types, this function produces an integer type that
// encompasses them: any value in one of the given types could be expressed in
// the encompassing type.
static struct WidthAndSignedness
EncompassingIntegerType(ArrayRef<struct WidthAndSignedness> Types) {
  assert(Types.size() > 0 && "Empty list of types.");

  // If any of the given types is signed, we must return a signed type.
  bool Signed = false;
  for (const auto &Type : Types) {
    Signed |= Type.Signed;
  }

  // The encompassing type must have a width greater than or equal to the width
  // of the specified types. Additionally, if the encompassing type is signed,
  // its width must be strictly greater than the width of any unsigned types
  // given.
  unsigned Width = 0;
  for (const auto &Type : Types) {
    unsigned MinWidth = Type.Width + (Signed && !Type.Signed);
    if (Width < MinWidth) {
      Width = MinWidth;
    }
  }

  return {Width, Signed};
}

Value *CodeGenFunction::EmitVAStartEnd(Value *ArgValue, bool IsStart) {
  llvm::Type *DestType = Int8PtrTy;
  if (ArgValue->getType() != DestType)
    ArgValue =
        Builder.CreateBitCast(ArgValue, DestType, ArgValue->getName().data());

  Intrinsic::ID inst = IsStart ? Intrinsic::vastart : Intrinsic::vaend;
  return Builder.CreateCall(CGM.getIntrinsic(inst), ArgValue);
}

/// Checks if using the result of __builtin_object_size(p, @p From) in place of
/// __builtin_object_size(p, @p To) is correct
static bool areBOSTypesCompatible(int From, int To) {
  // Note: Our __builtin_object_size implementation currently treats Type=0 and
  // Type=2 identically. Encoding this implementation detail here may make
  // improving __builtin_object_size difficult in the future, so it's omitted.
  return From == To || (From == 0 && To == 1) || (From == 3 && To == 2);
}

static llvm::Value *
getDefaultBuiltinObjectSizeResult(unsigned Type, llvm::IntegerType *ResType) {
  return ConstantInt::get(ResType, (Type & 2) ? 0 : -1, /*isSigned=*/true);
}

llvm::Value *
CodeGenFunction::evaluateOrEmitBuiltinObjectSize(const Expr *E, unsigned Type,
                                                 llvm::IntegerType *ResType) {
  uint64_t ObjectSize;
  if (!E->tryEvaluateObjectSize(ObjectSize, getContext(), Type))
    return emitBuiltinObjectSize(E, Type, ResType);
  return ConstantInt::get(ResType, ObjectSize, /*isSigned=*/true);
}

/// Returns a Value corresponding to the size of the given expression.
/// This Value may be either of the following:
///   - A llvm::Argument (if E is a param with the pass_object_size attribute on
///     it)
///   - A call to the @llvm.objectsize intrinsic
llvm::Value *
CodeGenFunction::emitBuiltinObjectSize(const Expr *E, unsigned Type,
                                       llvm::IntegerType *ResType) {
  // We need to reference an argument if the pointer is a parameter with the
  // pass_object_size attribute.
  if (auto *D = dyn_cast<DeclRefExpr>(E->IgnoreParenImpCasts())) {
    auto *Param = dyn_cast<ParmVarDecl>(D->getDecl());
    auto *PS = D->getDecl()->getAttr<PassObjectSizeAttr>();
    if (Param != nullptr && PS != nullptr &&
        areBOSTypesCompatible(PS->getType(), Type)) {
      auto Iter = SizeArguments.find(Param);
      assert(Iter != SizeArguments.end());

      const ImplicitParamDecl *D = Iter->second;
      auto DIter = LocalDeclMap.find(D);
      assert(DIter != LocalDeclMap.end());

      return EmitLoadOfScalar(DIter->second, /*volatile=*/false,
                              getContext().getSizeType(), E->getLocStart());
    }
  }

  // LLVM can't handle Type=3 appropriately, and __builtin_object_size shouldn't
  // evaluate E for side-effects. In either case, we shouldn't lower to
  // @llvm.objectsize.
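// Illustrative sketch (not taken from the diffed source) of the semantics
// being preserved here; the malloc lines show the defaults picked by
// getDefaultBuiltinObjectSizeResult when nothing is known:
//
//   char Buf[16];
//   __builtin_object_size(Buf, 0);       // 16: size of the enclosing object
//   __builtin_object_size(Buf + 4, 2);   // 12: minimum remaining size
//   __builtin_object_size(malloc(N), 0); // typically (size_t)-1: unknown max
//   __builtin_object_size(malloc(N), 2); // 0: unknown minimum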
if (Type == 3 || E->HasSideEffects(getContext())) return getDefaultBuiltinObjectSizeResult(Type, ResType); // LLVM only supports 0 and 2, make sure that we pass along that // as a boolean. auto *CI = ConstantInt::get(Builder.getInt1Ty(), (Type & 2) >> 1); // FIXME: Get right address space. llvm::Type *Tys[] = {ResType, Builder.getInt8PtrTy(0)}; Value *F = CGM.getIntrinsic(Intrinsic::objectsize, Tys); return Builder.CreateCall(F, {EmitScalarExpr(E), CI}); } // Many of MSVC builtins are on both x64 and ARM; to avoid repeating code, we // handle them here. enum class CodeGenFunction::MSVCIntrin { _BitScanForward, _BitScanReverse, _InterlockedAnd, _InterlockedDecrement, _InterlockedExchange, _InterlockedExchangeAdd, _InterlockedExchangeSub, _InterlockedIncrement, _InterlockedOr, _InterlockedXor, }; Value *CodeGenFunction::EmitMSVCBuiltinExpr(MSVCIntrin BuiltinID, const CallExpr *E) { switch (BuiltinID) { case MSVCIntrin::_BitScanForward: case MSVCIntrin::_BitScanReverse: { Value *ArgValue = EmitScalarExpr(E->getArg(1)); llvm::Type *ArgType = ArgValue->getType(); llvm::Type *IndexType = EmitScalarExpr(E->getArg(0))->getType()->getPointerElementType(); llvm::Type *ResultType = ConvertType(E->getType()); Value *ArgZero = llvm::Constant::getNullValue(ArgType); Value *ResZero = llvm::Constant::getNullValue(ResultType); Value *ResOne = llvm::ConstantInt::get(ResultType, 1); BasicBlock *Begin = Builder.GetInsertBlock(); BasicBlock *End = createBasicBlock("bitscan_end", this->CurFn); Builder.SetInsertPoint(End); PHINode *Result = Builder.CreatePHI(ResultType, 2, "bitscan_result"); Builder.SetInsertPoint(Begin); Value *IsZero = Builder.CreateICmpEQ(ArgValue, ArgZero); BasicBlock *NotZero = createBasicBlock("bitscan_not_zero", this->CurFn); Builder.CreateCondBr(IsZero, End, NotZero); Result->addIncoming(ResZero, Begin); Builder.SetInsertPoint(NotZero); Address IndexAddress = EmitPointerWithAlignment(E->getArg(0)); if (BuiltinID == MSVCIntrin::_BitScanForward) { Value *F = CGM.getIntrinsic(Intrinsic::cttz, ArgType); Value *ZeroCount = Builder.CreateCall(F, {ArgValue, Builder.getTrue()}); ZeroCount = Builder.CreateIntCast(ZeroCount, IndexType, false); Builder.CreateStore(ZeroCount, IndexAddress, false); } else { unsigned ArgWidth = cast(ArgType)->getBitWidth(); Value *ArgTypeLastIndex = llvm::ConstantInt::get(IndexType, ArgWidth - 1); Value *F = CGM.getIntrinsic(Intrinsic::ctlz, ArgType); Value *ZeroCount = Builder.CreateCall(F, {ArgValue, Builder.getTrue()}); ZeroCount = Builder.CreateIntCast(ZeroCount, IndexType, false); Value *Index = Builder.CreateNSWSub(ArgTypeLastIndex, ZeroCount); Builder.CreateStore(Index, IndexAddress, false); } Builder.CreateBr(End); Result->addIncoming(ResOne, NotZero); Builder.SetInsertPoint(End); return Result; } case MSVCIntrin::_InterlockedAnd: return MakeBinaryAtomicValue(*this, AtomicRMWInst::And, E); case MSVCIntrin::_InterlockedExchange: return MakeBinaryAtomicValue(*this, AtomicRMWInst::Xchg, E); case MSVCIntrin::_InterlockedExchangeAdd: return MakeBinaryAtomicValue(*this, AtomicRMWInst::Add, E); case MSVCIntrin::_InterlockedExchangeSub: return MakeBinaryAtomicValue(*this, AtomicRMWInst::Sub, E); case MSVCIntrin::_InterlockedOr: return MakeBinaryAtomicValue(*this, AtomicRMWInst::Or, E); case MSVCIntrin::_InterlockedXor: return MakeBinaryAtomicValue(*this, AtomicRMWInst::Xor, E); case MSVCIntrin::_InterlockedDecrement: { llvm::Type *IntTy = ConvertType(E->getType()); AtomicRMWInst *RMWI = Builder.CreateAtomicRMW( AtomicRMWInst::Sub, EmitScalarExpr(E->getArg(0)), 
ConstantInt::get(IntTy, 1), llvm::AtomicOrdering::SequentiallyConsistent); return Builder.CreateSub(RMWI, ConstantInt::get(IntTy, 1)); } case MSVCIntrin::_InterlockedIncrement: { llvm::Type *IntTy = ConvertType(E->getType()); AtomicRMWInst *RMWI = Builder.CreateAtomicRMW( AtomicRMWInst::Add, EmitScalarExpr(E->getArg(0)), ConstantInt::get(IntTy, 1), llvm::AtomicOrdering::SequentiallyConsistent); return Builder.CreateAdd(RMWI, ConstantInt::get(IntTy, 1)); } } llvm_unreachable("Incorrect MSVC intrinsic!"); } namespace { // ARC cleanup for __builtin_os_log_format struct CallObjCArcUse final : EHScopeStack::Cleanup { CallObjCArcUse(llvm::Value *object) : object(object) {} llvm::Value *object; void Emit(CodeGenFunction &CGF, Flags flags) override { CGF.EmitARCIntrinsicUse(object); } }; } RValue CodeGenFunction::EmitBuiltinExpr(const FunctionDecl *FD, unsigned BuiltinID, const CallExpr *E, ReturnValueSlot ReturnValue) { // See if we can constant fold this builtin. If so, don't emit it at all. Expr::EvalResult Result; if (E->EvaluateAsRValue(Result, CGM.getContext()) && !Result.hasSideEffects()) { if (Result.Val.isInt()) return RValue::get(llvm::ConstantInt::get(getLLVMContext(), Result.Val.getInt())); if (Result.Val.isFloat()) return RValue::get(llvm::ConstantFP::get(getLLVMContext(), Result.Val.getFloat())); } switch (BuiltinID) { default: break; // Handle intrinsics and libm functions below. case Builtin::BI__builtin___CFStringMakeConstantString: case Builtin::BI__builtin___NSStringMakeConstantString: return RValue::get(CGM.EmitConstantExpr(E, E->getType(), nullptr)); case Builtin::BI__builtin_stdarg_start: case Builtin::BI__builtin_va_start: case Builtin::BI__va_start: case Builtin::BI__builtin_va_end: return RValue::get( EmitVAStartEnd(BuiltinID == Builtin::BI__va_start ? 
EmitScalarExpr(E->getArg(0)) : EmitVAListRef(E->getArg(0)).getPointer(), BuiltinID != Builtin::BI__builtin_va_end)); case Builtin::BI__builtin_va_copy: { Value *DstPtr = EmitVAListRef(E->getArg(0)).getPointer(); Value *SrcPtr = EmitVAListRef(E->getArg(1)).getPointer(); llvm::Type *Type = Int8PtrTy; DstPtr = Builder.CreateBitCast(DstPtr, Type); SrcPtr = Builder.CreateBitCast(SrcPtr, Type); return RValue::get(Builder.CreateCall(CGM.getIntrinsic(Intrinsic::vacopy), {DstPtr, SrcPtr})); } case Builtin::BI__builtin_abs: case Builtin::BI__builtin_labs: case Builtin::BI__builtin_llabs: { Value *ArgValue = EmitScalarExpr(E->getArg(0)); Value *NegOp = Builder.CreateNeg(ArgValue, "neg"); Value *CmpResult = Builder.CreateICmpSGE(ArgValue, llvm::Constant::getNullValue(ArgValue->getType()), "abscond"); Value *Result = Builder.CreateSelect(CmpResult, ArgValue, NegOp, "abs"); return RValue::get(Result); } case Builtin::BI__builtin_fabs: case Builtin::BI__builtin_fabsf: case Builtin::BI__builtin_fabsl: { return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::fabs)); } case Builtin::BI__builtin_fmod: case Builtin::BI__builtin_fmodf: case Builtin::BI__builtin_fmodl: { Value *Arg1 = EmitScalarExpr(E->getArg(0)); Value *Arg2 = EmitScalarExpr(E->getArg(1)); Value *Result = Builder.CreateFRem(Arg1, Arg2, "fmod"); return RValue::get(Result); } case Builtin::BI__builtin_copysign: case Builtin::BI__builtin_copysignf: case Builtin::BI__builtin_copysignl: { return RValue::get(emitBinaryBuiltin(*this, E, Intrinsic::copysign)); } case Builtin::BI__builtin_ceil: case Builtin::BI__builtin_ceilf: case Builtin::BI__builtin_ceill: { return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::ceil)); } case Builtin::BI__builtin_floor: case Builtin::BI__builtin_floorf: case Builtin::BI__builtin_floorl: { return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::floor)); } case Builtin::BI__builtin_trunc: case Builtin::BI__builtin_truncf: case Builtin::BI__builtin_truncl: { return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::trunc)); } case Builtin::BI__builtin_rint: case Builtin::BI__builtin_rintf: case Builtin::BI__builtin_rintl: { return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::rint)); } case Builtin::BI__builtin_nearbyint: case Builtin::BI__builtin_nearbyintf: case Builtin::BI__builtin_nearbyintl: { return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::nearbyint)); } case Builtin::BI__builtin_round: case Builtin::BI__builtin_roundf: case Builtin::BI__builtin_roundl: { return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::round)); } case Builtin::BI__builtin_fmin: case Builtin::BI__builtin_fminf: case Builtin::BI__builtin_fminl: { return RValue::get(emitBinaryBuiltin(*this, E, Intrinsic::minnum)); } case Builtin::BI__builtin_fmax: case Builtin::BI__builtin_fmaxf: case Builtin::BI__builtin_fmaxl: { return RValue::get(emitBinaryBuiltin(*this, E, Intrinsic::maxnum)); } case Builtin::BI__builtin_conj: case Builtin::BI__builtin_conjf: case Builtin::BI__builtin_conjl: { ComplexPairTy ComplexVal = EmitComplexExpr(E->getArg(0)); Value *Real = ComplexVal.first; Value *Imag = ComplexVal.second; Value *Zero = Imag->getType()->isFPOrFPVectorTy() ? 
llvm::ConstantFP::getZeroValueForNegation(Imag->getType()) : llvm::Constant::getNullValue(Imag->getType()); Imag = Builder.CreateFSub(Zero, Imag, "sub"); return RValue::getComplex(std::make_pair(Real, Imag)); } case Builtin::BI__builtin_creal: case Builtin::BI__builtin_crealf: case Builtin::BI__builtin_creall: case Builtin::BIcreal: case Builtin::BIcrealf: case Builtin::BIcreall: { ComplexPairTy ComplexVal = EmitComplexExpr(E->getArg(0)); return RValue::get(ComplexVal.first); } case Builtin::BI__builtin_cimag: case Builtin::BI__builtin_cimagf: case Builtin::BI__builtin_cimagl: case Builtin::BIcimag: case Builtin::BIcimagf: case Builtin::BIcimagl: { ComplexPairTy ComplexVal = EmitComplexExpr(E->getArg(0)); return RValue::get(ComplexVal.second); } case Builtin::BI__builtin_ctzs: case Builtin::BI__builtin_ctz: case Builtin::BI__builtin_ctzl: case Builtin::BI__builtin_ctzll: { Value *ArgValue = EmitScalarExpr(E->getArg(0)); llvm::Type *ArgType = ArgValue->getType(); Value *F = CGM.getIntrinsic(Intrinsic::cttz, ArgType); llvm::Type *ResultType = ConvertType(E->getType()); Value *ZeroUndef = Builder.getInt1(getTarget().isCLZForZeroUndef()); Value *Result = Builder.CreateCall(F, {ArgValue, ZeroUndef}); if (Result->getType() != ResultType) Result = Builder.CreateIntCast(Result, ResultType, /*isSigned*/true, "cast"); return RValue::get(Result); } case Builtin::BI__builtin_clzs: case Builtin::BI__builtin_clz: case Builtin::BI__builtin_clzl: case Builtin::BI__builtin_clzll: { Value *ArgValue = EmitScalarExpr(E->getArg(0)); llvm::Type *ArgType = ArgValue->getType(); Value *F = CGM.getIntrinsic(Intrinsic::ctlz, ArgType); llvm::Type *ResultType = ConvertType(E->getType()); Value *ZeroUndef = Builder.getInt1(getTarget().isCLZForZeroUndef()); Value *Result = Builder.CreateCall(F, {ArgValue, ZeroUndef}); if (Result->getType() != ResultType) Result = Builder.CreateIntCast(Result, ResultType, /*isSigned*/true, "cast"); return RValue::get(Result); } case Builtin::BI__builtin_ffs: case Builtin::BI__builtin_ffsl: case Builtin::BI__builtin_ffsll: { // ffs(x) -> x ? 
cttz(x) + 1 : 0 Value *ArgValue = EmitScalarExpr(E->getArg(0)); llvm::Type *ArgType = ArgValue->getType(); Value *F = CGM.getIntrinsic(Intrinsic::cttz, ArgType); llvm::Type *ResultType = ConvertType(E->getType()); Value *Tmp = Builder.CreateAdd(Builder.CreateCall(F, {ArgValue, Builder.getTrue()}), llvm::ConstantInt::get(ArgType, 1)); Value *Zero = llvm::Constant::getNullValue(ArgType); Value *IsZero = Builder.CreateICmpEQ(ArgValue, Zero, "iszero"); Value *Result = Builder.CreateSelect(IsZero, Zero, Tmp, "ffs"); if (Result->getType() != ResultType) Result = Builder.CreateIntCast(Result, ResultType, /*isSigned*/true, "cast"); return RValue::get(Result); } case Builtin::BI__builtin_parity: case Builtin::BI__builtin_parityl: case Builtin::BI__builtin_parityll: { // parity(x) -> ctpop(x) & 1 Value *ArgValue = EmitScalarExpr(E->getArg(0)); llvm::Type *ArgType = ArgValue->getType(); Value *F = CGM.getIntrinsic(Intrinsic::ctpop, ArgType); llvm::Type *ResultType = ConvertType(E->getType()); Value *Tmp = Builder.CreateCall(F, ArgValue); Value *Result = Builder.CreateAnd(Tmp, llvm::ConstantInt::get(ArgType, 1)); if (Result->getType() != ResultType) Result = Builder.CreateIntCast(Result, ResultType, /*isSigned*/true, "cast"); return RValue::get(Result); } case Builtin::BI__popcnt16: case Builtin::BI__popcnt: case Builtin::BI__popcnt64: case Builtin::BI__builtin_popcount: case Builtin::BI__builtin_popcountl: case Builtin::BI__builtin_popcountll: { Value *ArgValue = EmitScalarExpr(E->getArg(0)); llvm::Type *ArgType = ArgValue->getType(); Value *F = CGM.getIntrinsic(Intrinsic::ctpop, ArgType); llvm::Type *ResultType = ConvertType(E->getType()); Value *Result = Builder.CreateCall(F, ArgValue); if (Result->getType() != ResultType) Result = Builder.CreateIntCast(Result, ResultType, /*isSigned*/true, "cast"); return RValue::get(Result); } case Builtin::BI_rotr8: case Builtin::BI_rotr16: case Builtin::BI_rotr: case Builtin::BI_lrotr: case Builtin::BI_rotr64: { Value *Val = EmitScalarExpr(E->getArg(0)); Value *Shift = EmitScalarExpr(E->getArg(1)); llvm::Type *ArgType = Val->getType(); Shift = Builder.CreateIntCast(Shift, ArgType, false); unsigned ArgWidth = cast(ArgType)->getBitWidth(); Value *ArgTypeSize = llvm::ConstantInt::get(ArgType, ArgWidth); Value *ArgZero = llvm::Constant::getNullValue(ArgType); Value *Mask = llvm::ConstantInt::get(ArgType, ArgWidth - 1); Shift = Builder.CreateAnd(Shift, Mask); Value *LeftShift = Builder.CreateSub(ArgTypeSize, Shift); Value *RightShifted = Builder.CreateLShr(Val, Shift); Value *LeftShifted = Builder.CreateShl(Val, LeftShift); Value *Rotated = Builder.CreateOr(LeftShifted, RightShifted); Value *ShiftIsZero = Builder.CreateICmpEQ(Shift, ArgZero); Value *Result = Builder.CreateSelect(ShiftIsZero, Val, Rotated); return RValue::get(Result); } case Builtin::BI_rotl8: case Builtin::BI_rotl16: case Builtin::BI_rotl: case Builtin::BI_lrotl: case Builtin::BI_rotl64: { Value *Val = EmitScalarExpr(E->getArg(0)); Value *Shift = EmitScalarExpr(E->getArg(1)); llvm::Type *ArgType = Val->getType(); Shift = Builder.CreateIntCast(Shift, ArgType, false); unsigned ArgWidth = cast(ArgType)->getBitWidth(); Value *ArgTypeSize = llvm::ConstantInt::get(ArgType, ArgWidth); Value *ArgZero = llvm::Constant::getNullValue(ArgType); Value *Mask = llvm::ConstantInt::get(ArgType, ArgWidth - 1); Shift = Builder.CreateAnd(Shift, Mask); Value *RightShift = Builder.CreateSub(ArgTypeSize, Shift); Value *LeftShifted = Builder.CreateShl(Val, Shift); Value *RightShifted = Builder.CreateLShr(Val, RightShift); 
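// Illustrative recap (not taken from the diffed source) of the rotate
// lowering around this point: mask the shift to the bit width, shift the
// value both ways, OR the halves, and guard shift == 0 with a select, since
// `X << Width` would be undefined. For the 32-bit case (rotl32 is a
// hypothetical name):
//
//   uint32_t rotl32(uint32_t X, uint32_t S) {
//     S &= 31;
//     return S == 0 ? X : (X << S) | (X >> (32 - S));
//   }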
Value *Rotated = Builder.CreateOr(LeftShifted, RightShifted); Value *ShiftIsZero = Builder.CreateICmpEQ(Shift, ArgZero); Value *Result = Builder.CreateSelect(ShiftIsZero, Val, Rotated); return RValue::get(Result); } case Builtin::BI__builtin_unpredictable: { // Always return the argument of __builtin_unpredictable. LLVM does not // handle this builtin. Metadata for this builtin should be added directly // to instructions such as branches or switches that use it. return RValue::get(EmitScalarExpr(E->getArg(0))); } case Builtin::BI__builtin_expect: { Value *ArgValue = EmitScalarExpr(E->getArg(0)); llvm::Type *ArgType = ArgValue->getType(); Value *ExpectedValue = EmitScalarExpr(E->getArg(1)); // Don't generate llvm.expect on -O0 as the backend won't use it for // anything. // Note, we still IRGen ExpectedValue because it could have side-effects. if (CGM.getCodeGenOpts().OptimizationLevel == 0) return RValue::get(ArgValue); Value *FnExpect = CGM.getIntrinsic(Intrinsic::expect, ArgType); Value *Result = Builder.CreateCall(FnExpect, {ArgValue, ExpectedValue}, "expval"); return RValue::get(Result); } case Builtin::BI__builtin_assume_aligned: { Value *PtrValue = EmitScalarExpr(E->getArg(0)); Value *OffsetValue = (E->getNumArgs() > 2) ? EmitScalarExpr(E->getArg(2)) : nullptr; Value *AlignmentValue = EmitScalarExpr(E->getArg(1)); ConstantInt *AlignmentCI = cast(AlignmentValue); unsigned Alignment = (unsigned) AlignmentCI->getZExtValue(); EmitAlignmentAssumption(PtrValue, Alignment, OffsetValue); return RValue::get(PtrValue); } case Builtin::BI__assume: case Builtin::BI__builtin_assume: { if (E->getArg(0)->HasSideEffects(getContext())) return RValue::get(nullptr); Value *ArgValue = EmitScalarExpr(E->getArg(0)); Value *FnAssume = CGM.getIntrinsic(Intrinsic::assume); return RValue::get(Builder.CreateCall(FnAssume, ArgValue)); } case Builtin::BI__builtin_bswap16: case Builtin::BI__builtin_bswap32: case Builtin::BI__builtin_bswap64: { return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::bswap)); } case Builtin::BI__builtin_bitreverse8: case Builtin::BI__builtin_bitreverse16: case Builtin::BI__builtin_bitreverse32: case Builtin::BI__builtin_bitreverse64: { return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::bitreverse)); } case Builtin::BI__builtin_object_size: { unsigned Type = E->getArg(1)->EvaluateKnownConstInt(getContext()).getZExtValue(); auto *ResType = cast(ConvertType(E->getType())); // We pass this builtin onto the optimizer so that it can figure out the // object size in more complex cases. return RValue::get(emitBuiltinObjectSize(E->getArg(0), Type, ResType)); } case Builtin::BI__builtin_prefetch: { Value *Locality, *RW, *Address = EmitScalarExpr(E->getArg(0)); // FIXME: Technically these constants should of type 'int', yes? RW = (E->getNumArgs() > 1) ? EmitScalarExpr(E->getArg(1)) : llvm::ConstantInt::get(Int32Ty, 0); Locality = (E->getNumArgs() > 2) ? 
EmitScalarExpr(E->getArg(2)) : llvm::ConstantInt::get(Int32Ty, 3); Value *Data = llvm::ConstantInt::get(Int32Ty, 1); Value *F = CGM.getIntrinsic(Intrinsic::prefetch); return RValue::get(Builder.CreateCall(F, {Address, RW, Locality, Data})); } case Builtin::BI__builtin_readcyclecounter: { Value *F = CGM.getIntrinsic(Intrinsic::readcyclecounter); return RValue::get(Builder.CreateCall(F)); } case Builtin::BI__builtin___clear_cache: { Value *Begin = EmitScalarExpr(E->getArg(0)); Value *End = EmitScalarExpr(E->getArg(1)); Value *F = CGM.getIntrinsic(Intrinsic::clear_cache); return RValue::get(Builder.CreateCall(F, {Begin, End})); } case Builtin::BI__builtin_trap: return RValue::get(EmitTrapCall(Intrinsic::trap)); case Builtin::BI__debugbreak: return RValue::get(EmitTrapCall(Intrinsic::debugtrap)); case Builtin::BI__builtin_unreachable: { if (SanOpts.has(SanitizerKind::Unreachable)) { SanitizerScope SanScope(this); EmitCheck(std::make_pair(static_cast(Builder.getFalse()), SanitizerKind::Unreachable), SanitizerHandler::BuiltinUnreachable, EmitCheckSourceLocation(E->getExprLoc()), None); } else Builder.CreateUnreachable(); // We do need to preserve an insertion point. EmitBlock(createBasicBlock("unreachable.cont")); return RValue::get(nullptr); } case Builtin::BI__builtin_powi: case Builtin::BI__builtin_powif: case Builtin::BI__builtin_powil: { Value *Base = EmitScalarExpr(E->getArg(0)); Value *Exponent = EmitScalarExpr(E->getArg(1)); llvm::Type *ArgType = Base->getType(); Value *F = CGM.getIntrinsic(Intrinsic::powi, ArgType); return RValue::get(Builder.CreateCall(F, {Base, Exponent})); } case Builtin::BI__builtin_isgreater: case Builtin::BI__builtin_isgreaterequal: case Builtin::BI__builtin_isless: case Builtin::BI__builtin_islessequal: case Builtin::BI__builtin_islessgreater: case Builtin::BI__builtin_isunordered: { // Ordered comparisons: we know the arguments to these are matching scalar // floating point values. Value *LHS = EmitScalarExpr(E->getArg(0)); Value *RHS = EmitScalarExpr(E->getArg(1)); switch (BuiltinID) { default: llvm_unreachable("Unknown ordered comparison"); case Builtin::BI__builtin_isgreater: LHS = Builder.CreateFCmpOGT(LHS, RHS, "cmp"); break; case Builtin::BI__builtin_isgreaterequal: LHS = Builder.CreateFCmpOGE(LHS, RHS, "cmp"); break; case Builtin::BI__builtin_isless: LHS = Builder.CreateFCmpOLT(LHS, RHS, "cmp"); break; case Builtin::BI__builtin_islessequal: LHS = Builder.CreateFCmpOLE(LHS, RHS, "cmp"); break; case Builtin::BI__builtin_islessgreater: LHS = Builder.CreateFCmpONE(LHS, RHS, "cmp"); break; case Builtin::BI__builtin_isunordered: LHS = Builder.CreateFCmpUNO(LHS, RHS, "cmp"); break; } // ZExt bool to int type. return RValue::get(Builder.CreateZExt(LHS, ConvertType(E->getType()))); } case Builtin::BI__builtin_isnan: { Value *V = EmitScalarExpr(E->getArg(0)); V = Builder.CreateFCmpUNO(V, V, "cmp"); return RValue::get(Builder.CreateZExt(V, ConvertType(E->getType()))); } case Builtin::BIfinite: case Builtin::BI__finite: case Builtin::BIfinitef: case Builtin::BI__finitef: case Builtin::BIfinitel: case Builtin::BI__finitel: case Builtin::BI__builtin_isinf: case Builtin::BI__builtin_isfinite: { // isinf(x) --> fabs(x) == infinity // isfinite(x) --> fabs(x) != infinity // x != NaN via the ordered compare in either case. Value *V = EmitScalarExpr(E->getArg(0)); Value *Fabs = EmitFAbs(*this, V); Constant *Infinity = ConstantFP::getInfinity(V->getType()); CmpInst::Predicate Pred = (BuiltinID == Builtin::BI__builtin_isinf) ? 
CmpInst::FCMP_OEQ : CmpInst::FCMP_ONE; Value *FCmp = Builder.CreateFCmp(Pred, Fabs, Infinity, "cmpinf"); return RValue::get(Builder.CreateZExt(FCmp, ConvertType(E->getType()))); } case Builtin::BI__builtin_isinf_sign: { // isinf_sign(x) -> fabs(x) == infinity ? (signbit(x) ? -1 : 1) : 0 Value *Arg = EmitScalarExpr(E->getArg(0)); Value *AbsArg = EmitFAbs(*this, Arg); Value *IsInf = Builder.CreateFCmpOEQ( AbsArg, ConstantFP::getInfinity(Arg->getType()), "isinf"); Value *IsNeg = EmitSignBit(*this, Arg); llvm::Type *IntTy = ConvertType(E->getType()); Value *Zero = Constant::getNullValue(IntTy); Value *One = ConstantInt::get(IntTy, 1); Value *NegativeOne = ConstantInt::get(IntTy, -1); Value *SignResult = Builder.CreateSelect(IsNeg, NegativeOne, One); Value *Result = Builder.CreateSelect(IsInf, SignResult, Zero); return RValue::get(Result); } case Builtin::BI__builtin_isnormal: { // isnormal(x) --> x == x && fabsf(x) < infinity && fabsf(x) >= float_min Value *V = EmitScalarExpr(E->getArg(0)); Value *Eq = Builder.CreateFCmpOEQ(V, V, "iseq"); Value *Abs = EmitFAbs(*this, V); Value *IsLessThanInf = Builder.CreateFCmpULT(Abs, ConstantFP::getInfinity(V->getType()),"isinf"); APFloat Smallest = APFloat::getSmallestNormalized( getContext().getFloatTypeSemantics(E->getArg(0)->getType())); Value *IsNormal = Builder.CreateFCmpUGE(Abs, ConstantFP::get(V->getContext(), Smallest), "isnormal"); V = Builder.CreateAnd(Eq, IsLessThanInf, "and"); V = Builder.CreateAnd(V, IsNormal, "and"); return RValue::get(Builder.CreateZExt(V, ConvertType(E->getType()))); } case Builtin::BI__builtin_fpclassify: { Value *V = EmitScalarExpr(E->getArg(5)); llvm::Type *Ty = ConvertType(E->getArg(5)->getType()); // Create Result BasicBlock *Begin = Builder.GetInsertBlock(); BasicBlock *End = createBasicBlock("fpclassify_end", this->CurFn); Builder.SetInsertPoint(End); PHINode *Result = Builder.CreatePHI(ConvertType(E->getArg(0)->getType()), 4, "fpclassify_result"); // if (V==0) return FP_ZERO Builder.SetInsertPoint(Begin); Value *IsZero = Builder.CreateFCmpOEQ(V, Constant::getNullValue(Ty), "iszero"); Value *ZeroLiteral = EmitScalarExpr(E->getArg(4)); BasicBlock *NotZero = createBasicBlock("fpclassify_not_zero", this->CurFn); Builder.CreateCondBr(IsZero, End, NotZero); Result->addIncoming(ZeroLiteral, Begin); // if (V != V) return FP_NAN Builder.SetInsertPoint(NotZero); Value *IsNan = Builder.CreateFCmpUNO(V, V, "cmp"); Value *NanLiteral = EmitScalarExpr(E->getArg(0)); BasicBlock *NotNan = createBasicBlock("fpclassify_not_nan", this->CurFn); Builder.CreateCondBr(IsNan, End, NotNan); Result->addIncoming(NanLiteral, NotZero); // if (fabs(V) == infinity) return FP_INFINITY Builder.SetInsertPoint(NotNan); Value *VAbs = EmitFAbs(*this, V); Value *IsInf = Builder.CreateFCmpOEQ(VAbs, ConstantFP::getInfinity(V->getType()), "isinf"); Value *InfLiteral = EmitScalarExpr(E->getArg(1)); BasicBlock *NotInf = createBasicBlock("fpclassify_not_inf", this->CurFn); Builder.CreateCondBr(IsInf, End, NotInf); Result->addIncoming(InfLiteral, NotNan); // if (fabs(V) >= MIN_NORMAL) return FP_NORMAL else FP_SUBNORMAL Builder.SetInsertPoint(NotInf); APFloat Smallest = APFloat::getSmallestNormalized( getContext().getFloatTypeSemantics(E->getArg(5)->getType())); Value *IsNormal = Builder.CreateFCmpUGE(VAbs, ConstantFP::get(V->getContext(), Smallest), "isnormal"); Value *NormalResult = Builder.CreateSelect(IsNormal, EmitScalarExpr(E->getArg(2)), EmitScalarExpr(E->getArg(3))); Builder.CreateBr(End); Result->addIncoming(NormalResult, NotInf); // return Result 
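// Summarizing sketch (not taken from the diffed source) of the block chain
// above; fpclassifySketch is a hypothetical name, <cmath>/<cfloat> assumed:
//
//   int fpclassifySketch(double V) {       // V is argument 5
//     if (V == 0.0)      return FP_ZERO;      // argument 4
//     if (V != V)        return FP_NAN;       // argument 0
//     double A = std::fabs(V);
//     if (A == HUGE_VAL) return FP_INFINITE;  // argument 1
//     if (A >= DBL_MIN)  return FP_NORMAL;    // argument 2
//     return FP_SUBNORMAL;                    // argument 3
//   }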
Builder.SetInsertPoint(End); return RValue::get(Result); } case Builtin::BIalloca: case Builtin::BI_alloca: case Builtin::BI__builtin_alloca: { Value *Size = EmitScalarExpr(E->getArg(0)); const TargetInfo &TI = getContext().getTargetInfo(); // The alignment of the alloca should correspond to __BIGGEST_ALIGNMENT__. unsigned SuitableAlignmentInBytes = CGM.getContext() .toCharUnitsFromBits(TI.getSuitableAlign()) .getQuantity(); AllocaInst *AI = Builder.CreateAlloca(Builder.getInt8Ty(), Size); AI->setAlignment(SuitableAlignmentInBytes); return RValue::get(AI); } case Builtin::BI__builtin_alloca_with_align: { Value *Size = EmitScalarExpr(E->getArg(0)); Value *AlignmentInBitsValue = EmitScalarExpr(E->getArg(1)); auto *AlignmentInBitsCI = cast(AlignmentInBitsValue); unsigned AlignmentInBits = AlignmentInBitsCI->getZExtValue(); unsigned AlignmentInBytes = CGM.getContext().toCharUnitsFromBits(AlignmentInBits).getQuantity(); AllocaInst *AI = Builder.CreateAlloca(Builder.getInt8Ty(), Size); AI->setAlignment(AlignmentInBytes); return RValue::get(AI); } case Builtin::BIbzero: case Builtin::BI__builtin_bzero: { Address Dest = EmitPointerWithAlignment(E->getArg(0)); Value *SizeVal = EmitScalarExpr(E->getArg(1)); EmitNonNullArgCheck(RValue::get(Dest.getPointer()), E->getArg(0)->getType(), E->getArg(0)->getExprLoc(), FD, 0); Builder.CreateMemSet(Dest, Builder.getInt8(0), SizeVal, false); return RValue::get(Dest.getPointer()); } case Builtin::BImemcpy: case Builtin::BI__builtin_memcpy: { Address Dest = EmitPointerWithAlignment(E->getArg(0)); Address Src = EmitPointerWithAlignment(E->getArg(1)); Value *SizeVal = EmitScalarExpr(E->getArg(2)); EmitNonNullArgCheck(RValue::get(Dest.getPointer()), E->getArg(0)->getType(), E->getArg(0)->getExprLoc(), FD, 0); EmitNonNullArgCheck(RValue::get(Src.getPointer()), E->getArg(1)->getType(), E->getArg(1)->getExprLoc(), FD, 1); Builder.CreateMemCpy(Dest, Src, SizeVal, false); return RValue::get(Dest.getPointer()); } + case Builtin::BI__builtin_char_memchr: + BuiltinID = Builtin::BI__builtin_memchr; + break; + case Builtin::BI__builtin___memcpy_chk: { // fold __builtin_memcpy_chk(x, y, cst1, cst2) to memcpy iff cst1<=cst2. llvm::APSInt Size, DstSize; if (!E->getArg(2)->EvaluateAsInt(Size, CGM.getContext()) || !E->getArg(3)->EvaluateAsInt(DstSize, CGM.getContext())) break; if (Size.ugt(DstSize)) break; Address Dest = EmitPointerWithAlignment(E->getArg(0)); Address Src = EmitPointerWithAlignment(E->getArg(1)); Value *SizeVal = llvm::ConstantInt::get(Builder.getContext(), Size); Builder.CreateMemCpy(Dest, Src, SizeVal, false); return RValue::get(Dest.getPointer()); } case Builtin::BI__builtin_objc_memmove_collectable: { Address DestAddr = EmitPointerWithAlignment(E->getArg(0)); Address SrcAddr = EmitPointerWithAlignment(E->getArg(1)); Value *SizeVal = EmitScalarExpr(E->getArg(2)); CGM.getObjCRuntime().EmitGCMemmoveCollectable(*this, DestAddr, SrcAddr, SizeVal); return RValue::get(DestAddr.getPointer()); } case Builtin::BI__builtin___memmove_chk: { // fold __builtin_memmove_chk(x, y, cst1, cst2) to memmove iff cst1<=cst2. 
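// Illustrative sketch (not taken from the diffed source) of the folding rule
// shared by the __builtin___mem*_chk cases: when both size operands are
// constants and the copy fits the destination, the checked call degenerates
// to the plain intrinsic:
//
//   char Dst[16];
//   __builtin___memcpy_chk(Dst, Src, 8, __builtin_object_size(Dst, 0));
//   // cst1 = 8 <= cst2 = 16, so this is emitted as an ordinary 8-byte memcpy.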
llvm::APSInt Size, DstSize; if (!E->getArg(2)->EvaluateAsInt(Size, CGM.getContext()) || !E->getArg(3)->EvaluateAsInt(DstSize, CGM.getContext())) break; if (Size.ugt(DstSize)) break; Address Dest = EmitPointerWithAlignment(E->getArg(0)); Address Src = EmitPointerWithAlignment(E->getArg(1)); Value *SizeVal = llvm::ConstantInt::get(Builder.getContext(), Size); Builder.CreateMemMove(Dest, Src, SizeVal, false); return RValue::get(Dest.getPointer()); } case Builtin::BImemmove: case Builtin::BI__builtin_memmove: { Address Dest = EmitPointerWithAlignment(E->getArg(0)); Address Src = EmitPointerWithAlignment(E->getArg(1)); Value *SizeVal = EmitScalarExpr(E->getArg(2)); EmitNonNullArgCheck(RValue::get(Dest.getPointer()), E->getArg(0)->getType(), E->getArg(0)->getExprLoc(), FD, 0); EmitNonNullArgCheck(RValue::get(Src.getPointer()), E->getArg(1)->getType(), E->getArg(1)->getExprLoc(), FD, 1); Builder.CreateMemMove(Dest, Src, SizeVal, false); return RValue::get(Dest.getPointer()); } case Builtin::BImemset: case Builtin::BI__builtin_memset: { Address Dest = EmitPointerWithAlignment(E->getArg(0)); Value *ByteVal = Builder.CreateTrunc(EmitScalarExpr(E->getArg(1)), Builder.getInt8Ty()); Value *SizeVal = EmitScalarExpr(E->getArg(2)); EmitNonNullArgCheck(RValue::get(Dest.getPointer()), E->getArg(0)->getType(), E->getArg(0)->getExprLoc(), FD, 0); Builder.CreateMemSet(Dest, ByteVal, SizeVal, false); return RValue::get(Dest.getPointer()); } case Builtin::BI__builtin___memset_chk: { // fold __builtin_memset_chk(x, y, cst1, cst2) to memset iff cst1<=cst2. llvm::APSInt Size, DstSize; if (!E->getArg(2)->EvaluateAsInt(Size, CGM.getContext()) || !E->getArg(3)->EvaluateAsInt(DstSize, CGM.getContext())) break; if (Size.ugt(DstSize)) break; Address Dest = EmitPointerWithAlignment(E->getArg(0)); Value *ByteVal = Builder.CreateTrunc(EmitScalarExpr(E->getArg(1)), Builder.getInt8Ty()); Value *SizeVal = llvm::ConstantInt::get(Builder.getContext(), Size); Builder.CreateMemSet(Dest, ByteVal, SizeVal, false); return RValue::get(Dest.getPointer()); } case Builtin::BI__builtin_dwarf_cfa: { // The offset in bytes from the first argument to the CFA. // // Why on earth is this in the frontend? Is there any reason at // all that the backend can't reasonably determine this while // lowering llvm.eh.dwarf.cfa()? // // TODO: If there's a satisfactory reason, add a target hook for // this instead of hard-coding 0, which is correct for most targets. 
int32_t Offset = 0; Value *F = CGM.getIntrinsic(Intrinsic::eh_dwarf_cfa); return RValue::get(Builder.CreateCall(F, llvm::ConstantInt::get(Int32Ty, Offset))); } case Builtin::BI__builtin_return_address: { Value *Depth = CGM.EmitConstantExpr(E->getArg(0), getContext().UnsignedIntTy, this); Value *F = CGM.getIntrinsic(Intrinsic::returnaddress); return RValue::get(Builder.CreateCall(F, Depth)); } case Builtin::BI_ReturnAddress: { Value *F = CGM.getIntrinsic(Intrinsic::returnaddress); return RValue::get(Builder.CreateCall(F, Builder.getInt32(0))); } case Builtin::BI__builtin_frame_address: { Value *Depth = CGM.EmitConstantExpr(E->getArg(0), getContext().UnsignedIntTy, this); Value *F = CGM.getIntrinsic(Intrinsic::frameaddress); return RValue::get(Builder.CreateCall(F, Depth)); } case Builtin::BI__builtin_extract_return_addr: { Value *Address = EmitScalarExpr(E->getArg(0)); Value *Result = getTargetHooks().decodeReturnAddress(*this, Address); return RValue::get(Result); } case Builtin::BI__builtin_frob_return_addr: { Value *Address = EmitScalarExpr(E->getArg(0)); Value *Result = getTargetHooks().encodeReturnAddress(*this, Address); return RValue::get(Result); } case Builtin::BI__builtin_dwarf_sp_column: { llvm::IntegerType *Ty = cast(ConvertType(E->getType())); int Column = getTargetHooks().getDwarfEHStackPointer(CGM); if (Column == -1) { CGM.ErrorUnsupported(E, "__builtin_dwarf_sp_column"); return RValue::get(llvm::UndefValue::get(Ty)); } return RValue::get(llvm::ConstantInt::get(Ty, Column, true)); } case Builtin::BI__builtin_init_dwarf_reg_size_table: { Value *Address = EmitScalarExpr(E->getArg(0)); if (getTargetHooks().initDwarfEHRegSizeTable(*this, Address)) CGM.ErrorUnsupported(E, "__builtin_init_dwarf_reg_size_table"); return RValue::get(llvm::UndefValue::get(ConvertType(E->getType()))); } case Builtin::BI__builtin_eh_return: { Value *Int = EmitScalarExpr(E->getArg(0)); Value *Ptr = EmitScalarExpr(E->getArg(1)); llvm::IntegerType *IntTy = cast(Int->getType()); assert((IntTy->getBitWidth() == 32 || IntTy->getBitWidth() == 64) && "LLVM's __builtin_eh_return only supports 32- and 64-bit variants"); Value *F = CGM.getIntrinsic(IntTy->getBitWidth() == 32 ? Intrinsic::eh_return_i32 : Intrinsic::eh_return_i64); Builder.CreateCall(F, {Int, Ptr}); Builder.CreateUnreachable(); // We do need to preserve an insertion point. EmitBlock(createBasicBlock("builtin_eh_return.cont")); return RValue::get(nullptr); } case Builtin::BI__builtin_unwind_init: { Value *F = CGM.getIntrinsic(Intrinsic::eh_unwind_init); return RValue::get(Builder.CreateCall(F)); } case Builtin::BI__builtin_extend_pointer: { // Extends a pointer to the size of an _Unwind_Word, which is // uint64_t on all platforms. Generally this gets poked into a // register and eventually used as an address, so if the // addressing registers are wider than pointers and the platform // doesn't implicitly ignore high-order bits when doing // addressing, we need to make sure we zext / sext based on // the platform's expectations. // // See: http://gcc.gnu.org/ml/gcc-bugs/2002-02/msg00237.html // Cast the pointer to intptr_t. Value *Ptr = EmitScalarExpr(E->getArg(0)); Value *Result = Builder.CreatePtrToInt(Ptr, IntPtrTy, "extend.cast"); // If that's 64 bits, we're done. if (IntPtrTy->getBitWidth() == 64) return RValue::get(Result); // Otherwise, ask the codegen data what to do. 
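// Illustrative note (not taken from the diffed source): on a 32-bit target
// the two branches below disagree exactly when the pointer's high bit is
// set, e.g. for 0x80000000:
//
//   zext -> 0x0000000080000000
//   sext -> 0xFFFFFFFF80000000
//
// which is why the decision is delegated to a target hook.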
if (getTargetHooks().extendPointerWithSExt()) return RValue::get(Builder.CreateSExt(Result, Int64Ty, "extend.sext")); else return RValue::get(Builder.CreateZExt(Result, Int64Ty, "extend.zext")); } case Builtin::BI__builtin_setjmp: { // Buffer is a void**. Address Buf = EmitPointerWithAlignment(E->getArg(0)); // Store the frame pointer to the setjmp buffer. Value *FrameAddr = Builder.CreateCall(CGM.getIntrinsic(Intrinsic::frameaddress), ConstantInt::get(Int32Ty, 0)); Builder.CreateStore(FrameAddr, Buf); // Store the stack pointer to the setjmp buffer. Value *StackAddr = Builder.CreateCall(CGM.getIntrinsic(Intrinsic::stacksave)); Address StackSaveSlot = Builder.CreateConstInBoundsGEP(Buf, 2, getPointerSize()); Builder.CreateStore(StackAddr, StackSaveSlot); // Call LLVM's EH setjmp, which is lightweight. Value *F = CGM.getIntrinsic(Intrinsic::eh_sjlj_setjmp); Buf = Builder.CreateBitCast(Buf, Int8PtrTy); return RValue::get(Builder.CreateCall(F, Buf.getPointer())); } case Builtin::BI__builtin_longjmp: { Value *Buf = EmitScalarExpr(E->getArg(0)); Buf = Builder.CreateBitCast(Buf, Int8PtrTy); // Call LLVM's EH longjmp, which is lightweight. Builder.CreateCall(CGM.getIntrinsic(Intrinsic::eh_sjlj_longjmp), Buf); // longjmp doesn't return; mark this as unreachable. Builder.CreateUnreachable(); // We do need to preserve an insertion point. EmitBlock(createBasicBlock("longjmp.cont")); return RValue::get(nullptr); } case Builtin::BI__sync_fetch_and_add: case Builtin::BI__sync_fetch_and_sub: case Builtin::BI__sync_fetch_and_or: case Builtin::BI__sync_fetch_and_and: case Builtin::BI__sync_fetch_and_xor: case Builtin::BI__sync_fetch_and_nand: case Builtin::BI__sync_add_and_fetch: case Builtin::BI__sync_sub_and_fetch: case Builtin::BI__sync_and_and_fetch: case Builtin::BI__sync_or_and_fetch: case Builtin::BI__sync_xor_and_fetch: case Builtin::BI__sync_nand_and_fetch: case Builtin::BI__sync_val_compare_and_swap: case Builtin::BI__sync_bool_compare_and_swap: case Builtin::BI__sync_lock_test_and_set: case Builtin::BI__sync_lock_release: case Builtin::BI__sync_swap: llvm_unreachable("Shouldn't make it through sema"); case Builtin::BI__sync_fetch_and_add_1: case Builtin::BI__sync_fetch_and_add_2: case Builtin::BI__sync_fetch_and_add_4: case Builtin::BI__sync_fetch_and_add_8: case Builtin::BI__sync_fetch_and_add_16: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::Add, E); case Builtin::BI__sync_fetch_and_sub_1: case Builtin::BI__sync_fetch_and_sub_2: case Builtin::BI__sync_fetch_and_sub_4: case Builtin::BI__sync_fetch_and_sub_8: case Builtin::BI__sync_fetch_and_sub_16: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::Sub, E); case Builtin::BI__sync_fetch_and_or_1: case Builtin::BI__sync_fetch_and_or_2: case Builtin::BI__sync_fetch_and_or_4: case Builtin::BI__sync_fetch_and_or_8: case Builtin::BI__sync_fetch_and_or_16: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::Or, E); case Builtin::BI__sync_fetch_and_and_1: case Builtin::BI__sync_fetch_and_and_2: case Builtin::BI__sync_fetch_and_and_4: case Builtin::BI__sync_fetch_and_and_8: case Builtin::BI__sync_fetch_and_and_16: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::And, E); case Builtin::BI__sync_fetch_and_xor_1: case Builtin::BI__sync_fetch_and_xor_2: case Builtin::BI__sync_fetch_and_xor_4: case Builtin::BI__sync_fetch_and_xor_8: case Builtin::BI__sync_fetch_and_xor_16: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::Xor, E); case Builtin::BI__sync_fetch_and_nand_1: case Builtin::BI__sync_fetch_and_nand_2: case 
Builtin::BI__sync_fetch_and_nand_4: case Builtin::BI__sync_fetch_and_nand_8: case Builtin::BI__sync_fetch_and_nand_16: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::Nand, E); // Clang extensions: not overloaded yet. case Builtin::BI__sync_fetch_and_min: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::Min, E); case Builtin::BI__sync_fetch_and_max: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::Max, E); case Builtin::BI__sync_fetch_and_umin: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::UMin, E); case Builtin::BI__sync_fetch_and_umax: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::UMax, E); case Builtin::BI__sync_add_and_fetch_1: case Builtin::BI__sync_add_and_fetch_2: case Builtin::BI__sync_add_and_fetch_4: case Builtin::BI__sync_add_and_fetch_8: case Builtin::BI__sync_add_and_fetch_16: return EmitBinaryAtomicPost(*this, llvm::AtomicRMWInst::Add, E, llvm::Instruction::Add); case Builtin::BI__sync_sub_and_fetch_1: case Builtin::BI__sync_sub_and_fetch_2: case Builtin::BI__sync_sub_and_fetch_4: case Builtin::BI__sync_sub_and_fetch_8: case Builtin::BI__sync_sub_and_fetch_16: return EmitBinaryAtomicPost(*this, llvm::AtomicRMWInst::Sub, E, llvm::Instruction::Sub); case Builtin::BI__sync_and_and_fetch_1: case Builtin::BI__sync_and_and_fetch_2: case Builtin::BI__sync_and_and_fetch_4: case Builtin::BI__sync_and_and_fetch_8: case Builtin::BI__sync_and_and_fetch_16: return EmitBinaryAtomicPost(*this, llvm::AtomicRMWInst::And, E, llvm::Instruction::And); case Builtin::BI__sync_or_and_fetch_1: case Builtin::BI__sync_or_and_fetch_2: case Builtin::BI__sync_or_and_fetch_4: case Builtin::BI__sync_or_and_fetch_8: case Builtin::BI__sync_or_and_fetch_16: return EmitBinaryAtomicPost(*this, llvm::AtomicRMWInst::Or, E, llvm::Instruction::Or); case Builtin::BI__sync_xor_and_fetch_1: case Builtin::BI__sync_xor_and_fetch_2: case Builtin::BI__sync_xor_and_fetch_4: case Builtin::BI__sync_xor_and_fetch_8: case Builtin::BI__sync_xor_and_fetch_16: return EmitBinaryAtomicPost(*this, llvm::AtomicRMWInst::Xor, E, llvm::Instruction::Xor); case Builtin::BI__sync_nand_and_fetch_1: case Builtin::BI__sync_nand_and_fetch_2: case Builtin::BI__sync_nand_and_fetch_4: case Builtin::BI__sync_nand_and_fetch_8: case Builtin::BI__sync_nand_and_fetch_16: return EmitBinaryAtomicPost(*this, llvm::AtomicRMWInst::Nand, E, llvm::Instruction::And, true); case Builtin::BI__sync_val_compare_and_swap_1: case Builtin::BI__sync_val_compare_and_swap_2: case Builtin::BI__sync_val_compare_and_swap_4: case Builtin::BI__sync_val_compare_and_swap_8: case Builtin::BI__sync_val_compare_and_swap_16: return RValue::get(MakeAtomicCmpXchgValue(*this, E, false)); case Builtin::BI__sync_bool_compare_and_swap_1: case Builtin::BI__sync_bool_compare_and_swap_2: case Builtin::BI__sync_bool_compare_and_swap_4: case Builtin::BI__sync_bool_compare_and_swap_8: case Builtin::BI__sync_bool_compare_and_swap_16: return RValue::get(MakeAtomicCmpXchgValue(*this, E, true)); case Builtin::BI__sync_swap_1: case Builtin::BI__sync_swap_2: case Builtin::BI__sync_swap_4: case Builtin::BI__sync_swap_8: case Builtin::BI__sync_swap_16: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::Xchg, E); case Builtin::BI__sync_lock_test_and_set_1: case Builtin::BI__sync_lock_test_and_set_2: case Builtin::BI__sync_lock_test_and_set_4: case Builtin::BI__sync_lock_test_and_set_8: case Builtin::BI__sync_lock_test_and_set_16: return EmitBinaryAtomic(*this, llvm::AtomicRMWInst::Xchg, E); case Builtin::BI__sync_lock_release_1: case Builtin::BI__sync_lock_release_2: case 
Builtin::BI__sync_lock_release_4:
  case Builtin::BI__sync_lock_release_8:
  case Builtin::BI__sync_lock_release_16: {
    Value *Ptr = EmitScalarExpr(E->getArg(0));
    QualType ElTy = E->getArg(0)->getType()->getPointeeType();
    CharUnits StoreSize = getContext().getTypeSizeInChars(ElTy);
    llvm::Type *ITy = llvm::IntegerType::get(getLLVMContext(),
                                             StoreSize.getQuantity() * 8);
    Ptr = Builder.CreateBitCast(Ptr, ITy->getPointerTo());
    llvm::StoreInst *Store =
      Builder.CreateAlignedStore(llvm::Constant::getNullValue(ITy), Ptr,
                                 StoreSize);
    Store->setAtomic(llvm::AtomicOrdering::Release);
    return RValue::get(nullptr);
  }

  case Builtin::BI__sync_synchronize: {
    // We assume this is supposed to correspond to a C++0x-style
    // sequentially-consistent fence (i.e. this is only usable for
    // synchronization, not device I/O or anything like that). This intrinsic
    // is really badly designed in the sense that in theory, there isn't
    // any way to safely use it... but in practice, it mostly works
    // to use it with non-atomic loads and stores to get acquire/release
    // semantics.
    Builder.CreateFence(llvm::AtomicOrdering::SequentiallyConsistent);
    return RValue::get(nullptr);
  }

  case Builtin::BI__builtin_nontemporal_load:
    return RValue::get(EmitNontemporalLoad(*this, E));
  case Builtin::BI__builtin_nontemporal_store:
    return RValue::get(EmitNontemporalStore(*this, E));
  case Builtin::BI__c11_atomic_is_lock_free:
  case Builtin::BI__atomic_is_lock_free: {
    // Call "bool __atomic_is_lock_free(size_t size, void *ptr)". For the
    // __c11 builtin, ptr is 0 (indicating a properly-aligned object), since
    // _Atomic(T) is always properly-aligned.
    const char *LibCallName = "__atomic_is_lock_free";
    CallArgList Args;
    Args.add(RValue::get(EmitScalarExpr(E->getArg(0))),
             getContext().getSizeType());
    if (BuiltinID == Builtin::BI__atomic_is_lock_free)
      Args.add(RValue::get(EmitScalarExpr(E->getArg(1))),
               getContext().VoidPtrTy);
    else
      Args.add(RValue::get(llvm::Constant::getNullValue(VoidPtrTy)),
               getContext().VoidPtrTy);
    const CGFunctionInfo &FuncInfo =
        CGM.getTypes().arrangeBuiltinFunctionCall(E->getType(), Args);
    llvm::FunctionType *FTy = CGM.getTypes().GetFunctionType(FuncInfo);
    llvm::Constant *Func = CGM.CreateRuntimeFunction(FTy, LibCallName);
    return EmitCall(FuncInfo, CGCallee::forDirect(Func),
                    ReturnValueSlot(), Args);
  }

  case Builtin::BI__atomic_test_and_set: {
    // Look at the argument type to determine whether this is a volatile
    // operation. The parameter type is always volatile.
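// Illustrative sketch (not taken from the diffed source): with a constant
// memory order, as in
//
//   bool WasSet = __atomic_test_and_set(&Flag, __ATOMIC_ACQUIRE);
//
// the constant path below emits a single `atomicrmw xchg i8* %flag, i8 1
// acquire`; a runtime order instead produces the switch over the five
// ordering blocks further down.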
  case Builtin::BI__atomic_test_and_set: {
    // Look at the argument type to determine whether this is a volatile
    // operation. The parameter type is always volatile.
    QualType PtrTy = E->getArg(0)->IgnoreImpCasts()->getType();
    bool Volatile =
        PtrTy->castAs<PointerType>()->getPointeeType().isVolatileQualified();

    Value *Ptr = EmitScalarExpr(E->getArg(0));
    unsigned AddrSpace = Ptr->getType()->getPointerAddressSpace();
    Ptr = Builder.CreateBitCast(Ptr, Int8Ty->getPointerTo(AddrSpace));
    Value *NewVal = Builder.getInt8(1);
    Value *Order = EmitScalarExpr(E->getArg(1));
    if (isa<llvm::ConstantInt>(Order)) {
      int ord = cast<llvm::ConstantInt>(Order)->getZExtValue();
      AtomicRMWInst *Result = nullptr;
      switch (ord) {
      case 0:  // memory_order_relaxed
      default: // invalid order
        Result = Builder.CreateAtomicRMW(llvm::AtomicRMWInst::Xchg, Ptr,
                                         NewVal,
                                         llvm::AtomicOrdering::Monotonic);
        break;
      case 1: // memory_order_consume
      case 2: // memory_order_acquire
        Result = Builder.CreateAtomicRMW(llvm::AtomicRMWInst::Xchg, Ptr,
                                         NewVal,
                                         llvm::AtomicOrdering::Acquire);
        break;
      case 3: // memory_order_release
        Result = Builder.CreateAtomicRMW(llvm::AtomicRMWInst::Xchg, Ptr,
                                         NewVal,
                                         llvm::AtomicOrdering::Release);
        break;
      case 4: // memory_order_acq_rel
        Result = Builder.CreateAtomicRMW(llvm::AtomicRMWInst::Xchg, Ptr,
                                         NewVal,
                                         llvm::AtomicOrdering::AcquireRelease);
        break;
      case 5: // memory_order_seq_cst
        Result = Builder.CreateAtomicRMW(
            llvm::AtomicRMWInst::Xchg, Ptr, NewVal,
            llvm::AtomicOrdering::SequentiallyConsistent);
        break;
      }
      Result->setVolatile(Volatile);
      return RValue::get(Builder.CreateIsNotNull(Result, "tobool"));
    }

    llvm::BasicBlock *ContBB = createBasicBlock("atomic.continue", CurFn);

    llvm::BasicBlock *BBs[5] = {
        createBasicBlock("monotonic", CurFn),
        createBasicBlock("acquire", CurFn),
        createBasicBlock("release", CurFn),
        createBasicBlock("acqrel", CurFn),
        createBasicBlock("seqcst", CurFn)};
    llvm::AtomicOrdering Orders[5] = {
        llvm::AtomicOrdering::Monotonic, llvm::AtomicOrdering::Acquire,
        llvm::AtomicOrdering::Release, llvm::AtomicOrdering::AcquireRelease,
        llvm::AtomicOrdering::SequentiallyConsistent};

    Order = Builder.CreateIntCast(Order, Builder.getInt32Ty(), false);
    llvm::SwitchInst *SI = Builder.CreateSwitch(Order, BBs[0]);

    Builder.SetInsertPoint(ContBB);
    PHINode *Result = Builder.CreatePHI(Int8Ty, 5, "was_set");

    for (unsigned i = 0; i < 5; ++i) {
      Builder.SetInsertPoint(BBs[i]);
      AtomicRMWInst *RMW = Builder.CreateAtomicRMW(llvm::AtomicRMWInst::Xchg,
                                                   Ptr, NewVal, Orders[i]);
      RMW->setVolatile(Volatile);
      Result->addIncoming(RMW, BBs[i]);
      Builder.CreateBr(ContBB);
    }

    SI->addCase(Builder.getInt32(0), BBs[0]);
    SI->addCase(Builder.getInt32(1), BBs[1]);
    SI->addCase(Builder.getInt32(2), BBs[1]);
    SI->addCase(Builder.getInt32(3), BBs[2]);
    SI->addCase(Builder.getInt32(4), BBs[3]);
    SI->addCase(Builder.getInt32(5), BBs[4]);

    Builder.SetInsertPoint(ContBB);
    return RValue::get(Builder.CreateIsNotNull(Result, "tobool"));
  }
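  // When the order operand is not constant, the case above materializes one
  // 'atomicrmw xchg' per allowed ordering and dispatches with, roughly:
  //   switch i32 %order, label %monotonic [ i32 1, label %acquire
  //                                         i32 2, label %acquire ... ]
  // with the results merged through the 'was_set' phi.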
  case Builtin::BI__atomic_clear: {
    QualType PtrTy = E->getArg(0)->IgnoreImpCasts()->getType();
    bool Volatile =
        PtrTy->castAs<PointerType>()->getPointeeType().isVolatileQualified();

    Address Ptr = EmitPointerWithAlignment(E->getArg(0));
    unsigned AddrSpace = Ptr.getPointer()->getType()->getPointerAddressSpace();
    Ptr = Builder.CreateBitCast(Ptr, Int8Ty->getPointerTo(AddrSpace));
    Value *NewVal = Builder.getInt8(0);
    Value *Order = EmitScalarExpr(E->getArg(1));
    if (isa<llvm::ConstantInt>(Order)) {
      int ord = cast<llvm::ConstantInt>(Order)->getZExtValue();
      StoreInst *Store = Builder.CreateStore(NewVal, Ptr, Volatile);
      switch (ord) {
      case 0:  // memory_order_relaxed
      default: // invalid order
        Store->setOrdering(llvm::AtomicOrdering::Monotonic);
        break;
      case 3: // memory_order_release
        Store->setOrdering(llvm::AtomicOrdering::Release);
        break;
      case 5: // memory_order_seq_cst
        Store->setOrdering(llvm::AtomicOrdering::SequentiallyConsistent);
        break;
      }
      return RValue::get(nullptr);
    }

    llvm::BasicBlock *ContBB = createBasicBlock("atomic.continue", CurFn);

    llvm::BasicBlock *BBs[3] = {
        createBasicBlock("monotonic", CurFn),
        createBasicBlock("release", CurFn),
        createBasicBlock("seqcst", CurFn)};
    llvm::AtomicOrdering Orders[3] = {
        llvm::AtomicOrdering::Monotonic, llvm::AtomicOrdering::Release,
        llvm::AtomicOrdering::SequentiallyConsistent};

    Order = Builder.CreateIntCast(Order, Builder.getInt32Ty(), false);
    llvm::SwitchInst *SI = Builder.CreateSwitch(Order, BBs[0]);

    for (unsigned i = 0; i < 3; ++i) {
      Builder.SetInsertPoint(BBs[i]);
      StoreInst *Store = Builder.CreateStore(NewVal, Ptr, Volatile);
      Store->setOrdering(Orders[i]);
      Builder.CreateBr(ContBB);
    }

    SI->addCase(Builder.getInt32(0), BBs[0]);
    SI->addCase(Builder.getInt32(3), BBs[1]);
    SI->addCase(Builder.getInt32(5), BBs[2]);

    Builder.SetInsertPoint(ContBB);
    return RValue::get(nullptr);
  }

  case Builtin::BI__atomic_thread_fence:
  case Builtin::BI__atomic_signal_fence:
  case Builtin::BI__c11_atomic_thread_fence:
  case Builtin::BI__c11_atomic_signal_fence: {
    llvm::SynchronizationScope Scope;
    if (BuiltinID == Builtin::BI__atomic_signal_fence ||
        BuiltinID == Builtin::BI__c11_atomic_signal_fence)
      Scope = llvm::SingleThread;
    else
      Scope = llvm::CrossThread;
    Value *Order = EmitScalarExpr(E->getArg(0));
    if (isa<llvm::ConstantInt>(Order)) {
      int ord = cast<llvm::ConstantInt>(Order)->getZExtValue();
      switch (ord) {
      case 0:  // memory_order_relaxed
      default: // invalid order
        break;
      case 1: // memory_order_consume
      case 2: // memory_order_acquire
        Builder.CreateFence(llvm::AtomicOrdering::Acquire, Scope);
        break;
      case 3: // memory_order_release
        Builder.CreateFence(llvm::AtomicOrdering::Release, Scope);
        break;
      case 4: // memory_order_acq_rel
        Builder.CreateFence(llvm::AtomicOrdering::AcquireRelease, Scope);
        break;
      case 5: // memory_order_seq_cst
        Builder.CreateFence(llvm::AtomicOrdering::SequentiallyConsistent,
                            Scope);
        break;
      }
      return RValue::get(nullptr);
    }

    llvm::BasicBlock *AcquireBB, *ReleaseBB, *AcqRelBB, *SeqCstBB;
    AcquireBB = createBasicBlock("acquire", CurFn);
    ReleaseBB = createBasicBlock("release", CurFn);
    AcqRelBB = createBasicBlock("acqrel", CurFn);
    SeqCstBB = createBasicBlock("seqcst", CurFn);
    llvm::BasicBlock *ContBB = createBasicBlock("atomic.continue", CurFn);

    Order = Builder.CreateIntCast(Order, Builder.getInt32Ty(), false);
    llvm::SwitchInst *SI = Builder.CreateSwitch(Order, ContBB);

    Builder.SetInsertPoint(AcquireBB);
    Builder.CreateFence(llvm::AtomicOrdering::Acquire, Scope);
    Builder.CreateBr(ContBB);
    SI->addCase(Builder.getInt32(1), AcquireBB);
    SI->addCase(Builder.getInt32(2), AcquireBB);

    Builder.SetInsertPoint(ReleaseBB);
    Builder.CreateFence(llvm::AtomicOrdering::Release, Scope);
    Builder.CreateBr(ContBB);
    SI->addCase(Builder.getInt32(3), ReleaseBB);

    Builder.SetInsertPoint(AcqRelBB);
    Builder.CreateFence(llvm::AtomicOrdering::AcquireRelease, Scope);
    Builder.CreateBr(ContBB);
    SI->addCase(Builder.getInt32(4), AcqRelBB);

    Builder.SetInsertPoint(SeqCstBB);
    Builder.CreateFence(llvm::AtomicOrdering::SequentiallyConsistent, Scope);
    Builder.CreateBr(ContBB);
    SI->addCase(Builder.getInt32(5), SeqCstBB);

    Builder.SetInsertPoint(ContBB);
    return RValue::get(nullptr);
  }
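  // e.g. __atomic_thread_fence(__ATOMIC_ACQUIRE) becomes "fence acquire";
  // the signal-fence variants emit the same fence restricted to
  // single-thread scope.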
    // Library functions with special handling.
  case Builtin::BIsqrt:
  case Builtin::BIsqrtf:
  case Builtin::BIsqrtl: {
    // Transform a call to sqrt* into a @llvm.sqrt.* intrinsic call, but only
    // in finite- or unsafe-math mode (the intrinsic has different semantics
    // for handling negative numbers compared to the library function, so
    // -fmath-errno=0 is not enough).
    if (!FD->hasAttr<ConstAttr>())
      break;
    if (!(CGM.getCodeGenOpts().UnsafeFPMath ||
          CGM.getCodeGenOpts().NoNaNsFPMath))
      break;
    Value *Arg0 = EmitScalarExpr(E->getArg(0));
    llvm::Type *ArgType = Arg0->getType();
    Value *F = CGM.getIntrinsic(Intrinsic::sqrt, ArgType);
    return RValue::get(Builder.CreateCall(F, Arg0));
  }

  case Builtin::BI__builtin_pow:
  case Builtin::BI__builtin_powf:
  case Builtin::BI__builtin_powl:
  case Builtin::BIpow:
  case Builtin::BIpowf:
  case Builtin::BIpowl: {
    // Transform a call to pow* into a @llvm.pow.* intrinsic call.
    if (!FD->hasAttr<ConstAttr>())
      break;
    Value *Base = EmitScalarExpr(E->getArg(0));
    Value *Exponent = EmitScalarExpr(E->getArg(1));
    llvm::Type *ArgType = Base->getType();
    Value *F = CGM.getIntrinsic(Intrinsic::pow, ArgType);
    return RValue::get(Builder.CreateCall(F, {Base, Exponent}));
  }

  case Builtin::BIfma:
  case Builtin::BIfmaf:
  case Builtin::BIfmal:
  case Builtin::BI__builtin_fma:
  case Builtin::BI__builtin_fmaf:
  case Builtin::BI__builtin_fmal: {
    // Rewrite fma to intrinsic.
    Value *FirstArg = EmitScalarExpr(E->getArg(0));
    llvm::Type *ArgType = FirstArg->getType();
    Value *F = CGM.getIntrinsic(Intrinsic::fma, ArgType);
    return RValue::get(
        Builder.CreateCall(F, {FirstArg, EmitScalarExpr(E->getArg(1)),
                               EmitScalarExpr(E->getArg(2))}));
  }

  case Builtin::BI__builtin_signbit:
  case Builtin::BI__builtin_signbitf:
  case Builtin::BI__builtin_signbitl: {
    return RValue::get(
        Builder.CreateZExt(EmitSignBit(*this, EmitScalarExpr(E->getArg(0))),
                           ConvertType(E->getType())));
  }
  case Builtin::BI__builtin_annotation: {
    llvm::Value *AnnVal = EmitScalarExpr(E->getArg(0));
    llvm::Value *F = CGM.getIntrinsic(llvm::Intrinsic::annotation,
                                      AnnVal->getType());

    // Get the annotation string, go through casts. Sema requires this to be a
    // non-wide string literal, potentially casted, so the cast<> is safe.
    const Expr *AnnotationStrExpr = E->getArg(1)->IgnoreParenCasts();
    StringRef Str = cast<StringLiteral>(AnnotationStrExpr)->getString();
    return RValue::get(EmitAnnotationCall(F, AnnVal, Str, E->getExprLoc()));
  }
  case Builtin::BI__builtin_addcb:
  case Builtin::BI__builtin_addcs:
  case Builtin::BI__builtin_addc:
  case Builtin::BI__builtin_addcl:
  case Builtin::BI__builtin_addcll:
  case Builtin::BI__builtin_subcb:
  case Builtin::BI__builtin_subcs:
  case Builtin::BI__builtin_subc:
  case Builtin::BI__builtin_subcl:
  case Builtin::BI__builtin_subcll: {
    // We translate all of these builtins from expressions of the form:
    //   int x = ..., y = ..., carryin = ..., carryout, result;
    //   result = __builtin_addc(x, y, carryin, &carryout);
    //
    // to LLVM IR of the form:
    //
    //   %tmp1 = call {i32, i1} @llvm.uadd.with.overflow.i32(i32 %x, i32 %y)
    //   %tmpsum1 = extractvalue {i32, i1} %tmp1, 0
    //   %carry1 = extractvalue {i32, i1} %tmp1, 1
    //   %tmp2 = call {i32, i1} @llvm.uadd.with.overflow.i32(i32 %tmpsum1,
    //                                                       i32 %carryin)
    //   %result = extractvalue {i32, i1} %tmp2, 0
    //   %carry2 = extractvalue {i32, i1} %tmp2, 1
    //   %tmp3 = or i1 %carry1, %carry2
    //   %tmp4 = zext i1 %tmp3 to i32
    //   store i32 %tmp4, i32* %carryout
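    // For instance (illustrative only), a 128-bit addition can be built out
    // of two 64-bit limbs with a carry chain:
    //   unsigned long long c0, c1;
    //   lo = __builtin_addcll(a_lo, b_lo, 0, &c0);
    //   hi = __builtin_addcll(a_hi, b_hi, c0, &c1);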
    // Scalarize our inputs.
    llvm::Value *X = EmitScalarExpr(E->getArg(0));
    llvm::Value *Y = EmitScalarExpr(E->getArg(1));
    llvm::Value *Carryin = EmitScalarExpr(E->getArg(2));
    Address CarryOutPtr = EmitPointerWithAlignment(E->getArg(3));

    // Decide if we are lowering to a uadd.with.overflow or usub.with.overflow.
    llvm::Intrinsic::ID IntrinsicId;
    switch (BuiltinID) {
    default: llvm_unreachable("Unknown multiprecision builtin id.");
    case Builtin::BI__builtin_addcb:
    case Builtin::BI__builtin_addcs:
    case Builtin::BI__builtin_addc:
    case Builtin::BI__builtin_addcl:
    case Builtin::BI__builtin_addcll:
      IntrinsicId = llvm::Intrinsic::uadd_with_overflow;
      break;
    case Builtin::BI__builtin_subcb:
    case Builtin::BI__builtin_subcs:
    case Builtin::BI__builtin_subc:
    case Builtin::BI__builtin_subcl:
    case Builtin::BI__builtin_subcll:
      IntrinsicId = llvm::Intrinsic::usub_with_overflow;
      break;
    }

    // Construct our resulting LLVM IR expression.
    llvm::Value *Carry1;
    llvm::Value *Sum1 = EmitOverflowIntrinsic(*this, IntrinsicId,
                                              X, Y, Carry1);
    llvm::Value *Carry2;
    llvm::Value *Sum2 = EmitOverflowIntrinsic(*this, IntrinsicId,
                                              Sum1, Carryin, Carry2);
    llvm::Value *CarryOut = Builder.CreateZExt(Builder.CreateOr(Carry1,
                                                                Carry2),
                                               X->getType());
    Builder.CreateStore(CarryOut, CarryOutPtr);
    return RValue::get(Sum2);
  }

  case Builtin::BI__builtin_add_overflow:
  case Builtin::BI__builtin_sub_overflow:
  case Builtin::BI__builtin_mul_overflow: {
    const clang::Expr *LeftArg = E->getArg(0);
    const clang::Expr *RightArg = E->getArg(1);
    const clang::Expr *ResultArg = E->getArg(2);

    clang::QualType ResultQTy =
        ResultArg->getType()->castAs<PointerType>()->getPointeeType();

    WidthAndSignedness LeftInfo =
        getIntegerWidthAndSignedness(CGM.getContext(), LeftArg->getType());
    WidthAndSignedness RightInfo =
        getIntegerWidthAndSignedness(CGM.getContext(), RightArg->getType());
    WidthAndSignedness ResultInfo =
        getIntegerWidthAndSignedness(CGM.getContext(), ResultQTy);
    WidthAndSignedness EncompassingInfo =
        EncompassingIntegerType({LeftInfo, RightInfo, ResultInfo});

    llvm::Type *EncompassingLLVMTy =
        llvm::IntegerType::get(CGM.getLLVMContext(), EncompassingInfo.Width);

    llvm::Type *ResultLLVMTy = CGM.getTypes().ConvertType(ResultQTy);

    llvm::Intrinsic::ID IntrinsicId;
    switch (BuiltinID) {
    default:
      llvm_unreachable("Unknown overflow builtin id.");
    case Builtin::BI__builtin_add_overflow:
      IntrinsicId = EncompassingInfo.Signed
                        ? llvm::Intrinsic::sadd_with_overflow
                        : llvm::Intrinsic::uadd_with_overflow;
      break;
    case Builtin::BI__builtin_sub_overflow:
      IntrinsicId = EncompassingInfo.Signed
                        ? llvm::Intrinsic::ssub_with_overflow
                        : llvm::Intrinsic::usub_with_overflow;
      break;
    case Builtin::BI__builtin_mul_overflow:
      IntrinsicId = EncompassingInfo.Signed
                        ? llvm::Intrinsic::smul_with_overflow
                        : llvm::Intrinsic::umul_with_overflow;
      break;
    }

    llvm::Value *Left = EmitScalarExpr(LeftArg);
    llvm::Value *Right = EmitScalarExpr(RightArg);
    Address ResultPtr = EmitPointerWithAlignment(ResultArg);

    // Extend each operand to the encompassing type.
    Left = Builder.CreateIntCast(Left, EncompassingLLVMTy, LeftInfo.Signed);
    Right = Builder.CreateIntCast(Right, EncompassingLLVMTy, RightInfo.Signed);

    // Perform the operation on the extended values.
    llvm::Value *Overflow, *Result;
    Result = EmitOverflowIntrinsic(*this, IntrinsicId, Left, Right, Overflow);

    if (EncompassingInfo.Width > ResultInfo.Width) {
      // The encompassing type is wider than the result type, so we need to
      // truncate it.
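      // For example, with int operands and a signed char result, 100 + 100
      // computes 200 in the 32-bit encompassing type; truncating to i8 gives
      // -56, and sign-extending that back yields -56 != 200, so the
      // truncation itself is flagged as an overflow.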
      llvm::Value *ResultTrunc = Builder.CreateTrunc(Result, ResultLLVMTy);

      // To see if the truncation caused an overflow, we will extend
      // the result and then compare it to the original result.
      llvm::Value *ResultTruncExt = Builder.CreateIntCast(
          ResultTrunc, EncompassingLLVMTy, ResultInfo.Signed);
      llvm::Value *TruncationOverflow =
          Builder.CreateICmpNE(Result, ResultTruncExt);

      Overflow = Builder.CreateOr(Overflow, TruncationOverflow);
      Result = ResultTrunc;
    }

    // Finally, store the result using the pointer.
    bool isVolatile =
        ResultArg->getType()->getPointeeType().isVolatileQualified();
    Builder.CreateStore(EmitToMemory(Result, ResultQTy), ResultPtr,
                        isVolatile);

    return RValue::get(Overflow);
  }

  case Builtin::BI__builtin_uadd_overflow:
  case Builtin::BI__builtin_uaddl_overflow:
  case Builtin::BI__builtin_uaddll_overflow:
  case Builtin::BI__builtin_usub_overflow:
  case Builtin::BI__builtin_usubl_overflow:
  case Builtin::BI__builtin_usubll_overflow:
  case Builtin::BI__builtin_umul_overflow:
  case Builtin::BI__builtin_umull_overflow:
  case Builtin::BI__builtin_umulll_overflow:
  case Builtin::BI__builtin_sadd_overflow:
  case Builtin::BI__builtin_saddl_overflow:
  case Builtin::BI__builtin_saddll_overflow:
  case Builtin::BI__builtin_ssub_overflow:
  case Builtin::BI__builtin_ssubl_overflow:
  case Builtin::BI__builtin_ssubll_overflow:
  case Builtin::BI__builtin_smul_overflow:
  case Builtin::BI__builtin_smull_overflow:
  case Builtin::BI__builtin_smulll_overflow: {
    // We translate all of these builtins directly to the relevant llvm IR
    // node.

    // Scalarize our inputs.
    llvm::Value *X = EmitScalarExpr(E->getArg(0));
    llvm::Value *Y = EmitScalarExpr(E->getArg(1));
    Address SumOutPtr = EmitPointerWithAlignment(E->getArg(2));

    // Decide which of the overflow intrinsics we are lowering to:
    llvm::Intrinsic::ID IntrinsicId;
    switch (BuiltinID) {
    default: llvm_unreachable("Unknown overflow builtin id.");
    case Builtin::BI__builtin_uadd_overflow:
    case Builtin::BI__builtin_uaddl_overflow:
    case Builtin::BI__builtin_uaddll_overflow:
      IntrinsicId = llvm::Intrinsic::uadd_with_overflow;
      break;
    case Builtin::BI__builtin_usub_overflow:
    case Builtin::BI__builtin_usubl_overflow:
    case Builtin::BI__builtin_usubll_overflow:
      IntrinsicId = llvm::Intrinsic::usub_with_overflow;
      break;
    case Builtin::BI__builtin_umul_overflow:
    case Builtin::BI__builtin_umull_overflow:
    case Builtin::BI__builtin_umulll_overflow:
      IntrinsicId = llvm::Intrinsic::umul_with_overflow;
      break;
    case Builtin::BI__builtin_sadd_overflow:
    case Builtin::BI__builtin_saddl_overflow:
    case Builtin::BI__builtin_saddll_overflow:
      IntrinsicId = llvm::Intrinsic::sadd_with_overflow;
      break;
    case Builtin::BI__builtin_ssub_overflow:
    case Builtin::BI__builtin_ssubl_overflow:
    case Builtin::BI__builtin_ssubll_overflow:
      IntrinsicId = llvm::Intrinsic::ssub_with_overflow;
      break;
    case Builtin::BI__builtin_smul_overflow:
    case Builtin::BI__builtin_smull_overflow:
    case Builtin::BI__builtin_smulll_overflow:
      IntrinsicId = llvm::Intrinsic::smul_with_overflow;
      break;
    }

    llvm::Value *Carry;
    llvm::Value *Sum = EmitOverflowIntrinsic(*this, IntrinsicId, X, Y, Carry);
    Builder.CreateStore(Sum, SumOutPtr);

    return RValue::get(Carry);
  }
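  // e.g. __builtin_uadd_overflow(x, y, &r) becomes roughly:
  //   %pair  = call {i32, i1} @llvm.uadd.with.overflow.i32(i32 %x, i32 %y)
  //   %sum   = extractvalue {i32, i1} %pair, 0   ; stored through &r
  //   %carry = extractvalue {i32, i1} %pair, 1   ; returned as _Bool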
  case Builtin::BI__builtin_addressof:
    return RValue::get(EmitLValue(E->getArg(0)).getPointer());
  case Builtin::BI__builtin_operator_new:
    return EmitBuiltinNewDeleteCall(FD->getType()->castAs<FunctionProtoType>(),
                                    E->getArg(0), false);
  case Builtin::BI__builtin_operator_delete:
    return EmitBuiltinNewDeleteCall(FD->getType()->castAs<FunctionProtoType>(),
                                    E->getArg(0), true);
  case Builtin::BI__noop:
    // __noop always evaluates to an integer literal zero.
    return RValue::get(ConstantInt::get(IntTy, 0));
  case Builtin::BI__builtin_call_with_static_chain: {
    const CallExpr *Call = cast<CallExpr>(E->getArg(0));
    const Expr *Chain = E->getArg(1);
    return EmitCall(Call->getCallee()->getType(),
                    EmitCallee(Call->getCallee()), Call, ReturnValue,
                    EmitScalarExpr(Chain));
  }
  case Builtin::BI_InterlockedExchange8:
  case Builtin::BI_InterlockedExchange16:
  case Builtin::BI_InterlockedExchange:
  case Builtin::BI_InterlockedExchangePointer:
    return RValue::get(
        EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedExchange, E));
  case Builtin::BI_InterlockedCompareExchangePointer: {
    llvm::Type *RTy;
    llvm::IntegerType *IntType =
        IntegerType::get(getLLVMContext(),
                         getContext().getTypeSize(E->getType()));
    llvm::Type *IntPtrType = IntType->getPointerTo();

    llvm::Value *Destination =
        Builder.CreateBitCast(EmitScalarExpr(E->getArg(0)), IntPtrType);

    llvm::Value *Exchange = EmitScalarExpr(E->getArg(1));
    RTy = Exchange->getType();
    Exchange = Builder.CreatePtrToInt(Exchange, IntType);

    llvm::Value *Comparand =
        Builder.CreatePtrToInt(EmitScalarExpr(E->getArg(2)), IntType);

    auto Result =
        Builder.CreateAtomicCmpXchg(Destination, Comparand, Exchange,
                                    AtomicOrdering::SequentiallyConsistent,
                                    AtomicOrdering::SequentiallyConsistent);
    Result->setVolatile(true);

    return RValue::get(Builder.CreateIntToPtr(
        Builder.CreateExtractValue(Result, 0), RTy));
  }
  case Builtin::BI_InterlockedCompareExchange8:
  case Builtin::BI_InterlockedCompareExchange16:
  case Builtin::BI_InterlockedCompareExchange:
  case Builtin::BI_InterlockedCompareExchange64: {
    AtomicCmpXchgInst *CXI = Builder.CreateAtomicCmpXchg(
        EmitScalarExpr(E->getArg(0)),
        EmitScalarExpr(E->getArg(2)),
        EmitScalarExpr(E->getArg(1)),
        AtomicOrdering::SequentiallyConsistent,
        AtomicOrdering::SequentiallyConsistent);
    CXI->setVolatile(true);
    return RValue::get(Builder.CreateExtractValue(CXI, 0));
  }
  case Builtin::BI_InterlockedIncrement16:
  case Builtin::BI_InterlockedIncrement:
    return RValue::get(
        EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedIncrement, E));
  case Builtin::BI_InterlockedDecrement16:
  case Builtin::BI_InterlockedDecrement:
    return RValue::get(
        EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedDecrement, E));
  case Builtin::BI_InterlockedAnd8:
  case Builtin::BI_InterlockedAnd16:
  case Builtin::BI_InterlockedAnd:
    return RValue::get(EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedAnd, E));
  case Builtin::BI_InterlockedExchangeAdd8:
  case Builtin::BI_InterlockedExchangeAdd16:
  case Builtin::BI_InterlockedExchangeAdd:
    return RValue::get(
        EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedExchangeAdd, E));
  case Builtin::BI_InterlockedExchangeSub8:
  case Builtin::BI_InterlockedExchangeSub16:
  case Builtin::BI_InterlockedExchangeSub:
    return RValue::get(
        EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedExchangeSub, E));
  case Builtin::BI_InterlockedOr8:
  case Builtin::BI_InterlockedOr16:
  case Builtin::BI_InterlockedOr:
    return RValue::get(EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedOr, E));
  case Builtin::BI_InterlockedXor8:
  case Builtin::BI_InterlockedXor16:
  case Builtin::BI_InterlockedXor:
    return RValue::get(EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedXor, E));
  case Builtin::BI__readfsdword: {
    llvm::Type *IntTy = ConvertType(E->getType());
    Value *IntToPtr =
        Builder.CreateIntToPtr(EmitScalarExpr(E->getArg(0)),
                               llvm::PointerType::get(IntTy, 257));
    LoadInst *Load = Builder.CreateAlignedLoad(
        IntTy, IntToPtr, getContext().getTypeAlignInChars(E->getType()));
    Load->setVolatile(true);
    return RValue::get(Load);
  }
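  // (Address space 257 is how LLVM's x86 backend models FS-relative
  // addressing, which is what __readfsdword reads through.)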
  case Builtin::BI__exception_code:
  case Builtin::BI_exception_code:
    return RValue::get(EmitSEHExceptionCode());
  case Builtin::BI__exception_info:
  case Builtin::BI_exception_info:
    return RValue::get(EmitSEHExceptionInfo());
  case Builtin::BI__abnormal_termination:
  case Builtin::BI_abnormal_termination:
    return RValue::get(EmitSEHAbnormalTermination());
  case Builtin::BI_setjmpex: {
    if (getTarget().getTriple().isOSMSVCRT()) {
      llvm::Type *ArgTypes[] = {Int8PtrTy, Int8PtrTy};
      llvm::AttributeSet ReturnsTwiceAttr =
          AttributeSet::get(getLLVMContext(),
                            llvm::AttributeSet::FunctionIndex,
                            llvm::Attribute::ReturnsTwice);
      llvm::Constant *SetJmpEx = CGM.CreateRuntimeFunction(
          llvm::FunctionType::get(IntTy, ArgTypes, /*isVarArg=*/false),
          "_setjmpex", ReturnsTwiceAttr, /*Local=*/true);
      llvm::Value *Buf = Builder.CreateBitOrPointerCast(
          EmitScalarExpr(E->getArg(0)), Int8PtrTy);
      llvm::Value *FrameAddr =
          Builder.CreateCall(CGM.getIntrinsic(Intrinsic::frameaddress),
                             ConstantInt::get(Int32Ty, 0));
      llvm::Value *Args[] = {Buf, FrameAddr};
      llvm::CallSite CS = EmitRuntimeCallOrInvoke(SetJmpEx, Args);
      CS.setAttributes(ReturnsTwiceAttr);
      return RValue::get(CS.getInstruction());
    }
    break;
  }
  case Builtin::BI_setjmp: {
    if (getTarget().getTriple().isOSMSVCRT()) {
      llvm::AttributeSet ReturnsTwiceAttr =
          AttributeSet::get(getLLVMContext(),
                            llvm::AttributeSet::FunctionIndex,
                            llvm::Attribute::ReturnsTwice);
      llvm::Value *Buf = Builder.CreateBitOrPointerCast(
          EmitScalarExpr(E->getArg(0)), Int8PtrTy);
      llvm::CallSite CS;
      if (getTarget().getTriple().getArch() == llvm::Triple::x86) {
        llvm::Type *ArgTypes[] = {Int8PtrTy, IntTy};
        llvm::Constant *SetJmp3 = CGM.CreateRuntimeFunction(
            llvm::FunctionType::get(IntTy, ArgTypes, /*isVarArg=*/true),
            "_setjmp3", ReturnsTwiceAttr, /*Local=*/true);
        llvm::Value *Count = ConstantInt::get(IntTy, 0);
        llvm::Value *Args[] = {Buf, Count};
        CS = EmitRuntimeCallOrInvoke(SetJmp3, Args);
      } else {
        llvm::Type *ArgTypes[] = {Int8PtrTy, Int8PtrTy};
        llvm::Constant *SetJmp = CGM.CreateRuntimeFunction(
            llvm::FunctionType::get(IntTy, ArgTypes, /*isVarArg=*/false),
            "_setjmp", ReturnsTwiceAttr, /*Local=*/true);
        llvm::Value *FrameAddr =
            Builder.CreateCall(CGM.getIntrinsic(Intrinsic::frameaddress),
                               ConstantInt::get(Int32Ty, 0));
        llvm::Value *Args[] = {Buf, FrameAddr};
        CS = EmitRuntimeCallOrInvoke(SetJmp, Args);
      }
      CS.setAttributes(ReturnsTwiceAttr);
      return RValue::get(CS.getInstruction());
    }
    break;
  }

  case Builtin::BI__GetExceptionInfo: {
    if (llvm::GlobalVariable *GV =
            CGM.getCXXABI().getThrowInfo(FD->getParamDecl(0)->getType()))
      return RValue::get(llvm::ConstantExpr::getBitCast(GV, CGM.Int8PtrTy));
    break;
  }
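  // The __builtin_coro_* builtins below map essentially one-to-one onto the
  // llvm.coro.* intrinsics; only coro_size needs extra handling to pick the
  // integer type matching size_t.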
  case Builtin::BI__builtin_coro_size: {
    auto &Context = getContext();
    auto SizeTy = Context.getSizeType();
    auto T = Builder.getIntNTy(Context.getTypeSize(SizeTy));
    Value *F = CGM.getIntrinsic(Intrinsic::coro_size, T);
    return RValue::get(Builder.CreateCall(F));
  }

  case Builtin::BI__builtin_coro_id:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_id);
  case Builtin::BI__builtin_coro_promise:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_promise);
  case Builtin::BI__builtin_coro_resume:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_resume);
  case Builtin::BI__builtin_coro_frame:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_frame);
  case Builtin::BI__builtin_coro_free:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_free);
  case Builtin::BI__builtin_coro_destroy:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_destroy);
  case Builtin::BI__builtin_coro_done:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_done);
  case Builtin::BI__builtin_coro_alloc:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_alloc);
  case Builtin::BI__builtin_coro_begin:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_begin);
  case Builtin::BI__builtin_coro_end:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_end);
  case Builtin::BI__builtin_coro_suspend:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_suspend);
  case Builtin::BI__builtin_coro_param:
    return EmitCoroutineIntrinsic(E, Intrinsic::coro_param);

  // OpenCL v2.0 s6.13.16.2, Built-in pipe read and write functions
  case Builtin::BIread_pipe:
  case Builtin::BIwrite_pipe: {
    Value *Arg0 = EmitScalarExpr(E->getArg(0)),
          *Arg1 = EmitScalarExpr(E->getArg(1));
    CGOpenCLRuntime OpenCLRT(CGM);
    Value *PacketSize = OpenCLRT.getPipeElemSize(E->getArg(0));
    Value *PacketAlign = OpenCLRT.getPipeElemAlign(E->getArg(0));

    // Type of the generic packet parameter.
    unsigned GenericAS =
        getContext().getTargetAddressSpace(LangAS::opencl_generic);
    llvm::Type *I8PTy = llvm::PointerType::get(
        llvm::Type::getInt8Ty(getLLVMContext()), GenericAS);

    // Testing which overloaded version we should generate the call for.
    if (2U == E->getNumArgs()) {
      const char *Name = (BuiltinID == Builtin::BIread_pipe)
                             ? "__read_pipe_2"
                             : "__write_pipe_2";
      // Creating a generic function type to be able to call with any builtin
      // or user defined type.
      llvm::Type *ArgTys[] = {Arg0->getType(), I8PTy, Int32Ty, Int32Ty};
      llvm::FunctionType *FTy = llvm::FunctionType::get(
          Int32Ty, llvm::ArrayRef<llvm::Type *>(ArgTys), false);
      Value *BCast = Builder.CreatePointerCast(Arg1, I8PTy);
      return RValue::get(
          Builder.CreateCall(CGM.CreateRuntimeFunction(FTy, Name),
                             {Arg0, BCast, PacketSize, PacketAlign}));
    } else {
      assert(4 == E->getNumArgs() &&
             "Illegal number of parameters to pipe function");
      const char *Name = (BuiltinID == Builtin::BIread_pipe)
                             ? "__read_pipe_4"
                             : "__write_pipe_4";

      llvm::Type *ArgTys[] = {Arg0->getType(), Arg1->getType(), Int32Ty,
                              I8PTy, Int32Ty, Int32Ty};
      Value *Arg2 = EmitScalarExpr(E->getArg(2)),
            *Arg3 = EmitScalarExpr(E->getArg(3));
      llvm::FunctionType *FTy = llvm::FunctionType::get(
          Int32Ty, llvm::ArrayRef<llvm::Type *>(ArgTys), false);
      Value *BCast = Builder.CreatePointerCast(Arg3, I8PTy);
      // We know the third argument is an integer type, but we may need to
      // cast it to i32.
      if (Arg2->getType() != Int32Ty)
        Arg2 = Builder.CreateZExtOrTrunc(Arg2, Int32Ty);
      return RValue::get(Builder.CreateCall(
          CGM.CreateRuntimeFunction(FTy, Name),
          {Arg0, Arg1, Arg2, BCast, PacketSize, PacketAlign}));
    }
  }
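  // e.g. the two-argument form read_pipe(p, &val) lowers to a call of
  //   i32 @__read_pipe_2(%pipe, i8 addrspace(4)* %val, i32 %size, i32 %align)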
  // OpenCL v2.0 s6.13.16, s9.17.3.5 - Built-in pipe reserve read and write
  // functions
  case Builtin::BIreserve_read_pipe:
  case Builtin::BIreserve_write_pipe:
  case Builtin::BIwork_group_reserve_read_pipe:
  case Builtin::BIwork_group_reserve_write_pipe:
  case Builtin::BIsub_group_reserve_read_pipe:
  case Builtin::BIsub_group_reserve_write_pipe: {
    // Composing the mangled name for the function.
    const char *Name;
    if (BuiltinID == Builtin::BIreserve_read_pipe)
      Name = "__reserve_read_pipe";
    else if (BuiltinID == Builtin::BIreserve_write_pipe)
      Name = "__reserve_write_pipe";
    else if (BuiltinID == Builtin::BIwork_group_reserve_read_pipe)
      Name = "__work_group_reserve_read_pipe";
    else if (BuiltinID == Builtin::BIwork_group_reserve_write_pipe)
      Name = "__work_group_reserve_write_pipe";
    else if (BuiltinID == Builtin::BIsub_group_reserve_read_pipe)
      Name = "__sub_group_reserve_read_pipe";
    else
      Name = "__sub_group_reserve_write_pipe";

    Value *Arg0 = EmitScalarExpr(E->getArg(0)),
          *Arg1 = EmitScalarExpr(E->getArg(1));
    llvm::Type *ReservedIDTy = ConvertType(getContext().OCLReserveIDTy);
    CGOpenCLRuntime OpenCLRT(CGM);
    Value *PacketSize = OpenCLRT.getPipeElemSize(E->getArg(0));
    Value *PacketAlign = OpenCLRT.getPipeElemAlign(E->getArg(0));

    // Building the generic function prototype.
    llvm::Type *ArgTys[] = {Arg0->getType(), Int32Ty, Int32Ty, Int32Ty};
    llvm::FunctionType *FTy = llvm::FunctionType::get(
        ReservedIDTy, llvm::ArrayRef<llvm::Type *>(ArgTys), false);
    // We know the second argument is an integer type, but we may need to cast
    // it to i32.
    if (Arg1->getType() != Int32Ty)
      Arg1 = Builder.CreateZExtOrTrunc(Arg1, Int32Ty);
    return RValue::get(
        Builder.CreateCall(CGM.CreateRuntimeFunction(FTy, Name),
                           {Arg0, Arg1, PacketSize, PacketAlign}));
  }
  // OpenCL v2.0 s6.13.16, s9.17.3.5 - Built-in pipe commit read and write
  // functions
  case Builtin::BIcommit_read_pipe:
  case Builtin::BIcommit_write_pipe:
  case Builtin::BIwork_group_commit_read_pipe:
  case Builtin::BIwork_group_commit_write_pipe:
  case Builtin::BIsub_group_commit_read_pipe:
  case Builtin::BIsub_group_commit_write_pipe: {
    const char *Name;
    if (BuiltinID == Builtin::BIcommit_read_pipe)
      Name = "__commit_read_pipe";
    else if (BuiltinID == Builtin::BIcommit_write_pipe)
      Name = "__commit_write_pipe";
    else if (BuiltinID == Builtin::BIwork_group_commit_read_pipe)
      Name = "__work_group_commit_read_pipe";
    else if (BuiltinID == Builtin::BIwork_group_commit_write_pipe)
      Name = "__work_group_commit_write_pipe";
    else if (BuiltinID == Builtin::BIsub_group_commit_read_pipe)
      Name = "__sub_group_commit_read_pipe";
    else
      Name = "__sub_group_commit_write_pipe";

    Value *Arg0 = EmitScalarExpr(E->getArg(0)),
          *Arg1 = EmitScalarExpr(E->getArg(1));
    CGOpenCLRuntime OpenCLRT(CGM);
    Value *PacketSize = OpenCLRT.getPipeElemSize(E->getArg(0));
    Value *PacketAlign = OpenCLRT.getPipeElemAlign(E->getArg(0));

    // Building the generic function prototype.
    llvm::Type *ArgTys[] = {Arg0->getType(), Arg1->getType(), Int32Ty,
                            Int32Ty};
    llvm::FunctionType *FTy =
        llvm::FunctionType::get(llvm::Type::getVoidTy(getLLVMContext()),
                                llvm::ArrayRef<llvm::Type *>(ArgTys), false);

    return RValue::get(
        Builder.CreateCall(CGM.CreateRuntimeFunction(FTy, Name),
                           {Arg0, Arg1, PacketSize, PacketAlign}));
  }
  // OpenCL v2.0 s6.13.16.4 Built-in pipe query functions
  case Builtin::BIget_pipe_num_packets:
  case Builtin::BIget_pipe_max_packets: {
    const char *Name;
    if (BuiltinID == Builtin::BIget_pipe_num_packets)
      Name = "__get_pipe_num_packets";
    else
      Name = "__get_pipe_max_packets";

    // Building the generic function prototype.
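    // i.e. i32 __get_pipe_num_packets(%pipe, i32 %size, i32 %align), and
    // likewise for __get_pipe_max_packets.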
    Value *Arg0 = EmitScalarExpr(E->getArg(0));
    CGOpenCLRuntime OpenCLRT(CGM);
    Value *PacketSize = OpenCLRT.getPipeElemSize(E->getArg(0));
    Value *PacketAlign = OpenCLRT.getPipeElemAlign(E->getArg(0));
    llvm::Type *ArgTys[] = {Arg0->getType(), Int32Ty, Int32Ty};
    llvm::FunctionType *FTy = llvm::FunctionType::get(
        Int32Ty, llvm::ArrayRef<llvm::Type *>(ArgTys), false);

    return RValue::get(Builder.CreateCall(CGM.CreateRuntimeFunction(FTy, Name),
                                          {Arg0, PacketSize, PacketAlign}));
  }

  // OpenCL v2.0 s6.13.9 - Address space qualifier functions.
  case Builtin::BIto_global:
  case Builtin::BIto_local:
  case Builtin::BIto_private: {
    auto Arg0 = EmitScalarExpr(E->getArg(0));
    auto NewArgT = llvm::PointerType::get(Int8Ty,
        CGM.getContext().getTargetAddressSpace(LangAS::opencl_generic));
    auto NewRetT = llvm::PointerType::get(Int8Ty,
        CGM.getContext().getTargetAddressSpace(
            E->getType()->getPointeeType().getAddressSpace()));
    auto FTy = llvm::FunctionType::get(NewRetT, {NewArgT}, false);
    llvm::Value *NewArg;
    if (Arg0->getType()->getPointerAddressSpace() !=
        NewArgT->getPointerAddressSpace())
      NewArg = Builder.CreateAddrSpaceCast(Arg0, NewArgT);
    else
      NewArg = Builder.CreateBitOrPointerCast(Arg0, NewArgT);
    auto NewName = std::string("__") + E->getDirectCallee()->getName().str();
    auto NewCall =
        Builder.CreateCall(CGM.CreateRuntimeFunction(FTy, NewName), {NewArg});
    return RValue::get(Builder.CreateBitOrPointerCast(NewCall,
        ConvertType(E->getType())));
  }

  // OpenCL v2.0, s6.13.17 - Enqueue kernel function.
  // It contains four different overload formats specified in Table 6.13.17.1.
  case Builtin::BIenqueue_kernel: {
    StringRef Name; // Generated function call name
    unsigned NumArgs = E->getNumArgs();

    llvm::Type *QueueTy = ConvertType(getContext().OCLQueueTy);
    llvm::Type *RangeTy = ConvertType(getContext().OCLNDRangeTy);

    llvm::Value *Queue = EmitScalarExpr(E->getArg(0));
    llvm::Value *Flags = EmitScalarExpr(E->getArg(1));
    llvm::Value *Range = EmitScalarExpr(E->getArg(2));

    if (NumArgs == 4) {
      // The most basic form of the call with parameters:
      // queue_t, kernel_enqueue_flags_t, ndrange_t, block(void)
      Name = "__enqueue_kernel_basic";
      llvm::Type *ArgTys[] = {QueueTy, Int32Ty, RangeTy, Int8PtrTy};
      llvm::FunctionType *FTy = llvm::FunctionType::get(
          Int32Ty, llvm::ArrayRef<llvm::Type *>(ArgTys, 4), false);

      llvm::Value *Block =
          Builder.CreateBitCast(EmitScalarExpr(E->getArg(3)), Int8PtrTy);

      return RValue::get(Builder.CreateCall(
          CGM.CreateRuntimeFunction(FTy, Name),
          {Queue, Flags, Range, Block}));
    }
    assert(NumArgs >= 5 && "Invalid enqueue_kernel signature");

    // Could have events and/or vaargs.
    if (E->getArg(3)->getType()->isBlockPointerType()) {
      // No events passed, but has variadic arguments.
      Name = "__enqueue_kernel_vaargs";
      llvm::Value *Block =
          Builder.CreateBitCast(EmitScalarExpr(E->getArg(3)), Int8PtrTy);
      // Create a vector of the arguments, as well as a constant value to
      // express to the runtime the number of variadic arguments.
      std::vector<llvm::Value *> Args = {Queue, Flags, Range, Block,
                                         ConstantInt::get(IntTy,
                                                          NumArgs - 4)};
      std::vector<llvm::Type *> ArgTys = {QueueTy, IntTy, RangeTy, Int8PtrTy,
                                          IntTy};

      // Each of the following arguments specifies the size of the
      // corresponding argument passed to the enqueued block.
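      // (e.g. a block taking a single local pointer argument contributes one
      // extra size operand here, zero-extended or truncated to size_t below.)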
      for (unsigned I = 4/*Position of the first size arg*/; I < NumArgs; ++I)
        Args.push_back(
            Builder.CreateZExtOrTrunc(EmitScalarExpr(E->getArg(I)), SizeTy));

      llvm::FunctionType *FTy = llvm::FunctionType::get(
          Int32Ty, llvm::ArrayRef<llvm::Type *>(ArgTys), true);
      return RValue::get(
          Builder.CreateCall(CGM.CreateRuntimeFunction(FTy, Name),
                             llvm::ArrayRef<llvm::Value *>(Args)));
    }
    // Any calls now have event arguments passed.
    if (NumArgs >= 7) {
      llvm::Type *EventTy = ConvertType(getContext().OCLClkEventTy);
      llvm::Type *EventPtrTy = EventTy->getPointerTo(
          CGM.getContext().getTargetAddressSpace(LangAS::opencl_generic));

      llvm::Value *NumEvents =
          Builder.CreateZExtOrTrunc(EmitScalarExpr(E->getArg(3)), Int32Ty);
      llvm::Value *EventList =
          E->getArg(4)->getType()->isArrayType()
              ? EmitArrayToPointerDecay(E->getArg(4)).getPointer()
              : EmitScalarExpr(E->getArg(4));
      llvm::Value *ClkEvent = EmitScalarExpr(E->getArg(5));
      // Convert to generic address space.
      EventList = Builder.CreatePointerCast(EventList, EventPtrTy);
      ClkEvent = Builder.CreatePointerCast(ClkEvent, EventPtrTy);
      llvm::Value *Block =
          Builder.CreateBitCast(EmitScalarExpr(E->getArg(6)), Int8PtrTy);

      std::vector<llvm::Type *> ArgTys = {QueueTy, Int32Ty, RangeTy, Int32Ty,
                                          EventPtrTy, EventPtrTy, Int8PtrTy};
      std::vector<llvm::Value *> Args = {Queue, Flags, Range, NumEvents,
                                         EventList, ClkEvent, Block};

      if (NumArgs == 7) {
        // Has events but no variadics.
        Name = "__enqueue_kernel_basic_events";
        llvm::FunctionType *FTy = llvm::FunctionType::get(
            Int32Ty, llvm::ArrayRef<llvm::Type *>(ArgTys), false);
        return RValue::get(
            Builder.CreateCall(CGM.CreateRuntimeFunction(FTy, Name),
                               llvm::ArrayRef<llvm::Value *>(Args)));
      }
      // Has event info and variadics.
      // Pass the number of variadics to the runtime function too.
      Args.push_back(ConstantInt::get(Int32Ty, NumArgs - 7));
      ArgTys.push_back(Int32Ty);
      Name = "__enqueue_kernel_events_vaargs";

      // Each of the following arguments specifies the size of the
      // corresponding argument passed to the enqueued block.
      for (unsigned I = 7/*Position of the first size arg*/; I < NumArgs; ++I)
        Args.push_back(
            Builder.CreateZExtOrTrunc(EmitScalarExpr(E->getArg(I)), SizeTy));

      llvm::FunctionType *FTy = llvm::FunctionType::get(
          Int32Ty, llvm::ArrayRef<llvm::Type *>(ArgTys), true);
      return RValue::get(
          Builder.CreateCall(CGM.CreateRuntimeFunction(FTy, Name),
                             llvm::ArrayRef<llvm::Value *>(Args)));
    }
  }
  // OpenCL v2.0 s6.13.17.6 - Kernel query functions need bitcast of block
  // parameter.
  case Builtin::BIget_kernel_work_group_size: {
    Value *Arg = EmitScalarExpr(E->getArg(0));
    Arg = Builder.CreateBitCast(Arg, Int8PtrTy);
    return RValue::get(Builder.CreateCall(
        CGM.CreateRuntimeFunction(
            llvm::FunctionType::get(IntTy, Int8PtrTy, false),
            "__get_kernel_work_group_size_impl"),
        Arg));
  }
  case Builtin::BIget_kernel_preferred_work_group_size_multiple: {
    Value *Arg = EmitScalarExpr(E->getArg(0));
    Arg = Builder.CreateBitCast(Arg, Int8PtrTy);
    return RValue::get(Builder.CreateCall(
        CGM.CreateRuntimeFunction(
            llvm::FunctionType::get(IntTy, Int8PtrTy, false),
            "__get_kernel_preferred_work_group_multiple_impl"),
        Arg));
  }
  case Builtin::BIprintf:
    if (getLangOpts().CUDA && getLangOpts().CUDAIsDevice)
      return EmitCUDADevicePrintfCallExpr(E, ReturnValue);
    break;
  case Builtin::BI__builtin_canonicalize:
  case Builtin::BI__builtin_canonicalizef:
  case Builtin::BI__builtin_canonicalizel:
    return RValue::get(emitUnaryBuiltin(*this, E, Intrinsic::canonicalize));
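  // __builtin_thread_pointer is only diagnosed here; on targets with TLS it
  // is handled by the target intrinsic via the GCCBuiltin binding below.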
  case Builtin::BI__builtin_thread_pointer: {
    if (!getContext().getTargetInfo().isTLSSupported())
      CGM.ErrorUnsupported(E, "__builtin_thread_pointer");
    // Fall through - it's already mapped to the intrinsic by GCCBuiltin.
    break;
  }
  case Builtin::BI__builtin_os_log_format: {
    assert(E->getNumArgs() >= 2 &&
           "__builtin_os_log_format takes at least 2 arguments");
    analyze_os_log::OSLogBufferLayout Layout;
    analyze_os_log::computeOSLogBufferLayout(CGM.getContext(), E, Layout);
    Address BufAddr = EmitPointerWithAlignment(E->getArg(0));
    // Ignore argument 1, the format string. It is not currently used.
    CharUnits Offset;
    Builder.CreateStore(
        Builder.getInt8(Layout.getSummaryByte()),
        Builder.CreateConstByteGEP(BufAddr, Offset++, "summary"));
    Builder.CreateStore(
        Builder.getInt8(Layout.getNumArgsByte()),
        Builder.CreateConstByteGEP(BufAddr, Offset++, "numArgs"));

    llvm::SmallVector<llvm::Value *, 4> RetainableOperands;
    for (const auto &Item : Layout.Items) {
      Builder.CreateStore(
          Builder.getInt8(Item.getDescriptorByte()),
          Builder.CreateConstByteGEP(BufAddr, Offset++, "argDescriptor"));
      Builder.CreateStore(
          Builder.getInt8(Item.getSizeByte()),
          Builder.CreateConstByteGEP(BufAddr, Offset++, "argSize"));
      Address Addr = Builder.CreateConstByteGEP(BufAddr, Offset);
      if (const Expr *TheExpr = Item.getExpr()) {
        Addr = Builder.CreateElementBitCast(
            Addr, ConvertTypeForMem(TheExpr->getType()));
        // Check if this is a retainable type.
        if (TheExpr->getType()->isObjCRetainableType()) {
          assert(getEvaluationKind(TheExpr->getType()) == TEK_Scalar &&
                 "Only scalar can be a ObjC retainable type");
          llvm::Value *SV = EmitScalarExpr(TheExpr, /*Ignore*/ false);
          RValue RV = RValue::get(SV);
          LValue LV = MakeAddrLValue(Addr, TheExpr->getType());
          EmitStoreThroughLValue(RV, LV);
          // Check if the object is constant, if not, save it in
          // RetainableOperands.
          if (!isa<Constant>(SV))
            RetainableOperands.push_back(SV);
        } else {
          EmitAnyExprToMem(TheExpr, Addr, Qualifiers(), /*isInit*/ true);
        }
      } else {
        Addr = Builder.CreateElementBitCast(Addr, Int32Ty);
        Builder.CreateStore(
            Builder.getInt32(Item.getConstValue().getQuantity()), Addr);
      }
      Offset += Item.size();
    }

    // Push a clang.arc.use cleanup for each object in RetainableOperands. The
    // cleanup will cause the use to appear after the final log call, keeping
    // the object valid while it's held in the log buffer. Note that if there's
    // a release cleanup on the object, it will already be active; since
    // cleanups are emitted in reverse order, the use will occur before the
    // object is released.
    if (!RetainableOperands.empty() && getLangOpts().ObjCAutoRefCount &&
        CGM.getCodeGenOpts().OptimizationLevel != 0)
      for (llvm::Value *object : RetainableOperands)
        pushFullExprCleanup<CallObjCArcUse>(getARCCleanupKind(), object);

    return RValue::get(BufAddr.getPointer());
  }

  case Builtin::BI__builtin_os_log_format_buffer_size: {
    analyze_os_log::OSLogBufferLayout Layout;
    analyze_os_log::computeOSLogBufferLayout(CGM.getContext(), E, Layout);
    return RValue::get(ConstantInt::get(ConvertType(E->getType()),
                                        Layout.size().getQuantity()));
  }
  }
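  // A 'break' in the switch above (e.g. sqrt or pow when the FP constraints
  // aren't met) drops through to the library-call paths below.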
  // If this is an alias for a lib function (e.g. __builtin_sin), emit
  // the call using the normal call path, but using the unmangled
  // version of the function name.
  if (getContext().BuiltinInfo.isLibFunction(BuiltinID))
    return emitLibraryCall(*this, FD, E,
                           CGM.getBuiltinLibFunction(FD, BuiltinID));

  // If this is a predefined lib function (e.g. malloc), emit the call
  // using exactly the normal call path.
  if (getContext().BuiltinInfo.isPredefinedLibFunction(BuiltinID))
    return emitLibraryCall(*this, FD, E,
                           cast<llvm::Constant>(
                               EmitScalarExpr(E->getCallee())));

  // Check that a call to a target specific builtin has the correct target
  // features.
  // This is down here to avoid non-target specific builtins, however, if
  // generic builtins start to require generic target features then we
  // can move this up to the beginning of the function.
  checkTargetFeatures(E, FD);

  // See if we have a target specific intrinsic.
  const char *Name = getContext().BuiltinInfo.getName(BuiltinID);
  Intrinsic::ID IntrinsicID = Intrinsic::not_intrinsic;
  StringRef Prefix =
      llvm::Triple::getArchTypePrefix(getTarget().getTriple().getArch());
  if (!Prefix.empty()) {
    IntrinsicID = Intrinsic::getIntrinsicForGCCBuiltin(Prefix.data(), Name);
    // NOTE: we don't need to perform a compatibility flag check here since
    // the intrinsics are declared in Builtins*.def via LANGBUILTIN, which
    // filters the MS builtins via ALL_MS_LANGUAGES so they are rejected
    // earlier.
    if (IntrinsicID == Intrinsic::not_intrinsic)
      IntrinsicID = Intrinsic::getIntrinsicForMSBuiltin(Prefix.data(), Name);
  }

  if (IntrinsicID != Intrinsic::not_intrinsic) {
    SmallVector<Value *, 16> Args;

    // Find out if any arguments are required to be integer constant
    // expressions.
    unsigned ICEArguments = 0;
    ASTContext::GetBuiltinTypeError Error;
    getContext().GetBuiltinType(BuiltinID, Error, &ICEArguments);
    assert(Error == ASTContext::GE_None && "Should not codegen an error");

    Function *F = CGM.getIntrinsic(IntrinsicID);
    llvm::FunctionType *FTy = F->getFunctionType();

    for (unsigned i = 0, e = E->getNumArgs(); i != e; ++i) {
      Value *ArgValue;
      // If this is a normal argument, just emit it as a scalar.
      if ((ICEArguments & (1 << i)) == 0) {
        ArgValue = EmitScalarExpr(E->getArg(i));
      } else {
        // If this is required to be a constant, constant fold it so that we
        // know that the generated intrinsic gets a ConstantInt.
        llvm::APSInt Result;
        bool IsConst =
            E->getArg(i)->isIntegerConstantExpr(Result, getContext());
        assert(IsConst && "Constant arg isn't actually constant?");
        (void)IsConst;
        ArgValue = llvm::ConstantInt::get(getLLVMContext(), Result);
      }

      // If the intrinsic arg type is different from the builtin arg type
      // we need to do a bit cast.
      llvm::Type *PTy = FTy->getParamType(i);
      if (PTy != ArgValue->getType()) {
        assert(PTy->canLosslesslyBitCastTo(FTy->getParamType(i)) &&
               "Must be able to losslessly bit cast to param");
        ArgValue = Builder.CreateBitCast(ArgValue, PTy);
      }

      Args.push_back(ArgValue);
    }

    Value *V = Builder.CreateCall(F, Args);
    QualType BuiltinRetType = E->getType();

    llvm::Type *RetTy = VoidTy;
    if (!BuiltinRetType->isVoidType())
      RetTy = ConvertType(BuiltinRetType);

    if (RetTy != V->getType()) {
      assert(V->getType()->canLosslesslyBitCastTo(RetTy) &&
             "Must be able to losslessly bit cast result type");
      V = Builder.CreateBitCast(V, RetTy);
    }

    return RValue::get(V);
  }

  // See if we have a target specific builtin that needs to be lowered.
  if (Value *V = EmitTargetBuiltinExpr(BuiltinID, E))
    return RValue::get(V);

  ErrorUnsupported(E, "builtin function");

  // Unknown builtin, for now just dump it out and return undef.
  return GetUndefRValue(E->getType());
}

static Value *EmitTargetArchBuiltinExpr(CodeGenFunction *CGF,
                                        unsigned BuiltinID, const CallExpr *E,
                                        llvm::Triple::ArchType Arch) {
  switch (Arch) {
  case llvm::Triple::arm:
  case llvm::Triple::armeb:
  case llvm::Triple::thumb:
  case llvm::Triple::thumbeb:
    return CGF->EmitARMBuiltinExpr(BuiltinID, E);
  case llvm::Triple::aarch64:
  case llvm::Triple::aarch64_be:
    return CGF->EmitAArch64BuiltinExpr(BuiltinID, E);
  case llvm::Triple::x86:
  case llvm::Triple::x86_64:
    return CGF->EmitX86BuiltinExpr(BuiltinID, E);
  case llvm::Triple::ppc:
  case llvm::Triple::ppc64:
  case llvm::Triple::ppc64le:
    return CGF->EmitPPCBuiltinExpr(BuiltinID, E);
  case llvm::Triple::r600:
  case llvm::Triple::amdgcn:
    return CGF->EmitAMDGPUBuiltinExpr(BuiltinID, E);
  case llvm::Triple::systemz:
    return CGF->EmitSystemZBuiltinExpr(BuiltinID, E);
  case llvm::Triple::nvptx:
  case llvm::Triple::nvptx64:
    return CGF->EmitNVPTXBuiltinExpr(BuiltinID, E);
  case llvm::Triple::wasm32:
  case llvm::Triple::wasm64:
    return CGF->EmitWebAssemblyBuiltinExpr(BuiltinID, E);
  default:
    return nullptr;
  }
}

Value *CodeGenFunction::EmitTargetBuiltinExpr(unsigned BuiltinID,
                                              const CallExpr *E) {
  if (getContext().BuiltinInfo.isAuxBuiltinID(BuiltinID)) {
    assert(getContext().getAuxTargetInfo() && "Missing aux target info");
    return EmitTargetArchBuiltinExpr(
        this, getContext().BuiltinInfo.getAuxBuiltinID(BuiltinID), E,
        getContext().getAuxTargetInfo()->getTriple().getArch());
  }

  return EmitTargetArchBuiltinExpr(this, BuiltinID, E,
                                   getTarget().getTriple().getArch());
}

static llvm::VectorType *GetNeonType(CodeGenFunction *CGF,
                                     NeonTypeFlags TypeFlags,
                                     bool V1Ty=false) {
  int IsQuad = TypeFlags.isQuad();
  switch (TypeFlags.getEltType()) {
  case NeonTypeFlags::Int8:
  case NeonTypeFlags::Poly8:
    return llvm::VectorType::get(CGF->Int8Ty, V1Ty ? 1 : (8 << IsQuad));
  case NeonTypeFlags::Int16:
  case NeonTypeFlags::Poly16:
  case NeonTypeFlags::Float16:
    return llvm::VectorType::get(CGF->Int16Ty, V1Ty ? 1 : (4 << IsQuad));
  case NeonTypeFlags::Int32:
    return llvm::VectorType::get(CGF->Int32Ty, V1Ty ? 1 : (2 << IsQuad));
  case NeonTypeFlags::Int64:
  case NeonTypeFlags::Poly64:
    return llvm::VectorType::get(CGF->Int64Ty, V1Ty ? 1 : (1 << IsQuad));
  case NeonTypeFlags::Poly128:
    // FIXME: i128 and f128 don't yet get full support in Clang and LLVM;
    // a lot of the i128 and f128 API is missing, so for now we represent
    // poly128 as v16i8 and rely on pattern matching.
    return llvm::VectorType::get(CGF->Int8Ty, 16);
  case NeonTypeFlags::Float32:
    return llvm::VectorType::get(CGF->FloatTy, V1Ty ? 1 : (2 << IsQuad));
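  // (The quad flag doubles the element count: e.g. Float32 yields
  // <2 x float> for a 64-bit vector and <4 x float> for a 128-bit one.)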
  case NeonTypeFlags::Float64:
    return llvm::VectorType::get(CGF->DoubleTy, V1Ty ? 1 : (1 << IsQuad));
  }
  llvm_unreachable("Unknown vector element type!");
}

static llvm::VectorType *GetFloatNeonType(CodeGenFunction *CGF,
                                          NeonTypeFlags IntTypeFlags) {
  int IsQuad = IntTypeFlags.isQuad();
  switch (IntTypeFlags.getEltType()) {
  case NeonTypeFlags::Int32:
    return llvm::VectorType::get(CGF->FloatTy, (2 << IsQuad));
  case NeonTypeFlags::Int64:
    return llvm::VectorType::get(CGF->DoubleTy, (1 << IsQuad));
  default:
    llvm_unreachable("Type can't be converted to floating-point!");
  }
}

Value *CodeGenFunction::EmitNeonSplat(Value *V, Constant *C) {
  unsigned nElts = V->getType()->getVectorNumElements();
  Value* SV = llvm::ConstantVector::getSplat(nElts, C);
  return Builder.CreateShuffleVector(V, V, SV, "lane");
}

Value *CodeGenFunction::EmitNeonCall(Function *F,
                                     SmallVectorImpl<Value *> &Ops,
                                     const char *name,
                                     unsigned shift, bool rightshift) {
  unsigned j = 0;
  for (Function::const_arg_iterator ai = F->arg_begin(), ae = F->arg_end();
       ai != ae; ++ai, ++j)
    if (shift > 0 && shift == j)
      Ops[j] = EmitNeonShiftVector(Ops[j], ai->getType(), rightshift);
    else
      Ops[j] = Builder.CreateBitCast(Ops[j], ai->getType(), name);

  return Builder.CreateCall(F, Ops, name);
}

Value *CodeGenFunction::EmitNeonShiftVector(Value *V, llvm::Type *Ty,
                                            bool neg) {
  int SV = cast<ConstantInt>(V)->getSExtValue();
  return ConstantInt::get(Ty, neg ? -SV : SV);
}

// \brief Right-shift a vector by a constant.
Value *CodeGenFunction::EmitNeonRShiftImm(Value *Vec, Value *Shift,
                                          llvm::Type *Ty, bool usgn,
                                          const char *name) {
  llvm::VectorType *VTy = cast<llvm::VectorType>(Ty);

  int ShiftAmt = cast<ConstantInt>(Shift)->getSExtValue();
  int EltSize = VTy->getScalarSizeInBits();

  Vec = Builder.CreateBitCast(Vec, Ty);

  // lshr/ashr are undefined when the shift amount is equal to the vector
  // element size.
  if (ShiftAmt == EltSize) {
    if (usgn) {
      // Right-shifting an unsigned value by its size yields 0.
      return llvm::ConstantAggregateZero::get(VTy);
    } else {
      // Right-shifting a signed value by its size is equivalent
      // to a shift of size-1.
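      // (For i32, an ashr by 31 already produces 0 or -1, which is the
      // mathematical result of any larger shift.)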
      --ShiftAmt;
      Shift = ConstantInt::get(VTy->getElementType(), ShiftAmt);
    }
  }

  Shift = EmitNeonShiftVector(Shift, Ty, false);
  if (usgn)
    return Builder.CreateLShr(Vec, Shift, name);
  else
    return Builder.CreateAShr(Vec, Shift, name);
}

enum {
  AddRetType = (1 << 0),
  Add1ArgType = (1 << 1),
  Add2ArgTypes = (1 << 2),

  VectorizeRetType = (1 << 3),
  VectorizeArgTypes = (1 << 4),

  InventFloatType = (1 << 5),
  UnsignedAlts = (1 << 6),

  Use64BitVectors = (1 << 7),
  Use128BitVectors = (1 << 8),

  Vectorize1ArgType = Add1ArgType | VectorizeArgTypes,
  VectorRet = AddRetType | VectorizeRetType,
  VectorRetGetArgs01 =
      AddRetType | Add2ArgTypes | VectorizeRetType | VectorizeArgTypes,
  FpCmpzModifiers =
      AddRetType | VectorizeRetType | Add1ArgType | InventFloatType
};

namespace {
struct NeonIntrinsicInfo {
  const char *NameHint;
  unsigned BuiltinID;
  unsigned LLVMIntrinsic;
  unsigned AltLLVMIntrinsic;
  unsigned TypeModifier;

  bool operator<(unsigned RHSBuiltinID) const {
    return BuiltinID < RHSBuiltinID;
  }
  bool operator<(const NeonIntrinsicInfo &TE) const {
    return BuiltinID < TE.BuiltinID;
  }
};
} // end anonymous namespace

#define NEONMAP0(NameBase) \
  { #NameBase, NEON::BI__builtin_neon_ ## NameBase, 0, 0, 0 }

#define NEONMAP1(NameBase, LLVMIntrinsic, TypeModifier) \
  { #NameBase, NEON:: BI__builtin_neon_ ## NameBase, \
    Intrinsic::LLVMIntrinsic, 0, TypeModifier }

#define NEONMAP2(NameBase, LLVMIntrinsic, AltLLVMIntrinsic, TypeModifier) \
  { #NameBase, NEON:: BI__builtin_neon_ ## NameBase, \
    Intrinsic::LLVMIntrinsic, Intrinsic::AltLLVMIntrinsic, \
    TypeModifier }

static const NeonIntrinsicInfo ARMSIMDIntrinsicMap[] = {
  NEONMAP2(vabd_v, arm_neon_vabdu, arm_neon_vabds, Add1ArgType | UnsignedAlts),
  NEONMAP2(vabdq_v, arm_neon_vabdu, arm_neon_vabds, Add1ArgType | UnsignedAlts),
  NEONMAP1(vabs_v, arm_neon_vabs, 0),
  NEONMAP1(vabsq_v, arm_neon_vabs, 0),
  NEONMAP0(vaddhn_v),
  NEONMAP1(vaesdq_v, arm_neon_aesd, 0),
  NEONMAP1(vaeseq_v, arm_neon_aese, 0),
  NEONMAP1(vaesimcq_v, arm_neon_aesimc, 0),
  NEONMAP1(vaesmcq_v, arm_neon_aesmc, 0),
  NEONMAP1(vbsl_v, arm_neon_vbsl, AddRetType),
  NEONMAP1(vbslq_v, arm_neon_vbsl, AddRetType),
  NEONMAP1(vcage_v, arm_neon_vacge, 0),
  NEONMAP1(vcageq_v, arm_neon_vacge, 0),
  NEONMAP1(vcagt_v, arm_neon_vacgt, 0),
  NEONMAP1(vcagtq_v, arm_neon_vacgt, 0),
  NEONMAP1(vcale_v, arm_neon_vacge, 0),
  NEONMAP1(vcaleq_v, arm_neon_vacge, 0),
  NEONMAP1(vcalt_v, arm_neon_vacgt, 0),
  NEONMAP1(vcaltq_v, arm_neon_vacgt, 0),
  NEONMAP1(vcls_v, arm_neon_vcls, Add1ArgType),
  NEONMAP1(vclsq_v, arm_neon_vcls, Add1ArgType),
  NEONMAP1(vclz_v, ctlz, Add1ArgType),
  NEONMAP1(vclzq_v, ctlz, Add1ArgType),
  NEONMAP1(vcnt_v, ctpop, Add1ArgType),
  NEONMAP1(vcntq_v, ctpop, Add1ArgType),
  NEONMAP1(vcvt_f16_f32, arm_neon_vcvtfp2hf, 0),
  NEONMAP1(vcvt_f32_f16, arm_neon_vcvthf2fp, 0),
  NEONMAP0(vcvt_f32_v),
  NEONMAP2(vcvt_n_f32_v, arm_neon_vcvtfxu2fp, arm_neon_vcvtfxs2fp, 0),
  NEONMAP1(vcvt_n_s32_v, arm_neon_vcvtfp2fxs, 0),
  NEONMAP1(vcvt_n_s64_v, arm_neon_vcvtfp2fxs, 0),
  NEONMAP1(vcvt_n_u32_v, arm_neon_vcvtfp2fxu, 0),
  NEONMAP1(vcvt_n_u64_v, arm_neon_vcvtfp2fxu, 0),
  NEONMAP0(vcvt_s32_v),
  NEONMAP0(vcvt_s64_v),
  NEONMAP0(vcvt_u32_v),
  NEONMAP0(vcvt_u64_v),
  NEONMAP1(vcvta_s32_v, arm_neon_vcvtas, 0),
  NEONMAP1(vcvta_s64_v, arm_neon_vcvtas, 0),
  NEONMAP1(vcvta_u32_v, arm_neon_vcvtau, 0),
  NEONMAP1(vcvta_u64_v, arm_neon_vcvtau, 0),
  NEONMAP1(vcvtaq_s32_v, arm_neon_vcvtas, 0),
  NEONMAP1(vcvtaq_s64_v, arm_neon_vcvtas, 0),
  NEONMAP1(vcvtaq_u32_v, arm_neon_vcvtau, 0),
  NEONMAP1(vcvtaq_u64_v, arm_neon_vcvtau, 0),
  NEONMAP1(vcvtm_s32_v, arm_neon_vcvtms, 0),
  NEONMAP1(vcvtm_s64_v, arm_neon_vcvtms, 0),
  NEONMAP1(vcvtm_u32_v, arm_neon_vcvtmu, 0),
  NEONMAP1(vcvtm_u64_v, arm_neon_vcvtmu, 0),
  NEONMAP1(vcvtmq_s32_v, arm_neon_vcvtms, 0),
  NEONMAP1(vcvtmq_s64_v, arm_neon_vcvtms, 0),
  NEONMAP1(vcvtmq_u32_v, arm_neon_vcvtmu, 0),
  NEONMAP1(vcvtmq_u64_v, arm_neon_vcvtmu, 0),
  NEONMAP1(vcvtn_s32_v, arm_neon_vcvtns, 0),
  NEONMAP1(vcvtn_s64_v, arm_neon_vcvtns, 0),
  NEONMAP1(vcvtn_u32_v, arm_neon_vcvtnu, 0),
  NEONMAP1(vcvtn_u64_v, arm_neon_vcvtnu, 0),
  NEONMAP1(vcvtnq_s32_v, arm_neon_vcvtns, 0),
  NEONMAP1(vcvtnq_s64_v, arm_neon_vcvtns, 0),
  NEONMAP1(vcvtnq_u32_v, arm_neon_vcvtnu, 0),
  NEONMAP1(vcvtnq_u64_v, arm_neon_vcvtnu, 0),
  NEONMAP1(vcvtp_s32_v, arm_neon_vcvtps, 0),
  NEONMAP1(vcvtp_s64_v, arm_neon_vcvtps, 0),
  NEONMAP1(vcvtp_u32_v, arm_neon_vcvtpu, 0),
  NEONMAP1(vcvtp_u64_v, arm_neon_vcvtpu, 0),
  NEONMAP1(vcvtpq_s32_v, arm_neon_vcvtps, 0),
  NEONMAP1(vcvtpq_s64_v, arm_neon_vcvtps, 0),
  NEONMAP1(vcvtpq_u32_v, arm_neon_vcvtpu, 0),
  NEONMAP1(vcvtpq_u64_v, arm_neon_vcvtpu, 0),
  NEONMAP0(vcvtq_f32_v),
  NEONMAP2(vcvtq_n_f32_v, arm_neon_vcvtfxu2fp, arm_neon_vcvtfxs2fp, 0),
  NEONMAP1(vcvtq_n_s32_v, arm_neon_vcvtfp2fxs, 0),
  NEONMAP1(vcvtq_n_s64_v, arm_neon_vcvtfp2fxs, 0),
  NEONMAP1(vcvtq_n_u32_v, arm_neon_vcvtfp2fxu, 0),
  NEONMAP1(vcvtq_n_u64_v, arm_neon_vcvtfp2fxu, 0),
  NEONMAP0(vcvtq_s32_v),
  NEONMAP0(vcvtq_s64_v),
  NEONMAP0(vcvtq_u32_v),
  NEONMAP0(vcvtq_u64_v),
  NEONMAP0(vext_v),
  NEONMAP0(vextq_v),
  NEONMAP0(vfma_v),
  NEONMAP0(vfmaq_v),
  NEONMAP2(vhadd_v, arm_neon_vhaddu, arm_neon_vhadds, Add1ArgType | UnsignedAlts),
  NEONMAP2(vhaddq_v, arm_neon_vhaddu, arm_neon_vhadds, Add1ArgType | UnsignedAlts),
  NEONMAP2(vhsub_v, arm_neon_vhsubu, arm_neon_vhsubs, Add1ArgType | UnsignedAlts),
  NEONMAP2(vhsubq_v, arm_neon_vhsubu, arm_neon_vhsubs, Add1ArgType | UnsignedAlts),
  NEONMAP0(vld1_dup_v),
  NEONMAP1(vld1_v, arm_neon_vld1, 0),
  NEONMAP0(vld1q_dup_v),
  NEONMAP1(vld1q_v, arm_neon_vld1, 0),
  NEONMAP1(vld2_lane_v, arm_neon_vld2lane, 0),
  NEONMAP1(vld2_v, arm_neon_vld2, 0),
  NEONMAP1(vld2q_lane_v, arm_neon_vld2lane, 0),
  NEONMAP1(vld2q_v, arm_neon_vld2, 0),
  NEONMAP1(vld3_lane_v, arm_neon_vld3lane, 0),
  NEONMAP1(vld3_v, arm_neon_vld3, 0),
  NEONMAP1(vld3q_lane_v, arm_neon_vld3lane, 0),
  NEONMAP1(vld3q_v, arm_neon_vld3, 0),
  NEONMAP1(vld4_lane_v, arm_neon_vld4lane, 0),
  NEONMAP1(vld4_v, arm_neon_vld4, 0),
  NEONMAP1(vld4q_lane_v, arm_neon_vld4lane, 0),
  NEONMAP1(vld4q_v, arm_neon_vld4, 0),
  NEONMAP2(vmax_v, arm_neon_vmaxu, arm_neon_vmaxs, Add1ArgType | UnsignedAlts),
  NEONMAP1(vmaxnm_v, arm_neon_vmaxnm, Add1ArgType),
  NEONMAP1(vmaxnmq_v, arm_neon_vmaxnm, Add1ArgType),
  NEONMAP2(vmaxq_v, arm_neon_vmaxu, arm_neon_vmaxs, Add1ArgType | UnsignedAlts),
  NEONMAP2(vmin_v, arm_neon_vminu, arm_neon_vmins, Add1ArgType | UnsignedAlts),
  NEONMAP1(vminnm_v, arm_neon_vminnm, Add1ArgType),
  NEONMAP1(vminnmq_v, arm_neon_vminnm, Add1ArgType),
  NEONMAP2(vminq_v, arm_neon_vminu, arm_neon_vmins, Add1ArgType | UnsignedAlts),
  NEONMAP0(vmovl_v),
  NEONMAP0(vmovn_v),
  NEONMAP1(vmul_v, arm_neon_vmulp, Add1ArgType),
  NEONMAP0(vmull_v),
  NEONMAP1(vmulq_v, arm_neon_vmulp, Add1ArgType),
  NEONMAP2(vpadal_v, arm_neon_vpadalu, arm_neon_vpadals, UnsignedAlts),
  NEONMAP2(vpadalq_v, arm_neon_vpadalu, arm_neon_vpadals, UnsignedAlts),
  NEONMAP1(vpadd_v, arm_neon_vpadd, Add1ArgType),
  NEONMAP2(vpaddl_v, arm_neon_vpaddlu, arm_neon_vpaddls, UnsignedAlts),
  NEONMAP2(vpaddlq_v, arm_neon_vpaddlu, arm_neon_vpaddls, UnsignedAlts),
  NEONMAP1(vpaddq_v, arm_neon_vpadd, Add1ArgType),
  NEONMAP2(vpmax_v, arm_neon_vpmaxu, arm_neon_vpmaxs, Add1ArgType | UnsignedAlts),
  NEONMAP2(vpmin_v, arm_neon_vpminu, arm_neon_vpmins, Add1ArgType | UnsignedAlts),
  NEONMAP1(vqabs_v, arm_neon_vqabs, Add1ArgType),
  NEONMAP1(vqabsq_v, arm_neon_vqabs, Add1ArgType),
  NEONMAP2(vqadd_v, arm_neon_vqaddu, arm_neon_vqadds, Add1ArgType | UnsignedAlts),
  NEONMAP2(vqaddq_v, arm_neon_vqaddu, arm_neon_vqadds, Add1ArgType | UnsignedAlts),
  NEONMAP2(vqdmlal_v, arm_neon_vqdmull, arm_neon_vqadds, 0),
  NEONMAP2(vqdmlsl_v, arm_neon_vqdmull, arm_neon_vqsubs, 0),
  NEONMAP1(vqdmulh_v, arm_neon_vqdmulh, Add1ArgType),
  NEONMAP1(vqdmulhq_v, arm_neon_vqdmulh, Add1ArgType),
  NEONMAP1(vqdmull_v, arm_neon_vqdmull, Add1ArgType),
  NEONMAP2(vqmovn_v, arm_neon_vqmovnu, arm_neon_vqmovns, Add1ArgType | UnsignedAlts),
  NEONMAP1(vqmovun_v, arm_neon_vqmovnsu, Add1ArgType),
  NEONMAP1(vqneg_v, arm_neon_vqneg, Add1ArgType),
  NEONMAP1(vqnegq_v, arm_neon_vqneg, Add1ArgType),
  NEONMAP1(vqrdmulh_v, arm_neon_vqrdmulh, Add1ArgType),
  NEONMAP1(vqrdmulhq_v, arm_neon_vqrdmulh, Add1ArgType),
  NEONMAP2(vqrshl_v, arm_neon_vqrshiftu, arm_neon_vqrshifts, Add1ArgType | UnsignedAlts),
  NEONMAP2(vqrshlq_v, arm_neon_vqrshiftu, arm_neon_vqrshifts, Add1ArgType | UnsignedAlts),
  NEONMAP2(vqshl_n_v, arm_neon_vqshiftu, arm_neon_vqshifts, UnsignedAlts),
  NEONMAP2(vqshl_v, arm_neon_vqshiftu, arm_neon_vqshifts, Add1ArgType | UnsignedAlts),
  NEONMAP2(vqshlq_n_v, arm_neon_vqshiftu, arm_neon_vqshifts, UnsignedAlts),
  NEONMAP2(vqshlq_v, arm_neon_vqshiftu, arm_neon_vqshifts, Add1ArgType | UnsignedAlts),
  NEONMAP1(vqshlu_n_v, arm_neon_vqshiftsu, 0),
  NEONMAP1(vqshluq_n_v, arm_neon_vqshiftsu, 0),
  NEONMAP2(vqsub_v, arm_neon_vqsubu, arm_neon_vqsubs, Add1ArgType | UnsignedAlts),
  NEONMAP2(vqsubq_v, arm_neon_vqsubu, arm_neon_vqsubs, Add1ArgType | UnsignedAlts),
  NEONMAP1(vraddhn_v, arm_neon_vraddhn, Add1ArgType),
  NEONMAP2(vrecpe_v, arm_neon_vrecpe, arm_neon_vrecpe, 0),
  NEONMAP2(vrecpeq_v, arm_neon_vrecpe, arm_neon_vrecpe, 0),
  NEONMAP1(vrecps_v, arm_neon_vrecps, Add1ArgType),
  NEONMAP1(vrecpsq_v, arm_neon_vrecps, Add1ArgType),
  NEONMAP2(vrhadd_v, arm_neon_vrhaddu, arm_neon_vrhadds, Add1ArgType | UnsignedAlts),
  NEONMAP2(vrhaddq_v, arm_neon_vrhaddu, arm_neon_vrhadds, Add1ArgType | UnsignedAlts),
  NEONMAP1(vrnd_v, arm_neon_vrintz, Add1ArgType),
  NEONMAP1(vrnda_v, arm_neon_vrinta, Add1ArgType),
  NEONMAP1(vrndaq_v, arm_neon_vrinta, Add1ArgType),
  NEONMAP1(vrndm_v, arm_neon_vrintm, Add1ArgType),
  NEONMAP1(vrndmq_v, arm_neon_vrintm, Add1ArgType),
  NEONMAP1(vrndn_v, arm_neon_vrintn, Add1ArgType),
  NEONMAP1(vrndnq_v, arm_neon_vrintn, Add1ArgType),
  NEONMAP1(vrndp_v, arm_neon_vrintp, Add1ArgType),
  NEONMAP1(vrndpq_v, arm_neon_vrintp, Add1ArgType),
  NEONMAP1(vrndq_v, arm_neon_vrintz, Add1ArgType),
  NEONMAP1(vrndx_v, arm_neon_vrintx, Add1ArgType),
  NEONMAP1(vrndxq_v, arm_neon_vrintx, Add1ArgType),
  NEONMAP2(vrshl_v, arm_neon_vrshiftu, arm_neon_vrshifts, Add1ArgType | UnsignedAlts),
  NEONMAP2(vrshlq_v, arm_neon_vrshiftu, arm_neon_vrshifts, Add1ArgType | UnsignedAlts),
  NEONMAP2(vrshr_n_v, arm_neon_vrshiftu, arm_neon_vrshifts, UnsignedAlts),
  NEONMAP2(vrshrq_n_v, arm_neon_vrshiftu, arm_neon_vrshifts, UnsignedAlts),
  NEONMAP2(vrsqrte_v, arm_neon_vrsqrte, arm_neon_vrsqrte, 0),
  NEONMAP2(vrsqrteq_v, arm_neon_vrsqrte, arm_neon_vrsqrte, 0),
  NEONMAP1(vrsqrts_v, arm_neon_vrsqrts, Add1ArgType),
  NEONMAP1(vrsqrtsq_v, arm_neon_vrsqrts, Add1ArgType),
  NEONMAP1(vrsubhn_v, arm_neon_vrsubhn, Add1ArgType),
  NEONMAP1(vsha1su0q_v, arm_neon_sha1su0, 0),
  NEONMAP1(vsha1su1q_v, arm_neon_sha1su1, 0),
  NEONMAP1(vsha256h2q_v, arm_neon_sha256h2, 0),
  NEONMAP1(vsha256hq_v, arm_neon_sha256h, 0),
  NEONMAP1(vsha256su0q_v, arm_neon_sha256su0, 0),
  NEONMAP1(vsha256su1q_v, arm_neon_sha256su1, 0),
NEONMAP0(vshl_n_v), NEONMAP2(vshl_v, arm_neon_vshiftu, arm_neon_vshifts, Add1ArgType | UnsignedAlts), NEONMAP0(vshll_n_v), NEONMAP0(vshlq_n_v), NEONMAP2(vshlq_v, arm_neon_vshiftu, arm_neon_vshifts, Add1ArgType | UnsignedAlts), NEONMAP0(vshr_n_v), NEONMAP0(vshrn_n_v), NEONMAP0(vshrq_n_v), NEONMAP1(vst1_v, arm_neon_vst1, 0), NEONMAP1(vst1q_v, arm_neon_vst1, 0), NEONMAP1(vst2_lane_v, arm_neon_vst2lane, 0), NEONMAP1(vst2_v, arm_neon_vst2, 0), NEONMAP1(vst2q_lane_v, arm_neon_vst2lane, 0), NEONMAP1(vst2q_v, arm_neon_vst2, 0), NEONMAP1(vst3_lane_v, arm_neon_vst3lane, 0), NEONMAP1(vst3_v, arm_neon_vst3, 0), NEONMAP1(vst3q_lane_v, arm_neon_vst3lane, 0), NEONMAP1(vst3q_v, arm_neon_vst3, 0), NEONMAP1(vst4_lane_v, arm_neon_vst4lane, 0), NEONMAP1(vst4_v, arm_neon_vst4, 0), NEONMAP1(vst4q_lane_v, arm_neon_vst4lane, 0), NEONMAP1(vst4q_v, arm_neon_vst4, 0), NEONMAP0(vsubhn_v), NEONMAP0(vtrn_v), NEONMAP0(vtrnq_v), NEONMAP0(vtst_v), NEONMAP0(vtstq_v), NEONMAP0(vuzp_v), NEONMAP0(vuzpq_v), NEONMAP0(vzip_v), NEONMAP0(vzipq_v) }; static const NeonIntrinsicInfo AArch64SIMDIntrinsicMap[] = { NEONMAP1(vabs_v, aarch64_neon_abs, 0), NEONMAP1(vabsq_v, aarch64_neon_abs, 0), NEONMAP0(vaddhn_v), NEONMAP1(vaesdq_v, aarch64_crypto_aesd, 0), NEONMAP1(vaeseq_v, aarch64_crypto_aese, 0), NEONMAP1(vaesimcq_v, aarch64_crypto_aesimc, 0), NEONMAP1(vaesmcq_v, aarch64_crypto_aesmc, 0), NEONMAP1(vcage_v, aarch64_neon_facge, 0), NEONMAP1(vcageq_v, aarch64_neon_facge, 0), NEONMAP1(vcagt_v, aarch64_neon_facgt, 0), NEONMAP1(vcagtq_v, aarch64_neon_facgt, 0), NEONMAP1(vcale_v, aarch64_neon_facge, 0), NEONMAP1(vcaleq_v, aarch64_neon_facge, 0), NEONMAP1(vcalt_v, aarch64_neon_facgt, 0), NEONMAP1(vcaltq_v, aarch64_neon_facgt, 0), NEONMAP1(vcls_v, aarch64_neon_cls, Add1ArgType), NEONMAP1(vclsq_v, aarch64_neon_cls, Add1ArgType), NEONMAP1(vclz_v, ctlz, Add1ArgType), NEONMAP1(vclzq_v, ctlz, Add1ArgType), NEONMAP1(vcnt_v, ctpop, Add1ArgType), NEONMAP1(vcntq_v, ctpop, Add1ArgType), NEONMAP1(vcvt_f16_f32, aarch64_neon_vcvtfp2hf, 0), NEONMAP1(vcvt_f32_f16, aarch64_neon_vcvthf2fp, 0), NEONMAP0(vcvt_f32_v), NEONMAP2(vcvt_n_f32_v, aarch64_neon_vcvtfxu2fp, aarch64_neon_vcvtfxs2fp, 0), NEONMAP2(vcvt_n_f64_v, aarch64_neon_vcvtfxu2fp, aarch64_neon_vcvtfxs2fp, 0), NEONMAP1(vcvt_n_s32_v, aarch64_neon_vcvtfp2fxs, 0), NEONMAP1(vcvt_n_s64_v, aarch64_neon_vcvtfp2fxs, 0), NEONMAP1(vcvt_n_u32_v, aarch64_neon_vcvtfp2fxu, 0), NEONMAP1(vcvt_n_u64_v, aarch64_neon_vcvtfp2fxu, 0), NEONMAP0(vcvtq_f32_v), NEONMAP2(vcvtq_n_f32_v, aarch64_neon_vcvtfxu2fp, aarch64_neon_vcvtfxs2fp, 0), NEONMAP2(vcvtq_n_f64_v, aarch64_neon_vcvtfxu2fp, aarch64_neon_vcvtfxs2fp, 0), NEONMAP1(vcvtq_n_s32_v, aarch64_neon_vcvtfp2fxs, 0), NEONMAP1(vcvtq_n_s64_v, aarch64_neon_vcvtfp2fxs, 0), NEONMAP1(vcvtq_n_u32_v, aarch64_neon_vcvtfp2fxu, 0), NEONMAP1(vcvtq_n_u64_v, aarch64_neon_vcvtfp2fxu, 0), NEONMAP1(vcvtx_f32_v, aarch64_neon_fcvtxn, AddRetType | Add1ArgType), NEONMAP0(vext_v), NEONMAP0(vextq_v), NEONMAP0(vfma_v), NEONMAP0(vfmaq_v), NEONMAP2(vhadd_v, aarch64_neon_uhadd, aarch64_neon_shadd, Add1ArgType | UnsignedAlts), NEONMAP2(vhaddq_v, aarch64_neon_uhadd, aarch64_neon_shadd, Add1ArgType | UnsignedAlts), NEONMAP2(vhsub_v, aarch64_neon_uhsub, aarch64_neon_shsub, Add1ArgType | UnsignedAlts), NEONMAP2(vhsubq_v, aarch64_neon_uhsub, aarch64_neon_shsub, Add1ArgType | UnsignedAlts), NEONMAP0(vmovl_v), NEONMAP0(vmovn_v), NEONMAP1(vmul_v, aarch64_neon_pmul, Add1ArgType), NEONMAP1(vmulq_v, aarch64_neon_pmul, Add1ArgType), NEONMAP1(vpadd_v, aarch64_neon_addp, Add1ArgType), NEONMAP2(vpaddl_v, 
aarch64_neon_uaddlp, aarch64_neon_saddlp, UnsignedAlts), NEONMAP2(vpaddlq_v, aarch64_neon_uaddlp, aarch64_neon_saddlp, UnsignedAlts), NEONMAP1(vpaddq_v, aarch64_neon_addp, Add1ArgType), NEONMAP1(vqabs_v, aarch64_neon_sqabs, Add1ArgType), NEONMAP1(vqabsq_v, aarch64_neon_sqabs, Add1ArgType), NEONMAP2(vqadd_v, aarch64_neon_uqadd, aarch64_neon_sqadd, Add1ArgType | UnsignedAlts), NEONMAP2(vqaddq_v, aarch64_neon_uqadd, aarch64_neon_sqadd, Add1ArgType | UnsignedAlts), NEONMAP2(vqdmlal_v, aarch64_neon_sqdmull, aarch64_neon_sqadd, 0), NEONMAP2(vqdmlsl_v, aarch64_neon_sqdmull, aarch64_neon_sqsub, 0), NEONMAP1(vqdmulh_v, aarch64_neon_sqdmulh, Add1ArgType), NEONMAP1(vqdmulhq_v, aarch64_neon_sqdmulh, Add1ArgType), NEONMAP1(vqdmull_v, aarch64_neon_sqdmull, Add1ArgType), NEONMAP2(vqmovn_v, aarch64_neon_uqxtn, aarch64_neon_sqxtn, Add1ArgType | UnsignedAlts), NEONMAP1(vqmovun_v, aarch64_neon_sqxtun, Add1ArgType), NEONMAP1(vqneg_v, aarch64_neon_sqneg, Add1ArgType), NEONMAP1(vqnegq_v, aarch64_neon_sqneg, Add1ArgType), NEONMAP1(vqrdmulh_v, aarch64_neon_sqrdmulh, Add1ArgType), NEONMAP1(vqrdmulhq_v, aarch64_neon_sqrdmulh, Add1ArgType), NEONMAP2(vqrshl_v, aarch64_neon_uqrshl, aarch64_neon_sqrshl, Add1ArgType | UnsignedAlts), NEONMAP2(vqrshlq_v, aarch64_neon_uqrshl, aarch64_neon_sqrshl, Add1ArgType | UnsignedAlts), NEONMAP2(vqshl_n_v, aarch64_neon_uqshl, aarch64_neon_sqshl, UnsignedAlts), NEONMAP2(vqshl_v, aarch64_neon_uqshl, aarch64_neon_sqshl, Add1ArgType | UnsignedAlts), NEONMAP2(vqshlq_n_v, aarch64_neon_uqshl, aarch64_neon_sqshl,UnsignedAlts), NEONMAP2(vqshlq_v, aarch64_neon_uqshl, aarch64_neon_sqshl, Add1ArgType | UnsignedAlts), NEONMAP1(vqshlu_n_v, aarch64_neon_sqshlu, 0), NEONMAP1(vqshluq_n_v, aarch64_neon_sqshlu, 0), NEONMAP2(vqsub_v, aarch64_neon_uqsub, aarch64_neon_sqsub, Add1ArgType | UnsignedAlts), NEONMAP2(vqsubq_v, aarch64_neon_uqsub, aarch64_neon_sqsub, Add1ArgType | UnsignedAlts), NEONMAP1(vraddhn_v, aarch64_neon_raddhn, Add1ArgType), NEONMAP2(vrecpe_v, aarch64_neon_frecpe, aarch64_neon_urecpe, 0), NEONMAP2(vrecpeq_v, aarch64_neon_frecpe, aarch64_neon_urecpe, 0), NEONMAP1(vrecps_v, aarch64_neon_frecps, Add1ArgType), NEONMAP1(vrecpsq_v, aarch64_neon_frecps, Add1ArgType), NEONMAP2(vrhadd_v, aarch64_neon_urhadd, aarch64_neon_srhadd, Add1ArgType | UnsignedAlts), NEONMAP2(vrhaddq_v, aarch64_neon_urhadd, aarch64_neon_srhadd, Add1ArgType | UnsignedAlts), NEONMAP2(vrshl_v, aarch64_neon_urshl, aarch64_neon_srshl, Add1ArgType | UnsignedAlts), NEONMAP2(vrshlq_v, aarch64_neon_urshl, aarch64_neon_srshl, Add1ArgType | UnsignedAlts), NEONMAP2(vrshr_n_v, aarch64_neon_urshl, aarch64_neon_srshl, UnsignedAlts), NEONMAP2(vrshrq_n_v, aarch64_neon_urshl, aarch64_neon_srshl, UnsignedAlts), NEONMAP2(vrsqrte_v, aarch64_neon_frsqrte, aarch64_neon_ursqrte, 0), NEONMAP2(vrsqrteq_v, aarch64_neon_frsqrte, aarch64_neon_ursqrte, 0), NEONMAP1(vrsqrts_v, aarch64_neon_frsqrts, Add1ArgType), NEONMAP1(vrsqrtsq_v, aarch64_neon_frsqrts, Add1ArgType), NEONMAP1(vrsubhn_v, aarch64_neon_rsubhn, Add1ArgType), NEONMAP1(vsha1su0q_v, aarch64_crypto_sha1su0, 0), NEONMAP1(vsha1su1q_v, aarch64_crypto_sha1su1, 0), NEONMAP1(vsha256h2q_v, aarch64_crypto_sha256h2, 0), NEONMAP1(vsha256hq_v, aarch64_crypto_sha256h, 0), NEONMAP1(vsha256su0q_v, aarch64_crypto_sha256su0, 0), NEONMAP1(vsha256su1q_v, aarch64_crypto_sha256su1, 0), NEONMAP0(vshl_n_v), NEONMAP2(vshl_v, aarch64_neon_ushl, aarch64_neon_sshl, Add1ArgType | UnsignedAlts), NEONMAP0(vshll_n_v), NEONMAP0(vshlq_n_v), NEONMAP2(vshlq_v, aarch64_neon_ushl, aarch64_neon_sshl, Add1ArgType | UnsignedAlts), 
NEONMAP0(vshr_n_v), NEONMAP0(vshrn_n_v), NEONMAP0(vshrq_n_v), NEONMAP0(vsubhn_v), NEONMAP0(vtst_v), NEONMAP0(vtstq_v), }; static const NeonIntrinsicInfo AArch64SISDIntrinsicMap[] = { NEONMAP1(vabdd_f64, aarch64_sisd_fabd, Add1ArgType), NEONMAP1(vabds_f32, aarch64_sisd_fabd, Add1ArgType), NEONMAP1(vabsd_s64, aarch64_neon_abs, Add1ArgType), NEONMAP1(vaddlv_s32, aarch64_neon_saddlv, AddRetType | Add1ArgType), NEONMAP1(vaddlv_u32, aarch64_neon_uaddlv, AddRetType | Add1ArgType), NEONMAP1(vaddlvq_s32, aarch64_neon_saddlv, AddRetType | Add1ArgType), NEONMAP1(vaddlvq_u32, aarch64_neon_uaddlv, AddRetType | Add1ArgType), NEONMAP1(vaddv_f32, aarch64_neon_faddv, AddRetType | Add1ArgType), NEONMAP1(vaddv_s32, aarch64_neon_saddv, AddRetType | Add1ArgType), NEONMAP1(vaddv_u32, aarch64_neon_uaddv, AddRetType | Add1ArgType), NEONMAP1(vaddvq_f32, aarch64_neon_faddv, AddRetType | Add1ArgType), NEONMAP1(vaddvq_f64, aarch64_neon_faddv, AddRetType | Add1ArgType), NEONMAP1(vaddvq_s32, aarch64_neon_saddv, AddRetType | Add1ArgType), NEONMAP1(vaddvq_s64, aarch64_neon_saddv, AddRetType | Add1ArgType), NEONMAP1(vaddvq_u32, aarch64_neon_uaddv, AddRetType | Add1ArgType), NEONMAP1(vaddvq_u64, aarch64_neon_uaddv, AddRetType | Add1ArgType), NEONMAP1(vcaged_f64, aarch64_neon_facge, AddRetType | Add1ArgType), NEONMAP1(vcages_f32, aarch64_neon_facge, AddRetType | Add1ArgType), NEONMAP1(vcagtd_f64, aarch64_neon_facgt, AddRetType | Add1ArgType), NEONMAP1(vcagts_f32, aarch64_neon_facgt, AddRetType | Add1ArgType), NEONMAP1(vcaled_f64, aarch64_neon_facge, AddRetType | Add1ArgType), NEONMAP1(vcales_f32, aarch64_neon_facge, AddRetType | Add1ArgType), NEONMAP1(vcaltd_f64, aarch64_neon_facgt, AddRetType | Add1ArgType), NEONMAP1(vcalts_f32, aarch64_neon_facgt, AddRetType | Add1ArgType), NEONMAP1(vcvtad_s64_f64, aarch64_neon_fcvtas, AddRetType | Add1ArgType), NEONMAP1(vcvtad_u64_f64, aarch64_neon_fcvtau, AddRetType | Add1ArgType), NEONMAP1(vcvtas_s32_f32, aarch64_neon_fcvtas, AddRetType | Add1ArgType), NEONMAP1(vcvtas_u32_f32, aarch64_neon_fcvtau, AddRetType | Add1ArgType), NEONMAP1(vcvtd_n_f64_s64, aarch64_neon_vcvtfxs2fp, AddRetType | Add1ArgType), NEONMAP1(vcvtd_n_f64_u64, aarch64_neon_vcvtfxu2fp, AddRetType | Add1ArgType), NEONMAP1(vcvtd_n_s64_f64, aarch64_neon_vcvtfp2fxs, AddRetType | Add1ArgType), NEONMAP1(vcvtd_n_u64_f64, aarch64_neon_vcvtfp2fxu, AddRetType | Add1ArgType), NEONMAP1(vcvtmd_s64_f64, aarch64_neon_fcvtms, AddRetType | Add1ArgType), NEONMAP1(vcvtmd_u64_f64, aarch64_neon_fcvtmu, AddRetType | Add1ArgType), NEONMAP1(vcvtms_s32_f32, aarch64_neon_fcvtms, AddRetType | Add1ArgType), NEONMAP1(vcvtms_u32_f32, aarch64_neon_fcvtmu, AddRetType | Add1ArgType), NEONMAP1(vcvtnd_s64_f64, aarch64_neon_fcvtns, AddRetType | Add1ArgType), NEONMAP1(vcvtnd_u64_f64, aarch64_neon_fcvtnu, AddRetType | Add1ArgType), NEONMAP1(vcvtns_s32_f32, aarch64_neon_fcvtns, AddRetType | Add1ArgType), NEONMAP1(vcvtns_u32_f32, aarch64_neon_fcvtnu, AddRetType | Add1ArgType), NEONMAP1(vcvtpd_s64_f64, aarch64_neon_fcvtps, AddRetType | Add1ArgType), NEONMAP1(vcvtpd_u64_f64, aarch64_neon_fcvtpu, AddRetType | Add1ArgType), NEONMAP1(vcvtps_s32_f32, aarch64_neon_fcvtps, AddRetType | Add1ArgType), NEONMAP1(vcvtps_u32_f32, aarch64_neon_fcvtpu, AddRetType | Add1ArgType), NEONMAP1(vcvts_n_f32_s32, aarch64_neon_vcvtfxs2fp, AddRetType | Add1ArgType), NEONMAP1(vcvts_n_f32_u32, aarch64_neon_vcvtfxu2fp, AddRetType | Add1ArgType), NEONMAP1(vcvts_n_s32_f32, aarch64_neon_vcvtfp2fxs, AddRetType | Add1ArgType), NEONMAP1(vcvts_n_u32_f32, aarch64_neon_vcvtfp2fxu, AddRetType | 
Add1ArgType), NEONMAP1(vcvtxd_f32_f64, aarch64_sisd_fcvtxn, 0), NEONMAP1(vmaxnmv_f32, aarch64_neon_fmaxnmv, AddRetType | Add1ArgType), NEONMAP1(vmaxnmvq_f32, aarch64_neon_fmaxnmv, AddRetType | Add1ArgType), NEONMAP1(vmaxnmvq_f64, aarch64_neon_fmaxnmv, AddRetType | Add1ArgType), NEONMAP1(vmaxv_f32, aarch64_neon_fmaxv, AddRetType | Add1ArgType), NEONMAP1(vmaxv_s32, aarch64_neon_smaxv, AddRetType | Add1ArgType), NEONMAP1(vmaxv_u32, aarch64_neon_umaxv, AddRetType | Add1ArgType), NEONMAP1(vmaxvq_f32, aarch64_neon_fmaxv, AddRetType | Add1ArgType), NEONMAP1(vmaxvq_f64, aarch64_neon_fmaxv, AddRetType | Add1ArgType), NEONMAP1(vmaxvq_s32, aarch64_neon_smaxv, AddRetType | Add1ArgType), NEONMAP1(vmaxvq_u32, aarch64_neon_umaxv, AddRetType | Add1ArgType), NEONMAP1(vminnmv_f32, aarch64_neon_fminnmv, AddRetType | Add1ArgType), NEONMAP1(vminnmvq_f32, aarch64_neon_fminnmv, AddRetType | Add1ArgType), NEONMAP1(vminnmvq_f64, aarch64_neon_fminnmv, AddRetType | Add1ArgType), NEONMAP1(vminv_f32, aarch64_neon_fminv, AddRetType | Add1ArgType), NEONMAP1(vminv_s32, aarch64_neon_sminv, AddRetType | Add1ArgType), NEONMAP1(vminv_u32, aarch64_neon_uminv, AddRetType | Add1ArgType), NEONMAP1(vminvq_f32, aarch64_neon_fminv, AddRetType | Add1ArgType), NEONMAP1(vminvq_f64, aarch64_neon_fminv, AddRetType | Add1ArgType), NEONMAP1(vminvq_s32, aarch64_neon_sminv, AddRetType | Add1ArgType), NEONMAP1(vminvq_u32, aarch64_neon_uminv, AddRetType | Add1ArgType), NEONMAP1(vmull_p64, aarch64_neon_pmull64, 0), NEONMAP1(vmulxd_f64, aarch64_neon_fmulx, Add1ArgType), NEONMAP1(vmulxs_f32, aarch64_neon_fmulx, Add1ArgType), NEONMAP1(vpaddd_s64, aarch64_neon_uaddv, AddRetType | Add1ArgType), NEONMAP1(vpaddd_u64, aarch64_neon_uaddv, AddRetType | Add1ArgType), NEONMAP1(vpmaxnmqd_f64, aarch64_neon_fmaxnmv, AddRetType | Add1ArgType), NEONMAP1(vpmaxnms_f32, aarch64_neon_fmaxnmv, AddRetType | Add1ArgType), NEONMAP1(vpmaxqd_f64, aarch64_neon_fmaxv, AddRetType | Add1ArgType), NEONMAP1(vpmaxs_f32, aarch64_neon_fmaxv, AddRetType | Add1ArgType), NEONMAP1(vpminnmqd_f64, aarch64_neon_fminnmv, AddRetType | Add1ArgType), NEONMAP1(vpminnms_f32, aarch64_neon_fminnmv, AddRetType | Add1ArgType), NEONMAP1(vpminqd_f64, aarch64_neon_fminv, AddRetType | Add1ArgType), NEONMAP1(vpmins_f32, aarch64_neon_fminv, AddRetType | Add1ArgType), NEONMAP1(vqabsb_s8, aarch64_neon_sqabs, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqabsd_s64, aarch64_neon_sqabs, Add1ArgType), NEONMAP1(vqabsh_s16, aarch64_neon_sqabs, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqabss_s32, aarch64_neon_sqabs, Add1ArgType), NEONMAP1(vqaddb_s8, aarch64_neon_sqadd, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqaddb_u8, aarch64_neon_uqadd, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqaddd_s64, aarch64_neon_sqadd, Add1ArgType), NEONMAP1(vqaddd_u64, aarch64_neon_uqadd, Add1ArgType), NEONMAP1(vqaddh_s16, aarch64_neon_sqadd, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqaddh_u16, aarch64_neon_uqadd, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqadds_s32, aarch64_neon_sqadd, Add1ArgType), NEONMAP1(vqadds_u32, aarch64_neon_uqadd, Add1ArgType), NEONMAP1(vqdmulhh_s16, aarch64_neon_sqdmulh, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqdmulhs_s32, aarch64_neon_sqdmulh, Add1ArgType), NEONMAP1(vqdmullh_s16, aarch64_neon_sqdmull, VectorRet | Use128BitVectors), NEONMAP1(vqdmulls_s32, aarch64_neon_sqdmulls_scalar, 0), NEONMAP1(vqmovnd_s64, aarch64_neon_scalar_sqxtn, AddRetType | Add1ArgType), NEONMAP1(vqmovnd_u64, aarch64_neon_scalar_uqxtn, AddRetType | Add1ArgType), NEONMAP1(vqmovnh_s16, 
aarch64_neon_sqxtn, VectorRet | Use64BitVectors), NEONMAP1(vqmovnh_u16, aarch64_neon_uqxtn, VectorRet | Use64BitVectors), NEONMAP1(vqmovns_s32, aarch64_neon_sqxtn, VectorRet | Use64BitVectors), NEONMAP1(vqmovns_u32, aarch64_neon_uqxtn, VectorRet | Use64BitVectors), NEONMAP1(vqmovund_s64, aarch64_neon_scalar_sqxtun, AddRetType | Add1ArgType), NEONMAP1(vqmovunh_s16, aarch64_neon_sqxtun, VectorRet | Use64BitVectors), NEONMAP1(vqmovuns_s32, aarch64_neon_sqxtun, VectorRet | Use64BitVectors), NEONMAP1(vqnegb_s8, aarch64_neon_sqneg, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqnegd_s64, aarch64_neon_sqneg, Add1ArgType), NEONMAP1(vqnegh_s16, aarch64_neon_sqneg, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqnegs_s32, aarch64_neon_sqneg, Add1ArgType), NEONMAP1(vqrdmulhh_s16, aarch64_neon_sqrdmulh, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqrdmulhs_s32, aarch64_neon_sqrdmulh, Add1ArgType), NEONMAP1(vqrshlb_s8, aarch64_neon_sqrshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqrshlb_u8, aarch64_neon_uqrshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqrshld_s64, aarch64_neon_sqrshl, Add1ArgType), NEONMAP1(vqrshld_u64, aarch64_neon_uqrshl, Add1ArgType), NEONMAP1(vqrshlh_s16, aarch64_neon_sqrshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqrshlh_u16, aarch64_neon_uqrshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqrshls_s32, aarch64_neon_sqrshl, Add1ArgType), NEONMAP1(vqrshls_u32, aarch64_neon_uqrshl, Add1ArgType), NEONMAP1(vqrshrnd_n_s64, aarch64_neon_sqrshrn, AddRetType), NEONMAP1(vqrshrnd_n_u64, aarch64_neon_uqrshrn, AddRetType), NEONMAP1(vqrshrnh_n_s16, aarch64_neon_sqrshrn, VectorRet | Use64BitVectors), NEONMAP1(vqrshrnh_n_u16, aarch64_neon_uqrshrn, VectorRet | Use64BitVectors), NEONMAP1(vqrshrns_n_s32, aarch64_neon_sqrshrn, VectorRet | Use64BitVectors), NEONMAP1(vqrshrns_n_u32, aarch64_neon_uqrshrn, VectorRet | Use64BitVectors), NEONMAP1(vqrshrund_n_s64, aarch64_neon_sqrshrun, AddRetType), NEONMAP1(vqrshrunh_n_s16, aarch64_neon_sqrshrun, VectorRet | Use64BitVectors), NEONMAP1(vqrshruns_n_s32, aarch64_neon_sqrshrun, VectorRet | Use64BitVectors), NEONMAP1(vqshlb_n_s8, aarch64_neon_sqshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshlb_n_u8, aarch64_neon_uqshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshlb_s8, aarch64_neon_sqshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshlb_u8, aarch64_neon_uqshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshld_s64, aarch64_neon_sqshl, Add1ArgType), NEONMAP1(vqshld_u64, aarch64_neon_uqshl, Add1ArgType), NEONMAP1(vqshlh_n_s16, aarch64_neon_sqshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshlh_n_u16, aarch64_neon_uqshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshlh_s16, aarch64_neon_sqshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshlh_u16, aarch64_neon_uqshl, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshls_n_s32, aarch64_neon_sqshl, Add1ArgType), NEONMAP1(vqshls_n_u32, aarch64_neon_uqshl, Add1ArgType), NEONMAP1(vqshls_s32, aarch64_neon_sqshl, Add1ArgType), NEONMAP1(vqshls_u32, aarch64_neon_uqshl, Add1ArgType), NEONMAP1(vqshlub_n_s8, aarch64_neon_sqshlu, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshluh_n_s16, aarch64_neon_sqshlu, Vectorize1ArgType | Use64BitVectors), NEONMAP1(vqshlus_n_s32, aarch64_neon_sqshlu, Add1ArgType), NEONMAP1(vqshrnd_n_s64, aarch64_neon_sqshrn, AddRetType), NEONMAP1(vqshrnd_n_u64, aarch64_neon_uqshrn, AddRetType), NEONMAP1(vqshrnh_n_s16, aarch64_neon_sqshrn, VectorRet | Use64BitVectors), NEONMAP1(vqshrnh_n_u16, aarch64_neon_uqshrn, VectorRet | 
           Use64BitVectors),
  NEONMAP1(vqshrns_n_s32, aarch64_neon_sqshrn, VectorRet | Use64BitVectors),
  NEONMAP1(vqshrns_n_u32, aarch64_neon_uqshrn, VectorRet | Use64BitVectors),
  NEONMAP1(vqshrund_n_s64, aarch64_neon_sqshrun, AddRetType),
  NEONMAP1(vqshrunh_n_s16, aarch64_neon_sqshrun, VectorRet | Use64BitVectors),
  NEONMAP1(vqshruns_n_s32, aarch64_neon_sqshrun, VectorRet | Use64BitVectors),
  NEONMAP1(vqsubb_s8, aarch64_neon_sqsub, Vectorize1ArgType | Use64BitVectors),
  NEONMAP1(vqsubb_u8, aarch64_neon_uqsub, Vectorize1ArgType | Use64BitVectors),
  NEONMAP1(vqsubd_s64, aarch64_neon_sqsub, Add1ArgType),
  NEONMAP1(vqsubd_u64, aarch64_neon_uqsub, Add1ArgType),
  NEONMAP1(vqsubh_s16, aarch64_neon_sqsub, Vectorize1ArgType | Use64BitVectors),
  NEONMAP1(vqsubh_u16, aarch64_neon_uqsub, Vectorize1ArgType | Use64BitVectors),
  NEONMAP1(vqsubs_s32, aarch64_neon_sqsub, Add1ArgType),
  NEONMAP1(vqsubs_u32, aarch64_neon_uqsub, Add1ArgType),
  NEONMAP1(vrecped_f64, aarch64_neon_frecpe, Add1ArgType),
  NEONMAP1(vrecpes_f32, aarch64_neon_frecpe, Add1ArgType),
  NEONMAP1(vrecpxd_f64, aarch64_neon_frecpx, Add1ArgType),
  NEONMAP1(vrecpxs_f32, aarch64_neon_frecpx, Add1ArgType),
  NEONMAP1(vrshld_s64, aarch64_neon_srshl, Add1ArgType),
  NEONMAP1(vrshld_u64, aarch64_neon_urshl, Add1ArgType),
  NEONMAP1(vrsqrted_f64, aarch64_neon_frsqrte, Add1ArgType),
  NEONMAP1(vrsqrtes_f32, aarch64_neon_frsqrte, Add1ArgType),
  NEONMAP1(vrsqrtsd_f64, aarch64_neon_frsqrts, Add1ArgType),
  NEONMAP1(vrsqrtss_f32, aarch64_neon_frsqrts, Add1ArgType),
  NEONMAP1(vsha1cq_u32, aarch64_crypto_sha1c, 0),
  NEONMAP1(vsha1h_u32, aarch64_crypto_sha1h, 0),
  NEONMAP1(vsha1mq_u32, aarch64_crypto_sha1m, 0),
  NEONMAP1(vsha1pq_u32, aarch64_crypto_sha1p, 0),
  NEONMAP1(vshld_s64, aarch64_neon_sshl, Add1ArgType),
  NEONMAP1(vshld_u64, aarch64_neon_ushl, Add1ArgType),
  NEONMAP1(vslid_n_s64, aarch64_neon_vsli, Vectorize1ArgType),
  NEONMAP1(vslid_n_u64, aarch64_neon_vsli, Vectorize1ArgType),
  NEONMAP1(vsqaddb_u8, aarch64_neon_usqadd, Vectorize1ArgType | Use64BitVectors),
  NEONMAP1(vsqaddd_u64, aarch64_neon_usqadd, Add1ArgType),
  NEONMAP1(vsqaddh_u16, aarch64_neon_usqadd, Vectorize1ArgType | Use64BitVectors),
  NEONMAP1(vsqadds_u32, aarch64_neon_usqadd, Add1ArgType),
  NEONMAP1(vsrid_n_s64, aarch64_neon_vsri, Vectorize1ArgType),
  NEONMAP1(vsrid_n_u64, aarch64_neon_vsri, Vectorize1ArgType),
  NEONMAP1(vuqaddb_s8, aarch64_neon_suqadd, Vectorize1ArgType | Use64BitVectors),
  NEONMAP1(vuqaddd_s64, aarch64_neon_suqadd, Add1ArgType),
  NEONMAP1(vuqaddh_s16, aarch64_neon_suqadd, Vectorize1ArgType | Use64BitVectors),
  NEONMAP1(vuqadds_s32, aarch64_neon_suqadd, Add1ArgType),
};

#undef NEONMAP0
#undef NEONMAP1
#undef NEONMAP2

static bool NEONSIMDIntrinsicsProvenSorted = false;

static bool AArch64SIMDIntrinsicsProvenSorted = false;
static bool AArch64SISDIntrinsicsProvenSorted = false;

static const NeonIntrinsicInfo *
findNeonIntrinsicInMap(ArrayRef<NeonIntrinsicInfo> IntrinsicMap,
                       unsigned BuiltinID, bool &MapProvenSorted) {
#ifndef NDEBUG
  if (!MapProvenSorted) {
    assert(std::is_sorted(std::begin(IntrinsicMap), std::end(IntrinsicMap)));
    MapProvenSorted = true;
  }
#endif

  const NeonIntrinsicInfo *Builtin =
      std::lower_bound(IntrinsicMap.begin(), IntrinsicMap.end(), BuiltinID);

  if (Builtin != IntrinsicMap.end() && Builtin->BuiltinID == BuiltinID)
    return Builtin;

  return nullptr;
}

Function *CodeGenFunction::LookupNeonLLVMIntrinsic(unsigned IntrinsicID,
                                                   unsigned Modifier,
                                                   llvm::Type *ArgType,
                                                   const CallExpr *E) {
  int VectorSize = 0;
  if (Modifier & Use64BitVectors)
    VectorSize = 64;
  else if (Modifier & Use128BitVectors)
    VectorSize = 128;

  // Return type.
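  // Which overloaded types get attached is driven entirely by the modifier
  // bits from the intrinsic map. A sketch: AddRetType | Add1ArgType on a
  // builtin returning float64x2_t overloads the intrinsic on
  // {<2 x double>, <2 x double>}, while the Vectorize*/Use*BitVectors bits
  // below widen scalar operands into one-element or full-width vectors.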
  SmallVector<llvm::Type *, 3> Tys;
  if (Modifier & AddRetType) {
    llvm::Type *Ty = ConvertType(E->getCallReturnType(getContext()));
    if (Modifier & VectorizeRetType)
      Ty = llvm::VectorType::get(
          Ty, VectorSize ? VectorSize / Ty->getPrimitiveSizeInBits() : 1);

    Tys.push_back(Ty);
  }

  // Arguments.
  if (Modifier & VectorizeArgTypes) {
    int Elts = VectorSize ? VectorSize / ArgType->getPrimitiveSizeInBits() : 1;
    ArgType = llvm::VectorType::get(ArgType, Elts);
  }

  if (Modifier & (Add1ArgType | Add2ArgTypes))
    Tys.push_back(ArgType);

  if (Modifier & Add2ArgTypes)
    Tys.push_back(ArgType);

  if (Modifier & InventFloatType)
    Tys.push_back(FloatTy);

  return CGM.getIntrinsic(IntrinsicID, Tys);
}

static Value *EmitCommonNeonSISDBuiltinExpr(CodeGenFunction &CGF,
                                            const NeonIntrinsicInfo &SISDInfo,
                                            SmallVectorImpl<Value *> &Ops,
                                            const CallExpr *E) {
  unsigned BuiltinID = SISDInfo.BuiltinID;
  unsigned int Int = SISDInfo.LLVMIntrinsic;
  unsigned Modifier = SISDInfo.TypeModifier;
  const char *s = SISDInfo.NameHint;

  switch (BuiltinID) {
  case NEON::BI__builtin_neon_vcled_s64:
  case NEON::BI__builtin_neon_vcled_u64:
  case NEON::BI__builtin_neon_vcles_f32:
  case NEON::BI__builtin_neon_vcled_f64:
  case NEON::BI__builtin_neon_vcltd_s64:
  case NEON::BI__builtin_neon_vcltd_u64:
  case NEON::BI__builtin_neon_vclts_f32:
  case NEON::BI__builtin_neon_vcltd_f64:
  case NEON::BI__builtin_neon_vcales_f32:
  case NEON::BI__builtin_neon_vcaled_f64:
  case NEON::BI__builtin_neon_vcalts_f32:
  case NEON::BI__builtin_neon_vcaltd_f64:
    // Only one direction of comparisons actually exists: cmle is actually a
    // cmge with swapped operands. The table gives us the right intrinsic but
    // we still need to do the swap.
    std::swap(Ops[0], Ops[1]);
    break;
  }

  assert(Int && "Generic code assumes a valid intrinsic");

  // Determine the type(s) of this overloaded AArch64 intrinsic.
  const Expr *Arg = E->getArg(0);
  llvm::Type *ArgTy = CGF.ConvertType(Arg->getType());
  Function *F = CGF.LookupNeonLLVMIntrinsic(Int, Modifier, ArgTy, E);

  int j = 0;
  ConstantInt *C0 = ConstantInt::get(CGF.SizeTy, 0);
  for (Function::const_arg_iterator ai = F->arg_begin(), ae = F->arg_end();
       ai != ae; ++ai, ++j) {
    llvm::Type *ArgTy = ai->getType();
    if (Ops[j]->getType()->getPrimitiveSizeInBits() ==
        ArgTy->getPrimitiveSizeInBits())
      continue;

    assert(ArgTy->isVectorTy() && !Ops[j]->getType()->isVectorTy());
    // The constant argument to an _n_ intrinsic always has Int32Ty, so
    // truncate it before inserting.
    Ops[j] = CGF.Builder.CreateTruncOrBitCast(Ops[j],
                                              ArgTy->getVectorElementType());
    Ops[j] = CGF.Builder.CreateInsertElement(UndefValue::get(ArgTy), Ops[j], C0);
  }

  Value *Result = CGF.EmitNeonCall(F, Ops, s);
  llvm::Type *ResultType = CGF.ConvertType(E->getType());
  if (ResultType->getPrimitiveSizeInBits() <
      Result->getType()->getPrimitiveSizeInBits())
    return CGF.Builder.CreateExtractElement(Result, C0);

  return CGF.Builder.CreateBitCast(Result, ResultType, s);
}

Value *CodeGenFunction::EmitCommonNeonBuiltinExpr(
    unsigned BuiltinID, unsigned LLVMIntrinsic, unsigned AltLLVMIntrinsic,
    const char *NameHint, unsigned Modifier, const CallExpr *E,
    SmallVectorImpl<Value *> &Ops, Address PtrOp0, Address PtrOp1) {
  // Get the last argument, which specifies the vector type.
  llvm::APSInt NeonTypeConst;
  const Expr *Arg = E->getArg(E->getNumArgs() - 1);
  if (!Arg->isIntegerConstantExpr(NeonTypeConst, getContext()))
    return nullptr;

  // Determine the type of this overloaded NEON intrinsic.
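  // The constant is a NeonTypeFlags value: its low bits pick the element
  // type, with separate bits marking the unsigned and 128-bit ("quad")
  // variants, unpacked just below. Roughly, vhaddq_u32(a, b) arrives here
  // as __builtin_neon_vhaddq_v(a, b, <uint32x4 type flags>).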
  NeonTypeFlags Type(NeonTypeConst.getZExtValue());
  bool Usgn = Type.isUnsigned();
  bool Quad = Type.isQuad();

  llvm::VectorType *VTy = GetNeonType(this, Type);
  llvm::Type *Ty = VTy;
  if (!Ty)
    return nullptr;

  auto getAlignmentValue32 = [&](Address addr) -> Value* {
    return Builder.getInt32(addr.getAlignment().getQuantity());
  };

  unsigned Int = LLVMIntrinsic;
  if ((Modifier & UnsignedAlts) && !Usgn)
    Int = AltLLVMIntrinsic;

  switch (BuiltinID) {
  default: break;
  case NEON::BI__builtin_neon_vabs_v:
  case NEON::BI__builtin_neon_vabsq_v:
    if (VTy->getElementType()->isFloatingPointTy())
      return EmitNeonCall(CGM.getIntrinsic(Intrinsic::fabs, Ty), Ops, "vabs");
    return EmitNeonCall(CGM.getIntrinsic(LLVMIntrinsic, Ty), Ops, "vabs");
  case NEON::BI__builtin_neon_vaddhn_v: {
    llvm::VectorType *SrcTy =
        llvm::VectorType::getExtendedElementVectorType(VTy);

    // %sum = add <4 x i32> %lhs, %rhs
    Ops[0] = Builder.CreateBitCast(Ops[0], SrcTy);
    Ops[1] = Builder.CreateBitCast(Ops[1], SrcTy);
    Ops[0] = Builder.CreateAdd(Ops[0], Ops[1], "vaddhn");

    // %high = lshr <4 x i32> %sum, <i32 16, i32 16, i32 16, i32 16>
    Constant *ShiftAmt =
        ConstantInt::get(SrcTy, SrcTy->getScalarSizeInBits() / 2);
    Ops[0] = Builder.CreateLShr(Ops[0], ShiftAmt, "vaddhn");

    // %res = trunc <4 x i32> %high to <4 x i16>
    return Builder.CreateTrunc(Ops[0], VTy, "vaddhn");
  }
  case NEON::BI__builtin_neon_vcale_v:
  case NEON::BI__builtin_neon_vcaleq_v:
  case NEON::BI__builtin_neon_vcalt_v:
  case NEON::BI__builtin_neon_vcaltq_v:
    std::swap(Ops[0], Ops[1]);
  case NEON::BI__builtin_neon_vcage_v:
  case NEON::BI__builtin_neon_vcageq_v:
  case NEON::BI__builtin_neon_vcagt_v:
  case NEON::BI__builtin_neon_vcagtq_v: {
    llvm::Type *VecFlt = llvm::VectorType::get(
        VTy->getScalarSizeInBits() == 32 ? FloatTy : DoubleTy,
        VTy->getNumElements());
    llvm::Type *Tys[] = { VTy, VecFlt };
    Function *F = CGM.getIntrinsic(LLVMIntrinsic, Tys);
    return EmitNeonCall(F, Ops, NameHint);
  }
  case NEON::BI__builtin_neon_vclz_v:
  case NEON::BI__builtin_neon_vclzq_v:
    // We generate a target-independent intrinsic, which needs a second
    // argument for whether or not clz of zero is undefined; on ARM it isn't.
    Ops.push_back(Builder.getInt1(getTarget().isCLZForZeroUndef()));
    break;
  case NEON::BI__builtin_neon_vcvt_f32_v:
  case NEON::BI__builtin_neon_vcvtq_f32_v:
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    Ty = GetNeonType(this, NeonTypeFlags(NeonTypeFlags::Float32, false, Quad));
    return Usgn ? Builder.CreateUIToFP(Ops[0], Ty, "vcvt")
                : Builder.CreateSIToFP(Ops[0], Ty, "vcvt");
  case NEON::BI__builtin_neon_vcvt_n_f32_v:
  case NEON::BI__builtin_neon_vcvt_n_f64_v:
  case NEON::BI__builtin_neon_vcvtq_n_f32_v:
  case NEON::BI__builtin_neon_vcvtq_n_f64_v: {
    llvm::Type *Tys[2] = { GetFloatNeonType(this, Type), Ty };
    Int = Usgn ?
              LLVMIntrinsic : AltLLVMIntrinsic;
    Function *F = CGM.getIntrinsic(Int, Tys);
    return EmitNeonCall(F, Ops, "vcvt_n");
  }
  case NEON::BI__builtin_neon_vcvt_n_s32_v:
  case NEON::BI__builtin_neon_vcvt_n_u32_v:
  case NEON::BI__builtin_neon_vcvt_n_s64_v:
  case NEON::BI__builtin_neon_vcvt_n_u64_v:
  case NEON::BI__builtin_neon_vcvtq_n_s32_v:
  case NEON::BI__builtin_neon_vcvtq_n_u32_v:
  case NEON::BI__builtin_neon_vcvtq_n_s64_v:
  case NEON::BI__builtin_neon_vcvtq_n_u64_v: {
    llvm::Type *Tys[2] = { Ty, GetFloatNeonType(this, Type) };
    Function *F = CGM.getIntrinsic(LLVMIntrinsic, Tys);
    return EmitNeonCall(F, Ops, "vcvt_n");
  }
  case NEON::BI__builtin_neon_vcvt_s32_v:
  case NEON::BI__builtin_neon_vcvt_u32_v:
  case NEON::BI__builtin_neon_vcvt_s64_v:
  case NEON::BI__builtin_neon_vcvt_u64_v:
  case NEON::BI__builtin_neon_vcvtq_s32_v:
  case NEON::BI__builtin_neon_vcvtq_u32_v:
  case NEON::BI__builtin_neon_vcvtq_s64_v:
  case NEON::BI__builtin_neon_vcvtq_u64_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], GetFloatNeonType(this, Type));
    return Usgn ? Builder.CreateFPToUI(Ops[0], Ty, "vcvt")
                : Builder.CreateFPToSI(Ops[0], Ty, "vcvt");
  }
  case NEON::BI__builtin_neon_vcvta_s32_v:
  case NEON::BI__builtin_neon_vcvta_s64_v:
  case NEON::BI__builtin_neon_vcvta_u32_v:
  case NEON::BI__builtin_neon_vcvta_u64_v:
  case NEON::BI__builtin_neon_vcvtaq_s32_v:
  case NEON::BI__builtin_neon_vcvtaq_s64_v:
  case NEON::BI__builtin_neon_vcvtaq_u32_v:
  case NEON::BI__builtin_neon_vcvtaq_u64_v:
  case NEON::BI__builtin_neon_vcvtn_s32_v:
  case NEON::BI__builtin_neon_vcvtn_s64_v:
  case NEON::BI__builtin_neon_vcvtn_u32_v:
  case NEON::BI__builtin_neon_vcvtn_u64_v:
  case NEON::BI__builtin_neon_vcvtnq_s32_v:
  case NEON::BI__builtin_neon_vcvtnq_s64_v:
  case NEON::BI__builtin_neon_vcvtnq_u32_v:
  case NEON::BI__builtin_neon_vcvtnq_u64_v:
  case NEON::BI__builtin_neon_vcvtp_s32_v:
  case NEON::BI__builtin_neon_vcvtp_s64_v:
  case NEON::BI__builtin_neon_vcvtp_u32_v:
  case NEON::BI__builtin_neon_vcvtp_u64_v:
  case NEON::BI__builtin_neon_vcvtpq_s32_v:
  case NEON::BI__builtin_neon_vcvtpq_s64_v:
  case NEON::BI__builtin_neon_vcvtpq_u32_v:
  case NEON::BI__builtin_neon_vcvtpq_u64_v:
  case NEON::BI__builtin_neon_vcvtm_s32_v:
  case NEON::BI__builtin_neon_vcvtm_s64_v:
  case NEON::BI__builtin_neon_vcvtm_u32_v:
  case NEON::BI__builtin_neon_vcvtm_u64_v:
  case NEON::BI__builtin_neon_vcvtmq_s32_v:
  case NEON::BI__builtin_neon_vcvtmq_s64_v:
  case NEON::BI__builtin_neon_vcvtmq_u32_v:
  case NEON::BI__builtin_neon_vcvtmq_u64_v: {
    llvm::Type *Tys[2] = { Ty, GetFloatNeonType(this, Type) };
    return EmitNeonCall(CGM.getIntrinsic(LLVMIntrinsic, Tys), Ops, NameHint);
  }
  case NEON::BI__builtin_neon_vext_v:
  case NEON::BI__builtin_neon_vextq_v: {
    int CV = cast<ConstantInt>(Ops[2])->getSExtValue();
    SmallVector<uint32_t, 16> Indices;
    for (unsigned i = 0, e = VTy->getNumElements(); i != e; ++i)
      Indices.push_back(i+CV);

    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    return Builder.CreateShuffleVector(Ops[0], Ops[1], Indices, "vext");
  }
  case NEON::BI__builtin_neon_vfma_v:
  case NEON::BI__builtin_neon_vfmaq_v: {
    Value *F = CGM.getIntrinsic(Intrinsic::fma, Ty);
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);

    // NEON intrinsic puts accumulator first, unlike the LLVM fma.
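    // I.e. vfma(a, b, c) computes a + b * c, so pass (b, c, a) to llvm.fma,
    // which takes the addend last.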
    return Builder.CreateCall(F, {Ops[1], Ops[2], Ops[0]});
  }
  case NEON::BI__builtin_neon_vld1_v:
  case NEON::BI__builtin_neon_vld1q_v: {
    llvm::Type *Tys[] = {Ty, Int8PtrTy};
    Ops.push_back(getAlignmentValue32(PtrOp0));
    return EmitNeonCall(CGM.getIntrinsic(LLVMIntrinsic, Tys), Ops, "vld1");
  }
  case NEON::BI__builtin_neon_vld2_v:
  case NEON::BI__builtin_neon_vld2q_v:
  case NEON::BI__builtin_neon_vld3_v:
  case NEON::BI__builtin_neon_vld3q_v:
  case NEON::BI__builtin_neon_vld4_v:
  case NEON::BI__builtin_neon_vld4q_v: {
    llvm::Type *Tys[] = {Ty, Int8PtrTy};
    Function *F = CGM.getIntrinsic(LLVMIntrinsic, Tys);
    Value *Align = getAlignmentValue32(PtrOp1);
    Ops[1] = Builder.CreateCall(F, {Ops[1], Align}, NameHint);
    Ty = llvm::PointerType::getUnqual(Ops[1]->getType());
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vld1_dup_v:
  case NEON::BI__builtin_neon_vld1q_dup_v: {
    Value *V = UndefValue::get(Ty);
    Ty = llvm::PointerType::getUnqual(VTy->getElementType());
    PtrOp0 = Builder.CreateBitCast(PtrOp0, Ty);
    LoadInst *Ld = Builder.CreateLoad(PtrOp0);
    llvm::Constant *CI = ConstantInt::get(SizeTy, 0);
    Ops[0] = Builder.CreateInsertElement(V, Ld, CI);
    return EmitNeonSplat(Ops[0], CI);
  }
  case NEON::BI__builtin_neon_vld2_lane_v:
  case NEON::BI__builtin_neon_vld2q_lane_v:
  case NEON::BI__builtin_neon_vld3_lane_v:
  case NEON::BI__builtin_neon_vld3q_lane_v:
  case NEON::BI__builtin_neon_vld4_lane_v:
  case NEON::BI__builtin_neon_vld4q_lane_v: {
    llvm::Type *Tys[] = {Ty, Int8PtrTy};
    Function *F = CGM.getIntrinsic(LLVMIntrinsic, Tys);
    for (unsigned I = 2; I < Ops.size() - 1; ++I)
      Ops[I] = Builder.CreateBitCast(Ops[I], Ty);
    Ops.push_back(getAlignmentValue32(PtrOp1));
    Ops[1] = Builder.CreateCall(F, makeArrayRef(Ops).slice(1), NameHint);
    Ty = llvm::PointerType::getUnqual(Ops[1]->getType());
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vmovl_v: {
    llvm::Type *DTy = llvm::VectorType::getTruncatedElementVectorType(VTy);
    Ops[0] = Builder.CreateBitCast(Ops[0], DTy);
    if (Usgn)
      return Builder.CreateZExt(Ops[0], Ty, "vmovl");
    return Builder.CreateSExt(Ops[0], Ty, "vmovl");
  }
  case NEON::BI__builtin_neon_vmovn_v: {
    llvm::Type *QTy = llvm::VectorType::getExtendedElementVectorType(VTy);
    Ops[0] = Builder.CreateBitCast(Ops[0], QTy);
    return Builder.CreateTrunc(Ops[0], Ty, "vmovn");
  }
  case NEON::BI__builtin_neon_vmull_v:
    // FIXME: the integer vmull operations could be emitted in terms of pure
    // LLVM IR (2 exts followed by a mul). Unfortunately LLVM has a habit of
    // hoisting the exts outside loops. Until global ISel comes along that can
    // see through such movement this leads to bad CodeGen. So we need an
    // intrinsic for now.
    Int = Usgn ? Intrinsic::arm_neon_vmullu : Intrinsic::arm_neon_vmulls;
    Int = Type.isPoly() ? (unsigned)Intrinsic::arm_neon_vmullp : Int;
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vmull");
  case NEON::BI__builtin_neon_vpadal_v:
  case NEON::BI__builtin_neon_vpadalq_v: {
    // The source operand type has twice as many elements of half the size.
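    // E.g. an int32x4_t accumulator pairwise-adds an int16x8_t source;
    // construct that narrower-element, doubled-length type by hand.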
    unsigned EltBits = VTy->getElementType()->getPrimitiveSizeInBits();
    llvm::Type *EltTy = llvm::IntegerType::get(getLLVMContext(), EltBits / 2);
    llvm::Type *NarrowTy =
        llvm::VectorType::get(EltTy, VTy->getNumElements() * 2);
    llvm::Type *Tys[2] = { Ty, NarrowTy };
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, NameHint);
  }
  case NEON::BI__builtin_neon_vpaddl_v:
  case NEON::BI__builtin_neon_vpaddlq_v: {
    // The source operand type has twice as many elements of half the size.
    unsigned EltBits = VTy->getElementType()->getPrimitiveSizeInBits();
    llvm::Type *EltTy = llvm::IntegerType::get(getLLVMContext(), EltBits / 2);
    llvm::Type *NarrowTy =
        llvm::VectorType::get(EltTy, VTy->getNumElements() * 2);
    llvm::Type *Tys[2] = { Ty, NarrowTy };
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vpaddl");
  }
  case NEON::BI__builtin_neon_vqdmlal_v:
  case NEON::BI__builtin_neon_vqdmlsl_v: {
    SmallVector<Value *, 2> MulOps(Ops.begin() + 1, Ops.end());
    Ops[1] =
        EmitNeonCall(CGM.getIntrinsic(LLVMIntrinsic, Ty), MulOps, "vqdmlal");
    Ops.resize(2);
    return EmitNeonCall(CGM.getIntrinsic(AltLLVMIntrinsic, Ty), Ops, NameHint);
  }
  case NEON::BI__builtin_neon_vqshl_n_v:
  case NEON::BI__builtin_neon_vqshlq_n_v:
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vqshl_n",
                        1, false);
  case NEON::BI__builtin_neon_vqshlu_n_v:
  case NEON::BI__builtin_neon_vqshluq_n_v:
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vqshlu_n",
                        1, false);
  case NEON::BI__builtin_neon_vrecpe_v:
  case NEON::BI__builtin_neon_vrecpeq_v:
  case NEON::BI__builtin_neon_vrsqrte_v:
  case NEON::BI__builtin_neon_vrsqrteq_v:
    Int = Ty->isFPOrFPVectorTy() ? LLVMIntrinsic : AltLLVMIntrinsic;
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, NameHint);
  case NEON::BI__builtin_neon_vrshr_n_v:
  case NEON::BI__builtin_neon_vrshrq_n_v:
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrshr_n",
                        1, true);
  case NEON::BI__builtin_neon_vshl_n_v:
  case NEON::BI__builtin_neon_vshlq_n_v:
    Ops[1] = EmitNeonShiftVector(Ops[1], Ty, false);
    return Builder.CreateShl(Builder.CreateBitCast(Ops[0], Ty), Ops[1],
                             "vshl_n");
  case NEON::BI__builtin_neon_vshll_n_v: {
    llvm::Type *SrcTy = llvm::VectorType::getTruncatedElementVectorType(VTy);
    Ops[0] = Builder.CreateBitCast(Ops[0], SrcTy);
    if (Usgn)
      Ops[0] = Builder.CreateZExt(Ops[0], VTy);
    else
      Ops[0] = Builder.CreateSExt(Ops[0], VTy);
    Ops[1] = EmitNeonShiftVector(Ops[1], VTy, false);
    return Builder.CreateShl(Ops[0], Ops[1], "vshll_n");
  }
  case NEON::BI__builtin_neon_vshrn_n_v: {
    llvm::Type *SrcTy = llvm::VectorType::getExtendedElementVectorType(VTy);
    Ops[0] = Builder.CreateBitCast(Ops[0], SrcTy);
    Ops[1] = EmitNeonShiftVector(Ops[1], SrcTy, false);
    if (Usgn)
      Ops[0] = Builder.CreateLShr(Ops[0], Ops[1]);
    else
      Ops[0] = Builder.CreateAShr(Ops[0], Ops[1]);
    return Builder.CreateTrunc(Ops[0], Ty, "vshrn_n");
  }
  case NEON::BI__builtin_neon_vshr_n_v:
  case NEON::BI__builtin_neon_vshrq_n_v:
    return EmitNeonRShiftImm(Ops[0], Ops[1], Ty, Usgn, "vshr_n");
  case NEON::BI__builtin_neon_vst1_v:
  case NEON::BI__builtin_neon_vst1q_v:
  case NEON::BI__builtin_neon_vst2_v:
  case NEON::BI__builtin_neon_vst2q_v:
  case NEON::BI__builtin_neon_vst3_v:
  case NEON::BI__builtin_neon_vst3q_v:
  case NEON::BI__builtin_neon_vst4_v:
  case NEON::BI__builtin_neon_vst4q_v:
  case NEON::BI__builtin_neon_vst2_lane_v:
  case NEON::BI__builtin_neon_vst2q_lane_v:
  case NEON::BI__builtin_neon_vst3_lane_v:
  case NEON::BI__builtin_neon_vst3q_lane_v:
  case NEON::BI__builtin_neon_vst4_lane_v:
  case NEON::BI__builtin_neon_vst4q_lane_v: {
    llvm::Type *Tys[] = {Int8PtrTy, Ty};
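    // The vst intrinsics take the pointer first, then the value(s), then an
    // explicit alignment in bytes, so the alignment goes on last.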
    Ops.push_back(getAlignmentValue32(PtrOp0));
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "");
  }
  case NEON::BI__builtin_neon_vsubhn_v: {
    llvm::VectorType *SrcTy =
        llvm::VectorType::getExtendedElementVectorType(VTy);

    // %diff = sub <4 x i32> %lhs, %rhs
    Ops[0] = Builder.CreateBitCast(Ops[0], SrcTy);
    Ops[1] = Builder.CreateBitCast(Ops[1], SrcTy);
    Ops[0] = Builder.CreateSub(Ops[0], Ops[1], "vsubhn");

    // %high = lshr <4 x i32> %diff, <i32 16, i32 16, i32 16, i32 16>
    Constant *ShiftAmt =
        ConstantInt::get(SrcTy, SrcTy->getScalarSizeInBits() / 2);
    Ops[0] = Builder.CreateLShr(Ops[0], ShiftAmt, "vsubhn");

    // %res = trunc <4 x i32> %high to <4 x i16>
    return Builder.CreateTrunc(Ops[0], VTy, "vsubhn");
  }
  case NEON::BI__builtin_neon_vtrn_v:
  case NEON::BI__builtin_neon_vtrnq_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::PointerType::getUnqual(Ty));
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);
    Value *SV = nullptr;

    for (unsigned vi = 0; vi != 2; ++vi) {
      SmallVector<uint32_t, 16> Indices;
      for (unsigned i = 0, e = VTy->getNumElements(); i != e; i += 2) {
        Indices.push_back(i+vi);
        Indices.push_back(i+e+vi);
      }
      Value *Addr = Builder.CreateConstInBoundsGEP1_32(Ty, Ops[0], vi);
      SV = Builder.CreateShuffleVector(Ops[1], Ops[2], Indices, "vtrn");
      SV = Builder.CreateDefaultAlignedStore(SV, Addr);
    }
    return SV;
  }
  case NEON::BI__builtin_neon_vtst_v:
  case NEON::BI__builtin_neon_vtstq_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[0] = Builder.CreateAnd(Ops[0], Ops[1]);
    Ops[0] = Builder.CreateICmp(ICmpInst::ICMP_NE, Ops[0],
                                ConstantAggregateZero::get(Ty));
    return Builder.CreateSExt(Ops[0], Ty, "vtst");
  }
  case NEON::BI__builtin_neon_vuzp_v:
  case NEON::BI__builtin_neon_vuzpq_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::PointerType::getUnqual(Ty));
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);
    Value *SV = nullptr;

    for (unsigned vi = 0; vi != 2; ++vi) {
      SmallVector<uint32_t, 16> Indices;
      for (unsigned i = 0, e = VTy->getNumElements(); i != e; ++i)
        Indices.push_back(2*i+vi);

      Value *Addr = Builder.CreateConstInBoundsGEP1_32(Ty, Ops[0], vi);
      SV = Builder.CreateShuffleVector(Ops[1], Ops[2], Indices, "vuzp");
      SV = Builder.CreateDefaultAlignedStore(SV, Addr);
    }
    return SV;
  }
  case NEON::BI__builtin_neon_vzip_v:
  case NEON::BI__builtin_neon_vzipq_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::PointerType::getUnqual(Ty));
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);
    Value *SV = nullptr;

    for (unsigned vi = 0; vi != 2; ++vi) {
      SmallVector<uint32_t, 16> Indices;
      for (unsigned i = 0, e = VTy->getNumElements(); i != e; i += 2) {
        Indices.push_back((i + vi*e) >> 1);
        Indices.push_back(((i + vi*e) >> 1)+e);
      }
      Value *Addr = Builder.CreateConstInBoundsGEP1_32(Ty, Ops[0], vi);
      SV = Builder.CreateShuffleVector(Ops[1], Ops[2], Indices, "vzip");
      SV = Builder.CreateDefaultAlignedStore(SV, Addr);
    }
    return SV;
  }
  }

  assert(Int && "Expected valid intrinsic number");

  // Determine the type(s) of this overloaded AArch64 intrinsic.
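  // Everything not special-cased above goes through the table-driven path:
  // the modifier bits select the overloaded types, and the result is bitcast
  // back to the type the builtin itself was declared to return.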
  Function *F = LookupNeonLLVMIntrinsic(Int, Modifier, Ty, E);

  Value *Result = EmitNeonCall(F, Ops, NameHint);
  llvm::Type *ResultType = ConvertType(E->getType());
  // AArch64 intrinsic one-element vector type cast to
  // scalar type expected by the builtin
  return Builder.CreateBitCast(Result, ResultType, NameHint);
}

Value *CodeGenFunction::EmitAArch64CompareBuiltinExpr(
    Value *Op, llvm::Type *Ty, const CmpInst::Predicate Fp,
    const CmpInst::Predicate Ip, const Twine &Name) {
  llvm::Type *OTy = Op->getType();

  // FIXME: this is utterly horrific. We should not be looking at previous
  // codegen context to find out what needs doing. Unfortunately TableGen
  // currently gives us exactly the same calls for vceqz_f32 and vceqz_s32
  // (etc).
  if (BitCastInst *BI = dyn_cast<BitCastInst>(Op))
    OTy = BI->getOperand(0)->getType();

  Op = Builder.CreateBitCast(Op, OTy);
  if (OTy->getScalarType()->isFloatingPointTy()) {
    Op = Builder.CreateFCmp(Fp, Op, Constant::getNullValue(OTy));
  } else {
    Op = Builder.CreateICmp(Ip, Op, Constant::getNullValue(OTy));
  }
  return Builder.CreateSExt(Op, Ty, Name);
}

static Value *packTBLDVectorList(CodeGenFunction &CGF, ArrayRef<Value *> Ops,
                                 Value *ExtOp, Value *IndexOp,
                                 llvm::Type *ResTy, unsigned IntID,
                                 const char *Name) {
  SmallVector<Value *, 2> TblOps;
  if (ExtOp)
    TblOps.push_back(ExtOp);

  // Build a vector containing sequential numbers like (0, 1, 2, ..., 15).
  SmallVector<uint32_t, 16> Indices;
  llvm::VectorType *TblTy = cast<llvm::VectorType>(Ops[0]->getType());
  for (unsigned i = 0, e = TblTy->getNumElements(); i != e; ++i) {
    Indices.push_back(2*i);
    Indices.push_back(2*i+1);
  }

  int PairPos = 0, End = Ops.size() - 1;
  while (PairPos < End) {
    TblOps.push_back(CGF.Builder.CreateShuffleVector(Ops[PairPos],
                                                     Ops[PairPos+1], Indices,
                                                     Name));
    PairPos += 2;
  }

  // If there's an odd number of 64-bit lookup tables, fill the high 64 bits
  // of the 128-bit lookup table with zero.
  if (PairPos == End) {
    Value *ZeroTbl = ConstantAggregateZero::get(TblTy);
    TblOps.push_back(CGF.Builder.CreateShuffleVector(Ops[PairPos],
                                                     ZeroTbl, Indices, Name));
  }

  Function *TblF;
  TblOps.push_back(IndexOp);
  TblF = CGF.CGM.getIntrinsic(IntID, ResTy);

  return CGF.EmitNeonCall(TblF, TblOps, Name);
}

Value *CodeGenFunction::GetValueForARMHint(unsigned BuiltinID) {
  unsigned Value;
  switch (BuiltinID) {
  default:
    return nullptr;
  case ARM::BI__builtin_arm_nop:
    Value = 0;
    break;
  case ARM::BI__builtin_arm_yield:
  case ARM::BI__yield:
    Value = 1;
    break;
  case ARM::BI__builtin_arm_wfe:
  case ARM::BI__wfe:
    Value = 2;
    break;
  case ARM::BI__builtin_arm_wfi:
  case ARM::BI__wfi:
    Value = 3;
    break;
  case ARM::BI__builtin_arm_sev:
  case ARM::BI__sev:
    Value = 4;
    break;
  case ARM::BI__builtin_arm_sevl:
  case ARM::BI__sevl:
    Value = 5;
    break;
  }

  return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::arm_hint),
                            llvm::ConstantInt::get(Int32Ty, Value));
}

// Generates the IR for the read/write special register builtin.
// ValueType is the type of the value that is to be written or read,
// RegisterType is the type of the register being written to or read from.
static Value *EmitSpecialRegisterBuiltin(CodeGenFunction &CGF,
                                         const CallExpr *E,
                                         llvm::Type *RegisterType,
                                         llvm::Type *ValueType,
                                         bool IsRead,
                                         StringRef SysReg = "") {
  // The read and write register intrinsics only support 32- and 64-bit
  // operations.
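  // The register is named through metadata; e.g. reading "cpsr" produces IR
  // roughly like (illustrative):
  //   %0 = call i32 @llvm.read_register.i32(metadata !{!"cpsr"})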
  assert((RegisterType->isIntegerTy(32) || RegisterType->isIntegerTy(64))
          && "Unsupported size for register.");

  CodeGen::CGBuilderTy &Builder = CGF.Builder;
  CodeGen::CodeGenModule &CGM = CGF.CGM;
  LLVMContext &Context = CGM.getLLVMContext();

  if (SysReg.empty()) {
    const Expr *SysRegStrExpr = E->getArg(0)->IgnoreParenCasts();
    SysReg = cast<StringLiteral>(SysRegStrExpr)->getString();
  }

  llvm::Metadata *Ops[] = { llvm::MDString::get(Context, SysReg) };
  llvm::MDNode *RegName = llvm::MDNode::get(Context, Ops);
  llvm::Value *Metadata = llvm::MetadataAsValue::get(Context, RegName);

  llvm::Type *Types[] = { RegisterType };

  bool MixedTypes = RegisterType->isIntegerTy(64) && ValueType->isIntegerTy(32);
  assert(!(RegisterType->isIntegerTy(32) && ValueType->isIntegerTy(64))
            && "Can't fit 64-bit value in 32-bit register");

  if (IsRead) {
    llvm::Value *F = CGM.getIntrinsic(llvm::Intrinsic::read_register, Types);
    llvm::Value *Call = Builder.CreateCall(F, Metadata);

    if (MixedTypes)
      // Read into 64 bit register and then truncate result to 32 bit.
      return Builder.CreateTrunc(Call, ValueType);

    if (ValueType->isPointerTy())
      // Have i32/i64 result (Call) but want to return a VoidPtrTy (i8*).
      return Builder.CreateIntToPtr(Call, ValueType);

    return Call;
  }

  llvm::Value *F = CGM.getIntrinsic(llvm::Intrinsic::write_register, Types);
  llvm::Value *ArgValue = CGF.EmitScalarExpr(E->getArg(1));
  if (MixedTypes) {
    // Extend 32 bit write value to 64 bit to pass to write.
    ArgValue = Builder.CreateZExt(ArgValue, RegisterType);
    return Builder.CreateCall(F, { Metadata, ArgValue });
  }

  if (ValueType->isPointerTy()) {
    // Have VoidPtrTy ArgValue but want to return an i32/i64.
    ArgValue = Builder.CreatePtrToInt(ArgValue, RegisterType);
    return Builder.CreateCall(F, { Metadata, ArgValue });
  }

  return Builder.CreateCall(F, { Metadata, ArgValue });
}

/// Return true if BuiltinID is an overloaded Neon intrinsic with an extra
/// argument that specifies the vector type.
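/// (Most NEON builtins carry that trailing NeonTypeFlags constant; the lane
/// get/set, SHA-1, and coprocessor-move builtins listed below are the
/// exceptions.)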
static bool HasExtraNeonArgument(unsigned BuiltinID) { switch (BuiltinID) { default: break; case NEON::BI__builtin_neon_vget_lane_i8: case NEON::BI__builtin_neon_vget_lane_i16: case NEON::BI__builtin_neon_vget_lane_i32: case NEON::BI__builtin_neon_vget_lane_i64: case NEON::BI__builtin_neon_vget_lane_f32: case NEON::BI__builtin_neon_vgetq_lane_i8: case NEON::BI__builtin_neon_vgetq_lane_i16: case NEON::BI__builtin_neon_vgetq_lane_i32: case NEON::BI__builtin_neon_vgetq_lane_i64: case NEON::BI__builtin_neon_vgetq_lane_f32: case NEON::BI__builtin_neon_vset_lane_i8: case NEON::BI__builtin_neon_vset_lane_i16: case NEON::BI__builtin_neon_vset_lane_i32: case NEON::BI__builtin_neon_vset_lane_i64: case NEON::BI__builtin_neon_vset_lane_f32: case NEON::BI__builtin_neon_vsetq_lane_i8: case NEON::BI__builtin_neon_vsetq_lane_i16: case NEON::BI__builtin_neon_vsetq_lane_i32: case NEON::BI__builtin_neon_vsetq_lane_i64: case NEON::BI__builtin_neon_vsetq_lane_f32: case NEON::BI__builtin_neon_vsha1h_u32: case NEON::BI__builtin_neon_vsha1cq_u32: case NEON::BI__builtin_neon_vsha1pq_u32: case NEON::BI__builtin_neon_vsha1mq_u32: case ARM::BI_MoveToCoprocessor: case ARM::BI_MoveToCoprocessor2: return false; } return true; } Value *CodeGenFunction::EmitARMBuiltinExpr(unsigned BuiltinID, const CallExpr *E) { if (auto Hint = GetValueForARMHint(BuiltinID)) return Hint; if (BuiltinID == ARM::BI__emit) { bool IsThumb = getTarget().getTriple().getArch() == llvm::Triple::thumb; llvm::FunctionType *FTy = llvm::FunctionType::get(VoidTy, /*Variadic=*/false); APSInt Value; if (!E->getArg(0)->EvaluateAsInt(Value, CGM.getContext())) llvm_unreachable("Sema will ensure that the parameter is constant"); uint64_t ZExtValue = Value.zextOrTrunc(IsThumb ? 16 : 32).getZExtValue(); llvm::InlineAsm *Emit = IsThumb ? 
        InlineAsm::get(FTy, ".inst.n 0x" + utohexstr(ZExtValue), "",
                       /*SideEffects=*/true)
      : InlineAsm::get(FTy, ".inst 0x" + utohexstr(ZExtValue), "",
                       /*SideEffects=*/true);

    return Builder.CreateCall(Emit);
  }

  if (BuiltinID == ARM::BI__builtin_arm_dbg) {
    Value *Option = EmitScalarExpr(E->getArg(0));
    return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::arm_dbg), Option);
  }

  if (BuiltinID == ARM::BI__builtin_arm_prefetch) {
    Value *Address = EmitScalarExpr(E->getArg(0));
    Value *RW = EmitScalarExpr(E->getArg(1));
    Value *IsData = EmitScalarExpr(E->getArg(2));

    // Locality is not supported on ARM target
    Value *Locality = llvm::ConstantInt::get(Int32Ty, 3);

    Value *F = CGM.getIntrinsic(Intrinsic::prefetch);
    return Builder.CreateCall(F, {Address, RW, Locality, IsData});
  }

  if (BuiltinID == ARM::BI__builtin_arm_rbit) {
    llvm::Value *Arg = EmitScalarExpr(E->getArg(0));
    return Builder.CreateCall(
        CGM.getIntrinsic(Intrinsic::bitreverse, Arg->getType()), Arg, "rbit");
  }

  if (BuiltinID == ARM::BI__clear_cache) {
    assert(E->getNumArgs() == 2 && "__clear_cache takes 2 arguments");
    const FunctionDecl *FD = E->getDirectCallee();
    Value *Ops[2];
    for (unsigned i = 0; i < 2; i++)
      Ops[i] = EmitScalarExpr(E->getArg(i));
    llvm::Type *Ty = CGM.getTypes().ConvertType(FD->getType());
    llvm::FunctionType *FTy = cast<llvm::FunctionType>(Ty);
    StringRef Name = FD->getName();
    return EmitNounwindRuntimeCall(CGM.CreateRuntimeFunction(FTy, Name), Ops);
  }

  if (BuiltinID == ARM::BI__builtin_arm_mcrr ||
      BuiltinID == ARM::BI__builtin_arm_mcrr2) {
    Function *F;

    switch (BuiltinID) {
    default: llvm_unreachable("unexpected builtin");
    case ARM::BI__builtin_arm_mcrr:
      F = CGM.getIntrinsic(Intrinsic::arm_mcrr);
      break;
    case ARM::BI__builtin_arm_mcrr2:
      F = CGM.getIntrinsic(Intrinsic::arm_mcrr2);
      break;
    }

    // MCRR{2} instruction has 5 operands but
    // the intrinsic has 4 because Rt and Rt2
    // are represented as a single unsigned 64
    // bit integer in the intrinsic definition
    // but internally it's represented as 2 32
    // bit integers.

    Value *Coproc = EmitScalarExpr(E->getArg(0));
    Value *Opc1 = EmitScalarExpr(E->getArg(1));
    Value *RtAndRt2 = EmitScalarExpr(E->getArg(2));
    Value *CRm = EmitScalarExpr(E->getArg(3));

    Value *C1 = llvm::ConstantInt::get(Int64Ty, 32);
    Value *Rt = Builder.CreateTruncOrBitCast(RtAndRt2, Int32Ty);
    Value *Rt2 = Builder.CreateLShr(RtAndRt2, C1);
    Rt2 = Builder.CreateTruncOrBitCast(Rt2, Int32Ty);

    return Builder.CreateCall(F, {Coproc, Opc1, Rt, Rt2, CRm});
  }

  if (BuiltinID == ARM::BI__builtin_arm_mrrc ||
      BuiltinID == ARM::BI__builtin_arm_mrrc2) {
    Function *F;

    switch (BuiltinID) {
    default: llvm_unreachable("unexpected builtin");
    case ARM::BI__builtin_arm_mrrc:
      F = CGM.getIntrinsic(Intrinsic::arm_mrrc);
      break;
    case ARM::BI__builtin_arm_mrrc2:
      F = CGM.getIntrinsic(Intrinsic::arm_mrrc2);
      break;
    }

    Value *Coproc = EmitScalarExpr(E->getArg(0));
    Value *Opc1 = EmitScalarExpr(E->getArg(1));
    Value *CRm  = EmitScalarExpr(E->getArg(2));
    Value *RtAndRt2 = Builder.CreateCall(F, {Coproc, Opc1, CRm});

    // Returns an unsigned 64 bit integer, represented
    // as two 32 bit integers.
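    // Reassemble it as (zext(Rt) << 32) | zext(Rt1); element 1 of the
    // intrinsic's result becomes the high half.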
Value *Rt = Builder.CreateExtractValue(RtAndRt2, 1); Value *Rt1 = Builder.CreateExtractValue(RtAndRt2, 0); Rt = Builder.CreateZExt(Rt, Int64Ty); Rt1 = Builder.CreateZExt(Rt1, Int64Ty); Value *ShiftCast = llvm::ConstantInt::get(Int64Ty, 32); RtAndRt2 = Builder.CreateShl(Rt, ShiftCast, "shl", true); RtAndRt2 = Builder.CreateOr(RtAndRt2, Rt1); return Builder.CreateBitCast(RtAndRt2, ConvertType(E->getType())); } if (BuiltinID == ARM::BI__builtin_arm_ldrexd || ((BuiltinID == ARM::BI__builtin_arm_ldrex || BuiltinID == ARM::BI__builtin_arm_ldaex) && getContext().getTypeSize(E->getType()) == 64) || BuiltinID == ARM::BI__ldrexd) { Function *F; switch (BuiltinID) { default: llvm_unreachable("unexpected builtin"); case ARM::BI__builtin_arm_ldaex: F = CGM.getIntrinsic(Intrinsic::arm_ldaexd); break; case ARM::BI__builtin_arm_ldrexd: case ARM::BI__builtin_arm_ldrex: case ARM::BI__ldrexd: F = CGM.getIntrinsic(Intrinsic::arm_ldrexd); break; } Value *LdPtr = EmitScalarExpr(E->getArg(0)); Value *Val = Builder.CreateCall(F, Builder.CreateBitCast(LdPtr, Int8PtrTy), "ldrexd"); Value *Val0 = Builder.CreateExtractValue(Val, 1); Value *Val1 = Builder.CreateExtractValue(Val, 0); Val0 = Builder.CreateZExt(Val0, Int64Ty); Val1 = Builder.CreateZExt(Val1, Int64Ty); Value *ShiftCst = llvm::ConstantInt::get(Int64Ty, 32); Val = Builder.CreateShl(Val0, ShiftCst, "shl", true /* nuw */); Val = Builder.CreateOr(Val, Val1); return Builder.CreateBitCast(Val, ConvertType(E->getType())); } if (BuiltinID == ARM::BI__builtin_arm_ldrex || BuiltinID == ARM::BI__builtin_arm_ldaex) { Value *LoadAddr = EmitScalarExpr(E->getArg(0)); QualType Ty = E->getType(); llvm::Type *RealResTy = ConvertType(Ty); llvm::Type *PtrTy = llvm::IntegerType::get( getLLVMContext(), getContext().getTypeSize(Ty))->getPointerTo(); LoadAddr = Builder.CreateBitCast(LoadAddr, PtrTy); Function *F = CGM.getIntrinsic(BuiltinID == ARM::BI__builtin_arm_ldaex ? Intrinsic::arm_ldaex : Intrinsic::arm_ldrex, PtrTy); Value *Val = Builder.CreateCall(F, LoadAddr, "ldrex"); if (RealResTy->isPointerTy()) return Builder.CreateIntToPtr(Val, RealResTy); else { llvm::Type *IntResTy = llvm::IntegerType::get( getLLVMContext(), CGM.getDataLayout().getTypeSizeInBits(RealResTy)); Val = Builder.CreateTruncOrBitCast(Val, IntResTy); return Builder.CreateBitCast(Val, RealResTy); } } if (BuiltinID == ARM::BI__builtin_arm_strexd || ((BuiltinID == ARM::BI__builtin_arm_stlex || BuiltinID == ARM::BI__builtin_arm_strex) && getContext().getTypeSize(E->getArg(0)->getType()) == 64)) { Function *F = CGM.getIntrinsic(BuiltinID == ARM::BI__builtin_arm_stlex ? 
Intrinsic::arm_stlexd : Intrinsic::arm_strexd); llvm::Type *STy = llvm::StructType::get(Int32Ty, Int32Ty, nullptr); Address Tmp = CreateMemTemp(E->getArg(0)->getType()); Value *Val = EmitScalarExpr(E->getArg(0)); Builder.CreateStore(Val, Tmp); Address LdPtr = Builder.CreateBitCast(Tmp,llvm::PointerType::getUnqual(STy)); Val = Builder.CreateLoad(LdPtr); Value *Arg0 = Builder.CreateExtractValue(Val, 0); Value *Arg1 = Builder.CreateExtractValue(Val, 1); Value *StPtr = Builder.CreateBitCast(EmitScalarExpr(E->getArg(1)), Int8PtrTy); return Builder.CreateCall(F, {Arg0, Arg1, StPtr}, "strexd"); } if (BuiltinID == ARM::BI__builtin_arm_strex || BuiltinID == ARM::BI__builtin_arm_stlex) { Value *StoreVal = EmitScalarExpr(E->getArg(0)); Value *StoreAddr = EmitScalarExpr(E->getArg(1)); QualType Ty = E->getArg(0)->getType(); llvm::Type *StoreTy = llvm::IntegerType::get(getLLVMContext(), getContext().getTypeSize(Ty)); StoreAddr = Builder.CreateBitCast(StoreAddr, StoreTy->getPointerTo()); if (StoreVal->getType()->isPointerTy()) StoreVal = Builder.CreatePtrToInt(StoreVal, Int32Ty); else { llvm::Type *IntTy = llvm::IntegerType::get( getLLVMContext(), CGM.getDataLayout().getTypeSizeInBits(StoreVal->getType())); StoreVal = Builder.CreateBitCast(StoreVal, IntTy); StoreVal = Builder.CreateZExtOrBitCast(StoreVal, Int32Ty); } Function *F = CGM.getIntrinsic(BuiltinID == ARM::BI__builtin_arm_stlex ? Intrinsic::arm_stlex : Intrinsic::arm_strex, StoreAddr->getType()); return Builder.CreateCall(F, {StoreVal, StoreAddr}, "strex"); } switch (BuiltinID) { case ARM::BI__iso_volatile_load8: case ARM::BI__iso_volatile_load16: case ARM::BI__iso_volatile_load32: case ARM::BI__iso_volatile_load64: { Value *Ptr = EmitScalarExpr(E->getArg(0)); QualType ElTy = E->getArg(0)->getType()->getPointeeType(); CharUnits LoadSize = getContext().getTypeSizeInChars(ElTy); llvm::Type *ITy = llvm::IntegerType::get(getLLVMContext(), LoadSize.getQuantity() * 8); Ptr = Builder.CreateBitCast(Ptr, ITy->getPointerTo()); llvm::LoadInst *Load = Builder.CreateAlignedLoad(Ptr, LoadSize); Load->setVolatile(true); return Load; } case ARM::BI__iso_volatile_store8: case ARM::BI__iso_volatile_store16: case ARM::BI__iso_volatile_store32: case ARM::BI__iso_volatile_store64: { Value *Ptr = EmitScalarExpr(E->getArg(0)); Value *Value = EmitScalarExpr(E->getArg(1)); QualType ElTy = E->getArg(0)->getType()->getPointeeType(); CharUnits StoreSize = getContext().getTypeSizeInChars(ElTy); llvm::Type *ITy = llvm::IntegerType::get(getLLVMContext(), StoreSize.getQuantity() * 8); Ptr = Builder.CreateBitCast(Ptr, ITy->getPointerTo()); llvm::StoreInst *Store = Builder.CreateAlignedStore(Value, Ptr, StoreSize); Store->setVolatile(true); return Store; } } if (BuiltinID == ARM::BI__builtin_arm_clrex) { Function *F = CGM.getIntrinsic(Intrinsic::arm_clrex); return Builder.CreateCall(F); } // CRC32 Intrinsic::ID CRCIntrinsicID = Intrinsic::not_intrinsic; switch (BuiltinID) { case ARM::BI__builtin_arm_crc32b: CRCIntrinsicID = Intrinsic::arm_crc32b; break; case ARM::BI__builtin_arm_crc32cb: CRCIntrinsicID = Intrinsic::arm_crc32cb; break; case ARM::BI__builtin_arm_crc32h: CRCIntrinsicID = Intrinsic::arm_crc32h; break; case ARM::BI__builtin_arm_crc32ch: CRCIntrinsicID = Intrinsic::arm_crc32ch; break; case ARM::BI__builtin_arm_crc32w: case ARM::BI__builtin_arm_crc32d: CRCIntrinsicID = Intrinsic::arm_crc32w; break; case ARM::BI__builtin_arm_crc32cw: case ARM::BI__builtin_arm_crc32cd: CRCIntrinsicID = Intrinsic::arm_crc32cw; break; } if (CRCIntrinsicID != Intrinsic::not_intrinsic) { 
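    // Every CRC32 builtin takes the running CRC as the first operand and
    // the new data as the second; only the 64-bit-data forms need the
    // split-and-chain handling below.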
    Value *Arg0 = EmitScalarExpr(E->getArg(0));
    Value *Arg1 = EmitScalarExpr(E->getArg(1));

    // crc32{c,}d intrinsics are implemented as two calls to crc32{c,}w
    // intrinsics, hence we need different codegen for these cases.
    if (BuiltinID == ARM::BI__builtin_arm_crc32d ||
        BuiltinID == ARM::BI__builtin_arm_crc32cd) {
      Value *C1 = llvm::ConstantInt::get(Int64Ty, 32);
      Value *Arg1a = Builder.CreateTruncOrBitCast(Arg1, Int32Ty);
      Value *Arg1b = Builder.CreateLShr(Arg1, C1);
      Arg1b = Builder.CreateTruncOrBitCast(Arg1b, Int32Ty);

      Function *F = CGM.getIntrinsic(CRCIntrinsicID);
      Value *Res = Builder.CreateCall(F, {Arg0, Arg1a});
      return Builder.CreateCall(F, {Res, Arg1b});
    } else {
      Arg1 = Builder.CreateZExtOrBitCast(Arg1, Int32Ty);

      Function *F = CGM.getIntrinsic(CRCIntrinsicID);
      return Builder.CreateCall(F, {Arg0, Arg1});
    }
  }

  if (BuiltinID == ARM::BI__builtin_arm_rsr ||
      BuiltinID == ARM::BI__builtin_arm_rsr64 ||
      BuiltinID == ARM::BI__builtin_arm_rsrp ||
      BuiltinID == ARM::BI__builtin_arm_wsr ||
      BuiltinID == ARM::BI__builtin_arm_wsr64 ||
      BuiltinID == ARM::BI__builtin_arm_wsrp) {

    bool IsRead = BuiltinID == ARM::BI__builtin_arm_rsr ||
                  BuiltinID == ARM::BI__builtin_arm_rsr64 ||
                  BuiltinID == ARM::BI__builtin_arm_rsrp;

    bool IsPointerBuiltin = BuiltinID == ARM::BI__builtin_arm_rsrp ||
                            BuiltinID == ARM::BI__builtin_arm_wsrp;

    bool Is64Bit = BuiltinID == ARM::BI__builtin_arm_rsr64 ||
                   BuiltinID == ARM::BI__builtin_arm_wsr64;

    llvm::Type *ValueType;
    llvm::Type *RegisterType;
    if (IsPointerBuiltin) {
      ValueType = VoidPtrTy;
      RegisterType = Int32Ty;
    } else if (Is64Bit) {
      ValueType = RegisterType = Int64Ty;
    } else {
      ValueType = RegisterType = Int32Ty;
    }

    return EmitSpecialRegisterBuiltin(*this, E, RegisterType, ValueType, IsRead);
  }

  // Find out if any arguments are required to be integer constant
  // expressions.
  unsigned ICEArguments = 0;
  ASTContext::GetBuiltinTypeError Error;
  getContext().GetBuiltinType(BuiltinID, Error, &ICEArguments);
  assert(Error == ASTContext::GE_None && "Should not codegen an error");

  auto getAlignmentValue32 = [&](Address addr) -> Value* {
    return Builder.getInt32(addr.getAlignment().getQuantity());
  };

  Address PtrOp0 = Address::invalid();
  Address PtrOp1 = Address::invalid();
  SmallVector<Value*, 4> Ops;
  bool HasExtraArg = HasExtraNeonArgument(BuiltinID);
  unsigned NumArgs = E->getNumArgs() - (HasExtraArg ? 1 : 0);
  for (unsigned i = 0, e = NumArgs; i != e; i++) {
    if (i == 0) {
      switch (BuiltinID) {
      case NEON::BI__builtin_neon_vld1_v:
      case NEON::BI__builtin_neon_vld1q_v:
      case NEON::BI__builtin_neon_vld1q_lane_v:
      case NEON::BI__builtin_neon_vld1_lane_v:
      case NEON::BI__builtin_neon_vld1_dup_v:
      case NEON::BI__builtin_neon_vld1q_dup_v:
      case NEON::BI__builtin_neon_vst1_v:
      case NEON::BI__builtin_neon_vst1q_v:
      case NEON::BI__builtin_neon_vst1q_lane_v:
      case NEON::BI__builtin_neon_vst1_lane_v:
      case NEON::BI__builtin_neon_vst2_v:
      case NEON::BI__builtin_neon_vst2q_v:
      case NEON::BI__builtin_neon_vst2_lane_v:
      case NEON::BI__builtin_neon_vst2q_lane_v:
      case NEON::BI__builtin_neon_vst3_v:
      case NEON::BI__builtin_neon_vst3q_v:
      case NEON::BI__builtin_neon_vst3_lane_v:
      case NEON::BI__builtin_neon_vst3q_lane_v:
      case NEON::BI__builtin_neon_vst4_v:
      case NEON::BI__builtin_neon_vst4q_v:
      case NEON::BI__builtin_neon_vst4_lane_v:
      case NEON::BI__builtin_neon_vst4q_lane_v:
        // Get the alignment for the argument in addition to the value;
        // we'll use it later.
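        // (EmitPointerWithAlignment computes the best known alignment of the
        // pointer expression; the NEON load/store intrinsics take it as an
        // explicit i32 operand.)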
PtrOp0 = EmitPointerWithAlignment(E->getArg(0)); Ops.push_back(PtrOp0.getPointer()); continue; } } if (i == 1) { switch (BuiltinID) { case NEON::BI__builtin_neon_vld2_v: case NEON::BI__builtin_neon_vld2q_v: case NEON::BI__builtin_neon_vld3_v: case NEON::BI__builtin_neon_vld3q_v: case NEON::BI__builtin_neon_vld4_v: case NEON::BI__builtin_neon_vld4q_v: case NEON::BI__builtin_neon_vld2_lane_v: case NEON::BI__builtin_neon_vld2q_lane_v: case NEON::BI__builtin_neon_vld3_lane_v: case NEON::BI__builtin_neon_vld3q_lane_v: case NEON::BI__builtin_neon_vld4_lane_v: case NEON::BI__builtin_neon_vld4q_lane_v: case NEON::BI__builtin_neon_vld2_dup_v: case NEON::BI__builtin_neon_vld3_dup_v: case NEON::BI__builtin_neon_vld4_dup_v: // Get the alignment for the argument in addition to the value; // we'll use it later. PtrOp1 = EmitPointerWithAlignment(E->getArg(1)); Ops.push_back(PtrOp1.getPointer()); continue; } } if ((ICEArguments & (1 << i)) == 0) { Ops.push_back(EmitScalarExpr(E->getArg(i))); } else { // If this is required to be a constant, constant fold it so that we know // that the generated intrinsic gets a ConstantInt. llvm::APSInt Result; bool IsConst = E->getArg(i)->isIntegerConstantExpr(Result, getContext()); assert(IsConst && "Constant arg isn't actually constant?"); (void)IsConst; Ops.push_back(llvm::ConstantInt::get(getLLVMContext(), Result)); } } switch (BuiltinID) { default: break; case NEON::BI__builtin_neon_vget_lane_i8: case NEON::BI__builtin_neon_vget_lane_i16: case NEON::BI__builtin_neon_vget_lane_i32: case NEON::BI__builtin_neon_vget_lane_i64: case NEON::BI__builtin_neon_vget_lane_f32: case NEON::BI__builtin_neon_vgetq_lane_i8: case NEON::BI__builtin_neon_vgetq_lane_i16: case NEON::BI__builtin_neon_vgetq_lane_i32: case NEON::BI__builtin_neon_vgetq_lane_i64: case NEON::BI__builtin_neon_vgetq_lane_f32: return Builder.CreateExtractElement(Ops[0], Ops[1], "vget_lane"); case NEON::BI__builtin_neon_vset_lane_i8: case NEON::BI__builtin_neon_vset_lane_i16: case NEON::BI__builtin_neon_vset_lane_i32: case NEON::BI__builtin_neon_vset_lane_i64: case NEON::BI__builtin_neon_vset_lane_f32: case NEON::BI__builtin_neon_vsetq_lane_i8: case NEON::BI__builtin_neon_vsetq_lane_i16: case NEON::BI__builtin_neon_vsetq_lane_i32: case NEON::BI__builtin_neon_vsetq_lane_i64: case NEON::BI__builtin_neon_vsetq_lane_f32: return Builder.CreateInsertElement(Ops[1], Ops[0], Ops[2], "vset_lane"); case NEON::BI__builtin_neon_vsha1h_u32: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_sha1h), Ops, "vsha1h"); case NEON::BI__builtin_neon_vsha1cq_u32: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_sha1c), Ops, "vsha1h"); case NEON::BI__builtin_neon_vsha1pq_u32: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_sha1p), Ops, "vsha1h"); case NEON::BI__builtin_neon_vsha1mq_u32: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_sha1m), Ops, "vsha1h"); // The ARM _MoveToCoprocessor builtins put the input register value as // the first argument, but the LLVM intrinsic expects it as the third one. case ARM::BI_MoveToCoprocessor: case ARM::BI_MoveToCoprocessor2: { Function *F = CGM.getIntrinsic(BuiltinID == ARM::BI_MoveToCoprocessor ? 
Intrinsic::arm_mcr : Intrinsic::arm_mcr2); return Builder.CreateCall(F, {Ops[1], Ops[2], Ops[0], Ops[3], Ops[4], Ops[5]}); } case ARM::BI_BitScanForward: case ARM::BI_BitScanForward64: return EmitMSVCBuiltinExpr(MSVCIntrin::_BitScanForward, E); case ARM::BI_BitScanReverse: case ARM::BI_BitScanReverse64: return EmitMSVCBuiltinExpr(MSVCIntrin::_BitScanReverse, E); case ARM::BI_InterlockedAnd64: return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedAnd, E); case ARM::BI_InterlockedExchange64: return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedExchange, E); case ARM::BI_InterlockedExchangeAdd64: return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedExchangeAdd, E); case ARM::BI_InterlockedExchangeSub64: return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedExchangeSub, E); case ARM::BI_InterlockedOr64: return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedOr, E); case ARM::BI_InterlockedXor64: return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedXor, E); case ARM::BI_InterlockedDecrement64: return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedDecrement, E); case ARM::BI_InterlockedIncrement64: return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedIncrement, E); } // Get the last argument, which specifies the vector type. assert(HasExtraArg); llvm::APSInt Result; const Expr *Arg = E->getArg(E->getNumArgs()-1); if (!Arg->isIntegerConstantExpr(Result, getContext())) return nullptr; if (BuiltinID == ARM::BI__builtin_arm_vcvtr_f || BuiltinID == ARM::BI__builtin_arm_vcvtr_d) { // Determine the overloaded type of this builtin. llvm::Type *Ty; if (BuiltinID == ARM::BI__builtin_arm_vcvtr_f) Ty = FloatTy; else Ty = DoubleTy; // Determine whether this is an unsigned conversion or not. bool usgn = Result.getZExtValue() == 1; unsigned Int = usgn ? Intrinsic::arm_vcvtru : Intrinsic::arm_vcvtr; // Call the appropriate intrinsic. Function *F = CGM.getIntrinsic(Int, Ty); return Builder.CreateCall(F, Ops, "vcvtr"); } // Determine the type of this overloaded NEON intrinsic. NeonTypeFlags Type(Result.getZExtValue()); bool usgn = Type.isUnsigned(); bool rightShift = false; llvm::VectorType *VTy = GetNeonType(this, Type); llvm::Type *Ty = VTy; if (!Ty) return nullptr; // Many NEON builtins have identical semantics and uses in ARM and // AArch64. Emit these in a single function. auto IntrinsicMap = makeArrayRef(ARMSIMDIntrinsicMap); const NeonIntrinsicInfo *Builtin = findNeonIntrinsicInMap( IntrinsicMap, BuiltinID, NEONSIMDIntrinsicsProvenSorted); if (Builtin) return EmitCommonNeonBuiltinExpr( Builtin->BuiltinID, Builtin->LLVMIntrinsic, Builtin->AltLLVMIntrinsic, Builtin->NameHint, Builtin->TypeModifier, E, Ops, PtrOp0, PtrOp1); unsigned Int; switch (BuiltinID) { default: return nullptr; case NEON::BI__builtin_neon_vld1q_lane_v: // Handle 64-bit integer elements as a special case. Use shuffles of // one-element vectors to avoid poor code for i64 in the backend. if (VTy->getElementType()->isIntegerTy(64)) { // Extract the other lane. Ops[1] = Builder.CreateBitCast(Ops[1], Ty); uint32_t Lane = cast(Ops[2])->getZExtValue(); Value *SV = llvm::ConstantVector::get(ConstantInt::get(Int32Ty, 1-Lane)); Ops[1] = Builder.CreateShuffleVector(Ops[1], Ops[1], SV); // Load the value as a one-element vector. Ty = llvm::VectorType::get(VTy->getElementType(), 1); llvm::Type *Tys[] = {Ty, Int8PtrTy}; Function *F = CGM.getIntrinsic(Intrinsic::arm_neon_vld1, Tys); Value *Align = getAlignmentValue32(PtrOp0); Value *Ld = Builder.CreateCall(F, {Ops[0], Align}); // Combine them. 
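      // ... i.e. shuffle the kept lane and the newly loaded one-element
      // vector back into a <2 x i64>, with the load at the requested lane.
      // A sketch of the emitted IR (abbreviated, Lane is a constant):
      //   %keep = shufflevector <2 x i64> %v, <2 x i64> %v, <1 x i32> <1-Lane>
      //   %ld   = call <1 x i64> @llvm.arm.neon.vld1.v1i64.p0i8(i8* %p, i32 %a)
      //   %res  = shufflevector %keep, %ld, <2 x i32> <1-Lane, Lane>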
uint32_t Indices[] = {1 - Lane, Lane}; SV = llvm::ConstantDataVector::get(getLLVMContext(), Indices); return Builder.CreateShuffleVector(Ops[1], Ld, SV, "vld1q_lane"); } // fall through case NEON::BI__builtin_neon_vld1_lane_v: { Ops[1] = Builder.CreateBitCast(Ops[1], Ty); PtrOp0 = Builder.CreateElementBitCast(PtrOp0, VTy->getElementType()); Value *Ld = Builder.CreateLoad(PtrOp0); return Builder.CreateInsertElement(Ops[1], Ld, Ops[2], "vld1_lane"); } case NEON::BI__builtin_neon_vld2_dup_v: case NEON::BI__builtin_neon_vld3_dup_v: case NEON::BI__builtin_neon_vld4_dup_v: { // Handle 64-bit elements as a special-case. There is no "dup" needed. if (VTy->getElementType()->getPrimitiveSizeInBits() == 64) { switch (BuiltinID) { case NEON::BI__builtin_neon_vld2_dup_v: Int = Intrinsic::arm_neon_vld2; break; case NEON::BI__builtin_neon_vld3_dup_v: Int = Intrinsic::arm_neon_vld3; break; case NEON::BI__builtin_neon_vld4_dup_v: Int = Intrinsic::arm_neon_vld4; break; default: llvm_unreachable("unknown vld_dup intrinsic?"); } llvm::Type *Tys[] = {Ty, Int8PtrTy}; Function *F = CGM.getIntrinsic(Int, Tys); llvm::Value *Align = getAlignmentValue32(PtrOp1); Ops[1] = Builder.CreateCall(F, {Ops[1], Align}, "vld_dup"); Ty = llvm::PointerType::getUnqual(Ops[1]->getType()); Ops[0] = Builder.CreateBitCast(Ops[0], Ty); return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]); } switch (BuiltinID) { case NEON::BI__builtin_neon_vld2_dup_v: Int = Intrinsic::arm_neon_vld2lane; break; case NEON::BI__builtin_neon_vld3_dup_v: Int = Intrinsic::arm_neon_vld3lane; break; case NEON::BI__builtin_neon_vld4_dup_v: Int = Intrinsic::arm_neon_vld4lane; break; default: llvm_unreachable("unknown vld_dup intrinsic?"); } llvm::Type *Tys[] = {Ty, Int8PtrTy}; Function *F = CGM.getIntrinsic(Int, Tys); llvm::StructType *STy = cast(F->getReturnType()); SmallVector Args; Args.push_back(Ops[1]); Args.append(STy->getNumElements(), UndefValue::get(Ty)); llvm::Constant *CI = ConstantInt::get(Int32Ty, 0); Args.push_back(CI); Args.push_back(getAlignmentValue32(PtrOp1)); Ops[1] = Builder.CreateCall(F, Args, "vld_dup"); // splat lane 0 to all elts in each vector of the result. for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) { Value *Val = Builder.CreateExtractValue(Ops[1], i); Value *Elt = Builder.CreateBitCast(Val, Ty); Elt = EmitNeonSplat(Elt, CI); Elt = Builder.CreateBitCast(Elt, Val->getType()); Ops[1] = Builder.CreateInsertValue(Ops[1], Elt, i); } Ty = llvm::PointerType::getUnqual(Ops[1]->getType()); Ops[0] = Builder.CreateBitCast(Ops[0], Ty); return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]); } case NEON::BI__builtin_neon_vqrshrn_n_v: Int = usgn ? Intrinsic::arm_neon_vqrshiftnu : Intrinsic::arm_neon_vqrshiftns; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vqrshrn_n", 1, true); case NEON::BI__builtin_neon_vqrshrun_n_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vqrshiftnsu, Ty), Ops, "vqrshrun_n", 1, true); case NEON::BI__builtin_neon_vqshrn_n_v: Int = usgn ? 
Intrinsic::arm_neon_vqshiftnu : Intrinsic::arm_neon_vqshiftns; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vqshrn_n", 1, true); case NEON::BI__builtin_neon_vqshrun_n_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vqshiftnsu, Ty), Ops, "vqshrun_n", 1, true); case NEON::BI__builtin_neon_vrecpe_v: case NEON::BI__builtin_neon_vrecpeq_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vrecpe, Ty), Ops, "vrecpe"); case NEON::BI__builtin_neon_vrshrn_n_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vrshiftn, Ty), Ops, "vrshrn_n", 1, true); case NEON::BI__builtin_neon_vrsra_n_v: case NEON::BI__builtin_neon_vrsraq_n_v: Ops[0] = Builder.CreateBitCast(Ops[0], Ty); Ops[1] = Builder.CreateBitCast(Ops[1], Ty); Ops[2] = EmitNeonShiftVector(Ops[2], Ty, true); Int = usgn ? Intrinsic::arm_neon_vrshiftu : Intrinsic::arm_neon_vrshifts; Ops[1] = Builder.CreateCall(CGM.getIntrinsic(Int, Ty), {Ops[1], Ops[2]}); return Builder.CreateAdd(Ops[0], Ops[1], "vrsra_n"); case NEON::BI__builtin_neon_vsri_n_v: case NEON::BI__builtin_neon_vsriq_n_v: rightShift = true; case NEON::BI__builtin_neon_vsli_n_v: case NEON::BI__builtin_neon_vsliq_n_v: Ops[2] = EmitNeonShiftVector(Ops[2], Ty, rightShift); return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vshiftins, Ty), Ops, "vsli_n"); case NEON::BI__builtin_neon_vsra_n_v: case NEON::BI__builtin_neon_vsraq_n_v: Ops[0] = Builder.CreateBitCast(Ops[0], Ty); Ops[1] = EmitNeonRShiftImm(Ops[1], Ops[2], Ty, usgn, "vsra_n"); return Builder.CreateAdd(Ops[0], Ops[1]); case NEON::BI__builtin_neon_vst1q_lane_v: // Handle 64-bit integer elements as a special case. Use a shuffle to get // a one-element vector and avoid poor code for i64 in the backend. if (VTy->getElementType()->isIntegerTy(64)) { Ops[1] = Builder.CreateBitCast(Ops[1], Ty); Value *SV = llvm::ConstantVector::get(cast(Ops[2])); Ops[1] = Builder.CreateShuffleVector(Ops[1], Ops[1], SV); Ops[2] = getAlignmentValue32(PtrOp0); llvm::Type *Tys[] = {Int8PtrTy, Ops[1]->getType()}; return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::arm_neon_vst1, Tys), Ops); } // fall through case NEON::BI__builtin_neon_vst1_lane_v: { Ops[1] = Builder.CreateBitCast(Ops[1], Ty); Ops[1] = Builder.CreateExtractElement(Ops[1], Ops[2]); Ty = llvm::PointerType::getUnqual(Ops[1]->getType()); auto St = Builder.CreateStore(Ops[1], Builder.CreateBitCast(PtrOp0, Ty)); return St; } case NEON::BI__builtin_neon_vtbl1_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vtbl1), Ops, "vtbl1"); case NEON::BI__builtin_neon_vtbl2_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vtbl2), Ops, "vtbl2"); case NEON::BI__builtin_neon_vtbl3_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vtbl3), Ops, "vtbl3"); case NEON::BI__builtin_neon_vtbl4_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vtbl4), Ops, "vtbl4"); case NEON::BI__builtin_neon_vtbx1_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vtbx1), Ops, "vtbx1"); case NEON::BI__builtin_neon_vtbx2_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vtbx2), Ops, "vtbx2"); case NEON::BI__builtin_neon_vtbx3_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vtbx3), Ops, "vtbx3"); case NEON::BI__builtin_neon_vtbx4_v: return EmitNeonCall(CGM.getIntrinsic(Intrinsic::arm_neon_vtbx4), Ops, "vtbx4"); } } static Value *EmitAArch64TblBuiltinExpr(CodeGenFunction &CGF, unsigned BuiltinID, const CallExpr *E, SmallVectorImpl &Ops) { unsigned int Int = 0; const char *s = nullptr; switch (BuiltinID) { default: return 
nullptr; case NEON::BI__builtin_neon_vtbl1_v: case NEON::BI__builtin_neon_vqtbl1_v: case NEON::BI__builtin_neon_vqtbl1q_v: case NEON::BI__builtin_neon_vtbl2_v: case NEON::BI__builtin_neon_vqtbl2_v: case NEON::BI__builtin_neon_vqtbl2q_v: case NEON::BI__builtin_neon_vtbl3_v: case NEON::BI__builtin_neon_vqtbl3_v: case NEON::BI__builtin_neon_vqtbl3q_v: case NEON::BI__builtin_neon_vtbl4_v: case NEON::BI__builtin_neon_vqtbl4_v: case NEON::BI__builtin_neon_vqtbl4q_v: break; case NEON::BI__builtin_neon_vtbx1_v: case NEON::BI__builtin_neon_vqtbx1_v: case NEON::BI__builtin_neon_vqtbx1q_v: case NEON::BI__builtin_neon_vtbx2_v: case NEON::BI__builtin_neon_vqtbx2_v: case NEON::BI__builtin_neon_vqtbx2q_v: case NEON::BI__builtin_neon_vtbx3_v: case NEON::BI__builtin_neon_vqtbx3_v: case NEON::BI__builtin_neon_vqtbx3q_v: case NEON::BI__builtin_neon_vtbx4_v: case NEON::BI__builtin_neon_vqtbx4_v: case NEON::BI__builtin_neon_vqtbx4q_v: break; } assert(E->getNumArgs() >= 3); // Get the last argument, which specifies the vector type. llvm::APSInt Result; const Expr *Arg = E->getArg(E->getNumArgs() - 1); if (!Arg->isIntegerConstantExpr(Result, CGF.getContext())) return nullptr; // Determine the type of this overloaded NEON intrinsic. NeonTypeFlags Type(Result.getZExtValue()); llvm::VectorType *Ty = GetNeonType(&CGF, Type); if (!Ty) return nullptr; CodeGen::CGBuilderTy &Builder = CGF.Builder; // AArch64 scalar builtins are not overloaded, they do not have an extra // argument that specifies the vector type, need to handle each case. switch (BuiltinID) { case NEON::BI__builtin_neon_vtbl1_v: { return packTBLDVectorList(CGF, makeArrayRef(Ops).slice(0, 1), nullptr, Ops[1], Ty, Intrinsic::aarch64_neon_tbl1, "vtbl1"); } case NEON::BI__builtin_neon_vtbl2_v: { return packTBLDVectorList(CGF, makeArrayRef(Ops).slice(0, 2), nullptr, Ops[2], Ty, Intrinsic::aarch64_neon_tbl1, "vtbl1"); } case NEON::BI__builtin_neon_vtbl3_v: { return packTBLDVectorList(CGF, makeArrayRef(Ops).slice(0, 3), nullptr, Ops[3], Ty, Intrinsic::aarch64_neon_tbl2, "vtbl2"); } case NEON::BI__builtin_neon_vtbl4_v: { return packTBLDVectorList(CGF, makeArrayRef(Ops).slice(0, 4), nullptr, Ops[4], Ty, Intrinsic::aarch64_neon_tbl2, "vtbl2"); } case NEON::BI__builtin_neon_vtbx1_v: { Value *TblRes = packTBLDVectorList(CGF, makeArrayRef(Ops).slice(1, 1), nullptr, Ops[2], Ty, Intrinsic::aarch64_neon_tbl1, "vtbl1"); llvm::Constant *EightV = ConstantInt::get(Ty, 8); Value *CmpRes = Builder.CreateICmp(ICmpInst::ICMP_UGE, Ops[2], EightV); CmpRes = Builder.CreateSExt(CmpRes, Ty); Value *EltsFromInput = Builder.CreateAnd(CmpRes, Ops[0]); Value *EltsFromTbl = Builder.CreateAnd(Builder.CreateNot(CmpRes), TblRes); return Builder.CreateOr(EltsFromInput, EltsFromTbl, "vtbx"); } case NEON::BI__builtin_neon_vtbx2_v: { return packTBLDVectorList(CGF, makeArrayRef(Ops).slice(1, 2), Ops[0], Ops[3], Ty, Intrinsic::aarch64_neon_tbx1, "vtbx1"); } case NEON::BI__builtin_neon_vtbx3_v: { Value *TblRes = packTBLDVectorList(CGF, makeArrayRef(Ops).slice(1, 3), nullptr, Ops[4], Ty, Intrinsic::aarch64_neon_tbl2, "vtbl2"); llvm::Constant *TwentyFourV = ConstantInt::get(Ty, 24); Value *CmpRes = Builder.CreateICmp(ICmpInst::ICMP_UGE, Ops[4], TwentyFourV); CmpRes = Builder.CreateSExt(CmpRes, Ty); Value *EltsFromInput = Builder.CreateAnd(CmpRes, Ops[0]); Value *EltsFromTbl = Builder.CreateAnd(Builder.CreateNot(CmpRes), TblRes); return Builder.CreateOr(EltsFromInput, EltsFromTbl, "vtbx"); } case NEON::BI__builtin_neon_vtbx4_v: { return packTBLDVectorList(CGF, makeArrayRef(Ops).slice(1, 4), Ops[0], 
Ops[5], Ty, Intrinsic::aarch64_neon_tbx2, "vtbx2"); } case NEON::BI__builtin_neon_vqtbl1_v: case NEON::BI__builtin_neon_vqtbl1q_v: Int = Intrinsic::aarch64_neon_tbl1; s = "vtbl1"; break; case NEON::BI__builtin_neon_vqtbl2_v: case NEON::BI__builtin_neon_vqtbl2q_v: { Int = Intrinsic::aarch64_neon_tbl2; s = "vtbl2"; break; case NEON::BI__builtin_neon_vqtbl3_v: case NEON::BI__builtin_neon_vqtbl3q_v: Int = Intrinsic::aarch64_neon_tbl3; s = "vtbl3"; break; case NEON::BI__builtin_neon_vqtbl4_v: case NEON::BI__builtin_neon_vqtbl4q_v: Int = Intrinsic::aarch64_neon_tbl4; s = "vtbl4"; break; case NEON::BI__builtin_neon_vqtbx1_v: case NEON::BI__builtin_neon_vqtbx1q_v: Int = Intrinsic::aarch64_neon_tbx1; s = "vtbx1"; break; case NEON::BI__builtin_neon_vqtbx2_v: case NEON::BI__builtin_neon_vqtbx2q_v: Int = Intrinsic::aarch64_neon_tbx2; s = "vtbx2"; break; case NEON::BI__builtin_neon_vqtbx3_v: case NEON::BI__builtin_neon_vqtbx3q_v: Int = Intrinsic::aarch64_neon_tbx3; s = "vtbx3"; break; case NEON::BI__builtin_neon_vqtbx4_v: case NEON::BI__builtin_neon_vqtbx4q_v: Int = Intrinsic::aarch64_neon_tbx4; s = "vtbx4"; break; } } if (!Int) return nullptr; Function *F = CGF.CGM.getIntrinsic(Int, Ty); return CGF.EmitNeonCall(F, Ops, s); } Value *CodeGenFunction::vectorWrapScalar16(Value *Op) { llvm::Type *VTy = llvm::VectorType::get(Int16Ty, 4); Op = Builder.CreateBitCast(Op, Int16Ty); Value *V = UndefValue::get(VTy); llvm::Constant *CI = ConstantInt::get(SizeTy, 0); Op = Builder.CreateInsertElement(V, Op, CI); return Op; } Value *CodeGenFunction::EmitAArch64BuiltinExpr(unsigned BuiltinID, const CallExpr *E) { unsigned HintID = static_cast(-1); switch (BuiltinID) { default: break; case AArch64::BI__builtin_arm_nop: HintID = 0; break; case AArch64::BI__builtin_arm_yield: HintID = 1; break; case AArch64::BI__builtin_arm_wfe: HintID = 2; break; case AArch64::BI__builtin_arm_wfi: HintID = 3; break; case AArch64::BI__builtin_arm_sev: HintID = 4; break; case AArch64::BI__builtin_arm_sevl: HintID = 5; break; } if (HintID != static_cast(-1)) { Function *F = CGM.getIntrinsic(Intrinsic::aarch64_hint); return Builder.CreateCall(F, llvm::ConstantInt::get(Int32Ty, HintID)); } if (BuiltinID == AArch64::BI__builtin_arm_prefetch) { Value *Address = EmitScalarExpr(E->getArg(0)); Value *RW = EmitScalarExpr(E->getArg(1)); Value *CacheLevel = EmitScalarExpr(E->getArg(2)); Value *RetentionPolicy = EmitScalarExpr(E->getArg(3)); Value *IsData = EmitScalarExpr(E->getArg(4)); Value *Locality = nullptr; if (cast(RetentionPolicy)->isZero()) { // Temporal fetch, needs to convert cache level to locality. Locality = llvm::ConstantInt::get(Int32Ty, -cast(CacheLevel)->getValue() + 3); } else { // Streaming fetch. Locality = llvm::ConstantInt::get(Int32Ty, 0); } // FIXME: We need AArch64 specific LLVM intrinsic if we want to specify // PLDL3STRM or PLDL2STRM. 
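  // llvm.prefetch's locality operand runs 0 (no locality) .. 3 (keep in all
  // cache levels), so a temporal fetch maps the builtin's cache-level operand
  // CL (0 = L1, per ACLE) to locality 3 - CL: L1 -> 3, L2 -> 2, L3 -> 1;
  // streaming fetches always use locality 0.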
Value *F = CGM.getIntrinsic(Intrinsic::prefetch); return Builder.CreateCall(F, {Address, RW, Locality, IsData}); } if (BuiltinID == AArch64::BI__builtin_arm_rbit) { assert((getContext().getTypeSize(E->getType()) == 32) && "rbit of unusual size!"); llvm::Value *Arg = EmitScalarExpr(E->getArg(0)); return Builder.CreateCall( CGM.getIntrinsic(Intrinsic::bitreverse, Arg->getType()), Arg, "rbit"); } if (BuiltinID == AArch64::BI__builtin_arm_rbit64) { assert((getContext().getTypeSize(E->getType()) == 64) && "rbit of unusual size!"); llvm::Value *Arg = EmitScalarExpr(E->getArg(0)); return Builder.CreateCall( CGM.getIntrinsic(Intrinsic::bitreverse, Arg->getType()), Arg, "rbit"); } if (BuiltinID == AArch64::BI__clear_cache) { assert(E->getNumArgs() == 2 && "__clear_cache takes 2 arguments"); const FunctionDecl *FD = E->getDirectCallee(); Value *Ops[2]; for (unsigned i = 0; i < 2; i++) Ops[i] = EmitScalarExpr(E->getArg(i)); llvm::Type *Ty = CGM.getTypes().ConvertType(FD->getType()); llvm::FunctionType *FTy = cast(Ty); StringRef Name = FD->getName(); return EmitNounwindRuntimeCall(CGM.CreateRuntimeFunction(FTy, Name), Ops); } if ((BuiltinID == AArch64::BI__builtin_arm_ldrex || BuiltinID == AArch64::BI__builtin_arm_ldaex) && getContext().getTypeSize(E->getType()) == 128) { Function *F = CGM.getIntrinsic(BuiltinID == AArch64::BI__builtin_arm_ldaex ? Intrinsic::aarch64_ldaxp : Intrinsic::aarch64_ldxp); Value *LdPtr = EmitScalarExpr(E->getArg(0)); Value *Val = Builder.CreateCall(F, Builder.CreateBitCast(LdPtr, Int8PtrTy), "ldxp"); Value *Val0 = Builder.CreateExtractValue(Val, 1); Value *Val1 = Builder.CreateExtractValue(Val, 0); llvm::Type *Int128Ty = llvm::IntegerType::get(getLLVMContext(), 128); Val0 = Builder.CreateZExt(Val0, Int128Ty); Val1 = Builder.CreateZExt(Val1, Int128Ty); Value *ShiftCst = llvm::ConstantInt::get(Int128Ty, 64); Val = Builder.CreateShl(Val0, ShiftCst, "shl", true /* nuw */); Val = Builder.CreateOr(Val, Val1); return Builder.CreateBitCast(Val, ConvertType(E->getType())); } else if (BuiltinID == AArch64::BI__builtin_arm_ldrex || BuiltinID == AArch64::BI__builtin_arm_ldaex) { Value *LoadAddr = EmitScalarExpr(E->getArg(0)); QualType Ty = E->getType(); llvm::Type *RealResTy = ConvertType(Ty); llvm::Type *PtrTy = llvm::IntegerType::get( getLLVMContext(), getContext().getTypeSize(Ty))->getPointerTo(); LoadAddr = Builder.CreateBitCast(LoadAddr, PtrTy); Function *F = CGM.getIntrinsic(BuiltinID == AArch64::BI__builtin_arm_ldaex ? Intrinsic::aarch64_ldaxr : Intrinsic::aarch64_ldxr, PtrTy); Value *Val = Builder.CreateCall(F, LoadAddr, "ldxr"); if (RealResTy->isPointerTy()) return Builder.CreateIntToPtr(Val, RealResTy); llvm::Type *IntResTy = llvm::IntegerType::get( getLLVMContext(), CGM.getDataLayout().getTypeSizeInBits(RealResTy)); Val = Builder.CreateTruncOrBitCast(Val, IntResTy); return Builder.CreateBitCast(Val, RealResTy); } if ((BuiltinID == AArch64::BI__builtin_arm_strex || BuiltinID == AArch64::BI__builtin_arm_stlex) && getContext().getTypeSize(E->getArg(0)->getType()) == 128) { Function *F = CGM.getIntrinsic(BuiltinID == AArch64::BI__builtin_arm_stlex ? 
Intrinsic::aarch64_stlxp : Intrinsic::aarch64_stxp); llvm::Type *STy = llvm::StructType::get(Int64Ty, Int64Ty, nullptr); Address Tmp = CreateMemTemp(E->getArg(0)->getType()); EmitAnyExprToMem(E->getArg(0), Tmp, Qualifiers(), /*init*/ true); Tmp = Builder.CreateBitCast(Tmp, llvm::PointerType::getUnqual(STy)); llvm::Value *Val = Builder.CreateLoad(Tmp); Value *Arg0 = Builder.CreateExtractValue(Val, 0); Value *Arg1 = Builder.CreateExtractValue(Val, 1); Value *StPtr = Builder.CreateBitCast(EmitScalarExpr(E->getArg(1)), Int8PtrTy); return Builder.CreateCall(F, {Arg0, Arg1, StPtr}, "stxp"); } if (BuiltinID == AArch64::BI__builtin_arm_strex || BuiltinID == AArch64::BI__builtin_arm_stlex) { Value *StoreVal = EmitScalarExpr(E->getArg(0)); Value *StoreAddr = EmitScalarExpr(E->getArg(1)); QualType Ty = E->getArg(0)->getType(); llvm::Type *StoreTy = llvm::IntegerType::get(getLLVMContext(), getContext().getTypeSize(Ty)); StoreAddr = Builder.CreateBitCast(StoreAddr, StoreTy->getPointerTo()); if (StoreVal->getType()->isPointerTy()) StoreVal = Builder.CreatePtrToInt(StoreVal, Int64Ty); else { llvm::Type *IntTy = llvm::IntegerType::get( getLLVMContext(), CGM.getDataLayout().getTypeSizeInBits(StoreVal->getType())); StoreVal = Builder.CreateBitCast(StoreVal, IntTy); StoreVal = Builder.CreateZExtOrBitCast(StoreVal, Int64Ty); } Function *F = CGM.getIntrinsic(BuiltinID == AArch64::BI__builtin_arm_stlex ? Intrinsic::aarch64_stlxr : Intrinsic::aarch64_stxr, StoreAddr->getType()); return Builder.CreateCall(F, {StoreVal, StoreAddr}, "stxr"); } if (BuiltinID == AArch64::BI__builtin_arm_clrex) { Function *F = CGM.getIntrinsic(Intrinsic::aarch64_clrex); return Builder.CreateCall(F); } // CRC32 Intrinsic::ID CRCIntrinsicID = Intrinsic::not_intrinsic; switch (BuiltinID) { case AArch64::BI__builtin_arm_crc32b: CRCIntrinsicID = Intrinsic::aarch64_crc32b; break; case AArch64::BI__builtin_arm_crc32cb: CRCIntrinsicID = Intrinsic::aarch64_crc32cb; break; case AArch64::BI__builtin_arm_crc32h: CRCIntrinsicID = Intrinsic::aarch64_crc32h; break; case AArch64::BI__builtin_arm_crc32ch: CRCIntrinsicID = Intrinsic::aarch64_crc32ch; break; case AArch64::BI__builtin_arm_crc32w: CRCIntrinsicID = Intrinsic::aarch64_crc32w; break; case AArch64::BI__builtin_arm_crc32cw: CRCIntrinsicID = Intrinsic::aarch64_crc32cw; break; case AArch64::BI__builtin_arm_crc32d: CRCIntrinsicID = Intrinsic::aarch64_crc32x; break; case AArch64::BI__builtin_arm_crc32cd: CRCIntrinsicID = Intrinsic::aarch64_crc32cx; break; } if (CRCIntrinsicID != Intrinsic::not_intrinsic) { Value *Arg0 = EmitScalarExpr(E->getArg(0)); Value *Arg1 = EmitScalarExpr(E->getArg(1)); Function *F = CGM.getIntrinsic(CRCIntrinsicID); llvm::Type *DataTy = F->getFunctionType()->getParamType(1); Arg1 = Builder.CreateZExtOrBitCast(Arg1, DataTy); return Builder.CreateCall(F, {Arg0, Arg1}); } if (BuiltinID == AArch64::BI__builtin_arm_rsr || BuiltinID == AArch64::BI__builtin_arm_rsr64 || BuiltinID == AArch64::BI__builtin_arm_rsrp || BuiltinID == AArch64::BI__builtin_arm_wsr || BuiltinID == AArch64::BI__builtin_arm_wsr64 || BuiltinID == AArch64::BI__builtin_arm_wsrp) { bool IsRead = BuiltinID == AArch64::BI__builtin_arm_rsr || BuiltinID == AArch64::BI__builtin_arm_rsr64 || BuiltinID == AArch64::BI__builtin_arm_rsrp; bool IsPointerBuiltin = BuiltinID == AArch64::BI__builtin_arm_rsrp || BuiltinID == AArch64::BI__builtin_arm_wsrp; bool Is64Bit = BuiltinID != AArch64::BI__builtin_arm_rsr && BuiltinID != AArch64::BI__builtin_arm_wsr; llvm::Type *ValueType; llvm::Type *RegisterType = Int64Ty; if 
(IsPointerBuiltin) { ValueType = VoidPtrTy; } else if (Is64Bit) { ValueType = Int64Ty; } else { ValueType = Int32Ty; } return EmitSpecialRegisterBuiltin(*this, E, RegisterType, ValueType, IsRead); } // Find out if any arguments are required to be integer constant // expressions. unsigned ICEArguments = 0; ASTContext::GetBuiltinTypeError Error; getContext().GetBuiltinType(BuiltinID, Error, &ICEArguments); assert(Error == ASTContext::GE_None && "Should not codegen an error"); llvm::SmallVector Ops; for (unsigned i = 0, e = E->getNumArgs() - 1; i != e; i++) { if ((ICEArguments & (1 << i)) == 0) { Ops.push_back(EmitScalarExpr(E->getArg(i))); } else { // If this is required to be a constant, constant fold it so that we know // that the generated intrinsic gets a ConstantInt. llvm::APSInt Result; bool IsConst = E->getArg(i)->isIntegerConstantExpr(Result, getContext()); assert(IsConst && "Constant arg isn't actually constant?"); (void)IsConst; Ops.push_back(llvm::ConstantInt::get(getLLVMContext(), Result)); } } auto SISDMap = makeArrayRef(AArch64SISDIntrinsicMap); const NeonIntrinsicInfo *Builtin = findNeonIntrinsicInMap( SISDMap, BuiltinID, AArch64SISDIntrinsicsProvenSorted); if (Builtin) { Ops.push_back(EmitScalarExpr(E->getArg(E->getNumArgs() - 1))); Value *Result = EmitCommonNeonSISDBuiltinExpr(*this, *Builtin, Ops, E); assert(Result && "SISD intrinsic should have been handled"); return Result; } llvm::APSInt Result; const Expr *Arg = E->getArg(E->getNumArgs()-1); NeonTypeFlags Type(0); if (Arg->isIntegerConstantExpr(Result, getContext())) // Determine the type of this overloaded NEON intrinsic. Type = NeonTypeFlags(Result.getZExtValue()); bool usgn = Type.isUnsigned(); bool quad = Type.isQuad(); // Handle non-overloaded intrinsics first. switch (BuiltinID) { default: break; case NEON::BI__builtin_neon_vldrq_p128: { llvm::Type *Int128Ty = llvm::Type::getIntNTy(getLLVMContext(), 128); llvm::Type *Int128PTy = llvm::PointerType::get(Int128Ty, 0); Value *Ptr = Builder.CreateBitCast(EmitScalarExpr(E->getArg(0)), Int128PTy); return Builder.CreateAlignedLoad(Int128Ty, Ptr, CharUnits::fromQuantity(16)); } case NEON::BI__builtin_neon_vstrq_p128: { llvm::Type *Int128PTy = llvm::Type::getIntNPtrTy(getLLVMContext(), 128); Value *Ptr = Builder.CreateBitCast(Ops[0], Int128PTy); return Builder.CreateDefaultAlignedStore(EmitScalarExpr(E->getArg(1)), Ptr); } case NEON::BI__builtin_neon_vcvts_u32_f32: case NEON::BI__builtin_neon_vcvtd_u64_f64: usgn = true; // FALL THROUGH case NEON::BI__builtin_neon_vcvts_s32_f32: case NEON::BI__builtin_neon_vcvtd_s64_f64: { Ops.push_back(EmitScalarExpr(E->getArg(0))); bool Is64 = Ops[0]->getType()->getPrimitiveSizeInBits() == 64; llvm::Type *InTy = Is64 ? Int64Ty : Int32Ty; llvm::Type *FTy = Is64 ? DoubleTy : FloatTy; Ops[0] = Builder.CreateBitCast(Ops[0], FTy); if (usgn) return Builder.CreateFPToUI(Ops[0], InTy); return Builder.CreateFPToSI(Ops[0], InTy); } case NEON::BI__builtin_neon_vcvts_f32_u32: case NEON::BI__builtin_neon_vcvtd_f64_u64: usgn = true; // FALL THROUGH case NEON::BI__builtin_neon_vcvts_f32_s32: case NEON::BI__builtin_neon_vcvtd_f64_s64: { Ops.push_back(EmitScalarExpr(E->getArg(0))); bool Is64 = Ops[0]->getType()->getPrimitiveSizeInBits() == 64; llvm::Type *InTy = Is64 ? Int64Ty : Int32Ty; llvm::Type *FTy = Is64 ? 
                            DoubleTy : FloatTy;
    Ops[0] = Builder.CreateBitCast(Ops[0], InTy);
    if (usgn)
      return Builder.CreateUIToFP(Ops[0], FTy);
    return Builder.CreateSIToFP(Ops[0], FTy);
  }
  case NEON::BI__builtin_neon_vpaddd_s64: {
    llvm::Type *Ty = llvm::VectorType::get(Int64Ty, 2);
    Value *Vec = EmitScalarExpr(E->getArg(0));
    // The vector is v2i64, so make sure it's bitcast to that.
    Vec = Builder.CreateBitCast(Vec, Ty, "v2i64");
    llvm::Value *Idx0 = llvm::ConstantInt::get(SizeTy, 0);
    llvm::Value *Idx1 = llvm::ConstantInt::get(SizeTy, 1);
    Value *Op0 = Builder.CreateExtractElement(Vec, Idx0, "lane0");
    Value *Op1 = Builder.CreateExtractElement(Vec, Idx1, "lane1");
    // Pairwise addition of a v2i64 into a scalar i64.
    return Builder.CreateAdd(Op0, Op1, "vpaddd");
  }
  case NEON::BI__builtin_neon_vpaddd_f64: {
    llvm::Type *Ty = llvm::VectorType::get(DoubleTy, 2);
    Value *Vec = EmitScalarExpr(E->getArg(0));
    // The vector is v2f64, so make sure it's bitcast to that.
    Vec = Builder.CreateBitCast(Vec, Ty, "v2f64");
    llvm::Value *Idx0 = llvm::ConstantInt::get(SizeTy, 0);
    llvm::Value *Idx1 = llvm::ConstantInt::get(SizeTy, 1);
    Value *Op0 = Builder.CreateExtractElement(Vec, Idx0, "lane0");
    Value *Op1 = Builder.CreateExtractElement(Vec, Idx1, "lane1");
    // Pairwise addition of a v2f64 into a scalar f64.
    return Builder.CreateFAdd(Op0, Op1, "vpaddd");
  }
  case NEON::BI__builtin_neon_vpadds_f32: {
    llvm::Type *Ty = llvm::VectorType::get(FloatTy, 2);
    Value *Vec = EmitScalarExpr(E->getArg(0));
    // The vector is v2f32, so make sure it's bitcast to that.
    Vec = Builder.CreateBitCast(Vec, Ty, "v2f32");
    llvm::Value *Idx0 = llvm::ConstantInt::get(SizeTy, 0);
    llvm::Value *Idx1 = llvm::ConstantInt::get(SizeTy, 1);
    Value *Op0 = Builder.CreateExtractElement(Vec, Idx0, "lane0");
    Value *Op1 = Builder.CreateExtractElement(Vec, Idx1, "lane1");
    // Pairwise addition of a v2f32 into a scalar f32.
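    // (Same pattern as the two cases above: the "pairwise add" of a two-lane
    //  vector is just lane0 + lane1, e.g. vpadds_f32({a, b}) == a + b, which
    //  ISel can then match to a single faddp - a sketch of the intent, not a
    //  guarantee about the selected instruction.)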
return Builder.CreateFAdd(Op0, Op1, "vpaddd"); } case NEON::BI__builtin_neon_vceqzd_s64: case NEON::BI__builtin_neon_vceqzd_f64: case NEON::BI__builtin_neon_vceqzs_f32: Ops.push_back(EmitScalarExpr(E->getArg(0))); return EmitAArch64CompareBuiltinExpr( Ops[0], ConvertType(E->getCallReturnType(getContext())), ICmpInst::FCMP_OEQ, ICmpInst::ICMP_EQ, "vceqz"); case NEON::BI__builtin_neon_vcgezd_s64: case NEON::BI__builtin_neon_vcgezd_f64: case NEON::BI__builtin_neon_vcgezs_f32: Ops.push_back(EmitScalarExpr(E->getArg(0))); return EmitAArch64CompareBuiltinExpr( Ops[0], ConvertType(E->getCallReturnType(getContext())), ICmpInst::FCMP_OGE, ICmpInst::ICMP_SGE, "vcgez"); case NEON::BI__builtin_neon_vclezd_s64: case NEON::BI__builtin_neon_vclezd_f64: case NEON::BI__builtin_neon_vclezs_f32: Ops.push_back(EmitScalarExpr(E->getArg(0))); return EmitAArch64CompareBuiltinExpr( Ops[0], ConvertType(E->getCallReturnType(getContext())), ICmpInst::FCMP_OLE, ICmpInst::ICMP_SLE, "vclez"); case NEON::BI__builtin_neon_vcgtzd_s64: case NEON::BI__builtin_neon_vcgtzd_f64: case NEON::BI__builtin_neon_vcgtzs_f32: Ops.push_back(EmitScalarExpr(E->getArg(0))); return EmitAArch64CompareBuiltinExpr( Ops[0], ConvertType(E->getCallReturnType(getContext())), ICmpInst::FCMP_OGT, ICmpInst::ICMP_SGT, "vcgtz"); case NEON::BI__builtin_neon_vcltzd_s64: case NEON::BI__builtin_neon_vcltzd_f64: case NEON::BI__builtin_neon_vcltzs_f32: Ops.push_back(EmitScalarExpr(E->getArg(0))); return EmitAArch64CompareBuiltinExpr( Ops[0], ConvertType(E->getCallReturnType(getContext())), ICmpInst::FCMP_OLT, ICmpInst::ICMP_SLT, "vcltz"); case NEON::BI__builtin_neon_vceqzd_u64: { Ops.push_back(EmitScalarExpr(E->getArg(0))); Ops[0] = Builder.CreateBitCast(Ops[0], Int64Ty); Ops[0] = Builder.CreateICmpEQ(Ops[0], llvm::Constant::getNullValue(Int64Ty)); return Builder.CreateSExt(Ops[0], Int64Ty, "vceqzd"); } case NEON::BI__builtin_neon_vceqd_f64: case NEON::BI__builtin_neon_vcled_f64: case NEON::BI__builtin_neon_vcltd_f64: case NEON::BI__builtin_neon_vcged_f64: case NEON::BI__builtin_neon_vcgtd_f64: { llvm::CmpInst::Predicate P; switch (BuiltinID) { default: llvm_unreachable("missing builtin ID in switch!"); case NEON::BI__builtin_neon_vceqd_f64: P = llvm::FCmpInst::FCMP_OEQ; break; case NEON::BI__builtin_neon_vcled_f64: P = llvm::FCmpInst::FCMP_OLE; break; case NEON::BI__builtin_neon_vcltd_f64: P = llvm::FCmpInst::FCMP_OLT; break; case NEON::BI__builtin_neon_vcged_f64: P = llvm::FCmpInst::FCMP_OGE; break; case NEON::BI__builtin_neon_vcgtd_f64: P = llvm::FCmpInst::FCMP_OGT; break; } Ops.push_back(EmitScalarExpr(E->getArg(1))); Ops[0] = Builder.CreateBitCast(Ops[0], DoubleTy); Ops[1] = Builder.CreateBitCast(Ops[1], DoubleTy); Ops[0] = Builder.CreateFCmp(P, Ops[0], Ops[1]); return Builder.CreateSExt(Ops[0], Int64Ty, "vcmpd"); } case NEON::BI__builtin_neon_vceqs_f32: case NEON::BI__builtin_neon_vcles_f32: case NEON::BI__builtin_neon_vclts_f32: case NEON::BI__builtin_neon_vcges_f32: case NEON::BI__builtin_neon_vcgts_f32: { llvm::CmpInst::Predicate P; switch (BuiltinID) { default: llvm_unreachable("missing builtin ID in switch!"); case NEON::BI__builtin_neon_vceqs_f32: P = llvm::FCmpInst::FCMP_OEQ; break; case NEON::BI__builtin_neon_vcles_f32: P = llvm::FCmpInst::FCMP_OLE; break; case NEON::BI__builtin_neon_vclts_f32: P = llvm::FCmpInst::FCMP_OLT; break; case NEON::BI__builtin_neon_vcges_f32: P = llvm::FCmpInst::FCMP_OGE; break; case NEON::BI__builtin_neon_vcgts_f32: P = llvm::FCmpInst::FCMP_OGT; break; } Ops.push_back(EmitScalarExpr(E->getArg(1))); Ops[0] = 
Builder.CreateBitCast(Ops[0], FloatTy); Ops[1] = Builder.CreateBitCast(Ops[1], FloatTy); Ops[0] = Builder.CreateFCmp(P, Ops[0], Ops[1]); return Builder.CreateSExt(Ops[0], Int32Ty, "vcmpd"); } case NEON::BI__builtin_neon_vceqd_s64: case NEON::BI__builtin_neon_vceqd_u64: case NEON::BI__builtin_neon_vcgtd_s64: case NEON::BI__builtin_neon_vcgtd_u64: case NEON::BI__builtin_neon_vcltd_s64: case NEON::BI__builtin_neon_vcltd_u64: case NEON::BI__builtin_neon_vcged_u64: case NEON::BI__builtin_neon_vcged_s64: case NEON::BI__builtin_neon_vcled_u64: case NEON::BI__builtin_neon_vcled_s64: { llvm::CmpInst::Predicate P; switch (BuiltinID) { default: llvm_unreachable("missing builtin ID in switch!"); case NEON::BI__builtin_neon_vceqd_s64: case NEON::BI__builtin_neon_vceqd_u64:P = llvm::ICmpInst::ICMP_EQ;break; case NEON::BI__builtin_neon_vcgtd_s64:P = llvm::ICmpInst::ICMP_SGT;break; case NEON::BI__builtin_neon_vcgtd_u64:P = llvm::ICmpInst::ICMP_UGT;break; case NEON::BI__builtin_neon_vcltd_s64:P = llvm::ICmpInst::ICMP_SLT;break; case NEON::BI__builtin_neon_vcltd_u64:P = llvm::ICmpInst::ICMP_ULT;break; case NEON::BI__builtin_neon_vcged_u64:P = llvm::ICmpInst::ICMP_UGE;break; case NEON::BI__builtin_neon_vcged_s64:P = llvm::ICmpInst::ICMP_SGE;break; case NEON::BI__builtin_neon_vcled_u64:P = llvm::ICmpInst::ICMP_ULE;break; case NEON::BI__builtin_neon_vcled_s64:P = llvm::ICmpInst::ICMP_SLE;break; } Ops.push_back(EmitScalarExpr(E->getArg(1))); Ops[0] = Builder.CreateBitCast(Ops[0], Int64Ty); Ops[1] = Builder.CreateBitCast(Ops[1], Int64Ty); Ops[0] = Builder.CreateICmp(P, Ops[0], Ops[1]); return Builder.CreateSExt(Ops[0], Int64Ty, "vceqd"); } case NEON::BI__builtin_neon_vtstd_s64: case NEON::BI__builtin_neon_vtstd_u64: { Ops.push_back(EmitScalarExpr(E->getArg(1))); Ops[0] = Builder.CreateBitCast(Ops[0], Int64Ty); Ops[1] = Builder.CreateBitCast(Ops[1], Int64Ty); Ops[0] = Builder.CreateAnd(Ops[0], Ops[1]); Ops[0] = Builder.CreateICmp(ICmpInst::ICMP_NE, Ops[0], llvm::Constant::getNullValue(Int64Ty)); return Builder.CreateSExt(Ops[0], Int64Ty, "vtstd"); } case NEON::BI__builtin_neon_vset_lane_i8: case NEON::BI__builtin_neon_vset_lane_i16: case NEON::BI__builtin_neon_vset_lane_i32: case NEON::BI__builtin_neon_vset_lane_i64: case NEON::BI__builtin_neon_vset_lane_f32: case NEON::BI__builtin_neon_vsetq_lane_i8: case NEON::BI__builtin_neon_vsetq_lane_i16: case NEON::BI__builtin_neon_vsetq_lane_i32: case NEON::BI__builtin_neon_vsetq_lane_i64: case NEON::BI__builtin_neon_vsetq_lane_f32: Ops.push_back(EmitScalarExpr(E->getArg(2))); return Builder.CreateInsertElement(Ops[1], Ops[0], Ops[2], "vset_lane"); case NEON::BI__builtin_neon_vset_lane_f64: // The vector type needs a cast for the v1f64 variant. Ops[1] = Builder.CreateBitCast(Ops[1], llvm::VectorType::get(DoubleTy, 1)); Ops.push_back(EmitScalarExpr(E->getArg(2))); return Builder.CreateInsertElement(Ops[1], Ops[0], Ops[2], "vset_lane"); case NEON::BI__builtin_neon_vsetq_lane_f64: // The vector type needs a cast for the v2f64 variant. 
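    // (Once the operand is given its real vector type, vset_lane reduces to a
    //  plain insertelement, e.g. an illustrative IR sketch:
    //    vsetq_lane_f64(x, v, 1) ==> insertelement <2 x double> %v, double %x, i32 1)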
Ops[1] = Builder.CreateBitCast(Ops[1], llvm::VectorType::get(DoubleTy, 2)); Ops.push_back(EmitScalarExpr(E->getArg(2))); return Builder.CreateInsertElement(Ops[1], Ops[0], Ops[2], "vset_lane"); case NEON::BI__builtin_neon_vget_lane_i8: case NEON::BI__builtin_neon_vdupb_lane_i8: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int8Ty, 8)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vget_lane"); case NEON::BI__builtin_neon_vgetq_lane_i8: case NEON::BI__builtin_neon_vdupb_laneq_i8: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int8Ty, 16)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vgetq_lane"); case NEON::BI__builtin_neon_vget_lane_i16: case NEON::BI__builtin_neon_vduph_lane_i16: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int16Ty, 4)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vget_lane"); case NEON::BI__builtin_neon_vgetq_lane_i16: case NEON::BI__builtin_neon_vduph_laneq_i16: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int16Ty, 8)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vgetq_lane"); case NEON::BI__builtin_neon_vget_lane_i32: case NEON::BI__builtin_neon_vdups_lane_i32: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int32Ty, 2)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vget_lane"); case NEON::BI__builtin_neon_vdups_lane_f32: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(FloatTy, 2)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vdups_lane"); case NEON::BI__builtin_neon_vgetq_lane_i32: case NEON::BI__builtin_neon_vdups_laneq_i32: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int32Ty, 4)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vgetq_lane"); case NEON::BI__builtin_neon_vget_lane_i64: case NEON::BI__builtin_neon_vdupd_lane_i64: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int64Ty, 1)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vget_lane"); case NEON::BI__builtin_neon_vdupd_lane_f64: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(DoubleTy, 1)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vdupd_lane"); case NEON::BI__builtin_neon_vgetq_lane_i64: case NEON::BI__builtin_neon_vdupd_laneq_i64: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int64Ty, 2)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vgetq_lane"); case NEON::BI__builtin_neon_vget_lane_f32: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(FloatTy, 2)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vget_lane"); case NEON::BI__builtin_neon_vget_lane_f64: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(DoubleTy, 1)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vget_lane"); case NEON::BI__builtin_neon_vgetq_lane_f32: case NEON::BI__builtin_neon_vdups_laneq_f32: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(FloatTy, 4)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vgetq_lane"); case NEON::BI__builtin_neon_vgetq_lane_f64: case NEON::BI__builtin_neon_vdupd_laneq_f64: Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(DoubleTy, 2)); return Builder.CreateExtractElement(Ops[0], EmitScalarExpr(E->getArg(1)), "vgetq_lane"); case 
NEON::BI__builtin_neon_vaddd_s64: case NEON::BI__builtin_neon_vaddd_u64: return Builder.CreateAdd(Ops[0], EmitScalarExpr(E->getArg(1)), "vaddd"); case NEON::BI__builtin_neon_vsubd_s64: case NEON::BI__builtin_neon_vsubd_u64: return Builder.CreateSub(Ops[0], EmitScalarExpr(E->getArg(1)), "vsubd"); case NEON::BI__builtin_neon_vqdmlalh_s16: case NEON::BI__builtin_neon_vqdmlslh_s16: { SmallVector ProductOps; ProductOps.push_back(vectorWrapScalar16(Ops[1])); ProductOps.push_back(vectorWrapScalar16(EmitScalarExpr(E->getArg(2)))); llvm::Type *VTy = llvm::VectorType::get(Int32Ty, 4); Ops[1] = EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_sqdmull, VTy), ProductOps, "vqdmlXl"); Constant *CI = ConstantInt::get(SizeTy, 0); Ops[1] = Builder.CreateExtractElement(Ops[1], CI, "lane0"); unsigned AccumInt = BuiltinID == NEON::BI__builtin_neon_vqdmlalh_s16 ? Intrinsic::aarch64_neon_sqadd : Intrinsic::aarch64_neon_sqsub; return EmitNeonCall(CGM.getIntrinsic(AccumInt, Int32Ty), Ops, "vqdmlXl"); } case NEON::BI__builtin_neon_vqshlud_n_s64: { Ops.push_back(EmitScalarExpr(E->getArg(1))); Ops[1] = Builder.CreateZExt(Ops[1], Int64Ty); return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_sqshlu, Int64Ty), Ops, "vqshlu_n"); } case NEON::BI__builtin_neon_vqshld_n_u64: case NEON::BI__builtin_neon_vqshld_n_s64: { unsigned Int = BuiltinID == NEON::BI__builtin_neon_vqshld_n_u64 ? Intrinsic::aarch64_neon_uqshl : Intrinsic::aarch64_neon_sqshl; Ops.push_back(EmitScalarExpr(E->getArg(1))); Ops[1] = Builder.CreateZExt(Ops[1], Int64Ty); return EmitNeonCall(CGM.getIntrinsic(Int, Int64Ty), Ops, "vqshl_n"); } case NEON::BI__builtin_neon_vrshrd_n_u64: case NEON::BI__builtin_neon_vrshrd_n_s64: { unsigned Int = BuiltinID == NEON::BI__builtin_neon_vrshrd_n_u64 ? Intrinsic::aarch64_neon_urshl : Intrinsic::aarch64_neon_srshl; Ops.push_back(EmitScalarExpr(E->getArg(1))); int SV = cast(Ops[1])->getSExtValue(); Ops[1] = ConstantInt::get(Int64Ty, -SV); return EmitNeonCall(CGM.getIntrinsic(Int, Int64Ty), Ops, "vrshr_n"); } case NEON::BI__builtin_neon_vrsrad_n_u64: case NEON::BI__builtin_neon_vrsrad_n_s64: { unsigned Int = BuiltinID == NEON::BI__builtin_neon_vrsrad_n_u64 ? Intrinsic::aarch64_neon_urshl : Intrinsic::aarch64_neon_srshl; Ops[1] = Builder.CreateBitCast(Ops[1], Int64Ty); Ops.push_back(Builder.CreateNeg(EmitScalarExpr(E->getArg(2)))); Ops[1] = Builder.CreateCall(CGM.getIntrinsic(Int, Int64Ty), {Ops[1], Builder.CreateSExt(Ops[2], Int64Ty)}); return Builder.CreateAdd(Ops[0], Builder.CreateBitCast(Ops[1], Int64Ty)); } case NEON::BI__builtin_neon_vshld_n_s64: case NEON::BI__builtin_neon_vshld_n_u64: { llvm::ConstantInt *Amt = cast(EmitScalarExpr(E->getArg(1))); return Builder.CreateShl( Ops[0], ConstantInt::get(Int64Ty, Amt->getZExtValue()), "shld_n"); } case NEON::BI__builtin_neon_vshrd_n_s64: { llvm::ConstantInt *Amt = cast(EmitScalarExpr(E->getArg(1))); return Builder.CreateAShr( Ops[0], ConstantInt::get(Int64Ty, std::min(static_cast(63), Amt->getZExtValue())), "shrd_n"); } case NEON::BI__builtin_neon_vshrd_n_u64: { llvm::ConstantInt *Amt = cast(EmitScalarExpr(E->getArg(1))); uint64_t ShiftAmt = Amt->getZExtValue(); // Right-shifting an unsigned value by its size yields 0. 
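    // (In LLVM IR, lshr by an amount >= the bit width is undefined, so the
    //  well-defined ACLE result is folded here rather than emitted as a shift:
    //  vshrd_n_u64(x, 64) == 0 for every x.)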
if (ShiftAmt == 64) return ConstantInt::get(Int64Ty, 0); return Builder.CreateLShr(Ops[0], ConstantInt::get(Int64Ty, ShiftAmt), "shrd_n"); } case NEON::BI__builtin_neon_vsrad_n_s64: { llvm::ConstantInt *Amt = cast(EmitScalarExpr(E->getArg(2))); Ops[1] = Builder.CreateAShr( Ops[1], ConstantInt::get(Int64Ty, std::min(static_cast(63), Amt->getZExtValue())), "shrd_n"); return Builder.CreateAdd(Ops[0], Ops[1]); } case NEON::BI__builtin_neon_vsrad_n_u64: { llvm::ConstantInt *Amt = cast(EmitScalarExpr(E->getArg(2))); uint64_t ShiftAmt = Amt->getZExtValue(); // Right-shifting an unsigned value by its size yields 0. // As Op + 0 = Op, return Ops[0] directly. if (ShiftAmt == 64) return Ops[0]; Ops[1] = Builder.CreateLShr(Ops[1], ConstantInt::get(Int64Ty, ShiftAmt), "shrd_n"); return Builder.CreateAdd(Ops[0], Ops[1]); } case NEON::BI__builtin_neon_vqdmlalh_lane_s16: case NEON::BI__builtin_neon_vqdmlalh_laneq_s16: case NEON::BI__builtin_neon_vqdmlslh_lane_s16: case NEON::BI__builtin_neon_vqdmlslh_laneq_s16: { Ops[2] = Builder.CreateExtractElement(Ops[2], EmitScalarExpr(E->getArg(3)), "lane"); SmallVector ProductOps; ProductOps.push_back(vectorWrapScalar16(Ops[1])); ProductOps.push_back(vectorWrapScalar16(Ops[2])); llvm::Type *VTy = llvm::VectorType::get(Int32Ty, 4); Ops[1] = EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_sqdmull, VTy), ProductOps, "vqdmlXl"); Constant *CI = ConstantInt::get(SizeTy, 0); Ops[1] = Builder.CreateExtractElement(Ops[1], CI, "lane0"); Ops.pop_back(); unsigned AccInt = (BuiltinID == NEON::BI__builtin_neon_vqdmlalh_lane_s16 || BuiltinID == NEON::BI__builtin_neon_vqdmlalh_laneq_s16) ? Intrinsic::aarch64_neon_sqadd : Intrinsic::aarch64_neon_sqsub; return EmitNeonCall(CGM.getIntrinsic(AccInt, Int32Ty), Ops, "vqdmlXl"); } case NEON::BI__builtin_neon_vqdmlals_s32: case NEON::BI__builtin_neon_vqdmlsls_s32: { SmallVector ProductOps; ProductOps.push_back(Ops[1]); ProductOps.push_back(EmitScalarExpr(E->getArg(2))); Ops[1] = EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_sqdmulls_scalar), ProductOps, "vqdmlXl"); unsigned AccumInt = BuiltinID == NEON::BI__builtin_neon_vqdmlals_s32 ? Intrinsic::aarch64_neon_sqadd : Intrinsic::aarch64_neon_sqsub; return EmitNeonCall(CGM.getIntrinsic(AccumInt, Int64Ty), Ops, "vqdmlXl"); } case NEON::BI__builtin_neon_vqdmlals_lane_s32: case NEON::BI__builtin_neon_vqdmlals_laneq_s32: case NEON::BI__builtin_neon_vqdmlsls_lane_s32: case NEON::BI__builtin_neon_vqdmlsls_laneq_s32: { Ops[2] = Builder.CreateExtractElement(Ops[2], EmitScalarExpr(E->getArg(3)), "lane"); SmallVector ProductOps; ProductOps.push_back(Ops[1]); ProductOps.push_back(Ops[2]); Ops[1] = EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_sqdmulls_scalar), ProductOps, "vqdmlXl"); Ops.pop_back(); unsigned AccInt = (BuiltinID == NEON::BI__builtin_neon_vqdmlals_lane_s32 || BuiltinID == NEON::BI__builtin_neon_vqdmlals_laneq_s32) ? Intrinsic::aarch64_neon_sqadd : Intrinsic::aarch64_neon_sqsub; return EmitNeonCall(CGM.getIntrinsic(AccInt, Int64Ty), Ops, "vqdmlXl"); } } llvm::VectorType *VTy = GetNeonType(this, Type); llvm::Type *Ty = VTy; if (!Ty) return nullptr; // Not all intrinsics handled by the common case work for AArch64 yet, so only // defer to common code if it's been added to our special map. 
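  // findNeonIntrinsicInMap binary-searches the BuiltinID-sorted table (the
  // *ProvenSorted flag lets asserts-enabled builds verify the ordering once);
  // on a hit, the shared EmitCommonNeonBuiltinExpr path handles codegen, and
  // on a miss control falls through to the AArch64-specific switch below.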
Builtin = findNeonIntrinsicInMap(AArch64SIMDIntrinsicMap, BuiltinID, AArch64SIMDIntrinsicsProvenSorted); if (Builtin) return EmitCommonNeonBuiltinExpr( Builtin->BuiltinID, Builtin->LLVMIntrinsic, Builtin->AltLLVMIntrinsic, Builtin->NameHint, Builtin->TypeModifier, E, Ops, /*never use addresses*/ Address::invalid(), Address::invalid()); if (Value *V = EmitAArch64TblBuiltinExpr(*this, BuiltinID, E, Ops)) return V; unsigned Int; switch (BuiltinID) { default: return nullptr; case NEON::BI__builtin_neon_vbsl_v: case NEON::BI__builtin_neon_vbslq_v: { llvm::Type *BitTy = llvm::VectorType::getInteger(VTy); Ops[0] = Builder.CreateBitCast(Ops[0], BitTy, "vbsl"); Ops[1] = Builder.CreateBitCast(Ops[1], BitTy, "vbsl"); Ops[2] = Builder.CreateBitCast(Ops[2], BitTy, "vbsl"); Ops[1] = Builder.CreateAnd(Ops[0], Ops[1], "vbsl"); Ops[2] = Builder.CreateAnd(Builder.CreateNot(Ops[0]), Ops[2], "vbsl"); Ops[0] = Builder.CreateOr(Ops[1], Ops[2], "vbsl"); return Builder.CreateBitCast(Ops[0], Ty); } case NEON::BI__builtin_neon_vfma_lane_v: case NEON::BI__builtin_neon_vfmaq_lane_v: { // Only used for FP types // The ARM builtins (and instructions) have the addend as the first // operand, but the 'fma' intrinsics have it last. Swap it around here. Value *Addend = Ops[0]; Value *Multiplicand = Ops[1]; Value *LaneSource = Ops[2]; Ops[0] = Multiplicand; Ops[1] = LaneSource; Ops[2] = Addend; // Now adjust things to handle the lane access. llvm::Type *SourceTy = BuiltinID == NEON::BI__builtin_neon_vfmaq_lane_v ? llvm::VectorType::get(VTy->getElementType(), VTy->getNumElements() / 2) : VTy; llvm::Constant *cst = cast(Ops[3]); Value *SV = llvm::ConstantVector::getSplat(VTy->getNumElements(), cst); Ops[1] = Builder.CreateBitCast(Ops[1], SourceTy); Ops[1] = Builder.CreateShuffleVector(Ops[1], Ops[1], SV, "lane"); Ops.pop_back(); Int = Intrinsic::fma; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "fmla"); } case NEON::BI__builtin_neon_vfma_laneq_v: { llvm::VectorType *VTy = cast(Ty); // v1f64 fma should be mapped to Neon scalar f64 fma if (VTy && VTy->getElementType() == DoubleTy) { Ops[0] = Builder.CreateBitCast(Ops[0], DoubleTy); Ops[1] = Builder.CreateBitCast(Ops[1], DoubleTy); llvm::Type *VTy = GetNeonType(this, NeonTypeFlags(NeonTypeFlags::Float64, false, true)); Ops[2] = Builder.CreateBitCast(Ops[2], VTy); Ops[2] = Builder.CreateExtractElement(Ops[2], Ops[3], "extract"); Value *F = CGM.getIntrinsic(Intrinsic::fma, DoubleTy); Value *Result = Builder.CreateCall(F, {Ops[1], Ops[2], Ops[0]}); return Builder.CreateBitCast(Result, Ty); } Value *F = CGM.getIntrinsic(Intrinsic::fma, Ty); Ops[0] = Builder.CreateBitCast(Ops[0], Ty); Ops[1] = Builder.CreateBitCast(Ops[1], Ty); llvm::Type *STy = llvm::VectorType::get(VTy->getElementType(), VTy->getNumElements() * 2); Ops[2] = Builder.CreateBitCast(Ops[2], STy); Value* SV = llvm::ConstantVector::getSplat(VTy->getNumElements(), cast(Ops[3])); Ops[2] = Builder.CreateShuffleVector(Ops[2], Ops[2], SV, "lane"); return Builder.CreateCall(F, {Ops[2], Ops[1], Ops[0]}); } case NEON::BI__builtin_neon_vfmaq_laneq_v: { Value *F = CGM.getIntrinsic(Intrinsic::fma, Ty); Ops[0] = Builder.CreateBitCast(Ops[0], Ty); Ops[1] = Builder.CreateBitCast(Ops[1], Ty); Ops[2] = Builder.CreateBitCast(Ops[2], Ty); Ops[2] = EmitNeonSplat(Ops[2], cast(Ops[3])); return Builder.CreateCall(F, {Ops[2], Ops[1], Ops[0]}); } case NEON::BI__builtin_neon_vfmas_lane_f32: case NEON::BI__builtin_neon_vfmas_laneq_f32: case NEON::BI__builtin_neon_vfmad_lane_f64: case NEON::BI__builtin_neon_vfmad_laneq_f64: { 
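    // Scalar FMA by lane: the lane element is extracted from the vector
    // multiplier and fed to the plain scalar llvm.fma, with the addend
    // (operand 0 of the builtin) moved to the last intrinsic position,
    // roughly:
    //   vfmas_lane_f32(a, b, v, i)  ==>  call float @llvm.fma.f32(b, v[i], a)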
Ops.push_back(EmitScalarExpr(E->getArg(3))); llvm::Type *Ty = ConvertType(E->getCallReturnType(getContext())); Value *F = CGM.getIntrinsic(Intrinsic::fma, Ty); Ops[2] = Builder.CreateExtractElement(Ops[2], Ops[3], "extract"); return Builder.CreateCall(F, {Ops[1], Ops[2], Ops[0]}); } case NEON::BI__builtin_neon_vmull_v: // FIXME: improve sharing scheme to cope with 3 alternative LLVM intrinsics. Int = usgn ? Intrinsic::aarch64_neon_umull : Intrinsic::aarch64_neon_smull; if (Type.isPoly()) Int = Intrinsic::aarch64_neon_pmull; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vmull"); case NEON::BI__builtin_neon_vmax_v: case NEON::BI__builtin_neon_vmaxq_v: // FIXME: improve sharing scheme to cope with 3 alternative LLVM intrinsics. Int = usgn ? Intrinsic::aarch64_neon_umax : Intrinsic::aarch64_neon_smax; if (Ty->isFPOrFPVectorTy()) Int = Intrinsic::aarch64_neon_fmax; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vmax"); case NEON::BI__builtin_neon_vmin_v: case NEON::BI__builtin_neon_vminq_v: // FIXME: improve sharing scheme to cope with 3 alternative LLVM intrinsics. Int = usgn ? Intrinsic::aarch64_neon_umin : Intrinsic::aarch64_neon_smin; if (Ty->isFPOrFPVectorTy()) Int = Intrinsic::aarch64_neon_fmin; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vmin"); case NEON::BI__builtin_neon_vabd_v: case NEON::BI__builtin_neon_vabdq_v: // FIXME: improve sharing scheme to cope with 3 alternative LLVM intrinsics. Int = usgn ? Intrinsic::aarch64_neon_uabd : Intrinsic::aarch64_neon_sabd; if (Ty->isFPOrFPVectorTy()) Int = Intrinsic::aarch64_neon_fabd; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vabd"); case NEON::BI__builtin_neon_vpadal_v: case NEON::BI__builtin_neon_vpadalq_v: { unsigned ArgElts = VTy->getNumElements(); llvm::IntegerType *EltTy = cast(VTy->getElementType()); unsigned BitWidth = EltTy->getBitWidth(); llvm::Type *ArgTy = llvm::VectorType::get( llvm::IntegerType::get(getLLVMContext(), BitWidth/2), 2*ArgElts); llvm::Type* Tys[2] = { VTy, ArgTy }; Int = usgn ? Intrinsic::aarch64_neon_uaddlp : Intrinsic::aarch64_neon_saddlp; SmallVector TmpOps; TmpOps.push_back(Ops[1]); Function *F = CGM.getIntrinsic(Int, Tys); llvm::Value *tmp = EmitNeonCall(F, TmpOps, "vpadal"); llvm::Value *addend = Builder.CreateBitCast(Ops[0], tmp->getType()); return Builder.CreateAdd(tmp, addend); } case NEON::BI__builtin_neon_vpmin_v: case NEON::BI__builtin_neon_vpminq_v: // FIXME: improve sharing scheme to cope with 3 alternative LLVM intrinsics. Int = usgn ? Intrinsic::aarch64_neon_uminp : Intrinsic::aarch64_neon_sminp; if (Ty->isFPOrFPVectorTy()) Int = Intrinsic::aarch64_neon_fminp; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vpmin"); case NEON::BI__builtin_neon_vpmax_v: case NEON::BI__builtin_neon_vpmaxq_v: // FIXME: improve sharing scheme to cope with 3 alternative LLVM intrinsics. Int = usgn ? 
Intrinsic::aarch64_neon_umaxp : Intrinsic::aarch64_neon_smaxp; if (Ty->isFPOrFPVectorTy()) Int = Intrinsic::aarch64_neon_fmaxp; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vpmax"); case NEON::BI__builtin_neon_vminnm_v: case NEON::BI__builtin_neon_vminnmq_v: Int = Intrinsic::aarch64_neon_fminnm; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vminnm"); case NEON::BI__builtin_neon_vmaxnm_v: case NEON::BI__builtin_neon_vmaxnmq_v: Int = Intrinsic::aarch64_neon_fmaxnm; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vmaxnm"); case NEON::BI__builtin_neon_vrecpss_f32: { Ops.push_back(EmitScalarExpr(E->getArg(1))); return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_frecps, FloatTy), Ops, "vrecps"); } case NEON::BI__builtin_neon_vrecpsd_f64: { Ops.push_back(EmitScalarExpr(E->getArg(1))); return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_frecps, DoubleTy), Ops, "vrecps"); } case NEON::BI__builtin_neon_vqshrun_n_v: Int = Intrinsic::aarch64_neon_sqshrun; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vqshrun_n"); case NEON::BI__builtin_neon_vqrshrun_n_v: Int = Intrinsic::aarch64_neon_sqrshrun; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vqrshrun_n"); case NEON::BI__builtin_neon_vqshrn_n_v: Int = usgn ? Intrinsic::aarch64_neon_uqshrn : Intrinsic::aarch64_neon_sqshrn; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vqshrn_n"); case NEON::BI__builtin_neon_vrshrn_n_v: Int = Intrinsic::aarch64_neon_rshrn; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrshrn_n"); case NEON::BI__builtin_neon_vqrshrn_n_v: Int = usgn ? Intrinsic::aarch64_neon_uqrshrn : Intrinsic::aarch64_neon_sqrshrn; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vqrshrn_n"); case NEON::BI__builtin_neon_vrnda_v: case NEON::BI__builtin_neon_vrndaq_v: { Int = Intrinsic::round; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrnda"); } case NEON::BI__builtin_neon_vrndi_v: case NEON::BI__builtin_neon_vrndiq_v: { Int = Intrinsic::nearbyint; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrndi"); } case NEON::BI__builtin_neon_vrndm_v: case NEON::BI__builtin_neon_vrndmq_v: { Int = Intrinsic::floor; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrndm"); } case NEON::BI__builtin_neon_vrndn_v: case NEON::BI__builtin_neon_vrndnq_v: { Int = Intrinsic::aarch64_neon_frintn; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrndn"); } case NEON::BI__builtin_neon_vrndp_v: case NEON::BI__builtin_neon_vrndpq_v: { Int = Intrinsic::ceil; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrndp"); } case NEON::BI__builtin_neon_vrndx_v: case NEON::BI__builtin_neon_vrndxq_v: { Int = Intrinsic::rint; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrndx"); } case NEON::BI__builtin_neon_vrnd_v: case NEON::BI__builtin_neon_vrndq_v: { Int = Intrinsic::trunc; return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrndz"); } case NEON::BI__builtin_neon_vceqz_v: case NEON::BI__builtin_neon_vceqzq_v: return EmitAArch64CompareBuiltinExpr(Ops[0], Ty, ICmpInst::FCMP_OEQ, ICmpInst::ICMP_EQ, "vceqz"); case NEON::BI__builtin_neon_vcgez_v: case NEON::BI__builtin_neon_vcgezq_v: return EmitAArch64CompareBuiltinExpr(Ops[0], Ty, ICmpInst::FCMP_OGE, ICmpInst::ICMP_SGE, "vcgez"); case NEON::BI__builtin_neon_vclez_v: case NEON::BI__builtin_neon_vclezq_v: return EmitAArch64CompareBuiltinExpr(Ops[0], Ty, ICmpInst::FCMP_OLE, ICmpInst::ICMP_SLE, "vclez"); case NEON::BI__builtin_neon_vcgtz_v: case NEON::BI__builtin_neon_vcgtzq_v: return EmitAArch64CompareBuiltinExpr(Ops[0], Ty, 
  case NEON::BI__builtin_neon_vceqz_v:
  case NEON::BI__builtin_neon_vceqzq_v:
    return EmitAArch64CompareBuiltinExpr(Ops[0], Ty, ICmpInst::FCMP_OEQ,
                                         ICmpInst::ICMP_EQ, "vceqz");
  case NEON::BI__builtin_neon_vcgez_v:
  case NEON::BI__builtin_neon_vcgezq_v:
    return EmitAArch64CompareBuiltinExpr(Ops[0], Ty, ICmpInst::FCMP_OGE,
                                         ICmpInst::ICMP_SGE, "vcgez");
  case NEON::BI__builtin_neon_vclez_v:
  case NEON::BI__builtin_neon_vclezq_v:
    return EmitAArch64CompareBuiltinExpr(Ops[0], Ty, ICmpInst::FCMP_OLE,
                                         ICmpInst::ICMP_SLE, "vclez");
  case NEON::BI__builtin_neon_vcgtz_v:
  case NEON::BI__builtin_neon_vcgtzq_v:
    return EmitAArch64CompareBuiltinExpr(Ops[0], Ty, ICmpInst::FCMP_OGT,
                                         ICmpInst::ICMP_SGT, "vcgtz");
  case NEON::BI__builtin_neon_vcltz_v:
  case NEON::BI__builtin_neon_vcltzq_v:
    return EmitAArch64CompareBuiltinExpr(Ops[0], Ty, ICmpInst::FCMP_OLT,
                                         ICmpInst::ICMP_SLT, "vcltz");
  case NEON::BI__builtin_neon_vcvt_f64_v:
  case NEON::BI__builtin_neon_vcvtq_f64_v:
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    Ty = GetNeonType(this, NeonTypeFlags(NeonTypeFlags::Float64, false, quad));
    return usgn ? Builder.CreateUIToFP(Ops[0], Ty, "vcvt")
                : Builder.CreateSIToFP(Ops[0], Ty, "vcvt");
  case NEON::BI__builtin_neon_vcvt_f64_f32: {
    assert(Type.getEltType() == NeonTypeFlags::Float64 && quad &&
           "unexpected vcvt_f64_f32 builtin");
    NeonTypeFlags SrcFlag = NeonTypeFlags(NeonTypeFlags::Float32, false, false);
    Ops[0] = Builder.CreateBitCast(Ops[0], GetNeonType(this, SrcFlag));
    return Builder.CreateFPExt(Ops[0], Ty, "vcvt");
  }
  case NEON::BI__builtin_neon_vcvt_f32_f64: {
    assert(Type.getEltType() == NeonTypeFlags::Float32 &&
           "unexpected vcvt_f32_f64 builtin");
    NeonTypeFlags SrcFlag = NeonTypeFlags(NeonTypeFlags::Float64, false, true);
    Ops[0] = Builder.CreateBitCast(Ops[0], GetNeonType(this, SrcFlag));
    return Builder.CreateFPTrunc(Ops[0], Ty, "vcvt");
  }
  case NEON::BI__builtin_neon_vcvt_s32_v:
  case NEON::BI__builtin_neon_vcvt_u32_v:
  case NEON::BI__builtin_neon_vcvt_s64_v:
  case NEON::BI__builtin_neon_vcvt_u64_v:
  case NEON::BI__builtin_neon_vcvtq_s32_v:
  case NEON::BI__builtin_neon_vcvtq_u32_v:
  case NEON::BI__builtin_neon_vcvtq_s64_v:
  case NEON::BI__builtin_neon_vcvtq_u64_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], GetFloatNeonType(this, Type));
    if (usgn)
      return Builder.CreateFPToUI(Ops[0], Ty);
    return Builder.CreateFPToSI(Ops[0], Ty);
  }
  case NEON::BI__builtin_neon_vcvta_s32_v:
  case NEON::BI__builtin_neon_vcvtaq_s32_v:
  case NEON::BI__builtin_neon_vcvta_u32_v:
  case NEON::BI__builtin_neon_vcvtaq_u32_v:
  case NEON::BI__builtin_neon_vcvta_s64_v:
  case NEON::BI__builtin_neon_vcvtaq_s64_v:
  case NEON::BI__builtin_neon_vcvta_u64_v:
  case NEON::BI__builtin_neon_vcvtaq_u64_v: {
    Int = usgn ? Intrinsic::aarch64_neon_fcvtau : Intrinsic::aarch64_neon_fcvtas;
    llvm::Type *Tys[2] = { Ty, GetFloatNeonType(this, Type) };
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vcvta");
  }
  case NEON::BI__builtin_neon_vcvtm_s32_v:
  case NEON::BI__builtin_neon_vcvtmq_s32_v:
  case NEON::BI__builtin_neon_vcvtm_u32_v:
  case NEON::BI__builtin_neon_vcvtmq_u32_v:
  case NEON::BI__builtin_neon_vcvtm_s64_v:
  case NEON::BI__builtin_neon_vcvtmq_s64_v:
  case NEON::BI__builtin_neon_vcvtm_u64_v:
  case NEON::BI__builtin_neon_vcvtmq_u64_v: {
    Int = usgn ? Intrinsic::aarch64_neon_fcvtmu : Intrinsic::aarch64_neon_fcvtms;
    llvm::Type *Tys[2] = { Ty, GetFloatNeonType(this, Type) };
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vcvtm");
  }
  case NEON::BI__builtin_neon_vcvtn_s32_v:
  case NEON::BI__builtin_neon_vcvtnq_s32_v:
  case NEON::BI__builtin_neon_vcvtn_u32_v:
  case NEON::BI__builtin_neon_vcvtnq_u32_v:
  case NEON::BI__builtin_neon_vcvtn_s64_v:
  case NEON::BI__builtin_neon_vcvtnq_s64_v:
  case NEON::BI__builtin_neon_vcvtn_u64_v:
  case NEON::BI__builtin_neon_vcvtnq_u64_v: {
    Int = usgn ? Intrinsic::aarch64_neon_fcvtnu : Intrinsic::aarch64_neon_fcvtns;
    llvm::Type *Tys[2] = { Ty, GetFloatNeonType(this, Type) };
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vcvtn");
  }
  case NEON::BI__builtin_neon_vcvtp_s32_v:
  case NEON::BI__builtin_neon_vcvtpq_s32_v:
  case NEON::BI__builtin_neon_vcvtp_u32_v:
  case NEON::BI__builtin_neon_vcvtpq_u32_v:
  case NEON::BI__builtin_neon_vcvtp_s64_v:
  case NEON::BI__builtin_neon_vcvtpq_s64_v:
  case NEON::BI__builtin_neon_vcvtp_u64_v:
  case NEON::BI__builtin_neon_vcvtpq_u64_v: {
    Int = usgn ? Intrinsic::aarch64_neon_fcvtpu : Intrinsic::aarch64_neon_fcvtps;
    llvm::Type *Tys[2] = { Ty, GetFloatNeonType(this, Type) };
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vcvtp");
  }
  case NEON::BI__builtin_neon_vmulx_v:
  case NEON::BI__builtin_neon_vmulxq_v: {
    Int = Intrinsic::aarch64_neon_fmulx;
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vmulx");
  }
  case NEON::BI__builtin_neon_vmul_lane_v:
  case NEON::BI__builtin_neon_vmul_laneq_v: {
    // v1f64 vmul_lane should be mapped to Neon scalar mul lane
    bool Quad = false;
    if (BuiltinID == NEON::BI__builtin_neon_vmul_laneq_v)
      Quad = true;
    Ops[0] = Builder.CreateBitCast(Ops[0], DoubleTy);
    llvm::Type *VTy = GetNeonType(this,
      NeonTypeFlags(NeonTypeFlags::Float64, false, Quad));
    Ops[1] = Builder.CreateBitCast(Ops[1], VTy);
    Ops[1] = Builder.CreateExtractElement(Ops[1], Ops[2], "extract");
    Value *Result = Builder.CreateFMul(Ops[0], Ops[1]);
    return Builder.CreateBitCast(Result, Ty);
  }
  case NEON::BI__builtin_neon_vnegd_s64:
    return Builder.CreateNeg(EmitScalarExpr(E->getArg(0)), "vnegd");
  case NEON::BI__builtin_neon_vpmaxnm_v:
  case NEON::BI__builtin_neon_vpmaxnmq_v: {
    Int = Intrinsic::aarch64_neon_fmaxnmp;
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vpmaxnm");
  }
  case NEON::BI__builtin_neon_vpminnm_v:
  case NEON::BI__builtin_neon_vpminnmq_v: {
    Int = Intrinsic::aarch64_neon_fminnmp;
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vpminnm");
  }
  case NEON::BI__builtin_neon_vsqrt_v:
  case NEON::BI__builtin_neon_vsqrtq_v: {
    Int = Intrinsic::sqrt;
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vsqrt");
  }
  case NEON::BI__builtin_neon_vrbit_v:
  case NEON::BI__builtin_neon_vrbitq_v: {
    Int = Intrinsic::aarch64_neon_rbit;
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vrbit");
  }
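  // The across-vector reductions below (vaddv/vmaxv/vminv/vaddlv and their
  // q-suffixed forms) share one pattern: the aarch64.neon.*v intrinsics are
  // declared to return i32 even for i8/i16 element types, so the call result
  // is truncated back to the narrow result type afterwards.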
  case NEON::BI__builtin_neon_vaddv_u8:
    // FIXME: These are handled by the AArch64 scalar code.
    usgn = true;
    // FALLTHROUGH
  case NEON::BI__builtin_neon_vaddv_s8: {
    Int = usgn ? Intrinsic::aarch64_neon_uaddv : Intrinsic::aarch64_neon_saddv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vaddv_u16:
    usgn = true;
    // FALLTHROUGH
  case NEON::BI__builtin_neon_vaddv_s16: {
    Int = usgn ? Intrinsic::aarch64_neon_uaddv : Intrinsic::aarch64_neon_saddv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 4);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vaddvq_u8:
    usgn = true;
    // FALLTHROUGH
  case NEON::BI__builtin_neon_vaddvq_s8: {
    Int = usgn ? Intrinsic::aarch64_neon_uaddv : Intrinsic::aarch64_neon_saddv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 16);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vaddvq_u16:
    usgn = true;
    // FALLTHROUGH
  case NEON::BI__builtin_neon_vaddvq_s16: {
    Int = usgn ? Intrinsic::aarch64_neon_uaddv : Intrinsic::aarch64_neon_saddv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vmaxv_u8: {
    Int = Intrinsic::aarch64_neon_umaxv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vmaxv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vmaxv_u16: {
    Int = Intrinsic::aarch64_neon_umaxv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 4);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vmaxv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vmaxvq_u8: {
    Int = Intrinsic::aarch64_neon_umaxv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 16);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vmaxv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vmaxvq_u16: {
    Int = Intrinsic::aarch64_neon_umaxv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vmaxv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vmaxv_s8: {
    Int = Intrinsic::aarch64_neon_smaxv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vmaxv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vmaxv_s16: {
    Int = Intrinsic::aarch64_neon_smaxv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 4);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vmaxv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vmaxvq_s8: {
    Int = Intrinsic::aarch64_neon_smaxv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 16);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vmaxv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vmaxvq_s16: {
    Int = Intrinsic::aarch64_neon_smaxv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vmaxv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vminv_u8: {
    Int = Intrinsic::aarch64_neon_uminv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vminv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vminv_u16: {
    Int = Intrinsic::aarch64_neon_uminv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 4);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vminv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vminvq_u8: {
    Int = Intrinsic::aarch64_neon_uminv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 16);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vminv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vminvq_u16: {
    Int = Intrinsic::aarch64_neon_uminv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vminv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vminv_s8: {
    Int = Intrinsic::aarch64_neon_sminv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vminv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vminv_s16: {
    Int = Intrinsic::aarch64_neon_sminv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 4);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vminv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vminvq_s8: {
    Int = Intrinsic::aarch64_neon_sminv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 16);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vminv");
    return Builder.CreateTrunc(Ops[0], Int8Ty);
  }
  case NEON::BI__builtin_neon_vminvq_s16: {
    Int = Intrinsic::aarch64_neon_sminv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vminv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vmul_n_f64: {
    Ops[0] = Builder.CreateBitCast(Ops[0], DoubleTy);
    Value *RHS = Builder.CreateBitCast(EmitScalarExpr(E->getArg(1)), DoubleTy);
    return Builder.CreateFMul(Ops[0], RHS);
  }
  case NEON::BI__builtin_neon_vaddlv_u8: {
    Int = Intrinsic::aarch64_neon_uaddlv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddlv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vaddlv_u16: {
    Int = Intrinsic::aarch64_neon_uaddlv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 4);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddlv");
  }
  case NEON::BI__builtin_neon_vaddlvq_u8: {
    Int = Intrinsic::aarch64_neon_uaddlv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 16);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddlv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vaddlvq_u16: {
    Int = Intrinsic::aarch64_neon_uaddlv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddlv");
  }
  case NEON::BI__builtin_neon_vaddlv_s8: {
    Int = Intrinsic::aarch64_neon_saddlv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddlv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vaddlv_s16: {
    Int = Intrinsic::aarch64_neon_saddlv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 4);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddlv");
  }
  case NEON::BI__builtin_neon_vaddlvq_s8: {
    Int = Intrinsic::aarch64_neon_saddlv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int8Ty, 16);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    Ops[0] = EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddlv");
    return Builder.CreateTrunc(Ops[0], Int16Ty);
  }
  case NEON::BI__builtin_neon_vaddlvq_s16: {
    Int = Intrinsic::aarch64_neon_saddlv;
    Ty = Int32Ty;
    VTy = llvm::VectorType::get(Int16Ty, 8);
    llvm::Type *Tys[2] = { Ty, VTy };
    Ops.push_back(EmitScalarExpr(E->getArg(0)));
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "vaddlv");
  }
  case NEON::BI__builtin_neon_vsri_n_v:
  case NEON::BI__builtin_neon_vsriq_n_v: {
    Int = Intrinsic::aarch64_neon_vsri;
    llvm::Function *Intrin = CGM.getIntrinsic(Int, Ty);
    return EmitNeonCall(Intrin, Ops, "vsri_n");
  }
  case NEON::BI__builtin_neon_vsli_n_v:
  case NEON::BI__builtin_neon_vsliq_n_v: {
    Int = Intrinsic::aarch64_neon_vsli;
    llvm::Function *Intrin = CGM.getIntrinsic(Int, Ty);
    return EmitNeonCall(Intrin, Ops, "vsli_n");
  }
  case NEON::BI__builtin_neon_vsra_n_v:
  case NEON::BI__builtin_neon_vsraq_n_v:
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    Ops[1] = EmitNeonRShiftImm(Ops[1], Ops[2], Ty, usgn, "vsra_n");
    return Builder.CreateAdd(Ops[0], Ops[1]);
  case NEON::BI__builtin_neon_vrsra_n_v:
  case NEON::BI__builtin_neon_vrsraq_n_v: {
    Int = usgn ? Intrinsic::aarch64_neon_urshl : Intrinsic::aarch64_neon_srshl;
    SmallVector<llvm::Value*, 2> TmpOps;
    TmpOps.push_back(Ops[1]);
    TmpOps.push_back(Ops[2]);
    Function* F = CGM.getIntrinsic(Int, Ty);
    llvm::Value *tmp = EmitNeonCall(F, TmpOps, "vrshr_n", 1, true);
    Ops[0] = Builder.CreateBitCast(Ops[0], VTy);
    return Builder.CreateAdd(Ops[0], tmp);
  }
  // FIXME: Sharing loads & stores with 32-bit is complicated by the absence
  // of an Align parameter here.
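  // The vldN/vstN builtins below use an aggregate-return convention: for
  // loads, Ops[0] is a pointer to a temporary that receives the {N x vector}
  // struct returned by the intrinsic, and the caller reads the aggregate
  // back out of that temporary.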
  case NEON::BI__builtin_neon_vld1_x2_v:
  case NEON::BI__builtin_neon_vld1q_x2_v:
  case NEON::BI__builtin_neon_vld1_x3_v:
  case NEON::BI__builtin_neon_vld1q_x3_v:
  case NEON::BI__builtin_neon_vld1_x4_v:
  case NEON::BI__builtin_neon_vld1q_x4_v: {
    llvm::Type *PTy = llvm::PointerType::getUnqual(VTy->getVectorElementType());
    Ops[1] = Builder.CreateBitCast(Ops[1], PTy);
    llvm::Type *Tys[2] = { VTy, PTy };
    unsigned Int;
    switch (BuiltinID) {
    case NEON::BI__builtin_neon_vld1_x2_v:
    case NEON::BI__builtin_neon_vld1q_x2_v:
      Int = Intrinsic::aarch64_neon_ld1x2;
      break;
    case NEON::BI__builtin_neon_vld1_x3_v:
    case NEON::BI__builtin_neon_vld1q_x3_v:
      Int = Intrinsic::aarch64_neon_ld1x3;
      break;
    case NEON::BI__builtin_neon_vld1_x4_v:
    case NEON::BI__builtin_neon_vld1q_x4_v:
      Int = Intrinsic::aarch64_neon_ld1x4;
      break;
    }
    Function *F = CGM.getIntrinsic(Int, Tys);
    Ops[1] = Builder.CreateCall(F, Ops[1], "vld1xN");
    Ty = llvm::PointerType::getUnqual(Ops[1]->getType());
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vst1_x2_v:
  case NEON::BI__builtin_neon_vst1q_x2_v:
  case NEON::BI__builtin_neon_vst1_x3_v:
  case NEON::BI__builtin_neon_vst1q_x3_v:
  case NEON::BI__builtin_neon_vst1_x4_v:
  case NEON::BI__builtin_neon_vst1q_x4_v: {
    llvm::Type *PTy = llvm::PointerType::getUnqual(VTy->getVectorElementType());
    llvm::Type *Tys[2] = { VTy, PTy };
    unsigned Int;
    switch (BuiltinID) {
    case NEON::BI__builtin_neon_vst1_x2_v:
    case NEON::BI__builtin_neon_vst1q_x2_v:
      Int = Intrinsic::aarch64_neon_st1x2;
      break;
    case NEON::BI__builtin_neon_vst1_x3_v:
    case NEON::BI__builtin_neon_vst1q_x3_v:
      Int = Intrinsic::aarch64_neon_st1x3;
      break;
    case NEON::BI__builtin_neon_vst1_x4_v:
    case NEON::BI__builtin_neon_vst1q_x4_v:
      Int = Intrinsic::aarch64_neon_st1x4;
      break;
    }
    std::rotate(Ops.begin(), Ops.begin() + 1, Ops.end());
    return EmitNeonCall(CGM.getIntrinsic(Int, Tys), Ops, "");
  }
  case NEON::BI__builtin_neon_vld1_v:
  case NEON::BI__builtin_neon_vld1q_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::PointerType::getUnqual(VTy));
    auto Alignment = CharUnits::fromQuantity(
        BuiltinID == NEON::BI__builtin_neon_vld1_v ? 8 : 16);
    return Builder.CreateAlignedLoad(VTy, Ops[0], Alignment);
  }
  case NEON::BI__builtin_neon_vst1_v:
  case NEON::BI__builtin_neon_vst1q_v:
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::PointerType::getUnqual(VTy));
    Ops[1] = Builder.CreateBitCast(Ops[1], VTy);
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  case NEON::BI__builtin_neon_vld1_lane_v:
  case NEON::BI__builtin_neon_vld1q_lane_v: {
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ty = llvm::PointerType::getUnqual(VTy->getElementType());
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    auto Alignment = CharUnits::fromQuantity(
        BuiltinID == NEON::BI__builtin_neon_vld1_lane_v ? 8 : 16);
    Ops[0] =
        Builder.CreateAlignedLoad(VTy->getElementType(), Ops[0], Alignment);
    return Builder.CreateInsertElement(Ops[1], Ops[0], Ops[2], "vld1_lane");
  }
  case NEON::BI__builtin_neon_vld1_dup_v:
  case NEON::BI__builtin_neon_vld1q_dup_v: {
    Value *V = UndefValue::get(Ty);
    Ty = llvm::PointerType::getUnqual(VTy->getElementType());
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    auto Alignment = CharUnits::fromQuantity(
        BuiltinID == NEON::BI__builtin_neon_vld1_dup_v ? 8 : 16);
    Ops[0] =
        Builder.CreateAlignedLoad(VTy->getElementType(), Ops[0], Alignment);
    llvm::Constant *CI = ConstantInt::get(Int32Ty, 0);
    Ops[0] = Builder.CreateInsertElement(V, Ops[0], CI);
    return EmitNeonSplat(Ops[0], CI);
  }
  case NEON::BI__builtin_neon_vst1_lane_v:
  case NEON::BI__builtin_neon_vst1q_lane_v:
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[1] = Builder.CreateExtractElement(Ops[1], Ops[2]);
    Ty = llvm::PointerType::getUnqual(Ops[1]->getType());
    return Builder.CreateDefaultAlignedStore(Ops[1],
                                             Builder.CreateBitCast(Ops[0], Ty));
  case NEON::BI__builtin_neon_vld2_v:
  case NEON::BI__builtin_neon_vld2q_v: {
    llvm::Type *PTy = llvm::PointerType::getUnqual(VTy);
    Ops[1] = Builder.CreateBitCast(Ops[1], PTy);
    llvm::Type *Tys[2] = { VTy, PTy };
    Function *F = CGM.getIntrinsic(Intrinsic::aarch64_neon_ld2, Tys);
    Ops[1] = Builder.CreateCall(F, Ops[1], "vld2");
    Ops[0] = Builder.CreateBitCast(Ops[0],
                llvm::PointerType::getUnqual(Ops[1]->getType()));
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vld3_v:
  case NEON::BI__builtin_neon_vld3q_v: {
    llvm::Type *PTy = llvm::PointerType::getUnqual(VTy);
    Ops[1] = Builder.CreateBitCast(Ops[1], PTy);
    llvm::Type *Tys[2] = { VTy, PTy };
    Function *F = CGM.getIntrinsic(Intrinsic::aarch64_neon_ld3, Tys);
    Ops[1] = Builder.CreateCall(F, Ops[1], "vld3");
    Ops[0] = Builder.CreateBitCast(Ops[0],
                llvm::PointerType::getUnqual(Ops[1]->getType()));
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vld4_v:
  case NEON::BI__builtin_neon_vld4q_v: {
    llvm::Type *PTy = llvm::PointerType::getUnqual(VTy);
    Ops[1] = Builder.CreateBitCast(Ops[1], PTy);
    llvm::Type *Tys[2] = { VTy, PTy };
    Function *F = CGM.getIntrinsic(Intrinsic::aarch64_neon_ld4, Tys);
    Ops[1] = Builder.CreateCall(F, Ops[1], "vld4");
    Ops[0] = Builder.CreateBitCast(Ops[0],
                llvm::PointerType::getUnqual(Ops[1]->getType()));
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vld2_dup_v:
  case NEON::BI__builtin_neon_vld2q_dup_v: {
    llvm::Type *PTy =
      llvm::PointerType::getUnqual(VTy->getElementType());
    Ops[1] = Builder.CreateBitCast(Ops[1], PTy);
    llvm::Type *Tys[2] = { VTy, PTy };
    Function *F = CGM.getIntrinsic(Intrinsic::aarch64_neon_ld2r, Tys);
    Ops[1] = Builder.CreateCall(F, Ops[1], "vld2");
    Ops[0] = Builder.CreateBitCast(Ops[0],
                llvm::PointerType::getUnqual(Ops[1]->getType()));
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vld3_dup_v:
  case NEON::BI__builtin_neon_vld3q_dup_v: {
    llvm::Type *PTy =
      llvm::PointerType::getUnqual(VTy->getElementType());
    Ops[1] = Builder.CreateBitCast(Ops[1], PTy);
    llvm::Type *Tys[2] = { VTy, PTy };
    Function *F = CGM.getIntrinsic(Intrinsic::aarch64_neon_ld3r, Tys);
    Ops[1] = Builder.CreateCall(F, Ops[1], "vld3");
    Ops[0] = Builder.CreateBitCast(Ops[0],
                llvm::PointerType::getUnqual(Ops[1]->getType()));
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vld4_dup_v:
  case NEON::BI__builtin_neon_vld4q_dup_v: {
    llvm::Type *PTy =
      llvm::PointerType::getUnqual(VTy->getElementType());
    Ops[1] = Builder.CreateBitCast(Ops[1], PTy);
    llvm::Type *Tys[2] = { VTy, PTy };
    Function *F = CGM.getIntrinsic(Intrinsic::aarch64_neon_ld4r, Tys);
    Ops[1] = Builder.CreateCall(F, Ops[1], "vld4");
    Ops[0] = Builder.CreateBitCast(Ops[0],
                llvm::PointerType::getUnqual(Ops[1]->getType()));
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vld2_lane_v:
  case NEON::BI__builtin_neon_vld2q_lane_v: {
    llvm::Type *Tys[2] = { VTy, Ops[1]->getType() };
    Function *F = CGM.getIntrinsic(Intrinsic::aarch64_neon_ld2lane, Tys);
    Ops.push_back(Ops[1]);
    Ops.erase(Ops.begin()+1);
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);
    Ops[3] = Builder.CreateZExt(Ops[3], Int64Ty);
    Ops[1] = Builder.CreateCall(F, makeArrayRef(Ops).slice(1), "vld2_lane");
    Ty = llvm::PointerType::getUnqual(Ops[1]->getType());
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vld3_lane_v:
  case NEON::BI__builtin_neon_vld3q_lane_v: {
    llvm::Type *Tys[2] = { VTy, Ops[1]->getType() };
    Function *F = CGM.getIntrinsic(Intrinsic::aarch64_neon_ld3lane, Tys);
    Ops.push_back(Ops[1]);
    Ops.erase(Ops.begin()+1);
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);
    Ops[3] = Builder.CreateBitCast(Ops[3], Ty);
    Ops[4] = Builder.CreateZExt(Ops[4], Int64Ty);
    Ops[1] = Builder.CreateCall(F, makeArrayRef(Ops).slice(1), "vld3_lane");
    Ty = llvm::PointerType::getUnqual(Ops[1]->getType());
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vld4_lane_v:
  case NEON::BI__builtin_neon_vld4q_lane_v: {
    llvm::Type *Tys[2] = { VTy, Ops[1]->getType() };
    Function *F = CGM.getIntrinsic(Intrinsic::aarch64_neon_ld4lane, Tys);
    Ops.push_back(Ops[1]);
    Ops.erase(Ops.begin()+1);
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);
    Ops[3] = Builder.CreateBitCast(Ops[3], Ty);
    Ops[4] = Builder.CreateBitCast(Ops[4], Ty);
    Ops[5] = Builder.CreateZExt(Ops[5], Int64Ty);
    Ops[1] = Builder.CreateCall(F, makeArrayRef(Ops).slice(1), "vld4_lane");
    Ty = llvm::PointerType::getUnqual(Ops[1]->getType());
    Ops[0] = Builder.CreateBitCast(Ops[0], Ty);
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case NEON::BI__builtin_neon_vst2_v:
  case NEON::BI__builtin_neon_vst2q_v: {
    Ops.push_back(Ops[0]);
    Ops.erase(Ops.begin());
    llvm::Type *Tys[2] = { VTy, Ops[2]->getType() };
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_st2, Tys),
                        Ops, "");
  }
  case NEON::BI__builtin_neon_vst2_lane_v:
  case NEON::BI__builtin_neon_vst2q_lane_v: {
    Ops.push_back(Ops[0]);
    Ops.erase(Ops.begin());
    Ops[2] = Builder.CreateZExt(Ops[2], Int64Ty);
    llvm::Type *Tys[2] = { VTy, Ops[3]->getType() };
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_st2lane, Tys),
                        Ops, "");
  }
  case NEON::BI__builtin_neon_vst3_v:
  case NEON::BI__builtin_neon_vst3q_v: {
    Ops.push_back(Ops[0]);
    Ops.erase(Ops.begin());
    llvm::Type *Tys[2] = { VTy, Ops[3]->getType() };
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_st3, Tys),
                        Ops, "");
  }
  case NEON::BI__builtin_neon_vst3_lane_v:
  case NEON::BI__builtin_neon_vst3q_lane_v: {
    Ops.push_back(Ops[0]);
    Ops.erase(Ops.begin());
    Ops[3] = Builder.CreateZExt(Ops[3], Int64Ty);
    llvm::Type *Tys[2] = { VTy, Ops[4]->getType() };
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_st3lane, Tys),
                        Ops, "");
  }
  case NEON::BI__builtin_neon_vst4_v:
  case NEON::BI__builtin_neon_vst4q_v: {
    Ops.push_back(Ops[0]);
    Ops.erase(Ops.begin());
    llvm::Type *Tys[2] = { VTy, Ops[4]->getType() };
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_st4, Tys),
                        Ops, "");
  }
  case NEON::BI__builtin_neon_vst4_lane_v:
  case NEON::BI__builtin_neon_vst4q_lane_v: {
    Ops.push_back(Ops[0]);
    Ops.erase(Ops.begin());
    Ops[4] = Builder.CreateZExt(Ops[4], Int64Ty);
    llvm::Type *Tys[2] = { VTy, Ops[5]->getType() };
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_st4lane, Tys),
                        Ops, "");
  }
  case NEON::BI__builtin_neon_vtrn_v:
  case NEON::BI__builtin_neon_vtrnq_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::PointerType::getUnqual(Ty));
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);
    Value *SV = nullptr;
    for (unsigned vi = 0; vi != 2; ++vi) {
      SmallVector<uint32_t, 16> Indices;
      for (unsigned i = 0, e = VTy->getNumElements(); i != e; i += 2) {
        Indices.push_back(i+vi);
        Indices.push_back(i+e+vi);
      }
      Value *Addr = Builder.CreateConstInBoundsGEP1_32(Ty, Ops[0], vi);
      SV = Builder.CreateShuffleVector(Ops[1], Ops[2], Indices, "vtrn");
      SV = Builder.CreateDefaultAlignedStore(SV, Addr);
    }
    return SV;
  }
  case NEON::BI__builtin_neon_vuzp_v:
  case NEON::BI__builtin_neon_vuzpq_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::PointerType::getUnqual(Ty));
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);
    Value *SV = nullptr;
    for (unsigned vi = 0; vi != 2; ++vi) {
      SmallVector<uint32_t, 16> Indices;
      for (unsigned i = 0, e = VTy->getNumElements(); i != e; ++i)
        Indices.push_back(2*i+vi);
      Value *Addr = Builder.CreateConstInBoundsGEP1_32(Ty, Ops[0], vi);
      SV = Builder.CreateShuffleVector(Ops[1], Ops[2], Indices, "vuzp");
      SV = Builder.CreateDefaultAlignedStore(SV, Addr);
    }
    return SV;
  }
  case NEON::BI__builtin_neon_vzip_v:
  case NEON::BI__builtin_neon_vzipq_v: {
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::PointerType::getUnqual(Ty));
    Ops[1] = Builder.CreateBitCast(Ops[1], Ty);
    Ops[2] = Builder.CreateBitCast(Ops[2], Ty);
    Value *SV = nullptr;
    for (unsigned vi = 0; vi != 2; ++vi) {
      SmallVector<uint32_t, 16> Indices;
      for (unsigned i = 0, e = VTy->getNumElements(); i != e; i += 2) {
        Indices.push_back((i + vi*e) >> 1);
        Indices.push_back(((i + vi*e) >> 1)+e);
      }
      Value *Addr = Builder.CreateConstInBoundsGEP1_32(Ty, Ops[0], vi);
      SV = Builder.CreateShuffleVector(Ops[1], Ops[2], Indices, "vzip");
      SV = Builder.CreateDefaultAlignedStore(SV, Addr);
    }
    return SV;
  }
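  // For reference, with 4-element vectors the two shuffle masks built by the
  // loops above come out as:
  //   vtrn: {0,4,2,6} and {1,5,3,7}  (transpose even/odd lane pairs)
  //   vuzp: {0,2,4,6} and {1,3,5,7}  (de-interleave)
  //   vzip: {0,4,1,5} and {2,6,3,7}  (interleave low/high halves)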
  case NEON::BI__builtin_neon_vqtbl1q_v: {
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_tbl1, Ty),
                        Ops, "vtbl1");
  }
  case NEON::BI__builtin_neon_vqtbl2q_v: {
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_tbl2, Ty),
                        Ops, "vtbl2");
  }
  case NEON::BI__builtin_neon_vqtbl3q_v: {
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_tbl3, Ty),
                        Ops, "vtbl3");
  }
  case NEON::BI__builtin_neon_vqtbl4q_v: {
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_tbl4, Ty),
                        Ops, "vtbl4");
  }
  case NEON::BI__builtin_neon_vqtbx1q_v: {
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_tbx1, Ty),
                        Ops, "vtbx1");
  }
  case NEON::BI__builtin_neon_vqtbx2q_v: {
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_tbx2, Ty),
                        Ops, "vtbx2");
  }
  case NEON::BI__builtin_neon_vqtbx3q_v: {
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_tbx3, Ty),
                        Ops, "vtbx3");
  }
  case NEON::BI__builtin_neon_vqtbx4q_v: {
    return EmitNeonCall(CGM.getIntrinsic(Intrinsic::aarch64_neon_tbx4, Ty),
                        Ops, "vtbx4");
  }
  case NEON::BI__builtin_neon_vsqadd_v:
  case NEON::BI__builtin_neon_vsqaddq_v: {
    Int = Intrinsic::aarch64_neon_usqadd;
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vsqadd");
  }
  case NEON::BI__builtin_neon_vuqadd_v:
  case NEON::BI__builtin_neon_vuqaddq_v: {
    Int = Intrinsic::aarch64_neon_suqadd;
    return EmitNeonCall(CGM.getIntrinsic(Int, Ty), Ops, "vuqadd");
  }
  }
}

llvm::Value *CodeGenFunction::
BuildVector(ArrayRef<llvm::Value*> Ops) {
  assert((Ops.size() & (Ops.size() - 1)) == 0 &&
         "Not a power-of-two sized vector!");
  bool AllConstants = true;
  for (unsigned i = 0, e = Ops.size(); i != e && AllConstants; ++i)
    AllConstants &= isa<Constant>(Ops[i]);

  // If this is a constant vector, create a ConstantVector.
  if (AllConstants) {
    SmallVector<llvm::Constant*, 16> CstOps;
    for (unsigned i = 0, e = Ops.size(); i != e; ++i)
      CstOps.push_back(cast<Constant>(Ops[i]));
    return llvm::ConstantVector::get(CstOps);
  }

  // Otherwise, insertelement the values to build the vector.
  Value *Result =
    llvm::UndefValue::get(llvm::VectorType::get(Ops[0]->getType(), Ops.size()));

  for (unsigned i = 0, e = Ops.size(); i != e; ++i)
    Result = Builder.CreateInsertElement(Result, Ops[i], Builder.getInt32(i));

  return Result;
}
// Convert the mask from an integer type to a vector of i1.
static Value *getMaskVecValue(CodeGenFunction &CGF, Value *Mask,
                              unsigned NumElts) {
  llvm::VectorType *MaskTy = llvm::VectorType::get(
      CGF.Builder.getInt1Ty(),
      cast<IntegerType>(Mask->getType())->getBitWidth());
  Value *MaskVec = CGF.Builder.CreateBitCast(Mask, MaskTy);

  // If we have less than 8 elements, then the starting mask was an i8 and
  // we need to extract down to the right number of elements.
  if (NumElts < 8) {
    uint32_t Indices[4];
    for (unsigned i = 0; i != NumElts; ++i)
      Indices[i] = i;
    MaskVec = CGF.Builder.CreateShuffleVector(MaskVec, MaskVec,
                                              makeArrayRef(Indices, NumElts),
                                              "extract");
  }
  return MaskVec;
}

static Value *EmitX86MaskedStore(CodeGenFunction &CGF,
                                 SmallVectorImpl<Value *> &Ops,
                                 unsigned Align) {
  // Cast the pointer to right type.
  Ops[0] = CGF.Builder.CreateBitCast(Ops[0],
                               llvm::PointerType::getUnqual(Ops[1]->getType()));

  // If the mask is all ones just emit a regular store.
  if (const auto *C = dyn_cast<Constant>(Ops[2]))
    if (C->isAllOnesValue())
      return CGF.Builder.CreateAlignedStore(Ops[1], Ops[0], Align);

  Value *MaskVec = getMaskVecValue(CGF, Ops[2],
                                   Ops[1]->getType()->getVectorNumElements());

  return CGF.Builder.CreateMaskedStore(Ops[1], Ops[0], Align, MaskVec);
}

static Value *EmitX86MaskedLoad(CodeGenFunction &CGF,
                                SmallVectorImpl<Value *> &Ops, unsigned Align) {
  // Cast the pointer to right type.
  Ops[0] = CGF.Builder.CreateBitCast(Ops[0],
                               llvm::PointerType::getUnqual(Ops[1]->getType()));

  // If the mask is all ones just emit a regular load.
  if (const auto *C = dyn_cast<Constant>(Ops[2]))
    if (C->isAllOnesValue())
      return CGF.Builder.CreateAlignedLoad(Ops[0], Align);

  Value *MaskVec = getMaskVecValue(CGF, Ops[2],
                                   Ops[1]->getType()->getVectorNumElements());

  return CGF.Builder.CreateMaskedLoad(Ops[0], Align, MaskVec, Ops[1]);
}

static Value *EmitX86SubVectorBroadcast(CodeGenFunction &CGF,
                                        SmallVectorImpl<Value *> &Ops,
                                        llvm::Type *DstTy,
                                        unsigned SrcSizeInBits,
                                        unsigned Align) {
  // Load the subvector.
  Ops[0] = CGF.Builder.CreateAlignedLoad(Ops[0], Align);

  // Create broadcast mask.
  unsigned NumDstElts = DstTy->getVectorNumElements();
  unsigned NumSrcElts = SrcSizeInBits / DstTy->getScalarSizeInBits();

  SmallVector<uint32_t, 8> Mask;
  for (unsigned i = 0; i != NumDstElts; i += NumSrcElts)
    for (unsigned j = 0; j != NumSrcElts; ++j)
      Mask.push_back(j);

  return CGF.Builder.CreateShuffleVector(Ops[0], Ops[0], Mask, "subvecbcst");
}

static Value *EmitX86Select(CodeGenFunction &CGF,
                            Value *Mask, Value *Op0, Value *Op1) {
  // If the mask is all ones just return first argument.
  if (const auto *C = dyn_cast<Constant>(Mask))
    if (C->isAllOnesValue())
      return Op0;

  Mask = getMaskVecValue(CGF, Mask,
                         Op0->getType()->getVectorNumElements());

  return CGF.Builder.CreateSelect(Mask, Op0, Op1);
}
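// Illustration of the mask helpers above: AVX-512 masks reach the builtins
// as plain integers, so a 4-lane mask arrives as an i8 and is narrowed to
// <4 x i1>, roughly:
//   %vec = bitcast i8 %mask to <8 x i1>
//   %m = shufflevector <8 x i1> %vec, <8 x i1> %vec,
//                      <4 x i32> <i32 0, i32 1, i32 2, i32 3>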
static Value *EmitX86MaskedCompare(CodeGenFunction &CGF, unsigned CC,
                                   bool Signed, SmallVectorImpl<Value *> &Ops) {
  unsigned NumElts = Ops[0]->getType()->getVectorNumElements();
  Value *Cmp;

  if (CC == 3) {
    Cmp = Constant::getNullValue(
                       llvm::VectorType::get(CGF.Builder.getInt1Ty(), NumElts));
  } else if (CC == 7) {
    Cmp = Constant::getAllOnesValue(
                       llvm::VectorType::get(CGF.Builder.getInt1Ty(), NumElts));
  } else {
    ICmpInst::Predicate Pred;
    switch (CC) {
    default: llvm_unreachable("Unknown condition code");
    case 0: Pred = ICmpInst::ICMP_EQ;  break;
    case 1: Pred = Signed ? ICmpInst::ICMP_SLT : ICmpInst::ICMP_ULT; break;
    case 2: Pred = Signed ? ICmpInst::ICMP_SLE : ICmpInst::ICMP_ULE; break;
    case 4: Pred = ICmpInst::ICMP_NE;  break;
    case 5: Pred = Signed ? ICmpInst::ICMP_SGE : ICmpInst::ICMP_UGE; break;
    case 6: Pred = Signed ? ICmpInst::ICMP_SGT : ICmpInst::ICMP_UGT; break;
    }
    Cmp = CGF.Builder.CreateICmp(Pred, Ops[0], Ops[1]);
  }

  const auto *C = dyn_cast<Constant>(Ops.back());
  if (!C || !C->isAllOnesValue())
    Cmp = CGF.Builder.CreateAnd(Cmp, getMaskVecValue(CGF, Ops.back(), NumElts));

  if (NumElts < 8) {
    uint32_t Indices[8];
    for (unsigned i = 0; i != NumElts; ++i)
      Indices[i] = i;
    for (unsigned i = NumElts; i != 8; ++i)
      Indices[i] = i % NumElts + NumElts;
    Cmp = CGF.Builder.CreateShuffleVector(
        Cmp, llvm::Constant::getNullValue(Cmp->getType()), Indices);
  }
  return CGF.Builder.CreateBitCast(Cmp,
                                   IntegerType::get(CGF.getLLVMContext(),
                                                    std::max(NumElts, 8U)));
}

static Value *EmitX86MinMax(CodeGenFunction &CGF, ICmpInst::Predicate Pred,
                            ArrayRef<Value *> Ops) {
  Value *Cmp = CGF.Builder.CreateICmp(Pred, Ops[0], Ops[1]);
  Value *Res = CGF.Builder.CreateSelect(Cmp, Ops[0], Ops[1]);

  if (Ops.size() == 2)
    return Res;

  assert(Ops.size() == 4);
  return EmitX86Select(CGF, Ops[3], Res, Ops[2]);
}
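// EmitX86MinMax lowers the integer min/max builtins to the canonical
// compare+select idiom, e.g. smax(a, b) becomes select(a > b, a, b); the
// 4-operand form additionally merges the result under a mask via
// EmitX86Select.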
Value *CodeGenFunction::EmitX86BuiltinExpr(unsigned BuiltinID,
                                           const CallExpr *E) {
  if (BuiltinID == X86::BI__builtin_ms_va_start ||
      BuiltinID == X86::BI__builtin_ms_va_end)
    return EmitVAStartEnd(EmitMSVAListRef(E->getArg(0)).getPointer(),
                          BuiltinID == X86::BI__builtin_ms_va_start);
  if (BuiltinID == X86::BI__builtin_ms_va_copy) {
    // Lower this manually. We can't reliably determine whether or not any
    // given va_copy() is for a Win64 va_list from the calling convention
    // alone, because it's legal to do this from a System V ABI function.
    // With opaque pointer types, we won't have enough information in LLVM
    // IR to determine this from the argument types, either. Best to do it
    // now, while we have enough information.
    Address DestAddr = EmitMSVAListRef(E->getArg(0));
    Address SrcAddr = EmitMSVAListRef(E->getArg(1));

    llvm::Type *BPP = Int8PtrPtrTy;

    DestAddr = Address(Builder.CreateBitCast(DestAddr.getPointer(), BPP, "cp"),
                       DestAddr.getAlignment());
    SrcAddr = Address(Builder.CreateBitCast(SrcAddr.getPointer(), BPP, "ap"),
                      SrcAddr.getAlignment());

    Value *ArgPtr = Builder.CreateLoad(SrcAddr, "ap.val");
    return Builder.CreateStore(ArgPtr, DestAddr);
  }

  SmallVector<Value*, 4> Ops;

  // Find out if any arguments are required to be integer constant expressions.
  unsigned ICEArguments = 0;
  ASTContext::GetBuiltinTypeError Error;
  getContext().GetBuiltinType(BuiltinID, Error, &ICEArguments);
  assert(Error == ASTContext::GE_None && "Should not codegen an error");

  for (unsigned i = 0, e = E->getNumArgs(); i != e; i++) {
    // If this is a normal argument, just emit it as a scalar.
    if ((ICEArguments & (1 << i)) == 0) {
      Ops.push_back(EmitScalarExpr(E->getArg(i)));
      continue;
    }

    // If this is required to be a constant, constant fold it so that we know
    // that the generated intrinsic gets a ConstantInt.
    llvm::APSInt Result;
    bool IsConst = E->getArg(i)->isIntegerConstantExpr(Result, getContext());
    assert(IsConst && "Constant arg isn't actually constant?"); (void)IsConst;
    Ops.push_back(llvm::ConstantInt::get(getLLVMContext(), Result));
  }

  // These exist so that the builtin that takes an immediate can be bounds
  // checked by clang to avoid passing bad immediates to the backend. Since
  // AVX has a larger immediate than SSE we would need separate builtins to
  // do the different bounds checking. Rather than create a clang specific
  // SSE only builtin, this implements eight separate builtins to match gcc
  // implementation.
  auto getCmpIntrinsicCall = [this, &Ops](Intrinsic::ID ID, unsigned Imm) {
    Ops.push_back(llvm::ConstantInt::get(Int8Ty, Imm));
    llvm::Function *F = CGM.getIntrinsic(ID);
    return Builder.CreateCall(F, Ops);
  };

  // For the vector forms of FP comparisons, translate the builtins directly to
  // IR.
  // TODO: The builtins could be removed if the SSE header files used vector
  // extension comparisons directly (vector ordered/unordered may need
  // additional support via __builtin_isnan()).
  auto getVectorFCmpIR = [this, &Ops](CmpInst::Predicate Pred) {
    Value *Cmp = Builder.CreateFCmp(Pred, Ops[0], Ops[1]);
    llvm::VectorType *FPVecTy = cast<llvm::VectorType>(Ops[0]->getType());
    llvm::VectorType *IntVecTy = llvm::VectorType::getInteger(FPVecTy);
    Value *Sext = Builder.CreateSExt(Cmp, IntVecTy);
    return Builder.CreateBitCast(Sext, FPVecTy);
  };

  switch (BuiltinID) {
  default: return nullptr;
  case X86::BI__builtin_cpu_supports: {
    const Expr *FeatureExpr = E->getArg(0)->IgnoreParenCasts();
    StringRef FeatureStr = cast<StringLiteral>(FeatureExpr)->getString();

    // TODO: When/if this becomes more than x86 specific then use a TargetInfo
    // based mapping.
    // Processor features and mapping to processor feature value.
    enum X86Features {
      CMOV = 0,
      MMX,
      POPCNT,
      SSE,
      SSE2,
      SSE3,
      SSSE3,
      SSE4_1,
      SSE4_2,
      AVX,
      AVX2,
      SSE4_A,
      FMA4,
      XOP,
      FMA,
      AVX512F,
      BMI,
      BMI2,
      AES,
      PCLMUL,
      AVX512VL,
      AVX512BW,
      AVX512DQ,
      AVX512CD,
      AVX512ER,
      AVX512PF,
      AVX512VBMI,
      AVX512IFMA,
      MAX
    };

    X86Features Feature = StringSwitch<X86Features>(FeatureStr)
                              .Case("cmov", X86Features::CMOV)
                              .Case("mmx", X86Features::MMX)
                              .Case("popcnt", X86Features::POPCNT)
                              .Case("sse", X86Features::SSE)
                              .Case("sse2", X86Features::SSE2)
                              .Case("sse3", X86Features::SSE3)
                              .Case("ssse3", X86Features::SSSE3)
                              .Case("sse4.1", X86Features::SSE4_1)
                              .Case("sse4.2", X86Features::SSE4_2)
                              .Case("avx", X86Features::AVX)
                              .Case("avx2", X86Features::AVX2)
                              .Case("sse4a", X86Features::SSE4_A)
                              .Case("fma4", X86Features::FMA4)
                              .Case("xop", X86Features::XOP)
                              .Case("fma", X86Features::FMA)
                              .Case("avx512f", X86Features::AVX512F)
                              .Case("bmi", X86Features::BMI)
                              .Case("bmi2", X86Features::BMI2)
                              .Case("aes", X86Features::AES)
                              .Case("pclmul", X86Features::PCLMUL)
                              .Case("avx512vl", X86Features::AVX512VL)
                              .Case("avx512bw", X86Features::AVX512BW)
                              .Case("avx512dq", X86Features::AVX512DQ)
                              .Case("avx512cd", X86Features::AVX512CD)
                              .Case("avx512er", X86Features::AVX512ER)
                              .Case("avx512pf", X86Features::AVX512PF)
                              .Case("avx512vbmi", X86Features::AVX512VBMI)
                              .Case("avx512ifma", X86Features::AVX512IFMA)
                              .Default(X86Features::MAX);
    assert(Feature != X86Features::MAX && "Invalid feature!");

    // Matching the struct layout from the compiler-rt/libgcc structure that is
    // filled in:
    // unsigned int __cpu_vendor;
    // unsigned int __cpu_type;
    // unsigned int __cpu_subtype;
    // unsigned int __cpu_features[1];
    llvm::Type *STy = llvm::StructType::get(
        Int32Ty, Int32Ty, Int32Ty, llvm::ArrayType::get(Int32Ty, 1), nullptr);

    // Grab the global __cpu_model.
    llvm::Constant *CpuModel = CGM.CreateRuntimeVariable(STy, "__cpu_model");

    // Grab the first (0th) element from the field __cpu_features off of the
    // global in the struct STy.
    Value *Idxs[] = {
      ConstantInt::get(Int32Ty, 0),
      ConstantInt::get(Int32Ty, 3),
      ConstantInt::get(Int32Ty, 0)
    };
    Value *CpuFeatures = Builder.CreateGEP(STy, CpuModel, Idxs);
    Value *Features = Builder.CreateAlignedLoad(CpuFeatures,
                                                CharUnits::fromQuantity(4));

    // Check the value of the bit corresponding to the feature requested.
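    // Worked example: __builtin_cpu_supports("sse4.2") maps to
    // X86Features::SSE4_2 == 8, so the code below tests
    // (__cpu_model.__cpu_features[0] & (1 << 8)) != 0.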
    Value *Bitset = Builder.CreateAnd(
        Features, llvm::ConstantInt::get(Int32Ty, 1ULL << Feature));
    return Builder.CreateICmpNE(Bitset, llvm::ConstantInt::get(Int32Ty, 0));
  }
  case X86::BI_mm_prefetch: {
    Value *Address = Ops[0];
    Value *RW = ConstantInt::get(Int32Ty, 0);
    Value *Locality = Ops[1];
    Value *Data = ConstantInt::get(Int32Ty, 1);
    Value *F = CGM.getIntrinsic(Intrinsic::prefetch);
    return Builder.CreateCall(F, {Address, RW, Locality, Data});
  }
  case X86::BI_mm_clflush: {
    return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::x86_sse2_clflush),
                              Ops[0]);
  }
  case X86::BI_mm_lfence: {
    return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::x86_sse2_lfence));
  }
  case X86::BI_mm_mfence: {
    return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::x86_sse2_mfence));
  }
  case X86::BI_mm_sfence: {
    return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::x86_sse_sfence));
  }
  case X86::BI_mm_pause: {
    return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::x86_sse2_pause));
  }
  case X86::BI__rdtsc: {
    return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::x86_rdtsc));
  }
  case X86::BI__builtin_ia32_undef128:
  case X86::BI__builtin_ia32_undef256:
  case X86::BI__builtin_ia32_undef512:
    return UndefValue::get(ConvertType(E->getType()));
  case X86::BI__builtin_ia32_vec_init_v8qi:
  case X86::BI__builtin_ia32_vec_init_v4hi:
  case X86::BI__builtin_ia32_vec_init_v2si:
    return Builder.CreateBitCast(BuildVector(Ops),
                                 llvm::Type::getX86_MMXTy(getLLVMContext()));
  case X86::BI__builtin_ia32_vec_ext_v2si:
    return Builder.CreateExtractElement(Ops[0],
                                  llvm::ConstantInt::get(Ops[1]->getType(), 0));
  case X86::BI_mm_setcsr:
  case X86::BI__builtin_ia32_ldmxcsr: {
    Address Tmp = CreateMemTemp(E->getArg(0)->getType());
    Builder.CreateStore(Ops[0], Tmp);
    return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::x86_sse_ldmxcsr),
                          Builder.CreateBitCast(Tmp.getPointer(), Int8PtrTy));
  }
  case X86::BI_mm_getcsr:
  case X86::BI__builtin_ia32_stmxcsr: {
    Address Tmp = CreateMemTemp(E->getType());
    Builder.CreateCall(CGM.getIntrinsic(Intrinsic::x86_sse_stmxcsr),
                       Builder.CreateBitCast(Tmp.getPointer(), Int8PtrTy));
    return Builder.CreateLoad(Tmp, "stmxcsr");
  }
  case X86::BI__builtin_ia32_xsave:
  case X86::BI__builtin_ia32_xsave64:
  case X86::BI__builtin_ia32_xrstor:
  case X86::BI__builtin_ia32_xrstor64:
  case X86::BI__builtin_ia32_xsaveopt:
  case X86::BI__builtin_ia32_xsaveopt64:
  case X86::BI__builtin_ia32_xrstors:
  case X86::BI__builtin_ia32_xrstors64:
  case X86::BI__builtin_ia32_xsavec:
  case X86::BI__builtin_ia32_xsavec64:
  case X86::BI__builtin_ia32_xsaves:
  case X86::BI__builtin_ia32_xsaves64: {
    Intrinsic::ID ID;
#define INTRINSIC_X86_XSAVE_ID(NAME) \
    case X86::BI__builtin_ia32_##NAME: \
      ID = Intrinsic::x86_##NAME; \
      break
    switch (BuiltinID) {
    default: llvm_unreachable("Unsupported intrinsic!");
    INTRINSIC_X86_XSAVE_ID(xsave);
    INTRINSIC_X86_XSAVE_ID(xsave64);
    INTRINSIC_X86_XSAVE_ID(xrstor);
    INTRINSIC_X86_XSAVE_ID(xrstor64);
    INTRINSIC_X86_XSAVE_ID(xsaveopt);
    INTRINSIC_X86_XSAVE_ID(xsaveopt64);
    INTRINSIC_X86_XSAVE_ID(xrstors);
    INTRINSIC_X86_XSAVE_ID(xrstors64);
    INTRINSIC_X86_XSAVE_ID(xsavec);
    INTRINSIC_X86_XSAVE_ID(xsavec64);
    INTRINSIC_X86_XSAVE_ID(xsaves);
    INTRINSIC_X86_XSAVE_ID(xsaves64);
    }
#undef INTRINSIC_X86_XSAVE_ID
    Value *Mhi = Builder.CreateTrunc(
      Builder.CreateLShr(Ops[1], ConstantInt::get(Int64Ty, 32)), Int32Ty);
    Value *Mlo = Builder.CreateTrunc(Ops[1], Int32Ty);
    Ops[1] = Mhi;
    Ops.push_back(Mlo);
    return Builder.CreateCall(CGM.getIntrinsic(ID), Ops);
  }
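  // Note on the xsave family above: the hardware takes the 64-bit feature
  // mask in EDX:EAX, so the single i64 mask argument is split into its high
  // and low 32-bit halves (Mhi/Mlo) before the intrinsic call.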
  case X86::BI__builtin_ia32_storedqudi128_mask:
  case X86::BI__builtin_ia32_storedqusi128_mask:
  case X86::BI__builtin_ia32_storedquhi128_mask:
  case X86::BI__builtin_ia32_storedquqi128_mask:
  case X86::BI__builtin_ia32_storeupd128_mask:
  case X86::BI__builtin_ia32_storeups128_mask:
  case X86::BI__builtin_ia32_storedqudi256_mask:
  case X86::BI__builtin_ia32_storedqusi256_mask:
  case X86::BI__builtin_ia32_storedquhi256_mask:
  case X86::BI__builtin_ia32_storedquqi256_mask:
  case X86::BI__builtin_ia32_storeupd256_mask:
  case X86::BI__builtin_ia32_storeups256_mask:
  case X86::BI__builtin_ia32_storedqudi512_mask:
  case X86::BI__builtin_ia32_storedqusi512_mask:
  case X86::BI__builtin_ia32_storedquhi512_mask:
  case X86::BI__builtin_ia32_storedquqi512_mask:
  case X86::BI__builtin_ia32_storeupd512_mask:
  case X86::BI__builtin_ia32_storeups512_mask:
    return EmitX86MaskedStore(*this, Ops, 1);
  case X86::BI__builtin_ia32_storess128_mask:
  case X86::BI__builtin_ia32_storesd128_mask: {
    return EmitX86MaskedStore(*this, Ops, 16);
  }
  case X86::BI__builtin_ia32_movdqa32store128_mask:
  case X86::BI__builtin_ia32_movdqa64store128_mask:
  case X86::BI__builtin_ia32_storeaps128_mask:
  case X86::BI__builtin_ia32_storeapd128_mask:
  case X86::BI__builtin_ia32_movdqa32store256_mask:
  case X86::BI__builtin_ia32_movdqa64store256_mask:
  case X86::BI__builtin_ia32_storeaps256_mask:
  case X86::BI__builtin_ia32_storeapd256_mask:
  case X86::BI__builtin_ia32_movdqa32store512_mask:
  case X86::BI__builtin_ia32_movdqa64store512_mask:
  case X86::BI__builtin_ia32_storeaps512_mask:
  case X86::BI__builtin_ia32_storeapd512_mask: {
    unsigned Align =
      getContext().getTypeAlignInChars(E->getArg(1)->getType()).getQuantity();
    return EmitX86MaskedStore(*this, Ops, Align);
  }
  case X86::BI__builtin_ia32_loadups128_mask:
  case X86::BI__builtin_ia32_loadups256_mask:
  case X86::BI__builtin_ia32_loadups512_mask:
  case X86::BI__builtin_ia32_loadupd128_mask:
  case X86::BI__builtin_ia32_loadupd256_mask:
  case X86::BI__builtin_ia32_loadupd512_mask:
  case X86::BI__builtin_ia32_loaddquqi128_mask:
  case X86::BI__builtin_ia32_loaddquqi256_mask:
  case X86::BI__builtin_ia32_loaddquqi512_mask:
  case X86::BI__builtin_ia32_loaddquhi128_mask:
  case X86::BI__builtin_ia32_loaddquhi256_mask:
  case X86::BI__builtin_ia32_loaddquhi512_mask:
  case X86::BI__builtin_ia32_loaddqusi128_mask:
  case X86::BI__builtin_ia32_loaddqusi256_mask:
  case X86::BI__builtin_ia32_loaddqusi512_mask:
  case X86::BI__builtin_ia32_loaddqudi128_mask:
  case X86::BI__builtin_ia32_loaddqudi256_mask:
  case X86::BI__builtin_ia32_loaddqudi512_mask:
    return EmitX86MaskedLoad(*this, Ops, 1);
  case X86::BI__builtin_ia32_loadss128_mask:
  case X86::BI__builtin_ia32_loadsd128_mask:
    return EmitX86MaskedLoad(*this, Ops, 16);
  case X86::BI__builtin_ia32_loadaps128_mask:
  case X86::BI__builtin_ia32_loadaps256_mask:
  case X86::BI__builtin_ia32_loadaps512_mask:
  case X86::BI__builtin_ia32_loadapd128_mask:
  case X86::BI__builtin_ia32_loadapd256_mask:
  case X86::BI__builtin_ia32_loadapd512_mask:
  case X86::BI__builtin_ia32_movdqa32load128_mask:
  case X86::BI__builtin_ia32_movdqa32load256_mask:
  case X86::BI__builtin_ia32_movdqa32load512_mask:
  case X86::BI__builtin_ia32_movdqa64load128_mask:
  case X86::BI__builtin_ia32_movdqa64load256_mask:
  case X86::BI__builtin_ia32_movdqa64load512_mask: {
    unsigned Align =
      getContext().getTypeAlignInChars(E->getArg(1)->getType()).getQuantity();
    return EmitX86MaskedLoad(*this, Ops, Align);
  }
  case X86::BI__builtin_ia32_vbroadcastf128_pd256:
  case X86::BI__builtin_ia32_vbroadcastf128_ps256: {
    llvm::Type *DstTy = ConvertType(E->getType());
    return EmitX86SubVectorBroadcast(*this, Ops, DstTy, 128, 1);
  }
  case X86::BI__builtin_ia32_storehps:
  case X86::BI__builtin_ia32_storelps: {
    llvm::Type *PtrTy = llvm::PointerType::getUnqual(Int64Ty);
    llvm::Type *VecTy = llvm::VectorType::get(Int64Ty, 2);

    // cast val v2i64
    Ops[1] = Builder.CreateBitCast(Ops[1], VecTy, "cast");

    // extract (0, 1)
    unsigned Index = BuiltinID == X86::BI__builtin_ia32_storelps ? 0 : 1;
    llvm::Value *Idx = llvm::ConstantInt::get(SizeTy, Index);
    Ops[1] = Builder.CreateExtractElement(Ops[1], Idx, "extract");

    // cast pointer to i64 & store
    Ops[0] = Builder.CreateBitCast(Ops[0], PtrTy);
    return Builder.CreateDefaultAlignedStore(Ops[1], Ops[0]);
  }
  case X86::BI__builtin_ia32_palignr128:
  case X86::BI__builtin_ia32_palignr256:
  case X86::BI__builtin_ia32_palignr512_mask: {
    unsigned ShiftVal = cast<llvm::ConstantInt>(Ops[2])->getZExtValue();

    unsigned NumElts = Ops[0]->getType()->getVectorNumElements();
    assert(NumElts % 16 == 0);

    // If palignr is shifting the pair of vectors more than the size of two
    // lanes, emit zero.
    if (ShiftVal >= 32)
      return llvm::Constant::getNullValue(ConvertType(E->getType()));

    // If palignr is shifting the pair of input vectors more than one lane,
    // but less than two lanes, convert to shifting in zeroes.
    if (ShiftVal > 16) {
      ShiftVal -= 16;
      Ops[1] = Ops[0];
      Ops[0] = llvm::Constant::getNullValue(Ops[0]->getType());
    }

    uint32_t Indices[64];
    // 256-bit palignr operates on 128-bit lanes so we need to handle that
    for (unsigned l = 0; l != NumElts; l += 16) {
      for (unsigned i = 0; i != 16; ++i) {
        unsigned Idx = ShiftVal + i;
        if (Idx >= 16)
          Idx += NumElts - 16; // End of lane, switch operand.
        Indices[l + i] = Idx + l;
      }
    }

    Value *Align = Builder.CreateShuffleVector(Ops[1], Ops[0],
                                               makeArrayRef(Indices, NumElts),
                                               "palignr");

    // If this isn't a masked builtin, just return the align operation.
    if (Ops.size() == 3)
      return Align;

    return EmitX86Select(*this, Ops[4], Align, Ops[3]);
  }
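  // palignr example: with 16-byte vectors and ShiftVal == 4, the shuffle
  // picks bytes 4..19 of the 32-byte concatenation (Ops[1] low, Ops[0]
  // high) within each 128-bit lane, matching the hardware byte-alignment
  // semantics.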
  case X86::BI__builtin_ia32_movnti:
  case X86::BI__builtin_ia32_movnti64:
  case X86::BI__builtin_ia32_movntsd:
  case X86::BI__builtin_ia32_movntss: {
    llvm::MDNode *Node = llvm::MDNode::get(
        getLLVMContext(), llvm::ConstantAsMetadata::get(Builder.getInt32(1)));

    Value *Ptr = Ops[0];
    Value *Src = Ops[1];

    // Extract the 0'th element of the source vector.
    if (BuiltinID == X86::BI__builtin_ia32_movntsd ||
        BuiltinID == X86::BI__builtin_ia32_movntss)
      Src = Builder.CreateExtractElement(Src, (uint64_t)0, "extract");

    // Convert the type of the pointer to a pointer to the stored type.
    Value *BC = Builder.CreateBitCast(
        Ptr, llvm::PointerType::getUnqual(Src->getType()), "cast");

    // Unaligned nontemporal store of the scalar value.
    StoreInst *SI = Builder.CreateDefaultAlignedStore(Src, BC);
    SI->setMetadata(CGM.getModule().getMDKindID("nontemporal"), Node);
    SI->setAlignment(1);
    return SI;
  }
  case X86::BI__builtin_ia32_selectb_128:
  case X86::BI__builtin_ia32_selectb_256:
  case X86::BI__builtin_ia32_selectb_512:
  case X86::BI__builtin_ia32_selectw_128:
  case X86::BI__builtin_ia32_selectw_256:
  case X86::BI__builtin_ia32_selectw_512:
  case X86::BI__builtin_ia32_selectd_128:
  case X86::BI__builtin_ia32_selectd_256:
  case X86::BI__builtin_ia32_selectd_512:
  case X86::BI__builtin_ia32_selectq_128:
  case X86::BI__builtin_ia32_selectq_256:
  case X86::BI__builtin_ia32_selectq_512:
  case X86::BI__builtin_ia32_selectps_128:
  case X86::BI__builtin_ia32_selectps_256:
  case X86::BI__builtin_ia32_selectps_512:
  case X86::BI__builtin_ia32_selectpd_128:
  case X86::BI__builtin_ia32_selectpd_256:
  case X86::BI__builtin_ia32_selectpd_512:
    return EmitX86Select(*this, Ops[0], Ops[1], Ops[2]);
  case X86::BI__builtin_ia32_pcmpeqb128_mask:
  case X86::BI__builtin_ia32_pcmpeqb256_mask:
  case X86::BI__builtin_ia32_pcmpeqb512_mask:
  case X86::BI__builtin_ia32_pcmpeqw128_mask:
  case X86::BI__builtin_ia32_pcmpeqw256_mask:
  case X86::BI__builtin_ia32_pcmpeqw512_mask:
  case X86::BI__builtin_ia32_pcmpeqd128_mask:
  case X86::BI__builtin_ia32_pcmpeqd256_mask:
  case X86::BI__builtin_ia32_pcmpeqd512_mask:
  case X86::BI__builtin_ia32_pcmpeqq128_mask:
  case X86::BI__builtin_ia32_pcmpeqq256_mask:
  case X86::BI__builtin_ia32_pcmpeqq512_mask:
    return EmitX86MaskedCompare(*this, 0, false, Ops);
  case X86::BI__builtin_ia32_pcmpgtb128_mask:
  case X86::BI__builtin_ia32_pcmpgtb256_mask:
  case X86::BI__builtin_ia32_pcmpgtb512_mask:
  case X86::BI__builtin_ia32_pcmpgtw128_mask:
  case X86::BI__builtin_ia32_pcmpgtw256_mask:
  case X86::BI__builtin_ia32_pcmpgtw512_mask:
  case X86::BI__builtin_ia32_pcmpgtd128_mask:
  case X86::BI__builtin_ia32_pcmpgtd256_mask:
  case X86::BI__builtin_ia32_pcmpgtd512_mask:
  case X86::BI__builtin_ia32_pcmpgtq128_mask:
  case X86::BI__builtin_ia32_pcmpgtq256_mask:
  case X86::BI__builtin_ia32_pcmpgtq512_mask:
    return EmitX86MaskedCompare(*this, 6, true, Ops);
  case X86::BI__builtin_ia32_cmpb128_mask:
  case X86::BI__builtin_ia32_cmpb256_mask:
  case X86::BI__builtin_ia32_cmpb512_mask:
  case X86::BI__builtin_ia32_cmpw128_mask:
  case X86::BI__builtin_ia32_cmpw256_mask:
  case X86::BI__builtin_ia32_cmpw512_mask:
  case X86::BI__builtin_ia32_cmpd128_mask:
  case X86::BI__builtin_ia32_cmpd256_mask:
  case X86::BI__builtin_ia32_cmpd512_mask:
  case X86::BI__builtin_ia32_cmpq128_mask:
  case X86::BI__builtin_ia32_cmpq256_mask:
  case X86::BI__builtin_ia32_cmpq512_mask: {
    unsigned CC = cast<llvm::ConstantInt>(Ops[2])->getZExtValue() & 0x7;
    return EmitX86MaskedCompare(*this, CC, true, Ops);
  }
  case X86::BI__builtin_ia32_ucmpb128_mask:
  case X86::BI__builtin_ia32_ucmpb256_mask:
  case X86::BI__builtin_ia32_ucmpb512_mask:
  case X86::BI__builtin_ia32_ucmpw128_mask:
  case X86::BI__builtin_ia32_ucmpw256_mask:
  case X86::BI__builtin_ia32_ucmpw512_mask:
  case X86::BI__builtin_ia32_ucmpd128_mask:
  case X86::BI__builtin_ia32_ucmpd256_mask:
  case X86::BI__builtin_ia32_ucmpd512_mask:
  case X86::BI__builtin_ia32_ucmpq128_mask:
  case X86::BI__builtin_ia32_ucmpq256_mask:
  case X86::BI__builtin_ia32_ucmpq512_mask: {
    unsigned CC = cast<llvm::ConstantInt>(Ops[2])->getZExtValue() & 0x7;
    return EmitX86MaskedCompare(*this, CC, false, Ops);
  }
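  // For the cmp/ucmp builtins above, EmitX86MaskedCompare interprets the
  // immediate as 0=eq, 1=lt, 2=le, 3=false, 4=ne, 5=ge, 6=gt, 7=true,
  // choosing signed or unsigned predicates as requested.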
  case X86::BI__builtin_ia32_vplzcntd_128_mask:
  case X86::BI__builtin_ia32_vplzcntd_256_mask:
  case X86::BI__builtin_ia32_vplzcntd_512_mask:
  case X86::BI__builtin_ia32_vplzcntq_128_mask:
  case X86::BI__builtin_ia32_vplzcntq_256_mask:
  case X86::BI__builtin_ia32_vplzcntq_512_mask: {
    Function *F = CGM.getIntrinsic(Intrinsic::ctlz, Ops[0]->getType());
    return EmitX86Select(*this, Ops[2],
                         Builder.CreateCall(F, {Ops[0],Builder.getInt1(false)}),
                         Ops[1]);
  }
  case X86::BI__builtin_ia32_pmaxsb128:
  case X86::BI__builtin_ia32_pmaxsw128:
  case X86::BI__builtin_ia32_pmaxsd128:
  case X86::BI__builtin_ia32_pmaxsq128_mask:
  case X86::BI__builtin_ia32_pmaxsb256:
  case X86::BI__builtin_ia32_pmaxsw256:
  case X86::BI__builtin_ia32_pmaxsd256:
  case X86::BI__builtin_ia32_pmaxsq256_mask:
  case X86::BI__builtin_ia32_pmaxsb512_mask:
  case X86::BI__builtin_ia32_pmaxsw512_mask:
  case X86::BI__builtin_ia32_pmaxsd512_mask:
  case X86::BI__builtin_ia32_pmaxsq512_mask:
    return EmitX86MinMax(*this, ICmpInst::ICMP_SGT, Ops);
  case X86::BI__builtin_ia32_pmaxub128:
  case X86::BI__builtin_ia32_pmaxuw128:
  case X86::BI__builtin_ia32_pmaxud128:
  case X86::BI__builtin_ia32_pmaxuq128_mask:
  case X86::BI__builtin_ia32_pmaxub256:
  case X86::BI__builtin_ia32_pmaxuw256:
  case X86::BI__builtin_ia32_pmaxud256:
  case X86::BI__builtin_ia32_pmaxuq256_mask:
  case X86::BI__builtin_ia32_pmaxub512_mask:
  case X86::BI__builtin_ia32_pmaxuw512_mask:
  case X86::BI__builtin_ia32_pmaxud512_mask:
  case X86::BI__builtin_ia32_pmaxuq512_mask:
    return EmitX86MinMax(*this, ICmpInst::ICMP_UGT, Ops);
  case X86::BI__builtin_ia32_pminsb128:
  case X86::BI__builtin_ia32_pminsw128:
  case X86::BI__builtin_ia32_pminsd128:
  case X86::BI__builtin_ia32_pminsq128_mask:
  case X86::BI__builtin_ia32_pminsb256:
  case X86::BI__builtin_ia32_pminsw256:
  case X86::BI__builtin_ia32_pminsd256:
  case X86::BI__builtin_ia32_pminsq256_mask:
  case X86::BI__builtin_ia32_pminsb512_mask:
  case X86::BI__builtin_ia32_pminsw512_mask:
  case X86::BI__builtin_ia32_pminsd512_mask:
  case X86::BI__builtin_ia32_pminsq512_mask:
    return EmitX86MinMax(*this, ICmpInst::ICMP_SLT, Ops);
  case X86::BI__builtin_ia32_pminub128:
  case X86::BI__builtin_ia32_pminuw128:
  case X86::BI__builtin_ia32_pminud128:
  case X86::BI__builtin_ia32_pminuq128_mask:
  case X86::BI__builtin_ia32_pminub256:
  case X86::BI__builtin_ia32_pminuw256:
  case X86::BI__builtin_ia32_pminud256:
  case X86::BI__builtin_ia32_pminuq256_mask:
  case X86::BI__builtin_ia32_pminub512_mask:
  case X86::BI__builtin_ia32_pminuw512_mask:
  case X86::BI__builtin_ia32_pminud512_mask:
  case X86::BI__builtin_ia32_pminuq512_mask:
    return EmitX86MinMax(*this, ICmpInst::ICMP_ULT, Ops);

  // 3DNow!
  case X86::BI__builtin_ia32_pswapdsf:
  case X86::BI__builtin_ia32_pswapdsi: {
    llvm::Type *MMXTy = llvm::Type::getX86_MMXTy(getLLVMContext());
    Ops[0] = Builder.CreateBitCast(Ops[0], MMXTy, "cast");
    llvm::Function *F = CGM.getIntrinsic(Intrinsic::x86_3dnowa_pswapd);
    return Builder.CreateCall(F, Ops, "pswapd");
  }
  case X86::BI__builtin_ia32_rdrand16_step:
  case X86::BI__builtin_ia32_rdrand32_step:
  case X86::BI__builtin_ia32_rdrand64_step:
  case X86::BI__builtin_ia32_rdseed16_step:
  case X86::BI__builtin_ia32_rdseed32_step:
  case X86::BI__builtin_ia32_rdseed64_step: {
    Intrinsic::ID ID;
    switch (BuiltinID) {
    default: llvm_unreachable("Unsupported intrinsic!");
    case X86::BI__builtin_ia32_rdrand16_step:
      ID = Intrinsic::x86_rdrand_16;
      break;
    case X86::BI__builtin_ia32_rdrand32_step:
      ID = Intrinsic::x86_rdrand_32;
      break;
    case X86::BI__builtin_ia32_rdrand64_step:
      ID = Intrinsic::x86_rdrand_64;
      break;
    case X86::BI__builtin_ia32_rdseed16_step:
      ID = Intrinsic::x86_rdseed_16;
      break;
    case X86::BI__builtin_ia32_rdseed32_step:
      ID = Intrinsic::x86_rdseed_32;
      break;
    case X86::BI__builtin_ia32_rdseed64_step:
      ID = Intrinsic::x86_rdseed_64;
      break;
    }

    Value *Call = Builder.CreateCall(CGM.getIntrinsic(ID));
    Builder.CreateDefaultAlignedStore(Builder.CreateExtractValue(Call, 0),
                                      Ops[0]);
    return Builder.CreateExtractValue(Call, 1);
  }

  // SSE packed comparison intrinsics
  case X86::BI__builtin_ia32_cmpeqps:
  case X86::BI__builtin_ia32_cmpeqpd:
    return getVectorFCmpIR(CmpInst::FCMP_OEQ);
  case X86::BI__builtin_ia32_cmpltps:
  case X86::BI__builtin_ia32_cmpltpd:
    return getVectorFCmpIR(CmpInst::FCMP_OLT);
  case X86::BI__builtin_ia32_cmpleps:
  case X86::BI__builtin_ia32_cmplepd:
    return getVectorFCmpIR(CmpInst::FCMP_OLE);
  case X86::BI__builtin_ia32_cmpunordps:
  case X86::BI__builtin_ia32_cmpunordpd:
    return getVectorFCmpIR(CmpInst::FCMP_UNO);
  case X86::BI__builtin_ia32_cmpneqps:
  case X86::BI__builtin_ia32_cmpneqpd:
    return getVectorFCmpIR(CmpInst::FCMP_UNE);
  case X86::BI__builtin_ia32_cmpnltps:
  case X86::BI__builtin_ia32_cmpnltpd:
    return getVectorFCmpIR(CmpInst::FCMP_UGE);
  case X86::BI__builtin_ia32_cmpnleps:
  case X86::BI__builtin_ia32_cmpnlepd:
    return getVectorFCmpIR(CmpInst::FCMP_UGT);
  case X86::BI__builtin_ia32_cmpordps:
  case X86::BI__builtin_ia32_cmpordpd:
    return getVectorFCmpIR(CmpInst::FCMP_ORD);
  case X86::BI__builtin_ia32_cmpps:
  case X86::BI__builtin_ia32_cmpps256:
  case X86::BI__builtin_ia32_cmppd:
  case X86::BI__builtin_ia32_cmppd256: {
    unsigned CC = cast<llvm::ConstantInt>(Ops[2])->getZExtValue();
    // If this is one of the SSE immediates, we can use native IR.
    if (CC < 8) {
      FCmpInst::Predicate Pred;
      switch (CC) {
      case 0: Pred = FCmpInst::FCMP_OEQ; break;
      case 1: Pred = FCmpInst::FCMP_OLT; break;
      case 2: Pred = FCmpInst::FCMP_OLE; break;
      case 3: Pred = FCmpInst::FCMP_UNO; break;
      case 4: Pred = FCmpInst::FCMP_UNE; break;
      case 5: Pred = FCmpInst::FCMP_UGE; break;
      case 6: Pred = FCmpInst::FCMP_UGT; break;
      case 7: Pred = FCmpInst::FCMP_ORD; break;
      }
      return getVectorFCmpIR(Pred);
    }
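    // The unordered predicates above are deliberate: cmpnlt must be the
    // exact inverse of cmplt (FCMP_OLT), and with NaNs taken into account
    // !(a < b) is FCMP_UGE, not FCMP_OGE.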
    Intrinsic::ID ID;
    switch (BuiltinID) {
    default: llvm_unreachable("Unsupported intrinsic!");
    case X86::BI__builtin_ia32_cmpps:    ID = Intrinsic::x86_sse_cmp_ps; break;
    case X86::BI__builtin_ia32_cmpps256: ID = Intrinsic::x86_avx_cmp_ps_256; break;
    case X86::BI__builtin_ia32_cmppd:    ID = Intrinsic::x86_sse2_cmp_pd; break;
    case X86::BI__builtin_ia32_cmppd256: ID = Intrinsic::x86_avx_cmp_pd_256; break;
    }

    return Builder.CreateCall(CGM.getIntrinsic(ID), Ops);
  }

  // SSE scalar comparison intrinsics
  case X86::BI__builtin_ia32_cmpeqss:
    return getCmpIntrinsicCall(Intrinsic::x86_sse_cmp_ss, 0);
  case X86::BI__builtin_ia32_cmpltss:
    return getCmpIntrinsicCall(Intrinsic::x86_sse_cmp_ss, 1);
  case X86::BI__builtin_ia32_cmpless:
    return getCmpIntrinsicCall(Intrinsic::x86_sse_cmp_ss, 2);
  case X86::BI__builtin_ia32_cmpunordss:
    return getCmpIntrinsicCall(Intrinsic::x86_sse_cmp_ss, 3);
  case X86::BI__builtin_ia32_cmpneqss:
    return getCmpIntrinsicCall(Intrinsic::x86_sse_cmp_ss, 4);
  case X86::BI__builtin_ia32_cmpnltss:
    return getCmpIntrinsicCall(Intrinsic::x86_sse_cmp_ss, 5);
  case X86::BI__builtin_ia32_cmpnless:
    return getCmpIntrinsicCall(Intrinsic::x86_sse_cmp_ss, 6);
  case X86::BI__builtin_ia32_cmpordss:
    return getCmpIntrinsicCall(Intrinsic::x86_sse_cmp_ss, 7);
  case X86::BI__builtin_ia32_cmpeqsd:
    return getCmpIntrinsicCall(Intrinsic::x86_sse2_cmp_sd, 0);
  case X86::BI__builtin_ia32_cmpltsd:
    return getCmpIntrinsicCall(Intrinsic::x86_sse2_cmp_sd, 1);
  case X86::BI__builtin_ia32_cmplesd:
    return getCmpIntrinsicCall(Intrinsic::x86_sse2_cmp_sd, 2);
  case X86::BI__builtin_ia32_cmpunordsd:
    return getCmpIntrinsicCall(Intrinsic::x86_sse2_cmp_sd, 3);
  case X86::BI__builtin_ia32_cmpneqsd:
    return getCmpIntrinsicCall(Intrinsic::x86_sse2_cmp_sd, 4);
  case X86::BI__builtin_ia32_cmpnltsd:
    return getCmpIntrinsicCall(Intrinsic::x86_sse2_cmp_sd, 5);
  case X86::BI__builtin_ia32_cmpnlesd:
    return getCmpIntrinsicCall(Intrinsic::x86_sse2_cmp_sd, 6);
  case X86::BI__builtin_ia32_cmpordsd:
    return getCmpIntrinsicCall(Intrinsic::x86_sse2_cmp_sd, 7);

  case X86::BI__emul:
  case X86::BI__emulu: {
    llvm::Type *Int64Ty = llvm::IntegerType::get(getLLVMContext(), 64);
    bool isSigned = (BuiltinID == X86::BI__emul);
    Value *LHS = Builder.CreateIntCast(Ops[0], Int64Ty, isSigned);
    Value *RHS = Builder.CreateIntCast(Ops[1], Int64Ty, isSigned);
    return Builder.CreateMul(LHS, RHS, "", !isSigned, isSigned);
  }
  case X86::BI__mulh:
  case X86::BI__umulh:
  case X86::BI_mul128:
  case X86::BI_umul128: {
    llvm::Type *ResType = ConvertType(E->getType());
    llvm::Type *Int128Ty = llvm::IntegerType::get(getLLVMContext(), 128);

    bool IsSigned = (BuiltinID == X86::BI__mulh || BuiltinID == X86::BI_mul128);
    Value *LHS = Builder.CreateIntCast(Ops[0], Int128Ty, IsSigned);
    Value *RHS = Builder.CreateIntCast(Ops[1], Int128Ty, IsSigned);

    Value *MulResult, *HigherBits;
    if (IsSigned) {
      MulResult = Builder.CreateNSWMul(LHS, RHS);
      HigherBits = Builder.CreateAShr(MulResult, 64);
    } else {
      MulResult = Builder.CreateNUWMul(LHS, RHS);
      HigherBits = Builder.CreateLShr(MulResult, 64);
    }
    HigherBits = Builder.CreateIntCast(HigherBits, ResType, IsSigned);

    if (BuiltinID == X86::BI__mulh || BuiltinID == X86::BI__umulh)
      return HigherBits;

    Address HighBitsAddress = EmitPointerWithAlignment(E->getArg(2));
    Builder.CreateStore(HigherBits, HighBitsAddress);
    return Builder.CreateIntCast(MulResult, ResType, IsSigned);
  }
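  // Illustrative note (not from the upstream sources): at the source level,
  // the Microsoft multiply builtins handled above behave roughly like:
  //
  //   __int64 p  = __emul(a, b);        // sign-extend 32-bit a, b; 64-bit product
  //   __int64 hi = __mulh(x, y);        // high 64 bits of the 128-bit product
  //   __int64 lo = _mul128(x, y, &hi);  // low 64 bits; high bits stored via ptr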
  case X86::BI__faststorefence: {
    return Builder.CreateFence(llvm::AtomicOrdering::SequentiallyConsistent,
                               llvm::CrossThread);
  }
  case X86::BI_ReadWriteBarrier:
  case X86::BI_ReadBarrier:
  case X86::BI_WriteBarrier: {
    return Builder.CreateFence(llvm::AtomicOrdering::SequentiallyConsistent,
                               llvm::SingleThread);
  }
  case X86::BI_BitScanForward:
  case X86::BI_BitScanForward64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_BitScanForward, E);
  case X86::BI_BitScanReverse:
  case X86::BI_BitScanReverse64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_BitScanReverse, E);

  case X86::BI_InterlockedAnd64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedAnd, E);
  case X86::BI_InterlockedExchange64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedExchange, E);
  case X86::BI_InterlockedExchangeAdd64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedExchangeAdd, E);
  case X86::BI_InterlockedExchangeSub64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedExchangeSub, E);
  case X86::BI_InterlockedOr64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedOr, E);
  case X86::BI_InterlockedXor64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedXor, E);
  case X86::BI_InterlockedDecrement64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedDecrement, E);
  case X86::BI_InterlockedIncrement64:
    return EmitMSVCBuiltinExpr(MSVCIntrin::_InterlockedIncrement, E);

  case X86::BI_AddressOfReturnAddress: {
    Value *F = CGM.getIntrinsic(Intrinsic::addressofreturnaddress);
    return Builder.CreateCall(F);
  }
  case X86::BI__stosb: {
    // We treat __stosb as a volatile memset - it may not generate a "rep stosb"
    // instruction, but it will create a memset that won't be optimized away.
    return Builder.CreateMemSet(Ops[0], Ops[1], Ops[2], 1, true);
  }
  }
}
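// Illustrative note (not from the upstream sources): the __stosb mapping
// above means that a call such as
//
//   __stosb(dst, 0x2a, n);
//
// is emitted as a volatile llvm.memset of n bytes, which optimizers must not
// remove, although it may or may not be selected as "rep stosb".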
Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned BuiltinID,
                                           const CallExpr *E) {
  SmallVector<Value*, 4> Ops;

  for (unsigned i = 0, e = E->getNumArgs(); i != e; i++)
    Ops.push_back(EmitScalarExpr(E->getArg(i)));

  Intrinsic::ID ID = Intrinsic::not_intrinsic;

  switch (BuiltinID) {
  default: return nullptr;

  // __builtin_ppc_get_timebase is GCC 4.8+'s PowerPC-specific name for what we
  // call __builtin_readcyclecounter.
  case PPC::BI__builtin_ppc_get_timebase:
    return Builder.CreateCall(CGM.getIntrinsic(Intrinsic::readcyclecounter));

  // vec_ld, vec_xl_be, vec_lvsl, vec_lvsr
  case PPC::BI__builtin_altivec_lvx:
  case PPC::BI__builtin_altivec_lvxl:
  case PPC::BI__builtin_altivec_lvebx:
  case PPC::BI__builtin_altivec_lvehx:
  case PPC::BI__builtin_altivec_lvewx:
  case PPC::BI__builtin_altivec_lvsl:
  case PPC::BI__builtin_altivec_lvsr:
  case PPC::BI__builtin_vsx_lxvd2x:
  case PPC::BI__builtin_vsx_lxvw4x:
  case PPC::BI__builtin_vsx_lxvd2x_be:
  case PPC::BI__builtin_vsx_lxvw4x_be:
  case PPC::BI__builtin_vsx_lxvl:
  case PPC::BI__builtin_vsx_lxvll:
  {
    if (BuiltinID == PPC::BI__builtin_vsx_lxvl ||
        BuiltinID == PPC::BI__builtin_vsx_lxvll) {
      Ops[0] = Builder.CreateBitCast(Ops[0], Int8PtrTy);
    } else {
      Ops[1] = Builder.CreateBitCast(Ops[1], Int8PtrTy);
      Ops[0] = Builder.CreateGEP(Ops[1], Ops[0]);
      Ops.pop_back();
    }

    switch (BuiltinID) {
    default: llvm_unreachable("Unsupported ld/lvsl/lvsr intrinsic!");
    case PPC::BI__builtin_altivec_lvx:    ID = Intrinsic::ppc_altivec_lvx; break;
    case PPC::BI__builtin_altivec_lvxl:   ID = Intrinsic::ppc_altivec_lvxl; break;
    case PPC::BI__builtin_altivec_lvebx:  ID = Intrinsic::ppc_altivec_lvebx; break;
    case PPC::BI__builtin_altivec_lvehx:  ID = Intrinsic::ppc_altivec_lvehx; break;
    case PPC::BI__builtin_altivec_lvewx:  ID = Intrinsic::ppc_altivec_lvewx; break;
    case PPC::BI__builtin_altivec_lvsl:   ID = Intrinsic::ppc_altivec_lvsl; break;
    case PPC::BI__builtin_altivec_lvsr:   ID = Intrinsic::ppc_altivec_lvsr; break;
    case PPC::BI__builtin_vsx_lxvd2x:     ID = Intrinsic::ppc_vsx_lxvd2x; break;
    case PPC::BI__builtin_vsx_lxvw4x:     ID = Intrinsic::ppc_vsx_lxvw4x; break;
    case PPC::BI__builtin_vsx_lxvd2x_be:  ID = Intrinsic::ppc_vsx_lxvd2x_be; break;
    case PPC::BI__builtin_vsx_lxvw4x_be:  ID = Intrinsic::ppc_vsx_lxvw4x_be; break;
    case PPC::BI__builtin_vsx_lxvl:       ID = Intrinsic::ppc_vsx_lxvl; break;
    case PPC::BI__builtin_vsx_lxvll:      ID = Intrinsic::ppc_vsx_lxvll; break;
    }
    llvm::Function *F = CGM.getIntrinsic(ID);
    return Builder.CreateCall(F, Ops, "");
  }

  // vec_st, vec_xst_be
  case PPC::BI__builtin_altivec_stvx:
  case PPC::BI__builtin_altivec_stvxl:
  case PPC::BI__builtin_altivec_stvebx:
  case PPC::BI__builtin_altivec_stvehx:
  case PPC::BI__builtin_altivec_stvewx:
  case PPC::BI__builtin_vsx_stxvd2x:
  case PPC::BI__builtin_vsx_stxvw4x:
  case PPC::BI__builtin_vsx_stxvd2x_be:
  case PPC::BI__builtin_vsx_stxvw4x_be:
  case PPC::BI__builtin_vsx_stxvl:
  case PPC::BI__builtin_vsx_stxvll:
  {
    if (BuiltinID == PPC::BI__builtin_vsx_stxvl ||
        BuiltinID == PPC::BI__builtin_vsx_stxvll) {
      Ops[1] = Builder.CreateBitCast(Ops[1], Int8PtrTy);
    } else {
      Ops[2] = Builder.CreateBitCast(Ops[2], Int8PtrTy);
      Ops[1] = Builder.CreateGEP(Ops[2], Ops[1]);
      Ops.pop_back();
    }

    switch (BuiltinID) {
    default: llvm_unreachable("Unsupported st intrinsic!");
    case PPC::BI__builtin_altivec_stvx:   ID = Intrinsic::ppc_altivec_stvx; break;
    case PPC::BI__builtin_altivec_stvxl:  ID = Intrinsic::ppc_altivec_stvxl; break;
    case PPC::BI__builtin_altivec_stvebx: ID = Intrinsic::ppc_altivec_stvebx; break;
    case PPC::BI__builtin_altivec_stvehx: ID = Intrinsic::ppc_altivec_stvehx; break;
    case PPC::BI__builtin_altivec_stvewx: ID = Intrinsic::ppc_altivec_stvewx; break;
    case PPC::BI__builtin_vsx_stxvd2x:    ID = Intrinsic::ppc_vsx_stxvd2x; break;
    case PPC::BI__builtin_vsx_stxvw4x:    ID = Intrinsic::ppc_vsx_stxvw4x; break;
    case PPC::BI__builtin_vsx_stxvd2x_be: ID = Intrinsic::ppc_vsx_stxvd2x_be; break;
    case PPC::BI__builtin_vsx_stxvw4x_be: ID = Intrinsic::ppc_vsx_stxvw4x_be; break;
    case PPC::BI__builtin_vsx_stxvl:      ID = Intrinsic::ppc_vsx_stxvl; break;
    case PPC::BI__builtin_vsx_stxvll:     ID = Intrinsic::ppc_vsx_stxvll; break;
    }
    llvm::Function *F = CGM.getIntrinsic(ID);
    return Builder.CreateCall(F, Ops, "");
  }
  // Square root
  case PPC::BI__builtin_vsx_xvsqrtsp:
  case PPC::BI__builtin_vsx_xvsqrtdp: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    ID = Intrinsic::sqrt;
    llvm::Function *F = CGM.getIntrinsic(ID, ResultType);
    return Builder.CreateCall(F, X);
  }
  // Count leading zeros
  case PPC::BI__builtin_altivec_vclzb:
  case PPC::BI__builtin_altivec_vclzh:
  case PPC::BI__builtin_altivec_vclzw:
  case PPC::BI__builtin_altivec_vclzd: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Undef = ConstantInt::get(Builder.getInt1Ty(), false);
    Function *F = CGM.getIntrinsic(Intrinsic::ctlz, ResultType);
    return Builder.CreateCall(F, {X, Undef});
  }
  case PPC::BI__builtin_altivec_vctzb:
  case PPC::BI__builtin_altivec_vctzh:
  case PPC::BI__builtin_altivec_vctzw:
  case PPC::BI__builtin_altivec_vctzd: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Undef = ConstantInt::get(Builder.getInt1Ty(), false);
    Function *F = CGM.getIntrinsic(Intrinsic::cttz, ResultType);
    return Builder.CreateCall(F, {X, Undef});
  }
  case PPC::BI__builtin_altivec_vpopcntb:
  case PPC::BI__builtin_altivec_vpopcnth:
  case PPC::BI__builtin_altivec_vpopcntw:
  case PPC::BI__builtin_altivec_vpopcntd: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    llvm::Function *F = CGM.getIntrinsic(Intrinsic::ctpop, ResultType);
    return Builder.CreateCall(F, X);
  }
  // Copy sign
  case PPC::BI__builtin_vsx_xvcpsgnsp:
  case PPC::BI__builtin_vsx_xvcpsgndp: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Y = EmitScalarExpr(E->getArg(1));
    ID = Intrinsic::copysign;
    llvm::Function *F = CGM.getIntrinsic(ID, ResultType);
    return Builder.CreateCall(F, {X, Y});
  }
  // Rounding/truncation
  case PPC::BI__builtin_vsx_xvrspip:
  case PPC::BI__builtin_vsx_xvrdpip:
  case PPC::BI__builtin_vsx_xvrdpim:
  case PPC::BI__builtin_vsx_xvrspim:
  case PPC::BI__builtin_vsx_xvrdpi:
  case PPC::BI__builtin_vsx_xvrspi:
  case PPC::BI__builtin_vsx_xvrdpic:
  case PPC::BI__builtin_vsx_xvrspic:
  case PPC::BI__builtin_vsx_xvrdpiz:
  case PPC::BI__builtin_vsx_xvrspiz: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    if (BuiltinID == PPC::BI__builtin_vsx_xvrdpim ||
        BuiltinID == PPC::BI__builtin_vsx_xvrspim)
      ID = Intrinsic::floor;
    else if (BuiltinID == PPC::BI__builtin_vsx_xvrdpi ||
             BuiltinID == PPC::BI__builtin_vsx_xvrspi)
      ID = Intrinsic::round;
    else if (BuiltinID == PPC::BI__builtin_vsx_xvrdpic ||
             BuiltinID == PPC::BI__builtin_vsx_xvrspic)
      ID = Intrinsic::nearbyint;
    else if (BuiltinID == PPC::BI__builtin_vsx_xvrdpip ||
             BuiltinID == PPC::BI__builtin_vsx_xvrspip)
      ID = Intrinsic::ceil;
    else if (BuiltinID == PPC::BI__builtin_vsx_xvrdpiz ||
             BuiltinID == PPC::BI__builtin_vsx_xvrspiz)
      ID = Intrinsic::trunc;
    llvm::Function *F = CGM.getIntrinsic(ID, ResultType);
    return Builder.CreateCall(F, X);
  }

  // Absolute value
  case PPC::BI__builtin_vsx_xvabsdp:
  case PPC::BI__builtin_vsx_xvabssp: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    llvm::Function *F = CGM.getIntrinsic(Intrinsic::fabs, ResultType);
    return Builder.CreateCall(F, X);
  }
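  // Illustrative note (not from the upstream sources): the rounding mapping
  // above means, e.g., __builtin_vsx_xvrdpim(v) is emitted as
  // llvm.floor.v2f64(v) and __builtin_vsx_xvrspiz(v) as llvm.trunc.v4f32(v).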
  // FMA variations
  case PPC::BI__builtin_vsx_xvmaddadp:
  case PPC::BI__builtin_vsx_xvmaddasp:
  case PPC::BI__builtin_vsx_xvnmaddadp:
  case PPC::BI__builtin_vsx_xvnmaddasp:
  case PPC::BI__builtin_vsx_xvmsubadp:
  case PPC::BI__builtin_vsx_xvmsubasp:
  case PPC::BI__builtin_vsx_xvnmsubadp:
  case PPC::BI__builtin_vsx_xvnmsubasp: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Y = EmitScalarExpr(E->getArg(1));
    Value *Z = EmitScalarExpr(E->getArg(2));
    Value *Zero = llvm::ConstantFP::getZeroValueForNegation(ResultType);
    llvm::Function *F = CGM.getIntrinsic(Intrinsic::fma, ResultType);
    switch (BuiltinID) {
      case PPC::BI__builtin_vsx_xvmaddadp:
      case PPC::BI__builtin_vsx_xvmaddasp:
        return Builder.CreateCall(F, {X, Y, Z});
      case PPC::BI__builtin_vsx_xvnmaddadp:
      case PPC::BI__builtin_vsx_xvnmaddasp:
        return Builder.CreateFSub(Zero,
                                  Builder.CreateCall(F, {X, Y, Z}), "sub");
      case PPC::BI__builtin_vsx_xvmsubadp:
      case PPC::BI__builtin_vsx_xvmsubasp:
        return Builder.CreateCall(F,
                                  {X, Y, Builder.CreateFSub(Zero, Z, "sub")});
      case PPC::BI__builtin_vsx_xvnmsubadp:
      case PPC::BI__builtin_vsx_xvnmsubasp:
        Value *FsubRes =
          Builder.CreateCall(F, {X, Y, Builder.CreateFSub(Zero, Z, "sub")});
        return Builder.CreateFSub(Zero, FsubRes, "sub");
    }
    llvm_unreachable("Unknown FMA operation");
    return nullptr; // Suppress no-return warning
  }

  case PPC::BI__builtin_vsx_insertword: {
    llvm::Function *F = CGM.getIntrinsic(Intrinsic::ppc_vsx_xxinsertw);

    // Third argument is a compile time constant int. It must be clamped to
    // the range [0, 12].
    ConstantInt *ArgCI = dyn_cast<ConstantInt>(Ops[2]);
    assert(ArgCI &&
           "Third arg to xxinsertw intrinsic must be constant integer");
    const int64_t MaxIndex = 12;
    int64_t Index = clamp(ArgCI->getSExtValue(), 0, MaxIndex);

    // The builtin semantics don't exactly match the xxinsertw instruction's
    // semantics (which ppc_vsx_xxinsertw follows). The builtin extracts the
    // word from the first argument, and inserts it in the second argument. The
    // instruction extracts the word from its second input register and inserts
    // it into its first input register, so swap the first and second arguments.
    std::swap(Ops[0], Ops[1]);

    // Need to cast the second argument from a vector of unsigned int to a
    // vector of long long.
    Ops[1] = Builder.CreateBitCast(Ops[1], llvm::VectorType::get(Int64Ty, 2));

    if (getTarget().isLittleEndian()) {
      // Create a shuffle mask of (1, 0)
      Constant *ShuffleElts[2] = { ConstantInt::get(Int32Ty, 1),
                                   ConstantInt::get(Int32Ty, 0)
                                 };
      Constant *ShuffleMask = llvm::ConstantVector::get(ShuffleElts);

      // Reverse the double words in the vector we will extract from.
      Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int64Ty, 2));
      Ops[0] = Builder.CreateShuffleVector(Ops[0], Ops[0], ShuffleMask);

      // Reverse the index.
      Index = MaxIndex - Index;
    }

    // Intrinsic expects the first arg to be a vector of int.
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int32Ty, 4));
    Ops[2] = ConstantInt::getSigned(Int32Ty, Index);
    return Builder.CreateCall(F, Ops);
  }
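  // Illustrative note (not from the upstream sources): with MaxIndex == 12,
  // a little-endian call to __builtin_vsx_insertword with index 4 ends up
  // passing 12 - 4 == 8 to xxinsertw, compensating for the doubleword swap
  // performed on the source vector.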
  case PPC::BI__builtin_vsx_extractuword: {
    llvm::Function *F = CGM.getIntrinsic(Intrinsic::ppc_vsx_xxextractuw);

    // Intrinsic expects the first argument to be a vector of doublewords.
    Ops[0] = Builder.CreateBitCast(Ops[0], llvm::VectorType::get(Int64Ty, 2));

    // The second argument is a compile time constant int that needs to
    // be clamped to the range [0, 12].
    ConstantInt *ArgCI = dyn_cast<ConstantInt>(Ops[1]);
    assert(ArgCI &&
           "Second Arg to xxextractuw intrinsic must be a constant integer!");
    const int64_t MaxIndex = 12;
    int64_t Index = clamp(ArgCI->getSExtValue(), 0, MaxIndex);

    if (getTarget().isLittleEndian()) {
      // Reverse the index.
      Index = MaxIndex - Index;
      Ops[1] = ConstantInt::getSigned(Int32Ty, Index);

      // Emit the call, then reverse the double words of the results vector.
      Value *Call = Builder.CreateCall(F, Ops);

      // Create a shuffle mask of (1, 0)
      Constant *ShuffleElts[2] = { ConstantInt::get(Int32Ty, 1),
                                   ConstantInt::get(Int32Ty, 0)
                                 };
      Constant *ShuffleMask = llvm::ConstantVector::get(ShuffleElts);

      Value *ShuffleCall = Builder.CreateShuffleVector(Call, Call, ShuffleMask);
      return ShuffleCall;
    } else {
      Ops[1] = ConstantInt::getSigned(Int32Ty, Index);
      return Builder.CreateCall(F, Ops);
    }
  }
  }
}

Value *CodeGenFunction::EmitAMDGPUBuiltinExpr(unsigned BuiltinID,
                                              const CallExpr *E) {
  switch (BuiltinID) {
  case AMDGPU::BI__builtin_amdgcn_div_scale:
  case AMDGPU::BI__builtin_amdgcn_div_scalef: {
    // Translate from the intrinsic's struct return to the builtin's out
    // argument.

    Address FlagOutPtr = EmitPointerWithAlignment(E->getArg(3));

    llvm::Value *X = EmitScalarExpr(E->getArg(0));
    llvm::Value *Y = EmitScalarExpr(E->getArg(1));
    llvm::Value *Z = EmitScalarExpr(E->getArg(2));

    llvm::Value *Callee = CGM.getIntrinsic(Intrinsic::amdgcn_div_scale,
                                           X->getType());

    llvm::Value *Tmp = Builder.CreateCall(Callee, {X, Y, Z});

    llvm::Value *Result = Builder.CreateExtractValue(Tmp, 0);
    llvm::Value *Flag = Builder.CreateExtractValue(Tmp, 1);

    llvm::Type *RealFlagType
      = FlagOutPtr.getPointer()->getType()->getPointerElementType();

    llvm::Value *FlagExt = Builder.CreateZExt(Flag, RealFlagType);
    Builder.CreateStore(FlagExt, FlagOutPtr);
    return Result;
  }
  case AMDGPU::BI__builtin_amdgcn_div_fmas:
  case AMDGPU::BI__builtin_amdgcn_div_fmasf: {
    llvm::Value *Src0 = EmitScalarExpr(E->getArg(0));
    llvm::Value *Src1 = EmitScalarExpr(E->getArg(1));
    llvm::Value *Src2 = EmitScalarExpr(E->getArg(2));
    llvm::Value *Src3 = EmitScalarExpr(E->getArg(3));

    llvm::Value *F = CGM.getIntrinsic(Intrinsic::amdgcn_div_fmas,
                                      Src0->getType());
    llvm::Value *Src3ToBool = Builder.CreateIsNotNull(Src3);
    return Builder.CreateCall(F, {Src0, Src1, Src2, Src3ToBool});
  }
  case AMDGPU::BI__builtin_amdgcn_ds_swizzle:
    return emitBinaryBuiltin(*this, E, Intrinsic::amdgcn_ds_swizzle);
  case AMDGPU::BI__builtin_amdgcn_div_fixup:
  case AMDGPU::BI__builtin_amdgcn_div_fixupf:
  case AMDGPU::BI__builtin_amdgcn_div_fixuph:
    return emitTernaryBuiltin(*this, E, Intrinsic::amdgcn_div_fixup);
  case AMDGPU::BI__builtin_amdgcn_trig_preop:
  case AMDGPU::BI__builtin_amdgcn_trig_preopf:
    return emitFPIntBuiltin(*this, E, Intrinsic::amdgcn_trig_preop);
  case AMDGPU::BI__builtin_amdgcn_rcp:
  case AMDGPU::BI__builtin_amdgcn_rcpf:
  case AMDGPU::BI__builtin_amdgcn_rcph:
    return emitUnaryBuiltin(*this, E, Intrinsic::amdgcn_rcp);
  case AMDGPU::BI__builtin_amdgcn_rsq:
  case AMDGPU::BI__builtin_amdgcn_rsqf:
  case AMDGPU::BI__builtin_amdgcn_rsqh:
    return emitUnaryBuiltin(*this, E, Intrinsic::amdgcn_rsq);
  case AMDGPU::BI__builtin_amdgcn_rsq_clamp:
  case AMDGPU::BI__builtin_amdgcn_rsq_clampf:
    return emitUnaryBuiltin(*this, E, Intrinsic::amdgcn_rsq_clamp);
  case AMDGPU::BI__builtin_amdgcn_sinf:
  case AMDGPU::BI__builtin_amdgcn_sinh:
    return emitUnaryBuiltin(*this, E, Intrinsic::amdgcn_sin);
  case AMDGPU::BI__builtin_amdgcn_cosf:
  case AMDGPU::BI__builtin_amdgcn_cosh:
    return emitUnaryBuiltin(*this, E, Intrinsic::amdgcn_cos);
  case AMDGPU::BI__builtin_amdgcn_log_clampf:
    return emitUnaryBuiltin(*this, E, Intrinsic::amdgcn_log_clamp);
  case AMDGPU::BI__builtin_amdgcn_ldexp:
  case AMDGPU::BI__builtin_amdgcn_ldexpf:
  case AMDGPU::BI__builtin_amdgcn_ldexph:
    return emitFPIntBuiltin(*this, E, Intrinsic::amdgcn_ldexp);
  case AMDGPU::BI__builtin_amdgcn_frexp_mant:
  case AMDGPU::BI__builtin_amdgcn_frexp_mantf:
  case AMDGPU::BI__builtin_amdgcn_frexp_manth:
    return emitUnaryBuiltin(*this, E, Intrinsic::amdgcn_frexp_mant);
  case AMDGPU::BI__builtin_amdgcn_frexp_exp:
  case AMDGPU::BI__builtin_amdgcn_frexp_expf: {
    Value *Src0 = EmitScalarExpr(E->getArg(0));
    Value *F = CGM.getIntrinsic(Intrinsic::amdgcn_frexp_exp,
                                { Builder.getInt32Ty(), Src0->getType() });
    return Builder.CreateCall(F, Src0);
  }
  case AMDGPU::BI__builtin_amdgcn_frexp_exph: {
    Value *Src0 = EmitScalarExpr(E->getArg(0));
    Value *F = CGM.getIntrinsic(Intrinsic::amdgcn_frexp_exp,
                                { Builder.getInt16Ty(), Src0->getType() });
    return Builder.CreateCall(F, Src0);
  }
  case AMDGPU::BI__builtin_amdgcn_fract:
  case AMDGPU::BI__builtin_amdgcn_fractf:
  case AMDGPU::BI__builtin_amdgcn_fracth:
    return emitUnaryBuiltin(*this, E, Intrinsic::amdgcn_fract);
  case AMDGPU::BI__builtin_amdgcn_lerp:
    return emitTernaryBuiltin(*this, E, Intrinsic::amdgcn_lerp);
  case AMDGPU::BI__builtin_amdgcn_uicmp:
  case AMDGPU::BI__builtin_amdgcn_uicmpl:
  case AMDGPU::BI__builtin_amdgcn_sicmp:
  case AMDGPU::BI__builtin_amdgcn_sicmpl:
    return emitTernaryBuiltin(*this, E, Intrinsic::amdgcn_icmp);
  case AMDGPU::BI__builtin_amdgcn_fcmp:
  case AMDGPU::BI__builtin_amdgcn_fcmpf:
    return emitTernaryBuiltin(*this, E, Intrinsic::amdgcn_fcmp);
  case AMDGPU::BI__builtin_amdgcn_class:
  case AMDGPU::BI__builtin_amdgcn_classf:
  case AMDGPU::BI__builtin_amdgcn_classh:
    return emitFPIntBuiltin(*this, E, Intrinsic::amdgcn_class);

  case AMDGPU::BI__builtin_amdgcn_read_exec: {
    CallInst *CI = cast<CallInst>(
      EmitSpecialRegisterBuiltin(*this, E, Int64Ty, Int64Ty, true, "exec"));
    CI->setConvergent();
    return CI;
  }

  // amdgcn workitem
  case AMDGPU::BI__builtin_amdgcn_workitem_id_x:
    return emitRangedBuiltin(*this, Intrinsic::amdgcn_workitem_id_x, 0, 1024);
  case AMDGPU::BI__builtin_amdgcn_workitem_id_y:
    return emitRangedBuiltin(*this, Intrinsic::amdgcn_workitem_id_y, 0, 1024);
  case AMDGPU::BI__builtin_amdgcn_workitem_id_z:
    return emitRangedBuiltin(*this, Intrinsic::amdgcn_workitem_id_z, 0, 1024);

  // r600 intrinsics
  case AMDGPU::BI__builtin_r600_recipsqrt_ieee:
  case AMDGPU::BI__builtin_r600_recipsqrt_ieeef:
    return emitUnaryBuiltin(*this, E, Intrinsic::r600_recipsqrt_ieee);
  case AMDGPU::BI__builtin_r600_read_tidig_x:
    return emitRangedBuiltin(*this, Intrinsic::r600_read_tidig_x, 0, 1024);
  case AMDGPU::BI__builtin_r600_read_tidig_y:
    return emitRangedBuiltin(*this, Intrinsic::r600_read_tidig_y, 0, 1024);
  case AMDGPU::BI__builtin_r600_read_tidig_z:
    return emitRangedBuiltin(*this, Intrinsic::r600_read_tidig_z, 0, 1024);
  default:
    return nullptr;
  }
}
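// Illustrative note (not from the upstream sources): __builtin_amdgcn_div_scale
// is declared with an output pointer, roughly
//
//   double __builtin_amdgcn_div_scale(double x, double y, bool sel, bool *flag);
//
// while llvm.amdgcn.div.scale returns a {value, flag} pair, so the code above
// extracts both members and stores the zero-extended flag through the pointer.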
/// Handle a SystemZ function in which the final argument is a pointer
/// to an int that receives the post-instruction CC value.  At the LLVM level
/// this is represented as a function that returns a {result, cc} pair.
static Value *EmitSystemZIntrinsicWithCC(CodeGenFunction &CGF,
                                         unsigned IntrinsicID,
                                         const CallExpr *E) {
  unsigned NumArgs = E->getNumArgs() - 1;
  SmallVector<Value *, 8> Args(NumArgs);
  for (unsigned I = 0; I < NumArgs; ++I)
    Args[I] = CGF.EmitScalarExpr(E->getArg(I));
  Address CCPtr = CGF.EmitPointerWithAlignment(E->getArg(NumArgs));
  Value *F = CGF.CGM.getIntrinsic(IntrinsicID);
  Value *Call = CGF.Builder.CreateCall(F, Args);
  Value *CC = CGF.Builder.CreateExtractValue(Call, 1);
  CGF.Builder.CreateStore(CC, CCPtr);
  return CGF.Builder.CreateExtractValue(Call, 0);
}

Value *CodeGenFunction::EmitSystemZBuiltinExpr(unsigned BuiltinID,
                                               const CallExpr *E) {
  switch (BuiltinID) {
  case SystemZ::BI__builtin_tbegin: {
    Value *TDB = EmitScalarExpr(E->getArg(0));
    Value *Control = llvm::ConstantInt::get(Int32Ty, 0xff0c);
    Value *F = CGM.getIntrinsic(Intrinsic::s390_tbegin);
    return Builder.CreateCall(F, {TDB, Control});
  }
  case SystemZ::BI__builtin_tbegin_nofloat: {
    Value *TDB = EmitScalarExpr(E->getArg(0));
    Value *Control = llvm::ConstantInt::get(Int32Ty, 0xff0c);
    Value *F = CGM.getIntrinsic(Intrinsic::s390_tbegin_nofloat);
    return Builder.CreateCall(F, {TDB, Control});
  }
  case SystemZ::BI__builtin_tbeginc: {
    Value *TDB = llvm::ConstantPointerNull::get(Int8PtrTy);
    Value *Control = llvm::ConstantInt::get(Int32Ty, 0xff08);
    Value *F = CGM.getIntrinsic(Intrinsic::s390_tbeginc);
    return Builder.CreateCall(F, {TDB, Control});
  }
  case SystemZ::BI__builtin_tabort: {
    Value *Data = EmitScalarExpr(E->getArg(0));
    Value *F = CGM.getIntrinsic(Intrinsic::s390_tabort);
    return Builder.CreateCall(F, Builder.CreateSExt(Data, Int64Ty, "tabort"));
  }
  case SystemZ::BI__builtin_non_tx_store: {
    Value *Address = EmitScalarExpr(E->getArg(0));
    Value *Data = EmitScalarExpr(E->getArg(1));
    Value *F = CGM.getIntrinsic(Intrinsic::s390_ntstg);
    return Builder.CreateCall(F, {Data, Address});
  }
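  // Illustrative note (not from the upstream sources): for a builtin such as
  // __builtin_s390_vpkshs(a, b, &cc), the EmitSystemZIntrinsicWithCC helper
  // above calls the corresponding {result, cc} intrinsic, stores the extracted
  // cc through the trailing pointer argument, and returns the packed result.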
  // Vector builtins.  Note that most vector builtins are mapped automatically
  // to target-specific LLVM intrinsics.  The ones handled specially here can
  // be represented via standard LLVM IR, which is preferable to enable common
  // LLVM optimizations.

  case SystemZ::BI__builtin_s390_vpopctb:
  case SystemZ::BI__builtin_s390_vpopcth:
  case SystemZ::BI__builtin_s390_vpopctf:
  case SystemZ::BI__builtin_s390_vpopctg: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Function *F = CGM.getIntrinsic(Intrinsic::ctpop, ResultType);
    return Builder.CreateCall(F, X);
  }

  case SystemZ::BI__builtin_s390_vclzb:
  case SystemZ::BI__builtin_s390_vclzh:
  case SystemZ::BI__builtin_s390_vclzf:
  case SystemZ::BI__builtin_s390_vclzg: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Undef = ConstantInt::get(Builder.getInt1Ty(), false);
    Function *F = CGM.getIntrinsic(Intrinsic::ctlz, ResultType);
    return Builder.CreateCall(F, {X, Undef});
  }

  case SystemZ::BI__builtin_s390_vctzb:
  case SystemZ::BI__builtin_s390_vctzh:
  case SystemZ::BI__builtin_s390_vctzf:
  case SystemZ::BI__builtin_s390_vctzg: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Undef = ConstantInt::get(Builder.getInt1Ty(), false);
    Function *F = CGM.getIntrinsic(Intrinsic::cttz, ResultType);
    return Builder.CreateCall(F, {X, Undef});
  }

  case SystemZ::BI__builtin_s390_vfsqdb: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Function *F = CGM.getIntrinsic(Intrinsic::sqrt, ResultType);
    return Builder.CreateCall(F, X);
  }
  case SystemZ::BI__builtin_s390_vfmadb: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Y = EmitScalarExpr(E->getArg(1));
    Value *Z = EmitScalarExpr(E->getArg(2));
    Function *F = CGM.getIntrinsic(Intrinsic::fma, ResultType);
    return Builder.CreateCall(F, {X, Y, Z});
  }
  case SystemZ::BI__builtin_s390_vfmsdb: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Y = EmitScalarExpr(E->getArg(1));
    Value *Z = EmitScalarExpr(E->getArg(2));
    Value *Zero = llvm::ConstantFP::getZeroValueForNegation(ResultType);
    Function *F = CGM.getIntrinsic(Intrinsic::fma, ResultType);
    return Builder.CreateCall(F, {X, Y, Builder.CreateFSub(Zero, Z, "sub")});
  }
  case SystemZ::BI__builtin_s390_vflpdb: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Function *F = CGM.getIntrinsic(Intrinsic::fabs, ResultType);
    return Builder.CreateCall(F, X);
  }
  case SystemZ::BI__builtin_s390_vflndb: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Zero = llvm::ConstantFP::getZeroValueForNegation(ResultType);
    Function *F = CGM.getIntrinsic(Intrinsic::fabs, ResultType);
    return Builder.CreateFSub(Zero, Builder.CreateCall(F, X), "sub");
  }
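  // Illustrative note (not from the upstream sources): the vfmsdb case above
  // emits __builtin_s390_vfmsdb(x, y, z) as llvm.fma.v2f64(x, y, -z), where
  // -z is produced by subtracting z from negative zero (fsub -0.0, z).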
  case SystemZ::BI__builtin_s390_vfidb: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *X = EmitScalarExpr(E->getArg(0));
    // Constant-fold the M4 and M5 mask arguments.
    llvm::APSInt M4, M5;
    bool IsConstM4 = E->getArg(1)->isIntegerConstantExpr(M4, getContext());
    bool IsConstM5 = E->getArg(2)->isIntegerConstantExpr(M5, getContext());
    assert(IsConstM4 && IsConstM5 && "Constant arg isn't actually constant?");
    (void)IsConstM4; (void)IsConstM5;
    // Check whether this instance of vfidb can be represented via an LLVM
    // standard intrinsic.  We only support some combinations of M4 and M5.
    Intrinsic::ID ID = Intrinsic::not_intrinsic;
    switch (M4.getZExtValue()) {
    default: break;
    case 0:  // IEEE-inexact exception allowed
      switch (M5.getZExtValue()) {
      default: break;
      case 0: ID = Intrinsic::rint; break;
      }
      break;
    case 4:  // IEEE-inexact exception suppressed
      switch (M5.getZExtValue()) {
      default: break;
      case 0: ID = Intrinsic::nearbyint; break;
      case 1: ID = Intrinsic::round; break;
      case 5: ID = Intrinsic::trunc; break;
      case 6: ID = Intrinsic::ceil; break;
      case 7: ID = Intrinsic::floor; break;
      }
      break;
    }
    if (ID != Intrinsic::not_intrinsic) {
      Function *F = CGM.getIntrinsic(ID, ResultType);
      return Builder.CreateCall(F, X);
    }
    Function *F = CGM.getIntrinsic(Intrinsic::s390_vfidb);
    Value *M4Value = llvm::ConstantInt::get(getLLVMContext(), M4);
    Value *M5Value = llvm::ConstantInt::get(getLLVMContext(), M5);
    return Builder.CreateCall(F, {X, M4Value, M5Value});
  }

  // Vector intrinsics that output the post-instruction CC value.

#define INTRINSIC_WITH_CC(NAME) \
  case SystemZ::BI__builtin_##NAME: \
    return EmitSystemZIntrinsicWithCC(*this, Intrinsic::NAME, E)

  INTRINSIC_WITH_CC(s390_vpkshs);
  INTRINSIC_WITH_CC(s390_vpksfs);
  INTRINSIC_WITH_CC(s390_vpksgs);

  INTRINSIC_WITH_CC(s390_vpklshs);
  INTRINSIC_WITH_CC(s390_vpklsfs);
  INTRINSIC_WITH_CC(s390_vpklsgs);

  INTRINSIC_WITH_CC(s390_vceqbs);
  INTRINSIC_WITH_CC(s390_vceqhs);
  INTRINSIC_WITH_CC(s390_vceqfs);
  INTRINSIC_WITH_CC(s390_vceqgs);

  INTRINSIC_WITH_CC(s390_vchbs);
  INTRINSIC_WITH_CC(s390_vchhs);
  INTRINSIC_WITH_CC(s390_vchfs);
  INTRINSIC_WITH_CC(s390_vchgs);

  INTRINSIC_WITH_CC(s390_vchlbs);
  INTRINSIC_WITH_CC(s390_vchlhs);
  INTRINSIC_WITH_CC(s390_vchlfs);
  INTRINSIC_WITH_CC(s390_vchlgs);

  INTRINSIC_WITH_CC(s390_vfaebs);
  INTRINSIC_WITH_CC(s390_vfaehs);
  INTRINSIC_WITH_CC(s390_vfaefs);

  INTRINSIC_WITH_CC(s390_vfaezbs);
  INTRINSIC_WITH_CC(s390_vfaezhs);
  INTRINSIC_WITH_CC(s390_vfaezfs);

  INTRINSIC_WITH_CC(s390_vfeebs);
  INTRINSIC_WITH_CC(s390_vfeehs);
  INTRINSIC_WITH_CC(s390_vfeefs);

  INTRINSIC_WITH_CC(s390_vfeezbs);
  INTRINSIC_WITH_CC(s390_vfeezhs);
  INTRINSIC_WITH_CC(s390_vfeezfs);

  INTRINSIC_WITH_CC(s390_vfenebs);
  INTRINSIC_WITH_CC(s390_vfenehs);
  INTRINSIC_WITH_CC(s390_vfenefs);

  INTRINSIC_WITH_CC(s390_vfenezbs);
  INTRINSIC_WITH_CC(s390_vfenezhs);
  INTRINSIC_WITH_CC(s390_vfenezfs);

  INTRINSIC_WITH_CC(s390_vistrbs);
  INTRINSIC_WITH_CC(s390_vistrhs);
  INTRINSIC_WITH_CC(s390_vistrfs);

  INTRINSIC_WITH_CC(s390_vstrcbs);
  INTRINSIC_WITH_CC(s390_vstrchs);
  INTRINSIC_WITH_CC(s390_vstrcfs);

  INTRINSIC_WITH_CC(s390_vstrczbs);
  INTRINSIC_WITH_CC(s390_vstrczhs);
  INTRINSIC_WITH_CC(s390_vstrczfs);

  INTRINSIC_WITH_CC(s390_vfcedbs);
  INTRINSIC_WITH_CC(s390_vfchdbs);
  INTRINSIC_WITH_CC(s390_vfchedbs);

  INTRINSIC_WITH_CC(s390_vftcidb);

#undef INTRINSIC_WITH_CC

  default:
    return nullptr;
  }
}
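// Illustrative note (not from the upstream sources): each
// INTRINSIC_WITH_CC(NAME) entry above expands to
//
//   case SystemZ::BI__builtin_NAME:
//     return EmitSystemZIntrinsicWithCC(*this, Intrinsic::NAME, E);
//
// so, e.g., __builtin_s390_vceqbs is routed to Intrinsic::s390_vceqbs.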
Value *CodeGenFunction::EmitNVPTXBuiltinExpr(unsigned BuiltinID,
                                             const CallExpr *E) {
  auto MakeLdg = [&](unsigned IntrinsicID) {
    Value *Ptr = EmitScalarExpr(E->getArg(0));
    AlignmentSource AlignSource;
    clang::CharUnits Align =
        getNaturalPointeeTypeAlignment(E->getArg(0)->getType(), &AlignSource);
    return Builder.CreateCall(
        CGM.getIntrinsic(IntrinsicID, {Ptr->getType()->getPointerElementType(),
                                       Ptr->getType()}),
        {Ptr, ConstantInt::get(Builder.getInt32Ty(), Align.getQuantity())});
  };
  auto MakeScopedAtomic = [&](unsigned IntrinsicID) {
    Value *Ptr = EmitScalarExpr(E->getArg(0));
    return Builder.CreateCall(
        CGM.getIntrinsic(IntrinsicID, {Ptr->getType()->getPointerElementType(),
                                       Ptr->getType()}),
        {Ptr, EmitScalarExpr(E->getArg(1))});
  };
  switch (BuiltinID) {
  case NVPTX::BI__nvvm_atom_add_gen_i:
  case NVPTX::BI__nvvm_atom_add_gen_l:
  case NVPTX::BI__nvvm_atom_add_gen_ll:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::Add, E);

  case NVPTX::BI__nvvm_atom_sub_gen_i:
  case NVPTX::BI__nvvm_atom_sub_gen_l:
  case NVPTX::BI__nvvm_atom_sub_gen_ll:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::Sub, E);

  case NVPTX::BI__nvvm_atom_and_gen_i:
  case NVPTX::BI__nvvm_atom_and_gen_l:
  case NVPTX::BI__nvvm_atom_and_gen_ll:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::And, E);

  case NVPTX::BI__nvvm_atom_or_gen_i:
  case NVPTX::BI__nvvm_atom_or_gen_l:
  case NVPTX::BI__nvvm_atom_or_gen_ll:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::Or, E);

  case NVPTX::BI__nvvm_atom_xor_gen_i:
  case NVPTX::BI__nvvm_atom_xor_gen_l:
  case NVPTX::BI__nvvm_atom_xor_gen_ll:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::Xor, E);

  case NVPTX::BI__nvvm_atom_xchg_gen_i:
  case NVPTX::BI__nvvm_atom_xchg_gen_l:
  case NVPTX::BI__nvvm_atom_xchg_gen_ll:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::Xchg, E);

  case NVPTX::BI__nvvm_atom_max_gen_i:
  case NVPTX::BI__nvvm_atom_max_gen_l:
  case NVPTX::BI__nvvm_atom_max_gen_ll:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::Max, E);

  case NVPTX::BI__nvvm_atom_max_gen_ui:
  case NVPTX::BI__nvvm_atom_max_gen_ul:
  case NVPTX::BI__nvvm_atom_max_gen_ull:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::UMax, E);

  case NVPTX::BI__nvvm_atom_min_gen_i:
  case NVPTX::BI__nvvm_atom_min_gen_l:
  case NVPTX::BI__nvvm_atom_min_gen_ll:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::Min, E);

  case NVPTX::BI__nvvm_atom_min_gen_ui:
  case NVPTX::BI__nvvm_atom_min_gen_ul:
  case NVPTX::BI__nvvm_atom_min_gen_ull:
    return MakeBinaryAtomicValue(*this, llvm::AtomicRMWInst::UMin, E);

  case NVPTX::BI__nvvm_atom_cas_gen_i:
  case NVPTX::BI__nvvm_atom_cas_gen_l:
  case NVPTX::BI__nvvm_atom_cas_gen_ll:
    // __nvvm_atom_cas_gen_* should return the old value rather than the
    // success flag.
    return MakeAtomicCmpXchgValue(*this, E, /*ReturnBool=*/false);

  case NVPTX::BI__nvvm_atom_add_gen_f: {
    Value *Ptr = EmitScalarExpr(E->getArg(0));
    Value *Val = EmitScalarExpr(E->getArg(1));
    // atomicrmw only deals with integer arguments so we need to use
    // LLVM's nvvm_atomic_load_add_f32 intrinsic for that.
    Value *FnALAF32 =
        CGM.getIntrinsic(Intrinsic::nvvm_atomic_load_add_f32, Ptr->getType());
    return Builder.CreateCall(FnALAF32, {Ptr, Val});
  }

  case NVPTX::BI__nvvm_atom_inc_gen_ui: {
    Value *Ptr = EmitScalarExpr(E->getArg(0));
    Value *Val = EmitScalarExpr(E->getArg(1));
    Value *FnALI32 =
        CGM.getIntrinsic(Intrinsic::nvvm_atomic_load_inc_32, Ptr->getType());
    return Builder.CreateCall(FnALI32, {Ptr, Val});
  }

  case NVPTX::BI__nvvm_atom_dec_gen_ui: {
    Value *Ptr = EmitScalarExpr(E->getArg(0));
    Value *Val = EmitScalarExpr(E->getArg(1));
    Value *FnALD32 =
        CGM.getIntrinsic(Intrinsic::nvvm_atomic_load_dec_32, Ptr->getType());
    return Builder.CreateCall(FnALD32, {Ptr, Val});
  }

  case NVPTX::BI__nvvm_ldg_c:
  case NVPTX::BI__nvvm_ldg_c2:
  case NVPTX::BI__nvvm_ldg_c4:
  case NVPTX::BI__nvvm_ldg_s:
  case NVPTX::BI__nvvm_ldg_s2:
  case NVPTX::BI__nvvm_ldg_s4:
  case NVPTX::BI__nvvm_ldg_i:
  case NVPTX::BI__nvvm_ldg_i2:
  case NVPTX::BI__nvvm_ldg_i4:
  case NVPTX::BI__nvvm_ldg_l:
  case NVPTX::BI__nvvm_ldg_ll:
  case NVPTX::BI__nvvm_ldg_ll2:
  case NVPTX::BI__nvvm_ldg_uc:
  case NVPTX::BI__nvvm_ldg_uc2:
  case NVPTX::BI__nvvm_ldg_uc4:
  case NVPTX::BI__nvvm_ldg_us:
  case NVPTX::BI__nvvm_ldg_us2:
  case NVPTX::BI__nvvm_ldg_us4:
  case NVPTX::BI__nvvm_ldg_ui:
  case NVPTX::BI__nvvm_ldg_ui2:
  case NVPTX::BI__nvvm_ldg_ui4:
  case NVPTX::BI__nvvm_ldg_ul:
  case NVPTX::BI__nvvm_ldg_ull:
  case NVPTX::BI__nvvm_ldg_ull2:
    // PTX Interoperability section 2.2: "For a vector with an even number of
    // elements, its alignment is set to number of elements times the alignment
    // of its member: n*alignof(t)."
    return MakeLdg(Intrinsic::nvvm_ldg_global_i);
  case NVPTX::BI__nvvm_ldg_f:
  case NVPTX::BI__nvvm_ldg_f2:
  case NVPTX::BI__nvvm_ldg_f4:
  case NVPTX::BI__nvvm_ldg_d:
  case NVPTX::BI__nvvm_ldg_d2:
    return MakeLdg(Intrinsic::nvvm_ldg_global_f);
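  // Illustrative note (not from the upstream sources): for a call such as
  // __nvvm_ldg_f(p) with 'const float *p', MakeLdg emits roughly
  //
  //   call float @llvm.nvvm.ldg.global.f.f32.p0f32(float* %p, i32 4)
  //
  // where the i32 operand is the natural alignment of the pointee type.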
  case NVPTX::BI__nvvm_atom_cta_add_gen_i:
  case NVPTX::BI__nvvm_atom_cta_add_gen_l:
  case NVPTX::BI__nvvm_atom_cta_add_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_add_gen_i_cta);
  case NVPTX::BI__nvvm_atom_sys_add_gen_i:
  case NVPTX::BI__nvvm_atom_sys_add_gen_l:
  case NVPTX::BI__nvvm_atom_sys_add_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_add_gen_i_sys);
  case NVPTX::BI__nvvm_atom_cta_add_gen_f:
  case NVPTX::BI__nvvm_atom_cta_add_gen_d:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_add_gen_f_cta);
  case NVPTX::BI__nvvm_atom_sys_add_gen_f:
  case NVPTX::BI__nvvm_atom_sys_add_gen_d:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_add_gen_f_sys);
  case NVPTX::BI__nvvm_atom_cta_xchg_gen_i:
  case NVPTX::BI__nvvm_atom_cta_xchg_gen_l:
  case NVPTX::BI__nvvm_atom_cta_xchg_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_exch_gen_i_cta);
  case NVPTX::BI__nvvm_atom_sys_xchg_gen_i:
  case NVPTX::BI__nvvm_atom_sys_xchg_gen_l:
  case NVPTX::BI__nvvm_atom_sys_xchg_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_exch_gen_i_sys);
  case NVPTX::BI__nvvm_atom_cta_max_gen_i:
  case NVPTX::BI__nvvm_atom_cta_max_gen_ui:
  case NVPTX::BI__nvvm_atom_cta_max_gen_l:
  case NVPTX::BI__nvvm_atom_cta_max_gen_ul:
  case NVPTX::BI__nvvm_atom_cta_max_gen_ll:
  case NVPTX::BI__nvvm_atom_cta_max_gen_ull:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_max_gen_i_cta);
  case NVPTX::BI__nvvm_atom_sys_max_gen_i:
  case NVPTX::BI__nvvm_atom_sys_max_gen_ui:
  case NVPTX::BI__nvvm_atom_sys_max_gen_l:
  case NVPTX::BI__nvvm_atom_sys_max_gen_ul:
  case NVPTX::BI__nvvm_atom_sys_max_gen_ll:
  case NVPTX::BI__nvvm_atom_sys_max_gen_ull:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_max_gen_i_sys);
  case NVPTX::BI__nvvm_atom_cta_min_gen_i:
  case NVPTX::BI__nvvm_atom_cta_min_gen_ui:
  case NVPTX::BI__nvvm_atom_cta_min_gen_l:
  case NVPTX::BI__nvvm_atom_cta_min_gen_ul:
  case NVPTX::BI__nvvm_atom_cta_min_gen_ll:
  case NVPTX::BI__nvvm_atom_cta_min_gen_ull:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_min_gen_i_cta);
  case NVPTX::BI__nvvm_atom_sys_min_gen_i:
  case NVPTX::BI__nvvm_atom_sys_min_gen_ui:
  case NVPTX::BI__nvvm_atom_sys_min_gen_l:
  case NVPTX::BI__nvvm_atom_sys_min_gen_ul:
  case NVPTX::BI__nvvm_atom_sys_min_gen_ll:
  case NVPTX::BI__nvvm_atom_sys_min_gen_ull:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_min_gen_i_sys);
  case NVPTX::BI__nvvm_atom_cta_inc_gen_ui:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_inc_gen_i_cta);
  case NVPTX::BI__nvvm_atom_cta_dec_gen_ui:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_dec_gen_i_cta);
  case NVPTX::BI__nvvm_atom_sys_inc_gen_ui:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_inc_gen_i_sys);
  case NVPTX::BI__nvvm_atom_sys_dec_gen_ui:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_dec_gen_i_sys);
  case NVPTX::BI__nvvm_atom_cta_and_gen_i:
  case NVPTX::BI__nvvm_atom_cta_and_gen_l:
  case NVPTX::BI__nvvm_atom_cta_and_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_and_gen_i_cta);
  case NVPTX::BI__nvvm_atom_sys_and_gen_i:
  case NVPTX::BI__nvvm_atom_sys_and_gen_l:
  case NVPTX::BI__nvvm_atom_sys_and_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_and_gen_i_sys);
  case NVPTX::BI__nvvm_atom_cta_or_gen_i:
  case NVPTX::BI__nvvm_atom_cta_or_gen_l:
  case NVPTX::BI__nvvm_atom_cta_or_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_or_gen_i_cta);
  case NVPTX::BI__nvvm_atom_sys_or_gen_i:
  case NVPTX::BI__nvvm_atom_sys_or_gen_l:
  case NVPTX::BI__nvvm_atom_sys_or_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_or_gen_i_sys);
  case NVPTX::BI__nvvm_atom_cta_xor_gen_i:
  case NVPTX::BI__nvvm_atom_cta_xor_gen_l:
  case NVPTX::BI__nvvm_atom_cta_xor_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_xor_gen_i_cta);
  case NVPTX::BI__nvvm_atom_sys_xor_gen_i:
  case NVPTX::BI__nvvm_atom_sys_xor_gen_l:
  case NVPTX::BI__nvvm_atom_sys_xor_gen_ll:
    return MakeScopedAtomic(Intrinsic::nvvm_atomic_xor_gen_i_sys);
  case NVPTX::BI__nvvm_atom_cta_cas_gen_i:
  case NVPTX::BI__nvvm_atom_cta_cas_gen_l:
  case NVPTX::BI__nvvm_atom_cta_cas_gen_ll: {
    Value *Ptr = EmitScalarExpr(E->getArg(0));
    return Builder.CreateCall(
        CGM.getIntrinsic(
            Intrinsic::nvvm_atomic_cas_gen_i_cta,
            {Ptr->getType()->getPointerElementType(), Ptr->getType()}),
        {Ptr, EmitScalarExpr(E->getArg(1)), EmitScalarExpr(E->getArg(2))});
  }
  case NVPTX::BI__nvvm_atom_sys_cas_gen_i:
  case NVPTX::BI__nvvm_atom_sys_cas_gen_l:
  case NVPTX::BI__nvvm_atom_sys_cas_gen_ll: {
    Value *Ptr = EmitScalarExpr(E->getArg(0));
    return Builder.CreateCall(
        CGM.getIntrinsic(
            Intrinsic::nvvm_atomic_cas_gen_i_sys,
            {Ptr->getType()->getPointerElementType(), Ptr->getType()}),
        {Ptr, EmitScalarExpr(E->getArg(1)), EmitScalarExpr(E->getArg(2))});
  }
  default:
    return nullptr;
  }
}

Value *CodeGenFunction::EmitWebAssemblyBuiltinExpr(unsigned BuiltinID,
                                                   const CallExpr *E) {
  switch (BuiltinID) {
  case WebAssembly::BI__builtin_wasm_current_memory: {
    llvm::Type *ResultType = ConvertType(E->getType());
    Value *Callee = CGM.getIntrinsic(Intrinsic::wasm_current_memory, ResultType);
    return Builder.CreateCall(Callee);
  }
  case WebAssembly::BI__builtin_wasm_grow_memory: {
    Value *X = EmitScalarExpr(E->getArg(0));
    Value *Callee = CGM.getIntrinsic(Intrinsic::wasm_grow_memory, X->getType());
    return Builder.CreateCall(Callee, X);
  }
  default:
    return nullptr;
  }
}
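// Illustrative note (not from the upstream sources): the two WebAssembly
// builtins above are thin wrappers; __builtin_wasm_grow_memory(n) simply
// becomes a call to the llvm.wasm.grow.memory intrinsic, overloaded on the
// argument type, with n as its only operand.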
Index: vendor/clang/dist/lib/Lex/PPMacroExpansion.cpp
===================================================================
--- vendor/clang/dist/lib/Lex/PPMacroExpansion.cpp	(revision 312705)
+++ vendor/clang/dist/lib/Lex/PPMacroExpansion.cpp	(revision 312706)
@@ -1,1907 +1,1908 @@
//===--- MacroExpansion.cpp - Top level Macro Expansion -------------------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file implements the top level handling of macro expansion for the
// preprocessor.
//
//===----------------------------------------------------------------------===//

#include "clang/Basic/Attributes.h"
#include "clang/Basic/FileManager.h"
#include "clang/Basic/IdentifierTable.h"
#include "clang/Basic/LLVM.h"
#include "clang/Basic/LangOptions.h"
#include "clang/Basic/ObjCRuntime.h"
#include "clang/Basic/SourceLocation.h"
#include "clang/Basic/TargetInfo.h"
#include "clang/Lex/CodeCompletionHandler.h"
#include "clang/Lex/DirectoryLookup.h"
#include "clang/Lex/ExternalPreprocessorSource.h"
#include "clang/Lex/LexDiagnostic.h"
#include "clang/Lex/MacroArgs.h"
#include "clang/Lex/MacroInfo.h"
#include "clang/Lex/Preprocessor.h"
#include "clang/Lex/PreprocessorLexer.h"
#include "clang/Lex/PTHLexer.h"
#include "clang/Lex/Token.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/FoldingSet.h"
#include "llvm/ADT/None.h"
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/StringSwitch.h"
#include "llvm/Config/llvm-config.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/Format.h"
#include "llvm/Support/raw_ostream.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <ctime>
#include <string>
#include <tuple>
#include <utility>

using namespace clang;

MacroDirective *
Preprocessor::getLocalMacroDirectiveHistory(const IdentifierInfo *II) const {
  if (!II->hadMacroDefinition())
    return nullptr;
  auto Pos = CurSubmoduleState->Macros.find(II);
  return Pos == CurSubmoduleState->Macros.end() ? nullptr
                                                : Pos->second.getLatest();
}

void Preprocessor::appendMacroDirective(IdentifierInfo *II, MacroDirective *MD) {
  assert(MD && "MacroDirective should be non-zero!");
  assert(!MD->getPrevious() && "Already attached to a MacroDirective history.");

  MacroState &StoredMD = CurSubmoduleState->Macros[II];
  auto *OldMD = StoredMD.getLatest();
  MD->setPrevious(OldMD);
  StoredMD.setLatest(MD);
  StoredMD.overrideActiveModuleMacros(*this, II);

  if (needModuleMacros()) {
    // Track that we created a new macro directive, so we know we should
    // consider building a ModuleMacro for it when we get to the end of
    // the module.
    PendingModuleMacroNames.push_back(II);
  }

  // Set up the identifier as having associated macro history.
  II->setHasMacroDefinition(true);
  if (!MD->isDefined() && LeafModuleMacros.find(II) == LeafModuleMacros.end())
    II->setHasMacroDefinition(false);
  if (II->isFromAST())
    II->setChangedSinceDeserialization();
}
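// Illustrative note (not from the upstream sources): after the sequence
//
//   #define FOO 1
//   #undef  FOO
//   #define FOO 2
//
// the history for 'FOO' is a three-element MacroDirective chain, and
// getLocalMacroDirectiveHistory() returns the latest directive
// (the '#define FOO 2').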
void Preprocessor::setLoadedMacroDirective(IdentifierInfo *II,
                                           MacroDirective *ED,
                                           MacroDirective *MD) {
  // Normally, when a macro is defined, it goes through appendMacroDirective()
  // above, which chains a macro to previous defines, undefs, etc.
  // However, in a pch, the whole macro history up to the end of the pch is
  // stored, so ASTReader goes through this function instead.
  // However, built-in macros are already registered in the Preprocessor
  // ctor, and ASTWriter stops writing the macro chain at built-in macros,
  // so in that case the chain from the pch needs to be spliced to the existing
  // built-in.
  assert(II && MD);
  MacroState &StoredMD = CurSubmoduleState->Macros[II];

  if (auto *OldMD = StoredMD.getLatest()) {
    // shouldIgnoreMacro() in ASTWriter also stops at macros from the
    // predefines buffer in module builds. However, in module builds, modules
    // are loaded completely before predefines are processed, so StoredMD
    // will be nullptr for them when they're loaded. StoredMD should only be
    // non-nullptr for builtins read from a pch file.
    assert(OldMD->getMacroInfo()->isBuiltinMacro() &&
           "only built-ins should have an entry here");
    assert(!OldMD->getPrevious() && "builtin should only have a single entry");
    ED->setPrevious(OldMD);
    StoredMD.setLatest(MD);
  } else {
    StoredMD = MD;
  }

  // Set up the identifier as having associated macro history.
  II->setHasMacroDefinition(true);
  if (!MD->isDefined() && LeafModuleMacros.find(II) == LeafModuleMacros.end())
    II->setHasMacroDefinition(false);
}

ModuleMacro *Preprocessor::addModuleMacro(Module *Mod, IdentifierInfo *II,
                                          MacroInfo *Macro,
                                          ArrayRef<ModuleMacro *> Overrides,
                                          bool &New) {
  llvm::FoldingSetNodeID ID;
  ModuleMacro::Profile(ID, Mod, II);

  void *InsertPos;
  if (auto *MM = ModuleMacros.FindNodeOrInsertPos(ID, InsertPos)) {
    New = false;
    return MM;
  }

  auto *MM = ModuleMacro::create(*this, Mod, II, Macro, Overrides);
  ModuleMacros.InsertNode(MM, InsertPos);

  // Each overridden macro is now overridden by one more macro.
  bool HidAny = false;
  for (auto *O : Overrides) {
    HidAny |= (O->NumOverriddenBy == 0);
    ++O->NumOverriddenBy;
  }

  // If we were the first overrider for any macro, it's no longer a leaf.
  auto &LeafMacros = LeafModuleMacros[II];
  if (HidAny) {
    LeafMacros.erase(
        std::remove_if(LeafMacros.begin(), LeafMacros.end(),
                       [](ModuleMacro *MM) { return MM->NumOverriddenBy != 0; }),
        LeafMacros.end());
  }

  // The new macro is always a leaf macro.
  LeafMacros.push_back(MM);
  // The identifier now has defined macros (that may or may not be visible).
  II->setHasMacroDefinition(true);

  New = true;
  return MM;
}

ModuleMacro *Preprocessor::getModuleMacro(Module *Mod, IdentifierInfo *II) {
  llvm::FoldingSetNodeID ID;
  ModuleMacro::Profile(ID, Mod, II);

  void *InsertPos;
  return ModuleMacros.FindNodeOrInsertPos(ID, InsertPos);
}

void Preprocessor::updateModuleMacroInfo(const IdentifierInfo *II,
                                         ModuleMacroInfo &Info) {
  assert(Info.ActiveModuleMacrosGeneration !=
             CurSubmoduleState->VisibleModules.getGeneration() &&
         "don't need to update this macro name info");
  Info.ActiveModuleMacrosGeneration =
      CurSubmoduleState->VisibleModules.getGeneration();

  auto Leaf = LeafModuleMacros.find(II);
  if (Leaf == LeafModuleMacros.end()) {
    // No imported macros at all: nothing to do.
    return;
  }

  Info.ActiveModuleMacros.clear();

  // Every macro that's locally overridden is overridden by a visible macro.
  llvm::DenseMap<ModuleMacro *, int> NumHiddenOverrides;
  for (auto *O : Info.OverriddenMacros)
    NumHiddenOverrides[O] = -1;
  // Collect all macros that are not overridden by a visible macro.
  llvm::SmallVector<ModuleMacro *, 16> Worklist;
  for (auto *LeafMM : Leaf->second) {
    assert(LeafMM->getNumOverridingMacros() == 0 && "leaf macro overridden");
    if (NumHiddenOverrides.lookup(LeafMM) == 0)
      Worklist.push_back(LeafMM);
  }

  while (!Worklist.empty()) {
    auto *MM = Worklist.pop_back_val();
    if (CurSubmoduleState->VisibleModules.isVisible(MM->getOwningModule())) {
      // We only care about collecting definitions; undefinitions only act
      // to override other definitions.
      if (MM->getMacroInfo())
        Info.ActiveModuleMacros.push_back(MM);
    } else {
      for (auto *O : MM->overrides())
        if ((unsigned)++NumHiddenOverrides[O] == O->getNumOverridingMacros())
          Worklist.push_back(O);
    }
  }

  // Our reverse postorder walk found the macros in reverse order.
  std::reverse(Info.ActiveModuleMacros.begin(), Info.ActiveModuleMacros.end());

  // Determine whether the macro name is ambiguous.
  MacroInfo *MI = nullptr;
  bool IsSystemMacro = true;
  bool IsAmbiguous = false;
  if (auto *MD = Info.MD) {
    while (MD && isa<VisibilityMacroDirective>(MD))
      MD = MD->getPrevious();
    if (auto *DMD = dyn_cast_or_null<DefMacroDirective>(MD)) {
      MI = DMD->getInfo();
      IsSystemMacro &= SourceMgr.isInSystemHeader(DMD->getLocation());
    }
  }
  for (auto *Active : Info.ActiveModuleMacros) {
    auto *NewMI = Active->getMacroInfo();

    // Before marking the macro as ambiguous, check if this is a case where
    // both macros are in system headers. If so, we trust that the system
    // did not get it wrong. This also handles cases where Clang's own
    // headers have a different spelling of certain system macros:
    //   #define LONG_MAX __LONG_MAX__ (clang's limits.h)
    //   #define LONG_MAX 0x7fffffffffffffffL (system's limits.h)
    //
    // FIXME: Remove the defined-in-system-headers check. clang's limits.h
    // overrides the system limits.h's macros, so there's no conflict here.
    if (MI && NewMI != MI &&
        !MI->isIdenticalTo(*NewMI, *this, /*Syntactically=*/true))
      IsAmbiguous = true;
    IsSystemMacro &= Active->getOwningModule()->IsSystem ||
                     SourceMgr.isInSystemHeader(NewMI->getDefinitionLoc());
    MI = NewMI;
  }
  Info.IsAmbiguous = IsAmbiguous && !IsSystemMacro;
}

void Preprocessor::dumpMacroInfo(const IdentifierInfo *II) {
  ArrayRef<ModuleMacro *> Leaf;
  auto LeafIt = LeafModuleMacros.find(II);
  if (LeafIt != LeafModuleMacros.end())
    Leaf = LeafIt->second;
  const MacroState *State = nullptr;
  auto Pos = CurSubmoduleState->Macros.find(II);
  if (Pos != CurSubmoduleState->Macros.end())
    State = &Pos->second;

  llvm::errs() << "MacroState " << State << " " << II->getNameStart();
  if (State && State->isAmbiguous(*this, II))
    llvm::errs() << " ambiguous";
  if (State && !State->getOverriddenMacros().empty()) {
    llvm::errs() << " overrides";
    for (auto *O : State->getOverriddenMacros())
      llvm::errs() << " " << O->getOwningModule()->getFullModuleName();
  }
  llvm::errs() << "\n";

  // Dump local macro directives.
  for (auto *MD = State ? State->getLatest() : nullptr; MD;
       MD = MD->getPrevious()) {
    llvm::errs() << " ";
    MD->dump();
  }

  // Dump module macros.
  llvm::DenseSet<ModuleMacro *> Active;
  for (auto *MM :
       State ? State->getActiveModuleMacros(*this, II) : None)
    Active.insert(MM);
  llvm::DenseSet<ModuleMacro *> Visited;
  llvm::SmallVector<ModuleMacro *, 16> Worklist(Leaf.begin(), Leaf.end());
  while (!Worklist.empty()) {
    auto *MM = Worklist.pop_back_val();
    llvm::errs() << " ModuleMacro " << MM << " "
                 << MM->getOwningModule()->getFullModuleName();
    if (!MM->getMacroInfo())
      llvm::errs() << " undef";

    if (Active.count(MM))
      llvm::errs() << " active";
    else if (!CurSubmoduleState->VisibleModules.isVisible(
                 MM->getOwningModule()))
      llvm::errs() << " hidden";
    else if (MM->getMacroInfo())
      llvm::errs() << " overridden";

    if (!MM->overrides().empty()) {
      llvm::errs() << " overrides";
      for (auto *O : MM->overrides()) {
        llvm::errs() << " " << O->getOwningModule()->getFullModuleName();
        if (Visited.insert(O).second)
          Worklist.push_back(O);
      }
    }
    llvm::errs() << "\n";
    if (auto *MI = MM->getMacroInfo()) {
      llvm::errs() << " ";
      MI->dump();
      llvm::errs() << "\n";
    }
  }
}

/// RegisterBuiltinMacro - Register the specified identifier in the identifier
/// table and mark it as a builtin macro to be expanded.
static IdentifierInfo *RegisterBuiltinMacro(Preprocessor &PP, const char *Name) {
  // Get the identifier.
  IdentifierInfo *Id = PP.getIdentifierInfo(Name);

  // Mark it as being a macro that is builtin.
  MacroInfo *MI = PP.AllocateMacroInfo(SourceLocation());
  MI->setIsBuiltinMacro();
  PP.appendDefMacroDirective(Id, MI);
  return Id;
}

/// RegisterBuiltinMacros - Register builtin macros, such as __LINE__ with the
/// identifier table.
void Preprocessor::RegisterBuiltinMacros() {
  Ident__LINE__ = RegisterBuiltinMacro(*this, "__LINE__");
  Ident__FILE__ = RegisterBuiltinMacro(*this, "__FILE__");
  Ident__DATE__ = RegisterBuiltinMacro(*this, "__DATE__");
  Ident__TIME__ = RegisterBuiltinMacro(*this, "__TIME__");
  Ident__COUNTER__ = RegisterBuiltinMacro(*this, "__COUNTER__");
  Ident_Pragma  = RegisterBuiltinMacro(*this, "_Pragma");

  // C++ Standing Document Extensions.
  if (LangOpts.CPlusPlus)
    Ident__has_cpp_attribute =
        RegisterBuiltinMacro(*this, "__has_cpp_attribute");
  else
    Ident__has_cpp_attribute = nullptr;

  // GCC Extensions.
  Ident__BASE_FILE__     = RegisterBuiltinMacro(*this, "__BASE_FILE__");
  Ident__INCLUDE_LEVEL__ = RegisterBuiltinMacro(*this, "__INCLUDE_LEVEL__");
  Ident__TIMESTAMP__     = RegisterBuiltinMacro(*this, "__TIMESTAMP__");

  // Microsoft Extensions.
  if (LangOpts.MicrosoftExt) {
    Ident__identifier = RegisterBuiltinMacro(*this, "__identifier");
    Ident__pragma = RegisterBuiltinMacro(*this, "__pragma");
  } else {
    Ident__identifier = nullptr;
    Ident__pragma = nullptr;
  }

  // Clang Extensions.
  Ident__has_feature      = RegisterBuiltinMacro(*this, "__has_feature");
  Ident__has_extension    = RegisterBuiltinMacro(*this, "__has_extension");
  Ident__has_builtin      = RegisterBuiltinMacro(*this, "__has_builtin");
  Ident__has_attribute    = RegisterBuiltinMacro(*this, "__has_attribute");
  Ident__has_declspec = RegisterBuiltinMacro(*this, "__has_declspec_attribute");
  Ident__has_include      = RegisterBuiltinMacro(*this, "__has_include");
  Ident__has_include_next = RegisterBuiltinMacro(*this, "__has_include_next");
  Ident__has_warning      = RegisterBuiltinMacro(*this, "__has_warning");
  Ident__is_identifier    = RegisterBuiltinMacro(*this, "__is_identifier");

  // Modules.
  Ident__building_module  = RegisterBuiltinMacro(*this, "__building_module");
  if (!LangOpts.CurrentModule.empty())
    Ident__MODULE__ = RegisterBuiltinMacro(*this, "__MODULE__");
  else
    Ident__MODULE__ = nullptr;
}
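// Illustrative note (not from the upstream sources): because __MODULE__ is
// only registered when LangOpts.CurrentModule is non-empty, a build with
// -fmodule-name=Foo can use __MODULE__ in its headers, while in other builds
// __MODULE__ remains an ordinary identifier.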
/// isTrivialSingleTokenExpansion - Return true if MI, which has a single token
/// in its expansion, currently expands to that token literally.
static bool isTrivialSingleTokenExpansion(const MacroInfo *MI,
                                          const IdentifierInfo *MacroIdent,
                                          Preprocessor &PP) {
  IdentifierInfo *II = MI->getReplacementToken(0).getIdentifierInfo();

  // If the token isn't an identifier, it's always literally expanded.
  if (!II)
    return true;

  // If the information about this identifier is out of date, update it from
  // the external source.
  if (II->isOutOfDate())
    PP.getExternalSource()->updateOutOfDateIdentifier(*II);

  // If the identifier is a macro, and if that macro is enabled, it may be
  // expanded so it's not a trivial expansion.
  if (auto *ExpansionMI = PP.getMacroInfo(II))
    if (ExpansionMI->isEnabled() &&
        // Fast expanding "#define X X" is ok, because X would be disabled.
        II != MacroIdent)
      return false;

  // If this is an object-like macro invocation, it is safe to trivially expand
  // it.
  if (MI->isObjectLike())
    return true;

  // If this is a function-like macro invocation, it's safe to trivially expand
  // as long as the identifier is not a macro argument.
  return std::find(MI->arg_begin(), MI->arg_end(), II) == MI->arg_end();
}
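// Illustrative note (not from the upstream sources): under the rules above,
//
//   #define VAL 42     // trivial: the replacement token is a literal
//   #define X X        // trivial: X is disabled during its own expansion
//   #define ID(a) a    // not trivial: the replacement token is an argument
//
// only the first two qualify for the fast single-token expansion path.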
SourceLocation ExpansionEnd = Identifier.getLocation(); // If this is a function-like macro, read the arguments. if (MI->isFunctionLike()) { // Remember that we are now parsing the arguments to a macro invocation. // Preprocessor directives used inside macro arguments are not portable, and // this enables the warning. InMacroArgs = true; Args = ReadFunctionLikeMacroArgs(Identifier, MI, ExpansionEnd); // Finished parsing args. InMacroArgs = false; // If there was an error parsing the arguments, bail out. if (!Args) return true; ++NumFnMacroExpanded; } else { ++NumMacroExpanded; } // Notice that this macro has been used. markMacroAsUsed(MI); // Remember where the token is expanded. SourceLocation ExpandLoc = Identifier.getLocation(); SourceRange ExpansionRange(ExpandLoc, ExpansionEnd); if (Callbacks) { if (InMacroArgs) { // We can have macro expansion inside a conditional directive while // reading the function macro arguments. To ensure, in that case, that // MacroExpands callbacks still happen in source order, queue this // callback to have it happen after the function macro callback. DelayedMacroExpandsCallbacks.push_back( MacroExpandsInfo(Identifier, M, ExpansionRange)); } else { Callbacks->MacroExpands(Identifier, M, ExpansionRange, Args); if (!DelayedMacroExpandsCallbacks.empty()) { for (const MacroExpandsInfo &Info : DelayedMacroExpandsCallbacks) { // FIXME: We lose macro args info with delayed callback. Callbacks->MacroExpands(Info.Tok, Info.MD, Info.Range, /*Args=*/nullptr); } DelayedMacroExpandsCallbacks.clear(); } } } // If the macro definition is ambiguous, complain. if (M.isAmbiguous()) { Diag(Identifier, diag::warn_pp_ambiguous_macro) << Identifier.getIdentifierInfo(); Diag(MI->getDefinitionLoc(), diag::note_pp_ambiguous_macro_chosen) << Identifier.getIdentifierInfo(); M.forAllDefinitions([&](const MacroInfo *OtherMI) { if (OtherMI != MI) Diag(OtherMI->getDefinitionLoc(), diag::note_pp_ambiguous_macro_other) << Identifier.getIdentifierInfo(); }); } // If we started lexing a macro, enter the macro expansion body. // If this macro expands to no tokens, don't bother to push it onto the // expansion stack, only to take it right back off. if (MI->getNumTokens() == 0) { // No need for arg info. if (Args) Args->destroy(*this); // Propagate whitespace info as if we had pushed, then popped, // a macro context. Identifier.setFlag(Token::LeadingEmptyMacro); PropagateLineStartLeadingSpaceInfo(Identifier); ++NumFastMacroExpanded; return false; } else if (MI->getNumTokens() == 1 && isTrivialSingleTokenExpansion(MI, Identifier.getIdentifierInfo(), *this)) { // Otherwise, if this macro expands into a single trivially-expanded // token: expand it now. This handles common cases like // "#define VAL 42". // No need for arg info. if (Args) Args->destroy(*this); // Propagate the isAtStartOfLine/hasLeadingSpace markers of the macro // identifier to the expanded token. bool isAtStartOfLine = Identifier.isAtStartOfLine(); bool hasLeadingSpace = Identifier.hasLeadingSpace(); // Replace the result token. Identifier = MI->getReplacementToken(0); // Restore the StartOfLine/LeadingSpace markers. Identifier.setFlagValue(Token::StartOfLine , isAtStartOfLine); Identifier.setFlagValue(Token::LeadingSpace, hasLeadingSpace); // Update the tokens location to include both its expansion and physical // locations. 
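// Sketch of the effect (illustrative): with '#define VAL 42', the '42' token
// keeps its spelling location inside the #define but gains an expansion
// location at the point where 'VAL' was written, so diagnostics can refer to
// either side of the mapping.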
SourceLocation Loc = SourceMgr.createExpansionLoc(Identifier.getLocation(), ExpandLoc, ExpansionEnd, Identifier.getLength()); Identifier.setLocation(Loc); // If this is a disabled macro or #define X X, we must mark the result as // unexpandable. if (IdentifierInfo *NewII = Identifier.getIdentifierInfo()) { if (MacroInfo *NewMI = getMacroInfo(NewII)) if (!NewMI->isEnabled() || NewMI == MI) { Identifier.setFlag(Token::DisableExpand); // Don't warn for "#define X X" like "#define bool bool" from // stdbool.h. if (NewMI != MI || MI->isFunctionLike()) Diag(Identifier, diag::pp_disabled_macro_expansion); } } // Since this is not an identifier token, it can't be macro expanded, so // we're done. ++NumFastMacroExpanded; return true; } // Start expanding the macro. EnterMacro(Identifier, ExpansionEnd, MI, Args); return false; } enum Bracket { Brace, Paren }; /// CheckMatchedBrackets - Returns true if the braces and parentheses in the /// token vector are properly nested. static bool CheckMatchedBrackets(const SmallVectorImpl<Token> &Tokens) { SmallVector<Bracket, 8> Brackets; for (SmallVectorImpl<Token>::const_iterator I = Tokens.begin(), E = Tokens.end(); I != E; ++I) { if (I->is(tok::l_paren)) { Brackets.push_back(Paren); } else if (I->is(tok::r_paren)) { if (Brackets.empty() || Brackets.back() == Brace) return false; Brackets.pop_back(); } else if (I->is(tok::l_brace)) { Brackets.push_back(Brace); } else if (I->is(tok::r_brace)) { if (Brackets.empty() || Brackets.back() == Paren) return false; Brackets.pop_back(); } } return Brackets.empty(); } /// GenerateNewArgTokens - Returns true if OldTokens can be converted to a new /// vector of tokens in NewTokens. The new number of arguments will be placed /// in NumArgs and the ranges which need to be surrounded in parentheses will be /// in ParenHints. /// Returns false if the token stream cannot be changed. If this is because /// of an initializer list starting a macro argument, the range of those /// initializer lists will be placed in InitLists. static bool GenerateNewArgTokens(Preprocessor &PP, SmallVectorImpl<Token> &OldTokens, SmallVectorImpl<Token> &NewTokens, unsigned &NumArgs, SmallVectorImpl<SourceRange> &ParenHints, SmallVectorImpl<SourceRange> &InitLists) { if (!CheckMatchedBrackets(OldTokens)) return false; // Once it is known that the brackets are matched, only a simple count of the // braces is needed. unsigned Braces = 0; // First token of a new macro argument. SmallVectorImpl<Token>::iterator ArgStartIterator = OldTokens.begin(); // First closing brace in a new macro argument. Used to generate // SourceRanges for InitLists. SmallVectorImpl<Token>::iterator ClosingBrace = OldTokens.end(); NumArgs = 0; Token TempToken; // Set to true when a macro separator token is found inside a braced list. // If true, the fixed argument spans multiple old arguments and ParenHints // will be updated. bool FoundSeparatorToken = false; for (SmallVectorImpl<Token>::iterator I = OldTokens.begin(), E = OldTokens.end(); I != E; ++I) { if (I->is(tok::l_brace)) { ++Braces; } else if (I->is(tok::r_brace)) { --Braces; if (Braces == 0 && ClosingBrace == E && FoundSeparatorToken) ClosingBrace = I; } else if (I->is(tok::eof)) { // EOF token is used to separate macro arguments if (Braces != 0) { // Assume comma separator is actually braced list separator and change // it back to a comma. FoundSeparatorToken = true; I->setKind(tok::comma); I->setLength(1); } else { // Braces == 0 // Separator token still separates arguments. ++NumArgs; // If the argument starts with a brace, it can't be fixed with // parentheses. A different diagnostic will be given.
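// Illustrative case (hypothetical macro): for '#define M(a, b)' invoked as
// 'M({1, 2})', the comma splits the braced list across two arguments; since
// the argument starts with '{', a parenthesis fix-it would not yield valid
// code, so the initializer-list range is reported instead.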
if (FoundSeparatorToken && ArgStartIterator->is(tok::l_brace)) { InitLists.push_back( SourceRange(ArgStartIterator->getLocation(), PP.getLocForEndOfToken(ClosingBrace->getLocation()))); ClosingBrace = E; } // Add left paren if (FoundSeparatorToken) { TempToken.startToken(); TempToken.setKind(tok::l_paren); TempToken.setLocation(ArgStartIterator->getLocation()); TempToken.setLength(0); NewTokens.push_back(TempToken); } // Copy over argument tokens NewTokens.insert(NewTokens.end(), ArgStartIterator, I); // Add right paren and store the paren locations in ParenHints if (FoundSeparatorToken) { SourceLocation Loc = PP.getLocForEndOfToken((I - 1)->getLocation()); TempToken.startToken(); TempToken.setKind(tok::r_paren); TempToken.setLocation(Loc); TempToken.setLength(0); NewTokens.push_back(TempToken); ParenHints.push_back(SourceRange(ArgStartIterator->getLocation(), Loc)); } // Copy separator token NewTokens.push_back(*I); // Reset values ArgStartIterator = I + 1; FoundSeparatorToken = false; } } } return !ParenHints.empty() && InitLists.empty(); } /// ReadFunctionLikeMacroArgs - After reading "MACRO" and knowing that the next /// token is the '(' of the macro, this method is invoked to read all of the /// actual arguments specified for the macro invocation. This returns null on /// error. MacroArgs *Preprocessor::ReadFunctionLikeMacroArgs(Token &MacroName, MacroInfo *MI, SourceLocation &MacroEnd) { // The number of fixed arguments to parse. unsigned NumFixedArgsLeft = MI->getNumArgs(); bool isVariadic = MI->isVariadic(); // Outer loop, while there are more arguments, keep reading them. Token Tok; // Read arguments as unexpanded tokens. This avoids issues, e.g., where // an argument value in a macro could expand to ',' or '(' or ')'. LexUnexpandedToken(Tok); assert(Tok.is(tok::l_paren) && "Error computing l-paren-ness?"); // ArgTokens - Build up a list of tokens that make up each argument. Each // argument is separated by an EOF token. Use a SmallVector so we can avoid // heap allocations in the common case. SmallVector ArgTokens; bool ContainsCodeCompletionTok = false; bool FoundElidedComma = false; SourceLocation TooManyArgsLoc; unsigned NumActuals = 0; while (Tok.isNot(tok::r_paren)) { if (ContainsCodeCompletionTok && Tok.isOneOf(tok::eof, tok::eod)) break; assert(Tok.isOneOf(tok::l_paren, tok::comma) && "only expect argument separators here"); size_t ArgTokenStart = ArgTokens.size(); SourceLocation ArgStartLoc = Tok.getLocation(); // C99 6.10.3p11: Keep track of the number of l_parens we have seen. Note // that we already consumed the first one. unsigned NumParens = 0; while (true) { // Read arguments as unexpanded tokens. This avoids issues, e.g., where // an argument value in a macro could expand to ',' or '(' or ')'. LexUnexpandedToken(Tok); if (Tok.isOneOf(tok::eof, tok::eod)) { // "#if f(" & "#if f(\n" if (!ContainsCodeCompletionTok) { Diag(MacroName, diag::err_unterm_macro_invoc); Diag(MI->getDefinitionLoc(), diag::note_macro_here) << MacroName.getIdentifierInfo(); // Do not lose the EOF/EOD. Return it to the client. MacroName = Tok; return nullptr; } // Do not lose the EOF/EOD. auto Toks = llvm::make_unique(1); Toks[0] = Tok; EnterTokenStream(std::move(Toks), 1, true); break; } else if (Tok.is(tok::r_paren)) { // If we found the ) token, the macro arg list is done. 
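// Illustrative nesting (hypothetical invocation): in 'F((a, b), c)' the
// inner '(' bumps NumParens, so the comma between 'a' and 'b' is not an
// argument separator; only a ')' at depth zero ends the argument list.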
if (NumParens-- == 0) { MacroEnd = Tok.getLocation(); if (!ArgTokens.empty() && ArgTokens.back().commaAfterElided()) { FoundElidedComma = true; } break; } } else if (Tok.is(tok::l_paren)) { ++NumParens; } else if (Tok.is(tok::comma) && NumParens == 0 && !(Tok.getFlags() & Token::IgnoredComma)) { // In Microsoft-compatibility mode, single commas from nested macro // expansions should not be considered as argument separators. We test // for this with the IgnoredComma token flag above. // Comma ends this argument if there are more fixed arguments expected. // However, if this is a variadic macro, and this is part of the // variadic part, then the comma is just an argument token. if (!isVariadic) break; if (NumFixedArgsLeft > 1) break; } else if (Tok.is(tok::comment) && !KeepMacroComments) { // If this is a comment token in the argument list and we're just in // -C mode (not -CC mode), discard the comment. continue; } else if (!Tok.isAnnotation() && Tok.getIdentifierInfo() != nullptr) { // Reading macro arguments can cause macros that we are currently // expanding from to be popped off the expansion stack. Doing so causes // them to be reenabled for expansion. Here we record whether any // identifiers we lex as macro arguments correspond to disabled macros. // If so, we mark the token as noexpand. This is a subtle aspect of // C99 6.10.3.4p2. if (MacroInfo *MI = getMacroInfo(Tok.getIdentifierInfo())) if (!MI->isEnabled()) Tok.setFlag(Token::DisableExpand); } else if (Tok.is(tok::code_completion)) { ContainsCodeCompletionTok = true; if (CodeComplete) CodeComplete->CodeCompleteMacroArgument(MacroName.getIdentifierInfo(), MI, NumActuals); // Don't mark that we reached the code-completion point because the // parser is going to handle the token and there will be another // code-completion callback. } ArgTokens.push_back(Tok); } // If this was an empty argument list foo(), don't add this as an empty // argument. if (ArgTokens.empty() && Tok.getKind() == tok::r_paren) break; // If this is not a variadic macro, and too many args were specified, emit // an error. if (!isVariadic && NumFixedArgsLeft == 0 && TooManyArgsLoc.isInvalid()) { if (ArgTokens.size() != ArgTokenStart) TooManyArgsLoc = ArgTokens[ArgTokenStart].getLocation(); else TooManyArgsLoc = ArgStartLoc; } // Empty arguments are standard in C99 and C++11, and are supported as an // extension in other modes. if (ArgTokens.size() == ArgTokenStart && !LangOpts.C99) Diag(Tok, LangOpts.CPlusPlus11 ? diag::warn_cxx98_compat_empty_fnmacro_arg : diag::ext_empty_fnmacro_arg); // Add a marker EOF token to the end of the token list for this argument. Token EOFTok; EOFTok.startToken(); EOFTok.setKind(tok::eof); EOFTok.setLocation(Tok.getLocation()); EOFTok.setLength(0); ArgTokens.push_back(EOFTok); ++NumActuals; if (!ContainsCodeCompletionTok && NumFixedArgsLeft != 0) --NumFixedArgsLeft; } // Okay, we found the r_paren. Check to see if we parsed too few // arguments. unsigned MinArgsExpected = MI->getNumArgs(); // If this is not a variadic macro, and too many args were specified, emit // an error. if (!isVariadic && NumActuals > MinArgsExpected && !ContainsCodeCompletionTok) { // Emit the diagnostic at the macro name in case there is a missing ). // Emitting it at the , could be far away from the macro name.
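// Illustrative trigger (hypothetical macro): '#define ONE(x) x' invoked as
// 'ONE(1, 2)' reaches this path; TooManyArgsLoc was recorded above at the
// first surplus argument token so the error points somewhere useful.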
Diag(TooManyArgsLoc, diag::err_too_many_args_in_macro_invoc); Diag(MI->getDefinitionLoc(), diag::note_macro_here) << MacroName.getIdentifierInfo(); // Commas from braced initializer lists will be treated as argument // separators inside macros. Attempt to correct for this with parentheses. // TODO: See if this can be generalized to angle brackets for templates // inside macro arguments. SmallVector<Token, 4> FixedArgTokens; unsigned FixedNumArgs = 0; SmallVector<SourceRange, 4> ParenHints, InitLists; if (!GenerateNewArgTokens(*this, ArgTokens, FixedArgTokens, FixedNumArgs, ParenHints, InitLists)) { if (!InitLists.empty()) { DiagnosticBuilder DB = Diag(MacroName, diag::note_init_list_at_beginning_of_macro_argument); for (SourceRange Range : InitLists) DB << Range; } return nullptr; } if (FixedNumArgs != MinArgsExpected) return nullptr; DiagnosticBuilder DB = Diag(MacroName, diag::note_suggest_parens_for_macro); for (SourceRange ParenLocation : ParenHints) { DB << FixItHint::CreateInsertion(ParenLocation.getBegin(), "("); DB << FixItHint::CreateInsertion(ParenLocation.getEnd(), ")"); } ArgTokens.swap(FixedArgTokens); NumActuals = FixedNumArgs; } // See MacroArgs instance var for description of this. bool isVarargsElided = false; if (ContainsCodeCompletionTok) { // Recover from not-fully-formed macro invocation during code-completion. Token EOFTok; EOFTok.startToken(); EOFTok.setKind(tok::eof); EOFTok.setLocation(Tok.getLocation()); EOFTok.setLength(0); for (; NumActuals < MinArgsExpected; ++NumActuals) ArgTokens.push_back(EOFTok); } if (NumActuals < MinArgsExpected) { // There are several cases where too few arguments is ok, handle them now. if (NumActuals == 0 && MinArgsExpected == 1) { // #define A(X) or #define A(...) ---> A() // If there is exactly one argument, and that argument is missing, // then we have an empty "()" argument list. This is fine, even if // the macro expects one argument (the argument is just empty). isVarargsElided = MI->isVariadic(); } else if ((FoundElidedComma || MI->isVariadic()) && (NumActuals+1 == MinArgsExpected || // A(x, ...) -> A(X) (NumActuals == 0 && MinArgsExpected == 2))) { // A(x,...) -> A() // Varargs where the named vararg parameter is missing: OK as extension. // #define A(x, ...) // A("blah") // // If the macro contains the comma pasting extension, the diagnostic // is suppressed; we know we'll get another diagnostic later. if (!MI->hasCommaPasting()) { Diag(Tok, diag::ext_missing_varargs_arg); Diag(MI->getDefinitionLoc(), diag::note_macro_here) << MacroName.getIdentifierInfo(); } // Remember this occurred, allowing us to elide the comma when used for // cases like: // #define A(x, foo...) blah(a, ## foo) // #define B(x, ...) blah(a, ## __VA_ARGS__) // #define C(...) blah(a, ## __VA_ARGS__) // A(x) B(x) C() isVarargsElided = true; } else if (!ContainsCodeCompletionTok) { // Otherwise, emit the error. Diag(Tok, diag::err_too_few_args_in_macro_invoc); Diag(MI->getDefinitionLoc(), diag::note_macro_here) << MacroName.getIdentifierInfo(); return nullptr; } // Add a marker EOF token to the end of the token list for this argument. SourceLocation EndLoc = Tok.getLocation(); Tok.startToken(); Tok.setKind(tok::eof); Tok.setLocation(EndLoc); Tok.setLength(0); ArgTokens.push_back(Tok); // If we expect two arguments, add both as empty. if (NumActuals == 0 && MinArgsExpected == 2) ArgTokens.push_back(Tok); } else if (NumActuals > MinArgsExpected && !MI->isVariadic() && !ContainsCodeCompletionTok) { // Emit the diagnostic at the macro name in case there is a missing ).
// Emitting it at the , could be far away from the macro name. Diag(MacroName, diag::err_too_many_args_in_macro_invoc); Diag(MI->getDefinitionLoc(), diag::note_macro_here) << MacroName.getIdentifierInfo(); return nullptr; } return MacroArgs::create(MI, ArgTokens, isVarargsElided, *this); } /// \brief Keeps macro expanded tokens for TokenLexers. // /// Works like a stack; a TokenLexer adds the macro expanded tokens that it is /// going to lex into the cache, and when it finishes the tokens are removed /// from the end of the cache. Token *Preprocessor::cacheMacroExpandedTokens(TokenLexer *tokLexer, ArrayRef<Token> tokens) { assert(tokLexer); if (tokens.empty()) return nullptr; size_t newIndex = MacroExpandedTokens.size(); bool cacheNeedsToGrow = tokens.size() > MacroExpandedTokens.capacity()-MacroExpandedTokens.size(); MacroExpandedTokens.append(tokens.begin(), tokens.end()); if (cacheNeedsToGrow) { // Go through all the TokenLexers whose 'Tokens' pointer points in the // buffer and update the pointers to the (potential) new buffer array. for (const auto &Lexer : MacroExpandingLexersStack) { TokenLexer *prevLexer; size_t tokIndex; std::tie(prevLexer, tokIndex) = Lexer; prevLexer->Tokens = MacroExpandedTokens.data() + tokIndex; } } MacroExpandingLexersStack.push_back(std::make_pair(tokLexer, newIndex)); return MacroExpandedTokens.data() + newIndex; } void Preprocessor::removeCachedMacroExpandedTokensOfLastLexer() { assert(!MacroExpandingLexersStack.empty()); size_t tokIndex = MacroExpandingLexersStack.back().second; assert(tokIndex < MacroExpandedTokens.size()); // Pop the cached macro expanded tokens from the end. MacroExpandedTokens.resize(tokIndex); MacroExpandingLexersStack.pop_back(); } /// ComputeDATE_TIME - Compute the current time, enter it into the specified /// scratch buffer, then return DATELoc/TIMELoc locations with the position of /// the identifier tokens inserted. static void ComputeDATE_TIME(SourceLocation &DATELoc, SourceLocation &TIMELoc, Preprocessor &PP) { time_t TT = time(nullptr); struct tm *TM = localtime(&TT); static const char * const Months[] = { "Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec" }; { SmallString<32> TmpBuffer; llvm::raw_svector_ostream TmpStream(TmpBuffer); TmpStream << llvm::format("\"%s %2d %4d\"", Months[TM->tm_mon], TM->tm_mday, TM->tm_year + 1900); Token TmpTok; TmpTok.startToken(); PP.CreateString(TmpStream.str(), TmpTok); DATELoc = TmpTok.getLocation(); } { SmallString<32> TmpBuffer; llvm::raw_svector_ostream TmpStream(TmpBuffer); TmpStream << llvm::format("\"%02d:%02d:%02d\"", TM->tm_hour, TM->tm_min, TM->tm_sec); Token TmpTok; TmpTok.startToken(); PP.CreateString(TmpStream.str(), TmpTok); TIMELoc = TmpTok.getLocation(); } } /// HasFeature - Return true if we recognize and implement the feature /// specified by the identifier as a standard language feature. static bool HasFeature(const Preprocessor &PP, StringRef Feature) { const LangOptions &LangOpts = PP.getLangOpts(); // Normalize the feature name, __foo__ becomes foo.
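// Example of the normalization (illustrative): "__cxx_rvalue_references__"
// is reduced to "cxx_rvalue_references" before the StringSwitch below, so
// the double-underscore spelling cannot collide with a user macro of the
// plain name.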
if (Feature.startswith("__") && Feature.endswith("__") && Feature.size() >= 4) Feature = Feature.substr(2, Feature.size() - 4); return llvm::StringSwitch(Feature) .Case("address_sanitizer", LangOpts.Sanitize.hasOneOf(SanitizerKind::Address | SanitizerKind::KernelAddress)) .Case("assume_nonnull", true) .Case("attribute_analyzer_noreturn", true) .Case("attribute_availability", true) .Case("attribute_availability_with_message", true) .Case("attribute_availability_app_extension", true) .Case("attribute_availability_with_version_underscores", true) .Case("attribute_availability_tvos", true) .Case("attribute_availability_watchos", true) .Case("attribute_availability_with_strict", true) .Case("attribute_availability_with_replacement", true) .Case("attribute_availability_in_templates", true) .Case("attribute_cf_returns_not_retained", true) .Case("attribute_cf_returns_retained", true) .Case("attribute_cf_returns_on_parameters", true) .Case("attribute_deprecated_with_message", true) .Case("attribute_deprecated_with_replacement", true) .Case("attribute_ext_vector_type", true) .Case("attribute_ns_returns_not_retained", true) .Case("attribute_ns_returns_retained", true) .Case("attribute_ns_consumes_self", true) .Case("attribute_ns_consumed", true) .Case("attribute_cf_consumed", true) .Case("attribute_objc_ivar_unused", true) .Case("attribute_objc_method_family", true) .Case("attribute_overloadable", true) .Case("attribute_unavailable_with_message", true) .Case("attribute_unused_on_fields", true) .Case("blocks", LangOpts.Blocks) .Case("c_thread_safety_attributes", true) .Case("cxx_exceptions", LangOpts.CXXExceptions) .Case("cxx_rtti", LangOpts.RTTI && LangOpts.RTTIData) .Case("enumerator_attributes", true) .Case("nullability", true) .Case("nullability_on_arrays", true) .Case("memory_sanitizer", LangOpts.Sanitize.has(SanitizerKind::Memory)) .Case("thread_sanitizer", LangOpts.Sanitize.has(SanitizerKind::Thread)) .Case("dataflow_sanitizer", LangOpts.Sanitize.has(SanitizerKind::DataFlow)) .Case("efficiency_sanitizer", LangOpts.Sanitize.hasOneOf(SanitizerKind::Efficiency)) // Objective-C features .Case("objc_arr", LangOpts.ObjCAutoRefCount) // FIXME: REMOVE? .Case("objc_arc", LangOpts.ObjCAutoRefCount) .Case("objc_arc_weak", LangOpts.ObjCWeak) .Case("objc_default_synthesize_properties", LangOpts.ObjC2) .Case("objc_fixed_enum", LangOpts.ObjC2) .Case("objc_instancetype", LangOpts.ObjC2) .Case("objc_kindof", LangOpts.ObjC2) .Case("objc_modules", LangOpts.ObjC2 && LangOpts.Modules) .Case("objc_nonfragile_abi", LangOpts.ObjCRuntime.isNonFragile()) .Case("objc_property_explicit_atomic", true) // Does clang support explicit "atomic" keyword? 
.Case("objc_protocol_qualifier_mangling", true) .Case("objc_weak_class", LangOpts.ObjCRuntime.hasWeakClassImport()) .Case("ownership_holds", true) .Case("ownership_returns", true) .Case("ownership_takes", true) .Case("objc_bool", true) .Case("objc_subscripting", LangOpts.ObjCRuntime.isNonFragile()) .Case("objc_array_literals", LangOpts.ObjC2) .Case("objc_dictionary_literals", LangOpts.ObjC2) .Case("objc_boxed_expressions", LangOpts.ObjC2) .Case("objc_boxed_nsvalue_expressions", LangOpts.ObjC2) .Case("arc_cf_code_audited", true) .Case("objc_bridge_id", true) .Case("objc_bridge_id_on_typedefs", true) .Case("objc_generics", LangOpts.ObjC2) .Case("objc_generics_variance", LangOpts.ObjC2) .Case("objc_class_property", LangOpts.ObjC2) // C11 features .Case("c_alignas", LangOpts.C11) .Case("c_alignof", LangOpts.C11) .Case("c_atomic", LangOpts.C11) .Case("c_generic_selections", LangOpts.C11) .Case("c_static_assert", LangOpts.C11) .Case("c_thread_local", LangOpts.C11 && PP.getTargetInfo().isTLSSupported()) // C++11 features .Case("cxx_access_control_sfinae", LangOpts.CPlusPlus11) .Case("cxx_alias_templates", LangOpts.CPlusPlus11) .Case("cxx_alignas", LangOpts.CPlusPlus11) .Case("cxx_alignof", LangOpts.CPlusPlus11) .Case("cxx_atomic", LangOpts.CPlusPlus11) .Case("cxx_attributes", LangOpts.CPlusPlus11) .Case("cxx_auto_type", LangOpts.CPlusPlus11) .Case("cxx_constexpr", LangOpts.CPlusPlus11) + .Case("cxx_constexpr_string_builtins", LangOpts.CPlusPlus11) .Case("cxx_decltype", LangOpts.CPlusPlus11) .Case("cxx_decltype_incomplete_return_types", LangOpts.CPlusPlus11) .Case("cxx_default_function_template_args", LangOpts.CPlusPlus11) .Case("cxx_defaulted_functions", LangOpts.CPlusPlus11) .Case("cxx_delegating_constructors", LangOpts.CPlusPlus11) .Case("cxx_deleted_functions", LangOpts.CPlusPlus11) .Case("cxx_explicit_conversions", LangOpts.CPlusPlus11) .Case("cxx_generalized_initializers", LangOpts.CPlusPlus11) .Case("cxx_implicit_moves", LangOpts.CPlusPlus11) .Case("cxx_inheriting_constructors", LangOpts.CPlusPlus11) .Case("cxx_inline_namespaces", LangOpts.CPlusPlus11) .Case("cxx_lambdas", LangOpts.CPlusPlus11) .Case("cxx_local_type_template_args", LangOpts.CPlusPlus11) .Case("cxx_nonstatic_member_init", LangOpts.CPlusPlus11) .Case("cxx_noexcept", LangOpts.CPlusPlus11) .Case("cxx_nullptr", LangOpts.CPlusPlus11) .Case("cxx_override_control", LangOpts.CPlusPlus11) .Case("cxx_range_for", LangOpts.CPlusPlus11) .Case("cxx_raw_string_literals", LangOpts.CPlusPlus11) .Case("cxx_reference_qualified_functions", LangOpts.CPlusPlus11) .Case("cxx_rvalue_references", LangOpts.CPlusPlus11) .Case("cxx_strong_enums", LangOpts.CPlusPlus11) .Case("cxx_static_assert", LangOpts.CPlusPlus11) .Case("cxx_thread_local", LangOpts.CPlusPlus11 && PP.getTargetInfo().isTLSSupported()) .Case("cxx_trailing_return", LangOpts.CPlusPlus11) .Case("cxx_unicode_literals", LangOpts.CPlusPlus11) .Case("cxx_unrestricted_unions", LangOpts.CPlusPlus11) .Case("cxx_user_literals", LangOpts.CPlusPlus11) .Case("cxx_variadic_templates", LangOpts.CPlusPlus11) // C++14 features .Case("cxx_aggregate_nsdmi", LangOpts.CPlusPlus14) .Case("cxx_binary_literals", LangOpts.CPlusPlus14) .Case("cxx_contextual_conversions", LangOpts.CPlusPlus14) .Case("cxx_decltype_auto", LangOpts.CPlusPlus14) .Case("cxx_generic_lambdas", LangOpts.CPlusPlus14) .Case("cxx_init_captures", LangOpts.CPlusPlus14) .Case("cxx_relaxed_constexpr", LangOpts.CPlusPlus14) .Case("cxx_return_type_deduction", LangOpts.CPlusPlus14) .Case("cxx_variable_templates", LangOpts.CPlusPlus14) // NOTE: For 
features covered by SD-6, it is preferable to provide *only* // the SD-6 macro and not a __has_feature check. // C++ TSes //.Case("cxx_runtime_arrays", LangOpts.CPlusPlusTSArrays) //.Case("cxx_concepts", LangOpts.CPlusPlusTSConcepts) // FIXME: Should this be __has_feature or __has_extension? //.Case("raw_invocation_type", LangOpts.CPlusPlus) // Type traits // N.B. Additional type traits should not be added to the following list. // Instead, they should be detected by has_extension. .Case("has_nothrow_assign", LangOpts.CPlusPlus) .Case("has_nothrow_copy", LangOpts.CPlusPlus) .Case("has_nothrow_constructor", LangOpts.CPlusPlus) .Case("has_trivial_assign", LangOpts.CPlusPlus) .Case("has_trivial_copy", LangOpts.CPlusPlus) .Case("has_trivial_constructor", LangOpts.CPlusPlus) .Case("has_trivial_destructor", LangOpts.CPlusPlus) .Case("has_virtual_destructor", LangOpts.CPlusPlus) .Case("is_abstract", LangOpts.CPlusPlus) .Case("is_base_of", LangOpts.CPlusPlus) .Case("is_class", LangOpts.CPlusPlus) .Case("is_constructible", LangOpts.CPlusPlus) .Case("is_convertible_to", LangOpts.CPlusPlus) .Case("is_empty", LangOpts.CPlusPlus) .Case("is_enum", LangOpts.CPlusPlus) .Case("is_final", LangOpts.CPlusPlus) .Case("is_literal", LangOpts.CPlusPlus) .Case("is_standard_layout", LangOpts.CPlusPlus) .Case("is_pod", LangOpts.CPlusPlus) .Case("is_polymorphic", LangOpts.CPlusPlus) .Case("is_sealed", LangOpts.CPlusPlus && LangOpts.MicrosoftExt) .Case("is_trivial", LangOpts.CPlusPlus) .Case("is_trivially_assignable", LangOpts.CPlusPlus) .Case("is_trivially_constructible", LangOpts.CPlusPlus) .Case("is_trivially_copyable", LangOpts.CPlusPlus) .Case("is_union", LangOpts.CPlusPlus) .Case("modules", LangOpts.Modules) .Case("safe_stack", LangOpts.Sanitize.has(SanitizerKind::SafeStack)) .Case("tls", PP.getTargetInfo().isTLSSupported()) .Case("underlying_type", LangOpts.CPlusPlus) .Default(false); } /// HasExtension - Return true if we recognize and implement the feature /// specified by the identifier, either as an extension or a standard language /// feature. static bool HasExtension(const Preprocessor &PP, StringRef Extension) { if (HasFeature(PP, Extension)) return true; // If the use of an extension results in an error diagnostic, extensions are // effectively unavailable, so just return false here. if (PP.getDiagnostics().getExtensionHandlingBehavior() >= diag::Severity::Error) return false; const LangOptions &LangOpts = PP.getLangOpts(); // Normalize the extension name, __foo__ becomes foo. if (Extension.startswith("__") && Extension.endswith("__") && Extension.size() >= 4) Extension = Extension.substr(2, Extension.size() - 4); // Because we inherit the feature list from HasFeature, this string switch // must be less restrictive than HasFeature's. return llvm::StringSwitch(Extension) // C11 features supported by other languages as extensions. .Case("c_alignas", true) .Case("c_alignof", true) .Case("c_atomic", true) .Case("c_generic_selections", true) .Case("c_static_assert", true) .Case("c_thread_local", PP.getTargetInfo().isTLSSupported()) // C++11 features supported by other languages as extensions. 
.Case("cxx_atomic", LangOpts.CPlusPlus) .Case("cxx_deleted_functions", LangOpts.CPlusPlus) .Case("cxx_explicit_conversions", LangOpts.CPlusPlus) .Case("cxx_inline_namespaces", LangOpts.CPlusPlus) .Case("cxx_local_type_template_args", LangOpts.CPlusPlus) .Case("cxx_nonstatic_member_init", LangOpts.CPlusPlus) .Case("cxx_override_control", LangOpts.CPlusPlus) .Case("cxx_range_for", LangOpts.CPlusPlus) .Case("cxx_reference_qualified_functions", LangOpts.CPlusPlus) .Case("cxx_rvalue_references", LangOpts.CPlusPlus) .Case("cxx_variadic_templates", LangOpts.CPlusPlus) // C++14 features supported by other languages as extensions. .Case("cxx_binary_literals", true) .Case("cxx_init_captures", LangOpts.CPlusPlus11) .Case("cxx_variable_templates", LangOpts.CPlusPlus) .Default(false); } /// EvaluateHasIncludeCommon - Process a '__has_include("path")' /// or '__has_include_next("path")' expression. /// Returns true if successful. static bool EvaluateHasIncludeCommon(Token &Tok, IdentifierInfo *II, Preprocessor &PP, const DirectoryLookup *LookupFrom, const FileEntry *LookupFromFile) { // Save the location of the current token. If a '(' is later found, use // that location. If not, use the end of this location instead. SourceLocation LParenLoc = Tok.getLocation(); // These expressions are only allowed within a preprocessor directive. if (!PP.isParsingIfOrElifDirective()) { PP.Diag(LParenLoc, diag::err_pp_directive_required) << II->getName(); // Return a valid identifier token. assert(Tok.is(tok::identifier)); Tok.setIdentifierInfo(II); return false; } // Get '('. PP.LexNonComment(Tok); // Ensure we have a '('. if (Tok.isNot(tok::l_paren)) { // No '(', use end of last token. LParenLoc = PP.getLocForEndOfToken(LParenLoc); PP.Diag(LParenLoc, diag::err_pp_expected_after) << II << tok::l_paren; // If the next token looks like a filename or the start of one, // assume it is and process it as such. if (!Tok.is(tok::angle_string_literal) && !Tok.is(tok::string_literal) && !Tok.is(tok::less)) return false; } else { // Save '(' location for possible missing ')' message. LParenLoc = Tok.getLocation(); if (PP.getCurrentLexer()) { // Get the file name. PP.getCurrentLexer()->LexIncludeFilename(Tok); } else { // We're in a macro, so we can't use LexIncludeFilename; just // grab the next token. PP.Lex(Tok); } } // Reserve a buffer to get the spelling. SmallString<128> FilenameBuffer; StringRef Filename; SourceLocation EndLoc; switch (Tok.getKind()) { case tok::eod: // If the token kind is EOD, the error has already been diagnosed. return false; case tok::angle_string_literal: case tok::string_literal: { bool Invalid = false; Filename = PP.getSpelling(Tok, FilenameBuffer, &Invalid); if (Invalid) return false; break; } case tok::less: // This could be a file coming from a macro expansion. In this // case, glue the tokens together into FilenameBuffer and interpret those. FilenameBuffer.push_back('<'); if (PP.ConcatenateIncludeName(FilenameBuffer, EndLoc)) { // Let the caller know a was found by changing the Token kind. Tok.setKind(tok::eod); return false; // Found but no ">"? Diagnostic already emitted. } Filename = FilenameBuffer; break; default: PP.Diag(Tok.getLocation(), diag::err_pp_expects_filename); return false; } SourceLocation FilenameLoc = Tok.getLocation(); // Get ')'. PP.LexNonComment(Tok); // Ensure we have a trailing ). 
if (Tok.isNot(tok::r_paren)) { PP.Diag(PP.getLocForEndOfToken(FilenameLoc), diag::err_pp_expected_after) << II << tok::r_paren; PP.Diag(LParenLoc, diag::note_matching) << tok::l_paren; return false; } bool isAngled = PP.GetIncludeFilenameSpelling(Tok.getLocation(), Filename); // If GetIncludeFilenameSpelling set the start ptr to null, there was an // error. if (Filename.empty()) return false; // Search include directories. const DirectoryLookup *CurDir; const FileEntry *File = PP.LookupFile(FilenameLoc, Filename, isAngled, LookupFrom, LookupFromFile, CurDir, nullptr, nullptr, nullptr); // Get the result value. A result of true means the file exists. return File != nullptr; } /// EvaluateHasInclude - Process a '__has_include("path")' expression. /// Returns true if successful. static bool EvaluateHasInclude(Token &Tok, IdentifierInfo *II, Preprocessor &PP) { return EvaluateHasIncludeCommon(Tok, II, PP, nullptr, nullptr); } /// EvaluateHasIncludeNext - Process '__has_include_next("path")' expression. /// Returns true if successful. static bool EvaluateHasIncludeNext(Token &Tok, IdentifierInfo *II, Preprocessor &PP) { // __has_include_next is like __has_include, except that we start // searching after the current found directory. If we can't do this, // issue a diagnostic. // FIXME: Factor out duplication with // Preprocessor::HandleIncludeNextDirective. const DirectoryLookup *Lookup = PP.GetCurDirLookup(); const FileEntry *LookupFromFile = nullptr; if (PP.isInPrimaryFile() && PP.getLangOpts().IsHeaderFile) { // If the main file is a header, then it's either for PCH/AST generation, // or libclang opened it. Either way, handle it as a normal include below // and do not complain about __has_include_next. } else if (PP.isInPrimaryFile()) { Lookup = nullptr; PP.Diag(Tok, diag::pp_include_next_in_primary); } else if (PP.getCurrentSubmodule()) { // Start looking up in the directory *after* the one in which the current // file would be found, if any. assert(PP.getCurrentLexer() && "#include_next directive in macro?"); LookupFromFile = PP.getCurrentLexer()->getFileEntry(); Lookup = nullptr; } else if (!Lookup) { PP.Diag(Tok, diag::pp_include_next_absolute_path); } else { // Start looking up in the next directory. ++Lookup; } return EvaluateHasIncludeCommon(Tok, II, PP, Lookup, LookupFromFile); } /// \brief Process single-argument builtin feature-like macros that return /// integer values. static void EvaluateFeatureLikeBuiltinMacro(llvm::raw_svector_ostream& OS, Token &Tok, IdentifierInfo *II, Preprocessor &PP, llvm::function_ref< int(Token &Tok, bool &HasLexedNextTok)> Op) { // Parse the initial '('. PP.LexUnexpandedToken(Tok); if (Tok.isNot(tok::l_paren)) { PP.Diag(Tok.getLocation(), diag::err_pp_expected_after) << II << tok::l_paren; // Provide a dummy '0' value on output stream to elide further errors. if (!Tok.isOneOf(tok::eof, tok::eod)) { OS << 0; Tok.setKind(tok::numeric_constant); } return; } unsigned ParenDepth = 1; SourceLocation LParenLoc = Tok.getLocation(); llvm::Optional Result; Token ResultTok; bool SuppressDiagnostic = false; while (true) { // Parse next token. PP.LexUnexpandedToken(Tok); already_lexed: switch (Tok.getKind()) { case tok::eof: case tok::eod: // Don't provide even a dummy value if the eod or eof marker is // reached. Simply provide a diagnostic. 
PP.Diag(Tok.getLocation(), diag::err_unterm_macro_invoc); return; case tok::comma: if (!SuppressDiagnostic) { PP.Diag(Tok.getLocation(), diag::err_too_many_args_in_macro_invoc); SuppressDiagnostic = true; } continue; case tok::l_paren: ++ParenDepth; if (Result.hasValue()) break; if (!SuppressDiagnostic) { PP.Diag(Tok.getLocation(), diag::err_pp_nested_paren) << II; SuppressDiagnostic = true; } continue; case tok::r_paren: if (--ParenDepth > 0) continue; // The last ')' has been reached; return the value if one found or // a diagnostic and a dummy value. if (Result.hasValue()) OS << Result.getValue(); else { OS << 0; if (!SuppressDiagnostic) PP.Diag(Tok.getLocation(), diag::err_too_few_args_in_macro_invoc); } Tok.setKind(tok::numeric_constant); return; default: { // Parse the macro argument, if one not found so far. if (Result.hasValue()) break; bool HasLexedNextToken = false; Result = Op(Tok, HasLexedNextToken); ResultTok = Tok; if (HasLexedNextToken) goto already_lexed; continue; } } // Diagnose missing ')'. if (!SuppressDiagnostic) { if (auto Diag = PP.Diag(Tok.getLocation(), diag::err_pp_expected_after)) { if (IdentifierInfo *LastII = ResultTok.getIdentifierInfo()) Diag << LastII; else Diag << ResultTok.getKind(); Diag << tok::r_paren << ResultTok.getLocation(); } PP.Diag(LParenLoc, diag::note_matching) << tok::l_paren; SuppressDiagnostic = true; } } } /// \brief Helper function to return the IdentifierInfo structure of a Token /// or generate a diagnostic if none available. static IdentifierInfo *ExpectFeatureIdentifierInfo(Token &Tok, Preprocessor &PP, signed DiagID) { IdentifierInfo *II; if (!Tok.isAnnotation() && (II = Tok.getIdentifierInfo())) return II; PP.Diag(Tok.getLocation(), DiagID); return nullptr; } /// ExpandBuiltinMacro - If an identifier token is read that is to be expanded /// as a builtin macro, handle it and return the next token as 'Tok'. void Preprocessor::ExpandBuiltinMacro(Token &Tok) { // Figure out which token this is. IdentifierInfo *II = Tok.getIdentifierInfo(); assert(II && "Can't be a macro without id info!"); // If this is an _Pragma or Microsoft __pragma directive, expand it, // invoke the pragma handler, then lex the token after it. if (II == Ident_Pragma) return Handle_Pragma(Tok); else if (II == Ident__pragma) // in non-MS mode this is null return HandleMicrosoft__pragma(Tok); ++NumBuiltinMacroExpanded; SmallString<128> TmpBuffer; llvm::raw_svector_ostream OS(TmpBuffer); // Set up the return result. Tok.setIdentifierInfo(nullptr); Tok.clearFlag(Token::NeedsCleaning); if (II == Ident__LINE__) { // C99 6.10.8: "__LINE__: The presumed line number (within the current // source file) of the current source line (an integer constant)". This can // be affected by #line. SourceLocation Loc = Tok.getLocation(); // Advance to the location of the first _, this might not be the first byte // of the token if it starts with an escaped newline. Loc = AdvanceToTokenCharacter(Loc, 0); // One wrinkle here is that GCC expands __LINE__ to location of the *end* of // a macro expansion. This doesn't matter for object-like macros, but // can matter for a function-like macro that expands to contain __LINE__. // Skip down through expansion points until we find a file loc for the // end of the expansion history. Loc = SourceMgr.getExpansionRange(Loc).second; PresumedLoc PLoc = SourceMgr.getPresumedLoc(Loc); // __LINE__ expands to a simple numeric value. OS << (PLoc.isValid()? 
PLoc.getLine() : 1); Tok.setKind(tok::numeric_constant); } else if (II == Ident__FILE__ || II == Ident__BASE_FILE__) { // C99 6.10.8: "__FILE__: The presumed name of the current source file (a // character string literal)". This can be affected by #line. PresumedLoc PLoc = SourceMgr.getPresumedLoc(Tok.getLocation()); // __BASE_FILE__ is a GNU extension that returns the top of the presumed // #include stack instead of the current file. if (II == Ident__BASE_FILE__ && PLoc.isValid()) { SourceLocation NextLoc = PLoc.getIncludeLoc(); while (NextLoc.isValid()) { PLoc = SourceMgr.getPresumedLoc(NextLoc); if (PLoc.isInvalid()) break; NextLoc = PLoc.getIncludeLoc(); } } // Escape this filename. Turn '\' -> '\\' '"' -> '\"' SmallString<128> FN; if (PLoc.isValid()) { FN += PLoc.getFilename(); Lexer::Stringify(FN); OS << '"' << FN << '"'; } Tok.setKind(tok::string_literal); } else if (II == Ident__DATE__) { Diag(Tok.getLocation(), diag::warn_pp_date_time); if (!DATELoc.isValid()) ComputeDATE_TIME(DATELoc, TIMELoc, *this); Tok.setKind(tok::string_literal); Tok.setLength(strlen("\"Mmm dd yyyy\"")); Tok.setLocation(SourceMgr.createExpansionLoc(DATELoc, Tok.getLocation(), Tok.getLocation(), Tok.getLength())); return; } else if (II == Ident__TIME__) { Diag(Tok.getLocation(), diag::warn_pp_date_time); if (!TIMELoc.isValid()) ComputeDATE_TIME(DATELoc, TIMELoc, *this); Tok.setKind(tok::string_literal); Tok.setLength(strlen("\"hh:mm:ss\"")); Tok.setLocation(SourceMgr.createExpansionLoc(TIMELoc, Tok.getLocation(), Tok.getLocation(), Tok.getLength())); return; } else if (II == Ident__INCLUDE_LEVEL__) { // Compute the presumed include depth of this token. This can be affected // by GNU line markers. unsigned Depth = 0; PresumedLoc PLoc = SourceMgr.getPresumedLoc(Tok.getLocation()); if (PLoc.isValid()) { PLoc = SourceMgr.getPresumedLoc(PLoc.getIncludeLoc()); for (; PLoc.isValid(); ++Depth) PLoc = SourceMgr.getPresumedLoc(PLoc.getIncludeLoc()); } // __INCLUDE_LEVEL__ expands to a simple numeric value. OS << Depth; Tok.setKind(tok::numeric_constant); } else if (II == Ident__TIMESTAMP__) { Diag(Tok.getLocation(), diag::warn_pp_date_time); // MSVC, ICC, GCC, VisualAge C++ extension. The generated string should be // of the form "Ddd Mmm dd hh:mm:ss yyyy", which is returned by asctime. // Get the file that we are lexing out of. If we're currently lexing from // a macro, dig into the include stack. const FileEntry *CurFile = nullptr; PreprocessorLexer *TheLexer = getCurrentFileLexer(); if (TheLexer) CurFile = SourceMgr.getFileEntryForID(TheLexer->getFileID()); const char *Result; if (CurFile) { time_t TT = CurFile->getModificationTime(); struct tm *TM = localtime(&TT); Result = asctime(TM); } else { Result = "??? ??? ?? ??:??:?? ????\n"; } // Surround the string with " and strip the trailing newline. OS << '"' << StringRef(Result).drop_back() << '"'; Tok.setKind(tok::string_literal); } else if (II == Ident__COUNTER__) { // __COUNTER__ expands to a simple numeric value.
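// Illustrative use (hypothetical macros): __COUNTER__ yields 0, 1, 2, ...
// per translation unit, e.g. to mint unique identifiers:
//   #define CAT2(a, b) a##b
//   #define CAT(a, b) CAT2(a, b)
//   #define UNIQUE(name) CAT(name, __COUNTER__)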
OS << CounterValue++; Tok.setKind(tok::numeric_constant); } else if (II == Ident__has_feature) { EvaluateFeatureLikeBuiltinMacro(OS, Tok, II, *this, [this](Token &Tok, bool &HasLexedNextToken) -> int { IdentifierInfo *II = ExpectFeatureIdentifierInfo(Tok, *this, diag::err_feature_check_malformed); return II && HasFeature(*this, II->getName()); }); } else if (II == Ident__has_extension) { EvaluateFeatureLikeBuiltinMacro(OS, Tok, II, *this, [this](Token &Tok, bool &HasLexedNextToken) -> int { IdentifierInfo *II = ExpectFeatureIdentifierInfo(Tok, *this, diag::err_feature_check_malformed); return II && HasExtension(*this, II->getName()); }); } else if (II == Ident__has_builtin) { EvaluateFeatureLikeBuiltinMacro(OS, Tok, II, *this, [this](Token &Tok, bool &HasLexedNextToken) -> int { IdentifierInfo *II = ExpectFeatureIdentifierInfo(Tok, *this, diag::err_feature_check_malformed); if (!II) return false; else if (II->getBuiltinID() != 0) return true; else { const LangOptions &LangOpts = getLangOpts(); return llvm::StringSwitch(II->getName()) .Case("__make_integer_seq", LangOpts.CPlusPlus) .Case("__type_pack_element", LangOpts.CPlusPlus) .Default(false); } }); } else if (II == Ident__is_identifier) { EvaluateFeatureLikeBuiltinMacro(OS, Tok, II, *this, [](Token &Tok, bool &HasLexedNextToken) -> int { return Tok.is(tok::identifier); }); } else if (II == Ident__has_attribute) { EvaluateFeatureLikeBuiltinMacro(OS, Tok, II, *this, [this](Token &Tok, bool &HasLexedNextToken) -> int { IdentifierInfo *II = ExpectFeatureIdentifierInfo(Tok, *this, diag::err_feature_check_malformed); return II ? hasAttribute(AttrSyntax::GNU, nullptr, II, getTargetInfo(), getLangOpts()) : 0; }); } else if (II == Ident__has_declspec) { EvaluateFeatureLikeBuiltinMacro(OS, Tok, II, *this, [this](Token &Tok, bool &HasLexedNextToken) -> int { IdentifierInfo *II = ExpectFeatureIdentifierInfo(Tok, *this, diag::err_feature_check_malformed); return II ? hasAttribute(AttrSyntax::Declspec, nullptr, II, getTargetInfo(), getLangOpts()) : 0; }); } else if (II == Ident__has_cpp_attribute) { EvaluateFeatureLikeBuiltinMacro(OS, Tok, II, *this, [this](Token &Tok, bool &HasLexedNextToken) -> int { IdentifierInfo *ScopeII = nullptr; IdentifierInfo *II = ExpectFeatureIdentifierInfo(Tok, *this, diag::err_feature_check_malformed); if (!II) return false; // It is possible to receive a scope token. Read the "::", if it is // available, and the subsequent identifier. LexUnexpandedToken(Tok); if (Tok.isNot(tok::coloncolon)) HasLexedNextToken = true; else { ScopeII = II; LexUnexpandedToken(Tok); II = ExpectFeatureIdentifierInfo(Tok, *this, diag::err_feature_check_malformed); } return II ? hasAttribute(AttrSyntax::CXX, ScopeII, II, getTargetInfo(), getLangOpts()) : 0; }); } else if (II == Ident__has_include || II == Ident__has_include_next) { // The argument to these two builtins should be a parenthesized // file name string literal using angle brackets (<>) or // double-quotes (""). bool Value; if (II == Ident__has_include) Value = EvaluateHasInclude(Tok, II, *this); else Value = EvaluateHasIncludeNext(Tok, II, *this); if (Tok.isNot(tok::r_paren)) return; OS << (int)Value; Tok.setKind(tok::numeric_constant); } else if (II == Ident__has_warning) { // The argument should be a parenthesized string literal. 
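// For example (illustrative): '#if __has_warning("-Wdouble-promotion")'
// evaluates to 1 exactly when the string names a known warning flag; the
// leading "-W" is required, as validated below.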
EvaluateFeatureLikeBuiltinMacro(OS, Tok, II, *this, [this](Token &Tok, bool &HasLexedNextToken) -> int { std::string WarningName; SourceLocation StrStartLoc = Tok.getLocation(); HasLexedNextToken = Tok.is(tok::string_literal); if (!FinishLexStringLiteral(Tok, WarningName, "'__has_warning'", /*MacroExpansion=*/false)) return false; // FIXME: Should we accept "-R..." flags here, or should that be // handled by a separate __has_remark? if (WarningName.size() < 3 || WarningName[0] != '-' || WarningName[1] != 'W') { Diag(StrStartLoc, diag::warn_has_warning_invalid_option); return false; } // Finally, check if the warning flags maps to a diagnostic group. // We construct a SmallVector here to talk to getDiagnosticIDs(). // Although we don't use the result, this isn't a hot path, and not // worth special casing. SmallVector Diags; return !getDiagnostics().getDiagnosticIDs()-> getDiagnosticsInGroup(diag::Flavor::WarningOrError, WarningName.substr(2), Diags); }); } else if (II == Ident__building_module) { // The argument to this builtin should be an identifier. The // builtin evaluates to 1 when that identifier names the module we are // currently building. EvaluateFeatureLikeBuiltinMacro(OS, Tok, II, *this, [this](Token &Tok, bool &HasLexedNextToken) -> int { IdentifierInfo *II = ExpectFeatureIdentifierInfo(Tok, *this, diag::err_expected_id_building_module); return getLangOpts().isCompilingModule() && II && (II->getName() == getLangOpts().CurrentModule); }); } else if (II == Ident__MODULE__) { // The current module as an identifier. OS << getLangOpts().CurrentModule; IdentifierInfo *ModuleII = getIdentifierInfo(getLangOpts().CurrentModule); Tok.setIdentifierInfo(ModuleII); Tok.setKind(ModuleII->getTokenID()); } else if (II == Ident__identifier) { SourceLocation Loc = Tok.getLocation(); // We're expecting '__identifier' '(' identifier ')'. Try to recover // if the parens are missing. LexNonComment(Tok); if (Tok.isNot(tok::l_paren)) { // No '(', use end of last token. Diag(getLocForEndOfToken(Loc), diag::err_pp_expected_after) << II << tok::l_paren; // If the next token isn't valid as our argument, we can't recover. if (!Tok.isAnnotation() && Tok.getIdentifierInfo()) Tok.setKind(tok::identifier); return; } SourceLocation LParenLoc = Tok.getLocation(); LexNonComment(Tok); if (!Tok.isAnnotation() && Tok.getIdentifierInfo()) Tok.setKind(tok::identifier); else { Diag(Tok.getLocation(), diag::err_pp_identifier_arg_not_identifier) << Tok.getKind(); // Don't walk past anything that's not a real token. if (Tok.isOneOf(tok::eof, tok::eod) || Tok.isAnnotation()) return; } // Discard the ')', preserving 'Tok' as our result. Token RParen; LexNonComment(RParen); if (RParen.isNot(tok::r_paren)) { Diag(getLocForEndOfToken(Tok.getLocation()), diag::err_pp_expected_after) << Tok.getKind() << tok::r_paren; Diag(LParenLoc, diag::note_matching) << tok::l_paren; } return; } else { llvm_unreachable("Unknown identifier!"); } CreateString(OS.str(), Tok, Tok.getLocation(), Tok.getLocation()); } void Preprocessor::markMacroAsUsed(MacroInfo *MI) { // If the 'used' status changed, and the macro requires 'unused' warning, // remove its SourceLocation from the warn-for-unused-macro locations. 
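// Illustrative effect: under -Wunused-macros, a '#define DEBUG_LEVEL 2'
// (hypothetical) that is later expanded has its definition location erased
// from WarnUnusedMacroLocs here, so no unused-macro warning is emitted.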
if (MI->isWarnIfUnused() && !MI->isUsed()) WarnUnusedMacroLocs.erase(MI->getDefinitionLoc()); MI->setIsUsed(true); } Index: vendor/clang/dist/lib/Sema/SemaDeclCXX.cpp =================================================================== --- vendor/clang/dist/lib/Sema/SemaDeclCXX.cpp (revision 312705) +++ vendor/clang/dist/lib/Sema/SemaDeclCXX.cpp (revision 312706) @@ -1,14867 +1,14867 @@ //===------ SemaDeclCXX.cpp - Semantic Analysis for C++ Declarations ------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements semantic analysis for C++ declarations. // //===----------------------------------------------------------------------===// #include "clang/AST/ASTConsumer.h" #include "clang/AST/ASTContext.h" #include "clang/AST/ASTLambda.h" #include "clang/AST/ASTMutationListener.h" #include "clang/AST/CXXInheritance.h" #include "clang/AST/CharUnits.h" #include "clang/AST/EvaluatedExprVisitor.h" #include "clang/AST/ExprCXX.h" #include "clang/AST/RecordLayout.h" #include "clang/AST/RecursiveASTVisitor.h" #include "clang/AST/StmtVisitor.h" #include "clang/AST/TypeLoc.h" #include "clang/AST/TypeOrdering.h" #include "clang/Basic/PartialDiagnostic.h" #include "clang/Basic/TargetInfo.h" #include "clang/Lex/LiteralSupport.h" #include "clang/Lex/Preprocessor.h" #include "clang/Sema/CXXFieldCollector.h" #include "clang/Sema/DeclSpec.h" #include "clang/Sema/Initialization.h" #include "clang/Sema/Lookup.h" #include "clang/Sema/ParsedTemplate.h" #include "clang/Sema/Scope.h" #include "clang/Sema/ScopeInfo.h" #include "clang/Sema/SemaInternal.h" #include "clang/Sema/Template.h" #include "llvm/ADT/STLExtras.h" #include "llvm/ADT/SmallString.h" #include "llvm/ADT/StringExtras.h" #include #include using namespace clang; //===----------------------------------------------------------------------===// // CheckDefaultArgumentVisitor //===----------------------------------------------------------------------===// namespace { /// CheckDefaultArgumentVisitor - C++ [dcl.fct.default] Traverses /// the default argument of a parameter to determine whether it /// contains any ill-formed subexpressions. For example, this will /// diagnose the use of local variables or parameters within the /// default argument expression. class CheckDefaultArgumentVisitor : public StmtVisitor { Expr *DefaultArg; Sema *S; public: CheckDefaultArgumentVisitor(Expr *defarg, Sema *s) : DefaultArg(defarg), S(s) {} bool VisitExpr(Expr *Node); bool VisitDeclRefExpr(DeclRefExpr *DRE); bool VisitCXXThisExpr(CXXThisExpr *ThisE); bool VisitLambdaExpr(LambdaExpr *Lambda); bool VisitPseudoObjectExpr(PseudoObjectExpr *POE); }; /// VisitExpr - Visit all of the children of this expression. bool CheckDefaultArgumentVisitor::VisitExpr(Expr *Node) { bool IsInvalid = false; for (Stmt *SubStmt : Node->children()) IsInvalid |= Visit(SubStmt); return IsInvalid; } /// VisitDeclRefExpr - Visit a reference to a declaration, to /// determine whether this declaration can be used in the default /// argument expression. bool CheckDefaultArgumentVisitor::VisitDeclRefExpr(DeclRefExpr *DRE) { NamedDecl *Decl = DRE->getDecl(); if (ParmVarDecl *Param = dyn_cast(Decl)) { // C++ [dcl.fct.default]p9 // Default arguments are evaluated each time the function is // called. The order of evaluation of function arguments is // unspecified. 
Consequently, parameters of a function shall not // be used in default argument expressions, even if they are not // evaluated. Parameters of a function declared before a default // argument expression are in scope and can hide namespace and // class member names. return S->Diag(DRE->getLocStart(), diag::err_param_default_argument_references_param) << Param->getDeclName() << DefaultArg->getSourceRange(); } else if (VarDecl *VDecl = dyn_cast(Decl)) { // C++ [dcl.fct.default]p7 // Local variables shall not be used in default argument // expressions. if (VDecl->isLocalVarDecl()) return S->Diag(DRE->getLocStart(), diag::err_param_default_argument_references_local) << VDecl->getDeclName() << DefaultArg->getSourceRange(); } return false; } /// VisitCXXThisExpr - Visit a C++ "this" expression. bool CheckDefaultArgumentVisitor::VisitCXXThisExpr(CXXThisExpr *ThisE) { // C++ [dcl.fct.default]p8: // The keyword this shall not be used in a default argument of a // member function. return S->Diag(ThisE->getLocStart(), diag::err_param_default_argument_references_this) << ThisE->getSourceRange(); } bool CheckDefaultArgumentVisitor::VisitPseudoObjectExpr(PseudoObjectExpr *POE) { bool Invalid = false; for (PseudoObjectExpr::semantics_iterator i = POE->semantics_begin(), e = POE->semantics_end(); i != e; ++i) { Expr *E = *i; // Look through bindings. if (OpaqueValueExpr *OVE = dyn_cast(E)) { E = OVE->getSourceExpr(); assert(E && "pseudo-object binding without source expression?"); } Invalid |= Visit(E); } return Invalid; } bool CheckDefaultArgumentVisitor::VisitLambdaExpr(LambdaExpr *Lambda) { // C++11 [expr.lambda.prim]p13: // A lambda-expression appearing in a default argument shall not // implicitly or explicitly capture any entity. if (Lambda->capture_begin() == Lambda->capture_end()) return false; return S->Diag(Lambda->getLocStart(), diag::err_lambda_capture_default_arg); } } void Sema::ImplicitExceptionSpecification::CalledDecl(SourceLocation CallLoc, const CXXMethodDecl *Method) { // If we have an MSAny spec already, don't bother. if (!Method || ComputedEST == EST_MSAny) return; const FunctionProtoType *Proto = Method->getType()->getAs(); Proto = Self->ResolveExceptionSpec(CallLoc, Proto); if (!Proto) return; ExceptionSpecificationType EST = Proto->getExceptionSpecType(); // If we have a throw-all spec at this point, ignore the function. if (ComputedEST == EST_None) return; switch(EST) { // If this function can throw any exceptions, make a note of that. case EST_MSAny: case EST_None: ClearExceptions(); ComputedEST = EST; return; // FIXME: If the call to this decl is using any of its default arguments, we // need to search them for potentially-throwing calls. // If this function has a basic noexcept, it doesn't affect the outcome. case EST_BasicNoexcept: return; // If we're still at noexcept(true) and there's a nothrow() callee, // change to that specification. case EST_DynamicNone: if (ComputedEST == EST_BasicNoexcept) ComputedEST = EST_DynamicNone; return; // Check out noexcept specs. 
case EST_ComputedNoexcept: { FunctionProtoType::NoexceptResult NR = Proto->getNoexceptSpec(Self->Context); assert(NR != FunctionProtoType::NR_NoNoexcept && "Must have noexcept result for EST_ComputedNoexcept."); assert(NR != FunctionProtoType::NR_Dependent && "Should not generate implicit declarations for dependent cases, " "and don't know how to handle them anyway."); // noexcept(false) -> no spec on the new function if (NR == FunctionProtoType::NR_Throw) { ClearExceptions(); ComputedEST = EST_None; } // noexcept(true) won't change anything either. return; } default: break; } assert(EST == EST_Dynamic && "EST case not considered earlier."); assert(ComputedEST != EST_None && "Shouldn't collect exceptions when throw-all is guaranteed."); ComputedEST = EST_Dynamic; // Record the exceptions in this function's exception specification. for (const auto &E : Proto->exceptions()) if (ExceptionsSeen.insert(Self->Context.getCanonicalType(E)).second) Exceptions.push_back(E); } void Sema::ImplicitExceptionSpecification::CalledExpr(Expr *E) { if (!E || ComputedEST == EST_MSAny) return; // FIXME: // // C++0x [except.spec]p14: // [An] implicit exception-specification specifies the type-id T if and // only if T is allowed by the exception-specification of a function directly // invoked by f's implicit definition; f shall allow all exceptions if any // function it directly invokes allows all exceptions, and f shall allow no // exceptions if every function it directly invokes allows no exceptions. // // Note in particular that if an implicit exception-specification is generated // for a function containing a throw-expression, that specification can still // be noexcept(true). // // Note also that 'directly invoked' is not defined in the standard, and there // is no indication that we should only consider potentially-evaluated calls. // // Ultimately we should implement the intent of the standard: the exception // specification should be the set of exceptions which can be thrown by the // implicit definition. For now, we assume that any non-nothrow expression can // throw any exception. if (Self->canThrow(E)) ComputedEST = EST_None; } bool Sema::SetParamDefaultArgument(ParmVarDecl *Param, Expr *Arg, SourceLocation EqualLoc) { if (RequireCompleteType(Param->getLocation(), Param->getType(), diag::err_typecheck_decl_incomplete_type)) { Param->setInvalidDecl(); return true; } // C++ [dcl.fct.default]p5 // A default argument expression is implicitly converted (clause // 4) to the parameter type. The default argument expression has // the same semantic constraints as the initializer expression in // a declaration of a variable of the parameter type, using the // copy-initialization semantics (8.5). InitializedEntity Entity = InitializedEntity::InitializeParameter(Context, Param); InitializationKind Kind = InitializationKind::CreateCopy(Param->getLocation(), EqualLoc); InitializationSequence InitSeq(*this, Entity, Kind, Arg); ExprResult Result = InitSeq.Perform(*this, Entity, Kind, Arg); if (Result.isInvalid()) return true; Arg = Result.getAs(); CheckCompletedExpr(Arg, EqualLoc); Arg = MaybeCreateExprWithCleanups(Arg); // Okay: add the default argument to the parameter Param->setDefaultArg(Arg); // We have already instantiated this parameter; provide each of the // instantiations with the uninstantiated default argument. 
  UnparsedDefaultArgInstantiationsMap::iterator InstPos
    = UnparsedDefaultArgInstantiations.find(Param);
  if (InstPos != UnparsedDefaultArgInstantiations.end()) {
    for (unsigned I = 0, N = InstPos->second.size(); I != N; ++I)
      InstPos->second[I]->setUninstantiatedDefaultArg(Arg);

    // We're done tracking this parameter's instantiations.
    UnparsedDefaultArgInstantiations.erase(InstPos);
  }

  return false;
}

/// ActOnParamDefaultArgument - Check whether the default argument
/// provided for a function parameter is well-formed. If so, attach it
/// to the parameter declaration.
void
Sema::ActOnParamDefaultArgument(Decl *param, SourceLocation EqualLoc,
                                Expr *DefaultArg) {
  if (!param || !DefaultArg)
    return;

  ParmVarDecl *Param = cast<ParmVarDecl>(param);
  UnparsedDefaultArgLocs.erase(Param);

  // Default arguments are only permitted in C++
  if (!getLangOpts().CPlusPlus) {
    Diag(EqualLoc, diag::err_param_default_argument)
      << DefaultArg->getSourceRange();
    Param->setInvalidDecl();
    return;
  }

  // Check for unexpanded parameter packs.
  if (DiagnoseUnexpandedParameterPack(DefaultArg, UPPC_DefaultArgument)) {
    Param->setInvalidDecl();
    return;
  }

  // C++11 [dcl.fct.default]p3
  //   A default argument expression [...] shall not be specified for a
  //   parameter pack.
  if (Param->isParameterPack()) {
    Diag(EqualLoc, diag::err_param_default_argument_on_parameter_pack)
        << DefaultArg->getSourceRange();
    return;
  }

  // Check that the default argument is well-formed
  CheckDefaultArgumentVisitor DefaultArgChecker(DefaultArg, this);
  if (DefaultArgChecker.Visit(DefaultArg)) {
    Param->setInvalidDecl();
    return;
  }

  SetParamDefaultArgument(Param, DefaultArg, EqualLoc);
}

/// ActOnParamUnparsedDefaultArgument - We've seen a default
/// argument for a function parameter, but we can't parse it yet
/// because we're inside a class definition. Note that this default
/// argument will be parsed later.
void Sema::ActOnParamUnparsedDefaultArgument(Decl *param,
                                             SourceLocation EqualLoc,
                                             SourceLocation ArgLoc) {
  if (!param)
    return;

  ParmVarDecl *Param = cast<ParmVarDecl>(param);
  Param->setUnparsedDefaultArg();
  UnparsedDefaultArgLocs[Param] = ArgLoc;
}

/// ActOnParamDefaultArgumentError - Parsing or semantic analysis of
/// the default argument for the parameter param failed.
void Sema::ActOnParamDefaultArgumentError(Decl *param,
                                          SourceLocation EqualLoc) {
  if (!param)
    return;

  ParmVarDecl *Param = cast<ParmVarDecl>(param);
  Param->setInvalidDecl();
  UnparsedDefaultArgLocs.erase(Param);
  Param->setDefaultArg(new(Context)
                       OpaqueValueExpr(EqualLoc,
                                       Param->getType().getNonReferenceType(),
                                       VK_RValue));
}

/// CheckExtraCXXDefaultArguments - Check for any extra default
/// arguments in the declarator, which is not a function declaration
/// or definition and therefore is not permitted to have default
/// arguments. This routine should be invoked for every declarator
/// that is not a function declaration or definition.
void Sema::CheckExtraCXXDefaultArguments(Declarator &D) {
  // C++ [dcl.fct.default]p3
  //   A default argument expression shall be specified only in the
  //   parameter-declaration-clause of a function declaration or in a
  //   template-parameter (14.1). It shall not be specified for a
  //   parameter pack. If it is specified in a
  //   parameter-declaration-clause, it shall not occur within a
  //   declarator or abstract-declarator of a parameter-declaration.
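  //
  // A minimal sketch of the kind of declarator this rejects (hypothetical
  // user code, not part of this file):
  //
  //   void (*pf)(int = 17);  // error: default argument appears inside the
  //                          // declarator of a variable, which is not a
  //                          // function declaration or definition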
  bool MightBeFunction = D.isFunctionDeclarationContext();
  for (unsigned i = 0, e = D.getNumTypeObjects(); i != e; ++i) {
    DeclaratorChunk &chunk = D.getTypeObject(i);
    if (chunk.Kind == DeclaratorChunk::Function) {
      if (MightBeFunction) {
        // This is a function declaration. It can have default arguments, but
        // keep looking in case its return type is a function type with default
        // arguments.
        MightBeFunction = false;
        continue;
      }
      for (unsigned argIdx = 0, e = chunk.Fun.NumParams; argIdx != e;
           ++argIdx) {
        ParmVarDecl *Param = cast<ParmVarDecl>(chunk.Fun.Params[argIdx].Param);
        if (Param->hasUnparsedDefaultArg()) {
          std::unique_ptr<CachedTokens> Toks =
              std::move(chunk.Fun.Params[argIdx].DefaultArgTokens);
          SourceRange SR;
          if (Toks->size() > 1)
            SR = SourceRange((*Toks)[1].getLocation(),
                             Toks->back().getLocation());
          else
            SR = UnparsedDefaultArgLocs[Param];
          Diag(Param->getLocation(), diag::err_param_default_argument_nonfunc)
            << SR;
        } else if (Param->getDefaultArg()) {
          Diag(Param->getLocation(), diag::err_param_default_argument_nonfunc)
            << Param->getDefaultArg()->getSourceRange();
          Param->setDefaultArg(nullptr);
        }
      }
    } else if (chunk.Kind != DeclaratorChunk::Paren) {
      MightBeFunction = false;
    }
  }
}

static bool functionDeclHasDefaultArgument(const FunctionDecl *FD) {
  for (unsigned NumParams = FD->getNumParams(); NumParams > 0; --NumParams) {
    const ParmVarDecl *PVD = FD->getParamDecl(NumParams-1);
    if (!PVD->hasDefaultArg())
      return false;
    if (!PVD->hasInheritedDefaultArg())
      return true;
  }
  return false;
}

/// MergeCXXFunctionDecl - Merge two declarations of the same C++
/// function, once we already know that they have the same
/// type. Subroutine of MergeFunctionDecl. Returns true if there was an
/// error, false otherwise.
bool Sema::MergeCXXFunctionDecl(FunctionDecl *New, FunctionDecl *Old,
                                Scope *S) {
  bool Invalid = false;

  // The declaration context corresponding to the scope is the semantic
  // parent, unless this is a local function declaration, in which case
  // it is that surrounding function.
  DeclContext *ScopeDC = New->isLocalExternDecl()
                             ? New->getLexicalDeclContext()
                             : New->getDeclContext();

  // Find the previous declaration for the purpose of default arguments.
  FunctionDecl *PrevForDefaultArgs = Old;
  for (/**/; PrevForDefaultArgs;
       // Don't bother looking back past the latest decl if this is a local
       // extern declaration; nothing else could work.
       PrevForDefaultArgs = New->isLocalExternDecl()
                                ? nullptr
                                : PrevForDefaultArgs->getPreviousDecl()) {
    // Ignore hidden declarations.
    if (!LookupResult::isVisible(*this, PrevForDefaultArgs))
      continue;

    if (S && !isDeclInScope(PrevForDefaultArgs, ScopeDC, S) &&
        !New->isCXXClassMember()) {
      // Ignore default arguments of old decl if they are not in
      // the same scope and this is not an out-of-line definition of
      // a member function.
      continue;
    }

    if (PrevForDefaultArgs->isLocalExternDecl() != New->isLocalExternDecl()) {
      // If only one of these is a local function declaration, then they are
      // declared in different scopes, even though isDeclInScope may think
      // they're in the same scope. (If both are local, the scope check is
      // sufficient, and if neither is local, then they are in the same
      // scope.)
      continue;
    }

    // We found the right previous declaration.
    break;
  }

  // C++ [dcl.fct.default]p4:
  //   For non-template functions, default arguments can be added in
  //   later declarations of a function in the same
  //   scope. Declarations in different scopes have completely
  //   distinct sets of default arguments. That is, declarations in
  //   inner scopes do not acquire default arguments from
  //   declarations in outer scopes, and vice versa. In a given
  //   function declaration, all parameters subsequent to a
  //   parameter with a default argument shall have default
  //   arguments supplied in this or previous declarations. A
  //   default argument shall not be redefined by a later
  //   declaration (not even to the same value).
  //
  // C++ [dcl.fct.default]p6:
  //   Except for member functions of class templates, the default arguments
  //   in a member function definition that appears outside of the class
  //   definition are added to the set of default arguments provided by the
  //   member function declaration in the class definition.
  for (unsigned p = 0, NumParams = PrevForDefaultArgs
                                       ? PrevForDefaultArgs->getNumParams()
                                       : 0;
       p < NumParams; ++p) {
    ParmVarDecl *OldParam = PrevForDefaultArgs->getParamDecl(p);
    ParmVarDecl *NewParam = New->getParamDecl(p);

    bool OldParamHasDfl = OldParam ? OldParam->hasDefaultArg() : false;
    bool NewParamHasDfl = NewParam->hasDefaultArg();

    if (OldParamHasDfl && NewParamHasDfl) {
      unsigned DiagDefaultParamID =
        diag::err_param_default_argument_redefinition;

      // MSVC accepts that default parameters be redefined for member functions
      // of template class. The new default parameter's value is ignored.
      Invalid = true;
      if (getLangOpts().MicrosoftExt) {
        CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(New);
        if (MD && MD->getParent()->getDescribedClassTemplate()) {
          // Merge the old default argument into the new parameter.
          NewParam->setHasInheritedDefaultArg();
          if (OldParam->hasUninstantiatedDefaultArg())
            NewParam->setUninstantiatedDefaultArg(
                                      OldParam->getUninstantiatedDefaultArg());
          else
            NewParam->setDefaultArg(OldParam->getInit());
          DiagDefaultParamID = diag::ext_param_default_argument_redefinition;
          Invalid = false;
        }
      }

      // FIXME: If we knew where the '=' was, we could easily provide a fix-it
      // hint here. Alternatively, we could walk the type-source information
      // for NewParam to find the last source location in the type... but it
      // isn't worth the effort right now. This is the kind of test case that
      // is hard to get right:
      //   int f(int);
      //   void g(int (*fp)(int) = f);
      //   void g(int (*fp)(int) = &f);
      Diag(NewParam->getLocation(), DiagDefaultParamID)
        << NewParam->getDefaultArgRange();

      // Look for the function declaration where the default argument was
      // actually written, which may be a declaration prior to Old.
      for (auto Older = PrevForDefaultArgs;
           OldParam->hasInheritedDefaultArg(); /**/) {
        Older = Older->getPreviousDecl();
        OldParam = Older->getParamDecl(p);
      }

      Diag(OldParam->getLocation(), diag::note_previous_definition)
        << OldParam->getDefaultArgRange();
    } else if (OldParamHasDfl) {
      // Merge the old default argument into the new parameter.
      // It's important to use getInit() here; getDefaultArg()
      // strips off any top-level ExprWithCleanups.
      NewParam->setHasInheritedDefaultArg();
      if (OldParam->hasUnparsedDefaultArg())
        NewParam->setUnparsedDefaultArg();
      else if (OldParam->hasUninstantiatedDefaultArg())
        NewParam->setUninstantiatedDefaultArg(
                                      OldParam->getUninstantiatedDefaultArg());
      else
        NewParam->setDefaultArg(OldParam->getInit());
    } else if (NewParamHasDfl) {
      if (New->getDescribedFunctionTemplate()) {
        // Paragraph 4, quoted above, only applies to non-template functions.
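        //
        // A brief sketch of what this branch diagnoses (hypothetical user
        // code):
        //
        //   template<typename T> void f(T);
        //   template<typename T> void f(T = T()); // error: default argument
        //                                         // added in a redeclaration
        //                                         // of a function template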
        Diag(NewParam->getLocation(),
             diag::err_param_default_argument_template_redecl)
          << NewParam->getDefaultArgRange();
        Diag(PrevForDefaultArgs->getLocation(),
             diag::note_template_prev_declaration)
          << false;
      } else if (New->getTemplateSpecializationKind()
                     != TSK_ImplicitInstantiation &&
                 New->getTemplateSpecializationKind() != TSK_Undeclared) {
        // C++ [temp.expr.spec]p21:
        //   Default function arguments shall not be specified in a declaration
        //   or a definition for one of the following explicit specializations:
        //     - the explicit specialization of a function template;
        //     - the explicit specialization of a member function template;
        //     - the explicit specialization of a member function of a class
        //       template where the class template specialization to which the
        //       member function specialization belongs is implicitly
        //       instantiated.
        Diag(NewParam->getLocation(), diag::err_template_spec_default_arg)
          << (New->getTemplateSpecializationKind() ==
              TSK_ExplicitSpecialization)
          << New->getDeclName()
          << NewParam->getDefaultArgRange();
      } else if (New->getDeclContext()->isDependentContext()) {
        // C++ [dcl.fct.default]p6 (DR217):
        //   Default arguments for a member function of a class template shall
        //   be specified on the initial declaration of the member function
        //   within the class template.
        //
        // Reading the tea leaves a bit in DR217 and its reference to DR205
        // leads me to the conclusion that one cannot add default function
        // arguments for an out-of-line definition of a member function of a
        // dependent type.
        int WhichKind = 2;
        if (CXXRecordDecl *Record
              = dyn_cast<CXXRecordDecl>(New->getDeclContext())) {
          if (Record->getDescribedClassTemplate())
            WhichKind = 0;
          else if (isa<ClassTemplatePartialSpecializationDecl>(Record))
            WhichKind = 1;
          else
            WhichKind = 2;
        }

        Diag(NewParam->getLocation(),
             diag::err_param_default_argument_member_template_redecl)
          << WhichKind
          << NewParam->getDefaultArgRange();
      }
    }
  }

  // DR1344: If a default argument is added outside a class definition and that
  // default argument makes the function a special member function, the program
  // is ill-formed. This can only happen for constructors.
  if (isa<CXXConstructorDecl>(New) &&
      New->getMinRequiredArguments() < Old->getMinRequiredArguments()) {
    CXXSpecialMember NewSM = getSpecialMember(cast<CXXMethodDecl>(New)),
                     OldSM = getSpecialMember(cast<CXXMethodDecl>(Old));
    if (NewSM != OldSM) {
      ParmVarDecl *NewParam = New->getParamDecl(New->getMinRequiredArguments());
      assert(NewParam->hasDefaultArg());
      Diag(NewParam->getLocation(), diag::err_default_arg_makes_ctor_special)
        << NewParam->getDefaultArgRange() << NewSM;
      Diag(Old->getLocation(), diag::note_previous_declaration);
    }
  }

  const FunctionDecl *Def;
  // C++11 [dcl.constexpr]p1: If any declaration of a function or function
  // template has a constexpr specifier then all its declarations shall
  // contain the constexpr specifier.
  if (New->isConstexpr() != Old->isConstexpr()) {
    Diag(New->getLocation(), diag::err_constexpr_redecl_mismatch)
      << New << New->isConstexpr();
    Diag(Old->getLocation(), diag::note_previous_declaration);
    Invalid = true;
  } else if (!Old->getMostRecentDecl()->isInlined() && New->isInlined() &&
             Old->isDefined(Def)) {
    // C++11 [dcl.fcn.spec]p4:
    //   If the definition of a function appears in a translation unit before
    //   its first declaration as inline, the program is ill-formed.
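    //
    // For instance (hypothetical user code):
    //
    //   void f() {}       // definition, not declared inline
    //   inline void f();  // error: 'inline' declaration follows definition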
    Diag(New->getLocation(), diag::err_inline_decl_follows_def) << New;
    Diag(Def->getLocation(), diag::note_previous_definition);
    Invalid = true;
  }

  // C++11 [dcl.fct.default]p4: If a friend declaration specifies a default
  // argument expression, that declaration shall be a definition and shall be
  // the only declaration of the function or function template in the
  // translation unit.
  if (Old->getFriendObjectKind() == Decl::FOK_Undeclared &&
      functionDeclHasDefaultArgument(Old)) {
    Diag(New->getLocation(), diag::err_friend_decl_with_def_arg_redeclared);
    Diag(Old->getLocation(), diag::note_previous_declaration);
    Invalid = true;
  }

  return Invalid;
}

NamedDecl *
Sema::ActOnDecompositionDeclarator(Scope *S, Declarator &D,
                                   MultiTemplateParamsArg TemplateParamLists) {
  assert(D.isDecompositionDeclarator());
  const DecompositionDeclarator &Decomp = D.getDecompositionDeclarator();

  // The syntax only allows a decomposition declarator as a simple-declaration
  // or a for-range-declaration, but we parse it in more cases than that.
  if (!D.mayHaveDecompositionDeclarator()) {
    Diag(Decomp.getLSquareLoc(), diag::err_decomp_decl_context)
      << Decomp.getSourceRange();
    return nullptr;
  }

  if (!TemplateParamLists.empty()) {
    // FIXME: There's no rule against this, but there are also no rules that
    // would actually make it usable, so we reject it for now.
    Diag(TemplateParamLists.front()->getTemplateLoc(),
         diag::err_decomp_decl_template);
    return nullptr;
  }

  Diag(Decomp.getLSquareLoc(), getLangOpts().CPlusPlus1z
                                   ? diag::warn_cxx14_compat_decomp_decl
                                   : diag::ext_decomp_decl)
      << Decomp.getSourceRange();

  // The semantic context is always just the current context.
  DeclContext *const DC = CurContext;

  // C++1z [dcl.dcl]/8:
  //   The decl-specifier-seq shall contain only the type-specifier auto
  //   and cv-qualifiers.
  auto &DS = D.getDeclSpec();
  {
    SmallVector<StringRef, 8> BadSpecifiers;
    SmallVector<SourceLocation, 8> BadSpecifierLocs;
    if (auto SCS = DS.getStorageClassSpec()) {
      BadSpecifiers.push_back(DeclSpec::getSpecifierName(SCS));
      BadSpecifierLocs.push_back(DS.getStorageClassSpecLoc());
    }
    if (auto TSCS = DS.getThreadStorageClassSpec()) {
      BadSpecifiers.push_back(DeclSpec::getSpecifierName(TSCS));
      BadSpecifierLocs.push_back(DS.getThreadStorageClassSpecLoc());
    }
    if (DS.isConstexprSpecified()) {
      BadSpecifiers.push_back("constexpr");
      BadSpecifierLocs.push_back(DS.getConstexprSpecLoc());
    }
    if (DS.isInlineSpecified()) {
      BadSpecifiers.push_back("inline");
      BadSpecifierLocs.push_back(DS.getInlineSpecLoc());
    }
    if (!BadSpecifiers.empty()) {
      auto &&Err = Diag(BadSpecifierLocs.front(), diag::err_decomp_decl_spec);
      Err << (int)BadSpecifiers.size()
          << llvm::join(BadSpecifiers.begin(), BadSpecifiers.end(), " ");
      // Don't add FixItHints to remove the specifiers; we do still respect
      // them when building the underlying variable.
      for (auto Loc : BadSpecifierLocs)
        Err << SourceRange(Loc, Loc);
    }
    // We can't recover from it being declared as a typedef.
    if (DS.getStorageClassSpec() == DeclSpec::SCS_typedef)
      return nullptr;
  }

  TypeSourceInfo *TInfo = GetTypeForDeclarator(D, S);
  QualType R = TInfo->getType();

  if (DiagnoseUnexpandedParameterPack(D.getIdentifierLoc(), TInfo,
                                      UPPC_DeclarationType))
    D.setInvalidType();

  // The syntax only allows a single ref-qualifier prior to the decomposition
  // declarator. No other declarator chunks are permitted. Also check the type
  // specifier here.
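  //
  // A rough sketch of what the following check accepts and rejects
  // (hypothetical user code):
  //
  //   auto [a, b] = x;   // OK
  //   auto &[a, b] = x;  // OK: one ref-qualifier
  //   auto *[a, b] = x;  // error: extra declarator chunk
  //   int [a, b] = x;    // error: type-specifier other than 'auto'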
  if (DS.getTypeSpecType() != DeclSpec::TST_auto ||
      D.hasGroupingParens() || D.getNumTypeObjects() > 1 ||
      (D.getNumTypeObjects() == 1 &&
       D.getTypeObject(0).Kind != DeclaratorChunk::Reference)) {
    Diag(Decomp.getLSquareLoc(),
         (D.hasGroupingParens() ||
          (D.getNumTypeObjects() &&
           D.getTypeObject(0).Kind == DeclaratorChunk::Paren))
             ? diag::err_decomp_decl_parens
             : diag::err_decomp_decl_type)
        << R;

    // In most cases, there's no actual problem with an explicitly-specified
    // type, but a function type won't work here, and ActOnVariableDeclarator
    // shouldn't be called for such a type.
    if (R->isFunctionType())
      D.setInvalidType();
  }

  // Build the BindingDecls.
  SmallVector<BindingDecl*, 8> Bindings;

  for (auto &B : D.getDecompositionDeclarator().bindings()) {
    // Check for name conflicts.
    DeclarationNameInfo NameInfo(B.Name, B.NameLoc);
    LookupResult Previous(*this, NameInfo, LookupOrdinaryName,
                          ForRedeclaration);
    LookupName(Previous, S,
               /*CreateBuiltins*/DC->getRedeclContext()->isTranslationUnit());

    // It's not permitted to shadow a template parameter name.
    if (Previous.isSingleResult() &&
        Previous.getFoundDecl()->isTemplateParameter()) {
      DiagnoseTemplateParameterShadow(D.getIdentifierLoc(),
                                      Previous.getFoundDecl());
      Previous.clear();
    }

    bool ConsiderLinkage = DC->isFunctionOrMethod() &&
                           DS.getStorageClassSpec() == DeclSpec::SCS_extern;
    FilterLookupForScope(Previous, DC, S, ConsiderLinkage,
                         /*AllowInlineNamespace*/false);
    if (!Previous.empty()) {
      auto *Old = Previous.getRepresentativeDecl();
      Diag(B.NameLoc, diag::err_redefinition) << B.Name;
      Diag(Old->getLocation(), diag::note_previous_definition);
    }

    auto *BD = BindingDecl::Create(Context, DC, B.NameLoc, B.Name);
    PushOnScopeChains(BD, S, true);
    Bindings.push_back(BD);
    ParsingInitForAutoVars.insert(BD);
  }

  // There are no prior lookup results for the variable itself, because it
  // is unnamed.
  DeclarationNameInfo NameInfo((IdentifierInfo *)nullptr,
                               Decomp.getLSquareLoc());
  LookupResult Previous(*this, NameInfo, LookupOrdinaryName,
                        ForRedeclaration);

  // Build the variable that holds the non-decomposed object.
  bool AddToScope = true;
  NamedDecl *New =
      ActOnVariableDeclarator(S, D, DC, TInfo, Previous,
                              MultiTemplateParamsArg(), AddToScope, Bindings);
  CurContext->addHiddenDecl(New);

  if (isInOpenMPDeclareTargetContext())
    checkDeclIsAllowedInOpenMPTarget(nullptr, New);

  return New;
}

static bool checkSimpleDecomposition(
    Sema &S, ArrayRef<BindingDecl *> Bindings, ValueDecl *Src,
    QualType DecompType, const llvm::APSInt &NumElems, QualType ElemType,
    llvm::function_ref<ExprResult(SourceLocation, Expr *, unsigned)> GetInit) {
  if ((int64_t)Bindings.size() != NumElems) {
    S.Diag(Src->getLocation(), diag::err_decomp_decl_wrong_number_bindings)
        << DecompType << (unsigned)Bindings.size() << NumElems.toString(10)
        << (NumElems < Bindings.size());
    return true;
  }

  unsigned I = 0;
  for (auto *B : Bindings) {
    SourceLocation Loc = B->getLocation();
    ExprResult E = S.BuildDeclRefExpr(Src, DecompType, VK_LValue, Loc);
    if (E.isInvalid())
      return true;
    E = GetInit(Loc, E.get(), I++);
    if (E.isInvalid())
      return true;
    B->setBinding(ElemType, E.get());
  }

  return false;
}

static bool checkArrayLikeDecomposition(Sema &S,
                                        ArrayRef<BindingDecl *> Bindings,
                                        ValueDecl *Src, QualType DecompType,
                                        const llvm::APSInt &NumElems,
                                        QualType ElemType) {
  return checkSimpleDecomposition(
      S, Bindings, Src, DecompType, NumElems, ElemType,
      [&](SourceLocation Loc, Expr *Base, unsigned I) -> ExprResult {
        ExprResult E = S.ActOnIntegerConstant(Loc, I);
        if (E.isInvalid())
          return ExprError();
        return S.CreateBuiltinArraySubscriptExpr(Base, Loc, E.get(), Loc);
      });
}

static bool checkArrayDecomposition(Sema &S, ArrayRef<BindingDecl*> Bindings,
                                    ValueDecl *Src, QualType DecompType,
                                    const ConstantArrayType *CAT) {
  return checkArrayLikeDecomposition(S, Bindings, Src, DecompType,
                                     llvm::APSInt(CAT->getSize()),
                                     CAT->getElementType());
}

static bool checkVectorDecomposition(Sema &S, ArrayRef<BindingDecl*> Bindings,
                                     ValueDecl *Src, QualType DecompType,
                                     const VectorType *VT) {
  return checkArrayLikeDecomposition(
      S, Bindings, Src, DecompType,
      llvm::APSInt::get(VT->getNumElements()),
      S.Context.getQualifiedType(VT->getElementType(),
                                 DecompType.getQualifiers()));
}

static bool checkComplexDecomposition(Sema &S,
                                      ArrayRef<BindingDecl *> Bindings,
                                      ValueDecl *Src, QualType DecompType,
                                      const ComplexType *CT) {
  return checkSimpleDecomposition(
      S, Bindings, Src, DecompType, llvm::APSInt::get(2),
      S.Context.getQualifiedType(CT->getElementType(),
                                 DecompType.getQualifiers()),
      [&](SourceLocation Loc, Expr *Base, unsigned I) -> ExprResult {
        return S.CreateBuiltinUnaryOp(Loc, I ? UO_Imag : UO_Real, Base);
      });
}
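// The three helpers above implement the array, vector and _Complex forms of
// decomposition. A minimal sketch of the user-visible behavior they support
// (hypothetical user code; vector and _Complex decomposition are Clang
// extensions, as noted in CheckCompleteDecompositionDeclaration below):
//
//   int arr[2] = {1, 2};
//   auto [a, b] = arr;    // a binds to arr[0], b binds to arr[1]
//
//   _Complex double c;
//   auto [re, im] = c;    // re = __real c, im = __imag c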
static std::string printTemplateArgs(const PrintingPolicy &PrintingPolicy,
                                     TemplateArgumentListInfo &Args) {
  SmallString<128> SS;
  llvm::raw_svector_ostream OS(SS);
  bool First = true;
  for (auto &Arg : Args.arguments()) {
    if (!First)
      OS << ", ";
    Arg.getArgument().print(PrintingPolicy, OS);
    First = false;
  }
  return OS.str();
}

static bool lookupStdTypeTraitMember(Sema &S, LookupResult &TraitMemberLookup,
                                     SourceLocation Loc, StringRef Trait,
                                     TemplateArgumentListInfo &Args,
                                     unsigned DiagID) {
  auto DiagnoseMissing = [&] {
    if (DiagID)
      S.Diag(Loc, DiagID) << printTemplateArgs(S.Context.getPrintingPolicy(),
                                               Args);
    return true;
  };

  // FIXME: Factor out duplication with lookupPromiseType in SemaCoroutine.
  NamespaceDecl *Std = S.getStdNamespace();
  if (!Std)
    return DiagnoseMissing();

  // Look up the trait itself, within namespace std. We can diagnose various
  // problems with this lookup even if we've been asked to not diagnose a
  // missing specialization, because this can only fail if the user has been
  // declaring their own names in namespace std or we don't support the
  // standard library implementation in use.
  LookupResult Result(S, &S.PP.getIdentifierTable().get(Trait),
                      Loc, Sema::LookupOrdinaryName);
  if (!S.LookupQualifiedName(Result, Std))
    return DiagnoseMissing();
  if (Result.isAmbiguous())
    return true;

  ClassTemplateDecl *TraitTD = Result.getAsSingle<ClassTemplateDecl>();
  if (!TraitTD) {
    Result.suppressDiagnostics();
    NamedDecl *Found = *Result.begin();
    S.Diag(Loc, diag::err_std_type_trait_not_class_template) << Trait;
    S.Diag(Found->getLocation(), diag::note_declared_at);
    return true;
  }

  // Build the template-id.
  QualType TraitTy = S.CheckTemplateIdType(TemplateName(TraitTD), Loc, Args);
  if (TraitTy.isNull())
    return true;
  if (!S.isCompleteType(Loc, TraitTy)) {
    if (DiagID)
      S.RequireCompleteType(
          Loc, TraitTy, DiagID,
          printTemplateArgs(S.Context.getPrintingPolicy(), Args));
    return true;
  }

  CXXRecordDecl *RD = TraitTy->getAsCXXRecordDecl();
  assert(RD && "specialization of class template is not a class?");

  // Look up the member of the trait type.
  S.LookupQualifiedName(TraitMemberLookup, RD);
  return TraitMemberLookup.isAmbiguous();
}

static TemplateArgumentLoc
getTrivialIntegralTemplateArgument(Sema &S, SourceLocation Loc, QualType T,
                                   uint64_t I) {
  TemplateArgument Arg(S.Context, S.Context.MakeIntValue(I, T), T);
  return S.getTrivialTemplateArgumentLoc(Arg, T, Loc);
}

static TemplateArgumentLoc
getTrivialTypeTemplateArgument(Sema &S, SourceLocation Loc, QualType T) {
  return S.getTrivialTemplateArgumentLoc(TemplateArgument(T), QualType(), Loc);
}

namespace { enum class IsTupleLike { TupleLike, NotTupleLike, Error }; }

static IsTupleLike isTupleLike(Sema &S, SourceLocation Loc, QualType T,
                               llvm::APSInt &Size) {
  EnterExpressionEvaluationContext ContextRAII(S, Sema::ConstantEvaluated);

  DeclarationName Value = S.PP.getIdentifierInfo("value");
  LookupResult R(S, Value, Loc, Sema::LookupOrdinaryName);

  // Form template argument list for tuple_size<T>.
  TemplateArgumentListInfo Args(Loc, Loc);
  Args.addArgument(getTrivialTypeTemplateArgument(S, Loc, T));

  // If there's no tuple_size specialization, it's not tuple-like.
  if (lookupStdTypeTraitMember(S, R, Loc, "tuple_size", Args, /*DiagID*/0))
    return IsTupleLike::NotTupleLike;

  // If we get this far, we've committed to the tuple interpretation, but
  // we can still fail if there actually isn't a usable ::value.

  struct ICEDiagnoser : Sema::VerifyICEDiagnoser {
    LookupResult &R;
    TemplateArgumentListInfo &Args;
    ICEDiagnoser(LookupResult &R, TemplateArgumentListInfo &Args)
        : R(R), Args(Args) {}
    void diagnoseNotICE(Sema &S, SourceLocation Loc, SourceRange SR) {
      S.Diag(Loc, diag::err_decomp_decl_std_tuple_size_not_constant)
          << printTemplateArgs(S.Context.getPrintingPolicy(), Args);
    }
  } Diagnoser(R, Args);

  if (R.empty()) {
    Diagnoser.diagnoseNotICE(S, Loc, SourceRange());
    return IsTupleLike::Error;
  }

  ExprResult E =
      S.BuildDeclarationNameExpr(CXXScopeSpec(), R, /*NeedsADL*/false);
  if (E.isInvalid())
    return IsTupleLike::Error;

  E = S.VerifyIntegerConstantExpression(E.get(), &Size, Diagnoser, false);
  if (E.isInvalid())
    return IsTupleLike::Error;

  return IsTupleLike::TupleLike;
}
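// For reference, the tuple-like protocol probed above and consumed below
// expects user code of roughly this shape (a hypothetical example; any
// standard-library tuple, pair or array also satisfies it):
//
//   struct P { int x; float y; };
//   namespace std {
//     template<> struct tuple_size<P> : integral_constant<size_t, 2> {};
//     template<> struct tuple_element<0, P> { using type = int; };
//     template<> struct tuple_element<1, P> { using type = float; };
//   }
//   template<std::size_t I> auto get(const P &p);  // found by ADL
//
//   auto [a, b] = P{1, 2.0f};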
/// \return std::tuple_element<I, T>::type.
static QualType getTupleLikeElementType(Sema &S, SourceLocation Loc,
                                        unsigned I, QualType T) {
  // Form template argument list for tuple_element<I, T>.
  TemplateArgumentListInfo Args(Loc, Loc);
  Args.addArgument(
      getTrivialIntegralTemplateArgument(S, Loc, S.Context.getSizeType(), I));
  Args.addArgument(getTrivialTypeTemplateArgument(S, Loc, T));

  DeclarationName TypeDN = S.PP.getIdentifierInfo("type");
  LookupResult R(S, TypeDN, Loc, Sema::LookupOrdinaryName);
  if (lookupStdTypeTraitMember(
          S, R, Loc, "tuple_element", Args,
          diag::err_decomp_decl_std_tuple_element_not_specialized))
    return QualType();

  auto *TD = R.getAsSingle<TypeDecl>();
  if (!TD) {
    R.suppressDiagnostics();
    S.Diag(Loc, diag::err_decomp_decl_std_tuple_element_not_specialized)
        << printTemplateArgs(S.Context.getPrintingPolicy(), Args);
    if (!R.empty())
      S.Diag(R.getRepresentativeDecl()->getLocation(),
             diag::note_declared_at);
    return QualType();
  }

  return S.Context.getTypeDeclType(TD);
}

namespace {
struct BindingDiagnosticTrap {
  Sema &S;
  DiagnosticErrorTrap Trap;
  BindingDecl *BD;

  BindingDiagnosticTrap(Sema &S, BindingDecl *BD)
      : S(S), Trap(S.Diags), BD(BD) {}
  ~BindingDiagnosticTrap() {
    if (Trap.hasErrorOccurred())
      S.Diag(BD->getLocation(), diag::note_in_binding_decl_init) << BD;
  }
};
}

static bool checkTupleLikeDecomposition(Sema &S,
                                        ArrayRef<BindingDecl *> Bindings,
                                        VarDecl *Src, QualType DecompType,
                                        const llvm::APSInt &TupleSize) {
  if ((int64_t)Bindings.size() != TupleSize) {
    S.Diag(Src->getLocation(), diag::err_decomp_decl_wrong_number_bindings)
        << DecompType << (unsigned)Bindings.size() << TupleSize.toString(10)
        << (TupleSize < Bindings.size());
    return true;
  }

  if (Bindings.empty())
    return false;

  DeclarationName GetDN = S.PP.getIdentifierInfo("get");

  // [dcl.decomp]p3:
  //   The unqualified-id get is looked up in the scope of E by class member
  //   access lookup
  LookupResult MemberGet(S, GetDN, Src->getLocation(), Sema::LookupMemberName);
  bool UseMemberGet = false;
  if (S.isCompleteType(Src->getLocation(), DecompType)) {
    if (auto *RD = DecompType->getAsCXXRecordDecl())
      S.LookupQualifiedName(MemberGet, RD);
    if (MemberGet.isAmbiguous())
      return true;
    UseMemberGet = !MemberGet.empty();
    S.FilterAcceptableTemplateNames(MemberGet);
  }

  unsigned I = 0;
  for (auto *B : Bindings) {
    BindingDiagnosticTrap Trap(S, B);
    SourceLocation Loc = B->getLocation();

    ExprResult E = S.BuildDeclRefExpr(Src, DecompType, VK_LValue, Loc);
    if (E.isInvalid())
      return true;

    //   e is an lvalue if the type of the entity is an lvalue reference and
    //   an xvalue otherwise
    if (!Src->getType()->isLValueReferenceType())
      E = ImplicitCastExpr::Create(S.Context, E.get()->getType(), CK_NoOp,
                                   E.get(), nullptr, VK_XValue);

    TemplateArgumentListInfo Args(Loc, Loc);
    Args.addArgument(
        getTrivialIntegralTemplateArgument(S, Loc, S.Context.getSizeType(),
                                           I));

    if (UseMemberGet) {
      //   if [lookup of member get] finds at least one declaration, the
      //   initializer is e.get<i-1>().
      E = S.BuildMemberReferenceExpr(E.get(), DecompType, Loc, false,
                                     CXXScopeSpec(), SourceLocation(), nullptr,
                                     MemberGet, &Args, nullptr);
      if (E.isInvalid())
        return true;

      E = S.ActOnCallExpr(nullptr, E.get(), Loc, None, Loc);
    } else {
      //   Otherwise, the initializer is get<i-1>(e), where get is looked up
      //   in the associated namespaces.
      Expr *Get = UnresolvedLookupExpr::Create(
          S.Context, nullptr, NestedNameSpecifierLoc(), SourceLocation(),
          DeclarationNameInfo(GetDN, Loc), /*RequiresADL*/true, &Args,
          UnresolvedSetIterator(), UnresolvedSetIterator());

      Expr *Arg = E.get();
      E = S.ActOnCallExpr(nullptr, Get, Loc, Arg, Loc);
    }
    if (E.isInvalid())
      return true;
    Expr *Init = E.get();

    //   Given the type T designated by std::tuple_element<i - 1, E>::type,
    QualType T = getTupleLikeElementType(S, Loc, I, DecompType);
    if (T.isNull())
      return true;

    //   each vi is a variable of type "reference to T" initialized with the
    //   initializer, where the reference is an lvalue reference if the
    //   initializer is an lvalue and an rvalue reference otherwise
    QualType RefType =
        S.BuildReferenceType(T, E.get()->isLValue(), Loc, B->getDeclName());
    if (RefType.isNull())
      return true;
    auto *RefVD = VarDecl::Create(
        S.Context, Src->getDeclContext(), Loc, Loc,
        B->getDeclName().getAsIdentifierInfo(), RefType,
        S.Context.getTrivialTypeSourceInfo(T, Loc), Src->getStorageClass());
    RefVD->setLexicalDeclContext(Src->getLexicalDeclContext());
    RefVD->setTSCSpec(Src->getTSCSpec());
    RefVD->setImplicit();
    if (Src->isInlineSpecified())
      RefVD->setInlineSpecified();
    RefVD->getLexicalDeclContext()->addHiddenDecl(RefVD);

    InitializedEntity Entity = InitializedEntity::InitializeBinding(RefVD);
    InitializationKind Kind = InitializationKind::CreateCopy(Loc, Loc);
    InitializationSequence Seq(S, Entity, Kind, Init);
    E = Seq.Perform(S, Entity, Kind, Init);
    if (E.isInvalid())
      return true;
    E = S.ActOnFinishFullExpr(E.get(), Loc);
    if (E.isInvalid())
      return true;
    RefVD->setInit(E.get());
    RefVD->checkInitIsICE();

    E = S.BuildDeclarationNameExpr(CXXScopeSpec(),
                                   DeclarationNameInfo(B->getDeclName(), Loc),
                                   RefVD);
    if (E.isInvalid())
      return true;

    B->setBinding(T, E.get());
    I++;
  }

  return false;
}

/// Find the base class to decompose in a built-in decomposition of a class
/// type. This base class search is, unfortunately, not quite like any other
/// that we perform anywhere else in C++.
static const CXXRecordDecl *findDecomposableBaseClass(Sema &S,
                                                      SourceLocation Loc,
                                                      const CXXRecordDecl *RD,
                                                      CXXCastPath &BasePath) {
  auto BaseHasFields = [](const CXXBaseSpecifier *Specifier,
                          CXXBasePath &Path) {
    return Specifier->getType()->getAsCXXRecordDecl()->hasDirectFields();
  };

  const CXXRecordDecl *ClassWithFields = nullptr;
  if (RD->hasDirectFields())
    // [dcl.decomp]p4:
    //   Otherwise, all of E's non-static data members shall be public direct
    //   members of E ...
    ClassWithFields = RD;
  else {
    //   ... or of ...
    CXXBasePaths Paths;
    Paths.setOrigin(const_cast<CXXRecordDecl*>(RD));
    if (!RD->lookupInBases(BaseHasFields, Paths)) {
      // If no classes have fields, just decompose RD itself. (This will work
      // if and only if zero bindings were provided.)
      return RD;
    }

    CXXBasePath *BestPath = nullptr;
    for (auto &P : Paths) {
      if (!BestPath)
        BestPath = &P;
      else if (!S.Context.hasSameType(P.back().Base->getType(),
                                      BestPath->back().Base->getType())) {
        //   ... the same ...
        S.Diag(Loc, diag::err_decomp_decl_multiple_bases_with_members)
          << false << RD << BestPath->back().Base->getType()
          << P.back().Base->getType();
        return nullptr;
      } else if (P.Access < BestPath->Access) {
        BestPath = &P;
      }
    }

    //   ... unambiguous ...
    QualType BaseType = BestPath->back().Base->getType();
    if (Paths.isAmbiguous(S.Context.getCanonicalType(BaseType))) {
      S.Diag(Loc, diag::err_decomp_decl_ambiguous_base)
        << RD << BaseType << S.getAmbiguousPathsDisplayString(Paths);
      return nullptr;
    }

    //   ... public base class of E.
    if (BestPath->Access != AS_public) {
      S.Diag(Loc, diag::err_decomp_decl_non_public_base)
        << RD << BaseType;
      for (auto &BS : *BestPath) {
        if (BS.Base->getAccessSpecifier() != AS_public) {
          S.Diag(BS.Base->getLocStart(), diag::note_access_constrained_by_path)
            << (BS.Base->getAccessSpecifier() == AS_protected)
            << (BS.Base->getAccessSpecifierAsWritten() == AS_none);
          break;
        }
      }
      return nullptr;
    }

    ClassWithFields = BaseType->getAsCXXRecordDecl();
    S.BuildBasePathArray(Paths, BasePath);
  }

  // The above search did not check whether the selected class itself has base
  // classes with fields, so check that now.
  CXXBasePaths Paths;
  if (ClassWithFields->lookupInBases(BaseHasFields, Paths)) {
    S.Diag(Loc, diag::err_decomp_decl_multiple_bases_with_members)
      << (ClassWithFields == RD) << RD << ClassWithFields
      << Paths.front().back().Base->getType();
    return nullptr;
  }

  return ClassWithFields;
}

static bool checkMemberDecomposition(Sema &S, ArrayRef<BindingDecl*> Bindings,
                                     ValueDecl *Src, QualType DecompType,
                                     const CXXRecordDecl *RD) {
  CXXCastPath BasePath;
  RD = findDecomposableBaseClass(S, Src->getLocation(), RD, BasePath);
  if (!RD)
    return true;
  QualType BaseType = S.Context.getQualifiedType(S.Context.getRecordType(RD),
                                                 DecompType.getQualifiers());

  auto DiagnoseBadNumberOfBindings = [&]() -> bool {
    unsigned NumFields =
        std::count_if(RD->field_begin(), RD->field_end(),
                      [](FieldDecl *FD) { return !FD->isUnnamedBitfield(); });
    assert(Bindings.size() != NumFields);
    S.Diag(Src->getLocation(), diag::err_decomp_decl_wrong_number_bindings)
        << DecompType << (unsigned)Bindings.size() << NumFields
        << (NumFields < Bindings.size());
    return true;
  };

  //   all of E's non-static data members shall be public [...] members,
  //   E shall not have an anonymous union member, ...
  unsigned I = 0;
  for (auto *FD : RD->fields()) {
    if (FD->isUnnamedBitfield())
      continue;

    if (FD->isAnonymousStructOrUnion()) {
      S.Diag(Src->getLocation(), diag::err_decomp_decl_anon_union_member)
        << DecompType << FD->getType()->isUnionType();
      S.Diag(FD->getLocation(), diag::note_declared_at);
      return true;
    }

    // We have a real field to bind.
    if (I >= Bindings.size())
      return DiagnoseBadNumberOfBindings();
    auto *B = Bindings[I++];

    SourceLocation Loc = B->getLocation();
    if (FD->getAccess() != AS_public) {
      S.Diag(Loc, diag::err_decomp_decl_non_public_member) << FD << DecompType;

      // Determine whether the access specifier was explicit.
      bool Implicit = true;
      for (const auto *D : RD->decls()) {
        if (declaresSameEntity(D, FD))
          break;
        if (isa<AccessSpecDecl>(D)) {
          Implicit = false;
          break;
        }
      }

      S.Diag(FD->getLocation(), diag::note_access_natural)
        << (FD->getAccess() == AS_protected) << Implicit;
      return true;
    }

    // Initialize the binding to Src.FD.
    ExprResult E = S.BuildDeclRefExpr(Src, DecompType, VK_LValue, Loc);
    if (E.isInvalid())
      return true;
    E = S.ImpCastExprToType(E.get(), BaseType, CK_UncheckedDerivedToBase,
                            VK_LValue, &BasePath);
    if (E.isInvalid())
      return true;
    E = S.BuildFieldReferenceExpr(E.get(), /*IsArrow*/ false, Loc,
                                  CXXScopeSpec(), FD,
                                  DeclAccessPair::make(FD, FD->getAccess()),
                                  DeclarationNameInfo(FD->getDeclName(), Loc));
    if (E.isInvalid())
      return true;

    // If the type of the member is T, the referenced type is cv T, where cv is
    // the cv-qualification of the decomposition expression.
    //
    // FIXME: We resolve a defect here: if the field is mutable, we do not add
    // 'const' to the type of the field.
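    //
    // That is (hypothetical user code):
    //
    //   struct S { mutable int m; };
    //   const S s{};
    //   auto &[x] = s;  // 'x' has referenced type 'int', not 'const int'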
    Qualifiers Q = DecompType.getQualifiers();
    if (FD->isMutable())
      Q.removeConst();
    B->setBinding(S.BuildQualifiedType(FD->getType(), Loc, Q), E.get());
  }

  if (I != Bindings.size())
    return DiagnoseBadNumberOfBindings();

  return false;
}

void Sema::CheckCompleteDecompositionDeclaration(DecompositionDecl *DD) {
  QualType DecompType = DD->getType();

  // If the type of the decomposition is dependent, then so is the type of
  // each binding.
  if (DecompType->isDependentType()) {
    for (auto *B : DD->bindings())
      B->setType(Context.DependentTy);
    return;
  }

  DecompType = DecompType.getNonReferenceType();
  ArrayRef<BindingDecl*> Bindings = DD->bindings();

  // C++1z [dcl.decomp]/2:
  //   If E is an array type [...]
  // As an extension, we also support decomposition of built-in complex and
  // vector types.
  if (auto *CAT = Context.getAsConstantArrayType(DecompType)) {
    if (checkArrayDecomposition(*this, Bindings, DD, DecompType, CAT))
      DD->setInvalidDecl();
    return;
  }
  if (auto *VT = DecompType->getAs<VectorType>()) {
    if (checkVectorDecomposition(*this, Bindings, DD, DecompType, VT))
      DD->setInvalidDecl();
    return;
  }
  if (auto *CT = DecompType->getAs<ComplexType>()) {
    if (checkComplexDecomposition(*this, Bindings, DD, DecompType, CT))
      DD->setInvalidDecl();
    return;
  }

  // C++1z [dcl.decomp]/3:
  //   if the expression std::tuple_size<E>::value is a well-formed integral
  //   constant expression, [...]
  llvm::APSInt TupleSize(32);
  switch (isTupleLike(*this, DD->getLocation(), DecompType, TupleSize)) {
  case IsTupleLike::Error:
    DD->setInvalidDecl();
    return;

  case IsTupleLike::TupleLike:
    if (checkTupleLikeDecomposition(*this, Bindings, DD, DecompType,
                                    TupleSize))
      DD->setInvalidDecl();
    return;

  case IsTupleLike::NotTupleLike:
    break;
  }

  // C++1z [dcl.dcl]/8:
  //   [E shall be of array or non-union class type]
  CXXRecordDecl *RD = DecompType->getAsCXXRecordDecl();
  if (!RD || RD->isUnion()) {
    Diag(DD->getLocation(), diag::err_decomp_decl_unbindable_type)
        << DD << !RD << DecompType;
    DD->setInvalidDecl();
    return;
  }

  // C++1z [dcl.decomp]/4:
  //   all of E's non-static data members shall be [...] direct members of
  //   E or of the same unambiguous public base class of E, ...
  if (checkMemberDecomposition(*this, Bindings, DD, DecompType, RD))
    DD->setInvalidDecl();
}

/// \brief Merge the exception specifications of two variable declarations.
///
/// This is called when there's a redeclaration of a VarDecl. The function
/// checks if the redeclaration might have an exception specification and
/// validates compatibility and merges the specs if necessary.
void Sema::MergeVarDeclExceptionSpecs(VarDecl *New, VarDecl *Old) {
  // Shortcut if exceptions are disabled.
  if (!getLangOpts().CXXExceptions)
    return;

  assert(Context.hasSameType(New->getType(), Old->getType()) &&
         "Should only be called if types are otherwise the same.");

  QualType NewType = New->getType();
  QualType OldType = Old->getType();

  // We're only interested in pointers and references to functions, as well
  // as pointers to member functions.
  if (const ReferenceType *R = NewType->getAs<ReferenceType>()) {
    NewType = R->getPointeeType();
    OldType = OldType->getAs<ReferenceType>()->getPointeeType();
  } else if (const PointerType *P = NewType->getAs<PointerType>()) {
    NewType = P->getPointeeType();
    OldType = OldType->getAs<PointerType>()->getPointeeType();
  } else if (const MemberPointerType *M =
                 NewType->getAs<MemberPointerType>()) {
    NewType = M->getPointeeType();
    OldType = OldType->getAs<MemberPointerType>()->getPointeeType();
  }

  if (!NewType->isFunctionProtoType())
    return;

  // There's lots of special cases for functions. For function pointers, system
  // libraries are hopefully not as broken so that we don't need these
  // workarounds.
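  //
  // A small sketch of what the check below diagnoses (hypothetical user
  // code):
  //
  //   extern void (*p)() throw(int);
  //   extern void (*p)() throw(int);  // OK: equivalent specification
  //   extern void (*p)() throw();     // error: incompatible specification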
  if (CheckEquivalentExceptionSpec(
        OldType->getAs<FunctionProtoType>(), Old->getLocation(),
        NewType->getAs<FunctionProtoType>(), New->getLocation())) {
    New->setInvalidDecl();
  }
}

/// CheckCXXDefaultArguments - Verify that the default arguments for a
/// function declaration are well-formed according to C++
/// [dcl.fct.default].
void Sema::CheckCXXDefaultArguments(FunctionDecl *FD) {
  unsigned NumParams = FD->getNumParams();
  unsigned p;

  // Find first parameter with a default argument
  for (p = 0; p < NumParams; ++p) {
    ParmVarDecl *Param = FD->getParamDecl(p);
    if (Param->hasDefaultArg())
      break;
  }

  // C++11 [dcl.fct.default]p4:
  //   In a given function declaration, each parameter subsequent to a
  //   parameter with a default argument shall have a default argument
  //   supplied in this or a previous declaration or shall be a function
  //   parameter pack. A default argument shall not be redefined by a later
  //   declaration (not even to the same value).
  unsigned LastMissingDefaultArg = 0;
  for (; p < NumParams; ++p) {
    ParmVarDecl *Param = FD->getParamDecl(p);
    if (!Param->hasDefaultArg() && !Param->isParameterPack()) {
      if (Param->isInvalidDecl())
        /* We already complained about this parameter. */;
      else if (Param->getIdentifier())
        Diag(Param->getLocation(),
             diag::err_param_default_argument_missing_name)
          << Param->getIdentifier();
      else
        Diag(Param->getLocation(),
             diag::err_param_default_argument_missing);

      LastMissingDefaultArg = p;
    }
  }

  if (LastMissingDefaultArg > 0) {
    // Some default arguments were missing. Clear out all of the
    // default arguments up to (and including) the last missing
    // default argument, so that we leave the function parameters
    // in a semantically valid state.
    for (p = 0; p <= LastMissingDefaultArg; ++p) {
      ParmVarDecl *Param = FD->getParamDecl(p);
      if (Param->hasDefaultArg()) {
        Param->setDefaultArg(nullptr);
      }
    }
  }
}

// CheckConstexprParameterTypes - Check whether a function's parameter types
// are all literal types. If so, return true. If not, produce a suitable
// diagnostic and return false.
static bool CheckConstexprParameterTypes(Sema &SemaRef,
                                         const FunctionDecl *FD) {
  unsigned ArgIndex = 0;
  const FunctionProtoType *FT = FD->getType()->getAs<FunctionProtoType>();
  for (FunctionProtoType::param_type_iterator i = FT->param_type_begin(),
                                              e = FT->param_type_end();
       i != e; ++i, ++ArgIndex) {
    const ParmVarDecl *PD = FD->getParamDecl(ArgIndex);
    SourceLocation ParamLoc = PD->getLocation();
    if (!(*i)->isDependentType() &&
        SemaRef.RequireLiteralType(ParamLoc, *i,
                                   diag::err_constexpr_non_literal_param,
                                   ArgIndex+1, PD->getSourceRange(),
                                   isa<CXXConstructorDecl>(FD)))
      return false;
  }
  return true;
}

/// \brief Get diagnostic %select index for tag kind for
/// record diagnostic message.
/// WARNING: Indexes apply to particular diagnostics only!
///
/// \returns diagnostic %select index.
static unsigned getRecordDiagFromTagKind(TagTypeKind Tag) {
  switch (Tag) {
  case TTK_Struct: return 0;
  case TTK_Interface: return 1;
  case TTK_Class: return 2;
  default: llvm_unreachable("Invalid tag kind for record diagnostic!");
  }
}

// CheckConstexprFunctionDecl - Check whether a function declaration satisfies
// the requirements of a constexpr function definition or a constexpr
// constructor definition. If so, return true. If not, produce appropriate
// diagnostics and return false.
//
// This implements C++11 [dcl.constexpr]p3,4, as amended by DR1360.
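//
// For example (hypothetical user code), both declarations below are rejected
// by this check:
//
//   struct B {};
//   struct D : virtual B { constexpr D() {} };  // error: virtual base class
//   struct X {
//     constexpr virtual int f() const { return 0; }  // error: virtual
//   };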
bool Sema::CheckConstexprFunctionDecl(const FunctionDecl *NewFD) {
  const CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(NewFD);
  if (MD && MD->isInstance()) {
    // C++11 [dcl.constexpr]p4:
    //  The definition of a constexpr constructor shall satisfy the following
    //  constraints:
    //  - the class shall not have any virtual base classes;
    const CXXRecordDecl *RD = MD->getParent();
    if (RD->getNumVBases()) {
      Diag(NewFD->getLocation(), diag::err_constexpr_virtual_base)
        << isa<CXXConstructorDecl>(NewFD)
        << getRecordDiagFromTagKind(RD->getTagKind()) << RD->getNumVBases();
      for (const auto &I : RD->vbases())
        Diag(I.getLocStart(),
             diag::note_constexpr_virtual_base_here) << I.getSourceRange();
      return false;
    }
  }

  if (!isa<CXXConstructorDecl>(NewFD)) {
    // C++11 [dcl.constexpr]p3:
    //  The definition of a constexpr function shall satisfy the following
    //  constraints:
    // - it shall not be virtual;
    const CXXMethodDecl *Method = dyn_cast<CXXMethodDecl>(NewFD);
    if (Method && Method->isVirtual()) {
      Method = Method->getCanonicalDecl();
      Diag(Method->getLocation(), diag::err_constexpr_virtual);

      // If it's not obvious why this function is virtual, find an overridden
      // function which uses the 'virtual' keyword.
      const CXXMethodDecl *WrittenVirtual = Method;
      while (!WrittenVirtual->isVirtualAsWritten())
        WrittenVirtual = *WrittenVirtual->begin_overridden_methods();
      if (WrittenVirtual != Method)
        Diag(WrittenVirtual->getLocation(),
             diag::note_overridden_virtual_function);
      return false;
    }

    // - its return type shall be a literal type;
    QualType RT = NewFD->getReturnType();
    if (!RT->isDependentType() &&
        RequireLiteralType(NewFD->getLocation(), RT,
                           diag::err_constexpr_non_literal_return))
      return false;
  }

  // - each of its parameter types shall be a literal type;
  if (!CheckConstexprParameterTypes(*this, NewFD))
    return false;

  return true;
}

/// Check the given declaration statement is legal within a constexpr function
/// body. C++11 [dcl.constexpr]p3,p4, and C++1y [dcl.constexpr]p3.
///
/// \return true if the body is OK (maybe only as an extension), false if we
/// have diagnosed a problem.
static bool CheckConstexprDeclStmt(Sema &SemaRef, const FunctionDecl *Dcl,
                                   DeclStmt *DS, SourceLocation &Cxx1yLoc) {
  // C++11 [dcl.constexpr]p3 and p4:
  //  The definition of a constexpr function(p3) or constructor(p4) [...] shall
  //  contain only
  for (const auto *DclIt : DS->decls()) {
    switch (DclIt->getKind()) {
    case Decl::StaticAssert:
    case Decl::Using:
    case Decl::UsingShadow:
    case Decl::UsingDirective:
    case Decl::UnresolvedUsingTypename:
    case Decl::UnresolvedUsingValue:
      //   - static_assert-declarations
      //   - using-declarations,
      //   - using-directives,
      continue;

    case Decl::Typedef:
    case Decl::TypeAlias: {
      //   - typedef declarations and alias-declarations that do not define
      //     classes or enumerations,
      const auto *TN = cast<TypedefNameDecl>(DclIt);
      if (TN->getUnderlyingType()->isVariablyModifiedType()) {
        // Don't allow variably-modified types in constexpr functions.
        TypeLoc TL = TN->getTypeSourceInfo()->getTypeLoc();
        SemaRef.Diag(TL.getBeginLoc(), diag::err_constexpr_vla)
          << TL.getSourceRange() << TL.getType()
          << isa<CXXConstructorDecl>(Dcl);
        return false;
      }
      continue;
    }

    case Decl::Enum:
    case Decl::CXXRecord:
      // C++1y allows types to be defined, not just declared.
      if (cast<TagDecl>(DclIt)->isThisDeclarationADefinition())
        SemaRef.Diag(DS->getLocStart(),
                     SemaRef.getLangOpts().CPlusPlus14
                       ? diag::warn_cxx11_compat_constexpr_type_definition
                       : diag::ext_constexpr_type_definition)
          << isa<CXXConstructorDecl>(Dcl);
      continue;

    case Decl::EnumConstant:
    case Decl::IndirectField:
    case Decl::ParmVar:
      // These can only appear with other declarations which are banned in
      // C++11 and permitted in C++1y, so ignore them.
      continue;

    case Decl::Var:
    case Decl::Decomposition: {
      // C++1y [dcl.constexpr]p3 allows anything except:
      //   a definition of a variable of non-literal type or of static or
      //   thread storage duration or for which no initialization is performed.
      const auto *VD = cast<VarDecl>(DclIt);
      if (VD->isThisDeclarationADefinition()) {
        if (VD->isStaticLocal()) {
          SemaRef.Diag(VD->getLocation(),
                       diag::err_constexpr_local_var_static)
            << isa<CXXConstructorDecl>(Dcl)
            << (VD->getTLSKind() == VarDecl::TLS_Dynamic);
          return false;
        }
        if (!VD->getType()->isDependentType() &&
            SemaRef.RequireLiteralType(
              VD->getLocation(), VD->getType(),
              diag::err_constexpr_local_var_non_literal_type,
              isa<CXXConstructorDecl>(Dcl)))
          return false;
        if (!VD->getType()->isDependentType() &&
            !VD->hasInit() && !VD->isCXXForRangeDecl()) {
          SemaRef.Diag(VD->getLocation(),
                       diag::err_constexpr_local_var_no_init)
            << isa<CXXConstructorDecl>(Dcl);
          return false;
        }
      }
      SemaRef.Diag(VD->getLocation(),
                   SemaRef.getLangOpts().CPlusPlus14
                    ? diag::warn_cxx11_compat_constexpr_local_var
                    : diag::ext_constexpr_local_var)
        << isa<CXXConstructorDecl>(Dcl);
      continue;
    }

    case Decl::NamespaceAlias:
    case Decl::Function:
      // These are disallowed in C++11 and permitted in C++1y. Allow them
      // everywhere as an extension.
      if (!Cxx1yLoc.isValid())
        Cxx1yLoc = DS->getLocStart();
      continue;

    default:
      SemaRef.Diag(DS->getLocStart(), diag::err_constexpr_body_invalid_stmt)
        << isa<CXXConstructorDecl>(Dcl);
      return false;
    }
  }

  return true;
}

/// Check that the given field is initialized within a constexpr constructor.
///
/// \param Dcl The constexpr constructor being checked.
/// \param Field The field being checked. This may be a member of an anonymous
///        struct or union nested within the class being checked.
/// \param Inits All declarations, including anonymous struct/union members and
///        indirect members, for which any initialization was provided.
/// \param Diagnosed Set to true if an error is produced.
static void CheckConstexprCtorInitializer(Sema &SemaRef,
                                          const FunctionDecl *Dcl,
                                          FieldDecl *Field,
                                          llvm::SmallSet<Decl*, 16> &Inits,
                                          bool &Diagnosed) {
  if (Field->isInvalidDecl())
    return;

  if (Field->isUnnamedBitfield())
    return;

  // Anonymous unions with no variant members and empty anonymous structs do
  // not need to be explicitly initialized. FIXME: Anonymous structs that
  // contain no indirect fields don't need initializing.
  if (Field->isAnonymousStructOrUnion() &&
      (Field->getType()->isUnionType()
           ? !Field->getType()->getAsCXXRecordDecl()->hasVariantMembers()
           : Field->getType()->getAsCXXRecordDecl()->isEmpty()))
    return;

  if (!Inits.count(Field)) {
    if (!Diagnosed) {
      SemaRef.Diag(Dcl->getLocation(), diag::err_constexpr_ctor_missing_init);
      Diagnosed = true;
    }
    SemaRef.Diag(Field->getLocation(), diag::note_constexpr_ctor_missing_init);
  } else if (Field->isAnonymousStructOrUnion()) {
    const RecordDecl *RD = Field->getType()->castAs<RecordType>()->getDecl();
    for (auto *I : RD->fields())
      // If an anonymous union contains an anonymous struct of which any member
      // is initialized, all members must be initialized.
      if (!RD->isUnion() || Inits.count(I))
        CheckConstexprCtorInitializer(SemaRef, Dcl, I, Inits, Diagnosed);
  }
}

/// Check the provided statement is allowed in a constexpr function
/// definition.
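///
/// For instance (hypothetical user code), a C++11 constexpr body may contain
/// typedefs, static_asserts and exactly one return statement:
///
///   constexpr int sq(int x) { typedef int T; return T(x) * x; }  // OK
///
/// whereas a local variable or a loop is only accepted as a C++1y extension:
///
///   constexpr int f(int x) { int y = x; return y; }  // C++1y / extension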
static bool
CheckConstexprFunctionStmt(Sema &SemaRef, const FunctionDecl *Dcl, Stmt *S,
                           SmallVectorImpl<SourceLocation> &ReturnStmts,
                           SourceLocation &Cxx1yLoc) {
  // - its function-body shall be [...] a compound-statement that contains only
  switch (S->getStmtClass()) {
  case Stmt::NullStmtClass:
    //   - null statements,
    return true;

  case Stmt::DeclStmtClass:
    //   - static_assert-declarations
    //   - using-declarations,
    //   - using-directives,
    //   - typedef declarations and alias-declarations that do not define
    //     classes or enumerations,
    if (!CheckConstexprDeclStmt(SemaRef, Dcl, cast<DeclStmt>(S), Cxx1yLoc))
      return false;
    return true;

  case Stmt::ReturnStmtClass:
    //   - and exactly one return statement;
    if (isa<CXXConstructorDecl>(Dcl)) {
      // C++1y allows return statements in constexpr constructors.
      if (!Cxx1yLoc.isValid())
        Cxx1yLoc = S->getLocStart();
      return true;
    }

    ReturnStmts.push_back(S->getLocStart());
    return true;

  case Stmt::CompoundStmtClass: {
    // C++1y allows compound-statements.
    if (!Cxx1yLoc.isValid())
      Cxx1yLoc = S->getLocStart();

    CompoundStmt *CompStmt = cast<CompoundStmt>(S);
    for (auto *BodyIt : CompStmt->body()) {
      if (!CheckConstexprFunctionStmt(SemaRef, Dcl, BodyIt, ReturnStmts,
                                      Cxx1yLoc))
        return false;
    }
    return true;
  }

  case Stmt::AttributedStmtClass:
    if (!Cxx1yLoc.isValid())
      Cxx1yLoc = S->getLocStart();
    return true;

  case Stmt::IfStmtClass: {
    // C++1y allows if-statements.
    if (!Cxx1yLoc.isValid())
      Cxx1yLoc = S->getLocStart();

    IfStmt *If = cast<IfStmt>(S);
    if (!CheckConstexprFunctionStmt(SemaRef, Dcl, If->getThen(), ReturnStmts,
                                    Cxx1yLoc))
      return false;
    if (If->getElse() &&
        !CheckConstexprFunctionStmt(SemaRef, Dcl, If->getElse(), ReturnStmts,
                                    Cxx1yLoc))
      return false;
    return true;
  }

  case Stmt::WhileStmtClass:
  case Stmt::DoStmtClass:
  case Stmt::ForStmtClass:
  case Stmt::CXXForRangeStmtClass:
  case Stmt::ContinueStmtClass:
    // C++1y allows all of these. We don't allow them as extensions in C++11,
    // because they don't make sense without variable mutation.
    if (!SemaRef.getLangOpts().CPlusPlus14)
      break;
    if (!Cxx1yLoc.isValid())
      Cxx1yLoc = S->getLocStart();
    for (Stmt *SubStmt : S->children())
      if (SubStmt &&
          !CheckConstexprFunctionStmt(SemaRef, Dcl, SubStmt, ReturnStmts,
                                      Cxx1yLoc))
        return false;
    return true;

  case Stmt::SwitchStmtClass:
  case Stmt::CaseStmtClass:
  case Stmt::DefaultStmtClass:
  case Stmt::BreakStmtClass:
    // C++1y allows switch-statements, and since they don't need variable
    // mutation, we can reasonably allow them in C++11 as an extension.
    if (!Cxx1yLoc.isValid())
      Cxx1yLoc = S->getLocStart();
    for (Stmt *SubStmt : S->children())
      if (SubStmt &&
          !CheckConstexprFunctionStmt(SemaRef, Dcl, SubStmt, ReturnStmts,
                                      Cxx1yLoc))
        return false;
    return true;

  default:
    if (!isa<Expr>(S))
      break;

    // C++1y allows expression-statements.
    if (!Cxx1yLoc.isValid())
      Cxx1yLoc = S->getLocStart();
    return true;
  }

  SemaRef.Diag(S->getLocStart(), diag::err_constexpr_body_invalid_stmt)
    << isa<CXXConstructorDecl>(Dcl);
  return false;
}

/// Check the body for the given constexpr function declaration only contains
/// the permitted types of statement. C++11 [dcl.constexpr]p3,p4.
///
/// \return true if the body is OK, false if we have diagnosed a problem.
bool Sema::CheckConstexprFunctionBody(const FunctionDecl *Dcl, Stmt *Body) {
  if (isa<CXXTryStmt>(Body)) {
    // C++11 [dcl.constexpr]p3:
    //  The definition of a constexpr function shall satisfy the following
    //  constraints: [...]
    // - its function-body shall be = delete, = default, or a
    //   compound-statement
    //
    // C++11 [dcl.constexpr]p4:
    //  In the definition of a constexpr constructor, [...]
    // - its function-body shall not be a function-try-block;
    Diag(Body->getLocStart(), diag::err_constexpr_function_try_block)
      << isa<CXXConstructorDecl>(Dcl);
    return false;
  }

  SmallVector<SourceLocation, 4> ReturnStmts;

  // - its function-body shall be [...] a compound-statement that contains only
  //   [... list of cases ...]
  CompoundStmt *CompBody = cast<CompoundStmt>(Body);
  SourceLocation Cxx1yLoc;
  for (auto *BodyIt : CompBody->body()) {
    if (!CheckConstexprFunctionStmt(*this, Dcl, BodyIt, ReturnStmts, Cxx1yLoc))
      return false;
  }

  if (Cxx1yLoc.isValid())
    Diag(Cxx1yLoc,
         getLangOpts().CPlusPlus14
           ? diag::warn_cxx11_compat_constexpr_body_invalid_stmt
           : diag::ext_constexpr_body_invalid_stmt)
      << isa<CXXConstructorDecl>(Dcl);

  if (const CXXConstructorDecl *Constructor
        = dyn_cast<CXXConstructorDecl>(Dcl)) {
    const CXXRecordDecl *RD = Constructor->getParent();
    // DR1359:
    //   - every non-variant non-static data member and base class sub-object
    //     shall be initialized;
    // DR1460:
    //   - if the class is a union having variant members, exactly one of them
    //     shall be initialized;
    if (RD->isUnion()) {
      if (Constructor->getNumCtorInitializers() == 0 &&
          RD->hasVariantMembers()) {
        Diag(Dcl->getLocation(), diag::err_constexpr_union_ctor_no_init);
        return false;
      }
    } else if (!Constructor->isDependentContext() &&
               !Constructor->isDelegatingConstructor()) {
      assert(RD->getNumVBases() == 0 && "constexpr ctor with virtual bases");

      // Skip detailed checking if we have enough initializers, and we would
      // allow at most one initializer per member.
      bool AnyAnonStructUnionMembers = false;
      unsigned Fields = 0;
      for (CXXRecordDecl::field_iterator I = RD->field_begin(),
           E = RD->field_end(); I != E; ++I, ++Fields) {
        if (I->isAnonymousStructOrUnion()) {
          AnyAnonStructUnionMembers = true;
          break;
        }
      }
      // DR1460:
      //   - if the class is a union-like class, but is not a union, for each
      //     of its anonymous union members having variant members, exactly one
      //     of them shall be initialized;
      if (AnyAnonStructUnionMembers ||
          Constructor->getNumCtorInitializers() !=
              RD->getNumBases() + Fields) {
        // Check initialization of non-static data members. Base classes are
        // always initialized so do not need to be checked. Dependent bases
        // might not have initializers in the member initializer list.
        llvm::SmallSet<Decl*, 16> Inits;
        for (const auto *I: Constructor->inits()) {
          if (FieldDecl *FD = I->getMember())
            Inits.insert(FD);
          else if (IndirectFieldDecl *ID = I->getIndirectMember())
            Inits.insert(ID->chain_begin(), ID->chain_end());
        }

        bool Diagnosed = false;
        for (auto *I : RD->fields())
          CheckConstexprCtorInitializer(*this, Dcl, I, Inits, Diagnosed);
        if (Diagnosed)
          return false;
      }
    }
  } else {
    if (ReturnStmts.empty()) {
      // C++1y doesn't require constexpr functions to contain a 'return'
      // statement. We still do, unless the return type might be void, because
      // otherwise if there's no return statement, the function cannot
      // be used in a core constant expression.
      bool OK = getLangOpts().CPlusPlus14 &&
                (Dcl->getReturnType()->isVoidType() ||
                 Dcl->getReturnType()->isDependentType());
      Diag(Dcl->getLocation(),
           OK ? diag::warn_cxx11_compat_constexpr_body_no_return
              : diag::err_constexpr_body_no_return);
      if (!OK)
        return false;
    } else if (ReturnStmts.size() > 1) {
      Diag(ReturnStmts.back(),
           getLangOpts().CPlusPlus14
             ? diag::warn_cxx11_compat_constexpr_body_multiple_return
             : diag::ext_constexpr_body_multiple_return);
      for (unsigned I = 0; I < ReturnStmts.size() - 1; ++I)
        Diag(ReturnStmts[I], diag::note_constexpr_body_previous_return);
    }
  }

  // C++11 [dcl.constexpr]p5:
  //   if no function argument values exist such that the function invocation
  //   substitution would produce a constant expression, the program is
  //   ill-formed; no diagnostic required.
  // C++11 [dcl.constexpr]p3:
  //   - every constructor call and implicit conversion used in initializing
  //     the return value shall be one of those allowed in a constant
  //     expression.
  // C++11 [dcl.constexpr]p4:
  //   - every constructor involved in initializing non-static data members
  //     and base class sub-objects shall be a constexpr constructor.
  SmallVector<PartialDiagnosticAt, 8> Diags;
  if (!Expr::isPotentialConstantExpr(Dcl, Diags)) {
    Diag(Dcl->getLocation(), diag::ext_constexpr_function_never_constant_expr)
      << isa<CXXConstructorDecl>(Dcl);
    for (size_t I = 0, N = Diags.size(); I != N; ++I)
      Diag(Diags[I].first, Diags[I].second);
    // Don't return false here: we allow this for compatibility in
    // system headers.
  }

  return true;
}

/// isCurrentClassName - Determine whether the identifier II is the
/// name of the class type currently being defined. In the case of
/// nested classes, this will only return true if II is the name of
/// the innermost class.
bool Sema::isCurrentClassName(const IdentifierInfo &II, Scope *,
                              const CXXScopeSpec *SS) {
  assert(getLangOpts().CPlusPlus && "No class names in C!");

  CXXRecordDecl *CurDecl;
  if (SS && SS->isSet() && !SS->isInvalid()) {
    DeclContext *DC = computeDeclContext(*SS, true);
    CurDecl = dyn_cast_or_null<CXXRecordDecl>(DC);
  } else
    CurDecl = dyn_cast_or_null<CXXRecordDecl>(CurContext);

  if (CurDecl && CurDecl->getIdentifier())
    return &II == CurDecl->getIdentifier();
  return false;
}

/// \brief Determine whether the identifier II is a typo for the name of
/// the class type currently being defined. If so, update it to the identifier
/// that should have been used.
bool Sema::isCurrentClassNameTypo(IdentifierInfo *&II, const CXXScopeSpec *SS) {
  assert(getLangOpts().CPlusPlus && "No class names in C!");

  if (!getLangOpts().SpellChecking)
    return false;

  CXXRecordDecl *CurDecl;
  if (SS && SS->isSet() && !SS->isInvalid()) {
    DeclContext *DC = computeDeclContext(*SS, true);
    CurDecl = dyn_cast_or_null<CXXRecordDecl>(DC);
  } else
    CurDecl = dyn_cast_or_null<CXXRecordDecl>(CurContext);

  if (CurDecl && CurDecl->getIdentifier() && II != CurDecl->getIdentifier() &&
      3 * II->getName().edit_distance(CurDecl->getIdentifier()->getName())
          < II->getLength()) {
    II = CurDecl->getIdentifier();
    return true;
  }

  return false;
}

/// \brief Determine whether the given class is a base class of the given
/// class, including looking at dependent bases.
static bool findCircularInheritance(const CXXRecordDecl *Class,
                                    const CXXRecordDecl *Current) {
  SmallVector<const CXXRecordDecl*, 8> Queue;

  Class = Class->getCanonicalDecl();
  while (true) {
    for (const auto &I : Current->bases()) {
      CXXRecordDecl *Base = I.getType()->getAsCXXRecordDecl();
      if (!Base)
        continue;

      Base = Base->getDefinition();
      if (!Base)
        continue;

      if (Base->getCanonicalDecl() == Class)
        return true;

      Queue.push_back(Base);
    }

    if (Queue.empty())
      return false;

    Current = Queue.pop_back_val();
  }

  return false;
}

/// \brief Check the validity of a C++ base class specifier.
///
/// \returns a new CXXBaseSpecifier if well-formed, emits diagnostics
/// and returns NULL otherwise.
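///
/// For example (hypothetical user code), base-specifiers like the following
/// are rejected here:
///
///   union U {};
///   struct A : U {};        // error: union used as a base class
///   struct F final {};
///   struct G : F {};        // error: base class 'F' is marked 'final'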
/// \brief Check the validity of a C++ base class specifier.
///
/// \returns a new CXXBaseSpecifier if well-formed, emits diagnostics
/// and returns NULL otherwise.
CXXBaseSpecifier *
Sema::CheckBaseSpecifier(CXXRecordDecl *Class, SourceRange SpecifierRange,
                         bool Virtual, AccessSpecifier Access,
                         TypeSourceInfo *TInfo, SourceLocation EllipsisLoc) {
  QualType BaseType = TInfo->getType();

  // C++ [class.union]p1:
  //   A union shall not have base classes.
  if (Class->isUnion()) {
    Diag(Class->getLocation(), diag::err_base_clause_on_union)
      << SpecifierRange;
    return nullptr;
  }

  if (EllipsisLoc.isValid() &&
      !TInfo->getType()->containsUnexpandedParameterPack()) {
    Diag(EllipsisLoc, diag::err_pack_expansion_without_parameter_packs)
      << TInfo->getTypeLoc().getSourceRange();
    EllipsisLoc = SourceLocation();
  }

  SourceLocation BaseLoc = TInfo->getTypeLoc().getBeginLoc();

  if (BaseType->isDependentType()) {
    // Make sure that we don't have circular inheritance among our dependent
    // bases. For non-dependent bases, the check for completeness below handles
    // this.
    if (CXXRecordDecl *BaseDecl = BaseType->getAsCXXRecordDecl()) {
      if (BaseDecl->getCanonicalDecl() == Class->getCanonicalDecl() ||
          ((BaseDecl = BaseDecl->getDefinition()) &&
           findCircularInheritance(Class, BaseDecl))) {
        Diag(BaseLoc, diag::err_circular_inheritance)
          << BaseType << Context.getTypeDeclType(Class);

        if (BaseDecl->getCanonicalDecl() != Class->getCanonicalDecl())
          Diag(BaseDecl->getLocation(), diag::note_previous_decl)
            << BaseType;

        return nullptr;
      }
    }

    return new (Context) CXXBaseSpecifier(SpecifierRange, Virtual,
                                          Class->getTagKind() == TTK_Class,
                                          Access, TInfo, EllipsisLoc);
  }

  // Base specifiers must be record types.
  if (!BaseType->isRecordType()) {
    Diag(BaseLoc, diag::err_base_must_be_class) << SpecifierRange;
    return nullptr;
  }

  // C++ [class.union]p1:
  //   A union shall not be used as a base class.
  if (BaseType->isUnionType()) {
    Diag(BaseLoc, diag::err_union_as_base_class) << SpecifierRange;
    return nullptr;
  }

  // For the MS ABI, propagate DLL attributes to base class templates.
  if (Context.getTargetInfo().getCXXABI().isMicrosoft()) {
    if (Attr *ClassAttr = getDLLAttr(Class)) {
      if (auto *BaseTemplate = dyn_cast_or_null<ClassTemplateSpecializationDecl>(
              BaseType->getAsCXXRecordDecl())) {
        propagateDLLAttrToBaseClassTemplate(Class, ClassAttr, BaseTemplate,
                                            BaseLoc);
      }
    }
  }

  // C++ [class.derived]p2:
  //   The class-name in a base-specifier shall not be an incompletely
  //   defined class.
  if (RequireCompleteType(BaseLoc, BaseType,
                          diag::err_incomplete_base_class, SpecifierRange)) {
    Class->setInvalidDecl();
    return nullptr;
  }

  // If the base class is polymorphic or isn't empty, the new one is/isn't,
  // too.
  RecordDecl *BaseDecl = BaseType->getAs<RecordType>()->getDecl();
  assert(BaseDecl && "Record type has no declaration");
  BaseDecl = BaseDecl->getDefinition();
  assert(BaseDecl && "Base type is not incomplete, but has no definition");
  CXXRecordDecl *CXXBaseDecl = cast<CXXRecordDecl>(BaseDecl);
  assert(CXXBaseDecl && "Base type is not a C++ type");

  // A class which contains a flexible array member is not suitable for use as
  // a base class:
  //   - If the layout determines that a base comes before another base,
  //     the flexible array member would index into the subsequent base.
  //   - If the layout determines that base comes before the derived class,
  //     the flexible array member would index into the derived class.
  if (CXXBaseDecl->hasFlexibleArrayMember()) {
    Diag(BaseLoc, diag::err_base_class_has_flexible_array_member)
      << CXXBaseDecl->getDeclName();
    return nullptr;
  }

  // C++ [class]p3:
  //   If a class is marked final and it appears as a base-type-specifier in
  //   base-clause, the program is ill-formed.
  if (FinalAttr *FA = CXXBaseDecl->getAttr<FinalAttr>()) {
    Diag(BaseLoc, diag::err_class_marked_final_used_as_base)
      << CXXBaseDecl->getDeclName()
      << FA->isSpelledAsSealed();
    Diag(CXXBaseDecl->getLocation(), diag::note_entity_declared_at)
        << CXXBaseDecl->getDeclName() << FA->getRange();
    return nullptr;
  }

  if (BaseDecl->isInvalidDecl())
    Class->setInvalidDecl();

  // Create the base specifier.
  return new (Context) CXXBaseSpecifier(SpecifierRange, Virtual,
                                        Class->getTagKind() == TTK_Class,
                                        Access, TInfo, EllipsisLoc);
}
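// For reference, a sketch of code CheckBaseSpecifier rejects (assumed
// examples of the diagnostics above, not literal test input):
//
//   union U {};
//   struct S : U {};     // error: unions cannot be base classes
//
//   struct A final {};
//   struct B : A {};     // error: base 'A' is marked 'final'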
/// ActOnBaseSpecifier - Parsed a base specifier. A base specifier is
/// one entry in the base class list of a class specifier, for
/// example:
///    class foo : public bar, virtual private baz {
/// 'public bar' and 'virtual private baz' are each base-specifiers.
BaseResult
Sema::ActOnBaseSpecifier(Decl *classdecl, SourceRange SpecifierRange,
                         ParsedAttributes &Attributes,
                         bool Virtual, AccessSpecifier Access,
                         ParsedType basetype, SourceLocation BaseLoc,
                         SourceLocation EllipsisLoc) {
  if (!classdecl)
    return true;

  AdjustDeclIfTemplate(classdecl);
  CXXRecordDecl *Class = dyn_cast<CXXRecordDecl>(classdecl);
  if (!Class)
    return true;

  // We haven't yet attached the base specifiers.
  Class->setIsParsingBaseSpecifiers();

  // We do not support any C++11 attributes on base-specifiers yet.
  // Diagnose any attributes we see.
  if (!Attributes.empty()) {
    for (AttributeList *Attr = Attributes.getList(); Attr;
         Attr = Attr->getNext()) {
      if (Attr->isInvalid() ||
          Attr->getKind() == AttributeList::IgnoredAttribute)
        continue;
      Diag(Attr->getLoc(),
           Attr->getKind() == AttributeList::UnknownAttribute
             ? diag::warn_unknown_attribute_ignored
             : diag::err_base_specifier_attribute)
        << Attr->getName();
    }
  }

  TypeSourceInfo *TInfo = nullptr;
  GetTypeFromParser(basetype, &TInfo);

  if (EllipsisLoc.isInvalid() &&
      DiagnoseUnexpandedParameterPack(SpecifierRange.getBegin(), TInfo,
                                      UPPC_BaseType))
    return true;

  if (CXXBaseSpecifier *BaseSpec = CheckBaseSpecifier(Class, SpecifierRange,
                                                      Virtual, Access, TInfo,
                                                      EllipsisLoc))
    return BaseSpec;
  else
    Class->setInvalidDecl();

  return true;
}

/// Use small set to collect indirect bases.  As this is only used
/// locally, there's no need to abstract the small size parameter.
typedef llvm::SmallPtrSet<QualType, 4> IndirectBaseSet;

/// \brief Recursively add the bases of Type.  Don't add Type itself.
static void
NoteIndirectBases(ASTContext &Context, IndirectBaseSet &Set,
                  const QualType &Type) {
  // Even though the incoming type is a base, it might not be
  // a class -- it could be a template parm, for instance.
  if (auto Rec = Type->getAs<RecordType>()) {
    auto Decl = Rec->getAsCXXRecordDecl();

    // Iterate over its bases.
    for (const auto &BaseSpec : Decl->bases()) {
      QualType Base = Context.getCanonicalType(BaseSpec.getType())
        .getUnqualifiedType();
      if (Set.insert(Base).second)
        // If we've not already seen it, recurse.
        NoteIndirectBases(Context, Set, Base);
    }
  }
}
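// NoteIndirectBases feeds the ambiguity check in AttachBaseSpecifiers below.
// A sketch of the situation it exists to catch (assumed example):
//
//   struct A {};
//   struct B : A {};
//   struct C : A, B {};   // warning: direct base 'A' is inaccessible due to
//                         // ambiguity with the indirect 'A' inherited via 'B'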
/// \brief Performs the actual work of attaching the given base class
/// specifiers to a C++ class.
bool Sema::AttachBaseSpecifiers(CXXRecordDecl *Class,
                                MutableArrayRef<CXXBaseSpecifier *> Bases) {
  if (Bases.empty())
    return false;

  // Used to keep track of which base types we have already seen, so
  // that we can properly diagnose redundant direct base types. Note
  // that the key is always the unqualified canonical type of the base
  // class.
  std::map<QualType, CXXBaseSpecifier *, QualTypeOrdering> KnownBaseTypes;

  // Used to track indirect bases so we can see if a direct base is
  // ambiguous.
  IndirectBaseSet IndirectBaseTypes;

  // Copy non-redundant base specifiers into permanent storage.
  unsigned NumGoodBases = 0;
  bool Invalid = false;
  for (unsigned idx = 0; idx < Bases.size(); ++idx) {
    QualType NewBaseType
      = Context.getCanonicalType(Bases[idx]->getType());
    NewBaseType = NewBaseType.getLocalUnqualifiedType();

    CXXBaseSpecifier *&KnownBase = KnownBaseTypes[NewBaseType];
    if (KnownBase) {
      // C++ [class.mi]p3:
      //   A class shall not be specified as a direct base class of a
      //   derived class more than once.
      Diag(Bases[idx]->getLocStart(), diag::err_duplicate_base_class)
        << KnownBase->getType()
        << Bases[idx]->getSourceRange();

      // Delete the duplicate base class specifier; we're going to
      // overwrite its pointer later.
      Context.Deallocate(Bases[idx]);

      Invalid = true;
    } else {
      // Okay, add this new base class.
      KnownBase = Bases[idx];
      Bases[NumGoodBases++] = Bases[idx];

      // Note this base's direct & indirect bases, if there could be ambiguity.
      if (Bases.size() > 1)
        NoteIndirectBases(Context, IndirectBaseTypes, NewBaseType);

      if (const RecordType *Record = NewBaseType->getAs<RecordType>()) {
        const CXXRecordDecl *RD = cast<CXXRecordDecl>(Record->getDecl());
        if (Class->isInterface() &&
              (!RD->isInterface() ||
               KnownBase->getAccessSpecifier() != AS_public)) {
          // The Microsoft extension __interface does not permit bases that
          // are not themselves public interfaces.
          Diag(KnownBase->getLocStart(), diag::err_invalid_base_in_interface)
            << getRecordDiagFromTagKind(RD->getTagKind()) << RD->getName()
            << RD->getSourceRange();
          Invalid = true;
        }
        if (RD->hasAttr<WeakAttr>())
          Class->addAttr(WeakAttr::CreateImplicit(Context));
      }
    }
  }

  // Attach the remaining base class specifiers to the derived class.
  Class->setBases(Bases.data(), NumGoodBases);

  for (unsigned idx = 0; idx < NumGoodBases; ++idx) {
    // Check whether this direct base is inaccessible due to ambiguity.
    QualType BaseType = Bases[idx]->getType();
    CanQualType CanonicalBase = Context.getCanonicalType(BaseType)
      .getUnqualifiedType();

    if (IndirectBaseTypes.count(CanonicalBase)) {
      CXXBasePaths Paths(/*FindAmbiguities=*/true, /*RecordPaths=*/true,
                         /*DetectVirtual=*/true);
      bool found
        = Class->isDerivedFrom(CanonicalBase->getAsCXXRecordDecl(), Paths);
      assert(found);
      (void)found;

      if (Paths.isAmbiguous(CanonicalBase))
        Diag(Bases[idx]->getLocStart(), diag::warn_inaccessible_base_class)
          << BaseType << getAmbiguousPathsDisplayString(Paths)
          << Bases[idx]->getSourceRange();
      else
        assert(Bases[idx]->isVirtual());
    }

    // Delete the base class specifier, since its data has been copied
    // into the CXXRecordDecl.
    Context.Deallocate(Bases[idx]);
  }

  return Invalid;
}

/// ActOnBaseSpecifiers - Attach the given base specifiers to the
/// class, after checking whether there are any duplicate base
/// classes.
void Sema::ActOnBaseSpecifiers(Decl *ClassDecl,
                               MutableArrayRef<CXXBaseSpecifier *> Bases) {
  if (!ClassDecl || Bases.empty())
    return;

  AdjustDeclIfTemplate(ClassDecl);
  AttachBaseSpecifiers(cast<CXXRecordDecl>(ClassDecl), Bases);
}
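// A sketch of the duplicate-base case AttachBaseSpecifiers diagnoses
// (assumed example, not literal test input):
//
//   struct A {};
//   struct B : A, A {};   // error: base class 'A' specified more than once
//                         // (C++ [class.mi]p3)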
/// \brief Determine whether the type \p Derived is a C++ class that is
/// derived from the type \p Base.
bool Sema::IsDerivedFrom(SourceLocation Loc, QualType Derived, QualType Base) {
  if (!getLangOpts().CPlusPlus)
    return false;

  CXXRecordDecl *DerivedRD = Derived->getAsCXXRecordDecl();
  if (!DerivedRD)
    return false;

  CXXRecordDecl *BaseRD = Base->getAsCXXRecordDecl();
  if (!BaseRD)
    return false;

  // If either the base or the derived type is invalid, don't try to
  // check whether one is derived from the other.
  if (BaseRD->isInvalidDecl() || DerivedRD->isInvalidDecl())
    return false;

  // FIXME: In a modules build, do we need the entire path to be visible for
  // us to be able to use the inheritance relationship?
  if (!isCompleteType(Loc, Derived) && !DerivedRD->isBeingDefined())
    return false;

  return DerivedRD->isDerivedFrom(BaseRD);
}

/// \brief Determine whether the type \p Derived is a C++ class that is
/// derived from the type \p Base.
bool Sema::IsDerivedFrom(SourceLocation Loc, QualType Derived, QualType Base,
                         CXXBasePaths &Paths) {
  if (!getLangOpts().CPlusPlus)
    return false;

  CXXRecordDecl *DerivedRD = Derived->getAsCXXRecordDecl();
  if (!DerivedRD)
    return false;

  CXXRecordDecl *BaseRD = Base->getAsCXXRecordDecl();
  if (!BaseRD)
    return false;

  if (!isCompleteType(Loc, Derived) && !DerivedRD->isBeingDefined())
    return false;

  return DerivedRD->isDerivedFrom(BaseRD, Paths);
}

void Sema::BuildBasePathArray(const CXXBasePaths &Paths,
                              CXXCastPath &BasePathArray) {
  assert(BasePathArray.empty() && "Base path array must be empty!");
  assert(Paths.isRecordingPaths() && "Must record paths!");

  const CXXBasePath &Path = Paths.front();

  // We first go backward and check if we have a virtual base.
  // FIXME: It would be better if CXXBasePath had the base specifier for
  // the nearest virtual base.
  unsigned Start = 0;
  for (unsigned I = Path.size(); I != 0; --I) {
    if (Path[I - 1].Base->isVirtual()) {
      Start = I - 1;
      break;
    }
  }

  // Now add all bases.
  for (unsigned I = Start, E = Path.size(); I != E; ++I)
    BasePathArray.push_back(const_cast<CXXBaseSpecifier*>(Path[I].Base));
}
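// CheckDerivedToBaseConversion (below) rejects conversions whose base-class
// subobject is ambiguous. A sketch of such a hierarchy (assumed example):
//
//   struct A {};
//   struct B : A {};
//   struct C : A {};
//   struct D : B, C {};
//   // A &a = d;  // ill-formed: the paths D -> B -> A and D -> C -> A
//   //            // both apply, so the conversion is ambiguous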
/// CheckDerivedToBaseConversion - Check whether the Derived-to-Base
/// conversion (where Derived and Base are class types) is
/// well-formed, meaning that the conversion is unambiguous (and
/// that all of the base classes are accessible). Returns true
/// and emits a diagnostic if the code is ill-formed, returns false
/// otherwise. Loc is the location where this routine should point to
/// if there is an error, and Range is the source range to highlight
/// if there is an error.
///
/// If either InaccessibleBaseID or AmbigiousBaseConvID are 0, then the
/// diagnostic for the respective type of error will be suppressed, but the
/// check for ill-formed code will still be performed.
bool
Sema::CheckDerivedToBaseConversion(QualType Derived, QualType Base,
                                   unsigned InaccessibleBaseID,
                                   unsigned AmbigiousBaseConvID,
                                   SourceLocation Loc, SourceRange Range,
                                   DeclarationName Name,
                                   CXXCastPath *BasePath,
                                   bool IgnoreAccess) {
  // First, determine whether the path from Derived to Base is
  // ambiguous. This is slightly more expensive than checking whether
  // the Derived to Base conversion exists, because here we need to
  // explore multiple paths to determine if there is an ambiguity.
  CXXBasePaths Paths(/*FindAmbiguities=*/true, /*RecordPaths=*/true,
                     /*DetectVirtual=*/false);
  bool DerivationOkay = IsDerivedFrom(Loc, Derived, Base, Paths);
  assert(DerivationOkay &&
         "Can only be used with a derived-to-base conversion");
  (void)DerivationOkay;

  if (!Paths.isAmbiguous(Context.getCanonicalType(Base).getUnqualifiedType())) {
    if (!IgnoreAccess) {
      // Check that the base class can be accessed.
      switch (CheckBaseClassAccess(Loc, Base, Derived, Paths.front(),
                                   InaccessibleBaseID)) {
        case AR_inaccessible:
          return true;
        case AR_accessible:
        case AR_dependent:
        case AR_delayed:
          break;
      }
    }

    // Build a base path if necessary.
    if (BasePath)
      BuildBasePathArray(Paths, *BasePath);
    return false;
  }

  if (AmbigiousBaseConvID) {
    // We know that the derived-to-base conversion is ambiguous, and
    // we're going to produce a diagnostic. Perform the derived-to-base
    // search just one more time to compute all of the possible paths so
    // that we can print them out. This is more expensive than any of the
    // previous derived-to-base checks we've done, but at this point
    // performance isn't as much of an issue.
    Paths.clear();
    Paths.setRecordingPaths(true);
    bool StillOkay = IsDerivedFrom(Loc, Derived, Base, Paths);
    assert(StillOkay && "Can only be used with a derived-to-base conversion");
    (void)StillOkay;

    // Build up a textual representation of the ambiguous paths, e.g.,
    // D -> B -> A, that will be used to illustrate the ambiguous
    // conversions in the diagnostic. We only print one of the paths
    // to each base class subobject.
    std::string PathDisplayStr = getAmbiguousPathsDisplayString(Paths);

    Diag(Loc, AmbigiousBaseConvID)
      << Derived << Base << PathDisplayStr << Range << Name;
  }
  return true;
}

bool
Sema::CheckDerivedToBaseConversion(QualType Derived, QualType Base,
                                   SourceLocation Loc, SourceRange Range,
                                   CXXCastPath *BasePath,
                                   bool IgnoreAccess) {
  return CheckDerivedToBaseConversion(
      Derived, Base, diag::err_upcast_to_inaccessible_base,
      diag::err_ambiguous_derived_to_base_conv, Loc, Range, DeclarationName(),
      BasePath, IgnoreAccess);
}

/// @brief Builds a string representing ambiguous paths from a
/// specific derived class to different subobjects of the same base
/// class.
///
/// This function builds a string that can be used in error messages
/// to show the different paths that one can take through the
/// inheritance hierarchy to go from the derived class to different
/// subobjects of a base class. The result looks something like this:
/// @code
/// struct D -> struct B -> struct A
/// struct D -> struct C -> struct A
/// @endcode
std::string Sema::getAmbiguousPathsDisplayString(CXXBasePaths &Paths) {
  std::string PathDisplayStr;
  std::set<unsigned> DisplayedPaths;
  for (CXXBasePaths::paths_iterator Path = Paths.begin();
       Path != Paths.end(); ++Path) {
    if (DisplayedPaths.insert(Path->back().SubobjectNumber).second) {
      // We haven't displayed a path to this particular base
      // class subobject yet.
      PathDisplayStr += "\n    ";
      PathDisplayStr += Context.getTypeDeclType(Paths.getOrigin()).getAsString();
      for (CXXBasePath::const_iterator Element = Path->begin();
           Element != Path->end(); ++Element)
        PathDisplayStr += " -> " + Element->Base->getType().getAsString();
    }
  }

  return PathDisplayStr;
}

//===----------------------------------------------------------------------===//
// C++ class member Handling
//===----------------------------------------------------------------------===//

/// ActOnAccessSpecifier - Parsed an access specifier followed by a colon.
bool Sema::ActOnAccessSpecifier(AccessSpecifier Access, SourceLocation ASLoc,
                                SourceLocation ColonLoc,
                                AttributeList *Attrs) {
  assert(Access != AS_none && "Invalid kind for syntactic access specifier!");
  AccessSpecDecl *ASDecl = AccessSpecDecl::Create(Context, Access, CurContext,
                                                  ASLoc, ColonLoc);
  CurContext->addHiddenDecl(ASDecl);
  return ProcessAccessDeclAttributeList(ASDecl, Attrs);
}
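// A sketch of the virt-specifier misuse CheckOverrideControl (below) flags
// (assumed example):
//
//   struct B { virtual void f(); void g(); };
//   struct D : B {
//     void f() override;          // OK: overrides B::f
//     void g() override;          // error: 'override' on a non-virtual
//                                 // member function
//     virtual void h() override;  // error: marked 'override' but does not
//                                 // override any base class function
//   };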
/// CheckOverrideControl - Check C++11 override control semantics.
void Sema::CheckOverrideControl(NamedDecl *D) {
  if (D->isInvalidDecl())
    return;

  // We only care about "override" and "final" declarations.
  if (!D->hasAttr<OverrideAttr>() && !D->hasAttr<FinalAttr>())
    return;

  CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(D);

  // We can't check dependent instance methods.
  if (MD && MD->isInstance() &&
      (MD->getParent()->hasAnyDependentBases() ||
       MD->getType()->isDependentType()))
    return;

  if (MD && !MD->isVirtual()) {
    // If we have a non-virtual method, check if it hides a virtual method.
    // (In that case, it's most likely the method has the wrong type.)
    SmallVector<CXXMethodDecl *, 8> OverloadedMethods;
    FindHiddenVirtualMethods(MD, OverloadedMethods);

    if (!OverloadedMethods.empty()) {
      if (OverrideAttr *OA = D->getAttr<OverrideAttr>()) {
        Diag(OA->getLocation(),
             diag::override_keyword_hides_virtual_member_function)
          << "override" << (OverloadedMethods.size() > 1);
      } else if (FinalAttr *FA = D->getAttr<FinalAttr>()) {
        Diag(FA->getLocation(),
             diag::override_keyword_hides_virtual_member_function)
          << (FA->isSpelledAsSealed() ? "sealed" : "final")
          << (OverloadedMethods.size() > 1);
      }
      NoteHiddenVirtualMethods(MD, OverloadedMethods);
      MD->setInvalidDecl();
      return;
    }
    // Fall through into the general case diagnostic.
    // FIXME: We might want to attempt typo correction here.
  }

  if (!MD || !MD->isVirtual()) {
    if (OverrideAttr *OA = D->getAttr<OverrideAttr>()) {
      Diag(OA->getLocation(),
           diag::override_keyword_only_allowed_on_virtual_member_functions)
        << "override" << FixItHint::CreateRemoval(OA->getLocation());
      D->dropAttr<OverrideAttr>();
    }
    if (FinalAttr *FA = D->getAttr<FinalAttr>()) {
      Diag(FA->getLocation(),
           diag::override_keyword_only_allowed_on_virtual_member_functions)
        << (FA->isSpelledAsSealed() ? "sealed" : "final")
        << FixItHint::CreateRemoval(FA->getLocation());
      D->dropAttr<FinalAttr>();
    }
    return;
  }

  // C++11 [class.virtual]p5:
  //   If a function is marked with the virt-specifier override and
  //   does not override a member function of a base class, the program is
  //   ill-formed.
  bool HasOverriddenMethods =
    MD->begin_overridden_methods() != MD->end_overridden_methods();
  if (MD->hasAttr<OverrideAttr>() && !HasOverriddenMethods)
    Diag(MD->getLocation(), diag::err_function_marked_override_not_overriding)
      << MD->getDeclName();
}

void Sema::DiagnoseAbsenceOfOverrideControl(NamedDecl *D) {
  if (D->isInvalidDecl() || D->hasAttr<OverrideAttr>())
    return;
  CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(D);
  if (!MD || MD->isImplicit() || MD->hasAttr<FinalAttr>() ||
      isa<CXXDestructorDecl>(MD))
    return;

  SourceLocation Loc = MD->getLocation();
  SourceLocation SpellingLoc = Loc;
  if (getSourceManager().isMacroArgExpansion(Loc))
    SpellingLoc = getSourceManager().getImmediateExpansionRange(Loc).first;
  SpellingLoc = getSourceManager().getSpellingLoc(SpellingLoc);
  if (SpellingLoc.isValid() && getSourceManager().isInSystemHeader(SpellingLoc))
      return;

  if (MD->size_overridden_methods() > 0) {
    Diag(MD->getLocation(), diag::warn_function_marked_not_override_overriding)
      << MD->getDeclName();
    const CXXMethodDecl *OMD = *MD->begin_overridden_methods();
    Diag(OMD->getLocation(), diag::note_overridden_virtual_function);
  }
}

/// CheckIfOverriddenFunctionIsMarkedFinal - Checks whether a virtual member
/// function overrides a virtual member function marked 'final', according to
/// C++11 [class.virtual]p4.
bool Sema::CheckIfOverriddenFunctionIsMarkedFinal(const CXXMethodDecl *New,
                                                  const CXXMethodDecl *Old) {
  FinalAttr *FA = Old->getAttr<FinalAttr>();
  if (!FA)
    return false;

  Diag(New->getLocation(), diag::err_final_function_overridden)
    << New->getDeclName()
    << FA->isSpelledAsSealed();
  Diag(Old->getLocation(), diag::note_overridden_virtual_function);
  return true;
}
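// A sketch of the 'final' violation CheckIfOverriddenFunctionIsMarkedFinal
// reports (assumed example):
//
//   struct B { virtual void f() final; };
//   struct D : B {
//     void f();   // error: declaration of 'f' overrides a 'final' function
//   };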
static bool InitializationHasSideEffects(const FieldDecl &FD) {
  const Type *T = FD.getType()->getBaseElementTypeUnsafe();
  // FIXME: Destruction of ObjC lifetime types has side-effects.
  if (const CXXRecordDecl *RD = T->getAsCXXRecordDecl())
    return !RD->isCompleteDefinition() ||
           !RD->hasTrivialDefaultConstructor() ||
           !RD->hasTrivialDestructor();
  return false;
}

static AttributeList *getMSPropertyAttr(AttributeList *list) {
  for (AttributeList *it = list; it != nullptr; it = it->getNext())
    if (it->isDeclspecPropertyAttribute())
      return it;
  return nullptr;
}

/// ActOnCXXMemberDeclarator - This is invoked when a C++ class member
/// declarator is parsed. 'AS' is the access specifier, 'BW' specifies the
/// bitfield width if there is one, 'InitExpr' specifies the initializer if
/// one has been parsed, and 'InitStyle' is set if an in-class initializer is
/// present (but parsing it has been deferred).
NamedDecl *
Sema::ActOnCXXMemberDeclarator(Scope *S, AccessSpecifier AS, Declarator &D,
                               MultiTemplateParamsArg TemplateParameterLists,
                               Expr *BW, const VirtSpecifiers &VS,
                               InClassInitStyle InitStyle) {
  const DeclSpec &DS = D.getDeclSpec();
  DeclarationNameInfo NameInfo = GetNameForDeclarator(D);
  DeclarationName Name = NameInfo.getName();
  SourceLocation Loc = NameInfo.getLoc();

  // For anonymous bitfields, the location should point to the type.
  if (Loc.isInvalid())
    Loc = D.getLocStart();

  Expr *BitWidth = static_cast<Expr*>(BW);

  assert(isa<CXXRecordDecl>(CurContext));
  assert(!DS.isFriendSpecified());

  bool isFunc = D.isDeclarationOfFunction();

  if (cast<CXXRecordDecl>(CurContext)->isInterface()) {
    // The Microsoft extension __interface only permits public member functions
    // and prohibits constructors, destructors, operators, non-public member
    // functions, static methods and data members.
    unsigned InvalidDecl;
    bool ShowDeclName = true;
    if (!isFunc)
      InvalidDecl = (DS.getStorageClassSpec() == DeclSpec::SCS_typedef) ? 0 : 1;
    else if (AS != AS_public)
      InvalidDecl = 2;
    else if (DS.getStorageClassSpec() == DeclSpec::SCS_static)
      InvalidDecl = 3;
    else switch (Name.getNameKind()) {
      case DeclarationName::CXXConstructorName:
        InvalidDecl = 4;
        ShowDeclName = false;
        break;

      case DeclarationName::CXXDestructorName:
        InvalidDecl = 5;
        ShowDeclName = false;
        break;

      case DeclarationName::CXXOperatorName:
      case DeclarationName::CXXConversionFunctionName:
        InvalidDecl = 6;
        break;

      default:
        InvalidDecl = 0;
        break;
    }

    if (InvalidDecl) {
      if (ShowDeclName)
        Diag(Loc, diag::err_invalid_member_in_interface)
          << (InvalidDecl-1) << Name;
      else
        Diag(Loc, diag::err_invalid_member_in_interface)
          << (InvalidDecl-1) << "";
      return nullptr;
    }
  }

  // C++ 9.2p6: A member shall not be declared to have automatic storage
  // duration (auto, register) or with the extern storage-class-specifier.
  // C++ 7.1.1p8: The mutable specifier can be applied only to names of class
  // data members and cannot be applied to names declared const or static,
  // and cannot be applied to reference members.
  switch (DS.getStorageClassSpec()) {
  case DeclSpec::SCS_unspecified:
  case DeclSpec::SCS_typedef:
  case DeclSpec::SCS_static:
    break;
  case DeclSpec::SCS_mutable:
    if (isFunc) {
      Diag(DS.getStorageClassSpecLoc(), diag::err_mutable_function);

      // FIXME: It would be nicer if the keyword was ignored only for this
      // declarator. Otherwise we could get follow-up errors.
      D.getMutableDeclSpec().ClearStorageClassSpecs();
    }
    break;
  default:
    Diag(DS.getStorageClassSpecLoc(),
         diag::err_storageclass_invalid_for_member);
    D.getMutableDeclSpec().ClearStorageClassSpecs();
    break;
  }

  bool isInstField = ((DS.getStorageClassSpec() == DeclSpec::SCS_unspecified ||
                       DS.getStorageClassSpec() == DeclSpec::SCS_mutable) &&
                      !isFunc);

  if (DS.isConstexprSpecified() && isInstField) {
    SemaDiagnosticBuilder B =
        Diag(DS.getConstexprSpecLoc(), diag::err_invalid_constexpr_member);
    SourceLocation ConstexprLoc = DS.getConstexprSpecLoc();
    if (InitStyle == ICIS_NoInit) {
      B << 0 << 0;
      if (D.getDeclSpec().getTypeQualifiers() & DeclSpec::TQ_const)
        B << FixItHint::CreateRemoval(ConstexprLoc);
      else {
        B << FixItHint::CreateReplacement(ConstexprLoc, "const");
        D.getMutableDeclSpec().ClearConstexprSpec();
        const char *PrevSpec;
        unsigned DiagID;
        bool Failed = D.getMutableDeclSpec().SetTypeQual(
            DeclSpec::TQ_const, ConstexprLoc, PrevSpec, DiagID, getLangOpts());
        (void)Failed;
        assert(!Failed && "Making a constexpr member const shouldn't fail");
      }
    } else {
      B << 1;
      const char *PrevSpec;
      unsigned DiagID;
      if (D.getMutableDeclSpec().SetStorageClassSpec(
          *this, DeclSpec::SCS_static, ConstexprLoc, PrevSpec, DiagID,
          Context.getPrintingPolicy())) {
        assert(DS.getStorageClassSpec() == DeclSpec::SCS_mutable &&
               "This is the only DeclSpec that should fail to be applied");
        B << 1;
      } else {
        B << 0 << FixItHint::CreateInsertion(ConstexprLoc, "static ");
        isInstField = false;
      }
    }
  }

  NamedDecl *Member;
  if (isInstField) {
    CXXScopeSpec &SS = D.getCXXScopeSpec();

    // Data members must have identifiers for names.
    if (!Name.isIdentifier()) {
      Diag(Loc, diag::err_bad_variable_name)
        << Name;
      return nullptr;
    }

    IdentifierInfo *II = Name.getAsIdentifierInfo();

    // Member field could not be with "template" keyword.
    // So TemplateParameterLists should be empty in this case.
    if (TemplateParameterLists.size()) {
      TemplateParameterList* TemplateParams = TemplateParameterLists[0];
      if (TemplateParams->size()) {
        // There is no such thing as a member field template.
        Diag(D.getIdentifierLoc(), diag::err_template_member)
            << II
            << SourceRange(TemplateParams->getTemplateLoc(),
                TemplateParams->getRAngleLoc());
      } else {
        // There is an extraneous 'template<>' for this member.
        Diag(TemplateParams->getTemplateLoc(),
            diag::err_template_member_noparams)
            << II
            << SourceRange(TemplateParams->getTemplateLoc(),
                TemplateParams->getRAngleLoc());
      }
      return nullptr;
    }

    if (SS.isSet() && !SS.isInvalid()) {
      // The user provided a superfluous scope specifier inside a class
      // definition:
      //
      // class X {
      //   int X::member;
      // };
      if (DeclContext *DC = computeDeclContext(SS, false))
        diagnoseQualifiedDeclaration(SS, DC, Name, D.getIdentifierLoc());
      else
        Diag(D.getIdentifierLoc(), diag::err_member_qualification)
          << Name << SS.getRange();

      SS.clear();
    }

    AttributeList *MSPropertyAttr =
      getMSPropertyAttr(D.getDeclSpec().getAttributes().getList());
    if (MSPropertyAttr) {
      Member = HandleMSProperty(S, cast<CXXRecordDecl>(CurContext), Loc, D,
                                BitWidth, InitStyle, AS, MSPropertyAttr);
      if (!Member)
        return nullptr;
      isInstField = false;
    } else {
      Member = HandleField(S, cast<CXXRecordDecl>(CurContext), Loc, D,
                           BitWidth, InitStyle, AS);
      if (!Member)
        return nullptr;
    }
  } else {
    Member = HandleDeclarator(S, D, TemplateParameterLists);
    if (!Member)
      return nullptr;

    // Non-instance-fields can't have a bitfield.
    if (BitWidth) {
      if (Member->isInvalidDecl()) {
        // don't emit another diagnostic.
      } else if (isa<VarDecl>(Member) || isa<VarTemplateDecl>(Member)) {
        // C++ 9.6p3: A bit-field shall not be a static member.
// "static member 'A' cannot be a bit-field" Diag(Loc, diag::err_static_not_bitfield) << Name << BitWidth->getSourceRange(); } else if (isa(Member)) { // "typedef member 'x' cannot be a bit-field" Diag(Loc, diag::err_typedef_not_bitfield) << Name << BitWidth->getSourceRange(); } else { // A function typedef ("typedef int f(); f a;"). // C++ 9.6p3: A bit-field shall have integral or enumeration type. Diag(Loc, diag::err_not_integral_type_bitfield) << Name << cast(Member)->getType() << BitWidth->getSourceRange(); } BitWidth = nullptr; Member->setInvalidDecl(); } Member->setAccess(AS); // If we have declared a member function template or static data member // template, set the access of the templated declaration as well. if (FunctionTemplateDecl *FunTmpl = dyn_cast(Member)) FunTmpl->getTemplatedDecl()->setAccess(AS); else if (VarTemplateDecl *VarTmpl = dyn_cast(Member)) VarTmpl->getTemplatedDecl()->setAccess(AS); } if (VS.isOverrideSpecified()) Member->addAttr(new (Context) OverrideAttr(VS.getOverrideLoc(), Context, 0)); if (VS.isFinalSpecified()) Member->addAttr(new (Context) FinalAttr(VS.getFinalLoc(), Context, VS.isFinalSpelledSealed())); if (VS.getLastLocation().isValid()) { // Update the end location of a method that has a virt-specifiers. if (CXXMethodDecl *MD = dyn_cast_or_null(Member)) MD->setRangeEnd(VS.getLastLocation()); } CheckOverrideControl(Member); assert((Name || isInstField) && "No identifier for non-field ?"); if (isInstField) { FieldDecl *FD = cast(Member); FieldCollector->Add(FD); if (!Diags.isIgnored(diag::warn_unused_private_field, FD->getLocation())) { // Remember all explicit private FieldDecls that have a name, no side // effects and are not part of a dependent type declaration. if (!FD->isImplicit() && FD->getDeclName() && FD->getAccess() == AS_private && !FD->hasAttr() && !FD->getParent()->isDependentContext() && !InitializationHasSideEffects(*FD)) UnusedPrivateFields.insert(FD); } } return Member; } namespace { class UninitializedFieldVisitor : public EvaluatedExprVisitor { Sema &S; // List of Decls to generate a warning on. Also remove Decls that become // initialized. llvm::SmallPtrSetImpl &Decls; // List of base classes of the record. Classes are removed after their // initializers. llvm::SmallPtrSetImpl &BaseClasses; // Vector of decls to be removed from the Decl set prior to visiting the // nodes. These Decls may have been initialized in the prior initializer. llvm::SmallVector DeclsToRemove; // If non-null, add a note to the warning pointing back to the constructor. const CXXConstructorDecl *Constructor; // Variables to hold state when processing an initializer list. When // InitList is true, special case initialization of FieldDecls matching // InitListFieldDecl. bool InitList; FieldDecl *InitListFieldDecl; llvm::SmallVector InitFieldIndex; public: typedef EvaluatedExprVisitor Inherited; UninitializedFieldVisitor(Sema &S, llvm::SmallPtrSetImpl &Decls, llvm::SmallPtrSetImpl &BaseClasses) : Inherited(S.Context), S(S), Decls(Decls), BaseClasses(BaseClasses), Constructor(nullptr), InitList(false), InitListFieldDecl(nullptr) {} // Returns true if the use of ME is not an uninitialized use. 
namespace {
  class UninitializedFieldVisitor
      : public EvaluatedExprVisitor<UninitializedFieldVisitor> {
    Sema &S;
    // List of Decls to generate a warning on.  Also remove Decls that become
    // initialized.
    llvm::SmallPtrSetImpl<ValueDecl*> &Decls;
    // List of base classes of the record.  Classes are removed after their
    // initializers.
    llvm::SmallPtrSetImpl<QualType> &BaseClasses;
    // Vector of decls to be removed from the Decl set prior to visiting the
    // nodes.  These Decls may have been initialized in the prior initializer.
    llvm::SmallVector<ValueDecl*, 4> DeclsToRemove;
    // If non-null, add a note to the warning pointing back to the constructor.
    const CXXConstructorDecl *Constructor;
    // Variables to hold state when processing an initializer list.  When
    // InitList is true, special case initialization of FieldDecls matching
    // InitListFieldDecl.
    bool InitList;
    FieldDecl *InitListFieldDecl;
    llvm::SmallVector<unsigned, 4> InitFieldIndex;

  public:
    typedef EvaluatedExprVisitor<UninitializedFieldVisitor> Inherited;
    UninitializedFieldVisitor(Sema &S,
                              llvm::SmallPtrSetImpl<ValueDecl*> &Decls,
                              llvm::SmallPtrSetImpl<QualType> &BaseClasses)
      : Inherited(S.Context), S(S), Decls(Decls), BaseClasses(BaseClasses),
        Constructor(nullptr), InitList(false), InitListFieldDecl(nullptr) {}

    // Returns true if the use of ME is not an uninitialized use.
    bool IsInitListMemberExprInitialized(MemberExpr *ME,
                                         bool CheckReferenceOnly) {
      llvm::SmallVector<FieldDecl*, 4> Fields;
      bool ReferenceField = false;
      while (ME) {
        FieldDecl *FD = dyn_cast<FieldDecl>(ME->getMemberDecl());
        if (!FD)
          return false;
        Fields.push_back(FD);
        if (FD->getType()->isReferenceType())
          ReferenceField = true;
        ME = dyn_cast<MemberExpr>(ME->getBase()->IgnoreParenImpCasts());
      }

      // Binding a reference to an uninitialized field is not an
      // uninitialized use.
      if (CheckReferenceOnly && !ReferenceField)
        return true;

      llvm::SmallVector<unsigned, 4> UsedFieldIndex;
      // Discard the first field since it is the field decl that is being
      // initialized.
      for (auto I = Fields.rbegin() + 1, E = Fields.rend(); I != E; ++I) {
        UsedFieldIndex.push_back((*I)->getFieldIndex());
      }

      for (auto UsedIter = UsedFieldIndex.begin(),
                UsedEnd = UsedFieldIndex.end(),
                OrigIter = InitFieldIndex.begin(),
                OrigEnd = InitFieldIndex.end();
           UsedIter != UsedEnd && OrigIter != OrigEnd;
           ++UsedIter, ++OrigIter) {
        if (*UsedIter < *OrigIter)
          return true;
        if (*UsedIter > *OrigIter)
          break;
      }

      return false;
    }

    void HandleMemberExpr(MemberExpr *ME, bool CheckReferenceOnly,
                          bool AddressOf) {
      if (isa<EnumConstantDecl>(ME->getMemberDecl()))
        return;

      // FieldME is the inner-most MemberExpr that is not an anonymous struct
      // or union.
      MemberExpr *FieldME = ME;

      bool AllPODFields = FieldME->getType().isPODType(S.Context);

      Expr *Base = ME;
      while (MemberExpr *SubME =
                 dyn_cast<MemberExpr>(Base->IgnoreParenImpCasts())) {

        if (isa<VarDecl>(SubME->getMemberDecl()))
          return;

        if (FieldDecl *FD = dyn_cast<FieldDecl>(SubME->getMemberDecl()))
          if (!FD->isAnonymousStructOrUnion())
            FieldME = SubME;

        if (!FieldME->getType().isPODType(S.Context))
          AllPODFields = false;

        Base = SubME->getBase();
      }

      if (!isa<CXXThisExpr>(Base->IgnoreParenImpCasts()))
        return;

      if (AddressOf && AllPODFields)
        return;

      ValueDecl* FoundVD = FieldME->getMemberDecl();

      if (ImplicitCastExpr *BaseCast = dyn_cast<ImplicitCastExpr>(Base)) {
        while (isa<ImplicitCastExpr>(BaseCast->getSubExpr())) {
          BaseCast = cast<ImplicitCastExpr>(BaseCast->getSubExpr());
        }

        if (BaseCast->getCastKind() == CK_UncheckedDerivedToBase) {
          QualType T = BaseCast->getType();
          if (T->isPointerType() &&
              BaseClasses.count(T->getPointeeType())) {
            S.Diag(FieldME->getExprLoc(), diag::warn_base_class_is_uninit)
                << T->getPointeeType() << FoundVD;
          }
        }
      }

      if (!Decls.count(FoundVD))
        return;

      const bool IsReference = FoundVD->getType()->isReferenceType();

      if (InitList && !AddressOf && FoundVD == InitListFieldDecl) {
        // Special checking for initializer lists.
        if (IsInitListMemberExprInitialized(ME, CheckReferenceOnly)) {
          return;
        }
      } else {
        // Prevent double warnings on use of unbounded references.
        if (CheckReferenceOnly && !IsReference)
          return;
      }

      unsigned diag = IsReference ?
          diag::warn_reference_field_is_uninit :
          diag::warn_field_is_uninit;
      S.Diag(FieldME->getExprLoc(), diag) << FoundVD;
      if (Constructor)
        S.Diag(Constructor->getLocation(),
               diag::note_uninit_in_this_constructor)
          << (Constructor->isDefaultConstructor() && Constructor->isImplicit());
    }

    void HandleValue(Expr *E, bool AddressOf) {
      E = E->IgnoreParens();

      if (MemberExpr *ME = dyn_cast<MemberExpr>(E)) {
        HandleMemberExpr(ME, false /*CheckReferenceOnly*/,
                         AddressOf /*AddressOf*/);
        return;
      }

      if (ConditionalOperator *CO = dyn_cast<ConditionalOperator>(E)) {
        Visit(CO->getCond());
        HandleValue(CO->getTrueExpr(), AddressOf);
        HandleValue(CO->getFalseExpr(), AddressOf);
        return;
      }

      if (BinaryConditionalOperator *BCO =
              dyn_cast<BinaryConditionalOperator>(E)) {
        Visit(BCO->getCond());
        HandleValue(BCO->getFalseExpr(), AddressOf);
        return;
      }

      if (OpaqueValueExpr *OVE = dyn_cast<OpaqueValueExpr>(E)) {
        HandleValue(OVE->getSourceExpr(), AddressOf);
        return;
      }

      if (BinaryOperator *BO = dyn_cast<BinaryOperator>(E)) {
        switch (BO->getOpcode()) {
        default:
          break;
        case(BO_PtrMemD):
        case(BO_PtrMemI):
          HandleValue(BO->getLHS(), AddressOf);
          Visit(BO->getRHS());
          return;
        case(BO_Comma):
          Visit(BO->getLHS());
          HandleValue(BO->getRHS(), AddressOf);
          return;
        }
      }

      Visit(E);
    }

    void CheckInitListExpr(InitListExpr *ILE) {
      InitFieldIndex.push_back(0);
      for (auto Child : ILE->children()) {
        if (InitListExpr *SubList = dyn_cast<InitListExpr>(Child)) {
          CheckInitListExpr(SubList);
        } else {
          Visit(Child);
        }
        ++InitFieldIndex.back();
      }
      InitFieldIndex.pop_back();
    }

    void CheckInitializer(Expr *E, const CXXConstructorDecl *FieldConstructor,
                          FieldDecl *Field, const Type *BaseClass) {
      // Remove Decls that may have been initialized in the previous
      // initializer.
      for (ValueDecl* VD : DeclsToRemove)
        Decls.erase(VD);
      DeclsToRemove.clear();

      Constructor = FieldConstructor;
      InitListExpr *ILE = dyn_cast<InitListExpr>(E);

      if (ILE && Field) {
        InitList = true;
        InitListFieldDecl = Field;
        InitFieldIndex.clear();
        CheckInitListExpr(ILE);
      } else {
        InitList = false;
        Visit(E);
      }

      if (Field)
        Decls.erase(Field);
      if (BaseClass)
        BaseClasses.erase(BaseClass->getCanonicalTypeInternal());
    }

    void VisitMemberExpr(MemberExpr *ME) {
      // All uses of unbounded reference fields will warn.
      HandleMemberExpr(ME, true /*CheckReferenceOnly*/, false /*AddressOf*/);
    }

    void VisitImplicitCastExpr(ImplicitCastExpr *E) {
      if (E->getCastKind() == CK_LValueToRValue) {
        HandleValue(E->getSubExpr(), false /*AddressOf*/);
        return;
      }

      Inherited::VisitImplicitCastExpr(E);
    }

    void VisitCXXConstructExpr(CXXConstructExpr *E) {
      if (E->getConstructor()->isCopyConstructor()) {
        Expr *ArgExpr = E->getArg(0);
        if (InitListExpr *ILE = dyn_cast<InitListExpr>(ArgExpr))
          if (ILE->getNumInits() == 1)
            ArgExpr = ILE->getInit(0);
        if (ImplicitCastExpr *ICE = dyn_cast<ImplicitCastExpr>(ArgExpr))
          if (ICE->getCastKind() == CK_NoOp)
            ArgExpr = ICE->getSubExpr();
        HandleValue(ArgExpr, false /*AddressOf*/);
        return;
      }
      Inherited::VisitCXXConstructExpr(E);
    }

    void VisitCXXMemberCallExpr(CXXMemberCallExpr *E) {
      Expr *Callee = E->getCallee();
      if (isa<MemberExpr>(Callee)) {
        HandleValue(Callee, false /*AddressOf*/);
        for (auto Arg : E->arguments())
          Visit(Arg);
        return;
      }

      Inherited::VisitCXXMemberCallExpr(E);
    }

    void VisitCallExpr(CallExpr *E) {
      // Treat std::move as a use.
      if (E->getNumArgs() == 1) {
        if (FunctionDecl *FD = E->getDirectCallee()) {
          if (FD->isInStdNamespace() && FD->getIdentifier() &&
              FD->getIdentifier()->isStr("move")) {
            HandleValue(E->getArg(0), false /*AddressOf*/);
            return;
          }
        }
      }

      Inherited::VisitCallExpr(E);
    }

    void VisitCXXOperatorCallExpr(CXXOperatorCallExpr *E) {
      Expr *Callee = E->getCallee();

      if (isa<UnresolvedLookupExpr>(Callee))
        return Inherited::VisitCXXOperatorCallExpr(E);

      Visit(Callee);
      for (auto Arg : E->arguments())
        HandleValue(Arg->IgnoreParenImpCasts(), false /*AddressOf*/);
    }

    void VisitBinaryOperator(BinaryOperator *E) {
      // If a field assignment is detected, remove the field from the
      // uninitialized field set.
      if (E->getOpcode() == BO_Assign)
        if (MemberExpr *ME = dyn_cast<MemberExpr>(E->getLHS()))
          if (FieldDecl *FD = dyn_cast<FieldDecl>(ME->getMemberDecl()))
            if (!FD->getType()->isReferenceType())
              DeclsToRemove.push_back(FD);

      if (E->isCompoundAssignmentOp()) {
        HandleValue(E->getLHS(), false /*AddressOf*/);
        Visit(E->getRHS());
        return;
      }

      Inherited::VisitBinaryOperator(E);
    }

    void VisitUnaryOperator(UnaryOperator *E) {
      if (E->isIncrementDecrementOp()) {
        HandleValue(E->getSubExpr(), false /*AddressOf*/);
        return;
      }
      if (E->getOpcode() == UO_AddrOf) {
        if (MemberExpr *ME = dyn_cast<MemberExpr>(E->getSubExpr())) {
          HandleValue(ME->getBase(), true /*AddressOf*/);
          return;
        }
      }

      Inherited::VisitUnaryOperator(E);
    }
  };

  // Diagnose value-uses of fields to initialize themselves, e.g.
  //   foo(foo)
  // where foo is not also a parameter to the constructor.
  // Also diagnose across field uninitialized use such as
  //   x(y), y(x)
  // TODO: implement -Wuninitialized and fold this into that framework.
  static void DiagnoseUninitializedFields(
      Sema &SemaRef, const CXXConstructorDecl *Constructor) {

    if (SemaRef.getDiagnostics().isIgnored(diag::warn_field_is_uninit,
                                           Constructor->getLocation())) {
      return;
    }

    if (Constructor->isInvalidDecl())
      return;

    const CXXRecordDecl *RD = Constructor->getParent();

    if (RD->getDescribedClassTemplate())
      return;

    // Holds fields that are uninitialized.
    llvm::SmallPtrSet<ValueDecl*, 4> UninitializedFields;

    // At the beginning, all fields are uninitialized.
    for (auto *I : RD->decls()) {
      if (auto *FD = dyn_cast<FieldDecl>(I)) {
        UninitializedFields.insert(FD);
      } else if (auto *IFD = dyn_cast<IndirectFieldDecl>(I)) {
        UninitializedFields.insert(IFD->getAnonField());
      }
    }

    llvm::SmallPtrSet<QualType, 4> UninitializedBaseClasses;
    for (auto I : RD->bases())
      UninitializedBaseClasses.insert(I.getType().getCanonicalType());

    if (UninitializedFields.empty() && UninitializedBaseClasses.empty())
      return;

    UninitializedFieldVisitor UninitializedChecker(SemaRef,
                                                   UninitializedFields,
                                                   UninitializedBaseClasses);

    for (const auto *FieldInit : Constructor->inits()) {
      if (UninitializedFields.empty() && UninitializedBaseClasses.empty())
        break;

      Expr *InitExpr = FieldInit->getInit();
      if (!InitExpr)
        continue;

      if (CXXDefaultInitExpr *Default =
              dyn_cast<CXXDefaultInitExpr>(InitExpr)) {
        InitExpr = Default->getExpr();
        if (!InitExpr)
          continue;
        // In class initializers will point to the constructor.
        UninitializedChecker.CheckInitializer(InitExpr, Constructor,
                                              FieldInit->getAnyMember(),
                                              FieldInit->getBaseClass());
      } else {
        UninitializedChecker.CheckInitializer(InitExpr, nullptr,
                                              FieldInit->getAnyMember(),
                                              FieldInit->getBaseClass());
      }
    }
  }
} // namespace
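// A sketch of what DiagnoseUninitializedFields warns on (assumed example):
//
//   struct S {
//     int a, b;
//     S() : a(b), b(1) {}   // warning: field 'b' is uninitialized when used
//                           // here; members are initialized in declaration
//                           // order, so 'b' is still uninitialized when 'a'
//                           // reads it
//   };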
/// \brief Enter a new C++ default initializer scope. After calling this, the
/// caller must call \ref ActOnFinishCXXInClassMemberInitializer, even if
/// parsing or instantiating the initializer failed.
void Sema::ActOnStartCXXInClassMemberInitializer() {
  // Create a synthetic function scope to represent the call to the constructor
  // that notionally surrounds a use of this initializer.
  PushFunctionScope();
}

/// \brief This is invoked after parsing an in-class initializer for a
/// non-static C++ class member, and after instantiating an in-class
/// initializer in a class template. Such actions are deferred until the class
/// is complete.
void Sema::ActOnFinishCXXInClassMemberInitializer(Decl *D,
                                                  SourceLocation InitLoc,
                                                  Expr *InitExpr) {
  // Pop the notional constructor scope we created earlier.
  PopFunctionScopeInfo(nullptr, D);

  FieldDecl *FD = dyn_cast<FieldDecl>(D);
  assert((isa<MSPropertyDecl>(D) || FD->getInClassInitStyle() != ICIS_NoInit) &&
         "must set init style when field is created");

  if (!InitExpr) {
    D->setInvalidDecl();
    if (FD)
      FD->removeInClassInitializer();
    return;
  }

  if (DiagnoseUnexpandedParameterPack(InitExpr, UPPC_Initializer)) {
    FD->setInvalidDecl();
    FD->removeInClassInitializer();
    return;
  }

  ExprResult Init = InitExpr;
  if (!FD->getType()->isDependentType() && !InitExpr->isTypeDependent()) {
    InitializedEntity Entity = InitializedEntity::InitializeMember(FD);
    InitializationKind Kind =
        FD->getInClassInitStyle() == ICIS_ListInit
            ? InitializationKind::CreateDirectList(InitExpr->getLocStart())
            : InitializationKind::CreateCopy(InitExpr->getLocStart(), InitLoc);
    InitializationSequence Seq(*this, Entity, Kind, InitExpr);
    Init = Seq.Perform(*this, Entity, Kind, InitExpr);
    if (Init.isInvalid()) {
      FD->setInvalidDecl();
      return;
    }
  }

  // C++11 [class.base.init]p7:
  //   The initialization of each base and member constitutes a
  //   full-expression.
  Init = ActOnFinishFullExpr(Init.get(), InitLoc);
  if (Init.isInvalid()) {
    FD->setInvalidDecl();
    return;
  }

  InitExpr = Init.get();

  FD->setInClassInitializer(InitExpr);
}

/// \brief Find the direct and/or virtual base specifiers that
/// correspond to the given base type, for use in base initialization
/// within a constructor.
static bool FindBaseInitializer(Sema &SemaRef,
                                CXXRecordDecl *ClassDecl,
                                QualType BaseType,
                                const CXXBaseSpecifier *&DirectBaseSpec,
                                const CXXBaseSpecifier *&VirtualBaseSpec) {
  // First, check for a direct base class.
  DirectBaseSpec = nullptr;
  for (const auto &Base : ClassDecl->bases()) {
    if (SemaRef.Context.hasSameUnqualifiedType(BaseType, Base.getType())) {
      // We found a direct base of this type. That's what we're
      // initializing.
      DirectBaseSpec = &Base;
      break;
    }
  }

  // Check for a virtual base class.
  // FIXME: We might be able to short-circuit this if we know in advance that
  // there are no virtual bases.
  VirtualBaseSpec = nullptr;
  if (!DirectBaseSpec || !DirectBaseSpec->isVirtual()) {
    // We haven't found a base yet; search the class hierarchy for a
    // virtual base class.
    CXXBasePaths Paths(/*FindAmbiguities=*/true, /*RecordPaths=*/true,
                       /*DetectVirtual=*/false);
    if (SemaRef.IsDerivedFrom(ClassDecl->getLocation(),
                              SemaRef.Context.getTypeDeclType(ClassDecl),
                              BaseType, Paths)) {
      for (CXXBasePaths::paths_iterator Path = Paths.begin();
           Path != Paths.end(); ++Path) {
        if (Path->back().Base->isVirtual()) {
          VirtualBaseSpec = Path->back().Base;
          break;
        }
      }
    }
  }

  return DirectBaseSpec || VirtualBaseSpec;
}
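// The lookup order implemented in BuildMemInitializer (below) prefers members
// over base classes, per C++ [class.base.init]p2. A sketch (assumed example):
//
//   struct A {};
//   struct B : A {
//     int A;          // member with the same name as the base class
//     B() : A(0) {}   // initializes the member, not the base; the hidden
//                     // base class could still be named with a qualified name
//   };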
/// \brief Handle a C++ member initializer using braced-init-list syntax.
MemInitResult
Sema::ActOnMemInitializer(Decl *ConstructorD,
                          Scope *S,
                          CXXScopeSpec &SS,
                          IdentifierInfo *MemberOrBase,
                          ParsedType TemplateTypeTy,
                          const DeclSpec &DS,
                          SourceLocation IdLoc,
                          Expr *InitList,
                          SourceLocation EllipsisLoc) {
  return BuildMemInitializer(ConstructorD, S, SS, MemberOrBase, TemplateTypeTy,
                             DS, IdLoc, InitList,
                             EllipsisLoc);
}

/// \brief Handle a C++ member initializer using parentheses syntax.
MemInitResult
Sema::ActOnMemInitializer(Decl *ConstructorD,
                          Scope *S,
                          CXXScopeSpec &SS,
                          IdentifierInfo *MemberOrBase,
                          ParsedType TemplateTypeTy,
                          const DeclSpec &DS,
                          SourceLocation IdLoc,
                          SourceLocation LParenLoc,
                          ArrayRef<Expr *> Args,
                          SourceLocation RParenLoc,
                          SourceLocation EllipsisLoc) {
  Expr *List = new (Context) ParenListExpr(Context, LParenLoc,
                                           Args, RParenLoc);
  return BuildMemInitializer(ConstructorD, S, SS, MemberOrBase, TemplateTypeTy,
                             DS, IdLoc, List, EllipsisLoc);
}

namespace {

// Callback to only accept typo corrections that can be a valid C++ member
// initializer: either a non-static field member or a base class.
class MemInitializerValidatorCCC : public CorrectionCandidateCallback {
public:
  explicit MemInitializerValidatorCCC(CXXRecordDecl *ClassDecl)
      : ClassDecl(ClassDecl) {}

  bool ValidateCandidate(const TypoCorrection &candidate) override {
    if (NamedDecl *ND = candidate.getCorrectionDecl()) {
      if (FieldDecl *Member = dyn_cast<FieldDecl>(ND))
        return Member->getDeclContext()->getRedeclContext()->Equals(ClassDecl);
      return isa<TypeDecl>(ND);
    }
    return false;
  }

private:
  CXXRecordDecl *ClassDecl;
};

}

/// \brief Handle a C++ member initializer.
MemInitResult
Sema::BuildMemInitializer(Decl *ConstructorD,
                          Scope *S,
                          CXXScopeSpec &SS,
                          IdentifierInfo *MemberOrBase,
                          ParsedType TemplateTypeTy,
                          const DeclSpec &DS,
                          SourceLocation IdLoc,
                          Expr *Init,
                          SourceLocation EllipsisLoc) {
  ExprResult Res = CorrectDelayedTyposInExpr(Init);
  if (!Res.isUsable())
    return true;
  Init = Res.get();

  if (!ConstructorD)
    return true;

  AdjustDeclIfTemplate(ConstructorD);

  CXXConstructorDecl *Constructor
    = dyn_cast<CXXConstructorDecl>(ConstructorD);
  if (!Constructor) {
    // The user wrote a constructor initializer on a function that is
    // not a C++ constructor. Ignore the error for now, because we may
    // have more member initializers coming; we'll diagnose it just
    // once in ActOnMemInitializers.
    return true;
  }

  CXXRecordDecl *ClassDecl = Constructor->getParent();

  // C++ [class.base.init]p2:
  //   Names in a mem-initializer-id are looked up in the scope of the
  //   constructor's class and, if not found in that scope, are looked
  //   up in the scope containing the constructor's definition.
  //   [Note: if the constructor's class contains a member with the
  //   same name as a direct or virtual base class of the class, a
  //   mem-initializer-id naming the member or base class and composed
  //   of a single identifier refers to the class member. A
  //   mem-initializer-id for the hidden base class may be specified
  //   using a qualified name. ]
  if (!SS.getScopeRep() && !TemplateTypeTy) {
    // Look for a member, first.
    DeclContext::lookup_result Result = ClassDecl->lookup(MemberOrBase);
    if (!Result.empty()) {
      ValueDecl *Member;
      if ((Member = dyn_cast<FieldDecl>(Result.front())) ||
          (Member = dyn_cast<IndirectFieldDecl>(Result.front()))) {
        if (EllipsisLoc.isValid())
          Diag(EllipsisLoc, diag::err_pack_expansion_member_init)
              << MemberOrBase
              << SourceRange(IdLoc, Init->getSourceRange().getEnd());

        return BuildMemberInitializer(Member, Init, IdLoc);
      }
    }
  }
  // It didn't name a member, so see if it names a class.
  QualType BaseType;
  TypeSourceInfo *TInfo = nullptr;

  if (TemplateTypeTy) {
    BaseType = GetTypeFromParser(TemplateTypeTy, &TInfo);
  } else if (DS.getTypeSpecType() == TST_decltype) {
    BaseType = BuildDecltypeType(DS.getRepAsExpr(), DS.getTypeSpecTypeLoc());
  } else {
    LookupResult R(*this, MemberOrBase, IdLoc, LookupOrdinaryName);
    LookupParsedName(R, S, &SS);

    TypeDecl *TyD = R.getAsSingle<TypeDecl>();
    if (!TyD) {
      if (R.isAmbiguous()) return true;

      // We don't want access-control diagnostics here.
      R.suppressDiagnostics();

      if (SS.isSet() && isDependentScopeSpecifier(SS)) {
        bool NotUnknownSpecialization = false;
        DeclContext *DC = computeDeclContext(SS, false);
        if (CXXRecordDecl *Record = dyn_cast_or_null<CXXRecordDecl>(DC))
          NotUnknownSpecialization = !Record->hasAnyDependentBases();

        if (!NotUnknownSpecialization) {
          // When the scope specifier can refer to a member of an unknown
          // specialization, we take it as a type name.
          BaseType = CheckTypenameType(ETK_None, SourceLocation(),
                                       SS.getWithLocInContext(Context),
                                       *MemberOrBase, IdLoc);
          if (BaseType.isNull())
            return true;

          R.clear();
          R.setLookupName(MemberOrBase);
        }
      }

      // If no results were found, try to correct typos.
      TypoCorrection Corr;
      if (R.empty() && BaseType.isNull() &&
          (Corr = CorrectTypo(
               R.getLookupNameInfo(), R.getLookupKind(), S, &SS,
               llvm::make_unique<MemInitializerValidatorCCC>(ClassDecl),
               CTK_ErrorRecovery, ClassDecl))) {
        if (FieldDecl *Member = Corr.getCorrectionDeclAs<FieldDecl>()) {
          // We have found a non-static data member with a similar
          // name to what was typed; complain and initialize that
          // member.
          diagnoseTypo(Corr,
                       PDiag(diag::err_mem_init_not_member_or_class_suggest)
                         << MemberOrBase << true);
          return BuildMemberInitializer(Member, Init, IdLoc);
        } else if (TypeDecl *Type = Corr.getCorrectionDeclAs<TypeDecl>()) {
          const CXXBaseSpecifier *DirectBaseSpec;
          const CXXBaseSpecifier *VirtualBaseSpec;
          if (FindBaseInitializer(*this, ClassDecl,
                                  Context.getTypeDeclType(Type),
                                  DirectBaseSpec, VirtualBaseSpec)) {
            // We have found a direct or virtual base class with a
            // similar name to what was typed; complain and initialize
            // that base class.
            diagnoseTypo(Corr,
                         PDiag(diag::err_mem_init_not_member_or_class_suggest)
                           << MemberOrBase << false,
                         PDiag() /*Suppress note, we provide our own.*/);

            const CXXBaseSpecifier *BaseSpec = DirectBaseSpec ? DirectBaseSpec
                                                              : VirtualBaseSpec;
            Diag(BaseSpec->getLocStart(), diag::note_base_class_specified_here)
              << BaseSpec->getType() << BaseSpec->getSourceRange();

            TyD = Type;
          }
        }
      }

      if (!TyD && BaseType.isNull()) {
        Diag(IdLoc, diag::err_mem_init_not_member_or_class)
          << MemberOrBase << SourceRange(IdLoc,Init->getSourceRange().getEnd());
        return true;
      }
    }

    if (BaseType.isNull()) {
      BaseType = Context.getTypeDeclType(TyD);
      MarkAnyDeclReferenced(TyD->getLocation(), TyD, /*OdrUse=*/false);
      if (SS.isSet()) {
        BaseType = Context.getElaboratedType(ETK_None, SS.getScopeRep(),
                                             BaseType);
        TInfo = Context.CreateTypeSourceInfo(BaseType);
        ElaboratedTypeLoc TL = TInfo->getTypeLoc().castAs<ElaboratedTypeLoc>();
        TL.getNamedTypeLoc().castAs<TypeSpecTypeLoc>().setNameLoc(IdLoc);
        TL.setElaboratedKeywordLoc(SourceLocation());
        TL.setQualifierLoc(SS.getWithLocInContext(Context));
      }
    }
  }

  if (!TInfo)
    TInfo = Context.getTrivialTypeSourceInfo(BaseType, IdLoc);

  return BuildBaseInitializer(BaseType, TInfo, Init, ClassDecl, EllipsisLoc);
}
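// A sketch of the dangling-member binding CheckForDanglingReferenceOrPointer
// (below) warns on (assumed example):
//
//   struct S {
//     const int &r;
//     const int *p;
//     S(int v) : r(v), p(&v) {}   // warnings: 'r' binds to, and 'p' points
//                                 // into, the by-value parameter 'v', which
//                                 // dies when the constructor returns
//   };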
/// Checks a member initializer expression for cases where reference (or
/// pointer) members are bound to by-value parameters (or their addresses).
static void CheckForDanglingReferenceOrPointer(Sema &S, ValueDecl *Member,
                                               Expr *Init,
                                               SourceLocation IdLoc) {
  QualType MemberTy = Member->getType();

  // We only handle pointers and references currently.
  // FIXME: Would this be relevant for ObjC object pointers? Or block pointers?
  if (!MemberTy->isReferenceType() && !MemberTy->isPointerType())
    return;

  const bool IsPointer = MemberTy->isPointerType();
  if (IsPointer) {
    if (const UnaryOperator *Op
          = dyn_cast<UnaryOperator>(Init->IgnoreParenImpCasts())) {
      // The only case we're worried about with pointers requires taking the
      // address.
      if (Op->getOpcode() != UO_AddrOf)
        return;

      Init = Op->getSubExpr();
    } else {
      // We only handle address-of expression initializers for pointers.
      return;
    }
  }

  if (const DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(Init->IgnoreParens())) {
    // We only warn when referring to a non-reference parameter declaration.
    const ParmVarDecl *Parameter = dyn_cast<ParmVarDecl>(DRE->getDecl());
    if (!Parameter || Parameter->getType()->isReferenceType())
      return;

    S.Diag(Init->getExprLoc(),
           IsPointer ? diag::warn_init_ptr_member_to_parameter_addr
                     : diag::warn_bind_ref_member_to_parameter)
      << Member << Parameter << Init->getSourceRange();
  } else {
    // Other initializers are fine.
    return;
  }

  S.Diag(Member->getLocation(), diag::note_ref_or_ptr_member_declared_here)
    << (unsigned)IsPointer;
}

MemInitResult
Sema::BuildMemberInitializer(ValueDecl *Member, Expr *Init,
                             SourceLocation IdLoc) {
  FieldDecl *DirectMember = dyn_cast<FieldDecl>(Member);
  IndirectFieldDecl *IndirectMember = dyn_cast<IndirectFieldDecl>(Member);
  assert((DirectMember || IndirectMember) &&
         "Member must be a FieldDecl or IndirectFieldDecl");

  if (DiagnoseUnexpandedParameterPack(Init, UPPC_Initializer))
    return true;

  if (Member->isInvalidDecl())
    return true;

  MultiExprArg Args;
  if (ParenListExpr *ParenList = dyn_cast<ParenListExpr>(Init)) {
    Args = MultiExprArg(ParenList->getExprs(), ParenList->getNumExprs());
  } else if (InitListExpr *InitList = dyn_cast<InitListExpr>(Init)) {
    Args = MultiExprArg(InitList->getInits(), InitList->getNumInits());
  } else {
    // Template instantiation doesn't reconstruct ParenListExprs for us.
    Args = Init;
  }

  SourceRange InitRange = Init->getSourceRange();

  if (Member->getType()->isDependentType() || Init->isTypeDependent()) {
    // Can't check initialization for a member of dependent type or when
    // any of the arguments are type-dependent expressions.
    DiscardCleanupsInEvaluationContext();
  } else {
    bool InitList = false;
    if (isa<InitListExpr>(Init)) {
      InitList = true;
      Args = Init;
    }

    // Initialize the member.
    InitializedEntity MemberEntity =
      DirectMember ? InitializedEntity::InitializeMember(DirectMember, nullptr)
                   : InitializedEntity::InitializeMember(IndirectMember,
                                                         nullptr);
    InitializationKind Kind =
      InitList ? InitializationKind::CreateDirectList(IdLoc)
               : InitializationKind::CreateDirect(IdLoc, InitRange.getBegin(),
                                                  InitRange.getEnd());

    InitializationSequence InitSeq(*this, MemberEntity, Kind, Args);
    ExprResult MemberInit = InitSeq.Perform(*this, MemberEntity, Kind, Args,
                                            nullptr);
    if (MemberInit.isInvalid())
      return true;

    CheckForDanglingReferenceOrPointer(*this, Member, MemberInit.get(), IdLoc);

    // C++11 [class.base.init]p7:
    //   The initialization of each base and member constitutes a
    //   full-expression.
    MemberInit = ActOnFinishFullExpr(MemberInit.get(), InitRange.getBegin());
    if (MemberInit.isInvalid())
      return true;

    Init = MemberInit.get();
  }

  if (DirectMember) {
    return new (Context) CXXCtorInitializer(Context, DirectMember, IdLoc,
                                            InitRange.getBegin(), Init,
                                            InitRange.getEnd());
  } else {
    return new (Context) CXXCtorInitializer(Context, IndirectMember, IdLoc,
                                            InitRange.getBegin(), Init,
                                            InitRange.getEnd());
  }
}

MemInitResult
Sema::BuildDelegatingInitializer(TypeSourceInfo *TInfo, Expr *Init,
                                 CXXRecordDecl *ClassDecl) {
  SourceLocation NameLoc = TInfo->getTypeLoc().getLocalSourceRange().getBegin();
  if (!LangOpts.CPlusPlus11)
    return Diag(NameLoc, diag::err_delegating_ctor)
      << TInfo->getTypeLoc().getLocalSourceRange();
  Diag(NameLoc, diag::warn_cxx98_compat_delegating_ctor);

  bool InitList = true;
  MultiExprArg Args = Init;
  if (ParenListExpr *ParenList = dyn_cast<ParenListExpr>(Init)) {
    InitList = false;
    Args = MultiExprArg(ParenList->getExprs(), ParenList->getNumExprs());
  }

  SourceRange InitRange = Init->getSourceRange();
  // Initialize the object.
  InitializedEntity DelegationEntity = InitializedEntity::InitializeDelegation(
                                     QualType(ClassDecl->getTypeForDecl(), 0));
  InitializationKind Kind =
    InitList ? InitializationKind::CreateDirectList(NameLoc)
             : InitializationKind::CreateDirect(NameLoc, InitRange.getBegin(),
                                                InitRange.getEnd());
  InitializationSequence InitSeq(*this, DelegationEntity, Kind, Args);
  ExprResult DelegationInit = InitSeq.Perform(*this, DelegationEntity, Kind,
                                              Args, nullptr);
  if (DelegationInit.isInvalid())
    return true;

  assert(cast<CXXConstructExpr>(DelegationInit.get())->getConstructor() &&
         "Delegating constructor with no target?");

  // C++11 [class.base.init]p7:
  //   The initialization of each base and member constitutes a
  //   full-expression.
  DelegationInit = ActOnFinishFullExpr(DelegationInit.get(),
                                       InitRange.getBegin());
  if (DelegationInit.isInvalid())
    return true;

  // If we are in a dependent context, template instantiation will
  // perform this type-checking again. Just save the arguments that we
  // received in a ParenListExpr.
  // FIXME: This isn't quite ideal, since our ASTs don't capture all
  // of the information that we have about the base
  // initializer. However, deconstructing the ASTs is a dicey process,
  // and this approach is far more likely to get the corner cases right.
  if (CurContext->isDependentContext())
    DelegationInit = Init;

  return new (Context) CXXCtorInitializer(Context, TInfo, InitRange.getBegin(),
                                          DelegationInit.getAs<Expr>(),
                                          InitRange.getEnd());
}
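// A sketch of the delegating constructor handled by
// BuildDelegatingInitializer above (assumed example):
//
//   struct S {
//     S(int);
//     S() : S(42) {}   // C++11 delegating constructor; rejected under
//                      // -std=c++98 and warned on under -Wc++98-compat
//   };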
MemInitResult
Sema::BuildBaseInitializer(QualType BaseType, TypeSourceInfo *BaseTInfo,
                           Expr *Init, CXXRecordDecl *ClassDecl,
                           SourceLocation EllipsisLoc) {
  SourceLocation BaseLoc
    = BaseTInfo->getTypeLoc().getLocalSourceRange().getBegin();

  if (!BaseType->isDependentType() && !BaseType->isRecordType())
    return Diag(BaseLoc, diag::err_base_init_does_not_name_class)
             << BaseType << BaseTInfo->getTypeLoc().getLocalSourceRange();

  // C++ [class.base.init]p2:
  //   [...] Unless the mem-initializer-id names a nonstatic data
  //   member of the constructor's class or a direct or virtual base
  //   of that class, the mem-initializer is ill-formed. A
  //   mem-initializer-list can initialize a base class using any
  //   name that denotes that base class type.
  bool Dependent = BaseType->isDependentType() || Init->isTypeDependent();

  SourceRange InitRange = Init->getSourceRange();
  if (EllipsisLoc.isValid()) {
    // This is a pack expansion.
    if (!BaseType->containsUnexpandedParameterPack())  {
      Diag(EllipsisLoc, diag::err_pack_expansion_without_parameter_packs)
        << SourceRange(BaseLoc, InitRange.getEnd());

      EllipsisLoc = SourceLocation();
    }
  } else {
    // Check for any unexpanded parameter packs.
    if (DiagnoseUnexpandedParameterPack(BaseLoc, BaseTInfo, UPPC_Initializer))
      return true;

    if (DiagnoseUnexpandedParameterPack(Init, UPPC_Initializer))
      return true;
  }

  // Check for direct and virtual base classes.
  const CXXBaseSpecifier *DirectBaseSpec = nullptr;
  const CXXBaseSpecifier *VirtualBaseSpec = nullptr;
  if (!Dependent) {
    if (Context.hasSameUnqualifiedType(QualType(ClassDecl->getTypeForDecl(),0),
                                       BaseType))
      return BuildDelegatingInitializer(BaseTInfo, Init, ClassDecl);

    FindBaseInitializer(*this, ClassDecl, BaseType, DirectBaseSpec,
                        VirtualBaseSpec);

    // C++ [base.class.init]p2:
    // Unless the mem-initializer-id names a nonstatic data member of the
    // constructor's class or a direct or virtual base of that class, the
    // mem-initializer is ill-formed.
    if (!DirectBaseSpec && !VirtualBaseSpec) {
      // If the class has any dependent bases, then it's possible that
      // one of those types will resolve to the same type as
      // BaseType. Therefore, just treat this as a dependent base
      // class initialization.  FIXME: Should we try to check the
      // initialization anyway? It seems odd.
      if (ClassDecl->hasAnyDependentBases())
        Dependent = true;
      else
        return Diag(BaseLoc, diag::err_not_direct_base_or_virtual)
          << BaseType << Context.getTypeDeclType(ClassDecl)
          << BaseTInfo->getTypeLoc().getLocalSourceRange();
    }
  }

  if (Dependent) {
    DiscardCleanupsInEvaluationContext();

    return new (Context) CXXCtorInitializer(Context, BaseTInfo,
                                            /*IsVirtual=*/false,
                                            InitRange.getBegin(), Init,
                                            InitRange.getEnd(), EllipsisLoc);
  }

  // C++ [base.class.init]p2:
  //   If a mem-initializer-id is ambiguous because it designates both
  //   a direct non-virtual base class and an inherited virtual base
  //   class, the mem-initializer is ill-formed.
  if (DirectBaseSpec && VirtualBaseSpec)
    return Diag(BaseLoc, diag::err_base_init_direct_and_virtual)
      << BaseType << BaseTInfo->getTypeLoc().getLocalSourceRange();

  const CXXBaseSpecifier *BaseSpec = DirectBaseSpec;
  if (!BaseSpec)
    BaseSpec = VirtualBaseSpec;

  // Initialize the base.
  bool InitList = true;
  MultiExprArg Args = Init;
  if (ParenListExpr *ParenList = dyn_cast<ParenListExpr>(Init)) {
    InitList = false;
    Args = MultiExprArg(ParenList->getExprs(), ParenList->getNumExprs());
  }

  InitializedEntity BaseEntity =
    InitializedEntity::InitializeBase(Context, BaseSpec, VirtualBaseSpec);
  InitializationKind Kind =
    InitList ? InitializationKind::CreateDirectList(BaseLoc)
             : InitializationKind::CreateDirect(BaseLoc, InitRange.getBegin(),
                                                InitRange.getEnd());
  InitializationSequence InitSeq(*this, BaseEntity, Kind, Args);
  ExprResult BaseInit = InitSeq.Perform(*this, BaseEntity, Kind, Args, nullptr);
  if (BaseInit.isInvalid())
    return true;

  // C++11 [class.base.init]p7:
  //   The initialization of each base and member constitutes a
  //   full-expression.
  BaseInit = ActOnFinishFullExpr(BaseInit.get(), InitRange.getBegin());
  if (BaseInit.isInvalid())
    return true;

  // If we are in a dependent context, template instantiation will
  // perform this type-checking again. Just save the arguments that we
  // received in a ParenListExpr.
  // FIXME: This isn't quite ideal, since our ASTs don't capture all
  // of the information that we have about the base
  // initializer. However, deconstructing the ASTs is a dicey process,
  // and this approach is far more likely to get the corner cases right.
  if (CurContext->isDependentContext())
    BaseInit = Init;

  return new (Context) CXXCtorInitializer(Context, BaseTInfo,
                                          BaseSpec->isVirtual(),
                                          InitRange.getBegin(),
                                          BaseInit.getAs<Expr>(),
                                          InitRange.getEnd(), EllipsisLoc);
}
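// A sketch of the direct-and-virtual ambiguity BuildBaseInitializer rejects
// (assumed example):
//
//   struct V {};
//   struct A : virtual V {};
//   struct B : A, V {   // 'V' is a direct base and an inherited virtual base
//     B() : V() {}      // error: which 'V' subobject is meant is ambiguous
//   };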
  if (CurContext->isDependentContext())
    BaseInit = Init;

  return new (Context) CXXCtorInitializer(Context, BaseTInfo,
                                          BaseSpec->isVirtual(),
                                          InitRange.getBegin(),
                                          BaseInit.getAs<Expr>(),
                                          InitRange.getEnd(), EllipsisLoc);
}

// Create a static_cast<T&&>(expr).
static Expr *CastForMoving(Sema &SemaRef, Expr *E, QualType T = QualType()) {
  if (T.isNull()) T = E->getType();
  QualType TargetType = SemaRef.BuildReferenceType(
      T, /*SpelledAsLValue*/false, SourceLocation(), DeclarationName());
  SourceLocation ExprLoc = E->getLocStart();
  TypeSourceInfo *TargetLoc = SemaRef.Context.getTrivialTypeSourceInfo(
      TargetType, ExprLoc);

  return SemaRef.BuildCXXNamedCast(ExprLoc, tok::kw_static_cast, TargetLoc, E,
                                   SourceRange(ExprLoc, ExprLoc),
                                   E->getSourceRange()).get();
}

/// ImplicitInitializerKind - How an implicit base or member initializer should
/// initialize its base or member.
enum ImplicitInitializerKind {
  IIK_Default,
  IIK_Copy,
  IIK_Move,
  IIK_Inherit
};

static bool
BuildImplicitBaseInitializer(Sema &SemaRef, CXXConstructorDecl *Constructor,
                             ImplicitInitializerKind ImplicitInitKind,
                             CXXBaseSpecifier *BaseSpec,
                             bool IsInheritedVirtualBase,
                             CXXCtorInitializer *&CXXBaseInit) {
  InitializedEntity InitEntity
    = InitializedEntity::InitializeBase(SemaRef.Context, BaseSpec,
                                        IsInheritedVirtualBase);

  ExprResult BaseInit;

  switch (ImplicitInitKind) {
  case IIK_Inherit:
  case IIK_Default: {
    InitializationKind InitKind
      = InitializationKind::CreateDefault(Constructor->getLocation());
    InitializationSequence InitSeq(SemaRef, InitEntity, InitKind, None);
    BaseInit = InitSeq.Perform(SemaRef, InitEntity, InitKind, None);
    break;
  }

  case IIK_Move:
  case IIK_Copy: {
    bool Moving = ImplicitInitKind == IIK_Move;
    ParmVarDecl *Param = Constructor->getParamDecl(0);
    QualType ParamType = Param->getType().getNonReferenceType();

    Expr *CopyCtorArg =
      DeclRefExpr::Create(SemaRef.Context, NestedNameSpecifierLoc(),
                          SourceLocation(), Param, false,
                          Constructor->getLocation(), ParamType,
                          VK_LValue, nullptr);

    SemaRef.MarkDeclRefReferenced(cast<DeclRefExpr>(CopyCtorArg));

    // Cast to the base class to avoid ambiguities.
    QualType ArgTy =
      SemaRef.Context.getQualifiedType(BaseSpec->getType().getUnqualifiedType(),
                                       ParamType.getQualifiers());

    if (Moving) {
      CopyCtorArg = CastForMoving(SemaRef, CopyCtorArg);
    }

    CXXCastPath BasePath;
    BasePath.push_back(BaseSpec);
    CopyCtorArg = SemaRef.ImpCastExprToType(CopyCtorArg, ArgTy,
                                            CK_UncheckedDerivedToBase,
                                            Moving ?
                                              VK_XValue : VK_LValue,
                                            &BasePath).get();

    InitializationKind InitKind
      = InitializationKind::CreateDirect(Constructor->getLocation(),
                                         SourceLocation(), SourceLocation());
    InitializationSequence InitSeq(SemaRef, InitEntity, InitKind, CopyCtorArg);
    BaseInit = InitSeq.Perform(SemaRef, InitEntity, InitKind, CopyCtorArg);
    break;
  }
  }

  BaseInit = SemaRef.MaybeCreateExprWithCleanups(BaseInit);
  if (BaseInit.isInvalid())
    return true;

  CXXBaseInit =
    new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context,
               SemaRef.Context.getTrivialTypeSourceInfo(BaseSpec->getType(),
                                                        SourceLocation()),
                                             BaseSpec->isVirtual(),
                                             SourceLocation(),
                                             BaseInit.getAs<Expr>(),
                                             SourceLocation(),
                                             SourceLocation());

  return false;
}

static bool RefersToRValueRef(Expr *MemRef) {
  ValueDecl *Referenced = cast<MemberExpr>(MemRef)->getMemberDecl();
  return Referenced->getType()->isRValueReferenceType();
}

static bool BuildImplicitMemberInitializer(Sema &SemaRef,
                                           CXXConstructorDecl *Constructor,
                                           ImplicitInitializerKind ImplicitInitKind,
                                           FieldDecl *Field,
                                           IndirectFieldDecl *Indirect,
                                           CXXCtorInitializer *&CXXMemberInit) {
  if (Field->isInvalidDecl())
    return true;

  SourceLocation Loc = Constructor->getLocation();

  if (ImplicitInitKind == IIK_Copy || ImplicitInitKind == IIK_Move) {
    bool Moving = ImplicitInitKind == IIK_Move;
    ParmVarDecl *Param = Constructor->getParamDecl(0);
    QualType ParamType = Param->getType().getNonReferenceType();

    // Suppress copying zero-width bitfields.
    if (Field->isBitField() && Field->getBitWidthValue(SemaRef.Context) == 0)
      return false;

    Expr *MemberExprBase =
      DeclRefExpr::Create(SemaRef.Context, NestedNameSpecifierLoc(),
                          SourceLocation(), Param, false,
                          Loc, ParamType, VK_LValue, nullptr);

    SemaRef.MarkDeclRefReferenced(cast<DeclRefExpr>(MemberExprBase));

    if (Moving) {
      MemberExprBase = CastForMoving(SemaRef, MemberExprBase);
    }

    // Build a reference to this field within the parameter.
    CXXScopeSpec SS;
    LookupResult MemberLookup(SemaRef, Field->getDeclName(), Loc,
                              Sema::LookupMemberName);
    MemberLookup.addDecl(Indirect ? cast<ValueDecl>(Indirect)
                                  : cast<ValueDecl>(Field), AS_public);
    MemberLookup.resolveKind();
    ExprResult CtorArg
      = SemaRef.BuildMemberReferenceExpr(MemberExprBase,
                                         ParamType, Loc,
                                         /*IsArrow=*/false,
                                         SS,
                                         /*TemplateKWLoc=*/SourceLocation(),
                                         /*FirstQualifierInScope=*/nullptr,
                                         MemberLookup,
                                         /*TemplateArgs=*/nullptr,
                                         /*S*/nullptr);
    if (CtorArg.isInvalid())
      return true;

    // C++11 [class.copy]p15:
    //   - if a member m has rvalue reference type T&&, it is direct-initialized
    //     with static_cast<T&&>(x.m);
    if (RefersToRValueRef(CtorArg.get())) {
      CtorArg = CastForMoving(SemaRef, CtorArg.get());
    }

    InitializedEntity Entity =
        Indirect ? InitializedEntity::InitializeMember(Indirect, nullptr,
                                                       /*Implicit*/ true)
                 : InitializedEntity::InitializeMember(Field, nullptr,
                                                       /*Implicit*/ true);

    // Direct-initialize to use the copy constructor.
    InitializationKind InitKind =
      InitializationKind::CreateDirect(Loc, SourceLocation(), SourceLocation());

    Expr *CtorArgE = CtorArg.getAs<Expr>();
    InitializationSequence InitSeq(SemaRef, Entity, InitKind, CtorArgE);
    ExprResult MemberInit =
        InitSeq.Perform(SemaRef, Entity, InitKind, MultiExprArg(&CtorArgE, 1));
    MemberInit = SemaRef.MaybeCreateExprWithCleanups(MemberInit);
    if (MemberInit.isInvalid())
      return true;

    if (Indirect)
      CXXMemberInit = new (SemaRef.Context) CXXCtorInitializer(
          SemaRef.Context, Indirect, Loc, Loc, MemberInit.getAs<Expr>(), Loc);
    else
      CXXMemberInit = new (SemaRef.Context) CXXCtorInitializer(
          SemaRef.Context, Field, Loc, Loc, MemberInit.getAs<Expr>(), Loc);
    return false;
  }

  assert((ImplicitInitKind == IIK_Default || ImplicitInitKind == IIK_Inherit) &&
         "Unhandled implicit init kind!");

  QualType FieldBaseElementType =
    SemaRef.Context.getBaseElementType(Field->getType());

  if (FieldBaseElementType->isRecordType()) {
    InitializedEntity InitEntity =
        Indirect ? InitializedEntity::InitializeMember(Indirect, nullptr,
                                                       /*Implicit*/ true)
                 : InitializedEntity::InitializeMember(Field, nullptr,
                                                       /*Implicit*/ true);
    InitializationKind InitKind =
      InitializationKind::CreateDefault(Loc);

    InitializationSequence InitSeq(SemaRef, InitEntity, InitKind, None);
    ExprResult MemberInit =
      InitSeq.Perform(SemaRef, InitEntity, InitKind, None);

    MemberInit = SemaRef.MaybeCreateExprWithCleanups(MemberInit);
    if (MemberInit.isInvalid())
      return true;

    if (Indirect)
      CXXMemberInit = new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context,
                                                               Indirect, Loc,
                                                               Loc,
                                                               MemberInit.get(),
                                                               Loc);
    else
      CXXMemberInit = new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context,
                                                               Field, Loc, Loc,
                                                               MemberInit.get(),
                                                               Loc);
    return false;
  }

  if (!Field->getParent()->isUnion()) {
    if (FieldBaseElementType->isReferenceType()) {
      SemaRef.Diag(Constructor->getLocation(),
                   diag::err_uninitialized_member_in_ctor)
      << (int)Constructor->isImplicit()
      << SemaRef.Context.getTagDeclType(Constructor->getParent())
      << 0 << Field->getDeclName();
      SemaRef.Diag(Field->getLocation(), diag::note_declared_at);
      return true;
    }

    if (FieldBaseElementType.isConstQualified()) {
      SemaRef.Diag(Constructor->getLocation(),
                   diag::err_uninitialized_member_in_ctor)
      << (int)Constructor->isImplicit()
      << SemaRef.Context.getTagDeclType(Constructor->getParent())
      << 1 << Field->getDeclName();
      SemaRef.Diag(Field->getLocation(), diag::note_declared_at);
      return true;
    }
  }

  if (SemaRef.getLangOpts().ObjCAutoRefCount &&
      FieldBaseElementType->isObjCRetainableType() &&
      FieldBaseElementType.getObjCLifetime() != Qualifiers::OCL_None &&
      FieldBaseElementType.getObjCLifetime() != Qualifiers::OCL_ExplicitNone) {
    // ARC:
    //   Default-initialize Objective-C pointers to NULL.
    CXXMemberInit
      = new (SemaRef.Context) CXXCtorInitializer(SemaRef.Context, Field,
                                                 Loc, Loc,
                 new (SemaRef.Context) ImplicitValueInitExpr(Field->getType()),
                                                 Loc);
    return false;
  }

  // Nothing to initialize.
  CXXMemberInit = nullptr;
  return false;
}

namespace {
struct BaseAndFieldInfo {
  Sema &S;
  CXXConstructorDecl *Ctor;
  bool AnyErrorsInInits;
  ImplicitInitializerKind IIK;
  llvm::DenseMap<const void *, CXXCtorInitializer*> AllBaseFields;
  SmallVector<CXXCtorInitializer*, 8> AllToInit;
  llvm::DenseMap<TagDecl*, FieldDecl*> ActiveUnionMember;

  BaseAndFieldInfo(Sema &S, CXXConstructorDecl *Ctor, bool ErrorsInInits)
    : S(S), Ctor(Ctor), AnyErrorsInInits(ErrorsInInits) {
    bool Generated = Ctor->isImplicit() || Ctor->isDefaulted();
    if (Ctor->getInheritedConstructor())
      IIK = IIK_Inherit;
    else if (Generated && Ctor->isCopyConstructor())
      IIK = IIK_Copy;
    else if (Generated && Ctor->isMoveConstructor())
      IIK = IIK_Move;
    else
      IIK = IIK_Default;
  }

  bool isImplicitCopyOrMove() const {
    switch (IIK) {
    case IIK_Copy:
    case IIK_Move:
      return true;

    case IIK_Default:
    case IIK_Inherit:
      return false;
    }

    llvm_unreachable("Invalid ImplicitInitializerKind!");
  }

  bool addFieldInitializer(CXXCtorInitializer *Init) {
    AllToInit.push_back(Init);

    // Check whether this initializer makes the field "used".
    if (Init->getInit()->HasSideEffects(S.Context))
      S.UnusedPrivateFields.remove(Init->getAnyMember());

    return false;
  }

  bool isInactiveUnionMember(FieldDecl *Field) {
    RecordDecl *Record = Field->getParent();
    if (!Record->isUnion())
      return false;

    if (FieldDecl *Active =
            ActiveUnionMember.lookup(Record->getCanonicalDecl()))
      return Active != Field->getCanonicalDecl();

    // In an implicit copy or move constructor, ignore any in-class
    // initializer.
    if (isImplicitCopyOrMove())
      return true;

    // If there's no explicit initialization, the field is active only if it
    // has an in-class initializer...
    if (Field->hasInClassInitializer())
      return false;

    // ... or it's an anonymous struct or union whose class has an in-class
    // initializer.
    if (!Field->isAnonymousStructOrUnion())
      return true;

    CXXRecordDecl *FieldRD = Field->getType()->getAsCXXRecordDecl();
    return !FieldRD->hasInClassInitializer();
  }

  /// \brief Determine whether the given field is, or is within, a union member
  /// that is inactive (because there was an initializer given for a different
  /// member of the union, or because the union was not initialized at all).
  bool isWithinInactiveUnionMember(FieldDecl *Field,
                                   IndirectFieldDecl *Indirect) {
    if (!Indirect)
      return isInactiveUnionMember(Field);

    for (auto *C : Indirect->chain()) {
      FieldDecl *Field = dyn_cast<FieldDecl>(C);
      if (Field && isInactiveUnionMember(Field))
        return true;
    }
    return false;
  }
};
}

/// \brief Determine whether the given type is an incomplete or zero-length
/// array type.
static bool isIncompleteOrZeroLengthArrayType(ASTContext &Context, QualType T) {
  if (T->isIncompleteArrayType())
    return true;

  while (const ConstantArrayType *ArrayT = Context.getAsConstantArrayType(T)) {
    if (!ArrayT->getSize())
      return true;

    T = ArrayT->getElementType();
  }

  return false;
}

static bool CollectFieldInitializer(Sema &SemaRef, BaseAndFieldInfo &Info,
                                    FieldDecl *Field,
                                    IndirectFieldDecl *Indirect = nullptr) {
  if (Field->isInvalidDecl())
    return false;

  // Overwhelmingly common case: we have a direct initializer for this field.
  if (CXXCtorInitializer *Init =
          Info.AllBaseFields.lookup(Field->getCanonicalDecl()))
    return Info.addFieldInitializer(Init);

  // C++11 [class.base.init]p8:
  //   if the entity is a non-static data member that has a
  //   brace-or-equal-initializer and either
  //   -- the constructor's class is a union and no other variant member of
  //      that union is designated by a mem-initializer-id or
  //   -- the constructor's class is not a union, and, if the entity is a
  //      member of an anonymous union, no other member of that union is
  //      designated by a mem-initializer-id,
  //   the entity is initialized as specified in [dcl.init].
  //
  // We also apply the same rules to handle anonymous structs within anonymous
  // unions.
  if (Info.isWithinInactiveUnionMember(Field, Indirect))
    return false;

  if (Field->hasInClassInitializer() && !Info.isImplicitCopyOrMove()) {
    ExprResult DIE =
        SemaRef.BuildCXXDefaultInitExpr(Info.Ctor->getLocation(), Field);
    if (DIE.isInvalid())
      return true;
    CXXCtorInitializer *Init;
    if (Indirect)
      Init = new (SemaRef.Context)
          CXXCtorInitializer(SemaRef.Context, Indirect, SourceLocation(),
                             SourceLocation(), DIE.get(), SourceLocation());
    else
      Init = new (SemaRef.Context)
          CXXCtorInitializer(SemaRef.Context, Field, SourceLocation(),
                             SourceLocation(), DIE.get(), SourceLocation());
    return Info.addFieldInitializer(Init);
  }

  // Don't initialize incomplete or zero-length arrays.
  if (isIncompleteOrZeroLengthArrayType(SemaRef.Context, Field->getType()))
    return false;

  // Don't try to build an implicit initializer if there were semantic
  // errors in any of the initializers (and therefore we might be
  // missing some that the user actually wrote).
  if (Info.AnyErrorsInInits)
    return false;

  CXXCtorInitializer *Init = nullptr;
  if (BuildImplicitMemberInitializer(Info.S, Info.Ctor, Info.IIK, Field,
                                     Indirect, Init))
    return true;

  if (!Init)
    return false;

  return Info.addFieldInitializer(Init);
}

bool
Sema::SetDelegatingInitializer(CXXConstructorDecl *Constructor,
                               CXXCtorInitializer *Initializer) {
  assert(Initializer->isDelegatingInitializer());
  Constructor->setNumCtorInitializers(1);
  CXXCtorInitializer **initializer =
    new (Context) CXXCtorInitializer*[1];
  memcpy(initializer, &Initializer, sizeof (CXXCtorInitializer*));
  Constructor->setCtorInitializers(initializer);

  if (CXXDestructorDecl *Dtor = LookupDestructor(Constructor->getParent())) {
    MarkFunctionReferenced(Initializer->getSourceLocation(), Dtor);
    DiagnoseUseOfDecl(Dtor, Initializer->getSourceLocation());
  }

  DelegatingCtorDecls.push_back(Constructor);

  DiagnoseUninitializedFields(*this, Constructor);

  return false;
}

bool Sema::SetCtorInitializers(CXXConstructorDecl *Constructor, bool AnyErrors,
                               ArrayRef<CXXCtorInitializer *> Initializers) {
  if (Constructor->isDependentContext()) {
    // Just store the initializers as written, they will be checked during
    // instantiation.
    if (!Initializers.empty()) {
      Constructor->setNumCtorInitializers(Initializers.size());
      CXXCtorInitializer **baseOrMemberInitializers =
        new (Context) CXXCtorInitializer*[Initializers.size()];
      memcpy(baseOrMemberInitializers, Initializers.data(),
             Initializers.size() * sizeof(CXXCtorInitializer*));
      Constructor->setCtorInitializers(baseOrMemberInitializers);
    }

    // Let template instantiation know whether we had errors.
    if (AnyErrors)
      Constructor->setInvalidDecl();

    return false;
  }

  BaseAndFieldInfo Info(*this, Constructor, AnyErrors);

  // We need to build the initializer AST according to order of construction
  // and not what the user specified in the Initializers list.
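  // Per [class.base.init], construction order is: virtual bases (for the
  // most-derived object only), then direct non-virtual bases in declaration
  // order, then non-static data members in declaration order. The loops below
  // follow that order.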
  CXXRecordDecl *ClassDecl = Constructor->getParent()->getDefinition();
  if (!ClassDecl)
    return true;

  bool HadError = false;

  for (unsigned i = 0; i < Initializers.size(); i++) {
    CXXCtorInitializer *Member = Initializers[i];

    if (Member->isBaseInitializer())
      Info.AllBaseFields[Member->getBaseClass()->getAs<RecordType>()] = Member;
    else {
      Info.AllBaseFields[Member->getAnyMember()->getCanonicalDecl()] = Member;

      if (IndirectFieldDecl *F = Member->getIndirectMember()) {
        for (auto *C : F->chain()) {
          FieldDecl *FD = dyn_cast<FieldDecl>(C);
          if (FD && FD->getParent()->isUnion())
            Info.ActiveUnionMember.insert(std::make_pair(
                FD->getParent()->getCanonicalDecl(), FD->getCanonicalDecl()));
        }
      } else if (FieldDecl *FD = Member->getMember()) {
        if (FD->getParent()->isUnion())
          Info.ActiveUnionMember.insert(std::make_pair(
              FD->getParent()->getCanonicalDecl(), FD->getCanonicalDecl()));
      }
    }
  }

  // Keep track of the direct virtual bases.
  llvm::SmallPtrSet<CXXBaseSpecifier *, 16> DirectVBases;
  for (auto &I : ClassDecl->bases()) {
    if (I.isVirtual())
      DirectVBases.insert(&I);
  }

  // Push virtual bases before others.
  for (auto &VBase : ClassDecl->vbases()) {
    if (CXXCtorInitializer *Value
        = Info.AllBaseFields.lookup(VBase.getType()->getAs<RecordType>())) {
      // [class.base.init]p7, per DR257:
      //   A mem-initializer where the mem-initializer-id names a virtual base
      //   class is ignored during execution of a constructor of any class that
      //   is not the most derived class.
      if (ClassDecl->isAbstract()) {
        // FIXME: Provide a fixit to remove the base specifier. This requires
        // tracking the location of the associated comma for a base specifier.
        Diag(Value->getSourceLocation(), diag::warn_abstract_vbase_init_ignored)
          << VBase.getType() << ClassDecl;
        DiagnoseAbstractType(ClassDecl);
      }

      Info.AllToInit.push_back(Value);
    } else if (!AnyErrors && !ClassDecl->isAbstract()) {
      // [class.base.init]p8, per DR257:
      //   If a given [...] base class is not named by a mem-initializer-id
      //   [...] and the entity is not a virtual base class of an abstract
      //   class, then [...] the entity is default-initialized.
      bool IsInheritedVirtualBase = !DirectVBases.count(&VBase);
      CXXCtorInitializer *CXXBaseInit;
      if (BuildImplicitBaseInitializer(*this, Constructor, Info.IIK,
                                       &VBase, IsInheritedVirtualBase,
                                       CXXBaseInit)) {
        HadError = true;
        continue;
      }

      Info.AllToInit.push_back(CXXBaseInit);
    }
  }

  // Non-virtual bases.
  for (auto &Base : ClassDecl->bases()) {
    // Virtuals are in the virtual base list and already constructed.
    if (Base.isVirtual())
      continue;

    if (CXXCtorInitializer *Value
          = Info.AllBaseFields.lookup(Base.getType()->getAs<RecordType>())) {
      Info.AllToInit.push_back(Value);
    } else if (!AnyErrors) {
      CXXCtorInitializer *CXXBaseInit;
      if (BuildImplicitBaseInitializer(*this, Constructor, Info.IIK,
                                       &Base, /*IsInheritedVirtualBase=*/false,
                                       CXXBaseInit)) {
        HadError = true;
        continue;
      }

      Info.AllToInit.push_back(CXXBaseInit);
    }
  }

  // Fields.
  for (auto *Mem : ClassDecl->decls()) {
    if (auto *F = dyn_cast<FieldDecl>(Mem)) {
      // C++ [class.bit]p2:
      //   A declaration for a bit-field that omits the identifier declares an
      //   unnamed bit-field. Unnamed bit-fields are not members and cannot be
      //   initialized.
      if (F->isUnnamedBitfield())
        continue;

      // If we're not generating the implicit copy/move constructor, then we'll
      // handle anonymous struct/union fields based on their individual
      // indirect fields.
      if (F->isAnonymousStructOrUnion() && !Info.isImplicitCopyOrMove())
        continue;

      if (CollectFieldInitializer(*this, Info, F))
        HadError = true;
      continue;
    }

    // Beyond this point, we only consider default initialization.
    if (Info.isImplicitCopyOrMove())
      continue;

    if (auto *F = dyn_cast<IndirectFieldDecl>(Mem)) {
      if (F->getType()->isIncompleteArrayType()) {
        assert(ClassDecl->hasFlexibleArrayMember() &&
               "Incomplete array type is not valid");
        continue;
      }

      // Initialize each field of an anonymous struct individually.
      if (CollectFieldInitializer(*this, Info, F->getAnonField(), F))
        HadError = true;

      continue;
    }
  }

  unsigned NumInitializers = Info.AllToInit.size();
  if (NumInitializers > 0) {
    Constructor->setNumCtorInitializers(NumInitializers);
    CXXCtorInitializer **baseOrMemberInitializers =
      new (Context) CXXCtorInitializer*[NumInitializers];
    memcpy(baseOrMemberInitializers, Info.AllToInit.data(),
           NumInitializers * sizeof(CXXCtorInitializer*));
    Constructor->setCtorInitializers(baseOrMemberInitializers);

    // Constructors implicitly reference the base and member
    // destructors.
    MarkBaseAndMemberDestructorsReferenced(Constructor->getLocation(),
                                           Constructor->getParent());
  }

  return HadError;
}

static void PopulateKeysForFields(FieldDecl *Field,
                                  SmallVectorImpl<const void*> &IdealInits) {
  if (const RecordType *RT = Field->getType()->getAs<RecordType>()) {
    const RecordDecl *RD = RT->getDecl();
    if (RD->isAnonymousStructOrUnion()) {
      for (auto *Field : RD->fields())
        PopulateKeysForFields(Field, IdealInits);
      return;
    }
  }
  IdealInits.push_back(Field->getCanonicalDecl());
}

static const void *GetKeyForBase(ASTContext &Context, QualType BaseType) {
  return Context.getCanonicalType(BaseType).getTypePtr();
}

static const void *GetKeyForMember(ASTContext &Context,
                                   CXXCtorInitializer *Member) {
  if (!Member->isAnyMemberInitializer())
    return GetKeyForBase(Context, QualType(Member->getBaseClass(), 0));

  return Member->getAnyMember()->getCanonicalDecl();
}

static void DiagnoseBaseOrMemInitializerOrder(
    Sema &SemaRef, const CXXConstructorDecl *Constructor,
    ArrayRef<CXXCtorInitializer *> Inits) {
  if (Constructor->getDeclContext()->isDependentContext())
    return;

  // Don't check initializers order unless the warning is enabled at the
  // location of at least one initializer.
  bool ShouldCheckOrder = false;
  for (unsigned InitIndex = 0; InitIndex != Inits.size(); ++InitIndex) {
    CXXCtorInitializer *Init = Inits[InitIndex];
    if (!SemaRef.Diags.isIgnored(diag::warn_initializer_out_of_order,
                                 Init->getSourceLocation())) {
      ShouldCheckOrder = true;
      break;
    }
  }
  if (!ShouldCheckOrder)
    return;

  // Build the list of bases and members in the order that they'll
  // actually be initialized.  The explicit initializers should be in
  // this same order but may be missing things.
  SmallVector<const void*, 32> IdealInitKeys;

  const CXXRecordDecl *ClassDecl = Constructor->getParent();

  // 1. Virtual bases.
  for (const auto &VBase : ClassDecl->vbases())
    IdealInitKeys.push_back(GetKeyForBase(SemaRef.Context, VBase.getType()));

  // 2. Non-virtual bases.
  for (const auto &Base : ClassDecl->bases()) {
    if (Base.isVirtual())
      continue;
    IdealInitKeys.push_back(GetKeyForBase(SemaRef.Context, Base.getType()));
  }

  // 3. Direct fields.
  for (auto *Field : ClassDecl->fields()) {
    if (Field->isUnnamedBitfield())
      continue;

    PopulateKeysForFields(Field, IdealInitKeys);
  }

  unsigned NumIdealInits = IdealInitKeys.size();
  unsigned IdealIndex = 0;

  CXXCtorInitializer *PrevInit = nullptr;
  for (unsigned InitIndex = 0; InitIndex != Inits.size(); ++InitIndex) {
    CXXCtorInitializer *Init = Inits[InitIndex];
    const void *InitKey = GetKeyForMember(SemaRef.Context, Init);

    // Scan forward to try to find this initializer in the idealized
    // initializers list.
    for (; IdealIndex != NumIdealInits; ++IdealIndex)
      if (InitKey == IdealInitKeys[IdealIndex])
        break;

    // If we didn't find this initializer, it must be because we
    // scanned past it on a previous iteration.  That can only
    // happen if we're out of order; emit a warning.
    if (IdealIndex == NumIdealInits && PrevInit) {
      Sema::SemaDiagnosticBuilder D =
        SemaRef.Diag(PrevInit->getSourceLocation(),
                     diag::warn_initializer_out_of_order);

      if (PrevInit->isAnyMemberInitializer())
        D << 0 << PrevInit->getAnyMember()->getDeclName();
      else
        D << 1 << PrevInit->getTypeSourceInfo()->getType();

      if (Init->isAnyMemberInitializer())
        D << 0 << Init->getAnyMember()->getDeclName();
      else
        D << 1 << Init->getTypeSourceInfo()->getType();

      // Move back to the initializer's location in the ideal list.
      for (IdealIndex = 0; IdealIndex != NumIdealInits; ++IdealIndex)
        if (InitKey == IdealInitKeys[IdealIndex])
          break;

      assert(IdealIndex < NumIdealInits &&
             "initializer not found in initializer list");
    }

    PrevInit = Init;
  }
}

namespace {
bool CheckRedundantInit(Sema &S,
                        CXXCtorInitializer *Init,
                        CXXCtorInitializer *&PrevInit) {
  if (!PrevInit) {
    PrevInit = Init;
    return false;
  }

  if (FieldDecl *Field = Init->getAnyMember())
    S.Diag(Init->getSourceLocation(),
           diag::err_multiple_mem_initialization)
      << Field->getDeclName()
      << Init->getSourceRange();
  else {
    const Type *BaseClass = Init->getBaseClass();
    assert(BaseClass && "neither field nor base");
    S.Diag(Init->getSourceLocation(),
           diag::err_multiple_base_initialization)
      << QualType(BaseClass, 0)
      << Init->getSourceRange();
  }
  S.Diag(PrevInit->getSourceLocation(), diag::note_previous_initializer)
    << 0 << PrevInit->getSourceRange();

  return true;
}

typedef std::pair<NamedDecl *, CXXCtorInitializer *> UnionEntry;
typedef llvm::DenseMap<RecordDecl*, UnionEntry> RedundantUnionMap;

bool CheckRedundantUnionInit(Sema &S,
                             CXXCtorInitializer *Init,
                             RedundantUnionMap &Unions) {
  FieldDecl *Field = Init->getAnyMember();
  RecordDecl *Parent = Field->getParent();
  NamedDecl *Child = Field;

  while (Parent->isAnonymousStructOrUnion() || Parent->isUnion()) {
    if (Parent->isUnion()) {
      UnionEntry &En = Unions[Parent];
      if (En.first && En.first != Child) {
        S.Diag(Init->getSourceLocation(),
               diag::err_multiple_mem_union_initialization)
          << Field->getDeclName()
          << Init->getSourceRange();
        S.Diag(En.second->getSourceLocation(), diag::note_previous_initializer)
          << 0 << En.second->getSourceRange();
        return true;
      }
      if (!En.first) {
        En.first = Child;
        En.second = Init;
      }
      if (!Parent->isAnonymousStructOrUnion())
        return false;
    }

    Child = Parent;
    Parent = cast<RecordDecl>(Parent->getDeclContext());
  }

  return false;
}
}

/// ActOnMemInitializers - Handle the member initializers for a constructor.
void Sema::ActOnMemInitializers(Decl *ConstructorDecl,
                                SourceLocation ColonLoc,
                                ArrayRef<CXXCtorInitializer*> MemInits,
                                bool AnyErrors) {
  if (!ConstructorDecl)
    return;

  AdjustDeclIfTemplate(ConstructorDecl);

  CXXConstructorDecl *Constructor
    = dyn_cast<CXXConstructorDecl>(ConstructorDecl);

  if (!Constructor) {
    Diag(ColonLoc, diag::err_only_constructors_take_base_inits);
    return;
  }

  // Mapping for the duplicate initializers check.
  // For member initializers, this is keyed with a FieldDecl*.
  // For base initializers, this is keyed with a Type*.
  llvm::DenseMap<const void *, CXXCtorInitializer *> Members;

  // Mapping for the inconsistent anonymous-union initializers check.
  RedundantUnionMap MemberUnions;

  bool HadError = false;
  for (unsigned i = 0; i < MemInits.size(); i++) {
    CXXCtorInitializer *Init = MemInits[i];

    // Set the source order index.
    Init->setSourceOrder(i);

    if (Init->isAnyMemberInitializer()) {
      const void *Key = GetKeyForMember(Context, Init);
      if (CheckRedundantInit(*this, Init, Members[Key]) ||
          CheckRedundantUnionInit(*this, Init, MemberUnions))
        HadError = true;
    } else if (Init->isBaseInitializer()) {
      const void *Key = GetKeyForMember(Context, Init);
      if (CheckRedundantInit(*this, Init, Members[Key]))
        HadError = true;
    } else {
      assert(Init->isDelegatingInitializer());
      // This must be the only initializer
      if (MemInits.size() != 1) {
        Diag(Init->getSourceLocation(),
             diag::err_delegating_initializer_alone)
          << Init->getSourceRange() << MemInits[i ? 0 : 1]->getSourceRange();
        // We will treat this as being the only initializer.
      }
      SetDelegatingInitializer(Constructor, MemInits[i]);
      // Return immediately as the initializer is set.
      return;
    }
  }

  if (HadError)
    return;

  DiagnoseBaseOrMemInitializerOrder(*this, Constructor, MemInits);

  SetCtorInitializers(Constructor, AnyErrors, MemInits);

  DiagnoseUninitializedFields(*this, Constructor);
}

void
Sema::MarkBaseAndMemberDestructorsReferenced(SourceLocation Location,
                                             CXXRecordDecl *ClassDecl) {
  // Ignore dependent contexts. Also ignore unions, since their members never
  // have destructors implicitly called.
  if (ClassDecl->isDependentContext() || ClassDecl->isUnion())
    return;

  // FIXME: all the access-control diagnostics are positioned on the
  // field/base declaration.  That's probably good; that said, the
  // user might reasonably want to know why the destructor is being
  // emitted, and we currently don't say.

  // Non-static data members.
  for (auto *Field : ClassDecl->fields()) {
    if (Field->isInvalidDecl())
      continue;

    // Don't destroy incomplete or zero-length arrays.
    if (isIncompleteOrZeroLengthArrayType(Context, Field->getType()))
      continue;

    QualType FieldType = Context.getBaseElementType(Field->getType());

    const RecordType* RT = FieldType->getAs<RecordType>();
    if (!RT)
      continue;

    CXXRecordDecl *FieldClassDecl = cast<CXXRecordDecl>(RT->getDecl());
    if (FieldClassDecl->isInvalidDecl())
      continue;
    if (FieldClassDecl->hasIrrelevantDestructor())
      continue;
    // The destructor for an implicit anonymous union member is never invoked.
    if (FieldClassDecl->isUnion() && FieldClassDecl->isAnonymousStructOrUnion())
      continue;

    CXXDestructorDecl *Dtor = LookupDestructor(FieldClassDecl);
    assert(Dtor && "No dtor found for FieldClassDecl!");
    CheckDestructorAccess(Field->getLocation(), Dtor,
                          PDiag(diag::err_access_dtor_field)
                            << Field->getDeclName()
                            << FieldType);

    MarkFunctionReferenced(Location, Dtor);
    DiagnoseUseOfDecl(Dtor, Location);
  }

  llvm::SmallPtrSet<const RecordType *, 8> DirectVirtualBases;

  // Bases.
  for (const auto &Base : ClassDecl->bases()) {
    // Bases are always records in a well-formed non-dependent class.
    const RecordType *RT = Base.getType()->getAs<RecordType>();

    // Remember direct virtual bases.
    if (Base.isVirtual())
      DirectVirtualBases.insert(RT);

    CXXRecordDecl *BaseClassDecl = cast<CXXRecordDecl>(RT->getDecl());
    // If our base class is invalid, we probably can't get its dtor anyway.
    if (BaseClassDecl->isInvalidDecl())
      continue;
    if (BaseClassDecl->hasIrrelevantDestructor())
      continue;

    CXXDestructorDecl *Dtor = LookupDestructor(BaseClassDecl);
    assert(Dtor && "No dtor found for BaseClassDecl!");

    // FIXME: caret should be on the start of the class name
    CheckDestructorAccess(Base.getLocStart(), Dtor,
                          PDiag(diag::err_access_dtor_base)
                            << Base.getType()
                            << Base.getSourceRange(),
                          Context.getTypeDeclType(ClassDecl));

    MarkFunctionReferenced(Location, Dtor);
    DiagnoseUseOfDecl(Dtor, Location);
  }

  // Virtual bases.
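  // Direct virtual bases were already handled by the loop over bases() above;
  // only the remaining (indirect) virtual bases need their destructors
  // referenced here.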
  for (const auto &VBase : ClassDecl->vbases()) {
    // Bases are always records in a well-formed non-dependent class.
    const RecordType *RT = VBase.getType()->castAs<RecordType>();

    // Ignore direct virtual bases.
    if (DirectVirtualBases.count(RT))
      continue;

    CXXRecordDecl *BaseClassDecl = cast<CXXRecordDecl>(RT->getDecl());
    // If our base class is invalid, we probably can't get its dtor anyway.
    if (BaseClassDecl->isInvalidDecl())
      continue;
    if (BaseClassDecl->hasIrrelevantDestructor())
      continue;

    CXXDestructorDecl *Dtor = LookupDestructor(BaseClassDecl);
    assert(Dtor && "No dtor found for BaseClassDecl!");
    if (CheckDestructorAccess(
            ClassDecl->getLocation(), Dtor,
            PDiag(diag::err_access_dtor_vbase)
                << Context.getTypeDeclType(ClassDecl) << VBase.getType(),
            Context.getTypeDeclType(ClassDecl)) ==
        AR_accessible) {
      CheckDerivedToBaseConversion(
          Context.getTypeDeclType(ClassDecl), VBase.getType(),
          diag::err_access_dtor_vbase, 0, ClassDecl->getLocation(),
          SourceRange(), DeclarationName(), nullptr);
    }

    MarkFunctionReferenced(Location, Dtor);
    DiagnoseUseOfDecl(Dtor, Location);
  }
}

void Sema::ActOnDefaultCtorInitializers(Decl *CDtorDecl) {
  if (!CDtorDecl)
    return;

  if (CXXConstructorDecl *Constructor
      = dyn_cast<CXXConstructorDecl>(CDtorDecl)) {
    SetCtorInitializers(Constructor, /*AnyErrors=*/false);
    DiagnoseUninitializedFields(*this, Constructor);
  }
}

bool Sema::isAbstractType(SourceLocation Loc, QualType T) {
  if (!getLangOpts().CPlusPlus)
    return false;

  const auto *RD = Context.getBaseElementType(T)->getAsCXXRecordDecl();
  if (!RD)
    return false;

  // FIXME: Per [temp.inst]p1, we are supposed to trigger instantiation of a
  // class template specialization here, but doing so breaks a lot of code.

  // We can't answer whether something is abstract until it has a
  // definition. If it's currently being defined, we'll walk back
  // over all the declarations when we have a full definition.
  const CXXRecordDecl *Def = RD->getDefinition();
  if (!Def || Def->isBeingDefined())
    return false;

  return RD->isAbstract();
}

bool Sema::RequireNonAbstractType(SourceLocation Loc, QualType T,
                                  TypeDiagnoser &Diagnoser) {
  if (!isAbstractType(Loc, T))
    return false;

  T = Context.getBaseElementType(T);
  Diagnoser.diagnose(*this, Loc, T);
  DiagnoseAbstractType(T->getAsCXXRecordDecl());
  return true;
}

void Sema::DiagnoseAbstractType(const CXXRecordDecl *RD) {
  // Check if we've already emitted the list of pure virtual functions
  // for this class.
  if (PureVirtualClassDiagSet && PureVirtualClassDiagSet->count(RD))
    return;

  // If the diagnostic is suppressed, don't emit the notes. We're only
  // going to emit them once, so try to attach them to a diagnostic we're
  // actually going to show.
  if (Diags.isLastDiagnosticIgnored())
    return;

  CXXFinalOverriderMap FinalOverriders;
  RD->getFinalOverriders(FinalOverriders);

  // Keep a set of seen pure methods so we won't diagnose the same method
  // more than once.
  llvm::SmallPtrSet<const CXXMethodDecl *, 8> SeenPureMethods;

  for (CXXFinalOverriderMap::iterator M = FinalOverriders.begin(),
                                   MEnd = FinalOverriders.end();
       M != MEnd;
       ++M) {
    for (OverridingMethods::iterator SO = M->second.begin(),
                                  SOEnd = M->second.end();
         SO != SOEnd; ++SO) {
      // C++ [class.abstract]p4:
      //   A class is abstract if it contains or inherits at least one
      //   pure virtual function for which the final overrider is pure
      //   virtual.
      //
      if (SO->second.size() != 1)
        continue;

      if (!SO->second.front().Method->isPure())
        continue;

      if (!SeenPureMethods.insert(SO->second.front().Method).second)
        continue;

      Diag(SO->second.front().Method->getLocation(),
           diag::note_pure_virtual_function)
        << SO->second.front().Method->getDeclName() << RD->getDeclName();
    }
  }

  if (!PureVirtualClassDiagSet)
    PureVirtualClassDiagSet.reset(new RecordDeclSetTy);
  PureVirtualClassDiagSet->insert(RD);
}

namespace {
struct AbstractUsageInfo {
  Sema &S;
  CXXRecordDecl *Record;
  CanQualType AbstractType;
  bool Invalid;

  AbstractUsageInfo(Sema &S, CXXRecordDecl *Record)
    : S(S), Record(Record),
      AbstractType(S.Context.getCanonicalType(
                   S.Context.getTypeDeclType(Record))),
      Invalid(false) {}

  void DiagnoseAbstractType() {
    if (Invalid) return;
    S.DiagnoseAbstractType(Record);
    Invalid = true;
  }

  void CheckType(const NamedDecl *D, TypeLoc TL, Sema::AbstractDiagSelID Sel);
};

struct CheckAbstractUsage {
  AbstractUsageInfo &Info;
  const NamedDecl *Ctx;

  CheckAbstractUsage(AbstractUsageInfo &Info, const NamedDecl *Ctx)
    : Info(Info), Ctx(Ctx) {}

  void Visit(TypeLoc TL, Sema::AbstractDiagSelID Sel) {
    switch (TL.getTypeLocClass()) {
#define ABSTRACT_TYPELOC(CLASS, PARENT)
#define TYPELOC(CLASS, PARENT) \
    case TypeLoc::CLASS: Check(TL.castAs<CLASS##TypeLoc>(), Sel); break;
#include "clang/AST/TypeLocNodes.def"
    }
  }

  void Check(FunctionProtoTypeLoc TL, Sema::AbstractDiagSelID Sel) {
    Visit(TL.getReturnLoc(), Sema::AbstractReturnType);
    for (unsigned I = 0, E = TL.getNumParams(); I != E; ++I) {
      if (!TL.getParam(I))
        continue;

      TypeSourceInfo *TSI = TL.getParam(I)->getTypeSourceInfo();
      if (TSI) Visit(TSI->getTypeLoc(), Sema::AbstractParamType);
    }
  }

  void Check(ArrayTypeLoc TL, Sema::AbstractDiagSelID Sel) {
    Visit(TL.getElementLoc(), Sema::AbstractArrayType);
  }

  void Check(TemplateSpecializationTypeLoc TL, Sema::AbstractDiagSelID Sel) {
    // Visit the type parameters from a permissive context.
    for (unsigned I = 0, E = TL.getNumArgs(); I != E; ++I) {
      TemplateArgumentLoc TAL = TL.getArgLoc(I);
      if (TAL.getArgument().getKind() == TemplateArgument::Type)
        if (TypeSourceInfo *TSI = TAL.getTypeSourceInfo())
          Visit(TSI->getTypeLoc(), Sema::AbstractNone);
      // TODO: other template argument types?
    }
  }

  // Visit pointee types from a permissive context.
#define CheckPolymorphic(Type) \
  void Check(Type TL, Sema::AbstractDiagSelID Sel) { \
    Visit(TL.getNextTypeLoc(), Sema::AbstractNone); \
  }
  CheckPolymorphic(PointerTypeLoc)
  CheckPolymorphic(ReferenceTypeLoc)
  CheckPolymorphic(MemberPointerTypeLoc)
  CheckPolymorphic(BlockPointerTypeLoc)
  CheckPolymorphic(AtomicTypeLoc)

  /// Handle all the types we haven't given a more specific
  /// implementation for above.
  void Check(TypeLoc TL, Sema::AbstractDiagSelID Sel) {
    // Every other kind of type that we haven't called out already
    // that has an inner type is either (1) sugar or (2) contains that
    // inner type in some way as a subobject.
    if (TypeLoc Next = TL.getNextTypeLoc())
      return Visit(Next, Sel);

    // If there's no inner type and we're in a permissive context,
    // don't diagnose.
    if (Sel == Sema::AbstractNone) return;

    // Check whether the type matches the abstract type.
    QualType T = TL.getType();
    if (T->isArrayType()) {
      Sel = Sema::AbstractArrayType;
      T = Info.S.Context.getBaseElementType(T);
    }
    CanQualType CT = T->getCanonicalTypeUnqualified().getUnqualifiedType();
    if (CT != Info.AbstractType) return;

    // It matched; do some magic.
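    // Report the use of the abstract type, selecting the array-specific
    // diagnostic when the match was within an array element type.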
    if (Sel == Sema::AbstractArrayType) {
      Info.S.Diag(Ctx->getLocation(), diag::err_array_of_abstract_type)
        << T << TL.getSourceRange();
    } else {
      Info.S.Diag(Ctx->getLocation(), diag::err_abstract_type_in_decl)
        << Sel << T << TL.getSourceRange();
    }
    Info.DiagnoseAbstractType();
  }
};

void AbstractUsageInfo::CheckType(const NamedDecl *D, TypeLoc TL,
                                  Sema::AbstractDiagSelID Sel) {
  CheckAbstractUsage(*this, D).Visit(TL, Sel);
}
}

/// Check for invalid uses of an abstract type in a method declaration.
static void CheckAbstractClassUsage(AbstractUsageInfo &Info,
                                    CXXMethodDecl *MD) {
  // No need to do the check on definitions, which require that
  // the return/param types be complete.
  if (MD->doesThisDeclarationHaveABody())
    return;

  // For safety's sake, just ignore it if we don't have type source
  // information. This should never happen for non-implicit methods,
  // but...
  if (TypeSourceInfo *TSI = MD->getTypeSourceInfo())
    Info.CheckType(MD, TSI->getTypeLoc(), Sema::AbstractNone);
}

/// Check for invalid uses of an abstract type within a class definition.
static void CheckAbstractClassUsage(AbstractUsageInfo &Info,
                                    CXXRecordDecl *RD) {
  for (auto *D : RD->decls()) {
    if (D->isImplicit()) continue;

    // Methods and method templates.
    if (isa<CXXMethodDecl>(D)) {
      CheckAbstractClassUsage(Info, cast<CXXMethodDecl>(D));
    } else if (isa<FunctionTemplateDecl>(D)) {
      FunctionDecl *FD = cast<FunctionTemplateDecl>(D)->getTemplatedDecl();
      CheckAbstractClassUsage(Info, cast<CXXMethodDecl>(FD));

    // Fields and static variables.
    } else if (isa<FieldDecl>(D)) {
      FieldDecl *FD = cast<FieldDecl>(D);
      if (TypeSourceInfo *TSI = FD->getTypeSourceInfo())
        Info.CheckType(FD, TSI->getTypeLoc(), Sema::AbstractFieldType);
    } else if (isa<VarDecl>(D)) {
      VarDecl *VD = cast<VarDecl>(D);
      if (TypeSourceInfo *TSI = VD->getTypeSourceInfo())
        Info.CheckType(VD, TSI->getTypeLoc(), Sema::AbstractVariableType);

    // Nested classes and class templates.
    } else if (isa<CXXRecordDecl>(D)) {
      CheckAbstractClassUsage(Info, cast<CXXRecordDecl>(D));
    } else if (isa<ClassTemplateDecl>(D)) {
      CheckAbstractClassUsage(Info,
                             cast<ClassTemplateDecl>(D)->getTemplatedDecl());
    }
  }
}

static void ReferenceDllExportedMethods(Sema &S, CXXRecordDecl *Class) {
  Attr *ClassAttr = getDLLAttr(Class);
  if (!ClassAttr)
    return;

  assert(ClassAttr->getKind() == attr::DLLExport);

  TemplateSpecializationKind TSK = Class->getTemplateSpecializationKind();

  if (TSK == TSK_ExplicitInstantiationDeclaration)
    // Don't go any further if this is just an explicit instantiation
    // declaration.
    return;

  for (Decl *Member : Class->decls()) {
    auto *MD = dyn_cast<CXXMethodDecl>(Member);
    if (!MD)
      continue;

    if (Member->getAttr<DLLExportAttr>()) {
      if (MD->isUserProvided()) {
        // Instantiate non-default class member functions ...

        // .. except for certain kinds of template specializations.
        if (TSK == TSK_ImplicitInstantiation && !ClassAttr->isInherited())
          continue;

        S.MarkFunctionReferenced(Class->getLocation(), MD);

        // The function will be passed to the consumer when its definition is
        // encountered.
      } else if (!MD->isTrivial() || MD->isExplicitlyDefaulted() ||
                 MD->isCopyAssignmentOperator() ||
                 MD->isMoveAssignmentOperator()) {
        // Synthesize and instantiate non-trivial implicit methods, explicitly
        // defaulted methods, and the copy and move assignment operators. The
        // latter are exported even if they are trivial, because the address of
        // an operator can be taken and should compare equal across libraries.
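        // Trap any errors from synthesis so they can be attributed to the
        // class's dllexport attribute, which is what forced the member to be
        // defined.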
        DiagnosticErrorTrap Trap(S.Diags);
        S.MarkFunctionReferenced(Class->getLocation(), MD);
        if (Trap.hasErrorOccurred()) {
          S.Diag(ClassAttr->getLocation(), diag::note_due_to_dllexported_class)
              << Class->getName() << !S.getLangOpts().CPlusPlus11;
          break;
        }

        // There is no later point when we will see the definition of this
        // function, so pass it to the consumer now.
        S.Consumer.HandleTopLevelDecl(DeclGroupRef(MD));
      }
    }
  }
}

static void checkForMultipleExportedDefaultConstructors(Sema &S,
                                                        CXXRecordDecl *Class) {
  // Only the MS ABI has default constructor closures, so we don't need to do
  // this semantic checking anywhere else.
  if (!S.Context.getTargetInfo().getCXXABI().isMicrosoft())
    return;

  CXXConstructorDecl *LastExportedDefaultCtor = nullptr;
  for (Decl *Member : Class->decls()) {
    // Look for exported default constructors.
    auto *CD = dyn_cast<CXXConstructorDecl>(Member);
    if (!CD || !CD->isDefaultConstructor())
      continue;
    auto *Attr = CD->getAttr<DLLExportAttr>();
    if (!Attr)
      continue;

    // If the class is non-dependent, mark the default arguments as ODR-used so
    // that we can properly codegen the constructor closure.
    if (!Class->isDependentContext()) {
      for (ParmVarDecl *PD : CD->parameters()) {
        (void)S.CheckCXXDefaultArgExpr(Attr->getLocation(), CD, PD);
        S.DiscardCleanupsInEvaluationContext();
      }
    }

    if (LastExportedDefaultCtor) {
      S.Diag(LastExportedDefaultCtor->getLocation(),
             diag::err_attribute_dll_ambiguous_default_ctor)
          << Class;
      S.Diag(CD->getLocation(), diag::note_entity_declared_at)
          << CD->getDeclName();
      return;
    }
    LastExportedDefaultCtor = CD;
  }
}

/// \brief Check class-level dllimport/dllexport attribute.
void Sema::checkClassLevelDLLAttribute(CXXRecordDecl *Class) {
  Attr *ClassAttr = getDLLAttr(Class);

  // MSVC inherits DLL attributes to partial class template specializations.
  if (Context.getTargetInfo().getCXXABI().isMicrosoft() && !ClassAttr) {
    if (auto *Spec = dyn_cast<ClassTemplatePartialSpecializationDecl>(Class)) {
      if (Attr *TemplateAttr =
              getDLLAttr(Spec->getSpecializedTemplate()->getTemplatedDecl())) {
        auto *A = cast<InheritableAttr>(TemplateAttr->clone(getASTContext()));
        A->setInherited(true);
        ClassAttr = A;
      }
    }
  }

  if (!ClassAttr)
    return;

  if (!Class->isExternallyVisible()) {
    Diag(Class->getLocation(), diag::err_attribute_dll_not_extern)
        << Class << ClassAttr;
    return;
  }

  if (Context.getTargetInfo().getCXXABI().isMicrosoft() &&
      !ClassAttr->isInherited()) {
    // Diagnose dll attributes on members of class with dll attribute.
    for (Decl *Member : Class->decls()) {
      if (!isa<VarDecl>(Member) && !isa<CXXMethodDecl>(Member))
        continue;
      InheritableAttr *MemberAttr = getDLLAttr(Member);
      if (!MemberAttr || MemberAttr->isInherited() || Member->isInvalidDecl())
        continue;

      Diag(MemberAttr->getLocation(),
           diag::err_attribute_dll_member_of_dll_class)
          << MemberAttr << ClassAttr;
      Diag(ClassAttr->getLocation(), diag::note_previous_attribute);
      Member->setInvalidDecl();
    }
  }

  if (Class->getDescribedClassTemplate())
    // Don't inherit dll attribute until the template is instantiated.
    return;

  // The class is either imported or exported.
  const bool ClassExported = ClassAttr->getKind() == attr::DLLExport;

  TemplateSpecializationKind TSK = Class->getTemplateSpecializationKind();

  // Ignore explicit dllexport on explicit class template instantiation
  // declarations.
  if (ClassExported && !ClassAttr->isInherited() &&
      TSK == TSK_ExplicitInstantiationDeclaration) {
    Class->dropAttr<DLLExportAttr>();
    return;
  }

  // Force declaration of implicit members so they can inherit the attribute.
  ForceDeclarationOfImplicitMembers(Class);

  // FIXME: MSVC's docs say all bases must be exportable, but this doesn't
  // seem to be true in practice?
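  // Propagate the class-level attribute to each eligible member, skipping the
  // cases the MS and MinGW ABIs never import or export.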
  for (Decl *Member : Class->decls()) {
    VarDecl *VD = dyn_cast<VarDecl>(Member);
    CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(Member);

    // Only methods and static fields inherit the attributes.
    if (!VD && !MD)
      continue;

    if (MD) {
      // Don't process deleted methods.
      if (MD->isDeleted())
        continue;

      if (MD->isInlined()) {
        // MinGW does not import or export inline methods.
        if (!Context.getTargetInfo().getCXXABI().isMicrosoft() &&
            !Context.getTargetInfo().getTriple().isWindowsItaniumEnvironment())
          continue;

        // MSVC versions before 2015 don't export the move assignment operators
        // and move constructor, so don't attempt to import/export them if
        // we have a definition.
        auto *Ctor = dyn_cast<CXXConstructorDecl>(MD);
        if ((MD->isMoveAssignmentOperator() ||
             (Ctor && Ctor->isMoveConstructor())) &&
            !getLangOpts().isCompatibleWithMSVC(LangOptions::MSVC2015))
          continue;

        // MSVC 2015 doesn't export trivial defaulted x-tors, but the copy
        // assignment operator is exported anyway.
        if (getLangOpts().isCompatibleWithMSVC(LangOptions::MSVC2015) &&
            (Ctor || isa<CXXDestructorDecl>(MD)) && MD->isTrivial())
          continue;
      }
    }

    if (!cast<NamedDecl>(Member)->isExternallyVisible())
      continue;

    if (!getDLLAttr(Member)) {
      auto *NewAttr =
          cast<InheritableAttr>(ClassAttr->clone(getASTContext()));
      NewAttr->setInherited(true);
      Member->addAttr(NewAttr);
    }
  }

  if (ClassExported)
    DelayedDllExportClasses.push_back(Class);
}

/// \brief Perform propagation of DLL attributes from a derived class to a
/// templated base class for MS compatibility.
void Sema::propagateDLLAttrToBaseClassTemplate(
    CXXRecordDecl *Class, Attr *ClassAttr,
    ClassTemplateSpecializationDecl *BaseTemplateSpec, SourceLocation BaseLoc) {
  if (getDLLAttr(
          BaseTemplateSpec->getSpecializedTemplate()->getTemplatedDecl())) {
    // If the base class template has a DLL attribute, don't try to change it.
    return;
  }

  auto TSK = BaseTemplateSpec->getSpecializationKind();
  if (!getDLLAttr(BaseTemplateSpec) &&
      (TSK == TSK_Undeclared || TSK == TSK_ExplicitInstantiationDeclaration ||
       TSK == TSK_ImplicitInstantiation)) {
    // The template hasn't been instantiated yet (or it has, but only as an
    // explicit instantiation declaration or implicit instantiation, which
    // means we haven't codegenned any members yet), so propagate the
    // attribute.
    auto *NewAttr = cast<InheritableAttr>(ClassAttr->clone(getASTContext()));
    NewAttr->setInherited(true);
    BaseTemplateSpec->addAttr(NewAttr);

    // If the template is already instantiated, checkDLLAttributeRedeclaration()
    // needs to be run again to see the new attribute. Otherwise this will
    // get run whenever the template is instantiated.
    if (TSK != TSK_Undeclared)
      checkClassLevelDLLAttribute(BaseTemplateSpec);

    return;
  }

  if (getDLLAttr(BaseTemplateSpec)) {
    // The template has already been specialized or instantiated with an
    // attribute, explicitly or through propagation. We should not try to
    // change it.
    return;
  }

  // The template was previously instantiated or explicitly specialized without
  // a dll attribute. It's too late for us to add an attribute, so warn that
  // this is unsupported.
  Diag(BaseLoc, diag::warn_attribute_dll_instantiated_base_class)
      << BaseTemplateSpec->isExplicitSpecialization();
  Diag(ClassAttr->getLocation(), diag::note_attribute);
  if (BaseTemplateSpec->isExplicitSpecialization()) {
    Diag(BaseTemplateSpec->getLocation(),
         diag::note_template_class_explicit_specialization_was_here)
        << BaseTemplateSpec;
  } else {
    Diag(BaseTemplateSpec->getPointOfInstantiation(),
         diag::note_template_class_instantiation_was_here)
        << BaseTemplateSpec;
  }
}

static void DefineImplicitSpecialMember(Sema &S, CXXMethodDecl *MD,
                                        SourceLocation DefaultLoc) {
  switch (S.getSpecialMember(MD)) {
  case Sema::CXXDefaultConstructor:
    S.DefineImplicitDefaultConstructor(DefaultLoc,
                                       cast<CXXConstructorDecl>(MD));
    break;
  case Sema::CXXCopyConstructor:
    S.DefineImplicitCopyConstructor(DefaultLoc, cast<CXXConstructorDecl>(MD));
    break;
  case Sema::CXXCopyAssignment:
    S.DefineImplicitCopyAssignment(DefaultLoc, MD);
    break;
  case Sema::CXXDestructor:
    S.DefineImplicitDestructor(DefaultLoc, cast<CXXDestructorDecl>(MD));
    break;
  case Sema::CXXMoveConstructor:
    S.DefineImplicitMoveConstructor(DefaultLoc, cast<CXXConstructorDecl>(MD));
    break;
  case Sema::CXXMoveAssignment:
    S.DefineImplicitMoveAssignment(DefaultLoc, MD);
    break;
  case Sema::CXXInvalid:
    llvm_unreachable("Invalid special member.");
  }
}

/// \brief Perform semantic checks on a class definition that has been
/// completed, introducing implicitly-declared members, checking for
/// abstract types, etc.
void Sema::CheckCompletedCXXClass(CXXRecordDecl *Record) {
  if (!Record)
    return;

  if (Record->isAbstract() && !Record->isInvalidDecl()) {
    AbstractUsageInfo Info(*this, Record);
    CheckAbstractClassUsage(Info, Record);
  }

  // If this is not an aggregate type and has no user-declared constructor,
  // complain about any non-static data members of reference or const scalar
  // type, since they will never get initializers.
  if (!Record->isInvalidDecl() && !Record->isDependentType() &&
      !Record->isAggregate() && !Record->hasUserDeclaredConstructor() &&
      !Record->isLambda()) {
    bool Complained = false;
    for (const auto *F : Record->fields()) {
      if (F->hasInClassInitializer() || F->isUnnamedBitfield())
        continue;

      if (F->getType()->isReferenceType() ||
          (F->getType().isConstQualified() && F->getType()->isScalarType())) {
        if (!Complained) {
          Diag(Record->getLocation(), diag::warn_no_constructor_for_refconst)
            << Record->getTagKind() << Record;
          Complained = true;
        }

        Diag(F->getLocation(), diag::note_refconst_member_not_initialized)
          << F->getType()->isReferenceType()
          << F->getDeclName();
      }
    }
  }

  if (Record->getIdentifier()) {
    // C++ [class.mem]p13:
    //   If T is the name of a class, then each of the following shall have a
    //   name different from T:
    //     - every member of every anonymous union that is a member of class T.
    //
    // C++ [class.mem]p14:
    //   In addition, if class T has a user-declared constructor (12.1), every
    //   non-static data member of class T shall have a name different from T.
    DeclContext::lookup_result R = Record->lookup(Record->getDeclName());
    for (DeclContext::lookup_iterator I = R.begin(), E = R.end(); I != E;
         ++I) {
      NamedDecl *D = *I;
      if ((isa<FieldDecl>(D) && Record->hasUserDeclaredConstructor()) ||
          isa<IndirectFieldDecl>(D)) {
        Diag(D->getLocation(), diag::err_member_name_of_class)
          << D->getDeclName();
        break;
      }
    }
  }

  // Warn if the class has virtual methods but non-virtual public destructor.
  if (Record->isPolymorphic() && !Record->isDependentType()) {
    CXXDestructorDecl *dtor = Record->getDestructor();
    if ((!dtor || (!dtor->isVirtual() && dtor->getAccess() == AS_public)) &&
        !Record->hasAttr<FinalAttr>())
      Diag(dtor ?
           dtor->getLocation() : Record->getLocation(),
           diag::warn_non_virtual_dtor) << Context.getRecordType(Record);
  }

  if (Record->isAbstract()) {
    if (FinalAttr *FA = Record->getAttr<FinalAttr>()) {
      Diag(Record->getLocation(), diag::warn_abstract_final_class)
        << FA->isSpelledAsSealed();
      DiagnoseAbstractType(Record);
    }
  }

  bool HasMethodWithOverrideControl = false,
       HasOverridingMethodWithoutOverrideControl = false;
  if (!Record->isDependentType()) {
    for (auto *M : Record->methods()) {
      // See if a method overloads virtual methods in a base
      // class without overriding any.
      if (!M->isStatic())
        DiagnoseHiddenVirtualMethods(M);
      if (M->hasAttr<OverrideAttr>())
        HasMethodWithOverrideControl = true;
      else if (M->size_overridden_methods() > 0)
        HasOverridingMethodWithoutOverrideControl = true;
      // Check whether the explicitly-defaulted special members are valid.
      if (!M->isInvalidDecl() && M->isExplicitlyDefaulted())
        CheckExplicitlyDefaultedSpecialMember(M);

      // For an explicitly defaulted or deleted special member, we defer
      // determining triviality until the class is complete. That time is now!
      CXXSpecialMember CSM = getSpecialMember(M);
      if (!M->isImplicit() && !M->isUserProvided()) {
        if (CSM != CXXInvalid) {
          M->setTrivial(SpecialMemberIsTrivial(M, CSM));

          // Inform the class that we've finished declaring this member.
          Record->finishedDefaultedOrDeletedMember(M);
        }
      }

      if (!M->isInvalidDecl() && M->isExplicitlyDefaulted() &&
          M->hasAttr<DLLExportAttr>()) {
        if (getLangOpts().isCompatibleWithMSVC(LangOptions::MSVC2015) &&
            M->isTrivial() &&
            (CSM == CXXDefaultConstructor || CSM == CXXCopyConstructor ||
             CSM == CXXDestructor))
          M->dropAttr<DLLExportAttr>();

        if (M->hasAttr<DLLExportAttr>()) {
          DefineImplicitSpecialMember(*this, M, M->getLocation());
          ActOnFinishInlineFunctionDef(M);
        }
      }
    }
  }

  if (HasMethodWithOverrideControl &&
      HasOverridingMethodWithoutOverrideControl) {
    // At least one method has the 'override' control declared.
    // Diagnose all other overridden methods which do not have 'override'
    // specified on them.
    for (auto *M : Record->methods())
      DiagnoseAbsenceOfOverrideControl(M);
  }

  // ms_struct is a request to use the same ABI rules as MSVC.  Check
  // whether this class uses any C++ features that are implemented
  // completely differently in MSVC, and if so, emit a diagnostic.
  // That diagnostic defaults to an error, but we allow projects to
  // map it down to a warning (or ignore it).  It's a fairly common
  // practice among users of the ms_struct pragma to mass-annotate
  // headers, sweeping up a bunch of types that the project doesn't
  // really rely on MSVC-compatible layout for.  We must therefore
  // support "ms_struct except for C++ stuff" as a secondary ABI.
  if (Record->isMsStruct(Context) &&
      (Record->isPolymorphic() || Record->getNumBases())) {
    Diag(Record->getLocation(), diag::warn_cxx_ms_struct);
  }

  checkClassLevelDLLAttribute(Record);
}

/// Look up the special member function that would be called by a special
/// member function for a subobject of class type.
///
/// \param Class The class type of the subobject.
/// \param CSM The kind of special member function.
/// \param FieldQuals If the subobject is a field, its cv-qualifiers.
/// \param ConstRHS True if this is a copy operation with a const object
///        on its RHS, that is, if the argument to the outer special member
///        function is 'const' and this is not a field marked 'mutable'.
static Sema::SpecialMemberOverloadResult *lookupCallFromSpecialMember(
    Sema &S, CXXRecordDecl *Class, Sema::CXXSpecialMember CSM,
    unsigned FieldQuals, bool ConstRHS) {
  unsigned LHSQuals = 0;
  if (CSM == Sema::CXXCopyAssignment || CSM == Sema::CXXMoveAssignment)
    LHSQuals = FieldQuals;

  unsigned RHSQuals = FieldQuals;
  if (CSM == Sema::CXXDefaultConstructor || CSM == Sema::CXXDestructor)
    RHSQuals = 0;
  else if (ConstRHS)
    RHSQuals |= Qualifiers::Const;

  return S.LookupSpecialMember(Class, CSM,
                               RHSQuals & Qualifiers::Const,
                               RHSQuals & Qualifiers::Volatile,
                               false,
                               LHSQuals & Qualifiers::Const,
                               LHSQuals & Qualifiers::Volatile);
}

class Sema::InheritedConstructorInfo {
  Sema &S;
  SourceLocation UseLoc;

  /// A mapping from the base classes through which the constructor was
  /// inherited to the using shadow declaration in that base class (or a null
  /// pointer if the constructor was declared in that base class).
  llvm::DenseMap<CXXRecordDecl *, ConstructorUsingShadowDecl *>
      InheritedFromBases;

public:
  InheritedConstructorInfo(Sema &S, SourceLocation UseLoc,
                           ConstructorUsingShadowDecl *Shadow)
      : S(S), UseLoc(UseLoc) {
    bool DiagnosedMultipleConstructedBases = false;
    CXXRecordDecl *ConstructedBase = nullptr;
    UsingDecl *ConstructedBaseUsing = nullptr;

    // Find the set of such base class subobjects and check that there's a
    // unique constructed subobject.
    for (auto *D : Shadow->redecls()) {
      auto *DShadow = cast<ConstructorUsingShadowDecl>(D);
      auto *DNominatedBase = DShadow->getNominatedBaseClass();
      auto *DConstructedBase = DShadow->getConstructedBaseClass();

      InheritedFromBases.insert(
          std::make_pair(DNominatedBase->getCanonicalDecl(),
                         DShadow->getNominatedBaseClassShadowDecl()));
      if (DShadow->constructsVirtualBase())
        InheritedFromBases.insert(
            std::make_pair(DConstructedBase->getCanonicalDecl(),
                           DShadow->getConstructedBaseClassShadowDecl()));
      else
        assert(DNominatedBase == DConstructedBase);

      // [class.inhctor.init]p2:
      //   If the constructor was inherited from multiple base class subobjects
      //   of type B, the program is ill-formed.
      if (!ConstructedBase) {
        ConstructedBase = DConstructedBase;
        ConstructedBaseUsing = D->getUsingDecl();
      } else if (ConstructedBase != DConstructedBase &&
                 !Shadow->isInvalidDecl()) {
        if (!DiagnosedMultipleConstructedBases) {
          S.Diag(UseLoc, diag::err_ambiguous_inherited_constructor)
              << Shadow->getTargetDecl();
          S.Diag(ConstructedBaseUsing->getLocation(),
                 diag::note_ambiguous_inherited_constructor_using)
              << ConstructedBase;
          DiagnosedMultipleConstructedBases = true;
        }
        S.Diag(D->getUsingDecl()->getLocation(),
               diag::note_ambiguous_inherited_constructor_using)
            << DConstructedBase;
      }
    }

    if (DiagnosedMultipleConstructedBases)
      Shadow->setInvalidDecl();
  }

  /// Find the constructor to use for inherited construction of a base class,
  /// and whether that base class constructor inherits the constructor from a
  /// virtual base class (in which case it won't actually invoke it).
  std::pair<CXXConstructorDecl *, bool>
  findConstructorForBase(CXXRecordDecl *Base, CXXConstructorDecl *Ctor) const {
    auto It = InheritedFromBases.find(Base->getCanonicalDecl());
    if (It == InheritedFromBases.end())
      return std::make_pair(nullptr, false);

    // This is an intermediary class.
    if (It->second)
      return std::make_pair(
          S.findInheritingConstructor(UseLoc, Ctor, It->second),
          It->second->constructsVirtualBase());

    // This is the base class from which the constructor was inherited.
    return std::make_pair(Ctor, false);
  }
};

/// Is the special member function which would be selected to perform the
/// specified operation on the specified class type a constexpr constructor?
static bool specialMemberIsConstexpr(
    Sema &S, CXXRecordDecl *ClassDecl, Sema::CXXSpecialMember CSM,
    unsigned Quals, bool ConstRHS,
    CXXConstructorDecl *InheritedCtor = nullptr,
    Sema::InheritedConstructorInfo *Inherited = nullptr) {
  // If we're inheriting a constructor, see if we need to call it for this base
  // class.
  if (InheritedCtor) {
    assert(CSM == Sema::CXXDefaultConstructor);
    auto BaseCtor =
        Inherited->findConstructorForBase(ClassDecl, InheritedCtor).first;
    if (BaseCtor)
      return BaseCtor->isConstexpr();
  }

  if (CSM == Sema::CXXDefaultConstructor)
    return ClassDecl->hasConstexprDefaultConstructor();

  Sema::SpecialMemberOverloadResult *SMOR =
      lookupCallFromSpecialMember(S, ClassDecl, CSM, Quals, ConstRHS);
  if (!SMOR || !SMOR->getMethod())
    // A constructor we wouldn't select can't be "involved in initializing"
    // anything.
    return true;
  return SMOR->getMethod()->isConstexpr();
}

/// Determine whether the specified special member function would be constexpr
/// if it were implicitly defined.
static bool defaultedSpecialMemberIsConstexpr(
    Sema &S, CXXRecordDecl *ClassDecl, Sema::CXXSpecialMember CSM,
    bool ConstArg, CXXConstructorDecl *InheritedCtor = nullptr,
    Sema::InheritedConstructorInfo *Inherited = nullptr) {
  if (!S.getLangOpts().CPlusPlus11)
    return false;

  // C++11 [dcl.constexpr]p4:
  // In the definition of a constexpr constructor [...]
  bool Ctor = true;
  switch (CSM) {
  case Sema::CXXDefaultConstructor:
    if (Inherited)
      break;
    // Since default constructor lookup is essentially trivial (and cannot
    // involve, for instance, template instantiation), we compute whether a
    // defaulted default constructor is constexpr directly within
    // CXXRecordDecl.
    //
    // This is important for performance; we need to know whether the default
    // constructor is constexpr to determine whether the type is a literal
    // type.
    return ClassDecl->defaultedDefaultConstructorIsConstexpr();

  case Sema::CXXCopyConstructor:
  case Sema::CXXMoveConstructor:
    // For copy or move constructors, we need to perform overload resolution.
    break;

  case Sema::CXXCopyAssignment:
  case Sema::CXXMoveAssignment:
    if (!S.getLangOpts().CPlusPlus14)
      return false;
    // In C++1y, we need to perform overload resolution.
    Ctor = false;
    break;

  case Sema::CXXDestructor:
  case Sema::CXXInvalid:
    return false;
  }

  //   -- if the class is a non-empty union, or for each non-empty anonymous
  //      union member of a non-union class, exactly one non-static data member
  //      shall be initialized; [DR1359]
  //
  // If we squint, this is guaranteed, since exactly one non-static data member
  // will be initialized (if the constructor isn't deleted), we just don't know
  // which one.
  if (Ctor && ClassDecl->isUnion())
    return CSM == Sema::CXXDefaultConstructor
               ? ClassDecl->hasInClassInitializer() ||
                     !ClassDecl->hasVariantMembers()
               : true;

  //   -- the class shall not have any virtual base classes;
  if (Ctor && ClassDecl->getNumVBases())
    return false;

  // C++1y [class.copy]p26:
  //   -- [the class] is a literal type, and
  if (!Ctor && !ClassDecl->isLiteral())
    return false;

  //   -- every constructor involved in initializing [...] base class
base class // sub-objects shall be a constexpr constructor; // -- the assignment operator selected to copy/move each direct base // class is a constexpr function, and for (const auto &B : ClassDecl->bases()) { const RecordType *BaseType = B.getType()->getAs(); if (!BaseType) continue; CXXRecordDecl *BaseClassDecl = cast(BaseType->getDecl()); if (!specialMemberIsConstexpr(S, BaseClassDecl, CSM, 0, ConstArg, InheritedCtor, Inherited)) return false; } // -- every constructor involved in initializing non-static data members // [...] shall be a constexpr constructor; // -- every non-static data member and base class sub-object shall be // initialized // -- for each non-static data member of X that is of class type (or array // thereof), the assignment operator selected to copy/move that member is // a constexpr function for (const auto *F : ClassDecl->fields()) { if (F->isInvalidDecl()) continue; if (CSM == Sema::CXXDefaultConstructor && F->hasInClassInitializer()) continue; QualType BaseType = S.Context.getBaseElementType(F->getType()); if (const RecordType *RecordTy = BaseType->getAs()) { CXXRecordDecl *FieldRecDecl = cast(RecordTy->getDecl()); if (!specialMemberIsConstexpr(S, FieldRecDecl, CSM, BaseType.getCVRQualifiers(), ConstArg && !F->isMutable())) return false; } else if (CSM == Sema::CXXDefaultConstructor) { return false; } } // All OK, it's constexpr! return true; } static Sema::ImplicitExceptionSpecification computeImplicitExceptionSpec(Sema &S, SourceLocation Loc, CXXMethodDecl *MD) { switch (S.getSpecialMember(MD)) { case Sema::CXXDefaultConstructor: return S.ComputeDefaultedDefaultCtorExceptionSpec(Loc, MD); case Sema::CXXCopyConstructor: return S.ComputeDefaultedCopyCtorExceptionSpec(MD); case Sema::CXXCopyAssignment: return S.ComputeDefaultedCopyAssignmentExceptionSpec(MD); case Sema::CXXMoveConstructor: return S.ComputeDefaultedMoveCtorExceptionSpec(MD); case Sema::CXXMoveAssignment: return S.ComputeDefaultedMoveAssignmentExceptionSpec(MD); case Sema::CXXDestructor: return S.ComputeDefaultedDtorExceptionSpec(MD); case Sema::CXXInvalid: break; } assert(cast(MD)->getInheritedConstructor() && "only special members have implicit exception specs"); return S.ComputeInheritingCtorExceptionSpec(Loc, cast(MD)); } static FunctionProtoType::ExtProtoInfo getImplicitMethodEPI(Sema &S, CXXMethodDecl *MD) { FunctionProtoType::ExtProtoInfo EPI; // Build an exception specification pointing back at this member. EPI.ExceptionSpec.Type = EST_Unevaluated; EPI.ExceptionSpec.SourceDecl = MD; // Set the calling convention to the default for C++ instance methods. EPI.ExtInfo = EPI.ExtInfo.withCallingConv( S.Context.getDefaultCallingConvention(/*IsVariadic=*/false, /*IsCXXMethod=*/true)); return EPI; } void Sema::EvaluateImplicitExceptionSpec(SourceLocation Loc, CXXMethodDecl *MD) { const FunctionProtoType *FPT = MD->getType()->castAs(); if (FPT->getExceptionSpecType() != EST_Unevaluated) return; // Evaluate the exception specification. auto IES = computeImplicitExceptionSpec(*this, Loc, MD); auto ESI = IES.getExceptionSpec(); // Update the type of the special member to use it. UpdateExceptionSpec(MD, ESI); // A user-provided destructor can be defined outside the class. When that // happens, be sure to update the exception specification on both // declarations. 
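  // (Editor's illustration, not part of the original source: given
  //
  //    struct S { ~S(); };   // in-class declaration
  //    S::~S() {}            // out-of-line definition
  //
  //  the evaluated exception specification must land on the canonical
  //  in-class declaration as well, which is what the code below ensures.)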
const FunctionProtoType *CanonicalFPT = MD->getCanonicalDecl()->getType()->castAs(); if (CanonicalFPT->getExceptionSpecType() == EST_Unevaluated) UpdateExceptionSpec(MD->getCanonicalDecl(), ESI); } void Sema::CheckExplicitlyDefaultedSpecialMember(CXXMethodDecl *MD) { CXXRecordDecl *RD = MD->getParent(); CXXSpecialMember CSM = getSpecialMember(MD); assert(MD->isExplicitlyDefaulted() && CSM != CXXInvalid && "not an explicitly-defaulted special member"); // Whether this was the first-declared instance of the constructor. // This affects whether we implicitly add an exception spec and constexpr. bool First = MD == MD->getCanonicalDecl(); bool HadError = false; // C++11 [dcl.fct.def.default]p1: // A function that is explicitly defaulted shall // -- be a special member function (checked elsewhere), // -- have the same type (except for ref-qualifiers, and except that a // copy operation can take a non-const reference) as an implicit // declaration, and // -- not have default arguments. unsigned ExpectedParams = 1; if (CSM == CXXDefaultConstructor || CSM == CXXDestructor) ExpectedParams = 0; if (MD->getNumParams() != ExpectedParams) { // This also checks for default arguments: a copy or move constructor with a // default argument is classified as a default constructor, and assignment // operations and destructors can't have default arguments. Diag(MD->getLocation(), diag::err_defaulted_special_member_params) << CSM << MD->getSourceRange(); HadError = true; } else if (MD->isVariadic()) { Diag(MD->getLocation(), diag::err_defaulted_special_member_variadic) << CSM << MD->getSourceRange(); HadError = true; } const FunctionProtoType *Type = MD->getType()->getAs(); bool CanHaveConstParam = false; if (CSM == CXXCopyConstructor) CanHaveConstParam = RD->implicitCopyConstructorHasConstParam(); else if (CSM == CXXCopyAssignment) CanHaveConstParam = RD->implicitCopyAssignmentHasConstParam(); QualType ReturnType = Context.VoidTy; if (CSM == CXXCopyAssignment || CSM == CXXMoveAssignment) { // Check for return type matching. ReturnType = Type->getReturnType(); QualType ExpectedReturnType = Context.getLValueReferenceType(Context.getTypeDeclType(RD)); if (!Context.hasSameType(ReturnType, ExpectedReturnType)) { Diag(MD->getLocation(), diag::err_defaulted_special_member_return_type) << (CSM == CXXMoveAssignment) << ExpectedReturnType; HadError = true; } // A defaulted special member cannot have cv-qualifiers. if (Type->getTypeQuals()) { Diag(MD->getLocation(), diag::err_defaulted_special_member_quals) << (CSM == CXXMoveAssignment) << getLangOpts().CPlusPlus14; HadError = true; } } // Check for parameter type matching. QualType ArgType = ExpectedParams ? Type->getParamType(0) : QualType(); bool HasConstParam = false; if (ExpectedParams && ArgType->isReferenceType()) { // Argument must be reference to possibly-const T. QualType ReferentType = ArgType->getPointeeType(); HasConstParam = ReferentType.isConstQualified(); if (ReferentType.isVolatileQualified()) { Diag(MD->getLocation(), diag::err_defaulted_special_member_volatile_param) << CSM; HadError = true; } if (HasConstParam && !CanHaveConstParam) { if (CSM == CXXCopyConstructor || CSM == CXXCopyAssignment) { Diag(MD->getLocation(), diag::err_defaulted_special_member_copy_const_param) << (CSM == CXXCopyAssignment); // FIXME: Explain why this special member can't be const. 
} else { Diag(MD->getLocation(), diag::err_defaulted_special_member_move_const_param) << (CSM == CXXMoveAssignment); } HadError = true; } } else if (ExpectedParams) { // A copy assignment operator can take its argument by value, but a // defaulted one cannot. assert(CSM == CXXCopyAssignment && "unexpected non-ref argument"); Diag(MD->getLocation(), diag::err_defaulted_copy_assign_not_ref); HadError = true; } // C++11 [dcl.fct.def.default]p2: // An explicitly-defaulted function may be declared constexpr only if it // would have been implicitly declared as constexpr, // Do not apply this rule to members of class templates, since core issue 1358 // makes such functions always instantiate to constexpr functions. For // functions which cannot be constexpr (for non-constructors in C++11 and for // destructors in C++1y), this is checked elsewhere. bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, RD, CSM, HasConstParam); if ((getLangOpts().CPlusPlus14 ? !isa(MD) : isa(MD)) && MD->isConstexpr() && !Constexpr && MD->getTemplatedKind() == FunctionDecl::TK_NonTemplate) { Diag(MD->getLocStart(), diag::err_incorrect_defaulted_constexpr) << CSM; // FIXME: Explain why the special member can't be constexpr. HadError = true; } // and may have an explicit exception-specification only if it is compatible // with the exception-specification on the implicit declaration. if (Type->hasExceptionSpec()) { // Delay the check if this is the first declaration of the special member, // since we may not have parsed some necessary in-class initializers yet. if (First) { // If the exception specification needs to be instantiated, do so now, // before we clobber it with an EST_Unevaluated specification below. if (Type->getExceptionSpecType() == EST_Uninstantiated) { InstantiateExceptionSpec(MD->getLocStart(), MD); Type = MD->getType()->getAs(); } DelayedDefaultedMemberExceptionSpecs.push_back(std::make_pair(MD, Type)); } else CheckExplicitlyDefaultedMemberExceptionSpec(MD, Type); } // If a function is explicitly defaulted on its first declaration, if (First) { // -- it is implicitly considered to be constexpr if the implicit // definition would be, MD->setConstexpr(Constexpr); // -- it is implicitly considered to have the same exception-specification // as if it had been implicitly declared, FunctionProtoType::ExtProtoInfo EPI = Type->getExtProtoInfo(); EPI.ExceptionSpec.Type = EST_Unevaluated; EPI.ExceptionSpec.SourceDecl = MD; MD->setType(Context.getFunctionType(ReturnType, llvm::makeArrayRef(&ArgType, ExpectedParams), EPI)); } if (ShouldDeleteSpecialMember(MD, CSM)) { if (First) { SetDeclDeleted(MD, MD->getLocation()); } else { // C++11 [dcl.fct.def.default]p4: // [For a] user-provided explicitly-defaulted function [...] if such a // function is implicitly defined as deleted, the program is ill-formed. Diag(MD->getLocation(), diag::err_out_of_line_default_deletes) << CSM; ShouldDeleteSpecialMember(MD, CSM, nullptr, /*Diagnose*/true); HadError = true; } } if (HadError) MD->setInvalidDecl(); } /// Check whether the exception specification provided for an /// explicitly-defaulted special member matches the exception specification /// that would have been generated for an implicit special member, per /// C++11 [dcl.fct.def.default]p2. void Sema::CheckExplicitlyDefaultedMemberExceptionSpec( CXXMethodDecl *MD, const FunctionProtoType *SpecifiedType) { // If the exception specification was explicitly specified but hadn't been // parsed when the method was defaulted, grab it now. 
  if (SpecifiedType->getExceptionSpecType() == EST_Unparsed)
    SpecifiedType =
        MD->getTypeSourceInfo()->getType()->castAs<FunctionProtoType>();

  // Compute the implicit exception specification.
  CallingConv CC = Context.getDefaultCallingConvention(/*IsVariadic=*/false,
                                                       /*IsCXXMethod=*/true);
  FunctionProtoType::ExtProtoInfo EPI(CC);
  auto IES = computeImplicitExceptionSpec(*this, MD->getLocation(), MD);
  EPI.ExceptionSpec = IES.getExceptionSpec();
  const FunctionProtoType *ImplicitType = cast<FunctionProtoType>(
    Context.getFunctionType(Context.VoidTy, None, EPI));

  // Ensure that it matches.
  CheckEquivalentExceptionSpec(
    PDiag(diag::err_incorrect_defaulted_exception_spec)
      << getSpecialMember(MD), PDiag(),
    ImplicitType, SourceLocation(),
    SpecifiedType, MD->getLocation());
}

void Sema::CheckDelayedMemberExceptionSpecs() {
  decltype(DelayedExceptionSpecChecks) Checks;
  decltype(DelayedDefaultedMemberExceptionSpecs) Specs;

  std::swap(Checks, DelayedExceptionSpecChecks);
  std::swap(Specs, DelayedDefaultedMemberExceptionSpecs);

  // Perform any deferred checking of exception specifications for virtual
  // destructors.
  for (auto &Check : Checks)
    CheckOverridingFunctionExceptionSpec(Check.first, Check.second);

  // Check that any explicitly-defaulted methods have exception specifications
  // compatible with their implicit exception specifications.
  for (auto &Spec : Specs)
    CheckExplicitlyDefaultedMemberExceptionSpec(Spec.first, Spec.second);
}

namespace {
struct SpecialMemberDeletionInfo {
  Sema &S;
  CXXMethodDecl *MD;
  Sema::CXXSpecialMember CSM;
  Sema::InheritedConstructorInfo *ICI;
  bool Diagnose;

  // Properties of the special member, computed for convenience.
  bool IsConstructor, IsAssignment, IsMove, ConstArg;
  SourceLocation Loc;

  bool AllFieldsAreConst;

  SpecialMemberDeletionInfo(Sema &S, CXXMethodDecl *MD,
                            Sema::CXXSpecialMember CSM,
                            Sema::InheritedConstructorInfo *ICI, bool Diagnose)
      : S(S), MD(MD), CSM(CSM), ICI(ICI), Diagnose(Diagnose),
        IsConstructor(false), IsAssignment(false), IsMove(false),
        ConstArg(false), Loc(MD->getLocation()), AllFieldsAreConst(true) {
    switch (CSM) {
    case Sema::CXXDefaultConstructor:
    case Sema::CXXCopyConstructor:
      IsConstructor = true;
      break;
    case Sema::CXXMoveConstructor:
      IsConstructor = true;
      IsMove = true;
      break;
    case Sema::CXXCopyAssignment:
      IsAssignment = true;
      break;
    case Sema::CXXMoveAssignment:
      IsAssignment = true;
      IsMove = true;
      break;
    case Sema::CXXDestructor:
      break;
    case Sema::CXXInvalid:
      llvm_unreachable("invalid special member kind");
    }

    if (MD->getNumParams()) {
      if (const ReferenceType *RT =
              MD->getParamDecl(0)->getType()->getAs<ReferenceType>())
        ConstArg = RT->getPointeeType().isConstQualified();
    }
  }

  bool inUnion() const { return MD->getParent()->isUnion(); }

  Sema::CXXSpecialMember getEffectiveCSM() {
    return ICI ? Sema::CXXInvalid : CSM;
  }

  /// Look up the corresponding special member in the given class.
  Sema::SpecialMemberOverloadResult *lookupIn(CXXRecordDecl *Class,
                                              unsigned Quals, bool IsMutable) {
    return lookupCallFromSpecialMember(S, Class, CSM, Quals,
                                       ConstArg && !IsMutable);
  }

  typedef llvm::PointerUnion<CXXBaseSpecifier*, FieldDecl*> Subobject;

  bool shouldDeleteForBase(CXXBaseSpecifier *Base);
  bool shouldDeleteForField(FieldDecl *FD);
  bool shouldDeleteForAllConstMembers();

  bool shouldDeleteForClassSubobject(CXXRecordDecl *Class, Subobject Subobj,
                                     unsigned Quals);
  bool shouldDeleteForSubobjectCall(Subobject Subobj,
                                    Sema::SpecialMemberOverloadResult *SMOR,
                                    bool IsDtorCallInCtor);

  bool isAccessible(Subobject Subobj, CXXMethodDecl *D);
};
}

/// Is the given special member inaccessible when used on the given
/// sub-object.
bool SpecialMemberDeletionInfo::isAccessible(Subobject Subobj,
                                             CXXMethodDecl *target) {
  /// If we're operating on a base class, the object type is the
  /// type of this special member.
  QualType objectTy;
  AccessSpecifier access = target->getAccess();
  if (CXXBaseSpecifier *base = Subobj.dyn_cast<CXXBaseSpecifier*>()) {
    objectTy = S.Context.getTypeDeclType(MD->getParent());
    access = CXXRecordDecl::MergeAccess(base->getAccessSpecifier(), access);

  // If we're operating on a field, the object type is the type of the field.
  } else {
    objectTy = S.Context.getTypeDeclType(target->getParent());
  }

  return S.isSpecialMemberAccessibleForDeletion(target, access, objectTy);
}

/// Check whether we should delete a special member due to the implicit
/// definition containing a call to a special member of a subobject.
bool SpecialMemberDeletionInfo::shouldDeleteForSubobjectCall(
    Subobject Subobj, Sema::SpecialMemberOverloadResult *SMOR,
    bool IsDtorCallInCtor) {
  CXXMethodDecl *Decl = SMOR->getMethod();
  FieldDecl *Field = Subobj.dyn_cast<FieldDecl*>();

  int DiagKind = -1;

  if (SMOR->getKind() == Sema::SpecialMemberOverloadResult::NoMemberOrDeleted)
    DiagKind = !Decl ? 0 : 1;
  else if (SMOR->getKind() == Sema::SpecialMemberOverloadResult::Ambiguous)
    DiagKind = 2;
  else if (!isAccessible(Subobj, Decl))
    DiagKind = 3;
  else if (!IsDtorCallInCtor && Field && Field->getParent()->isUnion() &&
           !Decl->isTrivial()) {
    // A member of a union must have a trivial corresponding special member.
    // As a weird special case, a destructor call from a union's constructor
    // must be accessible and non-deleted, but need not be trivial. Such a
    // destructor is never actually called, but is semantically checked as
    // if it were.
    DiagKind = 4;
  }

  if (DiagKind == -1)
    return false;

  if (Diagnose) {
    if (Field) {
      S.Diag(Field->getLocation(),
             diag::note_deleted_special_member_class_subobject)
        << getEffectiveCSM() << MD->getParent() << /*IsField*/true
        << Field << DiagKind << IsDtorCallInCtor;
    } else {
      CXXBaseSpecifier *Base = Subobj.get<CXXBaseSpecifier*>();
      S.Diag(Base->getLocStart(),
             diag::note_deleted_special_member_class_subobject)
        << getEffectiveCSM() << MD->getParent() << /*IsField*/false
        << Base->getType() << DiagKind << IsDtorCallInCtor;
    }

    if (DiagKind == 1)
      S.NoteDeletedFunction(Decl);
    // FIXME: Explain inaccessibility if DiagKind == 3.
  }

  return true;
}

/// Check whether we should delete a special member function due to having a
/// direct or virtual base class or non-static data member of class type M.
bool SpecialMemberDeletionInfo::shouldDeleteForClassSubobject(
    CXXRecordDecl *Class, Subobject Subobj, unsigned Quals) {
  FieldDecl *Field = Subobj.dyn_cast<FieldDecl*>();
  bool IsMutable = Field && Field->isMutable();

  // C++11 [class.ctor]p5:
  // -- any direct or virtual base class, or non-static data member with no
  //    brace-or-equal-initializer, has class type M (or array thereof) and
  //    either M has no default constructor or overload resolution as applied
  //    to M's default constructor results in an ambiguity or in a function
  //    that is deleted or inaccessible
  // C++11 [class.copy]p11, C++11 [class.copy]p23:
  // -- a direct or virtual base class B that cannot be copied/moved because
  //    overload resolution, as applied to B's corresponding special member,
  //    results in an ambiguity or a function that is deleted or inaccessible
  //    from the defaulted special member
  // C++11 [class.dtor]p5:
  // -- any direct or virtual base class [...]
has a type with a destructor // that is deleted or inaccessible if (!(CSM == Sema::CXXDefaultConstructor && Field && Field->hasInClassInitializer()) && shouldDeleteForSubobjectCall(Subobj, lookupIn(Class, Quals, IsMutable), false)) return true; // C++11 [class.ctor]p5, C++11 [class.copy]p11: // -- any direct or virtual base class or non-static data member has a // type with a destructor that is deleted or inaccessible if (IsConstructor) { Sema::SpecialMemberOverloadResult *SMOR = S.LookupSpecialMember(Class, Sema::CXXDestructor, false, false, false, false, false); if (shouldDeleteForSubobjectCall(Subobj, SMOR, true)) return true; } return false; } /// Check whether we should delete a special member function due to the class /// having a particular direct or virtual base class. bool SpecialMemberDeletionInfo::shouldDeleteForBase(CXXBaseSpecifier *Base) { CXXRecordDecl *BaseClass = Base->getType()->getAsCXXRecordDecl(); // If program is correct, BaseClass cannot be null, but if it is, the error // must be reported elsewhere. if (!BaseClass) return false; // If we have an inheriting constructor, check whether we're calling an // inherited constructor instead of a default constructor. if (ICI) { assert(CSM == Sema::CXXDefaultConstructor); auto *BaseCtor = ICI->findConstructorForBase(BaseClass, cast(MD) ->getInheritedConstructor() .getConstructor()) .first; if (BaseCtor) { if (BaseCtor->isDeleted() && Diagnose) { S.Diag(Base->getLocStart(), diag::note_deleted_special_member_class_subobject) << getEffectiveCSM() << MD->getParent() << /*IsField*/false << Base->getType() << /*Deleted*/1 << /*IsDtorCallInCtor*/false; S.NoteDeletedFunction(BaseCtor); } return BaseCtor->isDeleted(); } } return shouldDeleteForClassSubobject(BaseClass, Base, 0); } /// Check whether we should delete a special member function due to the class /// having a particular non-static data member. bool SpecialMemberDeletionInfo::shouldDeleteForField(FieldDecl *FD) { QualType FieldType = S.Context.getBaseElementType(FD->getType()); CXXRecordDecl *FieldRecord = FieldType->getAsCXXRecordDecl(); if (CSM == Sema::CXXDefaultConstructor) { // For a default constructor, all references must be initialized in-class // and, if a union, it must have a non-const member. if (FieldType->isReferenceType() && !FD->hasInClassInitializer()) { if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_default_ctor_uninit_field) << !!ICI << MD->getParent() << FD << FieldType << /*Reference*/0; return true; } // C++11 [class.ctor]p5: any non-variant non-static data member of // const-qualified type (or array thereof) with no // brace-or-equal-initializer does not have a user-provided default // constructor. if (!inUnion() && FieldType.isConstQualified() && !FD->hasInClassInitializer() && (!FieldRecord || !FieldRecord->hasUserProvidedDefaultConstructor())) { if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_default_ctor_uninit_field) << !!ICI << MD->getParent() << FD << FD->getType() << /*Const*/1; return true; } if (inUnion() && !FieldType.isConstQualified()) AllFieldsAreConst = false; } else if (CSM == Sema::CXXCopyConstructor) { // For a copy constructor, data members must not be of rvalue reference // type. if (FieldType->isRValueReferenceType()) { if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_copy_ctor_rvalue_reference) << MD->getParent() << FD << FieldType; return true; } } else if (IsAssignment) { // For an assignment operator, data members must not be of reference type. 
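    // (Editor's illustration, not part of the original source: a class like
    //
    //    struct R { int &Ref; };
    //
    //  has its defaulted copy/move assignment deleted by the check below,
    //  since assignment cannot rebind the reference member.)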
if (FieldType->isReferenceType()) { if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_assign_field) << IsMove << MD->getParent() << FD << FieldType << /*Reference*/0; return true; } if (!FieldRecord && FieldType.isConstQualified()) { // C++11 [class.copy]p23: // -- a non-static data member of const non-class type (or array thereof) if (Diagnose) S.Diag(FD->getLocation(), diag::note_deleted_assign_field) << IsMove << MD->getParent() << FD << FD->getType() << /*Const*/1; return true; } } if (FieldRecord) { // Some additional restrictions exist on the variant members. if (!inUnion() && FieldRecord->isUnion() && FieldRecord->isAnonymousStructOrUnion()) { bool AllVariantFieldsAreConst = true; // FIXME: Handle anonymous unions declared within anonymous unions. for (auto *UI : FieldRecord->fields()) { QualType UnionFieldType = S.Context.getBaseElementType(UI->getType()); if (!UnionFieldType.isConstQualified()) AllVariantFieldsAreConst = false; CXXRecordDecl *UnionFieldRecord = UnionFieldType->getAsCXXRecordDecl(); if (UnionFieldRecord && shouldDeleteForClassSubobject(UnionFieldRecord, UI, UnionFieldType.getCVRQualifiers())) return true; } // At least one member in each anonymous union must be non-const if (CSM == Sema::CXXDefaultConstructor && AllVariantFieldsAreConst && !FieldRecord->field_empty()) { if (Diagnose) S.Diag(FieldRecord->getLocation(), diag::note_deleted_default_ctor_all_const) << !!ICI << MD->getParent() << /*anonymous union*/1; return true; } // Don't check the implicit member of the anonymous union type. // This is technically non-conformant, but sanity demands it. return false; } if (shouldDeleteForClassSubobject(FieldRecord, FD, FieldType.getCVRQualifiers())) return true; } return false; } /// C++11 [class.ctor] p5: /// A defaulted default constructor for a class X is defined as deleted if /// X is a union and all of its variant members are of const-qualified type. bool SpecialMemberDeletionInfo::shouldDeleteForAllConstMembers() { // This is a silly definition, because it gives an empty union a deleted // default constructor. Don't do that. if (CSM == Sema::CXXDefaultConstructor && inUnion() && AllFieldsAreConst) { bool AnyFields = false; for (auto *F : MD->getParent()->fields()) if ((AnyFields = !F->isUnnamedBitfield())) break; if (!AnyFields) return false; if (Diagnose) S.Diag(MD->getParent()->getLocation(), diag::note_deleted_default_ctor_all_const) << !!ICI << MD->getParent() << /*not anonymous union*/0; return true; } return false; } /// Determine whether a defaulted special member function should be defined as /// deleted, as specified in C++11 [class.ctor]p5, C++11 [class.copy]p11, /// C++11 [class.copy]p23, and C++11 [class.dtor]p5. bool Sema::ShouldDeleteSpecialMember(CXXMethodDecl *MD, CXXSpecialMember CSM, InheritedConstructorInfo *ICI, bool Diagnose) { if (MD->isInvalidDecl()) return false; CXXRecordDecl *RD = MD->getParent(); assert(!RD->isDependentType() && "do deletion after instantiation"); if (!LangOpts.CPlusPlus11 || RD->isInvalidDecl()) return false; // C++11 [expr.lambda.prim]p19: // The closure type associated with a lambda-expression has a // deleted (8.4.3) default constructor and a deleted copy // assignment operator. if (RD->isLambda() && (CSM == CXXDefaultConstructor || CSM == CXXCopyAssignment)) { if (Diagnose) Diag(RD->getLocation(), diag::note_lambda_decl); return true; } // For an anonymous struct or union, the copy and assignment special members // will never be used, so skip the check. 
For an anonymous union declared at // namespace scope, the constructor and destructor are used. if (CSM != CXXDefaultConstructor && CSM != CXXDestructor && RD->isAnonymousStructOrUnion()) return false; // C++11 [class.copy]p7, p18: // If the class definition declares a move constructor or move assignment // operator, an implicitly declared copy constructor or copy assignment // operator is defined as deleted. if (MD->isImplicit() && (CSM == CXXCopyConstructor || CSM == CXXCopyAssignment)) { CXXMethodDecl *UserDeclaredMove = nullptr; // In Microsoft mode up to MSVC 2013, a user-declared move only causes the // deletion of the corresponding copy operation, not both copy operations. // MSVC 2015 has adopted the standards conforming behavior. bool DeletesOnlyMatchingCopy = getLangOpts().MSVCCompat && !getLangOpts().isCompatibleWithMSVC(LangOptions::MSVC2015); if (RD->hasUserDeclaredMoveConstructor() && (!DeletesOnlyMatchingCopy || CSM == CXXCopyConstructor)) { if (!Diagnose) return true; // Find any user-declared move constructor. for (auto *I : RD->ctors()) { if (I->isMoveConstructor()) { UserDeclaredMove = I; break; } } assert(UserDeclaredMove); } else if (RD->hasUserDeclaredMoveAssignment() && (!DeletesOnlyMatchingCopy || CSM == CXXCopyAssignment)) { if (!Diagnose) return true; // Find any user-declared move assignment operator. for (auto *I : RD->methods()) { if (I->isMoveAssignmentOperator()) { UserDeclaredMove = I; break; } } assert(UserDeclaredMove); } if (UserDeclaredMove) { Diag(UserDeclaredMove->getLocation(), diag::note_deleted_copy_user_declared_move) << (CSM == CXXCopyAssignment) << RD << UserDeclaredMove->isMoveAssignmentOperator(); return true; } } // Do access control from the special member function ContextRAII MethodContext(*this, MD); // C++11 [class.dtor]p5: // -- for a virtual destructor, lookup of the non-array deallocation function // results in an ambiguity or in a function that is deleted or inaccessible if (CSM == CXXDestructor && MD->isVirtual()) { FunctionDecl *OperatorDelete = nullptr; DeclarationName Name = Context.DeclarationNames.getCXXOperatorName(OO_Delete); if (FindDeallocationFunction(MD->getLocation(), MD->getParent(), Name, OperatorDelete, /*Diagnose*/false)) { if (Diagnose) Diag(RD->getLocation(), diag::note_deleted_dtor_no_operator_delete); return true; } } SpecialMemberDeletionInfo SMI(*this, MD, CSM, ICI, Diagnose); for (auto &BI : RD->bases()) if ((SMI.IsAssignment || !BI.isVirtual()) && SMI.shouldDeleteForBase(&BI)) return true; // Per DR1611, do not consider virtual bases of constructors of abstract // classes, since we are not going to construct them. For assignment // operators, we only assign (and thus only consider) direct bases. if ((!RD->isAbstract() || !SMI.IsConstructor) && !SMI.IsAssignment) { for (auto &BI : RD->vbases()) if (SMI.shouldDeleteForBase(&BI)) return true; } for (auto *FI : RD->fields()) if (!FI->isInvalidDecl() && !FI->isUnnamedBitfield() && SMI.shouldDeleteForField(FI)) return true; if (SMI.shouldDeleteForAllConstMembers()) return true; if (getLangOpts().CUDA) { // We should delete the special member in CUDA mode if target inference // failed. return inferCUDATargetForImplicitSpecialMember(RD, CSM, MD, SMI.ConstArg, Diagnose); } return false; } /// Perform lookup for a special member of the specified kind, and determine /// whether it is trivial. If the triviality can be determined without the /// lookup, skip it. 
This is intended for use when determining whether a /// special member of a containing object is trivial, and thus does not ever /// perform overload resolution for default constructors. /// /// If \p Selected is not \c NULL, \c *Selected will be filled in with the /// member that was most likely to be intended to be trivial, if any. static bool findTrivialSpecialMember(Sema &S, CXXRecordDecl *RD, Sema::CXXSpecialMember CSM, unsigned Quals, bool ConstRHS, CXXMethodDecl **Selected) { if (Selected) *Selected = nullptr; switch (CSM) { case Sema::CXXInvalid: llvm_unreachable("not a special member"); case Sema::CXXDefaultConstructor: // C++11 [class.ctor]p5: // A default constructor is trivial if: // - all the [direct subobjects] have trivial default constructors // // Note, no overload resolution is performed in this case. if (RD->hasTrivialDefaultConstructor()) return true; if (Selected) { // If there's a default constructor which could have been trivial, dig it // out. Otherwise, if there's any user-provided default constructor, point // to that as an example of why there's not a trivial one. CXXConstructorDecl *DefCtor = nullptr; if (RD->needsImplicitDefaultConstructor()) S.DeclareImplicitDefaultConstructor(RD); for (auto *CI : RD->ctors()) { if (!CI->isDefaultConstructor()) continue; DefCtor = CI; if (!DefCtor->isUserProvided()) break; } *Selected = DefCtor; } return false; case Sema::CXXDestructor: // C++11 [class.dtor]p5: // A destructor is trivial if: // - all the direct [subobjects] have trivial destructors if (RD->hasTrivialDestructor()) return true; if (Selected) { if (RD->needsImplicitDestructor()) S.DeclareImplicitDestructor(RD); *Selected = RD->getDestructor(); } return false; case Sema::CXXCopyConstructor: // C++11 [class.copy]p12: // A copy constructor is trivial if: // - the constructor selected to copy each direct [subobject] is trivial if (RD->hasTrivialCopyConstructor()) { if (Quals == Qualifiers::Const) // We must either select the trivial copy constructor or reach an // ambiguity; no need to actually perform overload resolution. return true; } else if (!Selected) { return false; } // In C++98, we are not supposed to perform overload resolution here, but we // treat that as a language defect, as suggested on cxx-abi-dev, to treat // cases like B as having a non-trivial copy constructor: // struct A { template A(T&); }; // struct B { mutable A a; }; goto NeedOverloadResolution; case Sema::CXXCopyAssignment: // C++11 [class.copy]p25: // A copy assignment operator is trivial if: // - the assignment operator selected to copy each direct [subobject] is // trivial if (RD->hasTrivialCopyAssignment()) { if (Quals == Qualifiers::Const) return true; } else if (!Selected) { return false; } // In C++98, we are not supposed to perform overload resolution here, but we // treat that as a language defect. goto NeedOverloadResolution; case Sema::CXXMoveConstructor: case Sema::CXXMoveAssignment: NeedOverloadResolution: Sema::SpecialMemberOverloadResult *SMOR = lookupCallFromSpecialMember(S, RD, CSM, Quals, ConstRHS); // The standard doesn't describe how to behave if the lookup is ambiguous. // We treat it as not making the member non-trivial, just like the standard // mandates for the default constructor. This should rarely matter, because // the member will also be deleted. 
if (SMOR->getKind() == Sema::SpecialMemberOverloadResult::Ambiguous) return true; if (!SMOR->getMethod()) { assert(SMOR->getKind() == Sema::SpecialMemberOverloadResult::NoMemberOrDeleted); return false; } // We deliberately don't check if we found a deleted special member. We're // not supposed to! if (Selected) *Selected = SMOR->getMethod(); return SMOR->getMethod()->isTrivial(); } llvm_unreachable("unknown special method kind"); } static CXXConstructorDecl *findUserDeclaredCtor(CXXRecordDecl *RD) { for (auto *CI : RD->ctors()) if (!CI->isImplicit()) return CI; // Look for constructor templates. typedef CXXRecordDecl::specific_decl_iterator tmpl_iter; for (tmpl_iter TI(RD->decls_begin()), TE(RD->decls_end()); TI != TE; ++TI) { if (CXXConstructorDecl *CD = dyn_cast(TI->getTemplatedDecl())) return CD; } return nullptr; } /// The kind of subobject we are checking for triviality. The values of this /// enumeration are used in diagnostics. enum TrivialSubobjectKind { /// The subobject is a base class. TSK_BaseClass, /// The subobject is a non-static data member. TSK_Field, /// The object is actually the complete object. TSK_CompleteObject }; /// Check whether the special member selected for a given type would be trivial. static bool checkTrivialSubobjectCall(Sema &S, SourceLocation SubobjLoc, QualType SubType, bool ConstRHS, Sema::CXXSpecialMember CSM, TrivialSubobjectKind Kind, bool Diagnose) { CXXRecordDecl *SubRD = SubType->getAsCXXRecordDecl(); if (!SubRD) return true; CXXMethodDecl *Selected; if (findTrivialSpecialMember(S, SubRD, CSM, SubType.getCVRQualifiers(), ConstRHS, Diagnose ? &Selected : nullptr)) return true; if (Diagnose) { if (ConstRHS) SubType.addConst(); if (!Selected && CSM == Sema::CXXDefaultConstructor) { S.Diag(SubobjLoc, diag::note_nontrivial_no_def_ctor) << Kind << SubType.getUnqualifiedType(); if (CXXConstructorDecl *CD = findUserDeclaredCtor(SubRD)) S.Diag(CD->getLocation(), diag::note_user_declared_ctor); } else if (!Selected) S.Diag(SubobjLoc, diag::note_nontrivial_no_copy) << Kind << SubType.getUnqualifiedType() << CSM << SubType; else if (Selected->isUserProvided()) { if (Kind == TSK_CompleteObject) S.Diag(Selected->getLocation(), diag::note_nontrivial_user_provided) << Kind << SubType.getUnqualifiedType() << CSM; else { S.Diag(SubobjLoc, diag::note_nontrivial_user_provided) << Kind << SubType.getUnqualifiedType() << CSM; S.Diag(Selected->getLocation(), diag::note_declared_at); } } else { if (Kind != TSK_CompleteObject) S.Diag(SubobjLoc, diag::note_nontrivial_subobject) << Kind << SubType.getUnqualifiedType() << CSM; // Explain why the defaulted or deleted special member isn't trivial. S.SpecialMemberIsTrivial(Selected, CSM, Diagnose); } } return false; } /// Check whether the members of a class type allow a special member to be /// trivial. static bool checkTrivialClassMembers(Sema &S, CXXRecordDecl *RD, Sema::CXXSpecialMember CSM, bool ConstArg, bool Diagnose) { for (const auto *FI : RD->fields()) { if (FI->isInvalidDecl() || FI->isUnnamedBitfield()) continue; QualType FieldType = S.Context.getBaseElementType(FI->getType()); // Pretend anonymous struct or union members are members of this class. if (FI->isAnonymousStructOrUnion()) { if (!checkTrivialClassMembers(S, FieldType->getAsCXXRecordDecl(), CSM, ConstArg, Diagnose)) return false; continue; } // C++11 [class.ctor]p5: // A default constructor is trivial if [...] 
// -- no non-static data member of its class has a // brace-or-equal-initializer if (CSM == Sema::CXXDefaultConstructor && FI->hasInClassInitializer()) { if (Diagnose) S.Diag(FI->getLocation(), diag::note_nontrivial_in_class_init) << FI; return false; } // Objective C ARC 4.3.5: // [...] nontrivally ownership-qualified types are [...] not trivially // default constructible, copy constructible, move constructible, copy // assignable, move assignable, or destructible [...] if (S.getLangOpts().ObjCAutoRefCount && FieldType.hasNonTrivialObjCLifetime()) { if (Diagnose) S.Diag(FI->getLocation(), diag::note_nontrivial_objc_ownership) << RD << FieldType.getObjCLifetime(); return false; } bool ConstRHS = ConstArg && !FI->isMutable(); if (!checkTrivialSubobjectCall(S, FI->getLocation(), FieldType, ConstRHS, CSM, TSK_Field, Diagnose)) return false; } return true; } /// Diagnose why the specified class does not have a trivial special member of /// the given kind. void Sema::DiagnoseNontrivial(const CXXRecordDecl *RD, CXXSpecialMember CSM) { QualType Ty = Context.getRecordType(RD); bool ConstArg = (CSM == CXXCopyConstructor || CSM == CXXCopyAssignment); checkTrivialSubobjectCall(*this, RD->getLocation(), Ty, ConstArg, CSM, TSK_CompleteObject, /*Diagnose*/true); } /// Determine whether a defaulted or deleted special member function is trivial, /// as specified in C++11 [class.ctor]p5, C++11 [class.copy]p12, /// C++11 [class.copy]p25, and C++11 [class.dtor]p5. bool Sema::SpecialMemberIsTrivial(CXXMethodDecl *MD, CXXSpecialMember CSM, bool Diagnose) { assert(!MD->isUserProvided() && CSM != CXXInvalid && "not special enough"); CXXRecordDecl *RD = MD->getParent(); bool ConstArg = false; // C++11 [class.copy]p12, p25: [DR1593] // A [special member] is trivial if [...] its parameter-type-list is // equivalent to the parameter-type-list of an implicit declaration [...] switch (CSM) { case CXXDefaultConstructor: case CXXDestructor: // Trivial default constructors and destructors cannot have parameters. break; case CXXCopyConstructor: case CXXCopyAssignment: { // Trivial copy operations always have const, non-volatile parameter types. ConstArg = true; const ParmVarDecl *Param0 = MD->getParamDecl(0); const ReferenceType *RT = Param0->getType()->getAs(); if (!RT || RT->getPointeeType().getCVRQualifiers() != Qualifiers::Const) { if (Diagnose) Diag(Param0->getLocation(), diag::note_nontrivial_param_type) << Param0->getSourceRange() << Param0->getType() << Context.getLValueReferenceType( Context.getRecordType(RD).withConst()); return false; } break; } case CXXMoveConstructor: case CXXMoveAssignment: { // Trivial move operations always have non-cv-qualified parameters. 
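    // (Editor's illustration, not part of the original source: a defaulted
    //  move constructor spelled X(const X &&) has a const-qualified referent
    //  and would therefore be reported as non-trivial by the check below.)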
const ParmVarDecl *Param0 = MD->getParamDecl(0); const RValueReferenceType *RT = Param0->getType()->getAs(); if (!RT || RT->getPointeeType().getCVRQualifiers()) { if (Diagnose) Diag(Param0->getLocation(), diag::note_nontrivial_param_type) << Param0->getSourceRange() << Param0->getType() << Context.getRValueReferenceType(Context.getRecordType(RD)); return false; } break; } case CXXInvalid: llvm_unreachable("not a special member"); } if (MD->getMinRequiredArguments() < MD->getNumParams()) { if (Diagnose) Diag(MD->getParamDecl(MD->getMinRequiredArguments())->getLocation(), diag::note_nontrivial_default_arg) << MD->getParamDecl(MD->getMinRequiredArguments())->getSourceRange(); return false; } if (MD->isVariadic()) { if (Diagnose) Diag(MD->getLocation(), diag::note_nontrivial_variadic); return false; } // C++11 [class.ctor]p5, C++11 [class.dtor]p5: // A copy/move [constructor or assignment operator] is trivial if // -- the [member] selected to copy/move each direct base class subobject // is trivial // // C++11 [class.copy]p12, C++11 [class.copy]p25: // A [default constructor or destructor] is trivial if // -- all the direct base classes have trivial [default constructors or // destructors] for (const auto &BI : RD->bases()) if (!checkTrivialSubobjectCall(*this, BI.getLocStart(), BI.getType(), ConstArg, CSM, TSK_BaseClass, Diagnose)) return false; // C++11 [class.ctor]p5, C++11 [class.dtor]p5: // A copy/move [constructor or assignment operator] for a class X is // trivial if // -- for each non-static data member of X that is of class type (or array // thereof), the constructor selected to copy/move that member is // trivial // // C++11 [class.copy]p12, C++11 [class.copy]p25: // A [default constructor or destructor] is trivial if // -- for all of the non-static data members of its class that are of class // type (or array thereof), each such class has a trivial [default // constructor or destructor] if (!checkTrivialClassMembers(*this, RD, CSM, ConstArg, Diagnose)) return false; // C++11 [class.dtor]p5: // A destructor is trivial if [...] // -- the destructor is not virtual if (CSM == CXXDestructor && MD->isVirtual()) { if (Diagnose) Diag(MD->getLocation(), diag::note_nontrivial_virtual_dtor) << RD; return false; } // C++11 [class.ctor]p5, C++11 [class.copy]p12, C++11 [class.copy]p25: // A [special member] for class X is trivial if [...] // -- class X has no virtual functions and no virtual base classes if (CSM != CXXDestructor && MD->getParent()->isDynamicClass()) { if (!Diagnose) return false; if (RD->getNumVBases()) { // Check for virtual bases. We already know that the corresponding // member in all bases is trivial, so vbases must all be direct. CXXBaseSpecifier &BS = *RD->vbases_begin(); assert(BS.isVirtual()); Diag(BS.getLocStart(), diag::note_nontrivial_has_virtual) << RD << 1; return false; } // Must have a virtual method. for (const auto *MI : RD->methods()) { if (MI->isVirtual()) { SourceLocation MLoc = MI->getLocStart(); Diag(MLoc, diag::note_nontrivial_has_virtual) << RD << 0; return false; } } llvm_unreachable("dynamic class with no vbases and no virtual functions"); } // Looks like it's trivial! 
  return true;
}

namespace {
struct FindHiddenVirtualMethod {
  Sema *S;
  CXXMethodDecl *Method;
  llvm::SmallPtrSet<const CXXMethodDecl *, 8> OverridenAndUsingBaseMethods;
  SmallVector<CXXMethodDecl *, 8> OverloadedMethods;

private:
  /// Check whether any most-overridden method from MD is in Methods.
  static bool CheckMostOverridenMethods(
      const CXXMethodDecl *MD,
      const llvm::SmallPtrSetImpl<const CXXMethodDecl *> &Methods) {
    if (MD->size_overridden_methods() == 0)
      return Methods.count(MD->getCanonicalDecl());
    for (CXXMethodDecl::method_iterator I = MD->begin_overridden_methods(),
                                        E = MD->end_overridden_methods();
         I != E; ++I)
      if (CheckMostOverridenMethods(*I, Methods))
        return true;
    return false;
  }

public:
  /// Member lookup function that determines whether a given C++
  /// method overloads virtual methods in a base class without overriding any,
  /// to be used with CXXRecordDecl::lookupInBases().
  bool operator()(const CXXBaseSpecifier *Specifier, CXXBasePath &Path) {
    RecordDecl *BaseRecord =
        Specifier->getType()->getAs<RecordType>()->getDecl();

    DeclarationName Name = Method->getDeclName();
    assert(Name.getNameKind() == DeclarationName::Identifier);

    bool foundSameNameMethod = false;
    SmallVector<CXXMethodDecl *, 8> overloadedMethods;
    for (Path.Decls = BaseRecord->lookup(Name); !Path.Decls.empty();
         Path.Decls = Path.Decls.slice(1)) {
      NamedDecl *D = Path.Decls.front();
      if (CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(D)) {
        MD = MD->getCanonicalDecl();
        foundSameNameMethod = true;
        // Interested only in hidden virtual methods.
        if (!MD->isVirtual())
          continue;
        // If the method we are checking overrides a method from its base
        // don't warn about the other overloaded methods. Clang deviates from
        // GCC by only diagnosing overloads of inherited virtual functions that
        // do not override any other virtual functions in the base. GCC's
        // -Woverloaded-virtual diagnoses any derived function hiding a virtual
        // function from a base class. These cases may be better served by a
        // warning (not specific to virtual functions) on call sites when the
        // call would select a different function from the base class, were it
        // visible.
        // See FIXME in test/SemaCXX/warn-overload-virtual.cpp for an example.
        if (!S->IsOverload(Method, MD, false))
          return true;
        // Collect the overload only if it's hidden.
        if (!CheckMostOverridenMethods(MD, OverridenAndUsingBaseMethods))
          overloadedMethods.push_back(MD);
      }
    }

    if (foundSameNameMethod)
      OverloadedMethods.append(overloadedMethods.begin(),
                               overloadedMethods.end());
    return foundSameNameMethod;
  }
};
} // end anonymous namespace

/// \brief Add the most-overridden methods from MD to Methods.
static void AddMostOverridenMethods(const CXXMethodDecl *MD,
                  llvm::SmallPtrSetImpl<const CXXMethodDecl *> &Methods) {
  if (MD->size_overridden_methods() == 0)
    Methods.insert(MD->getCanonicalDecl());
  for (CXXMethodDecl::method_iterator I = MD->begin_overridden_methods(),
                                      E = MD->end_overridden_methods();
       I != E; ++I)
    AddMostOverridenMethods(*I, Methods);
}

/// \brief Check if a method overloads virtual methods in a base class without
/// overriding any.
void Sema::FindHiddenVirtualMethods(CXXMethodDecl *MD,
                          SmallVectorImpl<CXXMethodDecl*> &OverloadedMethods) {
  if (!MD->getDeclName().isIdentifier())
    return;

  CXXBasePaths Paths(/*FindAmbiguities=*/true, // true to look in all bases.
                     /*bool RecordPaths=*/false,
                     /*bool DetectVirtual=*/false);
  FindHiddenVirtualMethod FHVM;
  FHVM.Method = MD;
  FHVM.S = this;

  // Keep the base methods that were overridden or introduced in the subclass
  // by 'using' in a set. A base method not in this set is hidden.
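  // (Editor's illustration, not part of the original source: the pattern
  //  this machinery diagnoses under -Woverloaded-virtual is
  //
  //    struct Base { virtual void f(int); };
  //    struct Derived : Base {
  //      void f(double);  // hides Base::f(int) without overriding it
  //    };
  //  )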
CXXRecordDecl *DC = MD->getParent(); DeclContext::lookup_result R = DC->lookup(MD->getDeclName()); for (DeclContext::lookup_iterator I = R.begin(), E = R.end(); I != E; ++I) { NamedDecl *ND = *I; if (UsingShadowDecl *shad = dyn_cast(*I)) ND = shad->getTargetDecl(); if (CXXMethodDecl *MD = dyn_cast(ND)) AddMostOverridenMethods(MD, FHVM.OverridenAndUsingBaseMethods); } if (DC->lookupInBases(FHVM, Paths)) OverloadedMethods = FHVM.OverloadedMethods; } void Sema::NoteHiddenVirtualMethods(CXXMethodDecl *MD, SmallVectorImpl &OverloadedMethods) { for (unsigned i = 0, e = OverloadedMethods.size(); i != e; ++i) { CXXMethodDecl *overloadedMD = OverloadedMethods[i]; PartialDiagnostic PD = PDiag( diag::note_hidden_overloaded_virtual_declared_here) << overloadedMD; HandleFunctionTypeMismatch(PD, MD->getType(), overloadedMD->getType()); Diag(overloadedMD->getLocation(), PD); } } /// \brief Diagnose methods which overload virtual methods in a base class /// without overriding any. void Sema::DiagnoseHiddenVirtualMethods(CXXMethodDecl *MD) { if (MD->isInvalidDecl()) return; if (Diags.isIgnored(diag::warn_overloaded_virtual, MD->getLocation())) return; SmallVector OverloadedMethods; FindHiddenVirtualMethods(MD, OverloadedMethods); if (!OverloadedMethods.empty()) { Diag(MD->getLocation(), diag::warn_overloaded_virtual) << MD << (OverloadedMethods.size() > 1); NoteHiddenVirtualMethods(MD, OverloadedMethods); } } void Sema::ActOnFinishCXXMemberSpecification(Scope* S, SourceLocation RLoc, Decl *TagDecl, SourceLocation LBrac, SourceLocation RBrac, AttributeList *AttrList) { if (!TagDecl) return; AdjustDeclIfTemplate(TagDecl); for (const AttributeList* l = AttrList; l; l = l->getNext()) { if (l->getKind() != AttributeList::AT_Visibility) continue; l->setInvalid(); Diag(l->getLoc(), diag::warn_attribute_after_definition_ignored) << l->getName(); } ActOnFields(S, RLoc, TagDecl, llvm::makeArrayRef( // strict aliasing violation! reinterpret_cast(FieldCollector->getCurFields()), FieldCollector->getCurNumFields()), LBrac, RBrac, AttrList); CheckCompletedCXXClass( dyn_cast_or_null(TagDecl)); } /// AddImplicitlyDeclaredMembersToClass - Adds any implicitly-declared /// special functions, such as the default constructor, copy /// constructor, or destructor, to the given C++ class (C++ /// [special]p1). This routine can only be executed just before the /// definition of the class is complete. void Sema::AddImplicitlyDeclaredMembersToClass(CXXRecordDecl *ClassDecl) { if (ClassDecl->needsImplicitDefaultConstructor()) { ++ASTContext::NumImplicitDefaultConstructors; if (ClassDecl->hasInheritedConstructor()) DeclareImplicitDefaultConstructor(ClassDecl); } if (ClassDecl->needsImplicitCopyConstructor()) { ++ASTContext::NumImplicitCopyConstructors; // If the properties or semantics of the copy constructor couldn't be // determined while the class was being declared, force a declaration // of it now. if (ClassDecl->needsOverloadResolutionForCopyConstructor() || ClassDecl->hasInheritedConstructor()) DeclareImplicitCopyConstructor(ClassDecl); // For the MS ABI we need to know whether the copy ctor is deleted. A // prerequisite for deleting the implicit copy ctor is that the class has a // move ctor or move assignment that is either user-declared or whose // semantics are inherited from a subobject. FIXME: We should provide a more // direct way for CodeGen to ask whether the constructor was deleted. 
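    // (Editor's illustration, not part of the original source: in C++11,
    //
    //    struct M { M(M &&); };
    //
    //  has its implicit copy constructor defined as deleted because of the
    //  user-declared move constructor, which is exactly the situation the
    //  branch below detects for the MS ABI.)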
else if (Context.getTargetInfo().getCXXABI().isMicrosoft() && (ClassDecl->hasUserDeclaredMoveConstructor() || ClassDecl->needsOverloadResolutionForMoveConstructor() || ClassDecl->hasUserDeclaredMoveAssignment() || ClassDecl->needsOverloadResolutionForMoveAssignment())) DeclareImplicitCopyConstructor(ClassDecl); } if (getLangOpts().CPlusPlus11 && ClassDecl->needsImplicitMoveConstructor()) { ++ASTContext::NumImplicitMoveConstructors; if (ClassDecl->needsOverloadResolutionForMoveConstructor() || ClassDecl->hasInheritedConstructor()) DeclareImplicitMoveConstructor(ClassDecl); } if (ClassDecl->needsImplicitCopyAssignment()) { ++ASTContext::NumImplicitCopyAssignmentOperators; // If we have a dynamic class, then the copy assignment operator may be // virtual, so we have to declare it immediately. This ensures that, e.g., // it shows up in the right place in the vtable and that we diagnose // problems with the implicit exception specification. if (ClassDecl->isDynamicClass() || ClassDecl->needsOverloadResolutionForCopyAssignment() || ClassDecl->hasInheritedAssignment()) DeclareImplicitCopyAssignment(ClassDecl); } if (getLangOpts().CPlusPlus11 && ClassDecl->needsImplicitMoveAssignment()) { ++ASTContext::NumImplicitMoveAssignmentOperators; // Likewise for the move assignment operator. if (ClassDecl->isDynamicClass() || ClassDecl->needsOverloadResolutionForMoveAssignment() || ClassDecl->hasInheritedAssignment()) DeclareImplicitMoveAssignment(ClassDecl); } if (ClassDecl->needsImplicitDestructor()) { ++ASTContext::NumImplicitDestructors; // If we have a dynamic class, then the destructor may be virtual, so we // have to declare the destructor immediately. This ensures that, e.g., it // shows up in the right place in the vtable and that we diagnose problems // with the implicit exception specification. if (ClassDecl->isDynamicClass() || ClassDecl->needsOverloadResolutionForDestructor()) DeclareImplicitDestructor(ClassDecl); } } unsigned Sema::ActOnReenterTemplateScope(Scope *S, Decl *D) { if (!D) return 0; // The order of template parameters is not important here. All names // get added to the same scope. SmallVector ParameterLists; if (TemplateDecl *TD = dyn_cast(D)) D = TD->getTemplatedDecl(); if (auto *PSD = dyn_cast(D)) ParameterLists.push_back(PSD->getTemplateParameters()); if (DeclaratorDecl *DD = dyn_cast(D)) { for (unsigned i = 0; i < DD->getNumTemplateParameterLists(); ++i) ParameterLists.push_back(DD->getTemplateParameterList(i)); if (FunctionDecl *FD = dyn_cast(D)) { if (FunctionTemplateDecl *FTD = FD->getDescribedFunctionTemplate()) ParameterLists.push_back(FTD->getTemplateParameters()); } } if (TagDecl *TD = dyn_cast(D)) { for (unsigned i = 0; i < TD->getNumTemplateParameterLists(); ++i) ParameterLists.push_back(TD->getTemplateParameterList(i)); if (CXXRecordDecl *RD = dyn_cast(TD)) { if (ClassTemplateDecl *CTD = RD->getDescribedClassTemplate()) ParameterLists.push_back(CTD->getTemplateParameters()); } } unsigned Count = 0; for (TemplateParameterList *Params : ParameterLists) { if (Params->size() > 0) // Ignore explicit specializations; they don't contribute to the template // depth. 
++Count; for (NamedDecl *Param : *Params) { if (Param->getDeclName()) { S->AddDecl(Param); IdResolver.AddDecl(Param); } } } return Count; } void Sema::ActOnStartDelayedMemberDeclarations(Scope *S, Decl *RecordD) { if (!RecordD) return; AdjustDeclIfTemplate(RecordD); CXXRecordDecl *Record = cast(RecordD); PushDeclContext(S, Record); } void Sema::ActOnFinishDelayedMemberDeclarations(Scope *S, Decl *RecordD) { if (!RecordD) return; PopDeclContext(); } /// This is used to implement the constant expression evaluation part of the /// attribute enable_if extension. There is nothing in standard C++ which would /// require reentering parameters. void Sema::ActOnReenterCXXMethodParameter(Scope *S, ParmVarDecl *Param) { if (!Param) return; S->AddDecl(Param); if (Param->getDeclName()) IdResolver.AddDecl(Param); } /// ActOnStartDelayedCXXMethodDeclaration - We have completed /// parsing a top-level (non-nested) C++ class, and we are now /// parsing those parts of the given Method declaration that could /// not be parsed earlier (C++ [class.mem]p2), such as default /// arguments. This action should enter the scope of the given /// Method declaration as if we had just parsed the qualified method /// name. However, it should not bring the parameters into scope; /// that will be performed by ActOnDelayedCXXMethodParameter. void Sema::ActOnStartDelayedCXXMethodDeclaration(Scope *S, Decl *MethodD) { } /// ActOnDelayedCXXMethodParameter - We've already started a delayed /// C++ method declaration. We're (re-)introducing the given /// function parameter into scope for use in parsing later parts of /// the method declaration. For example, we could see an /// ActOnParamDefaultArgument event for this parameter. void Sema::ActOnDelayedCXXMethodParameter(Scope *S, Decl *ParamD) { if (!ParamD) return; ParmVarDecl *Param = cast(ParamD); // If this parameter has an unparsed default argument, clear it out // to make way for the parsed default argument. if (Param->hasUnparsedDefaultArg()) Param->setDefaultArg(nullptr); S->AddDecl(Param); if (Param->getDeclName()) IdResolver.AddDecl(Param); } /// ActOnFinishDelayedCXXMethodDeclaration - We have finished /// processing the delayed method declaration for Method. The method /// declaration is now considered finished. There may be a separate /// ActOnStartOfFunctionDef action later (not necessarily /// immediately!) for this method, if it was also defined inside the /// class body. void Sema::ActOnFinishDelayedCXXMethodDeclaration(Scope *S, Decl *MethodD) { if (!MethodD) return; AdjustDeclIfTemplate(MethodD); FunctionDecl *Method = cast(MethodD); // Now that we have our default arguments, check the constructor // again. It could produce additional diagnostics or affect whether // the class has implicitly-declared destructors, among other // things. if (CXXConstructorDecl *Constructor = dyn_cast(Method)) CheckConstructor(Constructor); // Check the default arguments, which we may have added. if (!Method->isInvalidDecl()) CheckCXXDefaultArguments(Method); } /// CheckConstructorDeclarator - Called by ActOnDeclarator to check /// the well-formedness of the constructor declarator @p D with type @p /// R. If there are any errors in the declarator, this routine will /// emit diagnostics and set the invalid bit to true. In any case, the type /// will be updated to reflect a well-formed type for the constructor and /// returned. 
QualType Sema::CheckConstructorDeclarator(Declarator &D, QualType R,
                                          StorageClass &SC) {
  bool isVirtual = D.getDeclSpec().isVirtualSpecified();

  // C++ [class.ctor]p3:
  //   A constructor shall not be virtual (10.3) or static (9.4). A
  //   constructor can be invoked for a const, volatile or const
  //   volatile object. A constructor shall not be declared const,
  //   volatile, or const volatile (9.3.2).
  if (isVirtual) {
    if (!D.isInvalidType())
      Diag(D.getIdentifierLoc(), diag::err_constructor_cannot_be)
        << "virtual" << SourceRange(D.getDeclSpec().getVirtualSpecLoc())
        << SourceRange(D.getIdentifierLoc());
    D.setInvalidType();
  }
  if (SC == SC_Static) {
    if (!D.isInvalidType())
      Diag(D.getIdentifierLoc(), diag::err_constructor_cannot_be)
        << "static" << SourceRange(D.getDeclSpec().getStorageClassSpecLoc())
        << SourceRange(D.getIdentifierLoc());
    D.setInvalidType();
    SC = SC_None;
  }

  if (unsigned TypeQuals = D.getDeclSpec().getTypeQualifiers()) {
    diagnoseIgnoredQualifiers(
        diag::err_constructor_return_type, TypeQuals, SourceLocation(),
        D.getDeclSpec().getConstSpecLoc(), D.getDeclSpec().getVolatileSpecLoc(),
        D.getDeclSpec().getRestrictSpecLoc(),
        D.getDeclSpec().getAtomicSpecLoc());
    D.setInvalidType();
  }

  DeclaratorChunk::FunctionTypeInfo &FTI = D.getFunctionTypeInfo();
  if (FTI.TypeQuals != 0) {
    if (FTI.TypeQuals & Qualifiers::Const)
      Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_constructor)
        << "const" << SourceRange(D.getIdentifierLoc());
    if (FTI.TypeQuals & Qualifiers::Volatile)
      Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_constructor)
        << "volatile" << SourceRange(D.getIdentifierLoc());
    if (FTI.TypeQuals & Qualifiers::Restrict)
      Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_constructor)
        << "restrict" << SourceRange(D.getIdentifierLoc());
    D.setInvalidType();
  }

  // C++0x [class.ctor]p4:
  //   A constructor shall not be declared with a ref-qualifier.
  if (FTI.hasRefQualifier()) {
    Diag(FTI.getRefQualifierLoc(), diag::err_ref_qualifier_constructor)
      << FTI.RefQualifierIsLValueRef
      << FixItHint::CreateRemoval(FTI.getRefQualifierLoc());
    D.setInvalidType();
  }

  // Rebuild the function type "R" without any type qualifiers (in
  // case any of the errors above fired) and with "void" as the
  // return type, since constructors don't have return types.
  const FunctionProtoType *Proto = R->getAs<FunctionProtoType>();
  if (Proto->getReturnType() == Context.VoidTy && !D.isInvalidType())
    return R;

  FunctionProtoType::ExtProtoInfo EPI = Proto->getExtProtoInfo();
  EPI.TypeQuals = 0;
  EPI.RefQualifier = RQ_None;
  return Context.getFunctionType(Context.VoidTy, Proto->getParamTypes(), EPI);
}

/// CheckConstructor - Checks a fully-formed constructor for
/// well-formedness, issuing any diagnostics required. Returns true if
/// the constructor declarator is invalid.
void Sema::CheckConstructor(CXXConstructorDecl *Constructor) {
  CXXRecordDecl *ClassDecl
    = dyn_cast<CXXRecordDecl>(Constructor->getDeclContext());
  if (!ClassDecl)
    return Constructor->setInvalidDecl();

  // C++ [class.copy]p3:
  //   A declaration of a constructor for a class X is ill-formed if
  //   its first parameter is of type (optionally cv-qualified) X and
  //   either there are no other parameters or else all other
  //   parameters have default arguments.
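  // (Editor's illustration, not part of the original source: the check below
  //  rejects, with a fix-it suggesting "const &",
  //
  //    struct X {
  //      X(X x);  // ill-formed: first parameter is X itself, passed by value
  //    };
  //  )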
  if (!Constructor->isInvalidDecl() &&
      ((Constructor->getNumParams() == 1) ||
       (Constructor->getNumParams() > 1 &&
        Constructor->getParamDecl(1)->hasDefaultArg())) &&
      Constructor->getTemplateSpecializationKind()
                                              != TSK_ImplicitInstantiation) {
    QualType ParamType = Constructor->getParamDecl(0)->getType();
    QualType ClassTy = Context.getTagDeclType(ClassDecl);
    if (Context.getCanonicalType(ParamType).getUnqualifiedType() == ClassTy) {
      SourceLocation ParamLoc = Constructor->getParamDecl(0)->getLocation();
      const char *ConstRef
        = Constructor->getParamDecl(0)->getIdentifier() ? "const &"
                                                        : " const &";
      Diag(ParamLoc, diag::err_constructor_byvalue_arg)
        << FixItHint::CreateInsertion(ParamLoc, ConstRef);

      // FIXME: Rather than making the constructor invalid, we should endeavor
      // to fix the type.
      Constructor->setInvalidDecl();
    }
  }
}

/// CheckDestructor - Checks a fully-formed destructor definition for
/// well-formedness, issuing any diagnostics required.  Returns true
/// on error.
bool Sema::CheckDestructor(CXXDestructorDecl *Destructor) {
  CXXRecordDecl *RD = Destructor->getParent();

  if (!Destructor->getOperatorDelete() && Destructor->isVirtual()) {
    SourceLocation Loc;

    if (!Destructor->isImplicit())
      Loc = Destructor->getLocation();
    else
      Loc = RD->getLocation();

    // If we have a virtual destructor, look up the deallocation function
    if (FunctionDecl *OperatorDelete =
            FindDeallocationFunctionForDestructor(Loc, RD)) {
      MarkFunctionReferenced(Loc, OperatorDelete);
      Destructor->setOperatorDelete(OperatorDelete);
    }
  }

  return false;
}

/// CheckDestructorDeclarator - Called by ActOnDeclarator to check
/// the well-formedness of the destructor declarator @p D with type @p
/// R. If there are any errors in the declarator, this routine will
/// emit diagnostics and set the declarator to invalid. Even if this happens,
/// the type will be updated to reflect a well-formed type for the destructor
/// and returned.
QualType Sema::CheckDestructorDeclarator(Declarator &D, QualType R,
                                         StorageClass& SC) {
  // C++ [class.dtor]p1:
  //   [...] A typedef-name that names a class is a class-name
  //   (7.1.3); however, a typedef-name that names a class shall not
  //   be used as the identifier in the declarator for a destructor
  //   declaration.
  QualType DeclaratorType = GetTypeFromParser(D.getName().DestructorName);
  if (const TypedefType *TT = DeclaratorType->getAs<TypedefType>())
    Diag(D.getIdentifierLoc(), diag::err_destructor_typedef_name)
      << DeclaratorType << isa<TypeAliasDecl>(TT->getDecl());
  else if (const TemplateSpecializationType *TST =
               DeclaratorType->getAs<TemplateSpecializationType>())
    if (TST->isTypeAlias())
      Diag(D.getIdentifierLoc(), diag::err_destructor_typedef_name)
        << DeclaratorType << 1;

  // C++ [class.dtor]p2:
  //   A destructor is used to destroy objects of its class type. A
  //   destructor takes no parameters, and no return type can be
  //   specified for it (not even void). The address of a destructor
  //   shall not be taken. A destructor shall not be static. A
  //   destructor can be invoked for a const, volatile or const
  //   volatile object. A destructor shall not be declared const,
  //   volatile or const volatile (9.3.2).
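  // (Editor's illustration, not part of the original source: declarations
  //  such as
  //
  //    struct Y1 { static ~Y1(); };  // ill-formed: destructor cannot be static
  //    struct Y2 { ~Y2() const; };   // ill-formed: destructor cannot be const
  //
  //  are rejected by the checks that follow.)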
  if (SC == SC_Static) {
    if (!D.isInvalidType())
      Diag(D.getIdentifierLoc(), diag::err_destructor_cannot_be)
        << "static" << SourceRange(D.getDeclSpec().getStorageClassSpecLoc())
        << SourceRange(D.getIdentifierLoc())
        << FixItHint::CreateRemoval(D.getDeclSpec().getStorageClassSpecLoc());

    SC = SC_None;
  }
  if (!D.isInvalidType()) {
    // Destructors don't have return types, but the parser will
    // happily parse something like:
    //
    //   class X {
    //     float ~X();
    //   };
    //
    // The return type will be eliminated later.
    if (D.getDeclSpec().hasTypeSpecifier())
      Diag(D.getIdentifierLoc(), diag::err_destructor_return_type)
        << SourceRange(D.getDeclSpec().getTypeSpecTypeLoc())
        << SourceRange(D.getIdentifierLoc());
    else if (unsigned TypeQuals = D.getDeclSpec().getTypeQualifiers()) {
      diagnoseIgnoredQualifiers(diag::err_destructor_return_type, TypeQuals,
                                SourceLocation(),
                                D.getDeclSpec().getConstSpecLoc(),
                                D.getDeclSpec().getVolatileSpecLoc(),
                                D.getDeclSpec().getRestrictSpecLoc(),
                                D.getDeclSpec().getAtomicSpecLoc());
      D.setInvalidType();
    }
  }

  DeclaratorChunk::FunctionTypeInfo &FTI = D.getFunctionTypeInfo();
  if (FTI.TypeQuals != 0 && !D.isInvalidType()) {
    if (FTI.TypeQuals & Qualifiers::Const)
      Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_destructor)
        << "const" << SourceRange(D.getIdentifierLoc());
    if (FTI.TypeQuals & Qualifiers::Volatile)
      Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_destructor)
        << "volatile" << SourceRange(D.getIdentifierLoc());
    if (FTI.TypeQuals & Qualifiers::Restrict)
      Diag(D.getIdentifierLoc(), diag::err_invalid_qualified_destructor)
        << "restrict" << SourceRange(D.getIdentifierLoc());
    D.setInvalidType();
  }

  // C++0x [class.dtor]p2:
  //   A destructor shall not be declared with a ref-qualifier.
  if (FTI.hasRefQualifier()) {
    Diag(FTI.getRefQualifierLoc(), diag::err_ref_qualifier_destructor)
      << FTI.RefQualifierIsLValueRef
      << FixItHint::CreateRemoval(FTI.getRefQualifierLoc());
    D.setInvalidType();
  }

  // Make sure we don't have any parameters.
  if (FTIHasNonVoidParameters(FTI)) {
    Diag(D.getIdentifierLoc(), diag::err_destructor_with_params);

    // Delete the parameters.
    FTI.freeParams();
    D.setInvalidType();
  }

  // Make sure the destructor isn't variadic.
  if (FTI.isVariadic) {
    Diag(D.getIdentifierLoc(), diag::err_destructor_variadic);
    D.setInvalidType();
  }

  // Rebuild the function type "R" without any type qualifiers or
  // parameters (in case any of the errors above fired) and with
  // "void" as the return type, since destructors don't have return
  // types.
  if (!D.isInvalidType())
    return R;

  const FunctionProtoType *Proto = R->getAs<FunctionProtoType>();
  FunctionProtoType::ExtProtoInfo EPI = Proto->getExtProtoInfo();
  EPI.Variadic = false;
  EPI.TypeQuals = 0;
  EPI.RefQualifier = RQ_None;
  return Context.getFunctionType(Context.VoidTy, None, EPI);
}

static void extendLeft(SourceRange &R, SourceRange Before) {
  if (Before.isInvalid())
    return;
  R.setBegin(Before.getBegin());
  if (R.getEnd().isInvalid())
    R.setEnd(Before.getEnd());
}

static void extendRight(SourceRange &R, SourceRange After) {
  if (After.isInvalid())
    return;
  if (R.getBegin().isInvalid())
    R.setBegin(After.getBegin());
  R.setEnd(After.getEnd());
}

/// CheckConversionDeclarator - Called by ActOnDeclarator to check the
/// well-formedness of the conversion function declarator @p D with
/// type @p R. If there are any errors in the declarator, this routine
/// will emit diagnostics and set the declarator to invalid.  Either
/// way, the type @p R will be updated to reflect a well-formed type
/// for the conversion operator.
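///
/// For example (editor's illustration, not from the original source),
/// declarations such as 'operator int(int);' or a variadic
/// 'operator int(...);' are diagnosed here.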
void Sema::CheckConversionDeclarator(Declarator &D, QualType &R,
                                     StorageClass& SC) {
  // C++ [class.conv.fct]p1:
  //   Neither parameter types nor return type can be specified. The
  //   type of a conversion function (8.3.5) is "function taking no
  //   parameter returning conversion-type-id."
  if (SC == SC_Static) {
    if (!D.isInvalidType())
      Diag(D.getIdentifierLoc(), diag::err_conv_function_not_member)
        << SourceRange(D.getDeclSpec().getStorageClassSpecLoc())
        << D.getName().getSourceRange();
    D.setInvalidType();
    SC = SC_None;
  }

  TypeSourceInfo *ConvTSI = nullptr;
  QualType ConvType =
      GetTypeFromParser(D.getName().ConversionFunctionId, &ConvTSI);

  if (D.getDeclSpec().hasTypeSpecifier() && !D.isInvalidType()) {
    // Conversion functions don't have return types, but the parser will
    // happily parse something like:
    //
    //   class X {
    //     float operator bool();
    //   };
    //
    // The return type will be changed later anyway.
    Diag(D.getIdentifierLoc(), diag::err_conv_function_return_type)
      << SourceRange(D.getDeclSpec().getTypeSpecTypeLoc())
      << SourceRange(D.getIdentifierLoc());
    D.setInvalidType();
  }

  const FunctionProtoType *Proto = R->getAs<FunctionProtoType>();

  // Make sure we don't have any parameters.
  if (Proto->getNumParams() > 0) {
    Diag(D.getIdentifierLoc(), diag::err_conv_function_with_params);

    // Delete the parameters.
    D.getFunctionTypeInfo().freeParams();
    D.setInvalidType();
  } else if (Proto->isVariadic()) {
    Diag(D.getIdentifierLoc(), diag::err_conv_function_variadic);
    D.setInvalidType();
  }

  // Diagnose "&operator bool()" and other such nonsense.  This
  // is actually a gcc extension which we don't support.
  if (Proto->getReturnType() != ConvType) {
    bool NeedsTypedef = false;
    SourceRange Before, After;

    // Walk the chunks and extract information on them for our diagnostic.
    bool PastFunctionChunk = false;
    for (auto &Chunk : D.type_objects()) {
      switch (Chunk.Kind) {
      case DeclaratorChunk::Function:
        if (!PastFunctionChunk) {
          if (Chunk.Fun.HasTrailingReturnType) {
            TypeSourceInfo *TRT = nullptr;
            GetTypeFromParser(Chunk.Fun.getTrailingReturnType(), &TRT);
            if (TRT) extendRight(After, TRT->getTypeLoc().getSourceRange());
          }
          PastFunctionChunk = true;
          break;
        }
        // Fall through.
      case DeclaratorChunk::Array:
        NeedsTypedef = true;
        extendRight(After, Chunk.getSourceRange());
        break;

      case DeclaratorChunk::Pointer:
      case DeclaratorChunk::BlockPointer:
      case DeclaratorChunk::Reference:
      case DeclaratorChunk::MemberPointer:
      case DeclaratorChunk::Pipe:
        extendLeft(Before, Chunk.getSourceRange());
        break;

      case DeclaratorChunk::Paren:
        extendLeft(Before, Chunk.Loc);
        extendRight(After, Chunk.EndLoc);
        break;
      }
    }

    SourceLocation Loc = Before.isValid() ? Before.getBegin() :
                         After.isValid()  ? After.getBegin() :
                                            D.getIdentifierLoc();
    auto &&DB = Diag(Loc, diag::err_conv_function_with_complex_decl);
    DB << Before << After;

    if (!NeedsTypedef) {
      DB << /*don't need a typedef*/0;

      // If we can provide a correct fix-it hint, do so.
      if (After.isInvalid() && ConvTSI) {
        SourceLocation InsertLoc =
            getLocForEndOfToken(ConvTSI->getTypeLoc().getLocEnd());
        DB << FixItHint::CreateInsertion(InsertLoc, " ")
           << FixItHint::CreateInsertionFromRange(
                  InsertLoc, CharSourceRange::getTokenRange(Before))
           << FixItHint::CreateRemoval(Before);
      }
    } else if (!Proto->getReturnType()->isDependentType()) {
      DB << /*typedef*/1 << Proto->getReturnType();
    } else if (getLangOpts().CPlusPlus11) {
      DB << /*alias template*/2 << Proto->getReturnType();
    } else {
      DB << /*might not be fixable*/3;
    }

    // Recover by incorporating the other type chunks into the result type.
    // Note, this does *not* change the name of the function. This is
    // compatible with the GCC extension:
    //   struct S { &operator int(); } s;
    //   int &r = s.operator int();  // ok in GCC
    //   S::operator int&() {}       // error in GCC, function name is
    //                               // 'operator int'.
    ConvType = Proto->getReturnType();
  }

  // C++ [class.conv.fct]p4:
  //   The conversion-type-id shall not represent a function type nor
  //   an array type.
  if (ConvType->isArrayType()) {
    Diag(D.getIdentifierLoc(), diag::err_conv_function_to_array);
    ConvType = Context.getPointerType(ConvType);
    D.setInvalidType();
  } else if (ConvType->isFunctionType()) {
    Diag(D.getIdentifierLoc(), diag::err_conv_function_to_function);
    ConvType = Context.getPointerType(ConvType);
    D.setInvalidType();
  }

  // Rebuild the function type "R" without any parameters (in case any
  // of the errors above fired) and with the conversion type as the
  // return type.
  if (D.isInvalidType())
    R = Context.getFunctionType(ConvType, None, Proto->getExtProtoInfo());

  // C++0x explicit conversion operators.
  if (D.getDeclSpec().isExplicitSpecified())
    Diag(D.getDeclSpec().getExplicitSpecLoc(),
         getLangOpts().CPlusPlus11
             ? diag::warn_cxx98_compat_explicit_conversion_functions
             : diag::ext_explicit_conversion_functions)
      << SourceRange(D.getDeclSpec().getExplicitSpecLoc());
}

/// ActOnConversionDeclarator - Called by ActOnDeclarator to complete
/// the declaration of the given C++ conversion function. This routine
/// is responsible for recording the conversion function in the C++
/// class, if possible.
Decl *Sema::ActOnConversionDeclarator(CXXConversionDecl *Conversion) {
  assert(Conversion && "Expected to receive a conversion function declaration");

  CXXRecordDecl *ClassDecl = cast<CXXRecordDecl>(Conversion->getDeclContext());

  // Make sure we aren't redeclaring the conversion function.
  QualType ConvType = Context.getCanonicalType(Conversion->getConversionType());

  // C++ [class.conv.fct]p1:
  //   [...] A conversion function is never used to convert a
  //   (possibly cv-qualified) object to the (possibly cv-qualified)
  //   same object type (or a reference to it), to a (possibly
  //   cv-qualified) base class of that type (or a reference to it),
  //   or to (possibly cv-qualified) void.
  // FIXME: Suppress this warning if the conversion function ends up being a
  // virtual function that overrides a virtual function in a base class.
  QualType ClassType
    = Context.getCanonicalType(Context.getTypeDeclType(ClassDecl));
  if (const ReferenceType *ConvTypeRef = ConvType->getAs<ReferenceType>())
    ConvType = ConvTypeRef->getPointeeType();
  if (Conversion->getTemplateSpecializationKind() != TSK_Undeclared &&
      Conversion->getTemplateSpecializationKind() != TSK_ExplicitSpecialization)
    /* Suppress diagnostics for instantiations. */;
  else if (ConvType->isRecordType()) {
    ConvType = Context.getCanonicalType(ConvType).getUnqualifiedType();
    if (ConvType == ClassType)
      Diag(Conversion->getLocation(), diag::warn_conv_to_self_not_used)
        << ClassType;
    else if (IsDerivedFrom(Conversion->getLocation(), ClassType, ConvType))
      Diag(Conversion->getLocation(), diag::warn_conv_to_base_not_used)
        << ClassType << ConvType;
  } else if (ConvType->isVoidType()) {
    Diag(Conversion->getLocation(), diag::warn_conv_to_void_not_used)
      << ClassType << ConvType;
  }

  if (FunctionTemplateDecl *ConversionTemplate
                                = Conversion->getDescribedFunctionTemplate())
    return ConversionTemplate;

  return Conversion;
}

//===----------------------------------------------------------------------===//
// Namespace Handling
//===----------------------------------------------------------------------===//

/// \brief Diagnose a mismatch in 'inline' qualifiers when a namespace is
/// reopened.
static void DiagnoseNamespaceInlineMismatch(Sema &S, SourceLocation KeywordLoc,
                                            SourceLocation Loc,
                                            IdentifierInfo *II, bool *IsInline,
                                            NamespaceDecl *PrevNS) {
  assert(*IsInline != PrevNS->isInline());

  // HACK: Work around a bug in libstdc++4.6's <atomic>, where
  // std::__atomic[0,1,2] are defined as non-inline namespaces, then reopened as
  // inline namespaces, with the intention of bringing names into namespace std.
  //
  // We support this just well enough to get that case working; this is not
  // sufficient to support reopening namespaces as inline in general.
  if (*IsInline && II && II->getName().startswith("__atomic") &&
      S.getSourceManager().isInSystemHeader(Loc)) {
    // Mark all prior declarations of the namespace as inline.
    for (NamespaceDecl *NS = PrevNS->getMostRecentDecl(); NS;
         NS = NS->getPreviousDecl())
      NS->setInline(*IsInline);
    // Patch up the lookup table for the containing namespace. This isn't
    // really correct, but it's good enough for this particular case.
    for (auto *I : PrevNS->decls())
      if (auto *ND = dyn_cast<NamedDecl>(I))
        PrevNS->getParent()->makeDeclVisibleInContext(ND);
    return;
  }

  if (PrevNS->isInline())
    // The user probably just forgot the 'inline', so suggest that it
    // be added back.
    S.Diag(Loc, diag::warn_inline_namespace_reopened_noninline)
      << FixItHint::CreateInsertion(KeywordLoc, "inline ");
  else
    S.Diag(Loc, diag::err_inline_namespace_mismatch);

  S.Diag(PrevNS->getLocation(), diag::note_previous_definition);

  *IsInline = PrevNS->isInline();
}

/// ActOnStartNamespaceDef - This is called at the start of a namespace
/// definition.
Decl *Sema::ActOnStartNamespaceDef(Scope *NamespcScope,
                                   SourceLocation InlineLoc,
                                   SourceLocation NamespaceLoc,
                                   SourceLocation IdentLoc, IdentifierInfo *II,
                                   SourceLocation LBrace,
                                   AttributeList *AttrList,
                                   UsingDirectiveDecl *&UD) {
  SourceLocation StartLoc = InlineLoc.isValid() ? InlineLoc : NamespaceLoc;
  // For anonymous namespace, take the location of the left brace.
  SourceLocation Loc = II ? IdentLoc : LBrace;
  bool IsInline = InlineLoc.isValid();
  bool IsInvalid = false;
  bool IsStd = false;
  bool AddToKnown = false;
  Scope *DeclRegionScope = NamespcScope->getParent();

  NamespaceDecl *PrevNS = nullptr;
  if (II) {
    // C++ [namespace.def]p2:
    //   The identifier in an original-namespace-definition shall not
    //   have been previously defined in the declarative region in
    //   which the original-namespace-definition appears. The
    //   identifier in an original-namespace-definition is the name of
    //   the namespace. Subsequently in that declarative region, it is
    //   treated as an original-namespace-name.
    //
    // Since namespace names are unique in their scope, and we don't
    // look through using directives, just look for any ordinary names
    // as if by qualified name lookup.
    LookupResult R(*this, II, IdentLoc, LookupOrdinaryName, ForRedeclaration);
    LookupQualifiedName(R, CurContext->getRedeclContext());
    NamedDecl *PrevDecl =
        R.isSingleResult() ? R.getRepresentativeDecl() : nullptr;
    PrevNS = dyn_cast_or_null<NamespaceDecl>(PrevDecl);

    if (PrevNS) {
      // This is an extended namespace definition.
      if (IsInline != PrevNS->isInline())
        DiagnoseNamespaceInlineMismatch(*this, NamespaceLoc, Loc, II,
                                        &IsInline, PrevNS);
    } else if (PrevDecl) {
      // This is an invalid name redefinition.
      Diag(Loc, diag::err_redefinition_different_kind)
        << II;
      Diag(PrevDecl->getLocation(), diag::note_previous_definition);
      IsInvalid = true;
      // Continue on to push Namespc as current DeclContext and return it.
    } else if (II->isStr("std") &&
               CurContext->getRedeclContext()->isTranslationUnit()) {
      // This is the first "real" definition of the namespace "std", so update
      // our cache of the "std" namespace to point at this definition.
      PrevNS = getStdNamespace();
      IsStd = true;
      AddToKnown = !IsInline;
    } else {
      // We've seen this namespace for the first time.
      AddToKnown = !IsInline;
    }
  } else {
    // Anonymous namespaces.

    // Determine whether the parent already has an anonymous namespace.
    DeclContext *Parent = CurContext->getRedeclContext();
    if (TranslationUnitDecl *TU = dyn_cast<TranslationUnitDecl>(Parent)) {
      PrevNS = TU->getAnonymousNamespace();
    } else {
      NamespaceDecl *ND = cast<NamespaceDecl>(Parent);
      PrevNS = ND->getAnonymousNamespace();
    }

    if (PrevNS && IsInline != PrevNS->isInline())
      DiagnoseNamespaceInlineMismatch(*this, NamespaceLoc, NamespaceLoc, II,
                                      &IsInline, PrevNS);
  }

  NamespaceDecl *Namespc = NamespaceDecl::Create(Context, CurContext, IsInline,
                                                 StartLoc, Loc, II, PrevNS);
  if (IsInvalid)
    Namespc->setInvalidDecl();

  ProcessDeclAttributeList(DeclRegionScope, Namespc, AttrList);

  // FIXME: Should we be merging attributes?
  if (const VisibilityAttr *Attr = Namespc->getAttr<VisibilityAttr>())
    PushNamespaceVisibilityAttr(Attr, Loc);

  if (IsStd)
    StdNamespace = Namespc;
  if (AddToKnown)
    KnownNamespaces[Namespc] = false;

  if (II) {
    PushOnScopeChains(Namespc, DeclRegionScope);
  } else {
    // Link the anonymous namespace into its parent.
    DeclContext *Parent = CurContext->getRedeclContext();
    if (TranslationUnitDecl *TU = dyn_cast<TranslationUnitDecl>(Parent)) {
      TU->setAnonymousNamespace(Namespc);
    } else {
      cast<NamespaceDecl>(Parent)->setAnonymousNamespace(Namespc);
    }

    CurContext->addDecl(Namespc);

    // C++ [namespace.unnamed]p1.  An unnamed-namespace-definition
    //   behaves as if it were replaced by
    //     namespace unique { /* empty body */ }
    //     using namespace unique;
    //     namespace unique { namespace-body }
    //   where all occurrences of 'unique' in a translation unit are
    //   replaced by the same identifier and this identifier differs
    //   from all other identifiers in the entire program.

    // We just create the namespace with an empty name and then add an
    // implicit using declaration, just like the standard suggests.
    //
    // CodeGen enforces the "universally unique" aspect by giving all
    // declarations semantically contained within an anonymous
    // namespace internal linkage.

    if (!PrevNS) {
      UD = UsingDirectiveDecl::Create(Context, Parent,
                                      /* 'using' */ LBrace,
                                      /* 'namespace' */ SourceLocation(),
                                      /* qualifier */ NestedNameSpecifierLoc(),
                                      /* identifier */ SourceLocation(),
                                      Namespc,
                                      /* Ancestor */ Parent);
      UD->setImplicit();
      Parent->addDecl(UD);
    }
  }

  ActOnDocumentableDecl(Namespc);

  // Although we could have an invalid decl (i.e. the namespace name is a
  // redefinition), push it as current DeclContext and try to continue parsing.
  // FIXME: We should be able to push Namespc here, so that the each DeclContext
  // for the namespace has the declarations that showed up in that particular
  // namespace definition.
  PushDeclContext(NamespcScope, Namespc);
  return Namespc;
}

/// getNamespaceDecl - Returns the namespace a decl represents.  If the decl
/// is a namespace alias, returns the namespace it points to.
static inline NamespaceDecl *getNamespaceDecl(NamedDecl *D) {
  if (NamespaceAliasDecl *AD = dyn_cast_or_null<NamespaceAliasDecl>(D))
    return AD->getNamespace();
  return dyn_cast_or_null<NamespaceDecl>(D);
}

/// ActOnFinishNamespaceDef - This callback is called after a namespace is
/// exited. Decl is the DeclTy returned by ActOnStartNamespaceDef.
void Sema::ActOnFinishNamespaceDef(Decl *Dcl, SourceLocation RBrace) {
  NamespaceDecl *Namespc = dyn_cast_or_null<NamespaceDecl>(Dcl);
  assert(Namespc && "Invalid parameter, expected NamespaceDecl");
  Namespc->setRBraceLoc(RBrace);
  PopDeclContext();
  if (Namespc->hasAttr<VisibilityAttr>())
    PopPragmaVisibility(true, RBrace);
}

CXXRecordDecl *Sema::getStdBadAlloc() const {
  return cast_or_null<CXXRecordDecl>(
      StdBadAlloc.get(Context.getExternalSource()));
}

EnumDecl *Sema::getStdAlignValT() const {
  return cast_or_null<EnumDecl>(StdAlignValT.get(Context.getExternalSource()));
}

NamespaceDecl *Sema::getStdNamespace() const {
  return cast_or_null<NamespaceDecl>(
      StdNamespace.get(Context.getExternalSource()));
}

NamespaceDecl *Sema::lookupStdExperimentalNamespace() {
  if (!StdExperimentalNamespaceCache) {
    if (auto Std = getStdNamespace()) {
      LookupResult Result(*this, &PP.getIdentifierTable().get("experimental"),
                          SourceLocation(), LookupNamespaceName);
      if (!LookupQualifiedName(Result, Std) ||
          !(StdExperimentalNamespaceCache =
                Result.getAsSingle<NamespaceDecl>()))
        Result.suppressDiagnostics();
    }
  }
  return StdExperimentalNamespaceCache;
}

/// \brief Retrieve the special "std" namespace, which may require us to
/// implicitly define the namespace.
NamespaceDecl *Sema::getOrCreateStdNamespace() {
  if (!StdNamespace) {
    // The "std" namespace has not yet been defined, so build one implicitly.
    StdNamespace = NamespaceDecl::Create(Context,
                                         Context.getTranslationUnitDecl(),
                                         /*Inline=*/false,
                                         SourceLocation(), SourceLocation(),
                                         &PP.getIdentifierTable().get("std"),
                                         /*PrevDecl=*/nullptr);
    getStdNamespace()->setImplicit(true);
  }

  return getStdNamespace();
}

bool Sema::isStdInitializerList(QualType Ty, QualType *Element) {
  assert(getLangOpts().CPlusPlus &&
         "Looking for std::initializer_list outside of C++.");

  // We're looking for implicit instantiations of
  // template <typename E> class std::initializer_list.

  if (!StdNamespace) // If we haven't seen namespace std yet, this can't be it.
    return false;

  ClassTemplateDecl *Template = nullptr;
  const TemplateArgument *Arguments = nullptr;

  if (const RecordType *RT = Ty->getAs<RecordType>()) {
    ClassTemplateSpecializationDecl *Specialization =
        dyn_cast<ClassTemplateSpecializationDecl>(RT->getDecl());
    if (!Specialization)
      return false;

    Template = Specialization->getSpecializedTemplate();
    Arguments = Specialization->getTemplateArgs().data();
  } else if (const TemplateSpecializationType *TST =
                 Ty->getAs<TemplateSpecializationType>()) {
    Template = dyn_cast_or_null<ClassTemplateDecl>(
        TST->getTemplateName().getAsTemplateDecl());
    Arguments = TST->getArgs();
  }
  if (!Template)
    return false;

  if (!StdInitializerList) {
    // Haven't recognized std::initializer_list yet, maybe this is it.
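    // (Editor's note, not from the original source: the checks below accept
    // only a template of the shape
    //   namespace std { template <class E> class initializer_list; }
    // i.e. a single template type parameter, declared directly in 'std'.)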
    CXXRecordDecl *TemplateClass = Template->getTemplatedDecl();
    if (TemplateClass->getIdentifier() !=
            &PP.getIdentifierTable().get("initializer_list") ||
        !getStdNamespace()->InEnclosingNamespaceSetOf(
            TemplateClass->getDeclContext()))
      return false;
    // This is a template called std::initializer_list, but is it the right
    // template?
    TemplateParameterList *Params = Template->getTemplateParameters();
    if (Params->getMinRequiredArguments() != 1)
      return false;
    if (!isa<TemplateTypeParmDecl>(Params->getParam(0)))
      return false;

    // It's the right template.
    StdInitializerList = Template;
  }

  if (Template->getCanonicalDecl() != StdInitializerList->getCanonicalDecl())
    return false;

  // This is an instance of std::initializer_list. Find the argument type.
  if (Element)
    *Element = Arguments[0].getAsType();
  return true;
}

static ClassTemplateDecl *LookupStdInitializerList(Sema &S, SourceLocation Loc){
  NamespaceDecl *Std = S.getStdNamespace();
  if (!Std) {
    S.Diag(Loc, diag::err_implied_std_initializer_list_not_found);
    return nullptr;
  }

  LookupResult Result(S, &S.PP.getIdentifierTable().get("initializer_list"),
                      Loc, Sema::LookupOrdinaryName);
  if (!S.LookupQualifiedName(Result, Std)) {
    S.Diag(Loc, diag::err_implied_std_initializer_list_not_found);
    return nullptr;
  }
  ClassTemplateDecl *Template = Result.getAsSingle<ClassTemplateDecl>();
  if (!Template) {
    Result.suppressDiagnostics();
    // We found something weird. Complain about the first thing we found.
    NamedDecl *Found = *Result.begin();
    S.Diag(Found->getLocation(), diag::err_malformed_std_initializer_list);
    return nullptr;
  }

  // We found some template called std::initializer_list. Now verify that it's
  // correct.
  TemplateParameterList *Params = Template->getTemplateParameters();
  if (Params->getMinRequiredArguments() != 1 ||
      !isa<TemplateTypeParmDecl>(Params->getParam(0))) {
    S.Diag(Template->getLocation(), diag::err_malformed_std_initializer_list);
    return nullptr;
  }

  return Template;
}

QualType Sema::BuildStdInitializerList(QualType Element, SourceLocation Loc) {
  if (!StdInitializerList) {
    StdInitializerList = LookupStdInitializerList(*this, Loc);
    if (!StdInitializerList)
      return QualType();
  }

  TemplateArgumentListInfo Args(Loc, Loc);
  Args.addArgument(TemplateArgumentLoc(
      TemplateArgument(Element),
      Context.getTrivialTypeSourceInfo(Element, Loc)));
  return Context.getCanonicalType(
      CheckTemplateIdType(TemplateName(StdInitializerList), Loc, Args));
}

bool Sema::isInitListConstructor(const CXXConstructorDecl* Ctor) {
  // C++ [dcl.init.list]p2:
  //   A constructor is an initializer-list constructor if its first parameter
  //   is of type std::initializer_list<E> or reference to possibly cv-qualified
  //   std::initializer_list<E> for some type E, and either there are no other
  //   parameters or else all other parameters have default arguments.
  if (Ctor->getNumParams() < 1 ||
      (Ctor->getNumParams() > 1 && !Ctor->getParamDecl(1)->hasDefaultArg()))
    return false;

  QualType ArgType = Ctor->getParamDecl(0)->getType();
  if (const ReferenceType *RT = ArgType->getAs<ReferenceType>())
    ArgType = RT->getPointeeType().getUnqualifiedType();

  return isStdInitializerList(ArgType, nullptr);
}

/// \brief Determine whether a using statement is in a context where it will
/// apply in all contexts.
static bool IsUsingDirectiveInToplevelContext(DeclContext *CurContext) {
  switch (CurContext->getDeclKind()) {
    case Decl::TranslationUnit:
      return true;
    case Decl::LinkageSpec:
      return IsUsingDirectiveInToplevelContext(CurContext->getParent());
    default:
      return false;
  }
}

namespace {

// Callback to only accept typo corrections that are namespaces.
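// (Editor's illustration, not from the original source: this is what lets a
// mistyped directive such as 'using namespace sdt;' be corrected to
// 'using namespace std;' while non-namespace candidates are rejected.)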
class NamespaceValidatorCCC : public CorrectionCandidateCallback {
public:
  bool ValidateCandidate(const TypoCorrection &candidate) override {
    if (NamedDecl *ND = candidate.getCorrectionDecl())
      return isa<NamespaceDecl>(ND) || isa<NamespaceAliasDecl>(ND);
    return false;
  }
};

}

static bool TryNamespaceTypoCorrection(Sema &S, LookupResult &R, Scope *Sc,
                                       CXXScopeSpec &SS,
                                       SourceLocation IdentLoc,
                                       IdentifierInfo *Ident) {
  R.clear();
  if (TypoCorrection Corrected =
          S.CorrectTypo(R.getLookupNameInfo(), R.getLookupKind(), Sc, &SS,
                        llvm::make_unique<NamespaceValidatorCCC>(),
                        Sema::CTK_ErrorRecovery)) {
    if (DeclContext *DC = S.computeDeclContext(SS, false)) {
      std::string CorrectedStr(Corrected.getAsString(S.getLangOpts()));
      bool DroppedSpecifier = Corrected.WillReplaceSpecifier() &&
                              Ident->getName().equals(CorrectedStr);
      S.diagnoseTypo(Corrected,
                     S.PDiag(diag::err_using_directive_member_suggest)
                       << Ident << DC << DroppedSpecifier << SS.getRange(),
                     S.PDiag(diag::note_namespace_defined_here));
    } else {
      S.diagnoseTypo(Corrected,
                     S.PDiag(diag::err_using_directive_suggest) << Ident,
                     S.PDiag(diag::note_namespace_defined_here));
    }
    R.addDecl(Corrected.getFoundDecl());
    return true;
  }
  return false;
}

Decl *Sema::ActOnUsingDirective(Scope *S, SourceLocation UsingLoc,
                                SourceLocation NamespcLoc, CXXScopeSpec &SS,
                                SourceLocation IdentLoc,
                                IdentifierInfo *NamespcName,
                                AttributeList *AttrList) {
  assert(!SS.isInvalid() && "Invalid CXXScopeSpec.");
  assert(NamespcName && "Invalid NamespcName.");
  assert(IdentLoc.isValid() && "Invalid NamespaceName location.");

  // This can only happen along a recovery path.
  while (S->isTemplateParamScope())
    S = S->getParent();
  assert(S->getFlags() & Scope::DeclScope && "Invalid Scope.");

  UsingDirectiveDecl *UDir = nullptr;
  NestedNameSpecifier *Qualifier = nullptr;
  if (SS.isSet())
    Qualifier = SS.getScopeRep();

  // Lookup namespace name.
  LookupResult R(*this, NamespcName, IdentLoc, LookupNamespaceName);
  LookupParsedName(R, S, &SS);
  if (R.isAmbiguous())
    return nullptr;

  if (R.empty()) {
    R.clear();
    // Allow "using namespace std;" or "using namespace ::std;" even if
    // "std" hasn't been defined yet, for GCC compatibility.
    if ((!Qualifier || Qualifier->getKind() == NestedNameSpecifier::Global) &&
        NamespcName->isStr("std")) {
      Diag(IdentLoc, diag::ext_using_undefined_std);
      R.addDecl(getOrCreateStdNamespace());
      R.resolveKind();
    }
    // Otherwise, attempt typo correction.
    else
      TryNamespaceTypoCorrection(*this, R, S, SS, IdentLoc, NamespcName);
  }

  if (!R.empty()) {
    NamedDecl *Named = R.getRepresentativeDecl();
    NamespaceDecl *NS = R.getAsSingle<NamespaceDecl>();
    assert(NS && "expected namespace decl");

    // The use of a nested name specifier may trigger deprecation warnings.
    DiagnoseUseOfDecl(Named, IdentLoc);

    // C++ [namespace.udir]p1:
    //   A using-directive specifies that the names in the nominated
    //   namespace can be used in the scope in which the
    //   using-directive appears after the using-directive. During
    //   unqualified name lookup (3.4.1), the names appear as if they
    //   were declared in the nearest enclosing namespace which
    //   contains both the using-directive and the nominated
    //   namespace. [Note: in this context, "contains" means "contains
    //   directly or indirectly". ]

    // Find enclosing context containing both using-directive and
    // nominated namespace.
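    // (Editor's illustration, not from the original source: given
    //   namespace A { namespace B {} void f() { using namespace A::B; } }
    // the nearest enclosing context containing both the directive in 'f'
    // and the namespace 'A::B' is 'A'.)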
    DeclContext *CommonAncestor = cast<DeclContext>(NS);
    while (CommonAncestor && !CommonAncestor->Encloses(CurContext))
      CommonAncestor = CommonAncestor->getParent();

    UDir = UsingDirectiveDecl::Create(Context, CurContext, UsingLoc, NamespcLoc,
                                      SS.getWithLocInContext(Context),
                                      IdentLoc, Named, CommonAncestor);

    if (IsUsingDirectiveInToplevelContext(CurContext) &&
        !SourceMgr.isInMainFile(SourceMgr.getExpansionLoc(IdentLoc))) {
      Diag(IdentLoc, diag::warn_using_directive_in_header);
    }

    PushUsingDirective(S, UDir);
  } else {
    Diag(IdentLoc, diag::err_expected_namespace_name) << SS.getRange();
  }

  if (UDir)
    ProcessDeclAttributeList(S, UDir, AttrList);

  return UDir;
}

void Sema::PushUsingDirective(Scope *S, UsingDirectiveDecl *UDir) {
  // If the scope has an associated entity and the using directive is at
  // namespace or translation unit scope, add the UsingDirectiveDecl into
  // its lookup structure so qualified name lookup can find it.
  DeclContext *Ctx = S->getEntity();
  if (Ctx && !Ctx->isFunctionOrMethod())
    Ctx->addDecl(UDir);
  else
    // Otherwise, it is at block scope. The using-directives will affect lookup
    // only to the end of the scope.
    S->PushUsingDirective(UDir);
}

Decl *Sema::ActOnUsingDeclaration(Scope *S, AccessSpecifier AS,
                                  SourceLocation UsingLoc,
                                  SourceLocation TypenameLoc, CXXScopeSpec &SS,
                                  UnqualifiedId &Name,
                                  SourceLocation EllipsisLoc,
                                  AttributeList *AttrList) {
  assert(S->getFlags() & Scope::DeclScope && "Invalid Scope.");

  if (SS.isEmpty()) {
    Diag(Name.getLocStart(), diag::err_using_requires_qualname);
    return nullptr;
  }

  switch (Name.getKind()) {
  case UnqualifiedId::IK_ImplicitSelfParam:
  case UnqualifiedId::IK_Identifier:
  case UnqualifiedId::IK_OperatorFunctionId:
  case UnqualifiedId::IK_LiteralOperatorId:
  case UnqualifiedId::IK_ConversionFunctionId:
    break;

  case UnqualifiedId::IK_ConstructorName:
  case UnqualifiedId::IK_ConstructorTemplateId:
    // C++11 inheriting constructors.
    Diag(Name.getLocStart(),
         getLangOpts().CPlusPlus11
             ? diag::warn_cxx98_compat_using_decl_constructor
             : diag::err_using_decl_constructor)
      << SS.getRange();

    if (getLangOpts().CPlusPlus11) break;

    return nullptr;

  case UnqualifiedId::IK_DestructorName:
    Diag(Name.getLocStart(), diag::err_using_decl_destructor) << SS.getRange();
    return nullptr;

  case UnqualifiedId::IK_TemplateId:
    Diag(Name.getLocStart(), diag::err_using_decl_template_id)
      << SourceRange(Name.TemplateId->LAngleLoc, Name.TemplateId->RAngleLoc);
    return nullptr;
  }

  DeclarationNameInfo TargetNameInfo = GetNameFromUnqualifiedId(Name);
  DeclarationName TargetName = TargetNameInfo.getName();
  if (!TargetName)
    return nullptr;

  // Warn about access declarations.
  if (UsingLoc.isInvalid()) {
    Diag(Name.getLocStart(), getLangOpts().CPlusPlus11
                                 ? diag::err_access_decl
                                 : diag::warn_access_decl_deprecated)
      << FixItHint::CreateInsertion(SS.getRange().getBegin(), "using ");
  }

  if (EllipsisLoc.isInvalid()) {
    if (DiagnoseUnexpandedParameterPack(SS, UPPC_UsingDeclaration) ||
        DiagnoseUnexpandedParameterPack(TargetNameInfo, UPPC_UsingDeclaration))
      return nullptr;
  } else {
    if (!SS.getScopeRep()->containsUnexpandedParameterPack() &&
        !TargetNameInfo.containsUnexpandedParameterPack()) {
      Diag(EllipsisLoc, diag::err_pack_expansion_without_parameter_packs)
        << SourceRange(SS.getBeginLoc(), TargetNameInfo.getEndLoc());
      EllipsisLoc = SourceLocation();
    }
  }

  NamedDecl *UD =
      BuildUsingDeclaration(S, AS, UsingLoc, TypenameLoc.isValid(),
                            TypenameLoc, SS, TargetNameInfo, EllipsisLoc,
                            AttrList, /*IsInstantiation*/false);
  if (UD)
    PushOnScopeChains(UD, S, /*AddToContext*/ false);

  return UD;
}

/// \brief Determine whether a using declaration considers the given
/// declarations as "equivalent", e.g., if they are redeclarations of
/// the same entity or are both typedefs of the same type.
static bool
IsEquivalentForUsingDecl(ASTContext &Context, NamedDecl *D1, NamedDecl *D2) {
  if (D1->getCanonicalDecl() == D2->getCanonicalDecl())
    return true;

  if (TypedefNameDecl *TD1 = dyn_cast<TypedefNameDecl>(D1))
    if (TypedefNameDecl *TD2 = dyn_cast<TypedefNameDecl>(D2))
      return Context.hasSameType(TD1->getUnderlyingType(),
                                 TD2->getUnderlyingType());

  return false;
}

/// Determines whether to create a using shadow decl for a particular
/// decl, given the set of decls existing prior to this using lookup.
bool Sema::CheckUsingShadowDecl(UsingDecl *Using, NamedDecl *Orig,
                                const LookupResult &Previous,
                                UsingShadowDecl *&PrevShadow) {
  // Diagnose finding a decl which is not from a base class of the
  // current class.  We do this now because there are cases where this
  // function will silently decide not to build a shadow decl, which
  // will pre-empt further diagnostics.
  //
  // We don't need to do this in C++11 because we do the check once on
  // the qualifier.
  //
  // FIXME: diagnose the following if we care enough:
  //   struct A { int foo; };
  //   struct B : A { using A::foo; };
  //   template <class T> struct C : A {};
  //   template <class T> struct D : C<T> { using B::foo; } // <---
  // This is invalid (during instantiation) in C++03 because B::foo
  // resolves to the using decl in B, which is not a base class of D<T>.
  // We can't diagnose it immediately because C<T> is an unknown
  // specialization. The UsingShadowDecl in D<T> then points directly
  // to A::foo, which will look well-formed when we instantiate.
  // The right solution is to not collapse the shadow-decl chain.
  if (!getLangOpts().CPlusPlus11 && CurContext->isRecord()) {
    DeclContext *OrigDC = Orig->getDeclContext();

    // Handle enums and anonymous structs.
    if (isa<EnumDecl>(OrigDC))
      OrigDC = OrigDC->getParent();
    CXXRecordDecl *OrigRec = cast<CXXRecordDecl>(OrigDC);
    while (OrigRec->isAnonymousStructOrUnion())
      OrigRec = cast<CXXRecordDecl>(OrigRec->getDeclContext());

    if (cast<CXXRecordDecl>(CurContext)->isProvablyNotDerivedFrom(OrigRec)) {
      if (OrigDC == CurContext) {
        Diag(Using->getLocation(),
             diag::err_using_decl_nested_name_specifier_is_current_class)
          << Using->getQualifierLoc().getSourceRange();
        Diag(Orig->getLocation(), diag::note_using_decl_target);
        Using->setInvalidDecl();
        return true;
      }

      Diag(Using->getQualifierLoc().getBeginLoc(),
           diag::err_using_decl_nested_name_specifier_is_not_base_class)
        << Using->getQualifier()
        << cast<CXXRecordDecl>(CurContext)
        << Using->getQualifierLoc().getSourceRange();
      Diag(Orig->getLocation(), diag::note_using_decl_target);
      Using->setInvalidDecl();
      return true;
    }
  }

  if (Previous.empty()) return false;

  NamedDecl *Target = Orig;
  if (isa<UsingShadowDecl>(Target))
    Target = cast<UsingShadowDecl>(Target)->getTargetDecl();

  // If the target happens to be one of the previous declarations, we
  // don't have a conflict.
  //
  // FIXME: but we might be increasing its access, in which case we
  // should redeclare it.
  NamedDecl *NonTag = nullptr, *Tag = nullptr;
  bool FoundEquivalentDecl = false;
  for (LookupResult::iterator I = Previous.begin(), E = Previous.end();
         I != E; ++I) {
    NamedDecl *D = (*I)->getUnderlyingDecl();
    // We can have UsingDecls in our Previous results because we use the same
    // LookupResult for checking whether the UsingDecl itself is a valid
    // redeclaration.
    if (isa<UsingDecl>(D) || isa<UsingPackDecl>(D))
      continue;

    if (IsEquivalentForUsingDecl(Context, D, Target)) {
      if (UsingShadowDecl *Shadow = dyn_cast<UsingShadowDecl>(*I))
        PrevShadow = Shadow;
      FoundEquivalentDecl = true;
    } else if (isEquivalentInternalLinkageDeclaration(D, Target)) {
      // We don't conflict with an existing using shadow decl of an equivalent
      // declaration, but we're not a redeclaration of it.
      FoundEquivalentDecl = true;
    }

    if (isVisible(D))
      (isa<TagDecl>(D) ? Tag : NonTag) = D;
  }

  if (FoundEquivalentDecl)
    return false;

  if (FunctionDecl *FD = Target->getAsFunction()) {
    NamedDecl *OldDecl = nullptr;
    switch (CheckOverload(nullptr, FD, Previous, OldDecl,
                          /*IsForUsingDecl*/ true)) {
    case Ovl_Overload:
      return false;

    case Ovl_NonFunction:
      Diag(Using->getLocation(), diag::err_using_decl_conflict);
      break;

    // We found a decl with the exact signature.
    case Ovl_Match:
      // If we're in a record, we want to hide the target, so we
      // return true (without a diagnostic) to tell the caller not to
      // build a shadow decl.
      if (CurContext->isRecord())
        return true;

      // If we're not in a record, this is an error.
      Diag(Using->getLocation(), diag::err_using_decl_conflict);
      break;
    }

    Diag(Target->getLocation(), diag::note_using_decl_target);
    Diag(OldDecl->getLocation(), diag::note_using_decl_conflict);
    Using->setInvalidDecl();
    return true;
  }

  // Target is not a function.

  if (isa<TagDecl>(Target)) {
    // No conflict between a tag and a non-tag.
    if (!Tag) return false;

    Diag(Using->getLocation(), diag::err_using_decl_conflict);
    Diag(Target->getLocation(), diag::note_using_decl_target);
    Diag(Tag->getLocation(), diag::note_using_decl_conflict);
    Using->setInvalidDecl();
    return true;
  }

  // No conflict between a tag and a non-tag.
  if (!NonTag) return false;

  Diag(Using->getLocation(), diag::err_using_decl_conflict);
  Diag(Target->getLocation(), diag::note_using_decl_target);
  Diag(NonTag->getLocation(), diag::note_using_decl_conflict);
  Using->setInvalidDecl();
  return true;
}

/// Determine whether a direct base class is a virtual base class.
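/// (Editor's note, not from the original source: callers are expected to pass
/// only a 'Base' that is a direct base of 'Derived'; the llvm_unreachable
/// below enforces that contract.)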
static bool isVirtualDirectBase(CXXRecordDecl *Derived, CXXRecordDecl *Base) {
  if (!Derived->getNumVBases())
    return false;

  for (auto &B : Derived->bases())
    if (B.getType()->getAsCXXRecordDecl() == Base)
      return B.isVirtual();

  llvm_unreachable("not a direct base class");
}

/// Builds a shadow declaration corresponding to a 'using' declaration.
UsingShadowDecl *Sema::BuildUsingShadowDecl(Scope *S, UsingDecl *UD,
                                            NamedDecl *Orig,
                                            UsingShadowDecl *PrevDecl) {
  // If we resolved to another shadow declaration, just coalesce them.
  NamedDecl *Target = Orig;
  if (isa<UsingShadowDecl>(Target)) {
    Target = cast<UsingShadowDecl>(Target)->getTargetDecl();
    assert(!isa<UsingShadowDecl>(Target) && "nested shadow declaration");
  }

  NamedDecl *NonTemplateTarget = Target;
  if (auto *TargetTD = dyn_cast<TemplateDecl>(Target))
    NonTemplateTarget = TargetTD->getTemplatedDecl();

  UsingShadowDecl *Shadow;
  if (isa<CXXConstructorDecl>(NonTemplateTarget)) {
    bool IsVirtualBase =
        isVirtualDirectBase(cast<CXXRecordDecl>(CurContext),
                            UD->getQualifier()->getAsRecordDecl());
    Shadow = ConstructorUsingShadowDecl::Create(
        Context, CurContext, UD->getLocation(), UD, Orig, IsVirtualBase);
  } else {
    Shadow = UsingShadowDecl::Create(Context, CurContext, UD->getLocation(),
                                     UD, Target);
  }
  UD->addShadowDecl(Shadow);

  Shadow->setAccess(UD->getAccess());
  if (Orig->isInvalidDecl() || UD->isInvalidDecl())
    Shadow->setInvalidDecl();

  Shadow->setPreviousDecl(PrevDecl);

  if (S)
    PushOnScopeChains(Shadow, S);
  else
    CurContext->addDecl(Shadow);

  return Shadow;
}

/// Hides a using shadow declaration.  This is required by the current
/// using-decl implementation when a resolvable using declaration in a
/// class is followed by a declaration which would hide or override
/// one or more of the using decl's targets; for example:
///
///   struct Base { void foo(int); };
///   struct Derived : Base {
///     using Base::foo;
///     void foo(int);
///   };
///
/// The governing language is C++03 [namespace.udecl]p12:
///
///   When a using-declaration brings names from a base class into a
///   derived class scope, member functions in the derived class
///   override and/or hide member functions with the same name and
///   parameter types in a base class (rather than conflicting).
///
/// There are two ways to implement this:
///   (1) optimistically create shadow decls when they're not hidden
///       by existing declarations, or
///   (2) don't create any shadow decls (or at least don't make them
///       visible) until we've fully parsed/instantiated the class.
/// The problem with (1) is that we might have to retroactively remove
/// a shadow decl, which requires several O(n) operations because the
/// decl structures are (very reasonably) not designed for removal.
/// (2) avoids this but is very fiddly and phase-dependent.
void Sema::HideUsingShadowDecl(Scope *S, UsingShadowDecl *Shadow) {
  if (Shadow->getDeclName().getNameKind() ==
        DeclarationName::CXXConversionFunctionName)
    cast<CXXRecordDecl>(Shadow->getDeclContext())->removeConversion(Shadow);

  // Remove it from the DeclContext...
  Shadow->getDeclContext()->removeDecl(Shadow);

  // ...and the scope, if applicable...
  if (S) {
    S->RemoveDecl(Shadow);
    IdResolver.RemoveDecl(Shadow);
  }

  // ...and the using decl.
  Shadow->getUsingDecl()->removeShadowDecl(Shadow);

  // TODO: complain somehow if Shadow was used.  It shouldn't
  // be possible for this to happen, because...?
}

/// Find the base specifier for a base class with the given type.
static CXXBaseSpecifier *findDirectBaseWithType(CXXRecordDecl *Derived,
                                                QualType DesiredBase,
                                                bool &AnyDependentBases) {
  // Check whether the named type is a direct base class.
  CanQualType CanonicalDesiredBase = DesiredBase->getCanonicalTypeUnqualified();
  for (auto &Base : Derived->bases()) {
    CanQualType BaseType = Base.getType()->getCanonicalTypeUnqualified();
    if (CanonicalDesiredBase == BaseType)
      return &Base;
    if (BaseType->isDependentType())
      AnyDependentBases = true;
  }
  return nullptr;
}

namespace {
class UsingValidatorCCC : public CorrectionCandidateCallback {
public:
  UsingValidatorCCC(bool HasTypenameKeyword, bool IsInstantiation,
                    NestedNameSpecifier *NNS, CXXRecordDecl *RequireMemberOf)
      : HasTypenameKeyword(HasTypenameKeyword),
        IsInstantiation(IsInstantiation), OldNNS(NNS),
        RequireMemberOf(RequireMemberOf) {}

  bool ValidateCandidate(const TypoCorrection &Candidate) override {
    NamedDecl *ND = Candidate.getCorrectionDecl();

    // Keywords are not valid here.
    if (!ND || isa<NamespaceDecl>(ND))
      return false;

    // Completely unqualified names are invalid for a 'using' declaration.
    if (Candidate.WillReplaceSpecifier() && !Candidate.getCorrectionSpecifier())
      return false;

    // FIXME: Don't correct to a name that CheckUsingDeclRedeclaration would
    // reject.

    if (RequireMemberOf) {
      auto *FoundRecord = dyn_cast<CXXRecordDecl>(ND);
      if (FoundRecord && FoundRecord->isInjectedClassName()) {
        // No-one ever wants a using-declaration to name an injected-class-name
        // of a base class, unless they're declaring an inheriting constructor.
        ASTContext &Ctx = ND->getASTContext();
        if (!Ctx.getLangOpts().CPlusPlus11)
          return false;
        QualType FoundType = Ctx.getRecordType(FoundRecord);

        // Check that the injected-class-name is named as a member of its own
        // type; we don't want to suggest 'using Derived::Base;', since that
        // means something else.
        NestedNameSpecifier *Specifier =
            Candidate.WillReplaceSpecifier()
                ? Candidate.getCorrectionSpecifier()
                : OldNNS;
        if (!Specifier->getAsType() ||
            !Ctx.hasSameType(QualType(Specifier->getAsType(), 0), FoundType))
          return false;

        // Check that this inheriting constructor declaration actually names a
        // direct base class of the current class.
        bool AnyDependentBases = false;
        if (!findDirectBaseWithType(RequireMemberOf,
                                    Ctx.getRecordType(FoundRecord),
                                    AnyDependentBases) &&
            !AnyDependentBases)
          return false;
      } else {
        auto *RD = dyn_cast<CXXRecordDecl>(ND->getDeclContext());
        if (!RD || RequireMemberOf->isProvablyNotDerivedFrom(RD))
          return false;

        // FIXME: Check that the base class member is accessible?
      }
    } else {
      auto *FoundRecord = dyn_cast<CXXRecordDecl>(ND);
      if (FoundRecord && FoundRecord->isInjectedClassName())
        return false;
    }

    if (isa<TypeDecl>(ND))
      return HasTypenameKeyword || !IsInstantiation;

    return !HasTypenameKeyword;
  }

private:
  bool HasTypenameKeyword;
  bool IsInstantiation;
  NestedNameSpecifier *OldNNS;
  CXXRecordDecl *RequireMemberOf;
};
} // end anonymous namespace

/// Builds a using declaration.
///
/// \param IsInstantiation - Whether this call arises from an
///   instantiation of an unresolved using declaration.  We treat
///   the lookup differently for these declarations.
NamedDecl *Sema::BuildUsingDeclaration(Scope *S, AccessSpecifier AS,
                                       SourceLocation UsingLoc,
                                       bool HasTypenameKeyword,
                                       SourceLocation TypenameLoc,
                                       CXXScopeSpec &SS,
                                       DeclarationNameInfo NameInfo,
                                       SourceLocation EllipsisLoc,
                                       AttributeList *AttrList,
                                       bool IsInstantiation) {
  assert(!SS.isInvalid() && "Invalid CXXScopeSpec.");
  SourceLocation IdentLoc = NameInfo.getLoc();
  assert(IdentLoc.isValid() && "Invalid TargetName location.");

  // FIXME: We ignore attributes for now.

  // For an inheriting constructor declaration, the name of the using
  // declaration is the name of a constructor in this class, not in the
  // base class.
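  // (Editor's illustration, not from the original source: for
  //   struct D : B { using B::B; };
  // the using declaration is treated as naming 'D::D', so the constructor
  // name below is built from the current class D rather than from B.)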
  DeclarationNameInfo UsingName = NameInfo;
  if (UsingName.getName().getNameKind() == DeclarationName::CXXConstructorName)
    if (auto *RD = dyn_cast<CXXRecordDecl>(CurContext))
      UsingName.setName(Context.DeclarationNames.getCXXConstructorName(
          Context.getCanonicalType(Context.getRecordType(RD))));

  // Do the redeclaration lookup in the current scope.
  LookupResult Previous(*this, UsingName, LookupUsingDeclName,
                        ForRedeclaration);
  Previous.setHideTags(false);
  if (S) {
    LookupName(Previous, S);

    // It is really dumb that we have to do this.
    LookupResult::Filter F = Previous.makeFilter();
    while (F.hasNext()) {
      NamedDecl *D = F.next();
      if (!isDeclInScope(D, CurContext, S))
        F.erase();
      // If we found a local extern declaration that's not ordinarily visible,
      // and this declaration is being added to a non-block scope, ignore it.
      // We're only checking for scope conflicts here, not also for violations
      // of the linkage rules.
      else if (!CurContext->isFunctionOrMethod() && D->isLocalExternDecl() &&
               !(D->getIdentifierNamespace() & Decl::IDNS_Ordinary))
        F.erase();
    }
    F.done();
  } else {
    assert(IsInstantiation && "no scope in non-instantiation");
    if (CurContext->isRecord())
      LookupQualifiedName(Previous, CurContext);
    else {
      // No redeclaration check is needed here; in non-member contexts we
      // diagnosed all possible conflicts with other using-declarations when
      // building the template:
      //
      // For a dependent non-type using declaration, the only valid case is
      // if we instantiate to a single enumerator. We check for conflicts
      // between shadow declarations we introduce, and we check in the template
      // definition for conflicts between a non-type using declaration and any
      // other declaration, which together covers all cases.
      //
      // A dependent typename using declaration will never successfully
      // instantiate, since it will always name a class member, so we reject
      // that in the template definition.
    }
  }

  // Check for invalid redeclarations.
  if (CheckUsingDeclRedeclaration(UsingLoc, HasTypenameKeyword, SS, IdentLoc,
                                  Previous))
    return nullptr;

  // Check for bad qualifiers.
  if (CheckUsingDeclQualifier(UsingLoc, HasTypenameKeyword, SS, NameInfo,
                              IdentLoc))
    return nullptr;

  DeclContext *LookupContext = computeDeclContext(SS);
  NamedDecl *D;
  NestedNameSpecifierLoc QualifierLoc = SS.getWithLocInContext(Context);
  if (!LookupContext || EllipsisLoc.isValid()) {
    if (HasTypenameKeyword) {
      // FIXME: not all declaration name kinds are legal here
      D = UnresolvedUsingTypenameDecl::Create(Context, CurContext,
                                              UsingLoc, TypenameLoc,
                                              QualifierLoc,
                                              IdentLoc, NameInfo.getName(),
                                              EllipsisLoc);
    } else {
      D = UnresolvedUsingValueDecl::Create(Context, CurContext, UsingLoc,
                                           QualifierLoc, NameInfo, EllipsisLoc);
    }
    D->setAccess(AS);
    CurContext->addDecl(D);
    return D;
  }

  auto Build = [&](bool Invalid) {
    UsingDecl *UD =
        UsingDecl::Create(Context, CurContext, UsingLoc, QualifierLoc,
                          UsingName, HasTypenameKeyword);
    UD->setAccess(AS);
    CurContext->addDecl(UD);
    UD->setInvalidDecl(Invalid);
    return UD;
  };
  auto BuildInvalid = [&]{ return Build(true); };
  auto BuildValid = [&]{ return Build(false); };

  if (RequireCompleteDeclContext(SS, LookupContext))
    return BuildInvalid();

  // Look up the target name.
  LookupResult R(*this, NameInfo, LookupOrdinaryName);

  // Unlike most lookups, we don't always want to hide tag
  // declarations: tag names are visible through the using declaration
  // even if hidden by ordinary names, *except* in a dependent context
  // where it's important for the sanity of two-phase lookup.
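  //
  // (Editor's illustration, not from the original source: given
  //   struct A { struct X {}; int X; };
  // a 'using A::X;' should make both the tag and the variable visible,
  // which is why tag hiding is disabled below outside of instantiations.)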
  if (!IsInstantiation)
    R.setHideTags(false);

  // For the purposes of this lookup, we have a base object type
  // equal to that of the current context.
  if (CurContext->isRecord()) {
    R.setBaseObjectType(
        Context.getTypeDeclType(cast<CXXRecordDecl>(CurContext)));
  }

  LookupQualifiedName(R, LookupContext);

  // Try to correct typos if possible. If constructor name lookup finds no
  // results, that means the named class has no explicit constructors, and we
  // suppressed declaring implicit ones (probably because it's dependent or
  // invalid).
  if (R.empty() &&
      NameInfo.getName().getNameKind() != DeclarationName::CXXConstructorName) {
    // HACK: Work around a bug in libstdc++'s detection of ::gets. Sometimes
    // it will believe that glibc provides a ::gets in cases where it does not,
    // and will try to pull it into namespace std with a using-declaration.
    // Just ignore the using-declaration in that case.
    auto *II = NameInfo.getName().getAsIdentifierInfo();
    if (getLangOpts().CPlusPlus14 && II && II->isStr("gets") &&
        CurContext->isStdNamespace() &&
        isa<TranslationUnitDecl>(LookupContext) &&
        getSourceManager().isInSystemHeader(UsingLoc))
      return nullptr;

    if (TypoCorrection Corrected = CorrectTypo(
            R.getLookupNameInfo(), R.getLookupKind(), S, &SS,
            llvm::make_unique<UsingValidatorCCC>(
                HasTypenameKeyword, IsInstantiation, SS.getScopeRep(),
                dyn_cast<CXXRecordDecl>(CurContext)),
            CTK_ErrorRecovery)) {
      // We reject any correction for which ND would be NULL.
      NamedDecl *ND = Corrected.getCorrectionDecl();

      // We reject candidates where DroppedSpecifier == true, hence the
      // literal '0' below.
      diagnoseTypo(Corrected, PDiag(diag::err_no_member_suggest)
                                  << NameInfo.getName() << LookupContext << 0
                                  << SS.getRange());

      // If we corrected to an inheriting constructor, handle it as one.
      auto *RD = dyn_cast<CXXRecordDecl>(ND);
      if (RD && RD->isInjectedClassName()) {
        // The parent of the injected class name is the class itself.
        RD = cast<CXXRecordDecl>(RD->getParent());

        // Fix up the information we'll use to build the using declaration.
        if (Corrected.WillReplaceSpecifier()) {
          NestedNameSpecifierLocBuilder Builder;
          Builder.MakeTrivial(Context, Corrected.getCorrectionSpecifier(),
                              QualifierLoc.getSourceRange());
          QualifierLoc = Builder.getWithLocInContext(Context);
        }

        // In this case, the name we introduce is the name of a derived class
        // constructor.
        auto *CurClass = cast<CXXRecordDecl>(CurContext);
        UsingName.setName(Context.DeclarationNames.getCXXConstructorName(
            Context.getCanonicalType(Context.getRecordType(CurClass))));
        UsingName.setNamedTypeInfo(nullptr);
        for (auto *Ctor : LookupConstructors(RD))
          R.addDecl(Ctor);
        R.resolveKind();
      } else {
        // FIXME: Pick up all the declarations if we found an overloaded
        // function.
        UsingName.setName(ND->getDeclName());
        R.addDecl(ND);
      }
    } else {
      Diag(IdentLoc, diag::err_no_member)
        << NameInfo.getName() << LookupContext << SS.getRange();
      return BuildInvalid();
    }
  }

  if (R.isAmbiguous())
    return BuildInvalid();

  if (HasTypenameKeyword) {
    // If we asked for a typename and got a non-type decl, error out.
    if (!R.getAsSingle<TypeDecl>()) {
      Diag(IdentLoc, diag::err_using_typename_non_type);
      for (LookupResult::iterator I = R.begin(), E = R.end(); I != E; ++I)
        Diag((*I)->getUnderlyingDecl()->getLocation(),
             diag::note_using_decl_target);
      return BuildInvalid();
    }
  } else {
    // If we asked for a non-typename and we got a type, error out,
    // but only if this is an instantiation of an unresolved using
    // decl.  Otherwise just silently find the type name.
    if (IsInstantiation && R.getAsSingle<TypeDecl>()) {
      Diag(IdentLoc, diag::err_using_dependent_value_is_type);
      Diag(R.getFoundDecl()->getLocation(), diag::note_using_decl_target);
      return BuildInvalid();
    }
  }

  // C++14 [namespace.udecl]p6:
  // A using-declaration shall not name a namespace.
  if (R.getAsSingle<NamespaceDecl>()) {
    Diag(IdentLoc, diag::err_using_decl_can_not_refer_to_namespace)
      << SS.getRange();
    return BuildInvalid();
  }

  // C++14 [namespace.udecl]p7:
  // A using-declaration shall not name a scoped enumerator.
  if (auto *ED = R.getAsSingle<EnumConstantDecl>()) {
    if (cast<EnumDecl>(ED->getDeclContext())->isScoped()) {
      Diag(IdentLoc, diag::err_using_decl_can_not_refer_to_scoped_enum)
        << SS.getRange();
      return BuildInvalid();
    }
  }

  UsingDecl *UD = BuildValid();

  // Some additional rules apply to inheriting constructors.
  if (UsingName.getName().getNameKind() ==
        DeclarationName::CXXConstructorName) {
    // Suppress access diagnostics; the access check is instead performed at
    // the point of use for an inheriting constructor.
    R.suppressDiagnostics();
    if (CheckInheritingConstructorUsingDecl(UD))
      return UD;
  }

  for (LookupResult::iterator I = R.begin(), E = R.end(); I != E; ++I) {
    UsingShadowDecl *PrevDecl = nullptr;
    if (!CheckUsingShadowDecl(UD, *I, Previous, PrevDecl))
      BuildUsingShadowDecl(S, UD, *I, PrevDecl);
  }

  return UD;
}

NamedDecl *Sema::BuildUsingPackDecl(NamedDecl *InstantiatedFrom,
                                    ArrayRef<NamedDecl *> Expansions) {
  assert(isa<UnresolvedUsingValueDecl>(InstantiatedFrom) ||
         isa<UnresolvedUsingTypenameDecl>(InstantiatedFrom) ||
         isa<UsingPackDecl>(InstantiatedFrom));

  auto *UPD =
      UsingPackDecl::Create(Context, CurContext, InstantiatedFrom, Expansions);
  UPD->setAccess(InstantiatedFrom->getAccess());
  CurContext->addDecl(UPD);
  return UPD;
}

/// Additional checks for a using declaration referring to a constructor name.
bool Sema::CheckInheritingConstructorUsingDecl(UsingDecl *UD) {
  assert(!UD->hasTypename() && "expecting a constructor name");

  const Type *SourceType = UD->getQualifier()->getAsType();
  assert(SourceType &&
         "Using decl naming constructor doesn't have type in scope spec.");
  CXXRecordDecl *TargetClass = cast<CXXRecordDecl>(CurContext);

  // Check whether the named type is a direct base class.
  bool AnyDependentBases = false;
  auto *Base = findDirectBaseWithType(TargetClass, QualType(SourceType, 0),
                                      AnyDependentBases);
  if (!Base && !AnyDependentBases) {
    Diag(UD->getUsingLoc(),
         diag::err_using_decl_constructor_not_in_direct_base)
      << UD->getNameInfo().getSourceRange()
      << QualType(SourceType, 0) << TargetClass;
    UD->setInvalidDecl();
    return true;
  }

  if (Base)
    Base->setInheritConstructors();

  return false;
}

/// Checks that the given using declaration is not an invalid
/// redeclaration.  Note that this is checking only for the using decl
/// itself, not for any ill-formedness among the UsingShadowDecls.
bool Sema::CheckUsingDeclRedeclaration(SourceLocation UsingLoc,
                                       bool HasTypenameKeyword,
                                       const CXXScopeSpec &SS,
                                       SourceLocation NameLoc,
                                       const LookupResult &Prev) {
  NestedNameSpecifier *Qual = SS.getScopeRep();

  // C++03 [namespace.udecl]p8:
  // C++0x [namespace.udecl]p10:
  //   A using-declaration is a declaration and can therefore be used
  //   repeatedly where (and only where) multiple declarations are
  //   allowed.
  //
  // That's in non-member contexts.
  if (!CurContext->getRedeclContext()->isRecord()) {
    // A dependent qualifier outside a class can only ever resolve to an
    // enumeration type. Therefore it conflicts with any other non-type
    // declaration in the same scope.
    // FIXME: How should we check for dependent type-type conflicts at block
    // scope?
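    // (Editor's illustration, not from the original source: in
    //   template <class T> void f() { int n; using T::n; }
    // 'T::n' could only instantiate to an enumerator, so it necessarily
    // conflicts with the variable 'n' and is diagnosed below.)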
    if (Qual->isDependent() && !HasTypenameKeyword) {
      for (auto *D : Prev) {
        if (!isa<TypeDecl>(D) && !isa<UsingDecl>(D) && !isa<UsingPackDecl>(D)) {
          bool OldCouldBeEnumerator =
              isa<UnresolvedUsingValueDecl>(D) || isa<EnumConstantDecl>(D);
          Diag(NameLoc,
               OldCouldBeEnumerator ? diag::err_redefinition
                                    : diag::err_redefinition_different_kind)
            << Prev.getLookupName();
          Diag(D->getLocation(), diag::note_previous_definition);
          return true;
        }
      }
    }

    return false;
  }

  for (LookupResult::iterator I = Prev.begin(), E = Prev.end(); I != E; ++I) {
    NamedDecl *D = *I;

    bool DTypename;
    NestedNameSpecifier *DQual;
    if (UsingDecl *UD = dyn_cast<UsingDecl>(D)) {
      DTypename = UD->hasTypename();
      DQual = UD->getQualifier();
    } else if (UnresolvedUsingValueDecl *UD =
                   dyn_cast<UnresolvedUsingValueDecl>(D)) {
      DTypename = false;
      DQual = UD->getQualifier();
    } else if (UnresolvedUsingTypenameDecl *UD =
                   dyn_cast<UnresolvedUsingTypenameDecl>(D)) {
      DTypename = true;
      DQual = UD->getQualifier();
    } else continue;

    // using decls differ if one says 'typename' and the other doesn't.
    // FIXME: non-dependent using decls?
    if (HasTypenameKeyword != DTypename) continue;

    // using decls differ if they name different scopes (but note that
    // template instantiation can cause this check to trigger when it
    // didn't before instantiation).
    if (Context.getCanonicalNestedNameSpecifier(Qual) !=
        Context.getCanonicalNestedNameSpecifier(DQual))
      continue;

    Diag(NameLoc, diag::err_using_decl_redeclaration) << SS.getRange();
    Diag(D->getLocation(), diag::note_using_decl) << 1;
    return true;
  }

  return false;
}

/// Checks that the given nested-name qualifier used in a using decl
/// in the current context is appropriately related to the current
/// scope.  If an error is found, diagnoses it and returns true.
bool Sema::CheckUsingDeclQualifier(SourceLocation UsingLoc, bool HasTypename,
                                   const CXXScopeSpec &SS,
                                   const DeclarationNameInfo &NameInfo,
                                   SourceLocation NameLoc) {
  DeclContext *NamedContext = computeDeclContext(SS);

  if (!CurContext->isRecord()) {
    // C++03 [namespace.udecl]p3:
    // C++0x [namespace.udecl]p8:
    //   A using-declaration for a class member shall be a member-declaration.

    // If we weren't able to compute a valid scope, it might validly be a
    // dependent class scope or a dependent enumeration unscoped scope. If
    // we have a 'typename' keyword, the scope must resolve to a class type.
    if ((HasTypename && !NamedContext) ||
        (NamedContext && NamedContext->getRedeclContext()->isRecord())) {
      auto *RD = NamedContext
                     ? cast<CXXRecordDecl>(NamedContext->getRedeclContext())
                     : nullptr;
      if (RD && RequireCompleteDeclContext(const_cast<CXXScopeSpec&>(SS), RD))
        RD = nullptr;

      Diag(NameLoc, diag::err_using_decl_can_not_refer_to_class_member)
        << SS.getRange();

      // If we have a complete, non-dependent source type, try to suggest a
      // way to get the same effect.
      if (!RD)
        return true;

      // Find what this using-declaration was referring to.
      LookupResult R(*this, NameInfo, LookupOrdinaryName);
      R.setHideTags(false);
      R.suppressDiagnostics();
      LookupQualifiedName(R, RD);

      if (R.getAsSingle<TypeDecl>()) {
        if (getLangOpts().CPlusPlus11) {
          // Convert 'using X::Y;' to 'using Y = X::Y;'.
          Diag(SS.getBeginLoc(), diag::note_using_decl_class_member_workaround)
            << 0 // alias declaration
            << FixItHint::CreateInsertion(SS.getBeginLoc(),
                                          NameInfo.getName().getAsString() +
                                              " = ");
        } else {
          // Convert 'using X::Y;' to 'typedef X::Y Y;'.
          SourceLocation InsertLoc = getLocForEndOfToken(NameInfo.getLocEnd());
          Diag(InsertLoc, diag::note_using_decl_class_member_workaround)
            << 1 // typedef declaration
            << FixItHint::CreateReplacement(UsingLoc, "typedef")
            << FixItHint::CreateInsertion(
                   InsertLoc, " " + NameInfo.getName().getAsString());
        }
      } else if (R.getAsSingle<VarDecl>()) {
        // Don't provide a fixit outside C++11 mode; we don't want to suggest
        // repeating the type of the static data member here.
        FixItHint FixIt;
        if (getLangOpts().CPlusPlus11) {
          // Convert 'using X::Y;' to 'auto &Y = X::Y;'.
          FixIt = FixItHint::CreateReplacement(
              UsingLoc, "auto &" + NameInfo.getName().getAsString() + " = ");
        }

        Diag(UsingLoc, diag::note_using_decl_class_member_workaround)
          << 2 // reference declaration
          << FixIt;
      } else if (R.getAsSingle<EnumConstantDecl>()) {
        // Don't provide a fixit outside C++11 mode; we don't want to suggest
        // repeating the type of the enumeration here, and we can't do so if
        // the type is anonymous.
        FixItHint FixIt;
        if (getLangOpts().CPlusPlus11) {
          // Convert 'using X::Y;' to 'constexpr auto Y = X::Y;'.
          FixIt = FixItHint::CreateReplacement(
              UsingLoc,
              "constexpr auto " + NameInfo.getName().getAsString() + " = ");
        }

        Diag(UsingLoc, diag::note_using_decl_class_member_workaround)
          << (getLangOpts().CPlusPlus11 ? 4 : 3) // const[expr] variable
          << FixIt;
      }

      return true;
    }

    // Otherwise, this might be valid.
    return false;
  }

  // The current scope is a record.

  // If the named context is dependent, we can't decide much.
  if (!NamedContext) {
    // FIXME: in C++0x, we can diagnose if we can prove that the
    // nested-name-specifier does not refer to a base class, which is
    // still possible in some cases.

    // Otherwise we have to conservatively report that things might be
    // okay.
    return false;
  }

  if (!NamedContext->isRecord()) {
    // Ideally this would point at the last name in the specifier,
    // but we don't have that level of source info.
    Diag(SS.getRange().getBegin(),
         diag::err_using_decl_nested_name_specifier_is_not_class)
      << SS.getScopeRep() << SS.getRange();
    return true;
  }

  if (!NamedContext->isDependentContext() &&
      RequireCompleteDeclContext(const_cast<CXXScopeSpec&>(SS), NamedContext))
    return true;

  if (getLangOpts().CPlusPlus11) {
    // C++11 [namespace.udecl]p3:
    //   In a using-declaration used as a member-declaration, the
    //   nested-name-specifier shall name a base class of the class
    //   being defined.

    if (cast<CXXRecordDecl>(CurContext)->isProvablyNotDerivedFrom(
            cast<CXXRecordDecl>(NamedContext))) {
      if (CurContext == NamedContext) {
        Diag(NameLoc,
             diag::err_using_decl_nested_name_specifier_is_current_class)
          << SS.getRange();
        return true;
      }

      if (!cast<CXXRecordDecl>(NamedContext)->isInvalidDecl()) {
        Diag(SS.getRange().getBegin(),
             diag::err_using_decl_nested_name_specifier_is_not_base_class)
          << SS.getScopeRep()
          << cast<CXXRecordDecl>(CurContext)
          << SS.getRange();
      }
      return true;
    }

    return false;
  }

  // C++03 [namespace.udecl]p4:
  //   A using-declaration used as a member-declaration shall refer
  //   to a member of a base class of the class being defined [etc.].
  //
  // Salient point: SS doesn't have to name a base class as long as
  // lookup only finds members from base classes.  Therefore we can
  // diagnose here only if we can prove that that can't happen,
  // i.e. if the class hierarchies provably don't intersect.
  //
  // TODO: it would be nice if "definitely valid" results were cached
  // in the UsingDecl and UsingShadowDecl so that these checks didn't
  // need to be repeated.

  llvm::SmallPtrSet<const CXXRecordDecl *, 16> Bases;
  auto Collect = [&Bases](const CXXRecordDecl *Base) {
    Bases.insert(Base);
    return true;
  };

  // Collect all bases. Return false if we find a dependent base.
if (!cast(CurContext)->forallBases(Collect)) return false; // Returns true if the base is dependent or is one of the accumulated base // classes. auto IsNotBase = [&Bases](const CXXRecordDecl *Base) { return !Bases.count(Base); }; // Return false if the class has a dependent base or if it or one // of its bases is present in the base set of the current context. if (Bases.count(cast(NamedContext)) || !cast(NamedContext)->forallBases(IsNotBase)) return false; Diag(SS.getRange().getBegin(), diag::err_using_decl_nested_name_specifier_is_not_base_class) << SS.getScopeRep() << cast(CurContext) << SS.getRange(); return true; } Decl *Sema::ActOnAliasDeclaration(Scope *S, AccessSpecifier AS, MultiTemplateParamsArg TemplateParamLists, SourceLocation UsingLoc, UnqualifiedId &Name, AttributeList *AttrList, TypeResult Type, Decl *DeclFromDeclSpec) { // Skip up to the relevant declaration scope. while (S->isTemplateParamScope()) S = S->getParent(); assert((S->getFlags() & Scope::DeclScope) && "got alias-declaration outside of declaration scope"); if (Type.isInvalid()) return nullptr; bool Invalid = false; DeclarationNameInfo NameInfo = GetNameFromUnqualifiedId(Name); TypeSourceInfo *TInfo = nullptr; GetTypeFromParser(Type.get(), &TInfo); if (DiagnoseClassNameShadow(CurContext, NameInfo)) return nullptr; if (DiagnoseUnexpandedParameterPack(Name.StartLocation, TInfo, UPPC_DeclarationType)) { Invalid = true; TInfo = Context.getTrivialTypeSourceInfo(Context.IntTy, TInfo->getTypeLoc().getBeginLoc()); } LookupResult Previous(*this, NameInfo, LookupOrdinaryName, ForRedeclaration); LookupName(Previous, S); // Warn about shadowing the name of a template parameter. if (Previous.isSingleResult() && Previous.getFoundDecl()->isTemplateParameter()) { DiagnoseTemplateParameterShadow(Name.StartLocation,Previous.getFoundDecl()); Previous.clear(); } assert(Name.Kind == UnqualifiedId::IK_Identifier && "name in alias declaration must be an identifier"); TypeAliasDecl *NewTD = TypeAliasDecl::Create(Context, CurContext, UsingLoc, Name.StartLocation, Name.Identifier, TInfo); NewTD->setAccess(AS); if (Invalid) NewTD->setInvalidDecl(); ProcessDeclAttributeList(S, NewTD, AttrList); CheckTypedefForVariablyModifiedType(S, NewTD); Invalid |= NewTD->isInvalidDecl(); bool Redeclaration = false; NamedDecl *NewND; if (TemplateParamLists.size()) { TypeAliasTemplateDecl *OldDecl = nullptr; TemplateParameterList *OldTemplateParams = nullptr; if (TemplateParamLists.size() != 1) { Diag(UsingLoc, diag::err_alias_template_extra_headers) << SourceRange(TemplateParamLists[1]->getTemplateLoc(), TemplateParamLists[TemplateParamLists.size()-1]->getRAngleLoc()); } TemplateParameterList *TemplateParams = TemplateParamLists[0]; // Check that we can declare a template here. if (CheckTemplateDeclScope(S, TemplateParams)) return nullptr; // Only consider previous declarations in the same scope. 
FilterLookupForScope(Previous, CurContext, S, /*ConsiderLinkage*/false, /*ExplicitInstantiationOrSpecialization*/false); if (!Previous.empty()) { Redeclaration = true; OldDecl = Previous.getAsSingle(); if (!OldDecl && !Invalid) { Diag(UsingLoc, diag::err_redefinition_different_kind) << Name.Identifier; NamedDecl *OldD = Previous.getRepresentativeDecl(); if (OldD->getLocation().isValid()) Diag(OldD->getLocation(), diag::note_previous_definition); Invalid = true; } if (!Invalid && OldDecl && !OldDecl->isInvalidDecl()) { if (TemplateParameterListsAreEqual(TemplateParams, OldDecl->getTemplateParameters(), /*Complain=*/true, TPL_TemplateMatch)) OldTemplateParams = OldDecl->getTemplateParameters(); else Invalid = true; TypeAliasDecl *OldTD = OldDecl->getTemplatedDecl(); if (!Invalid && !Context.hasSameType(OldTD->getUnderlyingType(), NewTD->getUnderlyingType())) { // FIXME: The C++0x standard does not clearly say this is ill-formed, // but we can't reasonably accept it. Diag(NewTD->getLocation(), diag::err_redefinition_different_typedef) << 2 << NewTD->getUnderlyingType() << OldTD->getUnderlyingType(); if (OldTD->getLocation().isValid()) Diag(OldTD->getLocation(), diag::note_previous_definition); Invalid = true; } } } // Merge any previous default template arguments into our parameters, // and check the parameter list. if (CheckTemplateParameterList(TemplateParams, OldTemplateParams, TPC_TypeAliasTemplate)) return nullptr; TypeAliasTemplateDecl *NewDecl = TypeAliasTemplateDecl::Create(Context, CurContext, UsingLoc, Name.Identifier, TemplateParams, NewTD); NewTD->setDescribedAliasTemplate(NewDecl); NewDecl->setAccess(AS); if (Invalid) NewDecl->setInvalidDecl(); else if (OldDecl) NewDecl->setPreviousDecl(OldDecl); NewND = NewDecl; } else { if (auto *TD = dyn_cast_or_null(DeclFromDeclSpec)) { setTagNameForLinkagePurposes(TD, NewTD); handleTagNumbering(TD, S); } ActOnTypedefNameDecl(S, CurContext, NewTD, Previous, Redeclaration); NewND = NewTD; } PushOnScopeChains(NewND, S); ActOnDocumentableDecl(NewND); return NewND; } Decl *Sema::ActOnNamespaceAliasDef(Scope *S, SourceLocation NamespaceLoc, SourceLocation AliasLoc, IdentifierInfo *Alias, CXXScopeSpec &SS, SourceLocation IdentLoc, IdentifierInfo *Ident) { // Lookup the namespace name. LookupResult R(*this, Ident, IdentLoc, LookupNamespaceName); LookupParsedName(R, S, &SS); if (R.isAmbiguous()) return nullptr; if (R.empty()) { if (!TryNamespaceTypoCorrection(*this, R, S, SS, IdentLoc, Ident)) { Diag(IdentLoc, diag::err_expected_namespace_name) << SS.getRange(); return nullptr; } } assert(!R.isAmbiguous() && !R.empty()); NamedDecl *ND = R.getRepresentativeDecl(); // Check if we have a previous declaration with the same name. LookupResult PrevR(*this, Alias, AliasLoc, LookupOrdinaryName, ForRedeclaration); LookupName(PrevR, S); // Check we're not shadowing a template parameter. if (PrevR.isSingleResult() && PrevR.getFoundDecl()->isTemplateParameter()) { DiagnoseTemplateParameterShadow(AliasLoc, PrevR.getFoundDecl()); PrevR.clear(); } // Filter out any other lookup result from an enclosing scope. FilterLookupForScope(PrevR, CurContext, S, /*ConsiderLinkage*/false, /*AllowInlineNamespace*/false); // Find the previous declaration and check that we can redeclare it. NamespaceAliasDecl *Prev = nullptr; if (PrevR.isSingleResult()) { NamedDecl *PrevDecl = PrevR.getRepresentativeDecl(); if (NamespaceAliasDecl *AD = dyn_cast(PrevDecl)) { // We already have an alias with the same name that points to the same // namespace; check that it matches. 
if (AD->getNamespace()->Equals(getNamespaceDecl(ND))) { Prev = AD; } else if (isVisible(PrevDecl)) { Diag(AliasLoc, diag::err_redefinition_different_namespace_alias) << Alias; Diag(AD->getLocation(), diag::note_previous_namespace_alias) << AD->getNamespace(); return nullptr; } } else if (isVisible(PrevDecl)) { unsigned DiagID = isa(PrevDecl->getUnderlyingDecl()) ? diag::err_redefinition : diag::err_redefinition_different_kind; Diag(AliasLoc, DiagID) << Alias; Diag(PrevDecl->getLocation(), diag::note_previous_definition); return nullptr; } } // The use of a nested name specifier may trigger deprecation warnings. DiagnoseUseOfDecl(ND, IdentLoc); NamespaceAliasDecl *AliasDecl = NamespaceAliasDecl::Create(Context, CurContext, NamespaceLoc, AliasLoc, Alias, SS.getWithLocInContext(Context), IdentLoc, ND); if (Prev) AliasDecl->setPreviousDecl(Prev); PushOnScopeChains(AliasDecl, S); return AliasDecl; } Sema::ImplicitExceptionSpecification Sema::ComputeDefaultedDefaultCtorExceptionSpec(SourceLocation Loc, CXXMethodDecl *MD) { CXXRecordDecl *ClassDecl = MD->getParent(); // C++ [except.spec]p14: // An implicitly declared special member function (Clause 12) shall have an // exception-specification. [...] ImplicitExceptionSpecification ExceptSpec(*this); if (ClassDecl->isInvalidDecl()) return ExceptSpec; // Direct base-class constructors. for (const auto &B : ClassDecl->bases()) { if (B.isVirtual()) // Handled below. continue; if (const RecordType *BaseType = B.getType()->getAs()) { CXXRecordDecl *BaseClassDecl = cast(BaseType->getDecl()); CXXConstructorDecl *Constructor = LookupDefaultConstructor(BaseClassDecl); // If this is a deleted function, add it anyway. This might be conformant // with the standard. This might not. I'm not sure. It might not matter. if (Constructor) ExceptSpec.CalledDecl(B.getLocStart(), Constructor); } } // Virtual base-class constructors. for (const auto &B : ClassDecl->vbases()) { if (const RecordType *BaseType = B.getType()->getAs()) { CXXRecordDecl *BaseClassDecl = cast(BaseType->getDecl()); CXXConstructorDecl *Constructor = LookupDefaultConstructor(BaseClassDecl); // If this is a deleted function, add it anyway. This might be conformant // with the standard. This might not. I'm not sure. It might not matter. if (Constructor) ExceptSpec.CalledDecl(B.getLocStart(), Constructor); } } // Field constructors. for (auto *F : ClassDecl->fields()) { if (F->hasInClassInitializer()) { Expr *E = F->getInClassInitializer(); if (!E) // FIXME: It's a little wasteful to build and throw away a // CXXDefaultInitExpr here. E = BuildCXXDefaultInitExpr(Loc, F).get(); if (E) ExceptSpec.CalledExpr(E); } else if (const RecordType *RecordTy = Context.getBaseElementType(F->getType())->getAs()) { CXXRecordDecl *FieldRecDecl = cast(RecordTy->getDecl()); CXXConstructorDecl *Constructor = LookupDefaultConstructor(FieldRecDecl); // If this is a deleted function, add it anyway. This might be conformant // with the standard. This might not. I'm not sure. It might not matter. // In particular, the problem is that this function never gets called. It // might just be ill-formed because this function attempts to refer to // a deleted function here. if (Constructor) ExceptSpec.CalledDecl(F->getLocation(), Constructor); } } return ExceptSpec; } Sema::ImplicitExceptionSpecification Sema::ComputeInheritingCtorExceptionSpec(SourceLocation Loc, CXXConstructorDecl *CD) { CXXRecordDecl *ClassDecl = CD->getParent(); // C++ [except.spec]p14: // An inheriting constructor [...] shall have an exception-specification. 
[...] ImplicitExceptionSpecification ExceptSpec(*this); if (ClassDecl->isInvalidDecl()) return ExceptSpec; auto Inherited = CD->getInheritedConstructor(); InheritedConstructorInfo ICI(*this, Loc, Inherited.getShadowDecl()); // Direct and virtual base-class constructors. for (bool VBase : {false, true}) { for (CXXBaseSpecifier &B : VBase ? ClassDecl->vbases() : ClassDecl->bases()) { // Don't visit direct vbases twice. if (B.isVirtual() != VBase) continue; CXXRecordDecl *BaseClass = B.getType()->getAsCXXRecordDecl(); if (!BaseClass) continue; CXXConstructorDecl *Constructor = ICI.findConstructorForBase(BaseClass, Inherited.getConstructor()) .first; if (!Constructor) Constructor = LookupDefaultConstructor(BaseClass); if (Constructor) ExceptSpec.CalledDecl(B.getLocStart(), Constructor); } } // Field constructors. for (const auto *F : ClassDecl->fields()) { if (F->hasInClassInitializer()) { if (Expr *E = F->getInClassInitializer()) ExceptSpec.CalledExpr(E); } else if (const RecordType *RecordTy = Context.getBaseElementType(F->getType())->getAs()) { CXXRecordDecl *FieldRecDecl = cast(RecordTy->getDecl()); CXXConstructorDecl *Constructor = LookupDefaultConstructor(FieldRecDecl); if (Constructor) ExceptSpec.CalledDecl(F->getLocation(), Constructor); } } return ExceptSpec; } namespace { /// RAII object to register a special member as being currently declared. struct DeclaringSpecialMember { Sema &S; Sema::SpecialMemberDecl D; Sema::ContextRAII SavedContext; bool WasAlreadyBeingDeclared; DeclaringSpecialMember(Sema &S, CXXRecordDecl *RD, Sema::CXXSpecialMember CSM) : S(S), D(RD, CSM), SavedContext(S, RD) { WasAlreadyBeingDeclared = !S.SpecialMembersBeingDeclared.insert(D).second; if (WasAlreadyBeingDeclared) // This almost never happens, but if it does, ensure that our cache // doesn't contain a stale result. S.SpecialMemberCache.clear(); // FIXME: Register a note to be produced if we encounter an error while // declaring the special member. } ~DeclaringSpecialMember() { if (!WasAlreadyBeingDeclared) S.SpecialMembersBeingDeclared.erase(D); } /// \brief Are we already trying to declare this special member? bool isAlreadyBeingDeclared() const { return WasAlreadyBeingDeclared; } }; } void Sema::CheckImplicitSpecialMemberDeclaration(Scope *S, FunctionDecl *FD) { // Look up any existing declarations, but don't trigger declaration of all // implicit special members with this name. DeclarationName Name = FD->getDeclName(); LookupResult R(*this, Name, SourceLocation(), LookupOrdinaryName, ForRedeclaration); for (auto *D : FD->getParent()->lookup(Name)) if (auto *Acceptable = R.getAcceptableDecl(D)) R.addDecl(Acceptable); R.resolveKind(); R.suppressDiagnostics(); CheckFunctionDeclaration(S, FD, R, /*IsExplicitSpecialization*/false); } CXXConstructorDecl *Sema::DeclareImplicitDefaultConstructor( CXXRecordDecl *ClassDecl) { // C++ [class.ctor]p5: // A default constructor for a class X is a constructor of class X // that can be called without an argument. If there is no // user-declared constructor for class X, a default constructor is // implicitly declared. An implicitly-declared default constructor // is an inline public member of its class. assert(ClassDecl->needsImplicitDefaultConstructor() && "Should not build implicit default constructor!"); DeclaringSpecialMember DSM(*this, ClassDecl, CXXDefaultConstructor); if (DSM.isAlreadyBeingDeclared()) return nullptr; bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl, CXXDefaultConstructor, false); // Create the actual constructor declaration. 
CanQualType ClassType = Context.getCanonicalType(Context.getTypeDeclType(ClassDecl)); SourceLocation ClassLoc = ClassDecl->getLocation(); DeclarationName Name = Context.DeclarationNames.getCXXConstructorName(ClassType); DeclarationNameInfo NameInfo(Name, ClassLoc); CXXConstructorDecl *DefaultCon = CXXConstructorDecl::Create( Context, ClassDecl, ClassLoc, NameInfo, /*Type*/QualType(), /*TInfo=*/nullptr, /*isExplicit=*/false, /*isInline=*/true, /*isImplicitlyDeclared=*/true, Constexpr); DefaultCon->setAccess(AS_public); DefaultCon->setDefaulted(); if (getLangOpts().CUDA) { inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXDefaultConstructor, DefaultCon, /* ConstRHS */ false, /* Diagnose */ false); } // Build an exception specification pointing back at this constructor. FunctionProtoType::ExtProtoInfo EPI = getImplicitMethodEPI(*this, DefaultCon); DefaultCon->setType(Context.getFunctionType(Context.VoidTy, None, EPI)); // We don't need to use SpecialMemberIsTrivial here; triviality for default // constructors is easy to compute. DefaultCon->setTrivial(ClassDecl->hasTrivialDefaultConstructor()); // Note that we have declared this constructor. ++ASTContext::NumImplicitDefaultConstructorsDeclared; Scope *S = getScopeForContext(ClassDecl); CheckImplicitSpecialMemberDeclaration(S, DefaultCon); if (ShouldDeleteSpecialMember(DefaultCon, CXXDefaultConstructor)) SetDeclDeleted(DefaultCon, ClassLoc); if (S) PushOnScopeChains(DefaultCon, S, false); ClassDecl->addDecl(DefaultCon); return DefaultCon; } void Sema::DefineImplicitDefaultConstructor(SourceLocation CurrentLocation, CXXConstructorDecl *Constructor) { assert((Constructor->isDefaulted() && Constructor->isDefaultConstructor() && !Constructor->doesThisDeclarationHaveABody() && !Constructor->isDeleted()) && "DefineImplicitDefaultConstructor - call it for implicit default ctor"); CXXRecordDecl *ClassDecl = Constructor->getParent(); assert(ClassDecl && "DefineImplicitDefaultConstructor - invalid constructor"); SynthesizedFunctionScope Scope(*this, Constructor); DiagnosticErrorTrap Trap(Diags); if (SetCtorInitializers(Constructor, /*AnyErrors=*/false) || Trap.hasErrorOccurred()) { Diag(CurrentLocation, diag::note_member_synthesized_at) << CXXDefaultConstructor << Context.getTagDeclType(ClassDecl); Constructor->setInvalidDecl(); return; } // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, Constructor->getType()->castAs()); SourceLocation Loc = Constructor->getLocEnd().isValid() ? Constructor->getLocEnd() : Constructor->getLocation(); Constructor->setBody(new (Context) CompoundStmt(Loc)); Constructor->markUsed(Context); MarkVTableUsed(CurrentLocation, ClassDecl); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(Constructor); } DiagnoseUninitializedFields(*this, Constructor); } void Sema::ActOnFinishDelayedMemberInitializers(Decl *D) { // Perform any delayed checks on exception specifications. CheckDelayedMemberExceptionSpecs(); } /// Find or create the fake constructor we synthesize to model constructing an /// object of a derived class via a constructor of a base class. CXXConstructorDecl * Sema::findInheritingConstructor(SourceLocation Loc, CXXConstructorDecl *BaseCtor, ConstructorUsingShadowDecl *Shadow) { CXXRecordDecl *Derived = Shadow->getParent(); SourceLocation UsingLoc = Shadow->getLocation(); // FIXME: Add a new kind of DeclarationName for an inherited constructor. 
// For now we use the name of the base class constructor as a member of the // derived class to indicate a (fake) inherited constructor name. DeclarationName Name = BaseCtor->getDeclName(); // Check to see if we already have a fake constructor for this inherited // constructor call. for (NamedDecl *Ctor : Derived->lookup(Name)) if (declaresSameEntity(cast(Ctor) ->getInheritedConstructor() .getConstructor(), BaseCtor)) return cast(Ctor); DeclarationNameInfo NameInfo(Name, UsingLoc); TypeSourceInfo *TInfo = Context.getTrivialTypeSourceInfo(BaseCtor->getType(), UsingLoc); FunctionProtoTypeLoc ProtoLoc = TInfo->getTypeLoc().IgnoreParens().castAs(); // Check the inherited constructor is valid and find the list of base classes // from which it was inherited. InheritedConstructorInfo ICI(*this, Loc, Shadow); bool Constexpr = BaseCtor->isConstexpr() && defaultedSpecialMemberIsConstexpr(*this, Derived, CXXDefaultConstructor, false, BaseCtor, &ICI); CXXConstructorDecl *DerivedCtor = CXXConstructorDecl::Create( Context, Derived, UsingLoc, NameInfo, TInfo->getType(), TInfo, BaseCtor->isExplicit(), /*Inline=*/true, /*ImplicitlyDeclared=*/true, Constexpr, InheritedConstructor(Shadow, BaseCtor)); if (Shadow->isInvalidDecl()) DerivedCtor->setInvalidDecl(); // Build an unevaluated exception specification for this fake constructor. const FunctionProtoType *FPT = TInfo->getType()->castAs(); FunctionProtoType::ExtProtoInfo EPI = FPT->getExtProtoInfo(); EPI.ExceptionSpec.Type = EST_Unevaluated; EPI.ExceptionSpec.SourceDecl = DerivedCtor; DerivedCtor->setType(Context.getFunctionType(FPT->getReturnType(), FPT->getParamTypes(), EPI)); // Build the parameter declarations. SmallVector ParamDecls; for (unsigned I = 0, N = FPT->getNumParams(); I != N; ++I) { TypeSourceInfo *TInfo = Context.getTrivialTypeSourceInfo(FPT->getParamType(I), UsingLoc); ParmVarDecl *PD = ParmVarDecl::Create( Context, DerivedCtor, UsingLoc, UsingLoc, /*IdentifierInfo=*/nullptr, FPT->getParamType(I), TInfo, SC_None, /*DefaultArg=*/nullptr); PD->setScopeInfo(0, I); PD->setImplicit(); // Ensure attributes are propagated onto parameters (this matters for // format, pass_object_size, ...). mergeDeclAttributes(PD, BaseCtor->getParamDecl(I)); ParamDecls.push_back(PD); ProtoLoc.setParam(I, PD); } // Set up the new constructor. 
assert(!BaseCtor->isDeleted() && "should not use deleted constructor"); DerivedCtor->setAccess(BaseCtor->getAccess()); DerivedCtor->setParams(ParamDecls); Derived->addDecl(DerivedCtor); if (ShouldDeleteSpecialMember(DerivedCtor, CXXDefaultConstructor, &ICI)) SetDeclDeleted(DerivedCtor, UsingLoc); return DerivedCtor; } void Sema::NoteDeletedInheritingConstructor(CXXConstructorDecl *Ctor) { InheritedConstructorInfo ICI(*this, Ctor->getLocation(), Ctor->getInheritedConstructor().getShadowDecl()); ShouldDeleteSpecialMember(Ctor, CXXDefaultConstructor, &ICI, /*Diagnose*/true); } void Sema::DefineInheritingConstructor(SourceLocation CurrentLocation, CXXConstructorDecl *Constructor) { CXXRecordDecl *ClassDecl = Constructor->getParent(); assert(Constructor->getInheritedConstructor() && !Constructor->doesThisDeclarationHaveABody() && !Constructor->isDeleted()); if (Constructor->isInvalidDecl()) return; ConstructorUsingShadowDecl *Shadow = Constructor->getInheritedConstructor().getShadowDecl(); CXXConstructorDecl *InheritedCtor = Constructor->getInheritedConstructor().getConstructor(); // [class.inhctor.init]p1: // initialization proceeds as if a defaulted default constructor is used to // initialize the D object and each base class subobject from which the // constructor was inherited InheritedConstructorInfo ICI(*this, CurrentLocation, Shadow); CXXRecordDecl *RD = Shadow->getParent(); SourceLocation InitLoc = Shadow->getLocation(); // Initializations are performed "as if by a defaulted default constructor", // so enter the appropriate scope. SynthesizedFunctionScope Scope(*this, Constructor); DiagnosticErrorTrap Trap(Diags); // Build explicit initializers for all base classes from which the // constructor was inherited. SmallVector Inits; for (bool VBase : {false, true}) { for (CXXBaseSpecifier &B : VBase ? RD->vbases() : RD->bases()) { if (B.isVirtual() != VBase) continue; auto *BaseRD = B.getType()->getAsCXXRecordDecl(); if (!BaseRD) continue; auto BaseCtor = ICI.findConstructorForBase(BaseRD, InheritedCtor); if (!BaseCtor.first) continue; MarkFunctionReferenced(CurrentLocation, BaseCtor.first); ExprResult Init = new (Context) CXXInheritedCtorInitExpr( InitLoc, B.getType(), BaseCtor.first, VBase, BaseCtor.second); auto *TInfo = Context.getTrivialTypeSourceInfo(B.getType(), InitLoc); Inits.push_back(new (Context) CXXCtorInitializer( Context, TInfo, VBase, InitLoc, Init.get(), InitLoc, SourceLocation())); } } // We now proceed as if for a defaulted default constructor, with the relevant // initializers replaced. bool HadError = SetCtorInitializers(Constructor, /*AnyErrors*/false, Inits); if (HadError || Trap.hasErrorOccurred()) { Diag(CurrentLocation, diag::note_inhctor_synthesized_at) << RD; Constructor->setInvalidDecl(); return; } // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, Constructor->getType()->castAs()); Constructor->setBody(new (Context) CompoundStmt(InitLoc)); Constructor->markUsed(Context); MarkVTableUsed(CurrentLocation, ClassDecl); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(Constructor); } DiagnoseUninitializedFields(*this, Constructor); } Sema::ImplicitExceptionSpecification Sema::ComputeDefaultedDtorExceptionSpec(CXXMethodDecl *MD) { CXXRecordDecl *ClassDecl = MD->getParent(); // C++ [except.spec]p14: // An implicitly declared special member function (Clause 12) shall have // an exception-specification. 
ImplicitExceptionSpecification ExceptSpec(*this); if (ClassDecl->isInvalidDecl()) return ExceptSpec; // Direct base-class destructors. for (const auto &B : ClassDecl->bases()) { if (B.isVirtual()) // Handled below. continue; if (const RecordType *BaseType = B.getType()->getAs()) ExceptSpec.CalledDecl(B.getLocStart(), LookupDestructor(cast(BaseType->getDecl()))); } // Virtual base-class destructors. for (const auto &B : ClassDecl->vbases()) { if (const RecordType *BaseType = B.getType()->getAs()) ExceptSpec.CalledDecl(B.getLocStart(), LookupDestructor(cast(BaseType->getDecl()))); } // Field destructors. for (const auto *F : ClassDecl->fields()) { if (const RecordType *RecordTy = Context.getBaseElementType(F->getType())->getAs()) ExceptSpec.CalledDecl(F->getLocation(), LookupDestructor(cast(RecordTy->getDecl()))); } return ExceptSpec; } CXXDestructorDecl *Sema::DeclareImplicitDestructor(CXXRecordDecl *ClassDecl) { // C++ [class.dtor]p2: // If a class has no user-declared destructor, a destructor is // declared implicitly. An implicitly-declared destructor is an // inline public member of its class. assert(ClassDecl->needsImplicitDestructor()); DeclaringSpecialMember DSM(*this, ClassDecl, CXXDestructor); if (DSM.isAlreadyBeingDeclared()) return nullptr; // Create the actual destructor declaration. CanQualType ClassType = Context.getCanonicalType(Context.getTypeDeclType(ClassDecl)); SourceLocation ClassLoc = ClassDecl->getLocation(); DeclarationName Name = Context.DeclarationNames.getCXXDestructorName(ClassType); DeclarationNameInfo NameInfo(Name, ClassLoc); CXXDestructorDecl *Destructor = CXXDestructorDecl::Create(Context, ClassDecl, ClassLoc, NameInfo, QualType(), nullptr, /*isInline=*/true, /*isImplicitlyDeclared=*/true); Destructor->setAccess(AS_public); Destructor->setDefaulted(); if (getLangOpts().CUDA) { inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXDestructor, Destructor, /* ConstRHS */ false, /* Diagnose */ false); } // Build an exception specification pointing back at this destructor. FunctionProtoType::ExtProtoInfo EPI = getImplicitMethodEPI(*this, Destructor); Destructor->setType(Context.getFunctionType(Context.VoidTy, None, EPI)); // We don't need to use SpecialMemberIsTrivial here; triviality for // destructors is easy to compute. Destructor->setTrivial(ClassDecl->hasTrivialDestructor()); // Note that we have declared this destructor. ++ASTContext::NumImplicitDestructorsDeclared; Scope *S = getScopeForContext(ClassDecl); CheckImplicitSpecialMemberDeclaration(S, Destructor); // We can't check whether an implicit destructor is deleted before we complete // the definition of the class, because its validity depends on the alignment // of the class. We'll check this from ActOnFields once the class is complete. if (ClassDecl->isCompleteDefinition() && ShouldDeleteSpecialMember(Destructor, CXXDestructor)) SetDeclDeleted(Destructor, ClassLoc); // Introduce this destructor into its scope. 
if (S) PushOnScopeChains(Destructor, S, false); ClassDecl->addDecl(Destructor); return Destructor; } void Sema::DefineImplicitDestructor(SourceLocation CurrentLocation, CXXDestructorDecl *Destructor) { assert((Destructor->isDefaulted() && !Destructor->doesThisDeclarationHaveABody() && !Destructor->isDeleted()) && "DefineImplicitDestructor - call it for implicit default dtor"); CXXRecordDecl *ClassDecl = Destructor->getParent(); assert(ClassDecl && "DefineImplicitDestructor - invalid destructor"); if (Destructor->isInvalidDecl()) return; SynthesizedFunctionScope Scope(*this, Destructor); DiagnosticErrorTrap Trap(Diags); MarkBaseAndMemberDestructorsReferenced(Destructor->getLocation(), Destructor->getParent()); if (CheckDestructor(Destructor) || Trap.hasErrorOccurred()) { Diag(CurrentLocation, diag::note_member_synthesized_at) << CXXDestructor << Context.getTagDeclType(ClassDecl); Destructor->setInvalidDecl(); return; } // The exception specification is needed because we are defining the // function. ResolveExceptionSpec(CurrentLocation, Destructor->getType()->castAs()); SourceLocation Loc = Destructor->getLocEnd().isValid() ? Destructor->getLocEnd() : Destructor->getLocation(); Destructor->setBody(new (Context) CompoundStmt(Loc)); Destructor->markUsed(Context); MarkVTableUsed(CurrentLocation, ClassDecl); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(Destructor); } } /// \brief Perform any semantic analysis which needs to be delayed until all /// pending class member declarations have been parsed. void Sema::ActOnFinishCXXMemberDecls() { // If the context is an invalid C++ class, just suppress these checks. if (CXXRecordDecl *Record = dyn_cast(CurContext)) { if (Record->isInvalidDecl()) { DelayedDefaultedMemberExceptionSpecs.clear(); DelayedExceptionSpecChecks.clear(); return; } checkForMultipleExportedDefaultConstructors(*this, Record); } } void Sema::ActOnFinishCXXNonNestedClass(Decl *D) { referenceDLLExportedClassMethods(); } void Sema::referenceDLLExportedClassMethods() { if (!DelayedDllExportClasses.empty()) { // Calling ReferenceDllExportedMethods might cause the current function to // be called again, so use a local copy of DelayedDllExportClasses. SmallVector WorkList; std::swap(DelayedDllExportClasses, WorkList); for (CXXRecordDecl *Class : WorkList) ReferenceDllExportedMethods(*this, Class); } } void Sema::AdjustDestructorExceptionSpec(CXXRecordDecl *ClassDecl, CXXDestructorDecl *Destructor) { assert(getLangOpts().CPlusPlus11 && "adjusting dtor exception specs was introduced in c++11"); // C++11 [class.dtor]p3: // A declaration of a destructor that does not have an exception- // specification is implicitly considered to have the same exception- // specification as an implicit declaration. const FunctionProtoType *DtorType = Destructor->getType()-> getAs(); if (DtorType->hasExceptionSpec()) return; // Replace the destructor's type, building off the existing one. Fortunately, // the only thing of interest in the destructor type is its extended info. // The return and arguments are fixed. FunctionProtoType::ExtProtoInfo EPI = DtorType->getExtProtoInfo(); EPI.ExceptionSpec.Type = EST_Unevaluated; EPI.ExceptionSpec.SourceDecl = Destructor; Destructor->setType(Context.getFunctionType(Context.VoidTy, None, EPI)); // FIXME: If the destructor has a body that could throw, and the newly created // spec doesn't allow exceptions, we should emit a warning, because this // change in behavior can break conforming C++03 programs at runtime. 
// However, we don't have a body or an exception specification yet, so it // needs to be done somewhere else. } namespace { /// \brief An abstract base class for all helper classes used in building the // copy/move operators. These classes serve as factory functions and help us // avoid using the same Expr* in the AST twice. class ExprBuilder { ExprBuilder(const ExprBuilder&) = delete; ExprBuilder &operator=(const ExprBuilder&) = delete; protected: static Expr *assertNotNull(Expr *E) { assert(E && "Expression construction must not fail."); return E; } public: ExprBuilder() {} virtual ~ExprBuilder() {} virtual Expr *build(Sema &S, SourceLocation Loc) const = 0; }; class RefBuilder: public ExprBuilder { VarDecl *Var; QualType VarType; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.BuildDeclRefExpr(Var, VarType, VK_LValue, Loc).get()); } RefBuilder(VarDecl *Var, QualType VarType) : Var(Var), VarType(VarType) {} }; class ThisBuilder: public ExprBuilder { public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.ActOnCXXThis(Loc).getAs()); } }; class CastBuilder: public ExprBuilder { const ExprBuilder &Builder; QualType Type; ExprValueKind Kind; const CXXCastPath &Path; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.ImpCastExprToType(Builder.build(S, Loc), Type, CK_UncheckedDerivedToBase, Kind, &Path).get()); } CastBuilder(const ExprBuilder &Builder, QualType Type, ExprValueKind Kind, const CXXCastPath &Path) : Builder(Builder), Type(Type), Kind(Kind), Path(Path) {} }; class DerefBuilder: public ExprBuilder { const ExprBuilder &Builder; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull( S.CreateBuiltinUnaryOp(Loc, UO_Deref, Builder.build(S, Loc)).get()); } DerefBuilder(const ExprBuilder &Builder) : Builder(Builder) {} }; class MemberBuilder: public ExprBuilder { const ExprBuilder &Builder; QualType Type; CXXScopeSpec SS; bool IsArrow; LookupResult &MemberLookup; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.BuildMemberReferenceExpr( Builder.build(S, Loc), Type, Loc, IsArrow, SS, SourceLocation(), nullptr, MemberLookup, nullptr, nullptr).get()); } MemberBuilder(const ExprBuilder &Builder, QualType Type, bool IsArrow, LookupResult &MemberLookup) : Builder(Builder), Type(Type), IsArrow(IsArrow), MemberLookup(MemberLookup) {} }; class MoveCastBuilder: public ExprBuilder { const ExprBuilder &Builder; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(CastForMoving(S, Builder.build(S, Loc))); } MoveCastBuilder(const ExprBuilder &Builder) : Builder(Builder) {} }; class LvalueConvBuilder: public ExprBuilder { const ExprBuilder &Builder; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull( S.DefaultLvalueConversion(Builder.build(S, Loc)).get()); } LvalueConvBuilder(const ExprBuilder &Builder) : Builder(Builder) {} }; class SubscriptBuilder: public ExprBuilder { const ExprBuilder &Base; const ExprBuilder &Index; public: Expr *build(Sema &S, SourceLocation Loc) const override { return assertNotNull(S.CreateBuiltinArraySubscriptExpr( Base.build(S, Loc), Loc, Index.build(S, Loc), Loc).get()); } SubscriptBuilder(const ExprBuilder &Base, const ExprBuilder &Index) : Base(Base), Index(Index) {} }; } // end anonymous namespace /// When generating a defaulted copy or move assignment operator, if a field /// should be copied with __builtin_memcpy rather 
than via explicit assignments, /// do so. This optimization only applies for arrays of scalars, and for arrays /// of class type where the selected copy/move-assignment operator is trivial. static StmtResult buildMemcpyForAssignmentOp(Sema &S, SourceLocation Loc, QualType T, const ExprBuilder &ToB, const ExprBuilder &FromB) { // Compute the size of the memory buffer to be copied. QualType SizeType = S.Context.getSizeType(); llvm::APInt Size(S.Context.getTypeSize(SizeType), S.Context.getTypeSizeInChars(T).getQuantity()); // Take the address of the field references for "from" and "to". We // directly construct UnaryOperators here because semantic analysis // does not permit us to take the address of an xvalue. Expr *From = FromB.build(S, Loc); From = new (S.Context) UnaryOperator(From, UO_AddrOf, S.Context.getPointerType(From->getType()), VK_RValue, OK_Ordinary, Loc); Expr *To = ToB.build(S, Loc); To = new (S.Context) UnaryOperator(To, UO_AddrOf, S.Context.getPointerType(To->getType()), VK_RValue, OK_Ordinary, Loc); const Type *E = T->getBaseElementTypeUnsafe(); bool NeedsCollectableMemCpy = E->isRecordType() && E->getAs()->getDecl()->hasObjectMember(); // Create a reference to the __builtin_objc_memmove_collectable function StringRef MemCpyName = NeedsCollectableMemCpy ? "__builtin_objc_memmove_collectable" : "__builtin_memcpy"; LookupResult R(S, &S.Context.Idents.get(MemCpyName), Loc, Sema::LookupOrdinaryName); S.LookupName(R, S.TUScope, true); FunctionDecl *MemCpy = R.getAsSingle(); if (!MemCpy) // Something went horribly wrong earlier, and we will have complained // about it. return StmtError(); ExprResult MemCpyRef = S.BuildDeclRefExpr(MemCpy, S.Context.BuiltinFnTy, VK_RValue, Loc, nullptr); assert(MemCpyRef.isUsable() && "Builtin reference cannot fail"); Expr *CallArgs[] = { To, From, IntegerLiteral::Create(S.Context, Size, SizeType, Loc) }; ExprResult Call = S.ActOnCallExpr(/*Scope=*/nullptr, MemCpyRef.get(), Loc, CallArgs, Loc); assert(!Call.isInvalid() && "Call to __builtin_memcpy cannot fail!"); return Call.getAs(); } /// \brief Builds a statement that copies/moves the given entity from \p From to /// \c To. /// /// This routine is used to copy/move the members of a class with an /// implicitly-declared copy/move assignment operator. When the entities being /// copied are arrays, this routine builds for loops to copy them. /// /// \param S The Sema object used for type-checking. /// /// \param Loc The location where the implicit copy/move is being generated. /// /// \param T The type of the expressions being copied/moved. Both expressions /// must have this type. /// /// \param To The expression we are copying/moving to. /// /// \param From The expression we are copying/moving from. /// /// \param CopyingBaseSubobject Whether we're copying/moving a base subobject. /// Otherwise, it's a non-static member subobject. /// /// \param Copying Whether we're copying or moving. /// /// \param Depth Internal parameter recording the depth of the recursion. /// /// \returns A statement or a loop that copies the expressions, or StmtResult(0) /// if a memcpy should be used instead. 
static StmtResult buildSingleCopyAssignRecursively(Sema &S, SourceLocation Loc, QualType T, const ExprBuilder &To, const ExprBuilder &From, bool CopyingBaseSubobject, bool Copying, unsigned Depth = 0) { // C++11 [class.copy]p28: // Each subobject is assigned in the manner appropriate to its type: // // - if the subobject is of class type, as if by a call to operator= with // the subobject as the object expression and the corresponding // subobject of x as a single function argument (as if by explicit // qualification; that is, ignoring any possible virtual overriding // functions in more derived classes); // // C++03 [class.copy]p13: // - if the subobject is of class type, the copy assignment operator for // the class is used (as if by explicit qualification; that is, // ignoring any possible virtual overriding functions in more derived // classes); if (const RecordType *RecordTy = T->getAs()) { CXXRecordDecl *ClassDecl = cast(RecordTy->getDecl()); // Look for operator=. DeclarationName Name = S.Context.DeclarationNames.getCXXOperatorName(OO_Equal); LookupResult OpLookup(S, Name, Loc, Sema::LookupOrdinaryName); S.LookupQualifiedName(OpLookup, ClassDecl, false); // Prior to C++11, filter out any result that isn't a copy/move-assignment // operator. if (!S.getLangOpts().CPlusPlus11) { LookupResult::Filter F = OpLookup.makeFilter(); while (F.hasNext()) { NamedDecl *D = F.next(); if (CXXMethodDecl *Method = dyn_cast(D)) if (Method->isCopyAssignmentOperator() || (!Copying && Method->isMoveAssignmentOperator())) continue; F.erase(); } F.done(); } // Suppress the protected check (C++ [class.protected]) for each of the // assignment operators we found. This strange dance is required when // we're assigning via a base classes's copy-assignment operator. To // ensure that we're getting the right base class subobject (without // ambiguities), we need to cast "this" to that subobject type; to // ensure that we don't go through the virtual call mechanism, we need // to qualify the operator= name with the base class (see below). However, // this means that if the base class has a protected copy assignment // operator, the protected member access check will fail. So, we // rewrite "protected" access to "public" access in this case, since we // know by construction that we're calling from a derived class. if (CopyingBaseSubobject) { for (LookupResult::iterator L = OpLookup.begin(), LEnd = OpLookup.end(); L != LEnd; ++L) { if (L.getAccess() == AS_protected) L.setAccess(AS_public); } } // Create the nested-name-specifier that will be used to qualify the // reference to operator=; this is required to suppress the virtual // call mechanism. CXXScopeSpec SS; const Type *CanonicalT = S.Context.getCanonicalType(T.getTypePtr()); SS.MakeTrivial(S.Context, NestedNameSpecifier::Create(S.Context, nullptr, false, CanonicalT), Loc); // Create the reference to operator=. ExprResult OpEqualRef = S.BuildMemberReferenceExpr(To.build(S, Loc), T, Loc, /*isArrow=*/false, SS, /*TemplateKWLoc=*/SourceLocation(), /*FirstQualifierInScope=*/nullptr, OpLookup, /*TemplateArgs=*/nullptr, /*S*/nullptr, /*SuppressQualifierCheck=*/true); if (OpEqualRef.isInvalid()) return StmtError(); // Build the call to the assignment operator. Expr *FromInst = From.build(S, Loc); ExprResult Call = S.BuildCallToMemberFunction(/*Scope=*/nullptr, OpEqualRef.getAs(), Loc, FromInst, Loc); if (Call.isInvalid()) return StmtError(); // If we built a call to a trivial 'operator=' while copying an array, // bail out. 
We'll replace the whole shebang with a memcpy. CXXMemberCallExpr *CE = dyn_cast(Call.get()); if (CE && CE->getMethodDecl()->isTrivial() && Depth) return StmtResult((Stmt*)nullptr); // Convert to an expression-statement, and clean up any produced // temporaries. return S.ActOnExprStmt(Call); } // - if the subobject is of scalar type, the built-in assignment // operator is used. const ConstantArrayType *ArrayTy = S.Context.getAsConstantArrayType(T); if (!ArrayTy) { ExprResult Assignment = S.CreateBuiltinBinOp( Loc, BO_Assign, To.build(S, Loc), From.build(S, Loc)); if (Assignment.isInvalid()) return StmtError(); return S.ActOnExprStmt(Assignment); } // - if the subobject is an array, each element is assigned, in the // manner appropriate to the element type; // Construct a loop over the array bounds, e.g., // // for (__SIZE_TYPE__ i0 = 0; i0 != array-size; ++i0) // // that will copy each of the array elements. QualType SizeType = S.Context.getSizeType(); // Create the iteration variable. IdentifierInfo *IterationVarName = nullptr; { SmallString<8> Str; llvm::raw_svector_ostream OS(Str); OS << "__i" << Depth; IterationVarName = &S.Context.Idents.get(OS.str()); } VarDecl *IterationVar = VarDecl::Create(S.Context, S.CurContext, Loc, Loc, IterationVarName, SizeType, S.Context.getTrivialTypeSourceInfo(SizeType, Loc), SC_None); // Initialize the iteration variable to zero. llvm::APInt Zero(S.Context.getTypeSize(SizeType), 0); IterationVar->setInit(IntegerLiteral::Create(S.Context, Zero, SizeType, Loc)); // Creates a reference to the iteration variable. RefBuilder IterationVarRef(IterationVar, SizeType); LvalueConvBuilder IterationVarRefRVal(IterationVarRef); // Create the DeclStmt that holds the iteration variable. Stmt *InitStmt = new (S.Context) DeclStmt(DeclGroupRef(IterationVar),Loc,Loc); // Subscript the "from" and "to" expressions with the iteration variable. SubscriptBuilder FromIndexCopy(From, IterationVarRefRVal); MoveCastBuilder FromIndexMove(FromIndexCopy); const ExprBuilder *FromIndex; if (Copying) FromIndex = &FromIndexCopy; else FromIndex = &FromIndexMove; SubscriptBuilder ToIndex(To, IterationVarRefRVal); // Build the copy/move for an individual element of the array. StmtResult Copy = buildSingleCopyAssignRecursively(S, Loc, ArrayTy->getElementType(), ToIndex, *FromIndex, CopyingBaseSubobject, Copying, Depth + 1); // Bail out if copying fails or if we determined that we should use memcpy. if (Copy.isInvalid() || !Copy.get()) return Copy; // Create the comparison against the array bound. llvm::APInt Upper = ArrayTy->getSize().zextOrTrunc(S.Context.getTypeSize(SizeType)); Expr *Comparison = new (S.Context) BinaryOperator(IterationVarRefRVal.build(S, Loc), IntegerLiteral::Create(S.Context, Upper, SizeType, Loc), BO_NE, S.Context.BoolTy, VK_RValue, OK_Ordinary, Loc, false); // Create the pre-increment of the iteration variable. Expr *Increment = new (S.Context) UnaryOperator(IterationVarRef.build(S, Loc), UO_PreInc, SizeType, VK_LValue, OK_Ordinary, Loc); // Construct the loop that copies all elements of this array. return S.ActOnForStmt( Loc, Loc, InitStmt, S.ActOnCondition(nullptr, Loc, Comparison, Sema::ConditionKind::Boolean), S.MakeFullDiscardedValueExpr(Increment), Loc, Copy.get()); } static StmtResult buildSingleCopyAssign(Sema &S, SourceLocation Loc, QualType T, const ExprBuilder &To, const ExprBuilder &From, bool CopyingBaseSubobject, bool Copying) { // Maybe we should use a memcpy? 
if (T->isArrayType() && !T.isConstQualified() && !T.isVolatileQualified() && T.isTriviallyCopyableType(S.Context)) return buildMemcpyForAssignmentOp(S, Loc, T, To, From); StmtResult Result(buildSingleCopyAssignRecursively(S, Loc, T, To, From, CopyingBaseSubobject, Copying, 0)); // If we ended up picking a trivial assignment operator for an array of a // non-trivially-copyable class type, just emit a memcpy. if (!Result.isInvalid() && !Result.get()) return buildMemcpyForAssignmentOp(S, Loc, T, To, From); return Result; } Sema::ImplicitExceptionSpecification Sema::ComputeDefaultedCopyAssignmentExceptionSpec(CXXMethodDecl *MD) { CXXRecordDecl *ClassDecl = MD->getParent(); ImplicitExceptionSpecification ExceptSpec(*this); if (ClassDecl->isInvalidDecl()) return ExceptSpec; const FunctionProtoType *T = MD->getType()->castAs(); assert(T->getNumParams() == 1 && "not a copy assignment op"); unsigned ArgQuals = T->getParamType(0).getNonReferenceType().getCVRQualifiers(); // C++ [except.spec]p14: // An implicitly declared special member function (Clause 12) shall have an // exception-specification. [...] // It is unspecified whether or not an implicit copy assignment operator // attempts to deduplicate calls to assignment operators of virtual bases are // made. As such, this exception specification is effectively unspecified. // Based on a similar decision made for constness in C++0x, we're erring on // the side of assuming such calls to be made regardless of whether they // actually happen. for (const auto &Base : ClassDecl->bases()) { if (Base.isVirtual()) continue; CXXRecordDecl *BaseClassDecl = cast(Base.getType()->getAs()->getDecl()); if (CXXMethodDecl *CopyAssign = LookupCopyingAssignment(BaseClassDecl, ArgQuals, false, 0)) ExceptSpec.CalledDecl(Base.getLocStart(), CopyAssign); } for (const auto &Base : ClassDecl->vbases()) { CXXRecordDecl *BaseClassDecl = cast(Base.getType()->getAs()->getDecl()); if (CXXMethodDecl *CopyAssign = LookupCopyingAssignment(BaseClassDecl, ArgQuals, false, 0)) ExceptSpec.CalledDecl(Base.getLocStart(), CopyAssign); } for (const auto *Field : ClassDecl->fields()) { QualType FieldType = Context.getBaseElementType(Field->getType()); if (CXXRecordDecl *FieldClassDecl = FieldType->getAsCXXRecordDecl()) { if (CXXMethodDecl *CopyAssign = LookupCopyingAssignment(FieldClassDecl, ArgQuals | FieldType.getCVRQualifiers(), false, 0)) ExceptSpec.CalledDecl(Field->getLocation(), CopyAssign); } } return ExceptSpec; } CXXMethodDecl *Sema::DeclareImplicitCopyAssignment(CXXRecordDecl *ClassDecl) { // Note: The following rules are largely analoguous to the copy // constructor rules. Note that virtual bases are not taken into account // for determining the argument type of the operator. Note also that // operators taking an object instead of a reference are allowed. assert(ClassDecl->needsImplicitCopyAssignment()); DeclaringSpecialMember DSM(*this, ClassDecl, CXXCopyAssignment); if (DSM.isAlreadyBeingDeclared()) return nullptr; QualType ArgType = Context.getTypeDeclType(ClassDecl); QualType RetType = Context.getLValueReferenceType(ArgType); bool Const = ClassDecl->implicitCopyAssignmentHasConstParam(); if (Const) ArgType = ArgType.withConst(); ArgType = Context.getLValueReferenceType(ArgType); bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl, CXXCopyAssignment, Const); // An implicitly-declared copy assignment operator is an inline public // member of its class. 
DeclarationName Name = Context.DeclarationNames.getCXXOperatorName(OO_Equal); SourceLocation ClassLoc = ClassDecl->getLocation(); DeclarationNameInfo NameInfo(Name, ClassLoc); CXXMethodDecl *CopyAssignment = CXXMethodDecl::Create(Context, ClassDecl, ClassLoc, NameInfo, QualType(), /*TInfo=*/nullptr, /*StorageClass=*/SC_None, /*isInline=*/true, Constexpr, SourceLocation()); CopyAssignment->setAccess(AS_public); CopyAssignment->setDefaulted(); CopyAssignment->setImplicit(); if (getLangOpts().CUDA) { inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXCopyAssignment, CopyAssignment, /* ConstRHS */ Const, /* Diagnose */ false); } // Build an exception specification pointing back at this member. FunctionProtoType::ExtProtoInfo EPI = getImplicitMethodEPI(*this, CopyAssignment); CopyAssignment->setType(Context.getFunctionType(RetType, ArgType, EPI)); // Add the parameter to the operator. ParmVarDecl *FromParam = ParmVarDecl::Create(Context, CopyAssignment, ClassLoc, ClassLoc, /*Id=*/nullptr, ArgType, /*TInfo=*/nullptr, SC_None, nullptr); CopyAssignment->setParams(FromParam); CopyAssignment->setTrivial( ClassDecl->needsOverloadResolutionForCopyAssignment() ? SpecialMemberIsTrivial(CopyAssignment, CXXCopyAssignment) : ClassDecl->hasTrivialCopyAssignment()); // Note that we have added this copy-assignment operator. ++ASTContext::NumImplicitCopyAssignmentOperatorsDeclared; Scope *S = getScopeForContext(ClassDecl); CheckImplicitSpecialMemberDeclaration(S, CopyAssignment); if (ShouldDeleteSpecialMember(CopyAssignment, CXXCopyAssignment)) SetDeclDeleted(CopyAssignment, ClassLoc); if (S) PushOnScopeChains(CopyAssignment, S, false); ClassDecl->addDecl(CopyAssignment); return CopyAssignment; } /// Diagnose an implicit copy operation for a class which is odr-used, but /// which is deprecated because the class has a user-declared copy constructor, /// copy assignment operator, or destructor. static void diagnoseDeprecatedCopyOperation(Sema &S, CXXMethodDecl *CopyOp, SourceLocation UseLoc) { assert(CopyOp->isImplicit()); CXXRecordDecl *RD = CopyOp->getParent(); CXXMethodDecl *UserDeclaredOperation = nullptr; // In Microsoft mode, assignment operations don't affect constructors and // vice versa. if (RD->hasUserDeclaredDestructor()) { UserDeclaredOperation = RD->getDestructor(); } else if (!isa(CopyOp) && RD->hasUserDeclaredCopyConstructor() && !S.getLangOpts().MSVCCompat) { // Find any user-declared copy constructor. for (auto *I : RD->ctors()) { if (I->isCopyConstructor()) { UserDeclaredOperation = I; break; } } assert(UserDeclaredOperation); } else if (isa(CopyOp) && RD->hasUserDeclaredCopyAssignment() && !S.getLangOpts().MSVCCompat) { // Find any user-declared move assignment operator. for (auto *I : RD->methods()) { if (I->isCopyAssignmentOperator()) { UserDeclaredOperation = I; break; } } assert(UserDeclaredOperation); } if (UserDeclaredOperation) { S.Diag(UserDeclaredOperation->getLocation(), diag::warn_deprecated_copy_operation) << RD << /*copy assignment*/!isa(CopyOp) << /*destructor*/isa(UserDeclaredOperation); S.Diag(UseLoc, diag::note_member_synthesized_at) << (isa(CopyOp) ? 
Sema::CXXCopyConstructor : Sema::CXXCopyAssignment) << RD; } } void Sema::DefineImplicitCopyAssignment(SourceLocation CurrentLocation, CXXMethodDecl *CopyAssignOperator) { assert((CopyAssignOperator->isDefaulted() && CopyAssignOperator->isOverloadedOperator() && CopyAssignOperator->getOverloadedOperator() == OO_Equal && !CopyAssignOperator->doesThisDeclarationHaveABody() && !CopyAssignOperator->isDeleted()) && "DefineImplicitCopyAssignment called for wrong function"); CXXRecordDecl *ClassDecl = CopyAssignOperator->getParent(); if (ClassDecl->isInvalidDecl() || CopyAssignOperator->isInvalidDecl()) { CopyAssignOperator->setInvalidDecl(); return; } // C++11 [class.copy]p18: // The [definition of an implicitly declared copy assignment operator] is // deprecated if the class has a user-declared copy constructor or a // user-declared destructor. if (getLangOpts().CPlusPlus11 && CopyAssignOperator->isImplicit()) diagnoseDeprecatedCopyOperation(*this, CopyAssignOperator, CurrentLocation); CopyAssignOperator->markUsed(Context); SynthesizedFunctionScope Scope(*this, CopyAssignOperator); DiagnosticErrorTrap Trap(Diags); // C++0x [class.copy]p30: // The implicitly-defined or explicitly-defaulted copy assignment operator // for a non-union class X performs memberwise copy assignment of its // subobjects. The direct base classes of X are assigned first, in the // order of their declaration in the base-specifier-list, and then the // immediate non-static data members of X are assigned, in the order in // which they were declared in the class definition. // The statements that form the synthesized function body. SmallVector Statements; // The parameter for the "other" object, which we are copying from. ParmVarDecl *Other = CopyAssignOperator->getParamDecl(0); Qualifiers OtherQuals = Other->getType().getQualifiers(); QualType OtherRefType = Other->getType(); if (const LValueReferenceType *OtherRef = OtherRefType->getAs()) { OtherRefType = OtherRef->getPointeeType(); OtherQuals = OtherRefType.getQualifiers(); } // Our location for everything implicitly-generated. SourceLocation Loc = CopyAssignOperator->getLocEnd().isValid() ? CopyAssignOperator->getLocEnd() : CopyAssignOperator->getLocation(); // Builds a DeclRefExpr for the "other" object. RefBuilder OtherRef(Other, OtherRefType); // Builds the "this" pointer. ThisBuilder This; // Assign base classes. bool Invalid = false; for (auto &Base : ClassDecl->bases()) { // Form the assignment: // static_cast(this)->Base::operator=(static_cast(other)); QualType BaseType = Base.getType().getUnqualifiedType(); if (!BaseType->isRecordType()) { Invalid = true; continue; } CXXCastPath BasePath; BasePath.push_back(&Base); // Construct the "from" expression, which is an implicit cast to the // appropriately-qualified base type. CastBuilder From(OtherRef, Context.getQualifiedType(BaseType, OtherQuals), VK_LValue, BasePath); // Dereference "this". DerefBuilder DerefThis(This); CastBuilder To(DerefThis, Context.getCVRQualifiedType( BaseType, CopyAssignOperator->getTypeQualifiers()), VK_LValue, BasePath); // Build the copy. StmtResult Copy = buildSingleCopyAssign(*this, Loc, BaseType, To, From, /*CopyingBaseSubobject=*/true, /*Copying=*/true); if (Copy.isInvalid()) { Diag(CurrentLocation, diag::note_member_synthesized_at) << CXXCopyAssignment << Context.getTagDeclType(ClassDecl); CopyAssignOperator->setInvalidDecl(); return; } // Success! Record the copy. Statements.push_back(Copy.getAs()); } // Assign non-static members. 
for (auto *Field : ClassDecl->fields()) { // FIXME: We should form some kind of AST representation for the implied // memcpy in a union copy operation. if (Field->isUnnamedBitfield() || Field->getParent()->isUnion()) continue; if (Field->isInvalidDecl()) { Invalid = true; continue; } // Check for members of reference type; we can't copy those. if (Field->getType()->isReferenceType()) { Diag(ClassDecl->getLocation(), diag::err_uninitialized_member_for_assign) << Context.getTagDeclType(ClassDecl) << 0 << Field->getDeclName(); Diag(Field->getLocation(), diag::note_declared_at); Diag(CurrentLocation, diag::note_member_synthesized_at) << CXXCopyAssignment << Context.getTagDeclType(ClassDecl); Invalid = true; continue; } // Check for members of const-qualified, non-class type. QualType BaseType = Context.getBaseElementType(Field->getType()); if (!BaseType->getAs() && BaseType.isConstQualified()) { Diag(ClassDecl->getLocation(), diag::err_uninitialized_member_for_assign) << Context.getTagDeclType(ClassDecl) << 1 << Field->getDeclName(); Diag(Field->getLocation(), diag::note_declared_at); Diag(CurrentLocation, diag::note_member_synthesized_at) << CXXCopyAssignment << Context.getTagDeclType(ClassDecl); Invalid = true; continue; } // Suppress assigning zero-width bitfields. if (Field->isBitField() && Field->getBitWidthValue(Context) == 0) continue; QualType FieldType = Field->getType().getNonReferenceType(); if (FieldType->isIncompleteArrayType()) { assert(ClassDecl->hasFlexibleArrayMember() && "Incomplete array type is not valid"); continue; } // Build references to the field in the object we're copying from and to. CXXScopeSpec SS; // Intentionally empty LookupResult MemberLookup(*this, Field->getDeclName(), Loc, LookupMemberName); MemberLookup.addDecl(Field); MemberLookup.resolveKind(); MemberBuilder From(OtherRef, OtherRefType, /*IsArrow=*/false, MemberLookup); MemberBuilder To(This, getCurrentThisType(), /*IsArrow=*/true, MemberLookup); // Build the copy of this field. StmtResult Copy = buildSingleCopyAssign(*this, Loc, FieldType, To, From, /*CopyingBaseSubobject=*/false, /*Copying=*/true); if (Copy.isInvalid()) { Diag(CurrentLocation, diag::note_member_synthesized_at) << CXXCopyAssignment << Context.getTagDeclType(ClassDecl); CopyAssignOperator->setInvalidDecl(); return; } // Success! Record the copy. Statements.push_back(Copy.getAs()); } if (!Invalid) { // Add a "return *this;" ExprResult ThisObj = CreateBuiltinUnaryOp(Loc, UO_Deref, This.build(*this, Loc)); StmtResult Return = BuildReturnStmt(Loc, ThisObj.get()); if (Return.isInvalid()) Invalid = true; else { Statements.push_back(Return.getAs()); if (Trap.hasErrorOccurred()) { Diag(CurrentLocation, diag::note_member_synthesized_at) << CXXCopyAssignment << Context.getTagDeclType(ClassDecl); Invalid = true; } } } // The exception specification is needed because we are defining the // function. 
ResolveExceptionSpec(CurrentLocation, CopyAssignOperator->getType()->castAs()); if (Invalid) { CopyAssignOperator->setInvalidDecl(); return; } StmtResult Body; { CompoundScopeRAII CompoundScope(*this); Body = ActOnCompoundStmt(Loc, Loc, Statements, /*isStmtExpr=*/false); assert(!Body.isInvalid() && "Compound statement creation cannot fail"); } CopyAssignOperator->setBody(Body.getAs()); if (ASTMutationListener *L = getASTMutationListener()) { L->CompletedImplicitDefinition(CopyAssignOperator); } } Sema::ImplicitExceptionSpecification Sema::ComputeDefaultedMoveAssignmentExceptionSpec(CXXMethodDecl *MD) { CXXRecordDecl *ClassDecl = MD->getParent(); ImplicitExceptionSpecification ExceptSpec(*this); if (ClassDecl->isInvalidDecl()) return ExceptSpec; // C++0x [except.spec]p14: // An implicitly declared special member function (Clause 12) shall have an // exception-specification. [...] // It is unspecified whether or not an implicit move assignment operator // attempts to deduplicate calls to assignment operators of virtual bases are // made. As such, this exception specification is effectively unspecified. // Based on a similar decision made for constness in C++0x, we're erring on // the side of assuming such calls to be made regardless of whether they // actually happen. // Note that a move constructor is not implicitly declared when there are // virtual bases, but it can still be user-declared and explicitly defaulted. for (const auto &Base : ClassDecl->bases()) { if (Base.isVirtual()) continue; CXXRecordDecl *BaseClassDecl = cast(Base.getType()->getAs()->getDecl()); if (CXXMethodDecl *MoveAssign = LookupMovingAssignment(BaseClassDecl, 0, false, 0)) ExceptSpec.CalledDecl(Base.getLocStart(), MoveAssign); } for (const auto &Base : ClassDecl->vbases()) { CXXRecordDecl *BaseClassDecl = cast(Base.getType()->getAs()->getDecl()); if (CXXMethodDecl *MoveAssign = LookupMovingAssignment(BaseClassDecl, 0, false, 0)) ExceptSpec.CalledDecl(Base.getLocStart(), MoveAssign); } for (const auto *Field : ClassDecl->fields()) { QualType FieldType = Context.getBaseElementType(Field->getType()); if (CXXRecordDecl *FieldClassDecl = FieldType->getAsCXXRecordDecl()) { if (CXXMethodDecl *MoveAssign = LookupMovingAssignment(FieldClassDecl, FieldType.getCVRQualifiers(), false, 0)) ExceptSpec.CalledDecl(Field->getLocation(), MoveAssign); } } return ExceptSpec; } CXXMethodDecl *Sema::DeclareImplicitMoveAssignment(CXXRecordDecl *ClassDecl) { assert(ClassDecl->needsImplicitMoveAssignment()); DeclaringSpecialMember DSM(*this, ClassDecl, CXXMoveAssignment); if (DSM.isAlreadyBeingDeclared()) return nullptr; // Note: The following rules are largely analoguous to the move // constructor rules. QualType ArgType = Context.getTypeDeclType(ClassDecl); QualType RetType = Context.getLValueReferenceType(ArgType); ArgType = Context.getRValueReferenceType(ArgType); bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl, CXXMoveAssignment, false); // An implicitly-declared move assignment operator is an inline public // member of its class. 
  DeclarationName Name = Context.DeclarationNames.getCXXOperatorName(OO_Equal);
  SourceLocation ClassLoc = ClassDecl->getLocation();
  DeclarationNameInfo NameInfo(Name, ClassLoc);
  CXXMethodDecl *MoveAssignment =
      CXXMethodDecl::Create(Context, ClassDecl, ClassLoc, NameInfo, QualType(),
                            /*TInfo=*/nullptr, /*StorageClass=*/SC_None,
                            /*isInline=*/true, Constexpr, SourceLocation());
  MoveAssignment->setAccess(AS_public);
  MoveAssignment->setDefaulted();
  MoveAssignment->setImplicit();

  if (getLangOpts().CUDA) {
    inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXMoveAssignment,
                                            MoveAssignment,
                                            /* ConstRHS */ false,
                                            /* Diagnose */ false);
  }

  // Build an exception specification pointing back at this member.
  FunctionProtoType::ExtProtoInfo EPI =
      getImplicitMethodEPI(*this, MoveAssignment);
  MoveAssignment->setType(Context.getFunctionType(RetType, ArgType, EPI));

  // Add the parameter to the operator.
  ParmVarDecl *FromParam = ParmVarDecl::Create(Context, MoveAssignment,
                                               ClassLoc, ClassLoc,
                                               /*Id=*/nullptr, ArgType,
                                               /*TInfo=*/nullptr, SC_None,
                                               nullptr);
  MoveAssignment->setParams(FromParam);

  MoveAssignment->setTrivial(
    ClassDecl->needsOverloadResolutionForMoveAssignment()
      ? SpecialMemberIsTrivial(MoveAssignment, CXXMoveAssignment)
      : ClassDecl->hasTrivialMoveAssignment());

  // Note that we have added this move-assignment operator.
  ++ASTContext::NumImplicitMoveAssignmentOperatorsDeclared;

  Scope *S = getScopeForContext(ClassDecl);
  CheckImplicitSpecialMemberDeclaration(S, MoveAssignment);

  if (ShouldDeleteSpecialMember(MoveAssignment, CXXMoveAssignment)) {
    ClassDecl->setImplicitMoveAssignmentIsDeleted();
    SetDeclDeleted(MoveAssignment, ClassLoc);
  }

  if (S)
    PushOnScopeChains(MoveAssignment, S, false);
  ClassDecl->addDecl(MoveAssignment);

  return MoveAssignment;
}

/// Check if we're implicitly defining a move assignment operator for a class
/// with virtual bases. Such a move assignment might move-assign the virtual
/// base multiple times.
static void checkMoveAssignmentForRepeatedMove(Sema &S, CXXRecordDecl *Class,
                                               SourceLocation CurrentLocation) {
  assert(!Class->isDependentContext() && "should not define dependent move");

  // Only a virtual base could get implicitly move-assigned multiple times.
  // Only a non-trivial move assignment can observe this. We only want to
  // diagnose if we implicitly define an assignment operator that assigns
  // two base classes, both of which move-assign the same virtual base.
  if (Class->getNumVBases() == 0 || Class->hasTrivialMoveAssignment() ||
      Class->getNumBases() < 2)
    return;

  llvm::SmallVector<CXXBaseSpecifier *, 16> Worklist;
  typedef llvm::DenseMap<CXXRecordDecl*, CXXBaseSpecifier*> VBaseMap;
  VBaseMap VBases;

  for (auto &BI : Class->bases()) {
    Worklist.push_back(&BI);
    while (!Worklist.empty()) {
      CXXBaseSpecifier *BaseSpec = Worklist.pop_back_val();
      CXXRecordDecl *Base = BaseSpec->getType()->getAsCXXRecordDecl();

      // If the base has no non-trivial move assignment operators,
      // we don't care about moves from it.
      if (!Base->hasNonTrivialMoveAssignment())
        continue;

      // If there's nothing virtual here, skip it.
      if (!BaseSpec->isVirtual() && !Base->getNumVBases())
        continue;

      // If we're not actually going to call a move assignment for this base,
      // or the selected move assignment is trivial, skip it.
      Sema::SpecialMemberOverloadResult *SMOR =
        S.LookupSpecialMember(Base, Sema::CXXMoveAssignment,
                              /*ConstArg*/false, /*VolatileArg*/false,
                              /*RValueThis*/true, /*ConstThis*/false,
                              /*VolatileThis*/false);
      if (!SMOR->getMethod() || SMOR->getMethod()->isTrivial() ||
          !SMOR->getMethod()->isMoveAssignmentOperator())
        continue;

      if (BaseSpec->isVirtual()) {
        // We're going to move-assign this virtual base, and its move
        // assignment operator is not trivial. If this can happen for
        // multiple distinct direct bases of Class, diagnose it. (If it
        // only happens in one base, we'll diagnose it when synthesizing
        // that base class's move assignment operator.)
        CXXBaseSpecifier *&Existing =
            VBases.insert(std::make_pair(Base->getCanonicalDecl(), &BI))
                .first->second;
        if (Existing && Existing != &BI) {
          S.Diag(CurrentLocation, diag::warn_vbase_moved_multiple_times)
              << Class << Base;
          S.Diag(Existing->getLocStart(), diag::note_vbase_moved_here)
              << (Base->getCanonicalDecl() ==
                  Existing->getType()->getAsCXXRecordDecl()->getCanonicalDecl())
              << Base << Existing->getType() << Existing->getSourceRange();
          S.Diag(BI.getLocStart(), diag::note_vbase_moved_here)
              << (Base->getCanonicalDecl() ==
                  BI.getType()->getAsCXXRecordDecl()->getCanonicalDecl())
              << Base << BI.getType() << BaseSpec->getSourceRange();

          // Only diagnose each vbase once.
          Existing = nullptr;
        }
      } else {
        // Only walk over bases that have defaulted move assignment operators.
        // We assume that any user-provided move assignment operator handles
        // the multiple-moves-of-vbase case itself somehow.
        if (!SMOR->getMethod()->isDefaulted())
          continue;

        // We're going to move the base classes of Base. Add them to the list.
        for (auto &BI : Base->bases())
          Worklist.push_back(&BI);
      }
    }
  }
}

void Sema::DefineImplicitMoveAssignment(SourceLocation CurrentLocation,
                                        CXXMethodDecl *MoveAssignOperator) {
  assert((MoveAssignOperator->isDefaulted() &&
          MoveAssignOperator->isOverloadedOperator() &&
          MoveAssignOperator->getOverloadedOperator() == OO_Equal &&
          !MoveAssignOperator->doesThisDeclarationHaveABody() &&
          !MoveAssignOperator->isDeleted()) &&
         "DefineImplicitMoveAssignment called for wrong function");

  CXXRecordDecl *ClassDecl = MoveAssignOperator->getParent();

  if (ClassDecl->isInvalidDecl() || MoveAssignOperator->isInvalidDecl()) {
    MoveAssignOperator->setInvalidDecl();
    return;
  }

  MoveAssignOperator->markUsed(Context);

  SynthesizedFunctionScope Scope(*this, MoveAssignOperator);
  DiagnosticErrorTrap Trap(Diags);

  // C++0x [class.copy]p28:
  //   The implicitly-defined or move assignment operator for a non-union class
  //   X performs memberwise move assignment of its subobjects. The direct base
  //   classes of X are assigned first, in the order of their declaration in the
  //   base-specifier-list, and then the immediate non-static data members of X
  //   are assigned, in the order in which they were declared in the class
  //   definition.

  // Issue a warning if our implicit move assignment operator will move
  // from a virtual base more than once.
  checkMoveAssignmentForRepeatedMove(*this, ClassDecl, CurrentLocation);

  // The statements that form the synthesized function body.
  SmallVector<Stmt*, 8> Statements;

  // The parameter for the "other" object, which we are moving from.
  ParmVarDecl *Other = MoveAssignOperator->getParamDecl(0);
  QualType OtherRefType = Other->getType()->
      getAs<RValueReferenceType>()->getPointeeType();
  assert(!OtherRefType.getQualifiers() &&
         "Bad argument type of defaulted move assignment");

  // Our location for everything implicitly-generated.
  SourceLocation Loc = MoveAssignOperator->getLocEnd().isValid() ?
                          MoveAssignOperator->getLocEnd() :
                          MoveAssignOperator->getLocation();

  // Builds a reference to the "other" object.
  RefBuilder OtherRef(Other, OtherRefType);
  // Cast to rvalue.
  MoveCastBuilder MoveOther(OtherRef);

  // Builds the "this" pointer.
  ThisBuilder This;

  // Assign base classes.
  bool Invalid = false;
  for (auto &Base : ClassDecl->bases()) {
    // C++11 [class.copy]p28:
    //   It is unspecified whether subobjects representing virtual base classes
    //   are assigned more than once by the implicitly-defined copy assignment
    //   operator.
    // FIXME: Do not assign to a vbase that will be assigned by some other base
    // class. For a move-assignment, this can result in the vbase being moved
    // multiple times.

    // Form the assignment:
    //   static_cast<Base*>(this)->Base::operator=(static_cast<Base&&>(other));
    QualType BaseType = Base.getType().getUnqualifiedType();
    if (!BaseType->isRecordType()) {
      Invalid = true;
      continue;
    }

    CXXCastPath BasePath;
    BasePath.push_back(&Base);

    // Construct the "from" expression, which is an implicit cast to the
    // appropriately-qualified base type.
    CastBuilder From(OtherRef, BaseType, VK_XValue, BasePath);

    // Dereference "this".
    DerefBuilder DerefThis(This);

    // Implicitly cast "this" to the appropriately-qualified base type.
    CastBuilder To(DerefThis,
                   Context.getCVRQualifiedType(
                       BaseType, MoveAssignOperator->getTypeQualifiers()),
                   VK_LValue, BasePath);

    // Build the move.
    StmtResult Move = buildSingleCopyAssign(*this, Loc, BaseType,
                                            To, From,
                                            /*CopyingBaseSubobject=*/true,
                                            /*Copying=*/false);
    if (Move.isInvalid()) {
      Diag(CurrentLocation, diag::note_member_synthesized_at)
        << CXXMoveAssignment << Context.getTagDeclType(ClassDecl);
      MoveAssignOperator->setInvalidDecl();
      return;
    }

    // Success! Record the move.
    Statements.push_back(Move.getAs<Stmt>());
  }

  // Assign non-static members.
  for (auto *Field : ClassDecl->fields()) {
    // FIXME: We should form some kind of AST representation for the implied
    // memcpy in a union copy operation.
    if (Field->isUnnamedBitfield() || Field->getParent()->isUnion())
      continue;

    if (Field->isInvalidDecl()) {
      Invalid = true;
      continue;
    }

    // Check for members of reference type; we can't move those.
    if (Field->getType()->isReferenceType()) {
      Diag(ClassDecl->getLocation(), diag::err_uninitialized_member_for_assign)
        << Context.getTagDeclType(ClassDecl) << 0 << Field->getDeclName();
      Diag(Field->getLocation(), diag::note_declared_at);
      Diag(CurrentLocation, diag::note_member_synthesized_at)
        << CXXMoveAssignment << Context.getTagDeclType(ClassDecl);
      Invalid = true;
      continue;
    }

    // Check for members of const-qualified, non-class type.
    QualType BaseType = Context.getBaseElementType(Field->getType());
    if (!BaseType->getAs<RecordType>() && BaseType.isConstQualified()) {
      Diag(ClassDecl->getLocation(), diag::err_uninitialized_member_for_assign)
        << Context.getTagDeclType(ClassDecl) << 1 << Field->getDeclName();
      Diag(Field->getLocation(), diag::note_declared_at);
      Diag(CurrentLocation, diag::note_member_synthesized_at)
        << CXXMoveAssignment << Context.getTagDeclType(ClassDecl);
      Invalid = true;
      continue;
    }

    // Suppress assigning zero-width bitfields.
    if (Field->isBitField() && Field->getBitWidthValue(Context) == 0)
      continue;

    QualType FieldType = Field->getType().getNonReferenceType();
    if (FieldType->isIncompleteArrayType()) {
      assert(ClassDecl->hasFlexibleArrayMember() &&
             "Incomplete array type is not valid");
      continue;
    }

    // Build references to the field in the object we're copying from and to.
    LookupResult MemberLookup(*this, Field->getDeclName(), Loc,
                              LookupMemberName);
    MemberLookup.addDecl(Field);
    MemberLookup.resolveKind();
    MemberBuilder From(MoveOther, OtherRefType,
                       /*IsArrow=*/false, MemberLookup);
    MemberBuilder To(This, getCurrentThisType(),
                     /*IsArrow=*/true, MemberLookup);

    assert(!From.build(*this, Loc)->isLValue() && // could be xvalue or prvalue
        "Member reference with rvalue base must be rvalue except for reference "
        "members, which aren't allowed for move assignment.");

    // Build the move of this field.
    StmtResult Move = buildSingleCopyAssign(*this, Loc, FieldType,
                                            To, From,
                                            /*CopyingBaseSubobject=*/false,
                                            /*Copying=*/false);
    if (Move.isInvalid()) {
      Diag(CurrentLocation, diag::note_member_synthesized_at)
        << CXXMoveAssignment << Context.getTagDeclType(ClassDecl);
      MoveAssignOperator->setInvalidDecl();
      return;
    }

    // Success! Record the move.
    Statements.push_back(Move.getAs<Stmt>());
  }

  if (!Invalid) {
    // Add a "return *this;"
    ExprResult ThisObj =
        CreateBuiltinUnaryOp(Loc, UO_Deref, This.build(*this, Loc));

    StmtResult Return = BuildReturnStmt(Loc, ThisObj.get());
    if (Return.isInvalid())
      Invalid = true;
    else {
      Statements.push_back(Return.getAs<Stmt>());

      if (Trap.hasErrorOccurred()) {
        Diag(CurrentLocation, diag::note_member_synthesized_at)
          << CXXMoveAssignment << Context.getTagDeclType(ClassDecl);
        Invalid = true;
      }
    }
  }

  // The exception specification is needed because we are defining the
  // function.
  ResolveExceptionSpec(CurrentLocation,
                       MoveAssignOperator->getType()->castAs<FunctionProtoType>());

  if (Invalid) {
    MoveAssignOperator->setInvalidDecl();
    return;
  }

  StmtResult Body;
  {
    CompoundScopeRAII CompoundScope(*this);
    Body = ActOnCompoundStmt(Loc, Loc, Statements,
                             /*isStmtExpr=*/false);
    assert(!Body.isInvalid() && "Compound statement creation cannot fail");
  }
  MoveAssignOperator->setBody(Body.getAs<Stmt>());

  if (ASTMutationListener *L = getASTMutationListener()) {
    L->CompletedImplicitDefinition(MoveAssignOperator);
  }
}

Sema::ImplicitExceptionSpecification
Sema::ComputeDefaultedCopyCtorExceptionSpec(CXXMethodDecl *MD) {
  CXXRecordDecl *ClassDecl = MD->getParent();

  ImplicitExceptionSpecification ExceptSpec(*this);
  if (ClassDecl->isInvalidDecl())
    return ExceptSpec;

  const FunctionProtoType *T = MD->getType()->castAs<FunctionProtoType>();
  assert(T->getNumParams() >= 1 && "not a copy ctor");
  unsigned Quals = T->getParamType(0).getNonReferenceType().getCVRQualifiers();

  // C++ [except.spec]p14:
  //   An implicitly declared special member function (Clause 12) shall have an
  //   exception-specification. [...]
  for (const auto &Base : ClassDecl->bases()) {
    // Virtual bases are handled below.
    if (Base.isVirtual())
      continue;

    CXXRecordDecl *BaseClassDecl
      = cast<CXXRecordDecl>(Base.getType()->getAs<RecordType>()->getDecl());
    if (CXXConstructorDecl *CopyConstructor =
          LookupCopyingConstructor(BaseClassDecl, Quals))
      ExceptSpec.CalledDecl(Base.getLocStart(), CopyConstructor);
  }
  for (const auto &Base : ClassDecl->vbases()) {
    CXXRecordDecl *BaseClassDecl
      = cast<CXXRecordDecl>(Base.getType()->getAs<RecordType>()->getDecl());
    if (CXXConstructorDecl *CopyConstructor =
          LookupCopyingConstructor(BaseClassDecl, Quals))
      ExceptSpec.CalledDecl(Base.getLocStart(), CopyConstructor);
  }
  for (const auto *Field : ClassDecl->fields()) {
    QualType FieldType = Context.getBaseElementType(Field->getType());
    if (CXXRecordDecl *FieldClassDecl = FieldType->getAsCXXRecordDecl()) {
      if (CXXConstructorDecl *CopyConstructor =
              LookupCopyingConstructor(FieldClassDecl,
                                       Quals | FieldType.getCVRQualifiers()))
        ExceptSpec.CalledDecl(Field->getLocation(), CopyConstructor);
    }
  }

  return ExceptSpec;
}

CXXConstructorDecl *Sema::DeclareImplicitCopyConstructor(
                                                    CXXRecordDecl *ClassDecl) {
  // C++ [class.copy]p4:
  //   If the class definition does not explicitly declare a copy
  //   constructor, one is declared implicitly.
  assert(ClassDecl->needsImplicitCopyConstructor());

  DeclaringSpecialMember DSM(*this, ClassDecl, CXXCopyConstructor);
  if (DSM.isAlreadyBeingDeclared())
    return nullptr;

  QualType ClassType = Context.getTypeDeclType(ClassDecl);
  QualType ArgType = ClassType;
  bool Const = ClassDecl->implicitCopyConstructorHasConstParam();
  if (Const)
    ArgType = ArgType.withConst();
  ArgType = Context.getLValueReferenceType(ArgType);

  bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl,
                                                     CXXCopyConstructor,
                                                     Const);

  DeclarationName Name
    = Context.DeclarationNames.getCXXConstructorName(
                                           Context.getCanonicalType(ClassType));
  SourceLocation ClassLoc = ClassDecl->getLocation();
  DeclarationNameInfo NameInfo(Name, ClassLoc);

  //   An implicitly-declared copy constructor is an inline public
  //   member of its class.
  CXXConstructorDecl *CopyConstructor = CXXConstructorDecl::Create(
      Context, ClassDecl, ClassLoc, NameInfo, QualType(), /*TInfo=*/nullptr,
      /*isExplicit=*/false, /*isInline=*/true, /*isImplicitlyDeclared=*/true,
      Constexpr);
  CopyConstructor->setAccess(AS_public);
  CopyConstructor->setDefaulted();

  if (getLangOpts().CUDA) {
    inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXCopyConstructor,
                                            CopyConstructor,
                                            /* ConstRHS */ Const,
                                            /* Diagnose */ false);
  }

  // Build an exception specification pointing back at this member.
  FunctionProtoType::ExtProtoInfo EPI =
      getImplicitMethodEPI(*this, CopyConstructor);
  CopyConstructor->setType(
      Context.getFunctionType(Context.VoidTy, ArgType, EPI));

  // Add the parameter to the constructor.
  ParmVarDecl *FromParam = ParmVarDecl::Create(Context, CopyConstructor,
                                               ClassLoc, ClassLoc,
                                               /*IdentifierInfo=*/nullptr,
                                               ArgType, /*TInfo=*/nullptr,
                                               SC_None, nullptr);
  CopyConstructor->setParams(FromParam);

  CopyConstructor->setTrivial(
    ClassDecl->needsOverloadResolutionForCopyConstructor()
      ? SpecialMemberIsTrivial(CopyConstructor, CXXCopyConstructor)
      : ClassDecl->hasTrivialCopyConstructor());

  // Note that we have declared this constructor.
  ++ASTContext::NumImplicitCopyConstructorsDeclared;

  Scope *S = getScopeForContext(ClassDecl);
  CheckImplicitSpecialMemberDeclaration(S, CopyConstructor);

  if (ShouldDeleteSpecialMember(CopyConstructor, CXXCopyConstructor))
    SetDeclDeleted(CopyConstructor, ClassLoc);

  if (S)
    PushOnScopeChains(CopyConstructor, S, false);
  ClassDecl->addDecl(CopyConstructor);

  return CopyConstructor;
}

void Sema::DefineImplicitCopyConstructor(SourceLocation CurrentLocation,
                                         CXXConstructorDecl *CopyConstructor) {
  assert((CopyConstructor->isDefaulted() &&
          CopyConstructor->isCopyConstructor() &&
          !CopyConstructor->doesThisDeclarationHaveABody() &&
          !CopyConstructor->isDeleted()) &&
         "DefineImplicitCopyConstructor - call it for implicit copy ctor");

  CXXRecordDecl *ClassDecl = CopyConstructor->getParent();
  assert(ClassDecl && "DefineImplicitCopyConstructor - invalid constructor");

  // C++11 [class.copy]p7:
  //   The [definition of an implicitly declared copy constructor] is
  //   deprecated if the class has a user-declared copy assignment operator
  //   or a user-declared destructor.
  if (getLangOpts().CPlusPlus11 && CopyConstructor->isImplicit())
    diagnoseDeprecatedCopyOperation(*this, CopyConstructor, CurrentLocation);

  SynthesizedFunctionScope Scope(*this, CopyConstructor);
  DiagnosticErrorTrap Trap(Diags);

  if (SetCtorInitializers(CopyConstructor, /*AnyErrors=*/false) ||
      Trap.hasErrorOccurred()) {
    Diag(CurrentLocation, diag::note_member_synthesized_at)
      << CXXCopyConstructor << Context.getTagDeclType(ClassDecl);
    CopyConstructor->setInvalidDecl();
  } else {
    SourceLocation Loc = CopyConstructor->getLocEnd().isValid()
                             ? CopyConstructor->getLocEnd()
                             : CopyConstructor->getLocation();
    Sema::CompoundScopeRAII CompoundScope(*this);
    CopyConstructor->setBody(
        ActOnCompoundStmt(Loc, Loc, None, /*isStmtExpr=*/false).getAs<Stmt>());
  }

  // The exception specification is needed because we are defining the
  // function.
  ResolveExceptionSpec(CurrentLocation,
                       CopyConstructor->getType()->castAs<FunctionProtoType>());

  CopyConstructor->markUsed(Context);
  MarkVTableUsed(CurrentLocation, ClassDecl);

  if (ASTMutationListener *L = getASTMutationListener()) {
    L->CompletedImplicitDefinition(CopyConstructor);
  }
}

Sema::ImplicitExceptionSpecification
Sema::ComputeDefaultedMoveCtorExceptionSpec(CXXMethodDecl *MD) {
  CXXRecordDecl *ClassDecl = MD->getParent();

  // C++ [except.spec]p14:
  //   An implicitly declared special member function (Clause 12) shall have an
  //   exception-specification. [...]
  ImplicitExceptionSpecification ExceptSpec(*this);
  if (ClassDecl->isInvalidDecl())
    return ExceptSpec;

  // Direct base-class constructors.
  for (const auto &B : ClassDecl->bases()) {
    if (B.isVirtual()) // Handled below.
      continue;

    if (const RecordType *BaseType = B.getType()->getAs<RecordType>()) {
      CXXRecordDecl *BaseClassDecl = cast<CXXRecordDecl>(BaseType->getDecl());
      CXXConstructorDecl *Constructor =
          LookupMovingConstructor(BaseClassDecl, 0);
      // If this is a deleted function, add it anyway. This might be conformant
      // with the standard. This might not. I'm not sure. It might not matter.
      if (Constructor)
        ExceptSpec.CalledDecl(B.getLocStart(), Constructor);
    }
  }

  // Virtual base-class constructors.
  for (const auto &B : ClassDecl->vbases()) {
    if (const RecordType *BaseType = B.getType()->getAs<RecordType>()) {
      CXXRecordDecl *BaseClassDecl = cast<CXXRecordDecl>(BaseType->getDecl());
      CXXConstructorDecl *Constructor =
          LookupMovingConstructor(BaseClassDecl, 0);
      // If this is a deleted function, add it anyway. This might be conformant
      // with the standard. This might not. I'm not sure. It might not matter.
      if (Constructor)
        ExceptSpec.CalledDecl(B.getLocStart(), Constructor);
    }
  }

  // Field constructors.
  for (const auto *F : ClassDecl->fields()) {
    QualType FieldType = Context.getBaseElementType(F->getType());
    if (CXXRecordDecl *FieldRecDecl = FieldType->getAsCXXRecordDecl()) {
      CXXConstructorDecl *Constructor =
          LookupMovingConstructor(FieldRecDecl, FieldType.getCVRQualifiers());
      // If this is a deleted function, add it anyway. This might be conformant
      // with the standard. This might not. I'm not sure. It might not matter.
      // In particular, the problem is that this function never gets called. It
      // might just be ill-formed because this function attempts to refer to
      // a deleted function here.
      if (Constructor)
        ExceptSpec.CalledDecl(F->getLocation(), Constructor);
    }
  }

  return ExceptSpec;
}

CXXConstructorDecl *Sema::DeclareImplicitMoveConstructor(
                                                    CXXRecordDecl *ClassDecl) {
  assert(ClassDecl->needsImplicitMoveConstructor());

  DeclaringSpecialMember DSM(*this, ClassDecl, CXXMoveConstructor);
  if (DSM.isAlreadyBeingDeclared())
    return nullptr;

  QualType ClassType = Context.getTypeDeclType(ClassDecl);
  QualType ArgType = Context.getRValueReferenceType(ClassType);

  bool Constexpr = defaultedSpecialMemberIsConstexpr(*this, ClassDecl,
                                                     CXXMoveConstructor,
                                                     false);

  DeclarationName Name
    = Context.DeclarationNames.getCXXConstructorName(
                                           Context.getCanonicalType(ClassType));
  SourceLocation ClassLoc = ClassDecl->getLocation();
  DeclarationNameInfo NameInfo(Name, ClassLoc);

  // C++11 [class.copy]p11:
  //   An implicitly-declared copy/move constructor is an inline public
  //   member of its class.
  CXXConstructorDecl *MoveConstructor = CXXConstructorDecl::Create(
      Context, ClassDecl, ClassLoc, NameInfo, QualType(), /*TInfo=*/nullptr,
      /*isExplicit=*/false, /*isInline=*/true, /*isImplicitlyDeclared=*/true,
      Constexpr);
  MoveConstructor->setAccess(AS_public);
  MoveConstructor->setDefaulted();

  if (getLangOpts().CUDA) {
    inferCUDATargetForImplicitSpecialMember(ClassDecl, CXXMoveConstructor,
                                            MoveConstructor,
                                            /* ConstRHS */ false,
                                            /* Diagnose */ false);
  }

  // Build an exception specification pointing back at this member.
  FunctionProtoType::ExtProtoInfo EPI =
      getImplicitMethodEPI(*this, MoveConstructor);
  MoveConstructor->setType(
      Context.getFunctionType(Context.VoidTy, ArgType, EPI));

  // Add the parameter to the constructor.
  ParmVarDecl *FromParam = ParmVarDecl::Create(Context, MoveConstructor,
                                               ClassLoc, ClassLoc,
                                               /*IdentifierInfo=*/nullptr,
                                               ArgType, /*TInfo=*/nullptr,
                                               SC_None, nullptr);
  MoveConstructor->setParams(FromParam);

  MoveConstructor->setTrivial(
    ClassDecl->needsOverloadResolutionForMoveConstructor()
      ? SpecialMemberIsTrivial(MoveConstructor, CXXMoveConstructor)
      : ClassDecl->hasTrivialMoveConstructor());

  // Note that we have declared this constructor.
  ++ASTContext::NumImplicitMoveConstructorsDeclared;

  Scope *S = getScopeForContext(ClassDecl);
  CheckImplicitSpecialMemberDeclaration(S, MoveConstructor);

  if (ShouldDeleteSpecialMember(MoveConstructor, CXXMoveConstructor)) {
    ClassDecl->setImplicitMoveConstructorIsDeleted();
    SetDeclDeleted(MoveConstructor, ClassLoc);
  }

  if (S)
    PushOnScopeChains(MoveConstructor, S, false);
  ClassDecl->addDecl(MoveConstructor);

  return MoveConstructor;
}

void Sema::DefineImplicitMoveConstructor(SourceLocation CurrentLocation,
                                         CXXConstructorDecl *MoveConstructor) {
  assert((MoveConstructor->isDefaulted() &&
          MoveConstructor->isMoveConstructor() &&
          !MoveConstructor->doesThisDeclarationHaveABody() &&
          !MoveConstructor->isDeleted()) &&
         "DefineImplicitMoveConstructor - call it for implicit move ctor");

  CXXRecordDecl *ClassDecl = MoveConstructor->getParent();
  assert(ClassDecl && "DefineImplicitMoveConstructor - invalid constructor");

  SynthesizedFunctionScope Scope(*this, MoveConstructor);
  DiagnosticErrorTrap Trap(Diags);

  if (SetCtorInitializers(MoveConstructor, /*AnyErrors=*/false) ||
      Trap.hasErrorOccurred()) {
    Diag(CurrentLocation, diag::note_member_synthesized_at)
      << CXXMoveConstructor << Context.getTagDeclType(ClassDecl);
    MoveConstructor->setInvalidDecl();
  } else {
    SourceLocation Loc = MoveConstructor->getLocEnd().isValid()
                             ? MoveConstructor->getLocEnd()
                             : MoveConstructor->getLocation();
    Sema::CompoundScopeRAII CompoundScope(*this);
    MoveConstructor->setBody(ActOnCompoundStmt(
        Loc, Loc, None, /*isStmtExpr=*/ false).getAs<Stmt>());
  }

  // The exception specification is needed because we are defining the
  // function.
  ResolveExceptionSpec(CurrentLocation,
                       MoveConstructor->getType()->castAs<FunctionProtoType>());

  MoveConstructor->markUsed(Context);
  MarkVTableUsed(CurrentLocation, ClassDecl);

  if (ASTMutationListener *L = getASTMutationListener()) {
    L->CompletedImplicitDefinition(MoveConstructor);
  }
}

bool Sema::isImplicitlyDeleted(FunctionDecl *FD) {
  return FD->isDeleted() && FD->isDefaulted() && isa<CXXMethodDecl>(FD);
}

void Sema::DefineImplicitLambdaToFunctionPointerConversion(
                            SourceLocation CurrentLocation,
                            CXXConversionDecl *Conv) {
  CXXRecordDecl *Lambda = Conv->getParent();
  CXXMethodDecl *CallOp = Lambda->getLambdaCallOperator();
  // If we are defining a specialization of a conversion to function-ptr
  // cache the deduced template arguments for this specialization
  // so that we can use them to retrieve the corresponding call-operator
  // and static-invoker.
  const TemplateArgumentList *DeducedTemplateArgs = nullptr;

  // Retrieve the corresponding call-operator specialization.
  if (Lambda->isGenericLambda()) {
    assert(Conv->isFunctionTemplateSpecialization());
    FunctionTemplateDecl *CallOpTemplate =
        CallOp->getDescribedFunctionTemplate();
    DeducedTemplateArgs = Conv->getTemplateSpecializationArgs();
    void *InsertPos = nullptr;
    FunctionDecl *CallOpSpec = CallOpTemplate->findSpecialization(
        DeducedTemplateArgs->asArray(), InsertPos);
    assert(CallOpSpec &&
           "Conversion operator must have a corresponding call operator");
    CallOp = cast<CXXMethodDecl>(CallOpSpec);
  }

  // Mark the call operator referenced (and add to pending instantiations
  // if necessary).
  // For both the conversion and static-invoker template specializations
  // we construct their bodies in this function, so no need to add them
  // to the PendingInstantiations.
  MarkFunctionReferenced(CurrentLocation, CallOp);

  SynthesizedFunctionScope Scope(*this, Conv);
  DiagnosticErrorTrap Trap(Diags);

  // Retrieve the static invoker...
  CXXMethodDecl *Invoker = Lambda->getLambdaStaticInvoker();
  // ... and get the corresponding specialization for a generic lambda.
  if (Lambda->isGenericLambda()) {
    assert(DeducedTemplateArgs &&
           "Must have deduced template arguments from Conversion Operator");
    FunctionTemplateDecl *InvokeTemplate =
        Invoker->getDescribedFunctionTemplate();
    void *InsertPos = nullptr;
    FunctionDecl *InvokeSpec = InvokeTemplate->findSpecialization(
        DeducedTemplateArgs->asArray(), InsertPos);
    assert(InvokeSpec &&
           "Must have a corresponding static invoker specialization");
    Invoker = cast<CXXMethodDecl>(InvokeSpec);
  }
  // Construct the body of the conversion function { return __invoke; }.
  Expr *FunctionRef = BuildDeclRefExpr(Invoker, Invoker->getType(),
                                       VK_LValue, Conv->getLocation()).get();
  assert(FunctionRef && "Can't refer to __invoke function?");
  Stmt *Return = BuildReturnStmt(Conv->getLocation(), FunctionRef).get();
  Conv->setBody(new (Context) CompoundStmt(Context, Return,
                                           Conv->getLocation(),
                                           Conv->getLocation()));
  Conv->markUsed(Context);
  Conv->setReferenced();

  // Fill in the __invoke function with a dummy implementation. IR generation
  // will fill in the actual details.
  Invoker->markUsed(Context);
  Invoker->setReferenced();
  Invoker->setBody(new (Context) CompoundStmt(Conv->getLocation()));

  if (ASTMutationListener *L = getASTMutationListener()) {
    L->CompletedImplicitDefinition(Conv);
    L->CompletedImplicitDefinition(Invoker);
  }
}

void Sema::DefineImplicitLambdaToBlockPointerConversion(
       SourceLocation CurrentLocation,
       CXXConversionDecl *Conv) {
  assert(!Conv->getParent()->isGenericLambda());

  Conv->markUsed(Context);

  SynthesizedFunctionScope Scope(*this, Conv);
  DiagnosticErrorTrap Trap(Diags);

  // Copy-initialize the lambda object as needed to capture it.
  Expr *This = ActOnCXXThis(CurrentLocation).get();
  Expr *DerefThis = CreateBuiltinUnaryOp(CurrentLocation, UO_Deref, This).get();

  ExprResult BuildBlock = BuildBlockForLambdaConversion(CurrentLocation,
                                                        Conv->getLocation(),
                                                        Conv, DerefThis);

  // If we're not under ARC, make sure we still get the _Block_copy/autorelease
  // behavior. Note that only the general conversion function does this
  // (since it's unusable otherwise); in the case where we inline the
  // block literal, it has block literal lifetime semantics.
  if (!BuildBlock.isInvalid() && !getLangOpts().ObjCAutoRefCount)
    BuildBlock = ImplicitCastExpr::Create(Context, BuildBlock.get()->getType(),
                                          CK_CopyAndAutoreleaseBlockObject,
                                          BuildBlock.get(), nullptr, VK_RValue);

  if (BuildBlock.isInvalid()) {
    Diag(CurrentLocation, diag::note_lambda_to_block_conv);
    Conv->setInvalidDecl();
    return;
  }

  // Create the return statement that returns the block from the conversion
  // function.
  StmtResult Return = BuildReturnStmt(Conv->getLocation(), BuildBlock.get());
  if (Return.isInvalid()) {
    Diag(CurrentLocation, diag::note_lambda_to_block_conv);
    Conv->setInvalidDecl();
    return;
  }

  // Set the body of the conversion function.
  Stmt *ReturnS = Return.get();
  Conv->setBody(new (Context) CompoundStmt(Context, ReturnS,
                                           Conv->getLocation(),
                                           Conv->getLocation()));

  // We're done; notify the mutation listener, if any.
  if (ASTMutationListener *L = getASTMutationListener()) {
    L->CompletedImplicitDefinition(Conv);
  }
}

/// \brief Determine whether the given list of arguments contains exactly one
/// "real" (non-default) argument.
static bool hasOneRealArgument(MultiExprArg Args) {
  switch (Args.size()) {
  case 0:
    return false;

  default:
    if (!Args[1]->isDefaultArgument())
      return false;

    // fall through
  case 1:
    return !Args[0]->isDefaultArgument();
  }

  return false;
}

ExprResult
Sema::BuildCXXConstructExpr(SourceLocation ConstructLoc, QualType DeclInitType,
                            NamedDecl *FoundDecl,
                            CXXConstructorDecl *Constructor,
                            MultiExprArg ExprArgs,
                            bool HadMultipleCandidates,
                            bool IsListInitialization,
                            bool IsStdInitListInitialization,
                            bool RequiresZeroInit,
                            unsigned ConstructKind,
                            SourceRange ParenRange) {
  bool Elidable = false;

  // C++0x [class.copy]p34:
  //   When certain criteria are met, an implementation is allowed to
  //   omit the copy/move construction of a class object, even if the
  //   copy/move constructor and/or destructor for the object have
  //   side effects. [...]
  //     - when a temporary class object that has not been bound to a
  //       reference (12.2) would be copied/moved to a class object
  //       with the same cv-unqualified type, the copy/move operation
  //       can be omitted by constructing the temporary object
  //       directly into the target of the omitted copy/move
  if (ConstructKind == CXXConstructExpr::CK_Complete && Constructor &&
      Constructor->isCopyOrMoveConstructor() && hasOneRealArgument(ExprArgs)) {
    Expr *SubExpr = ExprArgs[0];
    Elidable = SubExpr->isTemporaryObject(
        Context, cast<CXXRecordDecl>(FoundDecl->getDeclContext()));
  }

  return BuildCXXConstructExpr(ConstructLoc, DeclInitType, FoundDecl,
                               Constructor, Elidable, ExprArgs,
                               HadMultipleCandidates, IsListInitialization,
                               IsStdInitListInitialization, RequiresZeroInit,
                               ConstructKind, ParenRange);
}

ExprResult
Sema::BuildCXXConstructExpr(SourceLocation ConstructLoc, QualType DeclInitType,
                            NamedDecl *FoundDecl,
                            CXXConstructorDecl *Constructor, bool Elidable,
                            MultiExprArg ExprArgs, bool HadMultipleCandidates,
                            bool IsListInitialization,
                            bool IsStdInitListInitialization,
                            bool RequiresZeroInit, unsigned ConstructKind,
                            SourceRange ParenRange) {
  if (auto *Shadow = dyn_cast<ConstructorUsingShadowDecl>(FoundDecl)) {
    Constructor = findInheritingConstructor(ConstructLoc, Constructor, Shadow);
    if (DiagnoseUseOfDecl(Constructor, ConstructLoc))
      return ExprError();
  }

  return BuildCXXConstructExpr(
      ConstructLoc, DeclInitType, Constructor, Elidable, ExprArgs,
      HadMultipleCandidates, IsListInitialization, IsStdInitListInitialization,
      RequiresZeroInit, ConstructKind, ParenRange);
}

/// BuildCXXConstructExpr - Creates a complete call to a constructor,
/// including handling of its default argument expressions.
ExprResult
Sema::BuildCXXConstructExpr(SourceLocation ConstructLoc, QualType DeclInitType,
                            CXXConstructorDecl *Constructor, bool Elidable,
                            MultiExprArg ExprArgs, bool HadMultipleCandidates,
                            bool IsListInitialization,
                            bool IsStdInitListInitialization,
                            bool RequiresZeroInit, unsigned ConstructKind,
                            SourceRange ParenRange) {
  assert(declaresSameEntity(
             Constructor->getParent(),
             DeclInitType->getBaseElementTypeUnsafe()->getAsCXXRecordDecl()) &&
         "given constructor for wrong type");
  MarkFunctionReferenced(ConstructLoc, Constructor);
  if (getLangOpts().CUDA && !CheckCUDACall(ConstructLoc, Constructor))
    return ExprError();

  return CXXConstructExpr::Create(
      Context, DeclInitType, ConstructLoc, Constructor, Elidable, ExprArgs,
      HadMultipleCandidates, IsListInitialization, IsStdInitListInitialization,
      RequiresZeroInit,
      static_cast<CXXConstructExpr::ConstructionKind>(ConstructKind),
      ParenRange);
}

ExprResult Sema::BuildCXXDefaultInitExpr(SourceLocation Loc, FieldDecl *Field) {
  assert(Field->hasInClassInitializer());

  // If we already have the in-class initializer nothing needs to be done.
  if (Field->getInClassInitializer())
    return CXXDefaultInitExpr::Create(Context, Loc, Field);

  // If we might have already tried and failed to instantiate, don't try again.
  if (Field->isInvalidDecl())
    return ExprError();

  // Maybe we haven't instantiated the in-class initializer. Go check the
  // pattern FieldDecl to see if it has one.
  CXXRecordDecl *ParentRD = cast<CXXRecordDecl>(Field->getParent());

  if (isTemplateInstantiation(ParentRD->getTemplateSpecializationKind())) {
    CXXRecordDecl *ClassPattern = ParentRD->getTemplateInstantiationPattern();
    DeclContext::lookup_result Lookup =
        ClassPattern->lookup(Field->getDeclName());

    // Lookup can return at most two results: the pattern for the field, or the
    // injected class name of the parent record. No other member can have the
    // same name as the field.
    // In modules mode, lookup can return multiple results (coming from
    // different modules).
    assert((getLangOpts().Modules || (!Lookup.empty() && Lookup.size() <= 2)) &&
           "more than two lookup results for field name");
    FieldDecl *Pattern = dyn_cast<FieldDecl>(Lookup[0]);
    if (!Pattern) {
      assert(isa<CXXRecordDecl>(Lookup[0]) &&
             "cannot have other non-field member with same name");
      for (auto L : Lookup)
        if (isa<FieldDecl>(L)) {
          Pattern = cast<FieldDecl>(L);
          break;
        }
      assert(Pattern && "We must have set the Pattern!");
    }

    if (InstantiateInClassInitializer(Loc, Field, Pattern,
                                      getTemplateInstantiationArgs(Field))) {
      // Don't diagnose this again.
      Field->setInvalidDecl();
      return ExprError();
    }
    return CXXDefaultInitExpr::Create(Context, Loc, Field);
  }

  // DR1351:
  //   If the brace-or-equal-initializer of a non-static data member
  //   invokes a defaulted default constructor of its class or of an
  //   enclosing class in a potentially evaluated subexpression, the
  //   program is ill-formed.
  //
  // This resolution is unworkable: the exception specification of the
  // default constructor can be needed in an unevaluated context, in
  // particular, in the operand of a noexcept-expression, and we can be
  // unable to compute an exception specification for an enclosed class.
  //
  // Any attempt to resolve the exception specification of a defaulted default
  // constructor before the initializer is lexically complete will ultimately
  // come here at which point we can diagnose it.
  RecordDecl *OutermostClass = ParentRD->getOuterLexicalRecordContext();
  Diag(Loc, diag::err_in_class_initializer_not_yet_parsed)
      << OutermostClass << Field;
  Diag(Field->getLocEnd(), diag::note_in_class_initializer_not_yet_parsed);
-
-  // Don't diagnose this again.
-  Field->setInvalidDecl();
+  // Recover by marking the field invalid, unless we're in a SFINAE context.
+  if (!isSFINAEContext())
+    Field->setInvalidDecl();
  return ExprError();
}

void Sema::FinalizeVarWithDestructor(VarDecl *VD, const RecordType *Record) {
  if (VD->isInvalidDecl()) return;

  CXXRecordDecl *ClassDecl = cast<CXXRecordDecl>(Record->getDecl());
  if (ClassDecl->isInvalidDecl()) return;
  if (ClassDecl->hasIrrelevantDestructor()) return;
  if (ClassDecl->isDependentContext()) return;

  CXXDestructorDecl *Destructor = LookupDestructor(ClassDecl);
  MarkFunctionReferenced(VD->getLocation(), Destructor);
  CheckDestructorAccess(VD->getLocation(), Destructor,
                        PDiag(diag::err_access_dtor_var)
                        << VD->getDeclName()
                        << VD->getType());
  DiagnoseUseOfDecl(Destructor, VD->getLocation());

  if (Destructor->isTrivial()) return;
  if (!VD->hasGlobalStorage()) return;

  // Emit warning for non-trivial dtor in global scope (a real global,
  // class-static, function-static).
  Diag(VD->getLocation(), diag::warn_exit_time_destructor);

  // TODO: this should be re-enabled for static locals by !CXAAtExit
  if (!VD->isStaticLocal())
    Diag(VD->getLocation(), diag::warn_global_destructor);
}

/// \brief Given a constructor and the set of arguments provided for the
/// constructor, convert the arguments and add any required default arguments
/// to form a proper call to this constructor.
///
/// \returns true if an error occurred, false otherwise.
bool
Sema::CompleteConstructorCall(CXXConstructorDecl *Constructor,
                              MultiExprArg ArgsPtr,
                              SourceLocation Loc,
                              SmallVectorImpl<Expr*> &ConvertedArgs,
                              bool AllowExplicit,
                              bool IsListInitialization) {
  // FIXME: This duplicates a lot of code from Sema::ConvertArgumentsForCall.
  unsigned NumArgs = ArgsPtr.size();
  Expr **Args = ArgsPtr.data();

  const FunctionProtoType *Proto
    = Constructor->getType()->getAs<FunctionProtoType>();
  assert(Proto && "Constructor without a prototype?");
  unsigned NumParams = Proto->getNumParams();

  // If too few arguments are available, we'll fill in the rest with defaults.
  if (NumArgs < NumParams)
    ConvertedArgs.reserve(NumParams);
  else
    ConvertedArgs.reserve(NumArgs);

  VariadicCallType CallType =
    Proto->isVariadic() ? VariadicConstructor : VariadicDoesNotApply;
  SmallVector<Expr *, 8> AllArgs;
  bool Invalid = GatherArgumentsForCall(Loc, Constructor,
                                        Proto, 0,
                                        llvm::makeArrayRef(Args, NumArgs),
                                        AllArgs,
                                        CallType, AllowExplicit,
                                        IsListInitialization);
  ConvertedArgs.append(AllArgs.begin(), AllArgs.end());

  DiagnoseSentinelCalls(Constructor, Loc, AllArgs);

  CheckConstructorCall(Constructor,
                       llvm::makeArrayRef(AllArgs.data(), AllArgs.size()),
                       Proto, Loc);

  return Invalid;
}

static inline bool
CheckOperatorNewDeleteDeclarationScope(Sema &SemaRef,
                                       const FunctionDecl *FnDecl) {
  const DeclContext *DC = FnDecl->getDeclContext()->getRedeclContext();
  if (isa<NamespaceDecl>(DC)) {
    return SemaRef.Diag(FnDecl->getLocation(),
                        diag::err_operator_new_delete_declared_in_namespace)
      << FnDecl->getDeclName();
  }

  if (isa<TranslationUnitDecl>(DC) &&
      FnDecl->getStorageClass() == SC_Static) {
    return SemaRef.Diag(FnDecl->getLocation(),
                        diag::err_operator_new_delete_declared_static)
      << FnDecl->getDeclName();
  }

  return false;
}

static inline bool
CheckOperatorNewDeleteTypes(Sema &SemaRef, const FunctionDecl *FnDecl,
                            CanQualType ExpectedResultType,
                            CanQualType ExpectedFirstParamType,
                            unsigned DependentParamTypeDiag,
                            unsigned InvalidParamTypeDiag) {
  QualType ResultType =
      FnDecl->getType()->getAs<FunctionType>()->getReturnType();

  // Check that the result type is not dependent.
  if (ResultType->isDependentType())
    return SemaRef.Diag(FnDecl->getLocation(),
                        diag::err_operator_new_delete_dependent_result_type)
      << FnDecl->getDeclName() << ExpectedResultType;

  // Check that the result type is what we expect.
  if (SemaRef.Context.getCanonicalType(ResultType) != ExpectedResultType)
    return SemaRef.Diag(FnDecl->getLocation(),
                        diag::err_operator_new_delete_invalid_result_type)
      << FnDecl->getDeclName() << ExpectedResultType;

  // A function template must have at least 2 parameters.
  if (FnDecl->getDescribedFunctionTemplate() && FnDecl->getNumParams() < 2)
    return SemaRef.Diag(FnDecl->getLocation(),
                      diag::err_operator_new_delete_template_too_few_parameters)
      << FnDecl->getDeclName();

  // The function decl must have at least 1 parameter.
  if (FnDecl->getNumParams() == 0)
    return SemaRef.Diag(FnDecl->getLocation(),
                        diag::err_operator_new_delete_too_few_parameters)
      << FnDecl->getDeclName();

  // Check the first parameter type is not dependent.
  QualType FirstParamType = FnDecl->getParamDecl(0)->getType();
  if (FirstParamType->isDependentType())
    return SemaRef.Diag(FnDecl->getLocation(), DependentParamTypeDiag)
      << FnDecl->getDeclName() << ExpectedFirstParamType;

  // Check that the first parameter type is what we expect.
  if (SemaRef.Context.getCanonicalType(FirstParamType).getUnqualifiedType() !=
      ExpectedFirstParamType)
    return SemaRef.Diag(FnDecl->getLocation(), InvalidParamTypeDiag)
      << FnDecl->getDeclName() << ExpectedFirstParamType;

  return false;
}

static bool
CheckOperatorNewDeclaration(Sema &SemaRef, const FunctionDecl *FnDecl) {
  // C++ [basic.stc.dynamic.allocation]p1:
  //   A program is ill-formed if an allocation function is declared in a
  //   namespace scope other than global scope or declared static in global
  //   scope.
  if (CheckOperatorNewDeleteDeclarationScope(SemaRef, FnDecl))
    return true;

  CanQualType SizeTy =
    SemaRef.Context.getCanonicalType(SemaRef.Context.getSizeType());

  // C++ [basic.stc.dynamic.allocation]p1:
  //   The return type shall be void*. The first parameter shall have type
  //   std::size_t.
  if (CheckOperatorNewDeleteTypes(SemaRef, FnDecl, SemaRef.Context.VoidPtrTy,
                                  SizeTy,
                                  diag::err_operator_new_dependent_param_type,
                                  diag::err_operator_new_param_type))
    return true;

  // C++ [basic.stc.dynamic.allocation]p1:
  //   The first parameter shall not have an associated default argument.
  if (FnDecl->getParamDecl(0)->hasDefaultArg())
    return SemaRef.Diag(FnDecl->getLocation(),
                        diag::err_operator_new_default_arg)
      << FnDecl->getDeclName() << FnDecl->getParamDecl(0)->getDefaultArgRange();

  return false;
}

static bool
CheckOperatorDeleteDeclaration(Sema &SemaRef, FunctionDecl *FnDecl) {
  // C++ [basic.stc.dynamic.deallocation]p1:
  //   A program is ill-formed if deallocation functions are declared in a
  //   namespace scope other than global scope or declared static in global
  //   scope.
  if (CheckOperatorNewDeleteDeclarationScope(SemaRef, FnDecl))
    return true;

  // C++ [basic.stc.dynamic.deallocation]p2:
  //   Each deallocation function shall return void and its first parameter
  //   shall be void*.
  if (CheckOperatorNewDeleteTypes(SemaRef, FnDecl, SemaRef.Context.VoidTy,
                                  SemaRef.Context.VoidPtrTy,
                                 diag::err_operator_delete_dependent_param_type,
                                 diag::err_operator_delete_param_type))
    return true;

  return false;
}

/// CheckOverloadedOperatorDeclaration - Check whether the declaration
/// of this overloaded operator is well-formed. If so, returns false;
/// otherwise, emits appropriate diagnostics and returns true.
bool Sema::CheckOverloadedOperatorDeclaration(FunctionDecl *FnDecl) {
  assert(FnDecl && FnDecl->isOverloadedOperator() &&
         "Expected an overloaded operator declaration");

  OverloadedOperatorKind Op = FnDecl->getOverloadedOperator();

  // C++ [over.oper]p5:
  //   The allocation and deallocation functions, operator new,
  //   operator new[], operator delete and operator delete[], are
  //   described completely in 3.7.3. The attributes and restrictions
  //   found in the rest of this subclause do not apply to them unless
  //   explicitly stated in 3.7.3.
  if (Op == OO_Delete || Op == OO_Array_Delete)
    return CheckOperatorDeleteDeclaration(*this, FnDecl);

  if (Op == OO_New || Op == OO_Array_New)
    return CheckOperatorNewDeclaration(*this, FnDecl);

  // C++ [over.oper]p6:
  //   An operator function shall either be a non-static member
  //   function or be a non-member function and have at least one
  //   parameter whose type is a class, a reference to a class, an
  //   enumeration, or a reference to an enumeration.
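  // For example, a non-member declaration like
  //   int operator+(int, int);
  // is rejected by the check below, since no parameter is of class,
  // reference-to-class, enumeration, or reference-to-enumeration type.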
  if (CXXMethodDecl *MethodDecl = dyn_cast<CXXMethodDecl>(FnDecl)) {
    if (MethodDecl->isStatic())
      return Diag(FnDecl->getLocation(), diag::err_operator_overload_static)
        << FnDecl->getDeclName();
  } else {
    bool ClassOrEnumParam = false;
    for (auto Param : FnDecl->parameters()) {
      QualType ParamType = Param->getType().getNonReferenceType();
      if (ParamType->isDependentType() || ParamType->isRecordType() ||
          ParamType->isEnumeralType()) {
        ClassOrEnumParam = true;
        break;
      }
    }

    if (!ClassOrEnumParam)
      return Diag(FnDecl->getLocation(),
                  diag::err_operator_overload_needs_class_or_enum)
        << FnDecl->getDeclName();
  }

  // C++ [over.oper]p8:
  //   An operator function cannot have default arguments (8.3.6),
  //   except where explicitly stated below.
  //
  // Only the function-call operator allows default arguments
  // (C++ [over.call]p1).
  if (Op != OO_Call) {
    for (auto Param : FnDecl->parameters()) {
      if (Param->hasDefaultArg())
        return Diag(Param->getLocation(),
                    diag::err_operator_overload_default_arg)
          << FnDecl->getDeclName() << Param->getDefaultArgRange();
    }
  }

  static const bool OperatorUses[NUM_OVERLOADED_OPERATORS][3] = {
    { false, false, false }
#define OVERLOADED_OPERATOR(Name,Spelling,Token,Unary,Binary,MemberOnly) \
    , { Unary, Binary, MemberOnly }
#include "clang/Basic/OperatorKinds.def"
  };

  bool CanBeUnaryOperator = OperatorUses[Op][0];
  bool CanBeBinaryOperator = OperatorUses[Op][1];
  bool MustBeMemberOperator = OperatorUses[Op][2];

  // C++ [over.oper]p8:
  //   [...] Operator functions cannot have more or fewer parameters
  //   than the number required for the corresponding operator, as
  //   described in the rest of this subclause.
  unsigned NumParams = FnDecl->getNumParams()
                     + (isa<CXXMethodDecl>(FnDecl)? 1 : 0);
  if (Op != OO_Call &&
      ((NumParams == 1 && !CanBeUnaryOperator) ||
       (NumParams == 2 && !CanBeBinaryOperator) ||
       (NumParams < 1) || (NumParams > 2))) {
    // We have the wrong number of parameters.
    unsigned ErrorKind;
    if (CanBeUnaryOperator && CanBeBinaryOperator) {
      ErrorKind = 2;  // 2 -> unary or binary.
    } else if (CanBeUnaryOperator) {
      ErrorKind = 0;  // 0 -> unary
    } else {
      assert(CanBeBinaryOperator &&
             "All non-call overloaded operators are unary or binary!");
      ErrorKind = 1;  // 1 -> binary
    }

    return Diag(FnDecl->getLocation(), diag::err_operator_overload_must_be)
      << FnDecl->getDeclName() << NumParams << ErrorKind;
  }

  // Overloaded operators other than operator() cannot be variadic.
  if (Op != OO_Call &&
      FnDecl->getType()->getAs<FunctionProtoType>()->isVariadic()) {
    return Diag(FnDecl->getLocation(), diag::err_operator_overload_variadic)
      << FnDecl->getDeclName();
  }

  // Some operators must be non-static member functions.
  if (MustBeMemberOperator && !isa<CXXMethodDecl>(FnDecl)) {
    return Diag(FnDecl->getLocation(),
                diag::err_operator_overload_must_be_member)
      << FnDecl->getDeclName();
  }

  // C++ [over.inc]p1:
  //   The user-defined function called operator++ implements the
  //   prefix and postfix ++ operator. If this function is a member
  //   function with no parameters, or a non-member function with one
  //   parameter of class or enumeration type, it defines the prefix
  //   increment operator ++ for objects of that type. If the function
  //   is a member function with one parameter (which shall be of type
  //   int) or a non-member function with two parameters (the second
  //   of which shall be of type int), it defines the postfix
  //   increment operator ++ for objects of that type.
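  // Illustrative sketch of the two forms described above, for a hypothetical
  // class X:
  //   struct X {
  //     X &operator++();    // prefix:  ++x
  //     X operator++(int);  // postfix: x++, with the dummy int parameter
  //   };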
  if ((Op == OO_PlusPlus || Op == OO_MinusMinus) && NumParams == 2) {
    ParmVarDecl *LastParam = FnDecl->getParamDecl(FnDecl->getNumParams() - 1);
    QualType ParamType = LastParam->getType();

    if (!ParamType->isSpecificBuiltinType(BuiltinType::Int) &&
        !ParamType->isDependentType())
      return Diag(LastParam->getLocation(),
                  diag::err_operator_overload_post_incdec_must_be_int)
        << LastParam->getType() << (Op == OO_MinusMinus);
  }

  return false;
}

static bool
checkLiteralOperatorTemplateParameterList(Sema &SemaRef,
                                          FunctionTemplateDecl *TpDecl) {
  TemplateParameterList *TemplateParams = TpDecl->getTemplateParameters();

  // Must have one or two template parameters.
  if (TemplateParams->size() == 1) {
    NonTypeTemplateParmDecl *PmDecl =
        dyn_cast<NonTypeTemplateParmDecl>(TemplateParams->getParam(0));

    // The template parameter must be a char parameter pack.
    if (PmDecl && PmDecl->isTemplateParameterPack() &&
        SemaRef.Context.hasSameType(PmDecl->getType(), SemaRef.Context.CharTy))
      return false;

  } else if (TemplateParams->size() == 2) {
    TemplateTypeParmDecl *PmType =
        dyn_cast<TemplateTypeParmDecl>(TemplateParams->getParam(0));
    NonTypeTemplateParmDecl *PmArgs =
        dyn_cast<NonTypeTemplateParmDecl>(TemplateParams->getParam(1));

    // The second template parameter must be a parameter pack with the
    // first template parameter as its type.
    if (PmType && PmArgs && !PmType->isTemplateParameterPack() &&
        PmArgs->isTemplateParameterPack()) {
      const TemplateTypeParmType *TArgs =
          PmArgs->getType()->getAs<TemplateTypeParmType>();
      if (TArgs && TArgs->getDepth() == PmType->getDepth() &&
          TArgs->getIndex() == PmType->getIndex()) {
        if (SemaRef.ActiveTemplateInstantiations.empty())
          SemaRef.Diag(TpDecl->getLocation(),
                       diag::ext_string_literal_operator_template);
        return false;
      }
    }
  }

  SemaRef.Diag(TpDecl->getTemplateParameters()->getSourceRange().getBegin(),
               diag::err_literal_operator_template)
      << TpDecl->getTemplateParameters()->getSourceRange();
  return true;
}

/// CheckLiteralOperatorDeclaration - Check whether the declaration
/// of this literal operator function is well-formed. If so, returns
/// false; otherwise, emits appropriate diagnostics and returns true.
bool Sema::CheckLiteralOperatorDeclaration(FunctionDecl *FnDecl) {
  if (isa<CXXMethodDecl>(FnDecl)) {
    Diag(FnDecl->getLocation(), diag::err_literal_operator_outside_namespace)
      << FnDecl->getDeclName();
    return true;
  }

  if (FnDecl->isExternC()) {
    Diag(FnDecl->getLocation(), diag::err_literal_operator_extern_c);
    if (const LinkageSpecDecl *LSD =
            FnDecl->getDeclContext()->getExternCContext())
      Diag(LSD->getExternLoc(), diag::note_extern_c_begins_here);
    return true;
  }

  // This might be the definition of a literal operator template.
  FunctionTemplateDecl *TpDecl = FnDecl->getDescribedFunctionTemplate();

  // This might be a specialization of a literal operator template.
  if (!TpDecl)
    TpDecl = FnDecl->getPrimaryTemplate();

  // template <char...> type operator "" name() and
  // template <class T, T...> type operator "" name() are the only valid
  // template signatures, and the only valid signatures with no parameters.
  if (TpDecl) {
    if (FnDecl->param_size() != 0) {
      Diag(FnDecl->getLocation(),
           diag::err_literal_operator_template_with_params);
      return true;
    }

    if (checkLiteralOperatorTemplateParameterList(*this, TpDecl))
      return true;

  } else if (FnDecl->param_size() == 1) {
    const ParmVarDecl *Param = FnDecl->getParamDecl(0);

    QualType ParamType = Param->getType().getUnqualifiedType();

    // Only unsigned long long int, long double, any character type, and const
    // char * are allowed as the only parameters.
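    // Illustrative sketch of the accepted one-parameter forms (MyType and
    // _suf are hypothetical names):
    //   MyType operator "" _suf(unsigned long long);  // integer literals
    //   MyType operator "" _suf(long double);         // floating literals
    //   MyType operator "" _suf(char);                // character literals
    //   MyType operator "" _suf(const char *);        // raw literal form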
    if (ParamType->isSpecificBuiltinType(BuiltinType::ULongLong) ||
        ParamType->isSpecificBuiltinType(BuiltinType::LongDouble) ||
        Context.hasSameType(ParamType, Context.CharTy) ||
        Context.hasSameType(ParamType, Context.WideCharTy) ||
        Context.hasSameType(ParamType, Context.Char16Ty) ||
        Context.hasSameType(ParamType, Context.Char32Ty)) {
    } else if (const PointerType *Ptr = ParamType->getAs<PointerType>()) {
      QualType InnerType = Ptr->getPointeeType();

      // Pointer parameter must be a const char *.
      if (!(Context.hasSameType(InnerType.getUnqualifiedType(),
                                Context.CharTy) &&
            InnerType.isConstQualified() && !InnerType.isVolatileQualified())) {
        Diag(Param->getSourceRange().getBegin(),
             diag::err_literal_operator_param)
            << ParamType << "'const char *'" << Param->getSourceRange();
        return true;
      }

    } else if (ParamType->isRealFloatingType()) {
      Diag(Param->getSourceRange().getBegin(), diag::err_literal_operator_param)
          << ParamType << Context.LongDoubleTy << Param->getSourceRange();
      return true;

    } else if (ParamType->isIntegerType()) {
      Diag(Param->getSourceRange().getBegin(), diag::err_literal_operator_param)
          << ParamType << Context.UnsignedLongLongTy << Param->getSourceRange();
      return true;

    } else {
      Diag(Param->getSourceRange().getBegin(),
           diag::err_literal_operator_invalid_param)
          << ParamType << Param->getSourceRange();
      return true;
    }

  } else if (FnDecl->param_size() == 2) {
    FunctionDecl::param_iterator Param = FnDecl->param_begin();

    // First, verify that the first parameter is correct.

    QualType FirstParamType = (*Param)->getType().getUnqualifiedType();

    // Two parameter function must have a pointer to const as a
    // first parameter; let's strip those qualifiers.
    const PointerType *PT = FirstParamType->getAs<PointerType>();

    if (!PT) {
      Diag((*Param)->getSourceRange().getBegin(),
           diag::err_literal_operator_param)
          << FirstParamType << "'const char *'" << (*Param)->getSourceRange();
      return true;
    }

    QualType PointeeType = PT->getPointeeType();
    // First parameter must be const
    if (!PointeeType.isConstQualified() || PointeeType.isVolatileQualified()) {
      Diag((*Param)->getSourceRange().getBegin(),
           diag::err_literal_operator_param)
          << FirstParamType << "'const char *'" << (*Param)->getSourceRange();
      return true;
    }

    QualType InnerType = PointeeType.getUnqualifiedType();
    // Only const char *, const wchar_t*, const char16_t*, and const char32_t*
    // are allowed as the first parameter to a two-parameter function
    if (!(Context.hasSameType(InnerType, Context.CharTy) ||
          Context.hasSameType(InnerType, Context.WideCharTy) ||
          Context.hasSameType(InnerType, Context.Char16Ty) ||
          Context.hasSameType(InnerType, Context.Char32Ty))) {
      Diag((*Param)->getSourceRange().getBegin(),
           diag::err_literal_operator_param)
          << FirstParamType << "'const char *'" << (*Param)->getSourceRange();
      return true;
    }

    // Move on to the second and final parameter.
    ++Param;

    // The second parameter must be a std::size_t.
    QualType SecondParamType = (*Param)->getType().getUnqualifiedType();
    if (!Context.hasSameType(SecondParamType, Context.getSizeType())) {
      Diag((*Param)->getSourceRange().getBegin(),
           diag::err_literal_operator_param)
          << SecondParamType << Context.getSizeType()
          << (*Param)->getSourceRange();
      return true;
    }
  } else {
    Diag(FnDecl->getLocation(), diag::err_literal_operator_bad_param_count);
    return true;
  }

  // Parameters are good.

  // A parameter-declaration-clause containing a default argument is not
  // equivalent to any of the permitted forms.
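  // For example, the loop below flags a declaration like
  //   MyType operator "" _suf(const char *, std::size_t = 0);
  // (hypothetical names) because the default argument makes it differ from
  // every permitted form, even though the parameter types themselves match.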
  for (auto Param : FnDecl->parameters()) {
    if (Param->hasDefaultArg()) {
      Diag(Param->getDefaultArgRange().getBegin(),
           diag::err_literal_operator_default_argument)
          << Param->getDefaultArgRange();
      break;
    }
  }

  StringRef LiteralName
    = FnDecl->getDeclName().getCXXLiteralIdentifier()->getName();
  if (LiteralName[0] != '_') {
    // C++11 [usrlit.suffix]p1:
    //   Literal suffix identifiers that do not start with an underscore
    //   are reserved for future standardization.
    Diag(FnDecl->getLocation(), diag::warn_user_literal_reserved)
        << StringLiteralParser::isValidUDSuffix(getLangOpts(), LiteralName);
  }

  return false;
}

/// ActOnStartLinkageSpecification - Parsed the beginning of a C++
/// linkage specification, including the language and (if present)
/// the '{'. ExternLoc is the location of the 'extern', Lang is the
/// language string literal. LBraceLoc, if valid, provides the location of
/// the '{' brace. Otherwise, this linkage specification does not
/// have any braces.
Decl *Sema::ActOnStartLinkageSpecification(Scope *S, SourceLocation ExternLoc,
                                           Expr *LangStr,
                                           SourceLocation LBraceLoc) {
  StringLiteral *Lit = cast<StringLiteral>(LangStr);
  if (!Lit->isAscii()) {
    Diag(LangStr->getExprLoc(), diag::err_language_linkage_spec_not_ascii)
      << LangStr->getSourceRange();
    return nullptr;
  }

  StringRef Lang = Lit->getString();
  LinkageSpecDecl::LanguageIDs Language;
  if (Lang == "C")
    Language = LinkageSpecDecl::lang_c;
  else if (Lang == "C++")
    Language = LinkageSpecDecl::lang_cxx;
  else {
    Diag(LangStr->getExprLoc(), diag::err_language_linkage_spec_unknown)
      << LangStr->getSourceRange();
    return nullptr;
  }

  // FIXME: Add all the various semantics of linkage specifications

  LinkageSpecDecl *D = LinkageSpecDecl::Create(Context, CurContext, ExternLoc,
                                               LangStr->getExprLoc(), Language,
                                               LBraceLoc.isValid());
  CurContext->addDecl(D);
  PushDeclContext(S, D);
  return D;
}

/// ActOnFinishLinkageSpecification - Complete the definition of
/// the C++ linkage specification LinkageSpec. If RBraceLoc is
/// valid, it's the position of the closing '}' brace in a linkage
/// specification that uses braces.
Decl *Sema::ActOnFinishLinkageSpecification(Scope *S,
                                            Decl *LinkageSpec,
                                            SourceLocation RBraceLoc) {
  if (RBraceLoc.isValid()) {
    LinkageSpecDecl* LSDecl = cast<LinkageSpecDecl>(LinkageSpec);
    LSDecl->setRBraceLoc(RBraceLoc);
  }
  PopDeclContext();
  return LinkageSpec;
}

Decl *Sema::ActOnEmptyDeclaration(Scope *S,
                                  AttributeList *AttrList,
                                  SourceLocation SemiLoc) {
  Decl *ED = EmptyDecl::Create(Context, CurContext, SemiLoc);
  // Attribute declarations appertain to empty declaration so we handle
  // them here.
  if (AttrList)
    ProcessDeclAttributeList(S, ED, AttrList);

  CurContext->addDecl(ED);
  return ED;
}

/// \brief Perform semantic analysis for the variable declaration that
/// occurs within a C++ catch clause, returning the newly-created
/// variable.
VarDecl *Sema::BuildExceptionDeclaration(Scope *S,
                                         TypeSourceInfo *TInfo,
                                         SourceLocation StartLoc,
                                         SourceLocation Loc,
                                         IdentifierInfo *Name) {
  bool Invalid = false;
  QualType ExDeclType = TInfo->getType();

  // Arrays and functions decay.
  if (ExDeclType->isArrayType())
    ExDeclType = Context.getArrayDecayedType(ExDeclType);
  else if (ExDeclType->isFunctionType())
    ExDeclType = Context.getPointerType(ExDeclType);

  // C++ 15.3p1: The exception-declaration shall not denote an incomplete type.
  // The exception-declaration shall not denote a pointer or reference to an
  // incomplete type, other than [cv] void*.
  // N2844 forbids rvalue references.
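  // For example, the check below rejects
  //   catch (int &&e) { /* ... */ }
  // since an exception-declaration may not have rvalue reference type.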
  if (!ExDeclType->isDependentType() && ExDeclType->isRValueReferenceType()) {
    Diag(Loc, diag::err_catch_rvalue_ref);
    Invalid = true;
  }

  if (ExDeclType->isVariablyModifiedType()) {
    Diag(Loc, diag::err_catch_variably_modified) << ExDeclType;
    Invalid = true;
  }

  QualType BaseType = ExDeclType;
  int Mode = 0; // 0 for direct type, 1 for pointer, 2 for reference
  unsigned DK = diag::err_catch_incomplete;
  if (const PointerType *Ptr = BaseType->getAs<PointerType>()) {
    BaseType = Ptr->getPointeeType();
    Mode = 1;
    DK = diag::err_catch_incomplete_ptr;
  } else if (const ReferenceType *Ref = BaseType->getAs<ReferenceType>()) {
    // For the purpose of error recovery, we treat rvalue refs like lvalue refs.
    BaseType = Ref->getPointeeType();
    Mode = 2;
    DK = diag::err_catch_incomplete_ref;
  }
  if (!Invalid && (Mode == 0 || !BaseType->isVoidType()) &&
      !BaseType->isDependentType() && RequireCompleteType(Loc, BaseType, DK))
    Invalid = true;

  if (!Invalid && !ExDeclType->isDependentType() &&
      RequireNonAbstractType(Loc, ExDeclType,
                             diag::err_abstract_type_in_decl,
                             AbstractVariableType))
    Invalid = true;

  // Only the non-fragile NeXT runtime currently supports C++ catches
  // of ObjC types, and no runtime supports catching ObjC types by value.
  if (!Invalid && getLangOpts().ObjC1) {
    QualType T = ExDeclType;
    if (const ReferenceType *RT = T->getAs<ReferenceType>())
      T = RT->getPointeeType();

    if (T->isObjCObjectType()) {
      Diag(Loc, diag::err_objc_object_catch);
      Invalid = true;
    } else if (T->isObjCObjectPointerType()) {
      // FIXME: should this be a test for macosx-fragile specifically?
      if (getLangOpts().ObjCRuntime.isFragile())
        Diag(Loc, diag::warn_objc_pointer_cxx_catch_fragile);
    }
  }

  VarDecl *ExDecl = VarDecl::Create(Context, CurContext, StartLoc, Loc, Name,
                                    ExDeclType, TInfo, SC_None);
  ExDecl->setExceptionVariable(true);

  // In ARC, infer 'retaining' for variables of retainable type.
  if (getLangOpts().ObjCAutoRefCount && inferObjCARCLifetime(ExDecl))
    Invalid = true;

  if (!Invalid && !ExDeclType->isDependentType()) {
    if (const RecordType *recordType = ExDeclType->getAs<RecordType>()) {
      // Insulate this from anything else we might currently be parsing.
      EnterExpressionEvaluationContext scope(*this, PotentiallyEvaluated);

      // C++ [except.handle]p16:
      //   The object declared in an exception-declaration or, if the
      //   exception-declaration does not specify a name, a temporary (12.2) is
      //   copy-initialized (8.5) from the exception object. [...]
      //   The object is destroyed when the handler exits, after the destruction
      //   of any automatic objects initialized within the handler.
      //
      // We just pretend to initialize the object with itself, then make sure
      // it can be destroyed later.
      QualType initType = Context.getExceptionObjectType(ExDeclType);

      InitializedEntity entity =
        InitializedEntity::InitializeVariable(ExDecl);
      InitializationKind initKind =
        InitializationKind::CreateCopy(Loc, SourceLocation());

      Expr *opaqueValue =
        new (Context) OpaqueValueExpr(Loc, initType, VK_LValue, OK_Ordinary);
      InitializationSequence sequence(*this, entity, initKind, opaqueValue);
      ExprResult result = sequence.Perform(*this, entity, initKind,
                                           opaqueValue);
      if (result.isInvalid())
        Invalid = true;
      else {
        // If the constructor used was non-trivial, set this as the
        // "initializer".
        CXXConstructExpr *construct = result.getAs<CXXConstructExpr>();
        if (!construct->getConstructor()->isTrivial()) {
          Expr *init = MaybeCreateExprWithCleanups(construct);
          ExDecl->setInit(init);
        }

        // And make sure it's destructible.
FinalizeVarWithDestructor(ExDecl, recordType); } } } if (Invalid) ExDecl->setInvalidDecl(); return ExDecl; } /// ActOnExceptionDeclarator - Parsed the exception-declarator in a C++ catch /// handler. Decl *Sema::ActOnExceptionDeclarator(Scope *S, Declarator &D) { TypeSourceInfo *TInfo = GetTypeForDeclarator(D, S); bool Invalid = D.isInvalidType(); // Check for unexpanded parameter packs. if (DiagnoseUnexpandedParameterPack(D.getIdentifierLoc(), TInfo, UPPC_ExceptionType)) { TInfo = Context.getTrivialTypeSourceInfo(Context.IntTy, D.getIdentifierLoc()); Invalid = true; } IdentifierInfo *II = D.getIdentifier(); if (NamedDecl *PrevDecl = LookupSingleName(S, II, D.getIdentifierLoc(), LookupOrdinaryName, ForRedeclaration)) { // The scope should be freshly made just for us. There is just no way // it contains any previous declaration, except for function parameters in // a function-try-block's catch statement. assert(!S->isDeclScope(PrevDecl)); if (isDeclInScope(PrevDecl, CurContext, S)) { Diag(D.getIdentifierLoc(), diag::err_redefinition) << D.getIdentifier(); Diag(PrevDecl->getLocation(), diag::note_previous_definition); Invalid = true; } else if (PrevDecl->isTemplateParameter()) // Maybe we will complain about the shadowed template parameter. DiagnoseTemplateParameterShadow(D.getIdentifierLoc(), PrevDecl); } if (D.getCXXScopeSpec().isSet() && !Invalid) { Diag(D.getIdentifierLoc(), diag::err_qualified_catch_declarator) << D.getCXXScopeSpec().getRange(); Invalid = true; } VarDecl *ExDecl = BuildExceptionDeclaration(S, TInfo, D.getLocStart(), D.getIdentifierLoc(), D.getIdentifier()); if (Invalid) ExDecl->setInvalidDecl(); // Add the exception declaration into this scope. if (II) PushOnScopeChains(ExDecl, S); else CurContext->addDecl(ExDecl); ProcessDeclAttributes(S, ExDecl, D); return ExDecl; } Decl *Sema::ActOnStaticAssertDeclaration(SourceLocation StaticAssertLoc, Expr *AssertExpr, Expr *AssertMessageExpr, SourceLocation RParenLoc) { StringLiteral *AssertMessage = AssertMessageExpr ? cast(AssertMessageExpr) : nullptr; if (DiagnoseUnexpandedParameterPack(AssertExpr, UPPC_StaticAssertExpression)) return nullptr; return BuildStaticAssertDeclaration(StaticAssertLoc, AssertExpr, AssertMessage, RParenLoc, false); } Decl *Sema::BuildStaticAssertDeclaration(SourceLocation StaticAssertLoc, Expr *AssertExpr, StringLiteral *AssertMessage, SourceLocation RParenLoc, bool Failed) { assert(AssertExpr != nullptr && "Expected non-null condition"); if (!AssertExpr->isTypeDependent() && !AssertExpr->isValueDependent() && !Failed) { // In a static_assert-declaration, the constant-expression shall be a // constant expression that can be contextually converted to bool. ExprResult Converted = PerformContextuallyConvertToBool(AssertExpr); if (Converted.isInvalid()) Failed = true; llvm::APSInt Cond; if (!Failed && VerifyIntegerConstantExpression(Converted.get(), &Cond, diag::err_static_assert_expression_is_not_constant, /*AllowFold=*/false).isInvalid()) Failed = true; if (!Failed && !Cond) { SmallString<256> MsgBuffer; llvm::raw_svector_ostream Msg(MsgBuffer); if (AssertMessage) AssertMessage->printPretty(Msg, nullptr, getPrintingPolicy()); Diag(StaticAssertLoc, diag::err_static_assert_failed) << !AssertMessage << Msg.str() << AssertExpr->getSourceRange(); Failed = true; } } Decl *Decl = StaticAssertDecl::Create(Context, CurContext, StaticAssertLoc, AssertExpr, AssertMessage, RParenLoc, Failed); CurContext->addDecl(Decl); return Decl; } /// \brief Perform semantic analysis of the given friend type declaration. 
/// /// \returns A friend declaration that. FriendDecl *Sema::CheckFriendTypeDecl(SourceLocation LocStart, SourceLocation FriendLoc, TypeSourceInfo *TSInfo) { assert(TSInfo && "NULL TypeSourceInfo for friend type declaration"); QualType T = TSInfo->getType(); SourceRange TypeRange = TSInfo->getTypeLoc().getLocalSourceRange(); // C++03 [class.friend]p2: // An elaborated-type-specifier shall be used in a friend declaration // for a class.* // // * The class-key of the elaborated-type-specifier is required. if (!ActiveTemplateInstantiations.empty()) { // Do not complain about the form of friend template types during // template instantiation; we will already have complained when the // template was declared. } else { if (!T->isElaboratedTypeSpecifier()) { // If we evaluated the type to a record type, suggest putting // a tag in front. if (const RecordType *RT = T->getAs()) { RecordDecl *RD = RT->getDecl(); SmallString<16> InsertionText(" "); InsertionText += RD->getKindName(); Diag(TypeRange.getBegin(), getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_unelaborated_friend_type : diag::ext_unelaborated_friend_type) << (unsigned) RD->getTagKind() << T << FixItHint::CreateInsertion(getLocForEndOfToken(FriendLoc), InsertionText); } else { Diag(FriendLoc, getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_nonclass_type_friend : diag::ext_nonclass_type_friend) << T << TypeRange; } } else if (T->getAs()) { Diag(FriendLoc, getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_enum_friend : diag::ext_enum_friend) << T << TypeRange; } // C++11 [class.friend]p3: // A friend declaration that does not declare a function shall have one // of the following forms: // friend elaborated-type-specifier ; // friend simple-type-specifier ; // friend typename-specifier ; if (getLangOpts().CPlusPlus11 && LocStart != FriendLoc) Diag(FriendLoc, diag::err_friend_not_first_in_declaration) << T; } // If the type specifier in a friend declaration designates a (possibly // cv-qualified) class type, that class is declared as a friend; otherwise, // the friend declaration is ignored. return FriendDecl::Create(Context, CurContext, TSInfo->getTypeLoc().getLocStart(), TSInfo, FriendLoc); } /// Handle a friend tag declaration where the scope specifier was /// templated. Decl *Sema::ActOnTemplatedFriendTag(Scope *S, SourceLocation FriendLoc, unsigned TagSpec, SourceLocation TagLoc, CXXScopeSpec &SS, IdentifierInfo *Name, SourceLocation NameLoc, AttributeList *Attr, MultiTemplateParamsArg TempParamLists) { TagTypeKind Kind = TypeWithKeyword::getTagTypeKindForTypeSpec(TagSpec); bool isExplicitSpecialization = false; bool Invalid = false; if (TemplateParameterList *TemplateParams = MatchTemplateParametersToScopeSpecifier( TagLoc, NameLoc, SS, nullptr, TempParamLists, /*friend*/ true, isExplicitSpecialization, Invalid)) { if (TemplateParams->size() > 0) { // This is a declaration of a class template. if (Invalid) return nullptr; return CheckClassTemplate(S, TagSpec, TUK_Friend, TagLoc, SS, Name, NameLoc, Attr, TemplateParams, AS_public, /*ModulePrivateLoc=*/SourceLocation(), FriendLoc, TempParamLists.size() - 1, TempParamLists.data()).get(); } else { // The "template<>" header is extraneous. 
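      // For illustration (hedged sketch; 'A' is a hypothetical class): this
      // fires on declarations like
      //   template <> friend class A;
      // where the 'template<>' header serves no purpose.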
Diag(TemplateParams->getTemplateLoc(), diag::err_template_tag_noparams) << TypeWithKeyword::getTagTypeKindName(Kind) << Name; isExplicitSpecialization = true; } } if (Invalid) return nullptr; bool isAllExplicitSpecializations = true; for (unsigned I = TempParamLists.size(); I-- > 0; ) { if (TempParamLists[I]->size()) { isAllExplicitSpecializations = false; break; } } // FIXME: don't ignore attributes. // If it's explicit specializations all the way down, just forget // about the template header and build an appropriate non-templated // friend. TODO: for source fidelity, remember the headers. if (isAllExplicitSpecializations) { if (SS.isEmpty()) { bool Owned = false; bool IsDependent = false; return ActOnTag(S, TagSpec, TUK_Friend, TagLoc, SS, Name, NameLoc, Attr, AS_public, /*ModulePrivateLoc=*/SourceLocation(), MultiTemplateParamsArg(), Owned, IsDependent, /*ScopedEnumKWLoc=*/SourceLocation(), /*ScopedEnumUsesClassTag=*/false, /*UnderlyingType=*/TypeResult(), /*IsTypeSpecifier=*/false); } NestedNameSpecifierLoc QualifierLoc = SS.getWithLocInContext(Context); ElaboratedTypeKeyword Keyword = TypeWithKeyword::getKeywordForTagTypeKind(Kind); QualType T = CheckTypenameType(Keyword, TagLoc, QualifierLoc, *Name, NameLoc); if (T.isNull()) return nullptr; TypeSourceInfo *TSI = Context.CreateTypeSourceInfo(T); if (isa(T)) { DependentNameTypeLoc TL = TSI->getTypeLoc().castAs(); TL.setElaboratedKeywordLoc(TagLoc); TL.setQualifierLoc(QualifierLoc); TL.setNameLoc(NameLoc); } else { ElaboratedTypeLoc TL = TSI->getTypeLoc().castAs(); TL.setElaboratedKeywordLoc(TagLoc); TL.setQualifierLoc(QualifierLoc); TL.getNamedTypeLoc().castAs().setNameLoc(NameLoc); } FriendDecl *Friend = FriendDecl::Create(Context, CurContext, NameLoc, TSI, FriendLoc, TempParamLists); Friend->setAccess(AS_public); CurContext->addDecl(Friend); return Friend; } assert(SS.isNotEmpty() && "valid templated tag with no SS and no direct?"); // Handle the case of a templated-scope friend class. e.g. // template class A::B; // FIXME: we don't support these right now. Diag(NameLoc, diag::warn_template_qualified_friend_unsupported) << SS.getScopeRep() << SS.getRange() << cast(CurContext); ElaboratedTypeKeyword ETK = TypeWithKeyword::getKeywordForTagTypeKind(Kind); QualType T = Context.getDependentNameType(ETK, SS.getScopeRep(), Name); TypeSourceInfo *TSI = Context.CreateTypeSourceInfo(T); DependentNameTypeLoc TL = TSI->getTypeLoc().castAs(); TL.setElaboratedKeywordLoc(TagLoc); TL.setQualifierLoc(SS.getWithLocInContext(Context)); TL.setNameLoc(NameLoc); FriendDecl *Friend = FriendDecl::Create(Context, CurContext, NameLoc, TSI, FriendLoc, TempParamLists); Friend->setAccess(AS_public); Friend->setUnsupportedFriend(true); CurContext->addDecl(Friend); return Friend; } /// Handle a friend type declaration. This works in tandem with /// ActOnTag. /// /// Notes on friend class templates: /// /// We generally treat friend class declarations as if they were /// declaring a class. So, for example, the elaborated type specifier /// in a friend declaration is required to obey the restrictions of a /// class-head (i.e. no typedefs in the scope chain), template /// parameters are required to match up with simple template-ids, &c. /// However, unlike when declaring a template specialization, it's /// okay to refer to a template specialization without an empty /// template parameter declaration, e.g. /// friend class A::B; /// We permit this as a special case; if there are any template /// parameters present at all, require proper matching, i.e. 
///   template <> template \<class T> friend class A<int>::B;
Decl *Sema::ActOnFriendTypeDecl(Scope *S, const DeclSpec &DS,
                                MultiTemplateParamsArg TempParams) {
  SourceLocation Loc = DS.getLocStart();

  assert(DS.isFriendSpecified());
  assert(DS.getStorageClassSpec() == DeclSpec::SCS_unspecified);

  // Try to convert the decl specifier to a type.  This works for
  // friend templates because ActOnTag never produces a ClassTemplateDecl
  // for a TUK_Friend.
  Declarator TheDeclarator(DS, Declarator::MemberContext);
  TypeSourceInfo *TSI = GetTypeForDeclarator(TheDeclarator, S);
  QualType T = TSI->getType();
  if (TheDeclarator.isInvalidType())
    return nullptr;

  if (DiagnoseUnexpandedParameterPack(Loc, TSI, UPPC_FriendDeclaration))
    return nullptr;

  // This is definitely an error in C++98.  It's probably meant to
  // be forbidden in C++0x, too, but the specification is just
  // poorly written.
  //
  // The problem is with declarations like the following:
  //   template <T> friend A<T>::foo;
  // where deciding whether a class C is a friend or not now hinges
  // on whether there exists an instantiation of A that causes
  // 'foo' to equal C.  There are restrictions on class-heads
  // (which we declare (by fiat) elaborated friend declarations to
  // be) that makes this tractable.
  //
  // FIXME: handle "template <> friend class A<T>;", which
  // is possibly well-formed?  Who even knows?
  if (TempParams.size() && !T->isElaboratedTypeSpecifier()) {
    Diag(Loc, diag::err_tagless_friend_type_template)
      << DS.getSourceRange();
    return nullptr;
  }

  // C++98 [class.friend]p1: A friend of a class is a function
  //   or class that is not a member of the class . . .
  // This is fixed in DR77, which just barely didn't make the C++03
  // deadline.  It's also a very silly restriction that seriously
  // affects inner classes and which nobody else seems to implement;
  // thus we never diagnose it, not even in -pedantic.
  //
  // But note that we could warn about it: it's always useless to
  // friend one of your own members (it's not, however, worthless to
  // friend a member of an arbitrary specialization of your template).

  Decl *D;
  if (!TempParams.empty())
    D = FriendTemplateDecl::Create(Context, CurContext, Loc,
                                   TempParams, TSI,
                                   DS.getFriendSpecLoc());
  else
    D = CheckFriendTypeDecl(Loc, DS.getFriendSpecLoc(), TSI);

  if (!D)
    return nullptr;

  D->setAccess(AS_public);
  CurContext->addDecl(D);

  return D;
}

NamedDecl *Sema::ActOnFriendFunctionDecl(Scope *S, Declarator &D,
                                         MultiTemplateParamsArg TemplateParams) {
  const DeclSpec &DS = D.getDeclSpec();

  assert(DS.isFriendSpecified());
  assert(DS.getStorageClassSpec() == DeclSpec::SCS_unspecified);

  SourceLocation Loc = D.getIdentifierLoc();
  TypeSourceInfo *TInfo = GetTypeForDeclarator(D, S);

  // C++ [class.friend]p1
  //   A friend of a class is a function or class....
  // Note that this sees through typedefs, which is intended.
  // It *doesn't* see through dependent types, which is correct
  //   according to [temp.arg.type]p3:
  //   If a declaration acquires a function type through a
  //   type dependent on a template-parameter and this causes
  //   a declaration that does not use the syntactic form of a
  //   function declarator to have a function type, the program
  //   is ill-formed.
  if (!TInfo->getType()->isFunctionType()) {
    Diag(Loc, diag::err_unexpected_friend);

    // It might be worthwhile to try to recover by creating an
    // appropriate declaration.
    return nullptr;
  }

  // C++ [namespace.memdef]p3
  //  - If a friend declaration in a non-local class first declares a
  //    class or function, the friend class or function is a member
  //    of the innermost enclosing namespace.
// - The name of the friend is not found by simple name lookup // until a matching declaration is provided in that namespace // scope (either before or after the class declaration granting // friendship). // - If a friend function is called, its name may be found by the // name lookup that considers functions from namespaces and // classes associated with the types of the function arguments. // - When looking for a prior declaration of a class or a function // declared as a friend, scopes outside the innermost enclosing // namespace scope are not considered. CXXScopeSpec &SS = D.getCXXScopeSpec(); DeclarationNameInfo NameInfo = GetNameForDeclarator(D); DeclarationName Name = NameInfo.getName(); assert(Name); // Check for unexpanded parameter packs. if (DiagnoseUnexpandedParameterPack(Loc, TInfo, UPPC_FriendDeclaration) || DiagnoseUnexpandedParameterPack(NameInfo, UPPC_FriendDeclaration) || DiagnoseUnexpandedParameterPack(SS, UPPC_FriendDeclaration)) return nullptr; // The context we found the declaration in, or in which we should // create the declaration. DeclContext *DC; Scope *DCScope = S; LookupResult Previous(*this, NameInfo, LookupOrdinaryName, ForRedeclaration); // There are five cases here. // - There's no scope specifier and we're in a local class. Only look // for functions declared in the immediately-enclosing block scope. // We recover from invalid scope qualifiers as if they just weren't there. FunctionDecl *FunctionContainingLocalClass = nullptr; if ((SS.isInvalid() || !SS.isSet()) && (FunctionContainingLocalClass = cast(CurContext)->isLocalClass())) { // C++11 [class.friend]p11: // If a friend declaration appears in a local class and the name // specified is an unqualified name, a prior declaration is // looked up without considering scopes that are outside the // innermost enclosing non-class scope. For a friend function // declaration, if there is no prior declaration, the program is // ill-formed. // Find the innermost enclosing non-class scope. This is the block // scope containing the local class definition (or for a nested class, // the outer local class). DCScope = S->getFnParent(); // Look up the function name in the scope. Previous.clear(LookupLocalFriendName); LookupName(Previous, S, /*AllowBuiltinCreation*/false); if (!Previous.empty()) { // All possible previous declarations must have the same context: // either they were declared at block scope or they are members of // one of the enclosing local classes. DC = Previous.getRepresentativeDecl()->getDeclContext(); } else { // This is ill-formed, but provide the context that we would have // declared the function in, if we were permitted to, for error recovery. DC = FunctionContainingLocalClass; } adjustContextForLocalExternDecl(DC); // C++ [class.friend]p6: // A function can be defined in a friend declaration of a class if and // only if the class is a non-local class (9.8), the function name is // unqualified, and the function has namespace scope. if (D.isFunctionDefinition()) { Diag(NameInfo.getBeginLoc(), diag::err_friend_def_in_local_class); } // - There's no scope specifier, in which case we just go to the // appropriate scope and look for a function or function template // there as appropriate. 
} else if (SS.isInvalid() || !SS.isSet()) { // C++11 [namespace.memdef]p3: // If the name in a friend declaration is neither qualified nor // a template-id and the declaration is a function or an // elaborated-type-specifier, the lookup to determine whether // the entity has been previously declared shall not consider // any scopes outside the innermost enclosing namespace. bool isTemplateId = D.getName().getKind() == UnqualifiedId::IK_TemplateId; // Find the appropriate context according to the above. DC = CurContext; // Skip class contexts. If someone can cite chapter and verse // for this behavior, that would be nice --- it's what GCC and // EDG do, and it seems like a reasonable intent, but the spec // really only says that checks for unqualified existing // declarations should stop at the nearest enclosing namespace, // not that they should only consider the nearest enclosing // namespace. while (DC->isRecord()) DC = DC->getParent(); DeclContext *LookupDC = DC; while (LookupDC->isTransparentContext()) LookupDC = LookupDC->getParent(); while (true) { LookupQualifiedName(Previous, LookupDC); if (!Previous.empty()) { DC = LookupDC; break; } if (isTemplateId) { if (isa(LookupDC)) break; } else { if (LookupDC->isFileContext()) break; } LookupDC = LookupDC->getParent(); } DCScope = getScopeForDeclContext(S, DC); // - There's a non-dependent scope specifier, in which case we // compute it and do a previous lookup there for a function // or function template. } else if (!SS.getScopeRep()->isDependent()) { DC = computeDeclContext(SS); if (!DC) return nullptr; if (RequireCompleteDeclContext(SS, DC)) return nullptr; LookupQualifiedName(Previous, DC); // Ignore things found implicitly in the wrong scope. // TODO: better diagnostics for this case. Suggesting the right // qualified scope would be nice... LookupResult::Filter F = Previous.makeFilter(); while (F.hasNext()) { NamedDecl *D = F.next(); if (!DC->InEnclosingNamespaceSetOf( D->getDeclContext()->getRedeclContext())) F.erase(); } F.done(); if (Previous.empty()) { D.setInvalidType(); Diag(Loc, diag::err_qualified_friend_not_found) << Name << TInfo->getType(); return nullptr; } // C++ [class.friend]p1: A friend of a class is a function or // class that is not a member of the class . . . if (DC->Equals(CurContext)) Diag(DS.getFriendSpecLoc(), getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_friend_is_member : diag::err_friend_is_member); if (D.isFunctionDefinition()) { // C++ [class.friend]p6: // A function can be defined in a friend declaration of a class if and // only if the class is a non-local class (9.8), the function name is // unqualified, and the function has namespace scope. SemaDiagnosticBuilder DB = Diag(SS.getRange().getBegin(), diag::err_qualified_friend_def); DB << SS.getScopeRep(); if (DC->isFileContext()) DB << FixItHint::CreateRemoval(SS.getRange()); SS.clear(); } // - There's a scope specifier that does not match any template // parameter lists, in which case we use some arbitrary context, // create a method or method template, and wait for instantiation. // - There's a scope specifier that does match some template // parameter lists, which we don't handle right now. } else { if (D.isFunctionDefinition()) { // C++ [class.friend]p6: // A function can be defined in a friend declaration of a class if and // only if the class is a non-local class (9.8), the function name is // unqualified, and the function has namespace scope. 
Diag(SS.getRange().getBegin(), diag::err_qualified_friend_def) << SS.getScopeRep(); } DC = CurContext; assert(isa(DC) && "friend declaration not in class?"); } if (!DC->isRecord()) { int DiagArg = -1; switch (D.getName().getKind()) { case UnqualifiedId::IK_ConstructorTemplateId: case UnqualifiedId::IK_ConstructorName: DiagArg = 0; break; case UnqualifiedId::IK_DestructorName: DiagArg = 1; break; case UnqualifiedId::IK_ConversionFunctionId: DiagArg = 2; break; case UnqualifiedId::IK_Identifier: case UnqualifiedId::IK_ImplicitSelfParam: case UnqualifiedId::IK_LiteralOperatorId: case UnqualifiedId::IK_OperatorFunctionId: case UnqualifiedId::IK_TemplateId: break; } // This implies that it has to be an operator or function. if (DiagArg >= 0) { Diag(Loc, diag::err_introducing_special_friend) << DiagArg; return nullptr; } } // FIXME: This is an egregious hack to cope with cases where the scope stack // does not contain the declaration context, i.e., in an out-of-line // definition of a class. Scope FakeDCScope(S, Scope::DeclScope, Diags); if (!DCScope) { FakeDCScope.setEntity(DC); DCScope = &FakeDCScope; } bool AddToScope = true; NamedDecl *ND = ActOnFunctionDeclarator(DCScope, D, DC, TInfo, Previous, TemplateParams, AddToScope); if (!ND) return nullptr; assert(ND->getLexicalDeclContext() == CurContext); // If we performed typo correction, we might have added a scope specifier // and changed the decl context. DC = ND->getDeclContext(); // Add the function declaration to the appropriate lookup tables, // adjusting the redeclarations list as necessary. We don't // want to do this yet if the friending class is dependent. // // Also update the scope-based lookup if the target context's // lookup context is in lexical scope. if (!CurContext->isDependentContext()) { DC = DC->getRedeclContext(); DC->makeDeclVisibleInContext(ND); if (Scope *EnclosingScope = getScopeForDeclContext(S, DC)) PushOnScopeChains(ND, EnclosingScope, /*AddToContext=*/ false); } FriendDecl *FrD = FriendDecl::Create(Context, CurContext, D.getIdentifierLoc(), ND, DS.getFriendSpecLoc()); FrD->setAccess(AS_public); CurContext->addDecl(FrD); if (ND->isInvalidDecl()) { FrD->setInvalidDecl(); } else { if (DC->isRecord()) CheckFriendAccess(ND); FunctionDecl *FD; if (FunctionTemplateDecl *FTD = dyn_cast(ND)) FD = FTD->getTemplatedDecl(); else FD = cast(ND); // C++11 [dcl.fct.default]p4: If a friend declaration specifies a // default argument expression, that declaration shall be a definition // and shall be the only declaration of the function or function // template in the translation unit. if (functionDeclHasDefaultArgument(FD)) { // We can't look at FD->getPreviousDecl() because it may not have been set // if we're in a dependent context. If the function is known to be a // redeclaration, we will have narrowed Previous down to the right decl. if (D.isRedeclaration()) { Diag(FD->getLocation(), diag::err_friend_decl_with_def_arg_redeclared); Diag(Previous.getRepresentativeDecl()->getLocation(), diag::note_previous_declaration); } else if (!D.isFunctionDefinition()) Diag(FD->getLocation(), diag::err_friend_decl_with_def_arg_must_be_def); } // Mark templated-scope function declarations as unsupported. 
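    // For illustration (hedged sketch; 'A', 'T', and 'f' are hypothetical):
    //   class C { template <class T> friend void A<T>::f(); };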
if (FD->getNumTemplateParameterLists() && SS.isValid()) { Diag(FD->getLocation(), diag::warn_template_qualified_friend_unsupported) << SS.getScopeRep() << SS.getRange() << cast(CurContext); FrD->setUnsupportedFriend(true); } } return ND; } void Sema::SetDeclDeleted(Decl *Dcl, SourceLocation DelLoc) { AdjustDeclIfTemplate(Dcl); FunctionDecl *Fn = dyn_cast_or_null(Dcl); if (!Fn) { Diag(DelLoc, diag::err_deleted_non_function); return; } if (const FunctionDecl *Prev = Fn->getPreviousDecl()) { // Don't consider the implicit declaration we generate for explicit // specializations. FIXME: Do not generate these implicit declarations. if ((Prev->getTemplateSpecializationKind() != TSK_ExplicitSpecialization || Prev->getPreviousDecl()) && !Prev->isDefined()) { Diag(DelLoc, diag::err_deleted_decl_not_first); Diag(Prev->getLocation().isInvalid() ? DelLoc : Prev->getLocation(), Prev->isImplicit() ? diag::note_previous_implicit_declaration : diag::note_previous_declaration); } // If the declaration wasn't the first, we delete the function anyway for // recovery. Fn = Fn->getCanonicalDecl(); } // dllimport/dllexport cannot be deleted. if (const InheritableAttr *DLLAttr = getDLLAttr(Fn)) { Diag(Fn->getLocation(), diag::err_attribute_dll_deleted) << DLLAttr; Fn->setInvalidDecl(); } if (Fn->isDeleted()) return; // See if we're deleting a function which is already known to override a // non-deleted virtual function. if (CXXMethodDecl *MD = dyn_cast(Fn)) { bool IssuedDiagnostic = false; for (CXXMethodDecl::method_iterator I = MD->begin_overridden_methods(), E = MD->end_overridden_methods(); I != E; ++I) { if (!(*MD->begin_overridden_methods())->isDeleted()) { if (!IssuedDiagnostic) { Diag(DelLoc, diag::err_deleted_override) << MD->getDeclName(); IssuedDiagnostic = true; } Diag((*I)->getLocation(), diag::note_overridden_virtual_function); } } // If this function was implicitly deleted because it was defaulted, // explain why it was deleted. if (IssuedDiagnostic && MD->isDefaulted()) ShouldDeleteSpecialMember(MD, getSpecialMember(MD), nullptr, /*Diagnose*/true); } // C++11 [basic.start.main]p3: // A program that defines main as deleted [...] is ill-formed. if (Fn->isMain()) Diag(DelLoc, diag::err_deleted_main); // C++11 [dcl.fct.def.delete]p4: // A deleted function is implicitly inline. Fn->setImplicitlyInline(); Fn->setDeletedAsWritten(); } void Sema::SetDeclDefaulted(Decl *Dcl, SourceLocation DefaultLoc) { CXXMethodDecl *MD = dyn_cast_or_null(Dcl); if (MD) { if (MD->getParent()->isDependentType()) { MD->setDefaulted(); MD->setExplicitlyDefaulted(); return; } CXXSpecialMember Member = getSpecialMember(MD); if (Member == CXXInvalid) { if (!MD->isInvalidDecl()) Diag(DefaultLoc, diag::err_default_special_members); return; } MD->setDefaulted(); MD->setExplicitlyDefaulted(); // If this definition appears within the record, do the checking when // the record is complete. const FunctionDecl *Primary = MD; if (const FunctionDecl *Pattern = MD->getTemplateInstantiationPattern()) // Ask the template instantiation pattern that actually had the // '= default' on it. Primary = Pattern; // If the method was defaulted on its first declaration, we will have // already performed the checking in CheckCompletedCXXClass. Such a // declaration doesn't trigger an implicit definition. 
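    // For illustration (hedged sketch): contrast
    //   struct A { A() = default; };   // checked when A is completed
    //   struct B { B(); };
    //   B::B() = default;              // checked and defined here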
if (Primary->getCanonicalDecl()->isDefaulted()) return; CheckExplicitlyDefaultedSpecialMember(MD); if (!MD->isInvalidDecl()) DefineImplicitSpecialMember(*this, MD, DefaultLoc); } else { Diag(DefaultLoc, diag::err_default_special_members); } } static void SearchForReturnInStmt(Sema &Self, Stmt *S) { for (Stmt *SubStmt : S->children()) { if (!SubStmt) continue; if (isa(SubStmt)) Self.Diag(SubStmt->getLocStart(), diag::err_return_in_constructor_handler); if (!isa(SubStmt)) SearchForReturnInStmt(Self, SubStmt); } } void Sema::DiagnoseReturnInConstructorExceptionHandler(CXXTryStmt *TryBlock) { for (unsigned I = 0, E = TryBlock->getNumHandlers(); I != E; ++I) { CXXCatchStmt *Handler = TryBlock->getHandler(I); SearchForReturnInStmt(*this, Handler); } } bool Sema::CheckOverridingFunctionAttributes(const CXXMethodDecl *New, const CXXMethodDecl *Old) { const FunctionType *NewFT = New->getType()->getAs(); const FunctionType *OldFT = Old->getType()->getAs(); CallingConv NewCC = NewFT->getCallConv(), OldCC = OldFT->getCallConv(); // If the calling conventions match, everything is fine if (NewCC == OldCC) return false; // If the calling conventions mismatch because the new function is static, // suppress the calling convention mismatch error; the error about static // function override (err_static_overrides_virtual from // Sema::CheckFunctionDeclaration) is more clear. if (New->getStorageClass() == SC_Static) return false; Diag(New->getLocation(), diag::err_conflicting_overriding_cc_attributes) << New->getDeclName() << New->getType() << Old->getType(); Diag(Old->getLocation(), diag::note_overridden_virtual_function); return true; } bool Sema::CheckOverridingFunctionReturnType(const CXXMethodDecl *New, const CXXMethodDecl *Old) { QualType NewTy = New->getType()->getAs()->getReturnType(); QualType OldTy = Old->getType()->getAs()->getReturnType(); if (Context.hasSameType(NewTy, OldTy) || NewTy->isDependentType() || OldTy->isDependentType()) return false; // Check if the return types are covariant QualType NewClassTy, OldClassTy; /// Both types must be pointers or references to classes. if (const PointerType *NewPT = NewTy->getAs()) { if (const PointerType *OldPT = OldTy->getAs()) { NewClassTy = NewPT->getPointeeType(); OldClassTy = OldPT->getPointeeType(); } } else if (const ReferenceType *NewRT = NewTy->getAs()) { if (const ReferenceType *OldRT = OldTy->getAs()) { if (NewRT->getTypeClass() == OldRT->getTypeClass()) { NewClassTy = NewRT->getPointeeType(); OldClassTy = OldRT->getPointeeType(); } } } // The return types aren't either both pointers or references to a class type. if (NewClassTy.isNull()) { Diag(New->getLocation(), diag::err_different_return_type_for_overriding_virtual_function) << New->getDeclName() << NewTy << OldTy << New->getReturnTypeSourceRange(); Diag(Old->getLocation(), diag::note_overridden_virtual_function) << Old->getReturnTypeSourceRange(); return true; } if (!Context.hasSameUnqualifiedType(NewClassTy, OldClassTy)) { // C++14 [class.virtual]p8: // If the class type in the covariant return type of D::f differs from // that of B::f, the class type in the return type of D::f shall be // complete at the point of declaration of D::f or shall be the class // type D. if (const RecordType *RT = NewClassTy->getAs()) { if (!RT->isBeingDefined() && RequireCompleteType(New->getLocation(), NewClassTy, diag::err_covariant_return_incomplete, New->getDeclName())) return true; } // Check if the new class derives from the old class. 
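    // For illustration (hedged sketch): a covariant override that must pass
    // the derivation check below:
    //   struct B { virtual B *clone(); };
    //   struct D : B { D *clone() override; };  // OK, D derives from B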
    if (!IsDerivedFrom(New->getLocation(), NewClassTy, OldClassTy)) {
      Diag(New->getLocation(),
           diag::err_covariant_return_not_derived)
          << New->getDeclName() << NewTy << OldTy
          << New->getReturnTypeSourceRange();
      Diag(Old->getLocation(), diag::note_overridden_virtual_function)
          << Old->getReturnTypeSourceRange();
      return true;
    }

    // Check if the conversion from derived to base is valid.
    if (CheckDerivedToBaseConversion(
            NewClassTy, OldClassTy,
            diag::err_covariant_return_inaccessible_base,
            diag::err_covariant_return_ambiguous_derived_to_base_conv,
            New->getLocation(), New->getReturnTypeSourceRange(),
            New->getDeclName(), nullptr)) {
      // FIXME: this note won't trigger for delayed access control
      // diagnostics, and it's impossible to get an undelayed error
      // here from access control during the original parse because
      // the ParsingDeclSpec/ParsingDeclarator are still in scope.
      Diag(Old->getLocation(), diag::note_overridden_virtual_function)
          << Old->getReturnTypeSourceRange();
      return true;
    }
  }

  // The qualifiers of the return types must be the same.
  if (NewTy.getLocalCVRQualifiers() != OldTy.getLocalCVRQualifiers()) {
    Diag(New->getLocation(),
         diag::err_covariant_return_type_different_qualifications)
        << New->getDeclName() << NewTy << OldTy
        << New->getReturnTypeSourceRange();
    Diag(Old->getLocation(), diag::note_overridden_virtual_function)
        << Old->getReturnTypeSourceRange();
    return true;
  }

  // The new class type must have the same or less qualifiers as the old type.
  if (NewClassTy.isMoreQualifiedThan(OldClassTy)) {
    Diag(New->getLocation(),
         diag::err_covariant_return_type_class_type_more_qualified)
        << New->getDeclName() << NewTy << OldTy
        << New->getReturnTypeSourceRange();
    Diag(Old->getLocation(), diag::note_overridden_virtual_function)
        << Old->getReturnTypeSourceRange();
    return true;
  }

  return false;
}

/// \brief Mark the given method pure.
///
/// \param Method the method to be marked pure.
///
/// \param InitRange the source range that covers the "0" initializer.
bool Sema::CheckPureMethod(CXXMethodDecl *Method, SourceRange InitRange) {
  SourceLocation EndLoc = InitRange.getEnd();
  if (EndLoc.isValid())
    Method->setRangeEnd(EndLoc);

  if (Method->isVirtual() || Method->getParent()->isDependentContext()) {
    Method->setPure();
    return false;
  }

  if (!Method->isInvalidDecl())
    Diag(Method->getLocation(), diag::err_non_virtual_pure)
      << Method->getDeclName() << InitRange;
  return true;
}

void Sema::ActOnPureSpecifier(Decl *D, SourceLocation ZeroLoc) {
  if (D->getFriendObjectKind())
    Diag(D->getLocation(), diag::err_pure_friend);
  else if (auto *M = dyn_cast<CXXMethodDecl>(D))
    CheckPureMethod(M, ZeroLoc);
  else
    Diag(D->getLocation(), diag::err_illegal_initializer);
}

/// \brief Determine whether the given declaration is a static data member.
static bool isStaticDataMember(const Decl *D) {
  if (const VarDecl *Var = dyn_cast_or_null<VarDecl>(D))
    return Var->isStaticDataMember();

  return false;
}

/// ActOnCXXEnterDeclInitializer - Invoked when we are about to parse
/// an initializer for the out-of-line declaration 'Dcl'.  The scope
/// is a fresh scope pushed for just this purpose.
///
/// After this method is called, according to [C++ 3.4.1p13], if 'Dcl' is a
/// static data member of class X, names should be looked up in the scope of
/// class X.
void Sema::ActOnCXXEnterDeclInitializer(Scope *S, Decl *D) {
  // If there is no declaration, there was an error parsing it.
if (!D || D->isInvalidDecl()) return; // We will always have a nested name specifier here, but this declaration // might not be out of line if the specifier names the current namespace: // extern int n; // int ::n = 0; if (D->isOutOfLine()) EnterDeclaratorContext(S, D->getDeclContext()); // If we are parsing the initializer for a static data member, push a // new expression evaluation context that is associated with this static // data member. if (isStaticDataMember(D)) PushExpressionEvaluationContext(PotentiallyEvaluated, D); } /// ActOnCXXExitDeclInitializer - Invoked after we are finished parsing an /// initializer for the out-of-line declaration 'D'. void Sema::ActOnCXXExitDeclInitializer(Scope *S, Decl *D) { // If there is no declaration, there was an error parsing it. if (!D || D->isInvalidDecl()) return; if (isStaticDataMember(D)) PopExpressionEvaluationContext(); if (D->isOutOfLine()) ExitDeclaratorContext(S); } /// ActOnCXXConditionDeclarationExpr - Parsed a condition declaration of a /// C++ if/switch/while/for statement. /// e.g: "if (int x = f()) {...}" DeclResult Sema::ActOnCXXConditionDeclaration(Scope *S, Declarator &D) { // C++ 6.4p2: // The declarator shall not specify a function or an array. // The type-specifier-seq shall not contain typedef and shall not declare a // new class or enumeration. assert(D.getDeclSpec().getStorageClassSpec() != DeclSpec::SCS_typedef && "Parser allowed 'typedef' as storage class of condition decl."); Decl *Dcl = ActOnDeclarator(S, D); if (!Dcl) return true; if (isa(Dcl)) { // The declarator shall not specify a function. Diag(Dcl->getLocation(), diag::err_invalid_use_of_function_type) << D.getSourceRange(); return true; } return Dcl; } void Sema::LoadExternalVTableUses() { if (!ExternalSource) return; SmallVector VTables; ExternalSource->ReadUsedVTables(VTables); SmallVector NewUses; for (unsigned I = 0, N = VTables.size(); I != N; ++I) { llvm::DenseMap::iterator Pos = VTablesUsed.find(VTables[I].Record); // Even if a definition wasn't required before, it may be required now. if (Pos != VTablesUsed.end()) { if (!Pos->second && VTables[I].DefinitionRequired) Pos->second = true; continue; } VTablesUsed[VTables[I].Record] = VTables[I].DefinitionRequired; NewUses.push_back(VTableUse(VTables[I].Record, VTables[I].Location)); } VTableUses.insert(VTableUses.begin(), NewUses.begin(), NewUses.end()); } void Sema::MarkVTableUsed(SourceLocation Loc, CXXRecordDecl *Class, bool DefinitionRequired) { // Ignore any vtable uses in unevaluated operands or for classes that do // not have a vtable. if (!Class->isDynamicClass() || Class->isDependentContext() || CurContext->isDependentContext() || isUnevaluatedContext()) return; // Try to insert this class into the map. LoadExternalVTableUses(); Class = cast(Class->getCanonicalDecl()); std::pair::iterator, bool> Pos = VTablesUsed.insert(std::make_pair(Class, DefinitionRequired)); if (!Pos.second) { // If we already had an entry, check to see if we are promoting this vtable // to require a definition. If so, we need to reappend to the VTableUses // list, since we may have already processed the first entry. if (DefinitionRequired && !Pos.first->second) { Pos.first->second = true; } else { // Otherwise, we can early exit. return; } } else { // The Microsoft ABI requires that we perform the destructor body // checks (i.e. operator delete() lookup) when the vtable is marked used, as // the deleting destructor is emitted with the vtable, not with the // destructor definition as in the Itanium ABI. 
if (Context.getTargetInfo().getCXXABI().isMicrosoft()) { CXXDestructorDecl *DD = Class->getDestructor(); if (DD && DD->isVirtual() && !DD->isDeleted()) { if (Class->hasUserDeclaredDestructor() && !DD->isDefined()) { // If this is an out-of-line declaration, marking it referenced will // not do anything. Manually call CheckDestructor to look up operator // delete(). ContextRAII SavedContext(*this, DD); CheckDestructor(DD); } else { MarkFunctionReferenced(Loc, Class->getDestructor()); } } } } // Local classes need to have their virtual members marked // immediately. For all other classes, we mark their virtual members // at the end of the translation unit. if (Class->isLocalClass()) MarkVirtualMembersReferenced(Loc, Class); else VTableUses.push_back(std::make_pair(Class, Loc)); } bool Sema::DefineUsedVTables() { LoadExternalVTableUses(); if (VTableUses.empty()) return false; // Note: The VTableUses vector could grow as a result of marking // the members of a class as "used", so we check the size each // time through the loop and prefer indices (which are stable) to // iterators (which are not). bool DefinedAnything = false; for (unsigned I = 0; I != VTableUses.size(); ++I) { CXXRecordDecl *Class = VTableUses[I].first->getDefinition(); if (!Class) continue; TemplateSpecializationKind ClassTSK = Class->getTemplateSpecializationKind(); SourceLocation Loc = VTableUses[I].second; bool DefineVTable = true; // If this class has a key function, but that key function is // defined in another translation unit, we don't need to emit the // vtable even though we're using it. const CXXMethodDecl *KeyFunction = Context.getCurrentKeyFunction(Class); if (KeyFunction && !KeyFunction->hasBody()) { // The key function is in another translation unit. DefineVTable = false; TemplateSpecializationKind TSK = KeyFunction->getTemplateSpecializationKind(); assert(TSK != TSK_ExplicitInstantiationDefinition && TSK != TSK_ImplicitInstantiation && "Instantiations don't have key functions"); (void)TSK; } else if (!KeyFunction) { // If we have a class with no key function that is the subject // of an explicit instantiation declaration, suppress the // vtable; it will live with the explicit instantiation // definition. bool IsExplicitInstantiationDeclaration = ClassTSK == TSK_ExplicitInstantiationDeclaration; for (auto R : Class->redecls()) { TemplateSpecializationKind TSK = cast(R)->getTemplateSpecializationKind(); if (TSK == TSK_ExplicitInstantiationDeclaration) IsExplicitInstantiationDeclaration = true; else if (TSK == TSK_ExplicitInstantiationDefinition) { IsExplicitInstantiationDeclaration = false; break; } } if (IsExplicitInstantiationDeclaration) DefineVTable = false; } // The exception specifications for all virtual members may be needed even // if we are not providing an authoritative form of the vtable in this TU. // We may choose to emit it available_externally anyway. if (!DefineVTable) { MarkVirtualMemberExceptionSpecsNeeded(Loc, Class); continue; } // Mark all of the virtual members of this class as referenced, so // that we can build a vtable. Then, tell the AST consumer that a // vtable for this class is required. DefinedAnything = true; MarkVirtualMembersReferenced(Loc, Class); CXXRecordDecl *Canonical = cast(Class->getCanonicalDecl()); if (VTablesUsed[Canonical]) Consumer.HandleVTable(Class); // Warn if we're emitting a weak vtable. The vtable will be weak if there is // no key function or the key function is inlined. 
    // Don't warn in C++ ABIs that lack key functions, since the user
    // won't be able to make one.
    if (Context.getTargetInfo().getCXXABI().hasKeyFunctions() &&
        Class->isExternallyVisible() && ClassTSK != TSK_ImplicitInstantiation) {
      const FunctionDecl *KeyFunctionDef = nullptr;
      if (!KeyFunction || (KeyFunction->hasBody(KeyFunctionDef) &&
                           KeyFunctionDef->isInlined())) {
        Diag(Class->getLocation(),
             ClassTSK == TSK_ExplicitInstantiationDefinition
                 ? diag::warn_weak_template_vtable
                 : diag::warn_weak_vtable)
            << Class;
      }
    }
  }
  VTableUses.clear();

  return DefinedAnything;
}

void Sema::MarkVirtualMemberExceptionSpecsNeeded(SourceLocation Loc,
                                                 const CXXRecordDecl *RD) {
  for (const auto *I : RD->methods())
    if (I->isVirtual() && !I->isPure())
      ResolveExceptionSpec(Loc, I->getType()->castAs<FunctionProtoType>());
}

void Sema::MarkVirtualMembersReferenced(SourceLocation Loc,
                                        const CXXRecordDecl *RD) {
  // Mark all functions which will appear in RD's vtable as used.
  CXXFinalOverriderMap FinalOverriders;
  RD->getFinalOverriders(FinalOverriders);
  for (CXXFinalOverriderMap::const_iterator I = FinalOverriders.begin(),
                                            E = FinalOverriders.end();
       I != E; ++I) {
    for (OverridingMethods::const_iterator OI = I->second.begin(),
                                           OE = I->second.end();
         OI != OE; ++OI) {
      assert(OI->second.size() > 0 && "no final overrider");
      CXXMethodDecl *Overrider = OI->second.front().Method;

      // C++ [basic.def.odr]p2:
      //   [...] A virtual member function is used if it is not pure. [...]
      if (!Overrider->isPure())
        MarkFunctionReferenced(Loc, Overrider);
    }
  }

  // Only classes that have virtual bases need a VTT.
  if (RD->getNumVBases() == 0)
    return;

  for (const auto &I : RD->bases()) {
    const CXXRecordDecl *Base =
        cast<CXXRecordDecl>(I.getType()->getAs<RecordType>()->getDecl());
    if (Base->getNumVBases() == 0)
      continue;
    MarkVirtualMembersReferenced(Loc, Base);
  }
}

/// SetIvarInitializers - This routine builds initialization ASTs for the
/// Objective-C implementation whose ivars need be initialized.
void Sema::SetIvarInitializers(ObjCImplementationDecl *ObjCImplementation) {
  if (!getLangOpts().CPlusPlus)
    return;
  if (ObjCInterfaceDecl *OID = ObjCImplementation->getClassInterface()) {
    SmallVector<ObjCIvarDecl*, 8> ivars;
    CollectIvarsToConstructOrDestruct(OID, ivars);
    if (ivars.empty())
      return;
    SmallVector<CXXCtorInitializer*, 32> AllToInit;
    for (unsigned i = 0; i < ivars.size(); i++) {
      FieldDecl *Field = ivars[i];
      if (Field->isInvalidDecl())
        continue;

      CXXCtorInitializer *Member;
      InitializedEntity InitEntity = InitializedEntity::InitializeMember(Field);
      InitializationKind InitKind =
        InitializationKind::CreateDefault(ObjCImplementation->getLocation());

      InitializationSequence InitSeq(*this, InitEntity, InitKind, None);
      ExprResult MemberInit =
        InitSeq.Perform(*this, InitEntity, InitKind, None);
      MemberInit = MaybeCreateExprWithCleanups(MemberInit);
      // Note, MemberInit could actually come back empty if no initialization
      // is required (e.g., because it would call a trivial default constructor)
      if (!MemberInit.get() || MemberInit.isInvalid())
        continue;

      Member =
        new (Context) CXXCtorInitializer(Context, Field, SourceLocation(),
                                         SourceLocation(),
                                         MemberInit.getAs<Expr>(),
                                         SourceLocation());
      AllToInit.push_back(Member);

      // Be sure that the destructor is accessible and is marked as referenced.
      if (const RecordType *RecordTy =
              Context.getBaseElementType(Field->getType())
                  ->getAs<RecordType>()) {
        CXXRecordDecl *RD = cast<CXXRecordDecl>(RecordTy->getDecl());
        if (CXXDestructorDecl *Destructor = LookupDestructor(RD)) {
          MarkFunctionReferenced(Field->getLocation(), Destructor);
          CheckDestructorAccess(Field->getLocation(), Destructor,
                                PDiag(diag::err_access_dtor_ivar)
                                  << Context.getBaseElementType(Field->getType()));
        }
      }
    }
    ObjCImplementation->setIvarInitializers(Context,
                                            AllToInit.data(), AllToInit.size());
  }
}

static void DelegatingCycleHelper(CXXConstructorDecl* Ctor,
                                  llvm::SmallSet<CXXConstructorDecl*, 4> &Valid,
                                  llvm::SmallSet<CXXConstructorDecl*, 4> &Invalid,
                                  llvm::SmallSet<CXXConstructorDecl*, 4> &Current,
                                  Sema &S) {
  if (Ctor->isInvalidDecl())
    return;

  CXXConstructorDecl *Target = Ctor->getTargetConstructor();

  // Target may not be determinable yet, for instance if this is a dependent
  // call in an uninstantiated template.
  if (Target) {
    const FunctionDecl *FNTarget = nullptr;
    (void)Target->hasBody(FNTarget);
    Target = const_cast<CXXConstructorDecl*>(
      cast_or_null<CXXConstructorDecl>(FNTarget));
  }

  CXXConstructorDecl *Canonical = Ctor->getCanonicalDecl(),
                     // Avoid dereferencing a null pointer here.
                     *TCanonical = Target? Target->getCanonicalDecl() : nullptr;

  if (!Current.insert(Canonical).second)
    return;

  // We know that beyond here, we aren't chaining into a cycle.
  if (!Target || !Target->isDelegatingConstructor() ||
      Target->isInvalidDecl() || Valid.count(TCanonical)) {
    Valid.insert(Current.begin(), Current.end());
    Current.clear();
  // We've hit a cycle.
  } else if (TCanonical == Canonical || Invalid.count(TCanonical) ||
             Current.count(TCanonical)) {
    // If we haven't diagnosed this cycle yet, do so now.
    if (!Invalid.count(TCanonical)) {
      S.Diag((*Ctor->init_begin())->getSourceLocation(),
             diag::warn_delegating_ctor_cycle)
        << Ctor;

      // Don't add a note for a function delegating directly to itself.
      if (TCanonical != Canonical)
        S.Diag(Target->getLocation(), diag::note_it_delegates_to);

      CXXConstructorDecl *C = Target;
      while (C->getCanonicalDecl() != Canonical) {
        const FunctionDecl *FNTarget = nullptr;
        (void)C->getTargetConstructor()->hasBody(FNTarget);
        assert(FNTarget && "Ctor cycle through bodiless function");

        C = const_cast<CXXConstructorDecl*>(
          cast<CXXConstructorDecl>(FNTarget));
        S.Diag(C->getLocation(), diag::note_which_delegates_to);
      }
    }

    Invalid.insert(Current.begin(), Current.end());
    Current.clear();
  } else {
    DelegatingCycleHelper(Target, Valid, Invalid, Current, S);
  }
}

void Sema::CheckDelegatingCtorCycles() {
  llvm::SmallSet<CXXConstructorDecl*, 4> Valid, Invalid, Current;

  for (DelegatingCtorDeclsType::iterator
         I = DelegatingCtorDecls.begin(ExternalSource),
         E = DelegatingCtorDecls.end();
       I != E; ++I)
    DelegatingCycleHelper(*I, Valid, Invalid, Current, *this);

  for (llvm::SmallSet<CXXConstructorDecl *, 4>::iterator CI = Invalid.begin(),
                                                         CE = Invalid.end();
       CI != CE; ++CI)
    (*CI)->setInvalidDecl();
}

namespace {
/// \brief AST visitor that finds references to the 'this' expression.
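/// For example (hedged sketch; 'S' and 'g' are hypothetical names), it flags
/// the 'this' in the declaration of a static member function:
/// \code
///   struct S {
///     int g();
///     static void f() noexcept(noexcept(this->g())); // 'this' is diagnosed
///   };
/// \endcode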
class FindCXXThisExpr : public RecursiveASTVisitor<FindCXXThisExpr> {
  Sema &S;

public:
  explicit FindCXXThisExpr(Sema &S) : S(S) { }

  bool VisitCXXThisExpr(CXXThisExpr *E) {
    S.Diag(E->getLocation(), diag::err_this_static_member_func)
      << E->isImplicit();
    return false;
  }
};
}

bool Sema::checkThisInStaticMemberFunctionType(CXXMethodDecl *Method) {
  TypeSourceInfo *TSInfo = Method->getTypeSourceInfo();
  if (!TSInfo)
    return false;

  TypeLoc TL = TSInfo->getTypeLoc();
  FunctionProtoTypeLoc ProtoTL = TL.getAs<FunctionProtoTypeLoc>();
  if (!ProtoTL)
    return false;

  // C++11 [expr.prim.general]p3:
  //   [The expression this] shall not appear before the optional
  //   cv-qualifier-seq and it shall not appear within the declaration of a
  //   static member function (although its type and value category are defined
  //   within a static member function as they are within a non-static member
  //   function). [ Note: this is because declaration matching does not occur
  //   until the complete declarator is known. - end note ]
  const FunctionProtoType *Proto = ProtoTL.getTypePtr();
  FindCXXThisExpr Finder(*this);

  // If the return type came after the cv-qualifier-seq, check it now.
  if (Proto->hasTrailingReturn() &&
      !Finder.TraverseTypeLoc(ProtoTL.getReturnLoc()))
    return true;

  // Check the exception specification.
  if (checkThisInStaticMemberFunctionExceptionSpec(Method))
    return true;

  return checkThisInStaticMemberFunctionAttributes(Method);
}

bool Sema::checkThisInStaticMemberFunctionExceptionSpec(CXXMethodDecl *Method) {
  TypeSourceInfo *TSInfo = Method->getTypeSourceInfo();
  if (!TSInfo)
    return false;

  TypeLoc TL = TSInfo->getTypeLoc();
  FunctionProtoTypeLoc ProtoTL = TL.getAs<FunctionProtoTypeLoc>();
  if (!ProtoTL)
    return false;

  const FunctionProtoType *Proto = ProtoTL.getTypePtr();
  FindCXXThisExpr Finder(*this);

  switch (Proto->getExceptionSpecType()) {
  case EST_Unparsed:
  case EST_Uninstantiated:
  case EST_Unevaluated:
  case EST_BasicNoexcept:
  case EST_DynamicNone:
  case EST_MSAny:
  case EST_None:
    break;

  case EST_ComputedNoexcept:
    if (!Finder.TraverseStmt(Proto->getNoexceptExpr()))
      return true;
    // Fall through.

  case EST_Dynamic:
    for (const auto &E : Proto->exceptions()) {
      if (!Finder.TraverseType(E))
        return true;
    }
    break;
  }

  return false;
}

bool Sema::checkThisInStaticMemberFunctionAttributes(CXXMethodDecl *Method) {
  FindCXXThisExpr Finder(*this);

  // Check attributes.
  for (const auto *A : Method->attrs()) {
    // FIXME: This should be emitted by tblgen.
Expr *Arg = nullptr; ArrayRef Args; if (const auto *G = dyn_cast(A)) Arg = G->getArg(); else if (const auto *G = dyn_cast(A)) Arg = G->getArg(); else if (const auto *AA = dyn_cast(A)) Args = llvm::makeArrayRef(AA->args_begin(), AA->args_size()); else if (const auto *AB = dyn_cast(A)) Args = llvm::makeArrayRef(AB->args_begin(), AB->args_size()); else if (const auto *ETLF = dyn_cast(A)) { Arg = ETLF->getSuccessValue(); Args = llvm::makeArrayRef(ETLF->args_begin(), ETLF->args_size()); } else if (const auto *STLF = dyn_cast(A)) { Arg = STLF->getSuccessValue(); Args = llvm::makeArrayRef(STLF->args_begin(), STLF->args_size()); } else if (const auto *LR = dyn_cast(A)) Arg = LR->getArg(); else if (const auto *LE = dyn_cast(A)) Args = llvm::makeArrayRef(LE->args_begin(), LE->args_size()); else if (const auto *RC = dyn_cast(A)) Args = llvm::makeArrayRef(RC->args_begin(), RC->args_size()); else if (const auto *AC = dyn_cast(A)) Args = llvm::makeArrayRef(AC->args_begin(), AC->args_size()); else if (const auto *AC = dyn_cast(A)) Args = llvm::makeArrayRef(AC->args_begin(), AC->args_size()); else if (const auto *RC = dyn_cast(A)) Args = llvm::makeArrayRef(RC->args_begin(), RC->args_size()); if (Arg && !Finder.TraverseStmt(Arg)) return true; for (unsigned I = 0, N = Args.size(); I != N; ++I) { if (!Finder.TraverseStmt(Args[I])) return true; } } return false; } void Sema::checkExceptionSpecification( bool IsTopLevel, ExceptionSpecificationType EST, ArrayRef DynamicExceptions, ArrayRef DynamicExceptionRanges, Expr *NoexceptExpr, SmallVectorImpl &Exceptions, FunctionProtoType::ExceptionSpecInfo &ESI) { Exceptions.clear(); ESI.Type = EST; if (EST == EST_Dynamic) { Exceptions.reserve(DynamicExceptions.size()); for (unsigned ei = 0, ee = DynamicExceptions.size(); ei != ee; ++ei) { // FIXME: Preserve type source info. QualType ET = GetTypeFromParser(DynamicExceptions[ei]); if (IsTopLevel) { SmallVector Unexpanded; collectUnexpandedParameterPacks(ET, Unexpanded); if (!Unexpanded.empty()) { DiagnoseUnexpandedParameterPacks( DynamicExceptionRanges[ei].getBegin(), UPPC_ExceptionType, Unexpanded); continue; } } // Check that the type is valid for an exception spec, and // drop it if not. if (!CheckSpecifiedExceptionType(ET, DynamicExceptionRanges[ei])) Exceptions.push_back(ET); } ESI.Exceptions = Exceptions; return; } if (EST == EST_ComputedNoexcept) { // If an error occurred, there's no expression here. if (NoexceptExpr) { assert((NoexceptExpr->isTypeDependent() || NoexceptExpr->getType()->getCanonicalTypeUnqualified() == Context.BoolTy) && "Parser should have made sure that the expression is boolean"); if (IsTopLevel && NoexceptExpr && DiagnoseUnexpandedParameterPack(NoexceptExpr)) { ESI.Type = EST_BasicNoexcept; return; } if (!NoexceptExpr->isValueDependent()) NoexceptExpr = VerifyIntegerConstantExpression(NoexceptExpr, nullptr, diag::err_noexcept_needs_constant_expression, /*AllowFold*/ false).get(); ESI.NoexceptExpr = NoexceptExpr; } return; } } void Sema::actOnDelayedExceptionSpecification(Decl *MethodD, ExceptionSpecificationType EST, SourceRange SpecificationRange, ArrayRef DynamicExceptions, ArrayRef DynamicExceptionRanges, Expr *NoexceptExpr) { if (!MethodD) return; // Dig out the method we're referring to. if (FunctionTemplateDecl *FunTmpl = dyn_cast(MethodD)) MethodD = FunTmpl->getTemplatedDecl(); CXXMethodDecl *Method = dyn_cast(MethodD); if (!Method) return; // Check the exception specification. 
  llvm::SmallVector<QualType, 4> Exceptions;
  FunctionProtoType::ExceptionSpecInfo ESI;
  checkExceptionSpecification(/*IsTopLevel*/true, EST, DynamicExceptions,
                              DynamicExceptionRanges, NoexceptExpr, Exceptions,
                              ESI);

  // Update the exception specification on the function type.
  Context.adjustExceptionSpec(Method, ESI, /*AsWritten*/true);

  if (Method->isStatic())
    checkThisInStaticMemberFunctionExceptionSpec(Method);

  if (Method->isVirtual()) {
    // Check overrides, which we previously had to delay.
    for (CXXMethodDecl::method_iterator O = Method->begin_overridden_methods(),
                                     OEnd = Method->end_overridden_methods();
         O != OEnd; ++O)
      CheckOverridingFunctionExceptionSpec(Method, *O);
  }
}

/// HandleMSProperty - Analyze a __declspec(property) field of a C++ class.
///
MSPropertyDecl *Sema::HandleMSProperty(Scope *S, RecordDecl *Record,
                                       SourceLocation DeclStart,
                                       Declarator &D, Expr *BitWidth,
                                       InClassInitStyle InitStyle,
                                       AccessSpecifier AS,
                                       AttributeList *MSPropertyAttr) {
  IdentifierInfo *II = D.getIdentifier();
  if (!II) {
    Diag(DeclStart, diag::err_anonymous_property);
    return nullptr;
  }
  SourceLocation Loc = D.getIdentifierLoc();

  TypeSourceInfo *TInfo = GetTypeForDeclarator(D, S);
  QualType T = TInfo->getType();
  if (getLangOpts().CPlusPlus) {
    CheckExtraCXXDefaultArguments(D);

    if (DiagnoseUnexpandedParameterPack(D.getIdentifierLoc(), TInfo,
                                        UPPC_DataMemberType)) {
      D.setInvalidType();
      T = Context.IntTy;
      TInfo = Context.getTrivialTypeSourceInfo(T, Loc);
    }
  }

  DiagnoseFunctionSpecifiers(D.getDeclSpec());

  if (D.getDeclSpec().isInlineSpecified())
    Diag(D.getDeclSpec().getInlineSpecLoc(), diag::err_inline_non_function)
        << getLangOpts().CPlusPlus1z;
  if (DeclSpec::TSCS TSCS = D.getDeclSpec().getThreadStorageClassSpec())
    Diag(D.getDeclSpec().getThreadStorageClassSpecLoc(),
         diag::err_invalid_thread)
      << DeclSpec::getSpecifierName(TSCS);

  // Check to see if this name was declared as a member previously
  NamedDecl *PrevDecl = nullptr;
  LookupResult Previous(*this, II, Loc, LookupMemberName, ForRedeclaration);
  LookupName(Previous, S);
  switch (Previous.getResultKind()) {
  case LookupResult::Found:
  case LookupResult::FoundUnresolvedValue:
    PrevDecl = Previous.getAsSingle<NamedDecl>();
    break;

  case LookupResult::FoundOverloaded:
    PrevDecl = Previous.getRepresentativeDecl();
    break;

  case LookupResult::NotFound:
  case LookupResult::NotFoundInCurrentInstantiation:
  case LookupResult::Ambiguous:
    break;
  }

  if (PrevDecl && PrevDecl->isTemplateParameter()) {
    // Maybe we will complain about the shadowed template parameter.
    DiagnoseTemplateParameterShadow(D.getIdentifierLoc(), PrevDecl);
    // Just pretend that we didn't see the previous declaration.
    PrevDecl = nullptr;
  }

  if (PrevDecl && !isDeclInScope(PrevDecl, Record, S))
    PrevDecl = nullptr;

  SourceLocation TSSL = D.getLocStart();
  const AttributeList::PropertyData &Data = MSPropertyAttr->getPropertyData();
  MSPropertyDecl *NewPD = MSPropertyDecl::Create(
      Context, Record, Loc, II, T, TInfo, TSSL, Data.GetterId, Data.SetterId);
  ProcessDeclAttributes(TUScope, NewPD, D);
  NewPD->setAccess(AS);

  if (NewPD->isInvalidDecl())
    Record->setInvalidDecl();

  if (D.getDeclSpec().isModulePrivateSpecified())
    NewPD->setModulePrivate();

  if (NewPD->isInvalidDecl() && PrevDecl) {
    // Don't introduce NewFD into scope; there's already something
    // with the same name in the same scope.
} else if (II) { PushOnScopeChains(NewPD, S); } else Record->addDecl(NewPD); return NewPD; } Index: vendor/clang/dist/lib/Sema/SemaExpr.cpp =================================================================== --- vendor/clang/dist/lib/Sema/SemaExpr.cpp (revision 312705) +++ vendor/clang/dist/lib/Sema/SemaExpr.cpp (revision 312706) @@ -1,15397 +1,15396 @@ //===--- SemaExpr.cpp - Semantic Analysis for Expressions -----------------===// // // The LLVM Compiler Infrastructure // // This file is distributed under the University of Illinois Open Source // License. See LICENSE.TXT for details. // //===----------------------------------------------------------------------===// // // This file implements semantic analysis for expressions. // //===----------------------------------------------------------------------===// #include "TreeTransform.h" #include "clang/AST/ASTConsumer.h" #include "clang/AST/ASTContext.h" #include "clang/AST/ASTLambda.h" #include "clang/AST/ASTMutationListener.h" #include "clang/AST/CXXInheritance.h" #include "clang/AST/DeclObjC.h" #include "clang/AST/DeclTemplate.h" #include "clang/AST/EvaluatedExprVisitor.h" #include "clang/AST/Expr.h" #include "clang/AST/ExprCXX.h" #include "clang/AST/ExprObjC.h" #include "clang/AST/ExprOpenMP.h" #include "clang/AST/RecursiveASTVisitor.h" #include "clang/AST/TypeLoc.h" #include "clang/Basic/PartialDiagnostic.h" #include "clang/Basic/SourceManager.h" #include "clang/Basic/TargetInfo.h" #include "clang/Lex/LiteralSupport.h" #include "clang/Lex/Preprocessor.h" #include "clang/Sema/AnalysisBasedWarnings.h" #include "clang/Sema/DeclSpec.h" #include "clang/Sema/DelayedDiagnostic.h" #include "clang/Sema/Designator.h" #include "clang/Sema/Initialization.h" #include "clang/Sema/Lookup.h" #include "clang/Sema/ParsedTemplate.h" #include "clang/Sema/Scope.h" #include "clang/Sema/ScopeInfo.h" #include "clang/Sema/SemaFixItUtils.h" #include "clang/Sema/SemaInternal.h" #include "clang/Sema/Template.h" #include "llvm/Support/ConvertUTF.h" using namespace clang; using namespace sema; /// \brief Determine whether the use of this declaration is valid, without /// emitting diagnostics. bool Sema::CanUseDecl(NamedDecl *D, bool TreatUnavailableAsInvalid) { // See if this is an auto-typed variable whose initializer we are parsing. if (ParsingInitForAutoVars.count(D)) return false; // See if this is a deleted function. if (FunctionDecl *FD = dyn_cast(D)) { if (FD->isDeleted()) return false; // If the function has a deduced return type, and we can't deduce it, // then we can't use it either. if (getLangOpts().CPlusPlus14 && FD->getReturnType()->isUndeducedType() && DeduceReturnType(FD, SourceLocation(), /*Diagnose*/ false)) return false; } // See if this function is unavailable. if (TreatUnavailableAsInvalid && D->getAvailability() == AR_Unavailable && cast(CurContext)->getAvailability() != AR_Unavailable) return false; return true; } static void DiagnoseUnusedOfDecl(Sema &S, NamedDecl *D, SourceLocation Loc) { // Warn if this is used but marked unused. if (const auto *A = D->getAttr()) { // [[maybe_unused]] should not diagnose uses, but __attribute__((unused)) // should diagnose them. 
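    // For illustration (hedged sketch):
    //   [[maybe_unused]] int a = 0;          // uses of 'a' are not diagnosed
    //   int b __attribute__((unused)) = 0;   // uses of 'b' are diagnosed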
if (A->getSemanticSpelling() != UnusedAttr::CXX11_maybe_unused) { const Decl *DC = cast_or_null(S.getCurObjCLexicalContext()); if (DC && !DC->hasAttr()) S.Diag(Loc, diag::warn_used_but_marked_unused) << D->getDeclName(); } } } static bool HasRedeclarationWithoutAvailabilityInCategory(const Decl *D) { const auto *OMD = dyn_cast(D); if (!OMD) return false; const ObjCInterfaceDecl *OID = OMD->getClassInterface(); if (!OID) return false; for (const ObjCCategoryDecl *Cat : OID->visible_categories()) if (ObjCMethodDecl *CatMeth = Cat->getMethod(OMD->getSelector(), OMD->isInstanceMethod())) if (!CatMeth->hasAttr()) return true; return false; } AvailabilityResult Sema::ShouldDiagnoseAvailabilityOfDecl(NamedDecl *&D, std::string *Message) { AvailabilityResult Result = D->getAvailability(Message); // For typedefs, if the typedef declaration appears available look // to the underlying type to see if it is more restrictive. while (const TypedefNameDecl *TD = dyn_cast(D)) { if (Result == AR_Available) { if (const TagType *TT = TD->getUnderlyingType()->getAs()) { D = TT->getDecl(); Result = D->getAvailability(Message); continue; } } break; } // Forward class declarations get their attributes from their definition. if (ObjCInterfaceDecl *IDecl = dyn_cast(D)) { if (IDecl->getDefinition()) { D = IDecl->getDefinition(); Result = D->getAvailability(Message); } } if (const EnumConstantDecl *ECD = dyn_cast(D)) if (Result == AR_Available) { const DeclContext *DC = ECD->getDeclContext(); if (const EnumDecl *TheEnumDecl = dyn_cast(DC)) Result = TheEnumDecl->getAvailability(Message); } if (Result == AR_NotYetIntroduced) { // Don't do this for enums, they can't be redeclared. if (isa(D) || isa(D)) return AR_Available; bool Warn = !D->getAttr()->isInherited(); // Objective-C method declarations in categories are not modelled as // redeclarations, so manually look for a redeclaration in a category // if necessary. if (Warn && HasRedeclarationWithoutAvailabilityInCategory(D)) Warn = false; // In general, D will point to the most recent redeclaration. However, // for `@class A;` decls, this isn't true -- manually go through the // redecl chain in that case. if (Warn && isa(D)) for (Decl *Redecl = D->getMostRecentDecl(); Redecl && Warn; Redecl = Redecl->getPreviousDecl()) if (!Redecl->hasAttr() || Redecl->getAttr()->isInherited()) Warn = false; return Warn ? AR_NotYetIntroduced : AR_Available; } return Result; } static void DiagnoseAvailabilityOfDecl(Sema &S, NamedDecl *D, SourceLocation Loc, const ObjCInterfaceDecl *UnknownObjCClass, bool ObjCPropertyAccess) { std::string Message; // See if this declaration is unavailable, deprecated, or partial. if (AvailabilityResult Result = S.ShouldDiagnoseAvailabilityOfDecl(D, &Message)) { if (Result == AR_NotYetIntroduced && S.getCurFunctionOrMethodDecl()) { S.getEnclosingFunction()->HasPotentialAvailabilityViolations = true; return; } const ObjCPropertyDecl *ObjCPDecl = nullptr; if (const ObjCMethodDecl *MD = dyn_cast(D)) { if (const ObjCPropertyDecl *PD = MD->findPropertyDecl()) { AvailabilityResult PDeclResult = PD->getAvailability(nullptr); if (PDeclResult == Result) ObjCPDecl = PD; } } S.EmitAvailabilityWarning(Result, D, Message, Loc, UnknownObjCClass, ObjCPDecl, ObjCPropertyAccess); } } /// \brief Emit a note explaining that this function is deleted. 
void Sema::NoteDeletedFunction(FunctionDecl *Decl) { assert(Decl->isDeleted()); CXXMethodDecl *Method = dyn_cast(Decl); if (Method && Method->isDeleted() && Method->isDefaulted()) { // If the method was explicitly defaulted, point at that declaration. if (!Method->isImplicit()) Diag(Decl->getLocation(), diag::note_implicitly_deleted); // Try to diagnose why this special member function was implicitly // deleted. This might fail, if that reason no longer applies. CXXSpecialMember CSM = getSpecialMember(Method); if (CSM != CXXInvalid) ShouldDeleteSpecialMember(Method, CSM, nullptr, /*Diagnose=*/true); return; } auto *Ctor = dyn_cast(Decl); if (Ctor && Ctor->isInheritingConstructor()) return NoteDeletedInheritingConstructor(Ctor); Diag(Decl->getLocation(), diag::note_availability_specified_here) << Decl << true; } /// \brief Determine whether a FunctionDecl was ever declared with an /// explicit storage class. static bool hasAnyExplicitStorageClass(const FunctionDecl *D) { for (auto I : D->redecls()) { if (I->getStorageClass() != SC_None) return true; } return false; } /// \brief Check whether we're in an extern inline function and referring to a /// variable or function with internal linkage (C11 6.7.4p3). /// /// This is only a warning because we used to silently accept this code, but /// in many cases it will not behave correctly. This is not enabled in C++ mode /// because the restriction language is a bit weaker (C++11 [basic.def.odr]p6) /// and so while there may still be user mistakes, most of the time we can't /// prove that there are errors. static void diagnoseUseOfInternalDeclInInlineFunction(Sema &S, const NamedDecl *D, SourceLocation Loc) { // This is disabled under C++; there are too many ways for this to fire in // contexts where the warning is a false positive, or where it is technically // correct but benign. if (S.getLangOpts().CPlusPlus) return; // Check if this is an inlined function or method. FunctionDecl *Current = S.getCurFunctionDecl(); if (!Current) return; if (!Current->isInlined()) return; if (!Current->isExternallyVisible()) return; // Check if the decl has internal linkage. if (D->getFormalLinkage() != InternalLinkage) return; // Downgrade from ExtWarn to Extension if // (1) the supposedly external inline function is in the main file, // and probably won't be included anywhere else. // (2) the thing we're referencing is a pure function. // (3) the thing we're referencing is another inline function. // This last can give us false negatives, but it's better than warning on // wrappers for simple C library functions. const FunctionDecl *UsedFn = dyn_cast(D); bool DowngradeWarning = S.getSourceManager().isInMainFile(Loc); if (!DowngradeWarning && UsedFn) DowngradeWarning = UsedFn->isInlined() || UsedFn->hasAttr(); S.Diag(Loc, DowngradeWarning ? diag::ext_internal_in_extern_inline_quiet : diag::ext_internal_in_extern_inline) << /*IsVar=*/!UsedFn << D; S.MaybeSuggestAddingStaticToDecl(Current); S.Diag(D->getCanonicalDecl()->getLocation(), diag::note_entity_declared_at) << D; } void Sema::MaybeSuggestAddingStaticToDecl(const FunctionDecl *Cur) { const FunctionDecl *First = Cur->getFirstDecl(); // Suggest "static" on the function, if possible. if (!hasAnyExplicitStorageClass(First)) { SourceLocation DeclBegin = First->getSourceRange().getBegin(); Diag(DeclBegin, diag::note_convert_inline_to_static) << Cur << FixItHint::CreateInsertion(DeclBegin, "static "); } } /// \brief Determine whether the use of this declaration is valid, and /// emit any corresponding diagnostics. 
/// /// This routine diagnoses various problems with referencing /// declarations that can occur when using a declaration. For example, /// it might warn if a deprecated or unavailable declaration is being /// used, or produce an error (and return true) if a C++0x deleted /// function is being used. /// /// \returns true if there was an error (this declaration cannot be /// referenced), false otherwise. /// bool Sema::DiagnoseUseOfDecl(NamedDecl *D, SourceLocation Loc, const ObjCInterfaceDecl *UnknownObjCClass, bool ObjCPropertyAccess) { if (getLangOpts().CPlusPlus && isa(D)) { // If there were any diagnostics suppressed by template argument deduction, // emit them now. auto Pos = SuppressedDiagnostics.find(D->getCanonicalDecl()); if (Pos != SuppressedDiagnostics.end()) { for (const PartialDiagnosticAt &Suppressed : Pos->second) Diag(Suppressed.first, Suppressed.second); // Clear out the list of suppressed diagnostics, so that we don't emit // them again for this specialization. However, we don't obsolete this // entry from the table, because we want to avoid ever emitting these // diagnostics again. Pos->second.clear(); } // C++ [basic.start.main]p3: // The function 'main' shall not be used within a program. if (cast(D)->isMain()) Diag(Loc, diag::ext_main_used); } // See if this is an auto-typed variable whose initializer we are parsing. if (ParsingInitForAutoVars.count(D)) { if (isa(D)) { Diag(Loc, diag::err_binding_cannot_appear_in_own_initializer) << D->getDeclName(); } else { const AutoType *AT = cast(D)->getType()->getContainedAutoType(); Diag(Loc, diag::err_auto_variable_cannot_appear_in_own_initializer) << D->getDeclName() << (unsigned)AT->getKeyword(); } return true; } // See if this is a deleted function. SmallVector DiagnoseIfWarnings; if (FunctionDecl *FD = dyn_cast(D)) { if (FD->isDeleted()) { auto *Ctor = dyn_cast(FD); if (Ctor && Ctor->isInheritingConstructor()) Diag(Loc, diag::err_deleted_inherited_ctor_use) << Ctor->getParent() << Ctor->getInheritedConstructor().getConstructor()->getParent(); else Diag(Loc, diag::err_deleted_function_use); NoteDeletedFunction(FD); return true; } // If the function has a deduced return type, and we can't deduce it, // then we can't use it either. if (getLangOpts().CPlusPlus14 && FD->getReturnType()->isUndeducedType() && DeduceReturnType(FD, Loc)) return true; if (getLangOpts().CUDA && !CheckCUDACall(Loc, FD)) return true; if (const DiagnoseIfAttr *A = checkArgIndependentDiagnoseIf(FD, DiagnoseIfWarnings)) { emitDiagnoseIfDiagnostic(Loc, A); return true; } } // [OpenMP 4.0], 2.15 declare reduction Directive, Restrictions // Only the variables omp_in and omp_out are allowed in the combiner. // Only the variables omp_priv and omp_orig are allowed in the // initializer-clause. auto *DRD = dyn_cast(CurContext); if (LangOpts.OpenMP && DRD && !CurContext->containsDecl(D) && isa(D)) { Diag(Loc, diag::err_omp_wrong_var_in_declare_reduction) << getCurFunction()->HasOMPDeclareReductionCombiner; Diag(D->getLocation(), diag::note_entity_declared_at) << D; return true; } for (const auto *W : DiagnoseIfWarnings) emitDiagnoseIfDiagnostic(Loc, W); DiagnoseAvailabilityOfDecl(*this, D, Loc, UnknownObjCClass, ObjCPropertyAccess); DiagnoseUnusedOfDecl(*this, D, Loc); diagnoseUseOfInternalDeclInInlineFunction(*this, D, Loc); return false; } /// \brief Retrieve the message suffix that should be added to a /// diagnostic complaining about the given function being deleted or /// unavailable. 
std::string Sema::getDeletedOrUnavailableSuffix(const FunctionDecl *FD) { std::string Message; if (FD->getAvailability(&Message)) return ": " + Message; return std::string(); } /// DiagnoseSentinelCalls - This routine checks whether a call or /// message-send is to a declaration with the sentinel attribute, and /// if so, it checks that the requirements of the sentinel are /// satisfied. void Sema::DiagnoseSentinelCalls(NamedDecl *D, SourceLocation Loc, ArrayRef Args) { const SentinelAttr *attr = D->getAttr(); if (!attr) return; // The number of formal parameters of the declaration. unsigned numFormalParams; // The kind of declaration. This is also an index into a %select in // the diagnostic. enum CalleeType { CT_Function, CT_Method, CT_Block } calleeType; if (ObjCMethodDecl *MD = dyn_cast(D)) { numFormalParams = MD->param_size(); calleeType = CT_Method; } else if (FunctionDecl *FD = dyn_cast(D)) { numFormalParams = FD->param_size(); calleeType = CT_Function; } else if (isa(D)) { QualType type = cast(D)->getType(); const FunctionType *fn = nullptr; if (const PointerType *ptr = type->getAs()) { fn = ptr->getPointeeType()->getAs(); if (!fn) return; calleeType = CT_Function; } else if (const BlockPointerType *ptr = type->getAs()) { fn = ptr->getPointeeType()->castAs(); calleeType = CT_Block; } else { return; } if (const FunctionProtoType *proto = dyn_cast(fn)) { numFormalParams = proto->getNumParams(); } else { numFormalParams = 0; } } else { return; } // "nullPos" is the number of formal parameters at the end which // effectively count as part of the variadic arguments. This is // useful if you would prefer to not have *any* formal parameters, // but the language forces you to have at least one. unsigned nullPos = attr->getNullPos(); assert((nullPos == 0 || nullPos == 1) && "invalid null position on sentinel"); numFormalParams = (nullPos > numFormalParams ? 0 : numFormalParams - nullPos); // The number of arguments which should follow the sentinel. unsigned numArgsAfterSentinel = attr->getSentinel(); // If there aren't enough arguments for all the formal parameters, // the sentinel, and the args after the sentinel, complain. if (Args.size() < numFormalParams + numArgsAfterSentinel + 1) { Diag(Loc, diag::warn_not_enough_argument) << D->getDeclName(); Diag(D->getLocation(), diag::note_sentinel_here) << int(calleeType); return; } // Otherwise, find the sentinel expression. Expr *sentinelExpr = Args[Args.size() - numArgsAfterSentinel - 1]; if (!sentinelExpr) return; if (sentinelExpr->isValueDependent()) return; if (Context.isSentinelNullExpr(sentinelExpr)) return; // Pick a reasonable string to insert. Optimistically use 'nil', 'nullptr', // or 'NULL' if those are actually defined in the context. Only use // 'nil' for ObjC methods, where it's much more likely that the // variadic arguments form a list of object pointers. SourceLocation MissingNilLoc = getLocForEndOfToken(sentinelExpr->getLocEnd()); std::string NullValue; if (calleeType == CT_Method && PP.isMacroDefined("nil")) NullValue = "nil"; else if (getLangOpts().CPlusPlus11) NullValue = "nullptr"; else if (PP.isMacroDefined("NULL")) NullValue = "NULL"; else NullValue = "(void*) 0"; if (MissingNilLoc.isInvalid()) Diag(Loc, diag::warn_missing_sentinel) << int(calleeType); else Diag(MissingNilLoc, diag::warn_missing_sentinel) << int(calleeType) << FixItHint::CreateInsertion(MissingNilLoc, ", " + NullValue); Diag(D->getLocation(), diag::note_sentinel_here) << int(calleeType); } SourceRange Sema::getExprRange(Expr *E) const { return E ? 
E->getSourceRange() : SourceRange(); } //===----------------------------------------------------------------------===// // Standard Promotions and Conversions //===----------------------------------------------------------------------===// /// DefaultFunctionArrayConversion (C99 6.3.2.1p3, C99 6.3.2.1p4). ExprResult Sema::DefaultFunctionArrayConversion(Expr *E, bool Diagnose) { // Handle any placeholder expressions which made it here. if (E->getType()->isPlaceholderType()) { ExprResult result = CheckPlaceholderExpr(E); if (result.isInvalid()) return ExprError(); E = result.get(); } QualType Ty = E->getType(); assert(!Ty.isNull() && "DefaultFunctionArrayConversion - missing type"); if (Ty->isFunctionType()) { // If we are here, we are not calling a function but taking // its address (which is not allowed in OpenCL v1.0 s6.8.a.3). if (getLangOpts().OpenCL) { if (Diagnose) Diag(E->getExprLoc(), diag::err_opencl_taking_function_address); return ExprError(); } if (auto *DRE = dyn_cast(E->IgnoreParenCasts())) if (auto *FD = dyn_cast(DRE->getDecl())) if (!checkAddressOfFunctionIsAvailable(FD, Diagnose, E->getExprLoc())) return ExprError(); E = ImpCastExprToType(E, Context.getPointerType(Ty), CK_FunctionToPointerDecay).get(); } else if (Ty->isArrayType()) { // In C90 mode, arrays only promote to pointers if the array expression is // an lvalue. The relevant legalese is C90 6.2.2.1p3: "an lvalue that has // type 'array of type' is converted to an expression that has type 'pointer // to type'...". In C99 this was changed to: C99 6.3.2.1p3: "an expression // that has type 'array of type' ...". The relevant change is "an lvalue" // (C90) to "an expression" (C99). // // C++ 4.2p1: // An lvalue or rvalue of type "array of N T" or "array of unknown bound of // T" can be converted to an rvalue of type "pointer to T". // if (getLangOpts().C99 || getLangOpts().CPlusPlus || E->isLValue()) E = ImpCastExprToType(E, Context.getArrayDecayedType(Ty), CK_ArrayToPointerDecay).get(); } return E; } static void CheckForNullPointerDereference(Sema &S, Expr *E) { // Check to see if we are dereferencing a null pointer. If so, // and if not volatile-qualified, this is undefined behavior that the // optimizer will delete, so warn about it. People sometimes try to use this // to get a deterministic trap and are surprised by clang's behavior. This // only handles the pattern "*null", which is a very syntactic check. 
if (UnaryOperator *UO = dyn_cast(E->IgnoreParenCasts())) if (UO->getOpcode() == UO_Deref && UO->getSubExpr()->IgnoreParenCasts()-> isNullPointerConstant(S.Context, Expr::NPC_ValueDependentIsNotNull) && !UO->getType().isVolatileQualified()) { S.DiagRuntimeBehavior(UO->getOperatorLoc(), UO, S.PDiag(diag::warn_indirection_through_null) << UO->getSubExpr()->getSourceRange()); S.DiagRuntimeBehavior(UO->getOperatorLoc(), UO, S.PDiag(diag::note_indirection_through_null)); } } static void DiagnoseDirectIsaAccess(Sema &S, const ObjCIvarRefExpr *OIRE, SourceLocation AssignLoc, const Expr* RHS) { const ObjCIvarDecl *IV = OIRE->getDecl(); if (!IV) return; DeclarationName MemberName = IV->getDeclName(); IdentifierInfo *Member = MemberName.getAsIdentifierInfo(); if (!Member || !Member->isStr("isa")) return; const Expr *Base = OIRE->getBase(); QualType BaseType = Base->getType(); if (OIRE->isArrow()) BaseType = BaseType->getPointeeType(); if (const ObjCObjectType *OTy = BaseType->getAs()) if (ObjCInterfaceDecl *IDecl = OTy->getInterface()) { ObjCInterfaceDecl *ClassDeclared = nullptr; ObjCIvarDecl *IV = IDecl->lookupInstanceVariable(Member, ClassDeclared); if (!ClassDeclared->getSuperClass() && (*ClassDeclared->ivar_begin()) == IV) { if (RHS) { NamedDecl *ObjectSetClass = S.LookupSingleName(S.TUScope, &S.Context.Idents.get("object_setClass"), SourceLocation(), S.LookupOrdinaryName); if (ObjectSetClass) { SourceLocation RHSLocEnd = S.getLocForEndOfToken(RHS->getLocEnd()); S.Diag(OIRE->getExprLoc(), diag::warn_objc_isa_assign) << FixItHint::CreateInsertion(OIRE->getLocStart(), "object_setClass(") << FixItHint::CreateReplacement(SourceRange(OIRE->getOpLoc(), AssignLoc), ",") << FixItHint::CreateInsertion(RHSLocEnd, ")"); } else S.Diag(OIRE->getLocation(), diag::warn_objc_isa_assign); } else { NamedDecl *ObjectGetClass = S.LookupSingleName(S.TUScope, &S.Context.Idents.get("object_getClass"), SourceLocation(), S.LookupOrdinaryName); if (ObjectGetClass) S.Diag(OIRE->getExprLoc(), diag::warn_objc_isa_use) << FixItHint::CreateInsertion(OIRE->getLocStart(), "object_getClass(") << FixItHint::CreateReplacement( SourceRange(OIRE->getOpLoc(), OIRE->getLocEnd()), ")"); else S.Diag(OIRE->getLocation(), diag::warn_objc_isa_use); } S.Diag(IV->getLocation(), diag::note_ivar_decl); } } } ExprResult Sema::DefaultLvalueConversion(Expr *E) { // Handle any placeholder expressions which made it here. if (E->getType()->isPlaceholderType()) { ExprResult result = CheckPlaceholderExpr(E); if (result.isInvalid()) return ExprError(); E = result.get(); } // C++ [conv.lval]p1: // A glvalue of a non-function, non-array type T can be // converted to a prvalue. if (!E->isGLValue()) return E; QualType T = E->getType(); assert(!T.isNull() && "r-value conversion on typeless expression?"); // We don't want to throw lvalue-to-rvalue casts on top of // expressions of certain types in C++. if (getLangOpts().CPlusPlus && (E->getType() == Context.OverloadTy || T->isDependentType() || T->isRecordType())) return E; // The C standard is actually really unclear on this point, and // DR106 tells us what the result should be but not why. It's // generally best to say that void types just don't undergo // lvalue-to-rvalue at all. Note that expressions of unqualified // 'void' type are never l-values, but qualified void can be. if (T->isVoidType()) return E; // OpenCL usually rejects direct accesses to values of 'half' type.
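// A minimal sketch of what this rejects (illustrative OpenCL source, with
// the cl_khr_fp16 extension disabled):
//   half h;                        // declaring a 'half' object is fine
//   float f = h;                   // error: direct load from a 'half'
//   float g = vload_half(0, &h);   // OK: convert through the builtin instead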
if (getLangOpts().OpenCL && !getOpenCLOptions().isEnabled("cl_khr_fp16") && T->isHalfType()) { Diag(E->getExprLoc(), diag::err_opencl_half_load_store) << 0 << T; return ExprError(); } CheckForNullPointerDereference(*this, E); if (const ObjCIsaExpr *OISA = dyn_cast(E->IgnoreParenCasts())) { NamedDecl *ObjectGetClass = LookupSingleName(TUScope, &Context.Idents.get("object_getClass"), SourceLocation(), LookupOrdinaryName); if (ObjectGetClass) Diag(E->getExprLoc(), diag::warn_objc_isa_use) << FixItHint::CreateInsertion(OISA->getLocStart(), "object_getClass(") << FixItHint::CreateReplacement( SourceRange(OISA->getOpLoc(), OISA->getIsaMemberLoc()), ")"); else Diag(E->getExprLoc(), diag::warn_objc_isa_use); } else if (const ObjCIvarRefExpr *OIRE = dyn_cast(E->IgnoreParenCasts())) DiagnoseDirectIsaAccess(*this, OIRE, SourceLocation(), /* Expr*/nullptr); // C++ [conv.lval]p1: // [...] If T is a non-class type, the type of the prvalue is the // cv-unqualified version of T. Otherwise, the type of the // rvalue is T. // // C99 6.3.2.1p2: // If the lvalue has qualified type, the value has the unqualified // version of the type of the lvalue; otherwise, the value has the // type of the lvalue. if (T.hasQualifiers()) T = T.getUnqualifiedType(); // Under the MS ABI, lock down the inheritance model now. if (T->isMemberPointerType() && Context.getTargetInfo().getCXXABI().isMicrosoft()) (void)isCompleteType(E->getExprLoc(), T); UpdateMarkingForLValueToRValue(E); // Loading a __weak object implicitly retains the value, so we need a cleanup to // balance that. if (getLangOpts().ObjCAutoRefCount && E->getType().getObjCLifetime() == Qualifiers::OCL_Weak) Cleanup.setExprNeedsCleanups(true); ExprResult Res = ImplicitCastExpr::Create(Context, T, CK_LValueToRValue, E, nullptr, VK_RValue); // C11 6.3.2.1p2: // ... if the lvalue has atomic type, the value has the non-atomic version // of the type of the lvalue ... if (const AtomicType *Atomic = T->getAs()) { T = Atomic->getValueType().getUnqualifiedType(); Res = ImplicitCastExpr::Create(Context, T, CK_AtomicToNonAtomic, Res.get(), nullptr, VK_RValue); } return Res; } ExprResult Sema::DefaultFunctionArrayLvalueConversion(Expr *E, bool Diagnose) { ExprResult Res = DefaultFunctionArrayConversion(E, Diagnose); if (Res.isInvalid()) return ExprError(); Res = DefaultLvalueConversion(Res.get()); if (Res.isInvalid()) return ExprError(); return Res; } /// CallExprUnaryConversions - a special case of a unary conversion /// performed on a function designator of a call expression. ExprResult Sema::CallExprUnaryConversions(Expr *E) { QualType Ty = E->getType(); ExprResult Res = E; // Only do implicit cast for a function type, but not for a pointer // to function type. if (Ty->isFunctionType()) { Res = ImpCastExprToType(E, Context.getPointerType(Ty), CK_FunctionToPointerDecay).get(); if (Res.isInvalid()) return ExprError(); } Res = DefaultLvalueConversion(Res.get()); if (Res.isInvalid()) return ExprError(); return Res.get(); } /// UsualUnaryConversions - Performs various conversions that are common to most /// operators (C99 6.3). The conversions of array and function types are /// sometimes suppressed. For example, the array->pointer conversion doesn't /// apply if the array is an argument to the sizeof or address (&) operators. /// In these instances, this routine should *not* be called. ExprResult Sema::UsualUnaryConversions(Expr *E) { // First, convert to an r-value.
ExprResult Res = DefaultFunctionArrayLvalueConversion(E); if (Res.isInvalid()) return ExprError(); E = Res.get(); QualType Ty = E->getType(); assert(!Ty.isNull() && "UsualUnaryConversions - missing type"); // Half FP has to be promoted to float unless it is natively supported if (Ty->isHalfType() && !getLangOpts().NativeHalfType) return ImpCastExprToType(Res.get(), Context.FloatTy, CK_FloatingCast); // Try to perform integral promotions if the object has a theoretically // promotable type. if (Ty->isIntegralOrUnscopedEnumerationType()) { // C99 6.3.1.1p2: // // The following may be used in an expression wherever an int or // unsigned int may be used: // - an object or expression with an integer type whose integer // conversion rank is less than or equal to the rank of int // and unsigned int. // - A bit-field of type _Bool, int, signed int, or unsigned int. // // If an int can represent all values of the original type, the // value is converted to an int; otherwise, it is converted to an // unsigned int. These are called the integer promotions. All // other types are unchanged by the integer promotions. QualType PTy = Context.isPromotableBitField(E); if (!PTy.isNull()) { E = ImpCastExprToType(E, PTy, CK_IntegralCast).get(); return E; } if (Ty->isPromotableIntegerType()) { QualType PT = Context.getPromotedIntegerType(Ty); E = ImpCastExprToType(E, PT, CK_IntegralCast).get(); return E; } } return E; } /// DefaultArgumentPromotion (C99 6.5.2.2p6). Used for function calls that /// do not have a prototype. Arguments that have type float or __fp16 /// are promoted to double. All other argument types are converted by /// UsualUnaryConversions(). ExprResult Sema::DefaultArgumentPromotion(Expr *E) { QualType Ty = E->getType(); assert(!Ty.isNull() && "DefaultArgumentPromotion - missing type"); ExprResult Res = UsualUnaryConversions(E); if (Res.isInvalid()) return ExprError(); E = Res.get(); // If this is a 'float' or '__fp16' (CVR qualified or typedef) promote to // double. const BuiltinType *BTy = Ty->getAs(); if (BTy && (BTy->getKind() == BuiltinType::Half || BTy->getKind() == BuiltinType::Float)) { if (getLangOpts().OpenCL && !getOpenCLOptions().isEnabled("cl_khr_fp64")) { if (BTy->getKind() == BuiltinType::Half) { E = ImpCastExprToType(E, Context.FloatTy, CK_FloatingCast).get(); } } else { E = ImpCastExprToType(E, Context.DoubleTy, CK_FloatingCast).get(); } } // C++ performs lvalue-to-rvalue conversion as a default argument // promotion, even on class types, but note: // C++11 [conv.lval]p2: // When an lvalue-to-rvalue conversion occurs in an unevaluated // operand or a subexpression thereof the value contained in the // referenced object is not accessed. Otherwise, if the glvalue // has a class type, the conversion copy-initializes a temporary // of type T from the glvalue and the result of the conversion // is a prvalue for the temporary. // FIXME: add some way to gate this entire thing for correctness in // potentially potentially-evaluated contexts. if (getLangOpts().CPlusPlus && E->isGLValue() && !isUnevaluatedContext()) { ExprResult Temp = PerformCopyInitialization( InitializedEntity::InitializeTemporary(E->getType()), E->getExprLoc(), E); if (Temp.isInvalid()) return ExprError(); E = Temp.get(); } return E; } /// Determine the degree of POD-ness for an expression. /// Incomplete types are considered POD, since this check can be performed /// when we're in an unevaluated context.
Sema::VarArgKind Sema::isValidVarArgType(const QualType &Ty) { if (Ty->isIncompleteType()) { // C++11 [expr.call]p7: // After these conversions, if the argument does not have arithmetic, // enumeration, pointer, pointer to member, or class type, the program // is ill-formed. // // Since we've already performed array-to-pointer and function-to-pointer // decay, the only such type in C++ is cv void. This also handles // initializer lists as variadic arguments. if (Ty->isVoidType()) return VAK_Invalid; if (Ty->isObjCObjectType()) return VAK_Invalid; return VAK_Valid; } if (Ty.isCXX98PODType(Context)) return VAK_Valid; // C++11 [expr.call]p7: // Passing a potentially-evaluated argument of class type (Clause 9) // having a non-trivial copy constructor, a non-trivial move constructor, // or a non-trivial destructor, with no corresponding parameter, // is conditionally-supported with implementation-defined semantics. if (getLangOpts().CPlusPlus11 && !Ty->isDependentType()) if (CXXRecordDecl *Record = Ty->getAsCXXRecordDecl()) if (!Record->hasNonTrivialCopyConstructor() && !Record->hasNonTrivialMoveConstructor() && !Record->hasNonTrivialDestructor()) return VAK_ValidInCXX11; if (getLangOpts().ObjCAutoRefCount && Ty->isObjCLifetimeType()) return VAK_Valid; if (Ty->isObjCObjectType()) return VAK_Invalid; if (getLangOpts().MSVCCompat) return VAK_MSVCUndefined; // FIXME: In C++11, these cases are conditionally-supported, meaning we're // permitted to reject them. We should consider doing so. return VAK_Undefined; } void Sema::checkVariadicArgument(const Expr *E, VariadicCallType CT) { // Don't allow one to pass an Objective-C interface to a vararg. const QualType &Ty = E->getType(); VarArgKind VAK = isValidVarArgType(Ty); // Complain about passing non-POD types through varargs. switch (VAK) { case VAK_ValidInCXX11: DiagRuntimeBehavior( E->getLocStart(), nullptr, PDiag(diag::warn_cxx98_compat_pass_non_pod_arg_to_vararg) << Ty << CT); // Fall through. case VAK_Valid: if (Ty->isRecordType()) { // This is unlikely to be what the user intended. If the class has a // 'c_str' member function, the user probably meant to call that. DiagRuntimeBehavior(E->getLocStart(), nullptr, PDiag(diag::warn_pass_class_arg_to_vararg) << Ty << CT << hasCStrMethod(E) << ".c_str()"); } break; case VAK_Undefined: case VAK_MSVCUndefined: DiagRuntimeBehavior( E->getLocStart(), nullptr, PDiag(diag::warn_cannot_pass_non_pod_arg_to_vararg) << getLangOpts().CPlusPlus11 << Ty << CT); break; case VAK_Invalid: if (Ty->isObjCObjectType()) DiagRuntimeBehavior( E->getLocStart(), nullptr, PDiag(diag::err_cannot_pass_objc_interface_to_vararg) << Ty << CT); else Diag(E->getLocStart(), diag::err_cannot_pass_to_vararg) << isa(E) << Ty << CT; break; } } /// DefaultVariadicArgumentPromotion - Like DefaultArgumentPromotion, but /// will create a trap if the resulting type is not a POD type. ExprResult Sema::DefaultVariadicArgumentPromotion(Expr *E, VariadicCallType CT, FunctionDecl *FDecl) { if (const BuiltinType *PlaceholderTy = E->getType()->getAsPlaceholderType()) { // Strip the unbridged-cast placeholder expression off, if applicable. if (PlaceholderTy->getKind() == BuiltinType::ARCUnbridgedCast && (CT == VariadicMethod || (FDecl && FDecl->hasAttr()))) { E = stripARCUnbridgedCast(E); // Otherwise, do normal placeholder checking. 
} else { ExprResult ExprRes = CheckPlaceholderExpr(E); if (ExprRes.isInvalid()) return ExprError(); E = ExprRes.get(); } } ExprResult ExprRes = DefaultArgumentPromotion(E); if (ExprRes.isInvalid()) return ExprError(); E = ExprRes.get(); // Diagnostics regarding non-POD argument types are // emitted along with format string checking in Sema::CheckFunctionCall(). if (isValidVarArgType(E->getType()) == VAK_Undefined) { // Turn this into a trap. CXXScopeSpec SS; SourceLocation TemplateKWLoc; UnqualifiedId Name; Name.setIdentifier(PP.getIdentifierInfo("__builtin_trap"), E->getLocStart()); ExprResult TrapFn = ActOnIdExpression(TUScope, SS, TemplateKWLoc, Name, true, false); if (TrapFn.isInvalid()) return ExprError(); ExprResult Call = ActOnCallExpr(TUScope, TrapFn.get(), E->getLocStart(), None, E->getLocEnd()); if (Call.isInvalid()) return ExprError(); ExprResult Comma = ActOnBinOp(TUScope, E->getLocStart(), tok::comma, Call.get(), E); if (Comma.isInvalid()) return ExprError(); return Comma.get(); } if (!getLangOpts().CPlusPlus && RequireCompleteType(E->getExprLoc(), E->getType(), diag::err_call_incomplete_argument)) return ExprError(); return E; } /// \brief Converts an integer to complex float type. Helper function of /// UsualArithmeticConversions() /// /// \return false if the integer expression is an integer type and is /// successfully converted to the complex type. static bool handleIntegerToComplexFloatConversion(Sema &S, ExprResult &IntExpr, ExprResult &ComplexExpr, QualType IntTy, QualType ComplexTy, bool SkipCast) { if (IntTy->isComplexType() || IntTy->isRealFloatingType()) return true; if (SkipCast) return false; if (IntTy->isIntegerType()) { QualType fpTy = cast(ComplexTy)->getElementType(); IntExpr = S.ImpCastExprToType(IntExpr.get(), fpTy, CK_IntegralToFloating); IntExpr = S.ImpCastExprToType(IntExpr.get(), ComplexTy, CK_FloatingRealToComplex); } else { assert(IntTy->isComplexIntegerType()); IntExpr = S.ImpCastExprToType(IntExpr.get(), ComplexTy, CK_IntegralComplexToFloatingComplex); } return false; } /// \brief Handle arithmetic conversion with complex types. Helper function of /// UsualArithmeticConversions() static QualType handleComplexFloatConversion(Sema &S, ExprResult &LHS, ExprResult &RHS, QualType LHSType, QualType RHSType, bool IsCompAssign) { // If we have an integer operand, the result is the complex type. if (!handleIntegerToComplexFloatConversion(S, RHS, LHS, RHSType, LHSType, /*skipCast*/false)) return LHSType; if (!handleIntegerToComplexFloatConversion(S, LHS, RHS, LHSType, RHSType, /*skipCast*/IsCompAssign)) return RHSType; // This handles complex/complex, complex/float, or float/complex. // When both operands are complex, the shorter operand is converted to the // type of the longer, and that is the type of the result. This corresponds // to what is done when combining two real floating-point operands. // The fun begins when size promotion occurs across type domains. // From H&S 6.3.4: When one operand is complex and the other is a real // floating-point type, the less precise type is converted, within its // real or complex domain, to the precision of the other type. For example, // when combining a "long double" with a "double _Complex", the // "double _Complex" is promoted to "long double _Complex". // Compute the rank of the two types, regardless of whether they are complex.
int Order = S.Context.getFloatingTypeOrder(LHSType, RHSType); auto *LHSComplexType = dyn_cast(LHSType); auto *RHSComplexType = dyn_cast(RHSType); QualType LHSElementType = LHSComplexType ? LHSComplexType->getElementType() : LHSType; QualType RHSElementType = RHSComplexType ? RHSComplexType->getElementType() : RHSType; QualType ResultType = S.Context.getComplexType(LHSElementType); if (Order < 0) { // Promote the precision of the LHS if not an assignment. ResultType = S.Context.getComplexType(RHSElementType); if (!IsCompAssign) { if (LHSComplexType) LHS = S.ImpCastExprToType(LHS.get(), ResultType, CK_FloatingComplexCast); else LHS = S.ImpCastExprToType(LHS.get(), RHSElementType, CK_FloatingCast); } } else if (Order > 0) { // Promote the precision of the RHS. if (RHSComplexType) RHS = S.ImpCastExprToType(RHS.get(), ResultType, CK_FloatingComplexCast); else RHS = S.ImpCastExprToType(RHS.get(), LHSElementType, CK_FloatingCast); } return ResultType; } /// \brief Handle arithmetic conversion from integer to float. Helper function /// of UsualArithmeticConversions() static QualType handleIntToFloatConversion(Sema &S, ExprResult &FloatExpr, ExprResult &IntExpr, QualType FloatTy, QualType IntTy, bool ConvertFloat, bool ConvertInt) { if (IntTy->isIntegerType()) { if (ConvertInt) // Convert intExpr to the lhs floating point type. IntExpr = S.ImpCastExprToType(IntExpr.get(), FloatTy, CK_IntegralToFloating); return FloatTy; } // Convert both sides to the appropriate complex float. assert(IntTy->isComplexIntegerType()); QualType result = S.Context.getComplexType(FloatTy); // _Complex int -> _Complex float if (ConvertInt) IntExpr = S.ImpCastExprToType(IntExpr.get(), result, CK_IntegralComplexToFloatingComplex); // float -> _Complex float if (ConvertFloat) FloatExpr = S.ImpCastExprToType(FloatExpr.get(), result, CK_FloatingRealToComplex); return result; } /// \brief Handle arithmetic conversion with floating point types. Helper /// function of UsualArithmeticConversions() static QualType handleFloatConversion(Sema &S, ExprResult &LHS, ExprResult &RHS, QualType LHSType, QualType RHSType, bool IsCompAssign) { bool LHSFloat = LHSType->isRealFloatingType(); bool RHSFloat = RHSType->isRealFloatingType(); // If we have two real floating types, convert the smaller operand // to the bigger result. if (LHSFloat && RHSFloat) { int order = S.Context.getFloatingTypeOrder(LHSType, RHSType); if (order > 0) { RHS = S.ImpCastExprToType(RHS.get(), LHSType, CK_FloatingCast); return LHSType; } assert(order < 0 && "illegal float comparison"); if (!IsCompAssign) LHS = S.ImpCastExprToType(LHS.get(), RHSType, CK_FloatingCast); return RHSType; } if (LHSFloat) { // Half FP has to be promoted to float unless it is natively supported if (LHSType->isHalfType() && !S.getLangOpts().NativeHalfType) LHSType = S.Context.FloatTy; return handleIntToFloatConversion(S, LHS, RHS, LHSType, RHSType, /*convertFloat=*/!IsCompAssign, /*convertInt=*/ true); } assert(RHSFloat); return handleIntToFloatConversion(S, RHS, LHS, RHSType, LHSType, /*convertInt=*/ true, /*convertFloat=*/!IsCompAssign); } /// \brief Diagnose attempts to convert between __float128 and long double if /// there is no support for such conversion. Helper function of /// UsualArithmeticConversions(). static bool unsupportedTypeConversion(const Sema &S, QualType LHSType, QualType RHSType) { /* No issue converting if at least one of the types is not a floating point type or the two types have the same rank.
*/ if (!LHSType->isFloatingType() || !RHSType->isFloatingType() || S.Context.getFloatingTypeOrder(LHSType, RHSType) == 0) return false; assert(LHSType->isFloatingType() && RHSType->isFloatingType() && "The remaining types must be floating point types."); auto *LHSComplex = LHSType->getAs(); auto *RHSComplex = RHSType->getAs(); QualType LHSElemType = LHSComplex ? LHSComplex->getElementType() : LHSType; QualType RHSElemType = RHSComplex ? RHSComplex->getElementType() : RHSType; // No issue if the two types have the same representation if (&S.Context.getFloatTypeSemantics(LHSElemType) == &S.Context.getFloatTypeSemantics(RHSElemType)) return false; bool Float128AndLongDouble = (LHSElemType == S.Context.Float128Ty && RHSElemType == S.Context.LongDoubleTy); Float128AndLongDouble |= (LHSElemType == S.Context.LongDoubleTy && RHSElemType == S.Context.Float128Ty); /* We've handled the situation where __float128 and long double have the same representation. The only other allowable conversion is if long double is really just double. */ return Float128AndLongDouble && (&S.Context.getFloatTypeSemantics(S.Context.LongDoubleTy) != &llvm::APFloat::IEEEdouble()); } typedef ExprResult PerformCastFn(Sema &S, Expr *operand, QualType toType); namespace { /// These helper callbacks are placed in an anonymous namespace to /// permit their use as function template parameters. ExprResult doIntegralCast(Sema &S, Expr *op, QualType toType) { return S.ImpCastExprToType(op, toType, CK_IntegralCast); } ExprResult doComplexIntegralCast(Sema &S, Expr *op, QualType toType) { return S.ImpCastExprToType(op, S.Context.getComplexType(toType), CK_IntegralComplexCast); } } /// \brief Handle integer arithmetic conversions. Helper function of /// UsualArithmeticConversions() template static QualType handleIntegerConversion(Sema &S, ExprResult &LHS, ExprResult &RHS, QualType LHSType, QualType RHSType, bool IsCompAssign) { // The rules for this case are in C99 6.3.1.8 int order = S.Context.getIntegerTypeOrder(LHSType, RHSType); bool LHSSigned = LHSType->hasSignedIntegerRepresentation(); bool RHSSigned = RHSType->hasSignedIntegerRepresentation(); if (LHSSigned == RHSSigned) { // Same signedness; use the higher-ranked type if (order >= 0) { RHS = (*doRHSCast)(S, RHS.get(), LHSType); return LHSType; } else if (!IsCompAssign) LHS = (*doLHSCast)(S, LHS.get(), RHSType); return RHSType; } else if (order != (LHSSigned ? 1 : -1)) { // The unsigned type has greater than or equal rank to the // signed type, so use the unsigned type if (RHSSigned) { RHS = (*doRHSCast)(S, RHS.get(), LHSType); return LHSType; } else if (!IsCompAssign) LHS = (*doLHSCast)(S, LHS.get(), RHSType); return RHSType; } else if (S.Context.getIntWidth(LHSType) != S.Context.getIntWidth(RHSType)) { // The two types are different widths; if we are here, that // means the signed type is larger than the unsigned type, so // use the signed type. if (LHSSigned) { RHS = (*doRHSCast)(S, RHS.get(), LHSType); return LHSType; } else if (!IsCompAssign) LHS = (*doLHSCast)(S, LHS.get(), RHSType); return RHSType; } else { // The signed type is higher-ranked than the unsigned type, // but isn't actually any bigger (like unsigned int and long // on most 32-bit systems). Use the unsigned type corresponding // to the signed type. QualType result = S.Context.getCorrespondingUnsignedType(LHSSigned ? 
LHSType : RHSType); RHS = (*doRHSCast)(S, RHS.get(), result); if (!IsCompAssign) LHS = (*doLHSCast)(S, LHS.get(), result); return result; } } /// \brief Handle conversions with GCC complex int extension. Helper function /// of UsualArithmeticConversions() static QualType handleComplexIntConversion(Sema &S, ExprResult &LHS, ExprResult &RHS, QualType LHSType, QualType RHSType, bool IsCompAssign) { const ComplexType *LHSComplexInt = LHSType->getAsComplexIntegerType(); const ComplexType *RHSComplexInt = RHSType->getAsComplexIntegerType(); if (LHSComplexInt && RHSComplexInt) { QualType LHSEltType = LHSComplexInt->getElementType(); QualType RHSEltType = RHSComplexInt->getElementType(); QualType ScalarType = handleIntegerConversion (S, LHS, RHS, LHSEltType, RHSEltType, IsCompAssign); return S.Context.getComplexType(ScalarType); } if (LHSComplexInt) { QualType LHSEltType = LHSComplexInt->getElementType(); QualType ScalarType = handleIntegerConversion (S, LHS, RHS, LHSEltType, RHSType, IsCompAssign); QualType ComplexType = S.Context.getComplexType(ScalarType); RHS = S.ImpCastExprToType(RHS.get(), ComplexType, CK_IntegralRealToComplex); return ComplexType; } assert(RHSComplexInt); QualType RHSEltType = RHSComplexInt->getElementType(); QualType ScalarType = handleIntegerConversion (S, LHS, RHS, LHSType, RHSEltType, IsCompAssign); QualType ComplexType = S.Context.getComplexType(ScalarType); if (!IsCompAssign) LHS = S.ImpCastExprToType(LHS.get(), ComplexType, CK_IntegralRealToComplex); return ComplexType; } /// UsualArithmeticConversions - Performs various conversions that are common to /// binary operators (C99 6.3.1.8). If both operands aren't arithmetic, this /// routine returns the first non-arithmetic type found. The client is /// responsible for emitting appropriate error diagnostics. QualType Sema::UsualArithmeticConversions(ExprResult &LHS, ExprResult &RHS, bool IsCompAssign) { if (!IsCompAssign) { LHS = UsualUnaryConversions(LHS.get()); if (LHS.isInvalid()) return QualType(); } RHS = UsualUnaryConversions(RHS.get()); if (RHS.isInvalid()) return QualType(); // For conversion purposes, we ignore any qualifiers. // For example, "const float" and "float" are equivalent. QualType LHSType = Context.getCanonicalType(LHS.get()->getType()).getUnqualifiedType(); QualType RHSType = Context.getCanonicalType(RHS.get()->getType()).getUnqualifiedType(); // For conversion purposes, we ignore any atomic qualifier on the LHS. if (const AtomicType *AtomicLHS = LHSType->getAs()) LHSType = AtomicLHS->getValueType(); // If both types are identical, no conversion is needed. if (LHSType == RHSType) return LHSType; // If either side is a non-arithmetic type (e.g. a pointer), we are done. // The caller can deal with this (e.g. pointer + int). if (!LHSType->isArithmeticType() || !RHSType->isArithmeticType()) return QualType(); // Apply unary and bitfield promotions to the LHS's type. QualType LHSUnpromotedType = LHSType; if (LHSType->isPromotableIntegerType()) LHSType = Context.getPromotedIntegerType(LHSType); QualType LHSBitfieldPromoteTy = Context.isPromotableBitField(LHS.get()); if (!LHSBitfieldPromoteTy.isNull()) LHSType = LHSBitfieldPromoteTy; if (LHSType != LHSUnpromotedType && !IsCompAssign) LHS = ImpCastExprToType(LHS.get(), LHSType, CK_IntegralCast); // If both types are identical, no conversion is needed. if (LHSType == RHSType) return LHSType; // At this point, we have two different arithmetic types. 
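// For instance (illustrative): '1 + 2.0f' converts the int operand to
// float; '1u + -1' keeps both operands at unsigned int rank (so the result
// is UINT_MAX); and two 'short' operands were already promoted to int by
// the code above, so they never reach the later per-domain handlers.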
// Diagnose attempts to convert between __float128 and long double where // such conversions currently can't be handled. if (unsupportedTypeConversion(*this, LHSType, RHSType)) return QualType(); // Handle complex types first (C99 6.3.1.8p1). if (LHSType->isComplexType() || RHSType->isComplexType()) return handleComplexFloatConversion(*this, LHS, RHS, LHSType, RHSType, IsCompAssign); // Now handle "real" floating types (i.e. float, double, long double). if (LHSType->isRealFloatingType() || RHSType->isRealFloatingType()) return handleFloatConversion(*this, LHS, RHS, LHSType, RHSType, IsCompAssign); // Handle GCC complex int extension. if (LHSType->isComplexIntegerType() || RHSType->isComplexIntegerType()) return handleComplexIntConversion(*this, LHS, RHS, LHSType, RHSType, IsCompAssign); // Finally, we have two differing integer types. return handleIntegerConversion (*this, LHS, RHS, LHSType, RHSType, IsCompAssign); } //===----------------------------------------------------------------------===// // Semantic Analysis for various Expression Types //===----------------------------------------------------------------------===// ExprResult Sema::ActOnGenericSelectionExpr(SourceLocation KeyLoc, SourceLocation DefaultLoc, SourceLocation RParenLoc, Expr *ControllingExpr, ArrayRef ArgTypes, ArrayRef ArgExprs) { unsigned NumAssocs = ArgTypes.size(); assert(NumAssocs == ArgExprs.size()); TypeSourceInfo **Types = new TypeSourceInfo*[NumAssocs]; for (unsigned i = 0; i < NumAssocs; ++i) { if (ArgTypes[i]) (void) GetTypeFromParser(ArgTypes[i], &Types[i]); else Types[i] = nullptr; } ExprResult ER = CreateGenericSelectionExpr(KeyLoc, DefaultLoc, RParenLoc, ControllingExpr, llvm::makeArrayRef(Types, NumAssocs), ArgExprs); delete [] Types; return ER; } ExprResult Sema::CreateGenericSelectionExpr(SourceLocation KeyLoc, SourceLocation DefaultLoc, SourceLocation RParenLoc, Expr *ControllingExpr, ArrayRef Types, ArrayRef Exprs) { unsigned NumAssocs = Types.size(); assert(NumAssocs == Exprs.size()); // Decay and strip qualifiers for the controlling expression type, and handle // placeholder type replacement. See committee discussion from WG14 DR423. { EnterExpressionEvaluationContext Unevaluated(*this, Sema::Unevaluated); ExprResult R = DefaultFunctionArrayLvalueConversion(ControllingExpr); if (R.isInvalid()) return ExprError(); ControllingExpr = R.get(); } // The controlling expression is an unevaluated operand, so side effects are // likely unintended. if (ActiveTemplateInstantiations.empty() && ControllingExpr->HasSideEffects(Context, false)) Diag(ControllingExpr->getExprLoc(), diag::warn_side_effects_unevaluated_context); bool TypeErrorFound = false, IsResultDependent = ControllingExpr->isTypeDependent(), ContainsUnexpandedParameterPack = ControllingExpr->containsUnexpandedParameterPack(); for (unsigned i = 0; i < NumAssocs; ++i) { if (Exprs[i]->containsUnexpandedParameterPack()) ContainsUnexpandedParameterPack = true; if (Types[i]) { if (Types[i]->getType()->containsUnexpandedParameterPack()) ContainsUnexpandedParameterPack = true; if (Types[i]->getType()->isDependentType()) { IsResultDependent = true; } else { // C11 6.5.1.1p2 "The type name in a generic association shall specify a // complete object type other than a variably modified type." 
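// The classic illustration of a generic selection (adapted from the example
// in C11 6.5.1.1):
//   #define cbrt(X) _Generic((X), long double: cbrtl, default: cbrt, float: cbrtf)(X)
// Each type name in the association list must be a complete object type
// that is not variably modified, which is what the checks below enforce.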
unsigned D = 0; if (Types[i]->getType()->isIncompleteType()) D = diag::err_assoc_type_incomplete; else if (!Types[i]->getType()->isObjectType()) D = diag::err_assoc_type_nonobject; else if (Types[i]->getType()->isVariablyModifiedType()) D = diag::err_assoc_type_variably_modified; if (D != 0) { Diag(Types[i]->getTypeLoc().getBeginLoc(), D) << Types[i]->getTypeLoc().getSourceRange() << Types[i]->getType(); TypeErrorFound = true; } // C11 6.5.1.1p2 "No two generic associations in the same generic // selection shall specify compatible types." for (unsigned j = i+1; j < NumAssocs; ++j) if (Types[j] && !Types[j]->getType()->isDependentType() && Context.typesAreCompatible(Types[i]->getType(), Types[j]->getType())) { Diag(Types[j]->getTypeLoc().getBeginLoc(), diag::err_assoc_compatible_types) << Types[j]->getTypeLoc().getSourceRange() << Types[j]->getType() << Types[i]->getType(); Diag(Types[i]->getTypeLoc().getBeginLoc(), diag::note_compat_assoc) << Types[i]->getTypeLoc().getSourceRange() << Types[i]->getType(); TypeErrorFound = true; } } } } if (TypeErrorFound) return ExprError(); // If we determined that the generic selection is result-dependent, don't // try to compute the result expression. if (IsResultDependent) return new (Context) GenericSelectionExpr( Context, KeyLoc, ControllingExpr, Types, Exprs, DefaultLoc, RParenLoc, ContainsUnexpandedParameterPack); SmallVector CompatIndices; unsigned DefaultIndex = -1U; for (unsigned i = 0; i < NumAssocs; ++i) { if (!Types[i]) DefaultIndex = i; else if (Context.typesAreCompatible(ControllingExpr->getType(), Types[i]->getType())) CompatIndices.push_back(i); } // C11 6.5.1.1p2 "The controlling expression of a generic selection shall have // type compatible with at most one of the types named in its generic // association list." if (CompatIndices.size() > 1) { // We strip parens here because the controlling expression is typically // parenthesized in macro definitions. ControllingExpr = ControllingExpr->IgnoreParens(); Diag(ControllingExpr->getLocStart(), diag::err_generic_sel_multi_match) << ControllingExpr->getSourceRange() << ControllingExpr->getType() << (unsigned) CompatIndices.size(); for (unsigned I : CompatIndices) { Diag(Types[I]->getTypeLoc().getBeginLoc(), diag::note_compat_assoc) << Types[I]->getTypeLoc().getSourceRange() << Types[I]->getType(); } return ExprError(); } // C11 6.5.1.1p2 "If a generic selection has no default generic association, // its controlling expression shall have type compatible with exactly one of // the types named in its generic association list." if (DefaultIndex == -1U && CompatIndices.size() == 0) { // We strip parens here because the controlling expression is typically // parenthesized in macro definitions. ControllingExpr = ControllingExpr->IgnoreParens(); Diag(ControllingExpr->getLocStart(), diag::err_generic_sel_no_match) << ControllingExpr->getSourceRange() << ControllingExpr->getType(); return ExprError(); } // C11 6.5.1.1p3 "If a generic selection has a generic association with a // type name that is compatible with the type of the controlling expression, // then the result expression of the generic selection is the expression // in that generic association. Otherwise, the result expression of the // generic selection is the expression in the default generic association." unsigned ResultIndex = CompatIndices.size() ? 
CompatIndices[0] : DefaultIndex; return new (Context) GenericSelectionExpr( Context, KeyLoc, ControllingExpr, Types, Exprs, DefaultLoc, RParenLoc, ContainsUnexpandedParameterPack, ResultIndex); } /// getUDSuffixLoc - Create a SourceLocation for a ud-suffix, given the /// location of the token and the offset of the ud-suffix within it. static SourceLocation getUDSuffixLoc(Sema &S, SourceLocation TokLoc, unsigned Offset) { return Lexer::AdvanceToTokenCharacter(TokLoc, Offset, S.getSourceManager(), S.getLangOpts()); } /// BuildCookedLiteralOperatorCall - A user-defined literal was found. Look up /// the corresponding cooked (non-raw) literal operator, and build a call to it. static ExprResult BuildCookedLiteralOperatorCall(Sema &S, Scope *Scope, IdentifierInfo *UDSuffix, SourceLocation UDSuffixLoc, ArrayRef Args, SourceLocation LitEndLoc) { assert(Args.size() <= 2 && "too many arguments for literal operator"); QualType ArgTy[2]; for (unsigned ArgIdx = 0; ArgIdx != Args.size(); ++ArgIdx) { ArgTy[ArgIdx] = Args[ArgIdx]->getType(); if (ArgTy[ArgIdx]->isArrayType()) ArgTy[ArgIdx] = S.Context.getArrayDecayedType(ArgTy[ArgIdx]); } DeclarationName OpName = S.Context.DeclarationNames.getCXXLiteralOperatorName(UDSuffix); DeclarationNameInfo OpNameInfo(OpName, UDSuffixLoc); OpNameInfo.setCXXLiteralOperatorNameLoc(UDSuffixLoc); LookupResult R(S, OpName, UDSuffixLoc, Sema::LookupOrdinaryName); if (S.LookupLiteralOperator(Scope, R, llvm::makeArrayRef(ArgTy, Args.size()), /*AllowRaw*/false, /*AllowTemplate*/false, /*AllowStringTemplate*/false) == Sema::LOLR_Error) return ExprError(); return S.BuildLiteralOperatorCall(R, OpNameInfo, Args, LitEndLoc); } /// ActOnStringLiteral - The specified tokens were lexed as pasted string /// fragments (e.g. "foo" "bar" L"baz"). The result string has to handle string /// concatenation ([C99 5.1.1.2, translation phase #6]), so it may come from /// multiple tokens. However, the common case is that StringToks points to one /// string. /// ExprResult Sema::ActOnStringLiteral(ArrayRef StringToks, Scope *UDLScope) { assert(!StringToks.empty() && "Must have at least one string!"); StringLiteralParser Literal(StringToks, PP); if (Literal.hadError) return ExprError(); SmallVector StringTokLocs; for (const Token &Tok : StringToks) StringTokLocs.push_back(Tok.getLocation()); QualType CharTy = Context.CharTy; StringLiteral::StringKind Kind = StringLiteral::Ascii; if (Literal.isWide()) { CharTy = Context.getWideCharType(); Kind = StringLiteral::Wide; } else if (Literal.isUTF8()) { Kind = StringLiteral::UTF8; } else if (Literal.isUTF16()) { CharTy = Context.Char16Ty; Kind = StringLiteral::UTF16; } else if (Literal.isUTF32()) { CharTy = Context.Char32Ty; Kind = StringLiteral::UTF32; } else if (Literal.isPascal()) { CharTy = Context.UnsignedCharTy; } QualType CharTyConst = CharTy; // A C++ string literal has a const-qualified element type (C++ 2.13.4p1). if (getLangOpts().CPlusPlus || getLangOpts().ConstStrings) CharTyConst.addConst(); // Get an array type for the string, according to C99 6.4.5. This includes // the nul terminator character as well as the string length for pascal // strings. QualType StrTy = Context.getConstantArrayType(CharTyConst, llvm::APInt(32, Literal.GetNumStringChars()+1), ArrayType::Normal, 0); // OpenCL v1.1 s6.5.3: a string literal is in the constant address space. if (getLangOpts().OpenCL) { StrTy = Context.getAddrSpaceQualType(StrTy, LangAS::opencl_constant); } // Pass &StringTokLocs[0], StringTokLocs.size() to factory! 
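// As an illustration of the user-defined-literal path handled below
// (hypothetical suffix and operator, not part of this file):
//   std::string operator""_s(const char *str, std::size_t len);
//   auto s = "abc"_s;   // treated as the call operator""_s("abc", 3)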
StringLiteral *Lit = StringLiteral::Create(Context, Literal.GetString(), Kind, Literal.Pascal, StrTy, &StringTokLocs[0], StringTokLocs.size()); if (Literal.getUDSuffix().empty()) return Lit; // We're building a user-defined literal. IdentifierInfo *UDSuffix = &Context.Idents.get(Literal.getUDSuffix()); SourceLocation UDSuffixLoc = getUDSuffixLoc(*this, StringTokLocs[Literal.getUDSuffixToken()], Literal.getUDSuffixOffset()); // Make sure we're allowed user-defined literals here. if (!UDLScope) return ExprError(Diag(UDSuffixLoc, diag::err_invalid_string_udl)); // C++11 [lex.ext]p5: The literal L is treated as a call of the form // operator "" X (str, len) QualType SizeType = Context.getSizeType(); DeclarationName OpName = Context.DeclarationNames.getCXXLiteralOperatorName(UDSuffix); DeclarationNameInfo OpNameInfo(OpName, UDSuffixLoc); OpNameInfo.setCXXLiteralOperatorNameLoc(UDSuffixLoc); QualType ArgTy[] = { Context.getArrayDecayedType(StrTy), SizeType }; LookupResult R(*this, OpName, UDSuffixLoc, LookupOrdinaryName); switch (LookupLiteralOperator(UDLScope, R, ArgTy, /*AllowRaw*/false, /*AllowTemplate*/false, /*AllowStringTemplate*/true)) { case LOLR_Cooked: { llvm::APInt Len(Context.getIntWidth(SizeType), Literal.GetNumStringChars()); IntegerLiteral *LenArg = IntegerLiteral::Create(Context, Len, SizeType, StringTokLocs[0]); Expr *Args[] = { Lit, LenArg }; return BuildLiteralOperatorCall(R, OpNameInfo, Args, StringTokLocs.back()); } case LOLR_StringTemplate: { TemplateArgumentListInfo ExplicitArgs; unsigned CharBits = Context.getIntWidth(CharTy); bool CharIsUnsigned = CharTy->isUnsignedIntegerType(); llvm::APSInt Value(CharBits, CharIsUnsigned); TemplateArgument TypeArg(CharTy); TemplateArgumentLocInfo TypeArgInfo(Context.getTrivialTypeSourceInfo(CharTy)); ExplicitArgs.addArgument(TemplateArgumentLoc(TypeArg, TypeArgInfo)); for (unsigned I = 0, N = Lit->getLength(); I != N; ++I) { Value = Lit->getCodeUnit(I); TemplateArgument Arg(Context, Value, CharTy); TemplateArgumentLocInfo ArgInfo; ExplicitArgs.addArgument(TemplateArgumentLoc(Arg, ArgInfo)); } return BuildLiteralOperatorCall(R, OpNameInfo, None, StringTokLocs.back(), &ExplicitArgs); } case LOLR_Raw: case LOLR_Template: llvm_unreachable("unexpected literal operator lookup result"); case LOLR_Error: return ExprError(); } llvm_unreachable("unexpected literal operator lookup result"); } ExprResult Sema::BuildDeclRefExpr(ValueDecl *D, QualType Ty, ExprValueKind VK, SourceLocation Loc, const CXXScopeSpec *SS) { DeclarationNameInfo NameInfo(D->getDeclName(), Loc); return BuildDeclRefExpr(D, Ty, VK, NameInfo, SS); } /// BuildDeclRefExpr - Build an expression that references a /// declaration that does not require a closure capture. ExprResult Sema::BuildDeclRefExpr(ValueDecl *D, QualType Ty, ExprValueKind VK, const DeclarationNameInfo &NameInfo, const CXXScopeSpec *SS, NamedDecl *FoundD, const TemplateArgumentListInfo *TemplateArgs) { bool RefersToCapturedVariable = isa(D) && NeedToCaptureVariable(cast(D), NameInfo.getLoc()); DeclRefExpr *E; if (isa(D)) { VarTemplateSpecializationDecl *VarSpec = cast(D); E = DeclRefExpr::Create(Context, SS ? SS->getWithLocInContext(Context) : NestedNameSpecifierLoc(), VarSpec->getTemplateKeywordLoc(), D, RefersToCapturedVariable, NameInfo.getLoc(), Ty, VK, FoundD, TemplateArgs); } else { assert(!TemplateArgs && "No template arguments for non-variable" " template specialization references"); E = DeclRefExpr::Create(Context, SS ? 
SS->getWithLocInContext(Context) : NestedNameSpecifierLoc(), SourceLocation(), D, RefersToCapturedVariable, NameInfo, Ty, VK, FoundD); } MarkDeclRefReferenced(E); if (getLangOpts().ObjCWeak && isa(D) && Ty.getObjCLifetime() == Qualifiers::OCL_Weak && !Diags.isIgnored(diag::warn_arc_repeated_use_of_weak, E->getLocStart())) recordUseOfEvaluatedWeak(E); if (FieldDecl *FD = dyn_cast(D)) { UnusedPrivateFields.remove(FD); // Just in case we're building an illegal pointer-to-member. if (FD->isBitField()) E->setObjectKind(OK_BitField); } // C++ [expr.prim]/8: The expression [...] is a bit-field if the identifier // designates a bit-field. if (auto *BD = dyn_cast(D)) if (auto *BE = BD->getBinding()) E->setObjectKind(BE->getObjectKind()); return E; } /// Decomposes the given name into a DeclarationNameInfo, its location, and /// possibly a list of template arguments. /// /// If this produces template arguments, it is permitted to call /// DecomposeTemplateName. /// /// This actually loses a lot of source location information for /// non-standard name kinds; we should consider preserving that in /// some way. void Sema::DecomposeUnqualifiedId(const UnqualifiedId &Id, TemplateArgumentListInfo &Buffer, DeclarationNameInfo &NameInfo, const TemplateArgumentListInfo *&TemplateArgs) { if (Id.getKind() == UnqualifiedId::IK_TemplateId) { Buffer.setLAngleLoc(Id.TemplateId->LAngleLoc); Buffer.setRAngleLoc(Id.TemplateId->RAngleLoc); ASTTemplateArgsPtr TemplateArgsPtr(Id.TemplateId->getTemplateArgs(), Id.TemplateId->NumArgs); translateTemplateArguments(TemplateArgsPtr, Buffer); TemplateName TName = Id.TemplateId->Template.get(); SourceLocation TNameLoc = Id.TemplateId->TemplateNameLoc; NameInfo = Context.getNameForTemplate(TName, TNameLoc); TemplateArgs = &Buffer; } else { NameInfo = GetNameFromUnqualifiedId(Id); TemplateArgs = nullptr; } } static void emitEmptyLookupTypoDiagnostic( const TypoCorrection &TC, Sema &SemaRef, const CXXScopeSpec &SS, DeclarationName Typo, SourceLocation TypoLoc, ArrayRef Args, unsigned DiagnosticID, unsigned DiagnosticSuggestID) { DeclContext *Ctx = SS.isEmpty() ? nullptr : SemaRef.computeDeclContext(SS, false); if (!TC) { // Emit a special diagnostic for failed member lookups. // FIXME: computing the declaration context might fail here (?) if (Ctx) SemaRef.Diag(TypoLoc, diag::err_no_member) << Typo << Ctx << SS.getRange(); else SemaRef.Diag(TypoLoc, DiagnosticID) << Typo; return; } std::string CorrectedStr = TC.getAsString(SemaRef.getLangOpts()); bool DroppedSpecifier = TC.WillReplaceSpecifier() && Typo.getAsString() == CorrectedStr; unsigned NoteID = TC.getCorrectionDeclAs() ? diag::note_implicit_param_decl : diag::note_previous_decl; if (!Ctx) SemaRef.diagnoseTypo(TC, SemaRef.PDiag(DiagnosticSuggestID) << Typo, SemaRef.PDiag(NoteID)); else SemaRef.diagnoseTypo(TC, SemaRef.PDiag(diag::err_no_member_suggest) << Typo << Ctx << DroppedSpecifier << SS.getRange(), SemaRef.PDiag(NoteID)); } /// Diagnose an empty lookup. 
///
/// \return false if new lookup candidates were found
bool Sema::DiagnoseEmptyLookup(Scope *S, CXXScopeSpec &SS, LookupResult &R,
                               std::unique_ptr<CorrectionCandidateCallback> CCC,
                               TemplateArgumentListInfo *ExplicitTemplateArgs,
                               ArrayRef<Expr *> Args, TypoExpr **Out) {
  DeclarationName Name = R.getLookupName();

  unsigned diagnostic = diag::err_undeclared_var_use;
  unsigned diagnostic_suggest = diag::err_undeclared_var_use_suggest;
  if (Name.getNameKind() == DeclarationName::CXXOperatorName ||
      Name.getNameKind() == DeclarationName::CXXLiteralOperatorName ||
      Name.getNameKind() == DeclarationName::CXXConversionFunctionName) {
    diagnostic = diag::err_undeclared_use;
    diagnostic_suggest = diag::err_undeclared_use_suggest;
  }

  // If the original lookup was an unqualified lookup, fake an
  // unqualified lookup.  This is useful when (for example) the
  // original lookup would not have found something because it was a
  // dependent name.
  DeclContext *DC = SS.isEmpty() ? CurContext : nullptr;
  while (DC) {
    if (isa<CXXRecordDecl>(DC)) {
      LookupQualifiedName(R, DC);

      if (!R.empty()) {
        // Don't give errors about ambiguities in this lookup.
        R.suppressDiagnostics();

        // During a default argument instantiation the CurContext points
        // to a CXXMethodDecl; but we can't apply a this-> fixit inside a
        // function parameter list, hence add an explicit check.
        bool isDefaultArgument =
            !ActiveTemplateInstantiations.empty() &&
            ActiveTemplateInstantiations.back().Kind ==
                ActiveTemplateInstantiation::DefaultFunctionArgumentInstantiation;
        CXXMethodDecl *CurMethod = dyn_cast<CXXMethodDecl>(CurContext);
        bool isInstance = CurMethod && CurMethod->isInstance() &&
                          DC == CurMethod->getParent() && !isDefaultArgument;

        // Give a code modification hint to insert 'this->'.
        // TODO: fixit for inserting 'Base<T>::' in the other cases.
        // Actually quite difficult!
        if (getLangOpts().MSVCCompat)
          diagnostic = diag::ext_found_via_dependent_bases_lookup;
        if (isInstance) {
          Diag(R.getNameLoc(), diagnostic) << Name
            << FixItHint::CreateInsertion(R.getNameLoc(), "this->");
          CheckCXXThisCapture(R.getNameLoc());
        } else {
          Diag(R.getNameLoc(), diagnostic) << Name;
        }

        // Do we really want to note all of these?
        for (NamedDecl *D : R)
          Diag(D->getLocation(), diag::note_dependent_var_use);

        // Return true if we are inside a default argument instantiation
        // and the found name refers to an instance member function;
        // otherwise the caller of DiagnoseEmptyLookup would try to build
        // an implicit member call, which is wrong inside a default
        // argument.
        if (isDefaultArgument && ((*R.begin())->isCXXInstanceMember())) {
          Diag(R.getNameLoc(), diag::err_member_call_without_object);
          return true;
        }

        // Tell the callee to try to recover.
        return false;
      }

      R.clear();
    }

    // In Microsoft mode, if we are performing lookup from within a friend
    // function definition declared at class scope then we must set
    // DC to the lexical parent to be able to search into the parent
    // class.
    if (getLangOpts().MSVCCompat && isa<FunctionDecl>(DC) &&
        cast<FunctionDecl>(DC)->getFriendObjectKind() &&
        DC->getLexicalParent()->isRecord())
      DC = DC->getLexicalParent();
    else
      DC = DC->getParent();
  }

  // We didn't find anything, so try to correct for a typo.
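  // For illustration only (hypothetical user code), this is the path that
  // turns
  //
  //   int SomeCounter;
  //   void touch() { SomeCountr = 0; }
  //
  // into "use of undeclared identifier 'SomeCountr'; did you mean
  // 'SomeCounter'?", together with a fix-it when recovery looks safe.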
TypoCorrection Corrected; if (S && Out) { SourceLocation TypoLoc = R.getNameLoc(); assert(!ExplicitTemplateArgs && "Diagnosing an empty lookup with explicit template args!"); *Out = CorrectTypoDelayed( R.getLookupNameInfo(), R.getLookupKind(), S, &SS, std::move(CCC), [=](const TypoCorrection &TC) { emitEmptyLookupTypoDiagnostic(TC, *this, SS, Name, TypoLoc, Args, diagnostic, diagnostic_suggest); }, nullptr, CTK_ErrorRecovery); if (*Out) return true; } else if (S && (Corrected = CorrectTypo(R.getLookupNameInfo(), R.getLookupKind(), S, &SS, std::move(CCC), CTK_ErrorRecovery))) { std::string CorrectedStr(Corrected.getAsString(getLangOpts())); bool DroppedSpecifier = Corrected.WillReplaceSpecifier() && Name.getAsString() == CorrectedStr; R.setLookupName(Corrected.getCorrection()); bool AcceptableWithRecovery = false; bool AcceptableWithoutRecovery = false; NamedDecl *ND = Corrected.getFoundDecl(); if (ND) { if (Corrected.isOverloaded()) { OverloadCandidateSet OCS(R.getNameLoc(), OverloadCandidateSet::CSK_Normal); OverloadCandidateSet::iterator Best; for (NamedDecl *CD : Corrected) { if (FunctionTemplateDecl *FTD = dyn_cast(CD)) AddTemplateOverloadCandidate( FTD, DeclAccessPair::make(FTD, AS_none), ExplicitTemplateArgs, Args, OCS); else if (FunctionDecl *FD = dyn_cast(CD)) if (!ExplicitTemplateArgs || ExplicitTemplateArgs->size() == 0) AddOverloadCandidate(FD, DeclAccessPair::make(FD, AS_none), Args, OCS); } switch (OCS.BestViableFunction(*this, R.getNameLoc(), Best)) { case OR_Success: ND = Best->FoundDecl; Corrected.setCorrectionDecl(ND); break; default: // FIXME: Arbitrarily pick the first declaration for the note. Corrected.setCorrectionDecl(ND); break; } } R.addDecl(ND); if (getLangOpts().CPlusPlus && ND->isCXXClassMember()) { CXXRecordDecl *Record = nullptr; if (Corrected.getCorrectionSpecifier()) { const Type *Ty = Corrected.getCorrectionSpecifier()->getAsType(); Record = Ty->getAsCXXRecordDecl(); } if (!Record) Record = cast( ND->getDeclContext()->getRedeclContext()); R.setNamingClass(Record); } auto *UnderlyingND = ND->getUnderlyingDecl(); AcceptableWithRecovery = isa(UnderlyingND) || isa(UnderlyingND); // FIXME: If we ended up with a typo for a type name or // Objective-C class name, we're in trouble because the parser // is in the wrong place to recover. Suggest the typo // correction, but don't make it a fix-it since we're not going // to recover well anyway. AcceptableWithoutRecovery = isa(UnderlyingND) || isa(UnderlyingND); } else { // FIXME: We found a keyword. Suggest it, but don't provide a fix-it // because we aren't able to recover. AcceptableWithoutRecovery = true; } if (AcceptableWithRecovery || AcceptableWithoutRecovery) { unsigned NoteID = Corrected.getCorrectionDeclAs() ? diag::note_implicit_param_decl : diag::note_previous_decl; if (SS.isEmpty()) diagnoseTypo(Corrected, PDiag(diagnostic_suggest) << Name, PDiag(NoteID), AcceptableWithRecovery); else diagnoseTypo(Corrected, PDiag(diag::err_no_member_suggest) << Name << computeDeclContext(SS, false) << DroppedSpecifier << SS.getRange(), PDiag(NoteID), AcceptableWithRecovery); // Tell the callee whether to try to recover. return !AcceptableWithRecovery; } } R.clear(); // Emit a special diagnostic for failed member lookups. // FIXME: computing the declaration context might fail here (?) if (!SS.isEmpty()) { Diag(R.getNameLoc(), diag::err_no_member) << Name << computeDeclContext(SS, false) << SS.getRange(); return true; } // Give up, we can't recover. 
Diag(R.getNameLoc(), diagnostic) << Name; return true; } /// In Microsoft mode, if we are inside a template class whose parent class has /// dependent base classes, and we can't resolve an unqualified identifier, then /// assume the identifier is a member of a dependent base class. We can only /// recover successfully in static methods, instance methods, and other contexts /// where 'this' is available. This doesn't precisely match MSVC's /// instantiation model, but it's close enough. static Expr * recoverFromMSUnqualifiedLookup(Sema &S, ASTContext &Context, DeclarationNameInfo &NameInfo, SourceLocation TemplateKWLoc, const TemplateArgumentListInfo *TemplateArgs) { // Only try to recover from lookup into dependent bases in static methods or // contexts where 'this' is available. QualType ThisType = S.getCurrentThisType(); const CXXRecordDecl *RD = nullptr; if (!ThisType.isNull()) RD = ThisType->getPointeeType()->getAsCXXRecordDecl(); else if (auto *MD = dyn_cast(S.CurContext)) RD = MD->getParent(); if (!RD || !RD->hasAnyDependentBases()) return nullptr; // Diagnose this as unqualified lookup into a dependent base class. If 'this' // is available, suggest inserting 'this->' as a fixit. SourceLocation Loc = NameInfo.getLoc(); auto DB = S.Diag(Loc, diag::ext_undeclared_unqual_id_with_dependent_base); DB << NameInfo.getName() << RD; if (!ThisType.isNull()) { DB << FixItHint::CreateInsertion(Loc, "this->"); return CXXDependentScopeMemberExpr::Create( Context, /*This=*/nullptr, ThisType, /*IsArrow=*/true, /*Op=*/SourceLocation(), NestedNameSpecifierLoc(), TemplateKWLoc, /*FirstQualifierInScope=*/nullptr, NameInfo, TemplateArgs); } // Synthesize a fake NNS that points to the derived class. This will // perform name lookup during template instantiation. CXXScopeSpec SS; auto *NNS = NestedNameSpecifier::Create(Context, nullptr, true, RD->getTypeForDecl()); SS.MakeTrivial(Context, NNS, SourceRange(Loc, Loc)); return DependentScopeDeclRefExpr::Create( Context, SS.getWithLocInContext(Context), TemplateKWLoc, NameInfo, TemplateArgs); } ExprResult Sema::ActOnIdExpression(Scope *S, CXXScopeSpec &SS, SourceLocation TemplateKWLoc, UnqualifiedId &Id, bool HasTrailingLParen, bool IsAddressOfOperand, std::unique_ptr CCC, bool IsInlineAsmIdentifier, Token *KeywordReplacement) { assert(!(IsAddressOfOperand && HasTrailingLParen) && "cannot be direct & operand and have a trailing lparen"); if (SS.isInvalid()) return ExprError(); TemplateArgumentListInfo TemplateArgsBuffer; // Decompose the UnqualifiedId into the following data. DeclarationNameInfo NameInfo; const TemplateArgumentListInfo *TemplateArgs; DecomposeUnqualifiedId(Id, TemplateArgsBuffer, NameInfo, TemplateArgs); DeclarationName Name = NameInfo.getName(); IdentifierInfo *II = Name.getAsIdentifierInfo(); SourceLocation NameLoc = NameInfo.getLoc(); // C++ [temp.dep.expr]p3: // An id-expression is type-dependent if it contains: // -- an identifier that was declared with a dependent type, // (note: handled after lookup) // -- a template-id that is dependent, // (note: handled in BuildTemplateIdExpr) // -- a conversion-function-id that specifies a dependent type, // -- a nested-name-specifier that contains a class-name that // names a dependent type. // Determine whether this is a member of an unknown specialization; // we need to handle these differently. 
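  // A minimal sketch of the Microsoft-compatibility recovery implemented in
  // recoverFromMSUnqualifiedLookup above (hypothetical user code, compiled
  // with -fms-compatibility): 'member' is not found by unqualified lookup at
  // parse time, so it is assumed to name a member of the dependent base and
  // is looked up again at instantiation time:
  //
  //   template <typename T> struct Derived : T {
  //     int get() { return member; }  // treated as this->member
  //   };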
bool DependentID = false; if (Name.getNameKind() == DeclarationName::CXXConversionFunctionName && Name.getCXXNameType()->isDependentType()) { DependentID = true; } else if (SS.isSet()) { if (DeclContext *DC = computeDeclContext(SS, false)) { if (RequireCompleteDeclContext(SS, DC)) return ExprError(); } else { DependentID = true; } } if (DependentID) return ActOnDependentIdExpression(SS, TemplateKWLoc, NameInfo, IsAddressOfOperand, TemplateArgs); // Perform the required lookup. LookupResult R(*this, NameInfo, (Id.getKind() == UnqualifiedId::IK_ImplicitSelfParam) ? LookupObjCImplicitSelfParam : LookupOrdinaryName); if (TemplateArgs) { // Lookup the template name again to correctly establish the context in // which it was found. This is really unfortunate as we already did the // lookup to determine that it was a template name in the first place. If // this becomes a performance hit, we can work harder to preserve those // results until we get here but it's likely not worth it. bool MemberOfUnknownSpecialization; LookupTemplateName(R, S, SS, QualType(), /*EnteringContext=*/false, MemberOfUnknownSpecialization); if (MemberOfUnknownSpecialization || (R.getResultKind() == LookupResult::NotFoundInCurrentInstantiation)) return ActOnDependentIdExpression(SS, TemplateKWLoc, NameInfo, IsAddressOfOperand, TemplateArgs); } else { bool IvarLookupFollowUp = II && !SS.isSet() && getCurMethodDecl(); LookupParsedName(R, S, &SS, !IvarLookupFollowUp); // If the result might be in a dependent base class, this is a dependent // id-expression. if (R.getResultKind() == LookupResult::NotFoundInCurrentInstantiation) return ActOnDependentIdExpression(SS, TemplateKWLoc, NameInfo, IsAddressOfOperand, TemplateArgs); // If this reference is in an Objective-C method, then we need to do // some special Objective-C lookup, too. if (IvarLookupFollowUp) { ExprResult E(LookupInObjCMethod(R, S, II, true)); if (E.isInvalid()) return ExprError(); if (Expr *Ex = E.getAs()) return Ex; } } if (R.isAmbiguous()) return ExprError(); // This could be an implicitly declared function reference (legal in C90, // extension in C99, forbidden in C++). if (R.empty() && HasTrailingLParen && II && !getLangOpts().CPlusPlus) { NamedDecl *D = ImplicitlyDefineFunction(NameLoc, *II, S); if (D) R.addDecl(D); } // Determine whether this name might be a candidate for // argument-dependent lookup. bool ADL = UseArgumentDependentLookup(SS, R, HasTrailingLParen); if (R.empty() && !ADL) { if (SS.isEmpty() && getLangOpts().MSVCCompat) { if (Expr *E = recoverFromMSUnqualifiedLookup(*this, Context, NameInfo, TemplateKWLoc, TemplateArgs)) return E; } // Don't diagnose an empty lookup for inline assembly. if (IsInlineAsmIdentifier) return ExprError(); // If this name wasn't predeclared and if this is not a function // call, diagnose the problem. TypoExpr *TE = nullptr; auto DefaultValidator = llvm::make_unique( II, SS.isValid() ? SS.getScopeRep() : nullptr); DefaultValidator->IsAddressOfOperand = IsAddressOfOperand; assert((!CCC || CCC->IsAddressOfOperand == IsAddressOfOperand) && "Typo correction callback misconfigured"); if (CCC) { // Make sure the callback knows what the typo being diagnosed is. CCC->setTypoName(II); if (SS.isValid()) CCC->setTypoNNS(SS.getScopeRep()); } if (DiagnoseEmptyLookup(S, SS, R, CCC ? 
std::move(CCC) : std::move(DefaultValidator), nullptr, None, &TE)) { if (TE && KeywordReplacement) { auto &State = getTypoExprState(TE); auto BestTC = State.Consumer->getNextCorrection(); if (BestTC.isKeyword()) { auto *II = BestTC.getCorrectionAsIdentifierInfo(); if (State.DiagHandler) State.DiagHandler(BestTC); KeywordReplacement->startToken(); KeywordReplacement->setKind(II->getTokenID()); KeywordReplacement->setIdentifierInfo(II); KeywordReplacement->setLocation(BestTC.getCorrectionRange().getBegin()); // Clean up the state associated with the TypoExpr, since it has // now been diagnosed (without a call to CorrectDelayedTyposInExpr). clearDelayedTypo(TE); // Signal that a correction to a keyword was performed by returning a // valid-but-null ExprResult. return (Expr*)nullptr; } State.Consumer->resetCorrectionStream(); } return TE ? TE : ExprError(); } assert(!R.empty() && "DiagnoseEmptyLookup returned false but added no results"); // If we found an Objective-C instance variable, let // LookupInObjCMethod build the appropriate expression to // reference the ivar. if (ObjCIvarDecl *Ivar = R.getAsSingle()) { R.clear(); ExprResult E(LookupInObjCMethod(R, S, Ivar->getIdentifier())); // In a hopelessly buggy code, Objective-C instance variable // lookup fails and no expression will be built to reference it. if (!E.isInvalid() && !E.get()) return ExprError(); return E; } } // This is guaranteed from this point on. assert(!R.empty() || ADL); // Check whether this might be a C++ implicit instance member access. // C++ [class.mfct.non-static]p3: // When an id-expression that is not part of a class member access // syntax and not used to form a pointer to member is used in the // body of a non-static member function of class X, if name lookup // resolves the name in the id-expression to a non-static non-type // member of some class C, the id-expression is transformed into a // class member access expression using (*this) as the // postfix-expression to the left of the . operator. // // But we don't actually need to do this for '&' operands if R // resolved to a function or overloaded function set, because the // expression is ill-formed if it actually works out to be a // non-static member function: // // C++ [expr.ref]p4: // Otherwise, if E1.E2 refers to a non-static member function. . . // [t]he expression can be used only as the left-hand operand of a // member function call. // // There are other safeguards against such uses, but it's important // to get this right here so that we don't end up making a // spuriously dependent expression if we're inside a dependent // instance method. if (!R.empty() && (*R.begin())->isCXXClassMember()) { bool MightBeImplicitMember; if (!IsAddressOfOperand) MightBeImplicitMember = true; else if (!SS.isEmpty()) MightBeImplicitMember = false; else if (R.isOverloadedResult()) MightBeImplicitMember = false; else if (R.isUnresolvableResult()) MightBeImplicitMember = true; else MightBeImplicitMember = isa(R.getFoundDecl()) || isa(R.getFoundDecl()) || isa(R.getFoundDecl()); if (MightBeImplicitMember) return BuildPossibleImplicitMemberExpr(SS, TemplateKWLoc, R, TemplateArgs, S); } if (TemplateArgs || TemplateKWLoc.isValid()) { // In C++1y, if this is a variable template id, then check it // in BuildTemplateIdExpr(). // The single lookup result must be a variable template declaration. 
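    // Illustration of the implicit-member transformation discussed above
    // (C++ [class.mfct.non-static]p3), using hypothetical user code:
    //
    //   struct X {
    //     int n;
    //     int get() { return n; }  // 'n' is rewritten as '(*this).n'
    //   };
    //
    // Forming a pointer to member, as in '&X::n', must *not* be transformed,
    // which is why the IsAddressOfOperand cases are screened out above.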
if (Id.getKind() == UnqualifiedId::IK_TemplateId && Id.TemplateId && Id.TemplateId->Kind == TNK_Var_template) { assert(R.getAsSingle() && "There should only be one declaration found."); } return BuildTemplateIdExpr(SS, TemplateKWLoc, R, ADL, TemplateArgs); } return BuildDeclarationNameExpr(SS, R, ADL); } /// BuildQualifiedDeclarationNameExpr - Build a C++ qualified /// declaration name, generally during template instantiation. /// There's a large number of things which don't need to be done along /// this path. ExprResult Sema::BuildQualifiedDeclarationNameExpr( CXXScopeSpec &SS, const DeclarationNameInfo &NameInfo, bool IsAddressOfOperand, const Scope *S, TypeSourceInfo **RecoveryTSI) { DeclContext *DC = computeDeclContext(SS, false); if (!DC) return BuildDependentDeclRefExpr(SS, /*TemplateKWLoc=*/SourceLocation(), NameInfo, /*TemplateArgs=*/nullptr); if (RequireCompleteDeclContext(SS, DC)) return ExprError(); LookupResult R(*this, NameInfo, LookupOrdinaryName); LookupQualifiedName(R, DC); if (R.isAmbiguous()) return ExprError(); if (R.getResultKind() == LookupResult::NotFoundInCurrentInstantiation) return BuildDependentDeclRefExpr(SS, /*TemplateKWLoc=*/SourceLocation(), NameInfo, /*TemplateArgs=*/nullptr); if (R.empty()) { Diag(NameInfo.getLoc(), diag::err_no_member) << NameInfo.getName() << DC << SS.getRange(); return ExprError(); } if (const TypeDecl *TD = R.getAsSingle()) { // Diagnose a missing typename if this resolved unambiguously to a type in // a dependent context. If we can recover with a type, downgrade this to // a warning in Microsoft compatibility mode. unsigned DiagID = diag::err_typename_missing; if (RecoveryTSI && getLangOpts().MSVCCompat) DiagID = diag::ext_typename_missing; SourceLocation Loc = SS.getBeginLoc(); auto D = Diag(Loc, DiagID); D << SS.getScopeRep() << NameInfo.getName().getAsString() << SourceRange(Loc, NameInfo.getEndLoc()); // Don't recover if the caller isn't expecting us to or if we're in a SFINAE // context. if (!RecoveryTSI) return ExprError(); // Only issue the fixit if we're prepared to recover. D << FixItHint::CreateInsertion(Loc, "typename "); // Recover by pretending this was an elaborated type. QualType Ty = Context.getTypeDeclType(TD); TypeLocBuilder TLB; TLB.pushTypeSpec(Ty).setNameLoc(NameInfo.getLoc()); QualType ET = getElaboratedType(ETK_None, SS, Ty); ElaboratedTypeLoc QTL = TLB.push(ET); QTL.setElaboratedKeywordLoc(SourceLocation()); QTL.setQualifierLoc(SS.getWithLocInContext(Context)); *RecoveryTSI = TLB.getTypeSourceInfo(Context, ET); return ExprEmpty(); } // Defend against this resolving to an implicit member access. We usually // won't get here if this might be a legitimate a class member (we end up in // BuildMemberReferenceExpr instead), but this can be valid if we're forming // a pointer-to-member or in an unevaluated context in C++11. if (!R.empty() && (*R.begin())->isCXXClassMember() && !IsAddressOfOperand) return BuildPossibleImplicitMemberExpr(SS, /*TemplateKWLoc=*/SourceLocation(), R, /*TemplateArgs=*/nullptr, S); return BuildDeclarationNameExpr(SS, R, /* ADL */ false); } /// LookupInObjCMethod - The parser has read a name in, and Sema has /// detected that we're currently inside an ObjC method. Perform some /// additional lookup. /// /// Ideally, most of this would be done by lookup, but there's /// actually quite a lot of extra work involved. /// /// Returns a null sentinel to indicate trivial success. 
ExprResult Sema::LookupInObjCMethod(LookupResult &Lookup, Scope *S,
                                    IdentifierInfo *II,
                                    bool AllowBuiltinCreation) {
  SourceLocation Loc = Lookup.getNameLoc();
  ObjCMethodDecl *CurMethod = getCurMethodDecl();

  // Check for error condition which is already reported.
  if (!CurMethod)
    return ExprError();

  // There are two cases to handle here.  1) scoped lookup could have failed,
  // in which case we should look for an ivar.  2) scoped lookup could have
  // found a decl, but that decl is outside the current instance method (i.e.
  // a global variable).  In these two cases, we do a lookup for an ivar with
  // this name; if the lookup succeeds, we replace it with our current decl.

  // If we're in a class method, we don't normally want to look for
  // ivars.  But if we don't find anything else, and there's an
  // ivar, that's an error.
  bool IsClassMethod = CurMethod->isClassMethod();

  bool LookForIvars;
  if (Lookup.empty())
    LookForIvars = true;
  else if (IsClassMethod)
    LookForIvars = false;
  else
    LookForIvars = (Lookup.isSingleResult() &&
                    Lookup.getFoundDecl()->isDefinedOutsideFunctionOrMethod());
  ObjCInterfaceDecl *IFace = nullptr;
  if (LookForIvars) {
    IFace = CurMethod->getClassInterface();
    ObjCInterfaceDecl *ClassDeclared;
    ObjCIvarDecl *IV = nullptr;
    if (IFace && (IV = IFace->lookupInstanceVariable(II, ClassDeclared))) {
      // Diagnose using an ivar in a class method.
      if (IsClassMethod)
        return ExprError(Diag(Loc, diag::err_ivar_use_in_class_method)
                         << IV->getDeclName());

      // If we're referencing an invalid decl, just return this as a silent
      // error node.  The error diagnostic was already emitted on the decl.
      if (IV->isInvalidDecl())
        return ExprError();

      // Check if referencing a field with __attribute__((deprecated)).
      if (DiagnoseUseOfDecl(IV, Loc))
        return ExprError();

      // Diagnose the use of an ivar outside of the declaring class.
      if (IV->getAccessControl() == ObjCIvarDecl::Private &&
          !declaresSameEntity(ClassDeclared, IFace) &&
          !getLangOpts().DebuggerSupport)
        Diag(Loc, diag::err_private_ivar_access) << IV->getDeclName();

      // FIXME: This should use a new expr for a direct reference, don't
      // turn this into Self->ivar, just return a BareIVarExpr or something.
      IdentifierInfo &II = Context.Idents.get("self");
      UnqualifiedId SelfName;
      SelfName.setIdentifier(&II, SourceLocation());
      SelfName.setKind(UnqualifiedId::IK_ImplicitSelfParam);
      CXXScopeSpec SelfScopeSpec;
      SourceLocation TemplateKWLoc;
      ExprResult SelfExpr = ActOnIdExpression(S, SelfScopeSpec, TemplateKWLoc,
                                              SelfName, false, false);
      if (SelfExpr.isInvalid())
        return ExprError();

      SelfExpr = DefaultLvalueConversion(SelfExpr.get());
      if (SelfExpr.isInvalid())
        return ExprError();

      MarkAnyDeclReferenced(Loc, IV, true);

      ObjCMethodFamily MF = CurMethod->getMethodFamily();
      if (MF != OMF_init && MF != OMF_dealloc && MF != OMF_finalize &&
          !IvarBacksCurrentMethodAccessor(IFace, CurMethod, IV))
        Diag(Loc, diag::warn_direct_ivar_access) << IV->getDeclName();

      ObjCIvarRefExpr *Result = new (Context)
          ObjCIvarRefExpr(IV, IV->getUsageType(SelfExpr.get()->getType()), Loc,
                          IV->getLocation(), SelfExpr.get(), true, true);

      if (getLangOpts().ObjCAutoRefCount) {
        if (IV->getType().getObjCLifetime() == Qualifiers::OCL_Weak) {
          if (!Diags.isIgnored(diag::warn_arc_repeated_use_of_weak, Loc))
            recordUseOfEvaluatedWeak(Result);
        }
        if (CurContext->isClosure())
          Diag(Loc, diag::warn_implicitly_retains_self)
              << FixItHint::CreateInsertion(Loc, "self->");
      }

      return Result;
    }
  } else if (CurMethod->isInstanceMethod()) {
    // We should warn if a local variable hides an ivar.
if (ObjCInterfaceDecl *IFace = CurMethod->getClassInterface()) { ObjCInterfaceDecl *ClassDeclared; if (ObjCIvarDecl *IV = IFace->lookupInstanceVariable(II, ClassDeclared)) { if (IV->getAccessControl() != ObjCIvarDecl::Private || declaresSameEntity(IFace, ClassDeclared)) Diag(Loc, diag::warn_ivar_use_hidden) << IV->getDeclName(); } } } else if (Lookup.isSingleResult() && Lookup.getFoundDecl()->isDefinedOutsideFunctionOrMethod()) { // If accessing a stand-alone ivar in a class method, this is an error. if (const ObjCIvarDecl *IV = dyn_cast(Lookup.getFoundDecl())) return ExprError(Diag(Loc, diag::err_ivar_use_in_class_method) << IV->getDeclName()); } if (Lookup.empty() && II && AllowBuiltinCreation) { // FIXME. Consolidate this with similar code in LookupName. if (unsigned BuiltinID = II->getBuiltinID()) { if (!(getLangOpts().CPlusPlus && Context.BuiltinInfo.isPredefinedLibFunction(BuiltinID))) { NamedDecl *D = LazilyCreateBuiltin((IdentifierInfo *)II, BuiltinID, S, Lookup.isForRedeclaration(), Lookup.getNameLoc()); if (D) Lookup.addDecl(D); } } } // Sentinel value saying that we didn't do anything special. return ExprResult((Expr *)nullptr); } /// \brief Cast a base object to a member's actual type. /// /// Logically this happens in three phases: /// /// * First we cast from the base type to the naming class. /// The naming class is the class into which we were looking /// when we found the member; it's the qualifier type if a /// qualifier was provided, and otherwise it's the base type. /// /// * Next we cast from the naming class to the declaring class. /// If the member we found was brought into a class's scope by /// a using declaration, this is that class; otherwise it's /// the class declaring the member. /// /// * Finally we cast from the declaring class to the "true" /// declaring class of the member. This conversion does not /// obey access control. ExprResult Sema::PerformObjectMemberConversion(Expr *From, NestedNameSpecifier *Qualifier, NamedDecl *FoundDecl, NamedDecl *Member) { CXXRecordDecl *RD = dyn_cast(Member->getDeclContext()); if (!RD) return From; QualType DestRecordType; QualType DestType; QualType FromRecordType; QualType FromType = From->getType(); bool PointerConversions = false; if (isa(Member)) { DestRecordType = Context.getCanonicalType(Context.getTypeDeclType(RD)); if (FromType->getAs()) { DestType = Context.getPointerType(DestRecordType); FromRecordType = FromType->getPointeeType(); PointerConversions = true; } else { DestType = DestRecordType; FromRecordType = FromType; } } else if (CXXMethodDecl *Method = dyn_cast(Member)) { if (Method->isStatic()) return From; DestType = Method->getThisType(Context); DestRecordType = DestType->getPointeeType(); if (FromType->getAs()) { FromRecordType = FromType->getPointeeType(); PointerConversions = true; } else { FromRecordType = FromType; DestType = DestRecordType; } } else { // No conversion necessary. return From; } if (DestType->isDependentType() || FromType->isDependentType()) return From; // If the unqualified types are the same, no conversion is necessary. if (Context.hasSameUnqualifiedType(FromRecordType, DestRecordType)) return From; SourceRange FromRange = From->getSourceRange(); SourceLocation FromLoc = FromRange.getBegin(); ExprValueKind VK = From->getValueKind(); // C++ [class.member.lookup]p8: // [...] Ambiguities can often be resolved by qualifying a name with its // class name. 
  //
  // If the member was a qualified name and the qualifier referred to a
  // specific base subobject type, we'll cast to that intermediate type
  // first and then to the object in which the member is declared.  That
  // allows one to resolve ambiguities in, e.g., a diamond-shaped hierarchy
  // such as:
  //
  //   class Base { public: int x; };
  //   class Derived1 : public Base { };
  //   class Derived2 : public Base { };
  //   class VeryDerived : public Derived1, public Derived2 { void f(); };
  //
  //   void VeryDerived::f() {
  //     x = 17; // error: ambiguous base subobjects
  //     Derived1::x = 17; // okay, pick the Base subobject of Derived1
  //   }
  if (Qualifier && Qualifier->getAsType()) {
    QualType QType = QualType(Qualifier->getAsType(), 0);
    assert(QType->isRecordType() && "lookup done with non-record type");

    QualType QRecordType = QualType(QType->getAs<RecordType>(), 0);

    // In C++98, the qualifier type doesn't actually have to be a base
    // type of the object type, in which case we just ignore it.
    // Otherwise build the appropriate casts.
    if (IsDerivedFrom(FromLoc, FromRecordType, QRecordType)) {
      CXXCastPath BasePath;
      if (CheckDerivedToBaseConversion(FromRecordType, QRecordType,
                                       FromLoc, FromRange, &BasePath))
        return ExprError();

      if (PointerConversions)
        QType = Context.getPointerType(QType);
      From = ImpCastExprToType(From, QType, CK_UncheckedDerivedToBase,
                               VK, &BasePath).get();

      FromType = QType;
      FromRecordType = QRecordType;

      // If the qualifier type was the same as the destination type,
      // we're done.
      if (Context.hasSameUnqualifiedType(FromRecordType, DestRecordType))
        return From;
    }
  }

  bool IgnoreAccess = false;

  // If we actually found the member through a using declaration, cast
  // down to the using declaration's type.
  //
  // Pointer equality is fine here because only one declaration of a
  // class ever has member declarations.
  if (FoundDecl->getDeclContext() != Member->getDeclContext()) {
    assert(isa<UsingShadowDecl>(FoundDecl));
    QualType URecordType = Context.getTypeDeclType(
                           cast<CXXRecordDecl>(FoundDecl->getDeclContext()));

    // We only need to do this if the naming-class to declaring-class
    // conversion is non-trivial.
    if (!Context.hasSameUnqualifiedType(FromRecordType, URecordType)) {
      assert(IsDerivedFrom(FromLoc, FromRecordType, URecordType));
      CXXCastPath BasePath;
      if (CheckDerivedToBaseConversion(FromRecordType, URecordType,
                                       FromLoc, FromRange, &BasePath))
        return ExprError();

      QualType UType = URecordType;
      if (PointerConversions)
        UType = Context.getPointerType(UType);
      From = ImpCastExprToType(From, UType, CK_UncheckedDerivedToBase,
                               VK, &BasePath).get();
      FromType = UType;
      FromRecordType = URecordType;
    }

    // We don't do access control for the conversion from the
    // declaring class to the true declaring class.
    IgnoreAccess = true;
  }

  CXXCastPath BasePath;
  if (CheckDerivedToBaseConversion(FromRecordType, DestRecordType,
                                   FromLoc, FromRange, &BasePath,
                                   IgnoreAccess))
    return ExprError();

  return ImpCastExprToType(From, DestType, CK_UncheckedDerivedToBase,
                           VK, &BasePath);
}

bool Sema::UseArgumentDependentLookup(const CXXScopeSpec &SS,
                                      const LookupResult &R,
                                      bool HasTrailingLParen) {
  // Only when used directly as the postfix-expression of a call.
  if (!HasTrailingLParen)
    return false;

  // Never if a scope specifier was provided.
  if (SS.isSet())
    return false;

  // Only in C++ or ObjC++.
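  // For illustration, the suppression rules checked below correspond to
  // hypothetical user code such as:
  //
  //   namespace N { struct S {}; void f(S); }
  //   void call() {
  //     N::S s;
  //     f(s);          // ADL considered: finds N::f via the argument type
  //     void f(N::S);  // block-scope declaration of 'f'
  //     f(s);          // ADL suppressed: a block-scope declaration was found
  //   }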
  if (!getLangOpts().CPlusPlus)
    return false;

  // Turn off ADL when we find certain kinds of declarations during
  // normal lookup:
  for (NamedDecl *D : R) {
    // C++0x [basic.lookup.argdep]p3:
    //     -- a declaration of a class member
    // Since using decls preserve this property, we check this on the
    // original decl.
    if (D->isCXXClassMember())
      return false;

    // C++0x [basic.lookup.argdep]p3:
    //     -- a block-scope function declaration that is not a
    //        using-declaration
    // NOTE: we also trigger this for function templates (in fact, we
    // don't check the decl type at all, since all other decl types
    // turn off ADL anyway).
    if (isa<UsingShadowDecl>(D))
      D = cast<UsingShadowDecl>(D)->getTargetDecl();
    else if (D->getLexicalDeclContext()->isFunctionOrMethod())
      return false;

    // C++0x [basic.lookup.argdep]p3:
    //     -- a declaration that is neither a function nor a function
    //        template
    // And also for builtin functions.
    if (isa<FunctionDecl>(D)) {
      FunctionDecl *FDecl = cast<FunctionDecl>(D);

      // But also builtin functions.
      if (FDecl->getBuiltinID() && FDecl->isImplicit())
        return false;
    } else if (!isa<FunctionTemplateDecl>(D))
      return false;
  }

  return true;
}

/// Diagnoses obvious problems with the use of the given declaration
/// as an expression.  This is only actually called for lookups that
/// were not overloaded, and it doesn't promise that the declaration
/// will in fact be used.
static bool CheckDeclInExpr(Sema &S, SourceLocation Loc, NamedDecl *D) {
  if (D->isInvalidDecl())
    return true;

  if (isa<TypedefNameDecl>(D)) {
    S.Diag(Loc, diag::err_unexpected_typedef) << D->getDeclName();
    return true;
  }

  if (isa<ObjCInterfaceDecl>(D)) {
    S.Diag(Loc, diag::err_unexpected_interface) << D->getDeclName();
    return true;
  }

  if (isa<NamespaceDecl>(D)) {
    S.Diag(Loc, diag::err_unexpected_namespace) << D->getDeclName();
    return true;
  }

  return false;
}

ExprResult Sema::BuildDeclarationNameExpr(const CXXScopeSpec &SS,
                                          LookupResult &R, bool NeedsADL,
                                          bool AcceptInvalidDecl) {
  // If this is a single, fully-resolved result and we don't need ADL,
  // just build an ordinary singleton decl ref.
  if (!NeedsADL && R.isSingleResult() &&
      !R.getAsSingle<FunctionTemplateDecl>())
    return BuildDeclarationNameExpr(SS, R.getLookupNameInfo(),
                                    R.getFoundDecl(),
                                    R.getRepresentativeDecl(), nullptr,
                                    AcceptInvalidDecl);

  // We only need to check the declaration if there's exactly one
  // result, because in the overloaded case the results can only be
  // functions and function templates.
  if (R.isSingleResult() &&
      CheckDeclInExpr(*this, R.getNameLoc(), R.getFoundDecl()))
    return ExprError();

  // Otherwise, just build an unresolved lookup expression.  Suppress
  // any lookup-related diagnostics; we'll hash these out later, when
  // we've picked a target.
  R.suppressDiagnostics();

  UnresolvedLookupExpr *ULE
    = UnresolvedLookupExpr::Create(Context, R.getNamingClass(),
                                   SS.getWithLocInContext(Context),
                                   R.getLookupNameInfo(),
                                   NeedsADL, R.isOverloadedResult(),
                                   R.begin(), R.end());

  return ULE;
}

static void
diagnoseUncapturableValueReference(Sema &S, SourceLocation loc,
                                   ValueDecl *var, DeclContext *DC);

/// \brief Complete semantic analysis for a reference to the given declaration.
ExprResult Sema::BuildDeclarationNameExpr( const CXXScopeSpec &SS, const DeclarationNameInfo &NameInfo, NamedDecl *D, NamedDecl *FoundD, const TemplateArgumentListInfo *TemplateArgs, bool AcceptInvalidDecl) { assert(D && "Cannot refer to a NULL declaration"); assert(!isa(D) && "Cannot refer unambiguously to a function template"); SourceLocation Loc = NameInfo.getLoc(); if (CheckDeclInExpr(*this, Loc, D)) return ExprError(); if (TemplateDecl *Template = dyn_cast(D)) { // Specifically diagnose references to class templates that are missing // a template argument list. Diag(Loc, diag::err_template_decl_ref) << (isa(D) ? 1 : 0) << Template << SS.getRange(); Diag(Template->getLocation(), diag::note_template_decl_here); return ExprError(); } // Make sure that we're referring to a value. ValueDecl *VD = dyn_cast(D); if (!VD) { Diag(Loc, diag::err_ref_non_value) << D << SS.getRange(); Diag(D->getLocation(), diag::note_declared_at); return ExprError(); } // Check whether this declaration can be used. Note that we suppress // this check when we're going to perform argument-dependent lookup // on this function name, because this might not be the function // that overload resolution actually selects. if (DiagnoseUseOfDecl(VD, Loc)) return ExprError(); // Only create DeclRefExpr's for valid Decl's. if (VD->isInvalidDecl() && !AcceptInvalidDecl) return ExprError(); // Handle members of anonymous structs and unions. If we got here, // and the reference is to a class member indirect field, then this // must be the subject of a pointer-to-member expression. if (IndirectFieldDecl *indirectField = dyn_cast(VD)) if (!indirectField->isCXXClassMember()) return BuildAnonymousStructUnionMemberReference(SS, NameInfo.getLoc(), indirectField); { QualType type = VD->getType(); if (auto *FPT = type->getAs()) { // C++ [except.spec]p17: // An exception-specification is considered to be needed when: // - in an expression, the function is the unique lookup result or // the selected member of a set of overloaded functions. ResolveExceptionSpec(Loc, FPT); type = VD->getType(); } ExprValueKind valueKind = VK_RValue; switch (D->getKind()) { // Ignore all the non-ValueDecl kinds. #define ABSTRACT_DECL(kind) #define VALUE(type, base) #define DECL(type, base) \ case Decl::type: #include "clang/AST/DeclNodes.inc" llvm_unreachable("invalid value decl kind"); // These shouldn't make it here. case Decl::ObjCAtDefsField: case Decl::ObjCIvar: llvm_unreachable("forming non-member reference to ivar?"); // Enum constants are always r-values and never references. // Unresolved using declarations are dependent. case Decl::EnumConstant: case Decl::UnresolvedUsingValue: case Decl::OMPDeclareReduction: valueKind = VK_RValue; break; // Fields and indirect fields that got here must be for // pointer-to-member expressions; we just call them l-values for // internal consistency, because this subexpression doesn't really // exist in the high-level semantics. case Decl::Field: case Decl::IndirectField: assert(getLangOpts().CPlusPlus && "building reference to field in C?"); // These can't have reference type in well-formed programs, but // for internal consistency we do this anyway. type = type.getNonReferenceType(); valueKind = VK_LValue; break; // Non-type template parameters are either l-values or r-values // depending on the type. 
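    // Illustration of the rule stated above, with a hypothetical template:
    //
    //   template <int N, int &R> void probe() {
    //     // decltype((N)) is 'int'  -- N is a prvalue; '&N' is ill-formed.
    //     // decltype((R)) is 'int&' -- R is an lvalue; '&R' is well-formed.
    //   }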
case Decl::NonTypeTemplateParm: { if (const ReferenceType *reftype = type->getAs()) { type = reftype->getPointeeType(); valueKind = VK_LValue; // even if the parameter is an r-value reference break; } // For non-references, we need to strip qualifiers just in case // the template parameter was declared as 'const int' or whatever. valueKind = VK_RValue; type = type.getUnqualifiedType(); break; } case Decl::Var: case Decl::VarTemplateSpecialization: case Decl::VarTemplatePartialSpecialization: case Decl::Decomposition: case Decl::OMPCapturedExpr: // In C, "extern void blah;" is valid and is an r-value. if (!getLangOpts().CPlusPlus && !type.hasQualifiers() && type->isVoidType()) { valueKind = VK_RValue; break; } // fallthrough case Decl::ImplicitParam: case Decl::ParmVar: { // These are always l-values. valueKind = VK_LValue; type = type.getNonReferenceType(); // FIXME: Does the addition of const really only apply in // potentially-evaluated contexts? Since the variable isn't actually // captured in an unevaluated context, it seems that the answer is no. if (!isUnevaluatedContext()) { QualType CapturedType = getCapturedDeclRefType(cast(VD), Loc); if (!CapturedType.isNull()) type = CapturedType; } break; } case Decl::Binding: { // These are always lvalues. valueKind = VK_LValue; type = type.getNonReferenceType(); // FIXME: Support lambda-capture of BindingDecls, once CWG actually // decides how that's supposed to work. auto *BD = cast(VD); if (BD->getDeclContext()->isFunctionOrMethod() && BD->getDeclContext() != CurContext) diagnoseUncapturableValueReference(*this, Loc, BD, CurContext); break; } case Decl::Function: { if (unsigned BID = cast(VD)->getBuiltinID()) { if (!Context.BuiltinInfo.isPredefinedLibFunction(BID)) { type = Context.BuiltinFnTy; valueKind = VK_RValue; break; } } const FunctionType *fty = type->castAs(); // If we're referring to a function with an __unknown_anytype // result type, make the entire expression __unknown_anytype. if (fty->getReturnType() == Context.UnknownAnyTy) { type = Context.UnknownAnyTy; valueKind = VK_RValue; break; } // Functions are l-values in C++. if (getLangOpts().CPlusPlus) { valueKind = VK_LValue; break; } // C99 DR 316 says that, if a function type comes from a // function definition (without a prototype), that type is only // used for checking compatibility. Therefore, when referencing // the function, we pretend that we don't have the full function // type. if (!cast(VD)->hasPrototype() && isa(fty)) type = Context.getFunctionNoProtoType(fty->getReturnType(), fty->getExtInfo()); // Functions are r-values in C. valueKind = VK_RValue; break; } case Decl::MSProperty: valueKind = VK_LValue; break; case Decl::CXXMethod: // If we're referring to a method with an __unknown_anytype // result type, make the entire expression __unknown_anytype. // This should only be possible with a type written directly. if (const FunctionProtoType *proto = dyn_cast(VD->getType())) if (proto->getReturnType() == Context.UnknownAnyTy) { type = Context.UnknownAnyTy; valueKind = VK_RValue; break; } // C++ methods are l-values if static, r-values if non-static. 
if (cast(VD)->isStatic()) { valueKind = VK_LValue; break; } // fallthrough case Decl::CXXConversion: case Decl::CXXDestructor: case Decl::CXXConstructor: valueKind = VK_RValue; break; } return BuildDeclRefExpr(VD, type, valueKind, NameInfo, &SS, FoundD, TemplateArgs); } } static void ConvertUTF8ToWideString(unsigned CharByteWidth, StringRef Source, SmallString<32> &Target) { Target.resize(CharByteWidth * (Source.size() + 1)); char *ResultPtr = &Target[0]; const llvm::UTF8 *ErrorPtr; bool success = llvm::ConvertUTF8toWide(CharByteWidth, Source, ResultPtr, ErrorPtr); (void)success; assert(success); Target.resize(ResultPtr - &Target[0]); } ExprResult Sema::BuildPredefinedExpr(SourceLocation Loc, PredefinedExpr::IdentType IT) { // Pick the current block, lambda, captured statement or function. Decl *currentDecl = nullptr; if (const BlockScopeInfo *BSI = getCurBlock()) currentDecl = BSI->TheDecl; else if (const LambdaScopeInfo *LSI = getCurLambda()) currentDecl = LSI->CallOperator; else if (const CapturedRegionScopeInfo *CSI = getCurCapturedRegion()) currentDecl = CSI->TheCapturedDecl; else currentDecl = getCurFunctionOrMethodDecl(); if (!currentDecl) { Diag(Loc, diag::ext_predef_outside_function); currentDecl = Context.getTranslationUnitDecl(); } QualType ResTy; StringLiteral *SL = nullptr; if (cast(currentDecl)->isDependentContext()) ResTy = Context.DependentTy; else { // Pre-defined identifiers are of type char[x], where x is the length of // the string. auto Str = PredefinedExpr::ComputeName(IT, currentDecl); unsigned Length = Str.length(); llvm::APInt LengthI(32, Length + 1); if (IT == PredefinedExpr::LFunction) { ResTy = Context.WideCharTy.withConst(); SmallString<32> RawChars; ConvertUTF8ToWideString(Context.getTypeSizeInChars(ResTy).getQuantity(), Str, RawChars); ResTy = Context.getConstantArrayType(ResTy, LengthI, ArrayType::Normal, /*IndexTypeQuals*/ 0); SL = StringLiteral::Create(Context, RawChars, StringLiteral::Wide, /*Pascal*/ false, ResTy, Loc); } else { ResTy = Context.CharTy.withConst(); ResTy = Context.getConstantArrayType(ResTy, LengthI, ArrayType::Normal, /*IndexTypeQuals*/ 0); SL = StringLiteral::Create(Context, Str, StringLiteral::Ascii, /*Pascal*/ false, ResTy, Loc); } } return new (Context) PredefinedExpr(Loc, ResTy, IT, SL); } ExprResult Sema::ActOnPredefinedExpr(SourceLocation Loc, tok::TokenKind Kind) { PredefinedExpr::IdentType IT; switch (Kind) { default: llvm_unreachable("Unknown simple primary expr!"); case tok::kw___func__: IT = PredefinedExpr::Func; break; // [C99 6.4.2.2] case tok::kw___FUNCTION__: IT = PredefinedExpr::Function; break; case tok::kw___FUNCDNAME__: IT = PredefinedExpr::FuncDName; break; // [MS] case tok::kw___FUNCSIG__: IT = PredefinedExpr::FuncSig; break; // [MS] case tok::kw_L__FUNCTION__: IT = PredefinedExpr::LFunction; break; case tok::kw___PRETTY_FUNCTION__: IT = PredefinedExpr::PrettyFunction; break; } return BuildPredefinedExpr(Loc, IT); } ExprResult Sema::ActOnCharacterConstant(const Token &Tok, Scope *UDLScope) { SmallString<16> CharBuffer; bool Invalid = false; StringRef ThisTok = PP.getSpelling(Tok, CharBuffer, &Invalid); if (Invalid) return ExprError(); CharLiteralParser Literal(ThisTok.begin(), ThisTok.end(), Tok.getLocation(), PP, Tok.getKind()); if (Literal.hadError()) return ExprError(); QualType Ty; if (Literal.isWide()) Ty = Context.WideCharTy; // L'x' -> wchar_t in C and C++. else if (Literal.isUTF16()) Ty = Context.Char16Ty; // u'x' -> char16_t in C11 and C++11. 
else if (Literal.isUTF32()) Ty = Context.Char32Ty; // U'x' -> char32_t in C11 and C++11. else if (!getLangOpts().CPlusPlus || Literal.isMultiChar()) Ty = Context.IntTy; // 'x' -> int in C, 'wxyz' -> int in C++. else Ty = Context.CharTy; // 'x' -> char in C++ CharacterLiteral::CharacterKind Kind = CharacterLiteral::Ascii; if (Literal.isWide()) Kind = CharacterLiteral::Wide; else if (Literal.isUTF16()) Kind = CharacterLiteral::UTF16; else if (Literal.isUTF32()) Kind = CharacterLiteral::UTF32; else if (Literal.isUTF8()) Kind = CharacterLiteral::UTF8; Expr *Lit = new (Context) CharacterLiteral(Literal.getValue(), Kind, Ty, Tok.getLocation()); if (Literal.getUDSuffix().empty()) return Lit; // We're building a user-defined literal. IdentifierInfo *UDSuffix = &Context.Idents.get(Literal.getUDSuffix()); SourceLocation UDSuffixLoc = getUDSuffixLoc(*this, Tok.getLocation(), Literal.getUDSuffixOffset()); // Make sure we're allowed user-defined literals here. if (!UDLScope) return ExprError(Diag(UDSuffixLoc, diag::err_invalid_character_udl)); // C++11 [lex.ext]p6: The literal L is treated as a call of the form // operator "" X (ch) return BuildCookedLiteralOperatorCall(*this, UDLScope, UDSuffix, UDSuffixLoc, Lit, Tok.getLocation()); } ExprResult Sema::ActOnIntegerConstant(SourceLocation Loc, uint64_t Val) { unsigned IntSize = Context.getTargetInfo().getIntWidth(); return IntegerLiteral::Create(Context, llvm::APInt(IntSize, Val), Context.IntTy, Loc); } static Expr *BuildFloatingLiteral(Sema &S, NumericLiteralParser &Literal, QualType Ty, SourceLocation Loc) { const llvm::fltSemantics &Format = S.Context.getFloatTypeSemantics(Ty); using llvm::APFloat; APFloat Val(Format); APFloat::opStatus result = Literal.GetFloatValue(Val); // Overflow is always an error, but underflow is only an error if // we underflowed to zero (APFloat reports denormals as underflow). if ((result & APFloat::opOverflow) || ((result & APFloat::opUnderflow) && Val.isZero())) { unsigned diagnostic; SmallString<20> buffer; if (result & APFloat::opOverflow) { diagnostic = diag::warn_float_overflow; APFloat::getLargest(Format).toString(buffer); } else { diagnostic = diag::warn_float_underflow; APFloat::getSmallest(Format).toString(buffer); } S.Diag(Loc, diagnostic) << Ty << StringRef(buffer.data(), buffer.size()); } bool isExact = (result == APFloat::opOK); return FloatingLiteral::Create(S.Context, Val, isExact, Ty, Loc); } bool Sema::CheckLoopHintExpr(Expr *E, SourceLocation Loc) { assert(E && "Invalid expression"); if (E->isValueDependent()) return false; QualType QT = E->getType(); if (!QT->isIntegerType() || QT->isBooleanType() || QT->isCharType()) { Diag(E->getExprLoc(), diag::err_pragma_loop_invalid_argument_type) << QT; return true; } llvm::APSInt ValueAPS; ExprResult R = VerifyIntegerConstantExpression(E, &ValueAPS); if (R.isInvalid()) return true; bool ValueIsPositive = ValueAPS.isStrictlyPositive(); if (!ValueIsPositive || ValueAPS.getActiveBits() > 31) { Diag(E->getExprLoc(), diag::err_pragma_loop_invalid_argument_value) << ValueAPS.toString(10) << ValueIsPositive; return true; } return false; } ExprResult Sema::ActOnNumericConstant(const Token &Tok, Scope *UDLScope) { // Fast path for a single digit (which is quite common). A single digit // cannot have a trigraph, escaped newline, radix prefix, or suffix. 
  if (Tok.getLength() == 1) {
    const char Val = PP.getSpellingOfSingleCharacterNumericConstant(Tok);
    return ActOnIntegerConstant(Tok.getLocation(), Val-'0');
  }

  SmallString<128> SpellingBuffer;
  // NumericLiteralParser wants to overread by one character.  Add padding to
  // the buffer in case the token is copied to the buffer.  If getSpelling()
  // returns a StringRef to the memory buffer, it should have a null char at
  // the EOF, so it is also safe.
  SpellingBuffer.resize(Tok.getLength() + 1);

  // Get the spelling of the token, which eliminates trigraphs, etc.
  bool Invalid = false;
  StringRef TokSpelling = PP.getSpelling(Tok, SpellingBuffer, &Invalid);
  if (Invalid)
    return ExprError();

  NumericLiteralParser Literal(TokSpelling, Tok.getLocation(), PP);
  if (Literal.hadError)
    return ExprError();

  if (Literal.hasUDSuffix()) {
    // We're building a user-defined literal.
    IdentifierInfo *UDSuffix = &Context.Idents.get(Literal.getUDSuffix());
    SourceLocation UDSuffixLoc =
      getUDSuffixLoc(*this, Tok.getLocation(), Literal.getUDSuffixOffset());

    // Make sure we're allowed user-defined literals here.
    if (!UDLScope)
      return ExprError(Diag(UDSuffixLoc, diag::err_invalid_numeric_udl));

    QualType CookedTy;
    if (Literal.isFloatingLiteral()) {
      // C++11 [lex.ext]p4: If S contains a literal operator with parameter
      // type long double, the literal is treated as a call of the form
      //   operator "" X (f L)
      CookedTy = Context.LongDoubleTy;
    } else {
      // C++11 [lex.ext]p3: If S contains a literal operator with parameter
      // type unsigned long long, the literal is treated as a call of the form
      //   operator "" X (n ULL)
      CookedTy = Context.UnsignedLongLongTy;
    }

    DeclarationName OpName =
      Context.DeclarationNames.getCXXLiteralOperatorName(UDSuffix);
    DeclarationNameInfo OpNameInfo(OpName, UDSuffixLoc);
    OpNameInfo.setCXXLiteralOperatorNameLoc(UDSuffixLoc);

    SourceLocation TokLoc = Tok.getLocation();

    // Perform literal operator lookup to determine if we're building a raw
    // literal or a cooked one.
    LookupResult R(*this, OpName, UDSuffixLoc, LookupOrdinaryName);
    switch (LookupLiteralOperator(UDLScope, R, CookedTy,
                                  /*AllowRaw*/true, /*AllowTemplate*/true,
                                  /*AllowStringTemplate*/false)) {
    case LOLR_Error:
      return ExprError();

    case LOLR_Cooked: {
      Expr *Lit;
      if (Literal.isFloatingLiteral()) {
        Lit = BuildFloatingLiteral(*this, Literal, CookedTy,
                                   Tok.getLocation());
      } else {
        llvm::APInt ResultVal(Context.getTargetInfo().getLongLongWidth(), 0);
        if (Literal.GetIntegerValue(ResultVal))
          Diag(Tok.getLocation(), diag::err_integer_literal_too_large)
              << /* Unsigned */ 1;
        Lit = IntegerLiteral::Create(Context, ResultVal, CookedTy,
                                     Tok.getLocation());
      }
      return BuildLiteralOperatorCall(R, OpNameInfo, Lit, TokLoc);
    }

    case LOLR_Raw: {
      // C++11 [lex.ext]p3, p4: If S contains a raw literal operator, the
      // literal is treated as a call of the form
      //   operator "" X ("n")
      unsigned Length = Literal.getUDSuffixOffset();
      QualType StrTy = Context.getConstantArrayType(
          Context.CharTy.withConst(), llvm::APInt(32, Length + 1),
          ArrayType::Normal, 0);
      Expr *Lit = StringLiteral::Create(
          Context, StringRef(TokSpelling.data(), Length), StringLiteral::Ascii,
          /*Pascal*/false, StrTy, &TokLoc, 1);
      return BuildLiteralOperatorCall(R, OpNameInfo, Lit, TokLoc);
    }

    case LOLR_Template: {
      // C++11 [lex.ext]p3, p4: Otherwise (S contains a literal operator
      // template), L is treated as a call of the form
      //   operator "" X <'c1', 'c2', ... 'ck'>()
      // where n is the source character sequence c1 c2 ... ck.
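      // Sketch of the three successful lookup outcomes handled by this
      // switch, in hypothetical user code:
      //
      //   unsigned long long operator"" _c(unsigned long long); // LOLR_Cooked
      //   int operator"" _r(const char *);                      // LOLR_Raw
      //   template <char...> int operator"" _t();               // LOLR_Template
      //
      //   auto a = 123_c;  // operator"" _c(123ULL)
      //   auto b = 123_r;  // operator"" _r("123")
      //   auto c = 123_t;  // operator"" _t<'1', '2', '3'>()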
TemplateArgumentListInfo ExplicitArgs; unsigned CharBits = Context.getIntWidth(Context.CharTy); bool CharIsUnsigned = Context.CharTy->isUnsignedIntegerType(); llvm::APSInt Value(CharBits, CharIsUnsigned); for (unsigned I = 0, N = Literal.getUDSuffixOffset(); I != N; ++I) { Value = TokSpelling[I]; TemplateArgument Arg(Context, Value, Context.CharTy); TemplateArgumentLocInfo ArgInfo; ExplicitArgs.addArgument(TemplateArgumentLoc(Arg, ArgInfo)); } return BuildLiteralOperatorCall(R, OpNameInfo, None, TokLoc, &ExplicitArgs); } case LOLR_StringTemplate: llvm_unreachable("unexpected literal operator lookup result"); } } Expr *Res; if (Literal.isFloatingLiteral()) { QualType Ty; if (Literal.isHalf){ if (getOpenCLOptions().isEnabled("cl_khr_fp16")) Ty = Context.HalfTy; else { Diag(Tok.getLocation(), diag::err_half_const_requires_fp16); return ExprError(); } } else if (Literal.isFloat) Ty = Context.FloatTy; else if (Literal.isLong) Ty = Context.LongDoubleTy; else if (Literal.isFloat128) Ty = Context.Float128Ty; else Ty = Context.DoubleTy; Res = BuildFloatingLiteral(*this, Literal, Ty, Tok.getLocation()); if (Ty == Context.DoubleTy) { if (getLangOpts().SinglePrecisionConstants) { const BuiltinType *BTy = Ty->getAs(); if (BTy->getKind() != BuiltinType::Float) { Res = ImpCastExprToType(Res, Context.FloatTy, CK_FloatingCast).get(); } } else if (getLangOpts().OpenCL && !getOpenCLOptions().isEnabled("cl_khr_fp64")) { // Impose single-precision float type when cl_khr_fp64 is not enabled. Diag(Tok.getLocation(), diag::warn_double_const_requires_fp64); Res = ImpCastExprToType(Res, Context.FloatTy, CK_FloatingCast).get(); } } } else if (!Literal.isIntegerLiteral()) { return ExprError(); } else { QualType Ty; // 'long long' is a C99 or C++11 feature. if (!getLangOpts().C99 && Literal.isLongLong) { if (getLangOpts().CPlusPlus) Diag(Tok.getLocation(), getLangOpts().CPlusPlus11 ? diag::warn_cxx98_compat_longlong : diag::ext_cxx11_longlong); else Diag(Tok.getLocation(), diag::ext_c99_longlong); } // Get the value in the widest-possible width. unsigned MaxWidth = Context.getTargetInfo().getIntMaxTWidth(); llvm::APInt ResultVal(MaxWidth, 0); if (Literal.GetIntegerValue(ResultVal)) { // If this value didn't fit into uintmax_t, error and force to ull. Diag(Tok.getLocation(), diag::err_integer_literal_too_large) << /* Unsigned */ 1; Ty = Context.UnsignedLongLongTy; assert(Context.getTypeSize(Ty) == ResultVal.getBitWidth() && "long long is not intmax_t?"); } else { // If this value fits into a ULL, try to figure out what else it fits into // according to the rules of C99 6.4.4.1p5. // Octal, Hexadecimal, and integers with a U suffix are allowed to // be an unsigned int. bool AllowUnsigned = Literal.isUnsigned || Literal.getRadix() != 10; // Check from smallest to largest, picking the smallest type we can. unsigned Width = 0; // Microsoft specific integer suffixes are explicitly sized. if (Literal.MicrosoftInteger) { if (Literal.MicrosoftInteger == 8 && !Literal.isUnsigned) { Width = 8; Ty = Context.CharTy; } else { Width = Literal.MicrosoftInteger; Ty = Context.getIntTypeForBitwidth(Width, /*Signed=*/!Literal.isUnsigned); } } if (Ty.isNull() && !Literal.isLong && !Literal.isLongLong) { // Are int/unsigned possibilities? unsigned IntSize = Context.getTargetInfo().getIntWidth(); // Does it fit in a unsigned int? if (ResultVal.isIntN(IntSize)) { // Does it fit in a signed int? 
if (!Literal.isUnsigned && ResultVal[IntSize-1] == 0) Ty = Context.IntTy; else if (AllowUnsigned) Ty = Context.UnsignedIntTy; Width = IntSize; } } // Are long/unsigned long possibilities? if (Ty.isNull() && !Literal.isLongLong) { unsigned LongSize = Context.getTargetInfo().getLongWidth(); // Does it fit in a unsigned long? if (ResultVal.isIntN(LongSize)) { // Does it fit in a signed long? if (!Literal.isUnsigned && ResultVal[LongSize-1] == 0) Ty = Context.LongTy; else if (AllowUnsigned) Ty = Context.UnsignedLongTy; // Check according to the rules of C90 6.1.3.2p5. C++03 [lex.icon]p2 // is compatible. else if (!getLangOpts().C99 && !getLangOpts().CPlusPlus11) { const unsigned LongLongSize = Context.getTargetInfo().getLongLongWidth(); Diag(Tok.getLocation(), getLangOpts().CPlusPlus ? Literal.isLong ? diag::warn_old_implicitly_unsigned_long_cxx : /*C++98 UB*/ diag:: ext_old_implicitly_unsigned_long_cxx : diag::warn_old_implicitly_unsigned_long) << (LongLongSize > LongSize ? /*will have type 'long long'*/ 0 : /*will be ill-formed*/ 1); Ty = Context.UnsignedLongTy; } Width = LongSize; } } // Check long long if needed. if (Ty.isNull()) { unsigned LongLongSize = Context.getTargetInfo().getLongLongWidth(); // Does it fit in a unsigned long long? if (ResultVal.isIntN(LongLongSize)) { // Does it fit in a signed long long? // To be compatible with MSVC, hex integer literals ending with the // LL or i64 suffix are always signed in Microsoft mode. if (!Literal.isUnsigned && (ResultVal[LongLongSize-1] == 0 || (getLangOpts().MSVCCompat && Literal.isLongLong))) Ty = Context.LongLongTy; else if (AllowUnsigned) Ty = Context.UnsignedLongLongTy; Width = LongLongSize; } } // If we still couldn't decide a type, we probably have something that // does not fit in a signed long long, but has no U suffix. if (Ty.isNull()) { Diag(Tok.getLocation(), diag::ext_integer_literal_too_large_for_signed); Ty = Context.UnsignedLongLongTy; Width = Context.getTargetInfo().getLongLongWidth(); } if (ResultVal.getBitWidth() != Width) ResultVal = ResultVal.trunc(Width); } Res = IntegerLiteral::Create(Context, ResultVal, Ty, Tok.getLocation()); } // If this is an imaginary literal, create the ImaginaryLiteral wrapper. if (Literal.isImaginary) Res = new (Context) ImaginaryLiteral(Res, Context.getComplexType(Res->getType())); return Res; } ExprResult Sema::ActOnParenExpr(SourceLocation L, SourceLocation R, Expr *E) { assert(E && "ActOnParenExpr() missing expr"); return new (Context) ParenExpr(L, R, E); } static bool CheckVecStepTraitOperandType(Sema &S, QualType T, SourceLocation Loc, SourceRange ArgRange) { // [OpenCL 1.1 6.11.12] "The vec_step built-in function takes a built-in // scalar or vector data type argument..." // Every built-in scalar type (OpenCL 1.1 6.1.1) is either an arithmetic // type (C99 6.2.5p18) or void. if (!(T->isArithmeticType() || T->isVoidType() || T->isVectorType())) { S.Diag(Loc, diag::err_vecstep_non_scalar_vector_type) << T << ArgRange; return true; } assert((T->isVoidType() || !T->isIncompleteType()) && "Scalar types should always be complete"); return false; } static bool CheckExtensionTraitOperandType(Sema &S, QualType T, SourceLocation Loc, SourceRange ArgRange, UnaryExprOrTypeTrait TraitKind) { // Invalid types must be hard errors for SFINAE in C++. if (S.LangOpts.CPlusPlus) return true; // C99 6.5.3.4p1: if (T->isFunctionType() && (TraitKind == UETT_SizeOf || TraitKind == UETT_AlignOf)) { // sizeof(function)/alignof(function) is allowed as an extension. 
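    // (For illustration: in GNU C both of the following hold as extensions,
    // while the early C++ bail-out above keeps them hard errors so that
    // SFINAE behaves correctly.)
    //
    //   void g(void);
    //   _Static_assert(sizeof(void) == 1, "GNU C extension");
    //   _Static_assert(sizeof(g) == 1, "GNU C extension");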
    S.Diag(Loc, diag::ext_sizeof_alignof_function_type)
      << TraitKind << ArgRange;
    return false;
  }

  // Allow sizeof(void)/alignof(void) as an extension, unless in OpenCL where
  // this is an error (OpenCL v1.1 s6.3.k)
  if (T->isVoidType()) {
    unsigned DiagID = S.LangOpts.OpenCL ? diag::err_opencl_sizeof_alignof_type
                                        : diag::ext_sizeof_alignof_void_type;
    S.Diag(Loc, DiagID) << TraitKind << ArgRange;
    return false;
  }

  return true;
}

static bool CheckObjCTraitOperandConstraints(Sema &S, QualType T,
                                             SourceLocation Loc,
                                             SourceRange ArgRange,
                                             UnaryExprOrTypeTrait TraitKind) {
  // Reject sizeof(interface) and sizeof(interface<proto>) if the
  // runtime doesn't allow it.
  if (!S.LangOpts.ObjCRuntime.allowsSizeofAlignof() && T->isObjCObjectType()) {
    S.Diag(Loc, diag::err_sizeof_nonfragile_interface)
      << T << (TraitKind == UETT_SizeOf)
      << ArgRange;
    return true;
  }

  return false;
}

/// \brief Check whether E is a pointer from a decayed array type (the decayed
/// pointer type is equal to T) and emit a warning if it is.
static void warnOnSizeofOnArrayDecay(Sema &S, SourceLocation Loc, QualType T,
                                     Expr *E) {
  // Don't warn if the operation changed the type.
  if (T != E->getType())
    return;

  // Now look for array decays.
  ImplicitCastExpr *ICE = dyn_cast<ImplicitCastExpr>(E);
  if (!ICE || ICE->getCastKind() != CK_ArrayToPointerDecay)
    return;

  S.Diag(Loc, diag::warn_sizeof_array_decay) << ICE->getSourceRange()
                                             << ICE->getType()
                                             << ICE->getSubExpr()->getType();
}

/// \brief Check the constraints on expression operands to unary type
/// expression and type traits.
///
/// Completes any types necessary and validates the constraints on the operand
/// expression. The logic mostly mirrors the type-based overload, but may
/// modify the expression as it completes the type for that expression through
/// template instantiation, etc.
bool Sema::CheckUnaryExprOrTypeTraitOperand(Expr *E,
                                            UnaryExprOrTypeTrait ExprKind) {
  QualType ExprTy = E->getType();
  assert(!ExprTy->isReferenceType());

  if (ExprKind == UETT_VecStep)
    return CheckVecStepTraitOperandType(*this, ExprTy, E->getExprLoc(),
                                        E->getSourceRange());

  // Whitelist some types as extensions
  if (!CheckExtensionTraitOperandType(*this, ExprTy, E->getExprLoc(),
                                      E->getSourceRange(), ExprKind))
    return false;

  // 'alignof' applied to an expression only requires the base element type of
  // the expression to be complete. 'sizeof' requires the expression's type to
  // be complete (and will attempt to complete it if it's an array of unknown
  // bound).
  if (ExprKind == UETT_AlignOf) {
    if (RequireCompleteType(E->getExprLoc(),
                            Context.getBaseElementType(E->getType()),
                            diag::err_sizeof_alignof_incomplete_type, ExprKind,
                            E->getSourceRange()))
      return true;
  } else {
    if (RequireCompleteExprType(E, diag::err_sizeof_alignof_incomplete_type,
                                ExprKind, E->getSourceRange()))
      return true;
  }

  // Completing the expression's type may have changed it.
  ExprTy = E->getType();
  assert(!ExprTy->isReferenceType());

  if (ExprTy->isFunctionType()) {
    Diag(E->getExprLoc(), diag::err_sizeof_alignof_function_type)
      << ExprKind << E->getSourceRange();
    return true;
  }

  // The operand for sizeof and alignof is in an unevaluated expression context,
  // so side effects could result in unintended consequences.
  if ((ExprKind == UETT_SizeOf || ExprKind == UETT_AlignOf) &&
      ActiveTemplateInstantiations.empty() &&
      E->HasSideEffects(Context, false))
    Diag(E->getExprLoc(), diag::warn_side_effects_unevaluated_context);

  if (CheckObjCTraitOperandConstraints(*this, ExprTy, E->getExprLoc(),
                                       E->getSourceRange(), ExprKind))
    return true;

  if (ExprKind == UETT_SizeOf) {
    if (DeclRefExpr *DeclRef = dyn_cast<DeclRefExpr>(E->IgnoreParens())) {
      if (ParmVarDecl *PVD = dyn_cast<ParmVarDecl>(DeclRef->getFoundDecl())) {
        QualType OType = PVD->getOriginalType();
        QualType Type = PVD->getType();
        if (Type->isPointerType() && OType->isArrayType()) {
          Diag(E->getExprLoc(), diag::warn_sizeof_array_param)
            << Type << OType;
          Diag(PVD->getLocation(), diag::note_declared_at);
        }
      }
    }

    // Warn on "sizeof(array op x)" and "sizeof(x op array)", where the array
    // decays into a pointer and returns an unintended result. This is most
    // likely a typo for "sizeof(array) op x".
    if (BinaryOperator *BO = dyn_cast<BinaryOperator>(E->IgnoreParens())) {
      warnOnSizeofOnArrayDecay(*this, BO->getOperatorLoc(), BO->getType(),
                               BO->getLHS());
      warnOnSizeofOnArrayDecay(*this, BO->getOperatorLoc(), BO->getType(),
                               BO->getRHS());
    }
  }

  return false;
}

/// \brief Check the constraints on operands to unary expression and type
/// traits.
///
/// This will complete any types necessary, and validate the various
/// constraints on those operands.
///
/// The UsualUnaryConversions() function is *not* called by this routine.
/// C99 6.3.2.1p[2-4] all state:
///   Except when it is the operand of the sizeof operator ...
///
/// C++ [expr.sizeof]p4
///   The lvalue-to-rvalue, array-to-pointer, and function-to-pointer
///   standard conversions are not applied to the operand of sizeof.
///
/// This policy is followed for all of the unary trait expressions.
bool Sema::CheckUnaryExprOrTypeTraitOperand(QualType ExprType,
                                            SourceLocation OpLoc,
                                            SourceRange ExprRange,
                                            UnaryExprOrTypeTrait ExprKind) {
  if (ExprType->isDependentType())
    return false;

  // C++ [expr.sizeof]p2:
  //     When applied to a reference or a reference type, the result
  //     is the size of the referenced type.
  // C++11 [expr.alignof]p3:
  //     When alignof is applied to a reference type, the result
  //     shall be the alignment of the referenced type.
  if (const ReferenceType *Ref = ExprType->getAs<ReferenceType>())
    ExprType = Ref->getPointeeType();

  // C11 6.5.3.4/3, C++11 [expr.alignof]p3:
  //   When alignof or _Alignof is applied to an array type, the result
  //   is the alignment of the element type.
  if (ExprKind == UETT_AlignOf || ExprKind == UETT_OpenMPRequiredSimdAlign)
    ExprType = Context.getBaseElementType(ExprType);

  if (ExprKind == UETT_VecStep)
    return CheckVecStepTraitOperandType(*this, ExprType, OpLoc, ExprRange);

  // Whitelist some types as extensions
  if (!CheckExtensionTraitOperandType(*this, ExprType, OpLoc, ExprRange,
                                      ExprKind))
    return false;

  if (RequireCompleteType(OpLoc, ExprType,
                          diag::err_sizeof_alignof_incomplete_type,
                          ExprKind, ExprRange))
    return true;

  if (ExprType->isFunctionType()) {
    Diag(OpLoc, diag::err_sizeof_alignof_function_type)
      << ExprKind << ExprRange;
    return true;
  }

  if (CheckObjCTraitOperandConstraints(*this, ExprType, OpLoc, ExprRange,
                                       ExprKind))
    return true;

  return false;
}
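// Illustration (editor's addition, not from the original source) of the
// sizeof warnings implemented above:
//   void f(int a[10]) {
//     sizeof(a);      // warn_sizeof_array_param: 'a' is really 'int *'
//   }
//   int arr[10];
//   sizeof(arr + 1);  // warn_sizeof_array_decay: likely meant sizeof(arr) + 1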
static bool CheckAlignOfExpr(Sema &S, Expr *E) {
  E = E->IgnoreParens();

  // Cannot know anything else if the expression is dependent.
  if (E->isTypeDependent())
    return false;

  if (E->getObjectKind() == OK_BitField) {
    S.Diag(E->getExprLoc(), diag::err_sizeof_alignof_typeof_bitfield)
       << 1 << E->getSourceRange();
    return true;
  }

  ValueDecl *D = nullptr;
  if (DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(E)) {
    D = DRE->getDecl();
  } else if (MemberExpr *ME = dyn_cast<MemberExpr>(E)) {
    D = ME->getMemberDecl();
  }

  // If it's a field, require the containing struct to have a
  // complete definition so that we can compute the layout.
  //
  // This can happen in C++11 onwards, either by naming the member
  // in a way that is not transformed into a member access expression
  // (in an unevaluated operand, for instance), or by naming the member
  // in a trailing-return-type.
  //
  // For the record, since __alignof__ on expressions is a GCC
  // extension, GCC seems to permit this but always gives the
  // nonsensical answer 0.
  //
  // We don't really need the layout here --- we could instead just
  // directly check for all the appropriate alignment-lowing
  // attributes --- but that would require duplicating a lot of
  // logic that just isn't worth duplicating for such a marginal
  // use-case.
  if (FieldDecl *FD = dyn_cast_or_null<FieldDecl>(D)) {
    // Fast path this check, since we at least know the record has a
    // definition if we can find a member of it.
    if (!FD->getParent()->isCompleteDefinition()) {
      S.Diag(E->getExprLoc(), diag::err_alignof_member_of_incomplete_type)
        << E->getSourceRange();
      return true;
    }

    // Otherwise, if it's a field, and the field doesn't have
    // reference type, then it must have a complete type (or be a
    // flexible array member, which we explicitly want to
    // white-list anyway), which makes the following checks trivial.
    if (!FD->getType()->isReferenceType())
      return false;
  }

  return S.CheckUnaryExprOrTypeTraitOperand(E, UETT_AlignOf);
}

bool Sema::CheckVecStepExpr(Expr *E) {
  E = E->IgnoreParens();

  // Cannot know anything else if the expression is dependent.
  if (E->isTypeDependent())
    return false;

  return CheckUnaryExprOrTypeTraitOperand(E, UETT_VecStep);
}

static void captureVariablyModifiedType(ASTContext &Context, QualType T,
                                        CapturingScopeInfo *CSI) {
  assert(T->isVariablyModifiedType());
  assert(CSI != nullptr);

  // We're going to walk down into the type and look for VLA expressions.
  do {
    const Type *Ty = T.getTypePtr();
    switch (Ty->getTypeClass()) {
#define TYPE(Class, Base)
#define ABSTRACT_TYPE(Class, Base)
#define NON_CANONICAL_TYPE(Class, Base)
#define DEPENDENT_TYPE(Class, Base) case Type::Class:
#define NON_CANONICAL_UNLESS_DEPENDENT_TYPE(Class, Base)
#include "clang/AST/TypeNodes.def"
      T = QualType();
      break;
    // These types are never variably-modified.
    case Type::Builtin:
    case Type::Complex:
    case Type::Vector:
    case Type::ExtVector:
    case Type::Record:
    case Type::Enum:
    case Type::Elaborated:
    case Type::TemplateSpecialization:
    case Type::ObjCObject:
    case Type::ObjCInterface:
    case Type::ObjCObjectPointer:
    case Type::ObjCTypeParam:
    case Type::Pipe:
      llvm_unreachable("type class is never variably-modified!");
    case Type::Adjusted:
      T = cast<AdjustedType>(Ty)->getOriginalType();
      break;
    case Type::Decayed:
      T = cast<DecayedType>(Ty)->getPointeeType();
      break;
    case Type::Pointer:
      T = cast<PointerType>(Ty)->getPointeeType();
      break;
    case Type::BlockPointer:
      T = cast<BlockPointerType>(Ty)->getPointeeType();
      break;
    case Type::LValueReference:
    case Type::RValueReference:
      T = cast<ReferenceType>(Ty)->getPointeeType();
      break;
    case Type::MemberPointer:
      T = cast<MemberPointerType>(Ty)->getPointeeType();
      break;
    case Type::ConstantArray:
    case Type::IncompleteArray:
      // Losing element qualification here is fine.
      T = cast<ArrayType>(Ty)->getElementType();
      break;
    case Type::VariableArray: {
      // Losing element qualification here is fine.
      const VariableArrayType *VAT = cast<VariableArrayType>(Ty);

      // Unknown size indication requires no size computation.
      // Otherwise, evaluate and record it.
      if (auto Size = VAT->getSizeExpr()) {
        if (!CSI->isVLATypeCaptured(VAT)) {
          RecordDecl *CapRecord = nullptr;
          if (auto LSI = dyn_cast<LambdaScopeInfo>(CSI)) {
            CapRecord = LSI->Lambda;
          } else if (auto CRSI = dyn_cast<CapturedRegionScopeInfo>(CSI)) {
            CapRecord = CRSI->TheRecordDecl;
          }
          if (CapRecord) {
            auto ExprLoc = Size->getExprLoc();
            auto SizeType = Context.getSizeType();
            // Build the non-static data member.
            auto Field = FieldDecl::Create(
                Context, CapRecord, ExprLoc, ExprLoc,
                /*Id*/ nullptr, SizeType, /*TInfo*/ nullptr,
                /*BW*/ nullptr, /*Mutable*/ false,
                /*InitStyle*/ ICIS_NoInit);
            Field->setImplicit(true);
            Field->setAccess(AS_private);
            Field->setCapturedVLAType(VAT);
            CapRecord->addDecl(Field);
            CSI->addVLATypeCapture(ExprLoc, SizeType);
          }
        }
      }
      T = VAT->getElementType();
      break;
    }

    case Type::FunctionProto:
    case Type::FunctionNoProto:
      T = cast<FunctionType>(Ty)->getReturnType();
      break;

    case Type::Paren:
    case Type::TypeOf:
    case Type::UnaryTransform:
    case Type::Attributed:
    case Type::SubstTemplateTypeParm:
    case Type::PackExpansion:
      // Keep walking after single level desugaring.
      T = T.getSingleStepDesugaredType(Context);
      break;

    case Type::Typedef:
      T = cast<TypedefType>(Ty)->desugar();
      break;

    case Type::Decltype:
      T = cast<DecltypeType>(Ty)->desugar();
      break;

    case Type::Auto:
      T = cast<AutoType>(Ty)->getDeducedType();
      break;

    case Type::TypeOfExpr:
      T = cast<TypeOfExprType>(Ty)->getUnderlyingExpr()->getType();
      break;

    case Type::Atomic:
      T = cast<AtomicType>(Ty)->getValueType();
      break;
    }
  } while (!T.isNull() && T->isVariablyModifiedType());
}

/// \brief Build a sizeof or alignof expression given a type operand.
ExprResult
Sema::CreateUnaryExprOrTypeTraitExpr(TypeSourceInfo *TInfo,
                                     SourceLocation OpLoc,
                                     UnaryExprOrTypeTrait ExprKind,
                                     SourceRange R) {
  if (!TInfo)
    return ExprError();

  QualType T = TInfo->getType();

  if (!T->isDependentType() &&
      CheckUnaryExprOrTypeTraitOperand(T, OpLoc, R, ExprKind))
    return ExprError();

  if (T->isVariablyModifiedType() && FunctionScopes.size() > 1) {
    if (auto *TT = T->getAs<TypedefType>()) {
      for (auto I = FunctionScopes.rbegin(),
                E = std::prev(FunctionScopes.rend());
           I != E; ++I) {
        auto *CSI = dyn_cast<CapturingScopeInfo>(*I);
        if (CSI == nullptr)
          break;
        DeclContext *DC = nullptr;
        if (auto *LSI = dyn_cast<LambdaScopeInfo>(CSI))
          DC = LSI->CallOperator;
        else if (auto *CRSI = dyn_cast<CapturedRegionScopeInfo>(CSI))
          DC = CRSI->TheCapturedDecl;
        else if (auto *BSI = dyn_cast<BlockScopeInfo>(CSI))
          DC = BSI->TheDecl;
        if (DC) {
          if (DC->containsDecl(TT->getDecl()))
            break;
          captureVariablyModifiedType(Context, T, CSI);
        }
      }
    }
  }

  // C99 6.5.3.4p4: the type (an unsigned integer type) is size_t.
  return new (Context) UnaryExprOrTypeTraitExpr(
      ExprKind, TInfo, Context.getSizeType(), OpLoc, R.getEnd());
}
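// Illustration (editor's addition, not from the original source) of the VLA
// capture performed by captureVariablyModifiedType() above:
//   void f(int n) {
//     typedef int VLA[n];
//     [=] { return sizeof(VLA); }();  // an implicit private size_t field
//   }                                 // recording the bound is added to the
//                                     // closure record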
/// \brief Build a sizeof or alignof expression given an expression
/// operand.
ExprResult
Sema::CreateUnaryExprOrTypeTraitExpr(Expr *E, SourceLocation OpLoc,
                                     UnaryExprOrTypeTrait ExprKind) {
  ExprResult PE = CheckPlaceholderExpr(E);
  if (PE.isInvalid())
    return ExprError();

  E = PE.get();

  // Verify that the operand is valid.
  bool isInvalid = false;
  if (E->isTypeDependent()) {
    // Delay type-checking for type-dependent expressions.
  } else if (ExprKind == UETT_AlignOf) {
    isInvalid = CheckAlignOfExpr(*this, E);
  } else if (ExprKind == UETT_VecStep) {
    isInvalid = CheckVecStepExpr(E);
  } else if (ExprKind == UETT_OpenMPRequiredSimdAlign) {
    Diag(E->getExprLoc(), diag::err_openmp_default_simd_align_expr);
    isInvalid = true;
  } else if (E->refersToBitField()) {  // C99 6.5.3.4p1.
    Diag(E->getExprLoc(), diag::err_sizeof_alignof_typeof_bitfield) << 0;
    isInvalid = true;
  } else {
    isInvalid = CheckUnaryExprOrTypeTraitOperand(E, UETT_SizeOf);
  }

  if (isInvalid)
    return ExprError();

  if (ExprKind == UETT_SizeOf && E->getType()->isVariableArrayType()) {
    PE = TransformToPotentiallyEvaluated(E);
    if (PE.isInvalid()) return ExprError();
    E = PE.get();
  }

  // C99 6.5.3.4p4: the type (an unsigned integer type) is size_t.
  return new (Context) UnaryExprOrTypeTraitExpr(
      ExprKind, E, Context.getSizeType(), OpLoc,
      E->getSourceRange().getEnd());
}

/// ActOnUnaryExprOrTypeTraitExpr - Handle @c sizeof(type) and @c sizeof @c
/// expr and the same for @c alignof and @c __alignof
/// Note that the ArgRange is invalid if isType is false.
ExprResult
Sema::ActOnUnaryExprOrTypeTraitExpr(SourceLocation OpLoc,
                                    UnaryExprOrTypeTrait ExprKind, bool IsType,
                                    void *TyOrEx, SourceRange ArgRange) {
  // If error parsing type, ignore.
  if (!TyOrEx) return ExprError();

  if (IsType) {
    TypeSourceInfo *TInfo;
    (void) GetTypeFromParser(ParsedType::getFromOpaquePtr(TyOrEx), &TInfo);
    return CreateUnaryExprOrTypeTraitExpr(TInfo, OpLoc, ExprKind, ArgRange);
  }

  Expr *ArgEx = (Expr *)TyOrEx;
  ExprResult Result = CreateUnaryExprOrTypeTraitExpr(ArgEx, OpLoc, ExprKind);
  return Result;
}

static QualType CheckRealImagOperand(Sema &S, ExprResult &V,
                                     SourceLocation Loc, bool IsReal) {
  if (V.get()->isTypeDependent())
    return S.Context.DependentTy;

  // _Real and _Imag are only l-values for normal l-values.
  if (V.get()->getObjectKind() != OK_Ordinary) {
    V = S.DefaultLvalueConversion(V.get());
    if (V.isInvalid())
      return QualType();
  }

  // These operators return the element type of a complex type.
  if (const ComplexType *CT = V.get()->getType()->getAs<ComplexType>())
    return CT->getElementType();

  // Otherwise they pass through real integer and floating point types here.
  if (V.get()->getType()->isArithmeticType())
    return V.get()->getType();

  // Test for placeholders.
  ExprResult PR = S.CheckPlaceholderExpr(V.get());
  if (PR.isInvalid()) return QualType();
  if (PR.get() != V.get()) {
    V = PR;
    return CheckRealImagOperand(S, V, Loc, IsReal);
  }

  // Reject anything else.
  S.Diag(Loc, diag::err_realimag_invalid_type) << V.get()->getType()
    << (IsReal ? "__real" : "__imag");
  return QualType();
}

ExprResult
Sema::ActOnPostfixUnaryOp(Scope *S, SourceLocation OpLoc,
                          tok::TokenKind Kind, Expr *Input) {
  UnaryOperatorKind Opc;
  switch (Kind) {
  default: llvm_unreachable("Unknown unary op!");
  case tok::plusplus:   Opc = UO_PostInc; break;
  case tok::minusminus: Opc = UO_PostDec; break;
  }

  // Since this might be a postfix expression, get rid of ParenListExprs.
  ExprResult Result = MaybeConvertParenListExprToParenExpr(S, Input);
  if (Result.isInvalid()) return ExprError();
  Input = Result.get();

  return BuildUnaryOp(S, OpLoc, Opc, Input);
}

/// \brief Diagnose if arithmetic on the given ObjC pointer is illegal.
///
/// \return true on error
static bool checkArithmeticOnObjCPointer(Sema &S,
                                         SourceLocation opLoc,
                                         Expr *op) {
  assert(op->getType()->isObjCObjectPointerType());
  if (S.LangOpts.ObjCRuntime.allowsPointerArithmetic() &&
      !S.LangOpts.ObjCSubscriptingLegacyRuntime)
    return false;

  S.Diag(opLoc, diag::err_arithmetic_nonfragile_interface)
    << op->getType()->castAs<ObjCObjectPointerType>()->getPointeeType()
    << op->getSourceRange();
  return true;
}

static bool isMSPropertySubscriptExpr(Sema &S, Expr *Base) {
  auto *BaseNoParens = Base->IgnoreParens();
  if (auto *MSProp = dyn_cast<MSPropertyRefExpr>(BaseNoParens))
    return MSProp->getPropertyDecl()->getType()->isArrayType();
  return isa<MSPropertySubscriptExpr>(BaseNoParens);
}

ExprResult
Sema::ActOnArraySubscriptExpr(Scope *S, Expr *base, SourceLocation lbLoc,
                              Expr *idx, SourceLocation rbLoc) {
  if (base && !base->getType().isNull() &&
      base->getType()->isSpecificPlaceholderType(BuiltinType::OMPArraySection))
    return ActOnOMPArraySectionExpr(base, lbLoc, idx, SourceLocation(),
                                    /*Length=*/nullptr, rbLoc);

  // Since this might be a postfix expression, get rid of ParenListExprs.
  if (isa<ParenListExpr>(base)) {
    ExprResult result = MaybeConvertParenListExprToParenExpr(S, base);
    if (result.isInvalid()) return ExprError();
    base = result.get();
  }

  // Handle any non-overload placeholder types in the base and index
  // expressions.  We can't handle overloads here because the other
  // operand might be an overloadable type, in which case the overload
  // resolution for the operator overload should get the first crack
  // at the overload.
  bool IsMSPropertySubscript = false;
  if (base->getType()->isNonOverloadPlaceholderType()) {
    IsMSPropertySubscript = isMSPropertySubscriptExpr(*this, base);
    if (!IsMSPropertySubscript) {
      ExprResult result = CheckPlaceholderExpr(base);
      if (result.isInvalid()) return ExprError();
      base = result.get();
    }
  }
  if (idx->getType()->isNonOverloadPlaceholderType()) {
    ExprResult result = CheckPlaceholderExpr(idx);
    if (result.isInvalid()) return ExprError();
    idx = result.get();
  }

  // Build an unanalyzed expression if either operand is type-dependent.
  if (getLangOpts().CPlusPlus &&
      (base->isTypeDependent() || idx->isTypeDependent())) {
    return new (Context) ArraySubscriptExpr(base, idx, Context.DependentTy,
                                            VK_LValue, OK_Ordinary, rbLoc);
  }

  // MSDN, property (C++)
  // https://msdn.microsoft.com/en-us/library/yhfk0thd(v=vs.120).aspx
  // This attribute can also be used in the declaration of an empty array in a
  // class or structure definition. For example:
  // __declspec(property(get=GetX, put=PutX)) int x[];
  // The above statement indicates that x[] can be used with one or more array
  // indices. In this case, i=p->x[a][b] will be turned into i=p->GetX(a, b),
  // and p->x[a][b] = i will be turned into p->PutX(a, b, i);
  if (IsMSPropertySubscript) {
    // Build MS property subscript expression if base is MS property reference
    // or MS property subscript.
    return new (Context) MSPropertySubscriptExpr(
        base, idx, Context.PseudoObjectTy, VK_LValue, OK_Ordinary, rbLoc);
  }

  // Use C++ overloaded-operator rules if either operand has record
  // type.  The spec says to do this if either type is *overloadable*,
  // but enum types can't declare subscript operators or conversion
  // operators, so there's nothing interesting for overload resolution
  // to do if there aren't any record types involved.
  //
  // ObjC pointers have their own subscripting logic that is not tied
  // to overload resolution and so should not take this path.
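  // Illustration (editor's addition, not from the original source):
  //   struct S { int operator[](int); };
  //   S s; s[0];         // record type -> CreateOverloadedArraySubscriptExpr
  //   NSArray *a; a[0];  // ObjC pointer (in ObjC++) -> ObjC subscripting,
  //                      // not C++ overload resolution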
if (getLangOpts().CPlusPlus && (base->getType()->isRecordType() || (!base->getType()->isObjCObjectPointerType() && idx->getType()->isRecordType()))) { return CreateOverloadedArraySubscriptExpr(lbLoc, rbLoc, base, idx); } return CreateBuiltinArraySubscriptExpr(base, lbLoc, idx, rbLoc); } ExprResult Sema::ActOnOMPArraySectionExpr(Expr *Base, SourceLocation LBLoc, Expr *LowerBound, SourceLocation ColonLoc, Expr *Length, SourceLocation RBLoc) { if (Base->getType()->isPlaceholderType() && !Base->getType()->isSpecificPlaceholderType( BuiltinType::OMPArraySection)) { ExprResult Result = CheckPlaceholderExpr(Base); if (Result.isInvalid()) return ExprError(); Base = Result.get(); } if (LowerBound && LowerBound->getType()->isNonOverloadPlaceholderType()) { ExprResult Result = CheckPlaceholderExpr(LowerBound); if (Result.isInvalid()) return ExprError(); Result = DefaultLvalueConversion(Result.get()); if (Result.isInvalid()) return ExprError(); LowerBound = Result.get(); } if (Length && Length->getType()->isNonOverloadPlaceholderType()) { ExprResult Result = CheckPlaceholderExpr(Length); if (Result.isInvalid()) return ExprError(); Result = DefaultLvalueConversion(Result.get()); if (Result.isInvalid()) return ExprError(); Length = Result.get(); } // Build an unanalyzed expression if either operand is type-dependent. if (Base->isTypeDependent() || (LowerBound && (LowerBound->isTypeDependent() || LowerBound->isValueDependent())) || (Length && (Length->isTypeDependent() || Length->isValueDependent()))) { return new (Context) OMPArraySectionExpr(Base, LowerBound, Length, Context.DependentTy, VK_LValue, OK_Ordinary, ColonLoc, RBLoc); } // Perform default conversions. QualType OriginalTy = OMPArraySectionExpr::getBaseOriginalType(Base); QualType ResultTy; if (OriginalTy->isAnyPointerType()) { ResultTy = OriginalTy->getPointeeType(); } else if (OriginalTy->isArrayType()) { ResultTy = OriginalTy->getAsArrayTypeUnsafe()->getElementType(); } else { return ExprError( Diag(Base->getExprLoc(), diag::err_omp_typecheck_section_value) << Base->getSourceRange()); } // C99 6.5.2.1p1 if (LowerBound) { auto Res = PerformOpenMPImplicitIntegerConversion(LowerBound->getExprLoc(), LowerBound); if (Res.isInvalid()) return ExprError(Diag(LowerBound->getExprLoc(), diag::err_omp_typecheck_section_not_integer) << 0 << LowerBound->getSourceRange()); LowerBound = Res.get(); if (LowerBound->getType()->isSpecificBuiltinType(BuiltinType::Char_S) || LowerBound->getType()->isSpecificBuiltinType(BuiltinType::Char_U)) Diag(LowerBound->getExprLoc(), diag::warn_omp_section_is_char) << 0 << LowerBound->getSourceRange(); } if (Length) { auto Res = PerformOpenMPImplicitIntegerConversion(Length->getExprLoc(), Length); if (Res.isInvalid()) return ExprError(Diag(Length->getExprLoc(), diag::err_omp_typecheck_section_not_integer) << 1 << Length->getSourceRange()); Length = Res.get(); if (Length->getType()->isSpecificBuiltinType(BuiltinType::Char_S) || Length->getType()->isSpecificBuiltinType(BuiltinType::Char_U)) Diag(Length->getExprLoc(), diag::warn_omp_section_is_char) << 1 << Length->getSourceRange(); } // C99 6.5.2.1p1: "shall have type "pointer to *object* type". Similarly, // C++ [expr.sub]p1: The type "T" shall be a completely-defined object // type. Note that functions are not objects, and that (in C99 parlance) // incomplete types are not object types. 
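  // Illustration (editor's addition, not from the original source) of the
  // OpenMP array-section checks above, e.g. in a 'map' clause:
  //   int a[10];
  //   a[2:4]  // base 'a', lower bound 2, length 4: elements a[2]..a[5]
  //   a[:n]   // lower bound omitted: defaults to 0
  //   a[2:]   // length omitted: only valid when the dimension size is known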
if (ResultTy->isFunctionType()) { Diag(Base->getExprLoc(), diag::err_omp_section_function_type) << ResultTy << Base->getSourceRange(); return ExprError(); } if (RequireCompleteType(Base->getExprLoc(), ResultTy, diag::err_omp_section_incomplete_type, Base)) return ExprError(); if (LowerBound && !OriginalTy->isAnyPointerType()) { llvm::APSInt LowerBoundValue; if (LowerBound->EvaluateAsInt(LowerBoundValue, Context)) { // OpenMP 4.5, [2.4 Array Sections] // The array section must be a subset of the original array. if (LowerBoundValue.isNegative()) { Diag(LowerBound->getExprLoc(), diag::err_omp_section_not_subset_of_array) << LowerBound->getSourceRange(); return ExprError(); } } } if (Length) { llvm::APSInt LengthValue; if (Length->EvaluateAsInt(LengthValue, Context)) { // OpenMP 4.5, [2.4 Array Sections] // The length must evaluate to non-negative integers. if (LengthValue.isNegative()) { Diag(Length->getExprLoc(), diag::err_omp_section_length_negative) << LengthValue.toString(/*Radix=*/10, /*Signed=*/true) << Length->getSourceRange(); return ExprError(); } } } else if (ColonLoc.isValid() && (OriginalTy.isNull() || (!OriginalTy->isConstantArrayType() && !OriginalTy->isVariableArrayType()))) { // OpenMP 4.5, [2.4 Array Sections] // When the size of the array dimension is not known, the length must be // specified explicitly. Diag(ColonLoc, diag::err_omp_section_length_undefined) << (!OriginalTy.isNull() && OriginalTy->isArrayType()); return ExprError(); } if (!Base->getType()->isSpecificPlaceholderType( BuiltinType::OMPArraySection)) { ExprResult Result = DefaultFunctionArrayLvalueConversion(Base); if (Result.isInvalid()) return ExprError(); Base = Result.get(); } return new (Context) OMPArraySectionExpr(Base, LowerBound, Length, Context.OMPArraySectionTy, VK_LValue, OK_Ordinary, ColonLoc, RBLoc); } ExprResult Sema::CreateBuiltinArraySubscriptExpr(Expr *Base, SourceLocation LLoc, Expr *Idx, SourceLocation RLoc) { Expr *LHSExp = Base; Expr *RHSExp = Idx; ExprValueKind VK = VK_LValue; ExprObjectKind OK = OK_Ordinary; // Per C++ core issue 1213, the result is an xvalue if either operand is // a non-lvalue array, and an lvalue otherwise. if (getLangOpts().CPlusPlus11 && ((LHSExp->getType()->isArrayType() && !LHSExp->isLValue()) || (RHSExp->getType()->isArrayType() && !RHSExp->isLValue()))) VK = VK_XValue; // Perform default conversions. if (!LHSExp->getType()->getAs()) { ExprResult Result = DefaultFunctionArrayLvalueConversion(LHSExp); if (Result.isInvalid()) return ExprError(); LHSExp = Result.get(); } ExprResult Result = DefaultFunctionArrayLvalueConversion(RHSExp); if (Result.isInvalid()) return ExprError(); RHSExp = Result.get(); QualType LHSTy = LHSExp->getType(), RHSTy = RHSExp->getType(); // C99 6.5.2.1p2: the expression e1[e2] is by definition precisely equivalent // to the expression *((e1)+(e2)). This means the array "Base" may actually be // in the subscript position. As a result, we need to derive the array base // and index from the expression types. Expr *BaseExpr, *IndexExpr; QualType ResultType; if (LHSTy->isDependentType() || RHSTy->isDependentType()) { BaseExpr = LHSExp; IndexExpr = RHSExp; ResultType = Context.DependentTy; } else if (const PointerType *PTy = LHSTy->getAs()) { BaseExpr = LHSExp; IndexExpr = RHSExp; ResultType = PTy->getPointeeType(); } else if (const ObjCObjectPointerType *PTy = LHSTy->getAs()) { BaseExpr = LHSExp; IndexExpr = RHSExp; // Use custom logic if this should be the pseudo-object subscript // expression. 
if (!LangOpts.isSubscriptPointerArithmetic()) return BuildObjCSubscriptExpression(RLoc, BaseExpr, IndexExpr, nullptr, nullptr); ResultType = PTy->getPointeeType(); } else if (const PointerType *PTy = RHSTy->getAs()) { // Handle the uncommon case of "123[Ptr]". BaseExpr = RHSExp; IndexExpr = LHSExp; ResultType = PTy->getPointeeType(); } else if (const ObjCObjectPointerType *PTy = RHSTy->getAs()) { // Handle the uncommon case of "123[Ptr]". BaseExpr = RHSExp; IndexExpr = LHSExp; ResultType = PTy->getPointeeType(); if (!LangOpts.isSubscriptPointerArithmetic()) { Diag(LLoc, diag::err_subscript_nonfragile_interface) << ResultType << BaseExpr->getSourceRange(); return ExprError(); } } else if (const VectorType *VTy = LHSTy->getAs()) { BaseExpr = LHSExp; // vectors: V[123] IndexExpr = RHSExp; VK = LHSExp->getValueKind(); if (VK != VK_RValue) OK = OK_VectorComponent; // FIXME: need to deal with const... ResultType = VTy->getElementType(); } else if (LHSTy->isArrayType()) { // If we see an array that wasn't promoted by // DefaultFunctionArrayLvalueConversion, it must be an array that // wasn't promoted because of the C90 rule that doesn't // allow promoting non-lvalue arrays. Warn, then // force the promotion here. Diag(LHSExp->getLocStart(), diag::ext_subscript_non_lvalue) << LHSExp->getSourceRange(); LHSExp = ImpCastExprToType(LHSExp, Context.getArrayDecayedType(LHSTy), CK_ArrayToPointerDecay).get(); LHSTy = LHSExp->getType(); BaseExpr = LHSExp; IndexExpr = RHSExp; ResultType = LHSTy->getAs()->getPointeeType(); } else if (RHSTy->isArrayType()) { // Same as previous, except for 123[f().a] case Diag(RHSExp->getLocStart(), diag::ext_subscript_non_lvalue) << RHSExp->getSourceRange(); RHSExp = ImpCastExprToType(RHSExp, Context.getArrayDecayedType(RHSTy), CK_ArrayToPointerDecay).get(); RHSTy = RHSExp->getType(); BaseExpr = RHSExp; IndexExpr = LHSExp; ResultType = RHSTy->getAs()->getPointeeType(); } else { return ExprError(Diag(LLoc, diag::err_typecheck_subscript_value) << LHSExp->getSourceRange() << RHSExp->getSourceRange()); } // C99 6.5.2.1p1 if (!IndexExpr->getType()->isIntegerType() && !IndexExpr->isTypeDependent()) return ExprError(Diag(LLoc, diag::err_typecheck_subscript_not_integer) << IndexExpr->getSourceRange()); if ((IndexExpr->getType()->isSpecificBuiltinType(BuiltinType::Char_S) || IndexExpr->getType()->isSpecificBuiltinType(BuiltinType::Char_U)) && !IndexExpr->isTypeDependent()) Diag(LLoc, diag::warn_subscript_is_char) << IndexExpr->getSourceRange(); // C99 6.5.2.1p1: "shall have type "pointer to *object* type". Similarly, // C++ [expr.sub]p1: The type "T" shall be a completely-defined object // type. Note that Functions are not objects, and that (in C99 parlance) // incomplete types are not object types. if (ResultType->isFunctionType()) { Diag(BaseExpr->getLocStart(), diag::err_subscript_function_type) << ResultType << BaseExpr->getSourceRange(); return ExprError(); } if (ResultType->isVoidType() && !getLangOpts().CPlusPlus) { // GNU extension: subscripting on pointer to void Diag(LLoc, diag::ext_gnu_subscript_void_type) << BaseExpr->getSourceRange(); // C forbids expressions of unqualified void type from being l-values. // See IsCForbiddenLValueType. 
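    // Illustration (editor's addition, not from the original source):
    //   void *p;
    //   p[4];  // GNU extension: subscripting 'void *' as if it were
    //          // 'char *'; diagnosed with ext_gnu_subscript_void_type in C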
if (!ResultType.hasQualifiers()) VK = VK_RValue; } else if (!ResultType->isDependentType() && RequireCompleteType(LLoc, ResultType, diag::err_subscript_incomplete_type, BaseExpr)) return ExprError(); assert(VK == VK_RValue || LangOpts.CPlusPlus || !ResultType.isCForbiddenLValueType()); return new (Context) ArraySubscriptExpr(LHSExp, RHSExp, ResultType, VK, OK, RLoc); } bool Sema::CheckCXXDefaultArgExpr(SourceLocation CallLoc, FunctionDecl *FD, ParmVarDecl *Param) { if (Param->hasUnparsedDefaultArg()) { Diag(CallLoc, diag::err_use_of_default_argument_to_function_declared_later) << FD << cast(FD->getDeclContext())->getDeclName(); Diag(UnparsedDefaultArgLocs[Param], diag::note_default_argument_declared_here); return true; } if (Param->hasUninstantiatedDefaultArg()) { Expr *UninstExpr = Param->getUninstantiatedDefaultArg(); EnterExpressionEvaluationContext EvalContext(*this, PotentiallyEvaluated, Param); // Instantiate the expression. MultiLevelTemplateArgumentList MutiLevelArgList = getTemplateInstantiationArgs(FD, nullptr, /*RelativeToPrimary=*/true); InstantiatingTemplate Inst(*this, CallLoc, Param, MutiLevelArgList.getInnermost()); if (Inst.isInvalid()) return true; if (Inst.isAlreadyInstantiating()) { Diag(Param->getLocStart(), diag::err_recursive_default_argument) << FD; Param->setInvalidDecl(); return true; } ExprResult Result; { // C++ [dcl.fct.default]p5: // The names in the [default argument] expression are bound, and // the semantic constraints are checked, at the point where the // default argument expression appears. ContextRAII SavedContext(*this, FD); LocalInstantiationScope Local(*this); Result = SubstInitializer(UninstExpr, MutiLevelArgList, /*DirectInit*/false); } if (Result.isInvalid()) return true; // Check the expression as an initializer for the parameter. InitializedEntity Entity = InitializedEntity::InitializeParameter(Context, Param); InitializationKind Kind = InitializationKind::CreateCopy(Param->getLocation(), /*FIXME:EqualLoc*/UninstExpr->getLocStart()); Expr *ResultE = Result.getAs(); InitializationSequence InitSeq(*this, Entity, Kind, ResultE); Result = InitSeq.Perform(*this, Entity, Kind, ResultE); if (Result.isInvalid()) return true; Result = ActOnFinishFullExpr(Result.getAs(), Param->getOuterLocStart()); if (Result.isInvalid()) return true; // Remember the instantiated default argument. Param->setDefaultArg(Result.getAs()); if (ASTMutationListener *L = getASTMutationListener()) { L->DefaultArgumentInstantiated(Param); } } // If the default argument expression is not set yet, we are building it now. if (!Param->hasInit()) { Diag(Param->getLocStart(), diag::err_recursive_default_argument) << FD; Param->setInvalidDecl(); return true; } // If the default expression creates temporaries, we need to // push them to the current stack of expression temporaries so they'll // be properly destroyed. // FIXME: We should really be rebuilding the default argument with new // bound temporaries; see the comment in PR5810. // We don't need to do that with block decls, though, because // blocks in default argument expression can never capture anything. if (auto Init = dyn_cast(Param->getInit())) { // Set the "needs cleanups" bit regardless of whether there are // any explicit objects. Cleanup.setExprNeedsCleanups(Init->cleanupsHaveSideEffects()); // Append all the objects to the cleanup list. Right now, this // should always be a no-op, because blocks in default argument // expressions should never be able to capture anything. 
assert(!Init->getNumObjects() && "default argument expression has capturing blocks?"); } // We already type-checked the argument, so we know it works. // Just mark all of the declarations in this potentially-evaluated expression // as being "referenced". MarkDeclarationsReferencedInExpr(Param->getDefaultArg(), /*SkipLocalVariables=*/true); return false; } ExprResult Sema::BuildCXXDefaultArgExpr(SourceLocation CallLoc, FunctionDecl *FD, ParmVarDecl *Param) { if (CheckCXXDefaultArgExpr(CallLoc, FD, Param)) return ExprError(); return CXXDefaultArgExpr::Create(Context, CallLoc, Param); } Sema::VariadicCallType Sema::getVariadicCallType(FunctionDecl *FDecl, const FunctionProtoType *Proto, Expr *Fn) { if (Proto && Proto->isVariadic()) { if (dyn_cast_or_null(FDecl)) return VariadicConstructor; else if (Fn && Fn->getType()->isBlockPointerType()) return VariadicBlock; else if (FDecl) { if (CXXMethodDecl *Method = dyn_cast_or_null(FDecl)) if (Method->isInstance()) return VariadicMethod; } else if (Fn && Fn->getType() == Context.BoundMemberTy) return VariadicMethod; return VariadicFunction; } return VariadicDoesNotApply; } namespace { class FunctionCallCCC : public FunctionCallFilterCCC { public: FunctionCallCCC(Sema &SemaRef, const IdentifierInfo *FuncName, unsigned NumArgs, MemberExpr *ME) : FunctionCallFilterCCC(SemaRef, NumArgs, false, ME), FunctionName(FuncName) {} bool ValidateCandidate(const TypoCorrection &candidate) override { if (!candidate.getCorrectionSpecifier() || candidate.getCorrectionAsIdentifierInfo() != FunctionName) { return false; } return FunctionCallFilterCCC::ValidateCandidate(candidate); } private: const IdentifierInfo *const FunctionName; }; } static TypoCorrection TryTypoCorrectionForCall(Sema &S, Expr *Fn, FunctionDecl *FDecl, ArrayRef Args) { MemberExpr *ME = dyn_cast(Fn); DeclarationName FuncName = FDecl->getDeclName(); SourceLocation NameLoc = ME ? ME->getMemberLoc() : Fn->getLocStart(); if (TypoCorrection Corrected = S.CorrectTypo( DeclarationNameInfo(FuncName, NameLoc), Sema::LookupOrdinaryName, S.getScopeForContext(S.CurContext), nullptr, llvm::make_unique(S, FuncName.getAsIdentifierInfo(), Args.size(), ME), Sema::CTK_ErrorRecovery)) { if (NamedDecl *ND = Corrected.getFoundDecl()) { if (Corrected.isOverloaded()) { OverloadCandidateSet OCS(NameLoc, OverloadCandidateSet::CSK_Normal); OverloadCandidateSet::iterator Best; for (NamedDecl *CD : Corrected) { if (FunctionDecl *FD = dyn_cast(CD)) S.AddOverloadCandidate(FD, DeclAccessPair::make(FD, AS_none), Args, OCS); } switch (OCS.BestViableFunction(S, NameLoc, Best)) { case OR_Success: ND = Best->FoundDecl; Corrected.setCorrectionDecl(ND); break; default: break; } } ND = ND->getUnderlyingDecl(); if (isa(ND) || isa(ND)) return Corrected; } } return TypoCorrection(); } /// ConvertArgumentsForCall - Converts the arguments specified in /// Args/NumArgs to the parameter types of the function FDecl with /// function prototype Proto. Call is the call expression itself, and /// Fn is the function expression. For a C++ member function, this /// routine does not attempt to convert the object argument. Returns /// true if the call is ill-formed. bool Sema::ConvertArgumentsForCall(CallExpr *Call, Expr *Fn, FunctionDecl *FDecl, const FunctionProtoType *Proto, ArrayRef Args, SourceLocation RParenLoc, bool IsExecConfig) { // Bail out early if calling a builtin with custom typechecking. 
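  // Illustration (editor's addition, not from the original source) of the
  // arity diagnostics issued below:
  //   void f(int, int b = 0);
  //   f();         // too few: "requires at least 1 argument"
  //   f(1, 2, 3);  // too many: "takes at most 2 arguments"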
if (FDecl) if (unsigned ID = FDecl->getBuiltinID()) if (Context.BuiltinInfo.hasCustomTypechecking(ID)) return false; // C99 6.5.2.2p7 - the arguments are implicitly converted, as if by // assignment, to the types of the corresponding parameter, ... unsigned NumParams = Proto->getNumParams(); bool Invalid = false; unsigned MinArgs = FDecl ? FDecl->getMinRequiredArguments() : NumParams; unsigned FnKind = Fn->getType()->isBlockPointerType() ? 1 /* block */ : (IsExecConfig ? 3 /* kernel function (exec config) */ : 0 /* function */); // If too few arguments are available (and we don't have default // arguments for the remaining parameters), don't make the call. if (Args.size() < NumParams) { if (Args.size() < MinArgs) { TypoCorrection TC; if (FDecl && (TC = TryTypoCorrectionForCall(*this, Fn, FDecl, Args))) { unsigned diag_id = MinArgs == NumParams && !Proto->isVariadic() ? diag::err_typecheck_call_too_few_args_suggest : diag::err_typecheck_call_too_few_args_at_least_suggest; diagnoseTypo(TC, PDiag(diag_id) << FnKind << MinArgs << static_cast(Args.size()) << TC.getCorrectionRange()); } else if (MinArgs == 1 && FDecl && FDecl->getParamDecl(0)->getDeclName()) Diag(RParenLoc, MinArgs == NumParams && !Proto->isVariadic() ? diag::err_typecheck_call_too_few_args_one : diag::err_typecheck_call_too_few_args_at_least_one) << FnKind << FDecl->getParamDecl(0) << Fn->getSourceRange(); else Diag(RParenLoc, MinArgs == NumParams && !Proto->isVariadic() ? diag::err_typecheck_call_too_few_args : diag::err_typecheck_call_too_few_args_at_least) << FnKind << MinArgs << static_cast(Args.size()) << Fn->getSourceRange(); // Emit the location of the prototype. if (!TC && FDecl && !FDecl->getBuiltinID() && !IsExecConfig) Diag(FDecl->getLocStart(), diag::note_callee_decl) << FDecl; return true; } Call->setNumArgs(Context, NumParams); } // If too many are passed and not variadic, error on the extras and drop // them. if (Args.size() > NumParams) { if (!Proto->isVariadic()) { TypoCorrection TC; if (FDecl && (TC = TryTypoCorrectionForCall(*this, Fn, FDecl, Args))) { unsigned diag_id = MinArgs == NumParams && !Proto->isVariadic() ? diag::err_typecheck_call_too_many_args_suggest : diag::err_typecheck_call_too_many_args_at_most_suggest; diagnoseTypo(TC, PDiag(diag_id) << FnKind << NumParams << static_cast(Args.size()) << TC.getCorrectionRange()); } else if (NumParams == 1 && FDecl && FDecl->getParamDecl(0)->getDeclName()) Diag(Args[NumParams]->getLocStart(), MinArgs == NumParams ? diag::err_typecheck_call_too_many_args_one : diag::err_typecheck_call_too_many_args_at_most_one) << FnKind << FDecl->getParamDecl(0) << static_cast(Args.size()) << Fn->getSourceRange() << SourceRange(Args[NumParams]->getLocStart(), Args.back()->getLocEnd()); else Diag(Args[NumParams]->getLocStart(), MinArgs == NumParams ? diag::err_typecheck_call_too_many_args : diag::err_typecheck_call_too_many_args_at_most) << FnKind << NumParams << static_cast(Args.size()) << Fn->getSourceRange() << SourceRange(Args[NumParams]->getLocStart(), Args.back()->getLocEnd()); // Emit the location of the prototype. if (!TC && FDecl && !FDecl->getBuiltinID() && !IsExecConfig) Diag(FDecl->getLocStart(), diag::note_callee_decl) << FDecl; // This deletes the extra arguments. 
Call->setNumArgs(Context, NumParams); return true; } } SmallVector AllArgs; VariadicCallType CallType = getVariadicCallType(FDecl, Proto, Fn); Invalid = GatherArgumentsForCall(Call->getLocStart(), FDecl, Proto, 0, Args, AllArgs, CallType); if (Invalid) return true; unsigned TotalNumArgs = AllArgs.size(); for (unsigned i = 0; i < TotalNumArgs; ++i) Call->setArg(i, AllArgs[i]); return false; } bool Sema::GatherArgumentsForCall(SourceLocation CallLoc, FunctionDecl *FDecl, const FunctionProtoType *Proto, unsigned FirstParam, ArrayRef Args, SmallVectorImpl &AllArgs, VariadicCallType CallType, bool AllowExplicit, bool IsListInitialization) { unsigned NumParams = Proto->getNumParams(); bool Invalid = false; size_t ArgIx = 0; // Continue to check argument types (even if we have too few/many args). for (unsigned i = FirstParam; i < NumParams; i++) { QualType ProtoArgType = Proto->getParamType(i); Expr *Arg; ParmVarDecl *Param = FDecl ? FDecl->getParamDecl(i) : nullptr; if (ArgIx < Args.size()) { Arg = Args[ArgIx++]; if (RequireCompleteType(Arg->getLocStart(), ProtoArgType, diag::err_call_incomplete_argument, Arg)) return true; // Strip the unbridged-cast placeholder expression off, if applicable. bool CFAudited = false; if (Arg->getType() == Context.ARCUnbridgedCastTy && FDecl && FDecl->hasAttr() && (!Param || !Param->hasAttr())) Arg = stripARCUnbridgedCast(Arg); else if (getLangOpts().ObjCAutoRefCount && FDecl && FDecl->hasAttr() && (!Param || !Param->hasAttr())) CFAudited = true; InitializedEntity Entity = Param ? InitializedEntity::InitializeParameter(Context, Param, ProtoArgType) : InitializedEntity::InitializeParameter( Context, ProtoArgType, Proto->isParamConsumed(i)); // Remember that parameter belongs to a CF audited API. if (CFAudited) Entity.setParameterCFAudited(); ExprResult ArgE = PerformCopyInitialization( Entity, SourceLocation(), Arg, IsListInitialization, AllowExplicit); if (ArgE.isInvalid()) return true; Arg = ArgE.getAs(); } else { assert(Param && "can't use default arguments without a known callee"); ExprResult ArgExpr = BuildCXXDefaultArgExpr(CallLoc, FDecl, Param); if (ArgExpr.isInvalid()) return true; Arg = ArgExpr.getAs(); } // Check for array bounds violations for each argument to the call. This // check only triggers warnings when the argument isn't a more complex Expr // with its own checking, such as a BinaryOperator. CheckArrayAccess(Arg); // Check for violations of C99 static array rules (C99 6.7.5.3p7). CheckStaticArrayArgument(CallLoc, Param, Arg); AllArgs.push_back(Arg); } // If this is a variadic call, handle args passed through "...". if (CallType != VariadicDoesNotApply) { // Assume that extern "C" functions with variadic arguments that // return __unknown_anytype aren't *really* variadic. if (Proto->getReturnType() == Context.UnknownAnyTy && FDecl && FDecl->isExternC()) { for (Expr *A : Args.slice(ArgIx)) { QualType paramType; // ignored ExprResult arg = checkUnknownAnyArg(CallLoc, A, paramType); Invalid |= arg.isInvalid(); AllArgs.push_back(arg.get()); } // Otherwise do argument promotion, (C99 6.5.2.2p7). } else { for (Expr *A : Args.slice(ArgIx)) { ExprResult Arg = DefaultVariadicArgumentPromotion(A, CallType, FDecl); Invalid |= Arg.isInvalid(); AllArgs.push_back(Arg.get()); } } // Check for array bounds violations. 
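    // Illustration (editor's addition, not from the original source) of the
    // default argument promotions applied to '...' arguments above
    // (C99 6.5.2.2p7):
    //   void g(const char *, ...);
    //   short s = 1; float f = 2.0f;
    //   g("", s, f);  // 's' is promoted to int, 'f' to double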
    for (Expr *A : Args.slice(ArgIx))
      CheckArrayAccess(A);
  }
  return Invalid;
}

static void DiagnoseCalleeStaticArrayParam(Sema &S, ParmVarDecl *PVD) {
  TypeLoc TL = PVD->getTypeSourceInfo()->getTypeLoc();
  if (DecayedTypeLoc DTL = TL.getAs<DecayedTypeLoc>())
    TL = DTL.getOriginalLoc();
  if (ArrayTypeLoc ATL = TL.getAs<ArrayTypeLoc>())
    S.Diag(PVD->getLocation(), diag::note_callee_static_array)
      << ATL.getLocalSourceRange();
}

/// CheckStaticArrayArgument - If the given argument corresponds to a static
/// array parameter, check that it is non-null, and that if it is formed by
/// array-to-pointer decay, the underlying array is sufficiently large.
///
/// C99 6.7.5.3p7: If the keyword static also appears within the [ and ] of
/// the array type derivation, then for each call to the function, the value
/// of the corresponding actual argument shall provide access to the first
/// element of an array with at least as many elements as specified by the
/// size expression.
void
Sema::CheckStaticArrayArgument(SourceLocation CallLoc,
                               ParmVarDecl *Param,
                               const Expr *ArgExpr) {
  // Static array parameters are not supported in C++.
  if (!Param || getLangOpts().CPlusPlus)
    return;

  QualType OrigTy = Param->getOriginalType();

  const ArrayType *AT = Context.getAsArrayType(OrigTy);
  if (!AT || AT->getSizeModifier() != ArrayType::Static)
    return;

  if (ArgExpr->isNullPointerConstant(Context,
                                     Expr::NPC_NeverValueDependent)) {
    Diag(CallLoc, diag::warn_null_arg) << ArgExpr->getSourceRange();
    DiagnoseCalleeStaticArrayParam(*this, Param);
    return;
  }

  const ConstantArrayType *CAT = dyn_cast<ConstantArrayType>(AT);
  if (!CAT)
    return;

  const ConstantArrayType *ArgCAT =
    Context.getAsConstantArrayType(ArgExpr->IgnoreParenImpCasts()->getType());
  if (!ArgCAT)
    return;

  if (ArgCAT->getSize().ult(CAT->getSize())) {
    Diag(CallLoc, diag::warn_static_array_too_small)
      << ArgExpr->getSourceRange()
      << (unsigned) ArgCAT->getSize().getZExtValue()
      << (unsigned) CAT->getSize().getZExtValue();
    DiagnoseCalleeStaticArrayParam(*this, Param);
  }
}

/// Given a function expression of unknown-any type, try to rebuild it
/// to have a function type.
static ExprResult rebuildUnknownAnyFunction(Sema &S, Expr *fn);

/// Is the given type a placeholder that we need to lower out
/// immediately during argument processing?
static bool isPlaceholderToRemoveAsArg(QualType type) {
  // Placeholders are never sugared.
  const BuiltinType *placeholder = dyn_cast<BuiltinType>(type);
  if (!placeholder) return false;

  switch (placeholder->getKind()) {
  // Ignore all the non-placeholder types.
#define IMAGE_TYPE(ImgType, Id, SingletonId, Access, Suffix) \
  case BuiltinType::Id:
#include "clang/Basic/OpenCLImageTypes.def"
#define PLACEHOLDER_TYPE(ID, SINGLETON_ID)
#define BUILTIN_TYPE(ID, SINGLETON_ID) case BuiltinType::ID:
#include "clang/AST/BuiltinTypes.def"
    return false;

  // We cannot lower out overload sets; they might validly be resolved
  // by the call machinery.
  case BuiltinType::Overload:
    return false;

  // Unbridged casts in ARC can be handled in some call positions and
  // should be left in place.
  case BuiltinType::ARCUnbridgedCast:
    return false;

  // Pseudo-objects should be converted as soon as possible.
  case BuiltinType::PseudoObject:
    return true;

  // The debugger mode could theoretically but currently does not try
  // to resolve unknown-typed arguments based on known parameter types.
  case BuiltinType::UnknownAny:
    return true;

  // These are always invalid as call arguments and should be reported.
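  // Illustration (editor's addition, not from the original source):
  //   struct S { void m(); };
  //   void take(int);
  //   take(S().m);  // 'S().m' has the BoundMember placeholder type and is
  //                 // always invalid as a call argument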
case BuiltinType::BoundMember: case BuiltinType::BuiltinFn: case BuiltinType::OMPArraySection: return true; } llvm_unreachable("bad builtin type kind"); } /// Check an argument list for placeholders that we won't try to /// handle later. static bool checkArgsForPlaceholders(Sema &S, MultiExprArg args) { // Apply this processing to all the arguments at once instead of // dying at the first failure. bool hasInvalid = false; for (size_t i = 0, e = args.size(); i != e; i++) { if (isPlaceholderToRemoveAsArg(args[i]->getType())) { ExprResult result = S.CheckPlaceholderExpr(args[i]); if (result.isInvalid()) hasInvalid = true; else args[i] = result.get(); } else if (hasInvalid) { (void)S.CorrectDelayedTyposInExpr(args[i]); } } return hasInvalid; } /// If a builtin function has a pointer argument with no explicit address /// space, then it should be able to accept a pointer to any address /// space as input. In order to do this, we need to replace the /// standard builtin declaration with one that uses the same address space /// as the call. /// /// \returns nullptr If this builtin is not a candidate for a rewrite i.e. /// it does not contain any pointer arguments without /// an address space qualifer. Otherwise the rewritten /// FunctionDecl is returned. /// TODO: Handle pointer return types. static FunctionDecl *rewriteBuiltinFunctionDecl(Sema *Sema, ASTContext &Context, const FunctionDecl *FDecl, MultiExprArg ArgExprs) { QualType DeclType = FDecl->getType(); const FunctionProtoType *FT = dyn_cast(DeclType); if (!Context.BuiltinInfo.hasPtrArgsOrResult(FDecl->getBuiltinID()) || !FT || FT->isVariadic() || ArgExprs.size() != FT->getNumParams()) return nullptr; bool NeedsNewDecl = false; unsigned i = 0; SmallVector OverloadParams; for (QualType ParamType : FT->param_types()) { // Convert array arguments to pointer to simplify type lookup. ExprResult ArgRes = Sema->DefaultFunctionArrayLvalueConversion(ArgExprs[i++]); if (ArgRes.isInvalid()) return nullptr; Expr *Arg = ArgRes.get(); QualType ArgType = Arg->getType(); if (!ParamType->isPointerType() || ParamType.getQualifiers().hasAddressSpace() || !ArgType->isPointerType() || !ArgType->getPointeeType().getQualifiers().hasAddressSpace()) { OverloadParams.push_back(ParamType); continue; } NeedsNewDecl = true; unsigned AS = ArgType->getPointeeType().getQualifiers().getAddressSpace(); QualType PointeeType = ParamType->getPointeeType(); PointeeType = Context.getAddrSpaceQualType(PointeeType, AS); OverloadParams.push_back(Context.getPointerType(PointeeType)); } if (!NeedsNewDecl) return nullptr; FunctionProtoType::ExtProtoInfo EPI; QualType OverloadTy = Context.getFunctionType(FT->getReturnType(), OverloadParams, EPI); DeclContext *Parent = Context.getTranslationUnitDecl(); FunctionDecl *OverloadDecl = FunctionDecl::Create(Context, Parent, FDecl->getLocation(), FDecl->getLocation(), FDecl->getIdentifier(), OverloadTy, /*TInfo=*/nullptr, SC_Extern, false, /*hasPrototype=*/true); SmallVector Params; FT = cast(OverloadTy); for (unsigned i = 0, e = FT->getNumParams(); i != e; ++i) { QualType ParamType = FT->getParamType(i); ParmVarDecl *Parm = ParmVarDecl::Create(Context, OverloadDecl, SourceLocation(), SourceLocation(), nullptr, ParamType, /*TInfo=*/nullptr, SC_None, nullptr); Parm->setScopeInfo(0, i); Params.push_back(Parm); } OverloadDecl->setParams(Params); return OverloadDecl; } static void checkDirectCallValidity(Sema &S, const Expr *Fn, FunctionDecl *Callee, MultiExprArg ArgExprs) { // `Callee` (when called with ArgExprs) may be ill-formed. 
enable_if (and // similar attributes) really don't like it when functions are called with an // invalid number of args. if (S.TooManyArguments(Callee->getNumParams(), ArgExprs.size(), /*PartialOverloading=*/false) && !Callee->isVariadic()) return; if (Callee->getMinRequiredArguments() > ArgExprs.size()) return; if (const EnableIfAttr *Attr = S.CheckEnableIf(Callee, ArgExprs, true)) { S.Diag(Fn->getLocStart(), isa(Callee) ? diag::err_ovl_no_viable_member_function_in_call : diag::err_ovl_no_viable_function_in_call) << Callee << Callee->getSourceRange(); S.Diag(Callee->getLocation(), diag::note_ovl_candidate_disabled_by_function_cond_attr) << Attr->getCond()->getSourceRange() << Attr->getMessage(); return; } SmallVector Nonfatal; if (const DiagnoseIfAttr *Attr = S.checkArgDependentDiagnoseIf( Callee, ArgExprs, Nonfatal, /*MissingImplicitThis=*/true)) { S.emitDiagnoseIfDiagnostic(Fn->getLocStart(), Attr); return; } for (const auto *W : Nonfatal) S.emitDiagnoseIfDiagnostic(Fn->getLocStart(), W); } /// ActOnCallExpr - Handle a call to Fn with the specified array of arguments. /// This provides the location of the left/right parens and a list of comma /// locations. ExprResult Sema::ActOnCallExpr(Scope *Scope, Expr *Fn, SourceLocation LParenLoc, MultiExprArg ArgExprs, SourceLocation RParenLoc, Expr *ExecConfig, bool IsExecConfig) { // Since this might be a postfix expression, get rid of ParenListExprs. ExprResult Result = MaybeConvertParenListExprToParenExpr(Scope, Fn); if (Result.isInvalid()) return ExprError(); Fn = Result.get(); if (checkArgsForPlaceholders(*this, ArgExprs)) return ExprError(); if (getLangOpts().CPlusPlus) { // If this is a pseudo-destructor expression, build the call immediately. if (isa(Fn)) { if (!ArgExprs.empty()) { // Pseudo-destructor calls should not have any arguments. Diag(Fn->getLocStart(), diag::err_pseudo_dtor_call_with_args) << FixItHint::CreateRemoval( SourceRange(ArgExprs.front()->getLocStart(), ArgExprs.back()->getLocEnd())); } return new (Context) CallExpr(Context, Fn, None, Context.VoidTy, VK_RValue, RParenLoc); } if (Fn->getType() == Context.PseudoObjectTy) { ExprResult result = CheckPlaceholderExpr(Fn); if (result.isInvalid()) return ExprError(); Fn = result.get(); } // Determine whether this is a dependent call inside a C++ template, // in which case we won't do any semantic analysis now. bool Dependent = false; if (Fn->isTypeDependent()) Dependent = true; else if (Expr::hasAnyTypeDependentArguments(ArgExprs)) Dependent = true; if (Dependent) { if (ExecConfig) { return new (Context) CUDAKernelCallExpr( Context, Fn, cast(ExecConfig), ArgExprs, Context.DependentTy, VK_RValue, RParenLoc); } else { return new (Context) CallExpr( Context, Fn, ArgExprs, Context.DependentTy, VK_RValue, RParenLoc); } } // Determine whether this is a call to an object (C++ [over.call.object]). if (Fn->getType()->isRecordType()) return BuildCallToObjectOfClassType(Scope, Fn, LParenLoc, ArgExprs, RParenLoc); if (Fn->getType() == Context.UnknownAnyTy) { ExprResult result = rebuildUnknownAnyFunction(*this, Fn); if (result.isInvalid()) return ExprError(); Fn = result.get(); } if (Fn->getType() == Context.BoundMemberTy) { return BuildCallToMemberFunction(Scope, Fn, LParenLoc, ArgExprs, RParenLoc); } } // Check for overloaded calls. This can happen even in C due to extensions. if (Fn->getType() == Context.OverloadTy) { OverloadExpr::FindResult find = OverloadExpr::find(Fn); // We aren't supposed to apply this logic for if there'Scope an '&' // involved. 
if (!find.HasFormOfMemberPointer) { OverloadExpr *ovl = find.Expression; if (UnresolvedLookupExpr *ULE = dyn_cast(ovl)) return BuildOverloadedCallExpr( Scope, Fn, ULE, LParenLoc, ArgExprs, RParenLoc, ExecConfig, /*AllowTypoCorrection=*/true, find.IsAddressOfOperand); return BuildCallToMemberFunction(Scope, Fn, LParenLoc, ArgExprs, RParenLoc); } } // If we're directly calling a function, get the appropriate declaration. if (Fn->getType() == Context.UnknownAnyTy) { ExprResult result = rebuildUnknownAnyFunction(*this, Fn); if (result.isInvalid()) return ExprError(); Fn = result.get(); } Expr *NakedFn = Fn->IgnoreParens(); bool CallingNDeclIndirectly = false; NamedDecl *NDecl = nullptr; if (UnaryOperator *UnOp = dyn_cast(NakedFn)) { if (UnOp->getOpcode() == UO_AddrOf) { CallingNDeclIndirectly = true; NakedFn = UnOp->getSubExpr()->IgnoreParens(); } } if (isa(NakedFn)) { NDecl = cast(NakedFn)->getDecl(); FunctionDecl *FDecl = dyn_cast(NDecl); if (FDecl && FDecl->getBuiltinID()) { // Rewrite the function decl for this builtin by replacing parameters // with no explicit address space with the address space of the arguments // in ArgExprs. if ((FDecl = rewriteBuiltinFunctionDecl(this, Context, FDecl, ArgExprs))) { NDecl = FDecl; Fn = DeclRefExpr::Create( Context, FDecl->getQualifierLoc(), SourceLocation(), FDecl, false, SourceLocation(), FDecl->getType(), Fn->getValueKind(), FDecl); } } } else if (isa(NakedFn)) NDecl = cast(NakedFn)->getMemberDecl(); if (FunctionDecl *FD = dyn_cast_or_null(NDecl)) { if (CallingNDeclIndirectly && !checkAddressOfFunctionIsAvailable(FD, /*Complain=*/true, Fn->getLocStart())) return ExprError(); if (getLangOpts().OpenCL && checkOpenCLDisabledDecl(*FD, *Fn)) return ExprError(); checkDirectCallValidity(*this, Fn, FD, ArgExprs); } return BuildResolvedCallExpr(Fn, NDecl, LParenLoc, ArgExprs, RParenLoc, ExecConfig, IsExecConfig); } /// ActOnAsTypeExpr - create a new asType (bitcast) from the arguments. /// /// __builtin_astype( value, dst type ) /// ExprResult Sema::ActOnAsTypeExpr(Expr *E, ParsedType ParsedDestTy, SourceLocation BuiltinLoc, SourceLocation RParenLoc) { ExprValueKind VK = VK_RValue; ExprObjectKind OK = OK_Ordinary; QualType DstTy = GetTypeFromParser(ParsedDestTy); QualType SrcTy = E->getType(); if (Context.getTypeSize(DstTy) != Context.getTypeSize(SrcTy)) return ExprError(Diag(BuiltinLoc, diag::err_invalid_astype_of_different_size) << DstTy << SrcTy << E->getSourceRange()); return new (Context) AsTypeExpr(E, DstTy, VK, OK, BuiltinLoc, RParenLoc); } /// ActOnConvertVectorExpr - create a new convert-vector expression from the /// provided arguments. /// /// __builtin_convertvector( value, dst type ) /// ExprResult Sema::ActOnConvertVectorExpr(Expr *E, ParsedType ParsedDestTy, SourceLocation BuiltinLoc, SourceLocation RParenLoc) { TypeSourceInfo *TInfo; GetTypeFromParser(ParsedDestTy, &TInfo); return SemaConvertVectorExpr(E, TInfo, BuiltinLoc, RParenLoc); } /// BuildResolvedCallExpr - Build a call to a resolved expression, /// i.e. an expression not of \p OverloadTy. The expression should /// unary-convert to an expression of function-pointer or /// block-pointer type. /// /// \param NDecl the declaration being called, if available ExprResult Sema::BuildResolvedCallExpr(Expr *Fn, NamedDecl *NDecl, SourceLocation LParenLoc, ArrayRef Args, SourceLocation RParenLoc, Expr *Config, bool IsExecConfig) { FunctionDecl *FDecl = dyn_cast_or_null(NDecl); unsigned BuiltinID = (FDecl ? 
FDecl->getBuiltinID() : 0); // Functions with 'interrupt' attribute cannot be called directly. if (FDecl && FDecl->hasAttr()) { Diag(Fn->getExprLoc(), diag::err_anyx86_interrupt_called); return ExprError(); } // Promote the function operand. // We special-case function promotion here because we only allow promoting // builtin functions to function pointers in the callee of a call. ExprResult Result; if (BuiltinID && Fn->getType()->isSpecificBuiltinType(BuiltinType::BuiltinFn)) { Result = ImpCastExprToType(Fn, Context.getPointerType(FDecl->getType()), CK_BuiltinFnToFnPtr).get(); } else { Result = CallExprUnaryConversions(Fn); } if (Result.isInvalid()) return ExprError(); Fn = Result.get(); // Make the call expr early, before semantic checks. This guarantees cleanup // of arguments and function on error. CallExpr *TheCall; if (Config) TheCall = new (Context) CUDAKernelCallExpr(Context, Fn, cast(Config), Args, Context.BoolTy, VK_RValue, RParenLoc); else TheCall = new (Context) CallExpr(Context, Fn, Args, Context.BoolTy, VK_RValue, RParenLoc); if (!getLangOpts().CPlusPlus) { // C cannot always handle TypoExpr nodes in builtin calls and direct // function calls as their argument checking don't necessarily handle // dependent types properly, so make sure any TypoExprs have been // dealt with. ExprResult Result = CorrectDelayedTyposInExpr(TheCall); if (!Result.isUsable()) return ExprError(); TheCall = dyn_cast(Result.get()); if (!TheCall) return Result; Args = llvm::makeArrayRef(TheCall->getArgs(), TheCall->getNumArgs()); } // Bail out early if calling a builtin with custom typechecking. if (BuiltinID && Context.BuiltinInfo.hasCustomTypechecking(BuiltinID)) return CheckBuiltinFunctionCall(FDecl, BuiltinID, TheCall); retry: const FunctionType *FuncT; if (const PointerType *PT = Fn->getType()->getAs()) { // C99 6.5.2.2p1 - "The expression that denotes the called function shall // have type pointer to function". FuncT = PT->getPointeeType()->getAs(); if (!FuncT) return ExprError(Diag(LParenLoc, diag::err_typecheck_call_not_function) << Fn->getType() << Fn->getSourceRange()); } else if (const BlockPointerType *BPT = Fn->getType()->getAs()) { FuncT = BPT->getPointeeType()->castAs(); } else { // Handle calls to expressions of unknown-any type. if (Fn->getType() == Context.UnknownAnyTy) { ExprResult rewrite = rebuildUnknownAnyFunction(*this, Fn); if (rewrite.isInvalid()) return ExprError(); Fn = rewrite.get(); TheCall->setCallee(Fn); goto retry; } return ExprError(Diag(LParenLoc, diag::err_typecheck_call_not_function) << Fn->getType() << Fn->getSourceRange()); } if (getLangOpts().CUDA) { if (Config) { // CUDA: Kernel calls must be to global functions if (FDecl && !FDecl->hasAttr()) return ExprError(Diag(LParenLoc,diag::err_kern_call_not_global_function) << FDecl->getName() << Fn->getSourceRange()); // CUDA: Kernel function must have 'void' return type if (!FuncT->getReturnType()->isVoidType()) return ExprError(Diag(LParenLoc, diag::err_kern_type_not_void_return) << Fn->getType() << Fn->getSourceRange()); } else { // CUDA: Calls to global functions must be configured if (FDecl && FDecl->hasAttr()) return ExprError(Diag(LParenLoc, diag::err_global_call_not_config) << FDecl->getName() << Fn->getSourceRange()); } } // Check for a valid return type if (CheckCallReturnType(FuncT->getReturnType(), Fn->getLocStart(), TheCall, FDecl)) return ExprError(); // We know the result type of the call, set it. 
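  // (Editor's note, an illustrative sketch rather than original text: e.g. for
  // a C function declared 'const int f(void)', the resulting CallExpr is given
  // type 'int'; getCallResultType drops qualifiers from non-class rvalues.)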
TheCall->setType(FuncT->getCallResultType(Context));
  TheCall->setValueKind(Expr::getValueKindForType(FuncT->getReturnType()));

  const FunctionProtoType *Proto = dyn_cast<FunctionProtoType>(FuncT);
  if (Proto) {
    if (ConvertArgumentsForCall(TheCall, Fn, FDecl, Proto, Args, RParenLoc,
                                IsExecConfig))
      return ExprError();
  } else {
    assert(isa<FunctionNoProtoType>(FuncT) && "Unknown FunctionType!");

    if (FDecl) {
      // Check if we have too few/too many template arguments, based
      // on our knowledge of the function definition.
      const FunctionDecl *Def = nullptr;
      if (FDecl->hasBody(Def) && Args.size() != Def->param_size()) {
        Proto = Def->getType()->getAs<FunctionProtoType>();
        if (!Proto || !(Proto->isVariadic() && Args.size() >= Def->param_size()))
          Diag(RParenLoc, diag::warn_call_wrong_number_of_arguments)
              << (Args.size() > Def->param_size()) << FDecl
              << Fn->getSourceRange();
      }

      // If the function we're calling isn't a function prototype, but we have
      // a function prototype from a prior declaration, use that prototype.
      if (!FDecl->hasPrototype())
        Proto = FDecl->getType()->getAs<FunctionProtoType>();
    }

    // Promote the arguments (C99 6.5.2.2p6).
    for (unsigned i = 0, e = Args.size(); i != e; i++) {
      Expr *Arg = Args[i];

      if (Proto && i < Proto->getNumParams()) {
        InitializedEntity Entity = InitializedEntity::InitializeParameter(
            Context, Proto->getParamType(i), Proto->isParamConsumed(i));
        ExprResult ArgE =
            PerformCopyInitialization(Entity, SourceLocation(), Arg);
        if (ArgE.isInvalid())
          return true;

        Arg = ArgE.getAs<Expr>();
      } else {
        ExprResult ArgE = DefaultArgumentPromotion(Arg);

        if (ArgE.isInvalid())
          return true;

        Arg = ArgE.getAs<Expr>();
      }

      if (RequireCompleteType(Arg->getLocStart(), Arg->getType(),
                              diag::err_call_incomplete_argument, Arg))
        return ExprError();

      TheCall->setArg(i, Arg);
    }
  }

  if (CXXMethodDecl *Method = dyn_cast_or_null<CXXMethodDecl>(FDecl))
    if (!Method->isStatic())
      return ExprError(Diag(LParenLoc, diag::err_member_call_without_object)
                       << Fn->getSourceRange());

  // Check for sentinels
  if (NDecl)
    DiagnoseSentinelCalls(NDecl, LParenLoc, Args);

  // Do special checking on direct calls to functions.
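  // (Editor's note, an illustrative sketch rather than original text:
  // CheckFunctionCall is where per-call diagnostics such as format-string
  // checking fire, e.g.
  //
  //   printf("%d", "not an int");   // -Wformat warning
  //
  // while CheckBuiltinFunctionCall performs builtin-specific checking.)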
if (FDecl) { if (CheckFunctionCall(FDecl, TheCall, Proto)) return ExprError(); if (BuiltinID) return CheckBuiltinFunctionCall(FDecl, BuiltinID, TheCall); } else if (NDecl) { if (CheckPointerCall(NDecl, TheCall, Proto)) return ExprError(); } else { if (CheckOtherCall(TheCall, Proto)) return ExprError(); } return MaybeBindToTemporary(TheCall); } ExprResult Sema::ActOnCompoundLiteral(SourceLocation LParenLoc, ParsedType Ty, SourceLocation RParenLoc, Expr *InitExpr) { assert(Ty && "ActOnCompoundLiteral(): missing type"); assert(InitExpr && "ActOnCompoundLiteral(): missing expression"); TypeSourceInfo *TInfo; QualType literalType = GetTypeFromParser(Ty, &TInfo); if (!TInfo) TInfo = Context.getTrivialTypeSourceInfo(literalType); return BuildCompoundLiteralExpr(LParenLoc, TInfo, RParenLoc, InitExpr); } ExprResult Sema::BuildCompoundLiteralExpr(SourceLocation LParenLoc, TypeSourceInfo *TInfo, SourceLocation RParenLoc, Expr *LiteralExpr) { QualType literalType = TInfo->getType(); if (literalType->isArrayType()) { if (RequireCompleteType(LParenLoc, Context.getBaseElementType(literalType), diag::err_illegal_decl_array_incomplete_type, SourceRange(LParenLoc, LiteralExpr->getSourceRange().getEnd()))) return ExprError(); if (literalType->isVariableArrayType()) return ExprError(Diag(LParenLoc, diag::err_variable_object_no_init) << SourceRange(LParenLoc, LiteralExpr->getSourceRange().getEnd())); } else if (!literalType->isDependentType() && RequireCompleteType(LParenLoc, literalType, diag::err_typecheck_decl_incomplete_type, SourceRange(LParenLoc, LiteralExpr->getSourceRange().getEnd()))) return ExprError(); InitializedEntity Entity = InitializedEntity::InitializeCompoundLiteralInit(TInfo); InitializationKind Kind = InitializationKind::CreateCStyleCast(LParenLoc, SourceRange(LParenLoc, RParenLoc), /*InitList=*/true); InitializationSequence InitSeq(*this, Entity, Kind, LiteralExpr); ExprResult Result = InitSeq.Perform(*this, Entity, Kind, LiteralExpr, &literalType); if (Result.isInvalid()) return ExprError(); LiteralExpr = Result.get(); bool isFileScope = !CurContext->isFunctionOrMethod(); if (isFileScope && !LiteralExpr->isTypeDependent() && !LiteralExpr->isValueDependent() && !literalType->isDependentType()) { // 6.5.2.5p3 if (CheckForConstantInitializer(LiteralExpr, literalType)) return ExprError(); } // In C, compound literals are l-values for some reason. // For GCC compatibility, in C++, file-scope array compound literals with // constant initializers are also l-values, and compound literals are // otherwise prvalues. // // (GCC also treats C++ list-initialized file-scope array prvalues with // constant initializers as l-values, but that's non-conforming, so we don't // follow it there.) // // FIXME: It would be better to handle the lvalue cases as materializing and // lifetime-extending a temporary object, but our materialized temporaries // representation only supports lifetime extension from a variable, not "out // of thin air". // FIXME: For C++, we might want to instead lifetime-extend only if a pointer // is bound to the result of applying array-to-pointer decay to the compound // literal. // FIXME: GCC supports compound literals of reference type, which should // obviously have a value kind derived from the kind of reference involved. ExprValueKind VK = (getLangOpts().CPlusPlus && !(isFileScope && literalType->isArrayType())) ? 
VK_RValue : VK_LValue; return MaybeBindToTemporary( new (Context) CompoundLiteralExpr(LParenLoc, TInfo, literalType, VK, LiteralExpr, isFileScope)); } ExprResult Sema::ActOnInitList(SourceLocation LBraceLoc, MultiExprArg InitArgList, SourceLocation RBraceLoc) { // Immediately handle non-overload placeholders. Overloads can be // resolved contextually, but everything else here can't. for (unsigned I = 0, E = InitArgList.size(); I != E; ++I) { if (InitArgList[I]->getType()->isNonOverloadPlaceholderType()) { ExprResult result = CheckPlaceholderExpr(InitArgList[I]); // Ignore failures; dropping the entire initializer list because // of one failure would be terrible for indexing/etc. if (result.isInvalid()) continue; InitArgList[I] = result.get(); } } // Semantic analysis for initializers is done by ActOnDeclarator() and // CheckInitializer() - it requires knowledge of the object being intialized. InitListExpr *E = new (Context) InitListExpr(Context, LBraceLoc, InitArgList, RBraceLoc); E->setType(Context.VoidTy); // FIXME: just a place holder for now. return E; } /// Do an explicit extend of the given block pointer if we're in ARC. void Sema::maybeExtendBlockObject(ExprResult &E) { assert(E.get()->getType()->isBlockPointerType()); assert(E.get()->isRValue()); // Only do this in an r-value context. if (!getLangOpts().ObjCAutoRefCount) return; E = ImplicitCastExpr::Create(Context, E.get()->getType(), CK_ARCExtendBlockObject, E.get(), /*base path*/ nullptr, VK_RValue); Cleanup.setExprNeedsCleanups(true); } /// Prepare a conversion of the given expression to an ObjC object /// pointer type. CastKind Sema::PrepareCastToObjCObjectPointer(ExprResult &E) { QualType type = E.get()->getType(); if (type->isObjCObjectPointerType()) { return CK_BitCast; } else if (type->isBlockPointerType()) { maybeExtendBlockObject(E); return CK_BlockPointerToObjCPointerCast; } else { assert(type->isPointerType()); return CK_CPointerToObjCPointerCast; } } /// Prepares for a scalar cast, performing all the necessary stages /// except the final cast and returning the kind required. CastKind Sema::PrepareScalarCast(ExprResult &Src, QualType DestTy) { // Both Src and Dest are scalar types, i.e. arithmetic or pointer. // Also, callers should have filtered out the invalid cases with // pointers. Everything else should be possible. QualType SrcTy = Src.get()->getType(); if (Context.hasSameUnqualifiedType(SrcTy, DestTy)) return CK_NoOp; switch (Type::ScalarTypeKind SrcKind = SrcTy->getScalarTypeKind()) { case Type::STK_MemberPointer: llvm_unreachable("member pointer type in C"); case Type::STK_CPointer: case Type::STK_BlockPointer: case Type::STK_ObjCObjectPointer: switch (DestTy->getScalarTypeKind()) { case Type::STK_CPointer: { unsigned SrcAS = SrcTy->getPointeeType().getAddressSpace(); unsigned DestAS = DestTy->getPointeeType().getAddressSpace(); if (SrcAS != DestAS) return CK_AddressSpaceConversion; return CK_BitCast; } case Type::STK_BlockPointer: return (SrcKind == Type::STK_BlockPointer ? 
CK_BitCast : CK_AnyPointerToBlockPointerCast); case Type::STK_ObjCObjectPointer: if (SrcKind == Type::STK_ObjCObjectPointer) return CK_BitCast; if (SrcKind == Type::STK_CPointer) return CK_CPointerToObjCPointerCast; maybeExtendBlockObject(Src); return CK_BlockPointerToObjCPointerCast; case Type::STK_Bool: return CK_PointerToBoolean; case Type::STK_Integral: return CK_PointerToIntegral; case Type::STK_Floating: case Type::STK_FloatingComplex: case Type::STK_IntegralComplex: case Type::STK_MemberPointer: llvm_unreachable("illegal cast from pointer"); } llvm_unreachable("Should have returned before this"); case Type::STK_Bool: // casting from bool is like casting from an integer case Type::STK_Integral: switch (DestTy->getScalarTypeKind()) { case Type::STK_CPointer: case Type::STK_ObjCObjectPointer: case Type::STK_BlockPointer: if (Src.get()->isNullPointerConstant(Context, Expr::NPC_ValueDependentIsNull)) return CK_NullToPointer; return CK_IntegralToPointer; case Type::STK_Bool: return CK_IntegralToBoolean; case Type::STK_Integral: return CK_IntegralCast; case Type::STK_Floating: return CK_IntegralToFloating; case Type::STK_IntegralComplex: Src = ImpCastExprToType(Src.get(), DestTy->castAs()->getElementType(), CK_IntegralCast); return CK_IntegralRealToComplex; case Type::STK_FloatingComplex: Src = ImpCastExprToType(Src.get(), DestTy->castAs()->getElementType(), CK_IntegralToFloating); return CK_FloatingRealToComplex; case Type::STK_MemberPointer: llvm_unreachable("member pointer type in C"); } llvm_unreachable("Should have returned before this"); case Type::STK_Floating: switch (DestTy->getScalarTypeKind()) { case Type::STK_Floating: return CK_FloatingCast; case Type::STK_Bool: return CK_FloatingToBoolean; case Type::STK_Integral: return CK_FloatingToIntegral; case Type::STK_FloatingComplex: Src = ImpCastExprToType(Src.get(), DestTy->castAs()->getElementType(), CK_FloatingCast); return CK_FloatingRealToComplex; case Type::STK_IntegralComplex: Src = ImpCastExprToType(Src.get(), DestTy->castAs()->getElementType(), CK_FloatingToIntegral); return CK_IntegralRealToComplex; case Type::STK_CPointer: case Type::STK_ObjCObjectPointer: case Type::STK_BlockPointer: llvm_unreachable("valid float->pointer cast?"); case Type::STK_MemberPointer: llvm_unreachable("member pointer type in C"); } llvm_unreachable("Should have returned before this"); case Type::STK_FloatingComplex: switch (DestTy->getScalarTypeKind()) { case Type::STK_FloatingComplex: return CK_FloatingComplexCast; case Type::STK_IntegralComplex: return CK_FloatingComplexToIntegralComplex; case Type::STK_Floating: { QualType ET = SrcTy->castAs()->getElementType(); if (Context.hasSameType(ET, DestTy)) return CK_FloatingComplexToReal; Src = ImpCastExprToType(Src.get(), ET, CK_FloatingComplexToReal); return CK_FloatingCast; } case Type::STK_Bool: return CK_FloatingComplexToBoolean; case Type::STK_Integral: Src = ImpCastExprToType(Src.get(), SrcTy->castAs()->getElementType(), CK_FloatingComplexToReal); return CK_FloatingToIntegral; case Type::STK_CPointer: case Type::STK_ObjCObjectPointer: case Type::STK_BlockPointer: llvm_unreachable("valid complex float->pointer cast?"); case Type::STK_MemberPointer: llvm_unreachable("member pointer type in C"); } llvm_unreachable("Should have returned before this"); case Type::STK_IntegralComplex: switch (DestTy->getScalarTypeKind()) { case Type::STK_FloatingComplex: return CK_IntegralComplexToFloatingComplex; case Type::STK_IntegralComplex: return CK_IntegralComplexCast; case Type::STK_Integral: { QualType ET = 
SrcTy->castAs()->getElementType(); if (Context.hasSameType(ET, DestTy)) return CK_IntegralComplexToReal; Src = ImpCastExprToType(Src.get(), ET, CK_IntegralComplexToReal); return CK_IntegralCast; } case Type::STK_Bool: return CK_IntegralComplexToBoolean; case Type::STK_Floating: Src = ImpCastExprToType(Src.get(), SrcTy->castAs()->getElementType(), CK_IntegralComplexToReal); return CK_IntegralToFloating; case Type::STK_CPointer: case Type::STK_ObjCObjectPointer: case Type::STK_BlockPointer: llvm_unreachable("valid complex int->pointer cast?"); case Type::STK_MemberPointer: llvm_unreachable("member pointer type in C"); } llvm_unreachable("Should have returned before this"); } llvm_unreachable("Unhandled scalar cast"); } static bool breakDownVectorType(QualType type, uint64_t &len, QualType &eltType) { // Vectors are simple. if (const VectorType *vecType = type->getAs()) { len = vecType->getNumElements(); eltType = vecType->getElementType(); assert(eltType->isScalarType()); return true; } // We allow lax conversion to and from non-vector types, but only if // they're real types (i.e. non-complex, non-pointer scalar types). if (!type->isRealType()) return false; len = 1; eltType = type; return true; } /// Are the two types lax-compatible vector types? That is, given /// that one of them is a vector, do they have equal storage sizes, /// where the storage size is the number of elements times the element /// size? /// /// This will also return false if either of the types is neither a /// vector nor a real type. bool Sema::areLaxCompatibleVectorTypes(QualType srcTy, QualType destTy) { assert(destTy->isVectorType() || srcTy->isVectorType()); // Disallow lax conversions between scalars and ExtVectors (these // conversions are allowed for other vector types because common headers // depend on them). Most scalar OP ExtVector cases are handled by the // splat path anyway, which does what we want (convert, not bitcast). // What this rules out for ExtVectors is crazy things like char4*float. if (srcTy->isScalarType() && destTy->isExtVectorType()) return false; if (destTy->isScalarType() && srcTy->isExtVectorType()) return false; uint64_t srcLen, destLen; QualType srcEltTy, destEltTy; if (!breakDownVectorType(srcTy, srcLen, srcEltTy)) return false; if (!breakDownVectorType(destTy, destLen, destEltTy)) return false; // ASTContext::getTypeSize will return the size rounded up to a // power of 2, so instead of using that, we need to use the raw // element size multiplied by the element count. uint64_t srcEltSize = Context.getTypeSize(srcEltTy); uint64_t destEltSize = Context.getTypeSize(destEltTy); return (srcLen * srcEltSize == destLen * destEltSize); } /// Is this a legal conversion between two types, one of which is /// known to be a vector type? bool Sema::isLaxVectorConversion(QualType srcTy, QualType destTy) { assert(destTy->isVectorType() || srcTy->isVectorType()); if (!Context.getLangOpts().LaxVectorConversions) return false; return areLaxCompatibleVectorTypes(srcTy, destTy); } bool Sema::CheckVectorCast(SourceRange R, QualType VectorTy, QualType Ty, CastKind &Kind) { assert(VectorTy->isVectorType() && "Not a vector type!"); if (Ty->isVectorType() || Ty->isIntegralType(Context)) { if (!areLaxCompatibleVectorTypes(Ty, VectorTy)) return Diag(R.getBegin(), Ty->isVectorType() ? 
diag::err_invalid_conversion_between_vectors : diag::err_invalid_conversion_between_vector_and_integer) << VectorTy << Ty << R; } else return Diag(R.getBegin(), diag::err_invalid_conversion_between_vector_and_scalar) << VectorTy << Ty << R; Kind = CK_BitCast; return false; } ExprResult Sema::prepareVectorSplat(QualType VectorTy, Expr *SplattedExpr) { QualType DestElemTy = VectorTy->castAs()->getElementType(); if (DestElemTy == SplattedExpr->getType()) return SplattedExpr; assert(DestElemTy->isFloatingType() || DestElemTy->isIntegralOrEnumerationType()); CastKind CK; if (VectorTy->isExtVectorType() && SplattedExpr->getType()->isBooleanType()) { // OpenCL requires that we convert `true` boolean expressions to -1, but // only when splatting vectors. if (DestElemTy->isFloatingType()) { // To avoid having to have a CK_BooleanToSignedFloating cast kind, we cast // in two steps: boolean to signed integral, then to floating. ExprResult CastExprRes = ImpCastExprToType(SplattedExpr, Context.IntTy, CK_BooleanToSignedIntegral); SplattedExpr = CastExprRes.get(); CK = CK_IntegralToFloating; } else { CK = CK_BooleanToSignedIntegral; } } else { ExprResult CastExprRes = SplattedExpr; CK = PrepareScalarCast(CastExprRes, DestElemTy); if (CastExprRes.isInvalid()) return ExprError(); SplattedExpr = CastExprRes.get(); } return ImpCastExprToType(SplattedExpr, DestElemTy, CK); } ExprResult Sema::CheckExtVectorCast(SourceRange R, QualType DestTy, Expr *CastExpr, CastKind &Kind) { assert(DestTy->isExtVectorType() && "Not an extended vector type!"); QualType SrcTy = CastExpr->getType(); // If SrcTy is a VectorType, the total size must match to explicitly cast to // an ExtVectorType. // In OpenCL, casts between vectors of different types are not allowed. // (See OpenCL 6.2). if (SrcTy->isVectorType()) { if (!areLaxCompatibleVectorTypes(SrcTy, DestTy) || (getLangOpts().OpenCL && (DestTy.getCanonicalType() != SrcTy.getCanonicalType()))) { Diag(R.getBegin(),diag::err_invalid_conversion_between_ext_vectors) << DestTy << SrcTy << R; return ExprError(); } Kind = CK_BitCast; return CastExpr; } // All non-pointer scalars can be cast to ExtVector type. The appropriate // conversion will take place first from scalar to elt type, and then // splat from elt type to vector. if (SrcTy->isPointerType()) return Diag(R.getBegin(), diag::err_invalid_conversion_between_vector_and_scalar) << DestTy << SrcTy << R; Kind = CK_VectorSplat; return prepareVectorSplat(DestTy, CastExpr); } ExprResult Sema::ActOnCastExpr(Scope *S, SourceLocation LParenLoc, Declarator &D, ParsedType &Ty, SourceLocation RParenLoc, Expr *CastExpr) { assert(!D.isInvalidType() && (CastExpr != nullptr) && "ActOnCastExpr(): missing type or expr"); TypeSourceInfo *castTInfo = GetTypeForDeclaratorCast(D, CastExpr->getType()); if (D.isInvalidType()) return ExprError(); if (getLangOpts().CPlusPlus) { // Check that there are no default arguments (C++ only). CheckExtraCXXDefaultArguments(D); } else { // Make sure any TypoExprs have been dealt with. ExprResult Res = CorrectDelayedTyposInExpr(CastExpr); if (!Res.isUsable()) return ExprError(); CastExpr = Res.get(); } checkUnusedDeclAttributes(D); QualType castType = castTInfo->getType(); Ty = CreateParsedType(castType, castTInfo); bool isVectorLiteral = false; // Check for an altivec or OpenCL literal, // i.e. all the elements are integer constants. 
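  // (Editor's note, an illustrative sketch rather than original text: this
  // matches casts written like
  //
  //   vector int vi = (vector int)(1, 2, 3, 4);   // AltiVec
  //   int4 i4 = (int4)(9);                        // OpenCL
  //
  // whose parenthesized initializers parse as ParenListExprs.)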
ParenExpr *PE = dyn_cast<ParenExpr>(CastExpr);
  ParenListExpr *PLE = dyn_cast<ParenListExpr>(CastExpr);
  if ((getLangOpts().AltiVec || getLangOpts().ZVector || getLangOpts().OpenCL)
       && castType->isVectorType() && (PE || PLE)) {
    if (PLE && PLE->getNumExprs() == 0) {
      Diag(PLE->getExprLoc(), diag::err_altivec_empty_initializer);
      return ExprError();
    }
    if (PE || PLE->getNumExprs() == 1) {
      Expr *E = (PE ? PE->getSubExpr() : PLE->getExpr(0));
      if (!E->getType()->isVectorType())
        isVectorLiteral = true;
    }
    else
      isVectorLiteral = true;
  }

  // If this is a vector initializer, '(' type ')' '(' init, ..., init ')'
  // then handle it as such.
  if (isVectorLiteral)
    return BuildVectorLiteral(LParenLoc, RParenLoc, CastExpr, castTInfo);

  // If the Expr being casted is a ParenListExpr, handle it specially.
  // This is not an AltiVec-style cast, so turn the ParenListExpr into a
  // sequence of BinOp comma operators.
  if (isa<ParenListExpr>(CastExpr)) {
    ExprResult Result = MaybeConvertParenListExprToParenExpr(S, CastExpr);
    if (Result.isInvalid()) return ExprError();
    CastExpr = Result.get();
  }

  if (getLangOpts().CPlusPlus && !castType->isVoidType() &&
      !getSourceManager().isInSystemMacro(LParenLoc))
    Diag(LParenLoc, diag::warn_old_style_cast) << CastExpr->getSourceRange();

  CheckTollFreeBridgeCast(castType, CastExpr);

  CheckObjCBridgeRelatedCast(castType, CastExpr);

  DiscardMisalignedMemberAddress(castType.getTypePtr(), CastExpr);

  return BuildCStyleCastExpr(LParenLoc, castTInfo, RParenLoc, CastExpr);
}

ExprResult Sema::BuildVectorLiteral(SourceLocation LParenLoc,
                                    SourceLocation RParenLoc, Expr *E,
                                    TypeSourceInfo *TInfo) {
  assert((isa<ParenListExpr>(E) || isa<ParenExpr>(E)) &&
         "Expected paren or paren list expression");

  Expr **exprs;
  unsigned numExprs;
  Expr *subExpr;
  SourceLocation LiteralLParenLoc, LiteralRParenLoc;
  if (ParenListExpr *PE = dyn_cast<ParenListExpr>(E)) {
    LiteralLParenLoc = PE->getLParenLoc();
    LiteralRParenLoc = PE->getRParenLoc();
    exprs = PE->getExprs();
    numExprs = PE->getNumExprs();
  } else {
    // isa<ParenExpr> by assertion at function entrance
    LiteralLParenLoc = cast<ParenExpr>(E)->getLParen();
    LiteralRParenLoc = cast<ParenExpr>(E)->getRParen();
    subExpr = cast<ParenExpr>(E)->getSubExpr();
    exprs = &subExpr;
    numExprs = 1;
  }

  QualType Ty = TInfo->getType();
  assert(Ty->isVectorType() && "Expected vector type");

  SmallVector<Expr *, 8> initExprs;
  const VectorType *VTy = Ty->getAs<VectorType>();
  unsigned numElems = Ty->getAs<VectorType>()->getNumElements();

  // '(...)' form of vector initialization in AltiVec: the number of
  // initializers must be one or must match the size of the vector.
  // If a single value is specified in the initializer then it will be
  // replicated to all the components of the vector
  if (VTy->getVectorKind() == VectorType::AltiVecVector) {
    // The number of initializers must be one or must match the size of the
    // vector. If a single value is specified in the initializer then it will
    // be replicated to all the components of the vector
    if (numExprs == 1) {
      QualType ElemTy = Ty->getAs<VectorType>()->getElementType();
      ExprResult Literal = DefaultLvalueConversion(exprs[0]);
      if (Literal.isInvalid())
        return ExprError();
      Literal = ImpCastExprToType(Literal.get(), ElemTy,
                                  PrepareScalarCast(Literal, ElemTy));
      return BuildCStyleCastExpr(LParenLoc, TInfo, RParenLoc, Literal.get());
    }
    else if (numExprs < numElems) {
      Diag(E->getExprLoc(),
           diag::err_incorrect_number_of_vector_initializers);
      return ExprError();
    }
    else
      initExprs.append(exprs, exprs + numExprs);
  }
  else {
    // For OpenCL, when the number of initializers is a single value,
    // it will be replicated to all components of the vector.
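    // (Editor's note, an illustrative sketch rather than original text:
    //
    //   float4 f = (float4)(1.0f);  // same as (float4)(1.0f, 1.0f, 1.0f, 1.0f)
    // )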
if (getLangOpts().OpenCL && VTy->getVectorKind() == VectorType::GenericVector && numExprs == 1) { QualType ElemTy = Ty->getAs()->getElementType(); ExprResult Literal = DefaultLvalueConversion(exprs[0]); if (Literal.isInvalid()) return ExprError(); Literal = ImpCastExprToType(Literal.get(), ElemTy, PrepareScalarCast(Literal, ElemTy)); return BuildCStyleCastExpr(LParenLoc, TInfo, RParenLoc, Literal.get()); } initExprs.append(exprs, exprs + numExprs); } // FIXME: This means that pretty-printing the final AST will produce curly // braces instead of the original commas. InitListExpr *initE = new (Context) InitListExpr(Context, LiteralLParenLoc, initExprs, LiteralRParenLoc); initE->setType(Ty); return BuildCompoundLiteralExpr(LParenLoc, TInfo, RParenLoc, initE); } /// This is not an AltiVec-style cast or or C++ direct-initialization, so turn /// the ParenListExpr into a sequence of comma binary operators. ExprResult Sema::MaybeConvertParenListExprToParenExpr(Scope *S, Expr *OrigExpr) { ParenListExpr *E = dyn_cast(OrigExpr); if (!E) return OrigExpr; ExprResult Result(E->getExpr(0)); for (unsigned i = 1, e = E->getNumExprs(); i != e && !Result.isInvalid(); ++i) Result = ActOnBinOp(S, E->getExprLoc(), tok::comma, Result.get(), E->getExpr(i)); if (Result.isInvalid()) return ExprError(); return ActOnParenExpr(E->getLParenLoc(), E->getRParenLoc(), Result.get()); } ExprResult Sema::ActOnParenListExpr(SourceLocation L, SourceLocation R, MultiExprArg Val) { Expr *expr = new (Context) ParenListExpr(Context, L, Val, R); return expr; } /// \brief Emit a specialized diagnostic when one expression is a null pointer /// constant and the other is not a pointer. Returns true if a diagnostic is /// emitted. bool Sema::DiagnoseConditionalForNull(Expr *LHSExpr, Expr *RHSExpr, SourceLocation QuestionLoc) { Expr *NullExpr = LHSExpr; Expr *NonPointerExpr = RHSExpr; Expr::NullPointerConstantKind NullKind = NullExpr->isNullPointerConstant(Context, Expr::NPC_ValueDependentIsNotNull); if (NullKind == Expr::NPCK_NotNull) { NullExpr = RHSExpr; NonPointerExpr = LHSExpr; NullKind = NullExpr->isNullPointerConstant(Context, Expr::NPC_ValueDependentIsNotNull); } if (NullKind == Expr::NPCK_NotNull) return false; if (NullKind == Expr::NPCK_ZeroExpression) return false; if (NullKind == Expr::NPCK_ZeroLiteral) { // In this case, check to make sure that we got here from a "NULL" // string in the source code. NullExpr = NullExpr->IgnoreParenImpCasts(); SourceLocation loc = NullExpr->getExprLoc(); if (!findMacroSpelling(loc, "NULL")) return false; } int DiagType = (NullKind == Expr::NPCK_CXX11_nullptr); Diag(QuestionLoc, diag::err_typecheck_cond_incompatible_operands_null) << NonPointerExpr->getType() << DiagType << NonPointerExpr->getSourceRange(); return true; } /// \brief Return false if the condition expression is valid, true otherwise. static bool checkCondition(Sema &S, Expr *Cond, SourceLocation QuestionLoc) { QualType CondTy = Cond->getType(); // OpenCL v1.1 s6.3.i says the condition cannot be a floating point type. if (S.getLangOpts().OpenCL && CondTy->isFloatingType()) { S.Diag(QuestionLoc, diag::err_typecheck_cond_expect_nonfloat) << CondTy << Cond->getSourceRange(); return true; } // C99 6.5.15p2 if (CondTy->isScalarType()) return false; S.Diag(QuestionLoc, diag::err_typecheck_cond_expect_scalar) << CondTy << Cond->getSourceRange(); return true; } /// \brief Handle when one or both operands are void type. 
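/// (Editor's note, an illustrative sketch rather than original text: for
///
///   cond ? (void)0 : 42
///
/// the non-void arm is diagnosed with ext_typecheck_cond_one_void, both arms
/// are cast to void, and the whole conditional gets type 'void'.)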
static QualType checkConditionalVoidType(Sema &S, ExprResult &LHS, ExprResult &RHS) { Expr *LHSExpr = LHS.get(); Expr *RHSExpr = RHS.get(); if (!LHSExpr->getType()->isVoidType()) S.Diag(RHSExpr->getLocStart(), diag::ext_typecheck_cond_one_void) << RHSExpr->getSourceRange(); if (!RHSExpr->getType()->isVoidType()) S.Diag(LHSExpr->getLocStart(), diag::ext_typecheck_cond_one_void) << LHSExpr->getSourceRange(); LHS = S.ImpCastExprToType(LHS.get(), S.Context.VoidTy, CK_ToVoid); RHS = S.ImpCastExprToType(RHS.get(), S.Context.VoidTy, CK_ToVoid); return S.Context.VoidTy; } /// \brief Return false if the NullExpr can be promoted to PointerTy, /// true otherwise. static bool checkConditionalNullPointer(Sema &S, ExprResult &NullExpr, QualType PointerTy) { if ((!PointerTy->isAnyPointerType() && !PointerTy->isBlockPointerType()) || !NullExpr.get()->isNullPointerConstant(S.Context, Expr::NPC_ValueDependentIsNull)) return true; NullExpr = S.ImpCastExprToType(NullExpr.get(), PointerTy, CK_NullToPointer); return false; } /// \brief Checks compatibility between two pointers and return the resulting /// type. static QualType checkConditionalPointerCompatibility(Sema &S, ExprResult &LHS, ExprResult &RHS, SourceLocation Loc) { QualType LHSTy = LHS.get()->getType(); QualType RHSTy = RHS.get()->getType(); if (S.Context.hasSameType(LHSTy, RHSTy)) { // Two identical pointers types are always compatible. return LHSTy; } QualType lhptee, rhptee; // Get the pointee types. bool IsBlockPointer = false; if (const BlockPointerType *LHSBTy = LHSTy->getAs()) { lhptee = LHSBTy->getPointeeType(); rhptee = RHSTy->castAs()->getPointeeType(); IsBlockPointer = true; } else { lhptee = LHSTy->castAs()->getPointeeType(); rhptee = RHSTy->castAs()->getPointeeType(); } // C99 6.5.15p6: If both operands are pointers to compatible types or to // differently qualified versions of compatible types, the result type is // a pointer to an appropriately qualified version of the composite // type. // Only CVR-qualifiers exist in the standard, and the differently-qualified // clause doesn't make sense for our extensions. E.g. address space 2 should // be incompatible with address space 3: they may live on different devices or // anything. Qualifiers lhQual = lhptee.getQualifiers(); Qualifiers rhQual = rhptee.getQualifiers(); unsigned MergedCVRQual = lhQual.getCVRQualifiers() | rhQual.getCVRQualifiers(); lhQual.removeCVRQualifiers(); rhQual.removeCVRQualifiers(); lhptee = S.Context.getQualifiedType(lhptee.getUnqualifiedType(), lhQual); rhptee = S.Context.getQualifiedType(rhptee.getUnqualifiedType(), rhQual); // For OpenCL: // 1. If LHS and RHS types match exactly and: // (a) AS match => use standard C rules, no bitcast or addrspacecast // (b) AS overlap => generate addrspacecast // (c) AS don't overlap => give an error // 2. if LHS and RHS types don't match: // (a) AS match => use standard C rules, generate bitcast // (b) AS overlap => generate addrspacecast instead of bitcast // (c) AS don't overlap => give an error // For OpenCL, non-null composite type is returned only for cases 1a and 1b. QualType CompositeTy = S.Context.mergeTypes(lhptee, rhptee); // OpenCL cases 1c, 2a, 2b, and 2c. if (CompositeTy.isNull()) { // In this situation, we assume void* type. No especially good // reason, but this is what gcc does, and we do have to pick // to get a consistent AST. QualType incompatTy; if (S.getLangOpts().OpenCL) { // OpenCL v1.1 s6.5 - Conversion between pointers to distinct address // spaces is disallowed. 
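      // (Editor's note, an illustrative sketch rather than original text:
      //
      //   c ? (__global int *)g : (__local int *)l    // error: no overlap
      //   c ? (__generic int *)p : (__global int *)g  // ok: generic covers global
      //
      // Only overlapping address spaces continue to the addrspacecast path.)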
unsigned ResultAddrSpace; if (lhQual.isAddressSpaceSupersetOf(rhQual)) { // Cases 2a and 2b. ResultAddrSpace = lhQual.getAddressSpace(); } else if (rhQual.isAddressSpaceSupersetOf(lhQual)) { // Cases 2a and 2b. ResultAddrSpace = rhQual.getAddressSpace(); } else { // Cases 1c and 2c. S.Diag(Loc, diag::err_typecheck_op_on_nonoverlapping_address_space_pointers) << LHSTy << RHSTy << 2 << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); return QualType(); } // Continue handling cases 2a and 2b. incompatTy = S.Context.getPointerType( S.Context.getAddrSpaceQualType(S.Context.VoidTy, ResultAddrSpace)); LHS = S.ImpCastExprToType(LHS.get(), incompatTy, (lhQual.getAddressSpace() != ResultAddrSpace) ? CK_AddressSpaceConversion /* 2b */ : CK_BitCast /* 2a */); RHS = S.ImpCastExprToType(RHS.get(), incompatTy, (rhQual.getAddressSpace() != ResultAddrSpace) ? CK_AddressSpaceConversion /* 2b */ : CK_BitCast /* 2a */); } else { S.Diag(Loc, diag::ext_typecheck_cond_incompatible_pointers) << LHSTy << RHSTy << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); incompatTy = S.Context.getPointerType(S.Context.VoidTy); LHS = S.ImpCastExprToType(LHS.get(), incompatTy, CK_BitCast); RHS = S.ImpCastExprToType(RHS.get(), incompatTy, CK_BitCast); } return incompatTy; } // The pointer types are compatible. QualType ResultTy = CompositeTy.withCVRQualifiers(MergedCVRQual); auto LHSCastKind = CK_BitCast, RHSCastKind = CK_BitCast; if (IsBlockPointer) ResultTy = S.Context.getBlockPointerType(ResultTy); else { // Cases 1a and 1b for OpenCL. auto ResultAddrSpace = ResultTy.getQualifiers().getAddressSpace(); LHSCastKind = lhQual.getAddressSpace() == ResultAddrSpace ? CK_BitCast /* 1a */ : CK_AddressSpaceConversion /* 1b */; RHSCastKind = rhQual.getAddressSpace() == ResultAddrSpace ? CK_BitCast /* 1a */ : CK_AddressSpaceConversion /* 1b */; ResultTy = S.Context.getPointerType(ResultTy); } // For case 1a of OpenCL, S.ImpCastExprToType will not insert bitcast // if the target type does not change. LHS = S.ImpCastExprToType(LHS.get(), ResultTy, LHSCastKind); RHS = S.ImpCastExprToType(RHS.get(), ResultTy, RHSCastKind); return ResultTy; } /// \brief Return the resulting type when the operands are both block pointers. static QualType checkConditionalBlockPointerCompatibility(Sema &S, ExprResult &LHS, ExprResult &RHS, SourceLocation Loc) { QualType LHSTy = LHS.get()->getType(); QualType RHSTy = RHS.get()->getType(); if (!LHSTy->isBlockPointerType() || !RHSTy->isBlockPointerType()) { if (LHSTy->isVoidPointerType() || RHSTy->isVoidPointerType()) { QualType destType = S.Context.getPointerType(S.Context.VoidTy); LHS = S.ImpCastExprToType(LHS.get(), destType, CK_BitCast); RHS = S.ImpCastExprToType(RHS.get(), destType, CK_BitCast); return destType; } S.Diag(Loc, diag::err_typecheck_cond_incompatible_operands) << LHSTy << RHSTy << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); return QualType(); } // We have 2 block pointer types. return checkConditionalPointerCompatibility(S, LHS, RHS, Loc); } /// \brief Return the resulting type when the operands are both pointers. 
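/// (Editor's note, an illustrative sketch rather than original text: e.g. for
///
///   cond ? (void *)p : (const int *)q
///
/// the result type is 'const void *': the void pointee picks up the other
/// operand's qualifiers per C99 6.5.15p6.)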
static QualType checkConditionalObjectPointersCompatibility(Sema &S, ExprResult &LHS, ExprResult &RHS, SourceLocation Loc) { // get the pointer types QualType LHSTy = LHS.get()->getType(); QualType RHSTy = RHS.get()->getType(); // get the "pointed to" types QualType lhptee = LHSTy->getAs()->getPointeeType(); QualType rhptee = RHSTy->getAs()->getPointeeType(); // ignore qualifiers on void (C99 6.5.15p3, clause 6) if (lhptee->isVoidType() && rhptee->isIncompleteOrObjectType()) { // Figure out necessary qualifiers (C99 6.5.15p6) QualType destPointee = S.Context.getQualifiedType(lhptee, rhptee.getQualifiers()); QualType destType = S.Context.getPointerType(destPointee); // Add qualifiers if necessary. LHS = S.ImpCastExprToType(LHS.get(), destType, CK_NoOp); // Promote to void*. RHS = S.ImpCastExprToType(RHS.get(), destType, CK_BitCast); return destType; } if (rhptee->isVoidType() && lhptee->isIncompleteOrObjectType()) { QualType destPointee = S.Context.getQualifiedType(rhptee, lhptee.getQualifiers()); QualType destType = S.Context.getPointerType(destPointee); // Add qualifiers if necessary. RHS = S.ImpCastExprToType(RHS.get(), destType, CK_NoOp); // Promote to void*. LHS = S.ImpCastExprToType(LHS.get(), destType, CK_BitCast); return destType; } return checkConditionalPointerCompatibility(S, LHS, RHS, Loc); } /// \brief Return false if the first expression is not an integer and the second /// expression is not a pointer, true otherwise. static bool checkPointerIntegerMismatch(Sema &S, ExprResult &Int, Expr* PointerExpr, SourceLocation Loc, bool IsIntFirstExpr) { if (!PointerExpr->getType()->isPointerType() || !Int.get()->getType()->isIntegerType()) return false; Expr *Expr1 = IsIntFirstExpr ? Int.get() : PointerExpr; Expr *Expr2 = IsIntFirstExpr ? PointerExpr : Int.get(); S.Diag(Loc, diag::ext_typecheck_cond_pointer_integer_mismatch) << Expr1->getType() << Expr2->getType() << Expr1->getSourceRange() << Expr2->getSourceRange(); Int = S.ImpCastExprToType(Int.get(), PointerExpr->getType(), CK_IntegralToPointer); return true; } /// \brief Simple conversion between integer and floating point types. /// /// Used when handling the OpenCL conditional operator where the /// condition is a vector while the other operands are scalar. /// /// OpenCL v1.1 s6.3.i and s6.11.6 together require that the scalar /// types are either integer or floating type. Between the two /// operands, the type with the higher rank is defined as the "result /// type". The other operand needs to be promoted to the same type. No /// other type promotion is allowed. We cannot use /// UsualArithmeticConversions() for this purpose, since it always /// promotes promotable types. static QualType OpenCLArithmeticConversions(Sema &S, ExprResult &LHS, ExprResult &RHS, SourceLocation QuestionLoc) { LHS = S.DefaultFunctionArrayLvalueConversion(LHS.get()); if (LHS.isInvalid()) return QualType(); RHS = S.DefaultFunctionArrayLvalueConversion(RHS.get()); if (RHS.isInvalid()) return QualType(); // For conversion purposes, we ignore any qualifiers. // For example, "const float" and "float" are equivalent. 
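  // (Editor's note, an illustrative sketch rather than original text: with a
  // vector condition such as
  //
  //   int4 c;  ... c ? 1 : 2.5f ...
  //
  // both scalar arms convert to float, the higher-ranked type, before being
  // splatted by OpenCLConvertScalarsToVectors.)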
QualType LHSType = S.Context.getCanonicalType(LHS.get()->getType()).getUnqualifiedType(); QualType RHSType = S.Context.getCanonicalType(RHS.get()->getType()).getUnqualifiedType(); if (!LHSType->isIntegerType() && !LHSType->isRealFloatingType()) { S.Diag(QuestionLoc, diag::err_typecheck_cond_expect_int_float) << LHSType << LHS.get()->getSourceRange(); return QualType(); } if (!RHSType->isIntegerType() && !RHSType->isRealFloatingType()) { S.Diag(QuestionLoc, diag::err_typecheck_cond_expect_int_float) << RHSType << RHS.get()->getSourceRange(); return QualType(); } // If both types are identical, no conversion is needed. if (LHSType == RHSType) return LHSType; // Now handle "real" floating types (i.e. float, double, long double). if (LHSType->isRealFloatingType() || RHSType->isRealFloatingType()) return handleFloatConversion(S, LHS, RHS, LHSType, RHSType, /*IsCompAssign = */ false); // Finally, we have two differing integer types. return handleIntegerConversion (S, LHS, RHS, LHSType, RHSType, /*IsCompAssign = */ false); } /// \brief Convert scalar operands to a vector that matches the /// condition in length. /// /// Used when handling the OpenCL conditional operator where the /// condition is a vector while the other operands are scalar. /// /// We first compute the "result type" for the scalar operands /// according to OpenCL v1.1 s6.3.i. Both operands are then converted /// into a vector of that type where the length matches the condition /// vector type. s6.11.6 requires that the element types of the result /// and the condition must have the same number of bits. static QualType OpenCLConvertScalarsToVectors(Sema &S, ExprResult &LHS, ExprResult &RHS, QualType CondTy, SourceLocation QuestionLoc) { QualType ResTy = OpenCLArithmeticConversions(S, LHS, RHS, QuestionLoc); if (ResTy.isNull()) return QualType(); const VectorType *CV = CondTy->getAs(); assert(CV); // Determine the vector result type unsigned NumElements = CV->getNumElements(); QualType VectorTy = S.Context.getExtVectorType(ResTy, NumElements); // Ensure that all types have the same number of bits if (S.Context.getTypeSize(CV->getElementType()) != S.Context.getTypeSize(ResTy)) { // Since VectorTy is created internally, it does not pretty print // with an OpenCL name. Instead, we just print a description. std::string EleTyName = ResTy.getUnqualifiedType().getAsString(); SmallString<64> Str; llvm::raw_svector_ostream OS(Str); OS << "(vector of " << NumElements << " '" << EleTyName << "' values)"; S.Diag(QuestionLoc, diag::err_conditional_vector_element_size) << CondTy << OS.str(); return QualType(); } // Convert operands to the vector result type LHS = S.ImpCastExprToType(LHS.get(), VectorTy, CK_VectorSplat); RHS = S.ImpCastExprToType(RHS.get(), VectorTy, CK_VectorSplat); return VectorTy; } /// \brief Return false if this is a valid OpenCL condition vector static bool checkOpenCLConditionVector(Sema &S, Expr *Cond, SourceLocation QuestionLoc) { // OpenCL v1.1 s6.11.6 says the elements of the vector must be of // integral type. const VectorType *CondTy = Cond->getType()->getAs(); assert(CondTy); QualType EleTy = CondTy->getElementType(); if (EleTy->isIntegerType()) return false; S.Diag(QuestionLoc, diag::err_typecheck_cond_expect_nonfloat) << Cond->getType() << Cond->getSourceRange(); return true; } /// \brief Return false if the vector condition type and the vector /// result type are compatible. 
/// /// OpenCL v1.1 s6.11.6 requires that both vector types have the same /// number of elements, and their element types have the same number /// of bits. static bool checkVectorResult(Sema &S, QualType CondTy, QualType VecResTy, SourceLocation QuestionLoc) { const VectorType *CV = CondTy->getAs(); const VectorType *RV = VecResTy->getAs(); assert(CV && RV); if (CV->getNumElements() != RV->getNumElements()) { S.Diag(QuestionLoc, diag::err_conditional_vector_size) << CondTy << VecResTy; return true; } QualType CVE = CV->getElementType(); QualType RVE = RV->getElementType(); if (S.Context.getTypeSize(CVE) != S.Context.getTypeSize(RVE)) { S.Diag(QuestionLoc, diag::err_conditional_vector_element_size) << CondTy << VecResTy; return true; } return false; } /// \brief Return the resulting type for the conditional operator in /// OpenCL (aka "ternary selection operator", OpenCL v1.1 /// s6.3.i) when the condition is a vector type. static QualType OpenCLCheckVectorConditional(Sema &S, ExprResult &Cond, ExprResult &LHS, ExprResult &RHS, SourceLocation QuestionLoc) { Cond = S.DefaultFunctionArrayLvalueConversion(Cond.get()); if (Cond.isInvalid()) return QualType(); QualType CondTy = Cond.get()->getType(); if (checkOpenCLConditionVector(S, Cond.get(), QuestionLoc)) return QualType(); // If either operand is a vector then find the vector type of the // result as specified in OpenCL v1.1 s6.3.i. if (LHS.get()->getType()->isVectorType() || RHS.get()->getType()->isVectorType()) { QualType VecResTy = S.CheckVectorOperands(LHS, RHS, QuestionLoc, /*isCompAssign*/false, /*AllowBothBool*/true, /*AllowBoolConversions*/false); if (VecResTy.isNull()) return QualType(); // The result type must match the condition type as specified in // OpenCL v1.1 s6.11.6. if (checkVectorResult(S, CondTy, VecResTy, QuestionLoc)) return QualType(); return VecResTy; } // Both operands are scalar. return OpenCLConvertScalarsToVectors(S, LHS, RHS, CondTy, QuestionLoc); } /// \brief Return true if the Expr is block type static bool checkBlockType(Sema &S, const Expr *E) { if (const CallExpr *CE = dyn_cast(E)) { QualType Ty = CE->getCallee()->getType(); if (Ty->isBlockPointerType()) { S.Diag(E->getExprLoc(), diag::err_opencl_ternary_with_block); return true; } } return false; } /// Note that LHS is not null here, even if this is the gnu "x ?: y" extension. /// In that case, LHS = cond. /// C99 6.5.15 QualType Sema::CheckConditionalOperands(ExprResult &Cond, ExprResult &LHS, ExprResult &RHS, ExprValueKind &VK, ExprObjectKind &OK, SourceLocation QuestionLoc) { ExprResult LHSResult = CheckPlaceholderExpr(LHS.get()); if (!LHSResult.isUsable()) return QualType(); LHS = LHSResult; ExprResult RHSResult = CheckPlaceholderExpr(RHS.get()); if (!RHSResult.isUsable()) return QualType(); RHS = RHSResult; // C++ is sufficiently different to merit its own checker. if (getLangOpts().CPlusPlus) return CXXCheckConditionalOperands(Cond, LHS, RHS, VK, OK, QuestionLoc); VK = VK_RValue; OK = OK_Ordinary; // The OpenCL operator with a vector condition is sufficiently // different to merit its own checker. if (getLangOpts().OpenCL && Cond.get()->getType()->isVectorType()) return OpenCLCheckVectorConditional(*this, Cond, LHS, RHS, QuestionLoc); // First, check the condition. Cond = UsualUnaryConversions(Cond.get()); if (Cond.isInvalid()) return QualType(); if (checkCondition(*this, Cond.get(), QuestionLoc)) return QualType(); // Now check the two expressions. 
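  // (Editor's note, an illustrative sketch rather than original text: from
  // here the C99 6.5.15 ladder applies, e.g. 'cond ? 1 : 2.0' gets type
  // 'double' via UsualArithmeticConversions just below.)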
if (LHS.get()->getType()->isVectorType() || RHS.get()->getType()->isVectorType()) return CheckVectorOperands(LHS, RHS, QuestionLoc, /*isCompAssign*/false, /*AllowBothBool*/true, /*AllowBoolConversions*/false); QualType ResTy = UsualArithmeticConversions(LHS, RHS); if (LHS.isInvalid() || RHS.isInvalid()) return QualType(); QualType LHSTy = LHS.get()->getType(); QualType RHSTy = RHS.get()->getType(); // Diagnose attempts to convert between __float128 and long double where // such conversions currently can't be handled. if (unsupportedTypeConversion(*this, LHSTy, RHSTy)) { Diag(QuestionLoc, diag::err_typecheck_cond_incompatible_operands) << LHSTy << RHSTy << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); return QualType(); } // OpenCL v2.0 s6.12.5 - Blocks cannot be used as expressions of the ternary // selection operator (?:). if (getLangOpts().OpenCL && (checkBlockType(*this, LHS.get()) | checkBlockType(*this, RHS.get()))) { return QualType(); } // If both operands have arithmetic type, do the usual arithmetic conversions // to find a common type: C99 6.5.15p3,5. if (LHSTy->isArithmeticType() && RHSTy->isArithmeticType()) { LHS = ImpCastExprToType(LHS.get(), ResTy, PrepareScalarCast(LHS, ResTy)); RHS = ImpCastExprToType(RHS.get(), ResTy, PrepareScalarCast(RHS, ResTy)); return ResTy; } // If both operands are the same structure or union type, the result is that // type. if (const RecordType *LHSRT = LHSTy->getAs()) { // C99 6.5.15p3 if (const RecordType *RHSRT = RHSTy->getAs()) if (LHSRT->getDecl() == RHSRT->getDecl()) // "If both the operands have structure or union type, the result has // that type." This implies that CV qualifiers are dropped. return LHSTy.getUnqualifiedType(); // FIXME: Type of conditional expression must be complete in C mode. } // C99 6.5.15p5: "If both operands have void type, the result has void type." // The following || allows only one side to be void (a GCC-ism). if (LHSTy->isVoidType() || RHSTy->isVoidType()) { return checkConditionalVoidType(*this, LHS, RHS); } // C99 6.5.15p6 - "if one operand is a null pointer constant, the result has // the type of the other operand." if (!checkConditionalNullPointer(*this, RHS, LHSTy)) return LHSTy; if (!checkConditionalNullPointer(*this, LHS, RHSTy)) return RHSTy; // All objective-c pointer type analysis is done here. QualType compositeType = FindCompositeObjCPointerType(LHS, RHS, QuestionLoc); if (LHS.isInvalid() || RHS.isInvalid()) return QualType(); if (!compositeType.isNull()) return compositeType; // Handle block pointer types. if (LHSTy->isBlockPointerType() || RHSTy->isBlockPointerType()) return checkConditionalBlockPointerCompatibility(*this, LHS, RHS, QuestionLoc); // Check constraints for C object pointers types (C99 6.5.15p3,6). if (LHSTy->isPointerType() && RHSTy->isPointerType()) return checkConditionalObjectPointersCompatibility(*this, LHS, RHS, QuestionLoc); // GCC compatibility: soften pointer/integer mismatch. Note that // null pointers have been filtered out by this point. if (checkPointerIntegerMismatch(*this, LHS, RHS.get(), QuestionLoc, /*isIntFirstExpr=*/true)) return RHSTy; if (checkPointerIntegerMismatch(*this, RHS, LHS.get(), QuestionLoc, /*isIntFirstExpr=*/false)) return LHSTy; // Emit a better diagnostic if one of the expressions is a null pointer // constant and the other is not a pointer type. In this case, the user most // likely forgot to take the address of the other expression. 
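  // (Editor's note, an illustrative sketch rather than original text:
  //
  //   int x;  ... cond ? NULL : x ...   // likely meant 'cond ? NULL : &x'
  // )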
if (DiagnoseConditionalForNull(LHS.get(), RHS.get(), QuestionLoc)) return QualType(); // Otherwise, the operands are not compatible. Diag(QuestionLoc, diag::err_typecheck_cond_incompatible_operands) << LHSTy << RHSTy << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); return QualType(); } /// FindCompositeObjCPointerType - Helper method to find composite type of /// two objective-c pointer types of the two input expressions. QualType Sema::FindCompositeObjCPointerType(ExprResult &LHS, ExprResult &RHS, SourceLocation QuestionLoc) { QualType LHSTy = LHS.get()->getType(); QualType RHSTy = RHS.get()->getType(); // Handle things like Class and struct objc_class*. Here we case the result // to the pseudo-builtin, because that will be implicitly cast back to the // redefinition type if an attempt is made to access its fields. if (LHSTy->isObjCClassType() && (Context.hasSameType(RHSTy, Context.getObjCClassRedefinitionType()))) { RHS = ImpCastExprToType(RHS.get(), LHSTy, CK_CPointerToObjCPointerCast); return LHSTy; } if (RHSTy->isObjCClassType() && (Context.hasSameType(LHSTy, Context.getObjCClassRedefinitionType()))) { LHS = ImpCastExprToType(LHS.get(), RHSTy, CK_CPointerToObjCPointerCast); return RHSTy; } // And the same for struct objc_object* / id if (LHSTy->isObjCIdType() && (Context.hasSameType(RHSTy, Context.getObjCIdRedefinitionType()))) { RHS = ImpCastExprToType(RHS.get(), LHSTy, CK_CPointerToObjCPointerCast); return LHSTy; } if (RHSTy->isObjCIdType() && (Context.hasSameType(LHSTy, Context.getObjCIdRedefinitionType()))) { LHS = ImpCastExprToType(LHS.get(), RHSTy, CK_CPointerToObjCPointerCast); return RHSTy; } // And the same for struct objc_selector* / SEL if (Context.isObjCSelType(LHSTy) && (Context.hasSameType(RHSTy, Context.getObjCSelRedefinitionType()))) { RHS = ImpCastExprToType(RHS.get(), LHSTy, CK_BitCast); return LHSTy; } if (Context.isObjCSelType(RHSTy) && (Context.hasSameType(LHSTy, Context.getObjCSelRedefinitionType()))) { LHS = ImpCastExprToType(LHS.get(), RHSTy, CK_BitCast); return RHSTy; } // Check constraints for Objective-C object pointers types. if (LHSTy->isObjCObjectPointerType() && RHSTy->isObjCObjectPointerType()) { if (Context.getCanonicalType(LHSTy) == Context.getCanonicalType(RHSTy)) { // Two identical object pointer types are always compatible. return LHSTy; } const ObjCObjectPointerType *LHSOPT = LHSTy->castAs(); const ObjCObjectPointerType *RHSOPT = RHSTy->castAs(); QualType compositeType = LHSTy; // If both operands are interfaces and either operand can be // assigned to the other, use that type as the composite // type. This allows // xxx ? (A*) a : (B*) b // where B is a subclass of A. // // Additionally, as for assignment, if either type is 'id' // allow silent coercion. Finally, if the types are // incompatible then make sure to use 'id' as the composite // type so the result is acceptable for sending messages to. // FIXME: Consider unifying with 'areComparableObjCPointerTypes'. // It could return the composite type. if (!(compositeType = Context.areCommonBaseCompatible(LHSOPT, RHSOPT)).isNull()) { // Nothing more to do. } else if (Context.canAssignObjCInterfaces(LHSOPT, RHSOPT)) { compositeType = RHSOPT->isObjCBuiltinType() ? RHSTy : LHSTy; } else if (Context.canAssignObjCInterfaces(RHSOPT, LHSOPT)) { compositeType = LHSOPT->isObjCBuiltinType() ? LHSTy : RHSTy; } else if ((LHSTy->isObjCQualifiedIdType() || RHSTy->isObjCQualifiedIdType()) && Context.ObjCQualifiedIdTypesAreCompatible(LHSTy, RHSTy, true)) { // Need to handle "id" explicitly. 
// GCC allows qualified id and any Objective-C type to devolve to // id. Currently localizing to here until clear this should be // part of ObjCQualifiedIdTypesAreCompatible. compositeType = Context.getObjCIdType(); } else if (LHSTy->isObjCIdType() || RHSTy->isObjCIdType()) { compositeType = Context.getObjCIdType(); } else { Diag(QuestionLoc, diag::ext_typecheck_cond_incompatible_operands) << LHSTy << RHSTy << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); QualType incompatTy = Context.getObjCIdType(); LHS = ImpCastExprToType(LHS.get(), incompatTy, CK_BitCast); RHS = ImpCastExprToType(RHS.get(), incompatTy, CK_BitCast); return incompatTy; } // The object pointer types are compatible. LHS = ImpCastExprToType(LHS.get(), compositeType, CK_BitCast); RHS = ImpCastExprToType(RHS.get(), compositeType, CK_BitCast); return compositeType; } // Check Objective-C object pointer types and 'void *' if (LHSTy->isVoidPointerType() && RHSTy->isObjCObjectPointerType()) { if (getLangOpts().ObjCAutoRefCount) { // ARC forbids the implicit conversion of object pointers to 'void *', // so these types are not compatible. Diag(QuestionLoc, diag::err_cond_voidptr_arc) << LHSTy << RHSTy << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); LHS = RHS = true; return QualType(); } QualType lhptee = LHSTy->getAs()->getPointeeType(); QualType rhptee = RHSTy->getAs()->getPointeeType(); QualType destPointee = Context.getQualifiedType(lhptee, rhptee.getQualifiers()); QualType destType = Context.getPointerType(destPointee); // Add qualifiers if necessary. LHS = ImpCastExprToType(LHS.get(), destType, CK_NoOp); // Promote to void*. RHS = ImpCastExprToType(RHS.get(), destType, CK_BitCast); return destType; } if (LHSTy->isObjCObjectPointerType() && RHSTy->isVoidPointerType()) { if (getLangOpts().ObjCAutoRefCount) { // ARC forbids the implicit conversion of object pointers to 'void *', // so these types are not compatible. Diag(QuestionLoc, diag::err_cond_voidptr_arc) << LHSTy << RHSTy << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); LHS = RHS = true; return QualType(); } QualType lhptee = LHSTy->getAs()->getPointeeType(); QualType rhptee = RHSTy->getAs()->getPointeeType(); QualType destPointee = Context.getQualifiedType(rhptee, lhptee.getQualifiers()); QualType destType = Context.getPointerType(destPointee); // Add qualifiers if necessary. RHS = ImpCastExprToType(RHS.get(), destType, CK_NoOp); // Promote to void*. LHS = ImpCastExprToType(LHS.get(), destType, CK_BitCast); return destType; } return QualType(); } /// SuggestParentheses - Emit a note with a fixit hint that wraps /// ParenRange in parentheses. static void SuggestParentheses(Sema &Self, SourceLocation Loc, const PartialDiagnostic &Note, SourceRange ParenRange) { SourceLocation EndLoc = Self.getLocForEndOfToken(ParenRange.getEnd()); if (ParenRange.getBegin().isFileID() && ParenRange.getEnd().isFileID() && EndLoc.isValid()) { Self.Diag(Loc, Note) << FixItHint::CreateInsertion(ParenRange.getBegin(), "(") << FixItHint::CreateInsertion(EndLoc, ")"); } else { // We can't display the parentheses, so just show the bare note. 
Self.Diag(Loc, Note) << ParenRange; } } static bool IsArithmeticOp(BinaryOperatorKind Opc) { return BinaryOperator::isAdditiveOp(Opc) || BinaryOperator::isMultiplicativeOp(Opc) || BinaryOperator::isShiftOp(Opc); } /// IsArithmeticBinaryExpr - Returns true if E is an arithmetic binary /// expression, either using a built-in or overloaded operator, /// and sets *OpCode to the opcode and *RHSExprs to the right-hand side /// expression. static bool IsArithmeticBinaryExpr(Expr *E, BinaryOperatorKind *Opcode, Expr **RHSExprs) { // Don't strip parenthesis: we should not warn if E is in parenthesis. E = E->IgnoreImpCasts(); E = E->IgnoreConversionOperator(); E = E->IgnoreImpCasts(); // Built-in binary operator. if (BinaryOperator *OP = dyn_cast(E)) { if (IsArithmeticOp(OP->getOpcode())) { *Opcode = OP->getOpcode(); *RHSExprs = OP->getRHS(); return true; } } // Overloaded operator. if (CXXOperatorCallExpr *Call = dyn_cast(E)) { if (Call->getNumArgs() != 2) return false; // Make sure this is really a binary operator that is safe to pass into // BinaryOperator::getOverloadedOpcode(), e.g. it's not a subscript op. OverloadedOperatorKind OO = Call->getOperator(); if (OO < OO_Plus || OO > OO_Arrow || OO == OO_PlusPlus || OO == OO_MinusMinus) return false; BinaryOperatorKind OpKind = BinaryOperator::getOverloadedOpcode(OO); if (IsArithmeticOp(OpKind)) { *Opcode = OpKind; *RHSExprs = Call->getArg(1); return true; } } return false; } /// ExprLooksBoolean - Returns true if E looks boolean, i.e. it has boolean type /// or is a logical expression such as (x==y) which has int type, but is /// commonly interpreted as boolean. static bool ExprLooksBoolean(Expr *E) { E = E->IgnoreParenImpCasts(); if (E->getType()->isBooleanType()) return true; if (BinaryOperator *OP = dyn_cast(E)) return OP->isComparisonOp() || OP->isLogicalOp(); if (UnaryOperator *OP = dyn_cast(E)) return OP->getOpcode() == UO_LNot; if (E->getType()->isPointerType()) return true; return false; } /// DiagnoseConditionalPrecedence - Emit a warning when a conditional operator /// and binary operator are mixed in a way that suggests the programmer assumed /// the conditional operator has higher precedence, for example: /// "int x = a + someBinaryCondition ? 1 : 2". static void DiagnoseConditionalPrecedence(Sema &Self, SourceLocation OpLoc, Expr *Condition, Expr *LHSExpr, Expr *RHSExpr) { BinaryOperatorKind CondOpcode; Expr *CondRHS; if (!IsArithmeticBinaryExpr(Condition, &CondOpcode, &CondRHS)) return; if (!ExprLooksBoolean(CondRHS)) return; // The condition is an arithmetic binary expression, with a right- // hand side that looks boolean, so warn. Self.Diag(OpLoc, diag::warn_precedence_conditional) << Condition->getSourceRange() << BinaryOperator::getOpcodeStr(CondOpcode); SuggestParentheses(Self, OpLoc, Self.PDiag(diag::note_precedence_silence) << BinaryOperator::getOpcodeStr(CondOpcode), SourceRange(Condition->getLocStart(), Condition->getLocEnd())); SuggestParentheses(Self, OpLoc, Self.PDiag(diag::note_precedence_conditional_first), SourceRange(CondRHS->getLocStart(), RHSExpr->getLocEnd())); } /// Compute the nullability of a conditional expression. 
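/// (Editor's note, an illustrative sketch rather than original text: given
///
///   int * _Nonnull p;  int * _Nullable q;
///
/// 'b ? p : q' merges to _Nullable, while the binary form 'p ?: q' is
/// _Nonnull, since a non-null 'p' is used directly as the result.)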
static QualType computeConditionalNullability(QualType ResTy, bool IsBin,
                                              QualType LHSTy, QualType RHSTy,
                                              ASTContext &Ctx) {
  if (!ResTy->isAnyPointerType())
    return ResTy;

  auto GetNullability = [&Ctx](QualType Ty) {
    Optional<NullabilityKind> Kind = Ty->getNullability(Ctx);
    if (Kind)
      return *Kind;
    return NullabilityKind::Unspecified;
  };

  auto LHSKind = GetNullability(LHSTy), RHSKind = GetNullability(RHSTy);
  NullabilityKind MergedKind;

  // Compute nullability of a binary conditional expression.
  if (IsBin) {
    if (LHSKind == NullabilityKind::NonNull)
      MergedKind = NullabilityKind::NonNull;
    else
      MergedKind = RHSKind;
  // Compute nullability of a normal conditional expression.
  } else {
    if (LHSKind == NullabilityKind::Nullable ||
        RHSKind == NullabilityKind::Nullable)
      MergedKind = NullabilityKind::Nullable;
    else if (LHSKind == NullabilityKind::NonNull)
      MergedKind = RHSKind;
    else if (RHSKind == NullabilityKind::NonNull)
      MergedKind = LHSKind;
    else
      MergedKind = NullabilityKind::Unspecified;
  }

  // Return if ResTy already has the correct nullability.
  if (GetNullability(ResTy) == MergedKind)
    return ResTy;

  // Strip all nullability from ResTy.
  while (ResTy->getNullability(Ctx))
    ResTy = ResTy.getSingleStepDesugaredType(Ctx);

  // Create a new AttributedType with the new nullability kind.
  auto NewAttr = AttributedType::getNullabilityAttrKind(MergedKind);
  return Ctx.getAttributedType(NewAttr, ResTy, ResTy);
}
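// For illustration only (not part of the original source), the merge rules
// above are what give the following Objective-C conditionals their result
// nullability:
//
//   id _Nonnull n;
//   id _Nullable m;
//
//   (cond ? n : m)   // merged nullability is _Nullable: either arm may win
//   (n ?: m)         // merged nullability is _Nonnull: a non-nil 'n' is
//                    // reused as the result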
/// ActOnConditionalOp - Parse a ?: operation.  Note that 'LHS' may be null
/// in the case of the GNU conditional expr extension.
ExprResult Sema::ActOnConditionalOp(SourceLocation QuestionLoc,
                                    SourceLocation ColonLoc,
                                    Expr *CondExpr, Expr *LHSExpr,
                                    Expr *RHSExpr) {
  if (!getLangOpts().CPlusPlus) {
    // C cannot handle TypoExpr nodes in the condition because it
    // doesn't handle dependent types properly, so make sure any TypoExprs have
    // been dealt with before checking the operands.
    ExprResult CondResult = CorrectDelayedTyposInExpr(CondExpr);
    ExprResult LHSResult = CorrectDelayedTyposInExpr(LHSExpr);
    ExprResult RHSResult = CorrectDelayedTyposInExpr(RHSExpr);

    if (!CondResult.isUsable())
      return ExprError();

    if (LHSExpr) {
      if (!LHSResult.isUsable())
        return ExprError();
    }

    if (!RHSResult.isUsable())
      return ExprError();

    CondExpr = CondResult.get();
    LHSExpr = LHSResult.get();
    RHSExpr = RHSResult.get();
  }

  // If this is the gnu "x ?: y" extension, analyze the types as though the LHS
  // was the condition.
  OpaqueValueExpr *opaqueValue = nullptr;
  Expr *commonExpr = nullptr;
  if (!LHSExpr) {
    commonExpr = CondExpr;
    // Lower out placeholder types first.  This is important so that we don't
    // try to capture a placeholder.  This happens in a few cases in C++, such
    // as Objective-C++'s dictionary subscripting syntax.
    if (commonExpr->hasPlaceholderType()) {
      ExprResult result = CheckPlaceholderExpr(commonExpr);
      if (!result.isUsable())
        return ExprError();
      commonExpr = result.get();
    }
    // We usually want to apply unary conversions *before* saving, except
    // in the special case of a C++ l-value conditional.
    if (!(getLangOpts().CPlusPlus &&
          !commonExpr->isTypeDependent() &&
          commonExpr->getValueKind() == RHSExpr->getValueKind() &&
          commonExpr->isGLValue() &&
          commonExpr->isOrdinaryOrBitFieldObject() &&
          RHSExpr->isOrdinaryOrBitFieldObject() &&
          Context.hasSameType(commonExpr->getType(), RHSExpr->getType()))) {
      ExprResult commonRes = UsualUnaryConversions(commonExpr);
      if (commonRes.isInvalid())
        return ExprError();
      commonExpr = commonRes.get();
    }

    opaqueValue = new (Context) OpaqueValueExpr(commonExpr->getExprLoc(),
                                                commonExpr->getType(),
                                                commonExpr->getValueKind(),
                                                commonExpr->getObjectKind(),
                                                commonExpr);
    LHSExpr = CondExpr = opaqueValue;
  }

  QualType LHSTy = LHSExpr->getType(), RHSTy = RHSExpr->getType();
  ExprValueKind VK = VK_RValue;
  ExprObjectKind OK = OK_Ordinary;
  ExprResult Cond = CondExpr, LHS = LHSExpr, RHS = RHSExpr;
  QualType result = CheckConditionalOperands(Cond, LHS, RHS,
                                             VK, OK, QuestionLoc);
  if (result.isNull() || Cond.isInvalid() || LHS.isInvalid() ||
      RHS.isInvalid())
    return ExprError();

  DiagnoseConditionalPrecedence(*this, QuestionLoc, Cond.get(), LHS.get(),
                                RHS.get());

  CheckBoolLikeConversion(Cond.get(), QuestionLoc);

  result = computeConditionalNullability(result, commonExpr, LHSTy, RHSTy,
                                         Context);

  if (!commonExpr)
    return new (Context)
        ConditionalOperator(Cond.get(), QuestionLoc, LHS.get(), ColonLoc,
                            RHS.get(), result, VK, OK);

  return new (Context) BinaryConditionalOperator(
      commonExpr, opaqueValue, Cond.get(), LHS.get(), RHS.get(), QuestionLoc,
      ColonLoc, result, VK, OK);
}
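// For illustration (not from the original source): under the GNU "x ?: y"
// extension, the first operand is both the condition and the result when it
// is non-zero, but it must be evaluated only once.  The OpaqueValueExpr
// built above is what makes that single evaluation explicit in the AST:
//
//   int *p = f() ?: g();   // f() is called exactly once
//
// is analyzed roughly like "tmp = f(); tmp ? tmp : g()".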
// checkPointerTypesForAssignment - This is a very tricky routine (despite
// being closely modeled after the C99 spec:-).  The odd characteristic of
// this routine is it effectively ignores the qualifiers on the top level
// pointee.  This circumvents the usual type rules specified in 6.2.7p1 &
// 6.7.5.[1-3].
// FIXME: add a couple examples in this comment.
static Sema::AssignConvertType
checkPointerTypesForAssignment(Sema &S, QualType LHSType, QualType RHSType) {
  assert(LHSType.isCanonical() && "LHS not canonicalized!");
  assert(RHSType.isCanonical() && "RHS not canonicalized!");

  // get the "pointed to" type (ignoring qualifiers at the top level)
  const Type *lhptee, *rhptee;
  Qualifiers lhq, rhq;
  std::tie(lhptee, lhq) =
      cast<PointerType>(LHSType)->getPointeeType().split().asPair();
  std::tie(rhptee, rhq) =
      cast<PointerType>(RHSType)->getPointeeType().split().asPair();

  Sema::AssignConvertType ConvTy = Sema::Compatible;

  // C99 6.5.16.1p1: The following citation is common to constraints
  // 3 & 4 (below).  ...and the type *pointed to* by the left has all the
  // qualifiers of the type *pointed to* by the right;

  // As a special case, 'non-__weak A *' -> 'non-__weak const *' is okay.
  if (lhq.getObjCLifetime() != rhq.getObjCLifetime() &&
      lhq.compatiblyIncludesObjCLifetime(rhq)) {
    // Ignore lifetime for further calculation.
    lhq.removeObjCLifetime();
    rhq.removeObjCLifetime();
  }

  if (!lhq.compatiblyIncludes(rhq)) {
    // Treat address-space mismatches as fatal.  TODO: address subspaces
    if (!lhq.isAddressSpaceSupersetOf(rhq))
      ConvTy = Sema::IncompatiblePointerDiscardsQualifiers;

    // It's okay to add or remove GC or lifetime qualifiers when converting to
    // and from void*.
    else if (lhq.withoutObjCGCAttr().withoutObjCLifetime()
                 .compatiblyIncludes(
                     rhq.withoutObjCGCAttr().withoutObjCLifetime()) &&
             (lhptee->isVoidType() || rhptee->isVoidType()))
      ; // keep old

    // Treat lifetime mismatches as fatal.
    else if (lhq.getObjCLifetime() != rhq.getObjCLifetime())
      ConvTy = Sema::IncompatiblePointerDiscardsQualifiers;

    // For GCC/MS compatibility, other qualifier mismatches are treated
    // as still compatible in C.
    else ConvTy = Sema::CompatiblePointerDiscardsQualifiers;
  }

  // C99 6.5.16.1p1 (constraint 4): If one operand is a pointer to an object or
  // incomplete type and the other is a pointer to a qualified or unqualified
  // version of void...
  if (lhptee->isVoidType()) {
    if (rhptee->isIncompleteOrObjectType())
      return ConvTy;

    // As an extension, we allow cast to/from void* to function pointer.
    assert(rhptee->isFunctionType());
    return Sema::FunctionVoidPointer;
  }

  if (rhptee->isVoidType()) {
    if (lhptee->isIncompleteOrObjectType())
      return ConvTy;

    // As an extension, we allow cast to/from void* to function pointer.
    assert(lhptee->isFunctionType());
    return Sema::FunctionVoidPointer;
  }

  // C99 6.5.16.1p1 (constraint 3): both operands are pointers to qualified or
  // unqualified versions of compatible types, ...
  QualType ltrans = QualType(lhptee, 0), rtrans = QualType(rhptee, 0);
  if (!S.Context.typesAreCompatible(ltrans, rtrans)) {
    // Check if the pointee types are compatible ignoring the sign.
    // We explicitly check for char so that we catch "char" vs
    // "unsigned char" on systems where "char" is unsigned.
    if (lhptee->isCharType())
      ltrans = S.Context.UnsignedCharTy;
    else if (lhptee->hasSignedIntegerRepresentation())
      ltrans = S.Context.getCorrespondingUnsignedType(ltrans);

    if (rhptee->isCharType())
      rtrans = S.Context.UnsignedCharTy;
    else if (rhptee->hasSignedIntegerRepresentation())
      rtrans = S.Context.getCorrespondingUnsignedType(rtrans);

    if (ltrans == rtrans) {
      // Types are compatible ignoring the sign.  Qualifier incompatibility
      // takes priority over sign incompatibility because the sign
      // warning can be disabled.
      if (ConvTy != Sema::Compatible)
        return ConvTy;

      return Sema::IncompatiblePointerSign;
    }

    // If we are a multi-level pointer, it's possible that our issue is simply
    // one of qualification - e.g. char ** -> const char ** is not allowed. If
    // the eventual target type is the same and the pointers have the same
    // level of indirection, this must be the issue.
    if (isa<PointerType>(lhptee) && isa<PointerType>(rhptee)) {
      do {
        lhptee = cast<PointerType>(lhptee)->getPointeeType().getTypePtr();
        rhptee = cast<PointerType>(rhptee)->getPointeeType().getTypePtr();
      } while (isa<PointerType>(lhptee) && isa<PointerType>(rhptee));

      if (lhptee == rhptee)
        return Sema::IncompatibleNestedPointerQualifiers;
    }

    // General pointer incompatibility takes priority over qualifiers.
    return Sema::IncompatiblePointer;
  }
  if (!S.getLangOpts().CPlusPlus &&
      S.IsFunctionConversion(ltrans, rtrans, ltrans))
    return Sema::IncompatiblePointer;
  return ConvTy;
}
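// A few illustrative cases (not from the original source) of the result
// kinds computed above, in C:
//
//   const char *cp; char *p;
//   cp = p;        // Compatible: the LHS only adds qualifiers
//   p = cp;        // CompatiblePointerDiscardsQualifiers (warning)
//
//   unsigned char *up; char *sp;
//   up = sp;       // IncompatiblePointerSign (warning)
//
//   const char **cpp; char **pp;
//   cpp = pp;      // IncompatibleNestedPointerQualifiers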
/// checkBlockPointerTypesForAssignment - This routine determines whether two
/// block pointer types are compatible or whether a block and normal pointer
/// are compatible.  It is more restrictive than comparing two function
/// pointer types.
static Sema::AssignConvertType
checkBlockPointerTypesForAssignment(Sema &S, QualType LHSType,
                                    QualType RHSType) {
  assert(LHSType.isCanonical() && "LHS not canonicalized!");
  assert(RHSType.isCanonical() && "RHS not canonicalized!");

  QualType lhptee, rhptee;

  // get the "pointed to" type (ignoring qualifiers at the top level)
  lhptee = cast<BlockPointerType>(LHSType)->getPointeeType();
  rhptee = cast<BlockPointerType>(RHSType)->getPointeeType();

  // In C++, the types have to match exactly.
  if (S.getLangOpts().CPlusPlus)
    return Sema::IncompatibleBlockPointer;

  Sema::AssignConvertType ConvTy = Sema::Compatible;

  // For blocks we enforce that qualifiers are identical.
  if (lhptee.getLocalQualifiers() != rhptee.getLocalQualifiers())
    ConvTy = Sema::CompatiblePointerDiscardsQualifiers;

  if (!S.Context.typesAreBlockPointerCompatible(LHSType, RHSType))
    return Sema::IncompatibleBlockPointer;

  return ConvTy;
}

/// checkObjCPointerTypesForAssignment - Compares two objective-c pointer types
/// for assignment compatibility.
static Sema::AssignConvertType
checkObjCPointerTypesForAssignment(Sema &S, QualType LHSType,
                                   QualType RHSType) {
  assert(LHSType.isCanonical() && "LHS was not canonicalized!");
  assert(RHSType.isCanonical() && "RHS was not canonicalized!");

  if (LHSType->isObjCBuiltinType()) {
    // Class is not compatible with ObjC object pointers.
    if (LHSType->isObjCClassType() && !RHSType->isObjCBuiltinType() &&
        !RHSType->isObjCQualifiedClassType())
      return Sema::IncompatiblePointer;
    return Sema::Compatible;
  }
  if (RHSType->isObjCBuiltinType()) {
    if (RHSType->isObjCClassType() && !LHSType->isObjCBuiltinType() &&
        !LHSType->isObjCQualifiedClassType())
      return Sema::IncompatiblePointer;
    return Sema::Compatible;
  }
  QualType lhptee = LHSType->getAs<ObjCObjectPointerType>()->getPointeeType();
  QualType rhptee = RHSType->getAs<ObjCObjectPointerType>()->getPointeeType();

  if (!lhptee.isAtLeastAsQualifiedAs(rhptee) &&
      // make an exception for id<P>
!LHSType->isObjCQualifiedIdType()) return Sema::CompatiblePointerDiscardsQualifiers; if (S.Context.typesAreCompatible(LHSType, RHSType)) return Sema::Compatible; if (LHSType->isObjCQualifiedIdType() || RHSType->isObjCQualifiedIdType()) return Sema::IncompatibleObjCQualifiedId; return Sema::IncompatiblePointer; } Sema::AssignConvertType Sema::CheckAssignmentConstraints(SourceLocation Loc, QualType LHSType, QualType RHSType) { // Fake up an opaque expression. We don't actually care about what // cast operations are required, so if CheckAssignmentConstraints // adds casts to this they'll be wasted, but fortunately that doesn't // usually happen on valid code. OpaqueValueExpr RHSExpr(Loc, RHSType, VK_RValue); ExprResult RHSPtr = &RHSExpr; CastKind K = CK_Invalid; return CheckAssignmentConstraints(LHSType, RHSPtr, K, /*ConvertRHS=*/false); } /// CheckAssignmentConstraints (C99 6.5.16) - This routine currently /// has code to accommodate several GCC extensions when type checking /// pointers. Here are some objectionable examples that GCC considers warnings: /// /// int a, *pint; /// short *pshort; /// struct foo *pfoo; /// /// pint = pshort; // warning: assignment from incompatible pointer type /// a = pint; // warning: assignment makes integer from pointer without a cast /// pint = a; // warning: assignment makes pointer from integer without a cast /// pint = pfoo; // warning: assignment from incompatible pointer type /// /// As a result, the code for dealing with pointers is more complex than the /// C99 spec dictates. /// /// Sets 'Kind' for any result kind except Incompatible. Sema::AssignConvertType Sema::CheckAssignmentConstraints(QualType LHSType, ExprResult &RHS, CastKind &Kind, bool ConvertRHS) { QualType RHSType = RHS.get()->getType(); QualType OrigLHSType = LHSType; // Get canonical types. We're not formatting these types, just comparing // them. LHSType = Context.getCanonicalType(LHSType).getUnqualifiedType(); RHSType = Context.getCanonicalType(RHSType).getUnqualifiedType(); // Common case: no conversion required. if (LHSType == RHSType) { Kind = CK_NoOp; return Compatible; } // If we have an atomic type, try a non-atomic assignment, then just add an // atomic qualification step. if (const AtomicType *AtomicTy = dyn_cast(LHSType)) { Sema::AssignConvertType result = CheckAssignmentConstraints(AtomicTy->getValueType(), RHS, Kind); if (result != Compatible) return result; if (Kind != CK_NoOp && ConvertRHS) RHS = ImpCastExprToType(RHS.get(), AtomicTy->getValueType(), Kind); Kind = CK_NonAtomicToAtomic; return Compatible; } // If the left-hand side is a reference type, then we are in a // (rare!) case where we've allowed the use of references in C, // e.g., as a parameter type in a built-in function. In this case, // just make sure that the type referenced is compatible with the // right-hand side type. The caller is responsible for adjusting // LHSType so that the resulting expression does not have reference // type. if (const ReferenceType *LHSTypeRef = LHSType->getAs()) { if (Context.typesAreCompatible(LHSTypeRef->getPointeeType(), RHSType)) { Kind = CK_LValueBitCast; return Compatible; } return Incompatible; } // Allow scalar to ExtVector assignments, and assignments of an ExtVector type // to the same ExtVector type. if (LHSType->isExtVectorType()) { if (RHSType->isExtVectorType()) return Incompatible; if (RHSType->isArithmeticType()) { // CK_VectorSplat does T -> vector T, so first cast to the element type. 
if (ConvertRHS) RHS = prepareVectorSplat(LHSType, RHS.get()); Kind = CK_VectorSplat; return Compatible; } } // Conversions to or from vector type. if (LHSType->isVectorType() || RHSType->isVectorType()) { if (LHSType->isVectorType() && RHSType->isVectorType()) { // Allow assignments of an AltiVec vector type to an equivalent GCC // vector type and vice versa if (Context.areCompatibleVectorTypes(LHSType, RHSType)) { Kind = CK_BitCast; return Compatible; } // If we are allowing lax vector conversions, and LHS and RHS are both // vectors, the total size only needs to be the same. This is a bitcast; // no bits are changed but the result type is different. if (isLaxVectorConversion(RHSType, LHSType)) { Kind = CK_BitCast; return IncompatibleVectors; } } // When the RHS comes from another lax conversion (e.g. binops between // scalars and vectors) the result is canonicalized as a vector. When the // LHS is also a vector, the lax is allowed by the condition above. Handle // the case where LHS is a scalar. if (LHSType->isScalarType()) { const VectorType *VecType = RHSType->getAs(); if (VecType && VecType->getNumElements() == 1 && isLaxVectorConversion(RHSType, LHSType)) { ExprResult *VecExpr = &RHS; *VecExpr = ImpCastExprToType(VecExpr->get(), LHSType, CK_BitCast); Kind = CK_BitCast; return Compatible; } } return Incompatible; } // Diagnose attempts to convert between __float128 and long double where // such conversions currently can't be handled. if (unsupportedTypeConversion(*this, LHSType, RHSType)) return Incompatible; // Arithmetic conversions. if (LHSType->isArithmeticType() && RHSType->isArithmeticType() && !(getLangOpts().CPlusPlus && LHSType->isEnumeralType())) { if (ConvertRHS) Kind = PrepareScalarCast(RHS, LHSType); return Compatible; } // Conversions to normal pointers. if (const PointerType *LHSPointer = dyn_cast(LHSType)) { // U* -> T* if (isa(RHSType)) { unsigned AddrSpaceL = LHSPointer->getPointeeType().getAddressSpace(); unsigned AddrSpaceR = RHSType->getPointeeType().getAddressSpace(); Kind = AddrSpaceL != AddrSpaceR ? CK_AddressSpaceConversion : CK_BitCast; return checkPointerTypesForAssignment(*this, LHSType, RHSType); } // int -> T* if (RHSType->isIntegerType()) { Kind = CK_IntegralToPointer; // FIXME: null? return IntToPointer; } // C pointers are not compatible with ObjC object pointers, // with two exceptions: if (isa(RHSType)) { // - conversions to void* if (LHSPointer->getPointeeType()->isVoidType()) { Kind = CK_BitCast; return Compatible; } // - conversions from 'Class' to the redefinition type if (RHSType->isObjCClassType() && Context.hasSameType(LHSType, Context.getObjCClassRedefinitionType())) { Kind = CK_BitCast; return Compatible; } Kind = CK_BitCast; return IncompatiblePointer; } // U^ -> void* if (RHSType->getAs()) { if (LHSPointer->getPointeeType()->isVoidType()) { Kind = CK_BitCast; return Compatible; } } return Incompatible; } // Conversions to block pointers. 
if (isa(LHSType)) { // U^ -> T^ if (RHSType->isBlockPointerType()) { Kind = CK_BitCast; return checkBlockPointerTypesForAssignment(*this, LHSType, RHSType); } // int or null -> T^ if (RHSType->isIntegerType()) { Kind = CK_IntegralToPointer; // FIXME: null return IntToBlockPointer; } // id -> T^ if (getLangOpts().ObjC1 && RHSType->isObjCIdType()) { Kind = CK_AnyPointerToBlockPointerCast; return Compatible; } // void* -> T^ if (const PointerType *RHSPT = RHSType->getAs()) if (RHSPT->getPointeeType()->isVoidType()) { Kind = CK_AnyPointerToBlockPointerCast; return Compatible; } return Incompatible; } // Conversions to Objective-C pointers. if (isa(LHSType)) { // A* -> B* if (RHSType->isObjCObjectPointerType()) { Kind = CK_BitCast; Sema::AssignConvertType result = checkObjCPointerTypesForAssignment(*this, LHSType, RHSType); if (getLangOpts().ObjCAutoRefCount && result == Compatible && !CheckObjCARCUnavailableWeakConversion(OrigLHSType, RHSType)) result = IncompatibleObjCWeakRef; return result; } // int or null -> A* if (RHSType->isIntegerType()) { Kind = CK_IntegralToPointer; // FIXME: null return IntToPointer; } // In general, C pointers are not compatible with ObjC object pointers, // with two exceptions: if (isa(RHSType)) { Kind = CK_CPointerToObjCPointerCast; // - conversions from 'void*' if (RHSType->isVoidPointerType()) { return Compatible; } // - conversions to 'Class' from its redefinition type if (LHSType->isObjCClassType() && Context.hasSameType(RHSType, Context.getObjCClassRedefinitionType())) { return Compatible; } return IncompatiblePointer; } // Only under strict condition T^ is compatible with an Objective-C pointer. if (RHSType->isBlockPointerType() && LHSType->isBlockCompatibleObjCPointerType(Context)) { if (ConvertRHS) maybeExtendBlockObject(RHS); Kind = CK_BlockPointerToObjCPointerCast; return Compatible; } return Incompatible; } // Conversions from pointers that are not covered by the above. if (isa(RHSType)) { // T* -> _Bool if (LHSType == Context.BoolTy) { Kind = CK_PointerToBoolean; return Compatible; } // T* -> int if (LHSType->isIntegerType()) { Kind = CK_PointerToIntegral; return PointerToInt; } return Incompatible; } // Conversions from Objective-C pointers that are not covered by the above. if (isa(RHSType)) { // T* -> _Bool if (LHSType == Context.BoolTy) { Kind = CK_PointerToBoolean; return Compatible; } // T* -> int if (LHSType->isIntegerType()) { Kind = CK_PointerToIntegral; return PointerToInt; } return Incompatible; } // struct A -> struct B if (isa(LHSType) && isa(RHSType)) { if (Context.typesAreCompatible(LHSType, RHSType)) { Kind = CK_NoOp; return Compatible; } } if (LHSType->isSamplerT() && RHSType->isIntegerType()) { Kind = CK_IntToOCLSampler; return Compatible; } return Incompatible; } /// \brief Constructs a transparent union from an expression that is /// used to initialize the transparent union. static void ConstructTransparentUnion(Sema &S, ASTContext &C, ExprResult &EResult, QualType UnionType, FieldDecl *Field) { // Build an initializer list that designates the appropriate member // of the transparent union. Expr *E = EResult.get(); InitListExpr *Initializer = new (C) InitListExpr(C, SourceLocation(), E, SourceLocation()); Initializer->setType(UnionType); Initializer->setInitializedFieldInUnion(Field); // Build a compound literal constructing a value of the transparent // union type from this initializer list. 
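// As a reminder of the GCC extension being modeled by the transparent-union
// code here, a sketch (illustrative only, not from this file):
//
//   typedef union {
//     int *ip;
//     float *fp;
//   } arg_t __attribute__((transparent_union));
//
//   void f(arg_t);
//   int *p;
//   f(p);   // OK: initializes the 'ip' member, as if f took an int*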
TypeSourceInfo *unionTInfo = C.getTrivialTypeSourceInfo(UnionType); EResult = new (C) CompoundLiteralExpr(SourceLocation(), unionTInfo, UnionType, VK_RValue, Initializer, false); } Sema::AssignConvertType Sema::CheckTransparentUnionArgumentConstraints(QualType ArgType, ExprResult &RHS) { QualType RHSType = RHS.get()->getType(); // If the ArgType is a Union type, we want to handle a potential // transparent_union GCC extension. const RecordType *UT = ArgType->getAsUnionType(); if (!UT || !UT->getDecl()->hasAttr()) return Incompatible; // The field to initialize within the transparent union. RecordDecl *UD = UT->getDecl(); FieldDecl *InitField = nullptr; // It's compatible if the expression matches any of the fields. for (auto *it : UD->fields()) { if (it->getType()->isPointerType()) { // If the transparent union contains a pointer type, we allow: // 1) void pointer // 2) null pointer constant if (RHSType->isPointerType()) if (RHSType->castAs()->getPointeeType()->isVoidType()) { RHS = ImpCastExprToType(RHS.get(), it->getType(), CK_BitCast); InitField = it; break; } if (RHS.get()->isNullPointerConstant(Context, Expr::NPC_ValueDependentIsNull)) { RHS = ImpCastExprToType(RHS.get(), it->getType(), CK_NullToPointer); InitField = it; break; } } CastKind Kind = CK_Invalid; if (CheckAssignmentConstraints(it->getType(), RHS, Kind) == Compatible) { RHS = ImpCastExprToType(RHS.get(), it->getType(), Kind); InitField = it; break; } } if (!InitField) return Incompatible; ConstructTransparentUnion(*this, Context, RHS, ArgType, InitField); return Compatible; } Sema::AssignConvertType Sema::CheckSingleAssignmentConstraints(QualType LHSType, ExprResult &CallerRHS, bool Diagnose, bool DiagnoseCFAudited, bool ConvertRHS) { // We need to be able to tell the caller whether we diagnosed a problem, if // they ask us to issue diagnostics. assert((ConvertRHS || !Diagnose) && "can't indicate whether we diagnosed"); // If ConvertRHS is false, we want to leave the caller's RHS untouched. Sadly, // we can't avoid *all* modifications at the moment, so we need some somewhere // to put the updated value. ExprResult LocalRHS = CallerRHS; ExprResult &RHS = ConvertRHS ? CallerRHS : LocalRHS; if (getLangOpts().CPlusPlus) { if (!LHSType->isRecordType() && !LHSType->isAtomicType()) { // C++ 5.17p3: If the left operand is not of class type, the // expression is implicitly converted (C++ 4) to the // cv-unqualified type of the left operand. QualType RHSType = RHS.get()->getType(); if (Diagnose) { RHS = PerformImplicitConversion(RHS.get(), LHSType.getUnqualifiedType(), AA_Assigning); } else { ImplicitConversionSequence ICS = TryImplicitConversion(RHS.get(), LHSType.getUnqualifiedType(), /*SuppressUserConversions=*/false, /*AllowExplicit=*/false, /*InOverloadResolution=*/false, /*CStyle=*/false, /*AllowObjCWritebackConversion=*/false); if (ICS.isFailure()) return Incompatible; RHS = PerformImplicitConversion(RHS.get(), LHSType.getUnqualifiedType(), ICS, AA_Assigning); } if (RHS.isInvalid()) return Incompatible; Sema::AssignConvertType result = Compatible; if (getLangOpts().ObjCAutoRefCount && !CheckObjCARCUnavailableWeakConversion(LHSType, RHSType)) result = IncompatibleObjCWeakRef; return result; } // FIXME: Currently, we fall through and treat C++ classes like C // structures. // FIXME: We also fall through for atomics; not sure what should // happen there, though. } else if (RHS.get()->getType() == Context.OverloadTy) { // As a set of extensions to C, we support overloading on functions. 
These // functions need to be resolved here. DeclAccessPair DAP; if (FunctionDecl *FD = ResolveAddressOfOverloadedFunction( RHS.get(), LHSType, /*Complain=*/false, DAP)) RHS = FixOverloadedFunctionReference(RHS.get(), DAP, FD); else return Incompatible; } // C99 6.5.16.1p1: the left operand is a pointer and the right is // a null pointer constant. if ((LHSType->isPointerType() || LHSType->isObjCObjectPointerType() || LHSType->isBlockPointerType()) && RHS.get()->isNullPointerConstant(Context, Expr::NPC_ValueDependentIsNull)) { if (Diagnose || ConvertRHS) { CastKind Kind; CXXCastPath Path; CheckPointerConversion(RHS.get(), LHSType, Kind, Path, /*IgnoreBaseAccess=*/false, Diagnose); if (ConvertRHS) RHS = ImpCastExprToType(RHS.get(), LHSType, Kind, VK_RValue, &Path); } return Compatible; } // This check seems unnatural, however it is necessary to ensure the proper // conversion of functions/arrays. If the conversion were done for all // DeclExpr's (created by ActOnIdExpression), it would mess up the unary // expressions that suppress this implicit conversion (&, sizeof). // // Suppress this for references: C++ 8.5.3p5. if (!LHSType->isReferenceType()) { // FIXME: We potentially allocate here even if ConvertRHS is false. RHS = DefaultFunctionArrayLvalueConversion(RHS.get(), Diagnose); if (RHS.isInvalid()) return Incompatible; } Expr *PRE = RHS.get()->IgnoreParenCasts(); if (Diagnose && isa(PRE)) { ObjCProtocolDecl *PDecl = cast(PRE)->getProtocol(); if (PDecl && !PDecl->hasDefinition()) { Diag(PRE->getExprLoc(), diag::warn_atprotocol_protocol) << PDecl->getName(); Diag(PDecl->getLocation(), diag::note_entity_declared_at) << PDecl; } } CastKind Kind = CK_Invalid; Sema::AssignConvertType result = CheckAssignmentConstraints(LHSType, RHS, Kind, ConvertRHS); // C99 6.5.16.1p2: The value of the right operand is converted to the // type of the assignment expression. // CheckAssignmentConstraints allows the left-hand side to be a reference, // so that we can use references in built-in functions even in C. // The getNonReferenceType() call makes sure that the resulting expression // does not have reference type. if (result != Incompatible && RHS.get()->getType() != LHSType) { QualType Ty = LHSType.getNonLValueExprType(Context); Expr *E = RHS.get(); // Check for various Objective-C errors. If we are not reporting // diagnostics and just checking for errors, e.g., during overload // resolution, return Incompatible to indicate the failure. if (getLangOpts().ObjCAutoRefCount && CheckObjCARCConversion(SourceRange(), Ty, E, CCK_ImplicitConversion, Diagnose, DiagnoseCFAudited) != ACR_okay) { if (!Diagnose) return Incompatible; } if (getLangOpts().ObjC1 && (CheckObjCBridgeRelatedConversions(E->getLocStart(), LHSType, E->getType(), E, Diagnose) || ConversionToObjCStringLiteralCheck(LHSType, E, Diagnose))) { if (!Diagnose) return Incompatible; // Replace the expression with a corrected version and continue so we // can find further errors. RHS = E; return Compatible; } if (ConvertRHS) RHS = ImpCastExprToType(E, Ty, Kind); } return result; } QualType Sema::InvalidOperands(SourceLocation Loc, ExprResult &LHS, ExprResult &RHS) { Diag(Loc, diag::err_typecheck_invalid_operands) << LHS.get()->getType() << RHS.get()->getType() << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); return QualType(); } /// Try to convert a value of non-vector type to a vector type by converting /// the type to the element type of the vector and then performing a splat. 
/// If the language is OpenCL, we only use conversions that promote scalar /// rank; for C, Obj-C, and C++ we allow any real scalar conversion except /// for float->int. /// /// \param scalar - if non-null, actually perform the conversions /// \return true if the operation fails (but without diagnosing the failure) static bool tryVectorConvertAndSplat(Sema &S, ExprResult *scalar, QualType scalarTy, QualType vectorEltTy, QualType vectorTy) { // The conversion to apply to the scalar before splatting it, // if necessary. CastKind scalarCast = CK_Invalid; if (vectorEltTy->isIntegralType(S.Context)) { if (!scalarTy->isIntegralType(S.Context)) return true; if (S.getLangOpts().OpenCL && S.Context.getIntegerTypeOrder(vectorEltTy, scalarTy) < 0) return true; scalarCast = CK_IntegralCast; } else if (vectorEltTy->isRealFloatingType()) { if (scalarTy->isRealFloatingType()) { if (S.getLangOpts().OpenCL && S.Context.getFloatingTypeOrder(vectorEltTy, scalarTy) < 0) return true; scalarCast = CK_FloatingCast; } else if (scalarTy->isIntegralType(S.Context)) scalarCast = CK_IntegralToFloating; else return true; } else { return true; } // Adjust scalar if desired. if (scalar) { if (scalarCast != CK_Invalid) *scalar = S.ImpCastExprToType(scalar->get(), vectorEltTy, scalarCast); *scalar = S.ImpCastExprToType(scalar->get(), vectorTy, CK_VectorSplat); } return false; } QualType Sema::CheckVectorOperands(ExprResult &LHS, ExprResult &RHS, SourceLocation Loc, bool IsCompAssign, bool AllowBothBool, bool AllowBoolConversions) { if (!IsCompAssign) { LHS = DefaultFunctionArrayLvalueConversion(LHS.get()); if (LHS.isInvalid()) return QualType(); } RHS = DefaultFunctionArrayLvalueConversion(RHS.get()); if (RHS.isInvalid()) return QualType(); // For conversion purposes, we ignore any qualifiers. // For example, "const float" and "float" are equivalent. QualType LHSType = LHS.get()->getType().getUnqualifiedType(); QualType RHSType = RHS.get()->getType().getUnqualifiedType(); const VectorType *LHSVecType = LHSType->getAs(); const VectorType *RHSVecType = RHSType->getAs(); assert(LHSVecType || RHSVecType); // AltiVec-style "vector bool op vector bool" combinations are allowed // for some operators but not others. if (!AllowBothBool && LHSVecType && LHSVecType->getVectorKind() == VectorType::AltiVecBool && RHSVecType && RHSVecType->getVectorKind() == VectorType::AltiVecBool) return InvalidOperands(Loc, LHS, RHS); // If the vector types are identical, return. if (Context.hasSameType(LHSType, RHSType)) return LHSType; // If we have compatible AltiVec and GCC vector types, use the AltiVec type. if (LHSVecType && RHSVecType && Context.areCompatibleVectorTypes(LHSType, RHSType)) { if (isa(LHSVecType)) { RHS = ImpCastExprToType(RHS.get(), LHSType, CK_BitCast); return LHSType; } if (!IsCompAssign) LHS = ImpCastExprToType(LHS.get(), RHSType, CK_BitCast); return RHSType; } // AllowBoolConversions says that bool and non-bool AltiVec vectors // can be mixed, with the result being the non-bool type. The non-bool // operand must have integer element type. 
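// An illustrative sketch (not from this file) of the scalar splat performed
// by tryVectorConvertAndSplat:
//
//   typedef float float4 __attribute__((ext_vector_type(4)));
//   float4 v;
//   v = v + 2;   // '2' is first converted to float, then splatted to
//                // (float4)(2, 2, 2, 2) before the addition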
  if (AllowBoolConversions && LHSVecType && RHSVecType &&
      LHSVecType->getNumElements() == RHSVecType->getNumElements() &&
      (Context.getTypeSize(LHSVecType->getElementType()) ==
       Context.getTypeSize(RHSVecType->getElementType()))) {
    if (LHSVecType->getVectorKind() == VectorType::AltiVecVector &&
        LHSVecType->getElementType()->isIntegerType() &&
        RHSVecType->getVectorKind() == VectorType::AltiVecBool) {
      RHS = ImpCastExprToType(RHS.get(), LHSType, CK_BitCast);
      return LHSType;
    }
    if (!IsCompAssign &&
        LHSVecType->getVectorKind() == VectorType::AltiVecBool &&
        RHSVecType->getVectorKind() == VectorType::AltiVecVector &&
        RHSVecType->getElementType()->isIntegerType()) {
      LHS = ImpCastExprToType(LHS.get(), RHSType, CK_BitCast);
      return RHSType;
    }
  }

  // If there's an ext-vector type and a scalar, try to convert the scalar to
  // the vector element type and splat.
  // FIXME: this should also work for regular vector types as supported in GCC.
  if (!RHSVecType && isa<ExtVectorType>(LHSVecType)) {
    if (!tryVectorConvertAndSplat(*this, &RHS, RHSType,
                                  LHSVecType->getElementType(), LHSType))
      return LHSType;
  }
  if (!LHSVecType && isa<ExtVectorType>(RHSVecType)) {
    if (!tryVectorConvertAndSplat(*this, (IsCompAssign ? nullptr : &LHS),
                                  LHSType, RHSVecType->getElementType(),
                                  RHSType))
      return RHSType;
  }

  // FIXME: The code below also handles conversion between vectors and
  // non-scalars; we should break this down into fine grained specific checks
  // and emit proper diagnostics.
  QualType VecType = LHSVecType ? LHSType : RHSType;
  const VectorType *VT = LHSVecType ? LHSVecType : RHSVecType;
  QualType OtherType = LHSVecType ? RHSType : LHSType;
  ExprResult *OtherExpr = LHSVecType ? &RHS : &LHS;
  if (isLaxVectorConversion(OtherType, VecType)) {
    // If we're allowing lax vector conversions, only the total (data) size
    // needs to be the same. For non-compound assignment, if one of the types
    // is scalar, the result is always the vector type.
    if (!IsCompAssign) {
      *OtherExpr = ImpCastExprToType(OtherExpr->get(), VecType, CK_BitCast);
      return VecType;
    // In a compound assignment, lhs += rhs, 'lhs' is an lvalue src,
    // forbidding any implicit cast.  Here, the 'rhs' should be implicitly
    // cast to 'lhs' type.  Note that this is already done by non-compound
    // assignments in CheckAssignmentConstraints.  If it's a scalar type,
    // only bitcast for <1 x T> -> T.  The result is also a vector type.
    } else if (OtherType->isExtVectorType() ||
               (OtherType->isScalarType() && VT->getNumElements() == 1)) {
      ExprResult *RHSExpr = &RHS;
      *RHSExpr = ImpCastExprToType(RHSExpr->get(), LHSType, CK_BitCast);
      return VecType;
    }
  }

  // Okay, the expression is invalid.

  // If there's a non-vector, non-real operand, diagnose that.
  if ((!RHSVecType && !RHSType->isRealType()) ||
      (!LHSVecType && !LHSType->isRealType())) {
    Diag(Loc, diag::err_typecheck_vector_not_convertable_non_scalar)
        << LHSType << RHSType << LHS.get()->getSourceRange()
        << RHS.get()->getSourceRange();
    return QualType();
  }

  // OpenCL V1.1 6.2.6.p1:
  // If the operands are of more than one vector type, then an error shall
  // occur. Implicit conversions between vector types are not permitted, per
  // section 6.2.1.
  if (getLangOpts().OpenCL &&
      RHSVecType && isa<ExtVectorType>(RHSVecType) &&
      LHSVecType && isa<ExtVectorType>(LHSVecType)) {
    Diag(Loc, diag::err_opencl_implicit_vector_conversion) << LHSType
                                                           << RHSType;
    return QualType();
  }

  // Otherwise, use the generic diagnostic.
Diag(Loc, diag::err_typecheck_vector_not_convertable) << LHSType << RHSType << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); return QualType(); } // checkArithmeticNull - Detect when a NULL constant is used improperly in an // expression. These are mainly cases where the null pointer is used as an // integer instead of a pointer. static void checkArithmeticNull(Sema &S, ExprResult &LHS, ExprResult &RHS, SourceLocation Loc, bool IsCompare) { // The canonical way to check for a GNU null is with isNullPointerConstant, // but we use a bit of a hack here for speed; this is a relatively // hot path, and isNullPointerConstant is slow. bool LHSNull = isa(LHS.get()->IgnoreParenImpCasts()); bool RHSNull = isa(RHS.get()->IgnoreParenImpCasts()); QualType NonNullType = LHSNull ? RHS.get()->getType() : LHS.get()->getType(); // Avoid analyzing cases where the result will either be invalid (and // diagnosed as such) or entirely valid and not something to warn about. if ((!LHSNull && !RHSNull) || NonNullType->isBlockPointerType() || NonNullType->isMemberPointerType() || NonNullType->isFunctionType()) return; // Comparison operations would not make sense with a null pointer no matter // what the other expression is. if (!IsCompare) { S.Diag(Loc, diag::warn_null_in_arithmetic_operation) << (LHSNull ? LHS.get()->getSourceRange() : SourceRange()) << (RHSNull ? RHS.get()->getSourceRange() : SourceRange()); return; } // The rest of the operations only make sense with a null pointer // if the other expression is a pointer. if (LHSNull == RHSNull || NonNullType->isAnyPointerType() || NonNullType->canDecayToPointerType()) return; S.Diag(Loc, diag::warn_null_in_comparison_operation) << LHSNull /* LHS is NULL */ << NonNullType << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); } static void DiagnoseBadDivideOrRemainderValues(Sema& S, ExprResult &LHS, ExprResult &RHS, SourceLocation Loc, bool IsDiv) { // Check for division/remainder by zero. 
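// For illustration (not from this file), the cases this check targets:
//
//   int k = n / 0;   // warn_remainder_division_by_zero, IsDiv = true
//   int m = n % 0;   // warn_remainder_division_by_zero, IsDiv = false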
llvm::APSInt RHSValue; if (!RHS.get()->isValueDependent() && RHS.get()->EvaluateAsInt(RHSValue, S.Context) && RHSValue == 0) S.DiagRuntimeBehavior(Loc, RHS.get(), S.PDiag(diag::warn_remainder_division_by_zero) << IsDiv << RHS.get()->getSourceRange()); } QualType Sema::CheckMultiplyDivideOperands(ExprResult &LHS, ExprResult &RHS, SourceLocation Loc, bool IsCompAssign, bool IsDiv) { checkArithmeticNull(*this, LHS, RHS, Loc, /*isCompare=*/false); if (LHS.get()->getType()->isVectorType() || RHS.get()->getType()->isVectorType()) return CheckVectorOperands(LHS, RHS, Loc, IsCompAssign, /*AllowBothBool*/getLangOpts().AltiVec, /*AllowBoolConversions*/false); QualType compType = UsualArithmeticConversions(LHS, RHS, IsCompAssign); if (LHS.isInvalid() || RHS.isInvalid()) return QualType(); if (compType.isNull() || !compType->isArithmeticType()) return InvalidOperands(Loc, LHS, RHS); if (IsDiv) DiagnoseBadDivideOrRemainderValues(*this, LHS, RHS, Loc, IsDiv); return compType; } QualType Sema::CheckRemainderOperands( ExprResult &LHS, ExprResult &RHS, SourceLocation Loc, bool IsCompAssign) { checkArithmeticNull(*this, LHS, RHS, Loc, /*isCompare=*/false); if (LHS.get()->getType()->isVectorType() || RHS.get()->getType()->isVectorType()) { if (LHS.get()->getType()->hasIntegerRepresentation() && RHS.get()->getType()->hasIntegerRepresentation()) return CheckVectorOperands(LHS, RHS, Loc, IsCompAssign, /*AllowBothBool*/getLangOpts().AltiVec, /*AllowBoolConversions*/false); return InvalidOperands(Loc, LHS, RHS); } QualType compType = UsualArithmeticConversions(LHS, RHS, IsCompAssign); if (LHS.isInvalid() || RHS.isInvalid()) return QualType(); if (compType.isNull() || !compType->isIntegerType()) return InvalidOperands(Loc, LHS, RHS); DiagnoseBadDivideOrRemainderValues(*this, LHS, RHS, Loc, false /* IsDiv */); return compType; } /// \brief Diagnose invalid arithmetic on two void pointers. static void diagnoseArithmeticOnTwoVoidPointers(Sema &S, SourceLocation Loc, Expr *LHSExpr, Expr *RHSExpr) { S.Diag(Loc, S.getLangOpts().CPlusPlus ? diag::err_typecheck_pointer_arith_void_type : diag::ext_gnu_void_ptr) << 1 /* two pointers */ << LHSExpr->getSourceRange() << RHSExpr->getSourceRange(); } /// \brief Diagnose invalid arithmetic on a void pointer. static void diagnoseArithmeticOnVoidPointer(Sema &S, SourceLocation Loc, Expr *Pointer) { S.Diag(Loc, S.getLangOpts().CPlusPlus ? diag::err_typecheck_pointer_arith_void_type : diag::ext_gnu_void_ptr) << 0 /* one pointer */ << Pointer->getSourceRange(); } /// \brief Diagnose invalid arithmetic on two function pointers. static void diagnoseArithmeticOnTwoFunctionPointers(Sema &S, SourceLocation Loc, Expr *LHS, Expr *RHS) { assert(LHS->getType()->isAnyPointerType()); assert(RHS->getType()->isAnyPointerType()); S.Diag(Loc, S.getLangOpts().CPlusPlus ? diag::err_typecheck_pointer_arith_function_type : diag::ext_gnu_ptr_func_arith) << 1 /* two pointers */ << LHS->getType()->getPointeeType() // We only show the second type if it differs from the first. << (unsigned)!S.Context.hasSameUnqualifiedType(LHS->getType(), RHS->getType()) << RHS->getType()->getPointeeType() << LHS->getSourceRange() << RHS->getSourceRange(); } /// \brief Diagnose invalid arithmetic on a function pointer. static void diagnoseArithmeticOnFunctionPointer(Sema &S, SourceLocation Loc, Expr *Pointer) { assert(Pointer->getType()->isAnyPointerType()); S.Diag(Loc, S.getLangOpts().CPlusPlus ? 
diag::err_typecheck_pointer_arith_function_type : diag::ext_gnu_ptr_func_arith) << 0 /* one pointer */ << Pointer->getType()->getPointeeType() << 0 /* one pointer, so only one type */ << Pointer->getSourceRange(); } /// \brief Emit error if Operand is incomplete pointer type /// /// \returns True if pointer has incomplete type static bool checkArithmeticIncompletePointerType(Sema &S, SourceLocation Loc, Expr *Operand) { QualType ResType = Operand->getType(); if (const AtomicType *ResAtomicType = ResType->getAs()) ResType = ResAtomicType->getValueType(); assert(ResType->isAnyPointerType() && !ResType->isDependentType()); QualType PointeeTy = ResType->getPointeeType(); return S.RequireCompleteType(Loc, PointeeTy, diag::err_typecheck_arithmetic_incomplete_type, PointeeTy, Operand->getSourceRange()); } /// \brief Check the validity of an arithmetic pointer operand. /// /// If the operand has pointer type, this code will check for pointer types /// which are invalid in arithmetic operations. These will be diagnosed /// appropriately, including whether or not the use is supported as an /// extension. /// /// \returns True when the operand is valid to use (even if as an extension). static bool checkArithmeticOpPointerOperand(Sema &S, SourceLocation Loc, Expr *Operand) { QualType ResType = Operand->getType(); if (const AtomicType *ResAtomicType = ResType->getAs()) ResType = ResAtomicType->getValueType(); if (!ResType->isAnyPointerType()) return true; QualType PointeeTy = ResType->getPointeeType(); if (PointeeTy->isVoidType()) { diagnoseArithmeticOnVoidPointer(S, Loc, Operand); return !S.getLangOpts().CPlusPlus; } if (PointeeTy->isFunctionType()) { diagnoseArithmeticOnFunctionPointer(S, Loc, Operand); return !S.getLangOpts().CPlusPlus; } if (checkArithmeticIncompletePointerType(S, Loc, Operand)) return false; return true; } /// \brief Check the validity of a binary arithmetic operation w.r.t. pointer /// operands. /// /// This routine will diagnose any invalid arithmetic on pointer operands much /// like \see checkArithmeticOpPointerOperand. However, it has special logic /// for emitting a single diagnostic even for operations where both LHS and RHS /// are (potentially problematic) pointers. /// /// \returns True when the operand is valid to use (even if as an extension). static bool checkArithmeticBinOpPointerOperands(Sema &S, SourceLocation Loc, Expr *LHSExpr, Expr *RHSExpr) { bool isLHSPointer = LHSExpr->getType()->isAnyPointerType(); bool isRHSPointer = RHSExpr->getType()->isAnyPointerType(); if (!isLHSPointer && !isRHSPointer) return true; QualType LHSPointeeTy, RHSPointeeTy; if (isLHSPointer) LHSPointeeTy = LHSExpr->getType()->getPointeeType(); if (isRHSPointer) RHSPointeeTy = RHSExpr->getType()->getPointeeType(); // if both are pointers check if operation is valid wrt address spaces if (S.getLangOpts().OpenCL && isLHSPointer && isRHSPointer) { const PointerType *lhsPtr = LHSExpr->getType()->getAs(); const PointerType *rhsPtr = RHSExpr->getType()->getAs(); if (!lhsPtr->isAddressSpaceOverlapping(*rhsPtr)) { S.Diag(Loc, diag::err_typecheck_op_on_nonoverlapping_address_space_pointers) << LHSExpr->getType() << RHSExpr->getType() << 1 /*arithmetic op*/ << LHSExpr->getSourceRange() << RHSExpr->getSourceRange(); return false; } } // Check for arithmetic on pointers to incomplete types. 
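// Illustrative cases (not from this file) for the pointer-arithmetic checks
// in this area:
//
//   void *vp;
//   vp + 1;            // GNU extension in C (treats sizeof(void) as 1),
//                      // hard error in C++
//   void (*fp)(void);
//   fp + 1;            // likewise a GNU extension in C, error in C++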
bool isLHSVoidPtr = isLHSPointer && LHSPointeeTy->isVoidType(); bool isRHSVoidPtr = isRHSPointer && RHSPointeeTy->isVoidType(); if (isLHSVoidPtr || isRHSVoidPtr) { if (!isRHSVoidPtr) diagnoseArithmeticOnVoidPointer(S, Loc, LHSExpr); else if (!isLHSVoidPtr) diagnoseArithmeticOnVoidPointer(S, Loc, RHSExpr); else diagnoseArithmeticOnTwoVoidPointers(S, Loc, LHSExpr, RHSExpr); return !S.getLangOpts().CPlusPlus; } bool isLHSFuncPtr = isLHSPointer && LHSPointeeTy->isFunctionType(); bool isRHSFuncPtr = isRHSPointer && RHSPointeeTy->isFunctionType(); if (isLHSFuncPtr || isRHSFuncPtr) { if (!isRHSFuncPtr) diagnoseArithmeticOnFunctionPointer(S, Loc, LHSExpr); else if (!isLHSFuncPtr) diagnoseArithmeticOnFunctionPointer(S, Loc, RHSExpr); else diagnoseArithmeticOnTwoFunctionPointers(S, Loc, LHSExpr, RHSExpr); return !S.getLangOpts().CPlusPlus; } if (isLHSPointer && checkArithmeticIncompletePointerType(S, Loc, LHSExpr)) return false; if (isRHSPointer && checkArithmeticIncompletePointerType(S, Loc, RHSExpr)) return false; return true; } /// diagnoseStringPlusInt - Emit a warning when adding an integer to a string /// literal. static void diagnoseStringPlusInt(Sema &Self, SourceLocation OpLoc, Expr *LHSExpr, Expr *RHSExpr) { StringLiteral* StrExpr = dyn_cast(LHSExpr->IgnoreImpCasts()); Expr* IndexExpr = RHSExpr; if (!StrExpr) { StrExpr = dyn_cast(RHSExpr->IgnoreImpCasts()); IndexExpr = LHSExpr; } bool IsStringPlusInt = StrExpr && IndexExpr->getType()->isIntegralOrUnscopedEnumerationType(); if (!IsStringPlusInt || IndexExpr->isValueDependent()) return; llvm::APSInt index; if (IndexExpr->EvaluateAsInt(index, Self.getASTContext())) { unsigned StrLenWithNull = StrExpr->getLength() + 1; if (index.isNonNegative() && index <= llvm::APSInt(llvm::APInt(index.getBitWidth(), StrLenWithNull), index.isUnsigned())) return; } SourceRange DiagRange(LHSExpr->getLocStart(), RHSExpr->getLocEnd()); Self.Diag(OpLoc, diag::warn_string_plus_int) << DiagRange << IndexExpr->IgnoreImpCasts()->getType(); // Only print a fixit for "str" + int, not for int + "str". if (IndexExpr == RHSExpr) { SourceLocation EndLoc = Self.getLocForEndOfToken(RHSExpr->getLocEnd()); Self.Diag(OpLoc, diag::note_string_plus_scalar_silence) << FixItHint::CreateInsertion(LHSExpr->getLocStart(), "&") << FixItHint::CreateReplacement(SourceRange(OpLoc), "[") << FixItHint::CreateInsertion(EndLoc, "]"); } else Self.Diag(OpLoc, diag::note_string_plus_scalar_silence); } /// \brief Emit a warning when adding a char literal to a string. static void diagnoseStringPlusChar(Sema &Self, SourceLocation OpLoc, Expr *LHSExpr, Expr *RHSExpr) { const Expr *StringRefExpr = LHSExpr; const CharacterLiteral *CharExpr = dyn_cast(RHSExpr->IgnoreImpCasts()); if (!CharExpr) { CharExpr = dyn_cast(LHSExpr->IgnoreImpCasts()); StringRefExpr = RHSExpr; } if (!CharExpr || !StringRefExpr) return; const QualType StringType = StringRefExpr->getType(); // Return if not a PointerType. if (!StringType->isAnyPointerType()) return; // Return if not a CharacterType. 
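// For illustration (not from this file), what the two string helpers here
// warn on:
//
//   const char *s = "hello" + 2;    // warning: adding 'int' to a string
//                                   // does not append; the note suggests
//                                   // &"hello"[2] instead
//   const char *t = "hello" + 'a';  // warning: adding 'char' to a string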
if (!StringType->getPointeeType()->isAnyCharacterType()) return; ASTContext &Ctx = Self.getASTContext(); SourceRange DiagRange(LHSExpr->getLocStart(), RHSExpr->getLocEnd()); const QualType CharType = CharExpr->getType(); if (!CharType->isAnyCharacterType() && CharType->isIntegerType() && llvm::isUIntN(Ctx.getCharWidth(), CharExpr->getValue())) { Self.Diag(OpLoc, diag::warn_string_plus_char) << DiagRange << Ctx.CharTy; } else { Self.Diag(OpLoc, diag::warn_string_plus_char) << DiagRange << CharExpr->getType(); } // Only print a fixit for str + char, not for char + str. if (isa(RHSExpr->IgnoreImpCasts())) { SourceLocation EndLoc = Self.getLocForEndOfToken(RHSExpr->getLocEnd()); Self.Diag(OpLoc, diag::note_string_plus_scalar_silence) << FixItHint::CreateInsertion(LHSExpr->getLocStart(), "&") << FixItHint::CreateReplacement(SourceRange(OpLoc), "[") << FixItHint::CreateInsertion(EndLoc, "]"); } else { Self.Diag(OpLoc, diag::note_string_plus_scalar_silence); } } /// \brief Emit error when two pointers are incompatible. static void diagnosePointerIncompatibility(Sema &S, SourceLocation Loc, Expr *LHSExpr, Expr *RHSExpr) { assert(LHSExpr->getType()->isAnyPointerType()); assert(RHSExpr->getType()->isAnyPointerType()); S.Diag(Loc, diag::err_typecheck_sub_ptr_compatible) << LHSExpr->getType() << RHSExpr->getType() << LHSExpr->getSourceRange() << RHSExpr->getSourceRange(); } // C99 6.5.6 QualType Sema::CheckAdditionOperands(ExprResult &LHS, ExprResult &RHS, SourceLocation Loc, BinaryOperatorKind Opc, QualType* CompLHSTy) { checkArithmeticNull(*this, LHS, RHS, Loc, /*isCompare=*/false); if (LHS.get()->getType()->isVectorType() || RHS.get()->getType()->isVectorType()) { QualType compType = CheckVectorOperands( LHS, RHS, Loc, CompLHSTy, /*AllowBothBool*/getLangOpts().AltiVec, /*AllowBoolConversions*/getLangOpts().ZVector); if (CompLHSTy) *CompLHSTy = compType; return compType; } QualType compType = UsualArithmeticConversions(LHS, RHS, CompLHSTy); if (LHS.isInvalid() || RHS.isInvalid()) return QualType(); // Diagnose "string literal" '+' int and string '+' "char literal". if (Opc == BO_Add) { diagnoseStringPlusInt(*this, Loc, LHS.get(), RHS.get()); diagnoseStringPlusChar(*this, Loc, LHS.get(), RHS.get()); } // handle the common case first (both operands are arithmetic). if (!compType.isNull() && compType->isArithmeticType()) { if (CompLHSTy) *CompLHSTy = compType; return compType; } // Type-checking. Ultimately the pointer's going to be in PExp; // note that we bias towards the LHS being the pointer. 
  Expr *PExp = LHS.get(), *IExp = RHS.get();

  bool isObjCPointer;
  if (PExp->getType()->isPointerType()) {
    isObjCPointer = false;
  } else if (PExp->getType()->isObjCObjectPointerType()) {
    isObjCPointer = true;
  } else {
    std::swap(PExp, IExp);
    if (PExp->getType()->isPointerType()) {
      isObjCPointer = false;
    } else if (PExp->getType()->isObjCObjectPointerType()) {
      isObjCPointer = true;
    } else {
      return InvalidOperands(Loc, LHS, RHS);
    }
  }
  assert(PExp->getType()->isAnyPointerType());

  if (!IExp->getType()->isIntegerType())
    return InvalidOperands(Loc, LHS, RHS);

  if (!checkArithmeticOpPointerOperand(*this, Loc, PExp))
    return QualType();

  if (isObjCPointer && checkArithmeticOnObjCPointer(*this, Loc, PExp))
    return QualType();

  // Check array bounds for pointer arithmetic
  CheckArrayAccess(PExp, IExp);

  if (CompLHSTy) {
    QualType LHSTy = Context.isPromotableBitField(LHS.get());
    if (LHSTy.isNull()) {
      LHSTy = LHS.get()->getType();
      if (LHSTy->isPromotableIntegerType())
        LHSTy = Context.getPromotedIntegerType(LHSTy);
    }
    *CompLHSTy = LHSTy;
  }

  return PExp->getType();
}
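// An illustrative case (not from the original source) of the bias towards
// the LHS pointer and of the array-bounds check above:
//
//   int buf[4];
//   int *p = buf + 3;   // PExp = buf, IExp = 3; result type is 'int *'
//   int *q = 3 + buf;   // operands swapped, same analysis
//   int *r = buf + 9;   // CheckArrayAccess may warn: the result points
//                       // past the end of the array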
// C99 6.5.6
QualType Sema::CheckSubtractionOperands(ExprResult &LHS, ExprResult &RHS,
                                        SourceLocation Loc,
                                        QualType* CompLHSTy) {
  checkArithmeticNull(*this, LHS, RHS, Loc, /*isCompare=*/false);

  if (LHS.get()->getType()->isVectorType() ||
      RHS.get()->getType()->isVectorType()) {
    QualType compType = CheckVectorOperands(
        LHS, RHS, Loc, CompLHSTy,
        /*AllowBothBool*/getLangOpts().AltiVec,
        /*AllowBoolConversions*/getLangOpts().ZVector);
    if (CompLHSTy) *CompLHSTy = compType;
    return compType;
  }

  QualType compType = UsualArithmeticConversions(LHS, RHS, CompLHSTy);
  if (LHS.isInvalid() || RHS.isInvalid())
    return QualType();

  // Enforce type constraints: C99 6.5.6p3.

  // Handle the common case first (both operands are arithmetic).
  if (!compType.isNull() && compType->isArithmeticType()) {
    if (CompLHSTy) *CompLHSTy = compType;
    return compType;
  }

  // Either ptr - int   or   ptr - ptr.
  if (LHS.get()->getType()->isAnyPointerType()) {
    QualType lpointee = LHS.get()->getType()->getPointeeType();

    // Diagnose bad cases where we step over interface counts.
    if (LHS.get()->getType()->isObjCObjectPointerType() &&
        checkArithmeticOnObjCPointer(*this, Loc, LHS.get()))
      return QualType();

    // The result type of a pointer-int computation is the pointer type.
    if (RHS.get()->getType()->isIntegerType()) {
      if (!checkArithmeticOpPointerOperand(*this, Loc, LHS.get()))
        return QualType();

      // Check array bounds for pointer arithmetic
      CheckArrayAccess(LHS.get(), RHS.get(), /*ArraySubscriptExpr*/nullptr,
                       /*AllowOnePastEnd*/true, /*IndexNegated*/true);

      if (CompLHSTy) *CompLHSTy = LHS.get()->getType();
      return LHS.get()->getType();
    }

    // Handle pointer-pointer subtractions.
    if (const PointerType *RHSPTy
          = RHS.get()->getType()->getAs<PointerType>()) {
      QualType rpointee = RHSPTy->getPointeeType();

      if (getLangOpts().CPlusPlus) {
        // Pointee types must be the same: C++ [expr.add]
        if (!Context.hasSameUnqualifiedType(lpointee, rpointee)) {
          diagnosePointerIncompatibility(*this, Loc, LHS.get(), RHS.get());
        }
      } else {
        // Pointee types must be compatible C99 6.5.6p3
        if (!Context.typesAreCompatible(
                Context.getCanonicalType(lpointee).getUnqualifiedType(),
                Context.getCanonicalType(rpointee).getUnqualifiedType())) {
          diagnosePointerIncompatibility(*this, Loc, LHS.get(), RHS.get());
          return QualType();
        }
      }

      if (!checkArithmeticBinOpPointerOperands(*this, Loc,
                                               LHS.get(), RHS.get()))
        return QualType();

      // The pointee type may have zero size.  As an extension, a structure
      // or union may have zero size or an array may have zero length.  In
      // this case subtraction does not make sense.
      if (!rpointee->isVoidType() && !rpointee->isFunctionType()) {
        CharUnits ElementSize = Context.getTypeSizeInChars(rpointee);
        if (ElementSize.isZero()) {
          Diag(Loc, diag::warn_sub_ptr_zero_size_types)
              << rpointee.getUnqualifiedType()
              << LHS.get()->getSourceRange() << RHS.get()->getSourceRange();
        }
      }

      if (CompLHSTy) *CompLHSTy = LHS.get()->getType();
      return Context.getPointerDiffType();
    }
  }

  return InvalidOperands(Loc, LHS, RHS);
}

static bool isScopedEnumerationType(QualType T) {
  if (const EnumType *ET = T->getAs<EnumType>())
    return ET->getDecl()->isScoped();
  return false;
}

static void DiagnoseBadShiftValues(Sema& S, ExprResult &LHS, ExprResult &RHS,
                                   SourceLocation Loc, BinaryOperatorKind Opc,
                                   QualType LHSType) {
  // OpenCL 6.3j: shift values are effectively % word size of LHS (more
  // defined), so skip the remaining warnings as we don't want to modify
  // values within Sema.
  if (S.getLangOpts().OpenCL)
    return;

  llvm::APSInt Right;
  // Check right/shifter operand
  if (RHS.get()->isValueDependent() ||
      !RHS.get()->EvaluateAsInt(Right, S.Context))
    return;

  if (Right.isNegative()) {
    S.DiagRuntimeBehavior(Loc, RHS.get(),
                          S.PDiag(diag::warn_shift_negative)
                            << RHS.get()->getSourceRange());
    return;
  }
  llvm::APInt LeftBits(Right.getBitWidth(),
                       S.Context.getTypeSize(LHS.get()->getType()));
  if (Right.uge(LeftBits)) {
    S.DiagRuntimeBehavior(Loc, RHS.get(),
                          S.PDiag(diag::warn_shift_gt_typewidth)
                            << RHS.get()->getSourceRange());
    return;
  }
  if (Opc != BO_Shl)
    return;

  // When left shifting an ICE which is signed, we can check for overflow
  // which according to C++ has undefined behavior ([expr.shift] 5.8/2).
  // Unsigned integers have defined behavior modulo one more than the maximum
  // value representable in the result type, so never warn for those.
  llvm::APSInt Left;
  if (LHS.get()->isValueDependent() ||
      LHSType->hasUnsignedIntegerRepresentation() ||
      !LHS.get()->EvaluateAsInt(Left, S.Context))
    return;

  // If the LHS is signed and its value is negative, the behavior is
  // undefined.  Warn about it.
  if (Left.isNegative() && !S.getLangOpts().isSignedOverflowDefined()) {
    S.DiagRuntimeBehavior(Loc, LHS.get(),
                          S.PDiag(diag::warn_shift_lhs_negative)
                            << LHS.get()->getSourceRange());
    return;
  }

  llvm::APInt ResultBits =
      static_cast<llvm::APInt &>(Right) + Left.getMinSignedBits();
  if (LeftBits.uge(ResultBits))
    return;
  llvm::APSInt Result = Left.extend(ResultBits.getLimitedValue());
  Result = Result.shl(Right);

  // Print the bit representation of the signed integer as an unsigned
  // hexadecimal number.
  SmallString<40> HexResult;
  Result.toString(HexResult, 16, /*Signed =*/false, /*Literal =*/true);

  // If we are only missing a sign bit, this is less likely to result in actual
  // bugs -- if the result is cast back to an unsigned type, it will have the
  // expected value. Thus we place this behind a different warning that can be
  // turned off separately if needed.
  if (LeftBits == ResultBits - 1) {
    S.Diag(Loc, diag::warn_shift_result_sets_sign_bit)
        << HexResult << LHSType
        << LHS.get()->getSourceRange() << RHS.get()->getSourceRange();
    return;
  }

  S.Diag(Loc, diag::warn_shift_result_gt_typewidth)
      << HexResult.str() << Result.getMinSignedBits() << LHSType
      << Left.getBitWidth() << LHS.get()->getSourceRange()
      << RHS.get()->getSourceRange();
}

/// \brief Return the resulting type when a vector is shifted
///        by a scalar or vector shift amount.
static QualType checkVectorShift(Sema &S, ExprResult &LHS, ExprResult &RHS, SourceLocation Loc, bool IsCompAssign) { // OpenCL v1.1 s6.3.j says RHS can be a vector only if LHS is a vector. if ((S.LangOpts.OpenCL || S.LangOpts.ZVector) && !LHS.get()->getType()->isVectorType()) { S.Diag(Loc, diag::err_shift_rhs_only_vector) << RHS.get()->getType() << LHS.get()->getType() << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); return QualType(); } if (!IsCompAssign) { LHS = S.UsualUnaryConversions(LHS.get()); if (LHS.isInvalid()) return QualType(); } RHS = S.UsualUnaryConversions(RHS.get()); if (RHS.isInvalid()) return QualType(); QualType LHSType = LHS.get()->getType(); // Note that LHS might be a scalar because the routine calls not only in // OpenCL case. const VectorType *LHSVecTy = LHSType->getAs(); QualType LHSEleType = LHSVecTy ? LHSVecTy->getElementType() : LHSType; // Note that RHS might not be a vector. QualType RHSType = RHS.get()->getType(); const VectorType *RHSVecTy = RHSType->getAs(); QualType RHSEleType = RHSVecTy ? RHSVecTy->getElementType() : RHSType; // The operands need to be integers. if (!LHSEleType->isIntegerType()) { S.Diag(Loc, diag::err_typecheck_expect_int) << LHS.get()->getType() << LHS.get()->getSourceRange(); return QualType(); } if (!RHSEleType->isIntegerType()) { S.Diag(Loc, diag::err_typecheck_expect_int) << RHS.get()->getType() << RHS.get()->getSourceRange(); return QualType(); } if (!LHSVecTy) { assert(RHSVecTy); if (IsCompAssign) return RHSType; if (LHSEleType != RHSEleType) { LHS = S.ImpCastExprToType(LHS.get(),RHSEleType, CK_IntegralCast); LHSEleType = RHSEleType; } QualType VecTy = S.Context.getExtVectorType(LHSEleType, RHSVecTy->getNumElements()); LHS = S.ImpCastExprToType(LHS.get(), VecTy, CK_VectorSplat); LHSType = VecTy; } else if (RHSVecTy) { // OpenCL v1.1 s6.3.j says that for vector types, the operators // are applied component-wise. So if RHS is a vector, then ensure // that the number of elements is the same as LHS... if (RHSVecTy->getNumElements() != LHSVecTy->getNumElements()) { S.Diag(Loc, diag::err_typecheck_vector_lengths_not_equal) << LHS.get()->getType() << RHS.get()->getType() << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); return QualType(); } if (!S.LangOpts.OpenCL && !S.LangOpts.ZVector) { const BuiltinType *LHSBT = LHSEleType->getAs(); const BuiltinType *RHSBT = RHSEleType->getAs(); if (LHSBT != RHSBT && S.Context.getTypeSize(LHSBT) != S.Context.getTypeSize(RHSBT)) { S.Diag(Loc, diag::warn_typecheck_vector_element_sizes_not_equal) << LHS.get()->getType() << RHS.get()->getType() << LHS.get()->getSourceRange() << RHS.get()->getSourceRange(); } } } else { // ...else expand RHS to match the number of elements in LHS. QualType VecTy = S.Context.getExtVectorType(RHSEleType, LHSVecTy->getNumElements()); RHS = S.ImpCastExprToType(RHS.get(), VecTy, CK_VectorSplat); } return LHSType; } // C99 6.5.7 QualType Sema::CheckShiftOperands(ExprResult &LHS, ExprResult &RHS, SourceLocation Loc, BinaryOperatorKind Opc, bool IsCompAssign) { checkArithmeticNull(*this, LHS, RHS, Loc, /*isCompare=*/false); // Vector shifts promote their scalar inputs to vector type. if (LHS.get()->getType()->isVectorType() || RHS.get()->getType()->isVectorType()) { if (LangOpts.ZVector) { // The shift operators for the z vector extensions work basically // like general shifts, except that neither the LHS nor the RHS is // allowed to be a "vector bool". 
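// Illustrative diagnostics (not from this file) that DiagnoseBadShiftValues
// produces for scalar shifts, assuming a 32-bit 'int':
//
//   int i;
//   i << -1;   // warn_shift_negative: shift count is negative
//   i << 32;   // warn_shift_gt_typewidth: count >= width of type
//   1 << 31;   // warn_shift_result_sets_sign_bit: result becomes negative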
// C99 6.5.7
QualType Sema::CheckShiftOperands(ExprResult &LHS, ExprResult &RHS,
                                  SourceLocation Loc, BinaryOperatorKind Opc,
                                  bool IsCompAssign) {
  checkArithmeticNull(*this, LHS, RHS, Loc, /*isCompare=*/false);

  // Vector shifts promote their scalar inputs to vector type.
  if (LHS.get()->getType()->isVectorType() ||
      RHS.get()->getType()->isVectorType()) {
    if (LangOpts.ZVector) {
      // The shift operators for the z vector extensions work basically
      // like general shifts, except that neither the LHS nor the RHS is
      // allowed to be a "vector bool".
      if (auto LHSVecType = LHS.get()->getType()->getAs<VectorType>())
        if (LHSVecType->getVectorKind() == VectorType::AltiVecBool)
          return InvalidOperands(Loc, LHS, RHS);
      if (auto RHSVecType = RHS.get()->getType()->getAs<VectorType>())
        if (RHSVecType->getVectorKind() == VectorType::AltiVecBool)
          return InvalidOperands(Loc, LHS, RHS);
    }
    return checkVectorShift(*this, LHS, RHS, Loc, IsCompAssign);
  }

  // Shifts don't perform usual arithmetic conversions, they just do integer
  // promotions on each operand. C99 6.5.7p3

  // For the LHS, do usual unary conversions, but then reset them away
  // if this is a compound assignment.
  ExprResult OldLHS = LHS;
  LHS = UsualUnaryConversions(LHS.get());
  if (LHS.isInvalid())
    return QualType();
  QualType LHSType = LHS.get()->getType();
  if (IsCompAssign) LHS = OldLHS;

  // The RHS is simpler.
  RHS = UsualUnaryConversions(RHS.get());
  if (RHS.isInvalid())
    return QualType();
  QualType RHSType = RHS.get()->getType();

  // C99 6.5.7p2: Each of the operands shall have integer type.
  if (!LHSType->hasIntegerRepresentation() ||
      !RHSType->hasIntegerRepresentation())
    return InvalidOperands(Loc, LHS, RHS);

  // C++0x: Don't allow scoped enums. FIXME: Use something better than
  // hasIntegerRepresentation() above, rather than this check.
  if (isScopedEnumerationType(LHSType) ||
      isScopedEnumerationType(RHSType)) {
    return InvalidOperands(Loc, LHS, RHS);
  }
  // Sanity-check shift operands
  DiagnoseBadShiftValues(*this, LHS, RHS, Loc, Opc, LHSType);

  // "The type of the result is that of the promoted left operand."
  return LHSType;
}

static bool IsWithinTemplateSpecialization(Decl *D) {
  if (DeclContext *DC = D->getDeclContext()) {
    if (isa<ClassTemplateSpecializationDecl>(DC))
      return true;
    if (FunctionDecl *FD = dyn_cast<FunctionDecl>(DC))
      return FD->isFunctionTemplateSpecialization();
  }
  return false;
}

/// If two different enums are compared, raise a warning.
static void checkEnumComparison(Sema &S, SourceLocation Loc, Expr *LHS,
                                Expr *RHS) {
  QualType LHSStrippedType = LHS->IgnoreParenImpCasts()->getType();
  QualType RHSStrippedType = RHS->IgnoreParenImpCasts()->getType();

  const EnumType *LHSEnumType = LHSStrippedType->getAs<EnumType>();
  if (!LHSEnumType)
    return;
  const EnumType *RHSEnumType = RHSStrippedType->getAs<EnumType>();
  if (!RHSEnumType)
    return;

  // Ignore anonymous enums.
  if (!LHSEnumType->getDecl()->getIdentifier())
    return;
  if (!RHSEnumType->getDecl()->getIdentifier())
    return;

  if (S.Context.hasSameUnqualifiedType(LHSStrippedType, RHSStrippedType))
    return;

  S.Diag(Loc, diag::warn_comparison_of_mixed_enum_types)
      << LHSStrippedType << RHSStrippedType
      << LHS->getSourceRange() << RHS->getSourceRange();
}

/// \brief Diagnose bad pointer comparisons.
static void diagnoseDistinctPointerComparison(Sema &S, SourceLocation Loc,
                                              ExprResult &LHS, ExprResult &RHS,
                                              bool IsError) {
  S.Diag(Loc, IsError ? diag::err_typecheck_comparison_of_distinct_pointers
                      : diag::ext_typecheck_comparison_of_distinct_pointers)
    << LHS.get()->getType() << RHS.get()->getType()
    << LHS.get()->getSourceRange() << RHS.get()->getSourceRange();
}
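// For illustration (not part of the original source): the two helpers above
// back diagnostics such as
//
//   enum Fruit { Apple };  enum Tool { Hammer };
//   bool b1 = (Apple == Hammer);   // warn_comparison_of_mixed_enum_types
//
//   int *ip;  float *fp;
//   bool b2 = (ip == fp);          // comparison of distinct pointer types
//                                  // (error in C++, extension warning in C)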
/// \brief Returns false if the pointers are converted to a composite type,
/// true otherwise.
static bool convertPointersToCompositeType(Sema &S, SourceLocation Loc,
                                           ExprResult &LHS, ExprResult &RHS) {
  // C++ [expr.rel]p2:
  //   [...] Pointer conversions (4.10) and qualification
  //   conversions (4.4) are performed on pointer operands (or on
  //   a pointer operand and a null pointer constant) to bring
  //   them to their composite pointer type. [...]
  //
  // C++ [expr.eq]p1 uses the same notion for (in)equality
  // comparisons of pointers.

  QualType LHSType = LHS.get()->getType();
  QualType RHSType = RHS.get()->getType();
  assert(LHSType->isPointerType() || RHSType->isPointerType() ||
         LHSType->isMemberPointerType() || RHSType->isMemberPointerType());

  QualType T = S.FindCompositePointerType(Loc, LHS, RHS);
  if (T.isNull()) {
    if ((LHSType->isPointerType() || LHSType->isMemberPointerType()) &&
        (RHSType->isPointerType() || RHSType->isMemberPointerType()))
      diagnoseDistinctPointerComparison(S, Loc, LHS, RHS, /*isError*/true);
    else
      S.InvalidOperands(Loc, LHS, RHS);
    return true;
  }

  LHS = S.ImpCastExprToType(LHS.get(), T, CK_BitCast);
  RHS = S.ImpCastExprToType(RHS.get(), T, CK_BitCast);
  return false;
}

static void diagnoseFunctionPointerToVoidComparison(Sema &S,
                                                    SourceLocation Loc,
                                                    ExprResult &LHS,
                                                    ExprResult &RHS,
                                                    bool IsError) {
  S.Diag(Loc, IsError ? diag::err_typecheck_comparison_of_fptr_to_void
                      : diag::ext_typecheck_comparison_of_fptr_to_void)
    << LHS.get()->getType() << RHS.get()->getType()
    << LHS.get()->getSourceRange() << RHS.get()->getSourceRange();
}

static bool isObjCObjectLiteral(ExprResult &E) {
  switch (E.get()->IgnoreParenImpCasts()->getStmtClass()) {
  case Stmt::ObjCArrayLiteralClass:
  case Stmt::ObjCDictionaryLiteralClass:
  case Stmt::ObjCStringLiteralClass:
  case Stmt::ObjCBoxedExprClass:
    return true;
  default:
    // Note that ObjCBoolLiteral is NOT an object literal!
    return false;
  }
}

static bool hasIsEqualMethod(Sema &S, const Expr *LHS, const Expr *RHS) {
  const ObjCObjectPointerType *Type =
    LHS->getType()->getAs<ObjCObjectPointerType>();

  // If this is not actually an Objective-C object, bail out.
  if (!Type)
    return false;

  // Get the LHS object's interface type.
  QualType InterfaceType = Type->getPointeeType();

  // If the RHS isn't an Objective-C object, bail out.
  if (!RHS->getType()->isObjCObjectPointerType())
    return false;

  // Try to find the -isEqual: method.
  Selector IsEqualSel = S.NSAPIObj->getIsEqualSelector();
  ObjCMethodDecl *Method = S.LookupMethodInObjectType(IsEqualSel,
                                                      InterfaceType,
                                                      /*instance=*/true);
  if (!Method) {
    if (Type->isObjCIdType()) {
      // For 'id', just check the global pool.
      Method = S.LookupInstanceMethodInGlobalPool(IsEqualSel, SourceRange(),
                                                  /*receiverId=*/true);
    } else {
      // Check protocols.
      Method = S.LookupMethodInQualifiedType(IsEqualSel, Type,
                                             /*instance=*/true);
    }
  }

  if (!Method)
    return false;

  QualType T = Method->parameters()[0]->getType();
  if (!T->isObjCObjectPointerType())
    return false;

  QualType R = Method->getReturnType();
  if (!R->isScalarType())
    return false;

  return true;
}
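// For illustration (not part of the original source): the two helpers above
// feed the Objective-C literal-comparison diagnostic below. For example,
//
//   if (name == @"done") { ... }   // warn_objc_string_literal_comparison
//
// gets a note with a fix-it rewriting the test to [name isEqual:@"done"],
// provided an -isEqual: method with a suitable signature can be found.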
Sema::ObjCLiteralKind Sema::CheckLiteralKind(Expr *FromE) {
  FromE = FromE->IgnoreParenImpCasts();
  switch (FromE->getStmtClass()) {
    default:
      break;
    case Stmt::ObjCStringLiteralClass:
      // "string literal"
      return LK_String;
    case Stmt::ObjCArrayLiteralClass:
      // "array literal"
      return LK_Array;
    case Stmt::ObjCDictionaryLiteralClass:
      // "dictionary literal"
      return LK_Dictionary;
    case Stmt::BlockExprClass:
      return LK_Block;
    case Stmt::ObjCBoxedExprClass: {
      Expr *Inner = cast<ObjCBoxedExpr>(FromE)->getSubExpr()->IgnoreParens();
      switch (Inner->getStmtClass()) {
        case Stmt::IntegerLiteralClass:
        case Stmt::FloatingLiteralClass:
        case Stmt::CharacterLiteralClass:
        case Stmt::ObjCBoolLiteralExprClass:
        case Stmt::CXXBoolLiteralExprClass:
          // "numeric literal"
          return LK_Numeric;
        case Stmt::ImplicitCastExprClass: {
          CastKind CK = cast<ImplicitCastExpr>(Inner)->getCastKind();
          // Boolean literals can be represented by implicit casts.
          if (CK == CK_IntegralToBoolean || CK == CK_IntegralCast)
            return LK_Numeric;
          break;
        }
        default:
          break;
      }
      return LK_Boxed;
    }
  }
  return LK_None;
}

static void diagnoseObjCLiteralComparison(Sema &S, SourceLocation Loc,
                                          ExprResult &LHS, ExprResult &RHS,
                                          BinaryOperator::Opcode Opc) {
  Expr *Literal;
  Expr *Other;
  if (isObjCObjectLiteral(LHS)) {
    Literal = LHS.get();
    Other = RHS.get();
  } else {
    Literal = RHS.get();
    Other = LHS.get();
  }

  // Don't warn on comparisons against nil.
  Other = Other->IgnoreParenCasts();
  if (Other->isNullPointerConstant(S.getASTContext(),
                                   Expr::NPC_ValueDependentIsNotNull))
    return;

  // This should be kept in sync with warn_objc_literal_comparison.
  // LK_String should always be after the other literals, since it has its own
  // warning flag.
  Sema::ObjCLiteralKind LiteralKind = S.CheckLiteralKind(Literal);
  assert(LiteralKind != Sema::LK_Block);
  if (LiteralKind == Sema::LK_None) {
    llvm_unreachable("Unknown Objective-C object literal kind");
  }

  if (LiteralKind == Sema::LK_String)
    S.Diag(Loc, diag::warn_objc_string_literal_comparison)
      << Literal->getSourceRange();
  else
    S.Diag(Loc, diag::warn_objc_literal_comparison)
      << LiteralKind << Literal->getSourceRange();

  if (BinaryOperator::isEqualityOp(Opc) &&
      hasIsEqualMethod(S, LHS.get(), RHS.get())) {
    SourceLocation Start = LHS.get()->getLocStart();
    SourceLocation End = S.getLocForEndOfToken(RHS.get()->getLocEnd());
    CharSourceRange OpRange =
      CharSourceRange::getCharRange(Loc, S.getLocForEndOfToken(Loc));

    S.Diag(Loc, diag::note_objc_literal_comparison_isequal)
      << FixItHint::CreateInsertion(Start, Opc == BO_EQ ? "[" : "![")
      << FixItHint::CreateReplacement(OpRange, " isEqual:")
      << FixItHint::CreateInsertion(End, "]");
  }
}

/// Warns on !x < y, !x & y where !(x < y), !(x & y) was probably intended.
static void diagnoseLogicalNotOnLHSofCheck(Sema &S, ExprResult &LHS,
                                           ExprResult &RHS,
                                           SourceLocation Loc,
                                           BinaryOperatorKind Opc) {
  // Check that left hand side is !something.
  UnaryOperator *UO = dyn_cast<UnaryOperator>(LHS.get()->IgnoreImpCasts());
  if (!UO || UO->getOpcode() != UO_LNot) return;

  // Only check if the right hand side is non-bool arithmetic type.
  if (RHS.get()->isKnownToHaveBooleanValue()) return;

  // Make sure that the something in !something is not bool.
  Expr *SubExpr = UO->getSubExpr()->IgnoreImpCasts();
  if (SubExpr->isKnownToHaveBooleanValue()) return;

  // Emit warning.
  bool IsBitwiseOp = Opc == BO_And || Opc == BO_Or || Opc == BO_Xor;
  S.Diag(UO->getOperatorLoc(), diag::warn_logical_not_on_lhs_of_check)
      << Loc << IsBitwiseOp;

  // First note suggest !(x < y)
  SourceLocation FirstOpen = SubExpr->getLocStart();
  SourceLocation FirstClose = RHS.get()->getLocEnd();
  FirstClose = S.getLocForEndOfToken(FirstClose);
  if (FirstClose.isInvalid())
    FirstOpen = SourceLocation();
  S.Diag(UO->getOperatorLoc(), diag::note_logical_not_fix)
      << IsBitwiseOp
      << FixItHint::CreateInsertion(FirstOpen, "(")
      << FixItHint::CreateInsertion(FirstClose, ")");

  // Second note suggests (!x) < y
  SourceLocation SecondOpen = LHS.get()->getLocStart();
  SourceLocation SecondClose = LHS.get()->getLocEnd();
  SecondClose = S.getLocForEndOfToken(SecondClose);
  if (SecondClose.isInvalid())
    SecondOpen = SourceLocation();
  S.Diag(UO->getOperatorLoc(), diag::note_logical_not_silence_with_parens)
      << FixItHint::CreateInsertion(SecondOpen, "(")
      << FixItHint::CreateInsertion(SecondClose, ")");
}
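// For illustration (not part of the original source): because '!' binds
// tighter than '<' or '&', the check above warns on code such as
//
//   if (!x < y) { ... }   // warn_logical_not_on_lhs_of_check
//
// and offers two fix-it notes: !(x < y) if the negation was meant to apply
// to the whole comparison, or (!x) < y to silence the warning as written.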
// Get the decl for a simple expression: a reference to a variable,
// an implicit C++ field reference, or an implicit ObjC ivar reference.
static ValueDecl *getCompareDecl(Expr *E) {
  if (DeclRefExpr *DR = dyn_cast<DeclRefExpr>(E))
    return DR->getDecl();
  if (ObjCIvarRefExpr *Ivar = dyn_cast<ObjCIvarRefExpr>(E)) {
    if (Ivar->isFreeIvar())
      return Ivar->getDecl();
  }
  if (MemberExpr *Mem = dyn_cast<MemberExpr>(E)) {
    if (Mem->isImplicitAccess())
      return Mem->getMemberDecl();
  }
  return nullptr;
}

// C99 6.5.8, C++ [expr.rel]
QualType Sema::CheckCompareOperands(ExprResult &LHS, ExprResult &RHS,
                                    SourceLocation Loc, BinaryOperatorKind Opc,
                                    bool IsRelational) {
  checkArithmeticNull(*this, LHS, RHS, Loc, /*isCompare=*/true);

  // Handle vector comparisons separately.
  if (LHS.get()->getType()->isVectorType() ||
      RHS.get()->getType()->isVectorType())
    return CheckVectorCompareOperands(LHS, RHS, Loc, IsRelational);

  QualType LHSType = LHS.get()->getType();
  QualType RHSType = RHS.get()->getType();

  Expr *LHSStripped = LHS.get()->IgnoreParenImpCasts();
  Expr *RHSStripped = RHS.get()->IgnoreParenImpCasts();

  checkEnumComparison(*this, Loc, LHS.get(), RHS.get());
  diagnoseLogicalNotOnLHSofCheck(*this, LHS, RHS, Loc, Opc);

  if (!LHSType->hasFloatingRepresentation() &&
      !(LHSType->isBlockPointerType() && IsRelational) &&
      !LHS.get()->getLocStart().isMacroID() &&
      !RHS.get()->getLocStart().isMacroID() &&
      ActiveTemplateInstantiations.empty()) {
    // For non-floating point types, check for self-comparisons of the form
    // x == x, x != x, x < x, etc.  These always evaluate to a constant, and
    // often indicate logic errors in the program.
    //
    // NOTE: Don't warn about comparison expressions resulting from macro
    // expansion. Also don't warn about comparisons which are only self
    // comparisons within a template specialization. The warnings should catch
    // obvious cases in the definition of the template anyways. The idea is to
    // warn when the typed comparison operator will always evaluate to the same
    // result.
    ValueDecl *DL = getCompareDecl(LHSStripped);
    ValueDecl *DR = getCompareDecl(RHSStripped);
    if (DL && DR && DL == DR && !IsWithinTemplateSpecialization(DL)) {
      DiagRuntimeBehavior(Loc, nullptr, PDiag(diag::warn_comparison_always)
                          << 0 // self-
                          << (Opc == BO_EQ
                              || Opc == BO_LE
                              || Opc == BO_GE));
    } else if (DL && DR && LHSType->isArrayType() && RHSType->isArrayType() &&
               !DL->getType()->isReferenceType() &&
               !DR->getType()->isReferenceType()) {
      // What is it always going to evaluate to?
      char always_evals_to;
      switch (Opc) {
      case BO_EQ: // e.g. array1 == array2
        always_evals_to = 0; // false
        break;
      case BO_NE: // e.g. array1 != array2
        always_evals_to = 1; // true
        break;
      default:
        // best we can say is 'a constant'
        always_evals_to = 2; // e.g. array1 <= array2
        break;
      }
      DiagRuntimeBehavior(Loc, nullptr, PDiag(diag::warn_comparison_always)
                          << 1 // array
                          << always_evals_to);
    }

    if (isa<CastExpr>(LHSStripped))
      LHSStripped = LHSStripped->IgnoreParenCasts();
    if (isa<CastExpr>(RHSStripped))
      RHSStripped = RHSStripped->IgnoreParenCasts();

    // Warn about comparisons against a string constant (unless the other
    // operand is null); the user probably wants strcmp.
    Expr *literalString = nullptr;
    Expr *literalStringStripped = nullptr;
    if ((isa<StringLiteral>(LHSStripped) ||
         isa<ObjCEncodeExpr>(LHSStripped)) &&
        !RHSStripped->isNullPointerConstant(Context,
                                            Expr::NPC_ValueDependentIsNull)) {
      literalString = LHS.get();
      literalStringStripped = LHSStripped;
    } else if ((isa<StringLiteral>(RHSStripped) ||
                isa<ObjCEncodeExpr>(RHSStripped)) &&
               !LHSStripped->isNullPointerConstant(Context,
                                            Expr::NPC_ValueDependentIsNull)) {
      literalString = RHS.get();
      literalStringStripped = RHSStripped;
    }

    if (literalString) {
      DiagRuntimeBehavior(Loc, nullptr,
        PDiag(diag::warn_stringcompare)
          << isa<ObjCEncodeExpr>(literalStringStripped)
          << literalString->getSourceRange());
    }
  }

  // C99 6.5.8p3 / C99 6.5.9p4
  UsualArithmeticConversions(LHS, RHS);
  if (LHS.isInvalid() || RHS.isInvalid())
    return QualType();

  LHSType = LHS.get()->getType();
  RHSType = RHS.get()->getType();

  // The result of comparisons is 'bool' in C++, 'int' in C.
  QualType ResultTy = Context.getLogicalOperationType();

  if (IsRelational) {
    if (LHSType->isRealType() && RHSType->isRealType())
      return ResultTy;
  } else {
    // Check for comparisons of floating point operands using != and ==.
    if (LHSType->hasFloatingRepresentation())
      CheckFloatComparison(Loc, LHS.get(), RHS.get());

    if (LHSType->isArithmeticType() && RHSType->isArithmeticType())
      return ResultTy;
  }

  const Expr::NullPointerConstantKind LHSNullKind =
      LHS.get()->isNullPointerConstant(Context,
                                       Expr::NPC_ValueDependentIsNull);
  const Expr::NullPointerConstantKind RHSNullKind =
      RHS.get()->isNullPointerConstant(Context,
                                       Expr::NPC_ValueDependentIsNull);
  bool LHSIsNull = LHSNullKind != Expr::NPCK_NotNull;
  bool RHSIsNull = RHSNullKind != Expr::NPCK_NotNull;

  if (!IsRelational && LHSIsNull != RHSIsNull) {
    bool IsEquality = Opc == BO_EQ;
    if (RHSIsNull)
      DiagnoseAlwaysNonNullPointer(LHS.get(), RHSNullKind, IsEquality,
                                   RHS.get()->getSourceRange());
    else
      DiagnoseAlwaysNonNullPointer(RHS.get(), LHSNullKind, IsEquality,
                                   LHS.get()->getSourceRange());
  }

  if ((LHSType->isIntegerType() && !LHSIsNull) ||
      (RHSType->isIntegerType() && !RHSIsNull)) {
    // Skip normal pointer conversion checks in this case; we have better
    // diagnostics for this below.
  } else if (getLangOpts().CPlusPlus) {
    // Equality comparison of a function pointer to a void pointer is invalid,
    // but we allow it as an extension.
    // FIXME: If we really want to allow this, should it be part of composite
    // pointer type computation so it works in conditionals too?
    if (!IsRelational &&
        ((LHSType->isFunctionPointerType() && RHSType->isVoidPointerType()) ||
         (RHSType->isFunctionPointerType() && LHSType->isVoidPointerType()))) {
      // This is a gcc extension compatibility comparison.
      // In a SFINAE context, we treat this as a hard error to maintain
      // conformance with the C++ standard.
      diagnoseFunctionPointerToVoidComparison(
          *this, Loc, LHS, RHS, /*isError*/ (bool)isSFINAEContext());

      if (isSFINAEContext())
        return QualType();

      RHS = ImpCastExprToType(RHS.get(), LHSType, CK_BitCast);
      return ResultTy;
    }

    // C++ [expr.eq]p2:
    //   If at least one operand is a pointer [...] bring them to their
    //   composite pointer type.
    // C++ [expr.rel]p2:
    //   If both operands are pointers, [...] bring them to their composite
    //   pointer type.
    if ((int)LHSType->isPointerType() + (int)RHSType->isPointerType() >=
            (IsRelational ? 2 : 1)) {
      if (convertPointersToCompositeType(*this, Loc, LHS, RHS))
        return QualType();
      else
        return ResultTy;
    }
  } else if (LHSType->isPointerType() &&
             RHSType->isPointerType()) { // C99 6.5.8p2
    // All of the following pointer-related warnings are GCC extensions, except
    // when handling null pointer constants.
    QualType LCanPointeeTy =
      LHSType->castAs<PointerType>()->getPointeeType().getCanonicalType();
    QualType RCanPointeeTy =
      RHSType->castAs<PointerType>()->getPointeeType().getCanonicalType();

    // C99 6.5.9p2 and C99 6.5.8p2
    if (Context.typesAreCompatible(LCanPointeeTy.getUnqualifiedType(),
                                   RCanPointeeTy.getUnqualifiedType())) {
      // Valid unless a relational comparison of function pointers
      if (IsRelational && LCanPointeeTy->isFunctionType()) {
        Diag(Loc, diag::ext_typecheck_ordered_comparison_of_function_pointers)
          << LHSType << RHSType << LHS.get()->getSourceRange()
          << RHS.get()->getSourceRange();
      }
    } else if (!IsRelational &&
               (LCanPointeeTy->isVoidType() || RCanPointeeTy->isVoidType())) {
      // Valid unless comparison between non-null pointer and function pointer
      if ((LCanPointeeTy->isFunctionType() || RCanPointeeTy->isFunctionType())
          && !LHSIsNull && !RHSIsNull)
        diagnoseFunctionPointerToVoidComparison(*this, Loc, LHS, RHS,
                                                /*isError*/false);
    } else {
      // Invalid
      diagnoseDistinctPointerComparison(*this, Loc, LHS, RHS,
                                        /*isError*/false);
    }
    if (LCanPointeeTy != RCanPointeeTy) {
      // Treat NULL constant as a special case in OpenCL.
      if (getLangOpts().OpenCL && !LHSIsNull && !RHSIsNull) {
        const PointerType *LHSPtr = LHSType->getAs<PointerType>();
        if (!LHSPtr->isAddressSpaceOverlapping(
                *RHSType->getAs<PointerType>())) {
          Diag(Loc,
               diag::err_typecheck_op_on_nonoverlapping_address_space_pointers)
              << LHSType << RHSType << 0 /* comparison */
              << LHS.get()->getSourceRange() << RHS.get()->getSourceRange();
        }
      }
      unsigned AddrSpaceL = LCanPointeeTy.getAddressSpace();
      unsigned AddrSpaceR = RCanPointeeTy.getAddressSpace();
      CastKind Kind = AddrSpaceL != AddrSpaceR ? CK_AddressSpaceConversion
                                               : CK_BitCast;
      if (LHSIsNull && !RHSIsNull)
        LHS = ImpCastExprToType(LHS.get(), RHSType, Kind);
      else
        RHS = ImpCastExprToType(RHS.get(), LHSType, Kind);
    }
    return ResultTy;
  }

  if (getLangOpts().CPlusPlus) {
    // C++ [expr.eq]p4:
    //   Two operands of type std::nullptr_t or one operand of type
    //   std::nullptr_t and the other a null pointer constant compare equal.
    if (!IsRelational && LHSIsNull && RHSIsNull) {
      if (LHSType->isNullPtrType()) {
        RHS = ImpCastExprToType(RHS.get(), LHSType, CK_NullToPointer);
        return ResultTy;
      }
      if (RHSType->isNullPtrType()) {
        LHS = ImpCastExprToType(LHS.get(), RHSType, CK_NullToPointer);
        return ResultTy;
      }
    }

    // Comparison of Objective-C pointers and block pointers against nullptr_t.
    // These aren't covered by the composite pointer type rules.
    if (!IsRelational && RHSType->isNullPtrType() &&
        (LHSType->isObjCObjectPointerType() ||
         LHSType->isBlockPointerType())) {
      RHS = ImpCastExprToType(RHS.get(), LHSType, CK_NullToPointer);
      return ResultTy;
    }
    if (!IsRelational && LHSType->isNullPtrType() &&
        (RHSType->isObjCObjectPointerType() ||
         RHSType->isBlockPointerType())) {
      LHS = ImpCastExprToType(LHS.get(), RHSType, CK_NullToPointer);
      return ResultTy;
    }

    if (IsRelational &&
        ((LHSType->isNullPtrType() && RHSType->isPointerType()) ||
         (RHSType->isNullPtrType() && LHSType->isPointerType()))) {
      // HACK: Relational comparison of nullptr_t against a pointer type is
      // invalid per DR583, but we allow it within std::less<> and friends,
      // since otherwise common uses of it break.
      // FIXME: Consider removing this hack once LWG fixes std::less<> and
      // friends to have std::nullptr_t overload candidates.
      DeclContext *DC = CurContext;
      if (isa<FunctionDecl>(DC))
        DC = DC->getParent();
      if (auto *CTSD = dyn_cast<ClassTemplateSpecializationDecl>(DC)) {
        if (CTSD->isInStdNamespace() &&
            llvm::StringSwitch<bool>(CTSD->getName())
                .Cases("less", "less_equal", "greater", "greater_equal", true)
                .Default(false)) {
          if (RHSType->isNullPtrType())
            RHS = ImpCastExprToType(RHS.get(), LHSType, CK_NullToPointer);
          else
            LHS = ImpCastExprToType(LHS.get(), RHSType, CK_NullToPointer);
          return ResultTy;
        }
      }
    }

    // C++ [expr.eq]p2:
    //   If at least one operand is a pointer to member, [...] bring them to
    //   their composite pointer type.
    if (!IsRelational &&
        (LHSType->isMemberPointerType() || RHSType->isMemberPointerType())) {
      if (convertPointersToCompositeType(*this, Loc, LHS, RHS))
        return QualType();
      else
        return ResultTy;
    }

    // Handle scoped enumeration types specifically, since they don't promote
    // to integers.
    if (LHS.get()->getType()->isEnumeralType() &&
        Context.hasSameUnqualifiedType(LHS.get()->getType(),
                                       RHS.get()->getType()))
      return ResultTy;
  }

  // Handle block pointer types.
  if (!IsRelational && LHSType->isBlockPointerType() &&
      RHSType->isBlockPointerType()) {
    QualType lpointee = LHSType->castAs<BlockPointerType>()->getPointeeType();
    QualType rpointee = RHSType->castAs<BlockPointerType>()->getPointeeType();

    if (!LHSIsNull && !RHSIsNull &&
        !Context.typesAreCompatible(lpointee, rpointee)) {
      Diag(Loc, diag::err_typecheck_comparison_of_distinct_blocks)
        << LHSType << RHSType << LHS.get()->getSourceRange()
        << RHS.get()->getSourceRange();
    }
    RHS = ImpCastExprToType(RHS.get(), LHSType, CK_BitCast);
    return ResultTy;
  }

  // Allow block pointers to be compared with null pointer constants.
  if (!IsRelational &&
      ((LHSType->isBlockPointerType() && RHSType->isPointerType()) ||
       (LHSType->isPointerType() && RHSType->isBlockPointerType()))) {
    if (!LHSIsNull && !RHSIsNull) {
      if (!((RHSType->isPointerType() &&
             RHSType->castAs<PointerType>()
                 ->getPointeeType()->isVoidType()) ||
            (LHSType->isPointerType() &&
             LHSType->castAs<PointerType>()
                 ->getPointeeType()->isVoidType())))
        Diag(Loc, diag::err_typecheck_comparison_of_distinct_blocks)
          << LHSType << RHSType << LHS.get()->getSourceRange()
          << RHS.get()->getSourceRange();
    }
    if (LHSIsNull && !RHSIsNull)
      LHS = ImpCastExprToType(LHS.get(), RHSType,
                              RHSType->isPointerType() ? CK_BitCast
                                : CK_AnyPointerToBlockPointerCast);
    else
      RHS = ImpCastExprToType(RHS.get(), LHSType,
                              LHSType->isPointerType() ? CK_BitCast
                                : CK_AnyPointerToBlockPointerCast);
    return ResultTy;
  }

  if (LHSType->isObjCObjectPointerType() ||
      RHSType->isObjCObjectPointerType()) {
    const PointerType *LPT = LHSType->getAs<PointerType>();
    const PointerType *RPT = RHSType->getAs<PointerType>();
    if (LPT || RPT) {
      bool LPtrToVoid = LPT ? LPT->getPointeeType()->isVoidType() : false;
      bool RPtrToVoid = RPT ? RPT->getPointeeType()->isVoidType() : false;

      if (!LPtrToVoid && !RPtrToVoid &&
          !Context.typesAreCompatible(LHSType, RHSType)) {
        diagnoseDistinctPointerComparison(*this, Loc, LHS, RHS,
                                          /*isError*/false);
      }
      if (LHSIsNull && !RHSIsNull) {
        Expr *E = LHS.get();
        if (getLangOpts().ObjCAutoRefCount)
          CheckObjCARCConversion(SourceRange(), RHSType, E,
                                 CCK_ImplicitConversion);
        LHS = ImpCastExprToType(E, RHSType,
                                RPT ? CK_BitCast
                                    : CK_CPointerToObjCPointerCast);
      } else {
        Expr *E = RHS.get();
        if (getLangOpts().ObjCAutoRefCount)
          CheckObjCARCConversion(SourceRange(), LHSType, E,
                                 CCK_ImplicitConversion, /*Diagnose=*/true,
                                 /*DiagnoseCFAudited=*/false, Opc);
        RHS = ImpCastExprToType(E, LHSType,
                                LPT ? CK_BitCast
                                    : CK_CPointerToObjCPointerCast);
      }
      return ResultTy;
    }

    if (LHSType->isObjCObjectPointerType() &&
        RHSType->isObjCObjectPointerType()) {
      if (!Context.areComparableObjCPointerTypes(LHSType, RHSType))
        diagnoseDistinctPointerComparison(*this, Loc, LHS, RHS,
                                          /*isError*/false);
      if (isObjCObjectLiteral(LHS) || isObjCObjectLiteral(RHS))
        diagnoseObjCLiteralComparison(*this, Loc, LHS, RHS, Opc);

      if (LHSIsNull && !RHSIsNull)
        LHS = ImpCastExprToType(LHS.get(), RHSType, CK_BitCast);
      else
        RHS = ImpCastExprToType(RHS.get(), LHSType, CK_BitCast);
      return ResultTy;
    }
  }

  if ((LHSType->isAnyPointerType() && RHSType->isIntegerType()) ||
      (LHSType->isIntegerType() && RHSType->isAnyPointerType())) {
    unsigned DiagID = 0;
    bool isError = false;
    if (LangOpts.DebuggerSupport) {
      // Under a debugger, allow the comparison of pointers to integers,
      // since users tend to want to compare addresses.
    } else if ((LHSIsNull && LHSType->isIntegerType()) ||
               (RHSIsNull && RHSType->isIntegerType())) {
      if (IsRelational) {
        isError = getLangOpts().CPlusPlus;
        DiagID =
          isError ? diag::err_typecheck_ordered_comparison_of_pointer_and_zero
                  : diag::ext_typecheck_ordered_comparison_of_pointer_and_zero;
      }
    } else if (getLangOpts().CPlusPlus) {
      DiagID = diag::err_typecheck_comparison_of_pointer_integer;
      isError = true;
    } else if (IsRelational)
      DiagID = diag::ext_typecheck_ordered_comparison_of_pointer_integer;
    else
      DiagID = diag::ext_typecheck_comparison_of_pointer_integer;

    if (DiagID) {
      Diag(Loc, DiagID)
        << LHSType << RHSType << LHS.get()->getSourceRange()
        << RHS.get()->getSourceRange();
      if (isError)
        return QualType();
    }

    if (LHSType->isIntegerType())
      LHS = ImpCastExprToType(LHS.get(), RHSType,
                        LHSIsNull ? CK_NullToPointer : CK_IntegralToPointer);
    else
      RHS = ImpCastExprToType(RHS.get(), LHSType,
                        RHSIsNull ? CK_NullToPointer : CK_IntegralToPointer);
    return ResultTy;
  }

  // Handle block pointers.
  if (!IsRelational && RHSIsNull
      && LHSType->isBlockPointerType() && RHSType->isIntegerType()) {
    RHS = ImpCastExprToType(RHS.get(), LHSType, CK_NullToPointer);
    return ResultTy;
  }
  if (!IsRelational && LHSIsNull
      && LHSType->isIntegerType() && RHSType->isBlockPointerType()) {
    LHS = ImpCastExprToType(LHS.get(), RHSType, CK_NullToPointer);
    return ResultTy;
  }

  if (getLangOpts().OpenCLVersion >= 200) {
    if (LHSIsNull && RHSType->isQueueT()) {
      LHS = ImpCastExprToType(LHS.get(), RHSType, CK_NullToPointer);
      return ResultTy;
    }

    if (LHSType->isQueueT() && RHSIsNull) {
      RHS = ImpCastExprToType(RHS.get(), LHSType, CK_NullToPointer);
      return ResultTy;
    }
  }

  return InvalidOperands(Loc, LHS, RHS);
}
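// For illustration (not part of the original source): the pointer/integer
// arm above is what makes
//
//   int *p = 0;
//   if (p == 4) { ... }   // err_typecheck_comparison_of_pointer_integer in
//                         // C++, an extension warning in C
//   if (p > 0) { ... }    // ordered comparison of pointer and zero: error
//                         // in C++, extension warning in C
//
// a hard error under C++ but only a diagnostic under C.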
// Return a signed type that is of identical size and number of elements.
// For floating point vectors, return an integer type of identical size
// and number of elements.
QualType Sema::GetSignedVectorType(QualType V) {
  const VectorType *VTy = V->getAs<VectorType>();
  unsigned TypeSize = Context.getTypeSize(VTy->getElementType());
  if (TypeSize == Context.getTypeSize(Context.CharTy))
    return Context.getExtVectorType(Context.CharTy, VTy->getNumElements());
  else if (TypeSize == Context.getTypeSize(Context.ShortTy))
    return Context.getExtVectorType(Context.ShortTy, VTy->getNumElements());
  else if (TypeSize == Context.getTypeSize(Context.IntTy))
    return Context.getExtVectorType(Context.IntTy, VTy->getNumElements());
  else if (TypeSize == Context.getTypeSize(Context.LongTy))
    return Context.getExtVectorType(Context.LongTy, VTy->getNumElements());
  assert(TypeSize == Context.getTypeSize(Context.LongLongTy) &&
         "Unhandled vector element size in vector compare");
  return Context.getExtVectorType(Context.LongLongTy, VTy->getNumElements());
}

/// CheckVectorCompareOperands - vector comparisons are a clang extension that
/// operates on extended vector types.  Instead of producing an IntTy result,
/// like a scalar comparison, a vector comparison produces a vector of integer
/// types.
QualType Sema::CheckVectorCompareOperands(ExprResult &LHS, ExprResult &RHS,
                                          SourceLocation Loc,
                                          bool IsRelational) {
  // Check to make sure we're operating on vectors of the same type and width,
  // allowing one side to be a scalar of the element type.
  QualType vType = CheckVectorOperands(LHS, RHS, Loc, /*isCompAssign*/false,
                              /*AllowBothBool*/true,
                              /*AllowBoolConversions*/getLangOpts().ZVector);
  if (vType.isNull())
    return vType;

  QualType LHSType = LHS.get()->getType();

  // If AltiVec, the comparison results in a numeric type, i.e.
  // bool for C++, int for C.
  if (getLangOpts().AltiVec &&
      vType->getAs<VectorType>()->getVectorKind() ==
          VectorType::AltiVecVector)
    return Context.getLogicalOperationType();

  // For non-floating point types, check for self-comparisons of the form
  // x == x, x != x, x < x, etc.  These always evaluate to a constant, and
  // often indicate logic errors in the program.
  if (!LHSType->hasFloatingRepresentation() &&
      ActiveTemplateInstantiations.empty()) {
    if (DeclRefExpr *DRL
          = dyn_cast<DeclRefExpr>(LHS.get()->IgnoreParenImpCasts()))
      if (DeclRefExpr *DRR
            = dyn_cast<DeclRefExpr>(RHS.get()->IgnoreParenImpCasts()))
        if (DRL->getDecl() == DRR->getDecl())
          DiagRuntimeBehavior(Loc, nullptr,
                              PDiag(diag::warn_comparison_always)
                                << 0 // self-
                                << 2 // "a constant"
                              );
  }

  // Check for comparisons of floating point operands using != and ==.
  if (!IsRelational && LHSType->hasFloatingRepresentation()) {
    assert(RHS.get()->getType()->hasFloatingRepresentation());
    CheckFloatComparison(Loc, LHS.get(), RHS.get());
  }

  // Return a signed type for the vector.
  return GetSignedVectorType(vType);
}
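// For illustration (not part of the original source): with ext_vector_type,
// a comparison of two float vectors yields a signed integer vector of the
// same width and length, as computed by GetSignedVectorType above:
//
//   typedef __attribute__((ext_vector_type(4))) float float4;
//   typedef __attribute__((ext_vector_type(4))) int   int4;
//   int4 mask(float4 a, float4 b) { return a < b; }  // -1/0 per element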
QualType Sema::CheckVectorLogicalOperands(ExprResult &LHS, ExprResult &RHS,
                                          SourceLocation Loc) {
  // Ensure that either both operands are of the same vector type, or
  // one operand is of a vector type and the other is of its element type.
  QualType vType = CheckVectorOperands(LHS, RHS, Loc, false,
                                       /*AllowBothBool*/true,
                                       /*AllowBoolConversions*/false);
  if (vType.isNull())
    return InvalidOperands(Loc, LHS, RHS);
  if (getLangOpts().OpenCL && getLangOpts().OpenCLVersion < 120 &&
      vType->hasFloatingRepresentation())
    return InvalidOperands(Loc, LHS, RHS);

  return GetSignedVectorType(LHS.get()->getType());
}

inline QualType Sema::CheckBitwiseOperands(ExprResult &LHS, ExprResult &RHS,
                                           SourceLocation Loc,
                                           BinaryOperatorKind Opc) {
  checkArithmeticNull(*this, LHS, RHS, Loc, /*isCompare=*/false);

  bool IsCompAssign =
      Opc == BO_AndAssign || Opc == BO_OrAssign || Opc == BO_XorAssign;

  if (LHS.get()->getType()->isVectorType() ||
      RHS.get()->getType()->isVectorType()) {
    if (LHS.get()->getType()->hasIntegerRepresentation() &&
        RHS.get()->getType()->hasIntegerRepresentation())
      return CheckVectorOperands(LHS, RHS, Loc, IsCompAssign,
                        /*AllowBothBool*/true,
                        /*AllowBoolConversions*/getLangOpts().ZVector);
    return InvalidOperands(Loc, LHS, RHS);
  }

  if (Opc == BO_And)
    diagnoseLogicalNotOnLHSofCheck(*this, LHS, RHS, Loc, Opc);

  ExprResult LHSResult = LHS, RHSResult = RHS;
  QualType compType = UsualArithmeticConversions(LHSResult, RHSResult,
                                                 IsCompAssign);
  if (LHSResult.isInvalid() || RHSResult.isInvalid())
    return QualType();
  LHS = LHSResult.get();
  RHS = RHSResult.get();

  if (!compType.isNull() && compType->isIntegralOrUnscopedEnumerationType())
    return compType;
  return InvalidOperands(Loc, LHS, RHS);
}

// C99 6.5.[13,14]
inline QualType Sema::CheckLogicalOperands(ExprResult &LHS, ExprResult &RHS,
                                           SourceLocation Loc,
                                           BinaryOperatorKind Opc) {
  // Check vector operands differently.
  if (LHS.get()->getType()->isVectorType() ||
      RHS.get()->getType()->isVectorType())
    return CheckVectorLogicalOperands(LHS, RHS, Loc);

  // Diagnose cases where the user writes a logical and/or but probably meant
  // a bitwise one.  We do this when the LHS is a non-bool integer and the RHS
  // is a constant.
  if (LHS.get()->getType()->isIntegerType() &&
      !LHS.get()->getType()->isBooleanType() &&
      RHS.get()->getType()->isIntegerType() &&
      !RHS.get()->isValueDependent() &&
      // Don't warn in macros or template instantiations.
      !Loc.isMacroID() && ActiveTemplateInstantiations.empty()) {
    // If the RHS can be constant folded, and if it constant folds to something
    // that isn't 0 or 1 (which indicate a potential logical operation that
    // happened to fold to true/false) then warn.
    // Parens on the RHS are ignored.
    llvm::APSInt Result;
    if (RHS.get()->EvaluateAsInt(Result, Context))
      if ((getLangOpts().Bool && !RHS.get()->getType()->isBooleanType() &&
           !RHS.get()->getExprLoc().isMacroID()) ||
          (Result != 0 && Result != 1)) {
        Diag(Loc, diag::warn_logical_instead_of_bitwise)
          << RHS.get()->getSourceRange()
          << (Opc == BO_LAnd ? "&&" : "||");
        // Suggest replacing the logical operator with the bitwise version
        Diag(Loc, diag::note_logical_instead_of_bitwise_change_operator)
            << (Opc == BO_LAnd ? "&" : "|")
            << FixItHint::CreateReplacement(SourceRange(
                  Loc, getLocForEndOfToken(Loc)),
                                            Opc == BO_LAnd ? "&" : "|");
        if (Opc == BO_LAnd)
          // Suggest replacing "Foo() && kNonZero" with "Foo()"
          Diag(Loc, diag::note_logical_instead_of_bitwise_remove_constant)
              << FixItHint::CreateRemoval(
                    SourceRange(getLocForEndOfToken(LHS.get()->getLocEnd()),
                                RHS.get()->getLocEnd()));
      }
  }
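  // For illustration (not part of the original source): the block above
  // warns on code such as
  //
  //   if (flags && 0x4) { ... }   // warn_logical_instead_of_bitwise
  //
  // and attaches fix-it notes suggesting 'flags & 0x4', or removing the
  // constant entirely when the operator is '&&'.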
  if (!Context.getLangOpts().CPlusPlus) {
    // OpenCL v1.1 s6.3.g: The logical operators and (&&), or (||) do
    // not operate on the built-in scalar and vector float types.
    if (Context.getLangOpts().OpenCL &&
        Context.getLangOpts().OpenCLVersion < 120) {
      if (LHS.get()->getType()->isFloatingType() ||
          RHS.get()->getType()->isFloatingType())
        return InvalidOperands(Loc, LHS, RHS);
    }

    LHS = UsualUnaryConversions(LHS.get());
    if (LHS.isInvalid())
      return QualType();

    RHS = UsualUnaryConversions(RHS.get());
    if (RHS.isInvalid())
      return QualType();

    if (!LHS.get()->getType()->isScalarType() ||
        !RHS.get()->getType()->isScalarType())
      return InvalidOperands(Loc, LHS, RHS);

    return Context.IntTy;
  }

  // The following is safe because we only use this method for
  // non-overloadable operands.

  // C++ [expr.log.and]p1
  // C++ [expr.log.or]p1
  // The operands are both contextually converted to type bool.
  ExprResult LHSRes = PerformContextuallyConvertToBool(LHS.get());
  if (LHSRes.isInvalid())
    return InvalidOperands(Loc, LHS, RHS);
  LHS = LHSRes;

  ExprResult RHSRes = PerformContextuallyConvertToBool(RHS.get());
  if (RHSRes.isInvalid())
    return InvalidOperands(Loc, LHS, RHS);
  RHS = RHSRes;

  // C++ [expr.log.and]p2
  // C++ [expr.log.or]p2
  // The result is a bool.
  return Context.BoolTy;
}

static bool IsReadonlyMessage(Expr *E, Sema &S) {
  const MemberExpr *ME = dyn_cast<MemberExpr>(E);
  if (!ME) return false;
  if (!isa<FieldDecl>(ME->getMemberDecl())) return false;
  ObjCMessageExpr *Base = dyn_cast<ObjCMessageExpr>(
      ME->getBase()->IgnoreImplicit()->IgnoreParenImpCasts());
  if (!Base) return false;
  return Base->getMethodDecl() != nullptr;
}

/// Is the given expression (which must be 'const') a reference to a
/// variable which was originally non-const, but which has become
/// 'const' due to being captured within a block?
enum NonConstCaptureKind { NCCK_None, NCCK_Block, NCCK_Lambda };
static NonConstCaptureKind isReferenceToNonConstCapture(Sema &S, Expr *E) {
  assert(E->isLValue() && E->getType().isConstQualified());
  E = E->IgnoreParens();

  // Must be a reference to a declaration from an enclosing scope.
  DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(E);
  if (!DRE) return NCCK_None;
  if (!DRE->refersToEnclosingVariableOrCapture()) return NCCK_None;

  // The declaration must be a variable which is not declared 'const'.
  VarDecl *var = dyn_cast<VarDecl>(DRE->getDecl());
  if (!var) return NCCK_None;
  if (var->getType().isConstQualified()) return NCCK_None;
  assert(var->hasLocalStorage() && "capture added 'const' to non-local?");

  // Decide whether the first capture was for a block or a lambda.
  DeclContext *DC = S.CurContext, *Prev = nullptr;
  while (DC) {
    // For init-capture, it is possible that the variable belongs to the
    // template pattern of the current context.
    if (auto *FD = dyn_cast<FunctionDecl>(DC))
      if (var->isInitCapture() &&
          FD->getTemplateInstantiationPattern() == var->getDeclContext())
        break;
    if (DC == var->getDeclContext())
      break;
    Prev = DC;
    DC = DC->getParent();
  }
  // Unless we have an init-capture, we've gone one step too far.
  if (!var->isInitCapture())
    DC = Prev;
  return (isa<BlockDecl>(DC) ? NCCK_Block : NCCK_Lambda);
}

static bool IsTypeModifiable(QualType Ty, bool IsDereference) {
  Ty = Ty.getNonReferenceType();
  if (IsDereference && Ty->isPointerType())
    Ty = Ty->getPointeeType();
  return !Ty.isConstQualified();
}
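// For illustration (not part of the original source): the capture check
// above distinguishes the two by-copy capture cases so the const-assignment
// error can name the right construct (assuming blocks are enabled for the
// first form):
//
//   int x = 0;
//   ^{ x = 1; }();         // err_block_decl_ref_not_modifiable_lvalue
//   [x]() { x = 1; }();    // err_lambda_decl_ref_not_modifiable_lvalue
//                          // (fixed by capturing by reference or 'mutable')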
/// Emit the "read-only variable not assignable" error and print notes to give
/// more information about why the variable is not assignable, such as pointing
/// to the declaration of a const variable, showing that a method is const, or
/// that the function is returning a const reference.
static void DiagnoseConstAssignment(Sema &S, const Expr *E,
                                    SourceLocation Loc) {
  // Update err_typecheck_assign_const and note_typecheck_assign_const
  // when this enum is changed.
  enum {
    ConstFunction,
    ConstVariable,
    ConstMember,
    ConstMethod,
    ConstUnknown,  // Keep as last element
  };

  SourceRange ExprRange = E->getSourceRange();

  // Only emit one error on the first const found.  All other consts will emit
  // a note to the error.
  bool DiagnosticEmitted = false;

  // Track if the current expression is the result of a dereference, and if the
  // next checked expression is the result of a dereference.
  bool IsDereference = false;
  bool NextIsDereference = false;

  // Loop to process MemberExpr chains.
  while (true) {
    IsDereference = NextIsDereference;

    E = E->IgnoreImplicit()->IgnoreParenImpCasts();
    if (const MemberExpr *ME = dyn_cast<MemberExpr>(E)) {
      NextIsDereference = ME->isArrow();
      const ValueDecl *VD = ME->getMemberDecl();
      if (const FieldDecl *Field = dyn_cast<FieldDecl>(VD)) {
        // Mutable fields can be modified even if the class is const.
        if (Field->isMutable()) {
          assert(DiagnosticEmitted && "Expected diagnostic not emitted.");
          break;
        }

        if (!IsTypeModifiable(Field->getType(), IsDereference)) {
          if (!DiagnosticEmitted) {
            S.Diag(Loc, diag::err_typecheck_assign_const)
                << ExprRange << ConstMember << false /*static*/ << Field
                << Field->getType();
            DiagnosticEmitted = true;
          }
          S.Diag(VD->getLocation(), diag::note_typecheck_assign_const)
              << ConstMember << false /*static*/ << Field << Field->getType()
              << Field->getSourceRange();
        }
        E = ME->getBase();
        continue;
      } else if (const VarDecl *VDecl = dyn_cast<VarDecl>(VD)) {
        if (VDecl->getType().isConstQualified()) {
          if (!DiagnosticEmitted) {
            S.Diag(Loc, diag::err_typecheck_assign_const)
                << ExprRange << ConstMember << true /*static*/ << VDecl
                << VDecl->getType();
            DiagnosticEmitted = true;
          }
          S.Diag(VD->getLocation(), diag::note_typecheck_assign_const)
              << ConstMember << true /*static*/ << VDecl << VDecl->getType()
              << VDecl->getSourceRange();
        }
        // Static fields do not inherit constness from parents.
        break;
      }
      break;
    } // End MemberExpr
    break;
  }

  if (const CallExpr *CE = dyn_cast<CallExpr>(E)) {
    // Function calls
    const FunctionDecl *FD = CE->getDirectCallee();
    if (FD && !IsTypeModifiable(FD->getReturnType(), IsDereference)) {
      if (!DiagnosticEmitted) {
        S.Diag(Loc, diag::err_typecheck_assign_const) << ExprRange
                                                      << ConstFunction << FD;
        DiagnosticEmitted = true;
      }
      S.Diag(FD->getReturnTypeSourceRange().getBegin(),
             diag::note_typecheck_assign_const)
          << ConstFunction << FD << FD->getReturnType()
          << FD->getReturnTypeSourceRange();
    }
  } else if (const DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(E)) {
    // Point to variable declaration.
    if (const ValueDecl *VD = DRE->getDecl()) {
      if (!IsTypeModifiable(VD->getType(), IsDereference)) {
        if (!DiagnosticEmitted) {
          S.Diag(Loc, diag::err_typecheck_assign_const)
              << ExprRange << ConstVariable << VD << VD->getType();
          DiagnosticEmitted = true;
        }
        S.Diag(VD->getLocation(), diag::note_typecheck_assign_const)
            << ConstVariable << VD << VD->getType() << VD->getSourceRange();
      }
    }
  } else if (isa<CXXThisExpr>(E)) {
    if (const DeclContext *DC = S.getFunctionLevelDeclContext()) {
      if (const CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(DC)) {
        if (MD->isConst()) {
          if (!DiagnosticEmitted) {
            S.Diag(Loc, diag::err_typecheck_assign_const) << ExprRange
                                                          << ConstMethod
                                                          << MD;
            DiagnosticEmitted = true;
          }
          S.Diag(MD->getLocation(), diag::note_typecheck_assign_const)
              << ConstMethod << MD << MD->getSourceRange();
        }
      }
    }
  }

  if (DiagnosticEmitted)
    return;

  // Can't determine a more specific message, so display the generic error.
  S.Diag(Loc, diag::err_typecheck_assign_const) << ExprRange << ConstUnknown;
}

/// CheckForModifiableLvalue - Verify that E is a modifiable lvalue.  If not,
/// emit an error and return true.  If so, return false.
static bool CheckForModifiableLvalue(Expr *E, SourceLocation Loc, Sema &S) {
  assert(!E->hasPlaceholderType(BuiltinType::PseudoObject));

  S.CheckShadowingDeclModification(E, Loc);

  SourceLocation OrigLoc = Loc;
  Expr::isModifiableLvalueResult IsLV = E->isModifiableLvalue(S.Context,
                                                              &Loc);
  if (IsLV == Expr::MLV_ClassTemporary && IsReadonlyMessage(E, S))
    IsLV = Expr::MLV_InvalidMessageExpression;
  if (IsLV == Expr::MLV_Valid)
    return false;

  unsigned DiagID = 0;
  bool NeedType = false;
  switch (IsLV) { // C99 6.5.16p2
  case Expr::MLV_ConstQualified:
    // Use a specialized diagnostic when we're assigning to an object
    // from an enclosing function or block.
    if (NonConstCaptureKind NCCK = isReferenceToNonConstCapture(S, E)) {
      if (NCCK == NCCK_Block)
        DiagID = diag::err_block_decl_ref_not_modifiable_lvalue;
      else
        DiagID = diag::err_lambda_decl_ref_not_modifiable_lvalue;
      break;
    }

    // In ARC, use some specialized diagnostics for occasions where we
    // infer 'const'.  These are always pseudo-strong variables.
    if (S.getLangOpts().ObjCAutoRefCount) {
      DeclRefExpr *declRef = dyn_cast<DeclRefExpr>(E->IgnoreParenCasts());
      if (declRef && isa<VarDecl>(declRef->getDecl())) {
        VarDecl *var = cast<VarDecl>(declRef->getDecl());

        // Use the normal diagnostic if it's pseudo-__strong but the
        // user actually wrote 'const'.
        if (var->isARCPseudoStrong() &&
            (!var->getTypeSourceInfo() ||
             !var->getTypeSourceInfo()->getType().isConstQualified())) {
          // There are two pseudo-strong cases:
          //  - self
          ObjCMethodDecl *method = S.getCurMethodDecl();
          if (method && var == method->getSelfDecl())
            DiagID = method->isClassMethod()
              ? diag::err_typecheck_arc_assign_self_class_method
              : diag::err_typecheck_arc_assign_self;

          //  - fast enumeration variables
          else
            DiagID = diag::err_typecheck_arr_assign_enumeration;

          SourceRange Assign;
          if (Loc != OrigLoc)
            Assign = SourceRange(OrigLoc, OrigLoc);
          S.Diag(Loc, DiagID) << E->getSourceRange() << Assign;
          // We need to preserve the AST regardless, so the migration tool
          // can do its job.
          return false;
        }
      }
    }

    // If none of the special cases above are triggered, then this is a
    // simple const assignment.
    if (DiagID == 0) {
      DiagnoseConstAssignment(S, E, Loc);
      return true;
    }

    break;
  case Expr::MLV_ConstAddrSpace:
    DiagnoseConstAssignment(S, E, Loc);
    return true;
  case Expr::MLV_ArrayType:
  case Expr::MLV_ArrayTemporary:
    DiagID = diag::err_typecheck_array_not_modifiable_lvalue;
    NeedType = true;
    break;
  case Expr::MLV_NotObjectType:
    DiagID = diag::err_typecheck_non_object_not_modifiable_lvalue;
    NeedType = true;
    break;
  case Expr::MLV_LValueCast:
    DiagID = diag::err_typecheck_lvalue_casts_not_supported;
    break;
  case Expr::MLV_Valid:
    llvm_unreachable("did not take early return for MLV_Valid");
  case Expr::MLV_InvalidExpression:
  case Expr::MLV_MemberFunction:
  case Expr::MLV_ClassTemporary:
    DiagID = diag::err_typecheck_expression_not_modifiable_lvalue;
    break;
  case Expr::MLV_IncompleteType:
  case Expr::MLV_IncompleteVoidType:
    return S.RequireCompleteType(Loc, E->getType(),
              diag::err_typecheck_incomplete_type_not_modifiable_lvalue, E);
  case Expr::MLV_DuplicateVectorComponents:
    DiagID = diag::err_typecheck_duplicate_vector_components_not_mlvalue;
    break;
  case Expr::MLV_NoSetterProperty:
    llvm_unreachable("readonly properties should be processed differently");
  case Expr::MLV_InvalidMessageExpression:
    DiagID = diag::err_readonly_message_assignment;
    break;
  case Expr::MLV_SubObjCPropertySetting:
    DiagID = diag::err_no_subobject_property_setting;
    break;
  }

  SourceRange Assign;
  if (Loc != OrigLoc)
    Assign = SourceRange(OrigLoc, OrigLoc);
  if (NeedType)
    S.Diag(Loc, DiagID) << E->getType() << E->getSourceRange() << Assign;
  else
    S.Diag(Loc, DiagID) << E->getSourceRange() << Assign;
  return true;
}

static void CheckIdentityFieldAssignment(Expr *LHSExpr, Expr *RHSExpr,
                                         SourceLocation Loc,
                                         Sema &Sema) {
  // C / C++ fields
  MemberExpr *ML = dyn_cast<MemberExpr>(LHSExpr);
  MemberExpr *MR = dyn_cast<MemberExpr>(RHSExpr);
  if (ML && MR && ML->getMemberDecl() == MR->getMemberDecl()) {
    if (isa<CXXThisExpr>(ML->getBase()) && isa<CXXThisExpr>(MR->getBase()))
      Sema.Diag(Loc, diag::warn_identity_field_assign) << 0;
  }

  // Objective-C instance variables
  ObjCIvarRefExpr *OL = dyn_cast<ObjCIvarRefExpr>(LHSExpr);
  ObjCIvarRefExpr *OR = dyn_cast<ObjCIvarRefExpr>(RHSExpr);
  if (OL && OR && OL->getDecl() == OR->getDecl()) {
    DeclRefExpr *RL = dyn_cast<DeclRefExpr>(OL->getBase()->IgnoreImpCasts());
    DeclRefExpr *RR = dyn_cast<DeclRefExpr>(OR->getBase()->IgnoreImpCasts());
    if (RL && RR && RL->getDecl() == RR->getDecl())
      Sema.Diag(Loc, diag::warn_identity_field_assign) << 1;
  }
}
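// For illustration (not part of the original source): the helper above
// catches no-op self-assignments of a field through the implicit object:
//
//   struct S {
//     int n;
//     void init() { this->n = this->n; }   // warn_identity_field_assign
//   };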
// C99 6.5.16.1
QualType Sema::CheckAssignmentOperands(Expr *LHSExpr, ExprResult &RHS,
                                       SourceLocation Loc,
                                       QualType CompoundType) {
  assert(!LHSExpr->hasPlaceholderType(BuiltinType::PseudoObject));

  // Verify that LHS is a modifiable lvalue, and emit error if not.
  if (CheckForModifiableLvalue(LHSExpr, Loc, *this))
    return QualType();

  QualType LHSType = LHSExpr->getType();
  QualType RHSType = CompoundType.isNull() ? RHS.get()->getType() :
                                             CompoundType;
  // OpenCL v1.2 s6.1.1.1 p2:
  // The half data type can only be used to declare a pointer to a buffer that
  // contains half values
  if (getLangOpts().OpenCL && !getOpenCLOptions().isEnabled("cl_khr_fp16") &&
      LHSType->isHalfType()) {
    Diag(Loc, diag::err_opencl_half_load_store) << 1
        << LHSType.getUnqualifiedType();
    return QualType();
  }

  AssignConvertType ConvTy;
  if (CompoundType.isNull()) {
    Expr *RHSCheck = RHS.get();

    CheckIdentityFieldAssignment(LHSExpr, RHSCheck, Loc, *this);

    QualType LHSTy(LHSType);
    ConvTy = CheckSingleAssignmentConstraints(LHSTy, RHS);
    if (RHS.isInvalid())
      return QualType();
    // Special case of NSObject attributes on c-style pointer types.
    if (ConvTy == IncompatiblePointer &&
        ((Context.isObjCNSObjectType(LHSType) &&
          RHSType->isObjCObjectPointerType()) ||
         (Context.isObjCNSObjectType(RHSType) &&
          LHSType->isObjCObjectPointerType())))
      ConvTy = Compatible;

    if (ConvTy == Compatible &&
        LHSType->isObjCObjectType())
      Diag(Loc, diag::err_objc_object_assignment)
        << LHSType;

    // If the RHS is a unary plus or minus, check to see if the '=' and the
    // '+' or '-' are right next to each other.  If so, the user may have
    // typo'd "x =+ 4" instead of "x += 4".
    if (ImplicitCastExpr *ICE = dyn_cast<ImplicitCastExpr>(RHSCheck))
      RHSCheck = ICE->getSubExpr();
    if (UnaryOperator *UO = dyn_cast<UnaryOperator>(RHSCheck)) {
      if ((UO->getOpcode() == UO_Plus ||
           UO->getOpcode() == UO_Minus) &&
          Loc.isFileID() && UO->getOperatorLoc().isFileID() &&
          // Only if the two operators are exactly adjacent.
          Loc.getLocWithOffset(1) == UO->getOperatorLoc() &&
          // And there is a space or other character before the subexpr of the
          // unary +/-.  We don't want to warn on "x=-1".
          Loc.getLocWithOffset(2) != UO->getSubExpr()->getLocStart() &&
          UO->getSubExpr()->getLocStart().isFileID()) {
        Diag(Loc, diag::warn_not_compound_assign)
          << (UO->getOpcode() == UO_Plus ? "+" : "-")
          << SourceRange(UO->getOperatorLoc(), UO->getOperatorLoc());
      }
    }

    if (ConvTy == Compatible) {
      if (LHSType.getObjCLifetime() == Qualifiers::OCL_Strong) {
        // Warn about retain cycles where a block captures the LHS, but
        // not if the LHS is a simple variable into which the block is
        // being stored...unless that variable can be captured by reference!
        const Expr *InnerLHS = LHSExpr->IgnoreParenCasts();
        const DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(InnerLHS);
        if (!DRE || DRE->getDecl()->hasAttr<BlocksAttr>())
          checkRetainCycles(LHSExpr, RHS.get());

        // It is safe to assign a weak reference into a strong variable.
        // Although this code can still have problems:
        //   id x = self.weakProp;
        //   id y = self.weakProp;
        // we do not warn, to avoid warning spuriously when 'x' and 'y' are
        // on separate paths through the function. This should be revisited
        // if -Wrepeated-use-of-weak is made flow-sensitive.
        if (!Diags.isIgnored(diag::warn_arc_repeated_use_of_weak,
                             RHS.get()->getLocStart()))
          getCurFunction()->markSafeWeakUse(RHS.get());
      } else if (getLangOpts().ObjCAutoRefCount) {
        checkUnsafeExprAssigns(Loc, LHSExpr, RHS.get());
      }
    }
  } else {
    // Compound assignment "x += y"
    ConvTy = CheckAssignmentConstraints(Loc, LHSType, RHSType);
  }

  if (DiagnoseAssignmentResult(ConvTy, Loc, LHSType, RHSType,
                               RHS.get(), AA_Assigning))
    return QualType();

  CheckForNullPointerDereference(*this, LHSExpr);

  // C99 6.5.16p3: The type of an assignment expression is the type of the
  // left operand unless the left operand has qualified type, in which case
  // it is the unqualified version of the type of the left operand.
  // C99 6.5.16.1p2: In simple assignment, the value of the right operand
  // is converted to the type of the assignment expression (above).
  // C++ 5.17p1: the type of the assignment expression is that of its left
  // operand.
  return (getLangOpts().CPlusPlus
          ? LHSType : LHSType.getUnqualifiedType());
}

// Only ignore explicit casts to void.
static bool IgnoreCommaOperand(const Expr *E) {
  E = E->IgnoreParens();

  if (const CastExpr *CE = dyn_cast<CastExpr>(E)) {
    if (CE->getCastKind() == CK_ToVoid) {
      return true;
    }
  }

  return false;
}
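// For illustration (not part of the original source): DiagnoseCommaOperator
// below warns when a top-level comma looks like a typo for another operator,
// except in the whitelisted for-loop positions:
//
//   if (x = 1, y) { ... }          // warn_comma_operator; the fix-it casts
//                                  // the LHS to void: if ((void)(x = 1), y)
//   for (i = 0, j = 0; i < n; ++i) // no warning: for-init is whitelisted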
// Look for instances where it is likely the comma operator is confused with
// another operator.  There is a whitelist of acceptable expressions for the
// left hand side of the comma operator, otherwise emit a warning.
void Sema::DiagnoseCommaOperator(const Expr *LHS, SourceLocation Loc) {
  // No warnings in macros
  if (Loc.isMacroID())
    return;

  // Don't warn in template instantiations.
  if (!ActiveTemplateInstantiations.empty())
    return;

  // Scope isn't fine-grained enough to whitelist the specific cases, so
  // instead, skip more than needed, then call back into here with the
  // CommaVisitor in SemaStmt.cpp.
  // The whitelisted locations are the initialization and increment portions
  // of a for loop.  The additional checks are on the condition of
  // if statements, do/while loops, and for loops.
  const unsigned ForIncrementFlags =
      Scope::ControlScope | Scope::ContinueScope | Scope::BreakScope;
  const unsigned ForInitFlags = Scope::ControlScope | Scope::DeclScope;
  const unsigned ScopeFlags = getCurScope()->getFlags();
  if ((ScopeFlags & ForIncrementFlags) == ForIncrementFlags ||
      (ScopeFlags & ForInitFlags) == ForInitFlags)
    return;

  // If there are multiple comma operators used together, use the RHS of the
  // rightmost comma operator as the LHS.
  while (const BinaryOperator *BO = dyn_cast<BinaryOperator>(LHS)) {
    if (BO->getOpcode() != BO_Comma)
      break;
    LHS = BO->getRHS();
  }

  // Only allow some expressions on LHS to not warn.
  if (IgnoreCommaOperand(LHS))
    return;

  Diag(Loc, diag::warn_comma_operator);
  Diag(LHS->getLocStart(), diag::note_cast_to_void)
      << LHS->getSourceRange()
      << FixItHint::CreateInsertion(LHS->getLocStart(),
                                    LangOpts.CPlusPlus ? "static_cast<void>("
                                                       : "(void)(")
      << FixItHint::CreateInsertion(PP.getLocForEndOfToken(LHS->getLocEnd()),
                                    ")");
}

// C99 6.5.17
static QualType CheckCommaOperands(Sema &S, ExprResult &LHS, ExprResult &RHS,
                                   SourceLocation Loc) {
  LHS = S.CheckPlaceholderExpr(LHS.get());
  RHS = S.CheckPlaceholderExpr(RHS.get());
  if (LHS.isInvalid() || RHS.isInvalid())
    return QualType();

  // C's comma performs lvalue conversion (C99 6.3.2.1) on both its
  // operands, but not unary promotions.
  // C++'s comma does not do any conversions at all (C++ [expr.comma]p1).

  // So we treat the LHS as an ignored value, and in C++ we allow the
  // containing site to determine what should be done with the RHS.
  LHS = S.IgnoredValueConversions(LHS.get());
  if (LHS.isInvalid())
    return QualType();

  S.DiagnoseUnusedExprResult(LHS.get());

  if (!S.getLangOpts().CPlusPlus) {
    RHS = S.DefaultFunctionArrayLvalueConversion(RHS.get());
    if (RHS.isInvalid())
      return QualType();
    if (!RHS.get()->getType()->isVoidType())
      S.RequireCompleteType(Loc, RHS.get()->getType(),
                            diag::err_incomplete_type);
  }

  if (!S.getDiagnostics().isIgnored(diag::warn_comma_operator, Loc))
    S.DiagnoseCommaOperator(LHS.get(), Loc);

  return RHS.get()->getType();
}

/// CheckIncrementDecrementOperand - unlike most "Check" methods, this routine
/// doesn't need to call UsualUnaryConversions or UsualArithmeticConversions.
static QualType CheckIncrementDecrementOperand(Sema &S, Expr *Op,
                                               ExprValueKind &VK,
                                               ExprObjectKind &OK,
                                               SourceLocation OpLoc,
                                               bool IsInc, bool IsPrefix) {
  if (Op->isTypeDependent())
    return S.Context.DependentTy;

  QualType ResType = Op->getType();
  // Atomic types can be used for increment / decrement where the non-atomic
  // versions can, so ignore the _Atomic() specifier for the purpose of
  // checking.
  if (const AtomicType *ResAtomicType = ResType->getAs<AtomicType>())
    ResType = ResAtomicType->getValueType();

  assert(!ResType.isNull() && "no type for increment/decrement expression");

  if (S.getLangOpts().CPlusPlus && ResType->isBooleanType()) {
    // Decrement of bool is not allowed.
    if (!IsInc) {
      S.Diag(OpLoc, diag::err_decrement_bool) << Op->getSourceRange();
      return QualType();
    }
    // Increment of bool sets it to true, but is deprecated.
    S.Diag(OpLoc, S.getLangOpts().CPlusPlus1z ? diag::ext_increment_bool
                                              : diag::warn_increment_bool)
      << Op->getSourceRange();
  } else if (S.getLangOpts().CPlusPlus && ResType->isEnumeralType()) {
    // Error on enum increments and decrements in C++ mode
    S.Diag(OpLoc, diag::err_increment_decrement_enum) << IsInc << ResType;
    return QualType();
  } else if (ResType->isRealType()) {
    // OK!
  } else if (ResType->isPointerType()) {
    // C99 6.5.2.4p2, 6.5.6p2
    if (!checkArithmeticOpPointerOperand(S, OpLoc, Op))
      return QualType();
  } else if (ResType->isObjCObjectPointerType()) {
    // On modern runtimes, ObjC pointer arithmetic is forbidden.
    // Otherwise, we just need a complete type.
    if (checkArithmeticIncompletePointerType(S, OpLoc, Op) ||
        checkArithmeticOnObjCPointer(S, OpLoc, Op))
      return QualType();
  } else if (ResType->isAnyComplexType()) {
    // C99 does not support ++/-- on complex types, we allow as an extension.
    S.Diag(OpLoc, diag::ext_integer_increment_complex)
      << ResType << Op->getSourceRange();
  } else if (ResType->isPlaceholderType()) {
    ExprResult PR = S.CheckPlaceholderExpr(Op);
    if (PR.isInvalid()) return QualType();
    return CheckIncrementDecrementOperand(S, PR.get(), VK, OK, OpLoc,
                                          IsInc, IsPrefix);
  } else if (S.getLangOpts().AltiVec && ResType->isVectorType()) {
    // OK! ( C/C++ Language Extensions for CBEA(Version 2.6) 10.3 )
  } else if (S.getLangOpts().ZVector && ResType->isVectorType() &&
             (ResType->getAs<VectorType>()->getVectorKind() !=
              VectorType::AltiVecBool)) {
    // The z vector extensions allow ++ and -- for non-bool vectors.
  } else if (S.getLangOpts().OpenCL && ResType->isVectorType() &&
             ResType->getAs<VectorType>()
                 ->getElementType()->isIntegerType()) {
    // OpenCL V1.2 6.3 says dec/inc ops operate on integer vector types.
  } else {
    S.Diag(OpLoc, diag::err_typecheck_illegal_increment_decrement)
      << ResType << int(IsInc) << Op->getSourceRange();
    return QualType();
  }
  // At this point, we know we have a real, complex or pointer type.
  // Now make sure the operand is a modifiable lvalue.
  if (CheckForModifiableLvalue(Op, OpLoc, S))
    return QualType();
  // In C++, a prefix increment is the same type as the operand. Otherwise
  // (in C or with postfix), the increment is the unqualified type of the
  // operand.
  if (IsPrefix && S.getLangOpts().CPlusPlus) {
    VK = VK_LValue;
    OK = Op->getObjectKind();
    return ResType;
  } else {
    VK = VK_RValue;
    return ResType.getUnqualifiedType();
  }
}
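// For illustration (not part of the original source): in C++ mode the
// routine above accepts but deprecates bool increment, and rejects bool
// decrement and enum increment/decrement outright:
//
//   bool b = false;
//   ++b;                // warn_increment_bool (ext_increment_bool in C++1z)
//   --b;                // err_decrement_bool
//   enum E { A } e = A;
//   ++e;                // err_increment_decrement_enum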
/// getPrimaryDecl - Helper function for CheckAddressOfOperand().
/// This routine allows us to typecheck complex/recursive expressions
/// where the declaration is needed for type checking. We only need to
/// handle cases when the expression references a function designator
/// or is an lvalue. Here are some examples:
///  - &(x) => x
///  - &*****f => f for f a function designator.
///  - &s.xx => s
///  - &s.zz[1].yy -> s, if zz is an array
///  - *(x + 1) -> x, if x is an array
///  - &"123"[2] -> 0
///  - & __real__ x -> x
static ValueDecl *getPrimaryDecl(Expr *E) {
  switch (E->getStmtClass()) {
  case Stmt::DeclRefExprClass:
    return cast<DeclRefExpr>(E)->getDecl();
  case Stmt::MemberExprClass:
    // If this is an arrow operator, the address is an offset from
    // the base's value, so the object the base refers to is
    // irrelevant.
    if (cast<MemberExpr>(E)->isArrow())
      return nullptr;
    // Otherwise, the expression refers to a part of the base
    return getPrimaryDecl(cast<MemberExpr>(E)->getBase());
  case Stmt::ArraySubscriptExprClass: {
    // FIXME: This code shouldn't be necessary!  We should catch the implicit
    // promotion of register arrays earlier.
    Expr *Base = cast<ArraySubscriptExpr>(E)->getBase();
    if (ImplicitCastExpr *ICE = dyn_cast<ImplicitCastExpr>(Base)) {
      if (ICE->getSubExpr()->getType()->isArrayType())
        return getPrimaryDecl(ICE->getSubExpr());
    }
    return nullptr;
  }
  case Stmt::UnaryOperatorClass: {
    UnaryOperator *UO = cast<UnaryOperator>(E);

    switch (UO->getOpcode()) {
    case UO_Real:
    case UO_Imag:
    case UO_Extension:
      return getPrimaryDecl(UO->getSubExpr());
    default:
      return nullptr;
    }
  }
  case Stmt::ParenExprClass:
    return getPrimaryDecl(cast<ParenExpr>(E)->getSubExpr());
  case Stmt::ImplicitCastExprClass:
    // If the result of an implicit cast is an l-value, we care about
    // the sub-expression; otherwise, the result here doesn't matter.
    return getPrimaryDecl(cast<ImplicitCastExpr>(E)->getSubExpr());
  default:
    return nullptr;
  }
}

namespace {
  enum {
    AO_Bit_Field = 0,
    AO_Vector_Element = 1,
    AO_Property_Expansion = 2,
    AO_Register_Variable = 3,
    AO_No_Error = 4
  };
}

/// \brief Diagnose invalid operand for address of operations.
///
/// \param Type The type of operand which cannot have its address taken.
static void diagnoseAddressOfInvalidType(Sema &S, SourceLocation Loc,
                                         Expr *E, unsigned Type) {
  S.Diag(Loc, diag::err_typecheck_address_of) << Type << E->getSourceRange();
}

/// CheckAddressOfOperand - The operand of & must be either a function
/// designator or an lvalue designating an object. If it is an lvalue, the
/// object cannot be declared with storage class register or be a bit field.
/// Note: The usual conversions are *not* applied to the operand of the &
/// operator (C99 6.3.2.1p[2-4]), and its result is never an lvalue.
/// In C++, the operand might be an overloaded function name, in which case
/// we allow the '&' but retain the overloaded-function type.
QualType Sema::CheckAddressOfOperand(ExprResult &OrigOp,
                                     SourceLocation OpLoc) {
  if (const BuiltinType *PTy =
          OrigOp.get()->getType()->getAsPlaceholderType()) {
    if (PTy->getKind() == BuiltinType::Overload) {
      Expr *E = OrigOp.get()->IgnoreParens();
      if (!isa<OverloadExpr>(E)) {
        assert(cast<UnaryOperator>(E)->getOpcode() == UO_AddrOf);
        Diag(OpLoc, diag::err_typecheck_invalid_lvalue_addrof_addrof_function)
          << OrigOp.get()->getSourceRange();
        return QualType();
      }

      OverloadExpr *Ovl = cast<OverloadExpr>(E);
      if (isa<UnresolvedMemberExpr>(Ovl))
        if (!ResolveSingleFunctionTemplateSpecialization(Ovl)) {
          Diag(OpLoc, diag::err_invalid_form_pointer_member_function)
            << OrigOp.get()->getSourceRange();
          return QualType();
        }

      return Context.OverloadTy;
    }

    if (PTy->getKind() == BuiltinType::UnknownAny)
      return Context.UnknownAnyTy;

    if (PTy->getKind() == BuiltinType::BoundMember) {
      Diag(OpLoc, diag::err_invalid_form_pointer_member_function)
        << OrigOp.get()->getSourceRange();
      return QualType();
    }

    OrigOp = CheckPlaceholderExpr(OrigOp.get());
    if (OrigOp.isInvalid())
      return QualType();
  }

  if (OrigOp.get()->isTypeDependent())
    return Context.DependentTy;

  assert(!OrigOp.get()->getType()->isPlaceholderType());

  // Make sure to ignore parentheses in subsequent checks
  Expr *op = OrigOp.get()->IgnoreParens();

  // OpenCL v1.0 s6.8.a.3: Pointers to functions are not allowed.
  if (LangOpts.OpenCL && op->getType()->isFunctionType()) {
    Diag(op->getExprLoc(), diag::err_opencl_taking_function_address);
    return QualType();
  }

  if (getLangOpts().C99) {
    // Implement C99-only parts of addressof rules.
if (UnaryOperator* uOp = dyn_cast(op)) { if (uOp->getOpcode() == UO_Deref) // Per C99 6.5.3.2, the address of a deref always returns a valid result // (assuming the deref expression is valid). return uOp->getSubExpr()->getType(); } // Technically, there should be a check for array subscript // expressions here, but the result of one is always an lvalue anyway. } ValueDecl *dcl = getPrimaryDecl(op); if (auto *FD = dyn_cast_or_null(dcl)) if (!checkAddressOfFunctionIsAvailable(FD, /*Complain=*/true, op->getLocStart())) return QualType(); Expr::LValueClassification lval = op->ClassifyLValue(Context); unsigned AddressOfError = AO_No_Error; if (lval == Expr::LV_ClassTemporary || lval == Expr::LV_ArrayTemporary) { bool sfinae = (bool)isSFINAEContext(); Diag(OpLoc, isSFINAEContext() ? diag::err_typecheck_addrof_temporary : diag::ext_typecheck_addrof_temporary) << op->getType() << op->getSourceRange(); if (sfinae) return QualType(); // Materialize the temporary as an lvalue so that we can take its address. OrigOp = op = CreateMaterializeTemporaryExpr(op->getType(), OrigOp.get(), true); } else if (isa(op)) { return Context.getPointerType(op->getType()); } else if (lval == Expr::LV_MemberFunction) { // If it's an instance method, make a member pointer. // The expression must have exactly the form &A::foo. // If the underlying expression isn't a decl ref, give up. if (!isa(op)) { Diag(OpLoc, diag::err_invalid_form_pointer_member_function) << OrigOp.get()->getSourceRange(); return QualType(); } DeclRefExpr *DRE = cast(op); CXXMethodDecl *MD = cast(DRE->getDecl()); // The id-expression was parenthesized. if (OrigOp.get() != DRE) { Diag(OpLoc, diag::err_parens_pointer_member_function) << OrigOp.get()->getSourceRange(); // The method was named without a qualifier. } else if (!DRE->getQualifier()) { if (MD->getParent()->getName().empty()) Diag(OpLoc, diag::err_unqualified_pointer_member_function) << op->getSourceRange(); else { SmallString<32> Str; StringRef Qual = (MD->getParent()->getName() + "::").toStringRef(Str); Diag(OpLoc, diag::err_unqualified_pointer_member_function) << op->getSourceRange() << FixItHint::CreateInsertion(op->getSourceRange().getBegin(), Qual); } } // Taking the address of a dtor is illegal per C++ [class.dtor]p2. if (isa(MD)) Diag(OpLoc, diag::err_typecheck_addrof_dtor) << op->getSourceRange(); QualType MPTy = Context.getMemberPointerType( op->getType(), Context.getTypeDeclType(MD->getParent()).getTypePtr()); // Under the MS ABI, lock down the inheritance model now. if (Context.getTargetInfo().getCXXABI().isMicrosoft()) (void)isCompleteType(OpLoc, MPTy); return MPTy; } else if (lval != Expr::LV_Valid && lval != Expr::LV_IncompleteVoidType) { // C99 6.5.3.2p1 // The operand must be either an l-value or a function designator if (!op->getType()->isFunctionType()) { // Use a special diagnostic for loads from property references. if (isa(op)) { AddressOfError = AO_Property_Expansion; } else { Diag(OpLoc, diag::err_typecheck_invalid_lvalue_addrof) << op->getType() << op->getSourceRange(); return QualType(); } } } else if (op->getObjectKind() == OK_BitField) { // C99 6.5.3.2p1 // The operand cannot be a bit-field AddressOfError = AO_Bit_Field; } else if (op->getObjectKind() == OK_VectorComponent) { // The operand cannot be an element of a vector AddressOfError = AO_Vector_Element; } else if (dcl) { // C99 6.5.3.2p1 // We have an lvalue with a decl. Make sure the decl is not declared // with the register storage-class specifier. 
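    // Illustrative example (not from the original source): in C,
    //   register int r;
    //   int *p = &r;   // error: address of register variable requested
    // whereas C++ permits it, per the citation below.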
    if (const VarDecl *vd = dyn_cast<VarDecl>(dcl)) {
      // In C++ it is not an error to take the address of a register
      // variable (C++03 7.1.1p3).
      if (vd->getStorageClass() == SC_Register && !getLangOpts().CPlusPlus) {
        AddressOfError = AO_Register_Variable;
      }
    } else if (isa<MSPropertyDecl>(dcl)) {
      AddressOfError = AO_Property_Expansion;
    } else if (isa<FunctionTemplateDecl>(dcl)) {
      return Context.OverloadTy;
    } else if (isa<FieldDecl>(dcl) || isa<IndirectFieldDecl>(dcl)) {
      // Okay: we can take the address of a field.
      // Could be a pointer to member, though, if there is an explicit
      // scope qualifier for the class.
      if (isa<DeclRefExpr>(op) && cast<DeclRefExpr>(op)->getQualifier()) {
        DeclContext *Ctx = dcl->getDeclContext();
        if (Ctx && Ctx->isRecord()) {
          if (dcl->getType()->isReferenceType()) {
            Diag(OpLoc,
                 diag::err_cannot_form_pointer_to_member_of_reference_type)
              << dcl->getDeclName() << dcl->getType();
            return QualType();
          }

          while (cast<RecordDecl>(Ctx)->isAnonymousStructOrUnion())
            Ctx = Ctx->getParent();

          QualType MPTy = Context.getMemberPointerType(
              op->getType(),
              Context.getTypeDeclType(cast<RecordDecl>(Ctx)).getTypePtr());
          // Under the MS ABI, lock down the inheritance model now.
          if (Context.getTargetInfo().getCXXABI().isMicrosoft())
            (void)isCompleteType(OpLoc, MPTy);
          return MPTy;
        }
      }
    } else if (!isa<FunctionDecl>(dcl) && !isa<NonTypeTemplateParmDecl>(dcl) &&
               !isa<BindingDecl>(dcl))
      llvm_unreachable("Unknown/unexpected decl type");
  }

  if (AddressOfError != AO_No_Error) {
    diagnoseAddressOfInvalidType(*this, OpLoc, op, AddressOfError);
    return QualType();
  }

  if (lval == Expr::LV_IncompleteVoidType) {
    // Taking the address of a void variable is technically illegal, but we
    // allow it in cases which are otherwise valid.
    // Example: "extern void x; void* y = &x;".
    Diag(OpLoc, diag::ext_typecheck_addrof_void) << op->getSourceRange();
  }

  // If the operand has type "type", the result has type "pointer to type".
  if (op->getType()->isObjCObjectType())
    return Context.getObjCObjectPointerType(op->getType());

  CheckAddressOfPackedMember(op);

  return Context.getPointerType(op->getType());
}

static void RecordModifiableNonNullParam(Sema &S, const Expr *Exp) {
  const DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(Exp);
  if (!DRE)
    return;
  const Decl *D = DRE->getDecl();
  if (!D)
    return;
  const ParmVarDecl *Param = dyn_cast<ParmVarDecl>(D);
  if (!Param)
    return;
  if (const FunctionDecl* FD = dyn_cast<FunctionDecl>(Param->getDeclContext()))
    if (!FD->hasAttr<NonNullAttr>() && !Param->hasAttr<NonNullAttr>())
      return;
  if (FunctionScopeInfo *FD = S.getCurFunction())
    if (!FD->ModifiedNonNullParams.count(Param))
      FD->ModifiedNonNullParams.insert(Param);
}

/// CheckIndirectionOperand - Type check unary indirection (prefix '*').
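/// For illustration: given 'int *p', '*p' is an l-value of type 'int'. In
/// C++, dereferencing a 'void *' is diagnosed below as an extension; C
/// allows it.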
static QualType CheckIndirectionOperand(Sema &S, Expr *Op, ExprValueKind &VK, SourceLocation OpLoc) { if (Op->isTypeDependent()) return S.Context.DependentTy; ExprResult ConvResult = S.UsualUnaryConversions(Op); if (ConvResult.isInvalid()) return QualType(); Op = ConvResult.get(); QualType OpTy = Op->getType(); QualType Result; if (isa(Op)) { QualType OpOrigType = Op->IgnoreParenCasts()->getType(); S.CheckCompatibleReinterpretCast(OpOrigType, OpTy, /*IsDereference*/true, Op->getSourceRange()); } if (const PointerType *PT = OpTy->getAs()) { Result = PT->getPointeeType(); } else if (const ObjCObjectPointerType *OPT = OpTy->getAs()) Result = OPT->getPointeeType(); else { ExprResult PR = S.CheckPlaceholderExpr(Op); if (PR.isInvalid()) return QualType(); if (PR.get() != Op) return CheckIndirectionOperand(S, PR.get(), VK, OpLoc); } if (Result.isNull()) { S.Diag(OpLoc, diag::err_typecheck_indirection_requires_pointer) << OpTy << Op->getSourceRange(); return QualType(); } // Note that per both C89 and C99, indirection is always legal, even if Result // is an incomplete type or void. It would be possible to warn about // dereferencing a void pointer, but it's completely well-defined, and such a // warning is unlikely to catch any mistakes. In C++, indirection is not valid // for pointers to 'void' but is fine for any other pointer type: // // C++ [expr.unary.op]p1: // [...] the expression to which [the unary * operator] is applied shall // be a pointer to an object type, or a pointer to a function type if (S.getLangOpts().CPlusPlus && Result->isVoidType()) S.Diag(OpLoc, diag::ext_typecheck_indirection_through_void_pointer) << OpTy << Op->getSourceRange(); // Dereferences are usually l-values... VK = VK_LValue; // ...except that certain expressions are never l-values in C. 
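  // (For example, in C the result of dereferencing a pointer to function or
  // to 'void' is never an lvalue; isCForbiddenLValueType covers those cases.)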
if (!S.getLangOpts().CPlusPlus && Result.isCForbiddenLValueType()) VK = VK_RValue; return Result; } BinaryOperatorKind Sema::ConvertTokenKindToBinaryOpcode(tok::TokenKind Kind) { BinaryOperatorKind Opc; switch (Kind) { default: llvm_unreachable("Unknown binop!"); case tok::periodstar: Opc = BO_PtrMemD; break; case tok::arrowstar: Opc = BO_PtrMemI; break; case tok::star: Opc = BO_Mul; break; case tok::slash: Opc = BO_Div; break; case tok::percent: Opc = BO_Rem; break; case tok::plus: Opc = BO_Add; break; case tok::minus: Opc = BO_Sub; break; case tok::lessless: Opc = BO_Shl; break; case tok::greatergreater: Opc = BO_Shr; break; case tok::lessequal: Opc = BO_LE; break; case tok::less: Opc = BO_LT; break; case tok::greaterequal: Opc = BO_GE; break; case tok::greater: Opc = BO_GT; break; case tok::exclaimequal: Opc = BO_NE; break; case tok::equalequal: Opc = BO_EQ; break; case tok::amp: Opc = BO_And; break; case tok::caret: Opc = BO_Xor; break; case tok::pipe: Opc = BO_Or; break; case tok::ampamp: Opc = BO_LAnd; break; case tok::pipepipe: Opc = BO_LOr; break; case tok::equal: Opc = BO_Assign; break; case tok::starequal: Opc = BO_MulAssign; break; case tok::slashequal: Opc = BO_DivAssign; break; case tok::percentequal: Opc = BO_RemAssign; break; case tok::plusequal: Opc = BO_AddAssign; break; case tok::minusequal: Opc = BO_SubAssign; break; case tok::lesslessequal: Opc = BO_ShlAssign; break; case tok::greatergreaterequal: Opc = BO_ShrAssign; break; case tok::ampequal: Opc = BO_AndAssign; break; case tok::caretequal: Opc = BO_XorAssign; break; case tok::pipeequal: Opc = BO_OrAssign; break; case tok::comma: Opc = BO_Comma; break; } return Opc; } static inline UnaryOperatorKind ConvertTokenKindToUnaryOpcode( tok::TokenKind Kind) { UnaryOperatorKind Opc; switch (Kind) { default: llvm_unreachable("Unknown unary op!"); case tok::plusplus: Opc = UO_PreInc; break; case tok::minusminus: Opc = UO_PreDec; break; case tok::amp: Opc = UO_AddrOf; break; case tok::star: Opc = UO_Deref; break; case tok::plus: Opc = UO_Plus; break; case tok::minus: Opc = UO_Minus; break; case tok::tilde: Opc = UO_Not; break; case tok::exclaim: Opc = UO_LNot; break; case tok::kw___real: Opc = UO_Real; break; case tok::kw___imag: Opc = UO_Imag; break; case tok::kw___extension__: Opc = UO_Extension; break; } return Opc; } /// DiagnoseSelfAssignment - Emits a warning if a value is assigned to itself. /// This warning is only emitted for builtin assignment operations. It is also /// suppressed in the event of macro expansions. 
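/// For illustration: 'x = x;' warns here, while self-assignment through a
/// 'volatile' lvalue or one spelled inside a macro expansion is deliberately
/// left undiagnosed.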
static void DiagnoseSelfAssignment(Sema &S, Expr *LHSExpr, Expr *RHSExpr, SourceLocation OpLoc) { if (!S.ActiveTemplateInstantiations.empty()) return; if (OpLoc.isInvalid() || OpLoc.isMacroID()) return; LHSExpr = LHSExpr->IgnoreParenImpCasts(); RHSExpr = RHSExpr->IgnoreParenImpCasts(); const DeclRefExpr *LHSDeclRef = dyn_cast(LHSExpr); const DeclRefExpr *RHSDeclRef = dyn_cast(RHSExpr); if (!LHSDeclRef || !RHSDeclRef || LHSDeclRef->getLocation().isMacroID() || RHSDeclRef->getLocation().isMacroID()) return; const ValueDecl *LHSDecl = cast(LHSDeclRef->getDecl()->getCanonicalDecl()); const ValueDecl *RHSDecl = cast(RHSDeclRef->getDecl()->getCanonicalDecl()); if (LHSDecl != RHSDecl) return; if (LHSDecl->getType().isVolatileQualified()) return; if (const ReferenceType *RefTy = LHSDecl->getType()->getAs()) if (RefTy->getPointeeType().isVolatileQualified()) return; S.Diag(OpLoc, diag::warn_self_assignment) << LHSDeclRef->getType() << LHSExpr->getSourceRange() << RHSExpr->getSourceRange(); } /// Check if a bitwise-& is performed on an Objective-C pointer. This /// is usually indicative of introspection within the Objective-C pointer. static void checkObjCPointerIntrospection(Sema &S, ExprResult &L, ExprResult &R, SourceLocation OpLoc) { if (!S.getLangOpts().ObjC1) return; const Expr *ObjCPointerExpr = nullptr, *OtherExpr = nullptr; const Expr *LHS = L.get(); const Expr *RHS = R.get(); if (LHS->IgnoreParenCasts()->getType()->isObjCObjectPointerType()) { ObjCPointerExpr = LHS; OtherExpr = RHS; } else if (RHS->IgnoreParenCasts()->getType()->isObjCObjectPointerType()) { ObjCPointerExpr = RHS; OtherExpr = LHS; } // This warning is deliberately made very specific to reduce false // positives with logic that uses '&' for hashing. This logic mainly // looks for code trying to introspect into tagged pointers, which // code should generally never do. if (ObjCPointerExpr && isa(OtherExpr->IgnoreParenCasts())) { unsigned Diag = diag::warn_objc_pointer_masking; // Determine if we are introspecting the result of performSelectorXXX. const Expr *Ex = ObjCPointerExpr->IgnoreParenCasts(); // Special case messages to -performSelector and friends, which // can return non-pointer values boxed in a pointer value. // Some clients may wish to silence warnings in this subcase. if (const ObjCMessageExpr *ME = dyn_cast(Ex)) { Selector S = ME->getSelector(); StringRef SelArg0 = S.getNameForSlot(0); if (SelArg0.startswith("performSelector")) Diag = diag::warn_objc_pointer_masking_performSelector; } S.Diag(OpLoc, Diag) << ObjCPointerExpr->getSourceRange(); } } static NamedDecl *getDeclFromExpr(Expr *E) { if (!E) return nullptr; if (auto *DRE = dyn_cast(E)) return DRE->getDecl(); if (auto *ME = dyn_cast(E)) return ME->getMemberDecl(); if (auto *IRE = dyn_cast(E)) return IRE->getDecl(); return nullptr; } /// CreateBuiltinBinOp - Creates a new built-in binary operation with /// operator @p Opc at location @c TokLoc. This routine only supports /// built-in operations; ActOnBinOp handles overloaded operators. ExprResult Sema::CreateBuiltinBinOp(SourceLocation OpLoc, BinaryOperatorKind Opc, Expr *LHSExpr, Expr *RHSExpr) { if (getLangOpts().CPlusPlus11 && isa(RHSExpr)) { // The syntax only allows initializer lists on the RHS of assignment, // so we don't need to worry about accepting invalid code for // non-assignment operators. // C++11 5.17p9: // The meaning of x = {v} [...] is that of x = T(v) [...]. The meaning // of x = {} is x = T(). 
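    // Illustrative example (not from the original source):
    //   int x;
    //   x = {42};   // assigns a temporary 'int' direct-list-initialized
    //   x = {};     // from the list; '{}' value-initializes, yielding 0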
InitializationKind Kind = InitializationKind::CreateDirectList(RHSExpr->getLocStart()); InitializedEntity Entity = InitializedEntity::InitializeTemporary(LHSExpr->getType()); InitializationSequence InitSeq(*this, Entity, Kind, RHSExpr); ExprResult Init = InitSeq.Perform(*this, Entity, Kind, RHSExpr); if (Init.isInvalid()) return Init; RHSExpr = Init.get(); } ExprResult LHS = LHSExpr, RHS = RHSExpr; QualType ResultTy; // Result type of the binary operator. // The following two variables are used for compound assignment operators QualType CompLHSTy; // Type of LHS after promotions for computation QualType CompResultTy; // Type of computation result ExprValueKind VK = VK_RValue; ExprObjectKind OK = OK_Ordinary; if (!getLangOpts().CPlusPlus) { // C cannot handle TypoExpr nodes on either side of a binop because it // doesn't handle dependent types properly, so make sure any TypoExprs have // been dealt with before checking the operands. LHS = CorrectDelayedTyposInExpr(LHSExpr); RHS = CorrectDelayedTyposInExpr(RHSExpr, [Opc, LHS](Expr *E) { if (Opc != BO_Assign) return ExprResult(E); // Avoid correcting the RHS to the same Expr as the LHS. Decl *D = getDeclFromExpr(E); return (D && D == getDeclFromExpr(LHS.get())) ? ExprError() : E; }); if (!LHS.isUsable() || !RHS.isUsable()) return ExprError(); } if (getLangOpts().OpenCL) { QualType LHSTy = LHSExpr->getType(); QualType RHSTy = RHSExpr->getType(); // OpenCLC v2.0 s6.13.11.1 allows atomic variables to be initialized by // the ATOMIC_VAR_INIT macro. if (LHSTy->isAtomicType() || RHSTy->isAtomicType()) { SourceRange SR(LHSExpr->getLocStart(), RHSExpr->getLocEnd()); if (BO_Assign == Opc) Diag(OpLoc, diag::err_atomic_init_constant) << SR; else ResultTy = InvalidOperands(OpLoc, LHS, RHS); return ExprError(); } // OpenCL special types - image, sampler, pipe, and blocks are to be used // only with a builtin functions and therefore should be disallowed here. 
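    // Illustrative (assumed) OpenCL example: a comparison such as
    //   s1 == s2   // where s1 and s2 have type 'sampler_t'
    // is rejected by the check below.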
if (LHSTy->isImageType() || RHSTy->isImageType() || LHSTy->isSamplerT() || RHSTy->isSamplerT() || LHSTy->isPipeType() || RHSTy->isPipeType() || LHSTy->isBlockPointerType() || RHSTy->isBlockPointerType()) { ResultTy = InvalidOperands(OpLoc, LHS, RHS); return ExprError(); } } switch (Opc) { case BO_Assign: ResultTy = CheckAssignmentOperands(LHS.get(), RHS, OpLoc, QualType()); if (getLangOpts().CPlusPlus && LHS.get()->getObjectKind() != OK_ObjCProperty) { VK = LHS.get()->getValueKind(); OK = LHS.get()->getObjectKind(); } if (!ResultTy.isNull()) { DiagnoseSelfAssignment(*this, LHS.get(), RHS.get(), OpLoc); DiagnoseSelfMove(LHS.get(), RHS.get(), OpLoc); } RecordModifiableNonNullParam(*this, LHS.get()); break; case BO_PtrMemD: case BO_PtrMemI: ResultTy = CheckPointerToMemberOperands(LHS, RHS, VK, OpLoc, Opc == BO_PtrMemI); break; case BO_Mul: case BO_Div: ResultTy = CheckMultiplyDivideOperands(LHS, RHS, OpLoc, false, Opc == BO_Div); break; case BO_Rem: ResultTy = CheckRemainderOperands(LHS, RHS, OpLoc); break; case BO_Add: ResultTy = CheckAdditionOperands(LHS, RHS, OpLoc, Opc); break; case BO_Sub: ResultTy = CheckSubtractionOperands(LHS, RHS, OpLoc); break; case BO_Shl: case BO_Shr: ResultTy = CheckShiftOperands(LHS, RHS, OpLoc, Opc); break; case BO_LE: case BO_LT: case BO_GE: case BO_GT: ResultTy = CheckCompareOperands(LHS, RHS, OpLoc, Opc, true); break; case BO_EQ: case BO_NE: ResultTy = CheckCompareOperands(LHS, RHS, OpLoc, Opc, false); break; case BO_And: checkObjCPointerIntrospection(*this, LHS, RHS, OpLoc); case BO_Xor: case BO_Or: ResultTy = CheckBitwiseOperands(LHS, RHS, OpLoc, Opc); break; case BO_LAnd: case BO_LOr: ResultTy = CheckLogicalOperands(LHS, RHS, OpLoc, Opc); break; case BO_MulAssign: case BO_DivAssign: CompResultTy = CheckMultiplyDivideOperands(LHS, RHS, OpLoc, true, Opc == BO_DivAssign); CompLHSTy = CompResultTy; if (!CompResultTy.isNull() && !LHS.isInvalid() && !RHS.isInvalid()) ResultTy = CheckAssignmentOperands(LHS.get(), RHS, OpLoc, CompResultTy); break; case BO_RemAssign: CompResultTy = CheckRemainderOperands(LHS, RHS, OpLoc, true); CompLHSTy = CompResultTy; if (!CompResultTy.isNull() && !LHS.isInvalid() && !RHS.isInvalid()) ResultTy = CheckAssignmentOperands(LHS.get(), RHS, OpLoc, CompResultTy); break; case BO_AddAssign: CompResultTy = CheckAdditionOperands(LHS, RHS, OpLoc, Opc, &CompLHSTy); if (!CompResultTy.isNull() && !LHS.isInvalid() && !RHS.isInvalid()) ResultTy = CheckAssignmentOperands(LHS.get(), RHS, OpLoc, CompResultTy); break; case BO_SubAssign: CompResultTy = CheckSubtractionOperands(LHS, RHS, OpLoc, &CompLHSTy); if (!CompResultTy.isNull() && !LHS.isInvalid() && !RHS.isInvalid()) ResultTy = CheckAssignmentOperands(LHS.get(), RHS, OpLoc, CompResultTy); break; case BO_ShlAssign: case BO_ShrAssign: CompResultTy = CheckShiftOperands(LHS, RHS, OpLoc, Opc, true); CompLHSTy = CompResultTy; if (!CompResultTy.isNull() && !LHS.isInvalid() && !RHS.isInvalid()) ResultTy = CheckAssignmentOperands(LHS.get(), RHS, OpLoc, CompResultTy); break; case BO_AndAssign: case BO_OrAssign: // fallthrough DiagnoseSelfAssignment(*this, LHS.get(), RHS.get(), OpLoc); case BO_XorAssign: CompResultTy = CheckBitwiseOperands(LHS, RHS, OpLoc, Opc); CompLHSTy = CompResultTy; if (!CompResultTy.isNull() && !LHS.isInvalid() && !RHS.isInvalid()) ResultTy = CheckAssignmentOperands(LHS.get(), RHS, OpLoc, CompResultTy); break; case BO_Comma: ResultTy = CheckCommaOperands(*this, LHS, RHS, OpLoc); if (getLangOpts().CPlusPlus && !RHS.isInvalid()) { VK = RHS.get()->getValueKind(); OK = 
RHS.get()->getObjectKind(); } break; } if (ResultTy.isNull() || LHS.isInvalid() || RHS.isInvalid()) return ExprError(); // Check for array bounds violations for both sides of the BinaryOperator CheckArrayAccess(LHS.get()); CheckArrayAccess(RHS.get()); if (const ObjCIsaExpr *OISA = dyn_cast(LHS.get()->IgnoreParenCasts())) { NamedDecl *ObjectSetClass = LookupSingleName(TUScope, &Context.Idents.get("object_setClass"), SourceLocation(), LookupOrdinaryName); if (ObjectSetClass && isa(LHS.get())) { SourceLocation RHSLocEnd = getLocForEndOfToken(RHS.get()->getLocEnd()); Diag(LHS.get()->getExprLoc(), diag::warn_objc_isa_assign) << FixItHint::CreateInsertion(LHS.get()->getLocStart(), "object_setClass(") << FixItHint::CreateReplacement(SourceRange(OISA->getOpLoc(), OpLoc), ",") << FixItHint::CreateInsertion(RHSLocEnd, ")"); } else Diag(LHS.get()->getExprLoc(), diag::warn_objc_isa_assign); } else if (const ObjCIvarRefExpr *OIRE = dyn_cast(LHS.get()->IgnoreParenCasts())) DiagnoseDirectIsaAccess(*this, OIRE, OpLoc, RHS.get()); if (CompResultTy.isNull()) return new (Context) BinaryOperator(LHS.get(), RHS.get(), Opc, ResultTy, VK, OK, OpLoc, FPFeatures.fp_contract); if (getLangOpts().CPlusPlus && LHS.get()->getObjectKind() != OK_ObjCProperty) { VK = VK_LValue; OK = LHS.get()->getObjectKind(); } return new (Context) CompoundAssignOperator( LHS.get(), RHS.get(), Opc, ResultTy, VK, OK, CompLHSTy, CompResultTy, OpLoc, FPFeatures.fp_contract); } /// DiagnoseBitwisePrecedence - Emit a warning when bitwise and comparison /// operators are mixed in a way that suggests that the programmer forgot that /// comparison operators have higher precedence. The most typical example of /// such code is "flags & 0x0020 != 0", which is equivalent to "flags & 1". static void DiagnoseBitwisePrecedence(Sema &Self, BinaryOperatorKind Opc, SourceLocation OpLoc, Expr *LHSExpr, Expr *RHSExpr) { BinaryOperator *LHSBO = dyn_cast(LHSExpr); BinaryOperator *RHSBO = dyn_cast(RHSExpr); // Check that one of the sides is a comparison operator and the other isn't. bool isLeftComp = LHSBO && LHSBO->isComparisonOp(); bool isRightComp = RHSBO && RHSBO->isComparisonOp(); if (isLeftComp == isRightComp) return; // Bitwise operations are sometimes used as eager logical ops. // Don't diagnose this. bool isLeftBitwise = LHSBO && LHSBO->isBitwiseOp(); bool isRightBitwise = RHSBO && RHSBO->isBitwiseOp(); if (isLeftBitwise || isRightBitwise) return; SourceRange DiagRange = isLeftComp ? SourceRange(LHSExpr->getLocStart(), OpLoc) : SourceRange(OpLoc, RHSExpr->getLocEnd()); StringRef OpStr = isLeftComp ? LHSBO->getOpcodeStr() : RHSBO->getOpcodeStr(); SourceRange ParensRange = isLeftComp ? SourceRange(LHSBO->getRHS()->getLocStart(), RHSExpr->getLocEnd()) : SourceRange(LHSExpr->getLocStart(), RHSBO->getLHS()->getLocEnd()); Self.Diag(OpLoc, diag::warn_precedence_bitwise_rel) << DiagRange << BinaryOperator::getOpcodeStr(Opc) << OpStr; SuggestParentheses(Self, OpLoc, Self.PDiag(diag::note_precedence_silence) << OpStr, (isLeftComp ? LHSExpr : RHSExpr)->getSourceRange()); SuggestParentheses(Self, OpLoc, Self.PDiag(diag::note_precedence_bitwise_first) << BinaryOperator::getOpcodeStr(Opc), ParensRange); } /// \brief It accepts a '&&' expr that is inside a '||' one. /// Emit a diagnostic together with a fixit hint that wraps the '&&' expression /// in parentheses. 
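/// For illustration: given 'a || b && c', the note suggests rewriting it as
/// 'a || (b && c)'.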
static void EmitDiagnosticForLogicalAndInLogicalOr(Sema &Self, SourceLocation OpLoc, BinaryOperator *Bop) { assert(Bop->getOpcode() == BO_LAnd); Self.Diag(Bop->getOperatorLoc(), diag::warn_logical_and_in_logical_or) << Bop->getSourceRange() << OpLoc; SuggestParentheses(Self, Bop->getOperatorLoc(), Self.PDiag(diag::note_precedence_silence) << Bop->getOpcodeStr(), Bop->getSourceRange()); } /// \brief Returns true if the given expression can be evaluated as a constant /// 'true'. static bool EvaluatesAsTrue(Sema &S, Expr *E) { bool Res; return !E->isValueDependent() && E->EvaluateAsBooleanCondition(Res, S.getASTContext()) && Res; } /// \brief Returns true if the given expression can be evaluated as a constant /// 'false'. static bool EvaluatesAsFalse(Sema &S, Expr *E) { bool Res; return !E->isValueDependent() && E->EvaluateAsBooleanCondition(Res, S.getASTContext()) && !Res; } /// \brief Look for '&&' in the left hand of a '||' expr. static void DiagnoseLogicalAndInLogicalOrLHS(Sema &S, SourceLocation OpLoc, Expr *LHSExpr, Expr *RHSExpr) { if (BinaryOperator *Bop = dyn_cast(LHSExpr)) { if (Bop->getOpcode() == BO_LAnd) { // If it's "a && b || 0" don't warn since the precedence doesn't matter. if (EvaluatesAsFalse(S, RHSExpr)) return; // If it's "1 && a || b" don't warn since the precedence doesn't matter. if (!EvaluatesAsTrue(S, Bop->getLHS())) return EmitDiagnosticForLogicalAndInLogicalOr(S, OpLoc, Bop); } else if (Bop->getOpcode() == BO_LOr) { if (BinaryOperator *RBop = dyn_cast(Bop->getRHS())) { // If it's "a || b && 1 || c" we didn't warn earlier for // "a || b && 1", but warn now. if (RBop->getOpcode() == BO_LAnd && EvaluatesAsTrue(S, RBop->getRHS())) return EmitDiagnosticForLogicalAndInLogicalOr(S, OpLoc, RBop); } } } } /// \brief Look for '&&' in the right hand of a '||' expr. static void DiagnoseLogicalAndInLogicalOrRHS(Sema &S, SourceLocation OpLoc, Expr *LHSExpr, Expr *RHSExpr) { if (BinaryOperator *Bop = dyn_cast(RHSExpr)) { if (Bop->getOpcode() == BO_LAnd) { // If it's "0 || a && b" don't warn since the precedence doesn't matter. if (EvaluatesAsFalse(S, LHSExpr)) return; // If it's "a || b && 1" don't warn since the precedence doesn't matter. if (!EvaluatesAsTrue(S, Bop->getRHS())) return EmitDiagnosticForLogicalAndInLogicalOr(S, OpLoc, Bop); } } } /// \brief Look for bitwise op in the left or right hand of a bitwise op with /// lower precedence and emit a diagnostic together with a fixit hint that wraps /// the '&' expression in parentheses. 
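/// For illustration: 'a | b & c' warns and suggests 'a | (b & c)', since '&'
/// binds tighter than '|'.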
static void DiagnoseBitwiseOpInBitwiseOp(Sema &S, BinaryOperatorKind Opc, SourceLocation OpLoc, Expr *SubExpr) { if (BinaryOperator *Bop = dyn_cast(SubExpr)) { if (Bop->isBitwiseOp() && Bop->getOpcode() < Opc) { S.Diag(Bop->getOperatorLoc(), diag::warn_bitwise_op_in_bitwise_op) << Bop->getOpcodeStr() << BinaryOperator::getOpcodeStr(Opc) << Bop->getSourceRange() << OpLoc; SuggestParentheses(S, Bop->getOperatorLoc(), S.PDiag(diag::note_precedence_silence) << Bop->getOpcodeStr(), Bop->getSourceRange()); } } } static void DiagnoseAdditionInShift(Sema &S, SourceLocation OpLoc, Expr *SubExpr, StringRef Shift) { if (BinaryOperator *Bop = dyn_cast(SubExpr)) { if (Bop->getOpcode() == BO_Add || Bop->getOpcode() == BO_Sub) { StringRef Op = Bop->getOpcodeStr(); S.Diag(Bop->getOperatorLoc(), diag::warn_addition_in_bitshift) << Bop->getSourceRange() << OpLoc << Shift << Op; SuggestParentheses(S, Bop->getOperatorLoc(), S.PDiag(diag::note_precedence_silence) << Op, Bop->getSourceRange()); } } } static void DiagnoseShiftCompare(Sema &S, SourceLocation OpLoc, Expr *LHSExpr, Expr *RHSExpr) { CXXOperatorCallExpr *OCE = dyn_cast(LHSExpr); if (!OCE) return; FunctionDecl *FD = OCE->getDirectCallee(); if (!FD || !FD->isOverloadedOperator()) return; OverloadedOperatorKind Kind = FD->getOverloadedOperator(); if (Kind != OO_LessLess && Kind != OO_GreaterGreater) return; S.Diag(OpLoc, diag::warn_overloaded_shift_in_comparison) << LHSExpr->getSourceRange() << RHSExpr->getSourceRange() << (Kind == OO_LessLess); SuggestParentheses(S, OCE->getOperatorLoc(), S.PDiag(diag::note_precedence_silence) << (Kind == OO_LessLess ? "<<" : ">>"), OCE->getSourceRange()); SuggestParentheses(S, OpLoc, S.PDiag(diag::note_evaluate_comparison_first), SourceRange(OCE->getArg(1)->getLocStart(), RHSExpr->getLocEnd())); } /// DiagnoseBinOpPrecedence - Emit warnings for expressions with tricky /// precedence. static void DiagnoseBinOpPrecedence(Sema &Self, BinaryOperatorKind Opc, SourceLocation OpLoc, Expr *LHSExpr, Expr *RHSExpr){ // Diagnose "arg1 'bitwise' arg2 'eq' arg3". if (BinaryOperator::isBitwiseOp(Opc)) DiagnoseBitwisePrecedence(Self, Opc, OpLoc, LHSExpr, RHSExpr); // Diagnose "arg1 & arg2 | arg3" if ((Opc == BO_Or || Opc == BO_Xor) && !OpLoc.isMacroID()/* Don't warn in macros. */) { DiagnoseBitwiseOpInBitwiseOp(Self, Opc, OpLoc, LHSExpr); DiagnoseBitwiseOpInBitwiseOp(Self, Opc, OpLoc, RHSExpr); } // Warn about arg1 || arg2 && arg3, as GCC 4.3+ does. // We don't warn for 'assert(a || b && "bad")' since this is safe. if (Opc == BO_LOr && !OpLoc.isMacroID()/* Don't warn in macros. */) { DiagnoseLogicalAndInLogicalOrLHS(Self, OpLoc, LHSExpr, RHSExpr); DiagnoseLogicalAndInLogicalOrRHS(Self, OpLoc, LHSExpr, RHSExpr); } if ((Opc == BO_Shl && LHSExpr->getType()->isIntegralType(Self.getASTContext())) || Opc == BO_Shr) { StringRef Shift = BinaryOperator::getOpcodeStr(Opc); DiagnoseAdditionInShift(Self, OpLoc, LHSExpr, Shift); DiagnoseAdditionInShift(Self, OpLoc, RHSExpr, Shift); } // Warn on overloaded shift operators and comparisons, such as: // cout << 5 == 4; if (BinaryOperator::isComparisonOp(Opc)) DiagnoseShiftCompare(Self, OpLoc, LHSExpr, RHSExpr); } // Binary Operators. 'Tok' is the token for the operator. 
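// (Illustrative: 'a + b' reaches ActOnBinOp with Kind == tok::plus, which
// ConvertTokenKindToBinaryOpcode maps to BO_Add.)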
ExprResult Sema::ActOnBinOp(Scope *S, SourceLocation TokLoc,
                            tok::TokenKind Kind,
                            Expr *LHSExpr, Expr *RHSExpr) {
  BinaryOperatorKind Opc = ConvertTokenKindToBinaryOpcode(Kind);
  assert(LHSExpr && "ActOnBinOp(): missing left expression");
  assert(RHSExpr && "ActOnBinOp(): missing right expression");

  // Emit warnings for tricky precedence issues, e.g. "bitfield & 0x4 == 0"
  DiagnoseBinOpPrecedence(*this, Opc, TokLoc, LHSExpr, RHSExpr);

  return BuildBinOp(S, TokLoc, Opc, LHSExpr, RHSExpr);
}

/// Build an overloaded binary operator expression in the given scope.
static ExprResult BuildOverloadedBinOp(Sema &S, Scope *Sc, SourceLocation OpLoc,
                                       BinaryOperatorKind Opc,
                                       Expr *LHS, Expr *RHS) {
  // Find all of the overloaded operators visible from this
  // point. We perform both an operator-name lookup from the local
  // scope and an argument-dependent lookup based on the types of
  // the arguments.
  UnresolvedSet<16> Functions;
  OverloadedOperatorKind OverOp = BinaryOperator::getOverloadedOperator(Opc);
  if (Sc && OverOp != OO_None && OverOp != OO_Equal)
    S.LookupOverloadedOperatorName(OverOp, Sc, LHS->getType(),
                                   RHS->getType(), Functions);

  // Build the (potentially-overloaded, potentially-dependent)
  // binary operation.
  return S.CreateOverloadedBinOp(OpLoc, Opc, Functions, LHS, RHS);
}

ExprResult Sema::BuildBinOp(Scope *S, SourceLocation OpLoc,
                            BinaryOperatorKind Opc,
                            Expr *LHSExpr, Expr *RHSExpr) {
  // We want to end up calling one of checkPseudoObjectAssignment
  // (if the LHS is a pseudo-object), BuildOverloadedBinOp (if
  // both expressions are overloadable or either is type-dependent),
  // or CreateBuiltinBinOp (in any other case). We also want to get
  // any placeholder types out of the way.

  // Handle pseudo-objects in the LHS.
  if (const BuiltinType *pty = LHSExpr->getType()->getAsPlaceholderType()) {
    // Assignments with a pseudo-object l-value need special analysis.
    if (pty->getKind() == BuiltinType::PseudoObject &&
        BinaryOperator::isAssignmentOp(Opc))
      return checkPseudoObjectAssignment(S, OpLoc, Opc, LHSExpr, RHSExpr);

    // Don't resolve overloads if the other type is overloadable.
-    if (pty->getKind() == BuiltinType::Overload) {
+    if (getLangOpts().CPlusPlus && pty->getKind() == BuiltinType::Overload) {
      // We can't actually test that if we still have a placeholder,
      // though. Fortunately, none of the exceptions we see in that
      // code below are valid when the LHS is an overload set. Note
      // that an overload set can be dependently-typed, but it never
      // instantiates to having an overloadable type.
      ExprResult resolvedRHS = CheckPlaceholderExpr(RHSExpr);
      if (resolvedRHS.isInvalid()) return ExprError();
      RHSExpr = resolvedRHS.get();

      if (RHSExpr->isTypeDependent() ||
          RHSExpr->getType()->isOverloadableType())
        return BuildOverloadedBinOp(*this, S, OpLoc, Opc, LHSExpr, RHSExpr);
    }

    ExprResult LHS = CheckPlaceholderExpr(LHSExpr);
    if (LHS.isInvalid()) return ExprError();
    LHSExpr = LHS.get();
  }

  // Handle pseudo-objects in the RHS.
  if (const BuiltinType *pty = RHSExpr->getType()->getAsPlaceholderType()) {
    // An overload in the RHS can potentially be resolved by the type
    // being assigned to.
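    // Illustrative example (not from the original source):
    //   void f(int); void f(double);
    //   void (*fp)(int);
    //   fp = &f;   // the type of 'fp' selects the 'void f(int)' overload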
    if (Opc == BO_Assign && pty->getKind() == BuiltinType::Overload) {
-      if (LHSExpr->isTypeDependent() || RHSExpr->isTypeDependent())
+      if (getLangOpts().CPlusPlus &&
+          (LHSExpr->isTypeDependent() || RHSExpr->isTypeDependent() ||
+           LHSExpr->getType()->isOverloadableType()))
        return BuildOverloadedBinOp(*this, S, OpLoc, Opc, LHSExpr, RHSExpr);

-      if (LHSExpr->getType()->isOverloadableType())
-        return BuildOverloadedBinOp(*this, S, OpLoc, Opc, LHSExpr, RHSExpr);
-
      return CreateBuiltinBinOp(OpLoc, Opc, LHSExpr, RHSExpr);
    }

    // Don't resolve overloads if the other type is overloadable.
-    if (pty->getKind() == BuiltinType::Overload &&
+    if (getLangOpts().CPlusPlus && pty->getKind() == BuiltinType::Overload &&
        LHSExpr->getType()->isOverloadableType())
      return BuildOverloadedBinOp(*this, S, OpLoc, Opc, LHSExpr, RHSExpr);

    ExprResult resolvedRHS = CheckPlaceholderExpr(RHSExpr);
    if (!resolvedRHS.isUsable()) return ExprError();
    RHSExpr = resolvedRHS.get();
  }

  if (getLangOpts().CPlusPlus) {
    // If either expression is type-dependent, always build an
    // overloaded op.
    if (LHSExpr->isTypeDependent() || RHSExpr->isTypeDependent())
      return BuildOverloadedBinOp(*this, S, OpLoc, Opc, LHSExpr, RHSExpr);

    // Otherwise, build an overloaded op if either expression has an
    // overloadable type.
    if (LHSExpr->getType()->isOverloadableType() ||
        RHSExpr->getType()->isOverloadableType())
      return BuildOverloadedBinOp(*this, S, OpLoc, Opc, LHSExpr, RHSExpr);
  }

  // Build a built-in binary operation.
  return CreateBuiltinBinOp(OpLoc, Opc, LHSExpr, RHSExpr);
}

ExprResult Sema::CreateBuiltinUnaryOp(SourceLocation OpLoc,
                                      UnaryOperatorKind Opc,
                                      Expr *InputExpr) {
  ExprResult Input = InputExpr;
  ExprValueKind VK = VK_RValue;
  ExprObjectKind OK = OK_Ordinary;
  QualType resultType;
  if (getLangOpts().OpenCL) {
    QualType Ty = InputExpr->getType();
    // The only legal unary operation for atomics is '&'.
    if ((Opc != UO_AddrOf && Ty->isAtomicType()) ||
        // OpenCL special types - image, sampler, pipe, and blocks are to be
        // used only with builtin functions and therefore should be
        // disallowed here.
        (Ty->isImageType() || Ty->isSamplerT() || Ty->isPipeType()
        || Ty->isBlockPointerType())) {
      return ExprError(Diag(OpLoc, diag::err_typecheck_unary_expr)
                       << InputExpr->getType()
                       << Input.get()->getSourceRange());
    }
  }
  switch (Opc) {
  case UO_PreInc:
  case UO_PreDec:
  case UO_PostInc:
  case UO_PostDec:
    resultType = CheckIncrementDecrementOperand(*this, Input.get(), VK, OK,
                                                OpLoc,
                                                Opc == UO_PreInc ||
                                                Opc == UO_PostInc,
                                                Opc == UO_PreInc ||
                                                Opc == UO_PreDec);
    break;
  case UO_AddrOf:
    resultType = CheckAddressOfOperand(Input, OpLoc);
    RecordModifiableNonNullParam(*this, InputExpr);
    break;
  case UO_Deref: {
    Input = DefaultFunctionArrayLvalueConversion(Input.get());
    if (Input.isInvalid()) return ExprError();
    resultType = CheckIndirectionOperand(*this, Input.get(), VK, OpLoc);
    break;
  }
  case UO_Plus:
  case UO_Minus:
    Input = UsualUnaryConversions(Input.get());
    if (Input.isInvalid()) return ExprError();
    resultType = Input.get()->getType();
    if (resultType->isDependentType())
      break;
    if (resultType->isArithmeticType()) // C99 6.5.3.3p1
      break;
    else if (resultType->isVectorType() &&
             // The z vector extensions don't allow + or - with bool vectors.
(!Context.getLangOpts().ZVector || resultType->getAs()->getVectorKind() != VectorType::AltiVecBool)) break; else if (getLangOpts().CPlusPlus && // C++ [expr.unary.op]p6 Opc == UO_Plus && resultType->isPointerType()) break; return ExprError(Diag(OpLoc, diag::err_typecheck_unary_expr) << resultType << Input.get()->getSourceRange()); case UO_Not: // bitwise complement Input = UsualUnaryConversions(Input.get()); if (Input.isInvalid()) return ExprError(); resultType = Input.get()->getType(); if (resultType->isDependentType()) break; // C99 6.5.3.3p1. We allow complex int and float as a GCC extension. if (resultType->isComplexType() || resultType->isComplexIntegerType()) // C99 does not support '~' for complex conjugation. Diag(OpLoc, diag::ext_integer_complement_complex) << resultType << Input.get()->getSourceRange(); else if (resultType->hasIntegerRepresentation()) break; else if (resultType->isExtVectorType()) { if (Context.getLangOpts().OpenCL) { // OpenCL v1.1 s6.3.f: The bitwise operator not (~) does not operate // on vector float types. QualType T = resultType->getAs()->getElementType(); if (!T->isIntegerType()) return ExprError(Diag(OpLoc, diag::err_typecheck_unary_expr) << resultType << Input.get()->getSourceRange()); } break; } else { return ExprError(Diag(OpLoc, diag::err_typecheck_unary_expr) << resultType << Input.get()->getSourceRange()); } break; case UO_LNot: // logical negation // Unlike +/-/~, integer promotions aren't done here (C99 6.5.3.3p5). Input = DefaultFunctionArrayLvalueConversion(Input.get()); if (Input.isInvalid()) return ExprError(); resultType = Input.get()->getType(); // Though we still have to promote half FP to float... if (resultType->isHalfType() && !Context.getLangOpts().NativeHalfType) { Input = ImpCastExprToType(Input.get(), Context.FloatTy, CK_FloatingCast).get(); resultType = Context.FloatTy; } if (resultType->isDependentType()) break; if (resultType->isScalarType() && !isScopedEnumerationType(resultType)) { // C99 6.5.3.3p1: ok, fallthrough; if (Context.getLangOpts().CPlusPlus) { // C++03 [expr.unary.op]p8, C++0x [expr.unary.op]p9: // operand contextually converted to bool. Input = ImpCastExprToType(Input.get(), Context.BoolTy, ScalarTypeToBooleanCastKind(resultType)); } else if (Context.getLangOpts().OpenCL && Context.getLangOpts().OpenCLVersion < 120) { // OpenCL v1.1 6.3.h: The logical operator not (!) does not // operate on scalar float types. if (!resultType->isIntegerType()) return ExprError(Diag(OpLoc, diag::err_typecheck_unary_expr) << resultType << Input.get()->getSourceRange()); } } else if (resultType->isExtVectorType()) { if (Context.getLangOpts().OpenCL && Context.getLangOpts().OpenCLVersion < 120) { // OpenCL v1.1 6.3.h: The logical operator not (!) does not // operate on vector float types. QualType T = resultType->getAs()->getElementType(); if (!T->isIntegerType()) return ExprError(Diag(OpLoc, diag::err_typecheck_unary_expr) << resultType << Input.get()->getSourceRange()); } // Vector logical not returns the signed variant of the operand type. resultType = GetSignedVectorType(resultType); break; } else { return ExprError(Diag(OpLoc, diag::err_typecheck_unary_expr) << resultType << Input.get()->getSourceRange()); } // LNot always has type int. C99 6.5.3.3p5. // In C++, it's bool. C++ 5.3.1p8 resultType = Context.getLogicalOperationType(); break; case UO_Real: case UO_Imag: resultType = CheckRealImagOperand(*this, Input, OpLoc, Opc == UO_Real); // _Real maps ordinary l-values into ordinary l-values. 
_Imag maps ordinary // complex l-values to ordinary l-values and all other values to r-values. if (Input.isInvalid()) return ExprError(); if (Opc == UO_Real || Input.get()->getType()->isAnyComplexType()) { if (Input.get()->getValueKind() != VK_RValue && Input.get()->getObjectKind() == OK_Ordinary) VK = Input.get()->getValueKind(); } else if (!getLangOpts().CPlusPlus) { // In C, a volatile scalar is read by __imag. In C++, it is not. Input = DefaultLvalueConversion(Input.get()); } break; case UO_Extension: case UO_Coawait: resultType = Input.get()->getType(); VK = Input.get()->getValueKind(); OK = Input.get()->getObjectKind(); break; } if (resultType.isNull() || Input.isInvalid()) return ExprError(); // Check for array bounds violations in the operand of the UnaryOperator, // except for the '*' and '&' operators that have to be handled specially // by CheckArrayAccess (as there are special cases like &array[arraysize] // that are explicitly defined as valid by the standard). if (Opc != UO_AddrOf && Opc != UO_Deref) CheckArrayAccess(Input.get()); return new (Context) UnaryOperator(Input.get(), Opc, resultType, VK, OK, OpLoc); } /// \brief Determine whether the given expression is a qualified member /// access expression, of a form that could be turned into a pointer to member /// with the address-of operator. static bool isQualifiedMemberAccess(Expr *E) { if (DeclRefExpr *DRE = dyn_cast(E)) { if (!DRE->getQualifier()) return false; ValueDecl *VD = DRE->getDecl(); if (!VD->isCXXClassMember()) return false; if (isa(VD) || isa(VD)) return true; if (CXXMethodDecl *Method = dyn_cast(VD)) return Method->isInstance(); return false; } if (UnresolvedLookupExpr *ULE = dyn_cast(E)) { if (!ULE->getQualifier()) return false; for (NamedDecl *D : ULE->decls()) { if (CXXMethodDecl *Method = dyn_cast(D)) { if (Method->isInstance()) return true; } else { // Overload set does not contain methods. break; } } return false; } return false; } ExprResult Sema::BuildUnaryOp(Scope *S, SourceLocation OpLoc, UnaryOperatorKind Opc, Expr *Input) { // First things first: handle placeholders so that the // overloaded-operator check considers the right type. if (const BuiltinType *pty = Input->getType()->getAsPlaceholderType()) { // Increment and decrement of pseudo-object references. if (pty->getKind() == BuiltinType::PseudoObject && UnaryOperator::isIncrementDecrementOp(Opc)) return checkPseudoObjectIncDec(S, OpLoc, Opc, Input); // extension is always a builtin operator. if (Opc == UO_Extension) return CreateBuiltinUnaryOp(OpLoc, Opc, Input); // & gets special logic for several kinds of placeholder. // The builtin code knows what to do. if (Opc == UO_AddrOf && (pty->getKind() == BuiltinType::Overload || pty->getKind() == BuiltinType::UnknownAny || pty->getKind() == BuiltinType::BoundMember)) return CreateBuiltinUnaryOp(OpLoc, Opc, Input); // Anything else needs to be handled now. ExprResult Result = CheckPlaceholderExpr(Input); if (Result.isInvalid()) return ExprError(); Input = Result.get(); } if (getLangOpts().CPlusPlus && Input->getType()->isOverloadableType() && UnaryOperator::getOverloadedOperator(Opc) != OO_None && !(Opc == UO_AddrOf && isQualifiedMemberAccess(Input))) { // Find all of the overloaded operators visible from this // point. We perform both an operator-name lookup from the local // scope and an argument-dependent lookup based on the types of // the arguments. 
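    // (Illustrative: for '-x' where 'x' has class type, this collects the
    // 'operator-' candidates visible in scope; argument-dependent candidates
    // are added later, during overload resolution in CreateOverloadedUnaryOp.)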
UnresolvedSet<16> Functions; OverloadedOperatorKind OverOp = UnaryOperator::getOverloadedOperator(Opc); if (S && OverOp != OO_None) LookupOverloadedOperatorName(OverOp, S, Input->getType(), QualType(), Functions); return CreateOverloadedUnaryOp(OpLoc, Opc, Functions, Input); } return CreateBuiltinUnaryOp(OpLoc, Opc, Input); } // Unary Operators. 'Tok' is the token for the operator. ExprResult Sema::ActOnUnaryOp(Scope *S, SourceLocation OpLoc, tok::TokenKind Op, Expr *Input) { return BuildUnaryOp(S, OpLoc, ConvertTokenKindToUnaryOpcode(Op), Input); } /// ActOnAddrLabel - Parse the GNU address of label extension: "&&foo". ExprResult Sema::ActOnAddrLabel(SourceLocation OpLoc, SourceLocation LabLoc, LabelDecl *TheDecl) { TheDecl->markUsed(Context); // Create the AST node. The address of a label always has type 'void*'. return new (Context) AddrLabelExpr(OpLoc, LabLoc, TheDecl, Context.getPointerType(Context.VoidTy)); } /// Given the last statement in a statement-expression, check whether /// the result is a producing expression (like a call to an /// ns_returns_retained function) and, if so, rebuild it to hoist the /// release out of the full-expression. Otherwise, return null. /// Cannot fail. static Expr *maybeRebuildARCConsumingStmt(Stmt *Statement) { // Should always be wrapped with one of these. ExprWithCleanups *cleanups = dyn_cast(Statement); if (!cleanups) return nullptr; ImplicitCastExpr *cast = dyn_cast(cleanups->getSubExpr()); if (!cast || cast->getCastKind() != CK_ARCConsumeObject) return nullptr; // Splice out the cast. This shouldn't modify any interesting // features of the statement. Expr *producer = cast->getSubExpr(); assert(producer->getType() == cast->getType()); assert(producer->getValueKind() == cast->getValueKind()); cleanups->setSubExpr(producer); return cleanups; } void Sema::ActOnStartStmtExpr() { PushExpressionEvaluationContext(ExprEvalContexts.back().Context); } void Sema::ActOnStmtExprError() { // Note that function is also called by TreeTransform when leaving a // StmtExpr scope without rebuilding anything. DiscardCleanupsInEvaluationContext(); PopExpressionEvaluationContext(); } ExprResult Sema::ActOnStmtExpr(SourceLocation LPLoc, Stmt *SubStmt, SourceLocation RPLoc) { // "({..})" assert(SubStmt && isa(SubStmt) && "Invalid action invocation!"); CompoundStmt *Compound = cast(SubStmt); if (hasAnyUnrecoverableErrorsInThisFunction()) DiscardCleanupsInEvaluationContext(); assert(!Cleanup.exprNeedsCleanups() && "cleanups within StmtExpr not correctly bound!"); PopExpressionEvaluationContext(); // FIXME: there are a variety of strange constraints to enforce here, for // example, it is not possible to goto into a stmt expression apparently. // More semantic analysis is needed. // If there are sub-stmts in the compound stmt, take the type of the last one // as the type of the stmtexpr. QualType Ty = Context.VoidTy; bool StmtExprMayBindToTemp = false; if (!Compound->body_empty()) { Stmt *LastStmt = Compound->body_back(); LabelStmt *LastLabelStmt = nullptr; // If LastStmt is a label, skip down through into the body. while (LabelStmt *Label = dyn_cast(LastStmt)) { LastLabelStmt = Label; LastStmt = Label->getSubStmt(); } if (Expr *LastE = dyn_cast(LastStmt)) { // Do function/array conversion on the last expression, but not // lvalue-to-rvalue. However, initialize an unqualified type. 
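      // Illustrative example (not from the original source):
      //   int n = ({ const int k = 3; k; });   // the result type is 'int',
      // and an array result would decay to a pointer here.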
ExprResult LastExpr = DefaultFunctionArrayConversion(LastE); if (LastExpr.isInvalid()) return ExprError(); Ty = LastExpr.get()->getType().getUnqualifiedType(); if (!Ty->isDependentType() && !LastExpr.get()->isTypeDependent()) { // In ARC, if the final expression ends in a consume, splice // the consume out and bind it later. In the alternate case // (when dealing with a retainable type), the result // initialization will create a produce. In both cases the // result will be +1, and we'll need to balance that out with // a bind. if (Expr *rebuiltLastStmt = maybeRebuildARCConsumingStmt(LastExpr.get())) { LastExpr = rebuiltLastStmt; } else { LastExpr = PerformCopyInitialization( InitializedEntity::InitializeResult(LPLoc, Ty, false), SourceLocation(), LastExpr); } if (LastExpr.isInvalid()) return ExprError(); if (LastExpr.get() != nullptr) { if (!LastLabelStmt) Compound->setLastStmt(LastExpr.get()); else LastLabelStmt->setSubStmt(LastExpr.get()); StmtExprMayBindToTemp = true; } } } } // FIXME: Check that expression type is complete/non-abstract; statement // expressions are not lvalues. Expr *ResStmtExpr = new (Context) StmtExpr(Compound, Ty, LPLoc, RPLoc); if (StmtExprMayBindToTemp) return MaybeBindToTemporary(ResStmtExpr); return ResStmtExpr; } ExprResult Sema::BuildBuiltinOffsetOf(SourceLocation BuiltinLoc, TypeSourceInfo *TInfo, ArrayRef Components, SourceLocation RParenLoc) { QualType ArgTy = TInfo->getType(); bool Dependent = ArgTy->isDependentType(); SourceRange TypeRange = TInfo->getTypeLoc().getLocalSourceRange(); // We must have at least one component that refers to the type, and the first // one is known to be a field designator. Verify that the ArgTy represents // a struct/union/class. if (!Dependent && !ArgTy->isRecordType()) return ExprError(Diag(BuiltinLoc, diag::err_offsetof_record_type) << ArgTy << TypeRange); // Type must be complete per C99 7.17p3 because a declaring a variable // with an incomplete type would be ill-formed. if (!Dependent && RequireCompleteType(BuiltinLoc, ArgTy, diag::err_offsetof_incomplete_type, TypeRange)) return ExprError(); // offsetof with non-identifier designators (e.g. "offsetof(x, a.b[c])") are a // GCC extension, diagnose them. // FIXME: This diagnostic isn't actually visible because the location is in // a system header! if (Components.size() != 1) Diag(BuiltinLoc, diag::ext_offsetof_extended_field_designator) << SourceRange(Components[1].LocStart, Components.back().LocEnd); bool DidWarnAboutNonPOD = false; QualType CurrentType = ArgTy; SmallVector Comps; SmallVector Exprs; for (const OffsetOfComponent &OC : Components) { if (OC.isBrackets) { // Offset of an array sub-field. TODO: Should we allow vector elements? if (!CurrentType->isDependentType()) { const ArrayType *AT = Context.getAsArrayType(CurrentType); if(!AT) return ExprError(Diag(OC.LocEnd, diag::err_offsetof_array_type) << CurrentType); CurrentType = AT->getElementType(); } else CurrentType = Context.DependentTy; ExprResult IdxRval = DefaultLvalueConversion(static_cast(OC.U.E)); if (IdxRval.isInvalid()) return ExprError(); Expr *Idx = IdxRval.get(); // The expression must be an integral expression. // FIXME: An integral constant expression? if (!Idx->isTypeDependent() && !Idx->isValueDependent() && !Idx->getType()->isIntegerType()) return ExprError(Diag(Idx->getLocStart(), diag::err_typecheck_subscript_not_integer) << Idx->getSourceRange()); // Record this array index. 
Comps.push_back(OffsetOfNode(OC.LocStart, Exprs.size(), OC.LocEnd)); Exprs.push_back(Idx); continue; } // Offset of a field. if (CurrentType->isDependentType()) { // We have the offset of a field, but we can't look into the dependent // type. Just record the identifier of the field. Comps.push_back(OffsetOfNode(OC.LocStart, OC.U.IdentInfo, OC.LocEnd)); CurrentType = Context.DependentTy; continue; } // We need to have a complete type to look into. if (RequireCompleteType(OC.LocStart, CurrentType, diag::err_offsetof_incomplete_type)) return ExprError(); // Look for the designated field. const RecordType *RC = CurrentType->getAs(); if (!RC) return ExprError(Diag(OC.LocEnd, diag::err_offsetof_record_type) << CurrentType); RecordDecl *RD = RC->getDecl(); // C++ [lib.support.types]p5: // The macro offsetof accepts a restricted set of type arguments in this // International Standard. type shall be a POD structure or a POD union // (clause 9). // C++11 [support.types]p4: // If type is not a standard-layout class (Clause 9), the results are // undefined. if (CXXRecordDecl *CRD = dyn_cast(RD)) { bool IsSafe = LangOpts.CPlusPlus11? CRD->isStandardLayout() : CRD->isPOD(); unsigned DiagID = LangOpts.CPlusPlus11? diag::ext_offsetof_non_standardlayout_type : diag::ext_offsetof_non_pod_type; if (!IsSafe && !DidWarnAboutNonPOD && DiagRuntimeBehavior(BuiltinLoc, nullptr, PDiag(DiagID) << SourceRange(Components[0].LocStart, OC.LocEnd) << CurrentType)) DidWarnAboutNonPOD = true; } // Look for the field. LookupResult R(*this, OC.U.IdentInfo, OC.LocStart, LookupMemberName); LookupQualifiedName(R, RD); FieldDecl *MemberDecl = R.getAsSingle(); IndirectFieldDecl *IndirectMemberDecl = nullptr; if (!MemberDecl) { if ((IndirectMemberDecl = R.getAsSingle())) MemberDecl = IndirectMemberDecl->getAnonField(); } if (!MemberDecl) return ExprError(Diag(BuiltinLoc, diag::err_no_member) << OC.U.IdentInfo << RD << SourceRange(OC.LocStart, OC.LocEnd)); // C99 7.17p3: // (If the specified member is a bit-field, the behavior is undefined.) // // We diagnose this as an error. if (MemberDecl->isBitField()) { Diag(OC.LocEnd, diag::err_offsetof_bitfield) << MemberDecl->getDeclName() << SourceRange(BuiltinLoc, RParenLoc); Diag(MemberDecl->getLocation(), diag::note_bitfield_decl); return ExprError(); } RecordDecl *Parent = MemberDecl->getParent(); if (IndirectMemberDecl) Parent = cast(IndirectMemberDecl->getDeclContext()); // If the member was found in a base class, introduce OffsetOfNodes for // the base class indirections. 
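    // (Illustrative: for 'offsetof(Derived, b)' where 'b' is inherited from
    // 'Base', the recorded path lets codegen add the Derived-to-Base offset
    // before the member's own offset.)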
CXXBasePaths Paths; if (IsDerivedFrom(OC.LocStart, CurrentType, Context.getTypeDeclType(Parent), Paths)) { if (Paths.getDetectedVirtual()) { Diag(OC.LocEnd, diag::err_offsetof_field_of_virtual_base) << MemberDecl->getDeclName() << SourceRange(BuiltinLoc, RParenLoc); return ExprError(); } CXXBasePath &Path = Paths.front(); for (const CXXBasePathElement &B : Path) Comps.push_back(OffsetOfNode(B.Base)); } if (IndirectMemberDecl) { for (auto *FI : IndirectMemberDecl->chain()) { assert(isa(FI)); Comps.push_back(OffsetOfNode(OC.LocStart, cast(FI), OC.LocEnd)); } } else Comps.push_back(OffsetOfNode(OC.LocStart, MemberDecl, OC.LocEnd)); CurrentType = MemberDecl->getType().getNonReferenceType(); } return OffsetOfExpr::Create(Context, Context.getSizeType(), BuiltinLoc, TInfo, Comps, Exprs, RParenLoc); } ExprResult Sema::ActOnBuiltinOffsetOf(Scope *S, SourceLocation BuiltinLoc, SourceLocation TypeLoc, ParsedType ParsedArgTy, ArrayRef Components, SourceLocation RParenLoc) { TypeSourceInfo *ArgTInfo; QualType ArgTy = GetTypeFromParser(ParsedArgTy, &ArgTInfo); if (ArgTy.isNull()) return ExprError(); if (!ArgTInfo) ArgTInfo = Context.getTrivialTypeSourceInfo(ArgTy, TypeLoc); return BuildBuiltinOffsetOf(BuiltinLoc, ArgTInfo, Components, RParenLoc); } ExprResult Sema::ActOnChooseExpr(SourceLocation BuiltinLoc, Expr *CondExpr, Expr *LHSExpr, Expr *RHSExpr, SourceLocation RPLoc) { assert((CondExpr && LHSExpr && RHSExpr) && "Missing type argument(s)"); ExprValueKind VK = VK_RValue; ExprObjectKind OK = OK_Ordinary; QualType resType; bool ValueDependent = false; bool CondIsTrue = false; if (CondExpr->isTypeDependent() || CondExpr->isValueDependent()) { resType = Context.DependentTy; ValueDependent = true; } else { // The conditional expression is required to be a constant expression. llvm::APSInt condEval(32); ExprResult CondICE = VerifyIntegerConstantExpression(CondExpr, &condEval, diag::err_typecheck_choose_expr_requires_constant, false); if (CondICE.isInvalid()) return ExprError(); CondExpr = CondICE.get(); CondIsTrue = condEval.getZExtValue(); // If the condition is > zero, then the AST type is the same as the LSHExpr. Expr *ActiveExpr = CondIsTrue ? LHSExpr : RHSExpr; resType = ActiveExpr->getType(); ValueDependent = ActiveExpr->isValueDependent(); VK = ActiveExpr->getValueKind(); OK = ActiveExpr->getObjectKind(); } return new (Context) ChooseExpr(BuiltinLoc, CondExpr, LHSExpr, RHSExpr, resType, VK, OK, RPLoc, CondIsTrue, resType->isDependentType(), ValueDependent); } //===----------------------------------------------------------------------===// // Clang Extensions. //===----------------------------------------------------------------------===// /// ActOnBlockStart - This callback is invoked when a block literal is started. void Sema::ActOnBlockStart(SourceLocation CaretLoc, Scope *CurScope) { BlockDecl *Block = BlockDecl::Create(Context, CurContext, CaretLoc); if (LangOpts.CPlusPlus) { Decl *ManglingContextDecl; if (MangleNumberingContext *MCtx = getCurrentMangleNumberContext(Block->getDeclContext(), ManglingContextDecl)) { unsigned ManglingNumber = MCtx->getManglingNumber(Block); Block->setBlockMangling(ManglingNumber, ManglingContextDecl); } } PushBlockScope(CurScope, Block); CurContext->addDecl(Block); if (CurScope) PushDeclContext(CurScope, Block); else CurContext = Block; getCurBlock()->HasImplicitReturnType = true; // Enter a new evaluation context to insulate the block from any // cleanups from the enclosing full-expression. 
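  // (Illustrative, assumed behavior: temporaries created while building the
  // block's body are cleaned up within the block itself, not by the
  // full-expression that contains the block literal.)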
PushExpressionEvaluationContext(PotentiallyEvaluated); } void Sema::ActOnBlockArguments(SourceLocation CaretLoc, Declarator &ParamInfo, Scope *CurScope) { assert(ParamInfo.getIdentifier() == nullptr && "block-id should have no identifier!"); assert(ParamInfo.getContext() == Declarator::BlockLiteralContext); BlockScopeInfo *CurBlock = getCurBlock(); TypeSourceInfo *Sig = GetTypeForDeclarator(ParamInfo, CurScope); QualType T = Sig->getType(); // FIXME: We should allow unexpanded parameter packs here, but that would, // in turn, make the block expression contain unexpanded parameter packs. if (DiagnoseUnexpandedParameterPack(CaretLoc, Sig, UPPC_Block)) { // Drop the parameters. FunctionProtoType::ExtProtoInfo EPI; EPI.HasTrailingReturn = false; EPI.TypeQuals |= DeclSpec::TQ_const; T = Context.getFunctionType(Context.DependentTy, None, EPI); Sig = Context.getTrivialTypeSourceInfo(T); } // GetTypeForDeclarator always produces a function type for a block // literal signature. Furthermore, it is always a FunctionProtoType // unless the function was written with a typedef. assert(T->isFunctionType() && "GetTypeForDeclarator made a non-function block signature"); // Look for an explicit signature in that function type. FunctionProtoTypeLoc ExplicitSignature; TypeLoc tmp = Sig->getTypeLoc().IgnoreParens(); if ((ExplicitSignature = tmp.getAs())) { // Check whether that explicit signature was synthesized by // GetTypeForDeclarator. If so, don't save that as part of the // written signature. if (ExplicitSignature.getLocalRangeBegin() == ExplicitSignature.getLocalRangeEnd()) { // This would be much cheaper if we stored TypeLocs instead of // TypeSourceInfos. TypeLoc Result = ExplicitSignature.getReturnLoc(); unsigned Size = Result.getFullDataSize(); Sig = Context.CreateTypeSourceInfo(Result.getType(), Size); Sig->getTypeLoc().initializeFullCopy(Result, Size); ExplicitSignature = FunctionProtoTypeLoc(); } } CurBlock->TheDecl->setSignatureAsWritten(Sig); CurBlock->FunctionType = T; const FunctionType *Fn = T->getAs(); QualType RetTy = Fn->getReturnType(); bool isVariadic = (isa(Fn) && cast(Fn)->isVariadic()); CurBlock->TheDecl->setIsVariadic(isVariadic); // Context.DependentTy is used as a placeholder for a missing block // return type. TODO: what should we do with declarators like: // ^ * { ... } // If the answer is "apply template argument deduction".... if (RetTy != Context.DependentTy) { CurBlock->ReturnType = RetTy; CurBlock->TheDecl->setBlockMissingReturnType(false); CurBlock->HasImplicitReturnType = false; } // Push block parameters from the declarator if we had them. SmallVector Params; if (ExplicitSignature) { for (unsigned I = 0, E = ExplicitSignature.getNumParams(); I != E; ++I) { ParmVarDecl *Param = ExplicitSignature.getParam(I); if (Param->getIdentifier() == nullptr && !Param->isImplicit() && !Param->isInvalidDecl() && !getLangOpts().CPlusPlus) Diag(Param->getLocation(), diag::err_parameter_name_omitted); Params.push_back(Param); } // Fake up parameter variables if we have a typedef, like // ^ fntype { ... } } else if (const FunctionProtoType *Fn = T->getAs()) { for (const auto &I : Fn->param_types()) { ParmVarDecl *Param = BuildParmVarDeclForTypedef( CurBlock->TheDecl, ParamInfo.getLocStart(), I); Params.push_back(Param); } } // Set the parameters on the block decl. if (!Params.empty()) { CurBlock->TheDecl->setParams(Params); CheckParmsForFunctionDef(CurBlock->TheDecl->parameters(), /*CheckParameterNames=*/false); } // Finally we can process decl attributes. 
  // Finally we can process decl attributes.
  ProcessDeclAttributes(CurScope, CurBlock->TheDecl, ParamInfo);

  // Put the parameter variables in scope.
  for (auto AI : CurBlock->TheDecl->parameters()) {
    AI->setOwningFunction(CurBlock->TheDecl);

    // If this has an identifier, add it to the scope stack.
    if (AI->getIdentifier()) {
      CheckShadow(CurBlock->TheScope, AI);

      PushOnScopeChains(AI, CurBlock->TheScope);
    }
  }
}

/// ActOnBlockError - If there is an error parsing a block, this callback
/// is invoked to pop the information about the block from the action impl.
void Sema::ActOnBlockError(SourceLocation CaretLoc, Scope *CurScope) {
  // Leave the expression-evaluation context.
  DiscardCleanupsInEvaluationContext();
  PopExpressionEvaluationContext();

  // Pop off CurBlock, handle nested blocks.
  PopDeclContext();
  PopFunctionScopeInfo();
}

/// ActOnBlockStmtExpr - This is called when the body of a block statement
/// literal was successfully completed.  ^(int x){...}
ExprResult Sema::ActOnBlockStmtExpr(SourceLocation CaretLoc,
                                    Stmt *Body, Scope *CurScope) {
  // If blocks are disabled, emit an error.
  if (!LangOpts.Blocks)
    Diag(CaretLoc, diag::err_blocks_disable) << LangOpts.OpenCL;

  // Leave the expression-evaluation context.
  if (hasAnyUnrecoverableErrorsInThisFunction())
    DiscardCleanupsInEvaluationContext();
  assert(!Cleanup.exprNeedsCleanups() &&
         "cleanups within block not correctly bound!");
  PopExpressionEvaluationContext();

  BlockScopeInfo *BSI = cast<BlockScopeInfo>(FunctionScopes.back());

  if (BSI->HasImplicitReturnType)
    deduceClosureReturnType(*BSI);

  PopDeclContext();

  QualType RetTy = Context.VoidTy;
  if (!BSI->ReturnType.isNull())
    RetTy = BSI->ReturnType;

  bool NoReturn = BSI->TheDecl->hasAttr<NoReturnAttr>();
  QualType BlockTy;

  // Set the captured variables on the block.
  // FIXME: Share capture structure between BlockDecl and CapturingScopeInfo!
  SmallVector<BlockDecl::Capture, 4> Captures;
  for (CapturingScopeInfo::Capture &Cap : BSI->Captures) {
    if (Cap.isThisCapture())
      continue;
    BlockDecl::Capture NewCap(Cap.getVariable(), Cap.isBlockCapture(),
                              Cap.isNested(), Cap.getInitExpr());
    Captures.push_back(NewCap);
  }
  BSI->TheDecl->setCaptures(Context, Captures, BSI->CXXThisCaptureIndex != 0);

  // If the user wrote a function type in some form, try to use that.
  if (!BSI->FunctionType.isNull()) {
    const FunctionType *FTy = BSI->FunctionType->getAs<FunctionType>();

    FunctionType::ExtInfo Ext = FTy->getExtInfo();
    if (NoReturn && !Ext.getNoReturn())
      Ext = Ext.withNoReturn(true);

    // Turn protoless block types into nullary block types.
    if (isa<FunctionNoProtoType>(FTy)) {
      FunctionProtoType::ExtProtoInfo EPI;
      EPI.ExtInfo = Ext;
      BlockTy = Context.getFunctionType(RetTy, None, EPI);

    // Otherwise, if we don't need to change anything about the function type,
    // preserve its sugar structure.
    } else if (FTy->getReturnType() == RetTy &&
               (!NoReturn || FTy->getNoReturnAttr())) {
      BlockTy = BSI->FunctionType;

    // Otherwise, make the minimal modifications to the function type.
    } else {
      const FunctionProtoType *FPT = cast<FunctionProtoType>(FTy);
      FunctionProtoType::ExtProtoInfo EPI = FPT->getExtProtoInfo();
      EPI.TypeQuals = 0; // FIXME: silently?
      EPI.ExtInfo = Ext;
      BlockTy = Context.getFunctionType(RetTy, FPT->getParamTypes(), EPI);
    }

  // If we don't have a function type, just build one from nothing.
  } else {
    FunctionProtoType::ExtProtoInfo EPI;
    EPI.ExtInfo = FunctionType::ExtInfo().withNoReturn(NoReturn);
    BlockTy = Context.getFunctionType(RetTy, None, EPI);
  }

  DiagnoseUnusedParameters(BSI->TheDecl->parameters());
  BlockTy = Context.getBlockPointerType(BlockTy);

  // If needed, diagnose invalid gotos and switches in the block.
  if (getCurFunction()->NeedsScopeChecking() &&
      !PP.isCodeCompletionEnabled())
    DiagnoseInvalidJumps(cast<CompoundStmt>(Body));

  BSI->TheDecl->setBody(cast<CompoundStmt>(Body));

  // Try to apply the named return value optimization. We have to check again
  // if we can do this, though, because blocks keep return statements around
  // to deduce an implicit return type.
  if (getLangOpts().CPlusPlus && RetTy->isRecordType() &&
      !BSI->TheDecl->isDependentContext())
    computeNRVO(Body, BSI);

  BlockExpr *Result = new (Context) BlockExpr(BSI->TheDecl, BlockTy);
  AnalysisBasedWarnings::Policy WP = AnalysisWarnings.getDefaultPolicy();
  PopFunctionScopeInfo(&WP, Result->getBlockDecl(), Result);

  // If the block isn't obviously global, i.e. it captures anything at
  // all, then we need to do a few things in the surrounding context:
  if (Result->getBlockDecl()->hasCaptures()) {
    // First, this expression has a new cleanup object.
    ExprCleanupObjects.push_back(Result->getBlockDecl());
    Cleanup.setExprNeedsCleanups(true);

    // It also gets a branch-protected scope if any of the captured
    // variables needs destruction.
    for (const auto &CI : Result->getBlockDecl()->captures()) {
      const VarDecl *var = CI.getVariable();
      if (var->getType().isDestructedType() != QualType::DK_none) {
        getCurFunction()->setHasBranchProtectedScope();
        break;
      }
    }
  }

  return Result;
}

ExprResult Sema::ActOnVAArg(SourceLocation BuiltinLoc, Expr *E, ParsedType Ty,
                            SourceLocation RPLoc) {
  TypeSourceInfo *TInfo;
  GetTypeFromParser(Ty, &TInfo);
  return BuildVAArgExpr(BuiltinLoc, E, TInfo, RPLoc);
}

ExprResult Sema::BuildVAArgExpr(SourceLocation BuiltinLoc,
                                Expr *E, TypeSourceInfo *TInfo,
                                SourceLocation RPLoc) {
  Expr *OrigExpr = E;
  bool IsMS = false;

  // CUDA device code does not support varargs.
  if (getLangOpts().CUDA && getLangOpts().CUDAIsDevice) {
    if (const FunctionDecl *F = dyn_cast<FunctionDecl>(CurContext)) {
      CUDAFunctionTarget T = IdentifyCUDATarget(F);
      if (T == CFT_Global || T == CFT_Device || T == CFT_HostDevice)
        return ExprError(Diag(E->getLocStart(), diag::err_va_arg_in_device));
    }
  }

  // It might be a __builtin_ms_va_list. (But don't ever mark a va_arg()
  // as Microsoft ABI on an actual Microsoft platform, where
  // __builtin_ms_va_list and __builtin_va_list are the same.)
  if (!E->isTypeDependent() && Context.getTargetInfo().hasBuiltinMSVaList() &&
      Context.getTargetInfo().getBuiltinVaListKind() !=
          TargetInfo::CharPtrBuiltinVaList) {
    QualType MSVaListType = Context.getBuiltinMSVaListType();
    if (Context.hasSameType(MSVaListType, E->getType())) {
      if (CheckForModifiableLvalue(E, BuiltinLoc, *this))
        return ExprError();
      IsMS = true;
    }
  }

  // Get the va_list type
  QualType VaListType = Context.getBuiltinVaListType();
  if (!IsMS) {
    if (VaListType->isArrayType()) {
      // Deal with implicit array decay; for example, on x86-64,
      // va_list is an array, but it's supposed to decay to
      // a pointer for va_arg.
      VaListType = Context.getArrayDecayedType(VaListType);
      // Make sure the input expression also decays appropriately.
      ExprResult Result = UsualUnaryConversions(E);
      if (Result.isInvalid())
        return ExprError();
      E = Result.get();
    } else if (VaListType->isRecordType() && getLangOpts().CPlusPlus) {
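      // E.g. on x86-64 __builtin_va_list is the array type __va_list_tag[1],
      // handled by the decay path above, while on AArch64 it is a struct, so
      // in C++ the argument is bound by reference below.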
      // If va_list is a record type and we are compiling in C++ mode,
      // check the argument using reference binding.
      InitializedEntity Entity = InitializedEntity::InitializeParameter(
          Context, Context.getLValueReferenceType(VaListType), false);
      ExprResult Init = PerformCopyInitialization(Entity, SourceLocation(), E);
      if (Init.isInvalid())
        return ExprError();
      E = Init.getAs<Expr>();
    } else {
      // Otherwise, the va_list argument must be an l-value because
      // it is modified by va_arg.
      if (!E->isTypeDependent() &&
          CheckForModifiableLvalue(E, BuiltinLoc, *this))
        return ExprError();
    }
  }

  if (!IsMS && !E->isTypeDependent() &&
      !Context.hasSameType(VaListType, E->getType()))
    return ExprError(Diag(E->getLocStart(),
                         diag::err_first_argument_to_va_arg_not_of_type_va_list)
      << OrigExpr->getType() << E->getSourceRange());

  if (!TInfo->getType()->isDependentType()) {
    if (RequireCompleteType(TInfo->getTypeLoc().getBeginLoc(), TInfo->getType(),
                            diag::err_second_parameter_to_va_arg_incomplete,
                            TInfo->getTypeLoc()))
      return ExprError();

    if (RequireNonAbstractType(TInfo->getTypeLoc().getBeginLoc(),
                               TInfo->getType(),
                               diag::err_second_parameter_to_va_arg_abstract,
                               TInfo->getTypeLoc()))
      return ExprError();

    if (!TInfo->getType().isPODType(Context)) {
      Diag(TInfo->getTypeLoc().getBeginLoc(),
           TInfo->getType()->isObjCLifetimeType()
             ? diag::warn_second_parameter_to_va_arg_ownership_qualified
             : diag::warn_second_parameter_to_va_arg_not_pod)
        << TInfo->getType()
        << TInfo->getTypeLoc().getSourceRange();
    }

    // Check for va_arg where arguments of the given type will be promoted
    // (i.e. this va_arg is guaranteed to have undefined behavior).
    QualType PromoteType;
    if (TInfo->getType()->isPromotableIntegerType()) {
      PromoteType = Context.getPromotedIntegerType(TInfo->getType());
      if (Context.typesAreCompatible(PromoteType, TInfo->getType()))
        PromoteType = QualType();
    }
    if (TInfo->getType()->isSpecificBuiltinType(BuiltinType::Float))
      PromoteType = Context.DoubleTy;
    if (!PromoteType.isNull())
      DiagRuntimeBehavior(TInfo->getTypeLoc().getBeginLoc(), E,
                  PDiag(diag::warn_second_parameter_to_va_arg_never_compatible)
                          << TInfo->getType()
                          << PromoteType
                          << TInfo->getTypeLoc().getSourceRange());
  }

  QualType T = TInfo->getType().getNonLValueExprType(Context);
  return new (Context) VAArgExpr(BuiltinLoc, E, TInfo, RPLoc, T, IsMS);
}

ExprResult Sema::ActOnGNUNullExpr(SourceLocation TokenLoc) {
  // The type of __null will be int or long, depending on the size of
  // pointers on the target.
  QualType Ty;
  unsigned pw = Context.getTargetInfo().getPointerWidth(0);
  if (pw == Context.getTargetInfo().getIntWidth())
    Ty = Context.IntTy;
  else if (pw == Context.getTargetInfo().getLongWidth())
    Ty = Context.LongTy;
  else if (pw == Context.getTargetInfo().getLongLongWidth())
    Ty = Context.LongLongTy;
  else {
    llvm_unreachable("I don't know size of pointer!");
  }

  return new (Context) GNUNullExpr(Ty, TokenLoc);
}

bool Sema::ConversionToObjCStringLiteralCheck(QualType DstType, Expr *&Exp,
                                              bool Diagnose) {
  if (!getLangOpts().ObjC1)
    return false;

  const ObjCObjectPointerType *PT = DstType->getAs<ObjCObjectPointerType>();
  if (!PT)
    return false;

  if (!PT->isObjCIdType()) {
    // Check if the destination is the 'NSString' interface.
    const ObjCInterfaceDecl *ID = PT->getInterfaceDecl();
    if (!ID || !ID->getIdentifier()->isStr("NSString"))
      return false;
  }
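  // For example, NSString *s = "hello"; lands here, and the fix-it below
  // rewrites the initializer to the Objective-C literal @"hello".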
  // Ignore any parens, implicit casts (should only be
  // array-to-pointer decays), and not-so-opaque values.  The last is
  // important for making this trigger for property assignments.
  Expr *SrcExpr = Exp->IgnoreParenImpCasts();
  if (OpaqueValueExpr *OV = dyn_cast<OpaqueValueExpr>(SrcExpr))
    if (OV->getSourceExpr())
      SrcExpr = OV->getSourceExpr()->IgnoreParenImpCasts();

  StringLiteral *SL = dyn_cast<StringLiteral>(SrcExpr);
  if (!SL || !SL->isAscii())
    return false;
  if (Diagnose) {
    Diag(SL->getLocStart(), diag::err_missing_atsign_prefix)
      << FixItHint::CreateInsertion(SL->getLocStart(), "@");
    Exp = BuildObjCStringLiteral(SL->getLocStart(), SL).get();
  }
  return true;
}

static bool maybeDiagnoseAssignmentToFunction(Sema &S, QualType DstType,
                                              const Expr *SrcExpr) {
  if (!DstType->isFunctionPointerType() ||
      !SrcExpr->getType()->isFunctionType())
    return false;

  auto *DRE = dyn_cast<DeclRefExpr>(SrcExpr->IgnoreParenImpCasts());
  if (!DRE)
    return false;

  auto *FD = dyn_cast<FunctionDecl>(DRE->getDecl());
  if (!FD)
    return false;

  return !S.checkAddressOfFunctionIsAvailable(FD,
                                              /*Complain=*/true,
                                              SrcExpr->getLocStart());
}

bool Sema::DiagnoseAssignmentResult(AssignConvertType ConvTy,
                                    SourceLocation Loc,
                                    QualType DstType, QualType SrcType,
                                    Expr *SrcExpr, AssignmentAction Action,
                                    bool *Complained) {
  if (Complained)
    *Complained = false;

  // Decode the result (notice that AST's are still created for extensions).
  bool CheckInferredResultType = false;
  bool isInvalid = false;
  unsigned DiagKind = 0;
  FixItHint Hint;
  ConversionFixItGenerator ConvHints;
  bool MayHaveConvFixit = false;
  bool MayHaveFunctionDiff = false;
  const ObjCInterfaceDecl *IFace = nullptr;
  const ObjCProtocolDecl *PDecl = nullptr;

  switch (ConvTy) {
  case Compatible:
    DiagnoseAssignmentEnum(DstType, SrcType, SrcExpr);
    return false;

  case PointerToInt:
    DiagKind = diag::ext_typecheck_convert_pointer_int;
    ConvHints.tryToFixConversion(SrcExpr, SrcType, DstType, *this);
    MayHaveConvFixit = true;
    break;
  case IntToPointer:
    DiagKind = diag::ext_typecheck_convert_int_pointer;
    ConvHints.tryToFixConversion(SrcExpr, SrcType, DstType, *this);
    MayHaveConvFixit = true;
    break;
  case IncompatiblePointer:
    if (Action == AA_Passing_CFAudited)
      DiagKind = diag::err_arc_typecheck_convert_incompatible_pointer;
    else if (SrcType->isFunctionPointerType() &&
             DstType->isFunctionPointerType())
      DiagKind = diag::ext_typecheck_convert_incompatible_function_pointer;
    else
      DiagKind = diag::ext_typecheck_convert_incompatible_pointer;

    CheckInferredResultType = DstType->isObjCObjectPointerType() &&
      SrcType->isObjCObjectPointerType();
    if (Hint.isNull() && !CheckInferredResultType) {
      ConvHints.tryToFixConversion(SrcExpr, SrcType, DstType, *this);
    }
    else if (CheckInferredResultType) {
      SrcType = SrcType.getUnqualifiedType();
      DstType = DstType.getUnqualifiedType();
    }
    MayHaveConvFixit = true;
    break;
  case IncompatiblePointerSign:
    DiagKind = diag::ext_typecheck_convert_incompatible_pointer_sign;
    break;
  case FunctionVoidPointer:
    DiagKind = diag::ext_typecheck_convert_pointer_void_func;
    break;
  case IncompatiblePointerDiscardsQualifiers: {
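    // This case fires when a conversion would silently drop a pointee
    // qualifier that may not be discarded, e.g. assigning an
    // __attribute__((address_space(1))) int * to a plain int *, or mixing
    // pointees with different Objective-C ownership (__strong vs. __weak).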
    // Perform array-to-pointer decay if necessary.
    if (SrcType->isArrayType())
      SrcType = Context.getArrayDecayedType(SrcType);

    Qualifiers lhq = SrcType->getPointeeType().getQualifiers();
    Qualifiers rhq = DstType->getPointeeType().getQualifiers();
    if (lhq.getAddressSpace() != rhq.getAddressSpace()) {
      DiagKind = diag::err_typecheck_incompatible_address_space;
      break;

    } else if (lhq.getObjCLifetime() != rhq.getObjCLifetime()) {
      DiagKind = diag::err_typecheck_incompatible_ownership;
      break;
    }

    llvm_unreachable("unknown error case for discarding qualifiers!");
    // fallthrough
  }
  case CompatiblePointerDiscardsQualifiers:
    // If the qualifiers lost were because we were applying the
    // (deprecated) C++ conversion from a string literal to a char*
    // (or wchar_t*), then there was no error (C++ 4.2p2).  FIXME:
    // Ideally, this check would be performed in
    // checkPointerTypesForAssignment. However, that would require a
    // bit of refactoring (so that the second argument is an
    // expression, rather than a type), which should be done as part
    // of a larger effort to fix checkPointerTypesForAssignment for
    // C++ semantics.
    if (getLangOpts().CPlusPlus &&
        IsStringLiteralToNonConstPointerConversion(SrcExpr, DstType))
      return false;
    DiagKind = diag::ext_typecheck_convert_discards_qualifiers;
    break;
  case IncompatibleNestedPointerQualifiers:
    DiagKind = diag::ext_nested_pointer_qualifier_mismatch;
    break;
  case IntToBlockPointer:
    DiagKind = diag::err_int_to_block_pointer;
    break;
  case IncompatibleBlockPointer:
    DiagKind = diag::err_typecheck_convert_incompatible_block_pointer;
    break;
  case IncompatibleObjCQualifiedId: {
    if (SrcType->isObjCQualifiedIdType()) {
      const ObjCObjectPointerType *srcOPT =
                SrcType->getAs<ObjCObjectPointerType>();
      for (auto *srcProto : srcOPT->quals()) {
        PDecl = srcProto;
        break;
      }
      if (const ObjCInterfaceType *IFaceT =
            DstType->getAs<ObjCObjectPointerType>()->getInterfaceType())
        IFace = IFaceT->getDecl();
    }
    else if (DstType->isObjCQualifiedIdType()) {
      const ObjCObjectPointerType *dstOPT =
        DstType->getAs<ObjCObjectPointerType>();
      for (auto *dstProto : dstOPT->quals()) {
        PDecl = dstProto;
        break;
      }
      if (const ObjCInterfaceType *IFaceT =
            SrcType->getAs<ObjCObjectPointerType>()->getInterfaceType())
        IFace = IFaceT->getDecl();
    }
    DiagKind = diag::warn_incompatible_qualified_id;
    break;
  }
  case IncompatibleVectors:
    DiagKind = diag::warn_incompatible_vectors;
    break;
  case IncompatibleObjCWeakRef:
    DiagKind = diag::err_arc_weak_unavailable_assign;
    break;
  case Incompatible:
    if (maybeDiagnoseAssignmentToFunction(*this, DstType, SrcExpr)) {
      if (Complained)
        *Complained = true;
      return true;
    }

    DiagKind = diag::err_typecheck_convert_incompatible;
    ConvHints.tryToFixConversion(SrcExpr, SrcType, DstType, *this);
    MayHaveConvFixit = true;
    isInvalid = true;
    MayHaveFunctionDiff = true;
    break;
  }

  QualType FirstType, SecondType;
  switch (Action) {
  case AA_Assigning:
  case AA_Initializing:
    // The destination type comes first.
    FirstType = DstType;
    SecondType = SrcType;
    break;

  case AA_Returning:
  case AA_Passing:
  case AA_Passing_CFAudited:
  case AA_Converting:
  case AA_Sending:
  case AA_Casting:
    // The source type comes first.
    FirstType = SrcType;
    SecondType = DstType;
    break;
  }

  PartialDiagnostic FDiag = PDiag(DiagKind);
  if (Action == AA_Passing_CFAudited)
    FDiag << FirstType << SecondType << AA_Passing << SrcExpr->getSourceRange();
  else
    FDiag << FirstType << SecondType << Action << SrcExpr->getSourceRange();

  // If we can fix the conversion, suggest the FixIts.
  assert(ConvHints.isNull() || Hint.isNull());
  if (!ConvHints.isNull()) {
    for (FixItHint &H : ConvHints.Hints)
      FDiag << H;
  } else {
    FDiag << Hint;
  }
  if (MayHaveConvFixit) { FDiag << (unsigned) (ConvHints.Kind); }

  if (MayHaveFunctionDiff)
    HandleFunctionTypeMismatch(FDiag, SecondType, FirstType);

  Diag(Loc, FDiag);
  if (DiagKind == diag::warn_incompatible_qualified_id &&
      PDecl && IFace && !IFace->hasDefinition())
      Diag(IFace->getLocation(), diag::note_incomplete_class_and_qualified_id)
        << IFace->getName() << PDecl->getName();

  if (SecondType == Context.OverloadTy)
    NoteAllOverloadCandidates(OverloadExpr::find(SrcExpr).Expression,
                              FirstType, /*TakingAddress=*/true);

  if (CheckInferredResultType)
    EmitRelatedResultTypeNote(SrcExpr);

  if (Action == AA_Returning && ConvTy == IncompatiblePointer)
    EmitRelatedResultTypeNoteForReturn(DstType);

  if (Complained)
    *Complained = true;
  return isInvalid;
}

ExprResult Sema::VerifyIntegerConstantExpression(Expr *E,
                                                 llvm::APSInt *Result) {
  class SimpleICEDiagnoser : public VerifyICEDiagnoser {
  public:
    void diagnoseNotICE(Sema &S, SourceLocation Loc, SourceRange SR) override {
      S.Diag(Loc, diag::err_expr_not_ice) << S.LangOpts.CPlusPlus << SR;
    }
  } Diagnoser;

  return VerifyIntegerConstantExpression(E, Result, Diagnoser);
}

ExprResult Sema::VerifyIntegerConstantExpression(Expr *E,
                                                 llvm::APSInt *Result,
                                                 unsigned DiagID,
                                                 bool AllowFold) {
  class IDDiagnoser : public VerifyICEDiagnoser {
    unsigned DiagID;

  public:
    IDDiagnoser(unsigned DiagID)
      : VerifyICEDiagnoser(DiagID == 0), DiagID(DiagID) { }

    void diagnoseNotICE(Sema &S, SourceLocation Loc, SourceRange SR) override {
      S.Diag(Loc, DiagID) << SR;
    }
  } Diagnoser(DiagID);

  return VerifyIntegerConstantExpression(E, Result, Diagnoser, AllowFold);
}

void Sema::VerifyICEDiagnoser::diagnoseFold(Sema &S, SourceLocation Loc,
                                            SourceRange SR) {
  S.Diag(Loc, diag::ext_expr_not_ice) << SR << S.LangOpts.CPlusPlus;
}

ExprResult
Sema::VerifyIntegerConstantExpression(Expr *E, llvm::APSInt *Result,
                                      VerifyICEDiagnoser &Diagnoser,
                                      bool AllowFold) {
  SourceLocation DiagLoc = E->getLocStart();

  if (getLangOpts().CPlusPlus11) {
    // C++11 [expr.const]p5:
    //   If an expression of literal class type is used in a context where an
    //   integral constant expression is required, then that class type shall
    //   have a single non-explicit conversion function to an integral or
    //   unscoped enumeration type
    ExprResult Converted;
    class CXX11ConvertDiagnoser : public ICEConvertDiagnoser {
    public:
      CXX11ConvertDiagnoser(bool Silent)
          : ICEConvertDiagnoser(/*AllowScopedEnumerations*/false,
                                Silent, true) {}

      SemaDiagnosticBuilder diagnoseNotInt(Sema &S, SourceLocation Loc,
                                           QualType T) override {
        return S.Diag(Loc, diag::err_ice_not_integral) << T;
      }

      SemaDiagnosticBuilder diagnoseIncomplete(
          Sema &S, SourceLocation Loc, QualType T) override {
        return S.Diag(Loc, diag::err_ice_incomplete_type) << T;
      }

      SemaDiagnosticBuilder diagnoseExplicitConv(
          Sema &S, SourceLocation Loc, QualType T, QualType ConvTy) override {
        return S.Diag(Loc, diag::err_ice_explicit_conversion) << T << ConvTy;
      }

      SemaDiagnosticBuilder noteExplicitConv(
          Sema &S, CXXConversionDecl *Conv, QualType ConvTy) override {
        return S.Diag(Conv->getLocation(), diag::note_ice_conversion_here)
                 << ConvTy->isEnumeralType() << ConvTy;
      }

      SemaDiagnosticBuilder diagnoseAmbiguous(
          Sema &S, SourceLocation Loc, QualType T) override {
        return S.Diag(Loc, diag::err_ice_ambiguous_conversion) << T;
      }

      SemaDiagnosticBuilder noteAmbiguous(
          Sema &S, CXXConversionDecl *Conv, QualType ConvTy) override {
        return S.Diag(Conv->getLocation(),
                      diag::note_ice_conversion_here)
                 << ConvTy->isEnumeralType() << ConvTy;
      }

      SemaDiagnosticBuilder diagnoseConversion(
          Sema &S, SourceLocation Loc, QualType T, QualType ConvTy) override {
        llvm_unreachable("conversion functions are permitted");
      }
    } ConvertDiagnoser(Diagnoser.Suppress);

    Converted = PerformContextualImplicitConversion(DiagLoc, E,
                                                    ConvertDiagnoser);
    if (Converted.isInvalid())
      return Converted;
    E = Converted.get();
    if (!E->getType()->isIntegralOrUnscopedEnumerationType())
      return ExprError();
  } else if (!E->getType()->isIntegralOrUnscopedEnumerationType()) {
    // An ICE must be of integral or unscoped enumeration type.
    if (!Diagnoser.Suppress)
      Diagnoser.diagnoseNotICE(*this, DiagLoc, E->getSourceRange());
    return ExprError();
  }

  // Circumvent ICE checking in C++11 to avoid evaluating the expression
  // twice in the non-ICE case.
  if (!getLangOpts().CPlusPlus11 && E->isIntegerConstantExpr(Context)) {
    if (Result)
      *Result = E->EvaluateKnownConstInt(Context);
    return E;
  }

  Expr::EvalResult EvalResult;
  SmallVector<PartialDiagnosticAt, 8> Notes;
  EvalResult.Diag = &Notes;

  // Try to evaluate the expression, and produce diagnostics explaining why
  // it's not a constant expression as a side-effect.
  bool Folded = E->EvaluateAsRValue(EvalResult, Context) &&
                EvalResult.Val.isInt() && !EvalResult.HasSideEffects;

  // In C++11, we can rely on diagnostics being produced for any expression
  // which is not a constant expression. If no diagnostics were produced, then
  // this is a constant expression.
  if (Folded && getLangOpts().CPlusPlus11 && Notes.empty()) {
    if (Result)
      *Result = EvalResult.Val.getInt();
    return E;
  }

  // If our only note is the usual "invalid subexpression" note, just point
  // the caret at its location rather than producing an essentially
  // redundant note.
  if (Notes.size() == 1 && Notes[0].second.getDiagID() ==
        diag::note_invalid_subexpr_in_const_expr) {
    DiagLoc = Notes[0].first;
    Notes.clear();
  }

  if (!Folded || !AllowFold) {
    if (!Diagnoser.Suppress) {
      Diagnoser.diagnoseNotICE(*this, DiagLoc, E->getSourceRange());
      for (const PartialDiagnosticAt &Note : Notes)
        Diag(Note.first, Note.second);
    }

    return ExprError();
  }

  Diagnoser.diagnoseFold(*this, DiagLoc, E->getSourceRange());
  for (const PartialDiagnosticAt &Note : Notes)
    Diag(Note.first, Note.second);

  if (Result)
    *Result = EvalResult.Val.getInt();
  return E;
}

namespace {
  // Handle the case where we conclude that an expression which we
  // speculatively considered to be unevaluated is actually evaluated.
  class TransformToPE : public TreeTransform<TransformToPE> {
    typedef TreeTransform<TransformToPE> BaseTransform;

  public:
    TransformToPE(Sema &SemaRef) : BaseTransform(SemaRef) { }

    // Make sure we redo semantic analysis
    bool AlwaysRebuild() { return true; }

    // Make sure we handle LabelStmts correctly.
    // FIXME: This does the right thing, but maybe we need a more general
    // fix to TreeTransform?
    StmtResult TransformLabelStmt(LabelStmt *S) {
      S->getDecl()->setStmt(nullptr);
      return BaseTransform::TransformLabelStmt(S);
    }

    // We need to special-case DeclRefExprs referring to FieldDecls which
    // are not part of a member pointer formation; normal TreeTransforming
    // doesn't catch this case because of the way we represent them in the AST.
    // FIXME: This is a bit ugly; is it really the best way to handle this
    // case?
    //
    // Error on DeclRefExprs referring to FieldDecls.
    ExprResult TransformDeclRefExpr(DeclRefExpr *E) {
      if (isa<FieldDecl>(E->getDecl()) &&
          !SemaRef.isUnevaluatedContext())
        return SemaRef.Diag(E->getLocation(),
                            diag::err_invalid_non_static_member_use)
            << E->getDecl() << E->getSourceRange();

      return BaseTransform::TransformDeclRefExpr(E);
    }

    // Exception: filter out member pointer formation
    ExprResult TransformUnaryOperator(UnaryOperator *E) {
      if (E->getOpcode() == UO_AddrOf && E->getType()->isMemberPointerType())
        return E;

      return BaseTransform::TransformUnaryOperator(E);
    }

    ExprResult TransformLambdaExpr(LambdaExpr *E) {
      // Lambdas never need to be transformed.
      return E;
    }
  };
}

ExprResult Sema::TransformToPotentiallyEvaluated(Expr *E) {
  assert(isUnevaluatedContext() &&
         "Should only transform unevaluated expressions");
  ExprEvalContexts.back().Context =
      ExprEvalContexts[ExprEvalContexts.size()-2].Context;
  if (isUnevaluatedContext())
    return E;
  return TransformToPE(*this).TransformExpr(E);
}

void
Sema::PushExpressionEvaluationContext(ExpressionEvaluationContext NewContext,
                                      Decl *LambdaContextDecl,
                                      bool IsDecltype) {
  ExprEvalContexts.emplace_back(NewContext, ExprCleanupObjects.size(), Cleanup,
                                LambdaContextDecl, IsDecltype);
  Cleanup.reset();
  if (!MaybeODRUseExprs.empty())
    std::swap(MaybeODRUseExprs, ExprEvalContexts.back().SavedMaybeODRUseExprs);
}

void
Sema::PushExpressionEvaluationContext(ExpressionEvaluationContext NewContext,
                                      ReuseLambdaContextDecl_t,
                                      bool IsDecltype) {
  Decl *ClosureContextDecl = ExprEvalContexts.back().ManglingContextDecl;
  PushExpressionEvaluationContext(NewContext, ClosureContextDecl, IsDecltype);
}

void Sema::PopExpressionEvaluationContext() {
  ExpressionEvaluationContextRecord& Rec = ExprEvalContexts.back();
  unsigned NumTypos = Rec.NumTypos;

  if (!Rec.Lambdas.empty()) {
    if (Rec.isUnevaluated() || Rec.Context == ConstantEvaluated) {
      unsigned D;
      if (Rec.isUnevaluated()) {
        // C++11 [expr.prim.lambda]p2:
        //   A lambda-expression shall not appear in an unevaluated operand
        //   (Clause 5).
        D = diag::err_lambda_unevaluated_operand;
      } else {
        // C++1y [expr.const]p2:
        //   A conditional-expression e is a core constant expression unless
        //   the evaluation of e, following the rules of the abstract machine,
        //   would evaluate [...] a lambda-expression.
        D = diag::err_lambda_in_constant_expression;
      }

      // C++1z allows lambda expressions as core constant expressions.
      // FIXME: In C++1z, reinstate the restrictions on lambda expressions (CWG
      // 1607) from appearing within template-arguments and array-bounds that
      // are part of function-signatures.  Be mindful that P0315 (Lambdas in
      // unevaluated contexts) might lift some of these restrictions in a
      // future version.
      if (Rec.Context != ConstantEvaluated || !getLangOpts().CPlusPlus1z)
        for (const auto *L : Rec.Lambdas)
          Diag(L->getLocStart(), D);
    } else {
      // Mark the capture expressions odr-used. This was deferred
      // during lambda expression creation.
      for (auto *Lambda : Rec.Lambdas) {
        for (auto *C : Lambda->capture_inits())
          MarkDeclarationsReferencedInExpr(C);
      }
    }
  }

  // When we are coming out of an unevaluated context, clear out any
  // temporaries that we may have created as part of the evaluation of
  // the expression in that context: they aren't relevant because they
  // will never be constructed.
  if (Rec.isUnevaluated() || Rec.Context == ConstantEvaluated) {
    ExprCleanupObjects.erase(ExprCleanupObjects.begin() + Rec.NumCleanupObjects,
                             ExprCleanupObjects.end());
    Cleanup = Rec.ParentCleanup;
    CleanupVarDeclMarking();
    std::swap(MaybeODRUseExprs, Rec.SavedMaybeODRUseExprs);
  // Otherwise, merge the contexts together.
  } else {
    Cleanup.mergeFrom(Rec.ParentCleanup);
    MaybeODRUseExprs.insert(Rec.SavedMaybeODRUseExprs.begin(),
                            Rec.SavedMaybeODRUseExprs.end());
  }

  // Pop the current expression evaluation context off the stack.
  ExprEvalContexts.pop_back();

  if (!ExprEvalContexts.empty())
    ExprEvalContexts.back().NumTypos += NumTypos;
  else
    assert(NumTypos == 0 && "There are outstanding typos after popping the "
                            "last ExpressionEvaluationContextRecord");
}

void Sema::DiscardCleanupsInEvaluationContext() {
  ExprCleanupObjects.erase(
         ExprCleanupObjects.begin() + ExprEvalContexts.back().NumCleanupObjects,
         ExprCleanupObjects.end());
  Cleanup.reset();
  MaybeODRUseExprs.clear();
}

/// Are we within a context in which some evaluation could be performed (be
/// it constant evaluation or runtime evaluation)? Sadly, this notion is not
/// quite captured by C++'s idea of an "unevaluated context".
ExprResult Sema::HandleExprEvaluationContextForTypeof(Expr *E) {
  if (!E->getType()->isVariablyModifiedType())
    return E;
  return TransformToPotentiallyEvaluated(E);
}

static bool isEvaluatableContext(Sema &SemaRef) {
  switch (SemaRef.ExprEvalContexts.back().Context) {
    case Sema::Unevaluated:
    case Sema::UnevaluatedAbstract:
    case Sema::DiscardedStatement:
      // Expressions in this context are never evaluated.
      return false;

    case Sema::UnevaluatedList:
    case Sema::ConstantEvaluated:
    case Sema::PotentiallyEvaluated:
      // Expressions in this context could be evaluated.
      return true;

    case Sema::PotentiallyEvaluatedIfUsed:
      // Referenced declarations will only be used if the construct in the
      // containing expression is used, at which point we'll be given another
      // turn to mark them.
      return false;
  }
  llvm_unreachable("Invalid context");
}

/// Are we within a context in which references to resolved functions or to
/// variables result in odr-use?
static bool isOdrUseContext(Sema &SemaRef, bool SkipDependentUses = true) {
  // An expression in a template is not really an expression until it's been
  // instantiated, so it doesn't trigger odr-use.
  if (SkipDependentUses && SemaRef.CurContext->isDependentContext())
    return false;

  switch (SemaRef.ExprEvalContexts.back().Context) {
    case Sema::Unevaluated:
    case Sema::UnevaluatedList:
    case Sema::UnevaluatedAbstract:
    case Sema::DiscardedStatement:
      return false;

    case Sema::ConstantEvaluated:
    case Sema::PotentiallyEvaluated:
      return true;

    case Sema::PotentiallyEvaluatedIfUsed:
      return false;
  }
  llvm_unreachable("Invalid context");
}

static bool isImplicitlyDefinableConstexprFunction(FunctionDecl *Func) {
  CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(Func);
  return Func->isConstexpr() &&
         (Func->isImplicitlyInstantiable() || (MD && !MD->isUserProvided()));
}

/// \brief Mark a function referenced, and check whether it is odr-used
/// (C++ [basic.def.odr]p2, C99 6.9p3)
void Sema::MarkFunctionReferenced(SourceLocation Loc, FunctionDecl *Func,
                                  bool MightBeOdrUse) {
  assert(Func && "No function?");

  Func->setReferenced();
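  // For instance, a call f() or taking the address &f in a
  // potentially-evaluated context odr-uses f, while mentioning it inside an
  // unevaluated operand such as sizeof(f()) merely references it.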
  // C++11 [basic.def.odr]p3:
  //   A function whose name appears as a potentially-evaluated expression is
  //   odr-used if it is the unique lookup result or the selected member of a
  //   set of overloaded functions [...].
  //
  // We (incorrectly) mark overload resolution as an unevaluated context, so
  // we can just check that here.
  bool OdrUse = MightBeOdrUse && isOdrUseContext(*this);

  // Determine whether we require a function definition to exist, per
  // C++11 [temp.inst]p3:
  //   Unless a function template specialization has been explicitly
  //   instantiated or explicitly specialized, the function template
  //   specialization is implicitly instantiated when the specialization is
  //   referenced in a context that requires a function definition to exist.
  //
  // That is either when this is an odr-use, or when a usage of a constexpr
  // function occurs within an evaluatable context.
  bool NeedDefinition =
      OdrUse || (isEvaluatableContext(*this) &&
                 isImplicitlyDefinableConstexprFunction(Func));

  // C++14 [temp.expl.spec]p6:
  //   If a template [...] is explicitly specialized then that specialization
  //   shall be declared before the first use of that specialization that
  //   would cause an implicit instantiation to take place, in every
  //   translation unit in which such a use occurs
  if (NeedDefinition &&
      (Func->getTemplateSpecializationKind() != TSK_Undeclared ||
       Func->getMemberSpecializationInfo()))
    checkSpecializationVisibility(Loc, Func);

  // C++14 [except.spec]p17:
  //   An exception-specification is considered to be needed when:
  //   - the function is odr-used or, if it appears in an unevaluated operand,
  //     would be odr-used if the expression were potentially-evaluated;
  //
  // Note, we do this even if MightBeOdrUse is false. That indicates that the
  // function is a pure virtual function we're calling, and in that case the
  // function was selected by overload resolution and we need to resolve its
  // exception specification for a different reason.
  const FunctionProtoType *FPT = Func->getType()->getAs<FunctionProtoType>();
  if (FPT && isUnresolvedExceptionSpec(FPT->getExceptionSpecType()))
    ResolveExceptionSpec(Loc, FPT);

  // If we don't need to mark the function as used, and we don't need to
  // try to provide a definition, there's nothing more to do.
  if ((Func->isUsed(/*CheckUsedAttr=*/false) || !OdrUse) &&
      (!NeedDefinition || Func->getBody()))
    return;
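  // Past this point we may have to force a definition into existence. For
  // example (illustrative), given
  //   struct S { S() {} std::string Str; };
  //   S a, b(a);
  // the declaration of 'b' is the first use of S's implicit copy
  // constructor, which DefineImplicitCopyConstructor below then defines.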
  // Note that this declaration has been used.
  if (CXXConstructorDecl *Constructor = dyn_cast<CXXConstructorDecl>(Func)) {
    Constructor = cast<CXXConstructorDecl>(Constructor->getFirstDecl());
    if (Constructor->isDefaulted() && !Constructor->isDeleted()) {
      if (Constructor->isDefaultConstructor()) {
        if (Constructor->isTrivial() &&
            !Constructor->hasAttr<DLLExportAttr>())
          return;
        DefineImplicitDefaultConstructor(Loc, Constructor);
      } else if (Constructor->isCopyConstructor()) {
        DefineImplicitCopyConstructor(Loc, Constructor);
      } else if (Constructor->isMoveConstructor()) {
        DefineImplicitMoveConstructor(Loc, Constructor);
      }
    } else if (Constructor->getInheritedConstructor()) {
      DefineInheritingConstructor(Loc, Constructor);
    }
  } else if (CXXDestructorDecl *Destructor =
                 dyn_cast<CXXDestructorDecl>(Func)) {
    Destructor = cast<CXXDestructorDecl>(Destructor->getFirstDecl());
    if (Destructor->isDefaulted() && !Destructor->isDeleted()) {
      if (Destructor->isTrivial() && !Destructor->hasAttr<DLLExportAttr>())
        return;
      DefineImplicitDestructor(Loc, Destructor);
    }
    if (Destructor->isVirtual() && getLangOpts().AppleKext)
      MarkVTableUsed(Loc, Destructor->getParent());
  } else if (CXXMethodDecl *MethodDecl = dyn_cast<CXXMethodDecl>(Func)) {
    if (MethodDecl->isOverloadedOperator() &&
        MethodDecl->getOverloadedOperator() == OO_Equal) {
      MethodDecl = cast<CXXMethodDecl>(MethodDecl->getFirstDecl());
      if (MethodDecl->isDefaulted() && !MethodDecl->isDeleted()) {
        if (MethodDecl->isCopyAssignmentOperator())
          DefineImplicitCopyAssignment(Loc, MethodDecl);
        else if (MethodDecl->isMoveAssignmentOperator())
          DefineImplicitMoveAssignment(Loc, MethodDecl);
      }
    } else if (isa<CXXConversionDecl>(MethodDecl) &&
               MethodDecl->getParent()->isLambda()) {
      CXXConversionDecl *Conversion =
          cast<CXXConversionDecl>(MethodDecl->getFirstDecl());
      if (Conversion->isLambdaToBlockPointerConversion())
        DefineImplicitLambdaToBlockPointerConversion(Loc, Conversion);
      else
        DefineImplicitLambdaToFunctionPointerConversion(Loc, Conversion);
    } else if (MethodDecl->isVirtual() && getLangOpts().AppleKext)
      MarkVTableUsed(Loc, MethodDecl->getParent());
  }

  // Recursive functions should be marked when used from another function.
  // FIXME: Is this really right?
  if (CurContext == Func) return;

  // Implicit instantiation of function templates and member functions of
  // class templates.
  if (Func->isImplicitlyInstantiable()) {
    bool AlreadyInstantiated = false;
    SourceLocation PointOfInstantiation = Loc;
    if (FunctionTemplateSpecializationInfo *SpecInfo =
            Func->getTemplateSpecializationInfo()) {
      if (SpecInfo->getPointOfInstantiation().isInvalid())
        SpecInfo->setPointOfInstantiation(Loc);
      else if (SpecInfo->getTemplateSpecializationKind() ==
                   TSK_ImplicitInstantiation) {
        AlreadyInstantiated = true;
        PointOfInstantiation = SpecInfo->getPointOfInstantiation();
      }
    } else if (MemberSpecializationInfo *MSInfo =
                   Func->getMemberSpecializationInfo()) {
      if (MSInfo->getPointOfInstantiation().isInvalid())
        MSInfo->setPointOfInstantiation(Loc);
      else if (MSInfo->getTemplateSpecializationKind() ==
                   TSK_ImplicitInstantiation) {
        AlreadyInstantiated = true;
        PointOfInstantiation = MSInfo->getPointOfInstantiation();
      }
    }

    if (!AlreadyInstantiated || Func->isConstexpr()) {
      if (isa<CXXRecordDecl>(Func->getDeclContext()) &&
          cast<CXXRecordDecl>(Func->getDeclContext())->isLocalClass() &&
          ActiveTemplateInstantiations.size())
        PendingLocalImplicitInstantiations.push_back(
            std::make_pair(Func, PointOfInstantiation));
      else if (Func->isConstexpr())
        // Do not defer instantiations of constexpr functions, to avoid the
        // expression evaluator needing to call back into Sema if it sees a
        // call to such a function.
        InstantiateFunctionDefinition(PointOfInstantiation, Func);
      else {
        PendingInstantiations.push_back(std::make_pair(Func,
                                                       PointOfInstantiation));
        // Notify the consumer that a function was implicitly instantiated.
        Consumer.HandleCXXImplicitFunctionInstantiation(Func);
      }
    }
  } else {
    // Walk redefinitions, as some of them may be instantiable.
    for (auto i : Func->redecls()) {
      if (!i->isUsed(false) && i->isImplicitlyInstantiable())
        MarkFunctionReferenced(Loc, i, OdrUse);
    }
  }

  if (!OdrUse) return;

  // Keep track of used but undefined functions.
  if (!Func->isDefined()) {
    if (mightHaveNonExternalLinkage(Func))
      UndefinedButUsed.insert(std::make_pair(Func->getCanonicalDecl(), Loc));
    else if (Func->getMostRecentDecl()->isInlined() &&
             !LangOpts.GNUInline &&
             !Func->getMostRecentDecl()->hasAttr<GNUInlineAttr>())
      UndefinedButUsed.insert(std::make_pair(Func->getCanonicalDecl(), Loc));
  }

  Func->markUsed(Context);
}

static void
diagnoseUncapturableValueReference(Sema &S, SourceLocation loc,
                                   ValueDecl *var, DeclContext *DC) {
  DeclContext *VarDC = var->getDeclContext();

  // If the parameter still belongs to the translation unit, then
  // we're actually just using one parameter in the declaration of
  // the next.
  if (isa<ParmVarDecl>(var) &&
      isa<TranslationUnitDecl>(VarDC))
    return;

  // For C code, don't diagnose about capture if we're not actually in code
  // right now; it's impossible to write a non-constant expression outside of
  // function context, so we'll get other (more useful) diagnostics later.
  //
  // For C++, things get a bit more nasty... it would be nice to suppress this
  // diagnostic for certain cases like using a local variable in an array
  // bound for a member of a local class, but the correct predicate is not
  // obvious.
  if (!S.getLangOpts().CPlusPlus && !S.CurContext->isFunctionOrMethod())
    return;

  unsigned ValueKind = isa<ParmVarDecl>(var) ? 1 : 0;
  unsigned ContextKind = 3; // unknown
  if (isa<CXXMethodDecl>(VarDC) &&
      cast<CXXRecordDecl>(VarDC->getParent())->isLambda()) {
    ContextKind = 2;
  } else if (isa<FunctionDecl>(VarDC)) {
    ContextKind = 0;
  } else if (isa<BlockDecl>(VarDC)) {
    ContextKind = 1;
  }

  S.Diag(loc, diag::err_reference_to_local_in_enclosing_context)
    << var << ValueKind << ContextKind << VarDC;
  S.Diag(var->getLocation(), diag::note_entity_declared_at)
      << var;

  // FIXME: Add additional diagnostic info about class etc. which prevents
  // capture.
}

static bool isVariableAlreadyCapturedInScopeInfo(CapturingScopeInfo *CSI,
                                                 VarDecl *Var,
                                                 bool &SubCapturesAreNested,
                                                 QualType &CaptureType,
                                                 QualType &DeclRefType) {
  // Check whether we've already captured it.
  if (CSI->CaptureMap.count(Var)) {
    // If we found a capture, any subcaptures are nested.
    SubCapturesAreNested = true;

    // Retrieve the capture type for this variable.
    CaptureType = CSI->getCapture(Var).getCaptureType();

    // Compute the type of an expression that refers to this variable.
    DeclRefType = CaptureType.getNonReferenceType();

    // Similarly to mutable captures in lambda, all the OpenMP captures by
    // copy are mutable in the sense that user can change their value - they
    // are private instances of the captured declarations.
    const CapturingScopeInfo::Capture &Cap = CSI->getCapture(Var);
    if (Cap.isCopyCapture() &&
        !(isa<LambdaScopeInfo>(CSI) && cast<LambdaScopeInfo>(CSI)->Mutable) &&
        !(isa<CapturedRegionScopeInfo>(CSI) &&
          cast<CapturedRegionScopeInfo>(CSI)->CapRegionKind == CR_OpenMP))
      DeclRefType.addConst();
    return true;
  }
  return false;
}
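// For example, in
//   void f(int x) { [=] { return [=] { return x; }(); }(); }
// the inner lambda's use of 'x' first forces the outer lambda to capture
// 'x'; the inner capture of the outer copy is then recorded as nested.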
// Only block literals, captured statements, and lambda expressions can
// capture; other scopes don't work.
static DeclContext *getParentOfCapturingContextOrNull(DeclContext *DC,
                                                      VarDecl *Var,
                                                      SourceLocation Loc,
                                                      const bool Diagnose,
                                                      Sema &S) {
  if (isa<BlockDecl>(DC) || isa<CapturedDecl>(DC) || isLambdaCallOperator(DC))
    return getLambdaAwareParentOfDeclContext(DC);
  else if (Var->hasLocalStorage()) {
    if (Diagnose)
      diagnoseUncapturableValueReference(S, Loc, Var, DC);
  }
  return nullptr;
}

// Certain capturing entities (lambdas, blocks etc.) are not allowed
// to capture certain types of variables (unnamed, variably modified types
// etc.), so check for eligibility.
static bool isVariableCapturable(CapturingScopeInfo *CSI, VarDecl *Var,
                                 SourceLocation Loc,
                                 const bool Diagnose, Sema &S) {

  bool IsBlock = isa<BlockScopeInfo>(CSI);
  bool IsLambda = isa<LambdaScopeInfo>(CSI);

  // Lambdas are not allowed to capture unnamed variables
  // (e.g. anonymous unions).
  // FIXME: The C++11 rules don't actually state this explicitly, but I'm
  // assuming that's the intent.
  if (IsLambda && !Var->getDeclName()) {
    if (Diagnose) {
      S.Diag(Loc, diag::err_lambda_capture_anonymous_var);
      S.Diag(Var->getLocation(), diag::note_declared_at);
    }
    return false;
  }

  // Prohibit variably-modified types in blocks; they're difficult to deal
  // with.
  if (Var->getType()->isVariablyModifiedType() && IsBlock) {
    if (Diagnose) {
      S.Diag(Loc, diag::err_ref_vm_type);
      S.Diag(Var->getLocation(), diag::note_previous_decl)
        << Var->getDeclName();
    }
    return false;
  }

  // Prohibit structs with flexible array members too.
  // We cannot capture what is in the tail end of the struct.
  if (const RecordType *VTTy = Var->getType()->getAs<RecordType>()) {
    if (VTTy->getDecl()->hasFlexibleArrayMember()) {
      if (Diagnose) {
        if (IsBlock)
          S.Diag(Loc, diag::err_ref_flexarray_type);
        else
          S.Diag(Loc, diag::err_lambda_capture_flexarray_type)
            << Var->getDeclName();
        S.Diag(Var->getLocation(), diag::note_previous_decl)
          << Var->getDeclName();
      }
      return false;
    }
  }

  const bool HasBlocksAttr = Var->hasAttr<BlocksAttr>();
  // Lambdas and captured statements are not allowed to capture __block
  // variables; they don't support the expected semantics.
  if (HasBlocksAttr && (IsLambda || isa<CapturedRegionScopeInfo>(CSI))) {
    if (Diagnose) {
      S.Diag(Loc, diag::err_capture_block_variable)
        << Var->getDeclName() << !IsLambda;
      S.Diag(Var->getLocation(), diag::note_previous_decl)
        << Var->getDeclName();
    }
    return false;
  }

  return true;
}

// Returns true if the capture by block was successful.
static bool captureInBlock(BlockScopeInfo *BSI, VarDecl *Var,
                           SourceLocation Loc,
                           const bool BuildAndDiagnose,
                           QualType &CaptureType,
                           QualType &DeclRefType,
                           const bool Nested,
                           Sema &S) {
  Expr *CopyExpr = nullptr;
  bool ByRef = false;

  // Blocks are not allowed to capture arrays.
  if (CaptureType->isArrayType()) {
    if (BuildAndDiagnose) {
      S.Diag(Loc, diag::err_ref_array_type);
      S.Diag(Var->getLocation(), diag::note_previous_decl)
        << Var->getDeclName();
    }
    return false;
  }

  // Forbid the block-capture of autoreleasing variables.
  if (CaptureType.getObjCLifetime() == Qualifiers::OCL_Autoreleasing) {
    if (BuildAndDiagnose) {
      S.Diag(Loc, diag::err_arc_autoreleasing_capture)
        << /*block*/ 0;
      S.Diag(Var->getLocation(), diag::note_previous_decl)
        << Var->getDeclName();
    }
    return false;
  }
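  // A typical instance under ARC (illustrative): an indirect out-parameter,
  //   - (BOOL)save:(NSError **)error { ... ^{ *error = anError; }; ... }
  // where the parameter is implicitly NSError * __autoreleasing *; the
  // warning below suggests writing the qualifier explicitly.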
  // Warn about implicitly autoreleasing indirect parameters captured by
  // blocks.
  if (auto *PT = dyn_cast<PointerType>(CaptureType)) {
    QualType PointeeTy = PT->getPointeeType();
    if (isa<ObjCObjectPointerType>(PointeeTy.getCanonicalType()) &&
        PointeeTy.getObjCLifetime() == Qualifiers::OCL_Autoreleasing &&
        !isa<AttributedType>(PointeeTy)) {
      if (BuildAndDiagnose) {
        SourceLocation VarLoc = Var->getLocation();
        S.Diag(Loc, diag::warn_block_capture_autoreleasing);
        S.Diag(VarLoc, diag::note_declare_parameter_autoreleasing) <<
            FixItHint::CreateInsertion(VarLoc, "__autoreleasing");
        S.Diag(VarLoc, diag::note_declare_parameter_strong);
      }
    }
  }

  const bool HasBlocksAttr = Var->hasAttr<BlocksAttr>();
  if (HasBlocksAttr || CaptureType->isReferenceType() ||
      (S.getLangOpts().OpenMP && S.IsOpenMPCapturedDecl(Var))) {
    // Block capture by reference does not change the capture or
    // declaration reference types.
    ByRef = true;
  } else {
    // Block capture by copy introduces 'const'.
    CaptureType = CaptureType.getNonReferenceType().withConst();
    DeclRefType = CaptureType;

    if (S.getLangOpts().CPlusPlus && BuildAndDiagnose) {
      if (const RecordType *Record = DeclRefType->getAs<RecordType>()) {
        // The capture logic needs the destructor, so make sure we mark it.
        // Usually this is unnecessary because most local variables have
        // their destructors marked at declaration time, but parameters are
        // an exception because it's technically only the call site that
        // actually requires the destructor.
        if (isa<ParmVarDecl>(Var))
          S.FinalizeVarWithDestructor(Var, Record);

        // Enter a new evaluation context to insulate the copy
        // full-expression.
        EnterExpressionEvaluationContext scope(S, S.PotentiallyEvaluated);

        // According to the blocks spec, the capture of a variable from
        // the stack requires a const copy constructor.  This is not true
        // of the copy/move done to move a __block variable to the heap.
        Expr *DeclRef = new (S.Context) DeclRefExpr(Var, Nested,
                                                    DeclRefType.withConst(),
                                                    VK_LValue, Loc);

        ExprResult Result
          = S.PerformCopyInitialization(
              InitializedEntity::InitializeBlock(Var->getLocation(),
                                                 CaptureType, false),
              Loc, DeclRef);

        // Build a full-expression copy expression if initialization
        // succeeded and used a non-trivial constructor.  Recover from
        // errors by pretending that the copy isn't necessary.
        if (!Result.isInvalid() &&
            !cast<CXXConstructExpr>(Result.get())->getConstructor()
               ->isTrivial()) {
          Result = S.MaybeCreateExprWithCleanups(Result);
          CopyExpr = Result.get();
        }
      }
    }
  }

  // Actually capture the variable.
  if (BuildAndDiagnose)
    BSI->addCapture(Var, HasBlocksAttr, ByRef, Nested, Loc,
                    SourceLocation(), CaptureType, CopyExpr);

  return true;
}

/// \brief Capture the given variable in the captured region.
static bool captureInCapturedRegion(CapturedRegionScopeInfo *RSI,
                                    VarDecl *Var,
                                    SourceLocation Loc,
                                    const bool BuildAndDiagnose,
                                    QualType &CaptureType,
                                    QualType &DeclRefType,
                                    const bool RefersToCapturedVariable,
                                    Sema &S) {
  // By default, capture variables by reference.
  bool ByRef = true;
  // Using an LValue reference type is consistent with Lambdas (see below).
  if (S.getLangOpts().OpenMP && RSI->CapRegionKind == CR_OpenMP) {
    if (S.IsOpenMPCapturedDecl(Var))
      DeclRefType = DeclRefType.getUnqualifiedType();
    ByRef = S.IsOpenMPCapturedByRef(Var, RSI->OpenMPLevel);
  }

  if (ByRef)
    CaptureType = S.Context.getLValueReferenceType(DeclRefType);
  else
    CaptureType = DeclRefType;

  Expr *CopyExpr = nullptr;
  if (BuildAndDiagnose) {
    // The current implementation assumes that all variables are captured
    // by references. Since there is no capture by copy, no expression
    // evaluation will be needed.
    RecordDecl *RD = RSI->TheRecordDecl;

    FieldDecl *Field
      = FieldDecl::Create(S.Context, RD, Loc, Loc, nullptr, CaptureType,
                          S.Context.getTrivialTypeSourceInfo(CaptureType, Loc),
                          nullptr, false, ICIS_NoInit);
    Field->setImplicit(true);
    Field->setAccess(AS_private);
    RD->addDecl(Field);

    CopyExpr = new (S.Context) DeclRefExpr(Var, RefersToCapturedVariable,
                                           DeclRefType, VK_LValue, Loc);
    Var->setReferenced(true);
    Var->markUsed(S.Context);
  }

  // Actually capture the variable.
  if (BuildAndDiagnose)
    RSI->addCapture(Var, /*isBlock*/false, ByRef, RefersToCapturedVariable,
                    Loc, SourceLocation(), CaptureType, CopyExpr);

  return true;
}

/// \brief Create a field within the lambda class for the variable
/// being captured.
static void addAsFieldToClosureType(Sema &S, LambdaScopeInfo *LSI,
                                    QualType FieldType, QualType DeclRefType,
                                    SourceLocation Loc,
                                    bool RefersToCapturedVariable) {
  CXXRecordDecl *Lambda = LSI->Lambda;

  // Build the non-static data member.
  FieldDecl *Field
    = FieldDecl::Create(S.Context, Lambda, Loc, Loc, nullptr, FieldType,
                        S.Context.getTrivialTypeSourceInfo(FieldType, Loc),
                        nullptr, false, ICIS_NoInit);
  Field->setImplicit(true);
  Field->setAccess(AS_private);
  Lambda->addDecl(Field);
}

/// \brief Capture the given variable in the lambda.
static bool captureInLambda(LambdaScopeInfo *LSI,
                            VarDecl *Var,
                            SourceLocation Loc,
                            const bool BuildAndDiagnose,
                            QualType &CaptureType,
                            QualType &DeclRefType,
                            const bool RefersToCapturedVariable,
                            const Sema::TryCaptureKind Kind,
                            SourceLocation EllipsisLoc,
                            const bool IsTopScope,
                            Sema &S) {
  // Determine whether we are capturing by reference or by value.
  bool ByRef = false;
  if (IsTopScope && Kind != Sema::TryCapture_Implicit) {
    ByRef = (Kind == Sema::TryCapture_ExplicitByRef);
  } else {
    ByRef = (LSI->ImpCaptureStyle == LambdaScopeInfo::ImpCap_LambdaByref);
  }

  // Compute the type of the field that will capture this variable.
  if (ByRef) {
    // C++11 [expr.prim.lambda]p15:
    //   An entity is captured by reference if it is implicitly or
    //   explicitly captured but not captured by copy. It is
    //   unspecified whether additional unnamed non-static data
    //   members are declared in the closure type for entities
    //   captured by reference.
    //
    // FIXME: It is not clear whether we want to build an lvalue reference
    // to the DeclRefType or to CaptureType.getNonReferenceType(). GCC appears
    // to do the former, while EDG does the latter. Core issue 1249 will
    // clarify, but for now we follow GCC because it's a more permissive and
    // easily defensible position.
    CaptureType = S.Context.getLValueReferenceType(DeclRefType);
  } else {
    // C++11 [expr.prim.lambda]p14:
    //   For each entity captured by copy, an unnamed non-static
    //   data member is declared in the closure type. The
    //   declaration order of these members is unspecified. The type
    //   of such a data member is the type of the corresponding
    //   captured entity if the entity is not a reference to an
    //   object, or the referenced type otherwise. [Note: If the
    //   captured entity is a reference to a function, the
    //   corresponding data member is also a reference to a
    //   function. - end note ]
    if (const ReferenceType *RefType = CaptureType->getAs<ReferenceType>()) {
      if (!RefType->getPointeeType()->isFunctionType())
        CaptureType = RefType->getPointeeType();
    }
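    // For instance, given int i; int &r = i;, both [i] and [r] declare a
    // closure member of type int: the reference is captured by copying its
    // referent, per the wording quoted above.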
    // Forbid the lambda copy-capture of autoreleasing variables.
    if (CaptureType.getObjCLifetime() == Qualifiers::OCL_Autoreleasing) {
      if (BuildAndDiagnose) {
        S.Diag(Loc, diag::err_arc_autoreleasing_capture) << /*lambda*/ 1;
        S.Diag(Var->getLocation(), diag::note_previous_decl)
          << Var->getDeclName();
      }
      return false;
    }

    // Make sure that by-copy captures are of a complete and non-abstract
    // type.
    if (BuildAndDiagnose) {
      if (!CaptureType->isDependentType() &&
          S.RequireCompleteType(Loc, CaptureType,
                                diag::err_capture_of_incomplete_type,
                                Var->getDeclName()))
        return false;

      if (S.RequireNonAbstractType(Loc, CaptureType,
                                   diag::err_capture_of_abstract_type))
        return false;
    }
  }

  // Capture this variable in the lambda.
  if (BuildAndDiagnose)
    addAsFieldToClosureType(S, LSI, CaptureType, DeclRefType, Loc,
                            RefersToCapturedVariable);

  // Compute the type of a reference to this captured variable.
  if (ByRef)
    DeclRefType = CaptureType.getNonReferenceType();
  else {
    // C++ [expr.prim.lambda]p5:
    //   The closure type for a lambda-expression has a public inline
    //   function call operator [...]. This function call operator is
    //   declared const (9.3.1) if and only if the lambda-expression's
    //   parameter-declaration-clause is not followed by mutable.
    DeclRefType = CaptureType.getNonReferenceType();
    if (!LSI->Mutable && !CaptureType->isReferenceType())
      DeclRefType.addConst();
  }

  // Add the capture.
  if (BuildAndDiagnose)
    LSI->addCapture(Var, /*IsBlock=*/false, ByRef, RefersToCapturedVariable,
                    Loc, EllipsisLoc, CaptureType, /*CopyExpr=*/nullptr);

  return true;
}

bool Sema::tryCaptureVariable(
    VarDecl *Var, SourceLocation ExprLoc, TryCaptureKind Kind,
    SourceLocation EllipsisLoc, bool BuildAndDiagnose, QualType &CaptureType,
    QualType &DeclRefType, const unsigned *const FunctionScopeIndexToStopAt) {
  // An init-capture is notionally from the context surrounding its
  // declaration, but its parent DC is the lambda class.
  DeclContext *VarDC = Var->getDeclContext();
  if (Var->isInitCapture())
    VarDC = VarDC->getParent();

  DeclContext *DC = CurContext;
  const unsigned MaxFunctionScopesIndex = FunctionScopeIndexToStopAt
      ? *FunctionScopeIndexToStopAt : FunctionScopes.size() - 1;

  // We need to sync up the Declaration Context with the
  // FunctionScopeIndexToStopAt
  if (FunctionScopeIndexToStopAt) {
    unsigned FSIndex = FunctionScopes.size() - 1;
    while (FSIndex != MaxFunctionScopesIndex) {
      DC = getLambdaAwareParentOfDeclContext(DC);
      --FSIndex;
    }
  }

  // If the variable is declared in the current context, there is no need to
  // capture it.
  if (VarDC == DC) return true;

  // Capture global variables if it is required to use private copy of this
  // variable.
  bool IsGlobal = !Var->hasLocalStorage();
  if (IsGlobal && !(LangOpts.OpenMP && IsOpenMPCapturedDecl(Var)))
    return true;

  // Walk up the stack to determine whether we can capture the variable,
  // performing the "simple" checks that don't depend on type. We stop when
  // we've either hit the declared scope of the variable or find an existing
  // capture of that variable.  We start from the innermost capturing-entity
  // (the DC) and ensure that all intervening capturing-entities
  // (blocks/lambdas etc.) between the innermost capturer and the variable's
  // declcontext can either capture the variable or have already captured
  // the variable.
  CaptureType = Var->getType();
  DeclRefType = CaptureType.getNonReferenceType();
  bool Nested = false;
  bool Explicit = (Kind != TryCapture_Implicit);
  unsigned FunctionScopesIndex = MaxFunctionScopesIndex;
  do {
    // Only block literals, captured statements, and lambda expressions can
    // capture; other scopes don't work.
    DeclContext *ParentDC = getParentOfCapturingContextOrNull(DC, Var,
                                                              ExprLoc,
                                                              BuildAndDiagnose,
                                                              *this);
    // We need to check for the parent *first* because, if we *have*
    // private-captured a global variable, we need to recursively capture it in
    // intermediate blocks, lambdas, etc.
    if (!ParentDC) {
      if (IsGlobal) {
        FunctionScopesIndex = MaxFunctionScopesIndex - 1;
        break;
      }
      return true;
    }

    FunctionScopeInfo  *FSI = FunctionScopes[FunctionScopesIndex];
    CapturingScopeInfo *CSI = cast<CapturingScopeInfo>(FSI);

    // Check whether we've already captured it.
    if (isVariableAlreadyCapturedInScopeInfo(CSI, Var, Nested, CaptureType,
                                             DeclRefType))
      break;
    // If we are instantiating a generic lambda call operator body,
    // we do not want to capture new variables.  What was captured
    // during either a lambdas transformation or initial parsing
    // should be used.
    if (isGenericLambdaCallOperatorSpecialization(DC)) {
      if (BuildAndDiagnose) {
        LambdaScopeInfo *LSI = cast<LambdaScopeInfo>(CSI);
        if (LSI->ImpCaptureStyle == CapturingScopeInfo::ImpCap_None) {
          Diag(ExprLoc, diag::err_lambda_impcap) << Var->getDeclName();
          Diag(Var->getLocation(), diag::note_previous_decl)
             << Var->getDeclName();
          Diag(LSI->Lambda->getLocStart(), diag::note_lambda_decl);
        } else
          diagnoseUncapturableValueReference(*this, ExprLoc, Var, DC);
      }
      return true;
    }

    // Certain capturing entities (lambdas, blocks etc.) are not allowed
    // to capture certain types of variables (unnamed, variably modified
    // types etc.), so check for eligibility.
    if (!isVariableCapturable(CSI, Var, ExprLoc, BuildAndDiagnose, *this))
      return true;

    // Try to capture variable-length arrays types.
    if (Var->getType()->isVariablyModifiedType()) {
      // We're going to walk down into the type and look for VLA
      // expressions.
      QualType QTy = Var->getType();
      if (ParmVarDecl *PVD = dyn_cast_or_null<ParmVarDecl>(Var))
        QTy = PVD->getOriginalType();
      captureVariablyModifiedType(Context, QTy, CSI);
    }

    if (getLangOpts().OpenMP) {
      if (auto *RSI = dyn_cast<CapturedRegionScopeInfo>(CSI)) {
        // OpenMP private variables should not be captured in outer scope, so
        // just break here. Similarly, global variables that are captured in a
        // target region should not be captured outside the scope of the
        // region.
        if (RSI->CapRegionKind == CR_OpenMP) {
          auto IsTargetCap = isOpenMPTargetCapturedDecl(Var, RSI->OpenMPLevel);
          // When we detect target captures we are looking from inside the
          // target region, therefore we need to propagate the capture from the
          // enclosing region. Therefore, the capture is not initially nested.
          if (IsTargetCap)
            FunctionScopesIndex--;

          if (IsTargetCap || isOpenMPPrivateDecl(Var, RSI->OpenMPLevel)) {
            Nested = !IsTargetCap;
            DeclRefType = DeclRefType.getUnqualifiedType();
            CaptureType = Context.getLValueReferenceType(DeclRefType);
            break;
          }
        }
      }
    }

    if (CSI->ImpCaptureStyle == CapturingScopeInfo::ImpCap_None && !Explicit) {
      // No capture-default, and this is not an explicit capture
      // so cannot capture this variable.
      if (BuildAndDiagnose) {
        Diag(ExprLoc, diag::err_lambda_impcap) << Var->getDeclName();
        Diag(Var->getLocation(), diag::note_previous_decl)
          << Var->getDeclName();
        if (cast<LambdaScopeInfo>(CSI)->Lambda)
          Diag(cast<LambdaScopeInfo>(CSI)->Lambda->getLocStart(),
               diag::note_lambda_decl);
        // FIXME: If we error out because an outer lambda cannot implicitly
        // capture a variable that an inner lambda explicitly captures, we
        // should have the inner lambda do the explicit capture - because
        // it makes for cleaner diagnostics later.
        // This would purely be done so that the diagnostic does not
        // misleadingly claim that a variable cannot be captured by a lambda
        // implicitly even though it is captured explicitly.  Suggestion:
        //  - create const bool VariableCaptureWasInitiallyExplicit = Explicit
        //    at the function head
        //  - cache the StartingDeclContext - this must be a lambda
        //  - captureInLambda in the innermost lambda the variable.
      }
      return true;
    }

    FunctionScopesIndex--;
    DC = ParentDC;
    Explicit = false;
  } while (!VarDC->Equals(DC));

  // Walk back down the scope stack, (e.g. from outer lambda to inner lambda)
  // computing the type of the capture at each step, checking type-specific
  // requirements, and adding captures if requested.
  // If the variable had already been captured previously, we start capturing
  // at the lambda nested within that one.
  for (unsigned I = ++FunctionScopesIndex, N = MaxFunctionScopesIndex + 1;
       I != N; ++I) {
    CapturingScopeInfo *CSI = cast<CapturingScopeInfo>(FunctionScopes[I]);

    if (BlockScopeInfo *BSI = dyn_cast<BlockScopeInfo>(CSI)) {
      if (!captureInBlock(BSI, Var, ExprLoc,
                          BuildAndDiagnose, CaptureType,
                          DeclRefType, Nested, *this))
        return true;
      Nested = true;
    } else if (CapturedRegionScopeInfo *RSI =
                   dyn_cast<CapturedRegionScopeInfo>(CSI)) {
      if (!captureInCapturedRegion(RSI, Var, ExprLoc,
                                   BuildAndDiagnose, CaptureType,
                                   DeclRefType, Nested, *this))
        return true;
      Nested = true;
    } else {
      LambdaScopeInfo *LSI = cast<LambdaScopeInfo>(CSI);
      if (!captureInLambda(LSI, Var, ExprLoc,
                           BuildAndDiagnose, CaptureType,
                           DeclRefType, Nested, Kind, EllipsisLoc,
                           /*IsTopScope*/I == N - 1, *this))
        return true;
      Nested = true;
    }
  }
  return false;
}

bool Sema::tryCaptureVariable(VarDecl *Var, SourceLocation Loc,
                              TryCaptureKind Kind, SourceLocation EllipsisLoc) {
  QualType CaptureType;
  QualType DeclRefType;
  return tryCaptureVariable(Var, Loc, Kind, EllipsisLoc,
                            /*BuildAndDiagnose=*/true, CaptureType,
                            DeclRefType, nullptr);
}

bool Sema::NeedToCaptureVariable(VarDecl *Var, SourceLocation Loc) {
  QualType CaptureType;
  QualType DeclRefType;
  return !tryCaptureVariable(Var, Loc, TryCapture_Implicit, SourceLocation(),
                             /*BuildAndDiagnose=*/false, CaptureType,
                             DeclRefType, nullptr);
}

QualType Sema::getCapturedDeclRefType(VarDecl *Var, SourceLocation Loc) {
  QualType CaptureType;
  QualType DeclRefType;

  // Determine whether we can capture this variable.
  if (tryCaptureVariable(Var, Loc, TryCapture_Implicit, SourceLocation(),
                         /*BuildAndDiagnose=*/false, CaptureType,
                         DeclRefType, nullptr))
    return QualType();

  return DeclRefType;
}

// If either the type of the variable or the initializer is dependent,
// return false. Otherwise, determine whether the variable is a constant
// expression. Use this if you need to know if a variable that might or
// might not be dependent is truly a constant expression.
static inline bool IsVariableNonDependentAndAConstantExpression(VarDecl *Var,
    ASTContext &Context) {
  if (Var->getType()->isDependentType())
    return false;
  const VarDecl *DefVD = nullptr;
  Var->getAnyInitializer(DefVD);
  if (!DefVD)
    return false;
  EvaluatedStmt *Eval = DefVD->ensureEvaluatedStmt();
  Expr *Init = cast<Expr>(Eval->Value);
  if (Init->isValueDependent())
    return false;
  return IsVariableAConstantExpression(Var, Context);
}
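// For example, given
//   const int n = 4;
//   auto f = [] { return n; };   // OK: no capture of 'n' is required
// the use of 'n' satisfies the constant-expression requirements and the
// lvalue-to-rvalue conversion is applied immediately, so it is not an
// odr-use; the machinery below arranges for such uses to be marked
// non-odr-used rather than captured.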
  MaybeODRUseExprs.erase(E->IgnoreParens());

  // If we are in a lambda, check if this DeclRefExpr or MemberExpr refers
  // to a variable that is a constant expression, and if so, identify it as
  // a reference to a variable that does not involve an odr-use of that
  // variable.
  if (LambdaScopeInfo *LSI = getCurLambda()) {
    Expr *SansParensExpr = E->IgnoreParens();
    VarDecl *Var = nullptr;
    if (DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(SansParensExpr))
      Var = dyn_cast<VarDecl>(DRE->getFoundDecl());
    else if (MemberExpr *ME = dyn_cast<MemberExpr>(SansParensExpr))
      Var = dyn_cast<VarDecl>(ME->getMemberDecl());

    if (Var && IsVariableNonDependentAndAConstantExpression(Var, Context))
      LSI->markVariableExprAsNonODRUsed(SansParensExpr);
  }
}

ExprResult Sema::ActOnConstantExpression(ExprResult Res) {
  Res = CorrectDelayedTyposInExpr(Res);
  if (!Res.isUsable())
    return Res;

  // If a constant-expression is a reference to a variable where we delay
  // deciding whether it is an odr-use, just assume we will apply the
  // lvalue-to-rvalue conversion.  In the one case where this doesn't happen
  // (a non-type template argument), we have special handling anyway.
  UpdateMarkingForLValueToRValue(Res.get());
  return Res;
}

void Sema::CleanupVarDeclMarking() {
  for (Expr *E : MaybeODRUseExprs) {
    VarDecl *Var;
    SourceLocation Loc;
    if (DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(E)) {
      Var = cast<VarDecl>(DRE->getDecl());
      Loc = DRE->getLocation();
    } else if (MemberExpr *ME = dyn_cast<MemberExpr>(E)) {
      Var = cast<VarDecl>(ME->getMemberDecl());
      Loc = ME->getMemberLoc();
    } else {
      llvm_unreachable("Unexpected expression");
    }

    MarkVarDeclODRUsed(Var, Loc, *this,
                       /*MaxFunctionScopeIndex Pointer*/ nullptr);
  }

  MaybeODRUseExprs.clear();
}

static void DoMarkVarDeclReferenced(Sema &SemaRef, SourceLocation Loc,
                                    VarDecl *Var, Expr *E) {
  assert((!E || isa<DeclRefExpr>(E) || isa<MemberExpr>(E)) &&
         "Invalid Expr argument to DoMarkVarDeclReferenced");
  Var->setReferenced();

  TemplateSpecializationKind TSK = Var->getTemplateSpecializationKind();

  bool OdrUseContext = isOdrUseContext(SemaRef);
  bool NeedDefinition =
      OdrUseContext || (isEvaluatableContext(SemaRef) &&
                        Var->isUsableInConstantExpressions(SemaRef.Context));

  VarTemplateSpecializationDecl *VarSpec =
      dyn_cast<VarTemplateSpecializationDecl>(Var);
  assert(!isa<VarTemplatePartialSpecializationDecl>(Var) &&
         "Can't instantiate a partial template specialization.");

  // If this might be a member specialization of a static data member, check
  // the specialization is visible. We already did the checks for variable
  // template specializations when we created them.
  if (NeedDefinition && TSK != TSK_Undeclared &&
      !isa<VarTemplateSpecializationDecl>(Var))
    SemaRef.checkSpecializationVisibility(Loc, Var);

  // Perform implicit instantiation of static data members, static data member
  // templates of class templates, and variable template specializations. Delay
  // instantiations of variable templates, except for those that could be used
  // in a constant expression.
  if (NeedDefinition && isTemplateInstantiation(TSK)) {
    bool TryInstantiating = TSK == TSK_ImplicitInstantiation;

    if (TryInstantiating && !isa<VarTemplateSpecializationDecl>(Var)) {
      if (Var->getPointOfInstantiation().isInvalid()) {
        // This is a modification of an existing AST node. Notify listeners.
        if (ASTMutationListener *L = SemaRef.getASTMutationListener())
          L->StaticDataMemberInstantiated(Var);
      } else if (!Var->isUsableInConstantExpressions(SemaRef.Context))
        // Don't bother trying to instantiate it again, unless we might need
        // its initializer before we get to the end of the TU.
        TryInstantiating = false;
    }

    if (Var->getPointOfInstantiation().isInvalid())
      Var->setTemplateSpecializationKind(TSK, Loc);

    if (TryInstantiating) {
      SourceLocation PointOfInstantiation = Var->getPointOfInstantiation();
      bool InstantiationDependent = false;
      bool IsNonDependent =
          VarSpec ? !TemplateSpecializationType::anyDependentTemplateArguments(
                        VarSpec->getTemplateArgsInfo(), InstantiationDependent)
                  : true;

      // Do not instantiate specializations that are still type-dependent.
      if (IsNonDependent) {
        if (Var->isUsableInConstantExpressions(SemaRef.Context)) {
          // Do not defer instantiations of variables which could be used in a
          // constant expression.
          SemaRef.InstantiateVariableDefinition(PointOfInstantiation, Var);
        } else {
          SemaRef.PendingInstantiations
              .push_back(std::make_pair(Var, PointOfInstantiation));
        }
      }
    }
  }

  // Per C++11 [basic.def.odr], a variable is odr-used "unless it satisfies
  // the requirements for appearing in a constant expression (5.19) and, if
  // it is an object, the lvalue-to-rvalue conversion (4.1)
  // is immediately applied."  We check the first part here, and
  // Sema::UpdateMarkingForLValueToRValue deals with the second part.
  // Note that we use the C++11 definition everywhere because nothing in
  // C++03 depends on whether we get the C++03 version correct. The second
  // part does not apply to references, since they are not objects.
  if (OdrUseContext && E &&
      IsVariableAConstantExpression(Var, SemaRef.Context)) {
    // A reference initialized by a constant expression can never be
    // odr-used, so simply ignore it.
    if (!Var->getType()->isReferenceType())
      SemaRef.MaybeODRUseExprs.insert(E);
  } else if (OdrUseContext) {
    MarkVarDeclODRUsed(Var, Loc, SemaRef,
                       /*MaxFunctionScopeIndex ptr*/ nullptr);
  } else if (isOdrUseContext(SemaRef, /*SkipDependentUses*/false)) {
    // If this is a dependent context, we don't need to mark variables as
    // odr-used, but we may still need to track them for lambda capture.
    // FIXME: Do we also need to do this inside dependent typeid expressions
    // (which are modeled as unevaluated at this point)?
    const bool RefersToEnclosingScope =
        (SemaRef.CurContext != Var->getDeclContext() &&
         Var->getDeclContext()->isFunctionOrMethod() &&
         Var->hasLocalStorage());
    if (RefersToEnclosingScope) {
      if (LambdaScopeInfo *const LSI =
              SemaRef.getCurLambda(/*IgnoreCapturedRegions=*/true)) {
        // If a variable could potentially be odr-used, defer marking it so
        // until we finish analyzing the full expression for any
        // lvalue-to-rvalue or discarded value conversions that would
        // obviate odr-use.
        // Add it to the list of potential captures that will be analyzed
        // later (ActOnFinishFullExpr) for eventual capture and odr-use
        // marking, unless the variable is a reference that was initialized
        // by a constant expression (this will never need to be captured or
        // odr-used).
        assert(E && "Capture variable should be used in an expression.");
        if (!Var->getType()->isReferenceType() ||
            !IsVariableNonDependentAndAConstantExpression(Var,
                                                          SemaRef.Context))
          LSI->addPotentialCapture(E->IgnoreParens());
      }
    }
  }
}
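// To illustrate the deferral above: in
//   void g() { const int k = 3; [=] { return k + 0; }; }
// the reference to 'k' is only recorded as a potential capture while the
// full expression is analyzed; because the read of a constant applies the
// lvalue-to-rvalue conversion immediately, the capture and odr-use
// ultimately turn out to be unnecessary.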
/// \brief Mark a variable referenced, and check whether it is odr-used
/// (C++ [basic.def.odr]p2, C99 6.9p3).  Note that this should not be
/// used directly for normal expressions referring to VarDecl.
void Sema::MarkVariableReferenced(SourceLocation Loc, VarDecl *Var) {
  DoMarkVarDeclReferenced(*this, Loc, Var, nullptr);
}

static void MarkExprReferenced(Sema &SemaRef, SourceLocation Loc,
                               Decl *D, Expr *E, bool MightBeOdrUse) {
  if (SemaRef.isInOpenMPDeclareTargetContext())
    SemaRef.checkDeclIsAllowedInOpenMPTarget(E, D);

  if (VarDecl *Var = dyn_cast<VarDecl>(D)) {
    DoMarkVarDeclReferenced(SemaRef, Loc, Var, E);
    return;
  }

  SemaRef.MarkAnyDeclReferenced(Loc, D, MightBeOdrUse);

  // If this is a call to a method via a cast, also mark the method in the
  // derived class used in case codegen can devirtualize the call.
  const MemberExpr *ME = dyn_cast<MemberExpr>(E);
  if (!ME)
    return;
  CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(ME->getMemberDecl());
  if (!MD)
    return;
  // Only attempt to devirtualize if this is truly a virtual call.
  bool IsVirtualCall = MD->isVirtual() &&
                       ME->performsVirtualDispatch(SemaRef.getLangOpts());
  if (!IsVirtualCall)
    return;
  const Expr *Base = ME->getBase();
  const CXXRecordDecl *MostDerivedClassDecl = Base->getBestDynamicClassType();
  if (!MostDerivedClassDecl)
    return;
  CXXMethodDecl *DM = MD->getCorrespondingMethodInClass(MostDerivedClassDecl);
  if (!DM || DM->isPure())
    return;
  SemaRef.MarkAnyDeclReferenced(Loc, DM, MightBeOdrUse);
}

/// \brief Perform reference-marking and odr-use handling for a DeclRefExpr.
void Sema::MarkDeclRefReferenced(DeclRefExpr *E) {
  // TODO: update this with DR# once a defect report is filed.
  // C++11 defect. The address of a pure member should not be an ODR use, even
  // if it's a qualified reference.
  bool OdrUse = true;
  if (CXXMethodDecl *Method = dyn_cast<CXXMethodDecl>(E->getDecl()))
    if (Method->isVirtual())
      OdrUse = false;
  MarkExprReferenced(*this, E->getLocation(), E->getDecl(), E, OdrUse);
}

/// \brief Perform reference-marking and odr-use handling for a MemberExpr.
void Sema::MarkMemberReferenced(MemberExpr *E) {
  // C++11 [basic.def.odr]p2:
  //   A non-overloaded function whose name appears as a potentially-evaluated
  //   expression or a member of a set of candidate functions, if selected by
  //   overload resolution when referred to from a potentially-evaluated
  //   expression, is odr-used, unless it is a pure virtual function and its
  //   name is not explicitly qualified.
  bool MightBeOdrUse = true;
  if (E->performsVirtualDispatch(getLangOpts())) {
    if (CXXMethodDecl *Method = dyn_cast<CXXMethodDecl>(E->getMemberDecl()))
      if (Method->isPure())
        MightBeOdrUse = false;
  }
  SourceLocation Loc = E->getMemberLoc().isValid() ?
                            E->getMemberLoc() : E->getLocStart();
  MarkExprReferenced(*this, Loc, E->getMemberDecl(), E, MightBeOdrUse);
}

/// \brief Perform marking for a reference to an arbitrary declaration.  It
/// marks the declaration referenced, and performs odr-use checking for
/// functions and variables. This method should not be used when building a
/// normal expression which refers to a variable.
void Sema::MarkAnyDeclReferenced(SourceLocation Loc, Decl *D,
                                 bool MightBeOdrUse) {
  if (MightBeOdrUse) {
    if (auto *VD = dyn_cast<VarDecl>(D)) {
      MarkVariableReferenced(Loc, VD);
      return;
    }
  }
  if (auto *FD = dyn_cast<FunctionDecl>(D)) {
    MarkFunctionReferenced(Loc, FD, MightBeOdrUse);
    return;
  }
  D->setReferenced();
}
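// For example, a call 'p->f()' where 'f' is pure virtual and dispatched
// virtually does not odr-use 'f', so an abstract interface need not provide
// a definition for it; writing 'p->Base::f()' instead names 'f' with an
// explicit qualification and does odr-use it.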
namespace {
  // Mark all of the declarations used by a type as referenced.
  // FIXME: Not fully implemented yet! We need to have a better understanding
  // of when we're entering a context we should not recurse into.
  // FIXME: This and EvaluatedExprMarker are more-or-less equivalent to
  // TreeTransforms rebuilding the type in a new context. Rather than
  // duplicating the TreeTransform logic, we should consider reusing it here.
  // Currently that causes problems when rebuilding LambdaExprs.
  class MarkReferencedDecls : public RecursiveASTVisitor<MarkReferencedDecls> {
    Sema &S;
    SourceLocation Loc;

  public:
    typedef RecursiveASTVisitor<MarkReferencedDecls> Inherited;

    MarkReferencedDecls(Sema &S, SourceLocation Loc) : S(S), Loc(Loc) { }

    bool TraverseTemplateArgument(const TemplateArgument &Arg);
  };
}

bool MarkReferencedDecls::TraverseTemplateArgument(
    const TemplateArgument &Arg) {
  {
    // A non-type template argument is a constant-evaluated context.
    EnterExpressionEvaluationContext Evaluated(S, Sema::ConstantEvaluated);
    if (Arg.getKind() == TemplateArgument::Declaration) {
      if (Decl *D = Arg.getAsDecl())
        S.MarkAnyDeclReferenced(Loc, D, true);
    } else if (Arg.getKind() == TemplateArgument::Expression) {
      S.MarkDeclarationsReferencedInExpr(Arg.getAsExpr(), false);
    }
  }
  return Inherited::TraverseTemplateArgument(Arg);
}

void Sema::MarkDeclarationsReferencedInType(SourceLocation Loc, QualType T) {
  MarkReferencedDecls Marker(*this, Loc);
  Marker.TraverseType(T);
}

namespace {
  /// \brief Helper class that marks all of the declarations referenced by
  /// potentially-evaluated subexpressions as "referenced".
  class EvaluatedExprMarker : public EvaluatedExprVisitor<EvaluatedExprMarker> {
    Sema &S;
    bool SkipLocalVariables;

  public:
    typedef EvaluatedExprVisitor<EvaluatedExprMarker> Inherited;

    EvaluatedExprMarker(Sema &S, bool SkipLocalVariables)
      : Inherited(S.Context), S(S), SkipLocalVariables(SkipLocalVariables) { }

    void VisitDeclRefExpr(DeclRefExpr *E) {
      // If we were asked not to visit local variables, don't.
      if (SkipLocalVariables) {
        if (VarDecl *VD = dyn_cast<VarDecl>(E->getDecl()))
          if (VD->hasLocalStorage())
            return;
      }

      S.MarkDeclRefReferenced(E);
    }

    void VisitMemberExpr(MemberExpr *E) {
      S.MarkMemberReferenced(E);
      Inherited::VisitMemberExpr(E);
    }

    void VisitCXXBindTemporaryExpr(CXXBindTemporaryExpr *E) {
      S.MarkFunctionReferenced(
          E->getLocStart(),
          const_cast<CXXDestructorDecl *>(E->getTemporary()->getDestructor()));
      Visit(E->getSubExpr());
    }

    void VisitCXXNewExpr(CXXNewExpr *E) {
      if (E->getOperatorNew())
        S.MarkFunctionReferenced(E->getLocStart(), E->getOperatorNew());
      if (E->getOperatorDelete())
        S.MarkFunctionReferenced(E->getLocStart(), E->getOperatorDelete());
      Inherited::VisitCXXNewExpr(E);
    }

    void VisitCXXDeleteExpr(CXXDeleteExpr *E) {
      if (E->getOperatorDelete())
        S.MarkFunctionReferenced(E->getLocStart(), E->getOperatorDelete());
      QualType Destroyed = S.Context.getBaseElementType(E->getDestroyedType());
      if (const RecordType *DestroyedRec = Destroyed->getAs<RecordType>()) {
        CXXRecordDecl *Record = cast<CXXRecordDecl>(DestroyedRec->getDecl());
        S.MarkFunctionReferenced(E->getLocStart(),
                                 S.LookupDestructor(Record));
      }

      Inherited::VisitCXXDeleteExpr(E);
    }

    void VisitCXXConstructExpr(CXXConstructExpr *E) {
      S.MarkFunctionReferenced(E->getLocStart(), E->getConstructor());
      Inherited::VisitCXXConstructExpr(E);
    }

    void VisitCXXDefaultArgExpr(CXXDefaultArgExpr *E) {
      Visit(E->getExpr());
    }

    void VisitImplicitCastExpr(ImplicitCastExpr *E) {
      Inherited::VisitImplicitCastExpr(E);

      if (E->getCastKind() == CK_LValueToRValue)
        S.UpdateMarkingForLValueToRValue(E->getSubExpr());
    }
  };
}

/// \brief Mark any declarations that appear within this expression or any
/// potentially-evaluated subexpressions as "referenced".
///
/// \param SkipLocalVariables If true, don't mark local variables as
/// 'referenced'.
void Sema::MarkDeclarationsReferencedInExpr(Expr *E,
                                            bool SkipLocalVariables) {
  EvaluatedExprMarker(*this, SkipLocalVariables).Visit(E);
}
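// For example, visiting 'delete p' above marks both the operator delete in
// use and the destructor of the pointee's record type as referenced, even
// though neither is named in the source.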
/// \brief Emit a diagnostic that describes an effect on the run-time behavior
/// of the program being compiled.
///
/// This routine emits the given diagnostic when the code currently being
/// type-checked is "potentially evaluated", meaning that there is a
/// possibility that the code will actually be executable. Code in sizeof()
/// expressions, code used only during overload resolution, etc., are not
/// potentially evaluated. This routine will suppress such diagnostics or,
/// in the absolutely nutty case of potentially potentially evaluated
/// expressions (C++ typeid), queue the diagnostic to potentially emit it
/// later.
///
/// This routine should be used for all diagnostics that describe the run-time
/// behavior of a program, such as passing a non-POD value through an ellipsis.
/// Failure to do so will likely result in spurious diagnostics or failures
/// during overload resolution or within sizeof/alignof/typeof/typeid.
bool Sema::DiagRuntimeBehavior(SourceLocation Loc, const Stmt *Statement,
                               const PartialDiagnostic &PD) {
  switch (ExprEvalContexts.back().Context) {
  case Unevaluated:
  case UnevaluatedList:
  case UnevaluatedAbstract:
  case DiscardedStatement:
    // The argument will never be evaluated, so don't complain.
    break;

  case ConstantEvaluated:
    // Relevant diagnostics should be produced by constant evaluation.
    break;

  case PotentiallyEvaluated:
  case PotentiallyEvaluatedIfUsed:
    if (Statement && getCurFunctionOrMethodDecl()) {
      FunctionScopes.back()->PossiblyUnreachableDiags.
        push_back(sema::PossiblyUnreachableDiag(PD, Loc, Statement));
    } else
      Diag(Loc, PD);

    return true;
  }

  return false;
}

bool Sema::CheckCallReturnType(QualType ReturnType, SourceLocation Loc,
                               CallExpr *CE, FunctionDecl *FD) {
  if (ReturnType->isVoidType() || !ReturnType->isIncompleteType())
    return false;

  // If we're inside a decltype's expression, don't check for a valid return
  // type or construct temporaries until we know whether this is the last call.
  if (ExprEvalContexts.back().IsDecltype) {
    ExprEvalContexts.back().DelayedDecltypeCalls.push_back(CE);
    return false;
  }

  class CallReturnIncompleteDiagnoser : public TypeDiagnoser {
    FunctionDecl *FD;
    CallExpr *CE;

  public:
    CallReturnIncompleteDiagnoser(FunctionDecl *FD, CallExpr *CE)
      : FD(FD), CE(CE) { }

    void diagnose(Sema &S, SourceLocation Loc, QualType T) override {
      if (!FD) {
        S.Diag(Loc, diag::err_call_incomplete_return)
          << T << CE->getSourceRange();
        return;
      }

      S.Diag(Loc, diag::err_call_function_incomplete_return)
        << CE->getSourceRange() << FD->getDeclName() << T;
      S.Diag(FD->getLocation(), diag::note_entity_declared_at)
        << FD->getDeclName();
    }
  } Diagnoser(FD, CE);

  if (RequireCompleteType(Loc, ReturnType, Diagnoser))
    return true;

  return false;
}
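// For example, given 'struct Incomplete; Incomplete get();', a call written
// inside 'decltype(get())' is delayed by the check above and is valid, while
// an ordinary statement 'get();' requires the return type to be complete and
// is diagnosed.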
// Diagnose the s/=/==/ and s/\|=/!=/ typos. Note that adding parentheses
// will prevent this condition from triggering, which is what we want.
void Sema::DiagnoseAssignmentAsCondition(Expr *E) {
  SourceLocation Loc;

  unsigned diagnostic = diag::warn_condition_is_assignment;
  bool IsOrAssign = false;

  if (BinaryOperator *Op = dyn_cast<BinaryOperator>(E)) {
    if (Op->getOpcode() != BO_Assign && Op->getOpcode() != BO_OrAssign)
      return;

    IsOrAssign = Op->getOpcode() == BO_OrAssign;

    // Greylist some idioms by putting them into a warning subcategory.
    if (ObjCMessageExpr *ME
          = dyn_cast<ObjCMessageExpr>(Op->getRHS()->IgnoreParenCasts())) {
      Selector Sel = ME->getSelector();

      // self = [<foo> init...]
      if (isSelfExpr(Op->getLHS()) && ME->getMethodFamily() == OMF_init)
        diagnostic = diag::warn_condition_is_idiomatic_assignment;

      // <foo> = [<bar> nextObject]
      else if (Sel.isUnarySelector() && Sel.getNameForSlot(0) == "nextObject")
        diagnostic = diag::warn_condition_is_idiomatic_assignment;
    }

    Loc = Op->getOperatorLoc();
  } else if (CXXOperatorCallExpr *Op = dyn_cast<CXXOperatorCallExpr>(E)) {
    if (Op->getOperator() != OO_Equal && Op->getOperator() != OO_PipeEqual)
      return;

    IsOrAssign = Op->getOperator() == OO_PipeEqual;
    Loc = Op->getOperatorLoc();
  } else if (PseudoObjectExpr *POE = dyn_cast<PseudoObjectExpr>(E))
    return DiagnoseAssignmentAsCondition(POE->getSyntacticForm());
  else {
    // Not an assignment.
    return;
  }

  Diag(Loc, diagnostic) << E->getSourceRange();

  SourceLocation Open = E->getLocStart();
  SourceLocation Close = getLocForEndOfToken(E->getSourceRange().getEnd());
  Diag(Loc, diag::note_condition_assign_silence)
        << FixItHint::CreateInsertion(Open, "(")
        << FixItHint::CreateInsertion(Close, ")");

  if (IsOrAssign)
    Diag(Loc, diag::note_condition_or_assign_to_comparison)
      << FixItHint::CreateReplacement(Loc, "!=");
  else
    Diag(Loc, diag::note_condition_assign_to_comparison)
      << FixItHint::CreateReplacement(Loc, "==");
}

/// \brief Redundant parentheses over an equality comparison can indicate
/// that the user intended an assignment used as condition.
void Sema::DiagnoseEqualityWithExtraParens(ParenExpr *ParenE) {
  // Don't warn if the parens came from a macro.
  SourceLocation parenLoc = ParenE->getLocStart();
  if (parenLoc.isInvalid() || parenLoc.isMacroID())
    return;
  // Don't warn for dependent expressions.
  if (ParenE->isTypeDependent())
    return;

  Expr *E = ParenE->IgnoreParens();

  if (BinaryOperator *opE = dyn_cast<BinaryOperator>(E))
    if (opE->getOpcode() == BO_EQ &&
        opE->getLHS()->IgnoreParenImpCasts()->isModifiableLvalue(Context)
                                                           == Expr::MLV_Valid) {
      SourceLocation Loc = opE->getOperatorLoc();

      Diag(Loc, diag::warn_equality_with_extra_parens) << E->getSourceRange();
      SourceRange ParenERange = ParenE->getSourceRange();
      Diag(Loc, diag::note_equality_comparison_silence)
        << FixItHint::CreateRemoval(ParenERange.getBegin())
        << FixItHint::CreateRemoval(ParenERange.getEnd());
      Diag(Loc, diag::note_equality_comparison_to_assign)
        << FixItHint::CreateReplacement(Loc, "=");
    }
}

ExprResult Sema::CheckBooleanCondition(SourceLocation Loc, Expr *E,
                                       bool IsConstexpr) {
  DiagnoseAssignmentAsCondition(E);
  if (ParenExpr *parenE = dyn_cast<ParenExpr>(E))
    DiagnoseEqualityWithExtraParens(parenE);

  ExprResult result = CheckPlaceholderExpr(E);
  if (result.isInvalid()) return ExprError();
  E = result.get();

  if (!E->isTypeDependent()) {
    if (getLangOpts().CPlusPlus)
      return CheckCXXBooleanCondition(E, IsConstexpr); // C++ 6.4p4

    ExprResult ERes = DefaultFunctionArrayLvalueConversion(E);
    if (ERes.isInvalid())
      return ExprError();
    E = ERes.get();

    QualType T = E->getType();
    if (!T->isScalarType()) { // C99 6.8.4.1p1
      Diag(Loc, diag::err_typecheck_statement_requires_scalar)
        << T << E->getSourceRange();
      return ExprError();
    }
    CheckBoolLikeConversion(E, Loc);
  }

  return E;
}
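// For example, 'if (x = 0)' warns that the condition is an assignment and
// offers fix-its: wrap it in an extra set of parentheses to silence the
// warning, or replace '=' with '==' if a comparison was intended.
// Conversely, 'if ((x == y))' warns that the redundant parentheses suggest
// an intended assignment.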
Sema::ConditionResult Sema::ActOnCondition(Scope *S, SourceLocation Loc,
                                           Expr *SubExpr, ConditionKind CK) {
  // Empty conditions are valid in for-statements.
  if (!SubExpr)
    return ConditionResult();

  ExprResult Cond;
  switch (CK) {
  case ConditionKind::Boolean:
    Cond = CheckBooleanCondition(Loc, SubExpr);
    break;

  case ConditionKind::ConstexprIf:
    Cond = CheckBooleanCondition(Loc, SubExpr, true);
    break;

  case ConditionKind::Switch:
    Cond = CheckSwitchCondition(Loc, SubExpr);
    break;
  }
  if (Cond.isInvalid())
    return ConditionError();

  // FIXME: FullExprArg doesn't have an invalid bit, so check nullness instead.
  FullExprArg FullExpr = MakeFullExpr(Cond.get(), Loc);
  if (!FullExpr.get())
    return ConditionError();

  return ConditionResult(*this, nullptr, FullExpr,
                         CK == ConditionKind::ConstexprIf);
}

namespace {
  /// A visitor for rebuilding a call to an __unknown_any expression
  /// to have an appropriate type.
  struct RebuildUnknownAnyFunction
    : StmtVisitor<RebuildUnknownAnyFunction, ExprResult> {

    Sema &S;

    RebuildUnknownAnyFunction(Sema &S) : S(S) {}

    ExprResult VisitStmt(Stmt *S) {
      llvm_unreachable("unexpected statement!");
    }

    ExprResult VisitExpr(Expr *E) {
      S.Diag(E->getExprLoc(), diag::err_unsupported_unknown_any_call)
        << E->getSourceRange();
      return ExprError();
    }

    /// Rebuild an expression which simply semantically wraps another
    /// expression which it shares the type and value kind of.
    template <class T> ExprResult rebuildSugarExpr(T *E) {
      ExprResult SubResult = Visit(E->getSubExpr());
      if (SubResult.isInvalid()) return ExprError();

      Expr *SubExpr = SubResult.get();
      E->setSubExpr(SubExpr);
      E->setType(SubExpr->getType());
      E->setValueKind(SubExpr->getValueKind());
      assert(E->getObjectKind() == OK_Ordinary);
      return E;
    }

    ExprResult VisitParenExpr(ParenExpr *E) {
      return rebuildSugarExpr(E);
    }

    ExprResult VisitUnaryExtension(UnaryOperator *E) {
      return rebuildSugarExpr(E);
    }

    ExprResult VisitUnaryAddrOf(UnaryOperator *E) {
      ExprResult SubResult = Visit(E->getSubExpr());
      if (SubResult.isInvalid()) return ExprError();

      Expr *SubExpr = SubResult.get();
      E->setSubExpr(SubExpr);
      E->setType(S.Context.getPointerType(SubExpr->getType()));
      assert(E->getValueKind() == VK_RValue);
      assert(E->getObjectKind() == OK_Ordinary);
      return E;
    }

    ExprResult resolveDecl(Expr *E, ValueDecl *VD) {
      if (!isa<FunctionDecl>(VD)) return VisitExpr(E);

      E->setType(VD->getType());

      assert(E->getValueKind() == VK_RValue);
      if (S.getLangOpts().CPlusPlus &&
          !(isa<CXXMethodDecl>(VD) &&
            cast<CXXMethodDecl>(VD)->isInstance()))
        E->setValueKind(VK_LValue);

      return E;
    }

    ExprResult VisitMemberExpr(MemberExpr *E) {
      return resolveDecl(E, E->getMemberDecl());
    }

    ExprResult VisitDeclRefExpr(DeclRefExpr *E) {
      return resolveDecl(E, E->getDecl());
    }
  };
}

/// Given a function expression of unknown-any type, try to rebuild it
/// to have a function type.
static ExprResult rebuildUnknownAnyFunction(Sema &S, Expr *FunctionExpr) {
  ExprResult Result = RebuildUnknownAnyFunction(S).Visit(FunctionExpr);
  if (Result.isInvalid()) return ExprError();
  return S.DefaultFunctionArrayConversion(Result.get());
}
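// For illustration: __unknown_anytype is used by debugger expression
// evaluation when a declaration's type is not known, e.g.
//   extern "C" __unknown_anytype foo();
// An expression such as '(int)foo()' is then rebuilt by the visitor below
// so that the call and its callee acquire concrete types.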
namespace {
  /// A visitor for rebuilding an expression of type __unknown_anytype
  /// into one which resolves the type directly on the referring
  /// expression.  Strict preservation of the original source
  /// structure is not a goal.
  struct RebuildUnknownAnyExpr
    : StmtVisitor<RebuildUnknownAnyExpr, ExprResult> {

    Sema &S;

    /// The current destination type.
    QualType DestType;

    RebuildUnknownAnyExpr(Sema &S, QualType CastType)
      : S(S), DestType(CastType) {}

    ExprResult VisitStmt(Stmt *S) {
      llvm_unreachable("unexpected statement!");
    }

    ExprResult VisitExpr(Expr *E) {
      S.Diag(E->getExprLoc(), diag::err_unsupported_unknown_any_expr)
        << E->getSourceRange();
      return ExprError();
    }

    ExprResult VisitCallExpr(CallExpr *E);
    ExprResult VisitObjCMessageExpr(ObjCMessageExpr *E);

    /// Rebuild an expression which simply semantically wraps another
    /// expression which it shares the type and value kind of.
    template <class T> ExprResult rebuildSugarExpr(T *E) {
      ExprResult SubResult = Visit(E->getSubExpr());
      if (SubResult.isInvalid()) return ExprError();
      Expr *SubExpr = SubResult.get();
      E->setSubExpr(SubExpr);
      E->setType(SubExpr->getType());
      E->setValueKind(SubExpr->getValueKind());
      assert(E->getObjectKind() == OK_Ordinary);
      return E;
    }

    ExprResult VisitParenExpr(ParenExpr *E) {
      return rebuildSugarExpr(E);
    }

    ExprResult VisitUnaryExtension(UnaryOperator *E) {
      return rebuildSugarExpr(E);
    }

    ExprResult VisitUnaryAddrOf(UnaryOperator *E) {
      const PointerType *Ptr = DestType->getAs<PointerType>();
      if (!Ptr) {
        S.Diag(E->getOperatorLoc(), diag::err_unknown_any_addrof)
          << E->getSourceRange();
        return ExprError();
      }

      if (isa<CallExpr>(E->getSubExpr())) {
        S.Diag(E->getOperatorLoc(), diag::err_unknown_any_addrof_call)
          << E->getSourceRange();
        return ExprError();
      }

      assert(E->getValueKind() == VK_RValue);
      assert(E->getObjectKind() == OK_Ordinary);
      E->setType(DestType);

      // Build the sub-expression as if it were an object of the pointee type.
      DestType = Ptr->getPointeeType();
      ExprResult SubResult = Visit(E->getSubExpr());
      if (SubResult.isInvalid()) return ExprError();
      E->setSubExpr(SubResult.get());
      return E;
    }

    ExprResult VisitImplicitCastExpr(ImplicitCastExpr *E);

    ExprResult resolveDecl(Expr *E, ValueDecl *VD);

    ExprResult VisitMemberExpr(MemberExpr *E) {
      return resolveDecl(E, E->getMemberDecl());
    }

    ExprResult VisitDeclRefExpr(DeclRefExpr *E) {
      return resolveDecl(E, E->getDecl());
    }
  };
}

/// Rebuilds a call expression which yielded __unknown_anytype.
ExprResult RebuildUnknownAnyExpr::VisitCallExpr(CallExpr *E) {
  Expr *CalleeExpr = E->getCallee();

  enum FnKind {
    FK_MemberFunction,
    FK_FunctionPointer,
    FK_BlockPointer
  };

  FnKind Kind;
  QualType CalleeType = CalleeExpr->getType();
  if (CalleeType == S.Context.BoundMemberTy) {
    assert(isa<CXXMemberCallExpr>(E) || isa<CXXOperatorCallExpr>(E));
    Kind = FK_MemberFunction;
    CalleeType = Expr::findBoundMemberType(CalleeExpr);
  } else if (const PointerType *Ptr = CalleeType->getAs<PointerType>()) {
    CalleeType = Ptr->getPointeeType();
    Kind = FK_FunctionPointer;
  } else {
    CalleeType = CalleeType->castAs<BlockPointerType>()->getPointeeType();
    Kind = FK_BlockPointer;
  }
  const FunctionType *FnType = CalleeType->castAs<FunctionType>();

  // Verify that this is a legal result type of a function.
  if (DestType->isArrayType() || DestType->isFunctionType()) {
    unsigned diagID = diag::err_func_returning_array_function;
    if (Kind == FK_BlockPointer)
      diagID = diag::err_block_returning_array_function;

    S.Diag(E->getExprLoc(), diagID)
      << DestType->isFunctionType() << DestType;
    return ExprError();
  }

  // Otherwise, go ahead and set DestType as the call's result.
  E->setType(DestType.getNonLValueExprType(S.Context));
  E->setValueKind(Expr::getValueKindForType(DestType));
  assert(E->getObjectKind() == OK_Ordinary);

  // Rebuild the function type, replacing the result type with DestType.
  const FunctionProtoType *Proto = dyn_cast<FunctionProtoType>(FnType);
  if (Proto) {
    // __unknown_anytype(...) is a special case used by the debugger when
    // it has no idea what a function's signature is.
    //
    // We want to build this call essentially under the K&R
    // unprototyped rules, but making a FunctionNoProtoType in C++
    // would foul up all sorts of assumptions.  However, we cannot
    // simply pass all arguments as variadic arguments, nor can we
    // portably just call the function under a non-variadic type; see
    // the comment on IR-gen's TargetInfo::isNoProtoCallVariadic.
    // However, it turns out that in practice it is generally safe to
    // call a function declared as "A foo(B,C,D);" under the prototype
    // "A foo(B,C,D,...);".  The only known exception is with the
    // Windows ABI, where any variadic function is implicitly cdecl
    // regardless of its normal CC.  Therefore we change the parameter
    // types to match the types of the arguments.
    //
    // This is a hack, but it is far superior to moving the
    // corresponding target-specific code from IR-gen to Sema/AST.

    ArrayRef<QualType> ParamTypes = Proto->getParamTypes();
    SmallVector<QualType, 8> ArgTypes;
    if (ParamTypes.empty() && Proto->isVariadic()) { // the special case
      ArgTypes.reserve(E->getNumArgs());
      for (unsigned i = 0, e = E->getNumArgs(); i != e; ++i) {
        Expr *Arg = E->getArg(i);
        QualType ArgType = Arg->getType();
        if (E->isLValue()) {
          ArgType = S.Context.getLValueReferenceType(ArgType);
        } else if (E->isXValue()) {
          ArgType = S.Context.getRValueReferenceType(ArgType);
        }
        ArgTypes.push_back(ArgType);
      }
      ParamTypes = ArgTypes;
    }
    DestType = S.Context.getFunctionType(DestType, ParamTypes,
                                         Proto->getExtProtoInfo());
  } else {
    DestType = S.Context.getFunctionNoProtoType(DestType,
                                                FnType->getExtInfo());
  }

  // Rebuild the appropriate pointer-to-function type.
  switch (Kind) {
  case FK_MemberFunction:
    // Nothing to do.
    break;

  case FK_FunctionPointer:
    DestType = S.Context.getPointerType(DestType);
    break;

  case FK_BlockPointer:
    DestType = S.Context.getBlockPointerType(DestType);
    break;
  }

  // Finally, we can recurse.
  ExprResult CalleeResult = Visit(CalleeExpr);
  if (!CalleeResult.isUsable()) return ExprError();
  E->setCallee(CalleeResult.get());

  // Bind a temporary if necessary.
  return S.MaybeBindToTemporary(E);
}

ExprResult RebuildUnknownAnyExpr::VisitObjCMessageExpr(ObjCMessageExpr *E) {
  // Verify that this is a legal result type of a call.
  if (DestType->isArrayType() || DestType->isFunctionType()) {
    S.Diag(E->getExprLoc(), diag::err_func_returning_array_function)
      << DestType->isFunctionType() << DestType;
    return ExprError();
  }

  // Rewrite the method result type if available.
  if (ObjCMethodDecl *Method = E->getMethodDecl()) {
    assert(Method->getReturnType() == S.Context.UnknownAnyTy);
    Method->setReturnType(DestType);
  }

  // Change the type of the message.
  E->setType(DestType.getNonReferenceType());
  E->setValueKind(Expr::getValueKindForType(DestType));

  return S.MaybeBindToTemporary(E);
}

ExprResult RebuildUnknownAnyExpr::VisitImplicitCastExpr(ImplicitCastExpr *E) {
  // The only case we should ever see here is a function-to-pointer decay.
  if (E->getCastKind() == CK_FunctionToPointerDecay) {
    assert(E->getValueKind() == VK_RValue);
    assert(E->getObjectKind() == OK_Ordinary);

    E->setType(DestType);

    // Rebuild the sub-expression as the pointee (function) type.
    DestType = DestType->castAs<PointerType>()->getPointeeType();

    ExprResult Result = Visit(E->getSubExpr());
    if (!Result.isUsable()) return ExprError();

    E->setSubExpr(Result.get());
    return E;
  } else if (E->getCastKind() == CK_LValueToRValue) {
    assert(E->getValueKind() == VK_RValue);
    assert(E->getObjectKind() == OK_Ordinary);

    assert(isa<FunctionType>(E->getType()));
    E->setType(DestType);

    // The sub-expression has to be a lvalue reference, so rebuild it as such.
    DestType = S.Context.getLValueReferenceType(DestType);

    ExprResult Result = Visit(E->getSubExpr());
    if (!Result.isUsable()) return ExprError();

    E->setSubExpr(Result.get());
    return E;
  } else {
    llvm_unreachable("Unhandled cast type!");
  }
}

ExprResult RebuildUnknownAnyExpr::resolveDecl(Expr *E, ValueDecl *VD) {
  ExprValueKind ValueKind = VK_LValue;
  QualType Type = DestType;

  // We know how to make this work for certain kinds of decls:

  //  - functions
  if (FunctionDecl *FD = dyn_cast<FunctionDecl>(VD)) {
    if (const PointerType *Ptr = Type->getAs<PointerType>()) {
      DestType = Ptr->getPointeeType();
      ExprResult Result = resolveDecl(E, VD);
      if (Result.isInvalid()) return ExprError();
      return S.ImpCastExprToType(Result.get(), Type,
                                 CK_FunctionToPointerDecay, VK_RValue);
    }

    if (!Type->isFunctionType()) {
      S.Diag(E->getExprLoc(), diag::err_unknown_any_function)
        << VD << E->getSourceRange();
      return ExprError();
    }
    if (const FunctionProtoType *FT = Type->getAs<FunctionProtoType>()) {
      // We must match the FunctionDecl's type to the hack introduced in
      // RebuildUnknownAnyExpr::VisitCallExpr to vararg functions of unknown
      // type. See the lengthy commentary in that routine.
      QualType FDT = FD->getType();
      const FunctionType *FnType = FDT->castAs<FunctionType>();
      const FunctionProtoType *Proto =
          dyn_cast_or_null<FunctionProtoType>(FnType);
      DeclRefExpr *DRE = dyn_cast<DeclRefExpr>(E);
      if (DRE && Proto && Proto->getParamTypes().empty() &&
          Proto->isVariadic()) {
        SourceLocation Loc = FD->getLocation();
        FunctionDecl *NewFD = FunctionDecl::Create(FD->getASTContext(),
                                      FD->getDeclContext(),
                                      Loc, Loc, FD->getNameInfo().getName(),
                                      DestType, FD->getTypeSourceInfo(),
                                      SC_None, false/*isInlineSpecified*/,
                                      FD->hasPrototype(),
                                      false/*isConstexprSpecified*/);

        if (FD->getQualifier())
          NewFD->setQualifierInfo(FD->getQualifierLoc());

        SmallVector<ParmVarDecl*, 16> Params;
        for (const auto &AI : FT->param_types()) {
          ParmVarDecl *Param =
            S.BuildParmVarDeclForTypedef(FD, Loc, AI);
          Param->setScopeInfo(0, Params.size());
          Params.push_back(Param);
        }
        NewFD->setParams(Params);
        DRE->setDecl(NewFD);
        VD = DRE->getDecl();
      }
    }

    if (CXXMethodDecl *MD = dyn_cast<CXXMethodDecl>(FD))
      if (MD->isInstance()) {
        ValueKind = VK_RValue;
        Type = S.Context.BoundMemberTy;
      }

    // Function references aren't l-values in C.
    if (!S.getLangOpts().CPlusPlus)
      ValueKind = VK_RValue;

  //  - variables
  } else if (isa<VarDecl>(VD)) {
    if (const ReferenceType *RefTy = Type->getAs<ReferenceType>()) {
      Type = RefTy->getPointeeType();
    } else if (Type->isFunctionType()) {
      S.Diag(E->getExprLoc(), diag::err_unknown_any_var_function_type)
        << VD << E->getSourceRange();
      return ExprError();
    }

  //  - nothing else
  } else {
    S.Diag(E->getExprLoc(), diag::err_unsupported_unknown_any_decl)
      << VD << E->getSourceRange();
    return ExprError();
  }

  // Modifying the declaration like this is friendly to IR-gen but
  // also really dangerous.
  VD->setType(DestType);
  E->setType(Type);
  E->setValueKind(ValueKind);
  return E;
}

/// Check a cast of an unknown-any type.  We intentionally only
/// trigger this for C-style casts.
ExprResult Sema::checkUnknownAnyCast(SourceRange TypeRange, QualType CastType,
                                     Expr *CastExpr, CastKind &CastKind,
                                     ExprValueKind &VK, CXXCastPath &Path) {
  // The type we're casting to must be either void or complete.
  if (!CastType->isVoidType() &&
      RequireCompleteType(TypeRange.getBegin(), CastType,
                          diag::err_typecheck_cast_to_incomplete))
    return ExprError();

  // Rewrite the casted expression from scratch.
  ExprResult result = RebuildUnknownAnyExpr(*this, CastType).Visit(CastExpr);
  if (!result.isUsable()) return ExprError();

  CastExpr = result.get();
  VK = CastExpr->getValueKind();
  CastKind = CK_NoOp;

  return CastExpr;
}

ExprResult Sema::forceUnknownAnyToType(Expr *E, QualType ToType) {
  return RebuildUnknownAnyExpr(*this, ToType).Visit(E);
}

ExprResult Sema::checkUnknownAnyArg(SourceLocation callLoc,
                                    Expr *arg, QualType &paramType) {
  // If the syntactic form of the argument is not an explicit cast of
  // any sort, just do default argument promotion.
  ExplicitCastExpr *castArg = dyn_cast<ExplicitCastExpr>(arg->IgnoreParens());
  if (!castArg) {
    ExprResult result = DefaultArgumentPromotion(arg);
    if (result.isInvalid()) return ExprError();
    paramType = result.get()->getType();
    return result;
  }

  // Otherwise, use the type that was written in the explicit cast.
  assert(!arg->hasPlaceholderType());
  paramType = castArg->getTypeAsWritten();

  // Copy-initialize a parameter of that type.
  InitializedEntity entity =
    InitializedEntity::InitializeParameter(Context, paramType,
                                           /*consumed*/ false);
  return PerformCopyInitialization(entity, callLoc, arg);
}

static ExprResult diagnoseUnknownAnyExpr(Sema &S, Expr *E) {
  Expr *orig = E;
  unsigned diagID = diag::err_uncasted_use_of_unknown_any;
  while (true) {
    E = E->IgnoreParenImpCasts();
    if (CallExpr *call = dyn_cast<CallExpr>(E)) {
      E = call->getCallee();
      diagID = diag::err_uncasted_call_of_unknown_any;
    } else {
      break;
    }
  }

  SourceLocation loc;
  NamedDecl *d;
  if (DeclRefExpr *ref = dyn_cast<DeclRefExpr>(E)) {
    loc = ref->getLocation();
    d = ref->getDecl();
  } else if (MemberExpr *mem = dyn_cast<MemberExpr>(E)) {
    loc = mem->getMemberLoc();
    d = mem->getMemberDecl();
  } else if (ObjCMessageExpr *msg = dyn_cast<ObjCMessageExpr>(E)) {
    diagID = diag::err_uncasted_call_of_unknown_any;
    loc = msg->getSelectorStartLoc();
    d = msg->getMethodDecl();
    if (!d) {
      S.Diag(loc, diag::err_uncasted_send_to_unknown_any_method)
        << static_cast<unsigned>(msg->isClassMessage()) << msg->getSelector()
        << orig->getSourceRange();
      return ExprError();
    }
  } else {
    S.Diag(E->getExprLoc(), diag::err_unsupported_unknown_any_expr)
      << E->getSourceRange();
    return ExprError();
  }

  S.Diag(loc, diagID) << d << orig->getSourceRange();

  // Never recoverable.
  return ExprError();
}

/// Check for operands with placeholder types and complain if found.
/// Returns ExprError() if there was an error and no recovery was possible.
ExprResult Sema::CheckPlaceholderExpr(Expr *E) {
  if (!getLangOpts().CPlusPlus) {
    // C cannot handle TypoExpr nodes on either side of a binop because it
    // doesn't handle dependent types properly, so make sure any TypoExprs have
    // been dealt with before checking the operands.
    ExprResult Result = CorrectDelayedTyposInExpr(E);
    if (!Result.isUsable()) return ExprError();
    E = Result.get();
  }

  const BuiltinType *placeholderType = E->getType()->getAsPlaceholderType();
  if (!placeholderType) return E;

  switch (placeholderType->getKind()) {

  // Overloaded expressions.
  case BuiltinType::Overload: {
    // Try to resolve a single function template specialization.
    // This is obligatory.
    ExprResult Result = E;
    if (ResolveAndFixSingleFunctionTemplateSpecialization(Result, false))
      return Result;

    // No guarantees that ResolveAndFixSingleFunctionTemplateSpecialization
    // leaves Result unchanged on failure.
    Result = E;
    if (resolveAndFixAddressOfOnlyViableOverloadCandidate(Result))
      return Result;

    // If that failed, try to recover with a call.
    tryToRecoverWithCall(Result, PDiag(diag::err_ovl_unresolvable),
                         /*complain*/ true);
    return Result;
  }
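  // For example, given 'void f(int); void f(double);', the expression '&f'
  // has the Overload placeholder type until it is resolved against a
  // target, e.g. 'void (*p)(int) = &f;'.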
  // Bound member functions.
  case BuiltinType::BoundMember: {
    ExprResult result = E;
    const Expr *BME = E->IgnoreParens();
    PartialDiagnostic PD = PDiag(diag::err_bound_member_function);
    // Try to give a nicer diagnostic if it is a bound member that we
    // recognize.
    if (isa<CXXPseudoDestructorExpr>(BME)) {
      PD = PDiag(diag::err_dtor_expr_without_call) << /*pseudo-destructor*/ 1;
    } else if (const auto *ME = dyn_cast<MemberExpr>(BME)) {
      if (ME->getMemberNameInfo().getName().getNameKind() ==
          DeclarationName::CXXDestructorName)
        PD = PDiag(diag::err_dtor_expr_without_call) << /*destructor*/ 0;
    }
    tryToRecoverWithCall(result, PD,
                         /*complain*/ true);
    return result;
  }

  // ARC unbridged casts.
  case BuiltinType::ARCUnbridgedCast: {
    Expr *realCast = stripARCUnbridgedCast(E);
    diagnoseARCUnbridgedCast(realCast);
    return realCast;
  }

  // Expressions of unknown type.
  case BuiltinType::UnknownAny:
    return diagnoseUnknownAnyExpr(*this, E);

  // Pseudo-objects.
  case BuiltinType::PseudoObject:
    return checkPseudoObjectRValue(E);

  case BuiltinType::BuiltinFn: {
    // Accept __noop without parens by implicitly converting it to a call expr.
    auto *DRE = dyn_cast<DeclRefExpr>(E->IgnoreParenImpCasts());
    if (DRE) {
      auto *FD = cast<FunctionDecl>(DRE->getDecl());
      if (FD->getBuiltinID() == Builtin::BI__noop) {
        E = ImpCastExprToType(E, Context.getPointerType(FD->getType()),
                              CK_BuiltinFnToFnPtr).get();
        return new (Context) CallExpr(Context, E, None, Context.IntTy,
                                      VK_RValue, SourceLocation());
      }
    }

    Diag(E->getLocStart(), diag::err_builtin_fn_use);
    return ExprError();
  }

  // Expressions of unknown type.
  case BuiltinType::OMPArraySection:
    Diag(E->getLocStart(), diag::err_omp_array_section_use);
    return ExprError();

  // Everything else should be impossible.
#define IMAGE_TYPE(ImgType, Id, SingletonId, Access, Suffix) \
  case BuiltinType::Id:
#include "clang/Basic/OpenCLImageTypes.def"
#define BUILTIN_TYPE(Id, SingletonId) case BuiltinType::Id:
#define PLACEHOLDER_TYPE(Id, SingletonId)
#include "clang/AST/BuiltinTypes.def"
    break;
  }

  llvm_unreachable("invalid placeholder type!");
}

bool Sema::CheckCaseExpression(Expr *E) {
  if (E->isTypeDependent())
    return true;
  if (E->isValueDependent() || E->isIntegerConstantExpr(Context))
    return E->getType()->isIntegralOrEnumerationType();
  return false;
}
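// For example, 'switch (n) { case 2 + 3: break; }' is accepted because the
// label is an integer constant expression of integral type, while a label
// of floating-point or non-constant type is rejected.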
/// ActOnObjCBoolLiteral - Parse {__objc_yes,__objc_no} literals.
ExprResult
Sema::ActOnObjCBoolLiteral(SourceLocation OpLoc, tok::TokenKind Kind) {
  assert((Kind == tok::kw___objc_yes || Kind == tok::kw___objc_no) &&
         "Unknown Objective-C Boolean value!");
  QualType BoolT = Context.ObjCBuiltinBoolTy;
  if (!Context.getBOOLDecl()) {
    LookupResult Result(*this, &Context.Idents.get("BOOL"), OpLoc,
                        Sema::LookupOrdinaryName);
    if (LookupName(Result, getCurScope()) && Result.isSingleResult()) {
      NamedDecl *ND = Result.getFoundDecl();
      if (TypedefDecl *TD = dyn_cast<TypedefDecl>(ND))
        Context.setBOOLDecl(TD);
    }
  }
  if (Context.getBOOLDecl())
    BoolT = Context.getBOOLType();
  return new (Context)
      ObjCBoolLiteralExpr(Kind == tok::kw___objc_yes, BoolT, OpLoc);
}

ExprResult Sema::ActOnObjCAvailabilityCheckExpr(
    llvm::ArrayRef<AvailabilitySpec> AvailSpecs, SourceLocation AtLoc,
    SourceLocation RParen) {

  StringRef Platform = getASTContext().getTargetInfo().getPlatformName();

  auto Spec = std::find_if(AvailSpecs.begin(), AvailSpecs.end(),
                           [&](const AvailabilitySpec &Spec) {
                             return Spec.getPlatform() == Platform;
                           });

  VersionTuple Version;
  if (Spec != AvailSpecs.end())
    Version = Spec->getVersion();

  return new (Context)
      ObjCAvailabilityCheckExpr(Version, AtLoc, RParen, Context.BoolTy);
}
Index: vendor/clang/dist/test/CodeGenCXX/builtins.cpp
===================================================================
--- vendor/clang/dist/test/CodeGenCXX/builtins.cpp	(revision 312705)
+++ vendor/clang/dist/test/CodeGenCXX/builtins.cpp	(revision 312706)
@@ -1,28 +1,32 @@
// RUN: %clang_cc1 -triple=x86_64-linux-gnu -emit-llvm -o - %s | FileCheck %s

// PR8839
extern "C" char memmove();

int main() {
  // CHECK: call {{signext i8|i8}} @memmove()
  return memmove();
}

struct S;
// CHECK: define {{.*}} @_Z9addressofbR1SS0_(
S *addressof(bool b, S &s, S &t) {
  // CHECK: %[[LVALUE:.*]] = phi
  // CHECK: ret {{.*}}* %[[LVALUE]]
  return __builtin_addressof(b ? s : t);
}

extern "C" int __builtin_abs(int); // #1
long __builtin_abs(long);          // #2
extern "C" int __builtin_abs(int); // #3

int x = __builtin_abs(-2);
// CHECK: store i32 2, i32* @x, align 4

long y = __builtin_abs(-2l);
// CHECK: [[Y:%.+]] = call i64 @_Z13__builtin_absl(i64 -2)
// CHECK: store i64 [[Y]], i64* @y, align 8
+
+extern const char char_memchr_arg[32];
+char *memchr_result = __builtin_char_memchr(char_memchr_arg, 123, 32);
+// CHECK: call i8* @memchr(i8* getelementptr inbounds ([32 x i8], [32 x i8]* @char_memchr_arg, i32 0, i32 0), i32 123, i64 32)
Index: vendor/clang/dist/test/Lexer/has_feature_cxx0x.cpp
===================================================================
--- vendor/clang/dist/test/Lexer/has_feature_cxx0x.cpp	(revision 312705)
+++ vendor/clang/dist/test/Lexer/has_feature_cxx0x.cpp	(revision 312706)
@@ -1,470 +1,481 @@
// RUN: %clang_cc1 -E -triple x86_64-linux-gnu -std=c++11 %s -o - | FileCheck --check-prefix=CHECK-11 %s
// RUN: %clang_cc1 -E -triple armv7-apple-darwin -std=c++11 %s -o - | FileCheck --check-prefix=CHECK-NO-TLS %s
// RUN: %clang_cc1 -E -triple x86_64-linux-gnu -std=c++98 %s -o - | FileCheck --check-prefix=CHECK-NO-11 %s
// RUN: %clang_cc1 -E -triple x86_64-linux-gnu -std=c++14 %s -o - | FileCheck --check-prefix=CHECK-14 %s
// RUN: %clang_cc1 -E -triple x86_64-linux-gnu -std=c++1z %s -o - | FileCheck --check-prefix=CHECK-1Z %s

#if __has_feature(cxx_atomic)
int has_atomic();
#else
int no_atomic();
#endif

// CHECK-1Z: has_atomic
// CHECK-14: has_atomic
// CHECK-11: has_atomic
// CHECK-NO-11: no_atomic

#if __has_feature(cxx_lambdas)
int has_lambdas();
#else
int no_lambdas();
#endif

// CHECK-1Z: has_lambdas
// CHECK-14: has_lambdas
// CHECK-11: has_lambdas
// CHECK-NO-11: no_lambdas

#if __has_feature(cxx_nullptr)
int has_nullptr();
#else
int no_nullptr();
#endif

// CHECK-1Z: has_nullptr
// CHECK-14: has_nullptr
// CHECK-11: has_nullptr
// CHECK-NO-11: no_nullptr

#if __has_feature(cxx_decltype)
int has_decltype();
#else
int no_decltype();
#endif

// CHECK-1Z: has_decltype
// CHECK-14: has_decltype
// CHECK-11: has_decltype
// CHECK-NO-11: no_decltype

#if __has_feature(cxx_decltype_incomplete_return_types)
int has_decltype_incomplete_return_types();
#else
int no_decltype_incomplete_return_types();
#endif

// CHECK-1Z: has_decltype_incomplete_return_types
// CHECK-14: has_decltype_incomplete_return_types
// CHECK-11: has_decltype_incomplete_return_types
// CHECK-NO-11: no_decltype_incomplete_return_types

#if __has_feature(cxx_auto_type)
int has_auto_type();
#else
int no_auto_type();
#endif

// CHECK-1Z: has_auto_type
// CHECK-14: has_auto_type
// CHECK-11: has_auto_type
// CHECK-NO-11: no_auto_type

#if __has_feature(cxx_trailing_return)
int has_trailing_return();
#else
int no_trailing_return();
#endif

// CHECK-1Z: has_trailing_return
// CHECK-14: has_trailing_return
// CHECK-11: has_trailing_return
// CHECK-NO-11: no_trailing_return

#if __has_feature(cxx_attributes)
int has_attributes();
#else
int no_attributes();
#endif

// CHECK-1Z: has_attributes
// CHECK-14: has_attributes
// CHECK-11: has_attributes
// CHECK-NO-11: no_attributes

#if __has_feature(cxx_static_assert)
int has_static_assert();
#else
int no_static_assert();
#endif

// CHECK-1Z: has_static_assert
// CHECK-14: has_static_assert
// CHECK-11: has_static_assert
// CHECK-NO-11: no_static_assert

#if __has_feature(cxx_deleted_functions)
int has_deleted_functions();
#else
int no_deleted_functions();
#endif

// CHECK-1Z: has_deleted_functions
// CHECK-14: has_deleted_functions
// CHECK-11: has_deleted_functions
// CHECK-NO-11: no_deleted_functions

#if __has_feature(cxx_defaulted_functions)
int has_defaulted_functions();
#else
int no_defaulted_functions();
#endif

// CHECK-1Z: has_defaulted_functions
// CHECK-14: has_defaulted_functions
// CHECK-11: has_defaulted_functions
// CHECK-NO-11: no_defaulted_functions

#if __has_feature(cxx_rvalue_references)
int has_rvalue_references();
#else
int no_rvalue_references();
#endif

// CHECK-1Z: has_rvalue_references
// CHECK-14: has_rvalue_references
// CHECK-11: has_rvalue_references
// CHECK-NO-11: no_rvalue_references

#if __has_feature(cxx_variadic_templates)
int has_variadic_templates();
#else
int no_variadic_templates();
#endif

// CHECK-1Z: has_variadic_templates
// CHECK-14: has_variadic_templates
// CHECK-11: has_variadic_templates
// CHECK-NO-11: no_variadic_templates

#if __has_feature(cxx_inline_namespaces)
int has_inline_namespaces();
#else
int no_inline_namespaces();
#endif

// CHECK-1Z: has_inline_namespaces
// CHECK-14: has_inline_namespaces
// CHECK-11: has_inline_namespaces
// CHECK-NO-11: no_inline_namespaces

#if __has_feature(cxx_range_for)
int has_range_for();
#else
int no_range_for();
#endif

// CHECK-1Z: has_range_for
// CHECK-14: has_range_for
// CHECK-11: has_range_for
// CHECK-NO-11: no_range_for

#if __has_feature(cxx_reference_qualified_functions)
int has_reference_qualified_functions();
#else
int no_reference_qualified_functions();
#endif

// CHECK-1Z: has_reference_qualified_functions
// CHECK-14: has_reference_qualified_functions
// CHECK-11: has_reference_qualified_functions
// CHECK-NO-11: no_reference_qualified_functions

#if __has_feature(cxx_default_function_template_args)
int has_default_function_template_args();
#else
int no_default_function_template_args();
#endif

// CHECK-1Z: has_default_function_template_args
// CHECK-14: has_default_function_template_args
// CHECK-11: has_default_function_template_args
// CHECK-NO-11: no_default_function_template_args

#if __has_feature(cxx_noexcept)
int has_noexcept();
#else
int no_noexcept();
#endif

// CHECK-1Z: has_noexcept
// CHECK-14: has_noexcept
// CHECK-11: has_noexcept
// CHECK-NO-11: no_noexcept

#if __has_feature(cxx_override_control)
int has_override_control();
#else
int no_override_control();
#endif

// CHECK-1Z: has_override_control
// CHECK-14: has_override_control
// CHECK-11: has_override_control
// CHECK-NO-11: no_override_control

#if __has_feature(cxx_alias_templates)
int has_alias_templates();
#else
int no_alias_templates();
#endif

// CHECK-1Z: has_alias_templates
// CHECK-14: has_alias_templates
// CHECK-11: has_alias_templates
// CHECK-NO-11: no_alias_templates

#if __has_feature(cxx_implicit_moves)
int has_implicit_moves();
#else
int no_implicit_moves();
#endif

// CHECK-1Z: has_implicit_moves
// CHECK-14: has_implicit_moves
// CHECK-11: has_implicit_moves
// CHECK-NO-11: no_implicit_moves

#if __has_feature(cxx_alignas)
int has_alignas();
#else
int no_alignas();
#endif

// CHECK-1Z: has_alignas
// CHECK-14: has_alignas
// CHECK-11: has_alignas
// CHECK-NO-11: no_alignas

#if __has_feature(cxx_alignof)
int has_alignof();
#else
int no_alignof();
#endif

// CHECK-1Z: has_alignof
// CHECK-14: has_alignof
// CHECK-11: has_alignof
// CHECK-NO-11: no_alignof

#if __has_feature(cxx_raw_string_literals)
int has_raw_string_literals();
#else
int no_raw_string_literals();
#endif

// CHECK-1Z: has_raw_string_literals
// CHECK-14: has_raw_string_literals
// CHECK-11: has_raw_string_literals
// CHECK-NO-11: no_raw_string_literals
#if __has_feature(cxx_unicode_literals)
int has_unicode_literals();
#else
int no_unicode_literals();
#endif

// CHECK-1Z: has_unicode_literals
// CHECK-14: has_unicode_literals
// CHECK-11: has_unicode_literals
// CHECK-NO-11: no_unicode_literals

#if __has_feature(cxx_constexpr)
int has_constexpr();
#else
int no_constexpr();
#endif

// CHECK-1Z: has_constexpr
// CHECK-14: has_constexpr
// CHECK-11: has_constexpr
// CHECK-NO-11: no_constexpr

+#if __has_feature(cxx_constexpr_string_builtins)
+int has_constexpr_string_builtins();
+#else
+int no_constexpr_string_builtins();
+#endif
+
+// CHECK-1Z: has_constexpr_string_builtins
+// CHECK-14: has_constexpr_string_builtins
+// CHECK-11: has_constexpr_string_builtins
+// CHECK-NO-11: no_constexpr_string_builtins
+
#if __has_feature(cxx_generalized_initializers)
int has_generalized_initializers();
#else
int no_generalized_initializers();
#endif

// CHECK-1Z: has_generalized_initializers
// CHECK-14: has_generalized_initializers
// CHECK-11: has_generalized_initializers
// CHECK-NO-11: no_generalized_initializers

#if __has_feature(cxx_unrestricted_unions)
int has_unrestricted_unions();
#else
int no_unrestricted_unions();
#endif

// CHECK-1Z: has_unrestricted_unions
// CHECK-14: has_unrestricted_unions
// CHECK-11: has_unrestricted_unions
// CHECK-NO-11: no_unrestricted_unions

#if __has_feature(cxx_user_literals)
int has_user_literals();
#else
int no_user_literals();
#endif

// CHECK-1Z: has_user_literals
// CHECK-14: has_user_literals
// CHECK-11: has_user_literals
// CHECK-NO-11: no_user_literals

#if __has_feature(cxx_local_type_template_args)
int has_local_type_template_args();
#else
int no_local_type_template_args();
#endif

// CHECK-1Z: has_local_type_template_args
// CHECK-14: has_local_type_template_args
// CHECK-11: has_local_type_template_args
// CHECK-NO-11: no_local_type_template_args

#if __has_feature(cxx_inheriting_constructors)
int has_inheriting_constructors();
#else
int no_inheriting_constructors();
#endif

// CHECK-1Z: has_inheriting_constructors
// CHECK-14: has_inheriting_constructors
// CHECK-11: has_inheriting_constructors
// CHECK-NO-11: no_inheriting_constructors

#if __has_feature(cxx_thread_local)
int has_thread_local();
#else
int no_thread_local();
#endif

// CHECK-1Z: has_thread_local
// CHECK-14: has_thread_local
// CHECK-11: has_thread_local
// CHECK-NO-11: no_thread_local
// CHECK-NO-TLS: no_thread_local

// === C++14 features ===

#if __has_feature(cxx_binary_literals)
int has_binary_literals();
#else
int no_binary_literals();
#endif

// CHECK-1Z: has_binary_literals
// CHECK-14: has_binary_literals
// CHECK-11: no_binary_literals
// CHECK-NO-11: no_binary_literals

#if __has_feature(cxx_aggregate_nsdmi)
int has_aggregate_nsdmi();
#else
int no_aggregate_nsdmi();
#endif

// CHECK-1Z: has_aggregate_nsdmi
// CHECK-14: has_aggregate_nsdmi
// CHECK-11: no_aggregate_nsdmi
// CHECK-NO-11: no_aggregate_nsdmi

#if __has_feature(cxx_return_type_deduction)
int has_return_type_deduction();
#else
int no_return_type_deduction();
#endif

// CHECK-1Z: has_return_type_deduction
// CHECK-14: has_return_type_deduction
// CHECK-11: no_return_type_deduction
// CHECK-NO-11: no_return_type_deduction

#if __has_feature(cxx_contextual_conversions)
int has_contextual_conversions();
#else
int no_contextual_conversions();
#endif

// CHECK-1Z: has_contextual_conversions
// CHECK-14: has_contextual_conversions
// CHECK-11: no_contextual_conversions
// CHECK-NO-11: no_contextual_conversions

#if __has_feature(cxx_relaxed_constexpr)
int has_relaxed_constexpr();
#else
int no_relaxed_constexpr();
#endif
// CHECK-1Z: has_relaxed_constexpr
// CHECK-14: has_relaxed_constexpr
// CHECK-11: no_relaxed_constexpr
// CHECK-NO-11: no_relaxed_constexpr

#if __has_feature(cxx_variable_templates)
int has_variable_templates();
#else
int no_variable_templates();
#endif

// CHECK-1Z: has_variable_templates
// CHECK-14: has_variable_templates
// CHECK-11: no_variable_templates
// CHECK-NO-11: no_variable_templates

#if __has_feature(cxx_init_captures)
int has_init_captures();
#else
int no_init_captures();
#endif

// CHECK-1Z: has_init_captures
// CHECK-14: has_init_captures
// CHECK-11: no_init_captures
// CHECK-NO-11: no_init_captures

#if __has_feature(cxx_decltype_auto)
int has_decltype_auto();
#else
int no_decltype_auto();
#endif

// CHECK-1Z: has_decltype_auto
// CHECK-14: has_decltype_auto
// CHECK-11: no_decltype_auto
// CHECK-NO-11: no_decltype_auto

#if __has_feature(cxx_generic_lambdas)
int has_generic_lambdas();
#else
int no_generic_lambdas();
#endif

// CHECK-1Z: has_generic_lambdas
// CHECK-14: has_generic_lambdas
// CHECK-11: no_generic_lambdas
// CHECK-NO-11: no_generic_lambdas
Index: vendor/clang/dist/test/Sema/PR28181.c
===================================================================
--- vendor/clang/dist/test/Sema/PR28181.c	(nonexistent)
+++ vendor/clang/dist/test/Sema/PR28181.c	(revision 312706)
@@ -0,0 +1,13 @@
+// RUN: %clang_cc1 -fsyntax-only -verify %s
+
+struct spinlock_t {
+  int lock;
+} audit_skb_queue;
+
+void fn1() {
+  audit_skb_queue = (lock); // expected-error {{use of undeclared identifier 'lock'; did you mean 'long'?}}
+}                           // expected-error@-1 {{assigning to 'struct spinlock_t' from incompatible type '<overloaded function type>'}}
+
+void fn2() {
+  audit_skb_queue + (lock); // expected-error {{use of undeclared identifier 'lock'; did you mean 'long'?}}
+}                           // expected-error@-1 {{reference to overloaded function could not be resolved; did you mean to call it?}}

Property changes on: vendor/clang/dist/test/Sema/PR28181.c
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: svn:keywords
## -0,0 +1 ##
+FreeBSD=%H
\ No newline at end of property
Added: svn:mime-type
## -0,0 +1 ##
+text/plain
\ No newline at end of property
Index: vendor/clang/dist/test/SemaCXX/constexpr-string.cpp
===================================================================
--- vendor/clang/dist/test/SemaCXX/constexpr-string.cpp	(revision 312705)
+++ vendor/clang/dist/test/SemaCXX/constexpr-string.cpp	(revision 312706)
@@ -1,201 +1,222 @@
// RUN: %clang_cc1 %s -std=c++1z -fsyntax-only -verify -pedantic
// RUN: %clang_cc1 %s -std=c++1z -fsyntax-only -verify -pedantic -fno-signed-char
// RUN: %clang_cc1 %s -std=c++1z -fsyntax-only -verify -pedantic -fno-wchar -Dwchar_t=__WCHAR_TYPE__

# 6 "/usr/include/string.h" 1 3 4
extern "C" {
  typedef decltype(sizeof(int)) size_t;

  extern size_t strlen(const char *p);

  extern int strcmp(const char *s1, const char *s2);
  extern int strncmp(const char *s1, const char *s2, size_t n);
  extern int memcmp(const void *s1, const void *s2, size_t n);

  extern char *strchr(const char *s, int c);
  extern void *memchr(const void *s, int c, size_t n);
}

# 19 "SemaCXX/constexpr-string.cpp" 2
# 21 "/usr/include/wchar.h" 1 3 4
extern "C" {
  extern size_t wcslen(const wchar_t *p);

  extern int wcscmp(const wchar_t *s1, const wchar_t *s2);
  extern int wcsncmp(const wchar_t *s1, const wchar_t *s2, size_t n);
  extern int wmemcmp(const wchar_t *s1, const wchar_t *s2, size_t n);

  extern wchar_t *wcschr(const wchar_t *s, wchar_t c);
  extern wchar_t *wmemchr(const wchar_t *s, wchar_t c, size_t n);
}

# 33 "SemaCXX/constexpr-string.cpp" 2

namespace Strlen {
  constexpr int n = __builtin_strlen("hello"); // ok
  static_assert(n == 5);
  constexpr int wn = __builtin_wcslen(L"hello"); // ok
  static_assert(wn == 5);
  constexpr int m = strlen("hello"); // expected-error {{constant expression}} expected-note {{non-constexpr function 'strlen' cannot be used in a constant expression}}
  constexpr int wm = wcslen(L"hello"); // expected-error {{constant expression}} expected-note {{non-constexpr function 'wcslen' cannot be used in a constant expression}}

  // Make sure we can evaluate a call to strlen.
  int arr[3]; // expected-note 2{{here}}
  int k = arr[strlen("hello")]; // expected-warning {{array index 5}}
  int wk = arr[wcslen(L"hello")]; // expected-warning {{array index 5}}
}

namespace StrcmpEtc {
  constexpr char kFoobar[6] = {'f','o','o','b','a','r'};
  constexpr char kFoobazfoobar[12] = {'f','o','o','b','a','z','f','o','o','b','a','r'};

  static_assert(__builtin_strcmp("abab", "abab") == 0);
  static_assert(__builtin_strcmp("abab", "abba") == -1);
  static_assert(__builtin_strcmp("abab", "abaa") == 1);
  static_assert(__builtin_strcmp("ababa", "abab") == 1);
  static_assert(__builtin_strcmp("abab", "ababa") == -1);
  static_assert(__builtin_strcmp("abab\0banana", "abab") == 0);
  static_assert(__builtin_strcmp("abab", "abab\0banana") == 0);
  static_assert(__builtin_strcmp("abab\0banana", "abab\0canada") == 0);
  static_assert(__builtin_strcmp(0, "abab") == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced null}}
  static_assert(__builtin_strcmp("abab", 0) == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced null}}

  static_assert(__builtin_strcmp(kFoobar, kFoobazfoobar) == -1); // FIXME: Should we reject this?
  static_assert(__builtin_strcmp(kFoobar, kFoobazfoobar + 6) == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}

  static_assert(__builtin_strncmp("abaa", "abba", 5) == -1);
  static_assert(__builtin_strncmp("abaa", "abba", 4) == -1);
  static_assert(__builtin_strncmp("abaa", "abba", 3) == -1);
  static_assert(__builtin_strncmp("abaa", "abba", 2) == 0);
  static_assert(__builtin_strncmp("abaa", "abba", 1) == 0);
  static_assert(__builtin_strncmp("abaa", "abba", 0) == 0);
  static_assert(__builtin_strncmp(0, 0, 0) == 0);
  static_assert(__builtin_strncmp("abab\0banana", "abab\0canada", 100) == 0);

  static_assert(__builtin_strncmp(kFoobar, kFoobazfoobar, 6) == -1);
  static_assert(__builtin_strncmp(kFoobar, kFoobazfoobar, 7) == -1); // FIXME: Should we reject this?
  static_assert(__builtin_strncmp(kFoobar, kFoobazfoobar + 6, 6) == 0);
  static_assert(__builtin_strncmp(kFoobar, kFoobazfoobar + 6, 7) == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}

  static_assert(__builtin_memcmp("abaa", "abba", 3) == -1);
  static_assert(__builtin_memcmp("abaa", "abba", 2) == 0);
  static_assert(__builtin_memcmp(0, 0, 0) == 0);
  static_assert(__builtin_memcmp("abab\0banana", "abab\0banana", 100) == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}
  static_assert(__builtin_memcmp("abab\0banana", "abab\0canada", 100) == -1); // FIXME: Should we reject this?
  static_assert(__builtin_memcmp("abab\0banana", "abab\0canada", 7) == -1);
  static_assert(__builtin_memcmp("abab\0banana", "abab\0canada", 6) == -1);
  static_assert(__builtin_memcmp("abab\0banana", "abab\0canada", 5) == 0);

  constexpr int a = strcmp("hello", "world"); // expected-error {{constant expression}} expected-note {{non-constexpr function 'strcmp' cannot be used in a constant expression}}
  constexpr int b = strncmp("hello", "world", 3); // expected-error {{constant expression}} expected-note {{non-constexpr function 'strncmp' cannot be used in a constant expression}}
  constexpr int c = memcmp("hello", "world", 3); // expected-error {{constant expression}} expected-note {{non-constexpr function 'memcmp' cannot be used in a constant expression}}
}

namespace WcscmpEtc {
  constexpr wchar_t kFoobar[6] = {L'f',L'o',L'o',L'b',L'a',L'r'};
  constexpr wchar_t kFoobazfoobar[12] = {L'f',L'o',L'o',L'b',L'a',L'z',L'f',L'o',L'o',L'b',L'a',L'r'};

  static_assert(__builtin_wcscmp(L"abab", L"abab") == 0);
  static_assert(__builtin_wcscmp(L"abab", L"abba") == -1);
  static_assert(__builtin_wcscmp(L"abab", L"abaa") == 1);
  static_assert(__builtin_wcscmp(L"ababa", L"abab") == 1);
  static_assert(__builtin_wcscmp(L"abab", L"ababa") == -1);
  static_assert(__builtin_wcscmp(L"abab\0banana", L"abab") == 0);
  static_assert(__builtin_wcscmp(L"abab", L"abab\0banana") == 0);
  static_assert(__builtin_wcscmp(L"abab\0banana", L"abab\0canada") == 0);
  static_assert(__builtin_wcscmp(0, L"abab") == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced null}}
  static_assert(__builtin_wcscmp(L"abab", 0) == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced null}}

  static_assert(__builtin_wcscmp(kFoobar, kFoobazfoobar) == -1); // FIXME: Should we reject this?
  static_assert(__builtin_wcscmp(kFoobar, kFoobazfoobar + 6) == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}

  static_assert(__builtin_wcsncmp(L"abaa", L"abba", 5) == -1);
  static_assert(__builtin_wcsncmp(L"abaa", L"abba", 4) == -1);
  static_assert(__builtin_wcsncmp(L"abaa", L"abba", 3) == -1);
  static_assert(__builtin_wcsncmp(L"abaa", L"abba", 2) == 0);
  static_assert(__builtin_wcsncmp(L"abaa", L"abba", 1) == 0);
  static_assert(__builtin_wcsncmp(L"abaa", L"abba", 0) == 0);
  static_assert(__builtin_wcsncmp(0, 0, 0) == 0);
  static_assert(__builtin_wcsncmp(L"abab\0banana", L"abab\0canada", 100) == 0);

  static_assert(__builtin_wcsncmp(kFoobar, kFoobazfoobar, 6) == -1);
  static_assert(__builtin_wcsncmp(kFoobar, kFoobazfoobar, 7) == -1); // FIXME: Should we reject this?
  static_assert(__builtin_wcsncmp(kFoobar, kFoobazfoobar + 6, 6) == 0);
  static_assert(__builtin_wcsncmp(kFoobar, kFoobazfoobar + 6, 7) == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}

  static_assert(__builtin_wmemcmp(L"abaa", L"abba", 3) == -1);
  static_assert(__builtin_wmemcmp(L"abaa", L"abba", 2) == 0);
  static_assert(__builtin_wmemcmp(0, 0, 0) == 0);
  static_assert(__builtin_wmemcmp(L"abab\0banana", L"abab\0banana", 100) == 0); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}
  static_assert(__builtin_wmemcmp(L"abab\0banana", L"abab\0canada", 100) == -1); // FIXME: Should we reject this?
  static_assert(__builtin_wmemcmp(L"abab\0banana", L"abab\0canada", 7) == -1);
  static_assert(__builtin_wmemcmp(L"abab\0banana", L"abab\0canada", 6) == -1);
  static_assert(__builtin_wmemcmp(L"abab\0banana", L"abab\0canada", 5) == 0);

  constexpr int a = wcscmp(L"hello", L"world"); // expected-error {{constant expression}} expected-note {{non-constexpr function 'wcscmp' cannot be used in a constant expression}}
  constexpr int b = wcsncmp(L"hello", L"world", 3); // expected-error {{constant expression}} expected-note {{non-constexpr function 'wcsncmp' cannot be used in a constant expression}}
  constexpr int c = wmemcmp(L"hello", L"world", 3); // expected-error {{constant expression}} expected-note {{non-constexpr function 'wmemcmp' cannot be used in a constant expression}}
}

namespace StrchrEtc {
  constexpr const char *kStr = "abca\xff\0d";
  constexpr char kFoo[] = {'f', 'o', 'o'};
  static_assert(__builtin_strchr(kStr, 'a') == kStr);
  static_assert(__builtin_strchr(kStr, 'b') == kStr + 1);
  static_assert(__builtin_strchr(kStr, 'c') == kStr + 2);
  static_assert(__builtin_strchr(kStr, 'd') == nullptr);
  static_assert(__builtin_strchr(kStr, 'e') == nullptr);
  static_assert(__builtin_strchr(kStr, '\0') == kStr + 5);
  static_assert(__builtin_strchr(kStr, 'a' + 256) == nullptr);
  static_assert(__builtin_strchr(kStr, 'a' - 256) == nullptr);
  static_assert(__builtin_strchr(kStr, '\xff') == kStr + 4);
  static_assert(__builtin_strchr(kStr, '\xff' + 256) == nullptr);
  static_assert(__builtin_strchr(kStr, '\xff' - 256) == nullptr);
  static_assert(__builtin_strchr(kFoo, 'o') == kFoo + 1);
  static_assert(__builtin_strchr(kFoo, 'x') == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}
  static_assert(__builtin_strchr(nullptr, 'x') == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced null}}

  static_assert(__builtin_memchr(kStr, 'a', 0) == nullptr);
  static_assert(__builtin_memchr(kStr, 'a', 1) == kStr);
  static_assert(__builtin_memchr(kStr, '\0', 5) == nullptr);
  static_assert(__builtin_memchr(kStr, '\0', 6) == kStr + 5);
  static_assert(__builtin_memchr(kStr, '\xff', 8) == kStr + 4);
  static_assert(__builtin_memchr(kStr, '\xff' + 256, 8) == kStr + 4);
  static_assert(__builtin_memchr(kStr, '\xff' - 256, 8) == kStr + 4);
  static_assert(__builtin_memchr(kFoo, 'x', 3) == nullptr);
  static_assert(__builtin_memchr(kFoo, 'x', 4) == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}
  static_assert(__builtin_memchr(nullptr, 'x', 3) == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced null}}
  static_assert(__builtin_memchr(nullptr, 'x', 0) == nullptr); // FIXME: Should we reject this?

+  static_assert(__builtin_char_memchr(kStr, 'a', 0) == nullptr);
+  static_assert(__builtin_char_memchr(kStr, 'a', 1) == kStr);
+  static_assert(__builtin_char_memchr(kStr, '\0', 5) == nullptr);
+  static_assert(__builtin_char_memchr(kStr, '\0', 6) == kStr + 5);
+  static_assert(__builtin_char_memchr(kStr, '\xff', 8) == kStr + 4);
+  static_assert(__builtin_char_memchr(kStr, '\xff' + 256, 8) == kStr + 4);
+  static_assert(__builtin_char_memchr(kStr, '\xff' - 256, 8) == kStr + 4);
+  static_assert(__builtin_char_memchr(kFoo, 'x', 3) == nullptr);
+  static_assert(__builtin_char_memchr(kFoo, 'x', 4) == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}
+  static_assert(__builtin_char_memchr(nullptr, 'x', 3) == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced null}}
+  static_assert(__builtin_char_memchr(nullptr, 'x', 0) == nullptr); // FIXME: Should we reject this?
+
+  static_assert(*__builtin_char_memchr(kStr, '\xff', 8) == '\xff');
+  constexpr bool char_memchr_mutable() {
+    char buffer[] = "mutable";
+    *__builtin_char_memchr(buffer, 't', 8) = 'r';
+    *__builtin_char_memchr(buffer, 'm', 8) = 'd';
+    return __builtin_strcmp(buffer, "durable") == 0;
+  }
+  static_assert(char_memchr_mutable());
+
  constexpr bool a = !strchr("hello", 'h'); // expected-error {{constant expression}} expected-note {{non-constexpr function 'strchr' cannot be used in a constant expression}}
  constexpr bool b = !memchr("hello", 'h', 3); // expected-error {{constant expression}} expected-note {{non-constexpr function 'memchr' cannot be used in a constant expression}}
}

namespace WcschrEtc {
  constexpr const wchar_t *kStr = L"abca\xffff\0dL";
  constexpr wchar_t kFoo[] = {L'f', L'o', L'o'};
  static_assert(__builtin_wcschr(kStr, L'a') == kStr);
  static_assert(__builtin_wcschr(kStr, L'b') == kStr + 1);
  static_assert(__builtin_wcschr(kStr, L'c') == kStr + 2);
  static_assert(__builtin_wcschr(kStr, L'd') == nullptr);
  static_assert(__builtin_wcschr(kStr, L'e') == nullptr);
  static_assert(__builtin_wcschr(kStr, L'\0') == kStr + 5);
  static_assert(__builtin_wcschr(kStr, L'a' + 256) == nullptr);
  static_assert(__builtin_wcschr(kStr, L'a' - 256) == nullptr);
  static_assert(__builtin_wcschr(kStr, L'\xffff') == kStr + 4);
  static_assert(__builtin_wcschr(kFoo, L'o') == kFoo + 1);
  static_assert(__builtin_wcschr(kFoo, L'x') == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}
  static_assert(__builtin_wcschr(nullptr, L'x') == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced null}}

  static_assert(__builtin_wmemchr(kStr, L'a', 0) == nullptr);
  static_assert(__builtin_wmemchr(kStr, L'a', 1) == kStr);
  static_assert(__builtin_wmemchr(kStr, L'\0', 5) == nullptr);
  static_assert(__builtin_wmemchr(kStr, L'\0', 6) == kStr + 5);
  static_assert(__builtin_wmemchr(kStr, L'\xffff', 8) == kStr + 4);
  static_assert(__builtin_wmemchr(kFoo, L'x', 3) == nullptr);
  static_assert(__builtin_wmemchr(kFoo, L'x', 4) == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced one-past-the-end}}
  static_assert(__builtin_wmemchr(nullptr, L'x', 3) == nullptr); // expected-error {{not an integral constant}} expected-note {{dereferenced null}}
  static_assert(__builtin_wmemchr(nullptr, L'x', 0) == nullptr); // FIXME: Should we reject this?

  constexpr bool a = !wcschr(L"hello", L'h'); // expected-error {{constant expression}} expected-note {{non-constexpr function 'wcschr' cannot be used in a constant expression}}
  constexpr bool b = !wmemchr(L"hello", L'h', 3); // expected-error {{constant expression}} expected-note {{non-constexpr function 'wmemchr' cannot be used in a constant expression}}
}
Index: vendor/clang/dist/test/SemaCXX/cxx11-default-member-initializers.cpp
===================================================================
--- vendor/clang/dist/test/SemaCXX/cxx11-default-member-initializers.cpp (nonexistent)
+++ vendor/clang/dist/test/SemaCXX/cxx11-default-member-initializers.cpp (revision 312706)
@@ -0,0 +1,14 @@
+// RUN: %clang_cc1 -std=c++11 -verify %s -pedantic
+
+namespace PR31692 {
+  struct A {
+    struct X { int n = 0; } x;
+    // Trigger construction of X() from a SFINAE context. This must not mark
+    // any part of X as invalid.
+    static_assert(!__is_constructible(X), "");
+    // Check that X::n is not marked invalid.
+    double &r = x.n; // expected-error {{non-const lvalue reference to type 'double' cannot bind to a value of unrelated type 'int'}}
+  };
+  // A::X can now be default-constructed.
+  static_assert(__is_constructible(A::X), "");
+}

Property changes on: vendor/clang/dist/test/SemaCXX/cxx11-default-member-initializers.cpp
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: svn:keywords
## -0,0 +1 ##
+FreeBSD=%H
\ No newline at end of property
Added: svn:mime-type
## -0,0 +1 ##
+text/plain
\ No newline at end of property