add libc++ and libc++abi sources

upstream: LLVM 10
Andrew Kelley
2020-03-26 22:41:26 -04:00
parent 463b90b977
commit ed0dbe1a64
99 changed files with 30450 additions and 0 deletions

lib/libcxxabi/LICENSE.TXT

@@ -0,0 +1,311 @@
==============================================================================
The LLVM Project is under the Apache License v2.0 with LLVM Exceptions:
==============================================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---- LLVM Exceptions to the Apache 2.0 License ----
As an exception, if, as a result of your compiling your source code, portions
of this Software are embedded into an Object form of such source code, you
may redistribute such embedded portions in such Object form without complying
with the conditions of Sections 4(a), 4(b) and 4(d) of the License.
In addition, if you combine or link compiled forms of this Software with
software that is licensed under the GPLv2 ("Combined Software") and if a
court of competent jurisdiction determines that the patent provision (Section
3), the indemnity provision (Section 9) or other Section of the License
conflicts with the conditions of the GPLv2, you may retroactively and
prospectively choose to deem waived or otherwise exclude such Section(s) of
the License, but only in their entirety and only with respect to the Combined
Software.
==============================================================================
Software from third parties included in the LLVM Project:
==============================================================================
The LLVM Project contains third party software which is under different license
terms. All such code will be identified clearly using at least one of two
mechanisms:
1) It will be in a separate directory tree with its own `LICENSE.txt` or
`LICENSE` file at the top containing the specific license and restrictions
which apply to that software, or
2) It will contain specific license and restriction terms at the top of every
file.
==============================================================================
Legacy LLVM License (https://llvm.org/docs/DeveloperPolicy.html#legacy):
==============================================================================
The libc++abi library is dual licensed under both the University of Illinois
"BSD-Like" license and the MIT license. As a user of this code you may choose
to use it under either license. As a contributor, you agree to allow your code
to be used under both.
Full text of the relevant licenses is included below.
==============================================================================
University of Illinois/NCSA
Open Source License
Copyright (c) 2009-2019 by the contributors listed in CREDITS.TXT
All rights reserved.
Developed by:
LLVM Team
University of Illinois at Urbana-Champaign
http://llvm.org
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal with
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimers.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimers in the
documentation and/or other materials provided with the distribution.
* Neither the names of the LLVM Team, University of Illinois at
Urbana-Champaign, nor the names of its contributors may be used to
endorse or promote products derived from this Software without specific
prior written permission.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE
SOFTWARE.
==============================================================================
Copyright (c) 2009-2014 by the contributors listed in CREDITS.TXT
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -0,0 +1,79 @@
//===-------------------------- __cxxabi_config.h -------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef ____CXXABI_CONFIG_H
#define ____CXXABI_CONFIG_H
#if defined(__arm__) && !defined(__USING_SJLJ_EXCEPTIONS__) && \
!defined(__ARM_DWARF_EH__)
#define _LIBCXXABI_ARM_EHABI
#endif
#if !defined(__has_attribute)
#define __has_attribute(_attribute_) 0
#endif
#if defined(_WIN32)
#if defined(_LIBCXXABI_DISABLE_VISIBILITY_ANNOTATIONS)
#define _LIBCXXABI_HIDDEN
#define _LIBCXXABI_DATA_VIS
#define _LIBCXXABI_FUNC_VIS
#define _LIBCXXABI_TYPE_VIS
#elif defined(_LIBCXXABI_BUILDING_LIBRARY)
#define _LIBCXXABI_HIDDEN
#define _LIBCXXABI_DATA_VIS __declspec(dllexport)
#define _LIBCXXABI_FUNC_VIS __declspec(dllexport)
#define _LIBCXXABI_TYPE_VIS __declspec(dllexport)
#else
#define _LIBCXXABI_HIDDEN
#define _LIBCXXABI_DATA_VIS __declspec(dllimport)
#define _LIBCXXABI_FUNC_VIS __declspec(dllimport)
#define _LIBCXXABI_TYPE_VIS __declspec(dllimport)
#endif
#else
#if !defined(_LIBCXXABI_DISABLE_VISIBILITY_ANNOTATIONS)
#define _LIBCXXABI_HIDDEN __attribute__((__visibility__("hidden")))
#define _LIBCXXABI_DATA_VIS __attribute__((__visibility__("default")))
#define _LIBCXXABI_FUNC_VIS __attribute__((__visibility__("default")))
#if __has_attribute(__type_visibility__)
#define _LIBCXXABI_TYPE_VIS __attribute__((__type_visibility__("default")))
#else
#define _LIBCXXABI_TYPE_VIS __attribute__((__visibility__("default")))
#endif
#else
#define _LIBCXXABI_HIDDEN
#define _LIBCXXABI_DATA_VIS
#define _LIBCXXABI_FUNC_VIS
#define _LIBCXXABI_TYPE_VIS
#endif
#endif
#if defined(_WIN32)
#define _LIBCXXABI_WEAK
#else
#define _LIBCXXABI_WEAK __attribute__((__weak__))
#endif
#if defined(__clang__)
#define _LIBCXXABI_COMPILER_CLANG
#elif defined(__GNUC__)
#define _LIBCXXABI_COMPILER_GCC
#endif
#if __has_attribute(__no_sanitize__) && defined(_LIBCXXABI_COMPILER_CLANG)
#define _LIBCXXABI_NO_CFI __attribute__((__no_sanitize__("cfi")))
#else
#define _LIBCXXABI_NO_CFI
#endif
// wasm32 follows the arm32 ABI convention of using 32-bit guard.
#if defined(__arm__) || defined(__wasm32__)
# define _LIBCXXABI_GUARD_ABI_ARM
#endif
#endif // ____CXXABI_CONFIG_H
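
As a side note, here is a minimal sketch of how the visibility macros defined above are intended to be used (the symbol names are hypothetical and not part of this commit): entry points that belong to libc++abi's public ABI carry _LIBCXXABI_FUNC_VIS, while internal helpers carry _LIBCXXABI_HIDDEN so they are not exported from the shared library.

// Sketch only -- hypothetical symbols, assuming the macros above.
#include <__cxxabi_config.h>

extern "C" {
// Part of the exported ABI surface: default visibility on ELF targets,
// dllexport/dllimport on Windows depending on _LIBCXXABI_BUILDING_LIBRARY.
_LIBCXXABI_FUNC_VIS void example_abi_entry_point();
}

// Implementation detail: hidden, never exported from the shared object.
_LIBCXXABI_HIDDEN void example_internal_helper();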


@@ -0,0 +1,176 @@
//===--------------------------- cxxabi.h ---------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef __CXXABI_H
#define __CXXABI_H
/*
* This header provides the interface to the C++ ABI as defined at:
* https://itanium-cxx-abi.github.io/cxx-abi/
*/
#include <stddef.h>
#include <stdint.h>
#include <__cxxabi_config.h>
#define _LIBCPPABI_VERSION 1002
#define _LIBCXXABI_NORETURN __attribute__((noreturn))
#ifdef __cplusplus
namespace std {
#if defined(_WIN32)
class _LIBCXXABI_TYPE_VIS type_info; // forward declaration
#else
class type_info; // forward declaration
#endif
}
// runtime routines use C calling conventions, but are in __cxxabiv1 namespace
namespace __cxxabiv1 {
extern "C" {
// 2.4.2 Allocating the Exception Object
extern _LIBCXXABI_FUNC_VIS void *
__cxa_allocate_exception(size_t thrown_size) throw();
extern _LIBCXXABI_FUNC_VIS void
__cxa_free_exception(void *thrown_exception) throw();
// 2.4.3 Throwing the Exception Object
extern _LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void
__cxa_throw(void *thrown_exception, std::type_info *tinfo,
void (*dest)(void *));
// 2.5.3 Exception Handlers
extern _LIBCXXABI_FUNC_VIS void *
__cxa_get_exception_ptr(void *exceptionObject) throw();
extern _LIBCXXABI_FUNC_VIS void *
__cxa_begin_catch(void *exceptionObject) throw();
extern _LIBCXXABI_FUNC_VIS void __cxa_end_catch();
#if defined(_LIBCXXABI_ARM_EHABI)
extern _LIBCXXABI_FUNC_VIS bool
__cxa_begin_cleanup(void *exceptionObject) throw();
extern _LIBCXXABI_FUNC_VIS void __cxa_end_cleanup();
#endif
extern _LIBCXXABI_FUNC_VIS std::type_info *__cxa_current_exception_type();
// 2.5.4 Rethrowing Exceptions
extern _LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void __cxa_rethrow();
// 2.6 Auxiliary Runtime APIs
extern _LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void __cxa_bad_cast(void);
extern _LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void __cxa_bad_typeid(void);
extern _LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void
__cxa_throw_bad_array_new_length(void);
// 3.2.6 Pure Virtual Function API
extern _LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void __cxa_pure_virtual(void);
// 3.2.7 Deleted Virtual Function API
extern _LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void __cxa_deleted_virtual(void);
// 3.3.2 One-time Construction API
#if defined(_LIBCXXABI_GUARD_ABI_ARM)
extern _LIBCXXABI_FUNC_VIS int __cxa_guard_acquire(uint32_t *);
extern _LIBCXXABI_FUNC_VIS void __cxa_guard_release(uint32_t *);
extern _LIBCXXABI_FUNC_VIS void __cxa_guard_abort(uint32_t *);
#else
extern _LIBCXXABI_FUNC_VIS int __cxa_guard_acquire(uint64_t *);
extern _LIBCXXABI_FUNC_VIS void __cxa_guard_release(uint64_t *);
extern _LIBCXXABI_FUNC_VIS void __cxa_guard_abort(uint64_t *);
#endif
// 3.3.3 Array Construction and Destruction API
extern _LIBCXXABI_FUNC_VIS void *
__cxa_vec_new(size_t element_count, size_t element_size, size_t padding_size,
void (*constructor)(void *), void (*destructor)(void *));
extern _LIBCXXABI_FUNC_VIS void *
__cxa_vec_new2(size_t element_count, size_t element_size, size_t padding_size,
void (*constructor)(void *), void (*destructor)(void *),
void *(*alloc)(size_t), void (*dealloc)(void *));
extern _LIBCXXABI_FUNC_VIS void *
__cxa_vec_new3(size_t element_count, size_t element_size, size_t padding_size,
void (*constructor)(void *), void (*destructor)(void *),
void *(*alloc)(size_t), void (*dealloc)(void *, size_t));
extern _LIBCXXABI_FUNC_VIS void
__cxa_vec_ctor(void *array_address, size_t element_count, size_t element_size,
void (*constructor)(void *), void (*destructor)(void *));
extern _LIBCXXABI_FUNC_VIS void __cxa_vec_dtor(void *array_address,
size_t element_count,
size_t element_size,
void (*destructor)(void *));
extern _LIBCXXABI_FUNC_VIS void __cxa_vec_cleanup(void *array_address,
size_t element_count,
size_t element_size,
void (*destructor)(void *));
extern _LIBCXXABI_FUNC_VIS void __cxa_vec_delete(void *array_address,
size_t element_size,
size_t padding_size,
void (*destructor)(void *));
extern _LIBCXXABI_FUNC_VIS void
__cxa_vec_delete2(void *array_address, size_t element_size, size_t padding_size,
void (*destructor)(void *), void (*dealloc)(void *));
extern _LIBCXXABI_FUNC_VIS void
__cxa_vec_delete3(void *__array_address, size_t element_size,
size_t padding_size, void (*destructor)(void *),
void (*dealloc)(void *, size_t));
extern _LIBCXXABI_FUNC_VIS void
__cxa_vec_cctor(void *dest_array, void *src_array, size_t element_count,
size_t element_size, void (*constructor)(void *, void *),
void (*destructor)(void *));
// 3.3.5.3 Runtime API
extern _LIBCXXABI_FUNC_VIS int __cxa_atexit(void (*f)(void *), void *p,
void *d);
extern _LIBCXXABI_FUNC_VIS int __cxa_finalize(void *);
// 3.4 Demangler API
extern _LIBCXXABI_FUNC_VIS char *__cxa_demangle(const char *mangled_name,
char *output_buffer,
size_t *length, int *status);
// Apple additions to support C++ 0x exception_ptr class
// These are primitives to wrap a smart pointer around an exception object
extern _LIBCXXABI_FUNC_VIS void *__cxa_current_primary_exception() throw();
extern _LIBCXXABI_FUNC_VIS void
__cxa_rethrow_primary_exception(void *primary_exception);
extern _LIBCXXABI_FUNC_VIS void
__cxa_increment_exception_refcount(void *primary_exception) throw();
extern _LIBCXXABI_FUNC_VIS void
__cxa_decrement_exception_refcount(void *primary_exception) throw();
// Apple extension to support std::uncaught_exception()
extern _LIBCXXABI_FUNC_VIS bool __cxa_uncaught_exception() throw();
extern _LIBCXXABI_FUNC_VIS unsigned int __cxa_uncaught_exceptions() throw();
#if defined(__linux__) || defined(__Fuchsia__)
// Linux and Fuchsia TLS support. Not yet an official part of the Itanium ABI.
// https://sourceware.org/glibc/wiki/Destructor%20support%20for%20thread_local%20variables
extern _LIBCXXABI_FUNC_VIS int __cxa_thread_atexit(void (*)(void *), void *,
void *) throw();
#endif
} // extern "C"
} // namespace __cxxabiv1
namespace abi = __cxxabiv1;
#endif // __cplusplus
#endif // __CXXABI_H
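
To make the demangler entry point concrete, here is a small usage sketch (not part of this commit) that relies only on the declarations in this header: when the output buffer is null, __cxa_demangle allocates one with malloc that the caller must free, and the status code is written through the last parameter.

// Sketch: demangling a type name through the API declared above.
#include <cxxabi.h>
#include <typeinfo>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
    const char* mangled = typeid(std::vector<int>).name(); // e.g. "St6vectorIiSaIiEE"
    int status = 0;
    // A null buffer asks __cxa_demangle to malloc() one for us.
    char* demangled = abi::__cxa_demangle(mangled, nullptr, nullptr, &status);
    if (status == 0 && demangled != nullptr) {
        std::printf("%s\n", demangled); // e.g. "std::vector<int, std::allocator<int> >"
        std::free(demangled);
    } else {
        std::printf("demangling failed, status %d\n", status);
    }
    return 0;
}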


@@ -0,0 +1,77 @@
//===------------------------- abort_message.cpp --------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
#include "abort_message.h"
#ifdef __BIONIC__
#include <android/api-level.h>
#if __ANDROID_API__ >= 21
#include <syslog.h>
extern "C" void android_set_abort_message(const char* msg);
#else
#include <assert.h>
#endif // __ANDROID_API__ >= 21
#endif // __BIONIC__
#ifdef __APPLE__
# if defined(__has_include) && __has_include(<CrashReporterClient.h>)
# define HAVE_CRASHREPORTERCLIENT_H
# include <CrashReporterClient.h>
# endif
#endif
void abort_message(const char* format, ...)
{
// write message to stderr
#if !defined(NDEBUG) || !defined(LIBCXXABI_BAREMETAL)
#ifdef __APPLE__
fprintf(stderr, "libc++abi.dylib: ");
#endif
va_list list;
va_start(list, format);
vfprintf(stderr, format, list);
va_end(list);
fprintf(stderr, "\n");
#endif
#if defined(__APPLE__) && defined(HAVE_CRASHREPORTERCLIENT_H)
// record message in crash report
char* buffer;
va_list list2;
va_start(list2, format);
vasprintf(&buffer, format, list2);
va_end(list2);
CRSetCrashLogMessage(buffer);
#elif defined(__BIONIC__)
char* buffer;
va_list list2;
va_start(list2, format);
vasprintf(&buffer, format, list2);
va_end(list2);
#if __ANDROID_API__ >= 21
// Show error in tombstone.
android_set_abort_message(buffer);
// Show error in logcat.
openlog("libc++abi", 0, 0);
syslog(LOG_CRIT, "%s", buffer);
closelog();
#else
// The good error reporting wasn't available in Android until L. Since we're
// about to abort anyway, just call __assert2, which will log _somewhere_
// (tombstone and/or logcat) in older releases.
__assert2(__FILE__, __LINE__, __func__, buffer);
#endif // __ANDROID_API__ >= 21
#endif // __BIONIC__
abort();
}


@@ -0,0 +1,26 @@
//===-------------------------- abort_message.h-----------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef __ABORT_MESSAGE_H_
#define __ABORT_MESSAGE_H_
#include "cxxabi.h"
#ifdef __cplusplus
extern "C" {
#endif
_LIBCXXABI_HIDDEN _LIBCXXABI_NORETURN void
abort_message(const char *format, ...) __attribute__((format(printf, 1, 2)));
#ifdef __cplusplus
}
#endif
#endif
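
The format(printf, 1, 2) attribute on abort_message lets the compiler type-check the variadic arguments against the format string under -Wformat. A short sketch of the same idiom with a hypothetical helper (not part of this commit):

#include <cstdarg>
#include <cstdio>
#include <cstdlib>

// Hypothetical helper mirroring abort_message's shape; the attribute makes
// -Wformat verify the variadic arguments against the format string.
__attribute__((format(printf, 1, 2), noreturn))
static void log_fatal(const char* fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    std::vfprintf(stderr, fmt, ap);
    va_end(ap);
    std::fputc('\n', stderr);
    std::abort();
}

int main() {
    log_fatal("bad code %d", 42);      // OK: %d matches int
    // log_fatal("bad code %s", 42);   // would warn: 'char *' expected, 'int' given
}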


@@ -0,0 +1,43 @@
//===------------------------ cxa_aux_runtime.cpp -------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the "Auxiliary Runtime APIs"
// https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html#cxx-aux
//===----------------------------------------------------------------------===//
#include "cxxabi.h"
#include <new>
#include <typeinfo>
namespace __cxxabiv1 {
extern "C" {
_LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void __cxa_bad_cast(void) {
#ifndef _LIBCXXABI_NO_EXCEPTIONS
throw std::bad_cast();
#else
std::terminate();
#endif
}
_LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void __cxa_bad_typeid(void) {
#ifndef _LIBCXXABI_NO_EXCEPTIONS
throw std::bad_typeid();
#else
std::terminate();
#endif
}
_LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN void
__cxa_throw_bad_array_new_length(void) {
#ifndef _LIBCXXABI_NO_EXCEPTIONS
throw std::bad_array_new_length();
#else
std::terminate();
#endif
}
} // extern "C"
} // abi
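
These routines are not called directly by user code; the compiler emits calls to them, for example when a dynamic_cast to a reference type fails. A minimal sketch (not part of this commit) of code whose failure path goes through __cxa_bad_cast:

#include <typeinfo>
#include <cstdio>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};
struct Unrelated : Base {};

int main() {
    Base* b = new Unrelated;
    try {
        // A failed reference dynamic_cast cannot return null, so the compiler
        // calls __cxa_bad_cast, which throws std::bad_cast (or terminates when
        // exceptions are disabled, as in the #else branches above).
        Derived& d = dynamic_cast<Derived&>(*b);
        (void)d;
    } catch (const std::bad_cast& e) {
        std::printf("caught: %s\n", e.what());
    }
    delete b;
    return 0;
}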


@@ -0,0 +1,124 @@
//===------------------------- cxa_default_handlers.cpp -------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the default terminate_handler and unexpected_handler.
//===----------------------------------------------------------------------===//
#include <exception>
#include <stdlib.h>
#include "abort_message.h"
#include "cxxabi.h"
#include "cxa_handlers.h"
#include "cxa_exception.h"
#include "private_typeinfo.h"
#include "include/atomic_support.h"
#if !defined(LIBCXXABI_SILENT_TERMINATE)
_LIBCPP_SAFE_STATIC
static const char* cause = "uncaught";
__attribute__((noreturn))
static void demangling_terminate_handler()
{
#ifndef _LIBCXXABI_NO_EXCEPTIONS
// If there might be an uncaught exception
using namespace __cxxabiv1;
__cxa_eh_globals* globals = __cxa_get_globals_fast();
if (globals)
{
__cxa_exception* exception_header = globals->caughtExceptions;
// If there is an uncaught exception
if (exception_header)
{
_Unwind_Exception* unwind_exception =
reinterpret_cast<_Unwind_Exception*>(exception_header + 1) - 1;
if (__isOurExceptionClass(unwind_exception))
{
void* thrown_object =
__getExceptionClass(unwind_exception) == kOurDependentExceptionClass ?
((__cxa_dependent_exception*)exception_header)->primaryException :
exception_header + 1;
const __shim_type_info* thrown_type =
static_cast<const __shim_type_info*>(exception_header->exceptionType);
// Try to get demangled name of thrown_type
int status;
char buf[1024];
size_t len = sizeof(buf);
const char* name = __cxa_demangle(thrown_type->name(), buf, &len, &status);
if (status != 0)
name = thrown_type->name();
// If the uncaught exception can be caught with std::exception&
const __shim_type_info* catch_type =
static_cast<const __shim_type_info*>(&typeid(std::exception));
if (catch_type->can_catch(thrown_type, thrown_object))
{
// Include the what() message from the exception
const std::exception* e = static_cast<const std::exception*>(thrown_object);
abort_message("terminating with %s exception of type %s: %s",
cause, name, e->what());
}
else
// Else just note that we're terminating with an exception
abort_message("terminating with %s exception of type %s",
cause, name);
}
else
// Else we're terminating with a foreign exception
abort_message("terminating with %s foreign exception", cause);
}
}
#endif
// Else just note that we're terminating
abort_message("terminating");
}
__attribute__((noreturn))
static void demangling_unexpected_handler()
{
cause = "unexpected";
std::terminate();
}
static constexpr std::terminate_handler default_terminate_handler = demangling_terminate_handler;
static constexpr std::terminate_handler default_unexpected_handler = demangling_unexpected_handler;
#else
static constexpr std::terminate_handler default_terminate_handler = ::abort;
static constexpr std::terminate_handler default_unexpected_handler = std::terminate;
#endif
//
// Global variables that hold the pointers to the current handler
//
_LIBCXXABI_DATA_VIS
_LIBCPP_SAFE_STATIC std::terminate_handler __cxa_terminate_handler = default_terminate_handler;
_LIBCXXABI_DATA_VIS
_LIBCPP_SAFE_STATIC std::unexpected_handler __cxa_unexpected_handler = default_unexpected_handler;
namespace std
{
unexpected_handler
set_unexpected(unexpected_handler func) _NOEXCEPT
{
if (func == 0)
func = default_unexpected_handler;
return __libcpp_atomic_exchange(&__cxa_unexpected_handler, func,
_AO_Acq_Rel);
}
terminate_handler
set_terminate(terminate_handler func) _NOEXCEPT
{
if (func == 0)
func = default_terminate_handler;
return __libcpp_atomic_exchange(&__cxa_terminate_handler, func,
_AO_Acq_Rel);
}
}
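
The handlers above are what std::set_terminate and std::set_unexpected swap in and out; passing a null pointer restores the defaults. A short usage sketch (not part of this commit):

#include <exception>
#include <stdexcept>
#include <cstdio>
#include <cstdlib>

int main() {
    std::set_terminate([] {
        std::fputs("custom terminate handler\n", stderr);
        std::abort();
    });
    // With no matching catch, control reaches the installed terminate handler.
    // Under the default demangling handler above, the message would instead
    // name the uncaught exception type and include its what() string.
    throw std::runtime_error("nobody catches this");
}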


@@ -0,0 +1,366 @@
//===-------------------------- cxa_demangle.cpp --------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
// FIXME: (possibly) incomplete list of features that clang mangles that this
// file does not yet support:
// - C++ modules TS
#include "demangle/ItaniumDemangle.h"
#include "__cxxabi_config.h"
#include <cassert>
#include <cctype>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <functional>
#include <numeric>
#include <utility>
using namespace itanium_demangle;
constexpr const char *itanium_demangle::FloatData<float>::spec;
constexpr const char *itanium_demangle::FloatData<double>::spec;
constexpr const char *itanium_demangle::FloatData<long double>::spec;
// <discriminator> := _ <non-negative number> # when number < 10
// := __ <non-negative number> _ # when number >= 10
// extension := decimal-digit+ # at the end of string
const char *itanium_demangle::parse_discriminator(const char *first,
const char *last) {
// parse but ignore discriminator
if (first != last) {
if (*first == '_') {
const char *t1 = first + 1;
if (t1 != last) {
if (std::isdigit(*t1))
first = t1 + 1;
else if (*t1 == '_') {
for (++t1; t1 != last && std::isdigit(*t1); ++t1)
;
if (t1 != last && *t1 == '_')
first = t1 + 1;
}
}
} else if (std::isdigit(*first)) {
const char *t1 = first + 1;
for (; t1 != last && std::isdigit(*t1); ++t1)
;
if (t1 == last)
first = last;
}
}
return first;
}
#ifndef NDEBUG
namespace {
struct DumpVisitor {
unsigned Depth = 0;
bool PendingNewline = false;
template<typename NodeT> static constexpr bool wantsNewline(const NodeT *) {
return true;
}
static bool wantsNewline(NodeArray A) { return !A.empty(); }
static constexpr bool wantsNewline(...) { return false; }
template<typename ...Ts> static bool anyWantNewline(Ts ...Vs) {
for (bool B : {wantsNewline(Vs)...})
if (B)
return true;
return false;
}
void printStr(const char *S) { fprintf(stderr, "%s", S); }
void print(StringView SV) {
fprintf(stderr, "\"%.*s\"", (int)SV.size(), SV.begin());
}
void print(const Node *N) {
if (N)
N->visit(std::ref(*this));
else
printStr("<null>");
}
void print(NodeArray A) {
++Depth;
printStr("{");
bool First = true;
for (const Node *N : A) {
if (First)
print(N);
else
printWithComma(N);
First = false;
}
printStr("}");
--Depth;
}
// Overload used when T is exactly 'bool', not merely convertible to 'bool'.
void print(bool B) { printStr(B ? "true" : "false"); }
template <class T>
typename std::enable_if<std::is_unsigned<T>::value>::type print(T N) {
fprintf(stderr, "%llu", (unsigned long long)N);
}
template <class T>
typename std::enable_if<std::is_signed<T>::value>::type print(T N) {
fprintf(stderr, "%lld", (long long)N);
}
void print(ReferenceKind RK) {
switch (RK) {
case ReferenceKind::LValue:
return printStr("ReferenceKind::LValue");
case ReferenceKind::RValue:
return printStr("ReferenceKind::RValue");
}
}
void print(FunctionRefQual RQ) {
switch (RQ) {
case FunctionRefQual::FrefQualNone:
return printStr("FunctionRefQual::FrefQualNone");
case FunctionRefQual::FrefQualLValue:
return printStr("FunctionRefQual::FrefQualLValue");
case FunctionRefQual::FrefQualRValue:
return printStr("FunctionRefQual::FrefQualRValue");
}
}
void print(Qualifiers Qs) {
if (!Qs) return printStr("QualNone");
struct QualName { Qualifiers Q; const char *Name; } Names[] = {
{QualConst, "QualConst"},
{QualVolatile, "QualVolatile"},
{QualRestrict, "QualRestrict"},
};
for (QualName Name : Names) {
if (Qs & Name.Q) {
printStr(Name.Name);
Qs = Qualifiers(Qs & ~Name.Q);
if (Qs) printStr(" | ");
}
}
}
void print(SpecialSubKind SSK) {
switch (SSK) {
case SpecialSubKind::allocator:
return printStr("SpecialSubKind::allocator");
case SpecialSubKind::basic_string:
return printStr("SpecialSubKind::basic_string");
case SpecialSubKind::string:
return printStr("SpecialSubKind::string");
case SpecialSubKind::istream:
return printStr("SpecialSubKind::istream");
case SpecialSubKind::ostream:
return printStr("SpecialSubKind::ostream");
case SpecialSubKind::iostream:
return printStr("SpecialSubKind::iostream");
}
}
void print(TemplateParamKind TPK) {
switch (TPK) {
case TemplateParamKind::Type:
return printStr("TemplateParamKind::Type");
case TemplateParamKind::NonType:
return printStr("TemplateParamKind::NonType");
case TemplateParamKind::Template:
return printStr("TemplateParamKind::Template");
}
}
void newLine() {
printStr("\n");
for (unsigned I = 0; I != Depth; ++I)
printStr(" ");
PendingNewline = false;
}
template<typename T> void printWithPendingNewline(T V) {
print(V);
if (wantsNewline(V))
PendingNewline = true;
}
template<typename T> void printWithComma(T V) {
if (PendingNewline || wantsNewline(V)) {
printStr(",");
newLine();
} else {
printStr(", ");
}
printWithPendingNewline(V);
}
struct CtorArgPrinter {
DumpVisitor &Visitor;
template<typename T, typename ...Rest> void operator()(T V, Rest ...Vs) {
if (Visitor.anyWantNewline(V, Vs...))
Visitor.newLine();
Visitor.printWithPendingNewline(V);
int PrintInOrder[] = { (Visitor.printWithComma(Vs), 0)..., 0 };
(void)PrintInOrder;
}
};
template<typename NodeT> void operator()(const NodeT *Node) {
Depth += 2;
fprintf(stderr, "%s(", itanium_demangle::NodeKind<NodeT>::name());
Node->match(CtorArgPrinter{*this});
fprintf(stderr, ")");
Depth -= 2;
}
void operator()(const ForwardTemplateReference *Node) {
Depth += 2;
fprintf(stderr, "ForwardTemplateReference(");
if (Node->Ref && !Node->Printing) {
Node->Printing = true;
CtorArgPrinter{*this}(Node->Ref);
Node->Printing = false;
} else {
CtorArgPrinter{*this}(Node->Index);
}
fprintf(stderr, ")");
Depth -= 2;
}
};
}
void itanium_demangle::Node::dump() const {
DumpVisitor V;
visit(std::ref(V));
V.newLine();
}
#endif
namespace {
class BumpPointerAllocator {
struct BlockMeta {
BlockMeta* Next;
size_t Current;
};
static constexpr size_t AllocSize = 4096;
static constexpr size_t UsableAllocSize = AllocSize - sizeof(BlockMeta);
alignas(long double) char InitialBuffer[AllocSize];
BlockMeta* BlockList = nullptr;
void grow() {
char* NewMeta = static_cast<char *>(std::malloc(AllocSize));
if (NewMeta == nullptr)
std::terminate();
BlockList = new (NewMeta) BlockMeta{BlockList, 0};
}
void* allocateMassive(size_t NBytes) {
NBytes += sizeof(BlockMeta);
BlockMeta* NewMeta = reinterpret_cast<BlockMeta*>(std::malloc(NBytes));
if (NewMeta == nullptr)
std::terminate();
BlockList->Next = new (NewMeta) BlockMeta{BlockList->Next, 0};
return static_cast<void*>(NewMeta + 1);
}
public:
BumpPointerAllocator()
: BlockList(new (InitialBuffer) BlockMeta{nullptr, 0}) {}
void* allocate(size_t N) {
N = (N + 15u) & ~15u;
if (N + BlockList->Current >= UsableAllocSize) {
if (N > UsableAllocSize)
return allocateMassive(N);
grow();
}
BlockList->Current += N;
return static_cast<void*>(reinterpret_cast<char*>(BlockList + 1) +
BlockList->Current - N);
}
void reset() {
while (BlockList) {
BlockMeta* Tmp = BlockList;
BlockList = BlockList->Next;
if (reinterpret_cast<char*>(Tmp) != InitialBuffer)
std::free(Tmp);
}
BlockList = new (InitialBuffer) BlockMeta{nullptr, 0};
}
~BumpPointerAllocator() { reset(); }
};
class DefaultAllocator {
BumpPointerAllocator Alloc;
public:
void reset() { Alloc.reset(); }
template<typename T, typename ...Args> T *makeNode(Args &&...args) {
return new (Alloc.allocate(sizeof(T)))
T(std::forward<Args>(args)...);
}
void *allocateNodeArray(size_t sz) {
return Alloc.allocate(sizeof(Node *) * sz);
}
};
} // unnamed namespace
//===----------------------------------------------------------------------===//
// Code beyond this point should not be synchronized with LLVM.
//===----------------------------------------------------------------------===//
using Demangler = itanium_demangle::ManglingParser<DefaultAllocator>;
namespace {
enum : int {
demangle_invalid_args = -3,
demangle_invalid_mangled_name = -2,
demangle_memory_alloc_failure = -1,
demangle_success = 0,
};
}
namespace __cxxabiv1 {
extern "C" _LIBCXXABI_FUNC_VIS char *
__cxa_demangle(const char *MangledName, char *Buf, size_t *N, int *Status) {
if (MangledName == nullptr || (Buf != nullptr && N == nullptr)) {
if (Status)
*Status = demangle_invalid_args;
return nullptr;
}
int InternalStatus = demangle_success;
Demangler Parser(MangledName, MangledName + std::strlen(MangledName));
OutputStream S;
Node *AST = Parser.parse();
if (AST == nullptr)
InternalStatus = demangle_invalid_mangled_name;
else if (!initializeOutputStream(Buf, N, S, 1024))
InternalStatus = demangle_memory_alloc_failure;
else {
assert(Parser.ForwardTemplateRefs.empty());
AST->print(S);
S += '\0';
if (N != nullptr)
*N = S.getCurrentPosition();
Buf = S.getBuffer();
}
if (Status)
*Status = InternalStatus;
return InternalStatus == demangle_success ? Buf : nullptr;
}
} // __cxxabiv1
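
The anonymous enum above is the status contract __cxa_demangle reports through its last argument: 0 on success, -1 on allocation failure, -2 for a name that is not a valid Itanium mangling, and -3 for invalid arguments such as a non-null buffer with a null length pointer. A small sketch exercising those paths (not part of this commit):

#include <cxxabi.h>
#include <cstdio>
#include <cstdlib>

int main() {
    int status = 0;

    // Not a valid mangled name -> returns nullptr, status -2.
    char* r1 = abi::__cxa_demangle("definitely-not-mangled", nullptr, nullptr, &status);
    std::printf("r1=%p status=%d\n", (void*)r1, status);

    // Non-null output buffer with a null length pointer -> status -3.
    char* buf = static_cast<char*>(std::malloc(64));
    char* r2 = abi::__cxa_demangle("_Z1fv", buf, nullptr, &status);
    std::printf("r2=%p status=%d\n", (void*)r2, status);
    std::free(buf);

    // A valid mangling -> status 0; the caller frees the returned buffer.
    char* r3 = abi::__cxa_demangle("_Z1fv", nullptr, nullptr, &status);
    if (status == 0) {
        std::printf("%s\n", r3); // "f()"
        std::free(r3);
    }
    return 0;
}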


@@ -0,0 +1,755 @@
//===------------------------- cxa_exception.cpp --------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the "Exception Handling APIs"
// https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html
//
//===----------------------------------------------------------------------===//
#include "cxxabi.h"
#include <exception> // for std::terminate
#include <string.h> // for memset
#include "cxa_exception.h"
#include "cxa_handlers.h"
#include "fallback_malloc.h"
#include "include/atomic_support.h"
#if __has_feature(address_sanitizer)
extern "C" void __asan_handle_no_return(void);
#endif
// +---------------------------+-----------------------------+---------------+
// | __cxa_exception | _Unwind_Exception CLNGC++\0 | thrown object |
// +---------------------------+-----------------------------+---------------+
// ^
// |
// +-------------------------------------------------------+
// |
// +---------------------------+-----------------------------+
// | __cxa_dependent_exception | _Unwind_Exception CLNGC++\1 |
// +---------------------------+-----------------------------+
namespace __cxxabiv1 {
// Utility routines
static
inline
__cxa_exception*
cxa_exception_from_thrown_object(void* thrown_object)
{
return static_cast<__cxa_exception*>(thrown_object) - 1;
}
// Note: This is never called when exception_header is masquerading as a
// __cxa_dependent_exception.
static
inline
void*
thrown_object_from_cxa_exception(__cxa_exception* exception_header)
{
return static_cast<void*>(exception_header + 1);
}
// Get the exception object from the unwind pointer.
// Relies on the structure layout, where the unwind pointer is right in
// front of the user's exception object
static
inline
__cxa_exception*
cxa_exception_from_exception_unwind_exception(_Unwind_Exception* unwind_exception)
{
return cxa_exception_from_thrown_object(unwind_exception + 1 );
}
// Round s up to next multiple of a.
static inline
size_t aligned_allocation_size(size_t s, size_t a) {
return (s + a - 1) & ~(a - 1);
}
static inline
size_t cxa_exception_size_from_exception_thrown_size(size_t size) {
return aligned_allocation_size(size + sizeof (__cxa_exception),
alignof(__cxa_exception));
}
void __setExceptionClass(_Unwind_Exception* unwind_exception, uint64_t newValue) {
::memcpy(&unwind_exception->exception_class, &newValue, sizeof(newValue));
}
static void setOurExceptionClass(_Unwind_Exception* unwind_exception) {
__setExceptionClass(unwind_exception, kOurExceptionClass);
}
static void setDependentExceptionClass(_Unwind_Exception* unwind_exception) {
__setExceptionClass(unwind_exception, kOurDependentExceptionClass);
}
// Is it one of ours?
uint64_t __getExceptionClass(const _Unwind_Exception* unwind_exception) {
// On x86 and some ARM unwinders, unwind_exception->exception_class is
// a uint64_t. On other ARM unwinders, it is a char[8].
// See: http://infocenter.arm.com/help/topic/com.arm.doc.ihi0038b/IHI0038B_ehabi.pdf
// So we just copy it into a uint64_t to be sure.
uint64_t exClass;
::memcpy(&exClass, &unwind_exception->exception_class, sizeof(exClass));
return exClass;
}
bool __isOurExceptionClass(const _Unwind_Exception* unwind_exception) {
return (__getExceptionClass(unwind_exception) & get_vendor_and_language) ==
(kOurExceptionClass & get_vendor_and_language);
}
static bool isDependentException(_Unwind_Exception* unwind_exception) {
return (__getExceptionClass(unwind_exception) & 0xFF) == 0x01;
}
// This does not need to be atomic
static inline int incrementHandlerCount(__cxa_exception *exception) {
return ++exception->handlerCount;
}
// This does not need to be atomic
static inline int decrementHandlerCount(__cxa_exception *exception) {
return --exception->handlerCount;
}
/*
If reason isn't _URC_FOREIGN_EXCEPTION_CAUGHT, then the terminateHandler
stored in exc is called. Otherwise the exceptionDestructor stored in
exc is called, and then the memory for the exception is deallocated.
This is never called for a __cxa_dependent_exception.
*/
static
void
exception_cleanup_func(_Unwind_Reason_Code reason, _Unwind_Exception* unwind_exception)
{
__cxa_exception* exception_header = cxa_exception_from_exception_unwind_exception(unwind_exception);
if (_URC_FOREIGN_EXCEPTION_CAUGHT != reason)
std::__terminate(exception_header->terminateHandler);
// Just in case there exists a dependent exception that is pointing to this,
// check the reference count and only destroy this if that count goes to zero.
__cxa_decrement_exception_refcount(unwind_exception + 1);
}
static _LIBCXXABI_NORETURN void failed_throw(__cxa_exception* exception_header) {
// Section 2.5.3 says:
// * For purposes of this ABI, several things are considered exception handlers:
// ** A terminate() call due to a throw.
// and
// * Upon entry, Following initialization of the catch parameter,
// a handler must call:
// * void *__cxa_begin_catch(void *exceptionObject );
(void) __cxa_begin_catch(&exception_header->unwindHeader);
std::__terminate(exception_header->terminateHandler);
}
// Return the offset of the __cxa_exception header from the start of the
// allocated buffer. If __cxa_exception's alignment is smaller than the maximum
// useful alignment for the target machine, padding has to be inserted before
// the header to ensure the thrown object that follows the header is
// sufficiently aligned. This happens if _Unwind_exception isn't double-word
// aligned (on Darwin, for example).
static size_t get_cxa_exception_offset() {
struct S {
} __attribute__((aligned));
// Compute the maximum alignment for the target machine.
constexpr size_t alignment = alignof(S);
constexpr size_t excp_size = sizeof(__cxa_exception);
constexpr size_t aligned_size =
(excp_size + alignment - 1) / alignment * alignment;
constexpr size_t offset = aligned_size - excp_size;
static_assert((offset == 0 || alignof(_Unwind_Exception) < alignment),
"offset is non-zero only if _Unwind_Exception isn't aligned");
return offset;
}
extern "C" {
// Allocate a __cxa_exception object, and zero-fill it.
// Reserve "thrown_size" bytes on the end for the user's exception
// object. Zero-fill the object. If memory can't be allocated, call
// std::terminate. Return a pointer to the memory to be used for the
// user's exception object.
void *__cxa_allocate_exception(size_t thrown_size) throw() {
size_t actual_size = cxa_exception_size_from_exception_thrown_size(thrown_size);
// Allocate extra space before the __cxa_exception header to ensure the
// start of the thrown object is sufficiently aligned.
size_t header_offset = get_cxa_exception_offset();
char *raw_buffer =
(char *)__aligned_malloc_with_fallback(header_offset + actual_size);
if (NULL == raw_buffer)
std::terminate();
__cxa_exception *exception_header =
static_cast<__cxa_exception *>((void *)(raw_buffer + header_offset));
::memset(exception_header, 0, actual_size);
return thrown_object_from_cxa_exception(exception_header);
}
// Free a __cxa_exception object allocated with __cxa_allocate_exception.
void __cxa_free_exception(void *thrown_object) throw() {
// Compute the size of the padding before the header.
size_t header_offset = get_cxa_exception_offset();
char *raw_buffer =
((char *)cxa_exception_from_thrown_object(thrown_object)) - header_offset;
__aligned_free_with_fallback((void *)raw_buffer);
}
// This function shall allocate a __cxa_dependent_exception and
// return a pointer to it. (Really to the object, not past its end).
// Otherwise, it will work like __cxa_allocate_exception.
void * __cxa_allocate_dependent_exception () {
size_t actual_size = sizeof(__cxa_dependent_exception);
void *ptr = __aligned_malloc_with_fallback(actual_size);
if (NULL == ptr)
std::terminate();
::memset(ptr, 0, actual_size);
return ptr;
}
// This function shall free a dependent_exception.
// It does not affect the reference count of the primary exception.
void __cxa_free_dependent_exception (void * dependent_exception) {
__aligned_free_with_fallback(dependent_exception);
}
// 2.4.3 Throwing the Exception Object
/*
After constructing the exception object with the throw argument value,
the generated code calls the __cxa_throw runtime library routine. This
routine never returns.
The __cxa_throw routine will do the following:
* Obtain the __cxa_exception header from the thrown exception object address,
which can be computed as follows:
__cxa_exception *header = ((__cxa_exception *) thrown_exception - 1);
* Save the current unexpected_handler and terminate_handler in the __cxa_exception header.
* Save the tinfo and dest arguments in the __cxa_exception header.
* Set the exception_class field in the unwind header. This is a 64-bit value
representing the ASCII string "XXXXC++\0", where "XXXX" is a
vendor-dependent string. That is, for implementations conforming to this
ABI, the low-order 4 bytes of this 64-bit value will be "C++\0".
* Increment the uncaught_exception flag.
* Call _Unwind_RaiseException in the system unwind library, Its argument is the
pointer to the thrown exception, which __cxa_throw itself received as an argument.
__Unwind_RaiseException begins the process of stack unwinding, described
in Section 2.5. In special cases, such as an inability to find a
handler, _Unwind_RaiseException may return. In that case, __cxa_throw
will call terminate, assuming that there was no handler for the
exception.
*/
void
__cxa_throw(void *thrown_object, std::type_info *tinfo, void (*dest)(void *)) {
__cxa_eh_globals *globals = __cxa_get_globals();
__cxa_exception* exception_header = cxa_exception_from_thrown_object(thrown_object);
exception_header->unexpectedHandler = std::get_unexpected();
exception_header->terminateHandler = std::get_terminate();
exception_header->exceptionType = tinfo;
exception_header->exceptionDestructor = dest;
setOurExceptionClass(&exception_header->unwindHeader);
exception_header->referenceCount = 1; // This is a newly allocated exception, no need for thread safety.
globals->uncaughtExceptions += 1; // Not atomically, since globals are thread-local
exception_header->unwindHeader.exception_cleanup = exception_cleanup_func;
#if __has_feature(address_sanitizer)
// Inform the ASan runtime that now might be a good time to clean stuff up.
__asan_handle_no_return();
#endif
#ifdef __USING_SJLJ_EXCEPTIONS__
_Unwind_SjLj_RaiseException(&exception_header->unwindHeader);
#else
_Unwind_RaiseException(&exception_header->unwindHeader);
#endif
// This only happens when there is no handler, or some unexpected unwinding
// error happens.
failed_throw(exception_header);
}
// 2.5.3 Exception Handlers
/*
The adjusted pointer is computed by the personality routine during phase 1
and saved in the exception header (either __cxa_exception or
__cxa_dependent_exception).
Requires: exception is native
*/
void *__cxa_get_exception_ptr(void *unwind_exception) throw() {
#if defined(_LIBCXXABI_ARM_EHABI)
return reinterpret_cast<void*>(
static_cast<_Unwind_Control_Block*>(unwind_exception)->barrier_cache.bitpattern[0]);
#else
return cxa_exception_from_exception_unwind_exception(
static_cast<_Unwind_Exception*>(unwind_exception))->adjustedPtr;
#endif
}
#if defined(_LIBCXXABI_ARM_EHABI)
/*
The routine to be called before the cleanup. This will save __cxa_exception in
__cxa_eh_globals, so that __cxa_end_cleanup() can recover later.
*/
bool __cxa_begin_cleanup(void *unwind_arg) throw() {
_Unwind_Exception* unwind_exception = static_cast<_Unwind_Exception*>(unwind_arg);
__cxa_eh_globals* globals = __cxa_get_globals();
__cxa_exception* exception_header =
cxa_exception_from_exception_unwind_exception(unwind_exception);
if (__isOurExceptionClass(unwind_exception))
{
if (0 == exception_header->propagationCount)
{
exception_header->nextPropagatingException = globals->propagatingExceptions;
globals->propagatingExceptions = exception_header;
}
++exception_header->propagationCount;
}
else
{
// If the propagatingExceptions stack is not empty, since we can't
// chain the foreign exception, terminate it.
if (NULL != globals->propagatingExceptions)
std::terminate();
globals->propagatingExceptions = exception_header;
}
return true;
}
/*
The routine to be called after the cleanup has been performed. It will get the
propagating __cxa_exception from __cxa_eh_globals, and continue the stack
unwinding with _Unwind_Resume.
According to ARM EHABI 8.4.1, __cxa_end_cleanup() should not clobber any
register, thus we have to write this function in assembly so that we can save
{r1, r2, r3}. We don't have to save r0 because it is the return value and the
first argument to _Unwind_Resume(). In addition, we are saving r4 in order to
align the stack to 16 bytes, even though it is a callee-save register.
*/
__attribute__((used)) static _Unwind_Exception *
__cxa_end_cleanup_impl()
{
__cxa_eh_globals* globals = __cxa_get_globals();
__cxa_exception* exception_header = globals->propagatingExceptions;
if (NULL == exception_header)
{
// It seems that __cxa_begin_cleanup() is not called properly.
// We have no choice but terminate the program now.
std::terminate();
}
if (__isOurExceptionClass(&exception_header->unwindHeader))
{
--exception_header->propagationCount;
if (0 == exception_header->propagationCount)
{
globals->propagatingExceptions = exception_header->nextPropagatingException;
exception_header->nextPropagatingException = NULL;
}
}
else
{
globals->propagatingExceptions = NULL;
}
return &exception_header->unwindHeader;
}
asm (
" .pushsection .text.__cxa_end_cleanup,\"ax\",%progbits\n"
" .globl __cxa_end_cleanup\n"
" .type __cxa_end_cleanup,%function\n"
"__cxa_end_cleanup:\n"
" push {r1, r2, r3, r4}\n"
" bl __cxa_end_cleanup_impl\n"
" pop {r1, r2, r3, r4}\n"
" bl _Unwind_Resume\n"
" bl abort\n"
" .popsection"
);
#endif // defined(_LIBCXXABI_ARM_EHABI)
/*
This routine can catch foreign or native exceptions. If native, the exception
can be a primary or dependent variety. This routine may remain blissfully
ignorant of whether the native exception is primary or dependent.
If the exception is native:
* Increment's the exception's handler count.
* Push the exception on the stack of currently-caught exceptions if it is not
already there (from a rethrow).
* Decrements the uncaught_exception count.
* Returns the adjusted pointer to the exception object, which is stored in
the __cxa_exception by the personality routine.
If the exception is foreign, this means it did not originate from one of throw
routines. The foreign exception does not necessarily have a __cxa_exception
header. However we can catch it here with a catch (...), or with a call
to terminate or unexpected during unwinding.
* Do not try to increment the exception's handler count, we don't know where
it is.
* Push the exception on the stack of currently-caught exceptions only if the
stack is empty. The foreign exception has no way to link to the current
top of stack. If the stack is not empty, call terminate. Even with an
empty stack, this is hacked in by pushing a pointer to an imaginary
__cxa_exception block in front of the foreign exception. It would be better
if the __cxa_eh_globals structure had a stack of _Unwind_Exception, but it
doesn't. It has a stack of __cxa_exception (which has a next* in it).
* Do not decrement the uncaught_exception count because we didn't increment it
in __cxa_throw (or one of our rethrow functions).
* If we haven't terminated, assume the exception object is just past the
_Unwind_Exception and return a pointer to that.
*/
void*
__cxa_begin_catch(void* unwind_arg) throw()
{
_Unwind_Exception* unwind_exception = static_cast<_Unwind_Exception*>(unwind_arg);
bool native_exception = __isOurExceptionClass(unwind_exception);
__cxa_eh_globals* globals = __cxa_get_globals();
// exception_header is a hackish offset from a foreign exception, but it
// works as long as we're careful not to try to access any __cxa_exception
// parts.
__cxa_exception* exception_header =
cxa_exception_from_exception_unwind_exception
(
static_cast<_Unwind_Exception*>(unwind_exception)
);
if (native_exception)
{
// Increment the handler count, removing the flag about being rethrown
exception_header->handlerCount = exception_header->handlerCount < 0 ?
-exception_header->handlerCount + 1 : exception_header->handlerCount + 1;
// place the exception on the top of the stack if it's not already
// there by a previous rethrow
if (exception_header != globals->caughtExceptions)
{
exception_header->nextException = globals->caughtExceptions;
globals->caughtExceptions = exception_header;
}
globals->uncaughtExceptions -= 1; // Not atomically, since globals are thread-local
#if defined(_LIBCXXABI_ARM_EHABI)
return reinterpret_cast<void*>(exception_header->unwindHeader.barrier_cache.bitpattern[0]);
#else
return exception_header->adjustedPtr;
#endif
}
// Else this is a foreign exception
// If the caughtExceptions stack is not empty, terminate
if (globals->caughtExceptions != 0)
std::terminate();
// Push the foreign exception on to the stack
globals->caughtExceptions = exception_header;
return unwind_exception + 1;
}
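// Illustrative sketch, not part of the upstream source: a compiler lowers a
// C++ handler into paired calls to these routines. For a handler such as
//
//     try { may_throw(); }                         // may_throw/log are
//     catch (const std::exception& e) { log(e); }  // hypothetical names
//
// the generated landing pad behaves roughly like:
//
//     void* adjusted = __cxa_begin_catch(unwind_exception); // thrown object
//     log(*static_cast<const std::exception*>(adjusted));
//     __cxa_end_catch();   // see below for what this undoes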
/*
Upon exit for any reason, a handler must call:
void __cxa_end_catch ();
This routine can be called for either a native or foreign exception.
For a native exception:
* Locates the most recently caught exception and decrements its handler count.
* Removes the exception from the caught exception stack, if the handler count goes to zero.
* If the handler count goes down to zero, and the exception was not re-thrown
by throw, it locates the primary exception (which may be the same as the one
it's handling) and decrements its reference count. If that reference count
goes to zero, the function destroys the exception. In any case, if the current
exception is a dependent exception, it destroys that.
For a foreign exception:
* If it has been rethrown, there is nothing to do.
* Otherwise delete the exception and pop the catch stack to empty.
*/
void __cxa_end_catch() {
static_assert(sizeof(__cxa_exception) == sizeof(__cxa_dependent_exception),
"sizeof(__cxa_exception) must be equal to "
"sizeof(__cxa_dependent_exception)");
static_assert(__builtin_offsetof(__cxa_exception, referenceCount) ==
__builtin_offsetof(__cxa_dependent_exception,
primaryException),
"the layout of __cxa_exception must match the layout of "
"__cxa_dependent_exception");
static_assert(__builtin_offsetof(__cxa_exception, handlerCount) ==
__builtin_offsetof(__cxa_dependent_exception, handlerCount),
"the layout of __cxa_exception must match the layout of "
"__cxa_dependent_exception");
__cxa_eh_globals* globals = __cxa_get_globals_fast(); // __cxa_get_globals called in __cxa_begin_catch
__cxa_exception* exception_header = globals->caughtExceptions;
// If we've rethrown a foreign exception, then globals->caughtExceptions
// will have been made an empty stack by __cxa_rethrow() and there is
// nothing more to be done. Do nothing!
if (NULL != exception_header)
{
bool native_exception = __isOurExceptionClass(&exception_header->unwindHeader);
if (native_exception)
{
// This is a native exception
if (exception_header->handlerCount < 0)
{
// The exception has been rethrown by __cxa_rethrow, so don't delete it
if (0 == incrementHandlerCount(exception_header))
{
// Remove from the chain of caught exceptions
globals->caughtExceptions = exception_header->nextException;
// but don't destroy
}
// Keep handlerCount negative in case there are nested catches
// that need to be told that this exception is rethrown. Don't
// erase this rethrow flag until the exception is recaught.
}
else
{
// The native exception has not been rethrown
if (0 == decrementHandlerCount(exception_header))
{
// Remove from the chain of caught exceptions
globals->caughtExceptions = exception_header->nextException;
// Destroy this exception, being careful to distinguish
// between dependent and primary exceptions
if (isDependentException(&exception_header->unwindHeader))
{
// Reset exception_header to primaryException and deallocate the dependent exception
__cxa_dependent_exception* dep_exception_header =
reinterpret_cast<__cxa_dependent_exception*>(exception_header);
exception_header =
cxa_exception_from_thrown_object(dep_exception_header->primaryException);
__cxa_free_dependent_exception(dep_exception_header);
}
// Destroy the primary exception only if its referenceCount goes to 0
// (this decrement must be atomic)
__cxa_decrement_exception_refcount(thrown_object_from_cxa_exception(exception_header));
}
}
}
else
{
// The foreign exception has not been rethrown. Pop the stack
// and delete it. If there are nested catches and they try
// to touch a foreign exception in any way, that is undefined
// behavior. They likely can't since the only way to catch
// a foreign exception is with catch (...)!
_Unwind_DeleteException(&globals->caughtExceptions->unwindHeader);
globals->caughtExceptions = 0;
}
}
}
// Note: exception_header may be masquerading as a __cxa_dependent_exception
// and that's ok. exceptionType is there too.
// However watch out for foreign exceptions. Return null for them.
std::type_info *__cxa_current_exception_type() {
// get the current exception
__cxa_eh_globals *globals = __cxa_get_globals_fast();
if (NULL == globals)
return NULL; // If there have never been any exceptions, there are none now.
__cxa_exception *exception_header = globals->caughtExceptions;
if (NULL == exception_header)
return NULL; // No current exception
if (!__isOurExceptionClass(&exception_header->unwindHeader))
return NULL;
return exception_header->exceptionType;
}
// 2.5.4 Rethrowing Exceptions
/* This routine can rethrow native or foreign exceptions.
If the exception is native:
* marks the exception object on top of the caughtExceptions stack
(in an implementation-defined way) as being rethrown.
* If the caughtExceptions stack is empty, it calls terminate()
(see [C++FDIS] [except.throw], 15.1.8).
* It then calls _Unwind_RaiseException which should not return
(terminate if it does).
Note: exception_header may be masquerading as a __cxa_dependent_exception
and that's ok.
*/
void __cxa_rethrow() {
__cxa_eh_globals* globals = __cxa_get_globals();
__cxa_exception* exception_header = globals->caughtExceptions;
if (NULL == exception_header)
std::terminate(); // throw; called outside of an exception handler
bool native_exception = __isOurExceptionClass(&exception_header->unwindHeader);
if (native_exception)
{
// Mark the exception as being rethrown (reverse the effects of __cxa_begin_catch)
exception_header->handlerCount = -exception_header->handlerCount;
globals->uncaughtExceptions += 1;
// __cxa_end_catch will remove this exception from the caughtExceptions stack if necessary
}
else // this is a foreign exception
{
// The only way to communicate to __cxa_end_catch that we've rethrown
// a foreign exception (so it must not delete it) is to pop the stack
// here, leaving it empty. __cxa_end_catch will then do nothing.
globals->caughtExceptions = 0;
}
#ifdef __USING_SJLJ_EXCEPTIONS__
_Unwind_SjLj_RaiseException(&exception_header->unwindHeader);
#else
_Unwind_RaiseException(&exception_header->unwindHeader);
#endif
// If we get here, some kind of unwinding error has occurred.
// There is a weird code generation bug in Apple clang version 4.0
// (tags/Apple/clang-418.0.2, based on LLVM 3.1svn) if we call
// failed_throw here; it turns up with -O2 or higher, and with -Os.
__cxa_begin_catch(&exception_header->unwindHeader);
if (native_exception)
std::__terminate(exception_header->terminateHandler);
// Foreign exception: can't get exception_header->terminateHandler
std::terminate();
}
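// Illustrative sketch, not part of the upstream source: a bare `throw;` inside
// a handler is compiled into a call to __cxa_rethrow(). For example:
//
//     try { throw std::runtime_error("boom"); }
//     catch (...) {
//         throw;   // becomes __cxa_rethrow(): marks the top of the
//                  // caughtExceptions stack as rethrown and resumes unwinding
//     }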
/*
If thrown_object is not null, atomically increment the referenceCount field
of the __cxa_exception header associated with the thrown object referred to
by thrown_object.
Requires: If thrown_object is not NULL, it is a native exception.
*/
void
__cxa_increment_exception_refcount(void *thrown_object) throw() {
if (thrown_object != NULL )
{
__cxa_exception* exception_header = cxa_exception_from_thrown_object(thrown_object);
std::__libcpp_atomic_add(&exception_header->referenceCount, size_t(1));
}
}
/*
If thrown_object is not null, atomically decrement the referenceCount field
of the __cxa_exception header associated with the thrown object referred to
by thrown_object. If the referenceCount drops to zero, destroy and
deallocate the exception.
Requires: If thrown_object is not NULL, it is a native exception.
*/
_LIBCXXABI_NO_CFI
void __cxa_decrement_exception_refcount(void *thrown_object) throw() {
if (thrown_object != NULL )
{
__cxa_exception* exception_header = cxa_exception_from_thrown_object(thrown_object);
if (std::__libcpp_atomic_add(&exception_header->referenceCount, size_t(-1)) == 0)
{
if (NULL != exception_header->exceptionDestructor)
exception_header->exceptionDestructor(thrown_object);
__cxa_free_exception(thrown_object);
}
}
}
/*
Returns a pointer to the thrown object (if any) at the top of the
caughtExceptions stack. Atomically increment the exception's referenceCount.
If there is no such thrown object or if the thrown object is foreign,
returns null.
We can use __cxa_get_globals_fast here to get the globals because if there have
been no exceptions thrown, ever, on this thread, we can return NULL without
the need to allocate the exception-handling globals.
*/
void *__cxa_current_primary_exception() throw() {
// get the current exception
__cxa_eh_globals* globals = __cxa_get_globals_fast();
if (NULL == globals)
return NULL; // If there are no globals, there is no exception
__cxa_exception* exception_header = globals->caughtExceptions;
if (NULL == exception_header)
return NULL; // No current exception
if (!__isOurExceptionClass(&exception_header->unwindHeader))
return NULL; // Can't capture a foreign exception (no way to refcount it)
if (isDependentException(&exception_header->unwindHeader)) {
__cxa_dependent_exception* dep_exception_header =
reinterpret_cast<__cxa_dependent_exception*>(exception_header);
exception_header = cxa_exception_from_thrown_object(dep_exception_header->primaryException);
}
void* thrown_object = thrown_object_from_cxa_exception(exception_header);
__cxa_increment_exception_refcount(thrown_object);
return thrown_object;
}
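// Illustrative sketch, not part of the upstream source: libc++'s
// std::exception_ptr is typically built on these reference-counting entry
// points, roughly as follows:
//
//     std::exception_ptr p;
//     try { throw 42; }
//     catch (...) {
//         p = std::current_exception();  // __cxa_current_primary_exception()
//     }
//     // Copying p maps to __cxa_increment_exception_refcount(), destroying it
//     // to __cxa_decrement_exception_refcount(), and std::rethrow_exception(p)
//     // to __cxa_rethrow_primary_exception() below.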
/*
If reason isn't _URC_FOREIGN_EXCEPTION_CAUGHT, then the terminateHandler
stored in exc is called. Otherwise the referenceCount stored in the
primary exception is decremented, destroying the primary if necessary.
Finally the dependent exception is destroyed.
*/
static
void
dependent_exception_cleanup(_Unwind_Reason_Code reason, _Unwind_Exception* unwind_exception)
{
__cxa_dependent_exception* dep_exception_header =
reinterpret_cast<__cxa_dependent_exception*>(unwind_exception + 1) - 1;
if (_URC_FOREIGN_EXCEPTION_CAUGHT != reason)
std::__terminate(dep_exception_header->terminateHandler);
__cxa_decrement_exception_refcount(dep_exception_header->primaryException);
__cxa_free_dependent_exception(dep_exception_header);
}
/*
If thrown_object is not null, allocate, initialize and throw a dependent
exception.
*/
void
__cxa_rethrow_primary_exception(void* thrown_object)
{
if ( thrown_object != NULL )
{
// thrown_object guaranteed to be native because
// __cxa_current_primary_exception returns NULL for foreign exceptions
__cxa_exception* exception_header = cxa_exception_from_thrown_object(thrown_object);
__cxa_dependent_exception* dep_exception_header =
static_cast<__cxa_dependent_exception*>(__cxa_allocate_dependent_exception());
dep_exception_header->primaryException = thrown_object;
__cxa_increment_exception_refcount(thrown_object);
dep_exception_header->exceptionType = exception_header->exceptionType;
dep_exception_header->unexpectedHandler = std::get_unexpected();
dep_exception_header->terminateHandler = std::get_terminate();
setDependentExceptionClass(&dep_exception_header->unwindHeader);
__cxa_get_globals()->uncaughtExceptions += 1;
dep_exception_header->unwindHeader.exception_cleanup = dependent_exception_cleanup;
#ifdef __USING_SJLJ_EXCEPTIONS__
_Unwind_SjLj_RaiseException(&dep_exception_header->unwindHeader);
#else
_Unwind_RaiseException(&dep_exception_header->unwindHeader);
#endif
// Some sort of unwinding error. Note that terminate is a handler.
__cxa_begin_catch(&dep_exception_header->unwindHeader);
}
// If we return, the client will call terminate()
}
bool
__cxa_uncaught_exception() throw() { return __cxa_uncaught_exceptions() != 0; }
unsigned int
__cxa_uncaught_exceptions() throw()
{
// This does not report foreign exceptions in flight
__cxa_eh_globals* globals = __cxa_get_globals_fast();
if (globals == 0)
return 0;
return globals->uncaughtExceptions;
}
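// Illustrative sketch, not part of the upstream source: libc++'s
// std::uncaught_exceptions() typically forwards to __cxa_uncaught_exceptions(),
// which makes unwinding detectable from a destructor:
//
//     struct Transaction {                 // hypothetical type
//         int entry_count = std::uncaught_exceptions();
//         ~Transaction() {
//             if (std::uncaught_exceptions() > entry_count)
//                 ; // being destroyed by stack unwinding: roll back here
//         }
//     };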
} // extern "C"
} // abi

View File

@@ -0,0 +1,164 @@
//===------------------------- cxa_exception.h ----------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the "Exception Handling APIs"
// https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html
//
//===----------------------------------------------------------------------===//
#ifndef _CXA_EXCEPTION_H
#define _CXA_EXCEPTION_H
#include <exception> // for std::unexpected_handler and std::terminate_handler
#include "cxxabi.h"
#include "unwind.h"
namespace __cxxabiv1 {
static const uint64_t kOurExceptionClass = 0x434C4E47432B2B00; // CLNGC++\0
static const uint64_t kOurDependentExceptionClass = 0x434C4E47432B2B01; // CLNGC++\1
static const uint64_t get_vendor_and_language = 0xFFFFFFFFFFFFFF00; // mask for CLNGC++
_LIBCXXABI_HIDDEN uint64_t __getExceptionClass (const _Unwind_Exception*);
_LIBCXXABI_HIDDEN void __setExceptionClass ( _Unwind_Exception*, uint64_t);
_LIBCXXABI_HIDDEN bool __isOurExceptionClass(const _Unwind_Exception*);
struct _LIBCXXABI_HIDDEN __cxa_exception {
#if defined(__LP64__) || defined(_WIN64) || defined(_LIBCXXABI_ARM_EHABI)
// Now _Unwind_Exception is marked with __attribute__((aligned)),
// which implies __cxa_exception is also aligned. Insert padding
// at the beginning of the struct, rather than before unwindHeader.
void *reserve;
// This is a new field to support C++0x exception_ptr.
// For binary compatibility it is at the start of this
// struct which is prepended to the object thrown in
// __cxa_allocate_exception.
size_t referenceCount;
#endif
// Manage the exception object itself.
std::type_info *exceptionType;
void (*exceptionDestructor)(void *);
std::unexpected_handler unexpectedHandler;
std::terminate_handler terminateHandler;
__cxa_exception *nextException;
int handlerCount;
#if defined(_LIBCXXABI_ARM_EHABI)
__cxa_exception* nextPropagatingException;
int propagationCount;
#else
int handlerSwitchValue;
const unsigned char *actionRecord;
const unsigned char *languageSpecificData;
void *catchTemp;
void *adjustedPtr;
#endif
#if !defined(__LP64__) && !defined(_WIN64) && !defined(_LIBCXXABI_ARM_EHABI)
// This is a new field to support C++0x exception_ptr.
// For binary compatibility it is placed where the compiler
// previously added padding to 64-bit align unwindHeader.
size_t referenceCount;
#endif
_Unwind_Exception unwindHeader;
};
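// Illustrative note, not part of the upstream source: the header above is laid
// out immediately before the thrown object, so the two pointers differ by
// exactly one header. The conversion helpers in cxa_exception.cpp amount to
// pointer arithmetic of this shape:
//
//     __cxa_exception* header_from_object(void* thrown_object) {
//         return static_cast<__cxa_exception*>(thrown_object) - 1;
//     }
//     void* object_from_header(__cxa_exception* header) {
//         return header + 1;
//     }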
// http://sourcery.mentor.com/archives/cxx-abi-dev/msg01924.html
// The layout of this structure MUST match the layout of __cxa_exception, with
// primaryException instead of referenceCount.
struct _LIBCXXABI_HIDDEN __cxa_dependent_exception {
#if defined(__LP64__) || defined(_WIN64) || defined(_LIBCXXABI_ARM_EHABI)
void* reserve; // padding.
void* primaryException;
#endif
std::type_info *exceptionType;
void (*exceptionDestructor)(void *);
std::unexpected_handler unexpectedHandler;
std::terminate_handler terminateHandler;
__cxa_exception *nextException;
int handlerCount;
#if defined(_LIBCXXABI_ARM_EHABI)
__cxa_exception* nextPropagatingException;
int propagationCount;
#else
int handlerSwitchValue;
const unsigned char *actionRecord;
const unsigned char *languageSpecificData;
void * catchTemp;
void *adjustedPtr;
#endif
#if !defined(__LP64__) && !defined(_WIN64) && !defined(_LIBCXXABI_ARM_EHABI)
void* primaryException;
#endif
_Unwind_Exception unwindHeader;
};
// Verify the negative offsets of different fields.
static_assert(sizeof(_Unwind_Exception) +
offsetof(__cxa_exception, unwindHeader) ==
sizeof(__cxa_exception),
"unwindHeader has wrong negative offsets");
static_assert(sizeof(_Unwind_Exception) +
offsetof(__cxa_dependent_exception, unwindHeader) ==
sizeof(__cxa_dependent_exception),
"unwindHeader has wrong negative offsets");
#if defined(_LIBCXXABI_ARM_EHABI)
static_assert(offsetof(__cxa_exception, propagationCount) +
sizeof(_Unwind_Exception) + sizeof(void*) ==
sizeof(__cxa_exception),
"propagationCount has wrong negative offset");
static_assert(offsetof(__cxa_dependent_exception, propagationCount) +
sizeof(_Unwind_Exception) + sizeof(void*) ==
sizeof(__cxa_dependent_exception),
"propagationCount has wrong negative offset");
#elif defined(__LP64__) || defined(_WIN64)
static_assert(offsetof(__cxa_exception, adjustedPtr) +
sizeof(_Unwind_Exception) + sizeof(void*) ==
sizeof(__cxa_exception),
"adjustedPtr has wrong negative offset");
static_assert(offsetof(__cxa_dependent_exception, adjustedPtr) +
sizeof(_Unwind_Exception) + sizeof(void*) ==
sizeof(__cxa_dependent_exception),
"adjustedPtr has wrong negative offset");
#else
static_assert(offsetof(__cxa_exception, referenceCount) +
sizeof(_Unwind_Exception) + sizeof(void*) ==
sizeof(__cxa_exception),
"referenceCount has wrong negative offset");
static_assert(offsetof(__cxa_dependent_exception, primaryException) +
sizeof(_Unwind_Exception) + sizeof(void*) ==
sizeof(__cxa_dependent_exception),
"primaryException has wrong negative offset");
#endif
struct _LIBCXXABI_HIDDEN __cxa_eh_globals {
__cxa_exception * caughtExceptions;
unsigned int uncaughtExceptions;
#if defined(_LIBCXXABI_ARM_EHABI)
__cxa_exception* propagatingExceptions;
#endif
};
extern "C" _LIBCXXABI_FUNC_VIS __cxa_eh_globals * __cxa_get_globals ();
extern "C" _LIBCXXABI_FUNC_VIS __cxa_eh_globals * __cxa_get_globals_fast ();
extern "C" _LIBCXXABI_FUNC_VIS void * __cxa_allocate_dependent_exception ();
extern "C" _LIBCXXABI_FUNC_VIS void __cxa_free_dependent_exception (void * dependent_exception);
} // namespace __cxxabiv1
#endif // _CXA_EXCEPTION_H

View File

@@ -0,0 +1,105 @@
//===--------------------- cxa_exception_storage.cpp ----------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the storage for the "Caught Exception Stack"
// https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html#cxx-exc-stack
//
//===----------------------------------------------------------------------===//
#include "cxa_exception.h"
#include <__threading_support>
#if defined(_LIBCXXABI_HAS_NO_THREADS)
namespace __cxxabiv1 {
extern "C" {
static __cxa_eh_globals eh_globals;
__cxa_eh_globals *__cxa_get_globals() { return &eh_globals; }
__cxa_eh_globals *__cxa_get_globals_fast() { return &eh_globals; }
}
}
#elif defined(HAS_THREAD_LOCAL)
namespace __cxxabiv1 {
namespace {
__cxa_eh_globals * __globals () {
static thread_local __cxa_eh_globals eh_globals;
return &eh_globals;
}
}
extern "C" {
__cxa_eh_globals * __cxa_get_globals () { return __globals (); }
__cxa_eh_globals * __cxa_get_globals_fast () { return __globals (); }
}
}
#else
#include "abort_message.h"
#include "fallback_malloc.h"
#if defined(__ELF__) && defined(_LIBCXXABI_LINK_PTHREAD_LIB)
#pragma comment(lib, "pthread")
#endif
// In general, we treat all threading errors as fatal.
// We cannot call std::terminate() because that will in turn
// call __cxa_get_globals() and cause infinite recursion.
namespace __cxxabiv1 {
namespace {
std::__libcpp_tls_key key_;
std::__libcpp_exec_once_flag flag_ = _LIBCPP_EXEC_ONCE_INITIALIZER;
void _LIBCPP_TLS_DESTRUCTOR_CC destruct_ (void *p) {
__free_with_fallback ( p );
if ( 0 != std::__libcpp_tls_set ( key_, NULL ) )
abort_message("cannot zero out thread value for __cxa_get_globals()");
}
void construct_ () {
if ( 0 != std::__libcpp_tls_create ( &key_, destruct_ ) )
abort_message("cannot create thread specific key for __cxa_get_globals()");
}
}
extern "C" {
__cxa_eh_globals * __cxa_get_globals () {
// Try to get the globals for this thread
__cxa_eh_globals* retVal = __cxa_get_globals_fast ();
// If this is the first time we've been asked for these globals, create them
if ( NULL == retVal ) {
retVal = static_cast<__cxa_eh_globals*>
(__calloc_with_fallback (1, sizeof (__cxa_eh_globals)));
if ( NULL == retVal )
abort_message("cannot allocate __cxa_eh_globals");
if ( 0 != std::__libcpp_tls_set ( key_, retVal ) )
abort_message("std::__libcpp_tls_set failure in __cxa_get_globals()");
}
return retVal;
}
// Note that this implementation will reliably return NULL if not
// preceded by a call to __cxa_get_globals(). This is an extension
// to the Itanium ABI and is taken advantage of in several places in
// libc++abi.
__cxa_eh_globals * __cxa_get_globals_fast () {
// First time through, create the key.
if (0 != std::__libcpp_execute_once(&flag_, construct_))
abort_message("execute once failure in __cxa_get_globals_fast()");
// static int init = construct_();
return static_cast<__cxa_eh_globals*>(std::__libcpp_tls_get(key_));
}
}
}
#endif

View File

@@ -0,0 +1,53 @@
//===---------------------------- cxa_guard.cpp ---------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "__cxxabi_config.h"
#include "cxxabi.h"
// Tell the implementation that we're building the actual implementation
// (and not testing it)
#define BUILDING_CXA_GUARD
#include "cxa_guard_impl.h"
/*
This implementation must be careful to not call code external to this file
which will turn around and try to call __cxa_guard_acquire reentrantly.
For this reason, the headers of this file are as restricted as possible.
Previous implementations of this code for __APPLE__ have used
std::__libcpp_mutex_lock and the abort_message utility without problem. This
implementation also uses std::__libcpp_condvar_wait which has been
tested not to be a problem.
*/
namespace __cxxabiv1 {
#if defined(_LIBCXXABI_GUARD_ABI_ARM)
using guard_type = uint32_t;
#else
using guard_type = uint64_t;
#endif
extern "C"
{
_LIBCXXABI_FUNC_VIS int __cxa_guard_acquire(guard_type* raw_guard_object) {
SelectedImplementation imp(raw_guard_object);
return static_cast<int>(imp.cxa_guard_acquire());
}
_LIBCXXABI_FUNC_VIS void __cxa_guard_release(guard_type *raw_guard_object) {
SelectedImplementation imp(raw_guard_object);
imp.cxa_guard_release();
}
_LIBCXXABI_FUNC_VIS void __cxa_guard_abort(guard_type *raw_guard_object) {
SelectedImplementation imp(raw_guard_object);
imp.cxa_guard_abort();
}
} // extern "C"
} // __cxxabiv1

View File

@@ -0,0 +1,567 @@
//===----------------------------------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef LIBCXXABI_SRC_INCLUDE_CXA_GUARD_IMPL_H
#define LIBCXXABI_SRC_INCLUDE_CXA_GUARD_IMPL_H
/* cxa_guard_impl.h - Implements the C++ runtime support for function local
* static guards.
* The layout of the guard object is the same across ARM and Itanium.
*
* The first "guard byte" (which is checked by the compiler) is set only upon
* the completion of __cxa_guard_release().
*
* The second "init byte" does the rest of the bookkeeping. It tracks if
* initialization is complete or pending, and if there are waiting threads.
*
* If the guard variable is 64 bits and the platform supplies a 32-bit thread
* identifier, it is used to detect recursive initialization. The thread ID of
* the thread currently performing initialization is stored in the second word.
*
* Guard Object Layout:
* -------------------------------------------------------------------------
* |a: guard byte | a+1: init byte | a+2 : unused ... | a+4: thread-id ... |
* ------------------------------------------------------------------------
*
* Access Protocol:
* For each implementation the guard byte is checked and set before accessing
* the init byte.
*
* Overall Design:
* The implementation was designed to allow each variant to be tested
* independently of the C++ runtime or platform support.
*
*/
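// Illustrative sketch, not part of the upstream source: for a function-local
// static such as
//
//     Widget& instance() {          // `Widget` is a hypothetical type
//         static Widget w;
//         return w;
//     }
//
// the compiler emits initialization code roughly equivalent to:
//
//     if (__cxa_guard_acquire(&guard_for_w)) {   // non-zero: init is needed
//         try {
//             /* construct w in its static storage */
//             __cxa_guard_release(&guard_for_w); // sets the guard byte
//         } catch (...) {
//             __cxa_guard_abort(&guard_for_w);   // lets another thread retry
//             throw;
//         }
//     }
//
// where guard_for_w is the compiler-emitted guard object described above.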
#include "__cxxabi_config.h"
#include "include/atomic_support.h"
#include <unistd.h>
#include <sys/types.h>
#if defined(__has_include)
# if __has_include(<sys/syscall.h>)
# include <sys/syscall.h>
# endif
#endif
#include <stdlib.h>
#include <__threading_support>
#ifndef _LIBCXXABI_HAS_NO_THREADS
#if defined(__ELF__) && defined(_LIBCXXABI_LINK_PTHREAD_LIB)
#pragma comment(lib, "pthread")
#endif
#endif
// To make testing possible, this header is included from both cxa_guard.cpp
// and a number of tests.
//
// For this reason we place everything in an anonymous namespace -- even though
// we're in a header. We want the actual implementation and the tests to have
// unique definitions of the types in this header (since the tests may depend
// on function local statics).
//
// To enforce this either `BUILDING_CXA_GUARD` or `TESTING_CXA_GUARD` must be
// defined when including this file. Only `src/cxa_guard.cpp` should define
// the former.
#ifdef BUILDING_CXA_GUARD
# include "abort_message.h"
# define ABORT_WITH_MESSAGE(...) ::abort_message(__VA_ARGS__)
#elif defined(TESTING_CXA_GUARD)
# define ABORT_WITH_MESSAGE(...) ::abort()
#else
# error "Either BUILDING_CXA_GUARD or TESTING_CXA_GUARD must be defined"
#endif
#if __has_feature(thread_sanitizer)
extern "C" void __tsan_acquire(void*);
extern "C" void __tsan_release(void*);
#else
#define __tsan_acquire(addr) ((void)0)
#define __tsan_release(addr) ((void)0)
#endif
namespace __cxxabiv1 {
// Use an anonymous namespace to ensure that the tests and actual implementation
// have unique definitions of these symbols.
namespace {
//===----------------------------------------------------------------------===//
// Misc Utilities
//===----------------------------------------------------------------------===//
template <class T, T(*Init)()>
struct LazyValue {
LazyValue() : is_init(false) {}
T& get() {
if (!is_init) {
value = Init();
is_init = true;
}
return value;
}
private:
T value;
bool is_init = false;
};
//===----------------------------------------------------------------------===//
// PlatformGetThreadID
//===----------------------------------------------------------------------===//
#if defined(__APPLE__) && defined(_LIBCPP_HAS_THREAD_API_PTHREAD)
uint32_t PlatformThreadID() {
static_assert(sizeof(mach_port_t) == sizeof(uint32_t), "");
return static_cast<uint32_t>(
pthread_mach_thread_np(std::__libcpp_thread_get_current_id()));
}
#elif defined(SYS_gettid) && defined(_LIBCPP_HAS_THREAD_API_PTHREAD)
uint32_t PlatformThreadID() {
static_assert(sizeof(pid_t) == sizeof(uint32_t), "");
return static_cast<uint32_t>(syscall(SYS_gettid));
}
#else
constexpr uint32_t (*PlatformThreadID)() = nullptr;
#endif
constexpr bool PlatformSupportsThreadID() {
#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wtautological-pointer-compare"
#endif
return +PlatformThreadID != nullptr;
#ifdef __clang__
#pragma clang diagnostic pop
#endif
}
//===----------------------------------------------------------------------===//
// GuardBase
//===----------------------------------------------------------------------===//
enum class AcquireResult {
INIT_IS_DONE,
INIT_IS_PENDING,
};
constexpr AcquireResult INIT_IS_DONE = AcquireResult::INIT_IS_DONE;
constexpr AcquireResult INIT_IS_PENDING = AcquireResult::INIT_IS_PENDING;
static constexpr uint8_t UNSET = 0;
static constexpr uint8_t COMPLETE_BIT = (1 << 0);
static constexpr uint8_t PENDING_BIT = (1 << 1);
static constexpr uint8_t WAITING_BIT = (1 << 2);
template <class Derived>
struct GuardObject {
GuardObject() = delete;
GuardObject(GuardObject const&) = delete;
GuardObject& operator=(GuardObject const&) = delete;
explicit GuardObject(uint32_t* g)
: base_address(g), guard_byte_address(reinterpret_cast<uint8_t*>(g)),
init_byte_address(reinterpret_cast<uint8_t*>(g) + 1),
thread_id_address(nullptr) {}
explicit GuardObject(uint64_t* g)
: base_address(g), guard_byte_address(reinterpret_cast<uint8_t*>(g)),
init_byte_address(reinterpret_cast<uint8_t*>(g) + 1),
thread_id_address(reinterpret_cast<uint32_t*>(g) + 1) {}
public:
/// Implements __cxa_guard_acquire
AcquireResult cxa_guard_acquire() {
AtomicInt<uint8_t> guard_byte(guard_byte_address);
if (guard_byte.load(std::_AO_Acquire) != UNSET)
return INIT_IS_DONE;
return derived()->acquire_init_byte();
}
/// Implements __cxa_guard_release
void cxa_guard_release() {
AtomicInt<uint8_t> guard_byte(guard_byte_address);
// Store complete first, so that when release wakes other folks, they see
// it as having been completed.
guard_byte.store(COMPLETE_BIT, std::_AO_Release);
derived()->release_init_byte();
}
/// Implements __cxa_guard_abort
void cxa_guard_abort() { derived()->abort_init_byte(); }
public:
/// base_address - the address of the original guard object.
void* const base_address;
/// The address of the guard byte at offset 0.
uint8_t* const guard_byte_address;
/// The address of the byte used by the implementation during initialization.
uint8_t* const init_byte_address;
/// An optional address storing an identifier for the thread performing initialization.
/// It's used to detect recursive initialization.
uint32_t* const thread_id_address;
private:
Derived* derived() { return static_cast<Derived*>(this); }
};
//===----------------------------------------------------------------------===//
// Single Threaded Implementation
//===----------------------------------------------------------------------===//
struct InitByteNoThreads : GuardObject<InitByteNoThreads> {
using GuardObject::GuardObject;
AcquireResult acquire_init_byte() {
if (*init_byte_address == COMPLETE_BIT)
return INIT_IS_DONE;
if (*init_byte_address & PENDING_BIT)
ABORT_WITH_MESSAGE("__cxa_guard_acquire detected recursive initialization");
*init_byte_address = PENDING_BIT;
return INIT_IS_PENDING;
}
void release_init_byte() { *init_byte_address = COMPLETE_BIT; }
void abort_init_byte() { *init_byte_address = UNSET; }
};
//===----------------------------------------------------------------------===//
// Global Mutex Implementation
//===----------------------------------------------------------------------===//
struct LibcppMutex;
struct LibcppCondVar;
#ifndef _LIBCXXABI_HAS_NO_THREADS
struct LibcppMutex {
LibcppMutex() = default;
LibcppMutex(LibcppMutex const&) = delete;
LibcppMutex& operator=(LibcppMutex const&) = delete;
bool lock() { return std::__libcpp_mutex_lock(&mutex); }
bool unlock() { return std::__libcpp_mutex_unlock(&mutex); }
private:
friend struct LibcppCondVar;
std::__libcpp_mutex_t mutex = _LIBCPP_MUTEX_INITIALIZER;
};
struct LibcppCondVar {
LibcppCondVar() = default;
LibcppCondVar(LibcppCondVar const&) = delete;
LibcppCondVar& operator=(LibcppCondVar const&) = delete;
bool wait(LibcppMutex& mut) {
return std::__libcpp_condvar_wait(&cond, &mut.mutex);
}
bool broadcast() { return std::__libcpp_condvar_broadcast(&cond); }
private:
std::__libcpp_condvar_t cond = _LIBCPP_CONDVAR_INITIALIZER;
};
#else
struct LibcppMutex {};
struct LibcppCondVar {};
#endif // !defined(_LIBCXXABI_HAS_NO_THREADS)
template <class Mutex, class CondVar, Mutex& global_mutex, CondVar& global_cond,
uint32_t (*GetThreadID)() = PlatformThreadID>
struct InitByteGlobalMutex
: GuardObject<InitByteGlobalMutex<Mutex, CondVar, global_mutex, global_cond,
GetThreadID>> {
using BaseT = typename InitByteGlobalMutex::GuardObject;
using BaseT::BaseT;
explicit InitByteGlobalMutex(uint32_t *g)
: BaseT(g), has_thread_id_support(false) {}
explicit InitByteGlobalMutex(uint64_t *g)
: BaseT(g), has_thread_id_support(PlatformSupportsThreadID()) {}
public:
AcquireResult acquire_init_byte() {
LockGuard g("__cxa_guard_acquire");
// Check for possible recursive initialization.
if (has_thread_id_support && (*init_byte_address & PENDING_BIT)) {
if (*thread_id_address == current_thread_id.get())
ABORT_WITH_MESSAGE("__cxa_guard_acquire detected recursive initialization");
}
// Wait until the pending bit is not set.
while (*init_byte_address & PENDING_BIT) {
*init_byte_address |= WAITING_BIT;
global_cond.wait(global_mutex);
}
if (*init_byte_address == COMPLETE_BIT)
return INIT_IS_DONE;
if (has_thread_id_support)
*thread_id_address = current_thread_id.get();
*init_byte_address = PENDING_BIT;
return INIT_IS_PENDING;
}
void release_init_byte() {
bool has_waiting;
{
LockGuard g("__cxa_guard_release");
has_waiting = *init_byte_address & WAITING_BIT;
*init_byte_address = COMPLETE_BIT;
}
if (has_waiting) {
if (global_cond.broadcast()) {
ABORT_WITH_MESSAGE("%s failed to broadcast", "__cxa_guard_release");
}
}
}
void abort_init_byte() {
bool has_waiting;
{
LockGuard g("__cxa_guard_abort");
if (has_thread_id_support)
*thread_id_address = 0;
has_waiting = *init_byte_address & WAITING_BIT;
*init_byte_address = UNSET;
}
if (has_waiting) {
if (global_cond.broadcast()) {
ABORT_WITH_MESSAGE("%s failed to broadcast", "__cxa_guard_abort");
}
}
}
private:
using BaseT::init_byte_address;
using BaseT::thread_id_address;
const bool has_thread_id_support;
LazyValue<uint32_t, GetThreadID> current_thread_id;
private:
struct LockGuard {
LockGuard() = delete;
LockGuard(LockGuard const&) = delete;
LockGuard& operator=(LockGuard const&) = delete;
explicit LockGuard(const char* calling_func)
: calling_func(calling_func) {
if (global_mutex.lock())
ABORT_WITH_MESSAGE("%s failed to acquire mutex", calling_func);
}
~LockGuard() {
if (global_mutex.unlock())
ABORT_WITH_MESSAGE("%s failed to release mutex", calling_func);
}
private:
const char* const calling_func;
};
};
//===----------------------------------------------------------------------===//
// Futex Implementation
//===----------------------------------------------------------------------===//
#if defined(SYS_futex)
void PlatformFutexWait(int* addr, int expect) {
constexpr int WAIT = 0;
syscall(SYS_futex, addr, WAIT, expect, 0);
__tsan_acquire(addr);
}
void PlatformFutexWake(int* addr) {
constexpr int WAKE = 1;
__tsan_release(addr);
syscall(SYS_futex, addr, WAKE, INT_MAX);
}
#else
constexpr void (*PlatformFutexWait)(int*, int) = nullptr;
constexpr void (*PlatformFutexWake)(int*) = nullptr;
#endif
constexpr bool PlatformSupportsFutex() {
#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wtautological-pointer-compare"
#endif
return +PlatformFutexWait != nullptr;
#ifdef __clang__
#pragma clang diagnostic pop
#endif
}
/// InitByteFutex - Manages initialization using atomics and the futex syscall
/// for waiting and waking.
template <void (*Wait)(int*, int) = PlatformFutexWait,
void (*Wake)(int*) = PlatformFutexWake,
uint32_t (*GetThreadIDArg)() = PlatformThreadID>
struct InitByteFutex : GuardObject<InitByteFutex<Wait, Wake, GetThreadIDArg>> {
using BaseT = typename InitByteFutex::GuardObject;
/// ARM Constructor
explicit InitByteFutex(uint32_t *g) : BaseT(g),
init_byte(this->init_byte_address),
has_thread_id_support(this->thread_id_address && GetThreadIDArg),
thread_id(this->thread_id_address) {}
/// Itanium Constructor
explicit InitByteFutex(uint64_t *g) : BaseT(g),
init_byte(this->init_byte_address),
has_thread_id_support(this->thread_id_address && GetThreadIDArg),
thread_id(this->thread_id_address) {}
public:
AcquireResult acquire_init_byte() {
while (true) {
uint8_t last_val = UNSET;
if (init_byte.compare_exchange(&last_val, PENDING_BIT, std::_AO_Acq_Rel,
std::_AO_Acquire)) {
if (has_thread_id_support) {
thread_id.store(current_thread_id.get(), std::_AO_Relaxed);
}
return INIT_IS_PENDING;
}
if (last_val == COMPLETE_BIT)
return INIT_IS_DONE;
if (last_val & PENDING_BIT) {
// Check for recursive initialization
if (has_thread_id_support && thread_id.load(std::_AO_Relaxed) == current_thread_id.get()) {
ABORT_WITH_MESSAGE("__cxa_guard_acquire detected recursive initialization");
}
if ((last_val & WAITING_BIT) == 0) {
// This compare exchange can fail for several reasons
// (1) another thread finished the whole thing before we got here
// (2) another thread set the waiting bit we were trying to set
// (3) another thread had an exception and failed to finish
if (!init_byte.compare_exchange(&last_val, PENDING_BIT | WAITING_BIT,
std::_AO_Acq_Rel, std::_AO_Release)) {
// (1) success, via someone else's work!
if (last_val == COMPLETE_BIT)
return INIT_IS_DONE;
// (3) someone else bailed on doing the work; retry from the start!
if (last_val == UNSET)
continue;
// (2) the waiting bit got set, so we are happy to keep waiting
}
}
wait_on_initialization();
}
}
}
void release_init_byte() {
uint8_t old = init_byte.exchange(COMPLETE_BIT, std::_AO_Acq_Rel);
if (old & WAITING_BIT)
wake_all();
}
void abort_init_byte() {
if (has_thread_id_support)
thread_id.store(0, std::_AO_Relaxed);
uint8_t old = init_byte.exchange(0, std::_AO_Acq_Rel);
if (old & WAITING_BIT)
wake_all();
}
private:
/// Use the futex to wait on the current guard variable. Futex expects a
/// 32-bit 4-byte aligned address as the first argument, so we have to use
/// the base address of the guard variable (not the init byte).
void wait_on_initialization() {
Wait(static_cast<int*>(this->base_address),
expected_value_for_futex(PENDING_BIT | WAITING_BIT));
}
void wake_all() { Wake(static_cast<int*>(this->base_address)); }
private:
AtomicInt<uint8_t> init_byte;
const bool has_thread_id_support;
// Unsafe to use unless has_thread_id_support
AtomicInt<uint32_t> thread_id;
LazyValue<uint32_t, GetThreadIDArg> current_thread_id;
/// Create the expected integer value for futex `wait(int* addr, int expected)`.
/// We pass the base address as the first argument, so this function creates
/// a zero-initialized integer with `b` copied at the correct offset.
static int expected_value_for_futex(uint8_t b) {
int dest_val = 0;
std::memcpy(reinterpret_cast<char*>(&dest_val) + 1, &b, 1);
return dest_val;
}
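// For example (a worked value, not part of the upstream source): with
// b == (PENDING_BIT | WAITING_BIT) == 0x06 and the init byte at offset 1,
// this yields 0x00000600 on a little-endian target, i.e. the 32-bit word the
// futex compares against while waiters sleep.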
static_assert(Wait != nullptr && Wake != nullptr, "");
};
//===----------------------------------------------------------------------===//
//
//===----------------------------------------------------------------------===//
template <class T>
struct GlobalStatic {
static T instance;
};
template <class T>
_LIBCPP_SAFE_STATIC T GlobalStatic<T>::instance = {};
enum class Implementation {
NoThreads,
GlobalLock,
Futex
};
template <Implementation Impl>
struct SelectImplementation;
template <>
struct SelectImplementation<Implementation::NoThreads> {
using type = InitByteNoThreads;
};
template <>
struct SelectImplementation<Implementation::GlobalLock> {
using type = InitByteGlobalMutex<
LibcppMutex, LibcppCondVar, GlobalStatic<LibcppMutex>::instance,
GlobalStatic<LibcppCondVar>::instance, PlatformThreadID>;
};
template <>
struct SelectImplementation<Implementation::Futex> {
using type =
InitByteFutex<PlatformFutexWait, PlatformFutexWake, PlatformThreadID>;
};
// TODO(EricWF): We should prefer the futex implementation when available. But
// it should be done in a separate step from adding the implementation.
constexpr Implementation CurrentImplementation =
#if defined(_LIBCXXABI_HAS_NO_THREADS)
Implementation::NoThreads;
#elif defined(_LIBCXXABI_USE_FUTEX)
Implementation::Futex;
#else
Implementation::GlobalLock;
#endif
static_assert(CurrentImplementation != Implementation::Futex
|| PlatformSupportsFutex(), "Futex selected but not supported");
using SelectedImplementation =
SelectImplementation<CurrentImplementation>::type;
} // end namespace
} // end namespace __cxxabiv1
#endif // LIBCXXABI_SRC_INCLUDE_CXA_GUARD_IMPL_H

View File

@@ -0,0 +1,111 @@
//===------------------------- cxa_handlers.cpp ---------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the functionality associated with the terminate_handler,
// unexpected_handler, and new_handler.
//===----------------------------------------------------------------------===//
#include <stdexcept>
#include <new>
#include <exception>
#include "abort_message.h"
#include "cxxabi.h"
#include "cxa_handlers.h"
#include "cxa_exception.h"
#include "private_typeinfo.h"
#include "include/atomic_support.h"
namespace std
{
unexpected_handler
get_unexpected() _NOEXCEPT
{
return __libcpp_atomic_load(&__cxa_unexpected_handler, _AO_Acquire);
}
void
__unexpected(unexpected_handler func)
{
func();
// unexpected handler should not return
abort_message("unexpected_handler unexpectedly returned");
}
__attribute__((noreturn))
void
unexpected()
{
__unexpected(get_unexpected());
}
terminate_handler
get_terminate() _NOEXCEPT
{
return __libcpp_atomic_load(&__cxa_terminate_handler, _AO_Acquire);
}
void
__terminate(terminate_handler func) _NOEXCEPT
{
#ifndef _LIBCXXABI_NO_EXCEPTIONS
try
{
#endif // _LIBCXXABI_NO_EXCEPTIONS
func();
// handler should not return
abort_message("terminate_handler unexpectedly returned");
#ifndef _LIBCXXABI_NO_EXCEPTIONS
}
catch (...)
{
// handler should not throw exception
abort_message("terminate_handler unexpectedly threw an exception");
}
#endif // _LIBCXXABI_NO_EXCEPTIONS
}
__attribute__((noreturn))
void
terminate() _NOEXCEPT
{
#ifndef _LIBCXXABI_NO_EXCEPTIONS
// If there might be an uncaught exception
using namespace __cxxabiv1;
__cxa_eh_globals* globals = __cxa_get_globals_fast();
if (globals)
{
__cxa_exception* exception_header = globals->caughtExceptions;
if (exception_header)
{
_Unwind_Exception* unwind_exception =
reinterpret_cast<_Unwind_Exception*>(exception_header + 1) - 1;
if (__isOurExceptionClass(unwind_exception))
__terminate(exception_header->terminateHandler);
}
}
#endif
__terminate(get_terminate());
}
extern "C" {
new_handler __cxa_new_handler = 0;
}
new_handler
set_new_handler(new_handler handler) _NOEXCEPT
{
return __libcpp_atomic_exchange(&__cxa_new_handler, handler, _AO_Acq_Rel);
}
new_handler
get_new_handler() _NOEXCEPT
{
return __libcpp_atomic_load(&__cxa_new_handler, _AO_Acquire);
}
} // std
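// Illustrative sketch, not part of the upstream source: the handler storage
// above backs the public <new> API, e.g.
//
//     std::set_new_handler([] { std::abort(); });   // stored in __cxa_new_handler
//     std::new_handler h = std::get_new_handler();  // atomic load of the same slot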

View File

@@ -0,0 +1,55 @@
//===------------------------- cxa_handlers.h -----------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the functionality associated with the terminate_handler,
// unexpected_handler, and new_handler.
//===----------------------------------------------------------------------===//
#ifndef _CXA_HANDLERS_H
#define _CXA_HANDLERS_H
#include <__cxxabi_config.h>
#include <exception>
namespace std
{
_LIBCXXABI_HIDDEN _LIBCXXABI_NORETURN
void
__unexpected(unexpected_handler func);
_LIBCXXABI_HIDDEN _LIBCXXABI_NORETURN
void
__terminate(terminate_handler func) _NOEXCEPT;
} // std
extern "C"
{
_LIBCXXABI_DATA_VIS extern void (*__cxa_terminate_handler)();
_LIBCXXABI_DATA_VIS extern void (*__cxa_unexpected_handler)();
_LIBCXXABI_DATA_VIS extern void (*__cxa_new_handler)();
/*
At some point in the future these three symbols will become
C++11 atomic variables:
extern std::atomic<std::terminate_handler> __cxa_terminate_handler;
extern std::atomic<std::unexpected_handler> __cxa_unexpected_handler;
extern std::atomic<std::new_handler> __cxa_new_handler;
This change will not impact their ABI. But it will allow for a
portable performance optimization.
*/
} // extern "C"
#endif // _CXA_HANDLERS_H

View File

@@ -0,0 +1,59 @@
//===------------------------- cxa_exception.cpp --------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the "Exception Handling APIs"
// https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html
//
//===----------------------------------------------------------------------===//
// Support functions for the no-exceptions libc++ library
#include "cxxabi.h"
#include <exception> // for std::terminate
#include "cxa_exception.h"
#include "cxa_handlers.h"
namespace __cxxabiv1 {
extern "C" {
void
__cxa_increment_exception_refcount(void *thrown_object) throw() {
if (thrown_object != nullptr)
std::terminate();
}
void
__cxa_decrement_exception_refcount(void *thrown_object) throw() {
if (thrown_object != nullptr)
std::terminate();
}
void *__cxa_current_primary_exception() throw() { return nullptr; }
void
__cxa_rethrow_primary_exception(void* thrown_object) {
if (thrown_object != nullptr)
std::terminate();
}
bool
__cxa_uncaught_exception() throw() { return false; }
unsigned int
__cxa_uncaught_exceptions() throw() { return 0; }
} // extern "C"
// provide dummy implementations for the 'no exceptions' case.
uint64_t __getExceptionClass (const _Unwind_Exception*) { return 0; }
void __setExceptionClass ( _Unwind_Exception*, uint64_t) {}
bool __isOurExceptionClass(const _Unwind_Exception*) { return false; }
} // abi

File diff suppressed because it is too large

View File

@@ -0,0 +1,145 @@
//===----------------------- cxa_thread_atexit.cpp ------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "abort_message.h"
#include "cxxabi.h"
#include <__threading_support>
#ifndef _LIBCXXABI_HAS_NO_THREADS
#if defined(__ELF__) && defined(_LIBCXXABI_LINK_PTHREAD_LIB)
#pragma comment(lib, "pthread")
#endif
#endif
#include <stdlib.h>
namespace __cxxabiv1 {
using Dtor = void(*)(void*);
extern "C"
#ifndef HAVE___CXA_THREAD_ATEXIT_IMPL
// A weak symbol is used to detect this function's presence in the C library
// at runtime, even if libc++ is built against an older libc
_LIBCXXABI_WEAK
#endif
int __cxa_thread_atexit_impl(Dtor, void*, void*);
#ifndef HAVE___CXA_THREAD_ATEXIT_IMPL
namespace {
// This implementation is used if the C library does not provide
// __cxa_thread_atexit_impl() for us. It has a number of limitations that are
// difficult to impossible to address without ..._impl():
//
// - dso_symbol is ignored. This means that a shared library may be unloaded
// (via dlclose()) before its thread_local destructors have run.
//
// - thread_local destructors for the main thread are run by the destructor of
// a static object. This is later than expected; they should run before the
// destructors of any objects with static storage duration.
//
// - thread_local destructors on non-main threads run on the first iteration
// through the __libcpp_tls_key destructors.
// std::notify_all_at_thread_exit() and similar functions must be careful to
// wait until the second iteration to provide their intended ordering
// guarantees.
//
// Another limitation, though one shared with ..._impl(), is that any
// thread_locals that are first initialized after non-thread_local global
// destructors begin to run will not be destroyed. [basic.start.term] states
// that all thread_local destructors are sequenced before the destruction of
// objects with static storage duration, resulting in a contradiction if a
// thread_local is constructed after that point. Thus we consider such
// programs ill-formed, and don't bother to run those destructors. (If the
// program terminates abnormally after such a thread_local is constructed,
// the destructor is not expected to run and thus there is no contradiction.
// So construction still has to work.)
struct DtorList {
Dtor dtor;
void* obj;
DtorList* next;
};
// The linked list of thread-local destructors to run
__thread DtorList* dtors = nullptr;
// True if the destructors are currently scheduled to run on this thread
__thread bool dtors_alive = false;
// Used to trigger destructors on thread exit; value is ignored
std::__libcpp_tls_key dtors_key;
void run_dtors(void*) {
while (auto head = dtors) {
dtors = head->next;
head->dtor(head->obj);
::free(head);
}
dtors_alive = false;
}
struct DtorsManager {
DtorsManager() {
// There is intentionally no matching std::__libcpp_tls_delete call, as
// __cxa_thread_atexit() may be called arbitrarily late (for example, from
// global destructors or atexit() handlers).
if (std::__libcpp_tls_create(&dtors_key, run_dtors) != 0) {
abort_message("std::__libcpp_tls_create() failed in __cxa_thread_atexit()");
}
}
~DtorsManager() {
// std::__libcpp_tls_key destructors do not run on threads that call exit()
// (including when the main thread returns from main()), so we explicitly
// call the destructor here. This runs at exit time (potentially earlier
// if libc++abi is dlclose()'d). Any thread_locals initialized after this
// point will not be destroyed.
run_dtors(nullptr);
}
};
} // namespace
#endif // HAVE___CXA_THREAD_ATEXIT_IMPL
extern "C" {
_LIBCXXABI_FUNC_VIS int __cxa_thread_atexit(Dtor dtor, void* obj, void* dso_symbol) throw() {
#ifdef HAVE___CXA_THREAD_ATEXIT_IMPL
return __cxa_thread_atexit_impl(dtor, obj, dso_symbol);
#else
if (__cxa_thread_atexit_impl) {
return __cxa_thread_atexit_impl(dtor, obj, dso_symbol);
} else {
// Initialize the dtors std::__libcpp_tls_key (uses __cxa_guard_*() for
// one-time initialization and __cxa_atexit() for destruction)
static DtorsManager manager;
if (!dtors_alive) {
if (std::__libcpp_tls_set(dtors_key, &dtors_key) != 0) {
return -1;
}
dtors_alive = true;
}
auto head = static_cast<DtorList*>(::malloc(sizeof(DtorList)));
if (!head) {
return -1;
}
head->dtor = dtor;
head->obj = obj;
head->next = dtors;
dtors = head;
return 0;
}
#endif // HAVE___CXA_THREAD_ATEXIT_IMPL
}
} // extern "C"
} // namespace __cxxabiv1
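// Illustrative sketch, not part of the upstream source: for a thread_local
// object with a non-trivial destructor, e.g.
//
//     thread_local Logger tls_logger;      // `Logger` is a hypothetical type
//
// the compiler-generated code that constructs tls_logger on first use in each
// thread also registers its cleanup roughly as:
//
//     extern "C" void logger_dtor(void* p);  // thunk invoking ~Logger()
//     __cxa_thread_atexit(&logger_dtor, &tls_logger, &__dso_handle);
//
// so the destructor runs at that thread's exit, via __cxa_thread_atexit_impl()
// when the C library provides it and via the fallback above otherwise.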

View File

@@ -0,0 +1,22 @@
//===------------------------- cxa_unexpected.cpp -------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include <exception>
#include "cxxabi.h"
#include "cxa_exception.h"
namespace __cxxabiv1
{
extern "C"
{
}
} // namespace __cxxabiv1

View File

@@ -0,0 +1,421 @@
//===-------------------------- cxa_vector.cpp ---------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the "Array Construction and Destruction APIs"
// https://itanium-cxx-abi.github.io/cxx-abi/abi.html#array-ctor
//
//===----------------------------------------------------------------------===//
#include "cxxabi.h"
#include "__cxxabi_config.h"
#include <exception> // for std::terminate
#include <new> // for std::bad_array_new_length
#include "abort_message.h"
#ifndef __has_builtin
#define __has_builtin(x) 0
#endif
namespace __cxxabiv1 {
#if 0
#pragma mark --Helper routines and classes --
#endif
namespace {
inline static size_t __get_element_count ( void *p ) {
return static_cast <size_t *> (p)[-1];
}
inline static void __set_element_count ( void *p, size_t element_count ) {
static_cast <size_t *> (p)[-1] = element_count;
}
// A pair of classes to simplify exception handling and control flow.
// They get passed a block of memory in the constructor, and unless the
// 'release' method is called, they deallocate the memory in the destructor.
// Preferred usage is to allocate some memory, attach it to one of these objects,
// and then, when all the operations to set up the memory block have succeeded,
// call 'release'. If any of the setup operations fail, or an exception is
// thrown, then the block is automatically deallocated.
//
// The only difference between these two classes is the signature for the
// deallocation function (to match new2/new3 and delete2/delete3).
class st_heap_block2 {
public:
typedef void (*dealloc_f)(void *);
st_heap_block2 ( dealloc_f dealloc, void *ptr )
: dealloc_ ( dealloc ), ptr_ ( ptr ), enabled_ ( true ) {}
~st_heap_block2 () { if ( enabled_ ) dealloc_ ( ptr_ ) ; }
void release () { enabled_ = false; }
private:
dealloc_f dealloc_;
void *ptr_;
bool enabled_;
};
class st_heap_block3 {
public:
typedef void (*dealloc_f)(void *, size_t);
st_heap_block3 ( dealloc_f dealloc, void *ptr, size_t size )
: dealloc_ ( dealloc ), ptr_ ( ptr ), size_ ( size ), enabled_ ( true ) {}
~st_heap_block3 () { if ( enabled_ ) dealloc_ ( ptr_, size_ ) ; }
void release () { enabled_ = false; }
private:
dealloc_f dealloc_;
void *ptr_;
size_t size_;
bool enabled_;
};
class st_cxa_cleanup {
public:
typedef void (*destruct_f)(void *);
st_cxa_cleanup ( void *ptr, size_t &idx, size_t element_size, destruct_f destructor )
: ptr_ ( ptr ), idx_ ( idx ), element_size_ ( element_size ),
destructor_ ( destructor ), enabled_ ( true ) {}
~st_cxa_cleanup () {
if ( enabled_ )
__cxa_vec_cleanup ( ptr_, idx_, element_size_, destructor_ );
}
void release () { enabled_ = false; }
private:
void *ptr_;
size_t &idx_;
size_t element_size_;
destruct_f destructor_;
bool enabled_;
};
class st_terminate {
public:
st_terminate ( bool enabled = true ) : enabled_ ( enabled ) {}
~st_terminate () { if ( enabled_ ) std::terminate (); }
void release () { enabled_ = false; }
private:
bool enabled_ ;
};
}
#if 0
#pragma mark --Externally visible routines--
#endif
namespace {
_LIBCXXABI_NORETURN
void throw_bad_array_new_length() {
#ifndef _LIBCXXABI_NO_EXCEPTIONS
throw std::bad_array_new_length();
#else
abort_message("__cxa_vec_new failed to allocate memory");
#endif
}
bool mul_overflow(size_t x, size_t y, size_t *res) {
#if (defined(_LIBCXXABI_COMPILER_CLANG) && __has_builtin(__builtin_mul_overflow)) \
|| defined(_LIBCXXABI_COMPILER_GCC)
return __builtin_mul_overflow(x, y, res);
#else
*res = x * y;
return x && ((*res / x) != y);
#endif
}
bool add_overflow(size_t x, size_t y, size_t *res) {
#if (defined(_LIBCXXABI_COMPILER_CLANG) && __has_builtin(__builtin_add_overflow)) \
|| defined(_LIBCXXABI_COMPILER_GCC)
return __builtin_add_overflow(x, y, res);
#else
*res = x + y;
return *res < y;
#endif
}
size_t calculate_allocation_size_or_throw(size_t element_count,
size_t element_size,
size_t padding_size) {
size_t element_heap_size;
if (mul_overflow(element_count, element_size, &element_heap_size))
throw_bad_array_new_length();
size_t allocation_size;
if (add_overflow(element_heap_size, padding_size, &allocation_size))
throw_bad_array_new_length();
return allocation_size;
}
} // namespace
extern "C" {
// Equivalent to
//
// __cxa_vec_new2(element_count, element_size, padding_size, constructor,
// destructor, &::operator new[], &::operator delete[])
_LIBCXXABI_FUNC_VIS void *
__cxa_vec_new(size_t element_count, size_t element_size, size_t padding_size,
void (*constructor)(void *), void (*destructor)(void *)) {
return __cxa_vec_new2 ( element_count, element_size, padding_size,
constructor, destructor, &::operator new [], &::operator delete [] );
}
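// Illustrative sketch, not part of the upstream source: a front end may lower
// an array-new of a type with a non-trivial destructor, e.g.
//
//     Widget* p = new Widget[n];            // `Widget` is a hypothetical type
//
// into roughly
//
//     void* base = __cxa_vec_new(n, sizeof(Widget), sizeof(size_t),
//                                &construct_widget, &destroy_widget);
//     Widget* p = static_cast<Widget*>(base);
//
// where construct_widget/destroy_widget are compiler-generated thunks and the
// padding_size bytes hold the element-count cookie consumed by the matching
// array delete.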
// Given the number and size of elements for an array and the non-negative
// size of prefix padding for a cookie, allocate space (using alloc) for
// the array preceded by the specified padding, initialize the cookie if
// the padding is non-zero, and call the given constructor on each element.
// Return the address of the array proper, after the padding.
//
// If alloc throws an exception, rethrow the exception. If alloc returns
// NULL, return NULL. If the constructor throws an exception, call
// destructor for any already constructed elements, and rethrow the
// exception. If the destructor throws an exception, call std::terminate.
//
// The constructor may be NULL, in which case it must not be called. If the
// padding_size is zero, the destructor may be NULL; in that case it must
// not be called.
//
// Neither alloc nor dealloc may be NULL.
_LIBCXXABI_FUNC_VIS void *
__cxa_vec_new2(size_t element_count, size_t element_size, size_t padding_size,
void (*constructor)(void *), void (*destructor)(void *),
void *(*alloc)(size_t), void (*dealloc)(void *)) {
const size_t heap_size = calculate_allocation_size_or_throw(
element_count, element_size, padding_size);
char* const heap_block = static_cast<char*>(alloc(heap_size));
char* vec_base = heap_block;
if (NULL != vec_base) {
st_heap_block2 heap(dealloc, heap_block);
// put the padding before the array elements
if ( 0 != padding_size ) {
vec_base += padding_size;
__set_element_count ( vec_base, element_count );
}
// Construct the elements
__cxa_vec_ctor ( vec_base, element_count, element_size, constructor, destructor );
heap.release (); // We're good!
}
return vec_base;
}
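// Illustrative sketch (not part of the upstream sources): roughly how a
// compiler lowers `new T[n]` / `delete[] p` onto these entry points for a type
// with a non-trivial destructor.  `Widget`, `construct_widget`,
// `destroy_widget` and `demo` are hypothetical names, and real compilers also
// fold the element alignment into the cookie (padding) size.
#if 0
struct Widget { Widget(); ~Widget(); };
static void construct_widget(void* p) { ::new (p) Widget(); }   // needs <new>
static void destroy_widget(void* p) { static_cast<Widget*>(p)->~Widget(); }

static void demo(size_t n) {
    void* array = __cxa_vec_new(n, sizeof(Widget), sizeof(size_t),
                                construct_widget, destroy_widget);
    // ... use the n Widgets starting at `array` ...
    __cxa_vec_delete(array, sizeof(Widget), sizeof(size_t), destroy_widget);
}
#endif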
// Same as __cxa_vec_new2 except that the deallocation function takes both
// the object address and its size.
_LIBCXXABI_FUNC_VIS void *
__cxa_vec_new3(size_t element_count, size_t element_size, size_t padding_size,
void (*constructor)(void *), void (*destructor)(void *),
void *(*alloc)(size_t), void (*dealloc)(void *, size_t)) {
const size_t heap_size = calculate_allocation_size_or_throw(
element_count, element_size, padding_size);
char* const heap_block = static_cast<char*>(alloc(heap_size));
char* vec_base = heap_block;
if (NULL != vec_base) {
st_heap_block3 heap(dealloc, heap_block, heap_size);
// put the padding before the array elements
if ( 0 != padding_size ) {
vec_base += padding_size;
__set_element_count ( vec_base, element_count );
}
// Construct the elements
__cxa_vec_ctor ( vec_base, element_count, element_size, constructor, destructor );
heap.release (); // We're good!
}
return vec_base;
}
// Given the (data) addresses of a destination and a source array, an
// element count and an element size, call the given copy constructor to
// copy each element from the source array to the destination array. The
// copy constructor's arguments are the destination address and source
// address, respectively. If an exception occurs, call the given destructor
// (if non-NULL) on each copied element and rethrow. If the destructor
// throws an exception, call terminate(). The constructor and or destructor
// pointers may be NULL. If either is NULL, no action is taken when it
// would have been called.
_LIBCXXABI_FUNC_VIS void __cxa_vec_cctor(void *dest_array, void *src_array,
size_t element_count,
size_t element_size,
void (*constructor)(void *, void *),
void (*destructor)(void *)) {
if ( NULL != constructor ) {
size_t idx = 0;
char *src_ptr = static_cast<char *>(src_array);
char *dest_ptr = static_cast<char *>(dest_array);
st_cxa_cleanup cleanup ( dest_array, idx, element_size, destructor );
for ( idx = 0; idx < element_count;
++idx, src_ptr += element_size, dest_ptr += element_size )
constructor ( dest_ptr, src_ptr );
cleanup.release (); // We're good!
}
}
// Given the (data) address of an array, not including any cookie padding,
// and the number and size of its elements, call the given constructor on
// each element. If the constructor throws an exception, call the given
// destructor for any already-constructed elements, and rethrow the
// exception. If the destructor throws an exception, call terminate(). The
// constructor and/or destructor pointers may be NULL. If either is NULL,
// no action is taken when it would have been called.
_LIBCXXABI_FUNC_VIS void
__cxa_vec_ctor(void *array_address, size_t element_count, size_t element_size,
void (*constructor)(void *), void (*destructor)(void *)) {
if ( NULL != constructor ) {
size_t idx;
char *ptr = static_cast <char *> ( array_address );
st_cxa_cleanup cleanup ( array_address, idx, element_size, destructor );
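        // Note: `cleanup` holds `idx` by reference, so if a constructor
        // throws, its destructor runs __cxa_vec_cleanup on exactly the `idx`
        // elements that were fully constructed before the throw.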
// Construct the elements
for ( idx = 0; idx < element_count; ++idx, ptr += element_size )
constructor ( ptr );
cleanup.release (); // We're good!
}
}
// Given the (data) address of an array, the number of elements, and the
// size of its elements, call the given destructor on each element. If the
// destructor throws an exception, rethrow after destroying the remaining
// elements if possible. If the destructor throws a second exception, call
// terminate(). The destructor pointer may be NULL, in which case this
// routine does nothing.
_LIBCXXABI_FUNC_VIS void __cxa_vec_dtor(void *array_address,
size_t element_count,
size_t element_size,
void (*destructor)(void *)) {
if ( NULL != destructor ) {
char *ptr = static_cast <char *> (array_address);
size_t idx = element_count;
st_cxa_cleanup cleanup ( array_address, idx, element_size, destructor );
{
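            // Note: the guard is armed only when an exception is already in
            // flight (__cxa_uncaught_exception()); a destructor that throws
            // during that unwinding therefore goes straight to
            // std::terminate() instead of propagating.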
st_terminate exception_guard (__cxa_uncaught_exception ());
ptr += element_count * element_size; // one past the last element
while ( idx-- > 0 ) {
ptr -= element_size;
destructor ( ptr );
}
exception_guard.release (); // We're good !
}
cleanup.release (); // We're still good!
}
}
// Given the (data) address of an array, the number of elements, and the
// size of its elements, call the given destructor on each element. If the
// destructor throws an exception, call terminate(). The destructor pointer
// may be NULL, in which case this routine does nothing.
_LIBCXXABI_FUNC_VIS void __cxa_vec_cleanup(void *array_address,
size_t element_count,
size_t element_size,
void (*destructor)(void *)) {
if ( NULL != destructor ) {
char *ptr = static_cast <char *> (array_address);
size_t idx = element_count;
st_terminate exception_guard;
ptr += element_count * element_size; // one past the last element
while ( idx-- > 0 ) {
ptr -= element_size;
destructor ( ptr );
}
exception_guard.release (); // We're done!
}
}
// If the array_address is NULL, return immediately. Otherwise, given the
// (data) address of an array, the non-negative size of prefix padding for
// the cookie, and the size of its elements, call the given destructor on
// each element, using the cookie to determine the number of elements, and
// then delete the space by calling ::operator delete[](void *). If the
// destructor throws an exception, rethrow after (a) destroying the
// remaining elements, and (b) deallocating the storage. If the destructor
// throws a second exception, call terminate(). If padding_size is 0, the
// destructor pointer must be NULL. If the destructor pointer is NULL, no
// destructor call is to be made.
//
// The intent of this function is to permit an implementation to call this
// function when confronted with an expression of the form delete[] p in
// the source code, provided that the default deallocation function can be
// used. Therefore, the semantics of this function are consistent with
// those required by the standard. The requirement that the deallocation
// function be called even if the destructor throws an exception derives
// from the resolution to DR 353 to the C++ standard, which was adopted in
// April, 2003.
_LIBCXXABI_FUNC_VIS void __cxa_vec_delete(void *array_address,
size_t element_size,
size_t padding_size,
void (*destructor)(void *)) {
__cxa_vec_delete2 ( array_address, element_size, padding_size,
destructor, &::operator delete [] );
}
// Same as __cxa_vec_delete, except that the given function is used for
// deallocation instead of the default delete function. If dealloc throws
// an exception, the result is undefined. The dealloc pointer may not be
// NULL.
_LIBCXXABI_FUNC_VIS void
__cxa_vec_delete2(void *array_address, size_t element_size, size_t padding_size,
void (*destructor)(void *), void (*dealloc)(void *)) {
if ( NULL != array_address ) {
char *vec_base = static_cast <char *> (array_address);
char *heap_block = vec_base - padding_size;
st_heap_block2 heap ( dealloc, heap_block );
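        // Note: `heap` is constructed before the destructors run and is never
        // released, so `dealloc` reclaims the storage even if __cxa_vec_dtor
        // exits with an exception (the DR 353 behavior described above).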
if ( 0 != padding_size && NULL != destructor ) // call the destructors
__cxa_vec_dtor ( array_address, __get_element_count ( vec_base ),
element_size, destructor );
}
}
// Same as __cxa_vec_delete, except that the given function is used for
// deallocation instead of the default delete function. The deallocation
// function takes both the object address and its size. If dealloc throws
// an exception, the result is undefined. The dealloc pointer may not be
// NULL.
_LIBCXXABI_FUNC_VIS void
__cxa_vec_delete3(void *array_address, size_t element_size, size_t padding_size,
void (*destructor)(void *), void (*dealloc)(void *, size_t)) {
if ( NULL != array_address ) {
char *vec_base = static_cast <char *> (array_address);
char *heap_block = vec_base - padding_size;
const size_t element_count = padding_size ? __get_element_count ( vec_base ) : 0;
const size_t heap_block_size = element_size * element_count + padding_size;
st_heap_block3 heap ( dealloc, heap_block, heap_block_size );
if ( 0 != padding_size && NULL != destructor ) // call the destructors
__cxa_vec_dtor ( array_address, element_count, element_size, destructor );
}
}
} // extern "C"
} // abi

View File

@@ -0,0 +1,24 @@
//===-------------------------- cxa_virtual.cpp ---------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "cxxabi.h"
#include "abort_message.h"
namespace __cxxabiv1 {
extern "C" {
_LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN
void __cxa_pure_virtual(void) {
abort_message("Pure virtual function called!");
}
_LIBCXXABI_FUNC_VIS _LIBCXXABI_NORETURN
void __cxa_deleted_virtual(void) {
abort_message("Deleted virtual function called!");
}
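// Illustrative sketch (not part of the upstream sources): Itanium-ABI
// compilers put __cxa_pure_virtual in the vtable slot of every pure virtual
// function, so a virtual call made while the object is still under
// construction lands here.  `Base`, `Derived` and `helper` are hypothetical
// names used only for this example.
#if 0
struct Base {
    Base() { helper(); }      // vptr still points at Base's vtable here
    void helper() { f(); }    // virtual dispatch hits the pure slot
    virtual void f() = 0;
    virtual ~Base() {}
};
struct Derived : Base { void f() override {} };
// Constructing a Derived aborts via __cxa_pure_virtual().
#endif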
} // extern "C"
} // abi

View File

@@ -0,0 +1,2 @@
BasedOnStyle: LLVM

View File

@@ -0,0 +1,97 @@
//===--- DemangleConfig.h --------------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
// This file contains a subset of macros copied from
// llvm/include/llvm/Demangle/DemangleConfig.h
//===----------------------------------------------------------------------===//
#ifndef LIBCXXABI_DEMANGLE_DEMANGLE_CONFIG_H
#define LIBCXXABI_DEMANGLE_DEMANGLE_CONFIG_H
#include <ciso646>
#ifdef _MSC_VER
// snprintf is implemented in VS 2015
#if _MSC_VER < 1900
#define snprintf _snprintf_s
#endif
#endif
#ifndef __has_feature
#define __has_feature(x) 0
#endif
#ifndef __has_cpp_attribute
#define __has_cpp_attribute(x) 0
#endif
#ifndef __has_attribute
#define __has_attribute(x) 0
#endif
#ifndef __has_builtin
#define __has_builtin(x) 0
#endif
#ifndef DEMANGLE_GNUC_PREREQ
#if defined(__GNUC__) && defined(__GNUC_MINOR__) && defined(__GNUC_PATCHLEVEL__)
#define DEMANGLE_GNUC_PREREQ(maj, min, patch) \
((__GNUC__ << 20) + (__GNUC_MINOR__ << 10) + __GNUC_PATCHLEVEL__ >= \
((maj) << 20) + ((min) << 10) + (patch))
#elif defined(__GNUC__) && defined(__GNUC_MINOR__)
#define DEMANGLE_GNUC_PREREQ(maj, min, patch) \
((__GNUC__ << 20) + (__GNUC_MINOR__ << 10) >= ((maj) << 20) + ((min) << 10))
#else
#define DEMANGLE_GNUC_PREREQ(maj, min, patch) 0
#endif
#endif
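// Worked example: GCC 7.3.1 encodes to (7 << 20) + (3 << 10) + 1 = 7343105,
// while DEMANGLE_GNUC_PREREQ(4, 5, 0) compares against
// (4 << 20) + (5 << 10) + 0 = 4199424, so the check passes for any GCC at or
// above 4.5.0.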
#if __has_attribute(used) || DEMANGLE_GNUC_PREREQ(3, 1, 0)
#define DEMANGLE_ATTRIBUTE_USED __attribute__((__used__))
#else
#define DEMANGLE_ATTRIBUTE_USED
#endif
#if __has_builtin(__builtin_unreachable) || DEMANGLE_GNUC_PREREQ(4, 5, 0)
#define DEMANGLE_UNREACHABLE __builtin_unreachable()
#elif defined(_MSC_VER)
#define DEMANGLE_UNREACHABLE __assume(false)
#else
#define DEMANGLE_UNREACHABLE
#endif
#if __has_attribute(noinline) || DEMANGLE_GNUC_PREREQ(3, 4, 0)
#define DEMANGLE_ATTRIBUTE_NOINLINE __attribute__((noinline))
#elif defined(_MSC_VER)
#define DEMANGLE_ATTRIBUTE_NOINLINE __declspec(noinline)
#else
#define DEMANGLE_ATTRIBUTE_NOINLINE
#endif
#if !defined(NDEBUG)
#define DEMANGLE_DUMP_METHOD DEMANGLE_ATTRIBUTE_NOINLINE DEMANGLE_ATTRIBUTE_USED
#else
#define DEMANGLE_DUMP_METHOD DEMANGLE_ATTRIBUTE_NOINLINE
#endif
#if __cplusplus > 201402L && __has_cpp_attribute(fallthrough)
#define DEMANGLE_FALLTHROUGH [[fallthrough]]
#elif __has_cpp_attribute(gnu::fallthrough)
#define DEMANGLE_FALLTHROUGH [[gnu::fallthrough]]
#elif !__cplusplus
// Workaround for llvm.org/PR23435, since clang 3.6 and below emit a spurious
// error when __has_cpp_attribute is given a scoped attribute in C mode.
#define DEMANGLE_FALLTHROUGH
#elif __has_cpp_attribute(clang::fallthrough)
#define DEMANGLE_FALLTHROUGH [[clang::fallthrough]]
#else
#define DEMANGLE_FALLTHROUGH
#endif
#define DEMANGLE_NAMESPACE_BEGIN namespace { namespace itanium_demangle {
#define DEMANGLE_NAMESPACE_END } }
#endif // LIBCXXABI_DEMANGLE_DEMANGLE_CONFIG_H

File diff suppressed because it is too large

View File

@@ -0,0 +1,52 @@
Itanium Name Demangler Library
==============================
Introduction
------------
This directory contains the generic itanium name demangler library. The main
purpose of the library is to demangle C++ symbols, i.e. convert the string
"_Z1fv" into "f()". You can also use the CRTP base ManglingParser to perform
some simple analysis on the mangled name, or (in LLVM) use the opaque
ItaniumPartialDemangler to query the demangled AST.
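A minimal sketch of what that looks like through __cxa_demangle (an
illustrative example, not upstream text; error handling is trimmed, and the
returned buffer is malloc'd and owned by the caller):
    #include <cxxabi.h>
    #include <cstdio>
    #include <cstdlib>
    int main() {
      int status = 0;
      char *name = abi::__cxa_demangle("_Z1fv", nullptr, nullptr, &status);
      if (status == 0 && name)
        std::printf("%s\n", name);   // prints "f()"
      std::free(name);
    }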
Why are there multiple copies of this library in the source tree?
---------------------------------------------------------------------
This directory is mirrored between libcxxabi/demangle and
llvm/include/llvm/Demangle. The simple reason for this is that both projects
need to demangle symbols, but neither can depend on each other. libcxxabi needs
the demangler to implement __cxa_demangle, which is part of the itanium ABI
spec. LLVM needs a copy in a number of places, but doesn't want to use the
system's __cxa_demangle because it (a) might not be available (e.g., on Windows),
and (b) probably isn't up to date on the latest language features.
The copy of the demangler in LLVM has some extra pieces that aren't needed in
libcxxabi (e.g., the MSVC demangler and ItaniumPartialDemangler), which depend on the
shared generic components. Despite these differences, we want to keep the "core"
generic demangling library identical between both copies to simplify development
and testing.
If you're working on the generic library, then do the work first in libcxxabi,
then run the cp-to-llvm.sh script in src/demangle. This script takes as an
argument the path to llvm, and re-copies the changes you made to libcxxabi over.
Note that this script just blindly overwrites all changes to the generic library
in llvm, so be careful.
Because the core demangler needs to work in libcxxabi, everything needs to be
declared in an anonymous namespace (see DEMANGLE_NAMESPACE_BEGIN), and you can't
introduce any code that depends on the libcxx dylib.
Hopefully, when LLVM becomes a monorepo, we can de-duplicate this code, and have
both LLVM and libcxxabi depend on a shared demangler library.
Testing
-------
The tests are split up between libcxxabi/test/{unit,}test_demangle.cpp, and
llvm/unittest/Demangle. The llvm directory should only get tests for stuff not
included in the core library. In the future though, we should probably move all
the tests to LLVM.
It is also a really good idea to run libFuzzer after non-trivial changes, see
libcxxabi/fuzz/cxa_demangle_fuzzer.cpp and https://llvm.org/docs/LibFuzzer.html.

View File

@@ -0,0 +1,126 @@
//===--- StringView.h -------------------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// FIXME: Use std::string_view instead when we support C++17.
//
//===----------------------------------------------------------------------===//
#ifndef DEMANGLE_STRINGVIEW_H
#define DEMANGLE_STRINGVIEW_H
#include "DemangleConfig.h"
#include <algorithm>
#include <cassert>
#include <cstring>
DEMANGLE_NAMESPACE_BEGIN
class StringView {
const char *First;
const char *Last;
public:
static const size_t npos = ~size_t(0);
template <size_t N>
StringView(const char (&Str)[N]) : First(Str), Last(Str + N - 1) {}
StringView(const char *First_, const char *Last_)
: First(First_), Last(Last_) {}
StringView(const char *First_, size_t Len)
: First(First_), Last(First_ + Len) {}
StringView(const char *Str) : First(Str), Last(Str + std::strlen(Str)) {}
StringView() : First(nullptr), Last(nullptr) {}
StringView substr(size_t From) const {
return StringView(begin() + From, size() - From);
}
size_t find(char C, size_t From = 0) const {
size_t FindBegin = std::min(From, size());
// Avoid calling memchr with nullptr.
if (FindBegin < size()) {
// Just forward to memchr, which is faster than a hand-rolled loop.
if (const void *P = ::memchr(First + FindBegin, C, size() - FindBegin))
return size_t(static_cast<const char *>(P) - First);
}
return npos;
}
StringView substr(size_t From, size_t To) const {
if (To >= size())
To = size() - 1;
if (From >= size())
From = size() - 1;
return StringView(First + From, First + To);
}
StringView dropFront(size_t N = 1) const {
if (N >= size())
N = size();
return StringView(First + N, Last);
}
StringView dropBack(size_t N = 1) const {
if (N >= size())
N = size();
return StringView(First, Last - N);
}
char front() const {
assert(!empty());
return *begin();
}
char back() const {
assert(!empty());
return *(end() - 1);
}
char popFront() {
assert(!empty());
return *First++;
}
bool consumeFront(char C) {
if (!startsWith(C))
return false;
*this = dropFront(1);
return true;
}
bool consumeFront(StringView S) {
if (!startsWith(S))
return false;
*this = dropFront(S.size());
return true;
}
bool startsWith(char C) const { return !empty() && *begin() == C; }
bool startsWith(StringView Str) const {
if (Str.size() > size())
return false;
return std::equal(Str.begin(), Str.end(), begin());
}
const char &operator[](size_t Idx) const { return *(begin() + Idx); }
const char *begin() const { return First; }
const char *end() const { return Last; }
size_t size() const { return static_cast<size_t>(Last - First); }
bool empty() const { return First == Last; }
};
inline bool operator==(const StringView &LHS, const StringView &RHS) {
return LHS.size() == RHS.size() &&
std::equal(LHS.begin(), LHS.end(), RHS.begin());
}
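// Illustrative sketch: typical use inside the demangler is to peel a prefix
// off a view and inspect what remains.  `demo` is a hypothetical name.
#if 0
inline bool demo() {
  StringView Mangled("_Z1fv");
  if (!Mangled.consumeFront("_Z"))   // Mangled becomes "1fv"
    return false;
  return Mangled.popFront() == '1';  // Mangled becomes "fv"
}
#endif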
DEMANGLE_NAMESPACE_END
#endif

View File

@@ -0,0 +1,191 @@
//===--- Utility.h ----------------------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// Provide some utility classes for use in the demangler(s).
//
//===----------------------------------------------------------------------===//
#ifndef DEMANGLE_UTILITY_H
#define DEMANGLE_UTILITY_H
#include "StringView.h"
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <iterator>
#include <limits>
DEMANGLE_NAMESPACE_BEGIN
// Stream that AST nodes write their string representation into after the AST
// has been parsed.
class OutputStream {
char *Buffer = nullptr;
size_t CurrentPosition = 0;
size_t BufferCapacity = 0;
  // Ensure there are at least N more positions in the buffer.
void grow(size_t N) {
if (N + CurrentPosition >= BufferCapacity) {
BufferCapacity *= 2;
if (BufferCapacity < N + CurrentPosition)
BufferCapacity = N + CurrentPosition;
Buffer = static_cast<char *>(std::realloc(Buffer, BufferCapacity));
if (Buffer == nullptr)
std::terminate();
}
}
void writeUnsigned(uint64_t N, bool isNeg = false) {
// Handle special case...
if (N == 0) {
*this << '0';
return;
}
char Temp[21];
char *TempPtr = std::end(Temp);
while (N) {
*--TempPtr = '0' + char(N % 10);
N /= 10;
}
// Add negative sign...
if (isNeg)
*--TempPtr = '-';
this->operator<<(StringView(TempPtr, std::end(Temp)));
}
public:
OutputStream(char *StartBuf, size_t Size)
: Buffer(StartBuf), CurrentPosition(0), BufferCapacity(Size) {}
OutputStream() = default;
void reset(char *Buffer_, size_t BufferCapacity_) {
CurrentPosition = 0;
Buffer = Buffer_;
BufferCapacity = BufferCapacity_;
}
/// If a ParameterPackExpansion (or similar type) is encountered, the offset
/// into the pack that we're currently printing.
unsigned CurrentPackIndex = std::numeric_limits<unsigned>::max();
unsigned CurrentPackMax = std::numeric_limits<unsigned>::max();
OutputStream &operator+=(StringView R) {
size_t Size = R.size();
if (Size == 0)
return *this;
grow(Size);
std::memmove(Buffer + CurrentPosition, R.begin(), Size);
CurrentPosition += Size;
return *this;
}
OutputStream &operator+=(char C) {
grow(1);
Buffer[CurrentPosition++] = C;
return *this;
}
OutputStream &operator<<(StringView R) { return (*this += R); }
OutputStream &operator<<(char C) { return (*this += C); }
OutputStream &operator<<(long long N) {
if (N < 0)
writeUnsigned(static_cast<unsigned long long>(-N), true);
else
writeUnsigned(static_cast<unsigned long long>(N));
return *this;
}
OutputStream &operator<<(unsigned long long N) {
writeUnsigned(N, false);
return *this;
}
OutputStream &operator<<(long N) {
return this->operator<<(static_cast<long long>(N));
}
OutputStream &operator<<(unsigned long N) {
return this->operator<<(static_cast<unsigned long long>(N));
}
OutputStream &operator<<(int N) {
return this->operator<<(static_cast<long long>(N));
}
OutputStream &operator<<(unsigned int N) {
return this->operator<<(static_cast<unsigned long long>(N));
}
size_t getCurrentPosition() const { return CurrentPosition; }
void setCurrentPosition(size_t NewPos) { CurrentPosition = NewPos; }
char back() const {
return CurrentPosition ? Buffer[CurrentPosition - 1] : '\0';
}
bool empty() const { return CurrentPosition == 0; }
char *getBuffer() { return Buffer; }
char *getBufferEnd() { return Buffer + CurrentPosition - 1; }
size_t getBufferCapacity() const { return BufferCapacity; }
};
template <class T> class SwapAndRestore {
T &Restore;
T OriginalValue;
bool ShouldRestore = true;
public:
SwapAndRestore(T &Restore_) : SwapAndRestore(Restore_, Restore_) {}
SwapAndRestore(T &Restore_, T NewVal)
: Restore(Restore_), OriginalValue(Restore) {
Restore = std::move(NewVal);
}
~SwapAndRestore() {
if (ShouldRestore)
Restore = std::move(OriginalValue);
}
void shouldRestore(bool ShouldRestore_) { ShouldRestore = ShouldRestore_; }
void restoreNow(bool Force) {
if (!Force && !ShouldRestore)
return;
Restore = std::move(OriginalValue);
ShouldRestore = false;
}
SwapAndRestore(const SwapAndRestore &) = delete;
SwapAndRestore &operator=(const SwapAndRestore &) = delete;
};
inline bool initializeOutputStream(char *Buf, size_t *N, OutputStream &S,
size_t InitSize) {
size_t BufferSize;
if (Buf == nullptr) {
Buf = static_cast<char *>(std::malloc(InitSize));
if (Buf == nullptr)
return false;
BufferSize = InitSize;
} else
BufferSize = *N;
S.reset(Buf, BufferSize);
return true;
}
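// Illustrative sketch: how demangler code typically drives this stream.  Let
// it allocate its own buffer, print into it, then NUL-terminate; the malloc'd
// buffer belongs to the caller.  `demo` is a hypothetical name.
#if 0
inline char *demo() {
  OutputStream S;
  size_t Cap = 0;
  if (!initializeOutputStream(nullptr, &Cap, S, 128))
    return nullptr;
  S << "array[" << 42 << ']';
  S += '\0';
  return S.getBuffer();   // "array[42]"; std::free() it when done
}
#endif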
DEMANGLE_NAMESPACE_END
#endif

View File

@@ -0,0 +1,27 @@
#!/bin/bash
# Copies the 'demangle' library, excluding 'DemangleConfig.h', to llvm. If no
# llvm directory is specified, then assume a monorepo layout.
set -e
FILES="ItaniumDemangle.h StringView.h Utility.h README.txt"
LLVM_DEMANGLE_DIR=$1
if [[ -z "$LLVM_DEMANGLE_DIR" ]]; then
LLVM_DEMANGLE_DIR="../../../llvm/include/llvm/Demangle"
fi
if [[ ! -d "$LLVM_DEMANGLE_DIR" ]]; then
echo "No such directory: $LLVM_DEMANGLE_DIR" >&2
exit 1
fi
read -p "This will overwrite the copies of $FILES in $LLVM_DEMANGLE_DIR; are you sure? [y/N]" -n 1 -r ANSWER
echo
if [[ $ANSWER =~ ^[Yy]$ ]]; then
for I in $FILES ; do
cp $I $LLVM_DEMANGLE_DIR/$I
done
fi

View File

@@ -0,0 +1,259 @@
//===------------------------ fallback_malloc.cpp -------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
// Define _LIBCPP_BUILDING_LIBRARY to ensure _LIBCPP_HAS_NO_LIBRARY_ALIGNED_ALLOCATION
// is only defined when libc aligned allocation is not available.
#define _LIBCPP_BUILDING_LIBRARY
#include "fallback_malloc.h"
#include <__threading_support>
#ifndef _LIBCXXABI_HAS_NO_THREADS
#if defined(__ELF__) && defined(_LIBCXXABI_LINK_PTHREAD_LIB)
#pragma comment(lib, "pthread")
#endif
#endif
#include <stdlib.h> // for malloc, calloc, free
#include <string.h> // for memset
// A small, simple heap manager based (loosely) on
// the startup heap manager from FreeBSD, optimized for space.
//
// Manages a fixed-size memory pool, supports malloc and free only.
// No support for realloc.
//
// Allocates chunks in multiples of four bytes, with a four byte header
// for each chunk. The overhead of each chunk is kept low by keeping pointers
// as two byte offsets within the heap, rather than (4 or 8 byte) pointers.
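// Sizing note: with the values below, heap_node is typically 4 bytes, so the
// 512-byte pool holds 128 nodes and every node is reachable through a 2-byte
// offset; list_end is simply offset 128, one past the pool.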
namespace {
// When POSIX threads are not available, make the mutex operations a nop
#ifndef _LIBCXXABI_HAS_NO_THREADS
_LIBCPP_SAFE_STATIC
static std::__libcpp_mutex_t heap_mutex = _LIBCPP_MUTEX_INITIALIZER;
#else
static void* heap_mutex = 0;
#endif
class mutexor {
public:
#ifndef _LIBCXXABI_HAS_NO_THREADS
mutexor(std::__libcpp_mutex_t* m) : mtx_(m) {
std::__libcpp_mutex_lock(mtx_);
}
~mutexor() { std::__libcpp_mutex_unlock(mtx_); }
#else
mutexor(void*) {}
~mutexor() {}
#endif
private:
mutexor(const mutexor& rhs);
mutexor& operator=(const mutexor& rhs);
#ifndef _LIBCXXABI_HAS_NO_THREADS
std::__libcpp_mutex_t* mtx_;
#endif
};
static const size_t HEAP_SIZE = 512;
char heap[HEAP_SIZE] __attribute__((aligned));
typedef unsigned short heap_offset;
typedef unsigned short heap_size;
struct heap_node {
heap_offset next_node; // offset into heap
heap_size len; // size in units of "sizeof(heap_node)"
};
static const heap_node* list_end =
(heap_node*)(&heap[HEAP_SIZE]); // one past the end of the heap
static heap_node* freelist = NULL;
heap_node* node_from_offset(const heap_offset offset) {
return (heap_node*)(heap + (offset * sizeof(heap_node)));
}
heap_offset offset_from_node(const heap_node* ptr) {
return static_cast<heap_offset>(
static_cast<size_t>(reinterpret_cast<const char*>(ptr) - heap) /
sizeof(heap_node));
}
void init_heap() {
freelist = (heap_node*)heap;
freelist->next_node = offset_from_node(list_end);
freelist->len = HEAP_SIZE / sizeof(heap_node);
}
// How many heap_node-sized units we allocate for a len-byte request
// (the extra unit is the header)
size_t alloc_size(size_t len) {
return (len + sizeof(heap_node) - 1) / sizeof(heap_node) + 1;
}
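// Worked example: with a 4-byte heap_node, a 10-byte request becomes
// (10 + 3) / 4 + 1 = 4 units, i.e. 16 bytes: 4 for the header and 12 usable.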
bool is_fallback_ptr(void* ptr) {
return ptr >= heap && ptr < (heap + HEAP_SIZE);
}
void* fallback_malloc(size_t len) {
heap_node *p, *prev;
const size_t nelems = alloc_size(len);
mutexor mtx(&heap_mutex);
if (NULL == freelist)
init_heap();
// Walk the free list, looking for a "big enough" chunk
for (p = freelist, prev = 0; p && p != list_end;
prev = p, p = node_from_offset(p->next_node)) {
if (p->len > nelems) { // chunk is larger, shorten, and return the tail
heap_node* q;
p->len = static_cast<heap_size>(p->len - nelems);
q = p + p->len;
q->next_node = 0;
q->len = static_cast<heap_size>(nelems);
return (void*)(q + 1);
}
if (p->len == nelems) { // exact size match
if (prev == 0)
freelist = node_from_offset(p->next_node);
else
prev->next_node = p->next_node;
p->next_node = 0;
return (void*)(p + 1);
}
}
return NULL; // couldn't find a spot big enough
}
// Return the start of the next block
heap_node* after(struct heap_node* p) { return p + p->len; }
void fallback_free(void* ptr) {
struct heap_node* cp = ((struct heap_node*)ptr) - 1; // retrieve the chunk
struct heap_node *p, *prev;
mutexor mtx(&heap_mutex);
#ifdef DEBUG_FALLBACK_MALLOC
std::cout << "Freeing item at " << offset_from_node(cp) << " of size "
<< cp->len << std::endl;
#endif
for (p = freelist, prev = 0; p && p != list_end;
prev = p, p = node_from_offset(p->next_node)) {
#ifdef DEBUG_FALLBACK_MALLOC
std::cout << " p, cp, after (p), after(cp) " << offset_from_node(p) << ' '
<< offset_from_node(cp) << ' ' << offset_from_node(after(p))
<< ' ' << offset_from_node(after(cp)) << std::endl;
#endif
if (after(p) == cp) {
#ifdef DEBUG_FALLBACK_MALLOC
std::cout << " Appending onto chunk at " << offset_from_node(p)
<< std::endl;
#endif
p->len = static_cast<heap_size>(
p->len + cp->len); // make the free heap_node larger
return;
} else if (after(cp) == p) { // there's a free heap_node right after
#ifdef DEBUG_FALLBACK_MALLOC
std::cout << " Appending free chunk at " << offset_from_node(p)
<< std::endl;
#endif
cp->len = static_cast<heap_size>(cp->len + p->len);
if (prev == 0) {
freelist = cp;
cp->next_node = p->next_node;
} else
prev->next_node = offset_from_node(cp);
return;
}
}
// Nothing to merge with, add it to the start of the free list
#ifdef DEBUG_FALLBACK_MALLOC
std::cout << " Making new free list entry " << offset_from_node(cp)
<< std::endl;
#endif
cp->next_node = offset_from_node(freelist);
freelist = cp;
}
#ifdef INSTRUMENT_FALLBACK_MALLOC
size_t print_free_list() {
struct heap_node *p, *prev;
heap_size total_free = 0;
if (NULL == freelist)
init_heap();
for (p = freelist, prev = 0; p && p != list_end;
prev = p, p = node_from_offset(p->next_node)) {
std::cout << (prev == 0 ? "" : " ") << "Offset: " << offset_from_node(p)
<< "\tsize: " << p->len << " Next: " << p->next_node << std::endl;
total_free += p->len;
}
std::cout << "Total Free space: " << total_free << std::endl;
return total_free;
}
#endif
} // end unnamed namespace
namespace __cxxabiv1 {
struct __attribute__((aligned)) __aligned_type {};
void* __aligned_malloc_with_fallback(size_t size) {
#if defined(_WIN32)
if (void* dest = _aligned_malloc(size, alignof(__aligned_type)))
return dest;
#elif defined(_LIBCPP_HAS_NO_LIBRARY_ALIGNED_ALLOCATION)
if (void* dest = ::malloc(size))
return dest;
#else
if (size == 0)
size = 1;
void* dest;
if (::posix_memalign(&dest, __alignof(__aligned_type), size) == 0)
return dest;
#endif
return fallback_malloc(size);
}
void* __calloc_with_fallback(size_t count, size_t size) {
void* ptr = ::calloc(count, size);
if (NULL != ptr)
return ptr;
// if calloc fails, fall back to emergency stash
ptr = fallback_malloc(size * count);
if (NULL != ptr)
::memset(ptr, 0, size * count);
return ptr;
}
void __aligned_free_with_fallback(void* ptr) {
if (is_fallback_ptr(ptr))
fallback_free(ptr);
else {
#if defined(_WIN32)
::_aligned_free(ptr);
#else
::free(ptr);
#endif
}
}
void __free_with_fallback(void* ptr) {
if (is_fallback_ptr(ptr))
fallback_free(ptr);
else
::free(ptr);
}
} // namespace __cxxabiv1

View File

@@ -0,0 +1,28 @@
//===------------------------- fallback_malloc.h --------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef _FALLBACK_MALLOC_H
#define _FALLBACK_MALLOC_H
#include "__cxxabi_config.h"
#include <stddef.h> // for size_t
namespace __cxxabiv1 {
// Allocate some memory from _somewhere_
_LIBCXXABI_HIDDEN void * __aligned_malloc_with_fallback(size_t size);
// Allocate and zero-initialize memory from _somewhere_
_LIBCXXABI_HIDDEN void * __calloc_with_fallback(size_t count, size_t size);
_LIBCXXABI_HIDDEN void __aligned_free_with_fallback(void *ptr);
_LIBCXXABI_HIDDEN void __free_with_fallback(void *ptr);
} // namespace __cxxabiv1
#endif

View File

@@ -0,0 +1,210 @@
//===----------------------------------------------------------------------===////
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===////
// FIXME: This file is copied from libcxx/src/include/atomic_support.h. Instead
// of duplicating the file in libc++abi we should require that the libc++
// sources are available when building libc++abi.
#ifndef ATOMIC_SUPPORT_H
#define ATOMIC_SUPPORT_H
#include "__config"
#include "memory" // for __libcpp_relaxed_load
#if defined(__clang__) && __has_builtin(__atomic_load_n) \
&& __has_builtin(__atomic_store_n) \
&& __has_builtin(__atomic_add_fetch) \
&& __has_builtin(__atomic_exchange_n) \
&& __has_builtin(__atomic_compare_exchange_n) \
&& defined(__ATOMIC_RELAXED) \
&& defined(__ATOMIC_CONSUME) \
&& defined(__ATOMIC_ACQUIRE) \
&& defined(__ATOMIC_RELEASE) \
&& defined(__ATOMIC_ACQ_REL) \
&& defined(__ATOMIC_SEQ_CST)
# define _LIBCXXABI_HAS_ATOMIC_BUILTINS
#elif !defined(__clang__) && defined(_GNUC_VER) && _GNUC_VER >= 407
# define _LIBCXXABI_HAS_ATOMIC_BUILTINS
#endif
#if !defined(_LIBCXXABI_HAS_ATOMIC_BUILTINS) && !defined(_LIBCXXABI_HAS_NO_THREADS)
# if defined(_LIBCPP_WARNING)
_LIBCPP_WARNING("Building libc++ without __atomic builtins is unsupported")
# else
# warning Building libc++ without __atomic builtins is unsupported
# endif
#endif
_LIBCPP_BEGIN_NAMESPACE_STD
namespace {
#if defined(_LIBCXXABI_HAS_ATOMIC_BUILTINS) && !defined(_LIBCXXABI_HAS_NO_THREADS)
enum __libcpp_atomic_order {
_AO_Relaxed = __ATOMIC_RELAXED,
_AO_Consume = __ATOMIC_CONSUME,
_AO_Acquire = __ATOMIC_ACQUIRE,
_AO_Release = __ATOMIC_RELEASE,
_AO_Acq_Rel = __ATOMIC_ACQ_REL,
_AO_Seq = __ATOMIC_SEQ_CST
};
template <class _ValueType, class _FromType>
inline _LIBCPP_INLINE_VISIBILITY
void __libcpp_atomic_store(_ValueType* __dest, _FromType __val,
int __order = _AO_Seq)
{
__atomic_store_n(__dest, __val, __order);
}
template <class _ValueType, class _FromType>
inline _LIBCPP_INLINE_VISIBILITY
void __libcpp_relaxed_store(_ValueType* __dest, _FromType __val)
{
__atomic_store_n(__dest, __val, _AO_Relaxed);
}
template <class _ValueType>
inline _LIBCPP_INLINE_VISIBILITY
_ValueType __libcpp_atomic_load(_ValueType const* __val,
int __order = _AO_Seq)
{
return __atomic_load_n(__val, __order);
}
template <class _ValueType, class _AddType>
inline _LIBCPP_INLINE_VISIBILITY
_ValueType __libcpp_atomic_add(_ValueType* __val, _AddType __a,
int __order = _AO_Seq)
{
return __atomic_add_fetch(__val, __a, __order);
}
template <class _ValueType>
inline _LIBCPP_INLINE_VISIBILITY
_ValueType __libcpp_atomic_exchange(_ValueType* __target,
_ValueType __value, int __order = _AO_Seq)
{
return __atomic_exchange_n(__target, __value, __order);
}
template <class _ValueType>
inline _LIBCPP_INLINE_VISIBILITY
bool __libcpp_atomic_compare_exchange(_ValueType* __val,
_ValueType* __expected, _ValueType __after,
int __success_order = _AO_Seq,
int __fail_order = _AO_Seq)
{
return __atomic_compare_exchange_n(__val, __expected, __after, true,
__success_order, __fail_order);
}
#else // _LIBCPP_HAS_NO_THREADS
enum __libcpp_atomic_order {
_AO_Relaxed,
_AO_Consume,
_AO_Acquire,
_AO_Release,
_AO_Acq_Rel,
_AO_Seq
};
template <class _ValueType, class _FromType>
inline _LIBCPP_INLINE_VISIBILITY
void __libcpp_atomic_store(_ValueType* __dest, _FromType __val,
int = 0)
{
*__dest = __val;
}
template <class _ValueType, class _FromType>
inline _LIBCPP_INLINE_VISIBILITY
void __libcpp_relaxed_store(_ValueType* __dest, _FromType __val)
{
*__dest = __val;
}
template <class _ValueType>
inline _LIBCPP_INLINE_VISIBILITY
_ValueType __libcpp_atomic_load(_ValueType const* __val,
int = 0)
{
return *__val;
}
template <class _ValueType, class _AddType>
inline _LIBCPP_INLINE_VISIBILITY
_ValueType __libcpp_atomic_add(_ValueType* __val, _AddType __a,
int = 0)
{
return *__val += __a;
}
template <class _ValueType>
inline _LIBCPP_INLINE_VISIBILITY
_ValueType __libcpp_atomic_exchange(_ValueType* __target,
_ValueType __value, int = _AO_Seq)
{
_ValueType old = *__target;
*__target = __value;
return old;
}
template <class _ValueType>
inline _LIBCPP_INLINE_VISIBILITY
bool __libcpp_atomic_compare_exchange(_ValueType* __val,
_ValueType* __expected, _ValueType __after,
int = 0, int = 0)
{
if (*__val == *__expected) {
*__val = __after;
return true;
}
*__expected = *__val;
return false;
}
#endif // _LIBCPP_HAS_NO_THREADS
} // end namespace
_LIBCPP_END_NAMESPACE_STD
namespace {
template <class IntType>
class AtomicInt {
public:
using MemoryOrder = std::__libcpp_atomic_order;
explicit AtomicInt(IntType *b) : b(b) {}
AtomicInt(AtomicInt const&) = delete;
AtomicInt& operator=(AtomicInt const&) = delete;
IntType load(MemoryOrder ord) {
return std::__libcpp_atomic_load(b, ord);
}
void store(IntType val, MemoryOrder ord) {
std::__libcpp_atomic_store(b, val, ord);
}
IntType exchange(IntType new_val, MemoryOrder ord) {
return std::__libcpp_atomic_exchange(b, new_val, ord);
}
bool compare_exchange(IntType *expected, IntType desired, MemoryOrder ord_success, MemoryOrder ord_failure) {
return std::__libcpp_atomic_compare_exchange(b, expected, desired, ord_success, ord_failure);
}
private:
IntType *b;
};
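// Illustrative sketch: the one-shot guard-flag pattern this wrapper is
// typically used for.  `guard` and `run_once` are hypothetical names.
#if 0
static int guard = 0;
static bool run_once() {
  AtomicInt<int> flag(&guard);
  int expected = 0;
  // Only the first caller still sees 0 and wins the exchange.
  return flag.compare_exchange(&expected, 1, std::_AO_Acq_Rel, std::_AO_Acquire);
}
#endif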
} // end namespace
#endif // ATOMIC_SUPPORT_H

View File

@@ -0,0 +1,131 @@
//===------------------------ __refstring ---------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
// FIXME: This file is copied from libcxx/src/include/refstring.h. Instead of
// duplicating the file in libc++abi we should require that the libc++ sources
// are available when building libc++abi.
#ifndef _LIBCPPABI_REFSTRING_H
#define _LIBCPPABI_REFSTRING_H
#include <__config>
#include <stdexcept>
#include <cstddef>
#include <cstring>
#ifdef __APPLE__
#include <dlfcn.h>
#include <mach-o/dyld.h>
#endif
#include "atomic_support.h"
_LIBCPP_BEGIN_NAMESPACE_STD
namespace __refstring_imp { namespace {
typedef int count_t;
struct _Rep_base {
std::size_t len;
std::size_t cap;
count_t count;
};
inline _Rep_base* rep_from_data(const char *data_) noexcept {
char *data = const_cast<char *>(data_);
return reinterpret_cast<_Rep_base *>(data - sizeof(_Rep_base));
}
inline char * data_from_rep(_Rep_base *rep) noexcept {
char *data = reinterpret_cast<char *>(rep);
return data + sizeof(*rep);
}
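// Layout note: each message is allocated as [_Rep_base][characters...'\0']
// with __imp_ pointing just past the header, so rep_from_data() and
// data_from_rep() simply step back/forward by sizeof(_Rep_base).  `count`
// starts at 0 for a sole owner, which is why the destructor frees only when
// the decremented count drops below zero.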
#if defined(__APPLE__)
inline
const char* compute_gcc_empty_string_storage() _NOEXCEPT
{
void* handle = dlopen("/usr/lib/libstdc++.6.dylib", RTLD_NOLOAD);
if (handle == nullptr)
return nullptr;
void* sym = dlsym(handle, "_ZNSs4_Rep20_S_empty_rep_storageE");
if (sym == nullptr)
return nullptr;
return data_from_rep(reinterpret_cast<_Rep_base *>(sym));
}
inline
const char*
get_gcc_empty_string_storage() _NOEXCEPT
{
static const char* p = compute_gcc_empty_string_storage();
return p;
}
#endif
}} // namespace __refstring_imp
using namespace __refstring_imp;
inline
__libcpp_refstring::__libcpp_refstring(const char* msg) {
std::size_t len = strlen(msg);
_Rep_base* rep = static_cast<_Rep_base *>(::operator new(sizeof(*rep) + len + 1));
rep->len = len;
rep->cap = len;
rep->count = 0;
char *data = data_from_rep(rep);
std::memcpy(data, msg, len + 1);
__imp_ = data;
}
inline
__libcpp_refstring::__libcpp_refstring(const __libcpp_refstring &s) _NOEXCEPT
: __imp_(s.__imp_)
{
if (__uses_refcount())
__libcpp_atomic_add(&rep_from_data(__imp_)->count, 1);
}
inline
__libcpp_refstring& __libcpp_refstring::operator=(__libcpp_refstring const& s) _NOEXCEPT {
bool adjust_old_count = __uses_refcount();
struct _Rep_base *old_rep = rep_from_data(__imp_);
__imp_ = s.__imp_;
if (__uses_refcount())
__libcpp_atomic_add(&rep_from_data(__imp_)->count, 1);
if (adjust_old_count)
{
if (__libcpp_atomic_add(&old_rep->count, count_t(-1)) < 0)
{
::operator delete(old_rep);
}
}
return *this;
}
inline
__libcpp_refstring::~__libcpp_refstring() {
if (__uses_refcount()) {
_Rep_base* rep = rep_from_data(__imp_);
if (__libcpp_atomic_add(&rep->count, count_t(-1)) < 0) {
::operator delete(rep);
}
}
}
inline
bool __libcpp_refstring::__uses_refcount() const {
#ifdef __APPLE__
return __imp_ != get_gcc_empty_string_storage();
#else
return true;
#endif
}
_LIBCPP_END_NAMESPACE_STD
#endif //_LIBCPPABI_REFSTRING_H

File diff suppressed because it is too large

View File

@@ -0,0 +1,251 @@
//===------------------------ private_typeinfo.h --------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef __PRIVATE_TYPEINFO_H_
#define __PRIVATE_TYPEINFO_H_
#include "__cxxabi_config.h"
#include <typeinfo>
#include <stddef.h>
namespace __cxxabiv1 {
class _LIBCXXABI_TYPE_VIS __shim_type_info : public std::type_info {
public:
_LIBCXXABI_HIDDEN virtual ~__shim_type_info();
_LIBCXXABI_HIDDEN virtual void noop1() const;
_LIBCXXABI_HIDDEN virtual void noop2() const;
_LIBCXXABI_HIDDEN virtual bool can_catch(const __shim_type_info *thrown_type,
void *&adjustedPtr) const = 0;
};
class _LIBCXXABI_TYPE_VIS __fundamental_type_info : public __shim_type_info {
public:
_LIBCXXABI_HIDDEN virtual ~__fundamental_type_info();
_LIBCXXABI_HIDDEN virtual bool can_catch(const __shim_type_info *,
void *&) const;
};
class _LIBCXXABI_TYPE_VIS __array_type_info : public __shim_type_info {
public:
_LIBCXXABI_HIDDEN virtual ~__array_type_info();
_LIBCXXABI_HIDDEN virtual bool can_catch(const __shim_type_info *,
void *&) const;
};
class _LIBCXXABI_TYPE_VIS __function_type_info : public __shim_type_info {
public:
_LIBCXXABI_HIDDEN virtual ~__function_type_info();
_LIBCXXABI_HIDDEN virtual bool can_catch(const __shim_type_info *,
void *&) const;
};
class _LIBCXXABI_TYPE_VIS __enum_type_info : public __shim_type_info {
public:
_LIBCXXABI_HIDDEN virtual ~__enum_type_info();
_LIBCXXABI_HIDDEN virtual bool can_catch(const __shim_type_info *,
void *&) const;
};
enum
{
unknown = 0,
public_path,
not_public_path,
yes,
no
};
class _LIBCXXABI_TYPE_VIS __class_type_info;
struct _LIBCXXABI_HIDDEN __dynamic_cast_info
{
// const data supplied to the search:
const __class_type_info* dst_type;
const void* static_ptr;
const __class_type_info* static_type;
ptrdiff_t src2dst_offset;
// Data that represents the answer:
// pointer to a dst_type which has (static_ptr, static_type) above it
const void* dst_ptr_leading_to_static_ptr;
// pointer to a dst_type which does not have (static_ptr, static_type) above it
const void* dst_ptr_not_leading_to_static_ptr;
// The following three paths are either unknown, public_path or not_public_path.
// access of path from dst_ptr_leading_to_static_ptr to (static_ptr, static_type)
int path_dst_ptr_to_static_ptr;
// access of path from (dynamic_ptr, dynamic_type) to (static_ptr, static_type)
// when there is no dst_type along the path
int path_dynamic_ptr_to_static_ptr;
// access of path from (dynamic_ptr, dynamic_type) to dst_type
// (not used if there is a (static_ptr, static_type) above a dst_type).
int path_dynamic_ptr_to_dst_ptr;
// Number of dst_types below (static_ptr, static_type)
int number_to_static_ptr;
// Number of dst_types not below (static_ptr, static_type)
int number_to_dst_ptr;
// Data that helps stop the search before the entire tree is searched:
// is_dst_type_derived_from_static_type is either unknown, yes or no.
int is_dst_type_derived_from_static_type;
// Number of dst_type in tree. If 0, then that means unknown.
int number_of_dst_type;
// communicates to a dst_type node that (static_ptr, static_type) was found
// above it.
bool found_our_static_ptr;
// communicates to a dst_type node that a static_type was found
// above it, but it wasn't (static_ptr, static_type)
bool found_any_static_type;
// Set whenever a search can be stopped
bool search_done;
};
// Has no base class
class _LIBCXXABI_TYPE_VIS __class_type_info : public __shim_type_info {
public:
_LIBCXXABI_HIDDEN virtual ~__class_type_info();
_LIBCXXABI_HIDDEN void process_static_type_above_dst(__dynamic_cast_info *,
const void *,
const void *, int) const;
_LIBCXXABI_HIDDEN void process_static_type_below_dst(__dynamic_cast_info *,
const void *, int) const;
_LIBCXXABI_HIDDEN void process_found_base_class(__dynamic_cast_info *, void *,
int) const;
_LIBCXXABI_HIDDEN virtual void search_above_dst(__dynamic_cast_info *,
const void *, const void *,
int, bool) const;
_LIBCXXABI_HIDDEN virtual void
search_below_dst(__dynamic_cast_info *, const void *, int, bool) const;
_LIBCXXABI_HIDDEN virtual bool can_catch(const __shim_type_info *,
void *&) const;
_LIBCXXABI_HIDDEN virtual void
has_unambiguous_public_base(__dynamic_cast_info *, void *, int) const;
};
// Has one non-virtual public base class at offset zero
class _LIBCXXABI_TYPE_VIS __si_class_type_info : public __class_type_info {
public:
const __class_type_info *__base_type;
_LIBCXXABI_HIDDEN virtual ~__si_class_type_info();
_LIBCXXABI_HIDDEN virtual void search_above_dst(__dynamic_cast_info *,
const void *, const void *,
int, bool) const;
_LIBCXXABI_HIDDEN virtual void
search_below_dst(__dynamic_cast_info *, const void *, int, bool) const;
_LIBCXXABI_HIDDEN virtual void
has_unambiguous_public_base(__dynamic_cast_info *, void *, int) const;
};
struct _LIBCXXABI_HIDDEN __base_class_type_info
{
public:
const __class_type_info* __base_type;
long __offset_flags;
enum __offset_flags_masks
{
__virtual_mask = 0x1,
__public_mask = 0x2, // base is public
__offset_shift = 8
};
void search_above_dst(__dynamic_cast_info*, const void*, const void*, int, bool) const;
void search_below_dst(__dynamic_cast_info*, const void*, int, bool) const;
void has_unambiguous_public_base(__dynamic_cast_info*, void*, int) const;
};
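// Decoding note (per the Itanium C++ ABI): the low __offset_shift bits of
// __offset_flags hold the flag bits above and the remaining bits hold the
// offset, roughly:
//
//   ptrdiff_t offset = __offset_flags >> __offset_shift;
//   bool is_virtual  = __offset_flags & __virtual_mask;
//   bool is_public   = __offset_flags & __public_mask;
//
// For virtual bases the shifted value locates the base offset entry in the
// vtable rather than the base subobject itself.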
// Has one or more base classes
class _LIBCXXABI_TYPE_VIS __vmi_class_type_info : public __class_type_info {
public:
unsigned int __flags;
unsigned int __base_count;
__base_class_type_info __base_info[1];
enum __flags_masks {
__non_diamond_repeat_mask = 0x1, // has two or more distinct base class
// objects of the same type
__diamond_shaped_mask = 0x2 // has base class object with two or
// more derived objects
};
_LIBCXXABI_HIDDEN virtual ~__vmi_class_type_info();
_LIBCXXABI_HIDDEN virtual void search_above_dst(__dynamic_cast_info *,
const void *, const void *,
int, bool) const;
_LIBCXXABI_HIDDEN virtual void
search_below_dst(__dynamic_cast_info *, const void *, int, bool) const;
_LIBCXXABI_HIDDEN virtual void
has_unambiguous_public_base(__dynamic_cast_info *, void *, int) const;
};
class _LIBCXXABI_TYPE_VIS __pbase_type_info : public __shim_type_info {
public:
unsigned int __flags;
const __shim_type_info *__pointee;
enum __masks {
__const_mask = 0x1,
__volatile_mask = 0x2,
__restrict_mask = 0x4,
__incomplete_mask = 0x8,
__incomplete_class_mask = 0x10,
__transaction_safe_mask = 0x20,
// This implements the following proposal from cxx-abi-dev (not yet part of
// the ABI document):
//
// http://sourcerytools.com/pipermail/cxx-abi-dev/2016-October/002986.html
//
// This is necessary for support of http://wg21.link/p0012, which permits
// throwing noexcept function and member function pointers and catching
// them as non-noexcept pointers.
__noexcept_mask = 0x40,
// Flags that cannot be removed by a standard conversion.
__no_remove_flags_mask = __const_mask | __volatile_mask | __restrict_mask,
// Flags that cannot be added by a standard conversion.
__no_add_flags_mask = __transaction_safe_mask | __noexcept_mask
};
_LIBCXXABI_HIDDEN virtual ~__pbase_type_info();
_LIBCXXABI_HIDDEN virtual bool can_catch(const __shim_type_info *,
void *&) const;
};
class _LIBCXXABI_TYPE_VIS __pointer_type_info : public __pbase_type_info {
public:
_LIBCXXABI_HIDDEN virtual ~__pointer_type_info();
_LIBCXXABI_HIDDEN virtual bool can_catch(const __shim_type_info *,
void *&) const;
_LIBCXXABI_HIDDEN bool can_catch_nested(const __shim_type_info *) const;
};
class _LIBCXXABI_TYPE_VIS __pointer_to_member_type_info
: public __pbase_type_info {
public:
const __class_type_info *__context;
_LIBCXXABI_HIDDEN virtual ~__pointer_to_member_type_info();
_LIBCXXABI_HIDDEN virtual bool can_catch(const __shim_type_info *,
void *&) const;
_LIBCXXABI_HIDDEN bool can_catch_nested(const __shim_type_info *) const;
};
} // __cxxabiv1
#endif // __PRIVATE_TYPEINFO_H_

View File

@@ -0,0 +1,71 @@
//===---------------------------- exception.cpp ---------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#define _LIBCPP_BUILDING_LIBRARY
#include <new>
#include <exception>
namespace std
{
// exception
exception::~exception() _NOEXCEPT
{
}
const char* exception::what() const _NOEXCEPT
{
return "std::exception";
}
// bad_exception
bad_exception::~bad_exception() _NOEXCEPT
{
}
const char* bad_exception::what() const _NOEXCEPT
{
return "std::bad_exception";
}
// bad_alloc
bad_alloc::bad_alloc() _NOEXCEPT
{
}
bad_alloc::~bad_alloc() _NOEXCEPT
{
}
const char*
bad_alloc::what() const _NOEXCEPT
{
return "std::bad_alloc";
}
// bad_array_new_length
bad_array_new_length::bad_array_new_length() _NOEXCEPT
{
}
bad_array_new_length::~bad_array_new_length() _NOEXCEPT
{
}
const char*
bad_array_new_length::what() const _NOEXCEPT
{
return "bad_array_new_length";
}
} // std

View File

@@ -0,0 +1,262 @@
//===--------------------- stdlib_new_delete.cpp --------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//
// This file implements the new and delete operators.
//===----------------------------------------------------------------------===//
#define _LIBCPP_BUILDING_LIBRARY
#include "__cxxabi_config.h"
#include <new>
#include <cstdlib>
#if !defined(_THROW_BAD_ALLOC) || !defined(_NOEXCEPT) || !defined(_LIBCXXABI_WEAK)
#error The _THROW_BAD_ALLOC, _NOEXCEPT, and _LIBCXXABI_WEAK libc++ macros must \
already be defined by libc++.
#endif
// Implement all new and delete operators as weak definitions
// in this shared library, so that they can be overridden by programs
// that define non-weak copies of the functions.
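// Illustrative sketch (not part of the upstream sources): a program replaces
// the global allocator simply by linking its own strong definitions, e.g.
// (`my_pool_alloc` / `my_pool_free` are hypothetical; a real replacement must
// throw std::bad_alloc rather than return null on failure):
#if 0
void* operator new(std::size_t n) { return my_pool_alloc(n); }
void  operator delete(void* p) noexcept { my_pool_free(p); }
#endif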
_LIBCXXABI_WEAK
void *
operator new(std::size_t size) _THROW_BAD_ALLOC
{
if (size == 0)
size = 1;
void* p;
while ((p = ::malloc(size)) == 0)
{
// If malloc fails and there is a new_handler,
        // call it to try to free up memory.
std::new_handler nh = std::get_new_handler();
if (nh)
nh();
else
#ifndef _LIBCXXABI_NO_EXCEPTIONS
throw std::bad_alloc();
#else
break;
#endif
}
return p;
}
_LIBCXXABI_WEAK
void*
operator new(size_t size, const std::nothrow_t&) _NOEXCEPT
{
void* p = 0;
#ifndef _LIBCXXABI_NO_EXCEPTIONS
try
{
#endif // _LIBCXXABI_NO_EXCEPTIONS
p = ::operator new(size);
#ifndef _LIBCXXABI_NO_EXCEPTIONS
}
catch (...)
{
}
#endif // _LIBCXXABI_NO_EXCEPTIONS
return p;
}
_LIBCXXABI_WEAK
void*
operator new[](size_t size) _THROW_BAD_ALLOC
{
return ::operator new(size);
}
_LIBCXXABI_WEAK
void*
operator new[](size_t size, const std::nothrow_t&) _NOEXCEPT
{
void* p = 0;
#ifndef _LIBCXXABI_NO_EXCEPTIONS
try
{
#endif // _LIBCXXABI_NO_EXCEPTIONS
p = ::operator new[](size);
#ifndef _LIBCXXABI_NO_EXCEPTIONS
}
catch (...)
{
}
#endif // _LIBCXXABI_NO_EXCEPTIONS
return p;
}
_LIBCXXABI_WEAK
void
operator delete(void* ptr) _NOEXCEPT
{
if (ptr)
::free(ptr);
}
_LIBCXXABI_WEAK
void
operator delete(void* ptr, const std::nothrow_t&) _NOEXCEPT
{
::operator delete(ptr);
}
_LIBCXXABI_WEAK
void
operator delete(void* ptr, size_t) _NOEXCEPT
{
::operator delete(ptr);
}
_LIBCXXABI_WEAK
void
operator delete[] (void* ptr) _NOEXCEPT
{
::operator delete(ptr);
}
_LIBCXXABI_WEAK
void
operator delete[] (void* ptr, const std::nothrow_t&) _NOEXCEPT
{
::operator delete[](ptr);
}
_LIBCXXABI_WEAK
void
operator delete[] (void* ptr, size_t) _NOEXCEPT
{
::operator delete[](ptr);
}
#if !defined(_LIBCPP_HAS_NO_LIBRARY_ALIGNED_ALLOCATION)
_LIBCXXABI_WEAK
void *
operator new(std::size_t size, std::align_val_t alignment) _THROW_BAD_ALLOC
{
if (size == 0)
size = 1;
if (static_cast<size_t>(alignment) < sizeof(void*))
alignment = std::align_val_t(sizeof(void*));
void* p;
#if defined(_LIBCPP_WIN32API)
while ((p = _aligned_malloc(size, static_cast<size_t>(alignment))) == nullptr)
#else
while (::posix_memalign(&p, static_cast<size_t>(alignment), size) != 0)
#endif
{
        // If the aligned allocation fails and there is a new_handler,
        // call it to try to free up memory.
std::new_handler nh = std::get_new_handler();
if (nh)
nh();
else {
#ifndef _LIBCXXABI_NO_EXCEPTIONS
throw std::bad_alloc();
#else
p = nullptr; // posix_memalign doesn't initialize 'p' on failure
break;
#endif
}
}
return p;
}
_LIBCXXABI_WEAK
void*
operator new(size_t size, std::align_val_t alignment, const std::nothrow_t&) _NOEXCEPT
{
void* p = 0;
#ifndef _LIBCXXABI_NO_EXCEPTIONS
try
{
#endif // _LIBCXXABI_NO_EXCEPTIONS
p = ::operator new(size, alignment);
#ifndef _LIBCXXABI_NO_EXCEPTIONS
}
catch (...)
{
}
#endif // _LIBCXXABI_NO_EXCEPTIONS
return p;
}
_LIBCXXABI_WEAK
void*
operator new[](size_t size, std::align_val_t alignment) _THROW_BAD_ALLOC
{
return ::operator new(size, alignment);
}
_LIBCXXABI_WEAK
void*
operator new[](size_t size, std::align_val_t alignment, const std::nothrow_t&) _NOEXCEPT
{
void* p = 0;
#ifndef _LIBCXXABI_NO_EXCEPTIONS
try
{
#endif // _LIBCXXABI_NO_EXCEPTIONS
p = ::operator new[](size, alignment);
#ifndef _LIBCXXABI_NO_EXCEPTIONS
}
catch (...)
{
}
#endif // _LIBCXXABI_NO_EXCEPTIONS
return p;
}
_LIBCXXABI_WEAK
void
operator delete(void* ptr, std::align_val_t) _NOEXCEPT
{
if (ptr)
#if defined(_LIBCPP_WIN32API)
::_aligned_free(ptr);
#else
::free(ptr);
#endif
}
_LIBCXXABI_WEAK
void
operator delete(void* ptr, std::align_val_t alignment, const std::nothrow_t&) _NOEXCEPT
{
::operator delete(ptr, alignment);
}
_LIBCXXABI_WEAK
void
operator delete(void* ptr, size_t, std::align_val_t alignment) _NOEXCEPT
{
::operator delete(ptr, alignment);
}
_LIBCXXABI_WEAK
void
operator delete[] (void* ptr, std::align_val_t alignment) _NOEXCEPT
{
::operator delete(ptr, alignment);
}
_LIBCXXABI_WEAK
void
operator delete[] (void* ptr, std::align_val_t alignment, const std::nothrow_t&) _NOEXCEPT
{
::operator delete[](ptr, alignment);
}
_LIBCXXABI_WEAK
void
operator delete[] (void* ptr, size_t, std::align_val_t alignment) _NOEXCEPT
{
::operator delete[](ptr, alignment);
}
#endif // !_LIBCPP_HAS_NO_LIBRARY_ALIGNED_ALLOCATION

View File

@@ -0,0 +1,47 @@
//===------------------------ stdexcept.cpp -------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "include/refstring.h"
#include "stdexcept"
#include "new"
#include <cstdlib>
#include <cstring>
#include <cstdint>
#include <cstddef>
static_assert(sizeof(std::__libcpp_refstring) == sizeof(const char *), "");
namespace std // purposefully not using versioning namespace
{
logic_error::~logic_error() _NOEXCEPT {}
const char*
logic_error::what() const _NOEXCEPT
{
return __imp_.c_str();
}
runtime_error::~runtime_error() _NOEXCEPT {}
const char*
runtime_error::what() const _NOEXCEPT
{
return __imp_.c_str();
}
domain_error::~domain_error() _NOEXCEPT {}
invalid_argument::~invalid_argument() _NOEXCEPT {}
length_error::~length_error() _NOEXCEPT {}
out_of_range::~out_of_range() _NOEXCEPT {}
range_error::~range_error() _NOEXCEPT {}
overflow_error::~overflow_error() _NOEXCEPT {}
underflow_error::~underflow_error() _NOEXCEPT {}
} // std

View File

@@ -0,0 +1,52 @@
//===----------------------------- typeinfo.cpp ---------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include <typeinfo>
namespace std
{
// type_info
type_info::~type_info()
{
}
// bad_cast
bad_cast::bad_cast() _NOEXCEPT
{
}
bad_cast::~bad_cast() _NOEXCEPT
{
}
const char*
bad_cast::what() const _NOEXCEPT
{
return "std::bad_cast";
}
// bad_typeid
bad_typeid::bad_typeid() _NOEXCEPT
{
}
bad_typeid::~bad_typeid() _NOEXCEPT
{
}
const char*
bad_typeid::what() const _NOEXCEPT
{
return "std::bad_typeid";
}
} // std