Commit Graph

369 Commits

Author SHA1 Message Date
mlugg
2ffef605c7 Replace uses of Value.zero, Value.one, Value.negative_one
This is a bit nasty, mainly because Type.onePossibleValue is now
errorable, which is quite a viral change.
2023-06-10 20:42:29 -07:00
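A minimal sketch, using hypothetical stand-in types, of what making `Type.onePossibleValue` errorable means and why the change is viral: every call site must now `try` or otherwise handle the error.

```zig
const std = @import("std");

// Illustrative stand-ins for the compiler's real Type and Value.
const Type = enum { void_type, u32_type };
const Value = enum { void_value };

// Before (roughly): fn onePossibleValue(ty: Type) ?Value
// After: the InternPool may need to allocate, so the result is an error union.
fn onePossibleValue(ty: Type) error{OutOfMemory}!?Value {
    return switch (ty) {
        .void_type => .void_value, // exactly one possible value
        .u32_type => null, // many possible values
    };
}

pub fn main() !void {
    // Every caller now needs a `try` (or explicit handling), which is
    // what makes the change spread through the codebase.
    const v = try onePossibleValue(.void_type);
    std.debug.print("{any}\n", .{v});
}
```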
Andrew Kelley
75900ec1b5 stage2: move integer values to InternPool 2023-06-10 20:42:29 -07:00
Andrew Kelley
31aee50c1a InternPool: add a slice encoding
This uses the `data` field to reference the slice's pointer field type,
which allows efficient and infallible access to a slice type's pointer
type.
2023-06-10 20:42:29 -07:00
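A hedged sketch of the encoding idea, with illustrative names: the slice type's item stores the index of its pointer type in `data`, so the lookup is a plain array access that cannot fail.

```zig
const std = @import("std");

const Index = u32;

// Simplified item layout; real InternPool items are more involved.
const Item = struct {
    tag: enum { ptr_type, slice_type },
    data: u32, // for .slice_type: the Index of the slice's pointer type
};

fn slicePtrType(items: []const Item, slice: Index) Index {
    const item = items[slice];
    std.debug.assert(item.tag == .slice_type);
    return item.data; // infallible: no allocation, no hashing, no error set
}
```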
Andrew Kelley
6ab8b6f8b2 stage2: move undef, unreach, null values to InternPool 2023-06-10 20:42:28 -07:00
Andrew Kelley
5e636643d2 stage2: move many Type encodings to InternPool
Notably, `vector`.

Additionally, all alternate encodings of `pointer`, `optional`, and
`array`.
2023-06-10 20:42:27 -07:00
Andrew Kelley
85c69c5194 Type.isSlice: make it InternPool aware 2023-06-10 20:40:04 -07:00
Andrew Kelley
ca3cf93b21 stage2: move most simple values to InternPool 2023-06-10 20:40:04 -07:00
Andrew Kelley
bcd4bb8afb stage2: move named int types to InternPool 2023-06-10 20:40:04 -07:00
Andrew Kelley
9aec2758cc stage2: start the InternPool transition
Instead of doing everything at once, which is a hopelessly large task,
this introduces a piecemeal transition that can be done in small
increments.

This is a minimal changeset that keeps the compiler compiling. It only
uses the InternPool for a small set of types.

Behavior tests are not passing.

Air.Inst.Ref and Zir.Inst.Ref are separated into different enums but
compile-time verified to have the same fields in the same order.

The large set of changes is mainly to deal with the fact that most Type
and Value methods now require a Module to be passed in, so that the
InternPool object can be accessed.
2023-06-10 20:40:03 -07:00
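A minimal sketch, with toy field sets, of how two enums can stay separate types while being compile-time verified to have the same fields in the same order:

```zig
const std = @import("std");

const ZirRef = enum(u32) { none, void_type, void_value };
const AirRef = enum(u32) { none, void_type, void_value };

comptime {
    const zir = @typeInfo(ZirRef).Enum.fields;
    const air = @typeInfo(AirRef).Enum.fields;
    std.debug.assert(zir.len == air.len);
    for (zir, air) |z, a| {
        std.debug.assert(std.mem.eql(u8, z.name, a.name));
        std.debug.assert(z.value == a.value);
    }
}
```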
Luuk de Gram
ebfd3450d9 codegen: Write padding bytes for unions
Previously we did not write any missing padding bytes after the smallest
field (either tag or payload, depending on alignment). This resulted in
writing too few bytes and not matching the full ABI size of the union.
2023-05-31 18:04:33 +02:00
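A sketch of the fix under a hypothetical helper name: after emitting the smaller of tag and payload, pad the emitted bytes up to the union's full ABI size.

```zig
const std = @import("std");

fn padToAbiSize(code: *std.ArrayList(u8), abi_size: usize) !void {
    const written = code.items.len;
    std.debug.assert(written <= abi_size);
    // Fill the remaining bytes so the emitted constant spans the whole union.
    try code.appendNTimes(0xaa, abi_size - written);
}
```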
Veikka Tuominen
ca16f1e8a7 std.Target adjustments
* move `ptrBitWidth` from Arch to Target since it needs to know about the ABI
* double isn't always 8 bytes
* AVR uses 1-byte alignment for everything in GCC
2023-05-26 21:42:19 -07:00
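A simplified sketch of why pointer width is a property of the whole target rather than the architecture alone: on x86_64, for example, the x32 ABIs use 32-bit pointers.

```zig
// Toy version; the real function in std.Target covers many more cases.
const Arch = enum { x86_64, avr, aarch64 };
const Abi = enum { gnu, gnux32, muslx32 };

fn ptrBitWidth(arch: Arch, abi: Abi) u16 {
    return switch (arch) {
        .x86_64 => switch (abi) {
            .gnux32, .muslx32 => 32, // same arch, narrower pointers
            else => 64,
        },
        .avr => 16,
        .aarch64 => 64,
    };
}
```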
Jacob Young
60e69fdaa1 codegen: emit global vector padding 2023-05-18 20:42:38 -04:00
Jacob Young
57c38f6433 x86_64: implement global payload pointers 2023-05-15 03:07:51 -04:00
Jacob Young
81664f17d5 codegen: implement global enum_numbered 2023-05-15 03:07:51 -04:00
Jacob Young
31429a4e86 codegen: handle variable and decl_ref_mut consistently 2023-05-03 04:25:14 -04:00
Jacob Young
f894ec264b codegen: fix global nested field_ptr 2023-05-03 04:25:14 -04:00
Jacob Young
f56f5af403 x86_64: fix global slices 2023-05-03 04:25:14 -04:00
Jacob Young
47a34d038d x86_64: implement tagName 2023-05-01 19:22:52 -04:00
Jacob Young
f37ca3fa73 link: cleanup lazy alignment
This gets the alignment from the code that creates a lazy symbol instead
of guessing it at every use.
2023-05-01 19:22:52 -04:00
Jacob Young
db76ae8260 x86_64: fix emitting f80 globals 2023-05-01 19:22:52 -04:00
Jacob Young
4ec49da5f6 x86_64: implement a bunch of floating point stuff 2023-05-01 19:22:52 -04:00
Andrew Kelley
42973d73e6 compiler: use @memcpy instead of std.mem.copy 2023-04-28 13:24:43 -07:00
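A small sketch of the migration pattern; unlike `std.mem.copy`, the builtin requires the destination and source lengths to match exactly.

```zig
const std = @import("std");

pub fn main() void {
    var buf: [5]u8 = undefined;
    const src = "hello";
    // Before: std.mem.copy(u8, &buf, src);
    @memcpy(&buf, src); // lengths are verified to be equal
    std.debug.print("{s}\n", .{&buf});
}
```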
Jakub Konka
8a3ad3f620 elf: do not reserve a GOT slot for every Atom 2023-04-21 22:44:25 +02:00
Jakub Konka
b82130709d x86_64: cleanup different memory load types
Split `MCValue.linker_load` into `.load_got`, `.load_direct`, and
`.lea_direct`.
2023-04-15 11:10:24 +02:00
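A hedged sketch of the shape of the split, with simplified payloads: one opaque case becomes three, so the backend can tell a GOT-indirect load apart from a direct load or a bare address computation.

```zig
// Simplified; the real MCValue has many more cases.
const MCValue = union(enum) {
    register: u8,
    load_got: u32, // load the value through the symbol's GOT entry
    load_direct: u32, // load the value from the symbol's address
    lea_direct: u32, // materialize the symbol's address, no load
};
```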
Jakub Konka
179117c114 x86_64: split MCValue.tlv_reloc into .load_tlv and .lea_tlv
`.load_tlv` signifies we want to load the value of a TLV;
`.lea_tlv` signifies we want to load the effective address of a TLV.
2023-04-15 00:57:23 +02:00
Jakub Konka
3f35d6984f x86_64: make TLV a separate MCValue 2023-04-13 16:35:45 +02:00
Jakub Konka
fd52d4537a x86_64: emit pointer to TLV for macho 2023-04-13 11:47:51 +02:00
Jakub Konka
382de7bf1d codegen: use non-debug Type/Value formatting 2023-04-13 11:47:51 +02:00
Jacob Young
caa3d6a4f4 x86_64: fix constant pointers to zero-bit types
These non-dereferenceable pointers still need to have the correct
alignment and non-null-ness.
2023-04-13 04:17:47 -04:00
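A hedged sketch of one way to satisfy both properties, assuming the pointee's alignment is known: use the alignment itself as the pointer's constant value, which is non-zero and trivially aligned.

```zig
const std = @import("std");

// Hypothetical helper: choose a constant address for a pointer to a
// zero-bit type. It is never dereferenced, but it must still be
// non-null and correctly aligned.
fn zeroBitPtrAddress(alignment: u64) u64 {
    std.debug.assert(alignment > 0 and std.math.isPowerOfTwo(alignment));
    return alignment; // non-zero, and a power of two is aligned to itself
}
```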
Jacob Young
821eb595f4 x86_64: implement cmp_lt_errors_len 2023-04-03 15:19:07 -04:00
Jacob Young
677427bc3a x86_64: implement error name 2023-04-02 04:49:53 -04:00
Jacob Young
d9ce69dc39 codegen: fix ptr-like optional constants 2023-03-25 16:23:55 -04:00
Jacob Young
f95faac5ae x86_64: (re)implement optional ops
Note that this commit also changes the layout of optionals for all
other backends using `src/codegen.zig` without updating them!
2023-03-21 08:49:54 +01:00
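For context, two comptime checks that hold in Zig and show how the two kinds of optionals differ in representation: pointer-like optionals reuse the null address as the "none" state, while other optionals need state beyond the payload bits.

```zig
const std = @import("std");

comptime {
    // A pointer-like optional stays pointer-sized: null is address zero.
    std.debug.assert(@sizeOf(?*u32) == @sizeOf(*u32));
    // A non-pointer optional must encode "none" outside the payload.
    std.debug.assert(@sizeOf(?u32) > @sizeOf(u32));
}
```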
Jakub Konka
dc709fbf48 codegen: rename GenerateSymbolError to CodeGenError 2023-03-03 18:56:57 +01:00
Jakub Konka
c413ac100f codegen: refactor generating Int as immediate where appropriate 2023-03-03 18:40:16 +01:00
Jakub Konka
c746cbc686 codegen: move gen logic for typed values, consts and decl ref to common codegen 2023-03-03 18:06:25 +01:00
Veikka Tuominen
f10950526e implement writeToMemory/readFromMemory for pointers 2023-02-19 13:54:52 -05:00
Andrew Kelley
aeaef8c0ff update std lib and compiler sources to new for loop syntax 2023-02-18 19:17:21 -07:00
Jakub Konka
a95d58caf2 self-hosted: rename codegen Result.appended to Result.ok 2023-01-25 10:28:18 +01:00
Jakub Konka
4983da40d0 self-hosted: remove unused externally_managed prong for Decls code 2023-01-25 10:10:50 +01:00
Travis Staloch
1ebb761244 codegen - lower str_lit to vector 2022-12-16 06:08:10 -05:00
Luuk de Gram
db06eed7a3 codegen: implement generating vector values 2022-12-12 17:41:59 +01:00
Andrew Kelley
50eb7983cd remove most conditional compilation based on stage1
There are still a few occurrences of "stage1" in the standard library
and self-hosted compiler source; however, these instances need a bit
more careful inspection to ensure no breakage.
2022-12-06 20:38:54 -07:00
Andrew Kelley
fdbb0fb7b9 Merge pull request #13744 from Vexu/stage2-fixes
Improve error messages, fix dependency loops
2022-12-03 00:42:11 -05:00
Veikka Tuominen
0e38cc16d5 Sema: fix comparisons between lazy and runtime values
Closes #12498
2022-12-03 00:09:23 +02:00
Jakub Konka
5bffc17c42 codegen: make LinkerLoad a common struct shared by backends 2022-12-01 16:32:09 +01:00
Luuk de Gram
090deae41d wasm: enable behavior tests for packed structs 2022-11-30 21:01:09 +01:00
Luuk de Gram
3933a4bac5 codegen: support generating packed structs 2022-11-30 17:56:02 +01:00
Ali Chraghi
f5f1f8c666 all: rename i386 to x86 2022-11-04 00:09:27 +03:30
Cody Tapscott
3295fee911 stage2: Use mem.readPackedInt etc. for packed bitcasts
Packed memory has a well-defined layout that can be read directly,
without first converting through an integer. Let's use it :-)

This change means that for bitcasting to/from a packed value that
is N layers deep, we no longer have to create N temporary big-ints
and perform N copies.

Other miscellaneous improvements:
  - Adds support for casting to packed enums and vectors
  - Fixes bitcasting to/from vectors outside of a packed struct
  - Adds a fast path for bitcasting <= u/i64
  - Fixes a bug when bitcasting f80 that would clear following fields

This also changes the bitcast memory layout of exotic integers on
big-endian systems to match what's empirically observed on our targets.
Technically, this layout is not guaranteed by LLVM, so we should probably
ban bitcasts that reveal these padding bits, but for now this is an
improvement.
2022-10-28 08:41:04 -07:00
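A minimal usage sketch of the `std.mem` packed-access helpers this commit switches to; the layout and values are illustrative.

```zig
const std = @import("std");

pub fn main() void {
    // Backing bytes of a hypothetical packed struct { a: u3, b: u7 }.
    var bytes = [_]u8{ 0, 0 };
    // Write field `b` (7 bits at bit offset 3) with no big-int detour.
    std.mem.writePackedInt(u7, &bytes, 3, 0x55, .little);
    const b = std.mem.readPackedInt(u7, &bytes, 3, .little);
    std.debug.print("b = 0x{x}\n", .{b});
}
```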