The high-level Allocator interface functions now capture `@returnAddress()`
so that stack traces collected by allocator implementations have a first
return address that does not point into the Allocator overhead functions.
This makes `4` a more reasonable default for how many stack frames to
capture.
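A minimal sketch of the idea (not the actual std code; `publicAlloc` and
`implAlloc` are hypothetical names): the public entry point records
`@returnAddress()` and hands it to the layer that captures the trace, so
the trace's first frame is the caller.

```zig
const std = @import("std");

fn publicAlloc(len: usize) void {
    // Capture the caller's address here, before any overhead frames.
    implAlloc(len, @returnAddress());
}

fn implAlloc(len: usize, ret_addr: usize) void {
    var addresses = [_]usize{0} ** 4; // the 4-frame default mentioned above
    var trace = std.builtin.StackTrace{
        .instruction_addresses = &addresses,
        .index = 0,
    };
    // Start the trace at the forwarded address, skipping interface frames.
    std.debug.captureStackTrace(ret_addr, &trace);
    _ = len;
}
```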
`std.GeneralPurposeAllocator` is now available. It is a function that
takes a configuration struct (with default field values) and returns an
allocator type. There is a detailed description of this allocator in the
doc comments at the top of the new file.
The main feature of this allocator is that it is *safe*: it catches
double-free and use-after-free bugs, and it detects memory leaks.
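A minimal usage sketch, assuming the defaults in the configuration struct
are acceptable (the `.{}` literal relies on the default field values
mentioned above; the exact leak-reporting mechanism of `deinit` is elided):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit(); // leak detection happens here in safe builds

    const allocator = &gpa.allocator;
    const bytes = try allocator.alloc(u8, 100);
    defer allocator.free(bytes);
}
```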
Some deprecation compile errors are removed.
The Allocator interface gains `old_align` as a new parameter to
`resizeFn`. This lets implementations quickly look up the bookkeeping for
an allocation.
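An illustrative shape of the resize hook after this change; the parameter
order and the exact alignment type are assumptions based on the
description above, not the definitive signature:

```zig
const std = @import("std");
const Allocator = std.mem.Allocator;

// old_align is the alignment the buffer was originally allocated with,
// letting an implementation locate its metadata for `buf` without a search.
const ResizeFn = fn (
    self: *Allocator,
    buf: []u8,
    old_align: u29,
    new_len: usize,
) Allocator.Error!usize;
```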
`std.heap.page_allocator` is improved to use mmap address hints, avoiding
being handed back the same virtual addresses when repeatedly unmapping
and mapping pages; immediately recycled addresses would otherwise mask
use-after-free bugs. The new general purpose allocator uses the page
allocator as its backing allocator by default.
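A POSIX-path sketch of the hint technique (not the actual std
implementation; `next_addr_hint` and `mapPages` are illustrative names):
remember where the previous mapping ended and offer that address to mmap,
so the kernel tends to pick fresh virtual addresses instead of recycling
ones that were just unmapped.

```zig
const std = @import("std");

var next_addr_hint: ?[*]align(std.mem.page_size) u8 = null;

fn mapPages(len: usize) ![]align(std.mem.page_size) u8 {
    const aligned_len = std.mem.alignForward(len, std.mem.page_size);
    const slice = try std.os.mmap(
        next_addr_hint, // hint: just past the previous mapping
        aligned_len,
        std.os.PROT_READ | std.os.PROT_WRITE,
        std.os.MAP_PRIVATE | std.os.MAP_ANONYMOUS,
        -1,
        0,
    );
    // Advance the hint so the next mapping also lands at a fresh address.
    next_addr_hint = slice.ptr + slice.len;
    return slice;
}
```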
`std.testing.allocator` is now backed by this new allocator, which
performs leak checking, and so the LeakCountAllocator is retired.
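For example, a test that forgets to free now fails with a leak report
instead of passing silently:

```zig
const std = @import("std");

test "testing.allocator checks for leaks" {
    const buf = try std.testing.allocator.alloc(u8, 16);
    // Omitting this free would make the test fail with a leak report.
    defer std.testing.allocator.free(buf);
}
```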
stage1 is improved so that the `@typeInfo` of a pointer has a lazy value
for the alignment of the child type, to avoid false dependency loops
when dealing with pointers to async function frames.
The `std.mem.Allocator` interface is refactored to be in its own file.
`std.Mutex` now exposes the dummy mutex with `std.Mutex.Dummy`.
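A small sketch, assuming `Dummy` mirrors the usual Mutex
init/acquire/release shape (treat those names as assumptions): useful for
single-threaded configurations where locking should compile to no-ops.

```zig
const std = @import("std");

var mutex = std.Mutex.Dummy.init();

fn criticalSection() void {
    const held = mutex.acquire(); // no-op in the dummy implementation
    defer held.release();
    // ... protected work ...
}
```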
The new general purpose allocator is great for debug mode; however, it
needs some work to achieve better performance in release modes. The next
step will be setting up a series of benchmarks in ziglang/gotta-go-fast
and then making improvements to the implementation.
* introduce std.ArrayListUnmanaged for when you have the allocator
  stored elsewhere (see the first sketch after this list)
* move std.heap.ArenaAllocator implementation to its own file. Extract
  the main state into std.heap.ArenaAllocator.State, which can be
  stored as an alternative to storing the entire ArenaAllocator, saving
  24 bytes per ArenaAllocator on 64-bit targets (see the second sketch
  after this list).
* std.LinkedList.Node pointer fields now default to null.
* Rework self-hosted compiler Package API
* Delete almost all the bitrotted self-hosted compiler code. The only
  bitrotted code left is in main.zig and compilation.zig.
* Add call instruction to ZIR
* self-hosted compiler ir API and link API are reworked to support
a long-running compiler that incrementally updates declarations
* Introduce the concept of scopes to ZIR semantic analysis
* ZIR text format supports referencing named decls that are declared
later in the file
* Figure out how memory management works for the long-running compiler
  and incremental compilation. The main roots are top-level
  declarations. There is a table of decls, keyed by a cryptographic
  hash of the fully qualified decl name. Each decl has an arena
  allocator where all of the memory related to that decl is stored.
  Each code block has its own arena allocator for the lifetime of
  the block. Values that need to survive going out of scope in a block
  must be copied into the outer block, and ultimately values must be
  copied into the Decl arena to be long-lived (see the final sketch
  after this list).
* Delete the unused MemoryCell struct. Instead, comptime pointers are
based on references to Decl structs.
* Figure out how caching works. Each Decl will store a set of other
Decls which must be recompiled when it changes.
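First, a minimal sketch of the ArrayListUnmanaged pattern referenced
above: the list stores no allocator, so every method that allocates or
frees takes one as a parameter.

```zig
const std = @import("std");

test "ArrayListUnmanaged sketch" {
    // No allocator field; the caller supplies one on each call.
    var list = std.ArrayListUnmanaged(u32){};
    defer list.deinit(std.testing.allocator);

    try list.append(std.testing.allocator, 42);
    std.debug.assert(list.items.len == 1);
}
```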
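Second, a sketch of the ArenaAllocator.State pattern: a long-lived object
stores only the compact State and materializes a full ArenaAllocator on
demand. `promote()` is the accompanying helper for this; the surrounding
`Container` struct and its field names are illustrative assumptions.

```zig
const std = @import("std");

const Container = struct {
    arena_state: std.heap.ArenaAllocator.State,

    fn dupe(self: *Container, backing: *std.mem.Allocator, bytes: []const u8) ![]u8 {
        // Rebuild a usable arena from the stored state...
        var arena = self.arena_state.promote(backing);
        // ...and write the (possibly grown) state back when done.
        defer self.arena_state = arena.state;
        return std.mem.dupe(&arena.allocator, u8, bytes);
    }
};
```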
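Finally, an illustrative sketch of the decl-table ownership scheme from
the memory-management item above. All names here are hypothetical, not
the compiler's actual types; it only shows the shape: one arena per decl,
keyed by a hash of the fully qualified name, plus the recompilation set
from the caching item.

```zig
const std = @import("std");

const Decl = struct {
    /// Long-lived memory for this decl; values produced during analysis
    /// must be copied in here to outlive their block's scratch arena.
    arena: std.heap.ArenaAllocator.State,
    /// Other decls that must be recompiled when this one changes.
    dependants: std.AutoHashMap(*Decl, void),
};

const Module = struct {
    /// Key: cryptographic hash of the fully qualified decl name.
    decl_table: std.AutoHashMap([32]u8, *Decl),
};
```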
This branch is still work-in-progress; this commit breaks the build.