avoid the word "coroutine", they're "async functions"

Andrew Kelley 2019-08-13 14:14:19 -04:00
parent 82d4ebe53a
commit 5092634103
GPG Key ID: 7C5F548F728501A9
23 changed files with 175 additions and 199 deletions

View File

@ -1,4 +1,4 @@
* grep for "coroutine" and "coro" and replace all that nomenclature with "async functions"
* zig fmt support for the syntax
* alignment of variables not being respected in async functions
* await of a non async function
* async call on a non async function

View File

@ -5968,9 +5968,10 @@ test "global assembly" {
<p>TODO: @atomic rmw</p>
<p>TODO: builtin atomic memory ordering enum</p>
{#header_close#}
{#header_open|Coroutines#}
{#header_open|Async Functions#}
<p>
A coroutine is a generalization of a function.
An async function is a function whose callsite is split into an {#syntax#}async{#endsyntax#} initiation,
followed by an {#syntax#}await{#endsyntax#} completion. They can also be canceled.
</p>
<p>
When you call a function, it creates a stack frame,
@ -5980,14 +5981,14 @@ test "global assembly" {
until the function returns.
</p>
<p>
A coroutine is like a function, but it can be suspended
An async function is like a function, but it can be suspended
and resumed any number of times, and then it must be
explicitly destroyed. When a coroutine suspends, it
explicitly destroyed. When an async function suspends, it
returns to the resumer.
</p>
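A sketch of that lifecycle, written in the same style as the tests further down in this commit (the async rewrite was still in progress, so treat this as an illustration rather than code guaranteed to compile at this exact revision):

const std = @import("std");

var the_frame: anyframe = undefined;
var finished = false;

fn worker() void {
    // Suspending hands control back to whoever called or resumed us.
    suspend {
        the_frame = @frame();
    }
    finished = true;
}

test "suspend returns to the resumer" {
    _ = async worker(); // runs worker until its first suspend point
    std.debug.assert(!finished);
    resume the_frame; // continues worker just past the suspend block
    std.debug.assert(finished);
}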
{#header_open|Minimal Coroutine Example#}
{#header_open|Minimal Async Function Example#}
<p>
Declare a coroutine with the {#syntax#}async{#endsyntax#} keyword.
Declare an async function with the {#syntax#}async{#endsyntax#} keyword.
The expression in angle brackets must evaluate to a struct
which has these fields:
</p>
@ -6006,8 +6007,8 @@ test "global assembly" {
the function generic. Zig will infer the allocator type when the async function is called.
</p>
<p>
Call a coroutine with the {#syntax#}async{#endsyntax#} keyword. Here, the expression in angle brackets
is a pointer to the allocator struct that the coroutine expects.
Call an async function with the {#syntax#}async{#endsyntax#} keyword. Here, the expression in angle brackets
is a pointer to the allocator struct that the async function expects.
</p>
<p>
The result of an async function call is a {#syntax#}promise->T{#endsyntax#} type, where {#syntax#}T{#endsyntax#}
@ -6058,7 +6059,7 @@ const assert = std.debug.assert;
var the_frame: anyframe = undefined;
var result = false;
test "coroutine suspend with block" {
test "async function suspend with block" {
_ = async testSuspendBlock();
std.debug.assert(!result);
resume the_frame;
@ -6074,7 +6075,7 @@ fn testSuspendBlock() void {
}
{#code_end#}
<p>
Every suspend point in an async function represents a point at which the coroutine
Every suspend point in an async function represents a point at which the async function
could be destroyed. If that happens, {#syntax#}defer{#endsyntax#} expressions that are in
scope are run, as well as {#syntax#}errdefer{#endsyntax#} expressions.
</p>
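As a sketch of that guarantee (assuming cancel destroys a suspended frame as described here; cancel semantics were still being settled at this point):

const std = @import("std");

var the_frame: anyframe = undefined;
var cleaned_up = false;

fn worker() void {
    // Runs even if the frame is destroyed while suspended below.
    defer cleaned_up = true;
    suspend {
        the_frame = @frame();
    }
}

test "defer runs when a suspended frame is destroyed" {
    _ = async worker();
    cancel the_frame;
    std.debug.assert(cleaned_up);
}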
@ -6083,14 +6084,14 @@ fn testSuspendBlock() void {
</p>
{#header_open|Resuming from Suspend Blocks#}
<p>
Upon entering a {#syntax#}suspend{#endsyntax#} block, the coroutine is already considered
Upon entering a {#syntax#}suspend{#endsyntax#} block, the async function is already considered
suspended, and can be resumed. For example, if you started another kernel thread,
and had that thread call {#syntax#}resume{#endsyntax#} on the promise handle provided by the
{#syntax#}suspend{#endsyntax#} block, the new thread would begin executing after the suspend
block, while the old thread continued executing the suspend block.
</p>
<p>
However, the coroutine can be directly resumed from the suspend block, in which case it
However, the async function can be directly resumed from the suspend block, in which case it
never returns to its resumer and continues executing.
</p>
{#code_begin|test#}
@ -6127,8 +6128,8 @@ async fn testResumeFromSuspend(my_result: *i32) void {
If the async function associated with the promise handle has already returned,
then {#syntax#}await{#endsyntax#} destroys the target async function, and gives the return value.
Otherwise, {#syntax#}await{#endsyntax#} suspends the current async function, registering its
promise handle with the target coroutine. It becomes the target coroutine's responsibility
to have ensured that it will be resumed or destroyed. When the target coroutine reaches
promise handle with the target async function. It becomes the target async function's responsibility
to have ensured that it will be resumed or destroyed. When the target async function reaches
its return statement, it gives the return value to the awaiter, destroys itself, and then
resumes the awaiter.
</p>
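The handshake above, sketched with the same pattern the test below uses (an illustration of the described semantics, not the text of the elided example):

const std = @import("std");

var the_frame: anyframe = undefined;
var final_result: i32 = 0;

fn worker() i32 {
    suspend {
        the_frame = @frame();
    }
    return 1234;
}

fn amain() void {
    var frame = async worker();
    // worker has not returned yet, so this await suspends amain and
    // registers it as the awaiter in worker's frame.
    final_result = await frame;
}

test "await resumes the awaiter when the target returns" {
    _ = async amain();
    resume the_frame; // worker returns, hands over 1234, and resumes amain
    std.debug.assert(final_result == 1234);
}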
@ -6137,7 +6138,7 @@ async fn testResumeFromSuspend(my_result: *i32) void {
</p>
<p>
{#syntax#}await{#endsyntax#} counts as a suspend point, and therefore at every {#syntax#}await{#endsyntax#},
a coroutine can be potentially destroyed, which would run {#syntax#}defer{#endsyntax#} and {#syntax#}errdefer{#endsyntax#} expressions.
an async function can be potentially destroyed, which would run {#syntax#}defer{#endsyntax#} and {#syntax#}errdefer{#endsyntax#} expressions.
</p>
{#code_begin|test#}
const std = @import("std");
@ -6146,7 +6147,7 @@ const assert = std.debug.assert;
var the_frame: anyframe = undefined;
var final_result: i32 = 0;
test "coroutine await" {
test "async function await" {
seq('a');
_ = async amain();
seq('f');
@ -6188,7 +6189,7 @@ fn seq(c: u8) void {
{#header_close#}
{#header_open|Open Issues#}
<p>
There are a few issues with coroutines that are considered unresolved. Best be aware of them,
There are a few issues with async functions that are considered unresolved. Best be aware of them,
as the situation is likely to change before 1.0.0:
</p>
<ul>
@ -6202,7 +6203,7 @@ fn seq(c: u8) void {
</li>
<li>
Zig does not take advantage of LLVM's allocation elision optimization for
coroutines. It crashed LLVM when I tried to do it the first time. This is
async functions. It crashed LLVM when I tried to do it the first time. This is
related to the other 2 bullet points here. See
<a href="https://github.com/ziglang/zig/issues/802">#802</a>.
</li>
@ -8016,8 +8017,7 @@ pub fn build(b: *Builder) void {
<p>Zig has a compile option <code>--single-threaded</code> which has the following effects:
<ul>
<li>All {#link|Thread Local Variables#} are treated as {#link|Global Variables#}.</li>
<li>The overhead of {#link|Coroutines#} becomes equivalent to function call overhead.
TODO: please note this will not be implemented until the upcoming Coroutine Rewrite</li>
<li>The overhead of {#link|Async Functions#} becomes equivalent to function call overhead.</li>
<li>The {#syntax#}@import("builtin").single_threaded{#endsyntax#} becomes {#syntax#}true{#endsyntax#}
and therefore various userland APIs which read this variable become more efficient.
For example {#syntax#}std.Mutex{#endsyntax#} becomes

View File

@ -1904,20 +1904,6 @@ pub const Builder = struct {
}
return error.Unimplemented;
//ir_build_store_ptr(irb, scope, node, irb->exec->coro_result_field_ptr, return_value);
//IrInstruction *promise_type_val = ir_build_const_type(irb, scope, node,
// get_optional_type(irb->codegen, irb->codegen->builtin_types.entry_promise));
//// TODO replace replacement_value with @intToPtr(?promise, 0x1) when it doesn't crash zig
//IrInstruction *replacement_value = irb->exec->coro_handle;
//IrInstruction *maybe_await_handle = ir_build_atomic_rmw(irb, scope, node,
// promise_type_val, irb->exec->coro_awaiter_field_ptr, nullptr, replacement_value, nullptr,
// AtomicRmwOp_xchg, AtomicOrderSeqCst);
//ir_build_store_ptr(irb, scope, node, irb->exec->await_handle_var_ptr, maybe_await_handle);
//IrInstruction *is_non_null = ir_build_test_nonnull(irb, scope, node, maybe_await_handle);
//IrInstruction *is_comptime = ir_build_const_bool(irb, scope, node, false);
//return ir_build_cond_br(irb, scope, node, is_non_null, irb->exec->coro_normal_final, irb->exec->coro_early_final,
// is_comptime);
//// the above blocks are rendered by ir_gen after the rest of codegen
}
const Ident = union(enum) {

View File

@ -627,7 +627,7 @@ fn constructLinkerArgsWasm(ctx: *Context) void {
fn addFnObjects(ctx: *Context) !void {
// at this point it's guaranteed nobody else has this lock, so we circumvent it
// and avoid having to be a coroutine
// and avoid having to be an async function
const fn_link_set = &ctx.comp.fn_link_set.private_data;
var it = fn_link_set.first;

View File

@ -52,7 +52,7 @@ const Command = struct {
pub fn main() !void {
// This allocator needs to be thread-safe because we use it for the event.Loop
// which multiplexes coroutines onto kernel threads.
// which multiplexes async functions onto kernel threads.
// libc allocator is guaranteed to have this property.
const allocator = std.heap.c_allocator;

View File

@ -142,7 +142,8 @@ export fn stage2_render_ast(tree: *ast.Tree, output_file: *FILE) Error {
return Error.None;
}
// TODO: just use the actual self-hosted zig fmt. Until the coroutine rewrite, we use a blocking implementation.
// TODO: just use the actual self-hosted zig fmt. Until https://github.com/ziglang/zig/issues/2377,
// we use a blocking implementation.
export fn stage2_fmt(argc: c_int, argv: [*]const [*]const u8) c_int {
if (std.debug.runtime_safety) {
fmtMain(argc, argv) catch unreachable;

View File

@ -1265,7 +1265,7 @@ enum ZigTypeId {
ZigTypeIdBoundFn,
ZigTypeIdArgTuple,
ZigTypeIdOpaque,
ZigTypeIdCoroFrame,
ZigTypeIdFnFrame,
ZigTypeIdAnyFrame,
ZigTypeIdVector,
ZigTypeIdEnumLiteral,
@ -1281,7 +1281,7 @@ struct ZigTypeOpaque {
Buf *bare_name;
};
struct ZigTypeCoroFrame {
struct ZigTypeFnFrame {
ZigFn *fn;
ZigType *locals_struct;
};
@ -1315,7 +1315,7 @@ struct ZigType {
ZigTypeBoundFn bound_fn;
ZigTypeVector vector;
ZigTypeOpaque opaque;
ZigTypeCoroFrame frame;
ZigTypeFnFrame frame;
ZigTypeAnyFrame any_frame;
} data;
@ -1376,7 +1376,7 @@ struct ZigFn {
LLVMTypeRef raw_type_ref;
ZigLLVMDIType *raw_di_type;
ZigType *frame_type; // coro frame type
ZigType *frame_type;
// in the case of normal functions this is the implicit return type
// in the case of async functions this is the implicit return type according to the
// zig source code, not according to zig ir
@ -2368,7 +2368,7 @@ enum IrInstructionId {
IrInstructionIdSuspendFinish,
IrInstructionIdAwaitSrc,
IrInstructionIdAwaitGen,
IrInstructionIdCoroResume,
IrInstructionIdResume,
IrInstructionIdTestCancelRequested,
IrInstructionIdSpillBegin,
IrInstructionIdSpillEnd,
@ -3640,7 +3640,7 @@ struct IrInstructionAwaitGen {
IrInstruction *result_loc;
};
struct IrInstructionCoroResume {
struct IrInstructionResume {
IrInstruction base;
IrInstruction *frame;
@ -3751,12 +3751,12 @@ static const size_t maybe_null_index = 1;
static const size_t err_union_payload_index = 0;
static const size_t err_union_err_index = 1;
// label (grep this): [coro_frame_struct_layout]
static const size_t coro_fn_ptr_index = 0;
static const size_t coro_resume_index = 1;
static const size_t coro_awaiter_index = 2;
static const size_t coro_prev_val_index = 3;
static const size_t coro_ret_start = 4;
// label (grep this): [fn_frame_struct_layout]
static const size_t frame_fn_ptr_index = 0;
static const size_t frame_resume_index = 1;
static const size_t frame_awaiter_index = 2;
static const size_t frame_prev_val_index = 3;
static const size_t frame_ret_start = 4;
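// Rough map of the fields these indices address (a summary; the return-value area is
// spelled out by the [fn_frame_struct_layout] comments in codegen.cpp):
//   frame_fn_ptr_index    function pointer invoked on resume
//   frame_resume_index    integer selecting which suspend point to continue from
//   frame_awaiter_index   usize-encoded address of the awaiting frame; frames are
//                         aligned to at least 8 so the low bits are free for atomicrmw flags
//   frame_prev_val_index  slot used by the suspend/await machinery (see
//                         cur_async_prev_val_field_ptr in codegen.cpp)
//   frame_ret_start       first return-value field: *ReturnType for the callee,
//                         *ReturnType for the awaiter, then the ReturnType itself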
// TODO https://github.com/ziglang/zig/issues/3056
// We require this to be a power of 2 so that we can use shifting rather than

View File

@ -234,7 +234,7 @@ AstNode *type_decl_node(ZigType *type_entry) {
return type_entry->data.enumeration.decl_node;
case ZigTypeIdUnion:
return type_entry->data.unionation.decl_node;
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
return type_entry->data.frame.fn->proto_node;
case ZigTypeIdOpaque:
case ZigTypeIdMetaType:
@ -271,7 +271,7 @@ bool type_is_resolved(ZigType *type_entry, ResolveStatus status) {
return type_entry->data.structure.resolve_status >= status;
case ZigTypeIdUnion:
return type_entry->data.unionation.resolve_status >= status;
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
switch (status) {
case ResolveStatusInvalid:
zig_unreachable();
@ -394,18 +394,18 @@ static const char *ptr_len_to_star_str(PtrLen ptr_len) {
zig_unreachable();
}
ZigType *get_coro_frame_type(CodeGen *g, ZigFn *fn) {
ZigType *get_fn_frame_type(CodeGen *g, ZigFn *fn) {
if (fn->frame_type != nullptr) {
return fn->frame_type;
}
ZigType *entry = new_type_table_entry(ZigTypeIdCoroFrame);
ZigType *entry = new_type_table_entry(ZigTypeIdFnFrame);
buf_resize(&entry->name, 0);
buf_appendf(&entry->name, "@Frame(%s)", buf_ptr(&fn->symbol_name));
entry->data.frame.fn = fn;
// Coroutine frames are always non-zero bits because they always have a resume index.
// Async function frames are always non-zero bits because they always have a resume index.
entry->abi_size = SIZE_MAX;
entry->size_in_bits = SIZE_MAX;
@ -1108,7 +1108,7 @@ static Error emit_error_unless_type_allowed_in_packed_struct(CodeGen *g, ZigType
case ZigTypeIdBoundFn:
case ZigTypeIdArgTuple:
case ZigTypeIdOpaque:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
add_node_error(g, source_node,
buf_sprintf("type '%s' not allowed in packed struct; no guaranteed in-memory representation",
@ -1198,7 +1198,7 @@ bool type_allowed_in_extern(CodeGen *g, ZigType *type_entry) {
case ZigTypeIdBoundFn:
case ZigTypeIdArgTuple:
case ZigTypeIdVoid:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
return false;
case ZigTypeIdOpaque:
@ -1370,7 +1370,7 @@ static ZigType *analyze_fn_type(CodeGen *g, AstNode *proto_node, Scope *child_sc
case ZigTypeIdUnion:
case ZigTypeIdFn:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
switch (type_requires_comptime(g, type_entry)) {
case ReqCompTimeNo:
@ -1467,7 +1467,7 @@ static ZigType *analyze_fn_type(CodeGen *g, AstNode *proto_node, Scope *child_sc
case ZigTypeIdUnion:
case ZigTypeIdFn:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
switch (type_requires_comptime(g, fn_type_id.return_type)) {
case ReqCompTimeInvalid:
@ -3080,7 +3080,7 @@ ZigType *validate_var_type(CodeGen *g, AstNode *source_node, ZigType *type_entry
case ZigTypeIdFn:
case ZigTypeIdBoundFn:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
return type_entry;
}
@ -3582,7 +3582,7 @@ bool is_container(ZigType *type_entry) {
case ZigTypeIdArgTuple:
case ZigTypeIdOpaque:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
return false;
}
@ -3640,7 +3640,7 @@ Error resolve_container_type(CodeGen *g, ZigType *type_entry) {
case ZigTypeIdArgTuple:
case ZigTypeIdOpaque:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
zig_unreachable();
}
@ -3672,7 +3672,7 @@ bool type_is_nonnull_ptr(ZigType *type) {
return get_codegen_ptr_type(type) == type && !ptr_allows_addr_zero(type);
}
static uint32_t get_coro_frame_align_bytes(CodeGen *g) {
static uint32_t get_async_frame_align_bytes(CodeGen *g) {
uint32_t a = g->pointer_size_bytes * 2;
// promises have at least alignment 8 so that we can have 3 extra bits when doing atomicrmw
if (a < 8) a = 8;
@ -3691,7 +3691,7 @@ uint32_t get_ptr_align(CodeGen *g, ZigType *type) {
// See http://lists.llvm.org/pipermail/llvm-dev/2018-September/126142.html
return (ptr_type->data.fn.fn_type_id.alignment == 0) ? 1 : ptr_type->data.fn.fn_type_id.alignment;
} else if (ptr_type->id == ZigTypeIdAnyFrame) {
return get_coro_frame_align_bytes(g);
return get_async_frame_align_bytes(g);
} else {
zig_unreachable();
}
@ -3779,7 +3779,7 @@ bool resolve_inferred_error_set(CodeGen *g, ZigType *err_set_type, AstNode *sour
}
static void resolve_async_fn_frame(CodeGen *g, ZigFn *fn) {
ZigType *frame_type = get_coro_frame_type(g, fn);
ZigType *frame_type = get_fn_frame_type(g, fn);
Error err;
if ((err = type_resolve(g, frame_type, ResolveStatusSizeKnown))) {
fn->anal_state = FnAnalStateInvalid;
@ -4218,7 +4218,7 @@ bool handle_is_ptr(ZigType *type_entry) {
return false;
case ZigTypeIdArray:
case ZigTypeIdStruct:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
return type_has_bits(type_entry);
case ZigTypeIdErrorUnion:
return type_has_bits(type_entry->data.error_union.payload_type);
@ -4463,7 +4463,7 @@ static uint32_t hash_const_val(ConstExprValue *const_val) {
case ZigTypeIdVector:
// TODO better hashing algorithm
return 3647867726;
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
// TODO better hashing algorithm
return 675741936;
case ZigTypeIdAnyFrame:
@ -4533,7 +4533,7 @@ static bool can_mutate_comptime_var_state(ConstExprValue *value) {
case ZigTypeIdOpaque:
case ZigTypeIdErrorSet:
case ZigTypeIdEnum:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
return false;
@ -4606,7 +4606,7 @@ static bool return_type_is_cacheable(ZigType *return_type) {
case ZigTypeIdEnum:
case ZigTypeIdPointer:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
return true;
@ -4739,7 +4739,7 @@ OnePossibleValue type_has_one_possible_value(CodeGen *g, ZigType *type_entry) {
case ZigTypeIdBool:
case ZigTypeIdFloat:
case ZigTypeIdErrorUnion:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
return OnePossibleValueNo;
case ZigTypeIdUndefined:
@ -4828,7 +4828,7 @@ ReqCompTime type_requires_comptime(CodeGen *g, ZigType *type_entry) {
case ZigTypeIdFloat:
case ZigTypeIdVoid:
case ZigTypeIdUnreachable:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
return ReqCompTimeNo;
}
@ -5161,7 +5161,7 @@ static ZigType *get_async_fn_type(CodeGen *g, ZigType *orig_fn_type) {
return fn_type;
}
static Error resolve_coro_frame(CodeGen *g, ZigType *frame_type) {
static Error resolve_async_frame(CodeGen *g, ZigType *frame_type) {
Error err;
if (frame_type->data.frame.locals_struct != nullptr)
@ -5231,7 +5231,7 @@ static Error resolve_coro_frame(CodeGen *g, ZigType *frame_type) {
if (!fn_is_async(callee))
continue;
ZigType *callee_frame_type = get_coro_frame_type(g, callee);
ZigType *callee_frame_type = get_fn_frame_type(g, callee);
IrInstructionAllocaGen *alloca_gen = allocate<IrInstructionAllocaGen>(1);
alloca_gen->base.id = IrInstructionIdAllocaGen;
@ -5244,7 +5244,7 @@ static Error resolve_coro_frame(CodeGen *g, ZigType *frame_type) {
call->frame_result_loc = &alloca_gen->base;
}
// label (grep this): [coro_frame_struct_layout]
// label (grep this): [fn_frame_struct_layout]
ZigList<ZigType *> field_types = {};
ZigList<const char *> field_names = {};
@ -5366,8 +5366,8 @@ Error type_resolve(CodeGen *g, ZigType *ty, ResolveStatus status) {
return resolve_enum_zero_bits(g, ty);
} else if (ty->id == ZigTypeIdUnion) {
return resolve_union_alignment(g, ty);
} else if (ty->id == ZigTypeIdCoroFrame) {
return resolve_coro_frame(g, ty);
} else if (ty->id == ZigTypeIdFnFrame) {
return resolve_async_frame(g, ty);
}
return ErrorNone;
case ResolveStatusSizeKnown:
@ -5377,8 +5377,8 @@ Error type_resolve(CodeGen *g, ZigType *ty, ResolveStatus status) {
return resolve_enum_zero_bits(g, ty);
} else if (ty->id == ZigTypeIdUnion) {
return resolve_union_type(g, ty);
} else if (ty->id == ZigTypeIdCoroFrame) {
return resolve_coro_frame(g, ty);
} else if (ty->id == ZigTypeIdFnFrame) {
return resolve_async_frame(g, ty);
}
return ErrorNone;
case ResolveStatusLLVMFwdDecl:
@ -5573,7 +5573,7 @@ bool const_values_equal(CodeGen *g, ConstExprValue *a, ConstExprValue *b) {
return false;
}
return true;
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
zig_panic("TODO");
case ZigTypeIdAnyFrame:
zig_panic("TODO");
@ -5929,7 +5929,7 @@ void render_const_value(CodeGen *g, Buf *buf, ConstExprValue *const_val) {
buf_appendf(buf, "(args value)");
return;
}
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
buf_appendf(buf, "(TODO: async function frame value)");
return;
@ -5992,7 +5992,7 @@ uint32_t type_id_hash(TypeId x) {
case ZigTypeIdFn:
case ZigTypeIdBoundFn:
case ZigTypeIdArgTuple:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
zig_unreachable();
case ZigTypeIdErrorUnion:
@ -6042,7 +6042,7 @@ bool type_id_eql(TypeId a, TypeId b) {
case ZigTypeIdBoundFn:
case ZigTypeIdArgTuple:
case ZigTypeIdOpaque:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
zig_unreachable();
case ZigTypeIdErrorUnion:
@ -6209,7 +6209,7 @@ static const ZigTypeId all_type_ids[] = {
ZigTypeIdBoundFn,
ZigTypeIdArgTuple,
ZigTypeIdOpaque,
ZigTypeIdCoroFrame,
ZigTypeIdFnFrame,
ZigTypeIdAnyFrame,
ZigTypeIdVector,
ZigTypeIdEnumLiteral,
@ -6274,7 +6274,7 @@ size_t type_id_index(ZigType *entry) {
return 20;
case ZigTypeIdOpaque:
return 21;
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
return 22;
case ZigTypeIdAnyFrame:
return 23;
@ -6338,7 +6338,7 @@ const char *type_id_name(ZigTypeId id) {
return "Opaque";
case ZigTypeIdVector:
return "Vector";
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
return "Frame";
case ZigTypeIdAnyFrame:
return "AnyFrame";
@ -6782,7 +6782,7 @@ static void resolve_llvm_types_slice(CodeGen *g, ZigType *type, ResolveStatus wa
}
static void resolve_llvm_types_struct(CodeGen *g, ZigType *struct_type, ResolveStatus wanted_resolve_status,
ZigType *coro_frame_type)
ZigType *async_frame_type)
{
assert(struct_type->id == ZigTypeIdStruct);
assert(struct_type->data.structure.resolve_status != ResolveStatusInvalid);
@ -6887,11 +6887,11 @@ static void resolve_llvm_types_struct(CodeGen *g, ZigType *struct_type, ResolveS
packed_bits_offset = next_packed_bits_offset;
} else {
LLVMTypeRef llvm_type;
if (i == 0 && coro_frame_type != nullptr) {
assert(coro_frame_type->id == ZigTypeIdCoroFrame);
if (i == 0 && async_frame_type != nullptr) {
assert(async_frame_type->id == ZigTypeIdFnFrame);
assert(field_type->id == ZigTypeIdFn);
resolve_llvm_types_fn(g, coro_frame_type->data.frame.fn);
llvm_type = LLVMPointerType(coro_frame_type->data.frame.fn->raw_type_ref, 0);
resolve_llvm_types_fn(g, async_frame_type->data.frame.fn);
llvm_type = LLVMPointerType(async_frame_type->data.frame.fn->raw_type_ref, 0);
} else {
llvm_type = get_llvm_type(g, field_type);
}
@ -7594,7 +7594,7 @@ void resolve_llvm_types_fn(CodeGen *g, ZigFn *fn) {
// first "parameter" is return value
param_di_types.append(get_llvm_di_type(g, gen_return_type));
ZigType *frame_type = get_coro_frame_type(g, fn);
ZigType *frame_type = get_fn_frame_type(g, fn);
ZigType *ptr_type = get_pointer_to_type(g, frame_type, false);
if ((err = type_resolve(g, ptr_type, ResolveStatusLLVMFwdDecl)))
zig_unreachable();
@ -7634,7 +7634,7 @@ static void resolve_llvm_types_anyerror(CodeGen *g) {
get_llvm_di_type(g, g->err_tag_type), "");
}
static void resolve_llvm_types_coro_frame(CodeGen *g, ZigType *frame_type, ResolveStatus wanted_resolve_status) {
static void resolve_llvm_types_async_frame(CodeGen *g, ZigType *frame_type, ResolveStatus wanted_resolve_status) {
resolve_llvm_types_struct(g, frame_type->data.frame.locals_struct, wanted_resolve_status, frame_type);
frame_type->llvm_type = frame_type->data.frame.locals_struct->llvm_type;
frame_type->llvm_di_type = frame_type->data.frame.locals_struct->llvm_di_type;
@ -7673,7 +7673,7 @@ static void resolve_llvm_types_any_frame(CodeGen *g, ZigType *any_frame_type, Re
ZigList<LLVMTypeRef> field_types = {};
ZigList<ZigLLVMDIType *> di_element_types = {};
// label (grep this): [coro_frame_struct_layout]
// label (grep this): [fn_frame_struct_layout]
field_types.append(ptr_fn_llvm_type); // fn_ptr
field_types.append(usize_type_ref); // resume_index
field_types.append(usize_type_ref); // awaiter
@ -7824,8 +7824,8 @@ static void resolve_llvm_types(CodeGen *g, ZigType *type, ResolveStatus wanted_r
type->abi_align, get_llvm_di_type(g, type->data.vector.elem_type), type->data.vector.len);
return;
}
case ZigTypeIdCoroFrame:
return resolve_llvm_types_coro_frame(g, type, wanted_resolve_status);
case ZigTypeIdFnFrame:
return resolve_llvm_types_async_frame(g, type, wanted_resolve_status);
case ZigTypeIdAnyFrame:
return resolve_llvm_types_any_frame(g, type, wanted_resolve_status);
}

View File

@ -16,7 +16,7 @@ ErrorMsg *add_token_error(CodeGen *g, ZigType *owner, Token *token, Buf *msg);
ErrorMsg *add_error_note(CodeGen *g, ErrorMsg *parent_msg, const AstNode *node, Buf *msg);
void emit_error_notes_for_ref_stack(CodeGen *g, ErrorMsg *msg);
ZigType *new_type_table_entry(ZigTypeId id);
ZigType *get_coro_frame_type(CodeGen *g, ZigFn *fn);
ZigType *get_fn_frame_type(CodeGen *g, ZigFn *fn);
ZigType *get_pointer_to_type(CodeGen *g, ZigType *child_type, bool is_const);
ZigType *get_pointer_to_type_extra(CodeGen *g, ZigType *child_type, bool is_const,
bool is_volatile, PtrLen ptr_len,

View File

@ -305,16 +305,16 @@ static LLVMLinkage to_llvm_linkage(GlobalLinkageId id) {
zig_unreachable();
}
// label (grep this): [coro_frame_struct_layout]
// label (grep this): [fn_frame_struct_layout]
static uint32_t frame_index_trace_arg(CodeGen *g, ZigType *return_type) {
// [0] *ReturnType (callee's)
// [1] *ReturnType (awaiter's)
// [2] ReturnType
uint32_t return_field_count = type_has_bits(return_type) ? 3 : 0;
return coro_ret_start + return_field_count;
return frame_ret_start + return_field_count;
}
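// For example, with frame_ret_start == 4: a function returning i32 has its three
// return fields at indices 4..6, so this returns 7; a function returning void has
// no return fields, so this returns 4.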
// label (grep this): [coro_frame_struct_layout]
// label (grep this): [fn_frame_struct_layout]
static uint32_t frame_index_arg(CodeGen *g, ZigType *return_type) {
bool have_stack_trace = codegen_fn_has_err_ret_tracing_arg(g, return_type);
// [0] *StackTrace
@ -322,7 +322,7 @@ static uint32_t frame_index_arg(CodeGen *g, ZigType *return_type) {
return frame_index_trace_arg(g, return_type) + trace_field_count;
}
// label (grep this): [coro_frame_struct_layout]
// label (grep this): [fn_frame_struct_layout]
static uint32_t frame_index_trace_stack(CodeGen *g, FnTypeId *fn_type_id) {
uint32_t result = frame_index_arg(g, fn_type_id->return_type);
for (size_t i = 0; i < fn_type_id->param_count; i += 1) {
@ -2224,7 +2224,7 @@ static LLVMValueRef gen_resume(CodeGen *g, LLVMValueRef fn_val, LLVMValueRef tar
{
LLVMTypeRef usize_type_ref = g->builtin_types.entry_usize->llvm_type;
if (fn_val == nullptr) {
LLVMValueRef fn_ptr_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, coro_fn_ptr_index, "");
LLVMValueRef fn_ptr_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, frame_fn_ptr_index, "");
fn_val = LLVMBuildLoad(g->builder, fn_ptr_ptr, "");
}
if (arg_val == nullptr) {
@ -2373,7 +2373,7 @@ static LLVMValueRef ir_render_return(CodeGen *g, IrExecutable *executable, IrIns
// If the awaiter result pointer is non-null, we need to copy the result to there.
LLVMBasicBlockRef copy_block = LLVMAppendBasicBlock(g->cur_fn_val, "CopyResult");
LLVMBasicBlockRef copy_end_block = LLVMAppendBasicBlock(g->cur_fn_val, "CopyResultEnd");
LLVMValueRef awaiter_ret_ptr_ptr = LLVMBuildStructGEP(g->builder, g->cur_frame_ptr, coro_ret_start + 1, "");
LLVMValueRef awaiter_ret_ptr_ptr = LLVMBuildStructGEP(g->builder, g->cur_frame_ptr, frame_ret_start + 1, "");
LLVMValueRef awaiter_ret_ptr = LLVMBuildLoad(g->builder, awaiter_ret_ptr_ptr, "");
LLVMValueRef zero_ptr = LLVMConstNull(LLVMTypeOf(awaiter_ret_ptr));
LLVMValueRef need_copy_bit = LLVMBuildICmp(g->builder, LLVMIntNE, awaiter_ret_ptr, zero_ptr, "");
@ -3858,7 +3858,7 @@ static LLVMValueRef ir_render_call(CodeGen *g, IrExecutable *executable, IrInstr
if (ret_has_bits) {
// Use the result location which is inside the frame if this is an async call.
ret_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, coro_ret_start + 2, "");
ret_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, frame_ret_start + 2, "");
}
} else {
LLVMValueRef frame_slice_ptr = ir_llvm_value(g, instruction->new_stack);
@ -3897,14 +3897,14 @@ static LLVMValueRef ir_render_call(CodeGen *g, IrExecutable *executable, IrInstr
if (ret_has_bits) {
if (result_loc == nullptr) {
// return type is a scalar, but we still need a pointer to it. Use the async fn frame.
ret_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, coro_ret_start + 2, "");
ret_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, frame_ret_start + 2, "");
} else {
// Use the call instruction's result location.
ret_ptr = result_loc;
}
// Store a zero in the awaiter's result ptr to indicate we do not need a copy made.
LLVMValueRef awaiter_ret_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, coro_ret_start + 1, "");
LLVMValueRef awaiter_ret_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, frame_ret_start + 1, "");
LLVMValueRef zero_ptr = LLVMConstNull(LLVMGetElementType(LLVMTypeOf(awaiter_ret_ptr)));
LLVMBuildStore(g->builder, zero_ptr, awaiter_ret_ptr);
}
@ -3919,19 +3919,19 @@ static LLVMValueRef ir_render_call(CodeGen *g, IrExecutable *executable, IrInstr
if (instruction->is_async || callee_is_async) {
assert(frame_result_loc != nullptr);
LLVMValueRef fn_ptr_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, coro_fn_ptr_index, "");
LLVMValueRef fn_ptr_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, frame_fn_ptr_index, "");
LLVMValueRef bitcasted_fn_val = LLVMBuildBitCast(g->builder, fn_val,
LLVMGetElementType(LLVMTypeOf(fn_ptr_ptr)), "");
LLVMBuildStore(g->builder, bitcasted_fn_val, fn_ptr_ptr);
LLVMValueRef resume_index_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, coro_resume_index, "");
LLVMValueRef resume_index_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, frame_resume_index, "");
LLVMBuildStore(g->builder, zero, resume_index_ptr);
LLVMValueRef awaiter_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, coro_awaiter_index, "");
LLVMValueRef awaiter_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, frame_awaiter_index, "");
LLVMBuildStore(g->builder, awaiter_init_val, awaiter_ptr);
if (ret_has_bits) {
LLVMValueRef ret_ptr_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, coro_ret_start, "");
LLVMValueRef ret_ptr_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, frame_ret_start, "");
LLVMBuildStore(g->builder, ret_ptr, ret_ptr_ptr);
}
} else {
@ -4018,7 +4018,7 @@ static LLVMValueRef ir_render_call(CodeGen *g, IrExecutable *executable, IrInstr
if (result_loc != nullptr)
return get_handle_value(g, result_loc, src_return_type, ptr_result_type);
LLVMValueRef result_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, coro_ret_start + 2, "");
LLVMValueRef result_ptr = LLVMBuildStructGEP(g->builder, frame_result_loc, frame_ret_start + 2, "");
return LLVMBuildLoad(g->builder, result_ptr, "");
}
@ -5491,7 +5491,7 @@ static LLVMValueRef ir_render_cancel(CodeGen *g, IrExecutable *executable, IrIns
// supply null for the awaiter return pointer (no copy needed)
if (type_has_bits(result_type)) {
LLVMValueRef awaiter_ret_ptr_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, coro_ret_start + 1, "");
LLVMValueRef awaiter_ret_ptr_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, frame_ret_start + 1, "");
LLVMBuildStore(g->builder, LLVMConstNull(LLVMGetElementType(LLVMTypeOf(awaiter_ret_ptr_ptr))),
awaiter_ret_ptr_ptr);
}
@ -5506,7 +5506,7 @@ static LLVMValueRef ir_render_cancel(CodeGen *g, IrExecutable *executable, IrIns
LLVMValueRef awaiter_val = LLVMBuildPtrToInt(g->builder, g->cur_frame_ptr, usize_type_ref, "");
LLVMValueRef awaiter_ored_val = LLVMBuildOr(g->builder, awaiter_val, one, "");
LLVMValueRef awaiter_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, coro_awaiter_index, "");
LLVMValueRef awaiter_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, frame_awaiter_index, "");
LLVMValueRef prev_val = gen_maybe_atomic_op(g, LLVMAtomicRMWBinOpXchg, awaiter_ptr, awaiter_ored_val,
LLVMAtomicOrderingRelease);
@ -5549,7 +5549,7 @@ static LLVMValueRef ir_render_await(CodeGen *g, IrExecutable *executable, IrInst
LLVMValueRef result_loc = (instruction->result_loc == nullptr) ?
nullptr : ir_llvm_value(g, instruction->result_loc);
if (type_has_bits(result_type)) {
LLVMValueRef awaiter_ret_ptr_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, coro_ret_start + 1, "");
LLVMValueRef awaiter_ret_ptr_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, frame_ret_start + 1, "");
if (result_loc == nullptr) {
// no copy needed
LLVMBuildStore(g->builder, LLVMConstNull(LLVMGetElementType(LLVMTypeOf(awaiter_ret_ptr_ptr))),
@ -5570,7 +5570,7 @@ static LLVMValueRef ir_render_await(CodeGen *g, IrExecutable *executable, IrInst
// caller's own frame pointer
LLVMValueRef awaiter_init_val = LLVMBuildPtrToInt(g->builder, g->cur_frame_ptr, usize_type_ref, "");
LLVMValueRef awaiter_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, coro_awaiter_index, "");
LLVMValueRef awaiter_ptr = LLVMBuildStructGEP(g->builder, target_frame_ptr, frame_awaiter_index, "");
LLVMValueRef prev_val = LLVMBuildAtomicRMW(g->builder, LLVMAtomicRMWBinOpXchg, awaiter_ptr, awaiter_init_val,
LLVMAtomicOrderingRelease, g->is_single_threaded);
@ -5608,9 +5608,7 @@ static LLVMValueRef ir_render_await(CodeGen *g, IrExecutable *executable, IrInst
return nullptr;
}
static LLVMValueRef ir_render_coro_resume(CodeGen *g, IrExecutable *executable,
IrInstructionCoroResume *instruction)
{
static LLVMValueRef ir_render_resume(CodeGen *g, IrExecutable *executable, IrInstructionResume *instruction) {
LLVMValueRef frame = ir_llvm_value(g, instruction->frame);
ZigType *frame_type = instruction->frame->value.type;
assert(frame_type->id == ZigTypeIdAnyFrame);
@ -5921,8 +5919,8 @@ static LLVMValueRef ir_render_instruction(CodeGen *g, IrExecutable *executable,
return ir_render_suspend_begin(g, executable, (IrInstructionSuspendBegin *)instruction);
case IrInstructionIdSuspendFinish:
return ir_render_suspend_finish(g, executable, (IrInstructionSuspendFinish *)instruction);
case IrInstructionIdCoroResume:
return ir_render_coro_resume(g, executable, (IrInstructionCoroResume *)instruction);
case IrInstructionIdResume:
return ir_render_resume(g, executable, (IrInstructionResume *)instruction);
case IrInstructionIdFrameSizeGen:
return ir_render_frame_size(g, executable, (IrInstructionFrameSizeGen *)instruction);
case IrInstructionIdAwaitGen:
@ -6195,7 +6193,7 @@ static LLVMValueRef pack_const_int(CodeGen *g, LLVMTypeRef big_int_type_ref, Con
}
return val;
}
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
zig_panic("TODO bit pack an async function frame");
case ZigTypeIdAnyFrame:
zig_panic("TODO bit pack an anyframe");
@ -6727,7 +6725,7 @@ static LLVMValueRef gen_const_val(CodeGen *g, ConstExprValue *const_val, const c
case ZigTypeIdArgTuple:
case ZigTypeIdOpaque:
zig_unreachable();
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
zig_panic("TODO");
case ZigTypeIdAnyFrame:
zig_panic("TODO");
@ -7171,12 +7169,12 @@ static void do_code_gen(CodeGen *g) {
LLVMPositionBuilderAtEnd(g->builder, g->cur_preamble_llvm_block);
render_async_spills(g);
g->cur_async_awaiter_ptr = LLVMBuildStructGEP(g->builder, g->cur_frame_ptr, coro_awaiter_index, "");
LLVMValueRef resume_index_ptr = LLVMBuildStructGEP(g->builder, g->cur_frame_ptr, coro_resume_index, "");
g->cur_async_awaiter_ptr = LLVMBuildStructGEP(g->builder, g->cur_frame_ptr, frame_awaiter_index, "");
LLVMValueRef resume_index_ptr = LLVMBuildStructGEP(g->builder, g->cur_frame_ptr, frame_resume_index, "");
g->cur_async_resume_index_ptr = resume_index_ptr;
if (type_has_bits(fn_type_id->return_type)) {
LLVMValueRef cur_ret_ptr_ptr = LLVMBuildStructGEP(g->builder, g->cur_frame_ptr, coro_ret_start, "");
LLVMValueRef cur_ret_ptr_ptr = LLVMBuildStructGEP(g->builder, g->cur_frame_ptr, frame_ret_start, "");
g->cur_ret_ptr = LLVMBuildLoad(g->builder, cur_ret_ptr_ptr, "");
}
if (codegen_fn_has_err_ret_tracing_arg(g, fn_type_id->return_type)) {
@ -7190,7 +7188,7 @@ static void do_code_gen(CodeGen *g) {
trace_field_index_stack, "");
}
g->cur_async_prev_val_field_ptr = LLVMBuildStructGEP(g->builder, g->cur_frame_ptr,
coro_prev_val_index, "");
frame_prev_val_index, "");
LLVMValueRef resume_index = LLVMBuildLoad(g->builder, resume_index_ptr, "");
LLVMValueRef switch_instr = LLVMBuildSwitch(g->builder, resume_index, bad_resume_block, 4);
@ -9229,7 +9227,7 @@ static void prepend_c_type_to_decl_list(CodeGen *g, GenH *gen_h, ZigType *type_e
case ZigTypeIdArgTuple:
case ZigTypeIdErrorUnion:
case ZigTypeIdErrorSet:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
zig_unreachable();
case ZigTypeIdVoid:
@ -9414,7 +9412,7 @@ static void get_c_type(CodeGen *g, GenH *gen_h, ZigType *type_entry, Buf *out_bu
case ZigTypeIdUndefined:
case ZigTypeIdNull:
case ZigTypeIdArgTuple:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
zig_unreachable();
}
@ -9583,7 +9581,7 @@ static void gen_h_file(CodeGen *g) {
case ZigTypeIdOptional:
case ZigTypeIdFn:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
zig_unreachable();

View File

@ -321,7 +321,7 @@ static bool types_have_same_zig_comptime_repr(ZigType *a, ZigType *b) {
case ZigTypeIdFn:
case ZigTypeIdArgTuple:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
return false;
}
zig_unreachable();
@ -1058,8 +1058,8 @@ static constexpr IrInstructionId ir_instruction_id(IrInstructionAwaitGen *) {
return IrInstructionIdAwaitGen;
}
static constexpr IrInstructionId ir_instruction_id(IrInstructionCoroResume *) {
return IrInstructionIdCoroResume;
static constexpr IrInstructionId ir_instruction_id(IrInstructionResume *) {
return IrInstructionIdResume;
}
static constexpr IrInstructionId ir_instruction_id(IrInstructionTestCancelRequested *) {
@ -3321,10 +3321,8 @@ static IrInstruction *ir_build_await_gen(IrAnalyze *ira, IrInstruction *source_i
return &instruction->base;
}
static IrInstruction *ir_build_coro_resume(IrBuilder *irb, Scope *scope, AstNode *source_node,
IrInstruction *frame)
{
IrInstructionCoroResume *instruction = ir_build_instruction<IrInstructionCoroResume>(irb, scope, source_node);
static IrInstruction *ir_build_resume(IrBuilder *irb, Scope *scope, AstNode *source_node, IrInstruction *frame) {
IrInstructionResume *instruction = ir_build_instruction<IrInstructionResume>(irb, scope, source_node);
instruction->base.value.type = irb->codegen->builtin_types.entry_void;
instruction->frame = frame;
@ -7964,7 +7962,7 @@ static IrInstruction *ir_gen_resume(IrBuilder *irb, Scope *scope, AstNode *node)
if (target_inst == irb->codegen->invalid_instruction)
return irb->codegen->invalid_instruction;
return ir_build_coro_resume(irb, scope, node, target_inst);
return ir_build_resume(irb, scope, node, target_inst);
}
static IrInstruction *ir_gen_await_expr(IrBuilder *irb, Scope *scope, AstNode *node, LVal lval,
@ -12223,7 +12221,7 @@ static IrInstruction *ir_analyze_cast(IrAnalyze *ira, IrInstruction *source_inst
// *@Frame(func) to anyframe->T or anyframe
if (actual_type->id == ZigTypeIdPointer && actual_type->data.pointer.ptr_len == PtrLenSingle &&
actual_type->data.pointer.child_type->id == ZigTypeIdCoroFrame && wanted_type->id == ZigTypeIdAnyFrame)
actual_type->data.pointer.child_type->id == ZigTypeIdFnFrame && wanted_type->id == ZigTypeIdAnyFrame)
{
bool ok = true;
if (wanted_type->data.any_frame.result_type != nullptr) {
@ -13123,7 +13121,7 @@ static IrInstruction *ir_analyze_bin_op_cmp(IrAnalyze *ira, IrInstructionBinOp *
case ZigTypeIdNull:
case ZigTypeIdErrorUnion:
case ZigTypeIdUnion:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
operator_allowed = false;
break;
case ZigTypeIdOptional:
@ -14488,7 +14486,7 @@ static IrInstruction *ir_analyze_instruction_export(IrAnalyze *ira, IrInstructio
case ZigTypeIdBoundFn:
case ZigTypeIdArgTuple:
case ZigTypeIdOpaque:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
ir_add_error(ira, target,
buf_sprintf("invalid export target '%s'", buf_ptr(&type_value->name)));
@ -14514,7 +14512,7 @@ static IrInstruction *ir_analyze_instruction_export(IrAnalyze *ira, IrInstructio
case ZigTypeIdArgTuple:
case ZigTypeIdOpaque:
case ZigTypeIdEnumLiteral:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
ir_add_error(ira, target,
buf_sprintf("invalid export target type '%s'", buf_ptr(&target->value.type->name)));
@ -15060,7 +15058,7 @@ static IrInstruction *ir_analyze_async_call(IrAnalyze *ira, IrInstructionCallSrc
return ira->codegen->invalid_instruction;
}
ZigType *frame_type = get_coro_frame_type(ira->codegen, fn_entry);
ZigType *frame_type = get_fn_frame_type(ira->codegen, fn_entry);
IrInstruction *result_loc = ir_resolve_result(ira, &call_instruction->base, call_instruction->result_loc,
frame_type, nullptr, true, true, false);
if (result_loc != nullptr && (type_is_invalid(result_loc->value.type) || instr_is_unreachable(result_loc))) {
@ -16121,7 +16119,7 @@ static IrInstruction *ir_analyze_optional_type(IrAnalyze *ira, IrInstructionUnOp
case ZigTypeIdFn:
case ZigTypeIdBoundFn:
case ZigTypeIdArgTuple:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
return ir_const_type(ira, &un_op_instruction->base, get_optional_type(ira->codegen, type_entry));
@ -17910,7 +17908,7 @@ static IrInstruction *ir_analyze_instruction_slice_type(IrAnalyze *ira,
case ZigTypeIdFn:
case ZigTypeIdBoundFn:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
{
ResolveStatus needed_status = (align_bytes == 0) ?
@ -18026,7 +18024,7 @@ static IrInstruction *ir_analyze_instruction_array_type(IrAnalyze *ira,
case ZigTypeIdFn:
case ZigTypeIdBoundFn:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
{
if ((err = ensure_complete_type(ira->codegen, child_type)))
@ -18078,7 +18076,7 @@ static IrInstruction *ir_analyze_instruction_size_of(IrAnalyze *ira,
case ZigTypeIdUnion:
case ZigTypeIdFn:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
{
uint64_t size_in_bytes = type_size(ira->codegen, type_entry);
@ -18643,7 +18641,7 @@ static IrInstruction *ir_analyze_instruction_switch_target(IrAnalyze *ira,
case ZigTypeIdArgTuple:
case ZigTypeIdOpaque:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
ir_add_error(ira, &switch_target_instruction->base,
buf_sprintf("invalid switch target type '%s'", buf_ptr(&target_type->name)));
@ -20500,7 +20498,7 @@ static Error ir_make_type_info_value(IrAnalyze *ira, IrInstruction *source_instr
break;
}
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
zig_panic("TODO @typeInfo for async function frames");
}
@ -22219,7 +22217,7 @@ static IrInstruction *ir_analyze_instruction_frame_handle(IrAnalyze *ira, IrInst
ZigFn *fn = exec_fn_entry(ira->new_irb.exec);
ir_assert(fn != nullptr, &instruction->base);
ZigType *frame_type = get_coro_frame_type(ira->codegen, fn);
ZigType *frame_type = get_fn_frame_type(ira->codegen, fn);
ZigType *ptr_frame_type = get_pointer_to_type(ira->codegen, frame_type, false);
IrInstruction *result = ir_build_handle(&ira->new_irb, instruction->base.scope, instruction->base.source_node);
@ -22232,7 +22230,7 @@ static IrInstruction *ir_analyze_instruction_frame_type(IrAnalyze *ira, IrInstru
if (fn == nullptr)
return ira->codegen->invalid_instruction;
ZigType *ty = get_coro_frame_type(ira->codegen, fn);
ZigType *ty = get_fn_frame_type(ira->codegen, fn);
return ir_const_type(ira, &instruction->base, ty);
}
@ -22293,7 +22291,7 @@ static IrInstruction *ir_analyze_instruction_align_of(IrAnalyze *ira, IrInstruct
case ZigTypeIdUnion:
case ZigTypeIdFn:
case ZigTypeIdVector:
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
case ZigTypeIdAnyFrame:
{
uint64_t align_in_bytes = get_abi_alignment(ira->codegen, type_entry);
@ -23438,7 +23436,7 @@ static void buf_write_value_bytes(CodeGen *codegen, uint8_t *buf, ConstExprValue
zig_panic("TODO buf_write_value_bytes fn type");
case ZigTypeIdUnion:
zig_panic("TODO buf_write_value_bytes union type");
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
zig_panic("TODO buf_write_value_bytes async fn frame type");
case ZigTypeIdAnyFrame:
zig_panic("TODO buf_write_value_bytes anyframe type");
@ -23621,7 +23619,7 @@ static Error buf_read_value_bytes(IrAnalyze *ira, CodeGen *codegen, AstNode *sou
zig_panic("TODO buf_read_value_bytes fn type");
case ZigTypeIdUnion:
zig_panic("TODO buf_read_value_bytes union type");
case ZigTypeIdCoroFrame:
case ZigTypeIdFnFrame:
zig_panic("TODO buf_read_value_bytes async fn frame type");
case ZigTypeIdAnyFrame:
zig_panic("TODO buf_read_value_bytes anyframe type");
@ -24674,7 +24672,7 @@ static IrInstruction *analyze_frame_ptr_to_anyframe_T(IrAnalyze *ira, IrInstruct
IrInstruction *frame;
if (frame_ptr->value.type->id == ZigTypeIdPointer &&
frame_ptr->value.type->data.pointer.ptr_len == PtrLenSingle &&
frame_ptr->value.type->data.pointer.child_type->id == ZigTypeIdCoroFrame)
frame_ptr->value.type->data.pointer.child_type->id == ZigTypeIdFnFrame)
{
result_type = frame_ptr->value.type->data.pointer.child_type->data.frame.fn->type_entry->data.fn.fn_type_id.return_type;
frame = frame_ptr;
@ -24682,7 +24680,7 @@ static IrInstruction *analyze_frame_ptr_to_anyframe_T(IrAnalyze *ira, IrInstruct
frame = ir_get_deref(ira, source_instr, frame_ptr, nullptr);
if (frame->value.type->id == ZigTypeIdPointer &&
frame->value.type->data.pointer.ptr_len == PtrLenSingle &&
frame->value.type->data.pointer.child_type->id == ZigTypeIdCoroFrame)
frame->value.type->data.pointer.child_type->id == ZigTypeIdFnFrame)
{
result_type = frame->value.type->data.pointer.child_type->data.frame.fn->type_entry->data.fn.fn_type_id.return_type;
} else if (frame->value.type->id != ZigTypeIdAnyFrame ||
@ -24751,7 +24749,7 @@ static IrInstruction *ir_analyze_instruction_await(IrAnalyze *ira, IrInstruction
return ir_finish_anal(ira, result);
}
static IrInstruction *ir_analyze_instruction_coro_resume(IrAnalyze *ira, IrInstructionCoroResume *instruction) {
static IrInstruction *ir_analyze_instruction_resume(IrAnalyze *ira, IrInstructionResume *instruction) {
IrInstruction *frame_ptr = instruction->frame->child;
if (type_is_invalid(frame_ptr->value.type))
return ira->codegen->invalid_instruction;
@ -24759,7 +24757,7 @@ static IrInstruction *ir_analyze_instruction_coro_resume(IrAnalyze *ira, IrInstr
IrInstruction *frame;
if (frame_ptr->value.type->id == ZigTypeIdPointer &&
frame_ptr->value.type->data.pointer.ptr_len == PtrLenSingle &&
frame_ptr->value.type->data.pointer.child_type->id == ZigTypeIdCoroFrame)
frame_ptr->value.type->data.pointer.child_type->id == ZigTypeIdFnFrame)
{
frame = frame_ptr;
} else {
@ -24771,7 +24769,7 @@ static IrInstruction *ir_analyze_instruction_coro_resume(IrAnalyze *ira, IrInstr
if (type_is_invalid(casted_frame->value.type))
return ira->codegen->invalid_instruction;
return ir_build_coro_resume(&ira->new_irb, instruction->base.scope, instruction->base.source_node, casted_frame);
return ir_build_resume(&ira->new_irb, instruction->base.scope, instruction->base.source_node, casted_frame);
}
static IrInstruction *ir_analyze_instruction_test_cancel_requested(IrAnalyze *ira,
@ -25112,8 +25110,8 @@ static IrInstruction *ir_analyze_instruction_base(IrAnalyze *ira, IrInstruction
return ir_analyze_instruction_suspend_begin(ira, (IrInstructionSuspendBegin *)instruction);
case IrInstructionIdSuspendFinish:
return ir_analyze_instruction_suspend_finish(ira, (IrInstructionSuspendFinish *)instruction);
case IrInstructionIdCoroResume:
return ir_analyze_instruction_coro_resume(ira, (IrInstructionCoroResume *)instruction);
case IrInstructionIdResume:
return ir_analyze_instruction_resume(ira, (IrInstructionResume *)instruction);
case IrInstructionIdAwaitSrc:
return ir_analyze_instruction_await(ira, (IrInstructionAwaitSrc *)instruction);
case IrInstructionIdTestCancelRequested:
@ -25256,7 +25254,7 @@ bool ir_has_side_effects(IrInstruction *instruction) {
case IrInstructionIdResetResult:
case IrInstructionIdSuspendBegin:
case IrInstructionIdSuspendFinish:
case IrInstructionIdCoroResume:
case IrInstructionIdResume:
case IrInstructionIdAwaitSrc:
case IrInstructionIdAwaitGen:
case IrInstructionIdSpillBegin:

View File

@ -1528,10 +1528,9 @@ static void ir_print_suspend_finish(IrPrint *irp, IrInstructionSuspendFinish *in
fprintf(irp->f, "@suspendFinish()");
}
static void ir_print_coro_resume(IrPrint *irp, IrInstructionCoroResume *instruction) {
fprintf(irp->f, "@coroResume(");
static void ir_print_resume(IrPrint *irp, IrInstructionResume *instruction) {
fprintf(irp->f, "resume ");
ir_print_other_instruction(irp, instruction->frame);
fprintf(irp->f, ")");
}
static void ir_print_await_src(IrPrint *irp, IrInstructionAwaitSrc *instruction) {
@ -2039,8 +2038,8 @@ static void ir_print_instruction(IrPrint *irp, IrInstruction *instruction) {
case IrInstructionIdSuspendFinish:
ir_print_suspend_finish(irp, (IrInstructionSuspendFinish *)instruction);
break;
case IrInstructionIdCoroResume:
ir_print_coro_resume(irp, (IrInstructionCoroResume *)instruction);
case IrInstructionIdResume:
ir_print_resume(irp, (IrInstructionResume *)instruction);
break;
case IrInstructionIdAwaitSrc:
ir_print_await_src(irp, (IrInstructionAwaitSrc *)instruction);

View File

@ -42,7 +42,6 @@
#include <llvm/Support/TargetRegistry.h>
#include <llvm/Target/TargetMachine.h>
#include <llvm/Target/CodeGenCWrappers.h>
#include <llvm/Transforms/Coroutines.h>
#include <llvm/Transforms/IPO.h>
#include <llvm/Transforms/IPO/AlwaysInliner.h>
#include <llvm/Transforms/IPO/PassManagerBuilder.h>
@ -203,8 +202,6 @@ bool ZigLLVMTargetMachineEmitToFile(LLVMTargetMachineRef targ_machine_ref, LLVMM
PMBuilder->Inliner = createFunctionInliningPass(PMBuilder->OptLevel, PMBuilder->SizeLevel, false);
}
addCoroutinePassesToExtensionPoints(*PMBuilder);
// Set up the per-function pass manager.
legacy::FunctionPassManager FPM = legacy::FunctionPassManager(module);
auto tliwp = new(std::nothrow) TargetLibraryInfoWrapperPass(tlii);

View File

@ -799,7 +799,7 @@ pub const WatchEventId = enum {
// pub fn destroy(self: *Self) void {
// switch (builtin.os) {
// .macosx, .freebsd, .netbsd => {
// // TODO we need to cancel the coroutines before destroying the lock
// // TODO we need to cancel the frames before destroying the lock
// self.os_data.table_lock.deinit();
// var it = self.os_data.file_table.iterator();
// while (it.next()) |entry| {
@ -1088,7 +1088,7 @@ pub const WatchEventId = enum {
//
// while (true) {
// {
// // TODO only 1 beginOneEvent for the whole coroutine
// // TODO only 1 beginOneEvent for the whole function
// self.channel.loop.beginOneEvent();
// errdefer self.channel.loop.finishOneEvent();
// errdefer {
@ -1252,7 +1252,7 @@ pub const WatchEventId = enum {
const test_tmp_dir = "std_event_fs_test";
// TODO this test is disabled until the coroutine rewrite is finished.
// TODO this test is disabled until the async function rewrite is finished.
//test "write a file, watch it, write it again" {
// return error.SkipZigTest;
// const allocator = std.heap.direct_allocator;

View File

@ -6,7 +6,7 @@ const Lock = std.event.Lock;
const Loop = std.event.Loop;
/// This is a value that starts out unavailable, until resolve() is called
/// While it is unavailable, coroutines suspend when they try to get() it,
/// While it is unavailable, functions suspend when they try to get() it,
/// and then are resumed when resolve() is called.
/// At this point the value remains forever available, and another resolve() is not allowed.
pub fn Future(comptime T: type) type {

View File

@ -7,7 +7,7 @@ const testing = std.testing;
/// ReturnType must be `void` or `E!void`
pub fn Group(comptime ReturnType: type) type {
return struct {
coro_stack: Stack,
frame_stack: Stack,
alloc_stack: Stack,
lock: Lock,
@ -21,7 +21,7 @@ pub fn Group(comptime ReturnType: type) type {
pub fn init(loop: *Loop) Self {
return Self{
.coro_stack = Stack.init(),
.frame_stack = Stack.init(),
.alloc_stack = Stack.init(),
.lock = Lock.init(loop),
};
@ -29,7 +29,7 @@ pub fn Group(comptime ReturnType: type) type {
/// Cancel all the outstanding frames. Can be called even if wait was already called.
pub fn deinit(self: *Self) void {
while (self.coro_stack.pop()) |node| {
while (self.frame_stack.pop()) |node| {
cancel node.data;
}
while (self.alloc_stack.pop()) |node| {
@ -50,11 +50,11 @@ pub fn Group(comptime ReturnType: type) type {
/// Add a node to the group. Thread-safe. Cannot fail.
/// `node.data` should be the frame handle to add to the group.
/// The node's memory should be in the coroutine frame of
/// The node's memory should be in the function frame of
/// the handle that is in the node, or somewhere guaranteed to live
/// at least as long.
pub fn addNode(self: *Self, node: *Stack.Node) void {
self.coro_stack.push(node);
self.frame_stack.push(node);
}
/// Wait for all the calls and promises of the group to complete.
@ -64,7 +64,7 @@ pub fn Group(comptime ReturnType: type) type {
const held = self.lock.acquire();
defer held.release();
while (self.coro_stack.pop()) |node| {
while (self.frame_stack.pop()) |node| {
if (Error == void) {
await node.data;
} else {

View File

@ -6,7 +6,7 @@ const mem = std.mem;
const Loop = std.event.Loop;
/// Thread-safe async/await lock.
/// coroutines which are waiting for the lock are suspended, and
/// Functions which are waiting for the lock are suspended, and
/// are resumed when the lock is released, in order.
/// Allows only one actor to hold the lock.
pub const Lock = struct {
@ -96,8 +96,7 @@ pub const Lock = struct {
suspend {
self.queue.put(&my_tick_node);
// At this point, we are in the queue, so we might have already been resumed and this coroutine
// frame might be destroyed. For the rest of the suspend block we cannot access the coroutine frame.
// At this point, we are in the queue, so we might have already been resumed.
// We set this bit so that later we can rely on the fact, that if queue_empty_bit is 1, some actor
// will attempt to grab the lock.

View File

@ -3,7 +3,7 @@ const Lock = std.event.Lock;
const Loop = std.event.Loop;
/// Thread-safe async/await lock that protects one piece of data.
/// coroutines which are waiting for the lock are suspended, and
/// Functions which are waiting for the lock are suspended, and
/// are resumed when the lock is released, in order.
pub fn Locked(comptime T: type) type {
return struct {

View File

@ -118,7 +118,7 @@ pub const Loop = struct {
}
/// The allocator must be thread-safe because we use it for multiplexing
/// coroutines onto kernel threads.
/// async functions onto kernel threads.
/// After initialization, call run().
/// TODO copy elision / named return values so that the threads referencing *Loop
/// have the correct pointer value.

View File

@ -13,7 +13,7 @@ pub const Server = struct {
loop: *Loop,
sockfd: ?i32,
accept_coro: ?anyframe,
accept_frame: ?anyframe,
listen_address: std.net.Address,
waiting_for_emfile_node: PromiseNode,
@ -22,11 +22,11 @@ pub const Server = struct {
const PromiseNode = std.TailQueue(anyframe).Node;
pub fn init(loop: *Loop) Server {
// TODO can't initialize handler coroutine here because we need well defined copy elision
// TODO can't initialize handler here because we need well defined copy elision
return Server{
.loop = loop,
.sockfd = null,
.accept_coro = null,
.accept_frame = null,
.handleRequestFn = undefined,
.waiting_for_emfile_node = undefined,
.listen_address = undefined,
@ -53,10 +53,10 @@ pub const Server = struct {
try os.listen(sockfd, os.SOMAXCONN);
self.listen_address = std.net.Address.initPosix(try os.getsockname(sockfd));
self.accept_coro = async Server.handler(self);
errdefer cancel self.accept_coro.?;
self.accept_frame = async Server.handler(self);
errdefer cancel self.accept_frame.?;
self.listen_resume_node.handle = self.accept_coro.?;
self.listen_resume_node.handle = self.accept_frame.?;
try self.loop.linuxAddFd(sockfd, &self.listen_resume_node, os.EPOLLIN | os.EPOLLOUT | os.EPOLLET);
errdefer self.loop.removeFd(sockfd);
}
@ -71,7 +71,7 @@ pub const Server = struct {
}
pub fn deinit(self: *Server) void {
if (self.accept_coro) |accept_coro| cancel accept_coro;
if (self.accept_frame) |accept_frame| cancel accept_frame;
if (self.sockfd) |sockfd| os.close(sockfd);
}

View File

@ -6,7 +6,7 @@ const mem = std.mem;
const Loop = std.event.Loop;
/// Thread-safe async/await lock.
/// coroutines which are waiting for the lock are suspended, and
/// Functions which are waiting for the lock are suspended, and
/// are resumed when the lock is released, in order.
/// Many readers can hold the lock at the same time; however locking for writing is exclusive.
/// When a read lock is held, it will not be released until the reader queue is empty.
@ -107,8 +107,7 @@ pub const RwLock = struct {
self.reader_queue.put(&my_tick_node);
// At this point, we are in the reader_queue, so we might have already been resumed and this coroutine
// frame might be destroyed. For the rest of the suspend block we cannot access the coroutine frame.
// At this point, we are in the reader_queue, so we might have already been resumed.
// We set this bit so that later we can rely on the fact, that if reader_queue_empty_bit is 1,
// some actor will attempt to grab the lock.
@ -139,8 +138,7 @@ pub const RwLock = struct {
self.writer_queue.put(&my_tick_node);
// At this point, we are in the writer_queue, so we might have already been resumed and this coroutine
// frame might be destroyed. For the rest of the suspend block we cannot access the coroutine frame.
// At this point, we are in the writer_queue, so we might have already been resumed.
// We set this bit so that later we can rely on the fact, that if writer_queue_empty_bit is 1,
// some actor will attempt to grab the lock.

View File

@ -3,7 +3,7 @@ const RwLock = std.event.RwLock;
const Loop = std.event.Loop;
/// Thread-safe async/await RW lock that protects one piece of data.
/// coroutines which are waiting for the lock are suspended, and
/// Functions which are waiting for the lock are suspended, and
/// are resumed when the lock is released, in order.
pub fn RwLocked(comptime T: type) type {
return struct {

View File

@ -2103,7 +2103,7 @@ test "zig fmt: inline asm" {
);
}
test "zig fmt: coroutines" {
test "zig fmt: async functions" {
try testCanonical(
\\async fn simpleAsyncFn() void {
\\ const a = async a.b();
@ -2115,8 +2115,8 @@ test "zig fmt: coroutines" {
\\ await p;
\\}
\\
\\test "coroutine suspend, resume, cancel" {
\\ const p: anyframe = try async<std.debug.global_allocator> testAsyncSeq();
\\test "suspend, resume, cancel" {
\\ const p: anyframe = async testAsyncSeq();
\\ resume p;
\\ cancel p;
\\}