On Mon, 4 May 2026 at 16:30, Henrik Grubbström (Lysator) @ Pike (-)
developers forum <10353(a)lyskom.lysator.liu.se> wrote:
>
> > Having a large blob of data returned from an await() causes all
> > subsequent awaits to slow down. Example:
> >
> > __async__ void reconnect() {
> > mapping commands = await(load_commands()); //Returns a big mapping
> > //werror("This should be a one: %O\n",
> > await(Concurrent.resolve(1))); //Returns just an integer
> > foreach (commands; string uuid; mapping data) do_stuff_with(uuid, data);
> > }
>
> Wow! Somebody (other than me) is already using __async__ and await()!
I sure am! I love it and use it in all my projects now, pretty much.
My main Twitch bot (Mustard Mine) switched over to that back when they
were "continue functions" and without a moment's hesitation I moved to
proper async/await.
> Concurrent.Promise et al should be agnostic with respect to the size
> of data that is passed. As you have noted though, due to it saving
> the backtrace at time of creation, the size of the backtrace will
> affect timing.
>
> > If do_stuff_with() involves multiple promises/awaits, each one has a
> > notional backtrace involving the entire large array. The inclusion of
> > the dummy resolve(1) changes this so that the earlier call in the
> > backtrace is much smaller.
>
> Try disabling the orig_backtrace handling (eg initialize it to ""
> instead of the backtrace) in Concurrent.pmod, and see if it solves
> your issue.
Yes, hacking that out does prevent the problem.
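For anyone following along, the hack I tested was roughly this (the
field name comes from Henrik's description; the actual code in
Concurrent.pmod may well differ between Pike versions):

```pike
// Sketch of the change inside Concurrent.pmod (names assumed).
// Original behaviour: capture and stringify the backtrace at Promise
// creation, which is what gets slow when large values sit in the
// calling frames:
//   protected string orig_backtrace = describe_backtrace(backtrace());
// Hacked-out version: skip the capture entirely.
protected string orig_backtrace = "";
```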
> > I'm not sure what changed recently, but when I upgraded Pike, this
> > started to be slow. However, Concurrent.pmod doesn't seem to have
> > changed recently, so I'm not sure what the cause is. (It's also
> > possible an unrelated change of my own triggered this.)
>
> One easy way to reduce the backtrace size is to wrap the large value
> in an object when passing it as an argument.
I could do that. I'd have to figure out how much of a nuisance that'd
be API-wise, since I don't know how much of a problem it's going to
be. Notably, this is only really a problem when there are a *lot* of
promise backtraces being generated (the code I posted above is
simplified, but there are several layers of iteration and calls - this
is loading roughly a hundred channels' worth of data, then, for each
channel, going and setting things up).
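If it does turn out to be worth it, the wrapping itself could be
fairly lightweight - something like this, where CommandSet is a
hypothetical name, not anything in my actual codebase:

```pike
// Hypothetical wrapper: a backtrace renders an object as a short
// reference rather than dumping the entire mapping it contains.
class CommandSet(mapping commands) { }

__async__ void reconnect() {
    CommandSet cs = CommandSet(await(load_commands()));
    foreach (cs->commands; string uuid; mapping data)
        do_stuff_with(uuid, data);
}
```

The nuisance is that every consumer then has to go through
cs->commands instead of using the mapping directly.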
> > Promises currently seem to retain a text backtrace. I'm not sure if
> > that's relevant or not, as it has always been there, but it may be
> > connected.
>
> They have been there for quite a while, albeit they were not enabled
> by default initially. They were enabled by default as they were so
> useful when debugging problematic Promises.
>
Ah, interesting. I thought that change happened years ago though, so I
didn't think that was the cause.
Could the Promise retain the backtrace as an array of frames, rather
than describing it immediately and retaining the string? That would
also be fast (since most of the time it doesn't need to display
anything) while still producing equivalent output. Or would that
result in a reference loop?
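In other words, something along these lines (describe_origin is a
made-up name for wherever the string currently gets used):

```pike
// Sketch: keep the raw frames and only stringify on demand.
protected array orig_backtrace = backtrace();

protected string describe_origin()
{
    return orig_backtrace ? describe_backtrace(orig_backtrace) : "";
}
```

The catch being that the frame array keeps references to everything in
those frames - potentially including the Promise itself - which is
where the reference-loop worry comes from.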
ChrisA