It’s not completely clear-cut. SpiderMonkey does try very hard to handle out-of-memory (OOM) situations and convert them into “uncatchable” exceptions (as in, control returns to your calling embedding code and you can handle the situation however you’d like). It is one of our more common selling points as an embedding engine. That said, there are limitations:
- There are still some situations where it will abort if it is unable to maintain its internal state; search the source for AutoEnterOOMUnsafeRegion::crash. There are a lot of call sites, but many of them won’t apply to your embedding, and it is rare to hit one in practice.
- We often contemplate removing this behavior, since it imposes developer overhead and isn’t that useful for Firefox (Gecko code already aborts on OOM, so there probably aren’t many cases where SM would recover without Gecko crashing soon after anyway). Note that large and user-controllable allocations would continue to be handled even if we removed most of the OOM handling from SM.
- SM does not support the threading model you describe: JS contexts are 1:1 with JS runtimes, which are 1:1 with OS threads, so you cannot have multiple JS contexts per thread. That said, it depends on what you mean by “[multiple] JS contexts running asynchronously”. If you mean Promise-based asynchrony, that’s fine, since it all runs on a single runtime and therefore a single thread; an OOM encountered during a Promise callback would still be reported to the embedding, and you could discard the associated state manually.
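To make the threading constraint concrete, here is a minimal sketch of the one-context-per-thread model. It uses entry points from recent SpiderMonkey releases (JS_Init, JS_NewContext, JS::InitSelfHostedCode, JS_DestroyContext, JS_ShutDown); exact names and headers vary by version, so treat this as illustrative rather than a drop-in example.

```cpp
#include <thread>
#include <jsapi.h>              // core JSAPI (header layout varies by SM version)
#include <js/Initialization.h>  // JS_Init / JS_ShutDown

void workerThread() {
  // Each OS thread gets its own context (and therefore its own runtime).
  // Contexts are never shared across threads.
  JSContext* cx = JS_NewContext(JS::DefaultHeapMaxBytes);
  if (!cx) return;  // context creation itself can fail under memory pressure
  if (JS::InitSelfHostedCode(cx)) {
    // ... create a global, compile and run scripts, drive Promise jobs ...
    // Promise-based asynchrony all happens here, on this one thread.
  }
  JS_DestroyContext(cx);
}

int main() {
  JS_Init();  // process-wide initialization, exactly once

  // Two independent engine instances, one per thread.
  std::thread a(workerThread);
  std::thread b(workerThread);
  a.join();
  b.join();

  JS_ShutDown();
  return 0;
}
```

An OOM raised inside either worker stays confined to that thread’s context, which is what makes per-thread cleanup feasible.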
The main setting available for controlling the memory limit is JSGC_MAX_BYTES (in Firefox, the preference javascript.options.mem.max). That only covers memory directly controlled by the GC, however, and does not account for malloc’d memory that is owned by GC’ed objects. I think the only way to cap that is with OS mechanisms (ulimit or the Windows equivalent… Job Objects, maybe?). SM will, however, track its malloc’d memory and use it to trigger GCs.
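For reference, setting that limit from an embedding looks roughly like this. JS_SetGCParameter and JSGC_MAX_BYTES are real JSAPI names, but which header declares them differs across SpiderMonkey versions, so this is a sketch against a recent release:

```cpp
#include <jsapi.h>     // JS_SetGCParameter
#include <js/GCAPI.h>  // JSGCParamKey values (location varies by SM version)

void configureHeapLimit(JSContext* cx) {
  // Cap the GC-controlled heap at 64 MiB. Allocations that would push the
  // heap past this limit fail with OOM instead of growing the heap.
  // Note: this does NOT cover malloc'd memory merely owned by GC'ed
  // objects, only memory the GC allocates directly.
  JS_SetGCParameter(cx, JSGC_MAX_BYTES, 64 * 1024 * 1024);
}
```

You would typically call this right after creating the context, before running any script.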