How does SpiderMonkey deal with memory limitations?

Hi,

I am looking for the most suitable JIT JS engine to replace an existing JS interpreter.
Right now I am looking at V8 and SpiderMonkey. Specifically, I am interested in how SpiderMonkey handles memory limits. I want to control the memory allowed per context and to handle hitting the limit gracefully in my own code (no abort()), unlike V8, which does exactly that: https://chromium.googlesource.com/v8/v8.git/+/HEAD/include/v8-isolate.h#82. NearHeapLimitCallback() gives some help, but not enough for me.

What mechanisms are available in SpiderMonkey that I could use to handle memory limits gracefully in my code?

To clarify: I have a single-threaded C application with its own event loop and an unknown number of JS contexts running asynchronously. If a JS context exceeds its predefined memory limit, I want to destroy/close that context without disrupting my process.

It’s not completely clear-cut. SpiderMonkey does try very hard to handle out-of-memory situations and convert them to “uncatchable” exceptions (that is, it will return to your embedding code and you can handle the situation however you’d like). This is one of our more common selling points as an embedding engine. That said, there are limitations:

  1. There are still some situations where it will abort, if it is unable to maintain its internal state. Search for AutoEnterOOMUnsafeRegion::crash. There are a lot of calls, but many of them won’t apply to your embedding, and it is rare to hit one in practice.
  2. We often contemplate removing this behavior, since it costs developer overhead and isn’t that useful for Firefox (Gecko code aborts on OOM already, so there probably aren’t that many times that SM will be able to recover without Gecko just crashing out soon after anyway.) Note that large and user-controllable allocations will continue to be handled even if we remove most of the OOM handling from SM.
  3. SM does not support the threading model you describe. JS contexts are 1:1 with JS runtimes, which are 1:1 with OS threads. So you cannot have multiple JS contexts per thread. Though it depends on what you mean by “[multiple] JS contexts running asynchronously” – if you mean Promise-based asynchrony, that’s fine, since it uses a single runtime and therefore thread. An OOM encountered during a Promise callback would be caught and you could discard associated things manually.
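For embedders, the catchable/uncatchable distinction described above surfaces as a false return from an API entry point such as JS::Evaluate: if an exception is pending on the context, the failure was a catchable JS error; if not, it was an uncatchable condition such as OOM. A minimal sketch (the function name and error-handling policy are illustrative, not part of any SpiderMonkey API):

```cpp
#include <jsapi.h>
#include <js/CompilationAndEvaluation.h>
#include <js/SourceText.h>

// Illustrative sketch: evaluate a script and distinguish catchable JS
// errors from uncatchable ones (e.g. OOM). Assumes `cx` is a live
// JSContext with a current realm already entered.
bool RunScript(JSContext* cx, const char* code, size_t len) {
  JS::CompileOptions opts(cx);
  JS::SourceText<mozilla::Utf8Unit> src;
  if (!src.init(cx, code, len, JS::SourceOwnership::Borrowed)) {
    return false;  // initialization failed, likely OOM
  }
  JS::RootedValue rval(cx);
  if (!JS::Evaluate(cx, opts, src, &rval)) {
    if (JS_IsExceptionPending(cx)) {
      // Catchable JS error (e.g. a thrown exception): report/clear it
      // and keep using this context.
      JS_ClearPendingException(cx);
      return true;
    }
    // Uncatchable failure (OOM or interrupt): this is the point where
    // the embedding would tear down the affected realm or context.
    return false;
  }
  return true;
}
```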

The main setting available for controlling the memory limit is JSGC_MAX_BYTES (in Firefox, the preference javascript.options.mem.max). That only covers memory directly controlled by the GC, however; it does not include malloc memory owned by GC’ed objects. I think the only way to cap that is with OS mechanisms (ulimit or the Windows equivalent… Job Objects, maybe?). SM will, however, track its malloced memory and use it to trigger GCs.
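Setting JSGC_MAX_BYTES from embedding code goes through JS_SetGCParameter; a sketch, assuming a 64 MiB cap (the function name and limit are illustrative):

```cpp
#include <jsapi.h>

// Illustrative sketch: cap GC-controlled memory for this context at
// 64 MiB. Note this limits only the GC heap, not malloc memory owned
// by GC'ed objects.
void ConfigureHeapLimit(JSContext* cx) {
  JS_SetGCParameter(cx, JSGC_MAX_BYTES, 64 * 1024 * 1024);
}
```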

Hi Steve,

Thank you for the clarification, it helps.

Though it depends on what you mean by “[multiple] JS contexts running asynchronously”

Yes, I am confusing terminology here. What I need is an isolated environment with its own global object and prototypes. In SM parlance this corresponds to a realm, right?
What is the relationship between a JSContext and a realm from the memory and GC perspective?

Ideally, I want to have a separate realm for each incoming connection, and in principle there can be a lot of them (up to 50–100k req/sec), so realm creation should be cheap. I also want to be able to destroy/close a realm if something goes wrong, without affecting other connections.

If realm creation is not cheap, I may use a single realm for many connections, but I have to make sure it will not bloat over time.
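For reference, a realm in the embedding API comes into existence with a new global object created under its own JS::RealmOptions. A minimal sketch, where the bare-bones global class and function name are assumptions for illustration:

```cpp
#include <jsapi.h>
#include <js/GlobalObject.h>
#include <js/RealmOptions.h>

// Illustrative sketch: create an isolated realm (its own global object
// and prototypes) on an existing context, e.g. one per connection.
static JSClass SimpleGlobalClass = {
  "Global",
  JSCLASS_GLOBAL_FLAGS,
  &JS::DefaultGlobalClassOps,
};

JSObject* NewConnectionRealm(JSContext* cx) {
  JS::RealmOptions options;
  return JS_NewGlobalObject(cx, &SimpleGlobalClass, /* principals */ nullptr,
                            JS::FireOnNewGlobalHook, options);
}
```

Before running a connection’s script, the embedding would enter that realm, e.g. with `JSAutoRealm ar(cx, global);`.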

SM does not support the threading model you describe. JS contexts are 1:1 with JS runtimes, which are 1:1 with OS threads. So you cannot have multiple JS contexts per thread.

I also see that JS_NewContext() spawns two threads. Given that my app is not thread-safe, what are the implications? What are the threads for? If they are for GC, is it possible for me to run GC manually from my own thread?