JS -> wasm vs JS -> native function call overhead?

Hi there,

I repost here the question I asked in the old mailing list (https://groups.google.com/g/mozilla.dev.tech.js-engine/c/1qg9sbp7BnA)

My question is: how does the overhead of calling a wasm function from JavaScript compare to the overhead of calling a native function (linked to the JS runtime with e.g. JS_DefineFunction in C++)? Are they equal, or is one of the two faster? I am using SM 78.

Thank you very much!



I think that’s a hard question to give a good answer to; ultimately it’s going to be hugely use-case dependent. Let’s say you’re considering wrapping an external library vs. compiling said library to wasm and using it that way.

Consider, for example, the JSNative signature used by JS_DefineFunction:

using JSNative = bool (*)(JSContext* cx, unsigned argc, JS::Value* vp);

So, if your library needs arguments, you'll need to extract the JS args and coerce them into the types your library uses. At this point we're into a philosophical question: is that argument manipulation 'call overhead' or not?
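To make that unpacking cost concrete, here's a minimal sketch of the shape such a wrapper takes. Note the `Value` type, `hypot_native`, and `hypot_wrapper` below are all hypothetical stand-ins, not the real SpiderMonkey API (a real JSNative works through `JS::CallArgs` and the engine's coercion routines); the sketch only isolates the per-call unpack/coerce/box work being discussed:

```cpp
#include <cassert>
#include <cmath>
#include <string>
#include <variant>

// Simplified stand-in for an engine value type -- NOT the real JS::Value,
// just enough structure to make the per-call unpacking visible.
using Value = std::variant<double, bool, std::string>;

// The library function being wrapped; it knows nothing about JS values.
static double hypot_native(double x, double y) { return std::hypot(x, y); }

// A wrapper shaped like a JSNative: before the real work can start it must
// check arity, unpack each argument, and coerce it to the type the library
// expects; the result is boxed back into a Value on the way out.
static bool hypot_wrapper(unsigned argc, Value* vp, Value* rval) {
    if (argc < 2) return false;                     // arity check
    const double* x = std::get_if<double>(&vp[0]);  // unpack arg 0
    const double* y = std::get_if<double>(&vp[1]);  // unpack arg 1
    if (!x || !y) return false;  // a real engine would run ToNumber here
    *rval = hypot_native(*x, *y);                   // box the result
    return true;
}
```

Everything around the `hypot_native` call is work the engine and glue code do per call, and whether you count it as 'call overhead' is exactly the philosophical question above.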

Really, I suspect the only good answer here is: your mileage will almost certainly vary based on application.

There are other considerations that might push you in one direction or another, but if 'how many calls can I issue per second' is the defining one, the only answer I have is: you're going to have to benchmark.
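If it helps, a per-call benchmark harness can be as small as the sketch below. `call_under_test` is a hypothetical placeholder; in a real measurement you'd invoke the JSNative or the wasm export from your embedding instead, and run enough warmup iterations for the JIT to settle:

```cpp
#include <chrono>

// Hypothetical call under test -- substitute the native wrapper or the
// JS->wasm call you actually care about. The volatile sink keeps the
// compiler from optimizing the loop away.
static volatile double sink;
static double call_under_test(double x) { return x * 2.0; }

// Time n back-to-back calls and report nanoseconds per call. Keeping the
// body trivial means the measurement is dominated by call overhead.
static double ns_per_call(long n) {
    auto t0 = std::chrono::steady_clock::now();
    for (long i = 0; i < n; ++i)
        sink = call_under_test(static_cast<double>(i));
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / n;
}
```

Run both variants through the same harness on your real workload shape; the per-call difference is usually small enough that loop and measurement noise matter, so use large n and repeat runs.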


Lin Clark wrote an excellent post about function call overhead in wasm.

In general, JIT-to-JIT and C++-to-C++ calls have lower overhead than calls that cross between the two worlds. If literally all you care about is call overhead, my money would be on wasm being faster. That said, Matt is absolutely correct: your overall performance will depend on what the function you're calling is doing. If it's perf-critical, you should benchmark it.


Thank you for your answers!
