I'm in desperate need of tricking garbage collection into triggering

Hi,

I develop Ultrawidify, which is an extension that crops letterboxed videos to fit a 21:9 monitor properly. One of the features is automatic aspect ratio detection.

The way automatic detection works is by taking the video element, drawing the current frame to a canvas every X seconds with ctx.drawImage(), and then getting the pixels from the canvas through the magic of ctx.getImageData().

The main function looks roughly like this — I’ve omitted some less critical parts:

// sleep() is just a promise-wrapped setTimeout
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async main() {
  while (cond) {  // yes I know. setInterval exists.
    await checkFrame();
    await sleep(interval);
  }
}

And checkFrame() boils down to this:

async checkFrame() {
  this.context.drawImage(this.video, 0, 0, this.canvas.width, this.canvas.height);
  const imageData = this.context.getImageData(0, 0, this.canvas.width, this.canvas.height).data;   // <----- problem child.

  // do stuff with imageData, lots of stuff
  // check results
  // return nothing
}

As far as I can tell, that's pretty much by the book. My code has no memory leaks, and there's no way to improve my interactions with the canvas at all. Reusing existing references is not an option, because ctx.getImageData() will always return a new object. After checkFrame() finishes executing, imageData should eventually get garbage-collected, because nothing holds a reference to it anymore.

And that's what usually happens. If I open a YouTube video, the 'Memory' tab in devtools pegs memory usage for the page at 70-120 MB, which is reasonable. After a while, though, memory usage starts to rise (personal record: to the tune of 20+ GB, and no, I'm not kidding).

Today, I’ve only managed to get it up to 1 gig, but that’s still way too much: https://imgur.com/btcUlpO

If you take a look at the 'dominators' view, you start noticing funny stuff: there are tons of ArrayBuffer objects, each 921664 B big (screenshot: Imgur).

Which is coincidentally just about how big I expect imageData to be (4 bytes per pixel × 640 pixels wide × 360 pixels tall = 921,600 B, which is pretty much this).

Going to about:memory and clicking the 'GC' button brings the number back from multiple gigabytes to what it should be — notice the drops in the fourth and last snapshots (screenshot: Imgur).

Autodetection is the kind of feature I don't want to go back on, and it's mildly important to run it frequently enough in case the video keeps changing aspect ratio. Are there any lesser-known workarounds that would help me curb the RAM usage caused by shitty garbage collection?

Things I’ve tried so far

  • Googling. No results.

Things that I’m looking at

  • Web workers.

I've found this bit about transferable objects. If I understand this right, sending imageData to a worker as suggested there and killing the worker once it's done processing would serve as a kind of forced garbage collection. Or am I wrong on this one?
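
Here's a rough sketch of what I have in mind. Everything below is illustrative: the worker file name and the message shape are made up; only the transfer-list argument of postMessage() is the actual mechanism from the docs.

const worker = new Worker('aard-worker.js');  // file name is made up

async checkFrame() {
  this.context.drawImage(this.video, 0, 0, this.canvas.width, this.canvas.height);
  const imageData = this.context.getImageData(0, 0, this.canvas.width, this.canvas.height);

  // Passing the underlying ArrayBuffer in the transfer list moves it
  // instead of copying it; imageData.data becomes detached (unusable)
  // in this thread afterwards, which is exactly what I want.
  worker.postMessage(
    { width: imageData.width, height: imageData.height, buffer: imageData.data.buffer },
    [imageData.data.buffer]
  );
}

// aard-worker.js would then do something like:
// self.onmessage = (event) => {
//   const pixels = new Uint8ClampedArray(event.data.buffer);
//   // ... aspect ratio checks on pixels ...
//   // terminating the worker after this would drop its whole heap,
//   // which is the "forced garbage collection" idea from above.
// };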


Very interesting stuff!
Please keep us posted if you find a solution in the meantime (those workers sound like a very good idea).

I have a similar problem - when creating thumbnails, I'm resizing a lot of small images, and memory goes up really fast but comes down very slowly. It can easily go from 0.5 GB to 1.5 GB of memory.
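
For reference, my resize path is essentially this pattern (a simplified sketch; the actual names in my code are different):

async function makeThumbnail(image, width, height) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  canvas.getContext('2d').drawImage(image, 0, 0, width, height);
  // Every call allocates a fresh canvas backing store; with lots of
  // images, those buffers pile up until GC gets around to them.
  return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg', 0.8));
}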

As a general suggestion, avoid using async functions for performance sensitive number crunching. Firefox doesn’t JIT compile them.

It’s quite likely that moving
// do stuff with imageData, lots of stuff
to a new function without async would improve your performance. (This probably won’t help your GC behavior)
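
Something along these lines, reusing your names (processImageData is a placeholder I made up):

async checkFrame() {
  this.context.drawImage(this.video, 0, 0, this.canvas.width, this.canvas.height);
  const imageData = this.context.getImageData(0, 0, this.canvas.width, this.canvas.height).data;
  this.processImageData(imageData);  // plain synchronous call
}

// Regular (non-async) method, so the JIT can optimize the hot loop.
processImageData(data) {
  // do stuff with data, lots of stuff
}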

"As a general suggestion, avoid using async functions for performance sensitive number crunching. Firefox doesn't JIT compile them."

This is going to be useful once/if I move that stuff to a worker and don’t have to care about long-running functions blocking the rest of the page, so thanks for the tip.

In the meantime, here's my reasoning for using async:

If my understanding is correct, liberal usage of async/await — while not being too helpful for speed in raw number crunching — can positively affect overall performance, because it doesn't completely block other stuff on the page from executing. If you have a function that could run for a long time and don't use async/await, the page will stop responding for however long that function takes to execute. Using async/await instead (the // do stuff with imageData part is a lot of await some_function() calls) still allows other scripts on the page to execute between those calls.

Does it make sense, or did I misapply something I found while googling once?

Could you elaborate?
Is there some issue on bugzilla for this?

So if I understand this correctly, if I configure my transpiler to generate old code (without async / await), it will actually speed up my addon? I will try that.

It's not really that simple - you have to use an actual asynchronous API (like the Fetch API) or run your JavaScript on a different thread (using workers, or by sending the work to a background script) in order to be really non-blocking. Just making your function async won't make the execution asynchronous (parallel).
There is a really good video by Jake Archibald that explains async / await / Promise execution.

@31:00 onwards seems relevant to my await/async abuse for ghetto concurrency.

I was just about to say that I don't need proper multithreading, that abusing the event loop through async/await does the job just fine even if it's ghetto/pretend-concurrency, but then came the proverbial asterisk about microtasks and took care of that misconception.
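
To spell out what tripped me up, in code (helper and function names are mine):

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function crunchNumbers(chunks) {
  for (const chunk of chunks) {
    processChunk(chunk);  // stand-in for the CPU-heavy work

    // Microtask: this runs before the browser can handle input or paint,
    // so the page janks exactly as if there were no await at all.
    await Promise.resolve();

    // Task (via setTimeout): this lets the event loop handle events and
    // render between chunks - actual yielding, at some latency cost.
    await sleep(0);
  }
}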

Yikes, then. Guess I was lied to.

(BTW, thanks for the video)

You can easily find the related bugs on bugzilla. It goes pretty deep, since currently generators also cannot be optimized to the fastest possible tier.

Whether using transpiled ES3 code is faster than built-in async functions is up for experimentation, and probably really depends on the specific usage.
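
If you want to experiment, a crude micro-benchmark along these lines would do (function names and iteration counts are made up; measure your real workload instead):

function crunchSync(n) { let x = 0; for (let i = 0; i < n; i++) x += i; return x; }
async function crunchAsync(n) { let x = 0; for (let i = 0; i < n; i++) x += i; return x; }

async function bench() {
  let t0 = performance.now();
  for (let i = 0; i < 1000; i++) crunchSync(100000);
  console.log('sync: ', (performance.now() - t0).toFixed(1), 'ms');

  t0 = performance.now();
  for (let i = 0; i < 1000; i++) await crunchAsync(100000);
  console.log('async:', (performance.now() - t0).toFixed(1), 'ms');
}
bench();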

Thanks. I’ve read this new article about it here:
https://hacks.mozilla.org/2019/08/the-baseline-interpreter-a-faster-js-interpreter-in-firefox-70/
But I couldn't understand much… this is much more complex than I expected. :)

EDIT:
I've finished refactoring today so that all canvas operations are performed directly in the tab, not in the background script, but it seems it didn't help. Maybe it's because the addon is loaded as temporary…?

The memory wasn't released at all, not even after closing the tab, reloading the whole addon, and running GC. It went up from 900 MB to 2.9 GB during the process, then came down to 2.2 GB.

Then I recompiled my code to ES3, but it didn't help; the behavior was the same (plus part of my addon broke due to an unsupported API).

I've also tried it in Chrome; there it went from 800 MB to 1.5 GB during the process, and then back to 1.1 GB when the tab got closed. After a while it dropped to something below 1 GB.

So I would say there is really something wrong with canvas in Firefox.