Help with an addon architecture with code injection

I need to develop an add-on that scrapes certain pages of our clients' websites, in the private area each user sees once logged in (always with the user's approval, informing them at all times of the data that will be obtained). Each of our clients has a different website, and the scraping code required is different for each client. Every month we gain new clients, so we have to develop the necessary scraping script for each new one. In addition, the information obtained by scraping will be sent by the add-on to our REST web service for storage in a database.

To avoid generating a new version of the add-on every time we develop the scraping script for a new client, I had thought of building a REST web service that the add-on would query on startup and that would return the URLs of all the clients, along with the scripts associated with each client. This way we would only release one version of the add-on: each scraping script would be stored in a database, returned to the add-on by the web service mentioned above, and injected by the add-on as a content script using the browser.tabs.executeScript() method. Once the scraping is done, the add-on would send the result to the corresponding web service.
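Roughly, this is the kind of background script I had in mind (the service URLs, the response format and the URL matching are placeholders I made up, not final code):

```js
// background.js – sketch only; assumes "tabs", "<all_urls>" and our own domain in permissions
const CONFIG_URL = "https://example.ourcompany.com/api/clients";
const RESULTS_URL = "https://example.ourcompany.com/api/results";

let clients = [];

// On startup, ask our REST service for every client's URL prefix and scraping script
async function loadClientConfig() {
  const response = await fetch(CONFIG_URL);
  clients = await response.json(); // e.g. [{ urlPrefix: "https://client1.example/", script: "..." }]
}
loadClientConfig();

// When a tab finishes loading one of the client pages, inject the matching script
browser.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete" || !tab.url) return;
  const client = clients.find(c => tab.url.startsWith(c.urlPrefix));
  if (!client) return;

  browser.tabs.executeScript(tabId, { code: client.script }).then(results => {
    // results[0] is whatever the injected script evaluates to (the scraped data)
    return fetch(RESULTS_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ url: tab.url, data: results[0] })
    });
  });
});
```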

After reading the add-on security requirements, I realized that with this architecture I would be injecting JavaScript code that is stored on an external system.

Would the architecture I have described be feasible? If not, what options do I have? This project is very important for my company, and I have to find the right approach.

Is there a problem with injecting JavaScript code that is stored on our external system? If so, would it help to also include, when we submit the add-on for review, the JavaScript code that would be stored and injected?

Is there a problem with sending the information obtained by scraping to our external server for storage?

I’m very worried about all this. Can someone help me?

Sending the data gathered from the sites to your server shouldn’t be a problem, as long as the user is aware of it and it’s stated in your privacy notice.

I think executing within the add-on (hence with some level of privilege) arbitrary code that has been downloaded from an external source (even from your own site) is dangerous and would cause the add-on to be rejected.

Instead of downloading executable JavaScript code from your server, would it be possible to download a JSON object that describes where to grab the data, such as a list of XPath expressions?
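For example, something along these lines, where the JSON your server returns is pure data and the extraction logic ships inside the add-on itself (the field names and expressions below are only an illustration):

```js
// Descriptor downloaded from your server – data only, no executable code
const descriptor = {
  urlPrefix: "https://client1.example/account",
  fields: {
    accountNumber: "//span[@id='account-number']",
    balance: "//td[contains(@class, 'balance')]"
  }
};

// Generic extraction function bundled in the add-on's content script
function extract(descriptor) {
  const result = {};
  for (const [name, xpath] of Object.entries(descriptor.fields)) {
    const node = document.evaluate(
      xpath, document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null
    ).singleNodeValue;
    result[name] = node ? node.textContent.trim() : null;
  }
  return result;
}
```

That way everything that executes is shipped with the add-on and can be reviewed; the server only supplies data.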

Thanks for your answer.

The problem is that, due to the complexity of some of the clients' web pages, it is not possible to define a standard method based on a JSON object that carries enough information to perform the scraping.

Assuming your clients install the add-on (previously signed on Mozilla’s servers) from your own server, could you make one version per client, with a different add-on ID? In your build process it would be relatively easy to “recompile” all the add-ons when you change the common code, or just one add-on flavour when the change is client-specific.
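For instance, a small build script could stamp a different ID and a client-specific content script into each manifest (the IDs and file names below are purely illustrative):

```js
// build.js – illustrative sketch: generate one add-on per client from a common template
const fs = require("fs");

const clients = [
  { id: "client1@ourcompany.example", name: "client1", matches: ["https://client1.example/*"], script: "scrape-client1.js" },
  { id: "client2@ourcompany.example", name: "client2", matches: ["https://client2.example/*"], script: "scrape-client2.js" }
];

const base = JSON.parse(fs.readFileSync("manifest.template.json", "utf8"));

for (const client of clients) {
  const manifest = {
    ...base,
    browser_specific_settings: { gecko: { id: client.id } },
    content_scripts: [{ matches: client.matches, js: ["common.js", client.script] }]
  };
  fs.mkdirSync(`build/${client.name}`, { recursive: true });
  fs.writeFileSync(`build/${client.name}/manifest.json`, JSON.stringify(manifest, null, 2));
  // ...then copy common.js and the client script, zip the folder and submit it for signing
}
```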

Michel, it is not possible to create a different add-on for each client. We currently have about 15 clients, and we will possibly have 50 more in the coming months.