Concern about how add-on auto-review can hurt users' trust in Firefox


Most people don’t know that since mid-September, AMO has been using an automatic approval process for Firefox add-on review. Our program checks new add-ons for known problems and threats, and if none are found, the add-on is published without any manual step.

The benefit is speed. Some add-on authors have already found that the review process is incredibly quick: a new version can be published just a few minutes after upload.

However, automatic approval (which Chrome has used for many years) has long been considered harmful to ordinary users, because it opens a route for malware to get into users’ browsers. We can only stop and fix a malicious add-on after someone discovers and reports it, whereas the previous manual review process could prevent it in advance.

And this has ALREADY happened, in the very week after the automatic review process went live. Two weeks ago, we discovered several add-ons on AMO bundled with coin-mining code. After some research, we found seven add-ons containing the same code.

Although the editors have since banned them, the MALICIOUS ADD-ONs had already reached users’ computers. We don’t know how many users were affected, nor how many malicious add-ons are still out there in public.

Many people (including me) feel that AMO is no longer trustworthy because of this potential threat. Without a fully manual review, we have to assume that every add-on that appears on AMO could be dangerous.

The manual review process has always been considered one of the advantages of Firefox’s ecosystem over Chrome’s: reviewers not only check for malicious code, but also help add-on authors follow best practices (e.g., using known versions of libraries such as jQuery from the official site).

And because of the manual review process, we told ordinary users that they needed to be careful with add-ons installed from other sites, but that they could trust the ones listed on AMO.

With add-ons no longer manually checked (and the fact that dangerous code can go live there), we cannot trust any AMO add-ons now, not even the manually reviewed ones, because we simply cannot tell which ones have been checked and which have not.

Even worse, a formerly benign add-on may be updated with malicious code, auto-approved, and pushed into users’ browsers at any time by our auto-update mechanism.

In the past, people have regularly asked to buy ownership of my add-ons, wanting to profit by injecting ads into them. Guess what they will want to do now, once they discover the change in the review process?

Judging from the discussion on the AMO blog article, in the IRC add-ons channel, on Reddit, 4chan, and Telegram, and in the links above and below, many contributors, including volunteer reviewers, add-on authors, and users, raised concerns about safety and trust, both before and after the change.

They didn’t get answers before auto-approval was enabled. And then they saw the incidents happen. Beyond removing the add-ons that were reported, there is still no set of steps or strategies to prevent such incidents from happening again.

My question is: how can we still convince users that the Firefox ecosystem is TRUSTWORTHY, when even I no longer find it trustworthy?


I think extensions that have not yet been human-reviewed should be labeled specially (for example, with something like the old Preliminary badge) for the time being, until the new process has proven itself.


While we might be able to convince people that it’s trustworthy, we shouldn’t try to, because we already know that it’s not. Attempting to convince them only destroys the trust people have in us. The only appropriate way to handle this is to be up-front with people: tell them what the risks are.

Removing the required manual, human review of Firefox add-ons eliminates the only reason I had to continue using Firefox post-FF57. Mozilla has already chosen to change Firefox into a completely different browser. At least from the point of view of my user experience, it is completely different without legacy extensions and full themes. Personally, I’ve been calling FF57+ Smolderfox, as the fire is out.

I’ve stated in multiple places that my only reason to continue using Firefox was the trust gained by having manual, human review of all listed add-ons. This change completely destroys that trust. Using extensions in Smolderfox will now be just as dangerous as using them in Chrome (where numerous extensions violate your privacy and security, including existing extensions that are later changed to do so).


If I read the ghacks article right, we still have human review, but only after publishing. So it is not the same as Chrome’s review process, which seems to have no human review at all.

I would be interested in the relative (in)security of each process (since every implementation poses some risk, even full human review). Based on the threats recognized after publishing so far, how likely is it that an extension a user has downloaded is malicious under

  • manual review (as we had before)
  • semi-automated review (based on the rather short period so far)
  • fully automated review (as with Chrome)

Is there any way we can get this data? (maybe @jorgev knows?)

A few things I’d like to clarify here:

  1. Malicious add-ons have always been and will always be a risk, on AMO and other places. We have the blocklisting system for those add-ons.
  2. Coin miners are not considered malicious. Otherwise they would have been blocked. The reason they were on the site for a few days after being submitted and reported is that we were figuring out what our policy should be about them. They don’t quite fit previous situations and we want to make sure our policies are clear. They were taken down while we clarified the policy, and it looks like ultimately they won’t be allowed.
  3. Similarly, advertising or other forms of monetization aren’t considered malicious, though there are restrictions about disclosure and opt-in.
  4. The WebExtensions API limits the impact malicious add-ons can have on users, though they can still do some bad stuff.
  5. Chrome’s system isn’t fully automated. It’s actually fairly complex and sophisticated, and there’s a human element to it. We’re setting a higher bar, but we can’t guarantee total security (and never have).

I would like to know:

  1. Do we actually human-review all auto-approved add-ons after publishing, and how long is the window between those two actions?

  2. How can a user know whether an add-on has been human-reviewed?

  3. Is there any way to block auto-approved add-ons (before the human check) from getting into our Firefox, to reduce the risk?


Thanks for the clarifications on the “malicious” criteria and the Chrome review system!
As mentioned above, I think it could be very helpful to have some data on human vs. automated review, particularly since human review may sound much better but might not necessarily be so. Did Mozilla collect such data, or are there comparable analyses from other organizations?

Can you please update the review policy to cover this as well?

Scam and malicious add-ons keep appearing on AMO. If we cannot prevent them, someday this will become a huge PR problem for the “Firefox is more secure” message.


Good question.

I’d like to see an indicator of some sort.

Would it indicate the status of the latest version? If yes, there are going to be a lot of add-ons without the indicator for the majority of the time. If no, there are going to be a lot of add-ons with an indicator that no longer applies to the version that actually gets installed.

Either way, I’m not convinced it could be a reliably useful indicator. Or did you have an idea in mind that I’ve missed?



Back to the earlier quote,

How can a user know whether an add-on has been human-reviewed, …

If the current version has not been reviewed by a human, there should be an indication to that effect.

If the current version has been reviewed by a human, there should be an indication to that effect.


The code at appears to show that personally identifying information, and passwords, can be collected under the guise of anonymized browsing behavior data.

Was the review of Stylish 3.1.5 automated?


Under the old review system, add-ons that were reviewed by a human were marked to separate them from the ones that were not.

A new system was introduced in 2017 as per the following blog post:

Marking add-ons as human-reviewed would defeat the purpose of the new policy, as it would revert to the old system of reviewed vs. not-reviewed.

Checking the add-on’s reviews, developer, and code (or asking someone to check them) could be considered in cases where there are doubts.

Thanks, I’m aware of the blog post (linked from the opening post here).

Still, I do believe there’s great value in openness; please do show whether a review was, or was not, automated. It can be done, and it’s not a waste of space.

… defeats the purpose of the new policy … old system of reviewed vs not-reviewed. …

With respect, that seems exaggerated.

How long would it take to mark an extension as automatically reviewed? Surely that mark itself can be automated?
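To illustrate that the mark itself could be set automatically: a minimal sketch of a review pipeline tagging each approved version with its review type, so a listing page can render a badge with no extra reviewer effort. This is not AMO’s actual data model; all names here (`VersionRecord`, `ReviewType`, `listing_badge`) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewType(Enum):
    AUTO = "auto-approved"
    HUMAN = "human-reviewed"


@dataclass
class VersionRecord:
    addon_id: str
    version: str
    # New uploads start out auto-approved until a human looks at them.
    review_type: ReviewType = ReviewType.AUTO


def approve(record: VersionRecord, by_human: bool) -> VersionRecord:
    # The pipeline sets the flag at approval time; no manual bookkeeping needed.
    record.review_type = ReviewType.HUMAN if by_human else ReviewType.AUTO
    return record


def listing_badge(record: VersionRecord) -> str:
    # Label a listing page could show next to the version number.
    return record.review_type.value
```

Since the flag is per version, the badge would naturally answer the earlier question about whether it reflects the latest version: each version carries its own status.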

… risk factors that are calculated from the add-on’s codebase and other metadata. …

Was the review of Stylish 3.1.5 automated?

Will the combined circumstances of 3.1.3 and 3.1.5 cause the risk calculations to be improved?

Again with reference to the opening post,

Concern about how add-on auto-review can hurt users’ trust in Firefox

– the circumstances surrounding 3.1.3 caused some reduction of trust in the review process. The unknowns surrounding 3.1.5, coming so soon after 3.1.3, could erode that trust further.

The current AMO policy is not to mark them (the issue is not that marking them is difficult). :wink:

I was only stating the current policy. I am not in a position to discuss AMO policy; only an admin can do that.

To clarify what I said, previously we had:

  • Not-human-reviewed
  • Preliminary-human-reviewed (later removed from the review system)
  • Fully-human-reviewed

If human-reviewed add-ons were marked, then we would have:

  • Unmarked = Auto-reviewed = Not-human-reviewed
  • Marked = Human-reviewed = Fully-human-reviewed

The above is the same as the previous system of separating human-reviewed add-ons from those that weren’t, which is not the current policy of AMO.