Hi @odebroqueville2. I’m curious how you would see the above working if you did have the ability to flag negative ratings that didn’t include a comment?
With no comment included, how would we distinguish between a one-star or two-star rating that was a result of a bug vs a one-star or two-star rating that was a result of the user simply not liking the extension? Or would the expectation be that we would just automatically delete any low ratings flagged by the developer that did not also include a comment?
Thanks – Ed
Hi Ed, there has to be a reason for not liking an extension and giving it a poor rating. Users need to act responsibly and respectfully. Knowing why an extension isn't appreciated is what helps devs improve their extensions where possible. I have no issue with fair and just reviews; I think we should have the chance to make things better if we can. Judging from the issues raised on my GitHub, in the vast majority of cases the issues or feature requests were well founded, deserved, and got my attention. But what am I to do with a 1-star or 2-star rating with no explanation? There's zero room for improvement!

Most of the time I got poor ratings because something was broken or a feature was missing, and most often bugs got fixed and new features were added. There was just one case where Mozilla made a change to the way storage.sync worked and that permanently broke things (I couldn't find a fix). There was another case where users complained about not being able to import their search engines from Firefox. All I could do was explain that Mozilla hadn't provided an API to read the query URL for search engines stored in Firefox. Another dev got around this problem with a hack that I didn't find elegant. At least I got a chance to explain my view!

Finally, there are some users who'll just give a poor rating due to security concerns or license terms. I admit I can't do much about security concerns because I'm no expert in that field. As for license terms, my extension was first published with an open source license. Then I noticed that new features I implemented were getting copied by another dev (admittedly a lot more talented than I am) because some users requested those features on their GitHub page. Seeing this, I decided to change the license terms and make the code private, and got poor ratings as a result.

In conclusion, I believe most devs provide extensions on a best-efforts basis, and we have our reasons for choosing whatever license terms we choose. I don't think we should be penalised for that. I would really have liked Mozilla to allow us to monetize our extensions, perhaps keeping a 30% cut to review the code and guarantee users security and privacy. I find that perfectly acceptable and a win-win-win (Mozilla-devs-users) for everyone.
Maybe, before a user posts a rating or review, they could be asked whether they want to report a bug or request a feature instead, and be taken to GitHub issues for that.
Hi @odebroqueville2.
I completely understand your point, you have explained it very clearly.
I’d like to go back to my previous question if possible, “how you would see the above working if you did have the ability to flag negative ratings that didn’t include a comment?”
I would genuinely like to hear your thoughts on this. Or are you of the mind that we shouldn’t even allow a negative rating without additional context? What would be an acceptable solution to this issue in your eyes?
I think ratings and reviews should be useful to other users and to devs. A negative rating without any context just tells other users that someone had a negative sentiment about the addon, for whatever reason; it's not informative at all for the dev. If the total number of ratings or reviews is large, that rating won't have a large impact on the total score, but if the addon has few ratings or reviews, the impact can be meaningful: a single 1-star rating among 500 five-star ratings lowers the average by less than 0.01, while among five it lowers it by two-thirds of a star. For these reasons, I think negative ratings should always be explained. I don't have the same view of positive ratings, because I don't see them deterring users from trying an addon. If ratings could be automated (e.g. by some sort of bot), the overall rating could be misleading, and in that case I'd argue that context should always be provided.
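To put rough numbers on that, here is a minimal sketch (plain Python; the rating counts are made up for illustration) of how much a single 1-star rating moves an otherwise perfect average:

```python
# Hypothetical illustration: effect of one 1-star rating on a 5.0 average.
def average_after_one_star(n_fives: int) -> float:
    """Average after adding a single 1-star to n_fives five-star ratings."""
    return (5 * n_fives + 1) / (n_fives + 1)

for n in (5, 50, 500):
    print(f"{n} five-star ratings + one 1-star -> {average_after_one_star(n):.2f}")
# 5 ratings   -> 4.33 (a two-thirds-star drop)
# 50 ratings  -> 4.92
# 500 ratings -> 4.99 (barely moves)
```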
We could also ask ourselves what information is useful in ratings and reviews when evaluating an addon. Personally, I can list a few things I'd want to know if I were going to use an addon:
- is it safe to use? (respect for privacy and security)
- is it robust/stable, i.e. how often do bugs arise?
- does it perform well? is it responsive?
- is it likely to cover my needs? does it have a lot of features?
- is it easy to use? is it well documented?
- is it actively being developed/updated?
- does the dev offer good support if help is needed?
Does a total score, where each rating is equally weighted, faithfully reflect the value of an addon? If the addon is several years old, shouldn’t less weight be assigned to the ratings of the older versions? Should ratings without context have the same weight as ratings providing more context?
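As a thought experiment, a score along those lines might look something like the sketch below. The half-life and the with/without-comment weights are invented values for illustration, not anything AMO actually does:

```python
from dataclasses import dataclass

HALF_LIFE_DAYS = 365   # assumption: a rating's weight halves every year
COMMENT_WEIGHT = 1.0   # assumption: full weight for a rating with a comment
BARE_WEIGHT = 0.5      # assumption: a bare rating counts half

@dataclass
class Rating:
    stars: int         # 1..5
    age_days: float    # how old the rating is
    has_comment: bool

def weighted_score(ratings: list[Rating]) -> float:
    """Time-decayed, context-weighted average of star ratings."""
    total = weight_sum = 0.0
    for r in ratings:
        w = COMMENT_WEIGHT if r.has_comment else BARE_WEIGHT
        w *= 0.5 ** (r.age_days / HALF_LIFE_DAYS)  # exponential time decay
        total += w * r.stars
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# A recent 5-star review outweighs an old, comment-less 1-star rating.
ratings = [Rating(5, 30, True), Rating(1, 900, False), Rating(4, 200, True)]
print(f"weighted score: {weighted_score(ratings):.2f}")
```

Under a scheme like this, an old, unexplained 1-star barely dents the score, which matches the intuition above.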
Each of the above criteria could have its own rating.
For devs, I believe the two most important questions are:
- is something broken? (some of us can't test on every platform)
- how can I make the addon better? is it missing an essential feature?
Thank you for your thoughtful reply, @odebroqueville2.
I can’t promise anything, but I have funneled your feedback to the appropriate team.
Thanks – Ed
You’re welcome. Hope it helps.