Pulse needs duplicate rejection

While I can see the usefulness of the Pulse concept, the current implementation has some very rough edges.

First and foremost, Pulse needs duplicate rejection. That is, it needs to keep track somewhere (server side or client side) of which web pages a user has already rated, and stop asking the user five times a day about the same few sites:

“I see you found a nifty new web site called ‘Facebook’; how would you rate this web site? (fast? slow? functional? dysfunctional?)”
“I see you found a nifty new web site called ‘YouTube’; how would you rate this web site? (fast? slow? functional? dysfunctional?)”
“I see you found a nifty new web site called ‘Facebook’; how would you rate this web site? (fast? slow? functional? dysfunctional?)”
“I see you found a nifty new web site called ‘YouTube’; how would you rate this web site? (fast? slow? functional? dysfunctional?)”
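To be concrete, duplicate rejection could be as simple as remembering which sites the user has already rated. Here is a rough client-side sketch in TypeScript (every name in it is my own invention, not anything from the actual Pulse code; a server-side store would work just as well):

```typescript
// Hypothetical sketch of client-side duplicate rejection for Pulse.
// None of these names come from the real Pulse implementation.

const STORAGE_KEY = "pulse.ratedOrigins";

// Load the set of origins the user has already rated.
function loadRatedOrigins(): Set<string> {
  const raw = localStorage.getItem(STORAGE_KEY);
  return new Set(raw ? (JSON.parse(raw) as string[]) : []);
}

// Record an origin as rated so the user is never asked about it again.
function markRated(url: string): void {
  const rated = loadRatedOrigins();
  rated.add(new URL(url).origin);
  localStorage.setItem(STORAGE_KEY, JSON.stringify([...rated]));
}

// Only prompt for a rating if this origin hasn't been rated before.
function shouldPrompt(url: string): boolean {
  return !loadRatedOrigins().has(new URL(url).origin);
}
```

With something like this in place, rating Facebook once would mean never being asked about Facebook again, no matter how many times you visit it.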

Secondly, some web sites are already known to work well in all browsers by design, because they’re made by huge corporations with a vested interest in keeping their sites browser-independent. These include Google, YouTube, Facebook, and Yahoo. How many thousands of copies of “these web sites work fine” do you need?

I believe this is a variant of “survivorship bias”; see:
https://en.wikipedia.org/wiki/Survivorship_bias

Wouldn’t it be better to put those web sites on a built-in blacklist so that Pulse doesn’t ask for user input about them at all?

Concentrate instead on the sites which don’t necessarily work perfectly:
www.pardus.at
www.tinami.com
www.samharris.org
www.glidefitness.com
In other words, sites from smaller companies, with varied HTML coding styles, different versions of CSS, and assorted dynamic-content technologies. Those are the sites Pulse should be concentrating on, the ones you can actually learn something from.
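If it helps, here is the kind of check I have in mind, continuing the sketch above (again, the list contents and names are my own illustration, not anything Pulse actually ships):

```typescript
// Hypothetical built-in list of sites already known to work everywhere.
const KNOWN_GOOD_HOSTS = new Set([
  "www.google.com",
  "www.youtube.com",
  "www.facebook.com",
  "www.yahoo.com",
]);

// Only ask about sites Pulse can actually learn something from:
// anything not on the built-in known-good list.
function worthAsking(url: string): boolean {
  return !KNOWN_GOOD_HOSTS.has(new URL(url).hostname);
}

// worthAsking("https://www.pardus.at/")   -> true  (ask the user)
// worthAsking("https://www.youtube.com/") -> false (skip the prompt)
```

That way the thousandth “YouTube works fine” never gets collected in the first place, and the prompts go where the interesting data is.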