>> Right, so first of all, this happens all the time online,
from obvious spam, where people are just posting links in comments,
to more subtle stuff, where they're posting fake reviews on Amazon and Yelp.
But I've actually got some colleagues at Cornell, including Jeff Hancock and
Claire, who've been studying: can you tell whether a review is fake or
not, based on the language that people are using?
And it turns out that you can.
And in fact, we'll get back to this.
In MovieLens, people could tell that we were playing with movie ratings too.
But the problem is that if you have this bad data in the system,
you get downstream effects, because it's a recommender system
that's using this data to calculate the recommendations.
So, if I can get bogus high ratings into the system,
then that item is going to tend to be shown to more people.
And it's going to be shown with a higher rating to those people,
which in turn will influence some of those people.
And so, you get this kind of vicious cycle of bogus data in your system.
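(A minimal sketch of that vicious cycle, purely for illustration and not from the interview: the exposure rule, the anchoring weight, and all of the numbers below are assumptions made up for the example, not MovieLens's actual algorithm or data.)

```python
# Toy simulation of the feedback loop: injected 5-star ratings raise an item's
# average, a simple "recommender" exposes high-average items to more users, and
# the displayed score nudges those users' ratings, reinforcing the inflation.
import random

random.seed(0)

TRUE_QUALITY = 3.0           # what honest users would rate the item, on average
BOGUS_RATINGS = [5.0] * 20   # injected shilling ratings (assumption)
ANCHOR_WEIGHT = 0.3          # how much the displayed score pulls a user's rating

def average(ratings):
    return sum(ratings) / len(ratings)

def run(bogus):
    # Start with 50 honest ratings, plus any bogus ones the attacker injected.
    ratings = [random.gauss(TRUE_QUALITY, 0.5) for _ in range(50)] + list(bogus)
    for week in range(5):
        shown_score = average(ratings)
        # Toy exposure rule: a higher average gets the item shown to more users.
        exposed_users = int(20 * shown_score / 5.0)
        for _ in range(exposed_users):
            honest = random.gauss(TRUE_QUALITY, 0.5)
            # Users anchor partly on the displayed score (the "influence" step).
            ratings.append((1 - ANCHOR_WEIGHT) * honest + ANCHOR_WEIGHT * shown_score)
        print(f"week {week}: shown score {shown_score:.2f}, "
              f"exposed to {exposed_users} users")
    return average(ratings)

print("--- no bogus ratings ---")
print(f"final average: {run([]):.2f}")
print("--- with 20 bogus 5-star ratings ---")
print(f"final average: {run(BOGUS_RATINGS):.2f}")
```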
So, is there any good news that came out of this study?
>> Well, [LAUGH] a little bit. So,
we actually wanted to see whether people noticed, right?
And so, what we did was we asked some people to do the task, but
didn't show them anything at all.
We asked some other people to do the task, and they were rating movies,
but what we showed them were the manipulated predictions, in the way we described.
And then we asked the two groups, in a survey at the end of the experiment,
how well they liked MovieLens, right?
Were the recommendations during the experiment useful?
And was MovieLens useful overall?
People who were in the condition where we were playing with the ratings
gave significantly lower ratings: they liked their recommendations less,
and they liked MovieLens less.
And so, I'd say the manipulation we did was pretty coarse, right?
We basically manipulated half of the movies for the experimental condition.
But it was just 40 movies, and it was only 10 minutes.
And somehow, we were able to make MovieLens look significantly worse.
Now, that's not good news.
But the good news is that people are reasonably sensitive to these things.
And it's likely that at least the companies themselves, right,
the companies running the recommender system, could also play around with the ratings,
recommending products that they wanted to get off of their shelves, or
recommending products with a higher profit margin.
And to some extent, people can detect these things, and
they'll react by liking you less and going and doing something else.
>> So, maybe the third or fourth time they start trying to sell you the extended
warranty, you stop buying the television set there.
Or maybe the other lesson for our class is if you're going to lie to people,
save the lie for when it's really worth it.
Don't waste-
>> [LAUGH] >> Your lies on small things.
>> Yes, can I tell you something else about this too though?
>> Absolutely.
>> This isn't just recommender systems, right?
I mean, if you think about Google, right, and search results,
they have an algorithm that's trying to pick the best things for us.
And then the interface shows us the top ten,
which then drives attention to those sites, which increases linking.