Challenge: redesign social media "voting" to reduce negative effects
-
I don't know if this qualifies as "philosophy", but since it isn't voting in the conventional sense, this is the best category I could find.
Today I saw this article, which discusses how the design of social media apps with "like" and "share" features has unintended side effects, such as encouraging the expression of outrage.
https://www.slashgear.com/yale-study-finds-social-media-likes-train-users-to-act-outraged-13686659/
Here is a quote, emphasis mine:
This analysis was joined by a study of participants in controlled experiments, ultimately finding that the “basic design of social media,” including its algorithms, teaches some users to express more outrage online. The researchers point out that outrage can be both good and bad, at times seeking justice for legitimate transgressions, but at other times being used to bully, spread fake news, and increase polarization among political groups.
Here is an older article on how social media polarizes us, with input from 11 supposed experts.
https://www.businessinsider.com/how-internet-social-media-fuel-polarization-america-facebook-twitter-youtube-2020-12

Say you were hired by Facebook or Twitter or YouTube to propose alternative designs that reduced or eliminated this effect. You are not asked to increase profits for the company; however, you'd like to do it in a way that doesn't simply drive people to other networks. You should attempt to create a solution that is reasonably incremental, that is, something that addresses the problem without completely changing the product(s).
How would you rework their user interface and backend formulas to accomplish this? Hopefully you can tap into your insights from voting theory.
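(For concreteness, the kind of "backend formula" I mean is an engagement-weighted ranking with time decay. Here is a minimal sketch; the constants loosely follow the oft-cited Hacker News "gravity" formula, and the assumption that likes and shares feed directly into rank is illustrative, not any company's actual algorithm.)

def baseline_rank(likes: int, shares: int, age_hours: float) -> float:
    """Illustrative engagement-weighted score with time decay."""
    engagement = likes + 2 * shares               # assumption: shares count double
    return engagement / (age_hours + 2) ** 1.8    # HN-style "gravity" decay

# The problem in a nutshell: outrage reliably generates engagement,
# so this formula promotes it exactly as readily as thoughtful content.
print(baseline_rank(likes=40, shares=5, age_hours=6.0))    # calm analysis
print(baseline_rank(likes=90, shares=30, age_hours=6.0))   # outrage bait ranks higher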
-
A starting place in analyzing this problem might be to state what the goals of the rating system are, or might be. Then one can turn one's attention to the question of whether any of those goals can be retained to any extent while removing the undesirable effect described. It might be quite a difficult problem.
-
A starting place in analyzing this problem might be to state what the goals of the rating system are, or might be.
Well, I think the goal of the rating system is to encourage quality content and to allow the system to promote that quality content.
Are you asking what "quality" is?
Obviously quality is subjective, but we can do better than just "what gets the most clicks" or "what gets the most likes." You can think about it in terms of "what content would a paid human moderator choose as high quality?" A human moderator would care less about pushing a particular agenda and more about the general "health" of the dialog. They would encourage diplomacy and nuance while discouraging inflammatory, divisive, trollish, or "click-bait" stuff.
Here is an article about a human moderator, "dang", at the Hacker News forum (one I frequent). https://www.newyorker.com/news/letter-from-silicon-valley/the-lonely-work-of-moderating-hacker-news
What I am asking is: how do you automate that job via a rating system (UI and backend algorithms)?
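To make the question concrete, here is one strawman in Python (purely hypothetical: the reaction categories, weights, and log-damping are my own illustration, not anything any platform actually uses). The UI change would be replacing the single "like" with a few distinct reactions; the backend change would be rewarding the signals a human moderator rewards while damping pile-ons.

from math import log1p

# Hypothetical reaction vocabulary replacing a generic "like".
WEIGHTS = {
    "insightful": 3.0,   # rewarded: nuance, diplomacy
    "agree":      1.0,
    "funny":      0.5,
    "angry":     -1.0,   # outrage reactions reduce promotion
}

def quality_score(reactions: dict) -> float:
    """Log-damped weighted sum: a pile-on of any single reaction,
    positive or negative, has diminishing influence."""
    return sum(WEIGHTS[r] * log1p(n) for r, n in reactions.items() if r in WEIGHTS)

print(quality_score({"insightful": 12, "agree": 30, "angry": 4}))   # ~9.5, promoted
print(quality_score({"agree": 15, "angry": 200, "funny": 3}))       # ~-1.8, demoted

Of course, down-weighting "angry" reactions risks suppressing the legitimate outrage that the Yale study says sometimes seeks justice for real transgressions, so the weights are exactly where the hard design questions live.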
It shouldn't be hard to visualize what it would be like to have a site that can figure out which content is "good" and therefore should be promoted or shown to more users.
As I said, pretend you are hired by the company to implement this (or at least to propose a solution). If you just ask "how do I decide what is good?", they aren't going to keep you around for long; they expect you to have some intuition about it. Right?