One of the better ideas in there:
MAKE FACEBOOK AND GOOGLE RESPONSIBLE FOR CONTENT. Communications networks should be forced to make a choice between being regulated as a communications facility or as a publisher. Requiring this decision means modifying Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act so that large communications networks do not receive liability protection if they profit from advertising or targeted advertising. This modification would likely force Facebook and Google to change their business model. Amending Section 230 and Section 512 would also help restore a level playing field for publishers, who are legally responsible for the content they publish.
Force online platforms to “pick one” - either you get to sell ads on user-generated content and are responsible for what’s posted, or you keep your liability protection and can’t shove ads into user-generated content streams.
This, if actually enforced, has the potential to break some of the feedback loops currently responsible for a lot of the toxic effects of social media. Facebook, in particular, has no incentive to do more than the minimum required to deal with “fake news” (in the literal, “this was totally made up and is 100% false” sense), because that sort of content is very “engaging” - it shares well, goes viral, and leads to more people on their platform viewing ads. Sure, it’s literally made up, reinforces existing biases, etc, but… I mean, who cares about that? It sells ads!
YouTube is just as bad. From what I’ve heard, at least a few years back, their internal metrics were pretty straightforward: hours watched. That graph needs to go “up and to the right.” Their goal, at some point in time, was to beat out network television for viewer-eyeball-hours. And that decision - focusing on hours watched - has led to quite a bit of the human-toxic behavior of their algorithms.

Despite what they might claim, there’s no way the recommendation engine knows anything about what it’s recommending. The auto-generated closed captions are bad enough that the system is clearly nowhere near pulling semantic meaning out of content, so the engine is almost certainly just A/B testing various things: “Someone who watched video 1234 is highly likely to sit through video 5678, and that increases hours watched, so we’ll recommend that!” Great for increasing viewer hours, except that you end up recommending all sorts of conspiracy crap and other nonsense. Turn someone onto a few conspiracy theories and they’re likely to look into all sorts of others - and as long as that’s on YouTube, well, what a win! A more engaged viewer! Sure, you’re filling their heads with lies and nonsense, but as long as they’re watching…
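To make that feedback loop concrete, here’s a minimal sketch in Python of the kind of objective I’m describing: pick whatever maximizes expected additional watch time from co-watch statistics. The video IDs, numbers, and function names are all made up for illustration - this is the shape of the incentive, not anyone’s actual system.

```python
# Toy sketch of a watch-time-maximizing recommender (hypothetical data and names).
from collections import defaultdict

# co_watch[a][b] = average minutes a viewer watched video b after watching video a,
# from (imaginary) logged sessions. Note there's nothing here about what the
# videos actually *say* - only how long people kept watching.
co_watch = {
    "1234": {"5678": 22.0, "9012": 4.5},
    "5678": {"3456": 31.0, "9012": 6.0},
}

def recommend_next(history):
    """Pick the next video purely by expected additional watch time."""
    scores = defaultdict(float)
    for watched in history:
        for candidate, minutes in co_watch.get(watched, {}).items():
            if candidate not in history:
                scores[candidate] += minutes
    # Whatever keeps this viewer watching longest wins, whether it's a
    # cooking tutorial or a conspiracy series.
    return max(scores, key=scores.get) if scores else None

print(recommend_next(["1234"]))          # -> "5678"
print(recommend_next(["1234", "5678"]))  # -> "3456"
```

Nothing in that loop can tell truth from nonsense; it only knows what keeps people watching, which is exactly the problem.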
You’ll excuse me for not thinking this state of affairs is the result of the best set of ideas I’ve heard.
There’s obviously a long path between where we are and something more sane, but I’m just not happy with how these platforms currently behave.