The downfall of the "F" word, robot revolutions, coping with the internet, and more.
Hello! Here, have this entirely dramatic, menacing trap track called Apashe - Lacrimosa, which fairly well sums up just how dramatic my feelings about Facebook are (that's the "F" word, if you hadn't guessed from the content that follows).
Whoa. This story starts out fairly tame, but it goes into everything: political upheaval within the company, internal leaks, bipartisan outrage, meetings of the oligarchs (and shadow wars between them), the news industry versus Facebook, automated content-curation algorithms (I'd never do that to you), Trump's targeted political campaigns, Russian misinformation, FAKE NEWS, Zuckerberg's inner turmoil and naivete, conspiracy conveyor-belt thinking, government regulation, and Zuckerberg's (seemingly honest) attempts to take what Facebook has done seriously and reform the company's approach. This is a hell of a long read. If you love or hate Facebook (or megacorp drama), it's worth your time to check out.
"One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook." Also, hat tip for the "screenshots heard round the world" pun.
Referenced in the previous article, these are the words of Chamath Palihapitiya, former Facebook VP of User Growth (who left in 2011). He has some choice words, like "if you feed the beast, that beast will destroy you", and refers to Facebook as "that shit" which he doesn't use anymore. Sean Parker also pops in here as a conscientious objector to social media. And I'll use this opportunity to tack on my own thoughts:
Too little, too late! While it's heartwarming to hear about Zuckerberg finally accepting the call, I hope this is a sea change in how people view social media in general. I see more and more people peacing out of Facebook, at least for hiatuses and for their mental health. I can tell you, having thought about designing social media websites myself, that the design imperatives for success are completely manipulative and exploitative, engineered to get you addicted. Just read Hooked: How to Build Habit-Forming Products by Nir Eyal for a deep dive into how UX designers think about forming addictions--in you. (To be fair, Hooked does ask you to use your abilities for good, not evil, but how does one judge which habits an app should or shouldn't be creating?) I'm still trying to rebuild my relationship with Facebook (because I can't not use it for various activities) and to get off of Twitter. Of course, the problem is, ultimately, humans. More on that later.
Here's another angle on the issue of Facebook (the larger internet, really, but the internet practically is Facebook nowadays)--the fact that nothing ever disappears. Sure, you can hide things from Facebook or delete them, but things can be cached, data can be exported, information can be shared, screenshots can be taken. How do we handle the fact that we are forever, immediately, permanently one age in the MySpace archives, another on Xanga, another on Tumblr, and another on Facebook? We now have to live in a world where time doesn't exist because we can access all of it--and we need to act more like Vonnegut's interdimensional Tralfamadorian aliens.
Here's a really cool video documentary? coverage video? media? what do you even call content anymore--a video on the potential of humanoid robots, told through the robots Sophia and Erica. Extremely well-shot and devised to be deep and profound, this is a nice one to send to your friends who don't know nothin' about these things.
In other news, have you seen that door-opening robot? Probably. Have you seen it when a guy tries to yank it away from the door by its "tail", and also by using a hockey stick? Maybe. If not, here you go--it's... unnerving.
I like this one--what if criminals want to use AI? Won't they? Certainly! A quick tl;dr: automation of tasks that used to require human labor, like spear phishing, faking video and audio, and political manipulation and propaganda (hate to break it to you, but I'm 100% certain that state actors already do this at scale), plus entirely novel dangers like implanting bombs in cleaning robots, and generally just hacking stuff. There are five key recommendations from the paper, too: AI researchers being aware of how their work can be misused; policymakers learning from technical experts; the AI world learning from the cybersecurity world; ethical frameworks for AI; and everyone being involved in the discussions. No problem! I'm sure we'll tackle this maturely and in a sophisticated, measured manner. (I'm being snarky. But I do genuinely hope that.)