Unethical AI, failing neural networks and failing science, the improbability of the universe, free will in a nutshell.
You want a song? Of course you want a song. Who doesn't want a song? Here's a song. This is Hot Sugar - The Girl Who Stole My Tamagotchi, a more chilled-out, spacious and minimalistic MIDI-piano-and-drums driven track with a couple other gorgeous flavors thrown in there. Muah! Tasty! Enjoy.
Kate Crawford and Meredith Whittaker published a report that asserts that we've done an awful job trying to hold AI to ethical standards. As they say, "user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles", which is completely true, even when you're not taking into account the possibility of killer AI robots. The real killer AI is much more mundane yet tragically insidious: a neural network model designed for actuarial work decides that you don't get to have life insurance because of X/Y/Z statistically modeled reasons. Or, a medical diagnosis algorithm misdiagnoses you because it was biased against specific indicators in your racial makeup. Of course, though, there's also the extraordinarily mundane reason that your private data was leaked because all the investor money went into AI design and marketing and absolutely none into security.
Long before hard, truly conscious (or indistinguishably complex) AI kills us, we're more likely to kill ourselves with a very simple neural miscalculation applied to some sort of fundamental utilities pipeline, or some sort of top secret U.S. superweapon. It's just a larger version of the phenomenon of auto turrets that can automatically target and kill people, which we've had since 2012. Thank you to Daniel Temkin for sharing this one with me!
With that said, let's talk about fooling image recognition AIs into thinking that a turtle is a rifle, or a cat's face is guacamole. The potential crazy cyberpunk reality is that we could easily mass-produce these objects for terroristic uses. Like turtles that get recognized as rifles. Or hell, rifles that get recognized as turtles.
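The underlying trick is less magical than it sounds: you nudge each input feature a tiny bit in whatever direction increases the classifier's loss, and the prediction flips while the object still looks the same to you. Here's a toy, hedged sketch of the fast gradient sign method (the classic adversarial-example recipe; not anything from the article itself). The four-feature logistic "classifier," its weights, the input, and the comically large epsilon are all invented for the demo; real attacks on image networks use far smaller perturbations spread over far more pixels:

```python
import numpy as np

def predict(w, b, x):
    # Logistic classifier: probability that input x is class 1.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(w, b, x, y, eps):
    # Fast gradient sign method: step each feature by eps in the
    # direction that increases the logistic loss. For this model,
    # d(loss)/dx = (p - y) * w.
    p = predict(w, b, x)
    grad = (p - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Made-up "trained" weights and an input the model classifies confidently.
w = np.array([2.0, -1.5, 1.0, -0.5])
b = 0.0
x = np.array([0.9, 0.1, 0.8, 0.2])

adv = fgsm(w, b, x, y=1.0, eps=0.5)  # eps is huge here; it's a 4-feature toy
print(predict(w, b, x))    # confidently class 1 (> 0.5)
print(predict(w, b, adv))  # pushed below 0.5 by a bounded nudge
```

The turtle-rifle work goes further (the perturbation survives 3D printing and camera angles), but the core move is this same gradient nudge.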
What do you do when science doesn't do the good good anymore? Or specifically, when your tiny human puny person brain can no longer contain all the big data logic idea informations that science gives you? Well, you turn to more science, and produce automated systems that can do it for you. You know, maybe. (Part of me wonders if we didn't throw the baby out with the bathwater in the 70s.) I had a minor science crisis when I found out that so many scientific studies are unreproducible, calling into question the entire foundation of, well, science. I mean, just look at what happened with Big Sugar paying scientists to point the blame at fat. I have to make my own decisions about what seems good for my body? Bullshit!
No, seriously, though. You ever stop to think how weird it is that we don't know what's good for our own bodies? Doesn't it seem like that should be something instinctual, passed down through the generations? Didn't somebody know food? I refuse to believe that we as a human race have been clueless about this forever, and we've only recently figured it out. Oops, sorry. I'm getting worked up about things that may never be answered. (I'm not anxious about knowledge lost to time! You're anxious about knowledge lost to time!)
ahahahahhhkgjfjkfklhjHgkjgerk
er, what I'm trying to say, is that that's a great title. Here's the idea: the "standard model" predicts that the Big Bang should have created equal amounts of matter and antimatter, but antimatter plus matter equals annihilation and nothingness. (Work out your issues, matter and antimatter. Geez.) But that... didn't happen. So. Huh? Man, science is just failing me this week. Why you do this, science? Wait, I'm sorry, science! Don't take away my phone. Slave labor paid for this phone; don't let that go to waste. (Jesus. That's dark. Sorry. Reel it in, Way. Reel it in...)
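For scale (my aside, not the article's): physicists quantify the leftover as the baryon-to-photon ratio, and the usually quoted value is tiny but stubbornly nonzero:

```latex
% A baryon meeting an antibaryon annihilates into photons:
%   b + \bar{b} \to \gamma\gamma
% If the Big Bang had produced exactly equal amounts, essentially nothing
% would have survived. Instead, the observed asymmetry is roughly
\eta \;=\; \frac{n_B - n_{\bar{B}}}{n_\gamma} \;\approx\; 6 \times 10^{-10}
```

That one-part-in-a-billion-and-change surplus is everything you've ever touched; why it exists is the open question.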
Here, instead let's just harsh on Crowdtap, an app (site? is there a difference anymore?) where you can gain points by answering surveys and posting product promotions for brands, which can then in turn be traded in for gift cards. Here, the author says, "As I’m writing this, for example, I’m looking at someone who has tweeted, in the last ten minutes, about Febreze ("Would totally use Febreze to fight persistent odors in my basement"), squeezable apple sauce ("Great deal, just in time for the new school year!"), Splenda (three times), cheese snacks, LensCrafters (twice), Suave (twice), and McDonalds." And then I think -- wait, why would someone even be friends with that person? Then the author says, "sharing content on social media isn’t even required — the default option is to share, but you get your points either way. Crowdtap passes members’ responses on to brands, but otherwise nobody is listening to what they say. No one is responding. There’s very little about this that might be called social. Imagine someone wandering alone in a giant desert, shouting "I love Big Macs!" into the sky. That’s Crowdtap." Weird. The plot thickens.
James Patton sent this to me directly--it's a game that's the creation of himself, and himself alone, which is very impressive. But it's also right up my alley as a cyberpunk-themed, sardonic strategy game that involves taking over the world. (I'm all about world domination, baby.) Here's his description of it:
"As you can see, it's definitely cyberpunk and definitely a bit tongue-in-cheek sometimes. But the game explores the principle that the world is cyberpunk now and we don't have a great handle on it. Each mechanic and feature in the game is an abstraction of something I perceive in the real world of global business, media or politics. So while most people might approach the game as a fun little diversion where you get to play the bad guy (and it is designed to accommodate that approach), it's also an interactive tool to introduce players to my perspective on the interactions between these real-world systems."
Plugged, shared, here for your interest.