It’s been said before that no technology is inherently evil. Or good. Or neutral. Jesus, what the hell is it then?
We guess it’s more about what humans do with it. You know, like how one minute some guy is revolutionizing physics itself and the next we’re using his new equation to build terrifying weapons. Usually, people have good intentions when building something awesome. Other times, it’s a head-scratcher.
Artificially Intelligent Soldier Robots
In news that will surprise exactly no one, but terrify absolutely everyone, Russia is in the process of creating artificially intelligent robot soldiers right now. Of course, they say they’re not – supposedly, their new Final Experimental Demonstration Object Research robot is being developed to embark on an unmanned space mission in a couple of years. So they’re just, you know, teaching it how to make decisions in a variety of situations.
Like ... how to shoot guns at targets. Wait, what?
Look, we’re no experts on space exploration, but we’re pretty sure that dual-wielding pistols or operating a military-style vehicle aren’t typical subjects in astronaut school. Plus, it’s not like Russia hasn’t been working on autonomous weapons for some time – its “neural net” system can apparently determine what to shoot without any human intervention whatsoever.
OK, so maybe FEDOR really is just for a space mission, and maybe Russia’s claims about the capabilities of their AI weaponry are exaggerated, or simply untrue. But artificial intelligence on the battlefield isn’t just a Russian thing – the subject apparently warrants its own “Artificial intelligence arms race” page on Wikipedia these days. You may not hear that on the news, but it’s happening, so it’s probably just a matter of time before Skynet takes over.
It seems like there’s hardly enough time to discuss whether or not the use of robot soldiers against people should be considered a crime against humanity. But what the hell is there to discuss? Right now it’s not really a matter of “Should we or shouldn’t we?” but rather “Russia’s doing it, so we’d better hop on this killer robot train before we get left behind.”
Understandably, many experts petitioned the U.N. to ban the technology, but we all know that Russia doesn’t exactly play by anybody else’s rules, so if the U.N. were to ban robot soldiers, it’s unlikely Russia would pay attention to any sanctions on the matter. And sure, it still feels like science fiction at this point, but so did Jurassic Parkian genetic manipulation right up until we had the technology to actually do seemingly magical, potentially terrible things.
Predicting Crime Before it Happens
We know: The first thing that popped into your head was Minority Report. And yeah, it’s sort of like that, only with less technology and three fewer psychics lying around in a photon milk bath.
This particular strategy is called “predictive policing,” and yes, the intention is exactly what it sounds like: trying to stop crime before it even occurs. Which sounds great on the surface – who wouldn’t want that? The problem, however, lies in how that prediction is made: complicated algorithms that weigh not only an individual’s criminal past, but also associations (real or perceived) with other known law-breakers, or even what you post on social media.
If that sounds like it has the potential to be, uh, less-than-accurate, you’re not alone. One man in Kansas City, MO, with a criminal record, found himself at a meeting where police were discussing this new technique. Imagine his surprise when he saw his picture up on the presentation screen and found out he had been linked to a murder he had no involvement in. There was no evidence directly tying him to it – just a computer algorithm that figured he might have had something to do with it.
This same methodology is being used in the courtroom as well, for things like sentencing or bond amounts or whether someone should be granted parole. It’s basically leaving the judgment of humanity at the mercy of some complex formula that we don’t completely understand, which sounds like some corny dystopian flick John Cusack would star in.
The cars would probably be cool, though.
One could almost think that a cold, clinical machine designed to see all humans equally would actually benefit justice. Wouldn’t that get rid of the human bias factor? Well, not exactly. As it turns out, these algorithms are still created by people, using crime statistics that exist because of people, and all of it is interpreted by ... ugh, people.
And even if it were shown that it did actually help, is that sort of thing even legal? Aren’t there laws in place that say we can’t just be presumed guilty without things like probable cause and evidence and a fair trial and an actual crime being committed? Do we really live in a world where a computer says, “That dude’s definitely going to commit a crime,” and everybody just kind of shrugs and says, “Yeah, that sounds good”?
Yes, yes we do.
Digital News Anchors Reporting In
Digital art has come a long way since Mario was basically, like, a collection of 20 rectangles. Now, computer generated humans look ... well, human. Your grandmother’s prediction was accurate: Technology is invading every aspect of our lives, and in many cases making everything just the worst. Give her a call and let her know you’re sorry for doubting her wisdom.
Newsrooms are a perfect example of that. Meet Xinhua News Agency’s new news anchor, apparently the first-ever AI reporter. It functions by analyzing live broadcast videos by itself and learning from them. Which, to be fair, is a task that many people loathe: studying. The agency claims the virtual anchor can read as smoothly as a human being, which is kind of amazing. No, seriously, that would be amazing. But we’ll let you check out the first broadcast and let you be the judge.
OK, it does look entirely human. In fact, some people may even recognize it as a specific human they’ve seen somewhere before, given that it’s modeled after the real news presenter Zhang Zhao. Which ... wait. Why would you digitally clone someone who could just do the job, and do it better? It’s probably a money thing – after all, you don’t need to pay a program to deliver the news. It’s either that or some doofus in marketing said, “I’ve done the research, and it’s clear that people want news hitting their ears in monotone, robotic voices.”
Still, imagine how happy your grandmother might be. One day, she’ll reminisce about the good old days of journalism when Walter Cronkite was on TV, and a super-trusted source of information. Then you can say, “Oh, Grandma! He’s still doing it, only it’s on YouTube now and his voice is a little weird but still.” And then she’ll notice the distortion, pixelization, and glitching, then crack your ass with a switch because Grandma’s no idiot.
It’s even extended to the very core of journalism itself, with outlets like Reuters and The Washington Post utilizing AI to gather information on the most pressing issues in the modern world. Which ... OK. But is this the best way to investigate stories thoroughly and get to the truth? Maybe it’s just a method to generate leads, but what happens when the programs get it wrong and someone starts shouting ...
You know what? We’re not even going to use that phrase. It’s entirely overused, and we don’t want to be accused of spreading fake ne- goddammit.
Programs Creating Comedy
With how toxic the world can be today, sometimes all we need is a good, hard laugh. Sort of like what you accomplished by reading this article. Well, we’re sorry, but that’s about to be totally ruined for you.
Typically, good comedy elicits feelings and emotions in one form or another. Maybe it triggers something familiar, or just kind of shocks you in a (hopefully) non-offensive way. Humor is measured differently by everyone – in fact, some of you reading this right now may not find any of our jokes funny at all. And that’s fine, but do know that there is an alternative to creative, human minds out there, and it looks like this:
Hey, we’re not trying to rag on anyone’s project here, but we’re unclear as to why “robot comedy” is a thing that people chose to pursue. We’re already replacing human talent with technology in all sorts of fields – do we really want to do that with the arts? And is creativity a mathematical formula now? Robot comedy clubs and bands and holy crap what is happening?
Just imagine buying a ticket for a stand-up comedy show, only to find the lineup filled with characters like Vitamin C++ and LOLbot. Their jokes will be about eating fried hardware for breakfast or hacking their ex’s Facebook account. Wait, we’re pretty sure people make those jokes now – hopefully to their chagrin once they hear the groans from the audience.
This isn’t a one-off thing, either. Others are climbing on board with this whole AI-based comedy thing. It’s terrifying, and a concept we’ve fed into our own Modern Rogue Comedy Generator to try to get some humor out of. We’re still working on it, but there’s a joke there somewhere.
Like this article? Check out “Surprising, Unintended Ways People Are Using Common Technology” or “Real Life Body Hacks That People Are Doing Right Now”.