Ehhhhhhhhh..... no.
It's a pretty hyped article. The problem with things like "deep learning" neural networks is that they get so complicated that journalists have no real clue what they're writing about, so they select the parts they can Buzzfeed-up into a baitable headline.
While it's certainly possible that these servers "created" a new language to communicate "more efficiently" than in English, they would have been doing it within a fairly structured framework of the learning model. They didn't spontaneously develop the urge to perform these tasks on their own. The model would have been something like: here's every word in English and how they fit together in sentences; here is some information; convey this information to a remote zero-knowledge third party as efficiently as possible. This is still a remarkable breakthrough that would have been impossible just a couple of years ago. The lexical map of English in computer-processable form is terabytes in size, and it has to fit into RAM to be processed effectively.
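To make "as efficiently as possible" concrete: this is not what those servers actually ran, but the classic Huffman-coding construction shows how a fixed optimisation objective alone produces a compressed "language" that looks alien from the outside, with the most frequent messages getting the shortest codes. The negotiation-style messages and their frequencies below are made up for illustration.

```python
import heapq

def huffman_codes(freqs):
    """Assign prefix-free binary codes; frequent messages get short codes."""
    # Heap entries: [total weight, tiebreaker, {message: code-so-far}]
    heap = [[w, i, {msg: ""}] for i, (msg, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)  # the two least-frequent subtrees...
        hi = heapq.heappop(heap)
        merged = {m: "0" + c for m, c in lo[2].items()}
        merged.update({m: "1" + c for m, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])  # ...get merged
        tie += 1
    return heap[0][2]

# Hypothetical message frequencies for a toy negotiation vocabulary
freqs = {"i want the ball": 40, "you take the hat": 25,
         "deal": 20, "no deal": 15}
codes = huffman_codes(freqs)
# "i want the ball", the most frequent message, ends up with the shortest code
```

An agent rewarded purely for efficiency converges on something like this: perfectly sensible within its framework, gibberish to an outside observer.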
As these new machine-learning techniques evolve, they do rapidly move beyond human understanding. But that just means that instead of a rigid, human-built training model feeding data into the machines, the machines can use other machine-learning techniques to build their own training models. While these are beyond human understanding in terms of sheer complexity, they're not some new emergent information structure that we can't explain. Think of all the stars in the galaxy: we know what a star is, and mostly what their trajectories and velocities are, but no human can comprehend this at galactic scale. A single person could probably calculate or store the data for some hundreds or thousands of stars in a lifetime, whereas a model built by a computer under the instruction "learn everything about stars", given enough input data, could create a far more effective neural network for predicting star behaviour than people ever could. That's "beyond" human understanding, but the core concept is not something humans don't understand.
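The "machines building their own training models" point can be sketched with the most primitive version of the idea: an outer search loop that tries model configurations for us instead of a human picking them. Real systems (neural architecture search and the like) are vastly more elaborate; the function names and the toy linear "model" below are made up for illustration.

```python
import random

def evaluate(slope, intercept, data):
    """Mean squared error of a toy linear 'model' on (x, y) pairs."""
    return sum((slope * x + intercept - y) ** 2 for x, y in data) / len(data)

def random_search(data, trials=2000, seed=0):
    """The outer 'machine' that searches model configurations for us."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        slope = rng.uniform(-5, 5)       # a candidate configuration...
        intercept = rng.uniform(-5, 5)
        err = evaluate(slope, intercept, data)  # ...scored by an inner fit
        if best is None or err < best[0]:
            best = (err, slope, intercept)
    return best

# Data generated from y = 2x + 1; the search should land near that line
data = [(x, 2 * x + 1) for x in range(10)]
err, slope, intercept = random_search(data)
```

No human chose the final model here; the outer loop did. Scale that loop up and hand it a neural network instead of a line, and you get results whose complexity exceeds what a person could design by hand, even though the mechanism is perfectly explainable.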
If you want to go properly Skynet: the next generation of Intel chips is so complex that they're being designed by neural networks. Machines are designing the brains of the next machines, which contain structures too complex for humans to design.
Unless there are some seriously under-wraps government projects somewhere, to the best of my knowledge we (as in humans) aren't anywhere close to a true AI, or even to understanding what's required to construct one; that question rapidly vanishes off into the "what is a soul" discussion and isn't very scientific.
These deep-learning neural networks are scary for different reasons: for example, automatically controlled swarms of drones with real-time facial recognition on board. Combining drone swarms with orders to find or execute lists of people en masse is within our technical grasp now, with the operator only required to dictate an operational area and press a big red "launch" button. Basically, we can build AI-controlled manhacks now, but not HAL 9000.