Among recent breakthroughs in the software community, machine learning and deep learning are major contenders – the usage, adoption, and experimentation of deep learning have increased exponentially. Deep learning has made unprecedented progress especially in computer vision, speech, and natural language processing and understanding. GANs, variational autoencoders, and deep reinforcement learning are also producing impressive AI results.
To learn more about the progress of deep learning, we interviewed Ivan Vasilev, a machine learning engineer and researcher based in Bulgaria. Ivan is also the author of the book Advanced Deep Learning with Python, in which he teaches advanced deep learning topics such as the attention mechanism, meta-learning, graph neural networks, memory-augmented neural networks, and more, using the Python ecosystem. In this interview, he shares his experience of working on the book, compares TensorFlow and PyTorch, and talks about computer vision, NLP, and GANs.
On why he chose Computer Vision and NLP as two major focus areas of his book
Computer vision and natural language processing (NLP) are two popular areas seeing a steady stream of new developments. In his book, Advanced Deep Learning with Python, Ivan delves deep into these two broad application areas. “One of the reasons I emphasized computer vision and NLP”, he clarifies, “is that these fields have a broad range of real-world commercial applications, which makes them interesting for a large number of people.”
The other reason for focusing on computer vision, he says, “is the natural (or human-driven, if you wish) progress of deep learning. One of the first modern breakthroughs came in 2012, when a solution based on a convolutional network won that year’s ImageNet competition by a large margin over any previous algorithm. Thanks in part to this impressive result, interest in the field was renewed and brought many other advances, including solutions to complex tasks like object detection and new generative models like generative adversarial networks. In parallel, the NLP domain saw its own wave of innovation with things like word vector embeddings and the attention mechanism.”
On the ongoing battle between TensorFlow and PyTorch
There are two popular machine learning frameworks that are currently on par – TensorFlow and PyTorch (both had new releases in the past month: TensorFlow 2.0 and PyTorch 1.3). There is an ongoing debate that pits TensorFlow and PyTorch against each other as rival technologies and communities. Ivan does not think there is a clear winner between the two libraries, which is why he has included both in the book.
He explains, “On the one hand, it seems that the API of PyTorch is more streamlined and the library is more popular with the academic community. On the other hand, TensorFlow seems to have better cloud support and enterprise features. In any case, developers will only benefit from the competition. For example, PyTorch has demonstrated the importance of eager execution and TensorFlow 2.0 now has much better support for eager execution to the point that it is enabled by default. In the past, TensorFlow had internal competing APIs, whereas now Keras is promoted as its main high-level API. On the other hand, PyTorch 1.3 has introduced experimental support for iOS and Android devices and quantization (computation operations with reduced precision for increased efficiency).”
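The eager-execution point is easy to see in code. The snippet below is a minimal, illustrative sketch (not from the book) showing that both TensorFlow 2.0 and PyTorch evaluate operations immediately, and that Keras now serves as TensorFlow’s main high-level API; the layer sizes are arbitrary placeholders:

```python
import tensorflow as tf   # TensorFlow 2.x
import torch              # PyTorch 1.x

# TensorFlow 2.0 enables eager execution by default:
# operations return concrete values immediately, no session or graph build step.
print(tf.executing_eagerly())            # True
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(a, a))                   # computed right away

# PyTorch has always been eager: tensors are evaluated as the code runs.
b = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(torch.matmul(b, b))

# Keras is promoted as TensorFlow's main high-level API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```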
Using Machine Learning in the stock trading process can make markets more efficient
Ivan discusses his venture into the field of financial machine learning as the author of an ML-oriented, event-based algorithmic trading library. However, financial machine learning (and stock price prediction in particular) is usually not the focus of mainstream deep learning research. “One reason”, Ivan states, “is that the field isn’t as appealing as, say, computer vision or NLP. At first glance, it might even appear gimmicky to predict stock prices.”
He adds, “Another reason is that quality training data isn’t freely available and can be quite expensive to obtain. Even if you have such data, pre-processing it in an ML-friendly way is not a straightforward process, because the noise-to-signal ratio is a lot higher compared to images or text. Additionally, the data itself could have huge volume.”
“However”, he counters, “using ML in finance could have benefits, besides the obvious (getting rich by trading stocks). The participation of ML algorithms in the stock trading process can make the markets more efficient. This efficiency will make it harder for market imbalances to stay unnoticed for long periods of time. Such imbalances will be corrected early, thus preventing painful market corrections, which could otherwise lead to economic recessions.”
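As Ivan points out, the high noise-to-signal ratio makes pre-processing financial data non-trivial. The snippet below is a minimal, hypothetical sketch of one common normalization step (log returns followed by a rolling z-score); the function name, column names, and 20-day window are illustrative assumptions and are not taken from his trading library:

```python
import numpy as np
import pandas as pd

def preprocess_prices(close: pd.Series, window: int = 20) -> pd.DataFrame:
    """Turn raw closing prices into a (slightly) more ML-friendly form.

    Log returns remove the price-level trend; a rolling z-score standardizes
    them so different stocks and periods become comparable. The 20-day window
    is an arbitrary illustrative choice.
    """
    log_ret = np.log(close).diff()                 # log returns
    mean = log_ret.rolling(window).mean()
    std = log_ret.rolling(window).std()
    zscore = (log_ret - mean) / std                # rolling standardization
    return pd.DataFrame({"log_return": log_ret, "zscore": zscore}).dropna()

# Example usage with synthetic data (a random walk stands in for real prices).
prices = pd.Series(100 * np.exp(np.cumsum(np.random.normal(0, 0.01, 500))))
features = preprocess_prices(prices)
print(features.head())
```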
GANs can be used for nefarious purposes, but that doesn’t warrant discarding them
Ivan also gives special emphasis to generative adversarial networks (GANs) in his book. Although extremely useful, GANs have recently been used to generate high-dimensional fake data that looks very convincing. Many researchers and developers have raised concerns about the negative repercussions of using GANs and have wondered whether it is even possible to prevent and counter their misuse and abuse.
Ivan acknowledges that GANs may have unintended outcomes, but that shouldn’t be the sole reason to discard them. He says, “Besides great entertainment value, GANs have some very useful applications and could help us better understand the inner workings of neural networks. But as you mentioned, they can be used for nefarious purposes as well. Still, we shouldn’t discard GANs (or any algorithm with a similar purpose) because of this, if only because the bad actors won’t discard them. I think the solution to this problem lies beyond the realm of deep learning. We should strive to educate the public on the possible adverse effects of these algorithms, but also on their benefits. In this way we can raise awareness of machine learning and spark an honest debate about its role in our society.”
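For readers new to the mechanics behind these models, here is a minimal, generic GAN sketch in PyTorch (illustrative only, not a specific architecture from the book): a generator maps random noise to fake samples, a discriminator learns to tell fake from real, and the two networks are trained adversarially. The layer sizes and training settings are arbitrary placeholders.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 16, 64, 32   # illustrative sizes

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch_size, data_dim)     # stand-in for a batch of real data
    noise = torch.randn(batch_size, latent_dim)
    fake = G(noise)

    # Discriminator step: push real samples towards 1 and fakes towards 0.
    d_loss = (bce(D(real), torch.ones(batch_size, 1)) +
              bce(D(fake.detach()), torch.zeros(batch_size, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(batch_size, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```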
Machine learning can have both intentional and unintentional harmful effects
Awareness and ethics go hand in hand. Ethics has emerged as one of the most important topics in machine learning and artificial intelligence over the last year, and Ivan agrees that ethics and algorithmic bias in machine learning are of extreme importance. He says, “We can view the potential harmful effects of machine learning as either intentional or unintentional. For example, the bad actors I mentioned when we discussed GANs fall into the intentional category. We can limit their influence by striving to keep the cutting edge of ML research publicly available, thus denying them any unfair advantage of potentially better algorithms. Fortunately, this is largely the case now and hopefully will remain that way in the future.”
“I don’t think algorithmic bias is necessarily intentional,” he says. “Instead, I believe that it is the result of the underlying injustices in our society, which creep into ML through either skewed training datasets or the unconscious bias of researchers. Although the bias might not be intentional, we still have a responsibility to make a conscious effort to eliminate it.”
Challenges in the Machine learning ecosystem
“The field of ML exploded (in a good sense) a few years ago,” says Ivan, “thanks to a combination of algorithmic and computer hardware advances. Since then, researchers have introduced new, smarter, and more elegant deep learning algorithms. But history has shown that AI can generate so much hype that even the impressive achievements of the last few years could fall short of the expectations of the general public.”
“So, in a broader sense, the challenge in front of ML is to sustain the current pace of innovation. In particular, current deep learning algorithms fall short in some key intelligence areas, where humans excel. For example, neural networks have a hard time learning multiple unrelated tasks. They also tend to perform better when working with unstructured data (like images), compared to structured data (like graphs).”
“Another issue is that neural networks sometimes struggle to remember long-distance dependencies in sequential data. Solving these problems might require new fundamental breakthroughs, and it’s hard to give an estimate for such one-time events. But even at the current level, ML can fundamentally change our society (hopefully for the better). For instance, in the next 5 to 10 years, we could see the widespread introduction of fully autonomous vehicles, which have the potential to transform our lives.”
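The long-distance dependency problem Ivan mentions is one of the motivations for the attention mechanism, a topic the book covers: instead of relying on a recurrent state to carry information across many steps, each position can attend directly to every other position. Below is a minimal NumPy sketch of scaled dot-product attention with illustrative shapes (not code from the book):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted sum of values

# Illustrative example: a sequence of 5 tokens with 8-dimensional representations.
seq_len, d_model = 5, 8
x = np.random.randn(seq_len, d_model)
out = scaled_dot_product_attention(x, x, x)           # self-attention
print(out.shape)                                      # (5, 8)
```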
This is just a snapshot of some of the important focus areas in the deep learning ecosystem. You can check out more of Ivan’s work in his book Advanced Deep Learning with Python, where you will investigate and train CNN models with GPU-accelerated libraries like TensorFlow and PyTorch, and apply deep neural networks to state-of-the-art domains such as computer vision, NLP, and GANs.
Author Bio
Ivan Vasilev started working on the first open source Java deep learning library with GPU support in 2013. The library was acquired by a German company, where he continued its development. He has also worked as a machine learning engineer and researcher on medical image classification and segmentation with deep neural networks. Since 2017, he has focused on financial machine learning and is working on a Python-based platform that provides the infrastructure to rapidly experiment with different ML algorithms for algorithmic trading. You can find him on LinkedIn and GitHub.
Read Next
Kaggle’s Rachel Tatman on what to do when applying deep learning is overkill
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally