
The huge advances in deep learning and artificial intelligence were perhaps the biggest story in tech in 2018. But we wanted to know what the future might hold – luckily, we were able to speak to Packt author Will Ballard about what he sees in store for artificial intelligence in 2019 and beyond.

Will Ballard is the chief technology officer at GLG, responsible for engineering and IT. He was also responsible for the design and operation of large data centers that helped run site services for customers including Gannett, Hearst Magazines, NFL, NPR, The Washington Post, and Whole Foods. He has held leadership roles in software development at NetSolve (now Cisco), NetSpend, and Works (now Bank of America).

Explore Will Ballard’s Packt titles here.

Packt: What do you think the biggest development in deep learning / AI was in 2018?

Will Ballard: I think attention models beginning to take the place of recurrent networks is a pretty impressive breakout on the algorithm side.
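To make that contrast concrete, here is a minimal PyTorch sketch (ours, not Will's) that runs the same batch of sequences through a recurrent layer and a self-attention layer. The shapes, layer sizes, and head count are arbitrary illustrations.

```python
import torch
import torch.nn as nn

# Toy batch of 4 sequences, each 10 steps long with 32 features per step.
batch, seq_len, features = 4, 10, 32
x = torch.randn(batch, seq_len, features)

# Recurrent approach: an LSTM walks the sequence one step at a time.
lstm = nn.LSTM(input_size=features, hidden_size=features, batch_first=True)
recurrent_out, _ = lstm(x)                                  # (4, 10, 32)

# Attention approach: every position attends to every other position in parallel.
attention = nn.MultiheadAttention(embed_dim=features, num_heads=4, batch_first=True)
attention_out, attention_weights = attention(x, x, x)       # (4, 10, 32), (4, 10, 10)

print(recurrent_out.shape, attention_out.shape)
```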

In Packt’s 2018 Skill Up survey, developers across disciplines and job roles identified machine learning as the thing they were most likely to be learning in the coming year. What do you think of that result? Do you think machine learning is becoming a mandatory multidiscipline skill, and why?

Almost all of my engineering teams have an active or a planned machine learning feature on their roadmap. We’ve been able to get all kinds of engineers with different backgrounds to use machine learning — it really is just another way to make functions — probabilistic functions — but functions.
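As a loose illustration of that "probabilistic function" framing, the sketch below (our example, not Will's) wraps a small, untrained PyTorch classifier in an ordinary Python function that returns a probability. The fraud-detection name and feature count are purely hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical two-class classifier over 8 input features (untrained here).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

def is_fraud(transaction: torch.Tensor) -> float:
    """Call the model like any other function; it returns a probability."""
    logits = model(transaction)
    return torch.softmax(logits, dim=-1)[1].item()

print(is_fraud(torch.randn(8)))  # roughly 0.5 until the model is trained
```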

What do you think the most important new deep learning/AI technique to learn in 2019 will be, and why?

In 2019 — I think it is going to be all about PyTorch and TensorFlow 2.0, and learning how to host these on cloud PaaS.

The benefits of automated machine learning and metalearning

How important do you think automated machine learning and metalearning will be to the practice of developing AI/machine learning in 2019? What benefits do you think they will bring?

Even ‘simple’ automation techniques like grid search and running multiple different algorithms on the same data are big wins when mastered. There is almost no telling which model is ‘right’ till you try it, so why not let a cloud of computers iterate through scores of algorithms and models to give you the best available answer?
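As a rough sketch of that idea (ours, not Will's), the snippet below uses scikit-learn to grid-search several different algorithms on the same dataset and keep whichever cross-validates best; the candidate models and parameter ranges are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for "the same data" every candidate model sees.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200]}),
    (SVC(), {"C": [0.1, 1.0], "kernel": ["rbf", "linear"]}),
]

best_score, best_model = 0.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)   # exhaustive grid search
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(best_model, best_score)
```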

Artificial intelligence and ethics

Do you think ethical considerations will become more relevant to developing AI/machine learning algorithms going forwards? If yes, how do you think this will be implemented?

I think the ethical issues matter for outcomes and for how models are used, but they aren’t a property of the algorithms themselves.

If a developer was looking to start working with machine learning/AI, what tools and software would you suggest they learn in 2019?

Python and PyTorch.