Thursday 30 September 2021

Michael Free, BT on "Models that cheat - Making sure it really works"

Conference TFNetworkAutumn21

Data Science - The beating heart of AI
 Conference overview and registration 
 YouTube TFNetworkSummer21 Conference Playlist 

Michael Free is an AI Research Manager in BT’s Applied Research department. With his work focussing on business applications of Deep Learning, he has led machine learning projects across a variety of fields, such as natural language understanding on customer interaction data, image recognition for infrastructure management and future chatbot technology. His main interest lies in how best to use this technology in an enterprise context, on real problems with limited training data, utilising unsupervised learning techniques to create useful functionality.

His current main projects involve utilising reinforcement learning for dialogue management, content recommendation systems, and collaborative work with universities on speech analytics/NLP.

Models that cheat - Making sure it really works

Machine Learning has made great strides on hard, unstructured problems with the advent of deep learning. However, such progress does not come free of issues. Deep learning models are often treated as black box solutions, so interpretability and explainability are complex issues when building them, and poorly framed experiments and ‘dodgy data’ have led to a litany of models that don’t really work in practice – they merely ‘cheat’ the limited test you’ve given them.

In this talk I’ll discuss these issues, with examples of where things have gone wrong – and methods we can use to mitigate them and ensure our models “really work” when we test them.
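
As a minimal illustration of the kind of ‘cheating’ the talk describes (a hypothetical sketch, not Michael's example; it assumes numpy and scikit-learn, and the synthetic data and leakage mechanism are invented for demonstration): a feature that merely encodes which group a sample came from lets a model ace a naive random train/test split, while a group-aware split reveals it has learned nothing about the real task.

# Sketch: a model that 'cheats' a naive split via group leakage (hypothetical data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GroupShuffleSplit

rng = np.random.default_rng(0)
n_groups, per_group = 50, 40
groups = np.repeat(np.arange(n_groups), per_group)
y = np.repeat(rng.integers(0, 2, n_groups), per_group)    # label is fixed per group
X_signal = rng.normal(size=(len(y), 5))                    # genuinely uninformative features
X_leak = groups[:, None] + rng.normal(scale=0.1, size=(len(y), 1))  # encodes the group id
X = np.hstack([X_signal, X_leak])

# Naive random split: the same groups appear in train and test, so the model
# can 'cheat' by memorising the group-to-label mapping through the leaky feature.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
naive_acc = RandomForestClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)

# Group-aware split: whole groups are held out, so memorisation no longer helps
# and the score collapses towards chance.
tr, te = next(GroupShuffleSplit(test_size=0.3, random_state=0).split(X, y, groups))
grouped_acc = RandomForestClassifier(random_state=0).fit(X[tr], y[tr]).score(X[te], y[te])

print(f"naive split accuracy:   {naive_acc:.2f}")   # looks impressive (near 1.0)
print(f"grouped split accuracy: {grouped_acc:.2f}") # close to chance (around 0.5)

The same pattern shows up whenever the evaluation split shares hidden structure with the training data (duplicate customers, the same scanner or camera, overlapping time windows); splitting along that structure is one simple way to check whether a model “really works”.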

