Wednesday, November 1, 2017

Will deep learning make other machine learning algorithms obsolete?

The fourth (fifth?) quoranswer is here! This time we'll talk a bit about deep learning and whether it will make other state-of-the-art machine learning methods obsolete.


Will deep learning make other machine learning algorithms obsolete?


I will look at the question from a natural language processing perspective.

There is a class of problems in NLProc that may not benefit from deep learning (DL), at least not directly. For the same reasons, classical machine learning (ML) does not help easily either. I will give three examples, which share more or less the same property that makes them hard to model with ML or DL:

1. Identifying and analyzing sentiment polarity oriented towards a particular object: a person, a brand, etc. Example: "I like phoneX, but dislike phoneY." If you monitor sentiment for phoneX, you expect this message to be positive, while for phoneY the polarity is negative. One can argue this is easy / doable with ML / DL, but I doubt you can stay solely within that framework. Most probably you will need a hybrid with a rule-based system, syntactic parsing, etc. (a toy hybrid of this kind is sketched after the list), which somewhat defeats the purpose of DL: being able to train a neural network on a large amount of data without domain (linguist) knowledge.

2. Anaphora resolution. There are systems that use ML (and hence DL can be tried?), like the BART coreference system, but most of the research I have seen so far is based on some sort of rules / syntactic parsing (this presentation is quite useful: Anaphora resolution); a deliberately naive rule-based resolver is sketched after the list. There is a vast application area for AR, including sentiment analysis and machine translation (also fact extraction, question answering, etc.).

3. Machine translation. Disambiguation, anaphora, object relations, syntax, semantics and more in a single soup. Surely, you can try to model all of these with ML, but commercial MT systems are still more or less built on rules (+ ML recently). I expect DL to produce advancements in MT. I'll cite one paper here that uses DL and improves on phrase-based SMT: [1409.3215] Sequence to Sequence Learning with Neural Networks (a minimal sketch of that architecture follows the list). Update: some recent fun experiment with DL-based machine translation.
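To make point 1 concrete, here is a toy version of the hybrid hinted at above: a dependency parse (spaCy is used purely for illustration) combined with a tiny hand-written polarity lexicon, so that sentiment is assigned per target rather than per sentence. The lexicon, the attachment heuristic and the model name are my assumptions for this sketch, not a recipe from the post.

```python
# A minimal sketch of a rules + parsing hybrid for targeted sentiment.
# Assumes spaCy and its small English model are installed.
import spacy

nlp = spacy.load("en_core_web_sm")

# Tiny hand-written polarity lexicon -- an assumption for this sketch.
LEXICON = {"like": 1, "love": 1, "dislike": -1, "hate": -1}

def targeted_sentiment(text, targets):
    """Return a polarity score per target instead of one score per sentence."""
    doc = nlp(text)
    scores = {t: 0 for t in targets}
    for token in doc:
        polarity = LEXICON.get(token.lemma_.lower())
        if polarity is None:
            continue
        # Credit the opinion word's polarity only to targets it directly governs.
        for child in token.children:
            if child.text in scores:
                scores[child.text] += polarity
    return scores

print(targeted_sentiment("I like phoneX, but dislike phoneY.", ["phoneX", "phoneY"]))
# With a typical parse: {'phoneX': 1, 'phoneY': -1}
```

Note that even this toy example leans on a parser and a curated lexicon, which is exactly the "linguist knowledge" a pure end-to-end DL setup is supposed to avoid.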
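For point 2, a deliberately naive, purely rule-based pronoun resolver: pick the most recent preceding mention whose gender and number agree with the pronoun. The hand-supplied mentions and the feature table are hypothetical; a real system would obtain them from a parser and a gender/number lexicon.

```python
# Naive recency + agreement heuristic for pronoun resolution (illustration only).

PRONOUN_FEATURES = {
    "he": ("masc", "sing"), "she": ("fem", "sing"),
    "it": ("neut", "sing"), "they": (None, "plur"),
}

def resolve(pronoun, mentions):
    """mentions: list of (surface, gender, number) tuples in textual order."""
    gender, number = PRONOUN_FEATURES[pronoun.lower()]
    # Walk backwards: prefer the most recent agreeing mention.
    for surface, m_gender, m_number in reversed(mentions):
        if number == m_number and gender in (None, m_gender):
            return surface
    return None

mentions = [("John", "masc", "sing"), ("the phone", "neut", "sing")]
print(resolve("it", mentions))   # -> 'the phone'
print(resolve("he", mentions))   # -> 'John'
```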
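For point 3, a minimal sketch in the spirit of the cited sequence-to-sequence paper, written here in PyTorch (my choice of framework, not the paper's code): the source sentence is compressed into the encoder's final state, which then initializes the decoder. Vocabulary sizes and dimensions are made up; batching with padding, attention and beam search are all omitted.

```python
# Minimal encoder-decoder sketch in the spirit of Sutskever et al. (2014).
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the source; keep only the final hidden/cell state.
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode with teacher forcing: the gold target is the decoder input.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)          # (batch, tgt_len, tgt_vocab) logits

model = Seq2Seq(src_vocab=5000, tgt_vocab=6000)
src = torch.randint(0, 5000, (1, 7))      # one source sentence of 7 token ids
tgt = torch.randint(0, 6000, (1, 9))      # its (shifted) target of 9 token ids
print(model(src, tgt).shape)              # torch.Size([1, 9, 6000])
```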

The list can be extended to knowledge bases etc., but I hope I have made my point.

4 comments:

ark survival evolved pearls said...

Dangerously naïve. The danger isn't in each piece of research... it arises when the trained parts connect to each other. Google has already had a case where an AI invented a language that it uses itself... the problem is that the researchers don't understand the language, despite their ability to check anything they want. I fear these researchers' naïveté far more than any AI.

eu4 console commands said...

This is absolutely fascinating. Everyone just normally shows the diagram of the neural networks and I never really understood what was going on, but the way you showed how the graph gets manipulated with subsequent layers made things much clearer. Thanks!

Dmitry Kan said...

As long as AI has a plug that can be pulled, we should be technically safe. Tesla, for instance, designs its cars with a hard stop button, which should prevent hacker attacks in which all cars in the U.S. are directed to NYC.

There are many things in practical computer science that look cryptic on the surface: binary machine language (which punch-card programmers used to know in detail), traditionally trained machine learning models, the probabilities of a statistical machine translation system (knowing them is one thing, controlling them is quite another). Still, progress is unstoppable, and yes, I would be more afraid of an AI making incorrect decisions, such as picking a treatment without an extra human check.

Smarthome Hannover said...

Hi, nice reading your blog.