Wednesday, December 12, 2018

Automatic writing with Deep Learning: Progress

This is a continuation of the post https://dmitrykan.blogspot.com/2018/05/automatic-writing-with-deep-learning.html. This item was reblogged at Writer's DZone: https://dzone.com/articles/automatic-writing-with-deep-learning-progress

Fast-forward a few months (apologies for the delay), and I can share some findings.
Again, I think we should take AI co-writer exercises with a grain of salt. However, during this time I have come across practical usage areas for such systems.

One of them is augmenting the work of a news article writer. More specifically, when writing a news item, one of the most challenging tasks is to coin a catchy title. Does the title have a trendy phrase in it? Does it mention an emerging topic that captures attention at this very moment? Does it reuse a pattern that has worked well for this particular author? Or does it simply spur an idea in the author's head?

[Image credit: https://www.rogerwilco.co.za/blog/robot-writers-how-ai-will-affect-copywriting]


In the following exercise I have set a very modest goal: train a co-writer on previously written texts and have it suggest something useful from them. I could imagine this being extended to texts that are trending, or to a collection of particularly interesting titles, what have you.

To train such a model I have used Robin Sloan's RNN writer: https://github.com/robinsloan/rnn-writer. The goodies of the project are:
  • Trained with Torch. Nowadays Torch is mostly leveraged via PyTorch, a deep learning Python library that is approaching production readiness.
  • The trained model is exposed in Atom, a pluggable editor (I'd imagine real writers would want the model integrated into their favourite editor, like Word).
  • An API is available too, for integrating the model into custom apps (and this is exactly how the Atom plugin works); see the sketch after this list.
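
Since rnn-writer pairs the editor with a local model server over HTTP, a custom integration boils down to a few lines of code. Here is a minimal sketch in Python; the port, path, and parameter names are assumptions on my part for illustration, not the project's documented API, so check the rnn-writer README for the real thing.

```python
# Sketch: asking a locally running rnn-writer model server for
# continuations. NOTE: the port, path, and parameter names below are
# assumptions for illustration -- consult the rnn-writer README for
# the actual API.
import requests

def suggest(start_text, n=3):
    """Request n continuations of start_text from the local server."""
    resp = requests.get(
        "http://localhost:8080/generate",           # assumed endpoint
        params={"start_text": start_text, "n": n},  # assumed parameters
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for completion in suggest("Peer-to-peer networks are"):
        print(completion)
```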

I will skip the installation of Torch and the training of the network and proceed to examples; the rnn-writer GitHub repository has a good set of instructions to follow. I have installed Torch and trained the model on a Mac.
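
For readers who would rather stay in Python, here is a rough sketch of what an equivalent character-level training loop looks like in PyTorch. This is not the rnn-writer code (that lives in Torch/Lua); it is only a minimal illustration of the idea, and the corpus file name is made up.

```python
# Minimal character-level RNN language model in PyTorch -- an
# illustration of the kind of training rnn-writer does in Torch,
# not the project's actual code. "thesis.txt" is a made-up file name.
import torch
import torch.nn as nn

text = open("thesis.txt").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=2e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len, batch = 128, 32

for step in range(1000):
    # pick random contiguous chunks; the target is each next character
    starts = torch.randint(0, len(data) - seq_len - 1, (batch,)).tolist()
    x = torch.stack([data[i:i + seq_len] for i in starts])
    y = torch.stack([data[i + 1:i + seq_len + 1] for i in starts])
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```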

First things first: an RNN trained on my Master's Thesis, "Design and Implementation of Peer-to-Peer Network" (University of Kuopio, 2007).


The text of the Master's Thesis is about 50 pages in English, with diagrams and formulas. On the one hand, having more data lets a neural network learn more word representations and gives it a larger probability space for predicting the next word conditioned on the current word or phrase. On the other hand, limiting the input corpus to phrases that serve a specific domain goal, like writing email, could yield a clean set of phrases that a user employs in many typical email passages.
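
To make the point about probability space concrete: even the crudest next-word model, one built from bigram counts, shows how adding text widens the set of candidate continuations for each word. The toy corpus below is made up for illustration.

```python
# Toy next-word model: count bigrams, then rank continuations of a word.
# The one-line corpus is made up purely for illustration.
from collections import Counter, defaultdict

corpus = ("peer to peer networks route queries between peers "
          "and peer discovery keeps the network connected").split()

# nxt[w1][w2] = how often w2 followed w1 in the corpus
nxt = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    nxt[w1][w2] += 1

def next_word_probs(word):
    """Empirical distribution over the words seen after `word`."""
    counts = nxt[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common()}

# 'peer' was followed by 'to', 'networks', and 'discovery', once each:
print(next_word_probs("peer"))  # {'to': 0.33..., 'networks': 0.33..., 'discovery': 0.33...}
```

With 50 pages instead of one line, each word accumulates many more observed continuations, which is exactly the larger probability space mentioned above.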

As I got access to Fox articles, I thought this could warrant another RNN model and a test. Something to share next time.
