LSTM is arguably the most successful RNN architecture for tasks that involve sequential data. In the past few years, several improvements to LSTM have been proposed. We propose a further improvement that allows communication between memory cells in different blocks and lets an LSTM layer carry out internal computation within its memory.
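To make the idea of cross-block cell communication concrete, here is a minimal sketch of a single LSTM step extended with a hypothetical cell-to-cell mixing matrix `W_cc`, so that the candidate update for each cell sees a linear combination of all previous cell states. All names and the placement of the mixing term are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step_with_cell_mixing(x, h_prev, c_prev, params):
    """One LSTM step with a hypothetical cross-cell mixing term:
    the candidate update sees W_cc @ c_prev, a linear mix of ALL
    previous cell states, so cells in different blocks can interact.
    Names and structure here are illustrative, not the paper's."""
    W, U, b, W_cc = params["W"], params["U"], params["b"], params["W_cc"]
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b               # all gate pre-activations, shape (4H,)
    i = sigmoid(z[0:H])                      # input gate
    f = sigmoid(z[H:2*H])                    # forget gate
    o = sigmoid(z[2*H:3*H])                  # output gate
    g = np.tanh(z[3*H:4*H] + W_cc @ c_prev)  # candidate, with cell-to-cell mixing
    c = f * c_prev + i * g                   # new cell state
    h = o * np.tanh(c)                       # new hidden state
    return h, c

# tiny usage example with random weights
rng = np.random.default_rng(0)
D, H = 3, 4
params = {
    "W": rng.standard_normal((4 * H, D)) * 0.1,
    "U": rng.standard_normal((4 * H, H)) * 0.1,
    "b": np.zeros(4 * H),
    "W_cc": rng.standard_normal((H, H)) * 0.1,  # hypothetical mixing matrix
}
h, c = lstm_step_with_cell_mixing(rng.standard_normal(D),
                                  np.zeros(H), np.zeros(H), params)
```

Setting `W_cc` to the zero matrix recovers a standard LSTM step, so the sketch shows only how such a communication term could be grafted onto the usual gate equations.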