This repository currently implements work from the following two papers.
The first is a Conv-TasNet implementation, reconstructed from funcwj's work; the corresponding paper is [1].
The second is the recurrent selective hearing network; see the paper below [2].
Some data processing code is inspired by pchao6's work.
2019.09.20
An implementation of another speech separation method [3], which recursively selects and separates one speaker at a time, will be added in the near future.
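The core idea in [3] is to apply a one-vs-rest separator repeatedly: each pass extracts one speaker and leaves a residual mixture, which is fed back in until no speech remains, so the number of speakers never has to be known in advance. Below is a minimal PyTorch sketch of that loop, not the code in this repository; the `OneAndRestSeparator` module, its toy architecture, and the energy-based stopping threshold are illustrative placeholders for the real network and its learned stop criterion.

```python
import torch
import torch.nn as nn


class OneAndRestSeparator(nn.Module):
    """Hypothetical one-vs-rest separator: splits the input into one source + residual."""

    def __init__(self, hidden=64):
        super().__init__()
        # Toy 1-D conv encoder/decoder standing in for a real separation network.
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=16, stride=8, padding=4),
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, 2, kernel_size=16, stride=8, padding=4),
        )

    def forward(self, mixture):                  # mixture: (batch, 1, time)
        out = self.net(mixture)                  # (batch, 2, time')
        out = out[..., : mixture.shape[-1]]      # crop to the input length
        source, residual = out[:, :1], out[:, 1:]
        return source, residual


def recursive_separate(model, mixture, max_speakers=4, stop_energy=1e-3):
    """Apply the separator recursively until the residual is (nearly) silent."""
    sources, residual = [], mixture
    for _ in range(max_speakers):
        source, residual = model(residual)
        sources.append(source)
        # Stopping criterion: residual energy below a threshold, a stand-in
        # for the learned stop flag used in the paper.
        if residual.pow(2).mean() < stop_energy:
            break
    return sources


if __name__ == "__main__":
    model = OneAndRestSeparator()
    mix = torch.randn(1, 1, 16000)               # one second of fake 16 kHz audio
    est = recursive_separate(model, mix)
    print(f"estimated {len(est)} source(s)")
```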
[1] Y. Luo and N. Mesgarani, "Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation," arXiv preprint arXiv:1809.07454, 2018.
[2] K. Kinoshita, L. Drude, M. Delcroix, et al., "Listening to Each Speaker One by One with Recurrent Selective Hearing Networks," in Proc. ICASSP, 2018, pp. 5064-5068.
[3] N. Takahashi, S. Parthasaarathy, N. Goswami, and Y. Mitsufuji, "Recursive Speech Separation for Unknown Number of Speakers," in Proc. Interspeech, 2019, pp. 1348-1352, doi: 10.21437/Interspeech.2019-1550.