Abstract
In this paper, we tackle a practical problem faced when building an i-vector speaker recognition system with limited resources: the lack of development data containing multiple recordings per speaker. When only one recording per speaker is available in the development set, phonetic variability can be synthesised simply by dividing each recording into segments, provided the recordings are of sufficient length. For channel variability, we pass each recording through a Gaussian channel to produce a second set of recordings, referred to here as Gaussian version recordings. The proposed method for channel variability synthesis yields a total relative improvement in equal error rate (EER) of 5%.
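The abstract does not detail the Gaussian channel; one common reading is an additive white Gaussian noise channel applied at a fixed signal-to-noise ratio. A minimal NumPy sketch under that assumption (the function name, `snr_db` parameter, and SNR value are illustrative, not from the paper):

```python
import numpy as np

def gaussian_channel(signal, snr_db=20.0, rng=None):
    """Pass a 1-D signal through an additive white Gaussian noise
    channel at the given signal-to-noise ratio in dB.

    Hypothetical sketch: the paper does not specify the noise model
    or SNR, so snr_db is an assumed, illustrative parameter.
    """
    rng = np.random.default_rng(rng)
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Each development recording yields a second, noise-corrupted copy,
# giving two "sessions" per speaker for session-variability modelling.
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # toy 1-s tone
noisy = gaussian_channel(clean, snr_db=20.0, rng=0)
```

Pairing each clean recording with its Gaussian version recording is what supplies the two-sessions-per-speaker structure that multi-condition i-vector training needs.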
Original language | English |
---|---|
Title of host publication | IET 3rd International Conference on Intelligent Signal Processing (ISP 2017) |
Publisher | IET |
Pages | 1-6 |
Number of pages | 6 |
ISBN (Electronic) | 978-1-78561-708-9 |
ISBN (Print) | 978-1-78561-707-2 |
DOIs | |
Publication status | Published - 4 Dec 2017 |
Event | IET 3rd International Conference on Intelligent Signal Processing - Savoy, London, United Kingdom, 4 Dec 2017 → 5 Dec 2017 |

Conference

Conference | IET 3rd International Conference on Intelligent Signal Processing |
---|---|
Abbreviated title | ISP 2017 |
Country/Territory | United Kingdom |
City | London |
Period | 4/12/17 → 5/12/17 |
Internet address | https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8329306 |
Keywords
- multi-condition training
- session variability
- i-vector