Channel variability synthesis in i-vector speaker recognition

Ahmed Isam Ahmed, John Chiverton, David Ndzi, Victor Becerra

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    1 Citation (Scopus)

    Abstract

    In this paper, we tackle a practical problem that can arise when establishing an i-vector speaker recognition system with limited resources: a lack of development data containing multiple recordings per speaker. When only one recording is available for each speaker in the development set, phonetic variability can be synthesised simply by dividing the recordings, provided they are of sufficient length. For channel variability, we pass the recordings through a Gaussian channel to produce a second set of recordings, referred to here as Gaussian version recordings. The proposed method for channel variability synthesis yields a total relative improvement in EER of 5%.
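    The channel variability step described above can be sketched as passing each waveform through an additive white Gaussian noise channel. A minimal illustration follows; the SNR value, function names, and the use of additive noise at a fixed SNR are assumptions for illustration, as the abstract does not specify the channel parameters.

    ```python
    import numpy as np

    def gaussian_channel(signal, snr_db, rng=None):
        """Pass a waveform through an additive white Gaussian noise channel.

        snr_db is an illustrative target signal-to-noise ratio in decibels;
        the paper does not state the actual channel configuration used.
        """
        rng = np.random.default_rng() if rng is None else rng
        signal = np.asarray(signal, dtype=float)
        signal_power = np.mean(signal ** 2)
        noise_power = signal_power / (10 ** (snr_db / 10))
        noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
        return signal + noise

    # One development recording yields a second, "Gaussian version" recording;
    # splitting a sufficiently long recording synthesises phonetic variability.
    recording = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s toy signal
    gaussian_version = gaussian_channel(recording, snr_db=20,
                                        rng=np.random.default_rng(0))
    halves = np.split(recording, 2)  # two development segments from one recording
    ```

    Both the original and the Gaussian version recordings would then be used together in development, giving the multiple sessions per speaker that total-variability training expects.
    
    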
    Original language: English
    Title of host publication: IET 3rd International Conference on Intelligent Signal Processing (ISP 2017)
    Publisher: IET
    Pages: 1-6
    Number of pages: 6
    ISBN (Electronic): 978-1-78561-708-9
    ISBN (Print): 978-1-78561-707-2
    DOIs
    Publication status: Published - 4 Dec 2017
    Event: IET 3rd International Conference on Intelligent Signal Processing - Savoy, London, United Kingdom
    Duration: 4 Dec 2017 - 5 Dec 2017
    https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8329306

    Conference

    Conference: IET 3rd International Conference on Intelligent Signal Processing
    Abbreviated title: ISP 2017
    Country/Territory: United Kingdom
    City: London
    Period: 4/12/17 - 5/12/17

    Keywords

    • multi-condition training
    • session variability
    • i-vector
