• Acted clear speech corpus 

    Mayo, Catherine (LISTA Consortium: (i) Language and Speech Laboratory, Universidad del Pais Vasco, Spain and Ikerbasque, Spain; (ii) Centre for Speech Technology Research, University of Edinburgh, UK; (iii) KTH Royal Institute of Technology, Sweden; (iv) Institute of Computer Science, FORTH, Greece, 2013-09-24)
    Single male native British English talker recorded producing 25 TIMIT sentences in 5 conditions, two natural: (i) quiet, (ii) while the talker listened to high-intensity speech-shaped noise, and three acted: (i) as if to ...
  • Analysis Software for Model Checking Edinburgh Buses 

    Reijsbergen, Daniel; Gao, Wulinjian
    This software is supplementary material for the paper 'An automated methodology for analysing urban transportation systems using model checking' by Daniël Reijsbergen and Stephen Gilmore. It was used to construct the figures ...
  • Artificial Personality 

    Wester, Mirjam; Aylett, Matthew; Tomalin, Marcus; Dall, Rasmus
    This dataset is associated with the paper “Artificial Personality and Disfluency” by Mirjam Wester, Matthew Aylett, Marcus Tomalin and Rasmus Dall published at Interspeech 2015, Dresden. The focus of this paper is ...
  • Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2015) Database 

    Wu, Zhizheng; Kinnunen, Tomi; Evans, Nicholas; Yamagishi, Junichi
    The database has been used in the first Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2015). Genuine speech is collected from 106 speakers (45 male, 61 female) and with no significant channel ...
  • Code for Factorial Switching Linear Dynamical System (FSLDS) Monitoring of Intensive Care Unit Data 

    Williams, Chris; Lal, Partha; Shaw, Martin
    The submission contains: (1) realtime - C++ code for performing Factorial Switching Linear Dynamical System (FSLDS) inference in real time on CSV input files; (2) matlab - Matlab code used for offline parameter estimation ...
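    As a rough illustration of what switching linear dynamical system inference over CSV input involves, a minimal Python sketch follows. It is not the FSLDS implementation in this dataset: the factorial switch structure, the ICU-specific models and the real-time C++ details are omitted, and every parameter, variable name and CSV column name below is an invented placeholder.

      # Minimal sketch of switching linear dynamical system (SLDS) filtering over
      # CSV observations -- NOT the FSLDS code in this dataset. The factorial
      # switch structure, the ICU-specific models and the real-time C++ details
      # are omitted; all parameters and column names below are placeholders.
      import csv
      import numpy as np

      # Per-regime LDS parameters (placeholders): x_t = A x_{t-1} + w,  y_t = C x_t + v
      A = [np.array([[1.0]]), np.array([[0.5]])]    # state transition per regime
      C = [np.array([[1.0]]), np.array([[1.0]])]    # observation matrix per regime
      Q = [np.array([[0.01]]), np.array([[0.10]])]  # process noise per regime
      R = [np.array([[0.05]]), np.array([[0.05]])]  # observation noise per regime
      Z = np.array([[0.95, 0.05],                   # switch transition probabilities
                    [0.10, 0.90]])

      def gauss_loglik(y, mean, cov):
          """Log density of y under a Gaussian with the given mean and covariance."""
          d = y - mean
          quad = (d.T @ np.linalg.solve(cov, d)).item()
          return -0.5 * (quad + np.log(np.linalg.det(2.0 * np.pi * cov)))

      def slds_filter(csv_path, column="hr"):
          """GPB(1)-style approximate filtering: collapse to one Gaussian per step."""
          n_s = len(A)
          x, P = np.zeros((1, 1)), np.eye(1)        # collapsed state mean / covariance
          p_s = np.full(n_s, 1.0 / n_s)             # posterior over switch states
          with open(csv_path) as f:
              for row in csv.DictReader(f):
                  y = np.array([[float(row[column])]])
                  means, covs, logw = [], [], []
                  for s in range(n_s):
                      # Kalman predict/update under regime s
                      xp = A[s] @ x
                      Pp = A[s] @ P @ A[s].T + Q[s]
                      S = C[s] @ Pp @ C[s].T + R[s]
                      K = Pp @ C[s].T @ np.linalg.inv(S)
                      means.append(xp + K @ (y - C[s] @ xp))
                      covs.append((np.eye(1) - K @ C[s]) @ Pp)
                      prior_s = float(p_s @ Z[:, s])
                      logw.append(np.log(prior_s) + gauss_loglik(y, C[s] @ xp, S))
                  logw = np.array(logw)
                  w = np.exp(logw - logw.max())
                  p_s = w / w.sum()
                  # moment-match the per-regime Gaussians back to a single Gaussian
                  x = sum(p_s[s] * means[s] for s in range(n_s))
                  P = sum(p_s[s] * (covs[s] + (means[s] - x) @ (means[s] - x).T)
                          for s in range(n_s))
                  yield p_s.copy(), x.copy()

    A call such as for p_switch, x_est in slds_filter("vitals.csv", column="hr") would stream regime posteriors and state estimates; the file name and column are, again, invented.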
  • CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit 

    Veaux, Christophe; Yamagishi, Junichi; MacDonald, Kirsten
    This CSTR VCTK Corpus (Centre for Speech Technology Voice Cloning Toolkit) includes speech data uttered by 109 native speakers of English with various accents. Each speaker reads out about 400 sentences, most of which were ...
  • DiapixFL 

    Cooke, Martin; Garcia Lecumberri, Maria Luisa; Wester, Mirjam (LISTA Consortium: (i) Language and Speech Laboratory, Universidad del Pais Vasco, Spain and Ikerbasque, Spain; (ii) Centre for Speech Technology Research, University of Edinburgh, UK; (iii) KTH Royal Institute of Technology, Sweden; (iv) Institute of Computer Science, FORTH, Greece., 2013-10-01)
    DiapixFL consists of speakers whose first language (L1) is either English or Spanish solving a "spot-the-difference" task in both their L1 and their second language (L2, which for native English talkers is Spanish, and for ...
  • EEMBC Benchmark Suite Simulations 

    Tomusk, Erik
    This dataset contains gem5 simulation results and McPAT power consumption figures for 3000 out-of-order CPU cores running EEMBC DENBench (digital entertainment) and Networking 2.0 benchmarks. The benchmarks have been ...
  • Experiment materials for "Disfluencies in change detection in natural, vocoded and synthetic speech." 

    Dall, Rasmus; Wester, Mirjam; Corley, Martin
    This dataset is associated with the DiSS paper "Disfluencies in change detection in natural, vocoded and synthetic speech." In this paper we investigate the effect of filled pauses, a discourse marker and silent ...
  • Experiment materials for "Testing the consistency assumption: pronunciation variant forced alignment in read and spontaneous speech synthesis" 

    Dall, Rasmus
    The Matlab scripts are used to analyse the results files in the results folder. The Test_Wavs are the wav files used for the listening test, divided by group, together with the pre-test test files.
  • Experiment materials for "The temporal delay hypothesis: Natural, vocoded and synthetic speech." 

    Corley, Martin; Dall, Rasmus; Wester, Mirjam
    Including disfluencies in synthetic speech is being explored as a way of making synthetic speech sound more natural and conversational. How to measure whether the resulting speech is actually more natural, however, is not ...
  • Hiberlink project data 

    Tobin, Richard; Grover, Claire; Zhou, Ke
    Summary files (in XML format) listing URIs referenced in papers from arXiv, Elsevier, and PMC respectively (approximately 1 million URIs from 3 million papers in total). The focus of the Hiberlink project was to assess the ...
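    For orientation, a minimal Python sketch of how per-paper URI summaries of this kind might be read is given below. The actual schema of the Hiberlink summary files is not described in this listing, so the element name and file name used here are assumptions only.

      # Illustrative only: reading a URI summary file of the kind described above.
      # The real Hiberlink schema is not given here, so the element name 'uri'
      # and the file name in the usage comment are assumptions.
      import xml.etree.ElementTree as ET
      from collections import Counter
      from urllib.parse import urlparse

      def count_uri_hosts(xml_path):
          """Count referenced URIs per host across one summary file."""
          hosts = Counter()
          # iterparse keeps memory bounded for files listing very many URIs
          for _, elem in ET.iterparse(xml_path, events=("end",)):
              if elem.tag == "uri" and elem.text:
                  uri = elem.text.strip()
                  hosts[urlparse(uri).netloc or uri] += 1
              elem.clear()
          return hosts

      # Hypothetical usage:
      # print(count_uri_hosts("arxiv_summary.xml").most_common(10))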
  • The Human Know-How Dataset 

    Pareti, Paolo
    The Human Know-How Dataset describes 211,696 human activities from many different domains. These activities are decomposed into 2,609,236 entities (each with an English textual label). These entities represent over two ...
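    To make the "activities decomposed into labelled entities" structure concrete, a tiny in-memory model is sketched below. The dataset's actual representation and field names are not given in this listing, so this structure and the example content are assumptions, not the published format.

      # Illustrative sketch only: activities decomposed into labelled entities.
      # Field names and example content are invented, not the dataset's format.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Entity:
          label: str                                            # English textual label
          parts: List["Entity"] = field(default_factory=list)   # sub-steps / requirements

      @dataclass
      class Activity:
          label: str
          decomposition: List[Entity] = field(default_factory=list)

      # Invented example:
      tea = Activity("Make a cup of tea", [
          Entity("Boil water"),
          Entity("Add a tea bag to a cup"),
          Entity("Pour the water", parts=[Entity("Let it steep")]),
      ])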
  • Human vs Machine Spoofing 

    Wester, Mirjam; Wu, Zhizheng; Yamagishi, Junichi
    Listening test materials for "Human vs Machine Spoofing Detection on Wideband and Narrowband data." They include lists of the speech material selected from the SAS spoofing database and the listeners' responses. The main ...
  • Hurricane natural speech corpus 

    Cooke, Martin; Mayo, Catherine; Valentini-Botinhao, Cassia (LISTA Consortium: (i) Language and Speech Laboratory, Universidad del Pais Vasco, Spain and Ikerbasque, Spain; (ii) Centre for Speech Technology Research, University of Edinburgh, UK; (iii) KTH Royal Institute of Technology, Sweden; (iv) Institute of Computer Science, FORTH, Greece, 2013-10-01)
    Single male native British-English talker recorded producing three speech sets (Harvard sentences, Modified Rhyme Test, news sentences) in quiet and while the talker was listening to speech-shaped noise at 84dB(A).
  • Listening test materials for "A study of speaker adaptation for DNN-based speech synthesis" 

    Wu, Zhizheng
    The dataset contains the testing stimuli and listeners' MUSHRA test responses for the Interspeech 2015 paper, "A study of speaker adaptation for DNN-based speech synthesis". In this paper, we conduct an experimental analysis ...
  • Listening test materials for "A template-based approach for speech synthesis intonation generation using LSTMs" 

    Ronanki, Srikanth; Henter, Gustav Eje; Wu, Zhizheng; King, Simon
    This data release contains listening test materials associated with the paper "A template-based approach for speech synthesis intonation generation using LSTMs", presented at Interspeech 2016 in San Francisco, USA.
  • Listening test materials for "Deep neural network context embeddings for model selection in rich-context HMM synthesis" 

    Merritt, Thomas
    These are the listening test materials for "Deep neural network context embeddings for model selection in rich-context HMM synthesis". They include the waveforms played to listeners as well as the listeners' responses.
  • Listening test materials for "Deep neural network-guided unit selection synthesis" 

    Merritt, Thomas; Clark, Robert; Wu, Zhizheng; Yamagishi, Junichi; King, Simon
    These are the listening test materials for "Deep neural network-guided unit selection synthesis". They include the waveforms played to listeners as well as the listeners' responses.
  • Listening test materials for "Evaluating comprehension of natural and synthetic conversational speech" 

    Wester, Mirjam; Watts, Oliver; Henter, Gustav Eje
    Current speech synthesis methods typically operate on isolated sentences and lack convincing prosody when generating longer segments of speech. Similarly, prevailing TTS evaluation paradigms, such as intelligibility ...