| Unnamed: 0 | title | category | summary | theme |
|---|---|---|---|---|
3,200 |
On the Conditioning of the Spherical Harmonic Matrix for Spatial Audio Applications
|
eess.AS
|
In this paper, we attempt to study the conditioning of the Spherical Harmonic
Matrix (SHM), which is widely used in the discrete, limited order orthogonal
representation of sound fields. The SHM has been widely used in audio
applications such as spatial sound reproduction using loudspeakers and the
orthogonal representation of Head-Related Transfer Functions (HRTFs). The conditioning
behaviour of the SHM depends on the sampling positions chosen in the 3D space.
Identification of the optimal sampling points in the continuous 3D space that
results in a well-conditioned SHM for any number of sampling points is a highly
challenging task. In this work, we attempt to solve a discrete version of the
above problem using optimization-based techniques. The discrete problem is to
identify the optimal sampling points, from a discrete set of densely sampled
positions in the 3D space, that minimize the condition number of the SHM. This
method has been subsequently utilized for identifying the geometry of
loudspeakers in spatial sound reproduction, and in the selection of
spatial sampling configurations for HRTF measurement. The application specific
requirements have been formulated as additional constraints of the optimization
problem. Recently developed mixed-integer optimization solvers have been used
in solving the formulated problem. The performance of the obtained sampling
position in each application is compared with the existing configurations.
Objective measures like condition number, D-measure, and spectral distortion
are used to study the performance of the sampling configurations resulting from
the proposed and the existing methods. It is observed that the proposed
solution is able to find sampling points that result in a better-conditioned
SHM while satisfying all the application-specific requirements.
|
electrics
|
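The abstract above turns on the condition number of the spherical harmonic matrix for a chosen set of sampling directions. Below is a minimal sketch of that quantity, assuming SciPy's `sph_harm` convention; the order (3) and the random directions are illustrative placeholders, not the paper's configuration.

```python
# Sketch: condition number of a spherical harmonic matrix for a set of
# sampling directions (illustrative assumptions, not the paper's setup).
import numpy as np
from scipy.special import sph_harm

def shm_condition_number(azimuths, colatitudes, order):
    """Build the SHM up to `order` for the given directions and return its
    2-norm condition number."""
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            # SciPy convention: sph_harm(m, n, azimuth, colatitude)
            cols.append(sph_harm(m, n, azimuths, colatitudes))
    Y = np.column_stack(cols)          # shape: (num_points, (order+1)**2)
    return np.linalg.cond(Y)

# Example: 16 random directions for order 3 ((3+1)^2 = 16 basis functions)
rng = np.random.default_rng(0)
az = rng.uniform(0, 2 * np.pi, 16)
col = np.arccos(rng.uniform(-1, 1, 16))   # uniformly distributed on the sphere
print(shm_condition_number(az, col, order=3))
```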
3,201 |
Singing voice correction using canonical time warping
|
eess.AS
|
Expressive singing voice correction is an appealing but challenging problem.
A robust time-warping algorithm which synchronizes two singing recordings can
provide a promising solution. We thereby propose to address the problem by
canonical time warping (CTW) which aligns amateur singing recordings to
professional ones. A new pitch contour is generated given the alignment
information, and a pitch-corrected singing voice is synthesized back through the
vocoder. The objective evaluation shows that CTW is robust against
pitch-shifting and time-stretching effects, and the subjective test
demonstrates that CTW outperforms the other methods, including DTW and
commercial auto-tuning software. Finally, we demonstrate the applicability of
the proposed method in a practical, real-world scenario.
|
electrics
|
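As a rough illustration of the alignment step described above, the sketch below aligns two pitch contours with plain DTW via `librosa.sequence.dtw`; canonical time warping itself adds learned linear projections and is not reproduced here. The contours are synthetic placeholders.

```python
# Sketch: DTW alignment of an amateur pitch contour to a professional one,
# a simpler stand-in for the CTW alignment used in the paper.
import numpy as np
import librosa

amateur = np.abs(np.random.randn(200)) * 50 + 200        # Hz, placeholder
professional = np.abs(np.random.randn(180)) * 50 + 200   # Hz, placeholder

# librosa.sequence.dtw expects feature matrices of shape (d, T)
D, wp = librosa.sequence.dtw(X=amateur[np.newaxis, :],
                             Y=professional[np.newaxis, :],
                             metric='euclidean')
wp = wp[::-1]            # warping path from start to end: (i, j) index pairs

# Map the professional pitch onto the amateur time axis via the path
corrected = np.empty_like(amateur)
corrected[wp[:, 0]] = professional[wp[:, 1]]
```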
3,202 |
Raga Identification using Repetitive Note Patterns from prescriptive notations of Carnatic Music
|
eess.AS
|
Carnatic music, a form of Indian Art Music, has relied on an oral tradition
for transferring knowledge across several generations. Over the last two
hundred years, the use of prescriptive notations has been adopted for learning,
sight-playing and sight-singing. Prescriptive notations offer generic
guidelines for a raga rendition and do not include information about the
ornamentations or the gamakas, which are considered to be critical for
characterizing a raga. In this paper, we show that prescriptive notations
contain raga attributes and can reliably identify a raga of Carnatic music from
its octave-folded prescriptive notations. We restrict the notations to 7 notes
and suppress the finer note position information. A dictionary based approach
captures the statistics of repetitive note patterns within a raga notation. The
proposed stochastic models of repetitive note patterns (SMRNP, in short),
obtained from raga notations of known compositions, outperform the
state-of-the-art melody-based raga identification technique on equivalent
melodic data corresponding to the same compositions. This in turn shows that for
Carnatic music, the note transitions and movements have a greater role in
defining the raga structure than the exact note positions.
|
electrics
|
3,203 |
Enhancement of Noisy Speech Exploiting an Exponential Model Based Threshold and a Custom Thresholding Function in Perceptual Wavelet Packet Domain
|
eess.AS
|
For enhancement of noisy speech, a method of threshold determination based on
modeling of Teager energy (TE) operated perceptual wavelet packet (PWP)
coefficients of the noisy speech by an exponential distribution is presented. A
custom thresholding function based on the combination of mu-law and semisoft
thresholding functions is designed and exploited to apply the statistically
derived threshold upon the PWP coefficients. The effectiveness of the proposed
method is evaluated for car and multi-talker babble noise corrupted speech
signals through performing extensive simulations using the NOIZEUS database.
The proposed method outperforms some of the state-of-the-art speech enhancement
methods at both high and low SNR levels in terms of the standard objective
measures and the subjective evaluations including formal listening tests.
|
electrics
|
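The thresholding step described above combines mu-law and semisoft rules; as a hedged sketch, the function below implements only the standard semisoft thresholding ingredient (the paper's exact blend and its exponential-model threshold are not reproduced). The thresholds `lam1` and `lam2` are placeholders.

```python
# Sketch: semisoft thresholding of wavelet(-packet) coefficients.
import numpy as np

def semisoft_threshold(coeffs, lam1, lam2):
    """Zero small coefficients, keep large ones, shrink the ones in between."""
    c = np.asarray(coeffs, dtype=float)
    out = np.zeros_like(c)
    mid = (np.abs(c) > lam1) & (np.abs(c) <= lam2)
    big = np.abs(c) > lam2
    out[mid] = np.sign(c[mid]) * lam2 * (np.abs(c[mid]) - lam1) / (lam2 - lam1)
    out[big] = c[big]
    return out

denoised = semisoft_threshold(np.random.randn(1024), lam1=0.5, lam2=1.5)
```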
3,204 |
Precise Detection of Speech Endpoints Dynamically: A Wavelet Convolution based approach
|
eess.AS
|
Precise detection of speech endpoints is an important factor which affects
the performance of the systems where speech utterances need to be extracted
from the speech signal such as Automatic Speech Recognition (ASR) system.
Existing endpoint detection (EPD) methods mostly use Short-Term Energy (STE),
Zero-Crossing Rate (ZCR) based approaches and their variants. But STE and ZCR
based EPD algorithms often fail in the presence of Non-speech Sound Artifacts
(NSAs) produced by the speakers. Algorithms based on pattern recognition and
classification techniques have also been proposed, but they require labeled data
for training. A new algorithm termed Wavelet Convolution based Speech Endpoint
Detection (WCSEPD) is proposed in this article to extract speech endpoints.
WCSEPD decomposes the speech signal into high-frequency and low-frequency
components using wavelet convolution and computes entropy based thresholds for
the two frequency components. The low-frequency thresholds are used to extract
voiced speech segments, whereas the high-frequency thresholds are used to
extract the unvoiced speech segments by filtering out the NSAs. WCSEPD does not
require any labeled data for training and can automatically extract speech
segments. Experiment results show that the proposed algorithm precisely
extracts speech endpoints in the presence of NSAs.
|
electrics
|
3,205 |
Simulating dysarthric speech for training data augmentation in clinical speech applications
|
eess.AS
|
Training machine learning algorithms for speech applications requires large,
labeled training data sets. This is problematic for clinical applications where
obtaining such data is prohibitively expensive because of privacy concerns or
lack of access. As a result, clinical speech applications are typically
developed using small data sets with only tens of speakers. In this paper, we
propose a method for simulating training data for clinical applications by
transforming healthy speech to dysarthric speech using adversarial training. We
evaluate the efficacy of our approach using both objective and subjective
criteria. We present the transformed samples to five experienced
speech-language pathologists (SLPs) and ask them to identify the samples as
healthy or dysarthric. The results reveal that the SLPs identify the
transformed speech as dysarthric 65% of the time. In a pilot classification
experiment, we show that by using the simulated speech samples to balance an
existing dataset, the classification accuracy improves by about 10% after data
augmentation.
|
electrics
|
3,206 |
Angular Softmax Loss for End-to-end Speaker Verification
|
eess.AS
|
End-to-end speaker verification systems have received increasing interest.
The traditional i-vector approach trains a generative model (basically a
factor-analysis model) to extract i-vectors as speaker embeddings. In contrast,
the end-to-end approach directly trains a discriminative model (often a neural
network) to learn discriminative speaker embeddings; a crucial component is the
training criterion. In this paper, we use angular softmax (A-softmax), which
was originally proposed for face verification, as the loss function for feature
learning in end-to-end speaker verification. By introducing margins between
classes into softmax loss, A-softmax can learn more discriminative features
than softmax loss and triplet loss, and at the same time is easy and stable
to use. We make two contributions in this work. 1) We introduce A-softmax
loss into end-to-end speaker verification and achieve significant EER
reductions. 2) We find that the combination of using A-softmax in training the
front-end and using PLDA in the back-end scoring further boosts the performance
of end-to-end systems under short utterance condition (short in both enrollment
and test). Experiments are conducted on part of the $Fisher$ dataset and
demonstrate the improvements obtained with A-softmax.
|
electrics
|
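For readers unfamiliar with the loss, here is a minimal PyTorch sketch of an A-softmax (SphereFace-style) margin layer. It follows the standard formulation with normalized class weights and a piecewise angular margin, but omits the usual margin annealing, so it is an illustration rather than the paper's exact recipe; the margin m=4 and layer sizes are assumptions.

```python
# Sketch: simplified angular softmax (A-softmax) loss for embedding training.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASoftmaxLoss(nn.Module):
    def __init__(self, embedding_dim, num_classes, m=4):
        super().__init__()
        self.m = m
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim))

    def forward(self, x, labels):
        w = F.normalize(self.weight, dim=1)                   # unit-norm class weights
        cos_theta = F.linear(F.normalize(x, dim=1), w)
        cos_theta = cos_theta.clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos_theta)
        k = torch.floor(self.m * theta / math.pi)
        sign = 1.0 - 2.0 * (k % 2)                            # (-1)^k
        psi = sign * torch.cos(self.m * theta) - 2.0 * k      # monotone margin function
        x_norm = x.norm(dim=1, keepdim=True)
        logits = cos_theta * x_norm
        one_hot = F.one_hot(labels, logits.size(1)).bool()
        logits = torch.where(one_hot, psi * x_norm, logits)   # margin only on target class
        return F.cross_entropy(logits, labels)

# Toy usage: 8 embeddings of dimension 128, 10 speaker classes
loss = ASoftmaxLoss(128, 10)(torch.randn(8, 128), torch.randint(0, 10, (8,)))
```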
3,207 |
RTF-Based Binaural MVDR Beamformer Exploiting an External Microphone in a Diffuse Noise Field
|
eess.AS
|
Besides suppressing all undesired sound sources, an important objective of a
binaural noise reduction algorithm for hearing devices is the preservation of
the binaural cues, aiming at preserving the spatial perception of the acoustic
scene. A well-known binaural noise reduction algorithm is the binaural minimum
variance distortionless response beamformer, which can be steered using the
relative transfer function (RTF) vector of the desired source, relating the
acoustic transfer functions between the desired source and all microphones to a
reference microphone. In this paper, we propose a computationally efficient
method to estimate the RTF vector in a diffuse noise field, requiring an
additional microphone that is spatially separated from the head-mounted
microphones. Assuming that the spatial coherence between the noise components
in the head-mounted microphone signals and the additional microphone signal is
zero, we show that an unbiased estimate of the RTF vector can be obtained.
Based on real-world recordings, experimental results for several reverberation
times show that the proposed RTF estimator outperforms the widely used RTF
estimator based on covariance whitening and a simple biased RTF estimator in
terms of noise reduction and binaural cue preservation performance.
|
electrics
|
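To make the beamformer referenced above concrete, here is a minimal per-frequency-bin MVDR sketch steered by an RTF vector; the noise covariance and RTF below are placeholders, and the paper's actual contribution, the external-microphone RTF estimator, is not reproduced.

```python
# Sketch: MVDR weights steered by a relative transfer function (RTF) vector
# for one frequency bin: w = Phi_n^{-1} h / (h^H Phi_n^{-1} h).
import numpy as np

def mvdr_weights(noise_cov, rtf):
    num = np.linalg.solve(noise_cov, rtf)
    return num / (rtf.conj() @ num)

M = 4                                                # number of microphones
Phi_n = np.eye(M, dtype=complex)                     # placeholder noise covariance
h = np.array([1.0, 0.9 - 0.2j, 0.8 + 0.1j, 0.7j])    # placeholder RTF, reference mic = 1
w = mvdr_weights(Phi_n, h)
# Beamformer output for a stacked microphone STFT vector x(f, t): y = w.conj() @ x
```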
3,208 |
Independent Low-Rank Matrix Analysis Based on Time-Variant Sub-Gaussian Source Model
|
eess.AS
|
Independent low-rank matrix analysis (ILRMA) is a fast and stable method for
blind audio source separation. Conventional ILRMAs assume time-variant
(super-)Gaussian source models, which can only represent signals that follow a
super-Gaussian distribution. In this paper, we focus on ILRMA based on a
generalized Gaussian distribution (GGD-ILRMA) and propose a new type of
GGD-ILRMA that adopts a time-variant sub-Gaussian distribution for the source
model. By using a new update scheme called generalized iterative projection for
homogeneous source models, we obtain a convergence-guaranteed update rule for
demixing spatial parameters. In the experimental evaluation, we show the
versatility of the proposed method, i.e., the proposed time-variant
sub-Gaussian source model can be applied to various types of source signals.
|
electrics
|
3,209 |
Advancing Multi-Accented LSTM-CTC Speech Recognition using a Domain Specific Student-Teacher Learning Paradigm
|
eess.AS
|
Non-native speech causes automatic speech recognition systems to degrade in
performance. Past strategies to address this challenge have considered model
adaptation, accent classification with model selection, alternate
pronunciation lexicons, etc. In this study, we consider a recurrent neural
network (RNN) with connectionist temporal classification (CTC) cost function
trained on multi-accent English data including US (Native), Indian and Hispanic
accents. We exploit dark knowledge from a model trained with the multi-accent
data to train student models under the guidance of both a teacher model and CTC
cost of the target transcription. We show that transferring knowledge from a single
RNN-CTC trained model to a student model yields better performance than
the stand-alone teacher model. Since the outputs of different trained CTC
models are not necessarily aligned, it is not possible to simply use an
ensemble of CTC teacher models. To address this problem, we train accent
specific models under the guidance of a single multi-accent teacher, which
results in having multiple aligned and trained CTC models. Furthermore, we
train a student model under the supervision of the accent-specific teachers,
resulting in an even more complementary model, which achieves a +20.1%
relative Character Error Rate (CER) reduction compared to the baseline trained
without any teacher. Having this effective multi-accent model, we can achieve
further improvement for each accent by adapting the model to each accent. Using
the accent-specific model's outputs to regularize the adaptation process (i.e., a
knowledge distillation version of Kullback-Leibler (KL) divergence) results in
even superior performance compared to the conventional approach using general
teacher models.
|
electrics
|
3,210 |
Evaluating MCC-PHAT for the LOCATA Challenge - Task 1 and Task 3
|
eess.AS
|
This report presents test results for the LOCATA challenge
\cite{lollmann2018locata} using the recently developed MCC-PHAT (multichannel
cross correlation - phase transform) sound source localization method. The
specific tasks addressed are, respectively, the localization of a single static
speaker and of a single moving speaker using sound recordings from a variety of
static microphone arrays. The test results are compared with those of the MUSIC
(multiple signal classification) method. The optimal subpattern assignment
(OSPA) metric is used for quantitative performance evaluation. In most cases,
the MCC-PHAT method demonstrates more reliable and accurate location estimates,
in comparison with those of the MUSIC method.
|
electrics
|
3,211 |
Error Reduction Network for DBLSTM-based Voice Conversion
|
eess.AS
|
So far, many of the deep learning approaches for voice conversion produce
good quality speech by using a large amount of training data. This paper
presents a Deep Bidirectional Long Short-Term Memory (DBLSTM) based voice
conversion framework that can work with a limited amount of training data. We
propose to implement a DBLSTM based average model that is trained with data
from many speakers. Then, we propose to perform adaptation with a limited
amount of target data. Last but not least, we propose an error reduction
network that can improve the voice conversion quality even further. The
proposed framework is motivated by three observations. Firstly, DBLSTM can
achieve remarkable voice conversion by considering the long-term dependencies
of the speech utterance. Secondly, a DBLSTM-based average model can be easily
adapted with a small amount of data to produce speech that sounds closer to
the target. Thirdly, an error reduction network can be trained with a small
amount of training data, and can improve the conversion quality effectively.
The experiments show that the proposed voice conversion framework is flexible
to work with limited training data and outperforms the traditional frameworks
in both objective and subjective evaluations.
|
electrics
|
3,212 |
Concatenated Identical DNN (CI-DNN) to Reduce Noise-Type Dependence in DNN-Based Speech Enhancement
|
eess.AS
|
Estimating time-frequency domain masks for speech enhancement using deep
learning approaches has recently become a popular field of research. In this
paper, we propose a mask-based speech enhancement framework by using
concatenated identical deep neural networks (CI-DNNs). The idea is that a
single DNN is trained under multiple input and output signal-to-noise power
ratio (SNR) conditions, using targets that provide a moderate SNR gain with
respect to the input and therefore achieve a balance between speech component
quality and noise suppression. We concatenate this single DNN several times
without any retraining to provide enough noise attenuation. Simulation results
show that our proposed CI-DNN outperforms enhancement methods using classical
spectral weighting rules w.r.t. total speech quality and speech
intelligibility. Moreover, our approach shows similar or even slightly
better performance with far fewer trainable parameters compared with a
noisy-target single DNN approach of the same size. A comparison to the
conventional clean-target single DNN approach shows that our proposed CI-DNN is
better in speech component quality and much better in residual noise component
quality. Most importantly, our new CI-DNN generalized best to an unseen noise
type when compared to the other tested deep learning approaches.
|
electrics
|
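The concatenation idea is simple enough to sketch: a single trained mask estimator is applied repeatedly, each stage enhancing the previous stage's output, with no retraining. The `toy_mask_dnn` below is only a stand-in for a trained model, and three stages is an arbitrary choice.

```python
# Sketch: applying one trained mask-estimation model several times in a row.
import numpy as np

def ci_dnn_enhance(noisy_mag, mask_dnn, num_stages=3):
    """Feed the output of one enhancement stage back as the next stage's input."""
    enhanced = noisy_mag
    for _ in range(num_stages):
        mask = mask_dnn(enhanced)          # values in [0, 1], same shape
        enhanced = mask * enhanced
    return enhanced

# Toy stand-in for a trained DNN: a fixed soft mask (illustration only)
toy_mask_dnn = lambda spec: np.clip(spec / (spec + 1.0), 0.0, 1.0)
out = ci_dnn_enhance(np.abs(np.random.randn(257, 100)), toy_mask_dnn)
```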
3,213 |
A Proper version of Synthesis-based Sparse Audio Declipper
|
eess.AS
|
Methods based on sparse representation have found great use in the recovery
of audio signals degraded by clipping. The state of the art in declipping has
been achieved by the SPADE algorithm by Kiti\'c et al. (LVA/ICA 2015). Our
recent study (LVA/ICA2018) has shown that although the original S-SPADE can be
improved such that it converges significantly faster than the A-SPADE, the
restoration quality is significantly worse. In the present paper, we propose a
new version of S-SPADE. Experiments show that the novel version of S-SPADE
outperforms its old version in terms of restoration quality, and that it is
comparable with A-SPADE while being even slightly faster.
|
electrics
|
3,214 |
Speech Coding, Speech Interfaces and IoT - Opportunities and Challenges
|
eess.AS
|
Recent speech and audio coding standards such as 3GPP Enhanced Voice Services
match the foreseeable needs and requirements in transmission of speech and
audio, when using current transmission infrastructure and applications. Trends
in Internet-of-Things technology and developments in personal digital assistants
(PDAs), however, beg us to consider future requirements for speech and audio
codecs. The opportunities and challenges are here summarized in three concepts:
collaboration, unification and privacy. First, an increasing number of devices
will in the future be speech-operated, whereby the ability to focus voice
commands to a specific device becomes essential. We therefore need methods
which allow collaboration between devices, such that ambiguities can be
resolved. Second, such collaboration can be achieved with a unified and
standardized communication protocol between voice-operated devices. To achieve
such collaboration protocols, we need to develop distributed speech coding
technology for ad-hoc IoT networks. Finally, however, collaboration will
increase the demand for privacy protection in speech interfaces and it is
therefore likely that technologies for supporting privacy and generating trust
will be in high demand.
|
electrics
|
3,215 |
Building and Evaluation of a Real Room Impulse Response Dataset
|
eess.AS
|
This paper presents BUT ReverbDB - a dataset of real room impulse responses
(RIR), background noises and re-transmitted speech data. The retransmitted data
includes LibriSpeech test-clean, 2000 HUB5 English evaluation and part of 2010
NIST Speaker Recognition Evaluation datasets. We provide a detailed description
of RIR collection (hardware, software, post-processing) that can serve as a
"cook-book" for similar efforts. We also validate BUT ReverbDB in two sets of
automatic speech recognition (ASR) experiments and draw conclusions for
augmenting ASR training data with real and artificially generated RIRs. We show
that a limited number of real RIRs, carefully selected to match the target
environment, provide results comparable to a large number of artificially
generated RIRs, and that both sets can be combined to achieve the best ASR
results. The dataset is distributed for free under a non-restrictive license
and it currently contains data from 8 rooms, a number that is growing. The
distribution package also contains a Kaldi-based recipe for augmenting publicly
available AMI close-talk meeting data and testing the results on an AMI single
distant microphone set, allowing others to reproduce our experiments.
|
electrics
|
3,216 |
Non linear time compression of clear and normal speech at high rates
|
eess.AS
|
We compare a series of time compression methods applied to normal and clear
speech. First we evaluate a linear (uniform) method applied to these styles as
well as to naturally-produced fast speech. We found, in line with the
literature, that unprocessed fast speech was less intelligible than linearly
compressed normal speech. Fast speech was also less intelligible than
compressed clear speech but at the highest rate (three times faster than
normal) the advantage of clear over fast speech was lost. To test whether this
was due to shorter speech duration, we evaluate, in our second experiment, a
range of methods that compress speech and silence at different rates. We found
that even when the overall duration of speech and silence is kept the same
across styles, compressed normal speech is still more intelligible than
compressed clear speech. Compressing silence twice as much as speech improved
results further for normal speech, with very little additional computational
cost.
|
electrics
|
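For the uniform (linear) compression baseline mentioned above, one common off-the-shelf route is librosa's phase-vocoder time stretching (not necessarily the paper's implementation); the sketch below compresses a placeholder tone to one third of its duration. The non-linear variants that compress speech and silence at different rates are not shown.

```python
# Sketch: uniform time compression with librosa's time_stretch.
import librosa

sr = 22050
y = librosa.tone(440.0, sr=sr, duration=2.0)          # placeholder 2 s test tone
y_fast = librosa.effects.time_stretch(y, rate=3.0)    # ~3x faster, pitch preserved
print(len(y) / sr, len(y_fast) / sr)                  # original vs compressed duration
```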
3,217 |
Speaker Verification By Partial AUC Optimization With Mahalanobis Distance Metric Learning
|
eess.AS
|
Receiver operating characteristic (ROC) and detection error tradeoff (DET)
curves are two widely used evaluation metrics for speaker verification. They
are equivalent since the latter can be obtained by transforming the former's
true-positive y-axis to a false-negative y-axis and then re-scaling both axes by
a probit operator. Real-world speaker verification systems, however, usually
work on part of the ROC curve instead of the entire ROC curve given an
application. Therefore, we propose in this paper to use the area under part of
the ROC curve (pAUC) as a more efficient evaluation metric for speaker
verification. A Mahalanobis distance metric learning based back-end is applied
to optimize pAUC, where the Mahalanobis distance metric learning guarantees
that the optimization objective of the back-end is a convex one so that the
global optimum solution is achievable. To improve the performance of the
state-of-the-art speaker verification systems with the proposed back-end, we
further propose two feature preprocessing techniques based on
length-normalization and probabilistic linear discriminant analysis
respectively. We evaluate the proposed systems on the major languages of NIST
SRE16 and the core tasks of SITW. Experimental results show that the proposed
back-end outperforms the state-of-the-art speaker verification back-ends in
terms of seven evaluation metrics.
|
electrics
|
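As a pointer for readers who want to compute the metric, scikit-learn's `roc_auc_score` exposes a `max_fpr` argument that yields a standardized partial AUC over the low false-positive region; the scores and labels below are synthetic placeholders, and the FPR cut-off of 0.01 is an arbitrary choice, not necessarily the paper's operating region.

```python
# Sketch: partial AUC over the low false-positive part of the ROC curve.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)                 # 1 = target trial (placeholder)
scores = labels + rng.normal(scale=1.0, size=1000)     # placeholder verification scores

# Standardized partial AUC over the region with false-positive rate <= 0.01
pauc = roc_auc_score(labels, scores, max_fpr=0.01)
print(f"pAUC (FPR <= 0.01): {pauc:.3f}")
```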
3,218 |
Overlap-Add Windows with Maximum Energy Concentration for Speech and Audio Processing
|
eess.AS
|
Processing of speech and audio signals with time-frequency representations
requires windowing methods which allow perfect reconstruction of the original
signal and where processing artifacts have a predictable behavior. The most
common approach for this purpose is overlap-add windowing, where signal
segments are windowed before and after processing. Commonly used windows
include the half-sine and a Kaiser-Bessel derived window. The latter is an
approximation of the discrete prolate spheroidal sequence, and thus a maximum
energy concentration window, adapted for overlap-add. We demonstrate that
performance can be improved by including the overlap-add structure as a
constraint in the optimization of the maximum energy concentration criterion. The
same approach can be used to find further special cases such as optimal
low-overlap windows. Our experiments demonstrate that the proposed windows
provide notable improvements in terms of reduction in side-lobe magnitude.
|
electrics
|
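Because the setting above windows each segment both before and after processing, perfect reconstruction requires the squared window to overlap-add to a constant. The sketch below checks this Princen-Bradley-type condition for the half-sine window at 50% overlap with SciPy; the window length is an arbitrary choice.

```python
# Sketch: verifying the overlap-add (COLA) condition for the squared half-sine window.
import numpy as np
from scipy.signal import check_COLA

N = 512
half_sine = np.sin(np.pi * (np.arange(N) + 0.5) / N)

# Windowing is applied at analysis and synthesis, so the squared window must
# overlap-add to a constant at 50% overlap for perfect reconstruction.
print(check_COLA(half_sine ** 2, nperseg=N, noverlap=N // 2))   # expected: True
```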
3,219 |
Active Acoustic Source Tracking Exploiting Particle Filtering and Monte Carlo Tree Search
|
eess.AS
|
In this paper, we address the task of active acoustic source tracking as part
of robotic path planning. It denotes the planning of sequences of robotic
movements to enhance tracking results of acoustic sources, e.g., talking
humans, by fusing observations from multiple positions. Essentially, two
strategies are possible: short-term planning, which results in greedy behavior,
and long-term planning, which considers a sequence of possible future movements
of the robot and the source. Here, we focus on the second method as it might
improve tracking performance compared to greedy behavior and propose a flexible
path planning algorithm which exploits Monte Carlo Tree Search (MCTS) and
particle filtering based on a reward motivated by information-theoretic
considerations.
|
electrics
|
3,220 |
Irrelevant speech effect in open plan offices: A laboratory study
|
eess.AS
|
It now seems accepted that speech noise in open plan offices is the main
source of discomfort for employees. This work follows a series of studies
conducted at INRS France and INSA Lyon based on Hongisto's theoretical model
(2005) linking the Decrease in Performance (DP) and the Speech Transmission
Index (STI). This model predicts that for STI values between 0.7 and 1, which
corresponds to a speech signal with close to 100% intelligibility, the DP remains constant
at about 7%. The experiment that we carried out aimed to gather more
information about the relation between DP and STI, varying the STI value up to
0.9. Fifty-five subjects between 25 and 59 years old participated in the
experiment. First, some psychological parameters were observed in order to
better characterize the inter-subjects variability. Then, subjects performed a
Working-Memory (WM) task in silence and in four different sound conditions (STI
from 0.25 to 0.9). This task was customized by an initial measure of mnemonic
span so that two different cognitive loads (low/high) were equally defined for
each subject around their span value. Subjects also subjectively evaluated
their mental load and discomfort at the end of each WM task, for each noise
condition. Results show a significant effect of the STI on the DP, the mental
load and the discomfort. Furthermore, a significant correlation was found
between the age of subjects and their performance during the WM task. This
result was confirmed by a cluster analysis that enabled us to separate the
subjects into two different groups, one group of younger and more efficient
subjects and one group of older and less efficient subjects. General results
did not show any increase in DP for the highest STI values, so the "plateau"
hypothesis of Hongisto's model cannot be rejected on the basis of this
experiment.
|
electrics
|
3,221 |
USTCSpeech System for VOiCES from a Distance Challenge 2019
|
eess.AS
|
This document describes the speaker verification systems developed in the
Speech lab at the University of Science and Technology of China (USTC) for the
VOiCES from a Distance Challenge 2019. We develop the system for the Fixed
Condition on two public corpora, VoxCeleb and SITW. The frameworks of our
systems are based on the mainstream i-vector/PLDA and x-vector/PLDA algorithms.
|
electrics
|
3,222 |
An End-to-End Approach to Automatic Speech Assessment for Cantonese-speaking People with Aphasia
|
eess.AS
|
Conventional automatic assessment of pathological speech usually follows two
main steps: (1) extraction of pathology-specific features; (2) classification
or regression on extracted features. Given the great variety of speech and
language disorders, feature design is never a straightforward task, and yet it
is most crucial to the performance of assessment. This paper presents an
end-to-end approach to automatic speech assessment for Cantonese-speaking
People With Aphasia (PWA). The assessment is formulated as a binary
classification task to discriminate PWA with high scores of subjective
assessment from those with low scores. The sequence-to-one Recurrent Neural
Network with Gated Recurrent Unit (GRU-RNN) and Convolutional Neural Network
(CNN) models are applied to realize the end-to-end mapping from fundamental
speech features to the classification result. The pathology-specific features
used for assessment can be learned implicitly by the neural network model.
The Class Activation Mapping (CAM) method is utilized to visualize how those
features contribute to the assessment result. Our experimental results show
that the end-to-end approach outperforms the conventional two-step approach in
the classification task, and confirm that the CNN model is able to learn
impairment-related features that are similar to human-designed features. The
experimental results also suggest that the CNN model performs better than the
sequence-to-one GRU-RNN model in this specific task.
|
electrics
|
3,223 |
Room Geometry Estimation from Room Impulse Responses using Convolutional Neural Networks
|
eess.AS
|
We describe a new method to estimate the geometry of a room given room
impulse responses. The method utilises convolutional neural networks to
estimate the room geometry and uses the mean square error as the loss function.
In contrast to existing methods, we do not require the position or distance of
sources or receivers in the room. The method can be used with only a single
room impulse response between one source and one receiver for room geometry
estimation. The proposed estimation method achieves an average accuracy of six
centimetres. In addition, the proposed method is shown to be
computationally efficient compared to state-of-the-art methods.
|
electrics
|
3,224 |
Progressive Speech Enhancement with Residual Connections
|
eess.AS
|
This paper studies speech enhancement based on Deep Neural Networks. The
proposed architecture gradually follows the signal transformation during
enhancement by means of a visualization probe at each network block. Alongside
the process, the enhancement performance is visually inspected and evaluated in
terms of regression cost. This progressive scheme is based on Residual
Networks. During the process, we investigate a residual connection with a
constant number of channels, including internal state between blocks, and
adding progressive supervision. The insights provided by the interpretation of
the network enhancement process lead us to design an improved architecture for
the enhancement purpose. Following this strategy, we are able to obtain speech
enhancement results beyond the state-of-the-art, achieving a favorable
trade-off between dereverberation and the amount of spectral distortion.
|
electrics
|
3,225 |
Leveraging native language information for improved accented speech recognition
|
eess.AS
|
Recognition of accented speech is a long-standing challenge for automatic
speech recognition (ASR) systems, given the increasing worldwide population of
bi-lingual speakers with English as their second language. If we consider
foreign-accented speech as an interpolation of the native language (L1) and
English (L2), using a model that can simultaneously address both languages
would perform better at the acoustic level for accented speech. In this study,
we explore how an end-to-end recurrent neural network (RNN) trained system with
English and native languages (Spanish and Indian languages) could leverage data
of native languages to improve performance for accented English speech. To this
end, we examine pre-training with native languages, as well as multi-task
learning (MTL) in which the main task is trained with native English and the
secondary task is trained with Spanish or Indian languages. We show that the
proposed MTL model performs better than the pre-training approach and
outperforms a baseline model trained simply with English data. We suggest a new
setting for MTL in which the secondary task is trained with both English and
the native language, using the same output set. This proposed scenario yields
better performance with +11.95% and +17.55% character error rate gains over
baseline for Hispanic and Indian accents, respectively.
|
electrics
|
3,226 |
Latent Class Model with Application to Speaker Diarization
|
eess.AS
|
In this paper, we apply a latent class model (LCM) to the task of speaker
diarization. LCM is similar to Patrick Kenny's variational Bayes (VB) method in
that it uses soft information and avoids premature hard decisions in its
iterations. In contrast to the VB method, which is based on a generative model,
LCM provides a framework allowing both generative and discriminative models.
The discriminative property is realized through the use of i-vector (Ivec),
probabilistic linear discriminant analysis (PLDA), and a support vector
machine (SVM) in this work. Systems denoted as LCM-Ivec-PLDA, LCM-Ivec-SVM, and
LCM-Ivec-Hybrid are introduced. In addition, three further improvements are
applied to enhance its performance. 1) Adding neighbor windows to extract more
speaker information for each short segment. 2) Using a hidden Markov model to
avoid frequent speaker change points. 3) Using agglomerative hierarchical
clustering for initialization, providing hard and soft priors, in order to
overcome the problem of sensitivity to initialization. Experiments on the National
Institute of Standards and Technology Rich Transcription 2009 speaker
diarization database, under the condition of a single distant microphone, show
that the diarization error rate (DER) of the proposed methods has substantial
relative improvements compared with mainstream systems. Compared to the VB
method, the relative improvements of LCM-Ivec-PLDA, LCM-Ivec-SVM, and
LCM-Ivec-Hybrid systems are 23.5%, 27.1%, and 43.0%, respectively. Experiments
on our collected database, CALLHOME97, CALLHOME00 and SRE08 short2-summed trial
conditions also show that the proposed LCM-Ivec-Hybrid system has the best
overall performance.
|
electrics
|
3,227 |
Semi-Supervised Speech Emotion Recognition with Ladder Networks
|
eess.AS
|
Speech emotion recognition (SER) systems find applications in various fields
such as healthcare, education, and security and defense. A major drawback of
these systems is their lack of generalization across different conditions. This
problem can be solved by training models on large amounts of labeled data from
the target domain, which is expensive and time-consuming. Another approach is
to increase the generalization of the models. An effective way to achieve this
goal is by regularizing the models through multitask learning (MTL), where
auxiliary tasks are learned along with the primary task. These methods often
require the use of labeled data which is computationally expensive to collect
for emotion recognition (gender, speaker identity, age or other emotional
descriptors). This study proposes the use of ladder networks for emotion
recognition, which utilizes an unsupervised auxiliary task. The primary task is
a regression problem to predict emotional attributes. The auxiliary task is the
reconstruction of intermediate feature representations using a denoising
autoencoder. This auxiliary task does not require labels so it is possible to
train the framework in a semi-supervised fashion with abundant unlabeled data
from the target domain. This study shows that the proposed approach creates a
powerful framework for SER, achieving performance superior to fully
supervised single-task learning (STL) and MTL baselines. The approach is
implemented with several acoustic features, showing that ladder networks
generalize significantly better in cross-corpus settings. Compared to the STL
baselines, the proposed approach achieves relative gains in concordance
correlation coefficient (CCC) between 3.0% and 3.5% for within corpus
evaluations, and between 16.1% and 74.1% for cross corpus evaluations,
highlighting the power of the architecture.
|
electrics
|
3,228 |
Binaural LCMV Beamforming with Partial Noise Estimation
|
eess.AS
|
Besides reducing undesired sources (interfering sources and background
noise), another important objective of a binaural beamforming algorithm is to
preserve the spatial impression of the acoustic scene, which can be achieved by
preserving the binaural cues of all sound sources. While the binaural minimum
variance distortionless response (BMVDR) beamformer provides a good noise
reduction performance and preserves the binaural cues of the desired source, it
does not allow control over the reduction of the interfering sources and distorts
the binaural cues of the interfering sources and the background noise. Hence,
several extensions have been proposed. First, the binaural linearly constrained
minimum variance (BLCMV) beamformer uses additional constraints, enabling
control of the reduction of the interfering sources while preserving their
binaural cues. Second, the BMVDR with partial noise estimation (BMVDR-N) mixes
the output signals of the BMVDR with the noisy reference microphone signals,
enabling control of the binaural cues of the background noise. Merging the
advantages of both extensions, in this paper we propose the BLCMV with partial
noise estimation (BLCMV-N). We show that the output signals of the BLCMV-N can
be interpreted as a mixture of the noisy reference microphone signals and the
output signals of a BLCMV using an adjusted interference scaling parameter. We
provide a theoretical comparison between the BMVDR, the BLCMV, the BMVDR-N and
the proposed BLCMV-N in terms of noise and interference reduction performance
and binaural cue preservation. Experimental results using recorded signals as
well as the results of a perceptual listening test show that the BLCMV-N is
able to preserve the binaural cues of an interfering source (like the BLCMV),
while enabling a trade-off between noise reduction performance and binaural
cue preservation of the background noise (like the BMVDR-N).
|
electrics
|
3,229 |
Measuring the Effectiveness of Voice Conversion on Speaker Identification and Automatic Speech Recognition Systems
|
eess.AS
|
This paper evaluates the effectiveness of a Cycle-GAN based voice converter
(VC) on four speaker identification (SID) systems and an automated speech
recognition (ASR) system for various purposes. Audio samples converted by the
VC model are classified by the SID systems as the intended target at up to 46%
top-1 accuracy among more than 250 speakers. This encouraging result in
imitating the target styles led us to investigate if converted (synthetic)
samples can be used to improve ASR training. Unfortunately, adding synthetic
data to the ASR training set only marginally improves word and character error
rates. Our results indicate that even though VC models can successfully mimic
the style of target speakers as measured by SID systems, improving ASR training
with synthetic data from VC systems needs further research to establish its
efficacy.
|
electrics
|
3,230 |
The DKU-SMIIP System for NIST 2018 Speaker Recognition Evaluation
|
eess.AS
|
In this paper, we present the system submission for the NIST 2018 Speaker
Recognition Evaluation by DKU Speech and Multi-Modal Intelligent Information
Processing (SMIIP) Lab. We explore various kinds of state-of-the-art front-end
extractors as well as back-end modeling for text-independent speaker
verification. Our submitted primary systems employ multiple state-of-the-art
front-end extractors, including the MFCC i-vector, the DNN tandem i-vector, the
TDNN x-vector, and the deep ResNet. After the speaker embeddings are extracted, we
exploit several kinds of back-end modeling to perform variability compensation
and domain adaptation for mismatched training and testing conditions. The final
submitted system on the fixed condition obtains actual detection costs of 0.392
and 0.494 on CMN2 and VAST evaluation data respectively. After the official
evaluation, we further extend our experiments by investigating multiple
encoding layer designs and loss functions for the deep ResNet system.
|
electrics
|
3,231 |
The DKU System for the Speaker Recognition Task of the 2019 VOiCES from a Distance Challenge
|
eess.AS
|
In this paper, we present the DKU system for the speaker recognition task of
the VOiCES from a distance challenge 2019. We investigate the whole system
pipeline for the far-field speaker verification, including data pre-processing,
short-term spectral feature representation, utterance-level speaker modeling,
back-end scoring, and score normalization. Our best single system employs a
residual neural network trained with angular softmax loss. Also, the weighted
prediction error algorithm can further improve performance. It achieves 0.3668
minDCF and 5.58% EER on the evaluation set by using a simple cosine similarity
scoring. Finally, the submitted primary system obtains 0.3532 minDCF and 4.96%
EER on the evaluation set.
|
electrics
|
3,232 |
Localization Uncertainty in Time-Amplitude Stereophonic Reproduction
|
eess.AS
|
This article studies the effects of inter-channel time and level differences
in stereophonic reproduction on perceived localization uncertainty, which is
defined as how difficult it is for a listener to tell where a sound source is
located. Towards this end, a computational model of localization uncertainty is
proposed first. The model calculates inter-aural time and level difference
cues, and compares them to those associated with free-field point-like sources.
The comparison is carried out using a particular distance functional that
replicates the increased uncertainty observed experimentally with inconsistent
inter-aural time and level difference cues. The model is validated by formal
listening tests, achieving a Pearson correlation of 0.99. The model is then
used to predict localization uncertainty for stereophonic setups and a listener
in central and off-central positions. Results show that amplitude methods
achieve a slightly lower localization uncertainty for a listener positioned
exactly in the center of the sweet spot. As soon as the listener moves away
from that position, the situation reverses, with time-amplitude methods
achieving a lower localization uncertainty.
|
electrics
|
3,233 |
Black-box Attacks on Automatic Speaker Verification using Feedback-controlled Voice Conversion
|
eess.AS
|
Automatic speaker verification (ASV) systems in practice are greatly
vulnerable to spoofing attacks. The latest voice conversion technologies are
able to produce perceptually natural-sounding speech that mimics any target
speaker. However, the perceptual closeness to a speaker's identity may not be
enough to deceive an ASV system. In this work, we propose a framework that uses
the output scores of an ASV system as the feedback to a voice conversion
system. The attacker framework is a black-box adversary that steals one's voice
identity, because it does not require any knowledge about the ASV system but
the system outputs. Experiments conducted on the ASVspoof 2019 database
confirm that the proposed feedback-controlled voice conversion framework
produces adversarial samples that are more deceptive than the straightforward
voice conversion, thereby boosting the impostor ASV scores. Further, the
perceptual evaluation studies reveal that the converted speech does not degrade
voice quality compared to the baseline system.
|
electrics
|
3,234 |
A Modularized Neural Network with Language-Specific Output Layers for Cross-lingual Voice Conversion
|
eess.AS
|
This paper presents a cross-lingual voice conversion framework that adopts a
modularized neural network. The modularized neural network has a common input
structure that is shared for both languages, and two separate output modules,
one for each language. The idea is motivated by the fact that phonetic systems
of languages are similar because humans share a common vocal production system,
but acoustic renderings, such as prosody and phonotactics, vary a lot from
language to language. The modularized neural network is trained to map Phonetic
PosteriorGram (PPG) to acoustic features for multiple speakers. It is
conditioned on a speaker i-vector to generate the desired target voice. We
validated the idea between English and Mandarin in objective and
subjective tests. In addition, mixed-lingual PPG derived from a unified
English-Mandarin acoustic model is proposed to capture the linguistic
information from both languages. It is found that our proposed modularized
neural network significantly outperforms the baseline approaches in terms of
speech quality and speaker individuality, and mixed-lingual PPG representation
further improves the conversion performance.
|
electrics
|
3,235 |
Objective Human Affective Vocal Expression Detection and Automatic Classification with Stochastic Models and Learning Systems
|
eess.AS
|
This paper presents an extensive analysis of affective vocal expression
classification systems. In this study, state-of-the-art acoustic features are
compared to two novel affective vocal prints for the detection of emotional
states: the Hilbert-Huang-Hurst Coefficients (HHHC) and the vector of index of
non-stationarity (INS). HHHC is here proposed as a nonlinear vocal source
feature vector that represents the affective states according to their effects
on the speech production mechanism. Emotional states are highlighted by the
empirical mode decomposition (EMD) based method, which exploits the
non-stationarity of the affective acoustic variations. Hurst coefficients
(closely related to the excitation source) are then estimated from the
decomposition process to compose the feature vector. Additionally, the INS
vector is introduced as dynamic information to the HHHC feature. The proposed
features are evaluated in speech emotion classification experiments with three
databases in German and English languages. Three state-of-the-art acoustic
features are adopted as baseline. The $\alpha$-integrated Gaussian model
($\alpha$-GMM) is also introduced for the emotion representation and
classification. Its performance is compared to competing stochastic and machine
learning classifiers. Results demonstrate that HHHC leads to significant
classification improvement when compared to the baseline acoustic features.
Moreover, results also show that $\alpha$-GMM outperforms the competing
classification methods. Finally, HHHC and INS are also evaluated as
complementary features to the GeMAPS and eGeMAPS feature sets.
|
electrics
|
3,236 |
Cross lingual transfer learning for zero-resource domain adaptation
|
eess.AS
|
We propose a method for zero-resource domain adaptation of DNN acoustic
models, for use in low-resource situations where the only in-language training
data available may be poorly matched to the intended target domain. Our method
uses a multi-lingual model in which several DNN layers are shared between
languages. This architecture enables domain adaptation transforms learned for
one well-resourced language to be applied to an entirely different low-resource
language. First, to develop the technique we use English as a well-resourced
language and take Spanish to mimic a low-resource language. Experiments in
domain adaptation between the conversational telephone speech (CTS) domain and
broadcast news (BN) domain demonstrate a 29% relative WER improvement on
Spanish BN test data by using only English adaptation data. Second, we
demonstrate the effectiveness of the method for low-resource languages with a
poor match to the well-resourced language. Even in this scenario, the proposed
method achieves relative WER improvements of 18-27% by using solely English
data for domain adaptation. Compared to other related approaches based on
multi-task and multi-condition training, the proposed method is able to better
exploit well-resourced language data for improved acoustic modelling of the
low-resource target domain.
|
electrics
|
3,237 |
Multi-Talker MVDR Beamforming Based on Extended Complex Gaussian Mixture Model
|
eess.AS
|
In this letter, we present a novel multi-talker minimum variance
distortionless response (MVDR) beamforming approach as the front-end of an automatic
speech recognition (ASR) system in a dinner party scenario. The CHiME-5 dataset
is selected to evaluate our proposal for the overlapping multi-talker scenario with
severe noise. A detailed study on beamforming is conducted based on the
proposed extended complex Gaussian mixture model (CGMM) integrated with various
speech separation and speech enhancement masks. Three main changes are made to
adopt the original CGMM-based MVDR for the multi-talker scenario. First, the
number of Gaussian distributions is extended to 3 with an additional interference
speaker model. Second, the mixture coefficients are introduced as a supervisor
to generate more elaborate masks and avoid the permutation problems. Moreover,
we reorganize the MVDR and mask-based speech separation to achieve both noise
reduction and target speaker extraction. With the official baseline ASR
back-end, our front-end algorithm achieved an absolute WER reduction of 13.87%
compared with the baseline front-end.
|
electrics
|
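A central quantity in mask-driven MVDR front-ends of this kind is the mask-weighted spatial covariance matrix per frequency. The sketch below shows that estimation step in NumPy under generic assumptions; the array shapes and the random mask are placeholders, and the CGMM mask estimation itself is not reproduced.

```python
# Sketch: mask-weighted spatial covariance estimation for a multichannel STFT.
import numpy as np

def masked_spatial_covariance(X, mask):
    """Phi[f] = sum_t mask[f, t] * x[f, t] x[f, t]^H / sum_t mask[f, t].
    X has shape (mics, freq, time); mask has shape (freq, time)."""
    weighted = X * mask[np.newaxis, :, :]                 # (M, F, T)
    phi = np.einsum('mft,nft->fmn', weighted, X.conj())   # (F, M, M)
    return phi / np.maximum(mask.sum(axis=1), 1e-8)[:, None, None]

M, F, T = 6, 257, 400
X = np.random.randn(M, F, T) + 1j * np.random.randn(M, F, T)   # placeholder STFT
noise_mask = np.random.rand(F, T)                              # placeholder mask
Phi_noise = masked_spatial_covariance(X, noise_mask)
```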
3,238 |
Multi-channel Time-Varying Covariance Matrix Model for Late Reverberation Reduction
|
eess.AS
|
In this paper, a multi-channel time-varying covariance matrix model for late
reverberation reduction is proposed. Reflecting the fact that the variance of the
late reverberation is time-varying and depends on the past speech source variance,
the proposed model is defined as convolution of a speech source variance with a
multi-channel time-invariant covariance matrix of late reverberation. The
multi-channel time-invariant covariance matrix can be interpreted as a
covariance matrix of a multi-channel acoustic transfer function (ATF). An
advantage of the covariance matrix model over a deterministic ATF
model is that the covariance matrix model is robust against fluctuations of the
ATF. We propose two covariance matrix models. The first model is a covariance
matrix model of late reverberation in the original microphone input signal. The
second one is a covariance matrix model of late reverberation in an extended
microphone input signal which includes not only current microphone input signal
but also past microphone input signal. The second one considers correlation
between the current microphone input signal and the past microphone input
signal. Experimental results show that the proposed method effectively reduces
reverberation especially in a time-varying ATF scenario and the second model is
shown to be more effective than the first model.
|
electrics
|
3,239 |
Frequency-Sliding Generalized Cross-Correlation: A Sub-band Time Delay Estimation Approach
|
eess.AS
|
The generalized cross correlation (GCC) is regarded as the most popular
approach for estimating the time difference of arrival (TDOA) between the
signals received at two sensors. Time delay estimates are obtained by
maximizing the GCC output, where the direct-path delay is usually observed as a
prominent peak. Moreover, GCCs also play an important role in steered response
power (SRP) localization algorithms, where the SRP functional can be written as
an accumulation of the GCCs computed from multiple sensor pairs. Unfortunately,
the accuracy of TDOA estimates is affected by multiple factors, including
noise, reverberation and signal bandwidth. In this paper, a sub-band approach
for time delay estimation aimed at improving the performance of the
conventional GCC is presented. The proposed method is based on the extraction
of multiple GCCs corresponding to different frequency bands of the cross-power
spectrum phase in a sliding-window fashion. The major contributions of this
paper include: 1) a sub-band GCC representation of the cross-power spectrum
phase that, despite having a reduced temporal resolution, provides a more
suitable representation for estimating the true TDOA; 2) such matrix
representation is shown to be rank one in the ideal noiseless case, a property
that is exploited in more adverse scenarios to obtain a more robust and
accurate GCC; 3) we propose a set of low-rank approximation alternatives for
processing the sub-band GCC matrix, leading to better TDOA estimates and source
localization performance. An extensive set of experiments is presented to
demonstrate the validity of the proposed approach.
|
electrics
|
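For reference, the conventional full-band GCC-PHAT that the paper builds on can be written compactly in NumPy; the sub-band (frequency-sliding) extension is not reproduced here, and the delayed-noise test signal is a placeholder.

```python
# Sketch: full-band GCC-PHAT and TDOA estimation between two microphone signals.
import numpy as np

def gcc_phat(x1, x2, fs, max_tau=None):
    """Return the GCC-PHAT function and the TDOA estimate (in seconds)."""
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    R = X1 * np.conj(X2)
    R /= np.abs(R) + 1e-15                    # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    tau = (np.argmax(np.abs(cc)) - max_shift) / fs
    return cc, tau

fs = 16000
sig = np.random.randn(fs)
delayed = np.concatenate((np.zeros(8), sig[:-8]))   # sig delayed by 8 samples
_, tdoa = gcc_phat(delayed, sig, fs)
print(round(tdoa * fs))                             # expected: 8 (approximately)
```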
3,240 |
BUT System Description for DIHARD Speech Diarization Challenge 2019
|
eess.AS
|
This paper describes the systems developed by the BUT team for the four
tracks of the second DIHARD speech diarization challenge. For tracks 1 and 2
the systems were based on performing agglomerative hierarchical clustering
(AHC) over x-vectors, followed by the Bayesian Hidden Markov Model (HMM) with
eigenvoice priors applied at the x-vector level, followed by the same approach
applied at the frame level. For tracks 3 and 4, the systems were based on
performing AHC using x-vectors extracted on all channels.
|
electrics
|
3,241 |
Using Speech Synthesis to Train End-to-End Spoken Language Understanding Models
|
eess.AS
|
End-to-end models are an attractive new approach to spoken language
understanding (SLU) in which the meaning of an utterance is inferred directly
from the raw audio without employing the standard pipeline composed of a
separately trained speech recognizer and natural language understanding module.
The downside of end-to-end SLU is that in-domain speech data must be recorded
to train the model. In this paper, we propose a strategy for overcoming this
requirement in which speech synthesis is used to generate a large synthetic
training dataset from several artificial speakers. Experiments on two
open-source SLU datasets confirm the effectiveness of our approach, both as a
sole source of training data and as a form of data augmentation.
|
electrics
|
3,242 |
GCI detection from raw speech using a fully-convolutional network
|
eess.AS
|
Glottal Closure Instant (GCI) detection consists of automatically detecting the
temporal locations of the most significant excitation of the vocal tract from the
speech signal. It is used in many speech analysis and processing applications,
and various algorithms have been proposed for this purpose. Recently, new
approaches using convolutional neural networks have emerged, with encouraging
results. Following this trend, we propose a simple approach that performs a
mapping from the speech waveform to a target signal from which the GCIs are
obtained by peak-picking. However, the ground truth GCIs used for training and
evaluation are usually extracted from EGG signals, which are not perfectly
reliable and often not available. To overcome this problem, we propose to train
our network on high-quality synthetic speech with perfect ground truth. The
performance of the proposed algorithm is compared with that of three other
state-of-the-art approaches using publicly available datasets, and the impact
of using controlled synthetic or real speech signals in the training stage is
investigated. The experimental results demonstrate that the proposed method
obtains similar or better results than other state-of-the-art algorithms and
that using large synthetic datasets with many speakers offers a better
generalization ability than using a smaller database of real speech and EGG
signals.
|
electrics
|
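The final peak-picking step described above can be done with SciPy's `find_peaks`; the pulse-like target signal below is a synthetic placeholder standing in for the network output, and the height and 2 ms distance thresholds are assumptions.

```python
# Sketch: turning a predicted GCI target signal into GCI locations by peak-picking.
import numpy as np
from scipy.signal import find_peaks

fs = 16000
t = np.arange(fs) / fs
target = np.maximum(0.0, np.sin(2 * np.pi * 120 * t)) ** 8   # pulse-like placeholder

# Keep peaks above a height threshold and at least 2 ms apart
peaks, _ = find_peaks(target, height=0.5, distance=int(0.002 * fs))
gci_times = peaks / fs
```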
3,243 |
QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions
|
eess.AS
|
We propose a new end-to-end neural acoustic model for automatic speech
recognition. The model is composed of multiple blocks with residual connections
between them. Each block consists of one or more modules with 1D time-channel
separable convolutional layers, batch normalization, and ReLU layers. It is
trained with CTC loss. The proposed network achieves near state-of-the-art
accuracy on LibriSpeech and Wall Street Journal, while having fewer parameters
than all competing models. We also demonstrate that this model can be
effectively fine-tuned on new datasets.
|
electrics
|
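A minimal PyTorch sketch of one 1D time-channel separable convolution sub-block (depthwise convolution over time followed by a pointwise 1x1 convolution, then batch norm and ReLU) is given below; the channel counts and kernel width are illustrative and do not match any particular QuartzNet configuration.

```python
# Sketch: 1D time-channel separable convolution sub-block.
import torch
import torch.nn as nn

class TimeChannelSeparableConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv1d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1)
        self.bn = nn.BatchNorm1d(out_channels)
        self.relu = nn.ReLU()

    def forward(self, x):               # x: (batch, channels, time)
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

block = TimeChannelSeparableConv1d(64, 128, kernel_size=33)
features = torch.randn(8, 64, 400)      # (batch, mel-like channels, frames)
print(block(features).shape)            # torch.Size([8, 128, 400])
```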
3,244 |
End-to-end architectures for ASR-free spoken language understanding
|
eess.AS
|
Spoken Language Understanding (SLU) is the problem of extracting the meaning
from speech utterances. It is typically addressed as a two-step problem, where
an Automatic Speech Recognition (ASR) model is employed to convert speech into
text, followed by a Natural Language Understanding (NLU) model to extract
meaning from the decoded text. Recently, end-to-end approaches have emerged that aim at unifying ASR and NLU into a single SLU deep neural architecture, trained using combinations of ASR- and NLU-level recognition units. In this
paper, we explore a set of recurrent architectures for intent classification,
tailored to the recently introduced Fluent Speech Commands (FSC) dataset, where
intents are formed as combinations of three slots (action, object, and
location). We show that by combining deep recurrent architectures with standard
data augmentation, state-of-the-art results can be attained, without using
ASR-level targets or pretrained ASR models. We also investigate its
generalizability to new wordings, and we show that the model can perform
reasonably well on wordings unseen during training.
|
electrics
|
3,245 |
End-to-end Domain-Adversarial Voice Activity Detection
|
eess.AS
|
Voice activity detection is the task of detecting speech regions in a given
audio stream or recording. First, we design a neural network combining
trainable filters and recurrent layers to tackle voice activity detection
directly from the waveform. Experiments on the challenging DIHARD dataset show
that the proposed end-to-end model reaches state-of-the-art performance and
outperforms a variant where trainable filters are replaced by standard cepstral
coefficients. Our second contribution aims at making the proposed voice
activity detection model robust to domain mismatch. To that end, a domain
classification branch is added to the network and trained in an adversarial
manner. The same DIHARD dataset, drawn from 11 different domains, is used for
evaluation under two scenarios. In the in-domain scenario where the training
and test sets cover the exact same domains, we show that the domain-adversarial
approach does not degrade performance of the proposed end-to-end model. In the
out-domain scenario where the test domain is different from training domains,
it brings a relative improvement of more than 10%. Finally, our last
contribution is the provision of a fully reproducible open-source pipeline that can be easily adapted to other datasets.
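The abstract does not say how the adversarial training of the domain branch is implemented; one common realization is a gradient reversal layer, sketched below in PyTorch. The head names (`vad_head`, `domain_head`) and the lambda value are hypothetical.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; flips (and scales) the gradient in backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Illustrative use inside a model's forward pass:
#   shared = encoder(waveform)                               # shared features
#   vad_logits = vad_head(shared)                            # main VAD branch
#   dom_logits = domain_head(grad_reverse(shared, 0.1))      # adversarial branch
```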
|
electrics
|
3,246 |
Zero-Shot Multi-Speaker Text-To-Speech with State-of-the-art Neural Speaker Embeddings
|
eess.AS
|
While speaker adaptation for end-to-end speech synthesis using speaker
embeddings can produce good speaker similarity for speakers seen during
training, there remains a gap for zero-shot adaptation to unseen speakers. We
investigate multi-speaker modeling for end-to-end text-to-speech synthesis and
study the effects of different types of state-of-the-art neural speaker
embeddings on speaker similarity for unseen speakers. Learnable dictionary
encoding-based speaker embeddings with angular softmax loss can improve equal
error rates over x-vectors in a speaker verification task; these embeddings
also improve speaker similarity and naturalness for unseen speakers when used
for zero-shot adaptation to new speakers in end-to-end speech synthesis.
|
electrics
|
3,247 |
Learning deep representations by multilayer bootstrap networks for speaker diarization
|
eess.AS
|
The performance of speaker diarization is strongly affected by its clustering
algorithm at the test stage. However, it is known that clustering algorithms
are sensitive to random noises and small variations, particularly when the
clustering algorithms themselves suffer some weaknesses, such as bad local
minima and prior assumptions. To deal with the problem, a compact
representation of speech segments with small within-class variances and large
between-class distances is usually needed. In this paper, we apply an
unsupervised deep model, named multilayer bootstrap network (MBN), to further
process the embedding vectors of speech segments for the above problem. MBN is
an unsupervised deep model for nonlinear dimensionality reduction. Unlike traditional neural-network-based deep models, it is a stack of $k$-centroids clustering ensembles, each of which is trained simply by random resampling of the data and one-nearest-neighbor optimization. We construct speaker diarization systems by combining MBN with either an i-vector frontend or an x-vector frontend, and evaluate their effectiveness on a simulated NIST diarization
dataset, the AMI meeting corpus, and NIST SRE 2000 CALLHOME database.
Experimental results show that the proposed systems are better than or at least
comparable to the systems that do not use MBN.
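A minimal NumPy sketch of the MBN idea as described above: each layer is an ensemble of k-centroids clusterings built by random resampling, and every input is re-coded as concatenated one-nearest-neighbor indicators. The decreasing-k schedule and ensemble size are assumptions, and preprocessing details of the published method are omitted.

```python
import numpy as np

def mbn_layer(X, k, n_clusterings, rng):
    """One multilayer bootstrap network layer."""
    n = X.shape[0]
    codes = []
    for _ in range(n_clusterings):
        centroids = X[rng.choice(n, size=k, replace=False)]         # random resampling
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # squared distances
        onehot = np.zeros((n, k))
        onehot[np.arange(n), d.argmin(axis=1)] = 1.0                # one-nearest-neighbor
        codes.append(onehot)
    return np.hstack(codes)

def mbn(X, ks=(64, 32, 16), n_clusterings=20, seed=0):
    """Stack layers with decreasing k to obtain a compact nonlinear representation."""
    rng = np.random.default_rng(seed)
    H = np.asarray(X, dtype=float)
    for k in ks:
        H = mbn_layer(H, k, n_clusterings, rng)
    return H
```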
|
electrics
|
3,248 |
Analyzing the impact of speaker localization errors on speech separation for automatic speech recognition
|
eess.AS
|
We investigate the effect of speaker localization on the performance of
speech recognition systems in a multispeaker, multichannel environment. Given
the speaker location information, speech separation is performed in three
stages. In the first stage, a simple delay-and-sum (DS) beamformer is used to enhance the signal impinging from the speaker location. In the second stage, the enhanced signal is used to estimate a time-frequency mask corresponding to the localized speaker using a neural network. This mask is then used to compute the second-order statistics and to
derive an adaptive beamformer in the third stage. We generated a multichannel,
multispeaker, reverberated, noisy dataset inspired from the well studied
WSJ0-2mix and study the performance of the proposed pipeline in terms of the
word error rate (WER). An average WER of $29.4$% was achieved using the ground
truth localization information and $42.4$% using the localization information
estimated via GCC-PHAT. The signal-to-interference ratio (SIR) between the
speakers has a higher impact on the ASR performance, to the extent of reducing
the WER by $59$% relative for a SIR increase of $15$ dB. By contrast,
increasing the spatial (angular) distance between the speakers to $50^\circ$ or more improves the WER by only $23$% relative.
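A rough sketch of the first-stage delay-and-sum beamformer, assuming the per-microphone delays (TDOAs) of the target speaker are already available, e.g. from the known location or GCC-PHAT. For brevity it transforms the whole signal at once rather than working on STFT frames.

```python
import numpy as np

def delay_and_sum(multichannel, fs, tdoas):
    """Frequency-domain delay-and-sum beamforming.

    multichannel: (n_mics, n_samples) time-domain signals.
    tdoas: per-microphone delay in seconds of the target relative to a
           reference microphone (positive if the channel receives later).
    """
    n_mics, n_samples = multichannel.shape
    spectra = np.fft.rfft(multichannel, axis=1)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    # Compensate each channel's delay, then average across microphones.
    steering = np.exp(2j * np.pi * freqs[None, :] * np.asarray(tdoas)[:, None])
    aligned = spectra * steering
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)
```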
|
electrics
|
3,249 |
SLOGD: Speaker LOcation Guided Deflation approach to speech separation
|
eess.AS
|
Speech separation is the process of separating multiple speakers from an
audio recording. In this work we propose to separate the sources using a
Speaker LOcalization Guided Deflation (SLOGD) approach wherein we estimate the
sources iteratively. In each iteration we first estimate the location of the
speaker and use it to estimate a mask corresponding to the localized speaker.
The estimated source is removed from the mixture before estimating the location
and mask of the next source. Experiments are conducted on a reverberated, noisy
multichannel version of the well-studied WSJ-2MIX dataset using word error rate
(WER) as a metric. The proposed method achieves a WER of $44.2$%, a $34$%
relative improvement over the system without separation and $17$% relative
improvement over Conv-TasNet.
|
electrics
|
3,250 |
Overlapped speech recognition from a jointly learned multi-channel neural speech extraction and representation
|
eess.AS
|
We propose an end-to-end joint optimization framework of a multi-channel
neural speech extraction and deep acoustic model without mel-filterbank (FBANK)
extraction for overlapped speech recognition. First, based on a multi-channel
convolutional TasNet with STFT kernel, we unify the multi-channel target speech
enhancement front-end network and a convolutional, long short-term memory and
fully connected deep neural network (CLDNN) based acoustic model (AM) with the
FBANK extraction layer to build a hybrid neural network, which is thus jointly
updated only by the recognition loss. The proposed framework achieves 28% word
error rate reduction (WERR) over a separately optimized system on AISHELL-1 and
shows consistent robustness to signal to interference ratio (SIR) and angle
difference between overlapping speakers. Next, a further exploration shows that
the speech recognition is improved with a simplified structure by replacing the
FBANK extraction layer in the joint model with a learnable feature projection.
Finally, we also perform the objective measurement of speech quality on the
reconstructed waveform from the enhancement network in the joint model.
|
electrics
|
3,251 |
Modeling of Rakugo Speech and Its Limitations: Toward Speech Synthesis That Entertains Audiences
|
eess.AS
|
We have been investigating rakugo speech synthesis as a challenging example
of speech synthesis that entertains audiences. Rakugo is a traditional Japanese
form of verbal entertainment similar to a combination of one-person stand-up
comedy and comic storytelling and is popular even today. In rakugo, a performer
plays multiple characters, and conversations or dialogues between the
characters make the story progress. To investigate how close the quality of
synthesized rakugo speech can approach that of professionals' speech, we
modeled rakugo speech using Tacotron 2, a state-of-the-art speech synthesis
system that can produce speech that sounds as natural as human speech albeit
under limited conditions, and an enhanced version of it with self-attention to
better consider long-term dependencies. We also used global style tokens and
manually labeled context features to enrich speaking styles. Through a
listening test, we measured not only naturalness but also distinguishability of
characters, understandability of the content, and the degree of entertainment.
Although we found that the speech synthesis models could not yet reach the
professional level, the results of the listening test provided interesting
insights: 1) we should not focus only on the naturalness of synthesized speech
but also the distinguishability of characters and the understandability of the
content to further entertain audiences; 2) the fundamental frequency (fo)
expressions of synthesized speech are poorer than those of human speech, and
more entertaining speech should have richer fo expression. Although there is
room for improvement, we believe this is an important stepping stone toward
achieving entertaining speech synthesis at the professional level.
|
electrics
|
3,252 |
Deep neural networks for emotion recognition combining audio and transcripts
|
eess.AS
|
In this paper, we propose to improve emotion recognition by combining
acoustic information and conversation transcripts. On the one hand, an LSTM
network was used to detect emotion from acoustic features like f0, shimmer,
jitter, MFCC, etc. On the other hand, a multi-resolution CNN was used to detect
emotion from word sequences. This CNN consists of several parallel convolutions
with different kernel sizes to exploit contextual information at different
levels. A temporal pooling layer aggregates the hidden representations of
different words into a unique sequence level embedding, from which we computed
the emotion posteriors. We optimized a weighted sum of classification and
verification losses. The verification loss tries to bring embeddings from the
same emotions closer while separating embeddings from different emotions. We
also compared our CNN with state-of-the-art text-based hand-crafted features
(e-vector). We evaluated our approach on the USC-IEMOCAP dataset as well as a dataset consisting of US English telephone speech. In the former, we used human-annotated transcripts, while in the latter, we used ASR transcripts. The results showed that fusing audio and transcript information improved unweighted accuracy by 24% relative for IEMOCAP and 3.4% relative for the telephone data compared to a single acoustic system.
|
electrics
|
3,253 |
Mask-dependent Phase Estimation for Monaural Speaker Separation
|
eess.AS
|
Speaker separation refers to isolating speech of interest in a multi-talker
environment. Most methods apply real-valued Time-Frequency (T-F) masks to the
mixture Short-Time Fourier Transform (STFT) to reconstruct the clean speech.
Hence there is an unavoidable mismatch between the phase of the reconstruction
and the original phase of the clean speech. In this paper, we propose a simple
yet effective phase estimation network that predicts the phase of the clean
speech based on a T-F mask predicted by a chimera++ network. To overcome the
label-permutation problem for both the T-F mask and the phase, we propose a
mask-dependent permutation invariant training (PIT) criterion to select the
phase signal based on the loss from the T-F mask prediction. We also propose an
Inverse Mask Weighted Loss Function for phase prediction to focus the model on
the T-F regions in which the phase is more difficult to predict. Results on the
WSJ0-2mix dataset show that the phase estimation network achieves comparable
performance to models that use iterative phase reconstruction or end-to-end
time-domain loss functions, but in a more straightforward manner.
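A sketch of the mask-dependent PIT criterion for the two-head model: the speaker permutation is selected from the T-F mask loss alone, and the same permutation is then applied to the phase loss. Tensor shapes and the generic `phase_loss_fn` are assumptions; the inverse-mask weighting described above can be folded into that function.

```python
import itertools
import torch

def mask_dependent_pit(mask_pred, mask_ref, phase_pred, phase_ref, phase_loss_fn):
    """mask_pred, mask_ref, phase_pred, phase_ref: (batch, n_spk, F, T) tensors.
    phase_loss_fn must return a per-utterance loss of shape (batch,)."""
    n_spk = mask_pred.shape[1]
    perms = list(itertools.permutations(range(n_spk)))

    # Mask loss for every speaker permutation: (batch, n_perms).
    mask_losses = torch.stack(
        [((mask_pred[:, list(p)] - mask_ref) ** 2).mean(dim=(1, 2, 3)) for p in perms],
        dim=1)
    best = mask_losses.argmin(dim=1, keepdim=True)      # permutation chosen by the mask
    total_mask = mask_losses.gather(1, best).mean()

    # Reuse the mask-selected permutation for the phase loss.
    phase_losses = torch.stack(
        [phase_loss_fn(phase_pred[:, list(p)], phase_ref) for p in perms], dim=1)
    total_phase = phase_losses.gather(1, best).mean()
    return total_mask, total_phase
```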
|
electrics
|
3,254 |
Signal-Adaptive and Perceptually Optimized Sound Zones with Variable Span Trade-Off Filters
|
eess.AS
|
Creating sound zones has been an active research field since the idea was
first proposed. So far, most sound zone control methods rely on either an
optimization of physical metrics such as acoustic contrast and signal
distortion or a mode decomposition of the desired sound field. By using these
types of methods, approximately 15 dB of acoustic contrast between the
reproduced sound field in the target zone and its leakage to other zone(s) has
been reported in practical set-ups, but this is typically not high enough to
satisfy the people inside the zones. In this paper, we propose a sound zone
control method shaping the leakage errors so that they are as inaudible as
possible for a given acoustic contrast. The shaping of the leakage errors is
performed by taking the time-varying input signal characteristics and the human
auditory system into account when the loudspeaker control filters are
calculated. We show how this shaping can be performed using variable span
trade-off filters, and we show theoretically how these filters can be used for
trading signal distortion in the target zone for acoustic contrast. The
proposed method is evaluated based on physical metrics such as acoustic
contrast and perceptual metrics such as STOI. The computational complexity and
processing time of the proposed method for different system set-ups are also
investigated. Lastly, the results of a MUSHRA listening test are reported. The
test results show that the proposed method provides more than 20% perceptual
improvement compared to existing sound zone control methods.
|
electrics
|
3,255 |
Sound event detection via dilated convolutional recurrent neural networks
|
eess.AS
|
Convolutional recurrent neural networks (CRNNs) have achieved
state-of-the-art performance for sound event detection (SED). In this paper, we
propose to use a dilated CRNN, namely a CRNN with a dilated convolutional
kernel, as the classifier for the task of SED. We investigate the effectiveness
of dilation operations which provide a CRNN with expanded receptive fields to
capture long temporal context without increasing the number of the CRNN's
parameters. Compared to the classifier of the baseline CRNN, the classifier of
the dilated CRNN obtains a maximum increase of 1.9%, 6.3% and 2.5% at F1 score
and a maximum decrease of 1.7%, 4.1% and 3.9% at error rate (ER), on the
publicly available audio corpora of the TUT-SED Synthetic 2016, the TUT Sound
Event 2016 and the TUT Sound Event 2017, respectively.
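For illustration, a dilated convolutional layer of the kind used in the classifier can be written in PyTorch as below; the channel count, the (batch, channel, time, frequency) layout and the choice to dilate only along time are assumptions rather than the paper's exact configuration.

```python
import torch.nn as nn

# Dilation enlarges the temporal receptive field without adding parameters.
# Input layout assumed: (batch, channels, time, frequency).
dilated_block = nn.Sequential(
    nn.Conv2d(in_channels=128, out_channels=128, kernel_size=(3, 3),
              dilation=(2, 1), padding=(2, 1)),   # dilate along the time axis only
    nn.BatchNorm2d(128),
    nn.ReLU(),
)
```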
|
electrics
|
3,256 |
Cross-lingual Multi-speaker Text-to-speech Synthesis for Voice Cloning without Using Parallel Corpus for Unseen Speakers
|
eess.AS
|
We investigate a novel cross-lingual multi-speaker text-to-speech synthesis
approach for generating high-quality native or accented speech for
native/foreign seen/unseen speakers in English and Mandarin. The system
consists of three separately trained components: an x-vector speaker encoder, a
Tacotron-based synthesizer and a WaveNet vocoder. It is conditioned on 3 kinds
of embeddings: (1) speaker embedding so that the system can be trained with
speech from many speakers with little data from each speaker; (2) language
embedding with shared phoneme inputs; (3) stress and tone embedding which
improves naturalness of synthesized speech, especially for a tonal language
like Mandarin. By adjusting the various embeddings, MOS results show that our
method can generate high-quality natural and intelligible native speech for
native/foreign seen/unseen speakers. Intelligibility and naturalness of
accented speech is low as expected. Speaker similarity is good for native
speech from native speakers. Interestingly, speaker similarity is also good for
accented speech from foreign speakers. We also find that normalizing speaker
embedding x-vectors by L2-norm normalization or whitening improves output
quality a lot in many cases, and the WaveNet performance seems to be
language-independent: our WaveNet is trained with Cantonese speech and can be
used to generate Mandarin and English speech very well.
|
electrics
|
3,257 |
Automatic prediction of suicidal risk in military couples using multimodal interaction cues from couples conversations
|
eess.AS
|
Suicide is a major societal challenge globally, with a wide range of risk
factors, from individual health, psychological and behavioral elements to
socio-economic aspects. Military personnel, in particular, are at especially
high risk. Crisis resources, while helpful, are often constrained by access to
clinical visits or therapist availability, especially when needed in a timely
manner. There have hence been efforts on identifying whether communication
patterns between couples at home can provide preliminary information about
potential suicidal behaviors, prior to intervention. In this work, we
investigate whether acoustic, lexical, behavior and turn-taking cues from
military couples' conversations can provide meaningful markers of suicidal
risk. We test their effectiveness in real-world noisy conditions by extracting
these cues through an automatic diarization and speech recognition front-end.
Evaluation is performed by classifying 3 degrees of suicidal risk: none,
ideation, attempt. Our automatic system performs significantly better than
chance in all classification scenarios and we find that behavior and
turn-taking cues are the most informative ones. We also observe that
conditioning on factors such as speaker gender and topic of discussion tends to
improve classification performance.
|
electrics
|
3,258 |
Multi-Source Direction-of-Arrival Estimation Using Improved Estimation Consistency Method
|
eess.AS
|
We address the problem of estimating direction-of-arrivals (DOAs) for
multiple acoustic sources in a reverberant environment using a spherical
microphone array. It is well-known that multi-source DOA estimation is
challenging in the presence of room reverberation, environmental noise and
overlapping sources. In this work, we introduce multiple schemes to improve the
robustness of estimation consistency (EC) approach in reverberant and noisy
conditions through redefined and modified parametric weights. Simulation
results show that our proposed methods achieve superior performance compared to
the existing EC approach, especially when the sources are spatially close in a
reverberant environment.
|
electrics
|
3,259 |
Attention-based ASR with Lightweight and Dynamic Convolutions
|
eess.AS
|
End-to-end (E2E) automatic speech recognition (ASR) with sequence-to-sequence
models has gained attention because of its simple model training compared with
conventional hidden Markov model based ASR. Recently, several studies report
the state-of-the-art E2E ASR results obtained by Transformer. Compared to
recurrent neural network (RNN) based E2E models, training of Transformer is
more efficient and also achieves better performance on various tasks. However,
self-attention used in Transformer requires computation quadratic in its input
length. In this paper, we propose to apply lightweight and dynamic convolution
to E2E ASR as an alternative architecture to the self-attention to make the
computational order linear. We also propose joint training with connectionist
temporal classification, convolution on the frequency axis, and combination
with self-attention. With these techniques, the proposed architectures achieve
better performance than RNN-based E2E model and performance competitive to
state-of-the-art Transformer on various ASR benchmarks including
noisy/reverberant tasks.
|
electrics
|
3,260 |
Attention-based gated scaling adaptative acoustic model for ctc-based speech recognition
|
eess.AS
|
In this paper, we propose a novel adaptive technique that uses an
attention-based gated scaling (AGS) scheme to improve deep feature learning for
connectionist temporal classification (CTC) acoustic modeling. In AGS, the
outputs of each hidden layer of the main network are scaled by an auxiliary
gate matrix extracted from the lower layer by using attention mechanisms.
Furthermore, the auxiliary AGS layer and the main network are jointly trained
without requiring second-pass model training or additional speaker information,
such as speaker code. On the Mandarin AISHELL-1 datasets, the proposed AGS
yields a 7.94% character error rate (CER). To the best of our knowledge, this
result is the best recognition accuracy achieved on this dataset by using an
end-to-end framework.
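The abstract does not detail the attention used to produce the gate matrix, so the sketch below shows only one plausible realization in PyTorch: a sigmoid gate computed from the lower layer's output scales the current layer's output element-wise. The gating formula and layer names are assumptions.

```python
import torch
import torch.nn as nn

class AttentionGatedScaling(nn.Module):
    """Scale a hidden layer's output by a gate derived from the layer below."""
    def __init__(self, lower_dim, hidden_dim):
        super().__init__()
        self.query = nn.Linear(lower_dim, hidden_dim)
        self.key = nn.Linear(lower_dim, hidden_dim)

    def forward(self, lower, hidden):
        # lower: (batch, time, lower_dim), hidden: (batch, time, hidden_dim)
        gate = torch.sigmoid(self.query(lower) * torch.tanh(self.key(lower)))
        return hidden * gate          # element-wise gated scaling of the main network
```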
|
electrics
|
3,261 |
A Memory Augmented Architecture for Continuous Speaker Identification in Meetings
|
eess.AS
|
We introduce and analyze a novel approach to the problem of speaker
identification in multi-party recorded meetings. Given a speech segment and a
set of available candidate profiles, we propose a novel data-driven way to
model the distance relations between them, aiming at identifying the speaker
label corresponding to that segment. To achieve this we employ a recurrent,
memory-based architecture, since this class of neural networks has been shown
to yield advanced performance in problems requiring relational reasoning. The
proposed encoding of distance relations is shown to outperform traditional
distance metrics, such as the cosine distance. Additional improvements are
reported when the temporal continuity of the audio signals and of the speaker changes is explicitly modeled. In this paper, we have evaluated our method in two
different tasks, i.e. scripted and real-world business meeting scenarios, where
we report a relative reduction in speaker error rate of 39.28% and 51.84%,
respectively, compared to the baseline.
|
electrics
|
3,262 |
Interpretable Filter Learning Using Soft Self-attention For Raw Waveform Speech Recognition
|
eess.AS
|
Speech recognition from raw waveform involves learning the spectral
decomposition of the signal in the first layer of the neural acoustic model
using a convolution layer. In this work, we propose a raw waveform
convolutional filter learning approach using soft self-attention. The acoustic
filter bank in the proposed model is implemented using a parametric
cosine-modulated Gaussian filter bank whose parameters are learned. A
network-in-network architecture provides self-attention to generate attention
weights over the sub-band filters. The attention weighted log filter bank
energies are fed to the acoustic model for the task of speech recognition.
Experiments are conducted on Aurora-4 (additive noise with channel artifact),
and CHiME-3 (additive noise with reverberation) databases. In these
experiments, the attention based filter learning approach provides considerable
improvements in ASR performance over the baseline mel filter-bank features and
other robust front-ends (average relative improvement of 7% in word error rate
over baseline features on Aurora-4 dataset, and 5% on CHiME-3 database). Using
the self-attention weights, we also present an analysis on the interpretability
of the filters for the ASR task.
|
electrics
|
3,263 |
Noise dependent Super Gaussian-Coherence based dual microphone Speech Enhancement for hearing aid application using smartphone
|
eess.AS
|
In this paper, the coherence between speech and noise signals is used to
obtain a Speech Enhancement (SE) gain function, in combination with a Super
Gaussian Joint Maximum a Posteriori (SGJMAP) single microphone SE gain
function. The proposed SE method can be implemented on a smartphone that works
as an assistive device to hearing aids. Although coherence SE gain function
suppresses the background noise well, it distorts the speech. In contrary, SE
using SGJMAP improves speech quality with additional musical noise, which we
contain by using a post filter. The weighted union of these two gain functions
strikes a balance between noise suppression and speech distortion. A
'weighting' parameter is introduced in the derived gain function to allow the
smartphone user to control the weighting factor based on different background
noise and their comfort level of hearing. Objective and subjective measures of
the proposed method show effective improvement in comparison to standard
techniques considered in this paper for several noisy conditions at signal to
noise ratio levels of -5 dB, 0 dB and 5 dB.
|
electrics
|
3,264 |
Phase-Aware Speech Enhancement with a Recurrent Two Stage Net work
|
eess.AS
|
We propose a neural network-based speech enhancement (SE) method called the
phase-aware recurrent two stage network (rTSN). The rTSN is an extension of our
previously proposed two stage network (TSN) framework. This TSN framework was
equipped with a boosting strategy (BS) that initially estimates the multiple
base predictions (MBPs) from a prior neural network (pri-NN) and then the MBPs
are aggregated by a posterior neural network (post-NN) to obtain the final
prediction. The TSN outperformed various state-of-the-art methods; however, it adopted a simple deep neural network as the pri-NN. We have found that the pri-NN affects performance (in terms of perceptual quality) more than the post-NN; therefore, we adopt a long short-term memory recurrent neural network (LSTM-RNN) as the pri-NN to better exploit contextual information within speech signals. Further,
the TSN framework did not consider the phase reconstruction, though phase
information affected the perceptual quality. Therefore, we proposed to adopt
the phase reconstruction method based on the Griffin-Lim algorithm. Finally, we
evaluated rTSN with baselines such as TSN in perceptual quality related metrics
as well as the phone recognition error rate.
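A minimal sketch of the Griffin-Lim phase-reconstruction step using librosa; the STFT parameters are illustrative and should match whatever analysis settings the enhancement network uses.

```python
import librosa

def reconstruct_waveform(enhanced_magnitude, n_fft=512, hop_length=128, n_iter=60):
    """Estimate a waveform from an enhanced linear STFT magnitude
    of shape (1 + n_fft // 2, n_frames) via Griffin-Lim iterations."""
    return librosa.griffinlim(enhanced_magnitude, n_iter=n_iter,
                              hop_length=hop_length, win_length=n_fft)
```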
|
electrics
|
3,265 |
Source coding of audio signals with a generative model
|
eess.AS
|
We consider source coding of audio signals with the help of a generative
model. We use a construction where a waveform is first quantized, yielding a
finite bitrate representation. The waveform is then reconstructed by random
sampling from a model conditioned on the quantized waveform. The proposed
coding scheme is theoretically analyzed. Using SampleRNN as the generative
model, we demonstrate that the proposed coding structure provides performance
competitive with state-of-the-art source coding tools for specific categories
of audio signals.
|
electrics
|
3,266 |
Improving LPCNet-based Text-to-Speech with Linear Prediction-structured Mixture Density Network
|
eess.AS
|
In this paper, we propose an improved LPCNet vocoder using a linear
prediction (LP)-structured mixture density network (MDN). The recently proposed
LPCNet vocoder has successfully achieved high-quality and lightweight speech
synthesis systems by combining a vocal tract LP filter with a WaveRNN-based
vocal source (i.e., excitation) generator. However, the quality of synthesized
speech is often unstable because the vocal source component is insufficiently
represented by the mu-law quantization method, and the model is trained without
considering the entire speech production mechanism. To address this problem, we
first introduce LP-MDN, which enables the autoregressive neural vocoder to
structurally represent the interactions between the vocal tract and vocal
source components. Then, we propose to incorporate the LP-MDN to the LPCNet
vocoder by replacing the conventional discretized output with continuous
density distribution. The experimental results verify that the proposed system
provides high quality synthetic speech by achieving a mean opinion score of
4.41 within a text-to-speech framework.
|
electrics
|
3,267 |
Multitask Learning with Capsule Networks for Speech-to-Intent Applications
|
eess.AS
|
Voice controlled applications can be a great aid to society, especially for
physically challenged people. However this requires robustness to all kinds of
variations in speech. A spoken language understanding system that learns from
interaction with and demonstrations from the user, allows the use of such a
system in different settings and for different types of speech, even for
deviant or impaired speech, while also allowing the user to choose a phrasing.
The user gives a command and enters its intent through an interface, after
which the model learns to map the speech directly to the right action. Since
the effort of the user should be as low as possible, capsule networks have
drawn interest due to potentially needing little training data compared to
deeper neural networks. In this paper, we show how capsules can incorporate
multitask learning, which often can improve the performance of a model when the
task is difficult. The basic capsule network will be expanded with a
regularisation to create more structure in its output: it learns to identify
the speaker of the utterance by forcing the required information into the
capsule vectors. To this end we move from a speaker dependent to a speaker
independent setting.
|
electrics
|
3,268 |
Multi-Branch Learning for Weakly-Labeled Sound Event Detection
|
eess.AS
|
There are two sub-tasks implied in the weakly-supervised SED: audio tagging
and event boundary detection. Current methods that combine multi-task learning with SED require annotations for both of these sub-tasks. Since there are
only annotations for audio tagging available in weakly-supervised SED, we
design multiple branches with different learning purposes instead of pursuing
multiple tasks. Similar to multiple tasks, multiple different learning purposes
can also prevent the common feature which the multiple branches share from
overfitting to any one of the learning purposes. We design these multiple
different learning purposes based on combinations of different MIL strategies
and different pooling methods. Experiments on the DCASE 2018 Task 4 dataset and
the URBAN-SED dataset both show that our method achieves competitive
performance.
|
electrics
|
3,269 |
Controllable Sequence-To-Sequence Neural TTS with LPCNET Backend for Real-time Speech Synthesis on CPU
|
eess.AS
|
State-of-the-art sequence-to-sequence acoustic networks, that convert a
phonetic sequence to a sequence of spectral features with no explicit prosody
prediction, generate speech with close to natural quality, when cascaded with
neural vocoders, such as Wavenet. However, the combined system is typically too
heavy for real-time speech synthesis on a CPU. In this work we present a
sequence-to-sequence acoustic network combined with lightweight LPCNet neural
vocoder, designed for real-time speech synthesis on a CPU. In addition, the
system allows sentence-level pace and expressivity control at inference time.
We demonstrate that the proposed system can synthesize high quality 22 kHz
speech in real-time on a general-purpose CPU. In terms of MOS score degradation
relative to PCM, the system attained degradations as low as 6.1-6.5% for quality and 6.3-7.0% for expressiveness, reaching equivalent or better quality when compared to
a similar system with a Wavenet vocoder backend.
|
electrics
|
3,270 |
An LSTM Based Architecture to Relate Speech Stimulus to EEG
|
eess.AS
|
Modeling the relationship between natural speech and a recorded
electroencephalogram (EEG) helps us understand how the brain processes speech
and has various applications in neuroscience and brain-computer interfaces. In
this context, so far mainly linear models have been used. However, the decoding
performance of the linear model is limited due to the complex and highly
non-linear nature of the auditory processing in the human brain. We present a
novel Long Short-Term Memory (LSTM)-based architecture as a non-linear model
for the classification problem of whether a given pair of (EEG, speech
envelope) correspond to each other or not. The model maps short segments of the
EEG and the envelope to a common embedding space using a CNN in the EEG path
and an LSTM in the speech path. The latter also compensates for the brain
response delay. In addition, we use transfer learning to fine-tune the model
for each subject. The mean classification accuracy of the proposed model
reaches 85%, which is significantly higher than that of a state-of-the-art
Convolutional Neural Network (CNN)-based model (73%) and the linear model
(69%).
|
electrics
|
3,271 |
Lightweight Online Separation of the Sound Source of Interest through BLSTM-Based Binary Masking
|
eess.AS
|
Online audio source separation has been an important part of auditory scene
analysis and robot audition. The main type of technique to carry this out,
because of its online capabilities, has been spatial filtering (or
beamforming), where it is assumed that the location (mainly, the direction of
arrival; DOA) of the source of interest (SOI) is known. However, these
techniques suffer from considerable interference leakage in the final result.
In this paper, we propose a two step technique: 1) a phase-based beamformer
that provides, in addition to the estimation of the SOI, an estimation of the
cumulative environmental interference; and 2) a BLSTM-based TF binary masking
stage that calculates a binary mask that aims to separate the SOI from the
cumulative environmental interference. In our tests, this technique provides a
signal-to-interference ratio (SIR) above 20 dB with simulated data. Because of
the nature of the beamformer outputs, the label permutation problem is handled
from the beginning. This makes the proposed solution a lightweight alternative
that requires considerably less computational resources (almost an order of
magnitude) compared to current deep-learning based techniques, while providing
a comparable SIR performance.
|
electrics
|
3,272 |
Multitask Learning and Multistage Fusion for Dimensional Audiovisual Emotion Recognition
|
eess.AS
|
Due to its ability to accurately predict emotional state using multimodal
features, audiovisual emotion recognition has recently gained more interest
from researchers. This paper proposes two methods to predict emotional
attributes from audio and visual data using a multitask learning and a fusion
strategy. First, multitask learning is employed by adjusting three parameters
for each attribute to improve the recognition rate. Second, a multistage fusion
is proposed to combine the final predictions from the various modalities. Our approach using multitask learning, employed in the unimodal and early-fusion methods, shows improvement over single-task learning with an average CCC score of 0.431 compared to 0.297. The multistage method, employed in the late-fusion approach, significantly improved the agreement score between true and predicted
values on the development set of data (from [0.537, 0.565, 0.083] to [0.68,
0.656, 0.443]) for arousal, valence, and liking.
|
electrics
|
3,273 |
BUT System for the Second DIHARD Speech Diarization Challenge
|
eess.AS
|
This paper describes the winning systems developed by the BUT team for the
four tracks of the Second DIHARD Speech Diarization Challenge. For tracks 1 and
2 the systems were mainly based on performing agglomerative hierarchical
clustering (AHC) of x-vectors, followed by another x-vector clustering based on
Bayes hidden Markov model and variational Bayes inference. We provide a
comparison of the improvement given by each step and share the implementation
of the core of the system. For tracks 3 and 4 with recordings from the Fifth
CHiME Challenge, we explored different approaches for doing multi-channel
diarization and our best performance was obtained when applying AHC on the
fusion of per channel probabilistic linear discriminant analysis scores.
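A minimal SciPy sketch of the AHC step on per-segment x-vectors. It clusters cosine distances with average linkage and cuts the dendrogram at a tuned threshold; the actual system scores segment pairs with PLDA and refines the clusters with the Bayesian-HMM / variational Bayes step, which is not shown here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def ahc_cluster(xvectors, threshold=0.5):
    """Agglomerative hierarchical clustering of (n_segments, dim) x-vectors;
    returns one integer speaker label per segment."""
    dists = pdist(xvectors, metric='cosine')          # pairwise cosine distances
    tree = linkage(dists, method='average')           # average-linkage dendrogram
    return fcluster(tree, t=threshold, criterion='distance')
```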
|
electrics
|
3,274 |
Auxiliary Function-Based Algorithm for Blind Extraction of a Moving Speaker
|
eess.AS
|
Recently, Constant Separating Vector (CSV) mixing model has been proposed for
the Blind Source Extraction (BSE) of moving sources. In this paper, we
experimentally verify the applicability of CSV in the blind extraction of a
moving speaker and propose a new BSE method derived by modifying the auxiliary
function-based algorithm for Independent Vector Analysis. Also, a piloted
variant is proposed for the method with partially controllable global
convergence. The methods are verified under reverberant and noisy conditions
using simulated as well as real-world acoustic conditions. They
are also verified within the CHiME-4 speech separation and recognition
challenge. The experiments corroborate the applicability of CSV as well as the
improved convergence of the proposed algorithms.
|
electrics
|
3,275 |
Vowels and Prosody Contribution in Neural Network Based Voice Conversion Algorithm with Noisy Training Data
|
eess.AS
|
This research presents a neural network based voice conversion (VC) model.
While it is a known fact that voiced sounds and prosody are the most important
components of the voice conversion framework, what is not known is their
objective contributions particularly in a noisy and uncontrolled environment.
This model uses a 2-layer feedforward neural network to map the Linear
prediction analysis coefficients of a source speaker to the acoustic vector
space of the target speaker with a view to objectively determine the
contributions of the voiced, unvoiced and supra-segmental components of sounds
to the voice conversion model. Results showed that the vowels 'a', 'i', and 'o' make the most significant contribution to conversion success. The voiceless sounds were also found to be the most affected by noisy training data. An average noise level of 40 dB above the noise floor was found to degrade the voice conversion success by 55.14 percent relative to the voiced sounds. The
result also shows that for cross-gender voice conversion, prosody conversion is
more significant in scenarios where a female is the target speaker.
|
electrics
|
3,276 |
Voice conversion using coefficient mapping and neural network
|
eess.AS
|
The research presents a voice conversion model using coefficient mapping and
neural network. Most previous works on parametric speech synthesis did not
account for losses in spectral details causing over smoothing and invariably,
an appreciable deviation of the converted speech from the targeted speaker. An
improved model that uses both linear predictive coding (LPC) and line spectral
frequency (LSF) coefficients to parametrize the source speech signal was
developed in this work to reveal the effect of over-smoothing. Non-linear
mapping ability of neural network was employed in mapping the source speech
vectors into the acoustic vector space of the target. Training LPC coefficients
with neural network yielded a poor result due to the instability of the LPC
filter poles. The LPC coefficients were converted to line spectral frequency
coefficients before being trained with a 3-layer neural network. The algorithm
was tested with noisy data with the result evaluated using Mel-Cepstral
Distance measurement. Cepstral distance evaluation shows a 35.7 percent
reduction in the spectral distance between the target and the converted speech.
|
electrics
|
3,277 |
Robust Audio Watermarking Using Graph-based Transform and Singular Value Decomposition
|
eess.AS
|
The Graph-based Transform (GBT) has recently been leveraged successfully in the signal processing domain, specifically for compression purposes. In this paper, we employ the GBT, as well as the Singular Value Decomposition (SVD), with the goal of improving the robustness of audio watermarking against different attacks on the audio signals, such as noise and compression. Experimental results on the NOIZEUS speech database and the MIR-1k music database clearly confirm that the proposed GBT-SVD-based method is robust against these attacks. Moreover, the watermarked signals retain good quality after embedding according to the PSNR, PESQ, and STOI measures. Also, the payload of the proposed method is 800 and 1600 for speech and music signals, respectively, which is higher than that of some robust watermarking methods such as DWT-SVD and DWT-DCT.
|
electrics
|
3,278 |
Acoustic Scene Classification using Audio Tagging
|
eess.AS
|
Acoustic scene classification systems using deep neural networks classify
given recordings into pre-defined classes. In this study, we propose a novel
scheme for acoustic scene classification which adopts an audio tagging system
inspired by the human perception mechanism. When humans identify an acoustic
scene, the existence of different sound events provides discriminative
information which affects the judgement. The proposed framework mimics this
mechanism using various approaches. Firstly, we employ three methods to
concatenate tag vectors extracted using an audio tagging system with an
intermediate hidden layer of an acoustic scene classification system. We also
explore the multi-head attention on the feature map of an acoustic scene
classification system using tag vectors. Experiments conducted on the detection
and classification of acoustic scenes and events 2019 task 1-a dataset
demonstrate the effectiveness of the proposed scheme. Concatenation and
multi-head attention show a classification accuracy of 75.66 % and 75.58 %,
respectively, compared to 73.63 % accuracy of the baseline. The system with the
proposed two approaches combined demonstrates an accuracy of 76.75 %.
|
electrics
|
3,279 |
Deep Generative Variational Autoencoding for Replay Spoof Detection in Automatic Speaker Verification
|
eess.AS
|
Automatic speaker verification (ASV) systems are highly vulnerable to
presentation attacks, also called spoofing attacks. Replay is among the
simplest attacks to mount - yet difficult to detect reliably. The
generalization failure of spoofing countermeasures (CMs) has driven the
community to study various alternative deep learning CMs. The majority of them
are supervised approaches that learn a human-spoof discriminator. In this
paper, we advocate a different, deep generative approach that leverages from
powerful unsupervised manifold learning in classification. The potential
benefits include the possibility to sample new data, and to obtain insights to
the latent features of genuine and spoofed speech. To this end, we propose to
use variational autoencoders (VAEs) as an alternative backend for replay attack
detection, via three alternative models that differ in their
class-conditioning. The first one, similar to the use of Gaussian mixture
models (GMMs) in spoof detection, is to train independently two VAEs - one for
each class. The second one is to train a single conditional model (C-VAE) by
injecting a one-hot class label vector to the encoder and decoder networks. Our
final proposal integrates an auxiliary classifier to guide the learning of the
latent space. Our experimental results using constant-Q cepstral coefficient
(CQCC) features on the ASVspoof 2017 and 2019 physical access subtask datasets
indicate that the C-VAE offers substantial improvement in comparison to
training two separate VAEs for each class. On the 2019 dataset, the C-VAE
outperforms the VAE and the baseline GMM by an absolute 9 - 10% in both equal
error rate (EER) and tandem detection cost function (t-DCF) metrics. Finally,
we propose VAE residuals - the absolute difference of the original input and
the reconstruction as features for spoofing detection.
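A compact PyTorch sketch of the C-VAE variant: a one-hot class label is injected into both the encoder and the decoder, and training minimizes the usual negative ELBO. Feature, hidden and latent dimensions are illustrative, not the settings used with CQCC features in the paper.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, feat_dim, n_classes=2, latent_dim=32, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim + n_classes, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + n_classes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def forward(self, x, y_onehot):
        h = self.enc(torch.cat([x, y_onehot], dim=-1))           # label-conditioned encoder
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        x_hat = self.dec(torch.cat([z, y_onehot], dim=-1))       # label-conditioned decoder
        return x_hat, mu, logvar

def negative_elbo(x, x_hat, mu, logvar):
    recon = ((x - x_hat) ** 2).sum(dim=-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return recon + kl
```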
|
electrics
|
3,280 |
Dialect Identification of Spoken North Sámi Language Varieties Using Prosodic Features
|
eess.AS
|
This work explores the application of various supervised classification
approaches using prosodic information for the identification of spoken North
S\'ami language varieties. Dialects are language varieties that enclose
characteristics specific for a given region or community. These characteristics
reflect segmental and suprasegmental (prosodic) differences but also high-level
properties such as lexical and morphosyntactic ones. One aspect that is of
particular interest and that has not been studied extensively is how the
differences in prosody may underpin the potential differences among different
dialects. To address this, this work focuses on investigating the standard
acoustic prosodic features of energy, fundamental frequency, spectral tilt,
duration, and their combinations, using sequential and context-independent
supervised classification methods, and evaluated separately over two different
units in speech: words and syllables. The primary aim of this work is to gain a
better understanding on the role of prosody in identifying among the different
language varieties. Our results show that prosodic information holds an
important role in distinguishing between the five areal varieties of North
S\'ami where the inclusion of contextual information for all acoustic prosodic
features is critical for the identification of dialects for words and
syllables.
|
electrics
|
3,281 |
Low Latency End-to-End Streaming Speech Recognition with a Scout Network
|
eess.AS
|
The attention-based Transformer model has achieved promising results for
speech recognition (SR) in the offline mode. However, in the streaming mode,
the Transformer model usually incurs significant latency to maintain its
recognition accuracy when applying a fixed-length look-ahead window in each
encoder layer. In this paper, we propose a novel low-latency streaming approach
for Transformer models, which consists of a scout network and a recognition
network. The scout network detects the whole word boundary without seeing any
future frames, while the recognition network predicts the next subword by
utilizing the information from all the frames before the predicted boundary.
Our model achieves the best performance (2.7/6.4 WER) with only 639 ms latency
on the test-clean and test-other data sets of LibriSpeech.
|
electrics
|
3,282 |
Evaluation of Error and Correlation-Based Loss Functions For Multitask Learning Dimensional Speech Emotion Recognition
|
eess.AS
|
The choice of a loss function is a critical part of machine learning. This
paper evaluated two different loss functions commonly used in regression-task
dimensional speech emotion recognition, an error-based and a correlation-based
loss function. We found that using a correlation-based loss function with a
concordance correlation coefficient (CCC) loss resulted in better performance
than an error-based loss function with mean squared error (MSE) loss and mean
absolute error (MAE), in terms of the averaged CCC score. The results are
consistent across two input feature sets and two datasets. The scatter plots of the test predictions from those two loss functions also confirmed the results measured by CCC scores.
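For reference, the correlation-based loss compared above can be written as one minus the concordance correlation coefficient; a minimal PyTorch sketch follows (the `eps` term is an added assumption for numerical stability).

```python
import torch

def ccc_loss(pred, target, eps=1e-8):
    """1 - CCC between 1-D tensors of predicted and reference attribute values."""
    pred_mean, target_mean = pred.mean(), target.mean()
    covar = ((pred - pred_mean) * (target - target_mean)).mean()
    ccc = 2 * covar / (pred.var(unbiased=False) + target.var(unbiased=False)
                       + (pred_mean - target_mean) ** 2 + eps)
    return 1.0 - ccc
```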
|
electrics
|
3,283 |
Dual Attention in Time and Frequency Domain for Voice Activity Detection
|
eess.AS
|
Voice activity detection (VAD) is a challenging task in low signal-to-noise
ratio (SNR) environment, especially in non-stationary noise. To deal with this
issue, we propose a novel attention module that can be integrated in Long
Short-Term Memory (LSTM). Our proposed attention module refines each LSTM
layer's hidden states so as to make it possible to adaptively focus on both
time and frequency domains. Experiments are conducted under various noisy conditions using the Aurora 4 database. Our proposed method obtains a 95.58 % area under the ROC curve (AUC), achieving a 22.05 % relative improvement compared to the baseline, with only a 2.44 % increase in the number of parameters. In addition, we
utilize focal loss for alleviating the performance degradation caused by
imbalance between speech and non-speech sections in training sets. The results
show that the focal loss can improve the performance in various imbalance
situations compared to the cross entropy loss, a commonly used loss function in
VAD.
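A minimal sketch of a binary focal loss for the speech / non-speech imbalance described above; the `gamma` and `alpha` values are illustrative, and `targets` is assumed to be a float tensor of 0/1 frame labels.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss that down-weights easy frames and balances the two classes."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p_t = torch.exp(-bce)                                   # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```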
|
electrics
|
3,284 |
Mechanical classification of voice quality
|
eess.AS
|
While there is no a priori definition of good singing voices, we tend to make
consistent evaluations of the quality of singing almost instantaneously. Such
an instantaneous evaluation might be based on the sound spectrum that can be
perceived in a short time. Here we devise a Bayesian algorithm that learns to
evaluate the choral proficiency, musical scale, and gender of individual
singers using the sound spectra of singing voices. In particular, the
classification is performed on a set of sound spectral intensities, whose
frequencies are selected by minimizing the Bayes risk. This optimization allows
the algorithm to capture sound frequencies that are essential for each
discrimination task, resulting in a good assessment performance. Experimental
results revealed that a sound duration of about 0.1 sec is sufficient for
determining the choral proficiency and gender of a singer. With a program
constructed on this algorithm, everyone can evaluate choral voices of others
and perform private vocal exercises.
|
electrics
|
3,285 |
Improved Source Counting and Separation for Monaural Mixture
|
eess.AS
|
Single-channel speech separation in time domain and frequency domain has been
widely studied for voice-driven applications over the past few years. Most previous works assume that the number of speakers is known in advance; however, this is not easily obtained from a monaural mixture in practice. In this paper, we
propose a novel model of single-channel multi-speaker separation by jointly
learning the time-frequency feature and the unknown number of speakers.
Specifically, our model integrates the time-domain convolution encoded feature
map and the frequency-domain spectrogram by attention mechanism, and the
integrated features are projected into high-dimensional embedding vectors which
are then clustered with deep attractor network to modify the encoded feature.
Meanwhile, the number of speakers is counted by computing the Gerschgorin disks
of the embedding vectors which are orthogonal for different speakers. Finally,
the modified encoded feature is inverted to the sound waveform using a linear
decoder. Experimental evaluation on the GRID dataset shows that the proposed
method with a single model can accurately estimate the number of speakers with
96.7 % probability of success, while achieving the state-of-the-art separation
results on multi-speaker mixtures in terms of scale-invariant signal-to-noise
ratio improvement (SI-SNRi) and signal-to-distortion ratio improvement (SDRi).
|
electrics
|
3,286 |
On The Differences Between Song and Speech Emotion Recognition: Effect of Feature Sets, Feature Types, and Classifiers
|
eess.AS
|
In this paper, we evaluate the different features sets, feature types, and
classifiers on both song and speech emotion recognition. Three feature sets:
GeMAPS, pyAudioAnalysis, and LibROSA; two feature types: low-level descriptors
and high-level statistical functions; and four classifiers: multilayer
perceptron, LSTM, GRU, and convolution neural networks are examined on both
song and speech data with the same parameter values. The results show no
remarkable difference between song and speech data using the same method. In
addition, high-level statistical functions of acoustic features gained higher
performance scores than low-level descriptors in this classification task. This
result strengthens the previous finding on the regression task, which reported the advantage of using high-level features.
|
electrics
|
3,287 |
Subband modeling for spoofing detection in automatic speaker verification
|
eess.AS
|
Spectrograms - time-frequency representations of audio signals - have found
widespread use in neural network-based spoofing detection. While deep models
are trained on the fullband spectrum of the signal, we argue that not all
frequency bands are useful for these tasks. In this paper, we systematically
investigate the impact of different subbands and their importance on replay
spoofing detection on two benchmark datasets: ASVspoof 2017 v2.0 and ASVspoof
2019 PA. We propose a joint subband modelling framework that employs n
different sub-networks to learn subband specific features. These are later
combined and passed to a classifier and the whole network weights are updated
during training. Our findings on the ASVspoof 2017 dataset suggest that the
most discriminative information appears to be in the first and the last 1 kHz
frequency bands, and the joint model trained on these two subbands shows the
best performance outperforming the baselines by a large margin. However, these
findings do not generalise on the ASVspoof 2019 PA dataset. This suggests that
the datasets available for training these models do not reflect real world
replay conditions suggesting a need for careful design of datasets for training
replay spoofing countermeasures.
|
electrics
|
3,288 |
Using Cyclic Noise as the Source Signal for Neural Source-Filter-based Speech Waveform Model
|
eess.AS
|
Neural source-filter (NSF) waveform models generate speech waveforms by
morphing sine-based source signals through dilated convolution in the time
domain. Although the sine-based source signals help the NSF models to produce
voiced sounds with specified pitch, the sine shape may constrain the generated
waveform when the target voiced sounds are less periodic. In this paper, we
propose a more flexible source signal called cyclic noise, a quasi-periodic
noise sequence given by the convolution of a pulse train and a static random
noise with a trainable decaying rate that controls the signal shape. We further
propose a masked spectral loss to guide the NSF models to produce periodic
voiced sounds from the cyclic noise-based source signal. Results from a
large-scale listening test demonstrated the effectiveness of the cyclic noise
and the masked spectral loss on speaker-independent NSF models in
copy-synthesis experiments on the CMU ARCTIC database.
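A rough NumPy sketch of the cyclic-noise construction described above: a pulse train at the target F0 is convolved with a short random-noise segment shaped by an exponential decay. Here the decay rate is a fixed constant and F0 is constant over the segment, whereas in the paper the decay rate is trainable and F0 is time-varying.

```python
import numpy as np

def cyclic_noise(f0, fs, duration, decay=200.0, noise_len=256, seed=0):
    """Generate a quasi-periodic cyclic-noise source signal."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    period = max(int(fs / f0), 1)
    pulses = np.zeros(n)
    pulses[::period] = 1.0                                       # one impulse per pitch period
    t = np.arange(noise_len) / fs
    noise = rng.standard_normal(noise_len) * np.exp(-decay * t)  # decaying static noise
    return np.convolve(pulses, noise)[:n]
```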
|
electrics
|
3,289 |
Deep Multilayer Perceptrons for Dimensional Speech Emotion Recognition
|
eess.AS
|
Modern deep learning architectures are ordinarily performed on
high-performance computing facilities due to the large size of the input
features and complexity of its model. This paper proposes traditional
multilayer perceptrons (MLP) with deep layers and small input size to tackle
that computation requirement limitation. The result shows that our proposed
deep MLP outperformed modern deep learning architectures, i.e., LSTM and CNN,
on the same number of layers and value of parameters. The deep MLP exhibited
the highest performance on both speaker-dependent and speaker-independent
scenarios on IEMOCAP and MSP-IMPROV corpus.
|
electrics
|
3,290 |
Emotional Voice Conversion With Cycle-consistent Adversarial Network
|
eess.AS
|
Emotional Voice Conversion, or emotional VC, is a technique of converting
speech from one emotion state into another one, keeping the basic linguistic
information and speaker identity. Previous approaches for emotional VC need
parallel data and use dynamic time warping (DTW) method to temporally align the
source-target speech parameters. These approaches often define a minimum
generation loss as the objective function, such as L1 or L2 loss, to learn
model parameters. Recently, cycle-consistent generative adversarial networks
(CycleGAN) have been used successfully for non-parallel VC. This paper
investigates the efficacy of using CycleGAN for emotional VC tasks. Rather than
attempting to learn a mapping between parallel training data using a
frame-to-frame minimum generation loss, the CycleGAN uses two discriminators
and one classifier to guide the learning process, where the discriminators aim
to differentiate between the natural and converted speech and the classifier
aims to classify the underlying emotion from the natural and converted speech.
The training process of the CycleGAN models randomly pairs source-target speech
parameters, without any temporal alignment operation. The objective and
subjective evaluation results confirm the effectiveness of using CycleGAN
models for emotional VC. The non-parallel training for a CycleGAN indicates its
potential for non-parallel emotional VC.
|
electrics
|
3,291 |
Multi-Target Emotional Voice Conversion With Neural Vocoders
|
eess.AS
|
Emotional voice conversion (EVC) is one way to generate expressive synthetic
speech. Previous approaches mainly focused on modeling one-to-one mapping,
i.e., conversion from one emotional state to another emotional state, with
Mel-cepstral vocoders. In this paper, we investigate building a multi-target
EVC (MTEVC) architecture, which combines a deep bidirectional long short-term
memory (DBLSTM)-based conversion model and a neural vocoder. Phonetic
posteriorgrams (PPGs) containing rich linguistic information are incorporated
into the conversion model as auxiliary input features, which boost the
conversion performance. To leverage the advantages of the newly emerged neural
vocoders, we investigate the conditional WaveNet and flow-based WaveNet
(FloWaveNet) as speech generators. The vocoders take in additional speaker
information and emotion information as auxiliary features and are trained with
a multi-speaker and multi-emotion speech corpus. Objective metrics and
subjective evaluation of the experimental results verify the efficacy of the
proposed MTEVC architecture for EVC.
|
electrics
|
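A minimal PyTorch sketch of a DBLSTM conversion model in the spirit of the entry above: frame-level PPGs are concatenated with source acoustic features and mapped to target acoustic features. All dimensions are illustrative assumptions.

```python
# Bidirectional LSTM conversion model conditioned on PPGs (sketch).
import torch
import torch.nn as nn

class DBLSTMConverter(nn.Module):
    def __init__(self, ppg_dim=144, acoustic_dim=80, hidden=256, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(ppg_dim + acoustic_dim, hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, acoustic_dim)

    def forward(self, ppg, acoustic):
        x = torch.cat([ppg, acoustic], dim=-1)  # (batch, frames, ppg_dim + acoustic_dim)
        h, _ = self.lstm(x)
        return self.proj(h)                     # predicted target acoustic features

model = DBLSTMConverter()
out = model(torch.randn(2, 200, 144), torch.randn(2, 200, 80))
print(out.shape)  # torch.Size([2, 200, 80])
```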
3,292 |
Noise Tokens: Learning Neural Noise Templates for Environment-Aware Speech Enhancement
|
eess.AS
|
In recent years, speech enhancement (SE) has achieved impressive progress
with the success of deep neural networks (DNNs). However, the DNN approach
usually fails to generalize well to unseen environmental noise that is not
included in the training. To address this problem, we propose "noise tokens"
(NTs), which are a set of neural noise templates that are jointly trained with
the SE system. NTs dynamically capture the environmental variability, enabling
the DNN model to handle various environments and produce STFT magnitudes of
higher quality. Experimental results show that using NTs is an effective
strategy that consistently improves the generalization ability of SE systems
across different DNN architectures. Furthermore, we investigate applying a
state-of-the-art neural vocoder to generate waveform instead of traditional
inverse STFT (ISTFT). Subjective listening tests show the residual noise can be
significantly suppressed through mel-spectrogram correction and vocoder-based
waveform synthesis.
|
electrics
|
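A hedged PyTorch sketch of the "noise tokens" idea described above: a small bank of trainable noise templates is attended to by a reference embedding (e.g. from a noise encoder), yielding an environment embedding that conditions the enhancement network. Token count and dimensions are assumptions.

```python
# Noise-token bank with attention-based selection (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseTokens(nn.Module):
    def __init__(self, n_tokens=10, token_dim=128, ref_dim=128):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, token_dim))  # trainable templates
        self.query = nn.Linear(ref_dim, token_dim)

    def forward(self, ref_embedding):                  # (batch, ref_dim) from a noise encoder
        q = self.query(ref_embedding)                  # (batch, token_dim)
        attn = F.softmax(q @ self.tokens.t() / self.tokens.size(1) ** 0.5, dim=-1)
        return attn @ self.tokens                      # (batch, token_dim) environment embedding

nt = NoiseTokens()
env = nt(torch.randn(4, 128))
print(env.shape)  # torch.Size([4, 128])
```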
3,293 |
An investigation of phone-based subword units for end-to-end speech recognition
|
eess.AS
|
Phones and their context-dependent variants have been the standard modeling
units for conventional speech recognition systems, while characters and
subwords have demonstrated their effectiveness for end-to-end recognition
systems. We investigate the use of phone-based subwords, in particular byte
pair encoding (BPE), as modeling units for end-to-end speech recognition. In
addition, we develop multi-level, language-model-based decoding algorithms
built on a pronunciation dictionary. Beyond the lexicon, which is easily
available, our system avoids the need for additional expert knowledge or
processing steps required by conventional systems. Experimental results
show that phone-based BPEs tend to yield more accurate recognition systems than
the character-based counterpart. In addition, further improvement can be
obtained with a novel one-pass joint beam search decoder, which efficiently
combines phone- and character-based BPE systems. For Switchboard, our
phone-based BPE system achieves 6.8%/14.4% word error rate (WER) on the
Switchboard/CallHome portion of the test set, while joint decoding achieves
6.3%/13.3% WER. On Fisher + Switchboard, joint decoding leads to 4.9%/9.5%
WER, setting new milestones for telephony speech recognition.
|
electrics
|
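A self-contained toy of phone-level BPE in the spirit of the entry above: the most frequent adjacent phone pair is merged repeatedly into a new subword unit. A real system would train a tokenizer on lexicon-derived phone transcripts; the phone sequences and merge count here are purely illustrative.

```python
# Toy BPE merge learning over phone sequences.
from collections import Counter

def learn_phone_bpe(corpus, num_merges=3):
    """corpus: list of utterances, each a list of phone symbols."""
    corpus = [list(utt) for utt in corpus]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for utt in corpus:
            pairs.update(zip(utt, utt[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]   # most frequent adjacent phone pair
        merges.append((a, b))
        merged = a + "+" + b
        for utt in corpus:
            i = 0
            while i < len(utt) - 1:
                if utt[i] == a and utt[i + 1] == b:
                    utt[i:i + 2] = [merged]   # replace the pair with the new subword unit
                i += 1
    return merges, corpus

utts = [["HH", "AH", "L", "OW"], ["HH", "AH", "L", "OW", "W", "ER", "L", "D"]]
merges, segmented = learn_phone_bpe(utts)
print(merges)      # learned merge operations (most frequent adjacent phone pairs)
print(segmented)   # utterances re-segmented into phone-based subword units
```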
3,294 |
Att-HACK: An Expressive Speech Database with Social Attitudes
|
eess.AS
|
This paper presents Att-HACK, the first large database of acted speech with
social attitudes. Available databases of expressive speech are rare and very
often restricted to the primary emotions: anger, joy, sadness, fear. This
greatly limits the scope of the research on expressive speech. Besides, a
fundamental aspect of speech prosody is always ignored and missing from such
databases: its variety, i.e. the possibility to repeat an utterance while
varying its prosody. This paper represents a first attempt to widen the scope
of expressivity in speech, by providing a database of acted speech with social
attitudes: friendly, seductive, dominant, and distant. The proposed database
comprises 25 speakers interpreting 100 utterances in 4 social attitudes, with
3-5 repetitions per attitude, for a total of around 30 hours of speech.
Att-HACK is freely available for academic research under a Creative Commons
licence.
|
electrics
|
3,295 |
MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition
|
eess.AS
|
We present MatchboxNet, an end-to-end neural network for speech command
recognition. MatchboxNet is a deep residual network composed of blocks of 1D
time-channel separable convolution, batch normalization, ReLU and dropout
layers. MatchboxNet reaches state-of-the-art accuracy on the Google Speech
Commands dataset while having significantly fewer parameters than similar
models. The small footprint of MatchboxNet makes it an attractive candidate for
devices with limited computational resources. The model is highly scalable, so
model accuracy can be improved with modest additional memory and compute.
Finally, we show how intensive data augmentation using an auxiliary noise
dataset improves robustness in the presence of background noise.
|
electrics
|
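A minimal PyTorch sketch of the 1D time-channel separable convolution block described above: a depthwise (per-channel, time-wise) convolution followed by a pointwise 1x1 convolution, then batch normalization, ReLU and dropout. Channel counts, kernel size, and dropout rate are illustrative, not the published MatchboxNet configuration.

```python
# 1D time-channel separable convolution block (sketch).
import torch
import torch.nn as nn

class TCSConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=13, dropout=0.1):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)  # time-wise, per channel
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)            # channel mixing
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU()
        self.drop = nn.Dropout(dropout)

    def forward(self, x):                      # x: (batch, channels, time)
        x = self.pointwise(self.depthwise(x))
        return self.drop(self.act(self.bn(x)))

block = TCSConvBlock(64, 64)
y = block(torch.randn(8, 64, 101))             # e.g. 101 feature frames
print(y.shape)                                  # torch.Size([8, 64, 101])
```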
3,296 |
Towards Fast and Accurate Streaming End-to-End ASR
|
eess.AS
|
End-to-end (E2E) models fold the acoustic, pronunciation and language models
of a conventional speech recognition model into one neural network with a much
smaller number of parameters than a conventional ASR system, thus making it
suitable for on-device applications. For example, recurrent neural network
transducer (RNN-T) as a streaming E2E model has shown promising potential for
on-device ASR. For such applications, quality and latency are two critical
factors. We propose to reduce E2E model's latency by extending the RNN-T
endpointer (RNN-T EP) model with additional early and late penalties. By
further applying the minimum word error rate (MWER) training technique, we
achieved an 8.0% relative word error rate (WER) reduction and a 130 ms
90th-percentile latency reduction on a Voice Search test set. We also
experimented with a second-pass Listen, Attend and Spell (LAS) rescorer.
Although it did not directly improve the first-pass latency, the large WER
reduction provides extra room to trade WER for latency. RNN-T EP + LAS,
together with MWER training, brings an 18.7% relative WER reduction and a
160 ms 90th-percentile latency reduction compared to the originally proposed
RNN-T EP model.
|
electrics
|
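A heavily simplified sketch of the early/late endpointer penalties described above: the frame at which the model emits the end-of-query symbol is penalised if it falls before the end of the final word or more than a small buffer after it. The buffer size, weights, and the per-utterance (rather than per-frame) formulation are illustrative assumptions, not the paper's exact training objective.

```python
# Hinge-style early/late penalties on the end-of-query emission frame (sketch).
import torch

def endpoint_penalty(emit_frame, last_speech_frame, t_buffer=10, alpha=1.0, beta=1.0):
    """emit_frame: frame where </s> is emitted; last_speech_frame: reference
    frame of the final word (e.g. from forced alignment). All values assumed."""
    early = torch.clamp(last_speech_frame - emit_frame, min=0)                 # fired too early
    late = torch.clamp(emit_frame - (last_speech_frame + t_buffer), min=0)     # fired too late
    return alpha * early + beta * late

print(endpoint_penalty(torch.tensor(42.0), torch.tensor(50.0)))  # early penalty: tensor(8.)
print(endpoint_penalty(torch.tensor(70.0), torch.tensor(50.0)))  # late penalty: tensor(10.)
```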
3,297 |
Can Speaker Augmentation Improve Multi-Speaker End-to-End TTS?
|
eess.AS
|
Previous work on speaker adaptation for end-to-end speech synthesis still
falls short in speaker similarity. We investigate an orthogonal approach to the
current speaker adaptation paradigms, speaker augmentation, by creating
artificial speakers and by taking advantage of low-quality data. The base
Tacotron2 model is modified to account for the channel and dialect factors
inherent in these corpora. In addition, we describe a warm-start training
strategy that we adopted for Tacotron2 training. A large-scale listening test
is conducted, and a distance metric is adopted to evaluate synthesis of
dialects. This is followed by an analysis on synthesis quality, speaker and
dialect similarity, and a remark on the effectiveness of our speaker
augmentation approach. Audio samples are available online.
|
electrics
|
3,298 |
Scyclone: High-Quality and Parallel-Data-Free Voice Conversion Using Spectrogram and Cycle-Consistent Adversarial Networks
|
eess.AS
|
This paper proposes Scyclone, a high-quality voice conversion (VC) technique
without parallel data training. Scyclone improves speech naturalness and
speaker similarity of the converted speech by introducing CycleGAN-based
spectrogram conversion with a simplified WaveRNN-based vocoder. In Scyclone, a
linear spectrogram is used as the conversion features instead of vocoder
parameters, which avoids quality degradation due to extraction errors in
fundamental frequency and voiced/unvoiced parameters. The spectrograms of the
source and target speakers are modeled by modified CycleGAN networks, and the waveform
is reconstructed using the simplified WaveRNN with a single Gaussian
probability density function. The subjective experiments with completely
unpaired training data show that Scyclone is significantly better than
CycleGAN-VC2, one of the existing state-of-the-art parallel-data-free VC
techniques.
|
electrics
|
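A minimal sketch of extracting the linear-spectrogram conversion features the entry above relies on, instead of vocoder parameters such as F0 and voiced/unvoiced decisions. The FFT size and hop length are illustrative choices, and the input is a dummy signal.

```python
# Linear magnitude spectrogram as conversion features (sketch).
import torch

waveform = torch.randn(16000)                      # 1 s of audio at 16 kHz (dummy signal)
spec = torch.stft(waveform, n_fft=512, hop_length=128,
                  window=torch.hann_window(512), return_complex=True)
linear_spectrogram = spec.abs()                    # (freq_bins, frames) magnitude features
print(linear_spectrogram.shape)                    # torch.Size([257, 126])
```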
3,299 |
Cross-Language Transfer Learning, Continuous Learning, and Domain Adaptation for End-to-End Automatic Speech Recognition
|
eess.AS
|
In this paper, we demonstrate the efficacy of transfer learning and
continuous learning for various automatic speech recognition (ASR) tasks. We
start with a pre-trained English ASR model and show that transfer learning can
be effectively and easily performed on: (1) different English accents, (2)
different languages (German, Spanish and Russian) and (3) application-specific
domains. Our experiments demonstrate that in all three cases, transfer learning
from a good base model has higher accuracy than a model trained from scratch.
Fine-tuning large pre-trained models is preferable to fine-tuning small ones,
even if the dataset for fine-tuning is small. Moreover, transfer learning
significantly speeds up convergence for both very small and very large target
datasets.
|
electrics
|
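A generic PyTorch sketch of the transfer-learning recipe described above: initialise from a pre-trained English acoustic model, swap the output layer for the target language's symbol set, and fine-tune with a small learning rate. The model structure, dimensions, and the commented-out checkpoint name are hypothetical placeholders, not the paper's architecture.

```python
# Fine-tuning a pre-trained encoder with a new output head (sketch).
import torch
import torch.nn as nn

encoder = nn.LSTM(input_size=80, hidden_size=512, num_layers=4, batch_first=True)
decoder = nn.Linear(512, 29)                       # pre-training head (e.g. English graphemes)
# state = torch.load("english_asr.pt")             # hypothetical checkpoint
# encoder.load_state_dict(state["encoder"])

decoder = nn.Linear(512, 34)                       # new head, e.g. a German grapheme set (assumed size)
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)      # smaller LR than training from scratch

feats = torch.randn(4, 200, 80)                    # batch of log-mel feature sequences
hidden, _ = encoder(feats)
logits = decoder(hidden)
print(logits.shape)                                # torch.Size([4, 200, 34])
```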