Title: A Deep Neural Network Approach to Speech Bandwidth Expansion. Authors: Kehuang Li, Chin-Hui Lee (2015). Abstract: This paper proposes a deep neural network (DNN) based approach to speech bandwidth expansion (BWE). Using log spectral power as the input and output features for the required nonlinear transformation, a neural network is trained to realize this high-dimensional mapping function. When the approach was evaluated on a large 10-hour test set, we found that, compared with conventional BWE based on Gaussian mixture models (GMMs), the DNN-expanded speech signals achieve good objective quality in terms of signal-to-noise ratio and log spectral distortion. Assuming the phase information is known, subjective listening tests on the DNN-exp…
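The snippet above describes a DNN trained to map narrowband log-power-spectrum frames to the missing high-band spectrum. As a minimal sketch of that kind of mapping (not the paper's exact architecture; the PyTorch framework, feature dimensions, and layer sizes below are my own illustrative assumptions):

```python
# Minimal sketch (not the paper's architecture): a feed-forward DNN that maps
# narrowband log-power-spectrum frames to estimated high-band log-power bins.
# Dimensions and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

NB_BINS = 129   # assumed narrowband log-spectral bins per frame
HB_BINS = 128   # assumed high-band bins to be estimated

class BWEMapper(nn.Module):
    def __init__(self, hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NB_BINS, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, HB_BINS),   # regression output: high-band log power
        )

    def forward(self, nb_logspec):        # (batch, NB_BINS)
        return self.net(nb_logspec)       # (batch, HB_BINS)

model = BWEMapper()
loss_fn = nn.MSELoss()                    # mean-squared error on log spectra
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# one illustrative training step on random stand-in data
x = torch.randn(32, NB_BINS)              # narrowband log-power frames
y = torch.randn(32, HB_BINS)              # target high-band log-power frames
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```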
Problem: high-dimensional time series forecasting. What does "high-dimensional" mean here? One dimension for each individual time series, so n time series form an n-dimensional series. There is a need to exploit global patterns and couple them with local calibrati…
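To make the terminology concrete: a toy NumPy illustration of n series stacked as an n-dimensional series, split into a shared (global) component plus per-series (local) deviations. The naive decomposition and forecast are illustrative assumptions, not a method from the snippet above.

```python
# Toy illustration: n individual series = an n-dimensional time series.
# The "global pattern + local calibration" split here is deliberately naive.
import numpy as np

n, T = 5, 100                               # n series, T time steps
Y = np.random.randn(n, T).cumsum(axis=1)    # stand-in data: n random walks

global_pattern = Y.mean(axis=0)             # shared (global) component across series
local_offset = Y - global_pattern           # per-series (local) deviations

# a naive forecast: persist the global pattern's last value, calibrated per series
forecast_next = global_pattern[-1] + local_offset[:, -1]   # shape (n,)
print(forecast_next.shape)
```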
Paper: DNN-based speech bandwidth extension and its application to adding missing high-frequency features for narrowband automatic speech recognition. Code: github. Blog author: 凌逆战. Blog: https://www.cnblogs.com/LXP-Never/p/12361112.html. Abstract: We propose several enhancement techniques to improve speech quality in narrowband-to-wideband bandwidth extension (BWE), addressing three issues that can be critical in practical applications: (1) the discontinuity between the narrowband spectrum and the estimated high-frequency spectrum, (2) the energy mismatch between test and training utterances, and (3) bandwidth extension of out-of-domain speech signals. Through the high-frequency features missing from bandwidth-extended speech…
Paper: PACDNN: A phase-aware composite deep neural network for speech enhancement. Citation: Hasannezhad M, Yu H, Zhu W P, et al. PACDNN: A phase-aware composite deep neural network for speech enhancement[J]. Speech Communication, 2022, 136: 1-13. Abstract: Most current approaches to speech enhancement with deep neural networks (DNNs) face some limitations: they do not exploit the information in the phase spectrum, and their high computational…
Paper: A convolutional recurrent neural network for end-to-end speech enhancement. Code: https://github.com/aleXiehta/WaveCRN. Citation: Hsieh T A, Wang H M, Lu X, et al. WaveCRN: An efficient convolutional recurrent neural network for end-to-end speech enhancement[J]. IEEE Signal Processing Letters, 2020, 27: 2149…
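For a sense of the convolutional-recurrent, waveform-in/waveform-out idea, here is a minimal sketch. It is NOT the official WaveCRN implementation (see the repo linked above); the layer sizes, the plain LSTM, and the mask-free decoder are simplifying assumptions of mine.

```python
# Minimal convolutional-recurrent sketch for waveform-in/waveform-out enhancement.
# Not the official WaveCRN code; sizes and structure are illustrative assumptions.
import torch
import torch.nn as nn

class ConvRecurrentEnhancer(nn.Module):
    def __init__(self, channels=64, hidden=128):
        super().__init__()
        # 1-D conv encoder turns the raw waveform into a frame-rate feature sequence
        self.encoder = nn.Conv1d(1, channels, kernel_size=16, stride=8, padding=4)
        # recurrent layer models temporal context across encoded frames
        self.rnn = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, channels)
        # transposed conv decoder maps features back to a waveform
        self.decoder = nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=8, padding=4)

    def forward(self, wav):                       # wav: (batch, 1, samples)
        feats = torch.relu(self.encoder(wav))     # (batch, C, frames)
        seq, _ = self.rnn(feats.transpose(1, 2))  # (batch, frames, 2*hidden)
        feats = self.proj(seq).transpose(1, 2)    # (batch, C, frames)
        return self.decoder(feats)                # (batch, 1, samples)

noisy = torch.randn(2, 1, 16000)                  # 1 second at 16 kHz, stand-in data
enhanced = ConvRecurrentEnhancer()(noisy)
print(enhanced.shape)
```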
Deep Learning: Assuming a deep neural network is properly regularized, can adding more layers actually make its performance degrade? I find this really puzzling. A deeper NN is supposed to be more powerful, or at least no worse, than a shallower NN. I…
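The answer is not contained in the snippet above, but the commonly cited observation (e.g. in the ResNet paper by He et al.) is that very deep plain stacks can be harder to optimize, and identity shortcuts mitigate this because each block can fall back to roughly the identity mapping. A minimal residual-block sketch, with illustrative sizes:

```python
# Minimal residual-block sketch: the identity shortcut lets each block default to
# (approximately) the identity, so stacking more blocks should not hurt optimization
# the way plain stacked layers can. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return torch.relu(x + self.f(x))   # output = input + learned residual

x = torch.randn(8, 256)
deep = nn.Sequential(*[ResidualBlock() for _ in range(20)])   # 20 stacked blocks
print(deep(x).shape)
```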
Paper: TCNN: Temporal convolutional neural network for real-time speech enhancement in the time domain. Code: https://github.com/LXP-Never/TCNN (unofficial reimplementation). Citation: Pandey A, Wang D L. TCNN: Temporal convolutional neural network for real-time speech enhancement in the time domain[C]//ICASSP 2019-2019 IEEE International Conference on Ac…
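The key ingredient of temporal convolutional models for real-time use is causal (dilated) convolution: each output sample depends only on past input. A minimal sketch, not the TCNN paper's exact architecture or the linked reimplementation; channel counts and dilation schedule are assumptions.

```python
# Minimal causal dilated temporal-convolution sketch. Causal left-padding keeps each
# output frame dependent only on past frames, which real-time processing requires.
import torch
import torch.nn as nn

class CausalDilatedConv(nn.Module):
    def __init__(self, channels=32, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left-pad only => causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                                # x: (batch, C, time)
        x = nn.functional.pad(x, (self.pad, 0))          # pad the past, not the future
        return torch.relu(self.conv(x))

stack = nn.Sequential(*[CausalDilatedConv(dilation=2 ** i) for i in range(6)])
frames = torch.randn(1, 32, 1000)                        # stand-in feature sequence
print(stack(frames).shape)                               # time length is preserved
```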
XiangBai -- [AAAI2017] TextBoxes: A Fast Text Detector with a Single Deep Neural Network. Contents: authors and related links; method overview; novelty and contributions; method details; experimental results; summary and takeaways. Authors and related links: authors, paper download: 廖明辉, 石葆光, 白翔, 王兴刚, 刘文予; code download. Method overview: the core of the paper is a modified SSD applied to text detection. End-to-end recognition pipeline: Step 1: feed the image into the modified SSD network + non-maximum suppression (NMS) →…
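Step 1 of the pipeline above ends with non-maximum suppression. A minimal, generic NMS sketch (not the paper's code); axis-aligned boxes and the 0.5 IoU threshold are assumptions rather than the paper's settings.

```python
# Minimal generic non-maximum suppression (NMS) for boxes given as (x1, y1, x2, y2).
import numpy as np

def iou(box, boxes):
    # intersection-over-union of one box against an array of boxes
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, thresh=0.5):
    order = np.argsort(scores)[::-1]          # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # drop remaining boxes that overlap the kept box too much
        order = order[1:][iou(boxes[i], boxes[order[1:]]) <= thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))                     # -> [0, 2]
```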
The state of the art for non-linearities is to use ReLU instead of the sigmoid function in deep neural networks. What are the advantages? I know that training a network with ReLU is faster, and that it is more biologically inspired; what are the other…
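One commonly cited advantage beyond speed is gradient behaviour: sigmoid gradients saturate toward zero for large |x| (and never exceed 0.25), while ReLU's gradient is exactly 1 for positive inputs, which helps gradients propagate through many layers. A tiny numeric check:

```python
# Compare gradients of sigmoid and ReLU at a few sample inputs.
import torch

x = torch.tensor([-6.0, -2.0, 0.5, 2.0, 6.0], requires_grad=True)

torch.sigmoid(x).sum().backward()
print("sigmoid grads:", x.grad)       # <= 0.25 everywhere, ~0 at the extremes

x.grad = None
torch.relu(x).sum().backward()
print("relu grads:   ", x.grad)       # 0 for negative inputs, exactly 1 for positive
```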
Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation …