Attachment 1: Translated foreign material

Improved speech recognition method for intelligent robot

2 Overview of speech recognition

Speech recognition has received more and more attention recently because of its important theoretical meaning and practical value. Up to now, most speech recognition has been based on conventional linear-system theory, such as the Hidden Markov Model (HMM) and Dynamic Time Warping (DTW). With the deepening study of speech recognition, it has been found that the speech signal is a complex nonlinear process; if research on speech recognition is to achieve a breakthrough, nonlinear-system theory must be introduced. Recently, with the development of nonlinear-system theories such as artificial neural networks (ANN), chaos and fractals, it has become possible to apply these theories to speech recognition. Therefore, the study in this paper is based on ANN, chaos and fractal theories, which are introduced into the speech recognition process.

Speech recognition can be divided into speaker-dependent and speaker-independent approaches. Speaker-dependent means that the pronunciation model is trained by a single person; it recognizes the training person's commands quickly, but recognizes other people's commands slowly or not at all. Speaker-independent means that the pronunciation model is trained by people of different ages, sexes and regions, so it can recognize the commands of a whole group. Generally, speaker-independent systems are more widely used, since the user is not required to conduct the training. Extracting speaker-independent features from the speech signal is therefore a fundamental problem for speech recognition systems.

Speech recognition, which includes training and recognition, can be viewed as a pattern recognition task. Generally, the speech signal can be viewed as a time sequence characterized by a hidden Markov model (HMM). Through feature extraction, the speech signal is transformed into feature vectors that act as observations. In the training procedure, these observations are fed into the estimation of the HMM model parameters. These parameters include the probability density functions of the observations and their corresponding states, the transition probabilities between the states, and so on. After parameter estimation, the trained model can be applied to the recognition task: the input signal is recognized as the resulting words, and the accuracy can be evaluated. The whole process is illustrated in Fig. 1.

Fig. 1 Block diagram of the speech recognition system

3 Theory and method

Extracting speaker-independent features from the speech signal is a fundamental problem for speaker recognition systems. The most popular methods for solving this problem are Linear Predictive Cepstral Coefficients (LPCC) and Mel-Frequency Cepstral Coefficients (MFCC). Both are linear procedures based on the assumption that speaker features arise from vocal tract resonances. These features form the basic spectral structure of the speech signal. However, the nonlinear information in speech signals is not easily extracted by these feature extraction methods, so we use the fractal dimension to measure nonlinear speech turbulence. This paper investigates and implements a speech recognition system using both the traditional LPCC and a nonlinear multiscale fractal dimension for feature extraction.

3.1 Linear Predictive Cepstral Coefficients

The linear prediction coefficients (LPC) are the parameters obtained from linear prediction analysis of speech; they describe the correlation between adjacent speech samples. Linear prediction analysis is based on the following basic concept: a speech sample can be approximated by a linear combination of several past samples. Applying the least-squares principle to the difference between the real speech samples in a given (short-time) analysis frame and the predicted samples determines a unique set of prediction coefficients.

The LPC coefficients can be used to estimate the cepstrum of the speech signal; this is a special processing method in short-time cepstral analysis of speech. The system function of the channel model is obtained by linear prediction analysis as

H(z) = 1 / (1 − Σ_{k=1}^{p} a_k z^{−k}),  (1)

where p is the linear prediction order and a_k (k = 1, 2, …, p) are the prediction coefficients. Let the impulse response be h(n) and let its cepstrum be ĉ(n). Then (1) can be expanded as

log H(z) = Σ_{n=1}^{∞} ĉ(n) z^{−n}.  (2)

Substituting (1) into (2) and differentiating both sides with respect to z^{−1} turns (2) into (3); equating coefficients of like powers of z^{−1} then gives equation (4), from which ĉ(n) can be computed recursively from the a_k:

ĉ(1) = a_1,
ĉ(n) = a_n + Σ_{k=1}^{n−1} (k/n) ĉ(k) a_{n−k},  1 < n ≤ p,
ĉ(n) = Σ_{k=n−p}^{n−1} (k/n) ĉ(k) a_{n−k},  n > p.  (5)

The cepstrum coefficients computed by (5) are called LPCC, and n is the LPCC order.

Before extracting the LPCC parameters, the speech signal should undergo pre-emphasis, framing, windowing and endpoint detection. The endpoint detection of the Chinese command word "Forward" is shown in Fig. 2, and the speech waveform of "Forward" and the LPCC parameter waveform after endpoint detection are shown in Fig. 3.

Fig. 2 Endpoint detection of the Chinese command word "Forward"

Fig. 3 Speech waveform of the Chinese command word "Forward" and LPCC parameter waveform after endpoint detection
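The recursion in (5) can be sketched in Python. This is a minimal illustration under the A(z) = 1 − Σ a_k z^{−k} sign convention used above; the function name and the sample coefficients are ours, not the paper's (other texts flip the sign of the a_k, which changes the signs of the resulting cepstra):

```python
import numpy as np

def lpc_to_lpcc(a, n_ceps):
    """Convert LPC coefficients a = [a_1, ..., a_p] into n_ceps LPCC
    coefficients [c(1), ..., c(n_ceps)] via the recursion of eq. (5)."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        if n <= p:
            acc = a[n - 1]          # c(n) starts from a_n when n <= p
            ks = range(1, n)
        else:
            acc = 0.0               # beyond order p only the sum remains
            ks = range(n - p, n)
        for k in ks:
            acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

# Hypothetical LPC coefficients (p = 3), expanded to 14 cepstra as in
# the 14-D frame parameters described in section 3.3.
ceps = lpc_to_lpcc([1.2, -0.5, 0.1], 14)
```

For a single-pole model, H(z) = 1/(1 − a_1 z^{−1}) gives ĉ(n) = a_1^n / n, which the recursion reproduces and which makes a convenient sanity check.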
3.2 Speech fractal dimension computation

The fractal dimension is a quantity derived from the scaling relation of a fractal, and is also a measure of the self-similarity of its structure; the measure of a fractal is its fractal dimension [6-7]. From the viewpoint of measurement, the fractal dimension extends dimension from integers to fractions, breaking the limitation of the general topological set dimension being an integer; fractional dimension is mostly an extension of dimension in Euclidean geometry. There are many definitions of fractal dimension, e.g. similarity dimension, Hausdorff dimension, information dimension, correlation dimension, capacity dimension, box-counting dimension, etc. Among them, the Hausdorff dimension is the oldest and also the most important. It is defined as

D(F) = lim_{δ→0} ln M_δ(F) / ln(1/δ),

where M_δ(F) denotes how many units of size δ are needed to cover the subset F. After endpoint detection, the speech waveform of the Chinese command word "Forward" and the fractal dimension waveform are shown in Fig. 4.

Fig. 4 Speech waveform of the Chinese command word "Forward" and fractal dimension waveform after endpoint detection

3.3 Improved feature extraction method

Considering the respective advantages of LPCC and the fractal dimension in representing the speech signal, we mix the two in feature extraction: the fractal dimension represents the self-similarity, periodicity and randomness of the speech waveform in time, while the LPCC feature performs well on speech quality and recognition rate. Because of the obvious advantages of artificial neural networks (nonlinearity, adaptivity and strong self-learning ability), their good classification and input-output mapping abilities make them well suited to the speech recognition problem.

Since the number of ANN input nodes is fixed, the feature parameters are time-normalized before being input to the neural network [9]. In our experiments, the LPCC and the fractal dimension of each sample are passed separately through the time-normalization network. The LPCC is 4 frames of data (LPCC1, LPCC2, LPCC3, LPCC4, each parameter 14-dimensional), and the fractal dimension is normalized to 12 frames of data (FD1, FD2, …, FD12, each parameter 1-dimensional), so that the feature vector of each sample has 4*14 + 12*1 = 68 dimensions; the order is that the first 56 dimensions are LPCC and the remaining 12 are fractal dimensions. Such a feature vector can therefore represent both the linear and nonlinear characteristics of the speech signal.

Architectures and features of ASR

Automatic speech recognition (ASR) is a cutting-edge technology that allows a computer, or even a hand-held PDA (Myers, 2000), to identify words that are read aloud or spoken into any sound-recording device. The ultimate goal of ASR technology is 100% accuracy for words spoken intelligibly by any person, regardless of vocabulary size, background noise or speaker variables (CSLU, 2002). However, most ASR engineers admit that the current accuracy level for a large vocabulary unit of speech remains below 90%. Dragon's Naturally Speaking and IBM's ViaVoice, for example, show a baseline recognition accuracy of only 60% to 80%, depending on accent, background noise and style of speech (Ehsani & Knodt, 1998). More expensive systems reported to outperform these two are Subarashii (Bernstein, et al., 1999), EduSpeak (Franco, et al., 2001), Phonepass (Hinks, 2001), the ISLE Project (Menzel, et al., 2001) and RAD (CSLU, 2003). The accuracy of speech recognition is expected to improve.

Among the speech recognition approaches used in ASR products, the Hidden Markov Model (HMM) is considered the dominant algorithm and has proven the most effective for large vocabularies of speech (Ehsani & Knodt, 1998). A detailed description of how the HMM works is beyond the scope of this paper, but can be found in any text on language processing; among the best are Jurafsky & Martin (2000) and Hosom, Cole, and Fanty (2003). Put simply, an HMM computes the probable match between the input it receives and the phonemes contained in a database of hundreds of native-speaker recordings (Hinks, 2003, p. 5). That is, a speech recognizer based on an HMM computes, on the basis of probability theory, how close the phonemes of a spoken input come to a corresponding model. High likelihood means good pronunciation; low likelihood means poor pronunciation (Larocca, et al., 1991).

Although speech recognition has commonly been used for purposes such as business dictation and special-needs accommodation, its share of the language-learning market has increased sharply in recent years (Aist, 1999; Eskenazi, 1999; Hinks, 2003). Early ASR-based software programs adopted template-based recognition systems, which performed pattern matching using dynamic programming or other time-normalization techniques (Dalby & Kewley-Port, 1999). These programs include Talk to Me (Auralog, 1995), the Tell Me More Series (Auralog, 2000), Triple-Play Plus (Mackey & Choi, 1998), New Dynamic English (DynEd, 1997), English Discoveries (Edusoft, 1998), and See it, Hear It, SAY IT! (CPI, 1997). Most of these programs give no feedback on pronunciation accuracy beyond a simple indication of the closest pattern match to one of the written dialogue choices offered to the user; learners are not told how accurate their pronunciation is. In particular, a 2002 review criticized the waveform displays in products such as Talk to Me and Tell Me More, arguing that they are meant to impress buyers rather than to give users meaningful feedback. The 2002 version of Talk to Me already includes more of the features that Hinks (2003) considers useful to learners, for example:

- a visual signal lets learners compare their intonation with that of the model speaker;
- the accuracy of the learner's pronunciation is scored numerically out of 7 (the higher the better);
- words whose pronunciation deviates are identified and clearly marked.

Attachment 2: Original foreign text (photocopy)

Improved speech recognition method for intelligent robot

2 Overview of speech recognition

Speech recognition has
received more and more attention recently due to its important theoretical meaning and practical value [5]. Up to now, most speech recognition is based on conventional linear-system theory, such as the Hidden Markov Model (HMM) and Dynamic Time Warping (DTW). With the deep study of speech recognition, it is found that the speech signal is a complex nonlinear process. If the study of speech recognition is to break through, nonlinear-system theory must be introduced to it. Recently, with the development of nonlinear-system theories such as artificial neural networks (ANN), chaos and fractals, it is possible to apply these theories to speech recognition. Therefore, the study of this paper is based on ANN, and chaos and fractal theories are introduced to process speech recognition.

Speech recognition is divided into two ways, speaker dependent and speaker independent. Speaker dependent refers to a pronunciation model trained by a single person; the identification rate for the training person's orders is high, while others' orders are identified at a low rate or cannot be recognized at all. Speaker independent refers to a pronunciation model trained by persons of different age, sex and region; it can identify the orders of a group of persons. Generally, the speaker-independent system is more widely used, since the user is not required to conduct the training. So extraction of speaker-independent features from the speech signal is the fundamental problem of speaker recognition systems.

Speech recognition can be viewed as a pattern recognition task, which includes training and recognition. Generally, the speech signal can be viewed as a time sequence and characterized by the powerful hidden Markov model (HMM). Through feature extraction, the speech signal is transferred into feature vectors which act as observations. In the training procedure, these observations are fed to estimate the model parameters of the HMM. These parameters include the probability density function for the observations and their corresponding states, the transition probability between the states, etc. After the parameter estimation, the trained models can be used for the recognition task. The input observations will be recognized as the resulting words, and the accuracy can be evaluated. The whole process is illustrated in Fig. 1.

Fig. 1 Block diagram of speech recognition system

3 Theory and method

Extraction of speaker-independent features from the speech signal is the fundamental problem of speaker recognition systems. The standard methodology for solving this problem uses Linear Predictive Cepstral Coefficients (LPCC) and Mel-Frequency Cepstral Coefficients (MFCC). Both these methods are linear procedures based on the assumption that speaker features have properties caused by the vocal tract resonances. These features form the basic spectral structure of the speech signal. However, the nonlinear information in speech signals is not easily extracted by the present feature extraction methodologies. So we use the fractal dimension to measure nonlinear speech turbulence. This paper investigates and implements a speaker identification system using both traditional LPCC and nonlinear multiscaled fractal dimension feature extraction.

3.1 Linear Predictive Cepstral Coefficients

The linear prediction coefficient (LPC) is a parameter set obtained when we do linear prediction analysis of speech. It concerns the correlation characteristics between adjacent speech samples. Linear prediction analysis is based on the following basic concept: a speech sample can be estimated approximately by a linear combination of some past speech samples. According to the principle of minimizing the sum of squared differences between the real speech samples in a certain (short-time) analysis frame and the predicted samples, the unique group of prediction coefficients can be determined.

The LPC coefficients can be used to estimate the speech signal cepstrum. This is a special processing method in the analysis of the speech signal short-time cepstrum. The system function of the channel model is obtained by linear prediction analysis as

H(z) = 1 / (1 − Σ_{k=1}^{p} a_k z^{−k}),  (1)

where p represents the linear prediction order and a_k (k = 1, 2, …, p) represent the prediction coefficients. The impulse response is represented by h(n); suppose the cepstrum of h(n) is represented by ĉ(n). Then (1) can be expanded as

log H(z) = Σ_{n=1}^{∞} ĉ(n) z^{−n}.  (2)

Substituting (1) into (2), differentiating both sides with respect to z^{−1}, and equating coefficients of like powers gives the recursion

ĉ(1) = a_1,
ĉ(n) = a_n + Σ_{k=1}^{n−1} (k/n) ĉ(k) a_{n−k},  1 < n ≤ p,
ĉ(n) = Σ_{k=n−p}^{n−1} (k/n) ĉ(k) a_{n−k},  n > p.  (5)

The cepstrum coefficient calculated in the way of (5) is called LPCC; n represents the LPCC order. Before we extract the LPCC parameters, we should carry out pre-emphasis, framing, windowing and endpoint detection on the speech signal. The endpoint detection of the Chinese command word "Forward" is shown in Fig. 2; the speech waveform of the Chinese command word "Forward" and the LPCC parameter waveform after endpoint detection are shown in Fig. 3.

3.2 Speech Fractal Dimension Computation

Fractal dimension is a quantitative value derived from the scaling relation in the sense of fractals, and also a measure of the self-similarity of a structure; the fractal measure is the fractal dimension [6-7]. From the viewpoint of measurement, fractal dimension is extended from integers to fractions, breaking the limit of the general topological set dimension being an integer; fractional dimension is mostly an extension of dimension in Euclidean geometry. There are many definitions of fractal dimension, e.g. similarity dimension, Hausdorff dimension, information dimension, correlation dimension, capacity dimension, box-counting dimension, etc. Among them, the Hausdorff dimension is the oldest and also the most important; for any set F it is defined as

D(F) = lim_{δ→0} ln M_δ(F) / ln(1/δ),

where M_δ(F) denotes how many units of size δ are needed to cover the subset F. In this paper, the box-counting dimension (DB) of F is obtained by partitioning the plane with square grids of side ε; with N_ε(F) the number of squares that intersect F, it is defined as [8]

D_B(F) = lim_{ε→0} ln N_ε(F) / ln(1/ε).

The speech waveform of the Chinese command word "Forward" and the fractal dimension waveform after endpoint detection are shown in Fig. 4.

3.3 Improved feature extraction method

Considering the respective advantages of LPCC and fractal dimension in expressing the speech signal, we mix both into the feature signal; that is, the fractal dimension denotes the self-similarity, periodicity and randomness of the speech time wave shape, while the LPCC feature is good for speech quality and high identification rate. Due to the ANN's obvious advantages of nonlinearity, self-adaptability, robustness and self-learning, its good classification and input-output mapping abilities are suitable for resolving the speech recognition problem.

Since the number of ANN input nodes is fixed, time regularization is carried out on the feature parameters before they are input to the neural network [9]. In our experiments, the LPCC and the fractal dimension of each sample need to pass through the network of time regularization separately. The LPCC is 4-frame data (LPCC1, LPCC2, LPCC3, LPCC4; each frame parameter is 14-D), and the fractal dimension is regularized to 12-frame data (FD1, FD2, …, FD12; each frame parameter is 1-D), so that the feature vector of each sample has 4*14 + 1*12 = 68 dimensions; the order is that the first 56 dimensions are LPCC and the remaining 12 dimensions are fractal dimensions. Thus, such a mixed feature parameter can show both the linear and nonlinear characteristics of speech.

Architectures and Features of ASR

ASR is a cutting-edge technology that allows a computer, or even a hand-held PDA (Myers, 2000), to identify words that are read aloud or spoken into any sound-recording device. The ultimate purpose of ASR technology is to allow 100% accuracy with all words that are intelligibly spoken by any person regardless of vocabulary size, background noise, or speaker variables (CSLU, 2002). However, most ASR engineers admit that the current accuracy level for a large vocabulary unit of speech (e.g., the sentence) remains less than 90%. Dragon's Naturally Speaking and IBM's ViaVoice, for example, show a baseline recognition accuracy of only 60% to 80%, depending upon accent, background noise, type of utterance, etc. (Ehsani & Knodt, 1998). More expensive systems that are reported to outperform these two are Subarashii (Bernstein, et al., 1999), EduSpeak (Franco, et al., 2001), Phonepass (Hinks, 2001), the ISLE Project (Menzel, et al., 2001) and RAD (CSLU, 2003).
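The box-counting dimension defined in section 3.2 can be sketched as follows. This is a generic illustration on a 2-D point set, not the paper's implementation; the grid sizes, the function name and the log-log fit used to estimate the limit are all our choices:

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate D_B of a 2-D point set: for each grid side eps, count
    the squares containing at least one point, then fit the slope of
    ln N(eps) against ln(1/eps)."""
    counts = []
    for eps in epsilons:
        # Each point maps to the index of the grid square it falls in.
        boxes = {(int(x // eps), int(y // eps)) for x, y in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)),
                          np.log(counts), 1)
    return slope

# Sanity check: points densely sampled along a straight line should
# give a dimension close to 1.
t = np.linspace(0.0, 1.0, 20000)
pts = np.column_stack([t, 0.5 * t])
d = box_counting_dimension(pts, [0.1, 0.05, 0.025, 0.0125])
```

In practice the limit ε → 0 is approximated by this slope over a range of grid sizes small enough to resolve the waveform but large enough that each box still contains samples.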
ASR accuracy is expected to improve. Among several types of speech recognizers used in ASR products, both implemented and proposed, the Hidden Markov Model (HMM) is one of the most dominant algorithms and has proven to be an effective method of dealing with large units of speech (Ehsani & Knodt, 1998). Detailed descriptions of how the HMM works go beyond the scope of this paper and can be found in any text concerned with language processing; among the best are Jurafsky & Martin (2000) and Hosom, Cole, and Fanty (2003). Put simply, an HMM computes the probable match between the input it receives and the phonemes contained in a database of hundreds of native-speaker recordings (Hinks, 2003, p. 5). That is, a speech recognizer based on an HMM computes how close the phonemes of a spoken input are to a corresponding model, based on probability theory. High likelihood represents good pronunciation; low likelihood represents poor pronunciation (Larocca, et al., 1991). While ASR has been commonly used for such purposes as business dictation and special-needs accommodation, its share of the language-learning market has increased sharply in recent years.
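The likelihood computation performed by an HMM recognizer, as described above, can be illustrated with the textbook forward algorithm. The 2-state model and every probability below are made up for the sketch; they are not the recognizer or the phoneme database discussed in the text:

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: probability of a discrete observation
    sequence obs under an HMM with initial distribution pi,
    transition matrix A[i, j] and emission matrix B[i, k]."""
    alpha = pi * B[:, obs[0]]            # initialize with the first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate states, then emit
    return alpha.sum()                   # total probability over end states

# Toy 2-state, 2-symbol model (all numbers invented for illustration).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
likelihood = forward_likelihood(pi, A, B, [0, 1, 0])
```

A recognizer scores a spoken input against each candidate model this way and picks the model with the highest likelihood; a high score relative to native-speaker models is what the pronunciation-scoring systems above report as good pronunciation.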