Speech Recognition

1 Defining the Problem

Speech recognition is the process of converting an acoustic signal, captured by a microphone or a telephone, to a set of words. The recognized words can be the final results, as for applications such as command and control, data entry, and document preparation. They can also serve as the input to further linguistic processing in order to achieve speech understanding, a subject covered in a later section.

Speech recognition systems can be characterized by many parameters, some of the more important of which are shown in the table below. An isolated-word speech recognition system requires that the speaker pause briefly between words, whereas a continuous speech recognition system does not. Spontaneous, or extemporaneously generated, speech contains disfluencies and is much more difficult to recognize than speech read from a script. Some systems require speaker enrollment: a user must provide samples of his or her speech before using them. Other systems are said to be speaker-independent, in that no enrollment is necessary. Some of the other parameters depend on the specific task. Recognition is generally more difficult when vocabularies are large or contain many similar-sounding words. When speech is produced as a sequence of words, language models or artificial grammars are used to restrict the combinations of words. The simplest language model can be specified as a finite-state network, where the permissible words following each word are given explicitly. More general language models approximating natural language are specified in terms of a context-sensitive grammar.

One popular measure of the difficulty of the task, combining the vocabulary size and the language model, is perplexity, loosely defined as the geometric mean of the number of words that can follow a word after the language model has been applied (see the section on language modeling for a discussion of language modeling in general and perplexity in particular). Finally, there are some external parameters that can affect speech recognition system performance, including the characteristics of the environmental noise and the type and placement of the microphone.
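To make the perplexity measure defined above concrete, the following minimal Python sketch computes it as the geometric mean of the number of permissible successor words in a finite-state model. The tiny vocabulary and transitions are invented purely for illustration.

```python
import math

# A toy finite-state language model: each word maps to the set of words
# that may legally follow it. The vocabulary and transitions here are
# invented purely for illustration.
successors = {
    "<s>":     {"show", "list"},
    "show":    {"flights", "fares"},
    "list":    {"flights"},
    "flights": {"</s>"},
    "fares":   {"</s>"},
}

def branching_perplexity(model):
    """Geometric mean of the number of words that can follow each word."""
    counts = [len(following) for following in model.values()]
    log_sum = sum(math.log(c) for c in counts)
    return math.exp(log_sum / len(counts))

print(branching_perplexity(successors))  # about 1.32: geometric mean of {2, 2, 1, 1, 1}
```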

Parameters and their typical ranges:

Speaking Mode: isolated words to continuous speech
Speaking Style: read speech to spontaneous speech
Enrollment: speaker-dependent to speaker-independent
Vocabulary: small (fewer than 20 words) to large (more than 20,000 words)
Language Model: finite-state to context-sensitive
Perplexity: small (less than 10) to large (more than 100)
SNR (signal-to-noise ratio): high (more than 30 dB) to low (less than 10 dB)
Transducer: voice-cancelling microphone to telephone

Table: Typical parameters used to characterize the capability of speech recognition systems.

Speech recognition is a difficult problem, largely because of the many sources of variability associated with the signal. First, the acoustic realizations of phonemes, the smallest sound units of which words are composed, are highly dependent on the context in which they appear. These phonetic variabilities are exemplified by the acoustic differences of the same phoneme across contexts. At word boundaries, contextual variations can be quite dramatic, making gas shortage sound like gash shortage in American English, and devo andare sound like devandare in Italian. Second, acoustic variabilities can result from changes in the environment as well as in the position and characteristics of the transducer. Third, within-speaker variabilities can result from changes in the speaker's physical and emotional state, speaking rate, or voice quality. Finally, differences in sociolinguistic background, dialect, and vocal tract size and shape can contribute to across-speaker variabilities.

The figure below shows the major components of a typical speech recognition system. The digitized speech signal is first transformed into a set of useful measurements or features at a fixed rate, typically once every 10-20 msec (see the sections on signal representation and digital signal processing, in particular section 11.3). These measurements are then used to search for the most likely word candidate, making use of constraints imposed by the acoustic, lexical, and language models. Throughout this process, training data are used to determine the values of the model parameters.

Figure: Components of a typical speech recognition system.
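As a rough illustration of this fixed-rate front end, the sketch below (assuming NumPy) slices a digitized signal into overlapping frames, one every 10 msec, and computes a single log-energy measurement per frame; real systems compute richer feature vectors such as cepstral coefficients.

```python
import numpy as np

def frame_features(signal, sample_rate=8000, frame_ms=25, step_ms=10):
    """Slice a digitized signal into overlapping frames (one every 10 ms
    here) and compute a simple per-frame measurement: log energy."""
    frame_len = int(sample_rate * frame_ms / 1000)
    step = int(sample_rate * step_ms / 1000)
    features = []
    for start in range(0, len(signal) - frame_len + 1, step):
        frame = signal[start:start + frame_len]
        energy = np.sum(frame.astype(float) ** 2) + 1e-10  # avoid log(0)
        features.append(np.log(energy))
    return np.array(features)

# One second of synthetic noise stands in for real speech.
rng = np.random.default_rng(0)
print(frame_features(rng.standard_normal(8000)).shape)  # (98,) log-energy frames
```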

Speech recognition systems attempt to model the sources of variability described above in several ways. At the level of signal representation, researchers have developed representations that emphasize perceptually important speaker-independent features of the signal and de-emphasize speaker-dependent characteristics. At the acoustic-phonetic level, speaker variability is typically modeled using statistical techniques applied to large amounts of data. Speaker adaptation algorithms have also been developed that adapt speaker-independent acoustic models to those of the current speaker during system use (see the relevant section). Effects of linguistic context at the acoustic-phonetic level are typically handled by training separate models for phonemes in different contexts; this is called context-dependent acoustic modeling. Word-level variability can be handled by allowing alternate pronunciations of words in representations known as pronunciation networks. Common alternate pronunciations of words, as well as effects of dialect and accent, are handled by allowing search algorithms to find alternate paths of phonemes through these networks, as sketched below.
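For illustration, a pronunciation network can be represented as a small phoneme graph; the phoneme labels and the alternate pronunciation in this hypothetical entry are assumptions for the sketch, not taken from the original text.

```python
# A toy pronunciation network for one word, stored as a phoneme graph.
# The two paths model alternate pronunciations of a single entry.
network = {
    "start": ["d"],
    "d":     ["ey", "ae"],   # two alternate vowels for the same word
    "ey":    ["t"],
    "ae":    ["t"],
    "t":     ["ax"],
    "ax":    ["end"],
}

def all_pronunciations(graph, node="start", path=()):
    """Enumerate every phoneme path through the network; a search
    algorithm scores these alternatives against the acoustics."""
    if node == "end":
        yield path
        return
    for nxt in graph[node]:
        label = () if nxt == "end" else (nxt,)
        yield from all_pronunciations(graph, nxt, path + label)

for p in all_pronunciations(network):
    print(" ".join(p))   # d ey t ax / d ae t ax
```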

Statistical language models, based on estimates of the frequency of occurrence of word sequences, are often used to guide the search through the most probable sequence of words.
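A minimal sketch of such a statistical language model, estimated from word-sequence frequencies in a tiny invented corpus:

```python
from collections import Counter

# Count word pairs (bigrams) and their left contexts in a toy corpus.
corpus = [
    ["show", "me", "the", "flights"],
    ["show", "me", "the", "fares"],
    ["list", "the", "flights"],
]

bigrams = Counter()
unigrams = Counter()
for sentence in corpus:
    for w1, w2 in zip(sentence, sentence[1:]):
        bigrams[(w1, w2)] += 1
        unigrams[w1] += 1

def p_bigram(w1, w2):
    """Maximum-likelihood estimate P(w2 | w1) = count(w1 w2) / count(w1)."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(p_bigram("the", "flights"))  # 2/3: "the" precedes "flights" in two of three uses
```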

The dominant recognition paradigm of the past fifteen years is known as hidden Markov models (HMM). An HMM is a doubly stochastic model, in which the generation of the underlying phoneme string and the frame-by-frame surface acoustic realizations are both represented probabilistically as Markov processes (as discussed in section 11.2). Neural networks have also been used to estimate the frame-based scores; these scores are then integrated into HMM-based system architectures, in what has come to be known as hybrid systems, as described in section 11.5.

An interesting feature of frame-based HMM systems is that speech segments are identified during the search process, rather than explicitly. An alternative approach is to first identify speech segments, then classify the segments and use the segment scores to recognize words. This approach has produced competitive recognition performance in several tasks.
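The following sketch shows Viterbi decoding over a toy two-state HMM, the dynamic-programming search that underlies frame-based recognition; all states, probabilities, and observations are invented for illustration.

```python
import numpy as np

# Viterbi decoding for a toy HMM: find the most likely hidden state
# (e.g., phoneme) sequence given a sequence of quantized observations.
states = ["ph1", "ph2"]
start_p = np.log([0.6, 0.4])
trans_p = np.log([[0.7, 0.3],    # P(next state | current state)
                  [0.4, 0.6]])
emit_p = np.log([[0.9, 0.1],     # P(observation symbol | state)
                 [0.2, 0.8]])
obs = [0, 0, 1, 1]               # a short sequence of quantized frames

def viterbi(obs):
    score = start_p + emit_p[:, obs[0]]   # best log-prob ending in each state
    back = []                             # backpointers per time step
    for o in obs[1:]:
        cand = score[:, None] + trans_p   # cand[i, j]: come from state i into j
        back.append(np.argmax(cand, axis=0))
        score = np.max(cand, axis=0) + emit_p[:, o]
    # Trace the best path backwards from the best final state.
    path = [int(np.argmax(score))]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi(obs))  # ['ph1', 'ph1', 'ph2', 'ph2']
```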

2 State of the Art

Comments about the state of the art need to be made in the context of specific applications which reflect the constraints on the task. Moreover, different technologies are sometimes appropriate for different tasks. For example, when the vocabulary is small, the entire word can be modeled as a single unit. Such an approach is not practical for large vocabularies, where word models must be built up from subword units.

Performance of speech recognition systems is typically described in terms of the word error rate E, defined as

E = (S + I + D) / N × 100%

where N is the total number of words in the test set, and S, I, and D are the total numbers of substitutions, insertions, and deletions, respectively.
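As a concrete companion to this definition, the sketch below computes the word error rate with the standard edit-distance alignment; the counts S, I, and D are folded into a single minimum edit distance.

```python
def word_error_rate(reference, hypothesis):
    """Compute E = (S + I + D) / N via dynamic-programming alignment
    of the hypothesis against the reference word sequence."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j]: minimum edits to turn hyp[:j] into ref[:i].
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i                       # i deletions
    for j in range(len(hyp) + 1):
        dist[0][j] = j                       # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dist[i-1][j-1] + (ref[i-1] != hyp[j-1])
            dist[i][j] = min(sub,            # substitution (or match)
                             dist[i-1][j] + 1,   # deletion
                             dist[i][j-1] + 1)   # insertion
    return 100.0 * dist[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat down"))  # one insertion: 33.3%
```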

The past decade has witnessed significant progress in speech recognition technology. Word error rates continue to drop by a factor of two every two years. Substantial progress has been made in the basic technology, leading to the lowering of barriers to speaker independence, continuous speech, and large vocabularies. Several factors have contributed to this rapid progress. First, there is the coming of age of the HMM. The HMM is powerful in that, with the availability of training data, the parameters of the model can be trained automatically to give optimal performance.

Second, much effort has gone into the development of large speech corpora for system development, training, and testing. Some of these corpora are designed for acoustic-phonetic research, while others are highly task specific. Nowadays, it is not uncommon to have tens of thousands of sentences available for system training and testing. These corpora permit researchers to quantify the acoustic cues important for phonetic contrasts and to determine the parameters of the recognizers in a statistically meaningful way. While many of these corpora (e.g., TIMIT, RM, ATIS, and WSJ; see section 12.3) were originally collected under the sponsorship of the U.S. Defense Advanced Research Projects Agency (ARPA) to spur human language technology development among its contractors, they have nevertheless gained worldwide acceptance (e.g., in Canada, France, Germany, Japan, and the U.K.) as standards on which to evaluate speech recognition.

Third, progress has been brought about by the establishment of standards for performance evaluation. Only a decade ago, researchers trained and tested their systems using locally collected data, and had not been very careful in delineating training and testing sets. As a result, it was very difficult to compare performance across systems, and a system's performance typically degraded when it was presented with previously unseen data. The recent availability of a large body of data in the public domain, coupled with the specification of evaluation standards, has resulted in uniform documentation of test results, thus contributing to greater reliability in monitoring progress (corpus development activities and evaluation methodologies are summarized in chapters 12 and 13, respectively).

Finally, advances in computer technology have also indirectly influenced our progress. The availability of fast computers with inexpensive mass storage capabilities has enabled researchers to run many large-scale experiments in a short amount of time. This means that the elapsed time between an idea and its implementation and evaluation is greatly reduced. In fact, speech recognition systems with reasonable performance can now run in real time using high-end workstations without additional hardware, a feat unimaginable only a few years ago.

One of the most popular, and potentially most useful, tasks with low perplexity (PP=11) is the recognition of digits. For American English, speaker-independent recognition of digit strings spoken continuously and restricted to telephone bandwidth can achieve an error rate of 0.3% when the string length is known.

One of the best-known moderate-perplexity tasks is the 1,000-word so-called Resource Management (RM) task, in which inquiries can be made concerning various naval vessels in the Pacific Ocean. The best speaker-independent word error rate on the RM task is less than 4%, using a word-pair language model that constrains the possible words following a given word (PP=60). More recently, researchers have begun to address the issue of recognizing spontaneously generated speech. For example, in the Air Travel Information Service (ATIS) domain, word error rates of less than 3% have been reported for a vocabulary of nearly 2,000 words and a bigram language model with a perplexity of around 15.

High-perplexity tasks with a vocabulary of thousands of words are intended primarily for the dictation application. After working on isolated-word, speaker-dependent systems for many years, the community has since 1992 moved towards very-large-vocabulary (20,000 words and more), high-perplexity (PP around 200), speaker-independent, continuous speech recognition. The best system in 1994 achieved an error rate of 7.2% on read sentences drawn from North American business news.

With the steady improvements in speech recognition performance, systems are now being deployed within telephone and cellular networks in many countries. Within the next few years, speech recognition will be pervasive in telephone networks around the world. There are tremendous forces driving the development of the technology; in many countries, touch-tone penetration is low, and voice is the only option for controlling automated services. In voice dialing, for example, users can dial 10-20 telephone numbers by voice (e.g., call home) after having enrolled their voices by saying the words associated with telephone numbers. AT&T, on the other hand, has installed a call routing system using speaker-independent word-spotting technology that can detect a few key phrases (e.g., person to person, calling card) in sentences such as: I want to charge it to my calling card.

At present, several very large vocabulary dictation systems are available for document generation. These systems generally require speakers to pause between words. Their performance can be further enhanced if one can apply constraints of the specific domain, such as dictating medical reports.

Even though much progress is being made, machines are a long way from recognizing conversational speech. Word recognition rates on telephone conversations in the Switchboard corpus are around 50%. It will be many years before unlimited-vocabulary, speaker-independent, continuous dictation capability is realized.

3 Future Directions

In 1992, the U.S. National Science Foundation sponsored a workshop to identify the key research challenges in the area of human language technology and the infrastructure needed to support the work. The key research challenges are summarized below. The following research areas for speech recognition were identified:

Robustness: In a robust system, performance degrades gracefully (rather than catastrophically) as conditions become more different from those under which it was trained. Differences in channel characteristics and acoustic environment should receive particular attention.

Portability: Portability refers to the goal of rapidly designing, developing, and deploying systems for new applications. At present, systems tend to suffer significant degradation when moved to a new task. In order to return to peak performance, they must be trained on examples specific to the new task, which is time consuming and expensive.

Adaptation: How can systems continuously adapt to changing conditions (new speakers, microphones, tasks, etc.) and improve through use? Such adaptation can occur at many levels in systems: subword models, word pronunciations, language models, etc.

Language Modeling: Current systems use statistical language models to help reduce the search space and resolve acoustic ambiguity. As vocabulary size grows and other constraints are relaxed to create more habitable systems, it will be increasingly important to get as much constraint as possible from language models, perhaps by incorporating syntactic and semantic constraints that cannot be captured by purely statistical models.

Confidence Measures: Most speech recognition systems assign scores to hypotheses for the purpose of rank ordering them. These scores do not provide a good indication of whether a hypothesis is correct or not, just that it is better than the other hypotheses. As we move to tasks that require actions, we need better methods to evaluate the absolute correctness of hypotheses.

Out-of-Vocabulary Words: Systems are designed for use with a particular set of words, but system users may not know exactly which words are in the system vocabulary. This leads to a certain percentage of out-of-vocabulary words in natural conditions. Systems must have some method of detecting such out-of-vocabulary words, or they will end up mapping a word from the vocabulary onto the unknown word, causing an error.

Spontaneous Speech: Systems that are deployed for real use must deal with a variety of spontaneous-speech phenomena, such as filled pauses, false starts, hesitations, ungrammatical constructions, and other common behaviors not found in read speech. Development on the ATIS task has resulted in progress in this area, but much work remains to be done.

Prosody: Prosody refers to acoustic structure that extends over several segments or words. Stress, intonation, and rhythm convey important information for word recognition and the user's intentions (e.g., sarcasm, anger). Current systems do not capture prosodic structure. How to integrate prosodic information into the recognition architecture is a critical question that has not yet been answered.

Modeling Dynamics: Systems assume a sequence of input frames which are treated as if they were independent. But it is known that perceptual cues for words and phonemes require the integration of features that reflect the movements of the articulators, which are dynamic in nature. How to model dynamics and incorporate this information into recognition systems is an unsolved problem.

