Graduation Project (English Translation), approx. 4650 Chinese characters

Source title: The Viterbi Algorithm and Probability of Error for Soft and Hard-Decision Decoding
Translated title: 维特比算法和软硬判决译码的差错控制

March 2014

Abstract

Convolutional codes can be decoded with the Viterbi algorithm. Because the complexity of a Viterbi decoder grows exponentially with the traceback depth, the traceback depth strongly affects decoder complexity and may even make the decoder unusable. The literature gives only a rough range for the traceback depth, so it is hard to fix a concrete value for hardware implementation and performance simulation. The Viterbi decoder is a maximum-likelihood sequence estimator (MLSE); decoding a convolutional code therefore amounts to searching the trellis for the most probable sequence, and the metric used in the trellis search may be either the Hamming distance or the Euclidean distance. Since a convolutional code does not necessarily have a fixed length, its performance can be derived from the probability of error for sequences that merge with the all-zero sequence for the first time at a given node of the trellis. The first-event error probability is defined as the probability that a path merging with the all-zero path at node B has a metric that exceeds the metric of the all-zero path for the first time.

Keywords: convolutional codes; Viterbi decoding; trellis; probability of error

1. Optimum Decoding of Convolutional Codes: The Viterbi Algorithm

In decoding a block code transmitted over a memoryless channel, we compute the distances between the received word and the 2^k possible transmitted code words (the Hamming distance for hard-decision decoding, the Euclidean distance for soft-decision decoding) and select as the decoder output the code word closest to the received word. This decision rule requires the computation of 2^k distance metrics and is optimum for the binary symmetric channel with p < 1/2 and for the additive white Gaussian noise channel. Consider two paths that merge at node a of the trellis after the first three transitions: if CM^{(0)} > CM^{(1)}, then CM^{(0)} will continue to be larger than CM^{(1)} for any path that stems from node a, which means that the path corresponding to CM^{(1)} need no longer be considered. The path corresponding to CM^{(0)} is called the survivor. Similarly, on the basis of the two corresponding metrics, one of the two paths merging at state b can be eliminated, and the same step can be repeated at states c and d. As a result, after the first three state transitions only four paths remain, one terminating in each state, and each surviving path carries its own metric. This step is repeated at every stage of the trellis as new signals are received in subsequent time intervals.

In general, when a binary convolutional code with k = 1 and constraint length K is decoded with the Viterbi algorithm, there are 2^{K-1} states; hence there are 2^{K-1} surviving paths at each stage and 2^{K-1} path metrics, one for each survivor. More generally, a binary convolutional code that shifts k information bits at a time into an encoder consisting of K (k-bit) shift-register stages generates a trellis with 2^{k(K-1)} states. Decoding such a code with the Viterbi algorithm requires keeping 2^{k(K-1)} surviving paths and 2^{k(K-1)} path metrics. At every node of each trellis stage, 2^k paths merge, and since a metric must be computed for every path converging on a node, 2^k metrics are computed per node. Of the 2^k paths merging at a node only one survives, namely the most probable (minimum-distance) path. Thus the amount of computation performed at each stage of the decoding grows exponentially with k and K, which limits the Viterbi algorithm to relatively small values of k and K.

When a long convolutionally encoded sequence is decoded, the decoding delay is too long for most practical applications, and the memory needed to store the surviving sequences over their full length is too large and too expensive. As indicated in Section 5.1.4, the solution is to modify the Viterbi algorithm so that it has a fixed decoding delay without significantly degrading its optimum performance. The modification is to retain, at any given time t, only the most recent δ decoded information bits (symbols) in each surviving sequence. As each new information bit (symbol) is received, the decoder compares the metrics of the surviving sequences, finds the survivor with the largest metric, traces back δ branches in the trellis (in time), and outputs the bit on that survivor at that time instant as the decoded bit (symbol). If δ is chosen large enough, after tracing back δ branches all surviving sequences will contain the same decoded bit (symbol); that is, all surviving sequences at time t are very likely to stem from the same node at time t − δ. Experiments (computer simulations) show that a delay δ ≥ 5K causes a negligible performance loss relative to the optimum Viterbi algorithm.
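The survivor-selection procedure described above can be sketched in code. Below is a minimal hard-decision Viterbi decoder for an assumed example code, the rate-1/2, K = 3 convolutional code with octal generators (7, 5); the code choice, state layout, and function names are illustrative, not taken from the text.

```python
# Hard-decision Viterbi decoding sketch for the rate-1/2, K = 3
# convolutional code with octal generators (7, 5).  Illustrative only.

G = [0b111, 0b101]        # generator polynomials 7 and 5 (octal)
K = 3                     # constraint length
NSTATES = 1 << (K - 1)    # 2^(K-1) = 4 trellis states

def encode(bits):
    """Encode a bit list; K-1 zero tail bits flush the shift register."""
    state = 0
    out = []
    for b in list(bits) + [0] * (K - 1):
        reg = (b << (K - 1)) | state
        for g in G:
            out.append(bin(reg & g).count("1") & 1)  # parity of tapped bits
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Minimum-Hamming-distance path search over the trellis."""
    INF = float("inf")
    metrics = [0] + [INF] * (NSTATES - 1)   # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    n = len(G)
    for j in range(0, len(received), n):
        branch = received[j:j + n]
        new_metrics = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for state in range(NSTATES):
            if metrics[state] == INF:
                continue
            for b in (0, 1):                # the 2^k = 2 branches leaving each node
                reg = (b << (K - 1)) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                dist = sum(x != y for x, y in zip(branch, expect))
                nxt = reg >> 1
                m = metrics[state] + dist
                if m < new_metrics[nxt]:    # keep only the survivor at each node
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(NSTATES), key=lambda s: metrics[s])
    return paths[best][: -(K - 1)]          # drop the tail bits
```

Because d_free = 5 for this code, any single channel bit error is corrected; a production decoder would also implement the fixed-delay truncation (δ ≈ 5K) discussed above instead of storing full survivor paths.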
2. Probability of Error for Soft-Decision Decoding

The topic of this section is the error-rate performance of the Viterbi algorithm with soft-decision decoding on an additive white Gaussian noise channel. In deriving the probability of error for convolutional codes, the linearity of this class of codes is used to simplify the derivation: we assume that the all-zero sequence is transmitted and determine the probability of deciding in favor of another sequence. Transmission is assumed to use binary PSK (or four-phase PSK) with coherent demodulation, and the coded binary digits on the jth branch of the convolutional code, defined in Section 8.2.2, are denoted c_{jm}, m = 1, 2, ..., n. The output of the demodulator, which is the input to the Viterbi decoder, is the sequence {r_{jm}}, m = 1, 2, ..., n; j = 1, 2, ..., where r_{jm} is defined in Equation 8-2-9.

The Viterbi soft-decision decoder forms the branch metrics defined by Equation 8-2-14 and from these computes the path metrics

CM^{(i)} = \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm} (2c_{jm}^{(i)} - 1)    (8.2-16)

where i denotes any one of the competing paths at each node and B is the number of branches (information symbols) in a path. For example, the all-zero path, denoted i = 0, has the path metric

CM^{(0)} = -\sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm}    (8.2-17)

Since the convolutional code does not necessarily have a fixed length, its performance is derived from the probability of error for sequences that merge with the all-zero sequence for the first time at a given node of the trellis. The first-event error probability is defined as the probability that a path merging with the all-zero path at node B has a metric that exceeds the metric of the all-zero path for the first time. Suppose the incorrect path merging with the all-zero path, labeled i = 1, differs from the all-zero path in d bits; that is, there are d 1s on the path i = 1 and the rest are 0s. Comparing the two paths pairwise through their metrics CM^{(0)} and CM^{(1)} gives the error probability

P_2(d) = P(CM^{(1)} - CM^{(0)} \ge 0) = P\Bigl( \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm} (c_{jm}^{(1)} - c_{jm}^{(0)}) \ge 0 \Bigr)    (8.2-18)

Since the coded bits of the two paths are identical except in the d positions, Equation 8-2-18 simplifies to

P_2(d) = P\Bigl( \sum_{l=1}^{d} r'_l > 0 \Bigr)    (8.2-19)

where the subscript l runs over the d bits in which the two paths differ and the set {r'_l} denotes the decoder inputs corresponding to these d bits. The r'_l are independent, identically distributed Gaussian random variables with mean -\sqrt{\varepsilon_c} and variance N_0/2. Hence the error probability in the pairwise comparison of two paths differing in d bits is

P_2(d) = Q\bigl(\sqrt{2\varepsilon_c d / N_0}\bigr) = Q\bigl(\sqrt{2\gamma_b R_c d}\bigr)    (8.2-20)

where \gamma_b = \varepsilon_b / N_0 is the received SNR per bit and R_c is the code rate.

Although we have derived the first-event error probability for an incorrect path at distance d from the all-zero path, at a given node B there may be many paths that merge with the all-zero path at different distances. In fact, the transfer function T(D) gives the most complete description of all the paths that merge with the all-zero path at node B at the various distances. We can therefore sum the error probability of Equation 8-2-20 over all possible path distances; performing the summation gives an upper bound on the first-event error probability,

P_e \le \sum_{d=d_{free}}^{\infty} a_d P_2(d) = \sum_{d=d_{free}}^{\infty} a_d Q\bigl(\sqrt{2\gamma_b R_c d}\bigr)    (8.2-21)

where a_d denotes the number of paths of distance d from the all-zero path that merge with the all-zero path for the first time.

There are two reasons why Equation 8-2-21 is an upper bound on the first-event error probability. First, the events giving rise to the error probabilities P_2(d) are not disjoint, as can be seen from the trellis. Second, summing over all d \ge d_{free} implicitly assumes that the convolutional code has infinite length. If the code is truncated periodically after B nodes, the bound of Equation 8-2-21 can be improved by summing the error events over d_{free} \le d \le B. This refinement has some merit in determining the performance of short convolutional codes, but its effect on performance is negligible when B is large.

If the Q function is upper-bounded by an exponential, that is,

Q\bigl(\sqrt{2\gamma_b R_c d}\bigr) \le e^{-\gamma_b R_c d} = D^d \big|_{D = e^{-\gamma_b R_c}}    (8.2-22)

then the upper bound of Equation 8-2-21 can be expressed in a slightly different form. Substituting Equation 8-2-22 into Equation 8-2-21, the first-event error probability is upper-bounded by

P_e < T(D) \big|_{D = e^{-\gamma_b R_c}}    (8.2-23)

Although the first-event error probability provides one measure of the performance of a convolutional code, a more useful measure is the bit error probability, which can be upper-bounded by the same method used to bound the first-event error probability. Once an incorrect path is selected, the information bits in which it differs from the correct path are decoded incorrectly. The exponent of the factor N in the transfer function T(D, N) indicates the number of information bit errors (the number of 1s) on an incorrect path that merges with the all-zero path at some node B. Multiplying the pairwise error probability P_2(d) by the number of incorrectly decoded information bits caused by the incorrect path at the merge node gives the bit error rate for that path. The average bit error probability is upper-bounded by multiplying each pairwise error probability P_2(d) by the corresponding number of incorrectly decoded information bits (for every incorrect path that merges with the correct path at node B) and summing over all d.

Differentiating T(D, N) with respect to N produces exactly the right multiplying factors, namely the number of information bit errors on each incorrect path. In general, T(D, N) can be expressed as

T(D, N) = \sum_{d=d_{free}}^{\infty} a_d D^d N^{f(d)}
(8.2-24)

where f(d) denotes the exponent of N as a function of d. Taking the derivative of T(D, N) with respect to N and setting N = 1 gives

\frac{dT(D, N)}{dN}\Big|_{N=1} = \sum_{d=d_{free}}^{\infty} a_d f(d) D^d = \sum_{d=d_{free}}^{\infty} \beta_d D^d    (8.2-25)

where \beta_d = a_d f(d). Thus the bit error probability for k = 1 is upper-bounded by

P_b \le \sum_{d=d_{free}}^{\infty} \beta_d P_2(d) = \sum_{d=d_{free}}^{\infty} \beta_d Q\bigl(\sqrt{2\gamma_b R_c d}\bigr)    (8.2-26)

If the Q function is upper-bounded by an exponential as in Equation 8-2-22, Equation 8-2-26 simplifies to

P_b < \frac{dT(D, N)}{dN}\Big|_{N=1,\; D=e^{-\gamma_b R_c}}    (8.2-27)

If k > 1, the equivalent bit error probability is obtained by dividing Equations 8-2-26 and 8-2-27 by k.

The error probability expressions above were obtained under the assumption that the coded bits are transmitted by binary coherent PSK. The results also hold for four-phase coherent PSK, since this modulation/demodulation technique is equivalent to two independent (phase-quadrature) binary PSK systems. Other modulation and demodulation techniques, such as coherent and noncoherent binary FSK, can also be used, but the pairwise error probability P_2(d) must then be recomputed. That is, a change in the modulation/demodulation technique used to transmit the coded information sequence affects only the computation of P_2(d); the derivation of P_b is unchanged.

Although the above derivation of the error probability for Viterbi decoding applies to binary convolutional codes, it is easily generalized to nonbinary convolutional codes, in which each nonbinary symbol is mapped into a distinct waveform. In that case, the coefficients \beta_d in the expansion of the derivative of T(D, N) in Equation 8-2-25 represent the number of symbol errors on two paths separated by a distance of d symbols (distance measured in symbols). Denoting by P_2(d) the error probability in the pairwise comparison of two paths separated by d symbols, the symbol error probability for k-bit symbols is upper-bounded by

P_M \le \sum_{d=d_{free}}^{\infty} \beta_d P_2(d)

This symbol error probability can be converted into an equivalent bit error probability. For example, when 2^k orthogonal waveforms are used to transmit the k-bit symbols, the equivalent bit error probability is P_M multiplied by 2^{k-1}/(2^k - 1), as described in Chapter 5.

3. Probability of Error for Hard-Decision Decoding

As in the treatment of soft-decision decoding, we begin by determining the first-event error probability. The all-zero path is assumed transmitted, and the path to be compared with the all-zero path at some node B is assumed to be at distance d from it. If d is odd, the all-zero path is correctly selected whenever the number of errors in the received sequence is less than (d + 1)/2; otherwise the incorrect path is selected. Hence the probability of selecting the incorrect path is

P_2(d) = \sum_{k=(d+1)/2}^{d} \binom{d}{k} p^k (1-p)^{d-k}    (8-2-28)

where p is the bit error probability of the binary symmetric channel. If d is even, the incorrect path is selected when the number of errors exceeds d/2. If the number of errors equals d/2, the two path metrics are equal, one of the two paths is chosen at random, and an error results half the time. Hence the total probability of selecting the incorrect path is

P_2(d) = \sum_{k=d/2+1}^{d} \binom{d}{k} p^k (1-p)^{d-k} + \frac{1}{2} \binom{d}{d/2} p^{d/2} (1-p)^{d/2}    (8-2-29)

As stated in Section 8.2.3, many paths with different distances merge with the all-zero path at a given node, so no expression that is both simple and exact can be written. However, an upper bound on the first-event error probability is obtained by adding the pairwise error probabilities P_2(d) over all possible paths that merge with the all-zero path at that node. The resulting union bound is

P_e \le \sum_{d=d_{free}}^{\infty} a_d P_2(d)    (8-2-30)

where a_d denotes the number of paths corresponding to distance d. These coefficients are exactly the coefficients in the expansion of the transfer function T(D) or T(D, N).

Instead of the expressions for P_2(d) in Equations 8-2-28 and 8-2-29, we may use the upper bound

P_2(d) < [4p(1-p)]^{d/2}    (8-2-31)

defined in Section 8.1.5. Using it in the bound of Equation 8-2-30 gives a looser upper bound on the first-event error probability,

P_e < \sum_{d=d_{free}}^{\infty} a_d [4p(1-p)]^{d/2} < T(D) \big|_{D = \sqrt{4p(1-p)}}    (8-2-32)

Abstract

The decoder is strongly affected by the traceback depth, because decoder complexity increases exponentially with the traceback depth. Many references analyze theoretically the performance loss caused by the traceback depth, but the range they give is so wide that it is difficult to choose a precise value. The Viterbi decoder is a maximum-likelihood sequence estimator (MLSE).
Therefore, optimum decoding of a convolutional code involves a search through the trellis for the most probable sequence. Depending on whether the detector following the demodulator performs hard or soft decisions, the corresponding metric in the trellis search is either a Hamming metric or a Euclidean metric, respectively. Since the convolutional code does not necessarily have a fixed length, we derive its performance from the probability of error for sequences that merge with the all-zero sequence for the first time at a given node in the trellis. In particular, we define the first-event error probability as the probability that another path that merges with the all-zero path at node B has a metric that exceeds the metric of the all-zero path for the first time.

Keywords: Optimum Decoding; the Viterbi Algorithm; Trellis; Probability of Error

1. Optimum Decoding of Convolutional Codes: The Viterbi Algorithm

In the decoding of a block code for a memoryless channel, we compute the distances (Hamming distance for hard-decision decoding and Euclidean distance for soft-decision decoding) between the received code word and the 2^k possible transmitted code words. Then we select the code word that is closest in distance to the received code word. This decision rule, which requires the computation of 2^k metrics, is optimum in the sense that it results in a minimum probability of error for the binary symmetric channel with p < 1/2 and for the additive white Gaussian noise channel. Consider two paths that merge at node a after the first three transitions: if CM^{(0)} > CM^{(1)}, then CM^{(0)} will continue to be larger than CM^{(1)} for any path that stems from node a. This means that the path corresponding to CM^{(1)} can be discarded from further consideration. The path corresponding to the metric CM^{(0)} is the survivor. Similarly, one of the two paths that merge at state b can be eliminated on the basis of the two corresponding metrics. This procedure is repeated at states c and d.
As a result, after the first three transitions, there are four surviving paths, one terminating at each state, and a corresponding metric for each survivor. This procedure is repeated at each stage of the trellis as new signals are received in subsequent time intervals. In general, when a binary convolutional code with k = 1 and constraint length K is decoded by means of the Viterbi algorithm, there are 2^{K-1} states. Hence, there are 2^{K-1} surviving paths at each stage and 2^{K-1} metrics, one for each surviving path. Furthermore, a binary convolutional code in which k bits at a time are shifted into an encoder that consists of K (k-bit) shift-register stages generates a trellis that has 2^{k(K-1)} states. Consequently, the decoding of such a code by means of the Viterbi algorithm requires keeping track of 2^{k(K-1)} surviving paths and 2^{k(K-1)} metrics. At each stage of the trellis, there are 2^k paths that merge at each node. Since each path that converges at a common node requires the computation of a metric, there are 2^k metrics computed for each node. Of the 2^k paths that merge at each node, only one survives, and this is the most probable (minimum-distance) path. Thus, the number of computations in decoding performed at each stage increases exponentially with k and K. This exponential increase in computational burden limits the use of the Viterbi algorithm to relatively small values of K and k.

The decoding delay in decoding a long information sequence that has been convolutionally encoded is usually too long for most practical applications. Moreover, the memory required to store the entire length of the surviving sequences is large and expensive. As indicated in Section 5.1.4, a solution to this problem is to modify the Viterbi algorithm in a way that results in a fixed decoding delay without significantly affecting the optimal performance of the algorithm.
Recall that the modification is to retain, at any given time t, only the most recent δ decoded information bits (symbols) in each surviving sequence. As each new information bit (symbol) is received, a final decision is made on the bit (symbol) received δ branches back in the trellis, by comparing the metrics of the surviving sequences and deciding in favor of the bit in the sequence having the largest metric. If δ is chosen sufficiently large, all surviving sequences will contain the identical decoded bit (symbol) δ branches back in time. That is, with high probability, all surviving sequences at time t stem from the same node at time t − δ. It has been found experimentally (by computer simulation) that a delay δ ≥ 5K results in a negligible degradation in performance relative to the optimum Viterbi algorithm.

2. Probability of Error for Soft-Decision Decoding

The topic of this subsection is the error-rate performance of the Viterbi algorithm on an additive white Gaussian noise channel with soft-decision decoding. In deriving the probability of error for convolutional codes, the linearity property of this class of codes is employed to simplify the derivation. That is, we assume that the all-zero sequence is transmitted and we determine the probability of error in deciding in favor of another sequence. The coded binary digits for the jth branch of the convolutional code, denoted c_{jm}, m = 1, 2, ..., n, and defined in Section 8.2.2, are assumed to be transmitted by binary PSK (or four-phase PSK) and demodulated coherently. The output of the demodulator, which is the input to the Viterbi decoder, is the sequence {r_{jm}}, m = 1, 2, ..., n; j = 1, 2, ..., where r_{jm} is defined in Equation 8-2-9. The Viterbi soft-decision decoder forms the branch metrics defined by Equation 8-2-14 and from these computes the path metrics

CM^{(i)} = \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm} (2c_{jm}^{(i)} - 1)    (8.2-16)

where i denotes any one of the competing paths at each node and B is the number of branches (information symbols) in a path.
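The path metric of Equation 8.2-16 is simply a correlation between the received values and the ±1 (antipodal) version of a candidate path's code bits. A minimal sketch; the received values and the competing path below are made-up illustrative numbers, not taken from the text:

```python
def path_metric(r, c):
    # Eq. 8.2-16: correlate received values r_jm with the antipodal
    # code bits (2*c_jm - 1) of candidate path c
    return sum(rj * (2 * cj - 1) for rj, cj in zip(r, c))

# The all-zero path was transmitted with BPSK (code bit 0 -> -1);
# the received values are the -1 levels plus a little noise.
r = [-0.9, -1.2, -0.4, -1.1, -0.8, -1.3]
all_zero = [0, 0, 0, 0, 0, 0]   # path i = 0
rival    = [1, 1, 0, 1, 0, 1]   # a hypothetical competing path, d = 4

cm0 = path_metric(r, all_zero)  # metric CM^(0)
cm1 = path_metric(r, rival)     # metric CM^(1)
# The decoder keeps the path with the larger metric as the survivor,
# here the all-zero path, since cm0 > cm1.
```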
For example, the all-zero path, denoted as i = 0, has the path metric

CM^{(0)} = -\sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm}    (8.2-17)

Since the convolutional code does not necessarily have a fixed length, we derive its performance from the probability of error for sequences that merge with the all-zero sequence for the first time at a given node in the trellis. In particular, we define the first-event error probability as the probability that another path that merges with the all-zero path at node B has a metric that exceeds the metric of the all-zero path for the first time. Suppose the incorrect path, call it i = 1, that merges with the all-zero path differs from the all-zero path in d bits; i.e., there are d 1s in the path i = 1 and the rest are 0s. The probability of error in the pairwise comparison of the metrics CM^{(0)} and CM^{(1)} is

P_2(d) = P(CM^{(1)} - CM^{(0)} \ge 0) = P\Bigl( \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm} (c_{jm}^{(1)} - c_{jm}^{(0)}) \ge 0 \Bigr)    (8.2-18)

Since the coded bits in the two paths are identical except in the d positions, Equation 8-2-18 can be written in the simpler form

P_2(d) = P\Bigl( \sum_{l=1}^{d} r'_l > 0 \Bigr)    (8.2-19)

where the index l runs over the set of d bits in which the two paths differ and the set {r'_l} represents the input to the decoder for these d bits. The r'_l are independent and identically distributed Gaussian random variables with mean -\sqrt{\varepsilon_c} and variance N_0/2. Consequently, the probability of error in the pairwise comparison of these two paths that differ in d bits is

P_2(d) = Q\bigl(\sqrt{2\varepsilon_c d / N_0}\bigr) = Q\bigl(\sqrt{2\gamma_b R_c d}\bigr)    (8.2-20)

where \gamma_b = \varepsilon_b / N_0 is the received SNR per bit and R_c is the code rate. Although we have derived the first-event error probability for a path of distance d from the all-zero path, there are many possible paths with different distances that merge with the all-zero path at a given node B. In fact, the transfer function T(D) provides a complete description of all the possible paths that merge with the all-zero path at node B and their distances. Thus we can sum the error probability in Equation 8-2-20 over all possible path distances.
Upon performing this summation, we obtain an upper bound on the first-event error probability in the form

P_e \le \sum_{d=d_{free}}^{\infty} a_d P_2(d) = \sum_{d=d_{free}}^{\infty} a_d Q\bigl(\sqrt{2\gamma_b R_c d}\bigr)    (8.2-21)

where a_d denotes the number of paths of distance d from the all-zero path that merge with the all-zero path for the first time.

There are two reasons why Equation 8-2-21 is an upper bound on the first-event error probability. One is that the events that result in the error probabilities P_2(d) are not disjoint; this can be seen from observation of the trellis. Second, by summing over all possible d \ge d_{free}, we have implicitly assumed that the convolutional code has infinite length. If the code is truncated periodically after B nodes, the upper bound in Equation 8-2-21 can be improved by summing the error events for d_{free} \le d \le B. This refinement has some merit in determining the performance of short convolutional codes, but the effect on performance is negligible when B is large.

The upper bound in Equation 8-2-21 can be expressed in a slightly different form if the Q function is upper-bounded by an exponential. That is,

Q\bigl(\sqrt{2\gamma_b R_c d}\bigr) \le e^{-\gamma_b R_c d} = D^d \big|_{D = e^{-\gamma_b R_c}}    (8.2-22)

If we use Equation 8-2-22 in Equation 8-2-21, the upper bound on the first-event error probability can be expressed as

P_e < T(D) \big|_{D = e^{-\gamma_b R_c}}    (8.2-23)

Although the first-event error probability provides a measure of the performance of a convolutional code, a more useful measure of performance is the bit error probability, which can be upper-bounded by the procedure used in bounding the first-event error probability. Specifically, we know that when an incorrect path is selected, the information bits in which the selected path differs from the correct path will be decoded incorrectly. We also know that the exponents of the factor N contained in the transfer function T(D, N) indicate the number of information bit errors (number of 1s) incurred in selecting an incorrect path that merges with the all-zero path at some node B.
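The bounds of Equations 8.2-21 and 8.2-23 are easy to evaluate numerically for a concrete code. The sketch below assumes the rate-1/2, K = 3 code whose transfer function is commonly quoted as T(D) = D^5/(1 − 2D), so that d_free = 5 and a_d = 2^(d−5); this particular code and its transfer function are an assumption for illustration, not given in the text.

```python
import math

def Q(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def first_event_bound(gamma_b, Rc=0.5, dfree=5, terms=60):
    # Union bound of Eq. 8.2-21 with a_d = 2^(d - dfree),
    # the coefficients of the assumed T(D) = D^5 / (1 - 2D)
    return sum(2 ** (d - dfree) * Q(math.sqrt(2 * gamma_b * Rc * d))
               for d in range(dfree, dfree + terms))

gamma_b = 10 ** (6 / 10)              # received SNR per bit of 6 dB
pe_union = first_event_bound(gamma_b)

D = math.exp(-gamma_b * 0.5)          # exponential bound of Eq. 8.2-22
pe_chernoff = D ** 5 / (1 - 2 * D)    # T(D) evaluated as in Eq. 8.2-23
```

As expected, the Q-function union bound is tighter than the exponential (transfer-function) bound, at the price of summing the series term by term.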
If we multiply the pairwise error probability P_2(d) by the number of incorrectly decoded information bits for the incorrect path at the node where the paths merge, we obtain the bit error rate for that path. The average bit error probability is upper-bounded by multiplying each pairwise error probability P_2(d) by the corresponding number of incorrectly decoded information bits, for each possible incorrect path that merges with the correct path at the Bth node, and summing over all d.

The appropriate multiplication factors corresponding to the number of information bit errors for each incorrectly selected path may be obtained by differentiating T(D, N) with respect to N. In general, T(D, N) can be expressed as

T(D, N) = \sum_{d=d_{free}}^{\infty} a_d D^d N^{f(d)}    (8.2-24)

where f(d) denotes the exponent of N as a function of d. Taking the derivative of T(D, N) with respect to N and setting N = 1, we obtain

\frac{dT(D, N)}{dN}\Big|_{N=1} = \sum_{d=d_{free}}^{\infty} a_d f(d) D^d = \sum_{d=d_{free}}^{\infty} \beta_d D^d    (8.2-25)

where \beta_d = a_d f(d). Thus the bit error probability for k = 1 is upper-bounded by

P_b \le \sum_{d=d_{free}}^{\infty} \beta_d P_2(d) = \sum_{d=d_{free}}^{\infty} \beta_d Q\bigl(\sqrt{2\gamma_b R_c d}\bigr)    (8.2-26)

If the Q function is upper-bounded by an exponential as indicated in Equation 8-2-22, then Equation 8-2-26 can be expressed in the simple form

P_b < \frac{dT(D, N)}{dN}\Big|_{N=1,\; D=e^{-\gamma_b R_c}}    (8.2-27)

If k > 1, the equivalent bit error probability is obtained by dividing Equations 8-2-26 and 8-2-27 by k.

The expressions for the probability of error given above are based on the assumption that the code bits are transmitted by binary coherent PSK. The results also hold for four-phase coherent PSK, since this modulation/demodulation technique is equivalent to two independent (phase-quadrature) binary PSK systems. Other modulation and demodulation techniques, such as coherent and noncoherent binary FSK, can be accommodated by recomputing the pairwise error probability P_2(d). That is, a change in the modulation and demodulation technique used to transmit the coded information sequence affects only the computation of P_2(d). Otherwise, the derivation of P_b remains the same.
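The hard-decision pairwise error probabilities of Equations 8-2-28 and 8-2-29, and the looser bound of Equation 8-2-31, can be checked numerically. A minimal sketch, assuming an illustrative BSC crossover probability p = 0.01; the function name `P2_hard` is ours, not the text's:

```python
import math

def P2_hard(d, p):
    # Pairwise error probability for hard-decision decoding:
    # Eq. 8-2-28 for odd d; Eq. 8-2-29 for even d, where a metric tie
    # (exactly d/2 errors) is broken at random and errs half the time.
    C = math.comb
    if d % 2:
        return sum(C(d, k) * p**k * (1 - p)**(d - k)
                   for k in range((d + 1) // 2, d + 1))
    half = d // 2
    tail = sum(C(d, k) * p**k * (1 - p)**(d - k)
               for k in range(half + 1, d + 1))
    return tail + 0.5 * C(d, half) * p**half * (1 - p)**half

p = 0.01  # illustrative BSC crossover probability
for d in (5, 6, 7):
    loose = (4 * p * (1 - p)) ** (d / 2)  # bound of Eq. 8-2-31
    # the exact pairwise probability never exceeds the loose bound
    assert P2_hard(d, p) <= loose
```

Summing `a_d * P2_hard(d, p)` over d, with the a_d taken from the expansion of T(D), gives the union bound of Equation 8-2-30 directly.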
