
Graduation Project (Thesis) Foreign Literature Translation

Title: Research on Signal Reconstruction Algorithms Based on Compressive Sensing
Department: School of Information Science and Engineering
Major and class: Telecommunications 0702
Advisor:            Advisor's title:
Student name:            Student ID:

Translated Text

Compressive Sampling

Emmanuel J. Candès

Abstract: Conventional wisdom and common practice in acquiring and reconstructing images from frequency-domain data follow the basic principle of the Nyquist sampling theorem. This principle states that, to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, that is, the number of pixels in the image. This paper surveys an emerging theory known as "compressive sampling" or "compressed sensing", which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct the original image or signal exactly from a number of samples far smaller than its desired resolution. Compressive sampling undoubtedly has far-reaching implications. For example, it suggests a possible new data acquisition protocol that translates analog information into digital form with fewer sensors than was traditionally thought necessary. This new sampling theory may also lead to procedures in which data are sampled and compressed simultaneously. In this short overview we provide some of the key mathematical insights behind this new theory, and point out some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.

Keywords: compressive sampling, sparsity, uniform uncertainty principle, underdetermined systems of linear equations, minimum ℓ1 norm, linear programming, signal recovery, error correction.

1. Introduction

A central tenet of signal processing is the Nyquist/Shannon sampling theorem: the number of samples needed to reconstruct a signal without error is dictated by its bandwidth, the length of the shortest interval containing the signal's effective spectrum. In the last two years or so, an alternative theory of "compressive sampling" has emerged, which shows that super-resolved signals and images can be reconstructed from far fewer data or measurements than is usually considered necessary. The purpose of this paper is to survey this new theory and to provide some of the key mathematical insights behind it. An appealing aspect of compressive sampling is that it has significant interactions and connections with several areas of applied science and engineering, such as statistics, information theory, coding theory, and theoretical computer science. We will try to explain these connections through a few selected examples.

From a broader point of view, sparsity and, more generally, compressibility have played and will continue to play a fundamental role in many fields of science. Sparsity leads to efficient estimation; for example, the quality of estimates obtained by thresholding or shrinkage algorithms depends on the sparsity of the signal we wish to estimate. Sparsity leads to efficient compression; for example, the precision of a transform coder depends on the sparsity of the signal we wish to encode [24]. Sparsity leads to dimensionality reduction and efficient modeling. The novelty here is that sparsity becomes central to the data acquisition process itself, and leads to efficient data acquisition protocols.

In fact, compressive sampling suggests ways to translate analog data into already compressed digital form more economically [20], [7]. The key word here is "economically". It is well known that, because of their structure, typical signals can be compressed efficiently without much perceptual loss. For example, modern transform coders such as JPEG2000 exploit the fact that many signals have a sparse representation in some fixed basis, which means that the coder can store or transmit only a small number of adaptively chosen transform coefficients rather than all the signal samples. The way this typically works is that one acquires the full signal, computes the complete set of transform coefficients, encodes the largest coefficients, and discards all the others. This process of massive data acquisition followed by compression is extremely wasteful (think of a digital camera with millions of imaging sensors that eventually encodes the picture in a few hundred kilobytes). This raises a fundamental question: since most signals are compressible, why spend so much effort acquiring all the data when we know that most of it will be thrown away? Would it not be possible to acquire the data directly in compressed form, so that nothing needs to be discarded? "Compressive sampling", also known as "compressed sensing" [20], shows that this is indeed possible.

This paper is by no means an exhaustive survey of the compressive sampling literature. It is merely an account of the author's own work and thinking in this area, which also includes a fairly large number of references to, and occasional discussions of, other people's work. We have done our best to organize the ideas into a logical progression that follows from the early papers on this subject. Before we begin, we invite the interested reader to also consult the complementary survey of the field by Ronald DeVore [17], which appears in the same proceedings (Section 5).

2. Undersampled measurements

Consider the general problem of reconstructing a vector x ∈ ℝ^N from linear measurements y about x of the form

    y_k = \langle x, \varphi_k \rangle, \quad k = 1, \ldots, K, \qquad \text{or} \qquad y = \Phi x.    (2.1)

That is, we acquire information about the unknown signal by sensing x against K vectors φ_k ∈ ℝ^N. We are interested in the "underdetermined" case K ≪ N, in which we have far fewer measurements than unknown signal values. Problems of this type arise in countless applications. In radiology and biomedical imaging, for instance, one can typically collect far fewer measurements about the image of interest than the number of unknown pixels. In wideband radio-frequency signal analysis, one may only be able to acquire a signal at a rate far below the Nyquist rate because of current limitations in analog-to-digital converter technology. Finally, gene expression studies provide examples of the same kind: one would like to infer the expression levels of thousands of genes from a small number of observations, typically a few tens.

At first glance, solving an underdetermined system of equations seems hopeless, since it is easy to construct examples for which it clearly cannot be done. But suppose now that the signal x is compressible, meaning that it essentially depends on a number of degrees of freedom smaller than N. For instance, suppose the signal is sparse, in the sense that it can be written exactly or accurately as a superposition of a small number of vectors in some fixed basis. Under this premise the problem changes radically, and the search for a solution becomes feasible. In fact, accurate and sometimes exact recovery is possible by solving a simple convex optimization problem.
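To make the measurement model concrete, the sketch below builds an S-sparse signal, takes K ≪ N random linear measurements y = Φx, and shows that a naive minimum-energy solution of the underdetermined system does not recover x. The sizes N, K, S and the Gaussian choice of Φ are illustrative assumptions of this sketch only (the paper's Fourier-sampling example is treated in the next snippet), and the least-squares comparison is added purely to illustrate why the sparsity premise matters:

# Illustrative sketch of the undersampled measurement model y = Phi x in (2.1).
# N, K, S and the Gaussian Phi are arbitrary choices for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N, K, S = 512, 64, 8          # signal length, number of measurements, sparsity

# An S-sparse signal: S nonzero entries at random locations.
x = np.zeros(N)
support = rng.choice(N, size=S, replace=False)
x[support] = rng.standard_normal(S)

# K << N linear measurements y_k = <x, phi_k>, collected as y = Phi x.
Phi = rng.standard_normal((K, N)) / np.sqrt(K)
y = Phi @ x

# Naively "solving" the underdetermined system, e.g. taking the minimum-energy
# (pseudoinverse) solution consistent with y, does not return the sparse x.
x_l2 = np.linalg.pinv(Phi) @ y
print("measurements:", K, "unknowns:", N)
print("error of minimum-energy solution:", np.linalg.norm(x_l2 - x))
print("nonzeros in minimum-energy solution:", np.count_nonzero(np.abs(x_l2) > 1e-8))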

2.1. A nonlinear sampling theorem

It is best to consider a concrete example first. Suppose we have collected an incomplete set of frequency samples of a discrete signal x of length N. (To simplify the exposition, we consider a model problem in one dimension; the theory extends easily to higher dimensions. For instance, we could equally well be interested in reconstructing two- or three-dimensional objects from undersampled Fourier data.) The goal is to reconstruct the full signal given only K samples in the Fourier domain,

    \hat{x}_k = \frac{1}{\sqrt{N}} \sum_{t=0}^{N-1} x_t \, e^{-i 2\pi \omega_k t / N},    (2.2)

where the "visible" frequencies ω_k form a subset Ω (of size K) of the set of all frequencies {0, …, N − 1}. Sensing an object by measuring selected frequency coefficients is the principle underlying magnetic resonance imaging, and it is common in many fields of science, including astronomy. In the language of the general problem (2.1), the sensing matrix Φ is obtained by sampling K rows of the N × N discrete Fourier transform matrix.

We say that a vector x is S-sparse if its support {t : x_t ≠ 0} has cardinality at most S. Candès, Romberg and Tao [6] showed that the signal x can almost always be recovered exactly by solving the convex program (with ‖x̃‖_{ℓ1} := Σ_i |x̃_i|)

    (P_1) \quad \min_{\tilde{x} \in \mathbb{R}^N} \|\tilde{x}\|_{\ell_1} \quad \text{subject to} \quad \Phi \tilde{x} = y.    (2.3)

(P_1) can even be recast as a linear program [3], [15].

Theorem 2.1 ([6]). Assume that x is S-sparse and that we are given K Fourier coefficients whose frequencies are selected uniformly at random. Suppose that the number of observations obeys

    K \ge C \cdot S \cdot \log N.    (2.4)

Then minimizing ℓ1 reconstructs x exactly with overwhelming probability. In detail, if the constant C in (2.4) is of the form 22(δ + 1), then the probability of success exceeds 1 − O(N^{−δ}).

The first conclusion is that we suffer no information loss by measuring essentially any set of K frequency coefficients. The second is that the signal x can be recovered exactly by minimizing a convex functional that assumes no knowledge whatsoever about the number of nonzero coordinates of x, their locations, or their amplitudes, all of which we take to be completely unknown a priori.

Although this appears to be a remarkable feat, one may still ask whether it is optimal, or whether the signal could be recovered from even fewer samples. The answer is that, in general, S-sparse signals cannot be reconstructed from fewer samples. There are examples in which the minimum number of samples needed for exact reconstruction by any method, no matter how intractable, must be about S·log N. Hence the theorem is tight, and ℓ1-minimization succeeds nearly as soon as there is any hope of succeeding by any algorithm.
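The following is a minimal numerical sketch of (P_1), recast as a linear program as mentioned above. It uses SciPy's linprog as one convenient LP solver; the sizes N, S, K, the random seed, and the stacking of real and imaginary parts of the partial Fourier measurements are implementation choices of this sketch, not details from the paper:

# Sketch: l1-minimization (P1) from K random Fourier coefficients, posed as an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, S = 128, 5
K = 40                                   # on the order of S*log(N) measurements

# S-sparse real signal.
x = np.zeros(N)
x[rng.choice(N, size=S, replace=False)] = rng.standard_normal(S)

# K rows of the N x N DFT matrix, frequencies chosen uniformly at random, as in (2.2).
omega = rng.choice(N, size=K, replace=False)
F = np.exp(-2j * np.pi * np.outer(omega, np.arange(N)) / N) / np.sqrt(N)
y = F @ x

# Real-valued equality constraints: stack real and imaginary parts.
A = np.vstack([F.real, F.imag])
b = np.concatenate([y.real, y.imag])

# LP recast of (P1): minimize sum(t) subject to -t <= x_tilde <= t and A x_tilde = b,
# over the stacked variable z = [x_tilde; t].
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((2 * K, N))])
bounds = [(None, None)] * N + [(0, None)] * N
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=bounds, method="highs")

x_hat = res.x[:N]
print("max recovery error:", np.max(np.abs(x_hat - x)))

With these illustrative sizes, K is comfortably above S·log N, and the program typically returns x exactly up to solver tolerance, in line with Theorem 2.1.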

11、的样本来重构S-稀 疏信号。有很多例子证明,无论是什么情况,采取什么方法,为准确的重构信号所 需的最少样本数必须约是S logN。因此,这个定理是苛刻的,而且最小/范数几乎 只有在有希望通过算法实现的时候才能得到。读者一定很熟悉奈奎斯特/香农采样理论,我们可以将我们的结果用另一种表 达形式与其建立简单的联系。通过转变上例中时间和频率的作用,我们可以把定理 1重写为一个新的非线性采样定理。假设一个信号在频域范围B = |。|内。如果。是 一个连续的集合,那么我们可以把B作为x的带宽。此外,如果集合。是已知的, 那么由传统的奈奎斯特/香农采样定理可知,信号x可以从频域B的等时间间隔的抽 样中完美重

12、构出来。重构可通过一个简单的sinc插植核进行线性插值获得。现在,假设集合。大小仍为B,未知并且不一定是连续的。在这种情况下奈奎 斯特/香农定理是无助的,我们只能假设这个连续的频率集合是个完整的空间,也 就是说,准确的重构信号需要所有的N个时域采样。然而定理2.1却断定用少得多的 样本是必然的。求解出(P1)就可以从约B logN倍的时域样本中完全恢复出信号x。 此外,这些样本没有必要精心挑选;几乎任何这种尺寸的样本集合都可以使用。因 此我得到一个对奈奎斯特/香农定理的非线性模拟(这样描述是由于重建过程(P1) 是非线性的):我们可以选定任意且未知的频率集合B,从时域中任意选取约B logN
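In compact form, the reformulation just described reads as follows (this merely restates the preceding paragraph; C denotes the same unspecified constant as in (2.4)):

    \operatorname{supp}(\hat{x}) \subseteq \Omega, \quad |\Omega| = B \ \text{(unknown, not necessarily contiguous)}
    \;\Longrightarrow\;
    (P_1) \ \text{recovers } x \text{ exactly, with overwhelming probability, from } \approx C\, B \log N \ \text{randomly chosen time samples.}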

Original Text

Compressive sampling

Emmanuel J. Candès*

*The author is partially supported by an NSF grant CCF-515362. Proceedings of the International Congress of Mathematicians, Madrid, Spain, 2006. © 2006 European Mathematical Society.

Abstract. Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of "compressive sampling" or "compressed sensing", and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image.

It is believed that compressive sampling has far reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously.

In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.

Mathematics Subject Classification (2000). Primary 00A69, 41-02, 68P30; Secondary 62C65.

Keywords. Compressive sampling, sparsity, uniform uncertainty principle, underdetermined systems of linear equations, ℓ1-minimization, linear programming, signal recovery, error correction.

1. Introduction

One of the central tenets of signal processing is the Nyquist/Shannon sampling theory: the number of samples needed to reconstruct a signal without error is dictated by its bandwidth, the length of the shortest interval which contains the support of the spectrum of the signal under study. In the last two years or so, an alternative theory of "compressive sampling" has emerged which shows that super-resolved signals and images can be reconstructed from far fewer data/measurements than what is usually considered necessary. The purpose of this paper is to survey and provide some of the key mathematical insights underlying this new theory. An enchanting aspect of compressive sampling is that it has significant interactions and bearings on some fields in the applied sciences and engineering such as statistics, information theory, coding theory, theoretical computer science, and others as well. We will try to explain these connections via a few selected examples.

From a general viewpoint, sparsity and, more generally, compressibility has played and continues to play a fundamental role in many fields of science. Sparsity leads to efficient estimations; for example, the quality of estimation by thresholding or shrinkage algorithms depends on the sparsity of the signal we wish to estimate. Sparsity leads to efficient compression; for example, the precision of a transform coder depends on the sparsity of the signal we wish to encode [24]. Sparsity leads to dimensionality reduction and efficient modeling. The novelty here is that sparsity has bearings on the data acquisition process itself, and leads to efficient data acquisition protocols.

In fact, compressive sampling suggests ways to economically translate analog data into already compressed digital form [20], [7]. The key word here is "economically". Everybody knows that because typical signals have some structure, they can be compressed efficiently without much perceptual loss. For instance, modern transform coders such as JPEG2000 exploit the fact that many signals have a sparse representation in a fixed basis, meaning that one can store or transmit only a small number of adaptively chosen transform coefficients rather than all the signal samples. The way this typically works is that one acquires the full signal, computes the complete set of transform coefficients, encodes the largest coefficients and discards all the others. This process of massive data acquisition followed by compression is extremely wasteful (one can think about a digital camera which has millions of imaging sensors, the pixels, but eventually encodes the picture on a few hundred kilobytes). This raises a fundamental question: because most signals are compressible, why spend so much effort acquiring all the data when we know that most of it will be discarded? Wouldn't it be possible to acquire the data in already compressed form so that one does not need to throw away anything? "Compressive sampling", also known as "compressed sensing" [20], shows that this is indeed possible.

This paper is by no means an exhaustive survey of the literature on compressive sampling. Rather this is merely an account of the author's own work and thinking in this area which also includes a fairly large number of references to other people's work and occasionally discusses connections with these works. We have done our best to organize the ideas into a logical progression starting with the early papers which launched this subject. Before we begin, we would like to invite the interested reader to also check the article [17] by Ronald DeVore (also in these proceedings) for a complementary survey of the field (Section 5).

2. Undersampled measurements

Consider the general problem of reconstructing a vector x ∈ ℝ^N from linear measurements y about x of the form

    y_k = \langle x, \varphi_k \rangle, \quad k = 1, \ldots, K, \qquad \text{or} \qquad y = \Phi x.    (2.1)

That is, we acquire information about the unknown signal by sensing x against K vectors φ_k ∈ ℝ^N. We are interested in the "underdetermined" case K ≪ N, where we have many fewer measurements than unknown signal values. Problems of this type arise in a countless number of applications. In radiology and biomedical imaging for instance, one is typically able to collect far fewer measurements about an image of interest than the number of unknown pixels. In wideband radio frequency signal analysis, one may only be able to acquire a signal at a rate which is far lower than the Nyquist rate because of current limitations in Analog-to-Digital Converter technology. Finally, gene expression studies also provide examples of this kind. Here, one would like to infer the gene expression level of thousands of genes (that is, the dimension N of the vector x is in the thousands) from a low number of observations, typically in the tens.

At first glance, solving the underdetermined system of equations appears hopeless, as it is easy to make up examples for which it clearly cannot be done. But suppose now that the signal x is compressible, meaning that it essentially depends on a number of degrees of freedom which is smaller than N. For instance, suppose our signal is sparse in the sense that it can be written either exactly or accurately as a superposition of a small number of vectors in some fixed basis. Then this premise radically changes the problem, making the search for solutions feasible. In fact, accurate and sometimes exact recovery is possible by solving a simple convex optimization problem.
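In symbols, the sparsity assumption described above says that, for some fixed basis (ψ_i) and a small index set T (these symbols are introduced here only to make the sentence precise; they are not notation from the paper),

    x = \sum_{i \in T} c_i \, \psi_i, \qquad |T| = S \ll N,

either exactly or up to a small approximation error.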

2.1. A nonlinear sampling theorem. It might be best to consider a concrete example first. Suppose here that one collects an incomplete set of frequency samples of a discrete signal x of length N. (To ease the exposition, we consider a model problem in one dimension. The theory extends easily to higher dimensions. For instance, we could be equally interested in the reconstruction of 2- or 3-dimensional objects from undersampled Fourier data.) The goal is to reconstruct the full signal f given only K samples in the Fourier domain

    \hat{x}_k = \frac{1}{\sqrt{N}} \sum_{t=0}^{N-1} x_t \, e^{-i 2\pi \omega_k t / N},    (2.2)

where the "visible" frequencies ω_k are a subset Ω (of size K) of the set of all frequencies {0, ..., N − 1}. Sensing an object by measuring selected frequency coefficients is the principle underlying Magnetic Resonance Imaging, and is common in many fields of science, including Astrophysics. In the language of the general problem (2.1), the sensing matrix Φ is obtained by sampling K rows of the N by N discrete Fourier transform matrix.

We will say that a vector x is S-sparse if its support {t : x_t ≠ 0} is of cardinality less or equal to S. Then Candès, Romberg and Tao [6] showed that one could almost always recover the signal x exactly by solving the convex program (here ‖x̃‖_{ℓ1} := Σ_i |x̃_i|)

    (P_1) \quad \min_{\tilde{x} \in \mathbb{R}^N} \|\tilde{x}\|_{\ell_1} \quad \text{subject to} \quad \Phi \tilde{x} = y.    (2.3)

(P_1) can even be recast as a linear program [3], [15].
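For reference, one standard way to pose (P_1) as a linear program over real variables is to introduce auxiliary variables t ∈ ℝ^N bounding the magnitudes of the entries of x̃; this generic reformulation is given here only for concreteness and is not necessarily the specific formulation used in [3], [15]:

    \min_{\tilde{x},\, t \in \mathbb{R}^N} \ \sum_{i=1}^{N} t_i \qquad \text{subject to} \qquad -t_i \le \tilde{x}_i \le t_i \ (i = 1, \ldots, N), \qquad \Phi \tilde{x} = y.

At the optimum t_i = |x̃_i|, so the optimal value equals the minimum ℓ1 norm.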

Theorem 2.1 ([6]). Assume that x is S-sparse and that we are given K Fourier coefficients with frequencies selected uniformly at random. Suppose that the number of observations obeys

    K \ge C \cdot S \cdot \log N.    (2.4)

Then minimizing ℓ1 reconstructs x exactly with overwhelming probability. In detail, if the constant C is of the form 22(δ + 1) in (2.4), then the probability of success exceeds 1 − O(N^{−δ}).

The first conclusion is that one suffers no information loss by measuring just about any set of K frequency coefficients. The second is that the signal x can be exactly recovered by minimizing a convex functional which does not assume any knowledge about the number of nonzero coordinates of x, their locations, and their amplitudes which we assume are all completely unknown a priori.

While this seems to be a great feat, one could still ask whether this is optimal, or whether one could do with even fewer samples. The answer is that in general, we cannot reconstruct S-sparse signals with fewer samples. There are examples for which the minimum number of samples needed for exact reconstruction by any method, no matter how intractable, must be about S log N. Hence, the theorem is tight and ℓ1-minimization succeeds nearly as soon as there is any hope to succeed by any algorithm.

The reader is certainly familiar with the Nyquist/Shannon sampling theory and one can reformulate our result to establish simple connections. By reversing the roles of time and frequency in the above example, we can recast Theorem 2.1 as a new nonlinear sampling theorem.
