OpenStack Debugging Errors: Summary, Part 3
Having found the problem: policy.json was owned by root:root instead of nova:nova, so the ownership has to be fixed with chown -R nova:nova. With that, the bug is essentially dealt with. Right after this I noticed another problem: neutron-server had not started. openstack-status showed neutron-server as failed, so a restart was enough to bring it back.

On Icehouse, instance boots failed with a VirtualInterfaceCreateException in nova-compute.log:

    2014-05-26 21:58:51.514 5808 TRACE nova.compute.manager [instance: 6aa806d7-866b-4a13-a09a-1ecaf3500905] Traceback (most recent call last):
    2014-05-26 21:58:51.514 5808 TRACE nova.compute.manager [instance: ...]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1720, in _spawn
    2014-05-26 21:58:51.514 5808 TRACE nova.compute.manager [instance: 6aa806d7-866b-4a13-a09a-1ecaf3500905]     block_device_info)
    2014-05-26 21:58:51.514 5808 TRACE nova.compute.manager [instance: ...]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2253, in spawn
    2014-05-26 21:58:51.514 5808 TRACE nova.compute.manager [instance: 6aa806d7-866b-4a13-a09a-1ecaf3500905]     block_device_info)
    2014-05-26 21:58:51.514 5808 TRACE nova.compute.manager [instance: ...]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3663, in ...
    2014-05-26 21:58:51.514 5808 TRACE nova.compute.manager [instance: 6aa806d7-866b-4a13-a09a-1ecaf3500905] VirtualInterfaceCreateException: Virtual Interface creation failed

The workaround is to set vif_plugging_timeout=10 and vif_plugging_is_fatal=False in nova.conf (see also item 5 below).

3. nova resize fails. The resize errors out with:

    Exit code: 255
    Stdout: ''
    Stderr: 'Host key verification failed.\r\n'. Setting instance vm_state to ERROR

yet running the same command by hand works. Resize copies the instance between compute nodes over SSH as the nova user, so that user needs a usable shell and passwordless SSH between the compute nodes. Give the nova user a shell, create its .ssh directory with the right SELinux context (mind the dot in .ssh), disable host identity checking, then generate a key pair and distribute it to the other compute nodes:

    # usermod -s /bin/bash nova
    # su - nova
    $ chcon -u system_u -r object_r -t user_home_t …
    $ mkdir -p -m 700 --context=system_u:object_r:ssh_home_t:s0 .ssh && cd .ssh
    $ cat > config          (disable host identity checking)
    $ ssh-keygen -f id_rsa -b 1024 -P ""
    $ cp -a /var/lib/nova/.ssh/id_rsa.pub …
    $ scp /var/lib/nova/.ssh/id_rsa …

OK, now switch to the nova user and confirm SSH between the nodes works without any prompt.
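A condensed sketch of that key setup, run on each compute node; it assumes /var/lib/nova is the nova user's home directory and that compute2 stands in for the peer compute node(s) ("#" = root prompt, "$" = nova user prompt):

    # usermod -s /bin/bash nova
    # su - nova
    $ chcon -u system_u -r object_r -t user_home_t /var/lib/nova
    $ mkdir -p -m 700 --context=system_u:object_r:ssh_home_t:s0 .ssh && cd .ssh
    $ cat > config <<EOF
    Host *
        StrictHostKeyChecking no
    EOF
    $ ssh-keygen -f id_rsa -b 1024 -P ""
    $ cp -a id_rsa.pub authorized_keys
    $ scp id_rsa id_rsa.pub authorized_keys config compute2:/var/lib/nova/.ssh/
      # copy to every other compute node by whatever means is available;
      # the first copy may still prompt for a password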

4. Live migration of a boot-from-volume instance fails. Version: OpenStack Grizzly 2013.1.2. The instance runs normally; its image is listed as "Attempt to boot from volume - no image supplied", i.e. it boots from a Cinder volume. Requesting a live migration returns:

    {"...": "Migration of instance 6cd558d9-e924-4598-8e63-e86a20929bd9 to host controller failed", "code": ...}

and nova-scheduler logs:

    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 430, in _process_data
      rval = self.proxy.dispatch(ctxt, version, method, **args)
    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 133, in dispatch
      return getattr(proxyobj, method)(ctxt, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 117, in …
      context, ex, …
    File "/usr/lib/python2.7/contextlib.py", line 24, in …
    File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 96, in …
    File "/usr/lib/python2.7/dist-packages/nova/scheduler/driver.py", line 196, in …
    File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 146, in …
      filter_properties, instance_uuids)]
    File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 336, in …
    File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 342, in …
      hosts, filter_properties)
    File "/usr/lib/python2.7/dist-packages/nova/filters.py", line 53, in get_filtered_objects
    File "/usr/lib/python2.7/dist-packages/nova/filters.py", line 39, in filter_all
      if self._filter_one(obj, …
    File "….py", line 30, in …
    File "…", line 78, in …
      spec.get('image', {}).get('properties', {})
    AttributeError: 'NoneType' object has no attribute 'get'

    2013-07-10 15:07:44 INFO [nova.api.openstack.wsgi] HTTP exception thrown: Live migration of instance 6cd558d9-e924-4598-8e63-e86a20929bd9 to another host failed

Why does it come back as None? Tracing where the image attribute in the request spec comes from:

    if not instance_ref['image_ref']:
        image = None
    …
    … = {'instance_type': instance_type, 'image': image}

Looking back at the instance, it boots from a backend volume, so image_ref is empty, image ends up as None, and the filter blows up on it. Two workarounds (a sketch of both appears after item 8 below):

  - Change the live-migration command's parameters to force the destination host, so the scheduler stage is skipped entirely. Note that a boot-from-volume instance must not be migrated with --block-migrate; details at /aticle/details/9186201.
  - Change the Nova option scheduler_default_filters (default: ['RetryFilter', 'AvailabilityZoneFilter', 'RamFilter', 'ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter']) so that ImagePropertiesFilter is no longer applied.

5. OpenStack Icehouse: Virtual Interface creation failed. An exception while creating the instance's virtual interface makes the whole create-instance flow fail. Workaround: set vif_plugging_timeout=10 and vif_plugging_is_fatal=False in nova.conf.

6. libvirtError: Unable to read from monitor: Connection reset by peer.

    Error: ['Traceback (most recent call last):\n',
      ' File "….py", line 848, in _run_instance\n    set_access_ip=set_access_ip)\n',
      ' File "….py", line 1107, in _spawn\n    LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
      ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n',
      ' File "…y", line 1528, in spawn\n    block_device_info)\n',
      ' File "…y", line 2444, in _create_domain_and_network\n    domain = …\n',
      ' File "…y", line 2405, in _create_domain\n    domain.createWithFlags(launch_flags)\n',
      ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 179, in doit\n    result = proxy_call(self._autowrap, f, *args, **kwargs)\n',
      ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 139, in proxy_call\n',
      ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 77, in tworker\n    rv = meth(*args, **kwargs)\n',
      ' File "/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in createWithFlags\n    if ret == -1: raise libvirtError(\'virDomainCreateWithFlags() failed\', dom=self)\n',
      'libvirtError: Unable to read from monitor: Connection reset by peer\n']

Check whether the vncserver_listen and vncserver_proxyclient_address settings are correct; normally they should listen on the node's own local IP address.

7. libvirtError: internal error no supported architecture for os type 'hvm'.

    Error: ['Traceback (most recent call last):\n',
      ' File "….py", line 848, in _run_instance\n    set_access_ip=set_access_ip)\n',
      ' File "….py", line 1107, in _spawn\n    LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
      ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n',
      ' File "….py", line 1103, in _spawn\n',
      ' File "…y", line 1528, in spawn\n    block_device_info)\n',
      ' File "…y", line 2444, in _create_domain_and_network\n    domain = …\n',
      ' File "…y", line 2404, in _create_domain\n    domain = self._conn.defineXML(xml)\n',
      ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 179, in doit\n    result = proxy_call(self._autowrap, f, *args, **kwargs)\n',
      ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 139, in proxy_call\n',
      ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 77, in tworker\n    rv = meth(*args, **kwargs)\n',
      ' File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2660, in defineXML\n    if ret is None: raise libvirtError(\'virDomainDefineXML() failed\', conn=self)\n',
      "libvirtError: internal error no supported architecture for os type 'hvm'\n"]

nova.log is telling you that hardware CPU virtualization support is not enabled: switch it on in the BIOS (VT-x / AMD-V).

8. Handling instances stuck in an error state. In day-to-day operation — a failed physical host, Quantum or Nova services going down, and so on — instances end up stuck in Error, Hard Reboot or Soft Reboot states, and the Nova command line is the way out:

    nova reset-state 06d9d410-bd60-439d-8427-***********

after which the instance can be rebooted or deleted normally (use --active to push it straight back to ACTIVE).
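The sketch referred to in item 4, covering both workarounds; the destination host name compute-02 and the trimmed filter list are illustrative assumptions for a Grizzly-era deployment:

    # (a) Name the destination host explicitly so the scheduler (and ImagePropertiesFilter) is bypassed.
    #     Do not add --block-migrate for a boot-from-volume instance.
    nova live-migration 6cd558d9-e924-4598-8e63-e86a20929bd9 compute-02

    # (b) Or drop ImagePropertiesFilter from the scheduler filters in /etc/nova/nova.conf
    #     and restart nova-scheduler:
    #     scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter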
9. Building Windows images. Windows guests need the virtio drivers loaded during installation, supplied as virtio-win-0.1-30.iso and virtio-win-1.1.16.vfd. Two points deserve emphasis. The installation is driven by kvm command lines along these lines:

    kvm -m 512 -drive file=win2003server.img,cache=writeback,if=virtio,boot=on -fda virtio-win-1.1.16.vfd -cdrom windows2003_x64.iso …
    kvm -m 1024 -drive …,boot=on -cdrom virtio-win-0.1-30.iso -net nic,model=virtio -net user -boot … -nographic -vnc …

Note the if=virtio,boot=on disk option and the -fda virtio-win-1.1.16.vfd floppy: without the floppy the installer cannot find the virtio disk driver when booting. After the image has been installed, boot it once more against the driver ISO to install/update the virtio network card driver.

10. Unable to attach a Cinder volume. Attaching an LVM-backed volume to an instance fails. This is not really a Cinder bug; it is an iSCSI problem. The compute node's nova-compute.log shows:

    2012-07-24 14:33:08 TRACE nova.rpc.amqp ProcessExecutionError: Unexpected error while running command.
    2012-07-24 14:33:08 TRACE nova.rpc.amqp Command: sudo nova-rootwrap … -m node -T ….openstack:volume-00000011 -p …:3260
    2012-07-24 14:33:08 TRACE nova.rpc.amqp Exit code: …
    2012-07-24 14:33:08 TRACE nova.rpc.amqp Stdout: ''
    2012-07-24 14:33:08 TRACE nova.rpc.amqp Stderr: 'iscsiadm: No portal …'

The problem would not go away until the parameter iscsi_helper=tgtadm was added to nova.conf. Testing and reading the logs made the relationship clear: with iscsi_helper=tgtadm you must use the tgt service, and conversely the iscsitarget service goes with iscsi_helper=ietadm. In my test environment both tgt and iscsitarget were installed and running (installing nova-common pulls in tgt, which is easy to miss). Even with iscsi_helper=tgtadm in nova.conf, port 3260 was still held by iscsitarget, so the attach failed. Decide which shared-storage service you actually want and keep only one pairing: tgt with iscsi_helper=tgtadm, or iscsitarget with iscsi_helper=ietadm.

11. noVNC problems. noVNC generates a lot of questions and there is plenty written about configuring it, but the configuration itself is not complicated.
a. A "Connection …" error usually means the control node cannot resolve the compute node's hostname when it receives the VNC request; add the compute node's IP/hostname mapping to /etc/hosts on the control node.
b. A "failed connect to …" error: the noVNC console needs a browser with WebSocket and HTML5 support; Google Chrome is recommended.

12. nova-compute shows as down (XXX) in the service list. The nova-compute host's clock differs from the controller's. nova-compute periodically reports its state to the controller; when the clocks drift apart the controller treats the service as dead, nova-scheduler.log reports the host as unavailable, and the other service nodes behave the same way. This is Nova's heartbeat mechanism, so keeping the clocks of every node in the environment synchronized is essential.

13. An instance in the error state cannot be deleted from the dashboard. Whether it is the dashboard, nova, or the underlying hypervisor that lost track of the instance, nova-compute ends up logging:

    2013-03-09 17:58:08 TRACE nova InstanceNotFound: Instance instance-00000002 could not be found.

One heavy-handed option is to recreate the nova database entirely (CREATE DATABASE nova; plus the GRANT … PRIVILEGES statements — strip the formatting if you copy and paste any of this). The more surgical one is to remove just the instance's leftover rows:

    $ mysql -u root -p
    use nova;
    … AS a INNER JOIN nova.instances AS b ON a.instance_uuid = b.id WHERE …;
    DELETE FROM nova.instance_info_caches WHERE instance_uuid = '$1';
    DELETE FROM nova.instances WHERE uuid = '$1';

Put the DELETE statements into a delete_instance.sh script and run it with the instance ID (taken from nova list) as its argument; a sketch of such a script appears after item 14.

14. libvirt errors.

    2013-03-09 17:05:42 TRACE nova   return libvirt.openAuth(uri, auth, 0)
    2013-03-09 17:05:42 TRACE nova File "/usr/lib/python2.7/dist-…", line 102, in …
      libvirtError('virConnectOpenAuth() …
    2013-03-09 17:05:42 TRACE nova libvirtError: Failed to connect socket '/var/run/libvirt/libvirt-sock': No such file or …

and in /var/log/libvirt/libvirtd.log:

    2013-03-09 22:05:41.909+0000: 12466: info : libvirt version: …
    2013-03-09 22:05:41.909+0000: 12466: error : virNetServerMDNSStart:460 : internal error Failed to create mDNS client: Daemon not running

When this happens, look at /var/log/libvirt/libvirtd.log first. The libvirt-bin service will not start without dbus, so check with ps -ea | grep dbus whether dbus is running, then run apt-get install … .
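A sketch of the delete_instance.sh script mentioned in item 13. It assumes the MySQL root password is supplied interactively and that the two DELETE statements above are enough for your deployment (other tables may also reference the instance):

    #!/bin/sh
    # Usage: sh delete_instance.sh <instance-uuid>   (get the UUID from `nova list`)
    UUID="$1"
    mysql -u root -p <<EOF
    USE nova;
    DELETE FROM instance_info_caches WHERE instance_uuid = '${UUID}';
    DELETE FROM instances WHERE uuid = '${UUID}';
    EOF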

II.

1. Instances cannot be pinged or reached over SSH via their floating IPs. I will not go over how floating IPs work; this was simply a mistake made while testing. To be clear about the starting point: the instance (on the 10.0.50.x network) could ping its gateway IP, but pinging the instance's floating IP was, unfortunately, not possible. Going back over how floating IPs are implemented, they are handled by iptables rules, and since nothing there reported an error, that part was fine. I then wanted to ping and SSH into the instance from an external host, which immediately brought security group rules to mind: after adding a rule allowing ALL TCP with the remote end given as a CIDR, SSH worked instantly. (See also the earlier note on external network management.) In /etc/neutron/l3_agent.ini, configure the ID of the external network you created yourself, together with:

    handle_internal_only_routers = True
    external_network_bridge = br-ex

With this in place, newly created routers no longer show up with their status and ports stuck in Down.

2. Recreating nova-manage networks. List the existing networks with nova-manage network list, delete all the leftover networks, and then create the new /23 network:

    $ sudo nova-manage network delete …   (the old /24)
    $ sudo nova-manage network create …   (the new /23)
    $ sudo nova-manage network list

3. Managing floating IPs from the CLI:

    [root@station140 ~(keystone_admin)]# nova help | grep float
      …  Bulk create floating ips by range.
      …  Bulk delete floating ips by range.
      …  List all floating ips.
      …  Allocate a floating IP for the current tenant.
      …  De-allocate a floating IP.
      …  List all floating ip…
      …  Add a floating IP address to a server.
      …  Remove a floating IP address from a server.

Allocating a floating IP prints a table (Ip | Instance Id | Fixed Ip | Pool) with the new address from the pub1 pool; after associating it with a server, the Instance Id column shows the instance (93d0c9c1-b38b-4fe3-9ae3-400f43276f60) and the fixed IP it maps to, and removing the floating IP clears the association again.

4. "Cannot provision gre network for net-id=223aa5ce-3a5e-4566-82f9-a1163d2499f4 … tunneling". In my case the cause was a kernel change: I had upgraded the kernel, Open vSwitch did not support the 3.x kernel, and after falling back to the CentOS 2.6 kernel this error appeared. First make absolutely sure the Open vSwitch versions on the compute node and the network node match; then remove Open vSwitch from the compute node, reinstall it, and redo its configuration, i.e. the ML2 / Open vSwitch settings.

5. Slow network with neutron OVS + GRE. Create and edit the file /etc/neutron/dnsmasq-neutron.conf and put the reduced-MTU dnsmasq option inside (a hedged sketch of the usual line appears at the end of this section), then reboot your instance and verify that its MTU is 1400:

    inet addr: …  Bcast: …55  Mask: …
    inet6 addr: fe80::f816:3eff:fef0:6a9f/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
    RX packets:934855 errors:0 dropped:0 overruns:0 frame:0
    TX packets:207741 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:1210256390 (1.2 GB)  TX bytes:226172592 (226.1 MB)

6. The instance obtains neither its SSH key nor its hostname from the metadata service. Set enable_isolated_metadata = True in the neutron configuration, and disable zeroconf by editing /etc/sysconfig/network.

7. "Failed to connect to server (code …)". In my case this happened because compute and neutron were installed on the same node. Checking iptables showed the rule

    -A INPUT -j REJECT --reject-with icmp-host-prohibited

which answers connections with an icmp-host-prohibited message. There is also the case of a separately installed network node; an OpenStack community post covers that variant.

8. On CentOS, instances stop getting IP addresses. Starting an instance creates a temporary iptables rule by default; if iptables is restarted those rules are lost, and the next instance cannot obtain an IP address. Restoring the iptables rules puts things right again.

9. "has owner network:floatingip and therefore cannot be deleted directly via the port API". The port belongs to a floating IP, so release the floating IP first instead of deleting the port.

10. A VM gets no IP address from neutron. Please set a temporary dhcp_lease_duration=120 in neutron.conf (and restart the neutron-* services) so that you can watch the DHCPREQUEST from the running VM and the DHCPACK from dnsmasq.

11. Open vSwitch keeps logging messages like:

    … netdev_linux|INFO|ioctl(SIOCGIFHWADDR) on tapd7ebabc9-4f device failed: No such device
    Jul 24 20:06:41|01035|bridge|INFO|destroyed port qr-725d56fb-f7 on bridge br-int
    Jul 24 20:06:41|01036|bridge|WARN|bridge br-tun: using default bridge Ethernet address 6a:13:96:eb:35:4e
    Jul 24 20:06:41|01037|netdev_linux|INFO|ioctl(SIOCGIFHWADDR) on tapd7ebabc9-4f device failed: No such device
    Jul 24 20:06:41|01038|bridge|WARN|bridge br-int: using default bridge Ethernet address b2:e5:c8:7e:bd:4d
    Jul 24 20:06:47|01039|netdev|WARN|Dropped 76 log messages in last 12 seconds (most recently, 1 seconds ago) due to excessive rate

First check whether your system is CentOS/Red Hat, then restart the L3 agent: service neutron-l3-agent restart.
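The dnsmasq line referred to in item 5 is sketched here. The option number (26 = interface MTU) and the 1400-byte value are assumptions, chosen to match the MTU:1400 shown in the ifconfig output above and commonly used for GRE tenant networks; adapt them to your overlay:

    # /etc/neutron/dnsmasq-neutron.conf
    dhcp-option-force=26,1400

    # point the DHCP agent at it in /etc/neutron/dhcp_agent.ini and restart the agent:
    # dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
    # service neutron-dhcp-agent restart   (then reboot or re-DHCP the instances)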
III.

1. "Failed to connect to server (code: 1006)". Even after a correct OpenStack installation, opening an instance console from the dashboard can show this error. Check whether the VNC ports are blocked and open them (and remember to restart the Nova services afterwards):

    iptables -nL | grep 5900
    iptables -I INPUT -p tcp --dport 5900 -j ACCEPT
    iptables -nL | grep 5999
    iptables -I INPUT -p tcp --dport 5999 -j ACCEPT
    iptables -nL | grep 6088
    iptables -I INPUT -p tcp --dport 6088 -j ACCEPT

IV.

1. Heat stack updates. Heat's stack handling is designed and implemented after AWS CloudFormation, so CloudFormation's update behaviour is the natural reference point for Heat as of Icehouse.

Take the CloudFormation sample t1.template. It describes a PHP application: the Apache web server, PHP and a simple PHP application are all installed by the AWS CloudFormation helper scripts that ship pre-installed on the Amazon Linux AMI. The cfn-hup daemon watches the configuration defined in the Amazon EC2 instance metadata and updates the application software (the Apache or PHP version) as well as the PHP application files managed by AWS CloudFormation. The following fragment, from the same EC2 resource in the template, shows cfn-hup being configured to call cfn-init to update the affected pieces of software whenever any change to the metadata is detected:

    "# Start up the cfn-hup daemon to listen for changes to the Web Server metadata\n",
    "/opt/aws/bin/cfn-hup || error_exit 'Failed to start cfn-hup'\n",

By default cfn-hup polls every 15 minutes. When you update the stack through the AWS Management Console, be sure to supply the parameter values that were used when the stack was originally created.

Admin password (not yet implemented in the I release): updating an instance does not update its admin password; at the moment (2014-05-19) only the Xen driver in Nova supports that.

When updating the AMI of an Amazon EC2 instance, you cannot change the AMI simply by starting and stopping the instance; AWS CloudFormation treats this as a change to an immutable property of the resource. To apply a change to an immutable property, AWS CloudFormation creates a replacement resource, i.e. a new Amazon EC2 instance running the new AMI. Once the new instance is running, AWS CloudFormation updates the other resources in the stack, such as the Elastic IP address, to point at the new resource, and all of the resource IDs change as part of the update. The entries in the "Events" table carry descriptions along the lines of "the requested update contains …".

In Heat, as in CloudFormation, the image is an immutable property. The template can set image_update_policy to 'REBUILD', 'REPLACE' or 'REBUILD-PRESERVE-EPHEMERAL'; the default is REPLACE, so by default the update replaces the server outright, while the REBUILD policies trigger a rebuild operation instead ('REBUILD-PRESERVE-EPHEMERAL' rebuilding while preserving the ephemeral disk, detaching and re-attaching it around the rebuild).

Autoscaling: OpenStack autoscaling is supported as a resource type in Heat and is implemented with nested stacks; health checking of the scaled instances is not implemented yet. In Amazon EC2 Auto Scaling, an updated launch configuration applies only to instances created after the update and does nothing to the EC2 instances that are already running, and neither the stack nor Auto Scaling provides any synchronization or serialization across the EC2 instances. The cfn-hup daemon on each host runs independently and updates the application on its own schedule; when you use cfn-hup to update the configuration on the instances, each one runs cfn-hup on its own timetable, so unless every cfn-hup change happens to run at the same moment, old and new versions of the software may run side by side for a while.

In Heat, autoscaling supports an UpdatePolicy, and the autoscaling resource has to reference a LaunchConfiguration resource, whose properties are:

    'ImageId': the image ID, required
    'KernelId', 'RamDiskId': AWS-compatible disk identifiers, not currently supported
    'LaunchConfigurationName': required, and it may be updated
    'HealthCheckType': the health check type, not implemented
    'LoadBalancerNames': load balancer names
    'Tags': tags; the interaction between Heat and Ceilometer relies on these

Heat drives the LaunchConfiguration through the AutoScalingRollingUpdate map inside the UpdatePolicy property, and the autoscaling resource's metadata stores ….

AWS CloudFormation lets you update any property, but before changing anything you should ask: is this a mutable or an immutable property? Some changes to resource properties, such as changing the AMI of an Amazon EC2 instance, are not supported by the underlying service. For a mutable property, AWS CloudFormation calls the "update" or "modify" style API of the underlying resource; for an immutable property, it creates a new resource with the updated properties and links it into the stack before deleting the old resource. … applications that use such a resource (a database instance, say) then have to take the new port setting, and any other changes you made, into account.

2. /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory. ceph-deploy fails with:

    [controller][WARNIN] /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory
    [controller][ERROR ] RuntimeError: command returned non-zero exit status: 1
    [ceph_deploy.mon][ERROR ] Failed to execute command: /sbin/service ceph -c …
    [ceph_deploy][ERROR ] GenericError: Failed to create …

On CentOS, /lib/lsb/init-functions is provided by the redhat-lsb packages, so installing them supplies the missing file.

3. FATAL: Module rbd not found (CentOS 6.5 + Ceph RBD).

    ERROR: modinfo: could not find module rbd
    FATAL: Module rbd not found.
    rbd: modprobe rbd failed!

The stock CentOS 6.5 kernel does not ship the rbd module, so install a newer kernel from ELRepo together with matching Ceph and QEMU packages:

    yum --enablerepo=elrepo-kernel install kernel-ml   # will install 3.11.latest, stable
    # yum --enablerepo=elrepo-kernel install kernel-lt # will install 3.0.latest, long term
    rpm --import …
    yum install ceph          # …-2.355.el6.2.cuttlefish.x86…
    rpm -e --nodeps qemu-img
    rpm -Uvh qemu-*

4. A Cinder volume cannot be deleted from Horizon. Delete it by hand on the server. If the removal complains "Can't remove open logical volume", stop whatever is holding the logical volume open and try the delete again. Once the volume is gone, also clear the corresponding records from the volumes table in the cinder database, as sketched below.
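A sketch of that manual cleanup, assuming the default cinder-volumes volume group and a placeholder volume UUID; some operators prefer marking the database row deleted rather than removing it:

    # Deactivate the open logical volume, then remove it
    lvchange -an /dev/cinder-volumes/volume-<uuid>     # or: dmsetup remove <mapper device>
    lvremove -f /dev/cinder-volumes/volume-<uuid>

    # Clear the leftover record from the cinder database
    mysql -u root -p -e "DELETE FROM cinder.volumes WHERE id='<uuid>';"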
5. TypeError: hasattr(): attribute name must be string. Opening the admin overview (or a usage/volumes page) in Horizon produces a Django error page:

    Request Method:  GET
    Django Version:  …
    Exception Type:  TypeError
    Exception Value: hasattr(): attribute name must be string
    Exception Location: … cinderclient/client.py in __init__, line 78
    Python Executable: …
    Python Version:  …
    Server time:     Fri, 29 Mar 2013 12:51:09 …

The apache2 error log carries the full traceback: it starts in django/core/handlers/base.py (line 111, response = callback(request, *callback_args, …)), passes through django/views/generic/base.py and horizon/tables/views.py (line 155, get; handled = self.construct_tables()), then openstack_dashboard/dashboards/admin/overview/views.py (get_data), openstack_dashboard/usage/views.py and usage/base.py ("Unable to retrieve quota …", get_quotas), horizon/utils/memoized.py, openstack_dashboard/usage/quotas.py (tenant_quota_usages, get_tenant_quota_data, _get_quota_data) and openstack_dashboard/api/cinder.py (tenant_quota_get, cinderclient), and ends in python-cinderclient:

    File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 78, in …
      if hasattr(requests, logging):

The error message points at line 78 of cinderclient's client.py: the attribute name passed to hasattr() must be a string. Edit that line so the name is quoted:

    if hasattr(requests, 'logging'):   # if hasattr(requests, logging):

then restart apache2; the volume pages in the dashboard work again.

V.

1. Error: unable to connect to node rabbit@localhost. The database RabbitMQ uses is bound to the machine's hostname, so if you copied the database dir to another machine, it won't work. If this is the case, you have to set up a machine with the same hostname as before and transfer any outstanding messages to the new machine. If there's nothing important in rabbit, you could just clear everything by removing the RabbitMQ files in /var/lib/rabbitmq:

    rm -rf …

2. Access denied for user 'keystone'@'localhost' (using password: YES). Keystone dies with:

    super(Connection, self).__init__…
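A sketch of the "clear everything" option for the RabbitMQ hostname problem, assuming the default Mnesia location under /var/lib/rabbitmq and that re-creating the OpenStack user afterwards is acceptable:

    service rabbitmq-server stop
    rm -rf /var/lib/rabbitmq/mnesia          # wipes queues, users and vhosts tied to the old hostname
    service rabbitmq-server start
    rabbitmqctl change_password guest <rabbit_password>   # re-set the password your services expect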
    (*args, **kwargs2)
    sqlalchemy.exc.OperationalError: (OperationalError) (1045, "Access denied for user 'keystone'@'openstack1' (using password: YES)") None None

The credentials in the connection = … line of keystone.conf do not match what MySQL allows for that user and host; fix the connection string and make sure the keystone database user is granted access from the host keystone actually connects from.

3. Keystone: "No handlers could be found for logger 'keystoneclient.client'" / "Unable to authorize". Most of the time this comes from mistakes in keystone_data.sh or in the driver = / template_file settings. If the installation is too tangled to repair, the blunt fix is to drop everything and reinstall:

    mysql -u root -popenstack -e "drop database nova;"
    mysql -u root -popenstack -e "drop database glance;"
    mysql -u root -popenstack -e "drop database keystone;"

remove the nova packages (nova-compute-kvm, nova-doc, nova-network, nova-…), run apt-get autoremove, and delete the state directories:

    rm -rf /var/lib/glance
    rm -rf /var/lib/nova/
    rm -rf …

VI.

1. ImageNotFound: Image 3bb459f8-2fd9-4464-9b66-f2f34c17396f could not be found. This is tied to how Glance images are cached as base images on the compute node; see bug /nova/+bug/1029674. The gist is to adjust the image-cache options in the compute node's nova.conf:

    # Should unused base images be removed? (boolean value)
    # Unused unresized base images younger than this will not be removed (integer value)

(a hedged sketch with concrete values appears at the very end of these notes).

2. ceph: rbd rm fails because the image still has watchers.

    $ rbd rm …
    Removing image: 99% complete...failed.
    2013-08-02 14:07:17.530470 … -1 librbd: error removing header: (16) Device or resource busy
    rbd: error: image still has watchers
    This means the image is still open or the client using it crashed. Try again after closing/unmapping it
    or waiting 30s for the crashed client to timeout.

The image is still mapped on some client. Check the mappings, unmap it (stopping the rbdmap service releases the mapping), then retry the removal and clean up the rbdmap entries:

    $ rbd showmapped
    id pool image snap device
    1  rbd  myrbd -    /dev/rbd1
    $ service rbdmap stop
    Stopping RBD Mapping:  [ok]
    $ rbd rm …
    Removing image: 100% complete...done.
    $ vim /etc/ceph/rbdmap     # drop the entries (rbd/foo …) that should no longer be mapped

3. ceph-disk: Error: unable to create symlink /var/lib/ceph/osd/ceph-5 -> …. Activating the OSD through ceph-deploy fails:

    [network][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /tmp/osd0
    [network][WARNIN] … FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
    [network][WARNIN] … filestore(/tmp/osd0) could not find 23c2fcde/osd_superblock/0// -1 in index: (2) No such file or directory
    [network][WARNIN] 2014-06-09 18:49:07.333364 7f338b54c7a0 -1 created object store /tmp/osd0 journal /tmp/osd0/journal for osd.5 fsid …
    [network][WARNIN] 2014-06-09 18:49:07.333413 7f338b54c7a0 -1 auth: error reading file: /tmp/osd0/keyring: can't open /tmp/osd0/keyring: (2) No such file or directory
    [network][WARNIN] 2014-06-09 18:49:07.333531 7f338b54c7a0 -1 created new key in keyring /tmp/osd0/keyring
    [network][ERROR ] RuntimeError: command returned non-zero exit status: 1

Create the missing directory on the node with mkdir -p … and re-run the activation (a sketch follows below). The related error

    [network][WARNIN] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid …

means that /etc/ceph on the node has no ceph.conf whose fsid matches the cluster.
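A sketch for item 3's symlink error, assuming the failure is simply the missing /var/lib/ceph/osd parent directory named in the error:

    mkdir -p /var/lib/ceph/osd                                    # parent dir for the ceph-5 symlink that failed above
    ceph-disk-activate --mark-init sysvinit --mount /tmp/osd0     # re-run the activation from the log above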

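Returning to item 1 of section VI (ImageNotFound and the base-image cache): the two commented option descriptions quoted there correspond to the settings below. The values shown are assumptions, chosen to stop the image cache manager from deleting base images out from under running instances:

    # /etc/nova/nova.conf on the compute node
    # Should unused base images be removed? (boolean value)
    remove_unused_base_images = False
    # Unused unresized base images younger than this will not be removed (integer value)
    remove_unused_original_minimum_age_seconds = 86400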