Error opening disk image even though the file exists
Hi,
I am seeing the error below in my agent.log. Nothing has changed since Friday, which was the last time everything worked fine. Running VMs keep working, but I cannot start any new ones.
This is the error I am getting:
2011-09-12 11:23:51,711 WARN [resource.computing.LibvirtComputingResource] (Agent-Handler-4:null) Failed to start domain: i-2-54-VM: internal error process exited while connecting to monitor: char device redirected to /dev/pts/5
qemu: could not open disk image /mnt/3f004114-a10b-3e5a-a7a3-642729f6a7bb/3058eeb8-ae70-40e9-abc6-cf4d7ec0d38e: No such file or directory
2011-09-12 11:23:51,711 WARN [resource.computing.LibvirtComputingResource] (Agent-Handler-4:null) Exception
org.libvirt.LibvirtException: internal error process exited while connecting to monitor: char device redirected to /dev/pts/5
qemu: could not open disk image /mnt/3f004114-a10b-3e5a-a7a3-642729f6a7bb/3058eeb8-ae70-40e9-abc6-cf4d7ec0d38e: No such file or directory
at org.libvirt.ErrorHandler.processError(Unknown Source)
at org.libvirt.Connect.processError(Unknown Source)
at org.libvirt.Domain.processError(Unknown Source)
at org.libvirt.Domain.create(Unknown Source)
at com.cloud.agent.resource.computing.LibvirtComputingResource.startDomain(LibvirtComputingResource.java:759)
at com.cloud.agent.resource.computing.LibvirtComputingResource.execute(LibvirtComputingResource.java:2186)
at com.cloud.agent.resource.computing.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:884)
at com.cloud.agent.Agent.processRequest(Agent.java:499)
at com.cloud.agent.Agent$ServerHandler.doTask(Agent.java:818)
at com.cloud.utils.nio.Task.run(Task.java:85)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
As I said, nothing has changed since it last worked; the errors just showed up all of a sudden after the weekend. I am wondering if it could be something with the secondary storage VM?
You can see below that the file it complains about is actually there; here is some more relevant information:
[root@cloudstack-agent1 agent]# ll /mnt/3f004114-a10b-3e5a-a7a3-642729f6a7bb/3058eeb8-ae70-40e9-abc6-cf4d7ec0d38e
-rw-------. 1 root root 262144 Sep 12 11:23 /mnt/3f004114-a10b-3e5a-a7a3-642729f6a7bb/3058eeb8-ae70-40e9-abc6-cf4d7ec0d38e
[root@cloudstack-agent1 agent]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p3 66G 1.9G 61G 3% /
tmpfs 64G 0 64G 0% /dev/shm
/dev/cciss/c0d0p1 124M 54M 64M 46% /boot
/dev/cciss/c0d1p1 404G 43G 341G 12% /data
cloudstack-agent1.dv.adinfocenter.com:/data/primary
404G 43G 341G 12% /mnt/3f004114-a10b-3e5a-a7a3-642729f6a7bb
cloudstack-agent1.dv.adinfocenter.com:/data/secondary/template/tmpl/2/201/
404G 43G 341G 12% /mnt/1d5cab6e-b24d-3aa7-ae32-040acfa8e6eb
cloudstack-agent1.dv.adinfocenter.com:/data/secondary
404G 43G 341G 12% /mnt/922989e9-70fa-303f-9ad9-da56c63fc2f6
cloudstack-agent1.dv.adinfocenter.com:/data/secondary/template/tmpl/3/206/
404G 43G 341G 12% /mnt/8719af57-4fa3-33d7-977f-a542bcd647d8
[root@cloudstack-agent1 agent]#
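In case it helps narrow things down, I can run more checks and post the output. One thing I notice is that primary storage is an NFS export from this same host, and the image file is owned by root with 600 permissions, so maybe the export options, SELinux, or the user qemu runs as are involved. I was thinking of something along these lines (the "qemu" user is just my guess for the account libvirt launches the process as; paths/users can be adjusted):

exportfs -v                                      # export options for /data/primary (root_squash?)
sudo -u qemu head -c1 /mnt/3f004114-a10b-3e5a-a7a3-642729f6a7bb/3058eeb8-ae70-40e9-abc6-cf4d7ec0d38e   # can that user read the image?
getenforce                                       # is SELinux enforcing?
ls -lZ /mnt/3f004114-a10b-3e5a-a7a3-642729f6a7bb/3058eeb8-ae70-40e9-abc6-cf4d7ec0d38e                  # SELinux context on the image
grep denied /var/log/audit/audit.log | tail      # any recent AVC denials
virsh list --all                                 # are the system VMs (including the secondary storage VM) running?

If any of that output would be useful, say the word and I will post it.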
Please post any thoughts/suggestions that might help.
Thanks.