From deeptik at linux.vnet.ibm.com Tue Sep 1 07:22:39 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 01 Sep 2009 00:22:39 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Fixing HostedDependency/04_reverse_errs.py Message-ID: <4ccfbf5da9c6a03d9942.1251789759@elm3b151.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1251789710 25200 # Node ID 4ccfbf5da9c6a03d994246d415c4ada5484594bc # Parent 95fa64bf447e5bc2bab501564e3d9336edef997d [TEST] Fixing HostedDependency/04_reverse_errs.py The error description is the same with and without sblim-cmpi-base. Tested with current sources on F11 and SLES11 with KVM. Also verified that the description is the same with and without sblim-cmpi-base on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 95fa64bf447e -r 4ccfbf5da9c6 suites/libvirt-cim/cimtest/HostedDependency/04_reverse_errs.py --- a/suites/libvirt-cim/cimtest/HostedDependency/04_reverse_errs.py Thu Aug 27 16:39:53 2009 -0700 +++ b/suites/libvirt-cim/cimtest/HostedDependency/04_reverse_errs.py Tue Sep 01 00:21:50 2009 -0700 @@ -45,14 +45,9 @@ test_mac = "00:11:22:33:44:55" def set_expr_values(host_ccn): - if (host_ccn == "Linux_ComputerSystem"): - exp_rc = pywbem.CIM_ERR_INVALID_PARAMETER - exp_d1 = "INVALID" - exp_d2 = "INVALID" - else: - exp_rc = pywbem.CIM_ERR_NOT_FOUND - exp_d1 = "No such instance (Name)" - exp_d2 = "No such instance (CreationClassName)" + exp_rc = pywbem.CIM_ERR_NOT_FOUND + exp_d1 = "No such instance (Name)" + exp_d2 = "No such instance (CreationClassName)" expr_values = { "INVALID_NameValue" : { 'rc' : exp_rc, 'desc' : exp_d1 }, From deeptik at linux.vnet.ibm.com Tue Sep 1 08:51:58 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 01 Sep 2009 01:51:58 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Fixing HostedResourcePool/03_forward_errs.py Message-ID: <66a9e25564538bea1d26.1251795118@elm3b151.beaverton.ibm.com> # HG changeset patch # User Deepti B.
Kalakeri # Date 1251794989 25200 # Node ID 66a9e25564538bea1d26b3b71d50ba99d70aca9a # Parent 95fa64bf447e5bc2bab501564e3d9336edef997d [TEST] Fixing HostedResourcePool/03_forward_errs.py The error description is the same with and without sblim-cmpi-base. Tested with current sources on F11 and SLES11 with KVM. Also verified that the description is the same with and without sblim-cmpi-base on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 95fa64bf447e -r 66a9e2556453 suites/libvirt-cim/cimtest/HostedResourcePool/03_forward_errs.py --- a/suites/libvirt-cim/cimtest/HostedResourcePool/03_forward_errs.py Thu Aug 27 16:39:53 2009 -0700 +++ b/suites/libvirt-cim/cimtest/HostedResourcePool/03_forward_errs.py Tue Sep 01 01:49:49 2009 -0700 @@ -56,11 +56,6 @@ host_cn = host_inst.CreationClassName host_sys = host_inst.Name - if (host_cn == "Linux_ComputerSystem"): - sblim_rc = pywbem.CIM_ERR_INVALID_PARAMETER - expr_values['invalid_ccname'] = {"rc" : sblim_rc, "desc" : "wrong"} - expr_values['invalid_name'] = {"rc" : sblim_rc, "desc" : "wrong"} - assoc_classname = get_typed_class(options.virt, "HostedResourcePool") keys = {"Name" : host_sys, "CreationClassName" : "wrong"} ret = try_assoc(conn, host_cn, assoc_classname, keys, From deeptik at linux.vnet.ibm.com Tue Sep 1 09:02:06 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 01 Sep 2009 02:02:06 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Fixing HostedService/03_forward_errs.py Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1251795710 25200 # Node ID e6cfd57f957090760ab2454d60ce3b8c6d12a4ee # Parent 4ccfbf5da9c6a03d994246d415c4ada5484594bc [TEST] Fixing HostedService/03_forward_errs.py The error description is the same with and without sblim-cmpi-base. Tested with current sources on F11 and SLES11 with KVM. Also verified that the description is the same with and without sblim-cmpi-base on SLES11. Signed-off-by: Deepti B.
Kalakeri diff -r 4ccfbf5da9c6 -r e6cfd57f9570 suites/libvirt-cim/cimtest/HostedService/03_forward_errs.py --- a/suites/libvirt-cim/cimtest/HostedService/03_forward_errs.py Tue Sep 01 00:21:50 2009 -0700 +++ b/suites/libvirt-cim/cimtest/HostedService/03_forward_errs.py Tue Sep 01 02:01:50 2009 -0700 @@ -55,14 +55,6 @@ host_ccn = host_inst.CreationClassName host_name = host_inst.Name - if (host_ccn == "Linux_ComputerSystem"): - exp_values['invalid_ccname'] = {"rc" : pywbem.CIM_ERR_INVALID_PARAMETER, - "desc" : "Linux_ComputerSystem" - } - exp_values['invalid_name'] = {"rc" : pywbem.CIM_ERR_INVALID_PARAMETER, - "desc" : "Linux_ComputerSystem" - } - conn = assoc.myWBEMConnection('http://%s' % options.ip, (CIM_USER, CIM_PASS), CIM_NS) From deeptik at linux.vnet.ibm.com Tue Sep 1 11:07:48 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Tue, 01 Sep 2009 16:37:48 +0530 Subject: [Libvirt-cim] Test Run Summary (Sep 01 2009): KVM on SUSE Linux Enterprise Server 11 (i586) with sfcb Message-ID: <4A9D0084.1060803@linux.vnet.ibm.com> ================================================= Test Run Summary (Sep 01 2009): KVM on SUSE Linux Enterprise Server 11 (i586) with sfcb ================================================= Distro: SUSE Linux Enterprise Server 11 (i586) Kernel: 2.6.27.19-5-pae libvirt: 0.4.6 Hypervisor: QEMU 0.9.1 CIMOM: sfcb sfcbd 1.3.2 Libvirt-cim revision: 968 Libvirt-cim changeset: b0f5fe2c2a73 Cimtest revision: 770 Cimtest changeset: 4ccfbf5da9c6 ================================================= FAIL : 5 XFAIL : 5 SKIP : 11 PASS : 148 ----------------- Total : 169 ================================================= FAIL Test Summary: ComputerSystemIndication - 01_created_indication.py: FAIL HostedResourcePool - 03_forward_errs.py: FAIL HostedService - 03_forward_errs.py: FAIL Memory - 03_mem_gi_errs.py: FAIL VirtualSystemManagementService - 15_mod_system_settings.py: FAIL ================================================= XFAIL Test Summary: 
ComputerSystem - 32_start_reboot.py: XFAIL ComputerSystem - 33_suspend_reboot.py: XFAIL VirtualSystemManagementService - 09_procrasd_persist.py: XFAIL VirtualSystemManagementService - 16_removeresource.py: XFAIL VirtualSystemManagementService - 22_addmulti_brg_interface.py: XFAIL ================================================= SKIP Test Summary: ComputerSystem - 02_nosystems.py: SKIP ComputerSystemMigrationJobIndication - 01_csmig_ind_for_offline_mig.py: SKIP HostSystem - 05_hs_gi_errs.py: SKIP LogicalDisk - 02_nodevs.py: SKIP VSSD - 02_bootldr.py: SKIP VirtualSystemMigrationService - 01_migratable_host.py: SKIP VirtualSystemMigrationService - 02_host_migrate_type.py: SKIP VirtualSystemMigrationService - 05_migratable_host_errs.py: SKIP VirtualSystemMigrationService - 06_remote_live_migration.py: SKIP VirtualSystemMigrationService - 07_remote_offline_migration.py: SKIP VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: SKIP ================================================= Full report: -------------------------------------------------------------------- AllocationCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- AllocationCapabilities - 02_alloccap_gi_errs.py: PASS -------------------------------------------------------------------- ComputerSystem - 01_enum.py: PASS -------------------------------------------------------------------- ComputerSystem - 02_nosystems.py: SKIP ERROR - System has defined domains; unable to run -------------------------------------------------------------------- ComputerSystem - 03_defineVS.py: PASS -------------------------------------------------------------------- ComputerSystem - 04_defineStartVS.py: PASS -------------------------------------------------------------------- ComputerSystem - 05_activate_defined_start.py: PASS -------------------------------------------------------------------- ComputerSystem - 06_paused_active_suspend.py: PASS 
-------------------------------------------------------------------- ComputerSystem - 22_define_suspend.py: PASS -------------------------------------------------------------------- ComputerSystem - 23_pause_pause.py: PASS -------------------------------------------------------------------- ComputerSystem - 27_define_pause_errs.py: PASS -------------------------------------------------------------------- ComputerSystem - 32_start_reboot.py: XFAIL ERROR - Got CIM error Unable to reboot domain: this function is not supported by the hypervisor: virDomainReboot with return code 1 ERROR - Exception: Unable reboot dom 'cs_test_domain' InvokeMethod(RequestStateChange): Unable to reboot domain: this function is not supported by the hypervisor: virDomainReboot Bug:<00005> -------------------------------------------------------------------- ComputerSystem - 33_suspend_reboot.py: XFAIL ERROR - Got CIM error State not supported with return code 7 ERROR - Exception: Unable Suspend dom 'test_domain' InvokeMethod(RequestStateChange): State not supported Bug:<00012> -------------------------------------------------------------------- ComputerSystem - 34_start_disable.py: PASS -------------------------------------------------------------------- ComputerSystem - 35_start_reset.py: PASS -------------------------------------------------------------------- ComputerSystem - 40_RSC_start.py: PASS -------------------------------------------------------------------- ComputerSystem - 41_cs_to_settingdefinestate.py: PASS -------------------------------------------------------------------- ComputerSystem - 42_cs_gi_errs.py: PASS -------------------------------------------------------------------- ComputerSystemIndication - 01_created_indication.py: FAIL ERROR - Waited too long for define indication ERROR - Waited too long for start indication ERROR - Waited too long for destroy indication -------------------------------------------------------------------- ComputerSystemMigrationJobIndication 
- 01_csmig_ind_for_offline_mig.py: SKIP -------------------------------------------------------------------- ElementAllocatedFromPool - 01_forward.py: PASS -------------------------------------------------------------------- ElementAllocatedFromPool - 02_reverse.py: PASS -------------------------------------------------------------------- ElementAllocatedFromPool - 03_reverse_errs.py: PASS -------------------------------------------------------------------- ElementAllocatedFromPool - 04_forward_errs.py: PASS -------------------------------------------------------------------- ElementCapabilities - 01_forward.py: PASS -------------------------------------------------------------------- ElementCapabilities - 02_reverse.py: PASS -------------------------------------------------------------------- ElementCapabilities - 03_forward_errs.py: PASS -------------------------------------------------------------------- ElementCapabilities - 04_reverse_errs.py: PASS -------------------------------------------------------------------- ElementCapabilities - 05_hostsystem_cap.py: PASS -------------------------------------------------------------------- ElementConforms - 01_forward.py: PASS -------------------------------------------------------------------- ElementConforms - 02_reverse.py: PASS -------------------------------------------------------------------- ElementConforms - 03_ectp_fwd_errs.py: PASS -------------------------------------------------------------------- ElementConforms - 04_ectp_rev_errs.py: PASS -------------------------------------------------------------------- ElementSettingData - 01_forward.py: PASS -------------------------------------------------------------------- ElementSettingData - 03_esd_assoc_with_rasd_errs.py: PASS -------------------------------------------------------------------- EnabledLogicalElementCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- EnabledLogicalElementCapabilities - 
02_elecap_gi_errs.py: PASS -------------------------------------------------------------------- HostSystem - 01_enum.py: PASS -------------------------------------------------------------------- HostSystem - 02_hostsystem_to_rasd.py: PASS -------------------------------------------------------------------- HostSystem - 03_hs_to_settdefcap.py: PASS -------------------------------------------------------------------- HostSystem - 04_hs_to_EAPF.py: PASS -------------------------------------------------------------------- HostSystem - 05_hs_gi_errs.py: SKIP -------------------------------------------------------------------- HostSystem - 06_hs_to_vsms.py: PASS -------------------------------------------------------------------- HostedAccessPoint - 01_forward.py: PASS -------------------------------------------------------------------- HostedAccessPoint - 02_reverse.py: PASS -------------------------------------------------------------------- HostedDependency - 01_forward.py: PASS -------------------------------------------------------------------- HostedDependency - 02_reverse.py: PASS -------------------------------------------------------------------- HostedDependency - 03_enabledstate.py: PASS -------------------------------------------------------------------- HostedDependency - 04_reverse_errs.py: PASS -------------------------------------------------------------------- HostedResourcePool - 01_forward.py: PASS -------------------------------------------------------------------- HostedResourcePool - 02_reverse.py: PASS -------------------------------------------------------------------- HostedResourcePool - 03_forward_errs.py: FAIL ERROR - Unexpected rc code 6 and description No such instance (CreationClassName) ERROR - ------FAILED: Invalid CreationClassName Key Value.------ -------------------------------------------------------------------- HostedResourcePool - 04_reverse_errs.py: PASS -------------------------------------------------------------------- 
HostedService - 01_forward.py: PASS -------------------------------------------------------------------- HostedService - 02_reverse.py: PASS -------------------------------------------------------------------- HostedService - 03_forward_errs.py: FAIL ERROR - Unexpected rc code 6 and description No such instance (Name) ERROR - ------ FAILED: Invalid Name Key Name.------ -------------------------------------------------------------------- HostedService - 04_reverse_errs.py: PASS -------------------------------------------------------------------- KVMRedirectionSAP - 01_enum_KVMredSAP.py: PASS -------------------------------------------------------------------- LogicalDisk - 01_disk.py: PASS -------------------------------------------------------------------- LogicalDisk - 02_nodevs.py: SKIP ERROR - System has defined domains; unable to run -------------------------------------------------------------------- LogicalDisk - 03_ld_gi_errs.py: PASS -------------------------------------------------------------------- Memory - 01_memory.py: PASS -------------------------------------------------------------------- Memory - 02_defgetmem.py: PASS -------------------------------------------------------------------- Memory - 03_mem_gi_errs.py: FAIL ERROR - Got CIM error *** Provider Virt_VirtualSystemManagementService(10382) exiting due to a SIGSEGV signal with return code 1 ERROR - Failed to Create the dom: domU InvokeMethod(DefineSystem): *** Provider Virt_VirtualSystemManagementService(10382) exiting due to a SIGSEGV signal -------------------------------------------------------------------- NetworkPort - 01_netport.py: PASS -------------------------------------------------------------------- NetworkPort - 02_np_gi_errors.py: PASS -------------------------------------------------------------------- NetworkPort - 03_user_netport.py: PASS -------------------------------------------------------------------- Processor - 01_processor.py: PASS 
-------------------------------------------------------------------- Processor - 02_definesys_get_procs.py: PASS -------------------------------------------------------------------- Processor - 03_proc_gi_errs.py: PASS -------------------------------------------------------------------- Profile - 01_enum.py: PASS -------------------------------------------------------------------- Profile - 02_profile_to_elec.py: PASS -------------------------------------------------------------------- Profile - 03_rprofile_gi_errs.py: PASS -------------------------------------------------------------------- RASD - 01_verify_rasd_fields.py: PASS -------------------------------------------------------------------- RASD - 02_enum.py: PASS -------------------------------------------------------------------- RASD - 03_rasd_errs.py: PASS -------------------------------------------------------------------- RASD - 04_disk_rasd_size.py: PASS -------------------------------------------------------------------- RASD - 05_disk_rasd_emu_type.py: PASS -------------------------------------------------------------------- RASD - 06_parent_net_pool.py: PASS -------------------------------------------------------------------- RASD - 07_parent_disk_pool.py: PASS -------------------------------------------------------------------- RedirectionService - 01_enum_crs.py: PASS -------------------------------------------------------------------- RedirectionService - 02_enum_crscap.py: PASS -------------------------------------------------------------------- RedirectionService - 03_RedirectionSAP_errs.py: PASS -------------------------------------------------------------------- ReferencedProfile - 01_verify_refprof.py: PASS -------------------------------------------------------------------- ReferencedProfile - 02_refprofile_errs.py: PASS -------------------------------------------------------------------- ResourceAllocationFromPool - 01_forward.py: PASS 
-------------------------------------------------------------------- ResourceAllocationFromPool - 02_reverse.py: PASS -------------------------------------------------------------------- ResourceAllocationFromPool - 03_forward_errs.py: PASS -------------------------------------------------------------------- ResourceAllocationFromPool - 04_reverse_errs.py: PASS -------------------------------------------------------------------- ResourceAllocationFromPool - 05_RAPF_err.py: PASS -------------------------------------------------------------------- ResourcePool - 01_enum.py: PASS -------------------------------------------------------------------- ResourcePool - 02_rp_gi_errors.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationCapabilities - 02_rpcc_gi_errs.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 01_enum.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 02_rcps_gi_errors.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 03_CreateResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 04_CreateChildResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 05_AddResourcesToResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 06_RemoveResourcesFromResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 07_DeleteResourcePool.py: PASS -------------------------------------------------------------------- 
ResourcePoolConfigurationService - 08_CreateDiskResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 09_DeleteDiskPool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 10_create_storagevolume.py: PASS -------------------------------------------------------------------- ServiceAccessBySAP - 01_forward.py: PASS -------------------------------------------------------------------- ServiceAccessBySAP - 02_reverse.py: PASS -------------------------------------------------------------------- ServiceAffectsElement - 01_forward.py: PASS -------------------------------------------------------------------- ServiceAffectsElement - 02_reverse.py: PASS -------------------------------------------------------------------- SettingsDefine - 01_forward.py: PASS -------------------------------------------------------------------- SettingsDefine - 02_reverse.py: PASS -------------------------------------------------------------------- SettingsDefine - 03_sds_fwd_errs.py: PASS -------------------------------------------------------------------- SettingsDefine - 04_sds_rev_errs.py: PASS -------------------------------------------------------------------- SettingsDefineCapabilities - 01_forward.py: PASS -------------------------------------------------------------------- SettingsDefineCapabilities - 03_forward_errs.py: PASS -------------------------------------------------------------------- SettingsDefineCapabilities - 04_forward_vsmsdata.py: PASS -------------------------------------------------------------------- SettingsDefineCapabilities - 05_reverse_vsmcap.py: PASS -------------------------------------------------------------------- SystemDevice - 01_forward.py: PASS -------------------------------------------------------------------- SystemDevice - 02_reverse.py: PASS -------------------------------------------------------------------- 
SystemDevice - 03_fwderrs.py: PASS -------------------------------------------------------------------- VSSD - 01_enum.py: PASS -------------------------------------------------------------------- VSSD - 02_bootldr.py: SKIP -------------------------------------------------------------------- VSSD - 03_vssd_gi_errs.py: PASS -------------------------------------------------------------------- VSSD - 04_vssd_to_rasd.py: PASS -------------------------------------------------------------------- VSSD - 05_set_uuid.py: PASS -------------------------------------------------------------------- VSSD - 06_duplicate_uuid.py: PASS -------------------------------------------------------------------- VirtualSystemManagementCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- VirtualSystemManagementCapabilities - 02_vsmcap_gi_errs.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 01_definesystem_name.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 02_destroysystem.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 03_definesystem_ess.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 04_definesystem_ers.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 05_destroysystem_neg.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 06_addresource.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 07_addresource_neg.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 08_modifyresource.py: PASS -------------------------------------------------------------------- 
VirtualSystemManagementService - 09_procrasd_persist.py: XFAIL -------------------------------------------------------------------- VirtualSystemManagementService - 10_hv_version.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 11_define_memrasdunits.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 12_referenced_config.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 13_refconfig_additional_devs.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 14_define_sys_disk.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 15_mod_system_settings.py: FAIL ERROR - CIMError : (1, u"Guest 'rstest_domain' is already defined with UUID 94857fab-81ed-4cb9-86bc-781b7d0393cc") Traceback (most recent call last): File "./lib/XenKvmLib/const.py", line 139, in do_try File "15_mod_system_settings.py", line 103, in main ret = service.ModifySystemSettings(SystemSettings=vssd) File "/data/users/deepti/SLES11/cimtest/lib/CimTest/CimExt.py", line 32, in __call__ return self.__invoker(self.__name, args) File "/data/users/deepti/SLES11/cimtest/lib/CimTest/CimExt.py", line 44, in __invoke return self.conn.InvokeMethod(method, self.inst, **params) File "/usr/lib/python2.6/site-packages/pywbem/cim_operations.py", line 801, in InvokeMethod result = self.methodcall(MethodName, obj, **params) File "/usr/lib/python2.6/site-packages/pywbem/cim_operations.py", line 362, in methodcall raise CIMError(code, tt[0][1]['DESCRIPTION']) CIMError: (1, u"Guest 'rstest_domain' is already defined with UUID 94857fab-81ed-4cb9-86bc-781b7d0393cc") ERROR - None InvokeMethod(ModifySystemSettings): Guest 'rstest_domain' is already defined with UUID 94857fab-81ed-4cb9-86bc-781b7d0393cc 
-------------------------------------------------------------------- VirtualSystemManagementService - 16_removeresource.py: XFAIL ERROR - 0 RASD insts for domain/mouse:ps2 No such instance (no device domain/mouse:ps2) Bug:<00014> -------------------------------------------------------------------- VirtualSystemManagementService - 17_removeresource_neg.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 18_define_sys_bridge.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 19_definenetwork_ers.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 20_verify_vnc_password.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 21_createVS_verifyMAC.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 22_addmulti_brg_interface.py: XFAIL ERROR - Error invoking AddRS: add_net_res ERROR - (1, u'Unable to change (0) device: this function is not supported by the hypervisor: this device type cannot be attached') ERROR - Failed to destroy Virtual Network 'my_network1' InvokeMethod(AddResourceSettings): Unable to change (0) device: this function is not supported by the hypervisor: this device type cannot be attached Bug:<00015> -------------------------------------------------------------------- VirtualSystemManagementService - 23_verify_duplicate_mac_err.py: PASS -------------------------------------------------------------------- VirtualSystemMigrationCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- VirtualSystemMigrationCapabilities - 02_vsmc_gi_errs.py: PASS -------------------------------------------------------------------- VirtualSystemMigrationService - 01_migratable_host.py: SKIP 
-------------------------------------------------------------------- VirtualSystemMigrationService - 02_host_migrate_type.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationService - 05_migratable_host_errs.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationService - 06_remote_live_migration.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationService - 07_remote_offline_migration.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationSettingData - 01_enum.py: PASS -------------------------------------------------------------------- VirtualSystemMigrationSettingData - 02_vsmsd_gi_errs.py: PASS -------------------------------------------------------------------- VirtualSystemSettingDataComponent - 01_forward.py: PASS -------------------------------------------------------------------- VirtualSystemSettingDataComponent - 02_reverse.py: PASS -------------------------------------------------------------------- VirtualSystemSettingDataComponent - 03_vssdc_fwd_errs.py: PASS -------------------------------------------------------------------- VirtualSystemSettingDataComponent - 04_vssdc_rev_errs.py: PASS -------------------------------------------------------------------- VirtualSystemSnapshotService - 01_enum.py: PASS -------------------------------------------------------------------- VirtualSystemSnapshotService - 02_vs_sservice_gi_errs.py: PASS -------------------------------------------------------------------- VirtualSystemSnapshotService - 03_create_snapshot.py: PASS -------------------------------------------------------------------- VirtualSystemSnapshotServiceCapabilities - 01_enum.py: PASS 
-------------------------------------------------------------------- VirtualSystemSnapshotServiceCapabilities - 02_vs_sservicecap_gi_errs.py: PASS -------------------------------------------------------------------- -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Tue Sep 1 12:49:03 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Tue, 01 Sep 2009 18:19:03 +0530 Subject: [Libvirt-cim] Re: Test Run Summary (Sep 01 2009): KVM on SUSE Linux Enterprise Server 11 (i586) with sfcb In-Reply-To: <4A9D0084.1060803@linux.vnet.ibm.com> References: <4A9D0084.1060803@linux.vnet.ibm.com> Message-ID: <4A9D183F.2080608@linux.vnet.ibm.com> Deepti B Kalakeri wrote: > ================================================= > Test Run Summary (Sep 01 2009): KVM on SUSE Linux Enterprise Server > 11 (i586) with sfcb > ================================================= > Distro: SUSE Linux Enterprise Server 11 (i586) Kernel: 2.6.27.19-5-pae > libvirt: 0.4.6 > Hypervisor: QEMU 0.9.1 > CIMOM: sfcb sfcbd 1.3.2 > Libvirt-cim revision: 968 > Libvirt-cim changeset: b0f5fe2c2a73 > Cimtest revision: 770 > Cimtest changeset: 4ccfbf5da9c6 > ================================================= > FAIL : 5 > XFAIL : 5 > SKIP : 11 > PASS : 148 > ----------------- > Total : 169 > ================================================= > FAIL Test Summary: > ComputerSystemIndication - 01_created_indication.py: FAIL Known issue. > HostedResourcePool - 03_forward_errs.py: FAIL > HostedService - 03_forward_errs.py: FAIL Fix for this already submitted. > Memory - 03_mem_gi_errs.py: FAIL This passed when run manually. > VirtualSystemManagementService - 15_mod_system_settings.py: FAIL > > Will send fix for this. -- Thanks and Regards, Deepti B.
Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From rmaciel at linux.vnet.ibm.com Tue Sep 1 15:16:49 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Tue, 01 Sep 2009 12:16:49 -0300 Subject: [Libvirt-cim] [PATCH 0 of 3] First set of patches relating to image deletion In-Reply-To: References: Message-ID: <4A9D3AE1.3000609@linux.vnet.ibm.com> +1 On 08/28/2009 08:13 PM, Kaitlin Rupert wrote: > This is the first set of changes. There needs to be a follow-up patch to > enumerate the storage volumes - some way other than via the SDC association. > > Updates: > -This fixes the image deletion code so that the user can pass one > of the RASDs returned from SDC. > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Tue Sep 1 18:35:10 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 01 Sep 2009 11:35:10 -0700 Subject: [Libvirt-cim] [PATCH] Fix logic for checking UUID conflicts in ModifySystemSettings() Message-ID: # HG changeset patch # User Kaitlin Rupert # Date 1251830061 25200 # Node ID ed4e0bfacffbeded283d2a81d0b0fb0736fb6f5b # Parent a0297a6cdac8864acd43c873058beecaf54fca2b Fix logic for checking UUID conflicts in ModifySystemSettings() Instead of checking to see if the UUID is in use, we need to make sure the provider is using the existing UUID. If the user specifies a UUID that is different, an error is returned. If no UUID is specified (or an empty string is given), the provider will override that value with the original UUID. This fixes a bug where the user specifies an empty string, which we were passing to libvirt.
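The UUID check described above reduces to a small decision: keep the guest's existing UUID when none (or an empty string) is supplied, and refuse any attempt to change it. A minimal Python sketch of that decision logic (illustrative only; the real fix is the C change in the diff that follows, and the helper name resolve_uuid is invented here):

```python
def resolve_uuid(existing_uuid, requested_uuid):
    """Sketch of the ModifySystemSettings() UUID handling (hypothetical
    helper; the actual code lives in Virt_VirtualSystemManagementService.c)."""
    # No UUID supplied (None) or an empty string: keep the guest's
    # existing UUID instead of passing the empty value on to libvirt.
    if requested_uuid is None or requested_uuid == "":
        return existing_uuid
    # A different UUID was supplied: reject, since the UUID of an
    # already-defined guest cannot be changed this way.
    if requested_uuid != existing_uuid:
        raise ValueError("guest is already defined with UUID %s - cannot "
                         "change UUID to %s" % (existing_uuid, requested_uuid))
    # Same UUID as before: nothing to do.
    return existing_uuid
```

The empty-string case is exactly the bug the patch fixes: previously the empty value reached libvirt instead of being replaced with the original UUID.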
Signed-off-by: Kaitlin Rupert diff -r a0297a6cdac8 -r ed4e0bfacffb src/Virt_VirtualSystemManagementService.c --- a/src/Virt_VirtualSystemManagementService.c Tue Aug 25 13:38:23 2009 -0700 +++ b/src/Virt_VirtualSystemManagementService.c Tue Sep 01 11:34:21 2009 -0700 @@ -1621,6 +1621,7 @@ virDomainPtr dom = NULL; struct domain *dominfo = NULL; char *xml = NULL; + const char *uuid = NULL; ret = cu_get_str_prop(vssd, "VirtualSystemIdentifier", &name); if (ret != CMPI_RC_OK) { @@ -1652,6 +1653,8 @@ goto out; } + uuid = strdup(dominfo->uuid); + if (!vssd_to_domain(vssd, dominfo)) { cu_statusf(_BROKER, &s, CMPI_RC_ERR_FAILED, @@ -1659,9 +1662,18 @@ goto out; } - s = check_uuid_in_use(ref, dominfo); - if (s.rc != CMPI_RC_OK) + if ((dominfo->uuid == NULL) || (STREQ(dominfo->uuid, ""))) { + dominfo->uuid = strdup(uuid); + } else if (!STREQ(uuid, dominfo->uuid)) { + cu_statusf(_BROKER, &s, + CMPI_RC_ERR_FAILED, + "%s is already defined with UUID %s - cannot change " + "UUID to the UUID specified %s", + name, + uuid, + dominfo->uuid); goto out; + } xml = system_to_xml(dominfo); if (xml != NULL) { From kaitlin at linux.vnet.ibm.com Tue Sep 1 18:35:45 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 01 Sep 2009 11:35:45 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Add try / except to VSMS 15 Message-ID: # HG changeset patch # User Kaitlin Rupert # Date 1251828184 25200 # Node ID ddb880e221d36151a9f91c3b0ab95f9cca97c2fa # Parent 95fa64bf447e5bc2bab501564e3d9336edef997d [TEST] Add try / except to VSMS 15 This will catch any unexpected exceptions. 
Otherwise, the exception isn't caught and the guest may not be properly undefined Signed-off-by: Kaitlin Rupert diff -r 95fa64bf447e -r ddb880e221d3 suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py --- a/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py Thu Aug 27 16:39:53 2009 -0700 +++ b/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py Tue Sep 01 11:03:04 2009 -0700 @@ -74,72 +74,71 @@ cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) service = vsms.get_vsms_class(options.virt)(options.ip) - for case in test_cases: - #Each time through, define guest using a default XML - cxml.undefine(options.ip) - cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) - ret = cxml.cim_define(options.ip) - if not ret: - logger.error("Failed to define the dom: %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + try: - if case == "start": - ret = cxml.start(options.ip) + for case in test_cases: + #Each time through, define guest using a default XML + cxml.undefine(options.ip) + cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) + ret = cxml.cim_define(options.ip) if not ret: - logger.error("Failed to start %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + raise Exception("Failed to define the dom: %s", default_dom) - status, inst = get_vssd(options.ip, options.virt, True) - if status != PASS: - logger.error("Failed to get the VSSD instance for %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + if case == "start": + ret = cxml.start(options.ip) + if not ret: + raise Exception("Failed to start %s", default_dom) - inst['AutomaticRecoveryAction'] = pywbem.cim_types.Uint16(RECOVERY_VAL) - vssd = inst_to_mof(inst) + status, inst = get_vssd(options.ip, options.virt, True) + if status != PASS: + raise Exception("Failed to get the VSSD instance for %s", + default_dom) - ret =
service.ModifySystemSettings(SystemSettings=vssd) - curr_cim_rev, changeset = get_provider_version(options.virt, options.ip) - if curr_cim_rev >= libvirt_modify_setting_changes: - if ret[0] != 0: - logger.error("Failed to modify dom: %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + val = pywbem.cim_types.Uint16(RECOVERY_VAL) + inst['AutomaticRecoveryAction'] = val + vssd = inst_to_mof(inst) - if case == "start": - #This should be replaced with a RSC to shutdownt he guest - cxml.destroy(options.ip) - status, cs = poll_for_state_change(options.ip, options.virt, - default_dom, DEFINED_STATE) + ret = service.ModifySystemSettings(SystemSettings=vssd) + curr_cim_rev, changeset = get_provider_version(options.virt, + options.ip) + if curr_cim_rev >= libvirt_modify_setting_changes: + if ret[0] != 0: + raise Exception("Failed to modify dom: %s", default_dom) + + if case == "start": + #This should be replaced with a RSC to shutdown the guest + cxml.destroy(options.ip) + status, cs = poll_for_state_change(options.ip, options.virt, + default_dom, DEFINED_STATE) + if status != PASS: + raise Exception("Failed to destroy %s", default_dom) + + status, inst = get_vssd(options.ip, options.virt, False) if status != PASS: - logger.error("Failed to destroy %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + raise Exception("Failed to get the VSSD instance for %s", + default_dom) - status, inst = get_vssd(options.ip, options.virt, False) - if status != PASS: - logger.error("Failed to get the VSSD instance for %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + if inst.AutomaticRecoveryAction != RECOVERY_VAL: + logger.error("Exp AutomaticRecoveryAction=%d, got %d", + RECOVERY_VAL, inst.AutomaticRecoveryAction) + raise Exception("%s not updated properly.", default_dom) - if inst.AutomaticRecoveryAction != RECOVERY_VAL: - logger.error("%s not updated properly.", default_dom) - logger.error("Exp AutomaticRecoveryAction=%d, got %d", RECOVERY_VAL, -
inst.AutomaticRecoveryAction) - cleanup_env(options.ip, cxml) - curr_cim_rev, changeset = get_provider_version(options.virt, options.ip) - if curr_cim_rev <= libvirt_f9_revision and options.virt == "KVM": - return XFAIL_RC(f9_bug) + status = PASS - if options.virt == "LXC": - return XFAIL_RC(bug) - return FAIL + except Exception, details: + logger.error(details) + status = FAIL cleanup_env(options.ip, cxml) - return PASS + curr_cim_rev, changeset = get_provider_version(options.virt, options.ip) + if curr_cim_rev <= libvirt_f9_revision and options.virt == "KVM": + return XFAIL_RC(f9_bug) + + if options.virt == "LXC": + return XFAIL_RC(bug) + + return status if __name__ == "__main__": sys.exit(main()) From kaitlin at linux.vnet.ibm.com Tue Sep 1 18:36:04 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 01 Sep 2009 11:36:04 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Fix VSSD 06 - get_vssd() already returns instance for default_dom Message-ID: <4243058cb57cbe531a3f.1251830164@elm3b151.beaverton.ibm.com> # HG changeset patch # User Kaitlin Rupert # Date 1251829185 25200 # Node ID 4243058cb57cbe531a3f28d633e0951d2146f2db # Parent ddb880e221d36151a9f91c3b0ab95f9cca97c2fa [TEST] Fix VSSD 06 - get_vssd() already returns instance for default_dom It should use the dom param that is passed in Signed-off-by: Kaitlin Rupert diff -r ddb880e221d3 -r 4243058cb57c suites/libvirt-cim/cimtest/VSSD/06_duplicate_uuid.py --- a/suites/libvirt-cim/cimtest/VSSD/06_duplicate_uuid.py Tue Sep 01 11:03:04 2009 -0700 +++ b/suites/libvirt-cim/cimtest/VSSD/06_duplicate_uuid.py Tue Sep 01 11:19:45 2009 -0700 @@ -51,7 +51,7 @@ if virt == "XenFV": virt = "Xen" - key_list = {"InstanceID" : "%s:%s" % (virt, default_dom) } + key_list = {"InstanceID" : "%s:%s" % (virt, dom) } inst = GetInstance(ip, cn, key_list, True) From rmaciel at linux.vnet.ibm.com Tue Sep 1 19:43:13 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Tue, 01 Sep 2009 16:43:13 -0300 Subject: 
[Libvirt-cim] [PATCH] Fix error when executing 'make install' using install tool 7.2 Message-ID: # HG changeset patch # User Richard Maciel # Date 1251833867 10800 # Node ID ad1105c7470759d5111fe2a48143c72b4205a40e # Parent 9af5eef7ea76c7e9d1657d2e8cd4df9ed126b596 Fix error when executing 'make install' using install tool 7.2 Signed-off-by: Richard Maciel diff -r 9af5eef7ea76 -r ad1105c74707 Makefile.am --- a/Makefile.am Mon Aug 24 16:14:41 2009 -0700 +++ b/Makefile.am Tue Sep 01 16:37:47 2009 -0300 @@ -144,15 +144,29 @@ schema/ElementConformsToProfile.registration \ schema/HostedAccessPoint.registration -pkgdata_DATA = $(MOFS) $(REGS) $(INTEROP_MOFS) $(INTEROP_REGS) pkgdata_SCRIPTS = provider-register.sh -EXTRA_DIST = schema $(pkgdata_DATA) $(pkgdata_SCRIPTS) \ - libvirt-cim.spec.in libvirt-cim.spec \ +EXTRA_DIST = schema $(MOFS) $(REGS) $(INTEROP_MOFS) $(INTEROP_REGS) \ + $(pkgdata_SCRIPTS) libvirt-cim.spec.in libvirt-cim.spec \ doc/CodingStyle doc/SubmittingPatches \ .changeset .revision \ examples/diskpool.conf +install-data-local: + $(mkinstalldirs) "$(DESTDIR)$(pkgdatadir)" + $(install_sh_DATA) -t "$(DESTDIR)$(pkgdatadir)" $(MOFS) + $(install_sh_DATA) -t "$(DESTDIR)$(pkgdatadir)" $(REGS) + $(install_sh_DATA) -t "$(DESTDIR)$(pkgdatadir)" $(INTEROP_MOFS) + $(install_sh_DATA) -t "$(DESTDIR)$(pkgdatadir)" $(INTEROP_REGS) + +uninstall-local: + @list='$(MOFS) $(REGS) $(INTEROP_MOFS) $(INTEROP_REGS)'; \ + for p in $$list; do \ + f=`echo "$$p" | sed 's|^.*/||;'`; \ + echo " rm -f '$(DESTDIR)$(pkgdatadir)/$$f'"; \ + rm -f "$(DESTDIR)$(pkgdatadir)/$$f"; \ + done + preinstall: sh -x base_schema/install_base_schema.sh `pwd`/base_schema From kaitlin at linux.vnet.ibm.com Tue Sep 1 19:51:21 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 01 Sep 2009 12:51:21 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Fixing HostedDependency/04_reverse_errs.py In-Reply-To: <4ccfbf5da9c6a03d9942.1251789759@elm3b151.beaverton.ibm.com> References: 
<4ccfbf5da9c6a03d9942.1251789759@elm3b151.beaverton.ibm.com> Message-ID: <4A9D7B39.3050705@linux.vnet.ibm.com> Deepti B. Kalakeri wrote: > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1251789710 25200 > # Node ID 4ccfbf5da9c6a03d994246d415c4ada5484594bc > # Parent 95fa64bf447e5bc2bab501564e3d9336edef997d > [TEST] Fixing HostedDependency/04_reverse_errs.py > > The error desc with and w/o sbmil-cmpi-base is same. > Tested with current sources on F11 and SLES11 with KVM and current sources > Also, tested with and w/o sbmil-cmpi-base is same on SLES11. > Signed-off-by: Deepti B. Kalakeri I applied this thinking I was testing on an F11 system with sblim-cmpi-base installed. However, sblim-cmpi-base wasn't installed properly. After reinstalling, this test fails for me. With sblim-cmpi-base on F11, looks like CIM_ERR_INVALID_PARAMETER is returned. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Tue Sep 1 22:08:30 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 01 Sep 2009 15:08:30 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Fix VSMS to do a proper check of ref config, also remove test_xml import Message-ID: <03e78e8b7a06296eba99.1251842910@elm3b151.beaverton.ibm.com> # HG changeset patch # User Kaitlin Rupert # Date 1251842877 25200 # Node ID 03e78e8b7a06296eba99e1329840ae6ee521f357 # Parent a0185245b9894f195227c12af621151623972573 [TEST] Fix VSMS to do a proper check of ref config, also remove test_xml import This test was originally designed to do the following: 1) Create a guest with a MAC interface 2) Create a second guest based on the first guest - second guest has an additional MAC defined. Pass a reference to the first guest during the DefineSystem() 3) Verify the second guest was created with two MACs - one that is identical to the first guest and one that is different The providers no longer allow a guest to have the same MAC as an existing guest. 
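The constraint behind the test rework can be sketched in a few lines of Python. This is an illustrative helper, not part of cimtest or the providers; it only models the check the providers now perform when a guest is defined:

```python
def assert_unique_macs(defined_macs, new_guest_macs):
    """Sketch of the provider-side constraint: every MAC in a new
    guest definition must differ from those of already-defined guests."""
    clashes = set(new_guest_macs) & set(defined_macs)
    if clashes:
        raise ValueError("MAC(s) already in use: %s"
                         % ", ".join(sorted(clashes)))

# e.g. a second guest may no longer reuse "aa:aa:aa:00:00:00"
# from the first guest it was cloned from.
```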
Each MAC needs to be unique. Therefore, this test needs to use a different setting - disk source works for this. Also, remove the dependency on test_xml.py - that module is now obsolete. Signed-off-by: Kaitlin Rupert diff -r a0185245b989 -r 03e78e8b7a06 suites/libvirt-cim/cimtest/VirtualSystemManagementService/12_referenced_config.py --- a/suites/libvirt-cim/cimtest/VirtualSystemManagementService/12_referenced_config.py Tue Sep 01 14:23:12 2009 -0700 +++ b/suites/libvirt-cim/cimtest/VirtualSystemManagementService/12_referenced_config.py Tue Sep 01 15:07:57 2009 -0700 @@ -33,19 +33,16 @@ import sys from XenKvmLib.common_util import get_cs_instance from CimTest.Globals import logger -from XenKvmLib.const import do_main, get_provider_version +from XenKvmLib.const import do_main, KVM_secondary_disk_path from CimTest.ReturnCodes import FAIL, PASS from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.assoc import AssociatorNames -from XenKvmLib.test_xml import dumpxml from XenKvmLib.vxml import get_class from XenKvmLib.rasd import get_default_rasds sup_types = ['Xen', 'XenFV', 'KVM'] test_dom = 'rstest_domain' test_dom2 = 'rstest_domain2' -mac = "aa:aa:aa:00:00:00" -libvirt_mac_ref_changes = 935 def setup_first_guest(ip, virt, cxml): ret = cxml.cim_define(ip) @@ -76,22 +73,23 @@ return vssd[0] def setup_second_guest(ip, virt, cxml2, ref): - nrasd_cn = get_typed_class(virt, "NetResourceAllocationSettingData") + drasd_cn = get_typed_class(virt, "DiskResourceAllocationSettingData") rasds = get_default_rasds(ip, virt) rasd_list = {} for rasd in rasds: - if rasd.classname == nrasd_cn: - rasd['Address'] = mac - rasd['NetworkType'] = "network" - rasd_list[nrasd_cn] = inst_to_mof(rasd) + if rasd.classname == drasd_cn: + rasd['Address'] = KVM_secondary_disk_path + rasd['VirtualDevice'] = "hdb" + rasd_list[drasd_cn] = inst_to_mof(rasd) + break else: rasd_list[rasd.classname] = None - if rasd_list[nrasd_cn] is None: - logger.error("Unable to get template
NetRASD") + if rasd_list[drasd_cn] is None: + logger.error("Unable to get template DiskRASD") return FAIL cxml2.set_res_settings(rasd_list) @@ -103,20 +101,21 @@ return PASS, "define" -def get_dom_macs(server, dom, virt): - mac_list = [] +def get_dom_disk_src(xml, ip): + disk_list = [] - myxml = dumpxml(dom, server, virt=virt) + xml.dumpxml(ip) + myxml = xml.get_formatted_xml() lines = myxml.splitlines() for l in lines: - if l.find("mac address=") != -1: - mac = l.split('=')[1] - mac = mac.lstrip('\'') - mac = mac.rstrip('\'/>') - mac_list.append(mac) + if l.find("source file=") != -1: + disk = l.split('=')[1] + disk = disk.lstrip('\'') + disk = disk.rstrip('\'/>') + disk_list.append(disk) - return mac_list + return disk_list @do_main(sup_types) def main(): @@ -143,26 +142,23 @@ if status != PASS: raise Exception("Unable to define %s" % test_dom2) - dom1_mac_list = get_dom_macs(ip, test_dom, virt) - if len(dom1_mac_list) != 1: - raise Exception("%s has %d macs, expected 1" % (test_dom, - len(dom1_mac_list))) + g1_disk_list = get_dom_disk_src(cxml, ip) + if len(g1_disk_list) != 1: + raise Exception("%s has %d disks, expected 1" % (test_dom, + len(g1_disk_list))) - dom2_mac_list = get_dom_macs(ip, test_dom2, virt) - if len(dom2_mac_list) != 2: - raise Exception("%s has %d macs, expected 2" % (test_dom2, - len(dom2_mac_list))) + g2_disk_list = get_dom_disk_src(cxml2, ip) + if len(g2_disk_list) != 2: + raise Exception("%s has %d disks, expected 2" % (test_dom2, + len(g2_disk_list))) - curr_cim_rev, changeset = get_provider_version(virt, ip) - if curr_cim_rev < libvirt_mac_ref_changes: - for item in dom2_mac_list: - if item != mac and item != dom1_mac_list[0]: - raise Exception("%s has unexpected mac value, exp: %s %s" \ - % (item, mac, dom1_mac_list[0])) - elif curr_cim_rev >= libvirt_mac_ref_changes: - if not mac in dom2_mac_list: - raise Exception("Did not find the mac information given to "\ - "the domain '%s'" % test_dom2) + if g2_disk_list[0] != g1_disk_list[0]: + 
raise Exception("%s has unexpected disk source, exp: %s, got %s" \ + % (test_dom2, g2_disk_list[0], g1_disk_list[0])) + + if g2_disk_list[1] == g1_disk_list[0]: + raise Exception("%s has unexpected disk source, exp: %s, got %s" \ + % (test_dom2, g2_disk_list[1], g1_disk_list[0])) status = PASS From deeptik at linux.vnet.ibm.com Wed Sep 2 12:11:59 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 02 Sep 2009 05:11:59 -0700 Subject: [Libvirt-cim] [PATCH 3 of 4] [TEST] Fixing HostedResourcePool/03_forward_errs.py In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1251892871 25200 # Node ID faa43f693e4d2df9cf3d7a9cd91d477ef37b0b10 # Parent 850e4a3c0275e31ca9a036c7743dee763d3ab870 [TEST] Fixing HostedResourcePool/03_forward_errs.py Tested with F11/SLES11 with and w/o sblim-cmpi-base and KVM with current sources. Signed-off-by: Deepti B. Kalakeri diff -r 850e4a3c0275 -r faa43f693e4d suites/libvirt-cim/cimtest/HostedResourcePool/03_forward_errs.py --- a/suites/libvirt-cim/cimtest/HostedResourcePool/03_forward_errs.py Wed Sep 02 04:33:04 2009 -0700 +++ b/suites/libvirt-cim/cimtest/HostedResourcePool/03_forward_errs.py Wed Sep 02 05:01:11 2009 -0700 @@ -26,7 +26,7 @@ from XenKvmLib import assoc from XenKvmLib import enumclass from XenKvmLib.common_util import get_host_info -from XenKvmLib.common_util import try_assoc +from XenKvmLib.common_util import try_assoc, check_cimom from CimTest import Globals from CimTest.Globals import logger from CimTest.ReturnCodes import PASS, FAIL @@ -55,8 +55,13 @@ Globals.CIM_NS) host_cn = host_inst.CreationClassName host_sys = host_inst.Name + + rc, out = check_cimom(options.ip) + if rc != PASS: + logger.error("Failed to get the cimom information") + return FAIL - if (host_cn == "Linux_ComputerSystem"): + if (host_cn == "Linux_ComputerSystem") and "cimserver" in out: sblim_rc = pywbem.CIM_ERR_INVALID_PARAMETER expr_values['invalid_ccname'] = {"rc" : sblim_rc, "desc" : "wrong"} 
expr_values['invalid_name'] = {"rc" : sblim_rc, "desc" : "wrong"} From deeptik at linux.vnet.ibm.com Wed Sep 2 12:11:57 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 02 Sep 2009 05:11:57 -0700 Subject: [Libvirt-cim] [PATCH 1 of 4] [TEST] Moving the check for cimom to function In-Reply-To: References: Message-ID: <94551c9ef9b0fa53cb2f.1251893517@elm3b151.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1251890125 25200 # Node ID 94551c9ef9b0fa53cb2ff04a5af16c8504a1da0b # Parent 4ccfbf5da9c6a03d994246d415c4ada5484594bc [TEST] Moving the check for cimom to function. Tested with F11/SLES11 with and w/o sblim-cmpi-base and KVM with current sources. Signed-off-by: Deepti B. Kalakeri diff -r 4ccfbf5da9c6 -r 94551c9ef9b0 suites/libvirt-cim/lib/XenKvmLib/common_util.py --- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py Tue Sep 01 00:21:50 2009 -0700 +++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py Wed Sep 02 04:15:25 2009 -0700 @@ -230,6 +230,19 @@ profiles[key]['InstanceID'] = 'CIM:' + key return profiles +def check_cimom(ip): + cmd = "ps -ef | grep -v grep | grep cimserver" + rc, out = utils.run_remote(ip, cmd) + if rc != 0: + cmd = "ps -ef | grep -v grep | grep sfcbd" + rc, out = utils.run_remote(ip, cmd) + + if rc == 0 : + cmd = "%s | awk '{ print \$8 }' | uniq" % cmd + rc, out = utils.run_remote(ip, cmd) + + return rc, out + def pre_check(ip, virt): cmd = "virsh -c %s list --all" % virt2uri(virt) ret, out = utils.run_remote(ip, cmd) @@ -250,13 +263,9 @@ if ret != 0: return "Encountered an error querying for qemu-kvm and qemu " - cmd = "ps -ef | grep -v grep | grep cimserver" - rc, out = utils.run_remote(ip, cmd) + rc, out = check_cimom(ip) if rc != 0: - cmd = "ps -ef | grep -v grep | grep sfcbd" - rc, out = utils.run_remote(ip, cmd) - if rc != 0: - return "A supported CIMOM is not running" + return "A supported CIMOM is not running" cmd = "ps -ef | grep -v grep | grep libvirtd" rc, out = utils.run_remote(ip, 
cmd) From deeptik at linux.vnet.ibm.com Wed Sep 2 12:11:56 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 02 Sep 2009 05:11:56 -0700 Subject: [Libvirt-cim] [PATCH 0 of 4] Fixing couple of test cases Message-ID: From deeptik at linux.vnet.ibm.com Wed Sep 2 12:11:58 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 02 Sep 2009 05:11:58 -0700 Subject: [Libvirt-cim] [PATCH 2 of 4] [TEST] Fixing HostedDependency/04_reverse_errs.py In-Reply-To: References: Message-ID: <850e4a3c0275e31ca9a0.1251893518@elm3b151.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1251891184 25200 # Node ID 850e4a3c0275e31ca9a036c7743dee763d3ab870 # Parent 94551c9ef9b0fa53cb2ff04a5af16c8504a1da0b [TEST] Fixing HostedDependency/04_reverse_errs.py Tested with F11/SLES11 with and w/o sblim-cmpi-base and KVM with current sources. Signed-off-by: Deepti B. Kalakeri diff -r 94551c9ef9b0 -r 850e4a3c0275 suites/libvirt-cim/cimtest/HostedDependency/04_reverse_errs.py --- a/suites/libvirt-cim/cimtest/HostedDependency/04_reverse_errs.py Wed Sep 02 04:15:25 2009 -0700 +++ b/suites/libvirt-cim/cimtest/HostedDependency/04_reverse_errs.py Wed Sep 02 04:33:04 2009 -0700 @@ -36,7 +36,7 @@ from CimTest.Globals import logger, CIM_USER, CIM_PASS, CIM_NS from XenKvmLib.const import do_main from XenKvmLib.classes import get_typed_class -from XenKvmLib.common_util import get_host_info, try_assoc +from XenKvmLib.common_util import get_host_info, try_assoc, check_cimom from CimTest.ReturnCodes import PASS, FAIL sup_types = ['Xen', 'KVM', 'XenFV', 'LXC'] @@ -44,10 +44,19 @@ test_dom = "hd_domain1" test_mac = "00:11:22:33:44:55" -def set_expr_values(host_ccn): - exp_rc = pywbem.CIM_ERR_NOT_FOUND - exp_d1 = "No such instance (Name)" - exp_d2 = "No such instance (CreationClassName)" +def set_expr_values(host_ccn, server): + rc, out = check_cimom(server) + if rc != PASS: + return None + + if (host_ccn == "Linux_ComputerSystem") and "cimserver" in out: + 
exp_rc = pywbem.CIM_ERR_INVALID_PARAMETER + exp_d1 = "INVALID" + exp_d2 = "INVALID" + else: + exp_rc = pywbem.CIM_ERR_NOT_FOUND + exp_d1 = "No such instance (Name)" + exp_d2 = "No such instance (CreationClassName)" expr_values = { "INVALID_NameValue" : { 'rc' : exp_rc, 'desc' : exp_d1 }, @@ -93,7 +102,9 @@ classname = host_inst.CreationClassName host_name = host_inst.Name - expr_values = set_expr_values(classname) + expr_values = set_expr_values(classname, server) + if expr_values == None: + raise Exception("Failed to initialise the error values") msg = 'Invalid Name Key Value' field='INVALID_NameValue' @@ -113,6 +124,7 @@ except Exception, details: logger.error(details) + status=FAIL cxml.cim_destroy(server) cxml.undefine(server) From deeptik at linux.vnet.ibm.com Wed Sep 2 12:12:00 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 02 Sep 2009 05:12:00 -0700 Subject: [Libvirt-cim] [PATCH 4 of 4] [TEST] Fixing HostedService/03_forward_errs.py In-Reply-To: References: Message-ID: <8bb902c189fbfe8cd71f.1251893520@elm3b151.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1251893476 25200 # Node ID 8bb902c189fbfe8cd71fb68c4523733703966047 # Parent faa43f693e4d2df9cf3d7a9cd91d477ef37b0b10 [TEST] Fixing HostedService/03_forward_errs.py Tested with F11/SLES11 with and w/o sblim-cmpi-base and KVM with current sources. Signed-off-by: Deepti B. 
Kalakeri diff -r faa43f693e4d -r 8bb902c189fb suites/libvirt-cim/cimtest/HostedService/03_forward_errs.py --- a/suites/libvirt-cim/cimtest/HostedService/03_forward_errs.py Wed Sep 02 05:01:11 2009 -0700 +++ b/suites/libvirt-cim/cimtest/HostedService/03_forward_errs.py Wed Sep 02 05:11:16 2009 -0700 @@ -25,7 +25,7 @@ from pywbem.cim_obj import CIMInstanceName from XenKvmLib import assoc from XenKvmLib import enumclass -from XenKvmLib.common_util import get_host_info, try_assoc +from XenKvmLib.common_util import get_host_info, try_assoc, check_cimom from XenKvmLib.classes import get_typed_class from CimTest.Globals import logger, CIM_ERROR_ENUMERATE, CIM_USER, \ CIM_PASS, CIM_NS @@ -54,8 +54,12 @@ host_ccn = host_inst.CreationClassName host_name = host_inst.Name - - if (host_ccn == "Linux_ComputerSystem"): + rc, out = check_cimom(options.ip) + if rc != PASS: + logger.error("Failed to get the cimom information") + return FAIL + + if (host_ccn == "Linux_ComputerSystem") and "cimserver" in out: exp_values['invalid_ccname'] = {"rc" : pywbem.CIM_ERR_INVALID_PARAMETER, "desc" : "Linux_ComputerSystem" } From deeptik at linux.vnet.ibm.com Wed Sep 2 12:15:37 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Wed, 02 Sep 2009 17:45:37 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Fixing HostedDependency/04_reverse_errs.py In-Reply-To: <4A9D7B39.3050705@linux.vnet.ibm.com> References: <4ccfbf5da9c6a03d9942.1251789759@elm3b151.beaverton.ibm.com> <4A9D7B39.3050705@linux.vnet.ibm.com> Message-ID: <4A9E61E9.4070304@linux.vnet.ibm.com> Kaitlin Rupert wrote: > Deepti B. Kalakeri wrote: >> # HG changeset patch >> # User Deepti B. Kalakeri >> # Date 1251789710 25200 >> # Node ID 4ccfbf5da9c6a03d994246d415c4ada5484594bc >> # Parent 95fa64bf447e5bc2bab501564e3d9336edef997d >> [TEST] Fixing HostedDependency/04_reverse_errs.py >> >> The error desc with and w/o sbmil-cmpi-base is same. 
>> Tested with current sources on F11 and SLES11 with KVM and current >> sources >> Also, tested with and w/o sbmil-cmpi-base is same on SLES11. >> Signed-off-by: Deepti B. Kalakeri > > I applied this thinking I was testing on an F11 system with > sblim-cmpi-base installed. However, sblim-cmpi-base wasn't installed > properly. After reinstalling, this test fails for me. > > With sblim-cmpi-base on F11, looks like CIM_ERR_INVALID_PARAMETER is > returned. > Oops! I had not verified this on F11 with sblim-cmpi-base. Thanks .. have resubmitted the patches. -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From rmaciel at linux.vnet.ibm.com Wed Sep 2 16:24:56 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Wed, 02 Sep 2009 13:24:56 -0300 Subject: [Libvirt-cim] [PATCH] Fix logic for checking UUID conflicts in ModifySystemSettings() In-Reply-To: References: Message-ID: <4A9E9C58.7050104@linux.vnet.ibm.com> +1 On 09/01/2009 03:35 PM, Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1251830061 25200 > # Node ID ed4e0bfacffbeded283d2a81d0b0fb0736fb6f5b > # Parent a0297a6cdac8864acd43c873058beecaf54fca2b > Fix logic for checking UUID conflicts in ModifySystemSettings() > > Instead of checking to see if the UUID is in use, we need to make sure the > provider is using the existing UUID. If the user specifies a UUID that is > different, then an error is returned. If no UUID (or if an empty string is > specified), the provider will override that value with the original UUID. > > This fixes a bug where the user specifies a empty string, which we were passing > to libvirt. 
> > Signed-off-by: Kaitlin Rupert > > diff -r a0297a6cdac8 -r ed4e0bfacffb src/Virt_VirtualSystemManagementService.c > --- a/src/Virt_VirtualSystemManagementService.c Tue Aug 25 13:38:23 2009 -0700 > +++ b/src/Virt_VirtualSystemManagementService.c Tue Sep 01 11:34:21 2009 -0700 > @@ -1621,6 +1621,7 @@ > virDomainPtr dom = NULL; > struct domain *dominfo = NULL; > char *xml = NULL; > + const char *uuid = NULL; > > ret = cu_get_str_prop(vssd, "VirtualSystemIdentifier",&name); > if (ret != CMPI_RC_OK) { > @@ -1652,6 +1653,8 @@ > goto out; > } > > + uuid = strdup(dominfo->uuid); > + > if (!vssd_to_domain(vssd, dominfo)) { > cu_statusf(_BROKER,&s, > CMPI_RC_ERR_FAILED, > @@ -1659,9 +1662,18 @@ > goto out; > } > > - s = check_uuid_in_use(ref, dominfo); > - if (s.rc != CMPI_RC_OK) > + if ((dominfo->uuid == NULL) || (STREQ(dominfo->uuid, ""))) { > + dominfo->uuid = strdup(uuid); > + } else if (!STREQ(uuid, dominfo->uuid)) { > + cu_statusf(_BROKER,&s, > + CMPI_RC_ERR_FAILED, > + "%s is already defined with UUID %s - cannot change " > + "UUID to the UUID specified %s", > + name, > + uuid, > + dominfo->uuid); > goto out; > + } > > xml = system_to_xml(dominfo); > if (xml != NULL) { > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Wed Sep 2 17:07:41 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Wed, 02 Sep 2009 22:37:41 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Fix VSSD 06 - get_vssd() already returns instance for default_dom In-Reply-To: <4243058cb57cbe531a3f.1251830164@elm3b151.beaverton.ibm.com> References: <4243058cb57cbe531a3f.1251830164@elm3b151.beaverton.ibm.com> Message-ID: <4A9EA65D.7040402@linux.vnet.ibm.com> +1 -- Thanks and Regards, Deepti B. 
Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Thu Sep 3 06:53:52 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Thu, 03 Sep 2009 12:23:52 +0530 Subject: [Libvirt-cim] Test Run Summary (Sep 03 2009): KVM on Fedora release 11 (Leonidas) with Pegasus Message-ID: <4A9F6800.9020806@linux.vnet.ibm.com> ================================================= Test Run Summary (Sep 03 2009): KVM on Fedora release 11 (Leonidas) with Pegasus ================================================= Distro: Fedora release 11 (Leonidas) Kernel: 2.6.27.5-117.fc10.x86_64 libvirt: 0.7.0 Hypervisor: QEMU 0.10.1 CIMOM: Pegasus 2.9.0 Libvirt-cim revision: 973 Libvirt-cim changeset: 9c8eb2dfae84 Cimtest revision: 775 Cimtest changeset: 30196cc506c0 ================================================= FAIL : 1 XFAIL : 4 SKIP : 10 PASS : 154 ----------------- Total : 169 ================================================= FAIL Test Summary: KVMRedirectionSAP - 01_enum_KVMredSAP.py: FAIL ================================================= XFAIL Test Summary: ComputerSystem - 32_start_reboot.py: XFAIL ComputerSystem - 33_suspend_reboot.py: XFAIL VirtualSystemManagementService - 16_removeresource.py: XFAIL VirtualSystemManagementService - 22_addmulti_brg_interface.py: XFAIL ================================================= SKIP Test Summary: ComputerSystem - 02_nosystems.py: SKIP ComputerSystemMigrationJobIndication - 01_csmig_ind_for_offline_mig.py: SKIP LogicalDisk - 02_nodevs.py: SKIP VSSD - 02_bootldr.py: SKIP VirtualSystemMigrationService - 01_migratable_host.py: SKIP VirtualSystemMigrationService - 02_host_migrate_type.py: SKIP VirtualSystemMigrationService - 05_migratable_host_errs.py: SKIP VirtualSystemMigrationService - 06_remote_live_migration.py: SKIP VirtualSystemMigrationService - 07_remote_offline_migration.py: SKIP VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: SKIP 
=================================================
Full report:
--------------------------------------------------------------------
AllocationCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
AllocationCapabilities - 02_alloccap_gi_errs.py: PASS
--------------------------------------------------------------------
ComputerSystem - 01_enum.py: PASS
--------------------------------------------------------------------
ComputerSystem - 02_nosystems.py: SKIP
ERROR - System has defined domains; unable to run
--------------------------------------------------------------------
ComputerSystem - 03_defineVS.py: PASS
--------------------------------------------------------------------
ComputerSystem - 04_defineStartVS.py: PASS
--------------------------------------------------------------------
ComputerSystem - 05_activate_defined_start.py: PASS
--------------------------------------------------------------------
ComputerSystem - 06_paused_active_suspend.py: PASS
--------------------------------------------------------------------
ComputerSystem - 22_define_suspend.py: PASS
--------------------------------------------------------------------
ComputerSystem - 23_pause_pause.py: PASS
--------------------------------------------------------------------
ComputerSystem - 27_define_pause_errs.py: PASS
--------------------------------------------------------------------
ComputerSystem - 32_start_reboot.py: XFAIL
ERROR - Got CIM error CIM_ERR_FAILED: Unable to reboot domain: this function is not supported by the hypervisor: virDomainReboot with return code 1
ERROR - Exception: Unable reboot dom 'cs_test_domain'
InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Unable to reboot domain: this function is not supported by the hypervisor: virDomainReboot
Bug:<00005>
--------------------------------------------------------------------
ComputerSystem - 33_suspend_reboot.py: XFAIL
ERROR - Got CIM error CIM_ERR_NOT_SUPPORTED: State not supported with return code 7
ERROR - Exception: Unable Suspend dom 'test_domain'
InvokeMethod(RequestStateChange): CIM_ERR_NOT_SUPPORTED: State not supported
Bug:<00012>
--------------------------------------------------------------------
ComputerSystem - 34_start_disable.py: PASS
--------------------------------------------------------------------
ComputerSystem - 35_start_reset.py: PASS
--------------------------------------------------------------------
ComputerSystem - 40_RSC_start.py: PASS
--------------------------------------------------------------------
ComputerSystem - 41_cs_to_settingdefinestate.py: PASS
--------------------------------------------------------------------
ComputerSystem - 42_cs_gi_errs.py: PASS
--------------------------------------------------------------------
ComputerSystemIndication - 01_created_indication.py: PASS
--------------------------------------------------------------------
ComputerSystemMigrationJobIndication - 01_csmig_ind_for_offline_mig.py: SKIP
--------------------------------------------------------------------
ElementAllocatedFromPool - 01_forward.py: PASS
--------------------------------------------------------------------
ElementAllocatedFromPool - 02_reverse.py: PASS
--------------------------------------------------------------------
ElementAllocatedFromPool - 03_reverse_errs.py: PASS
--------------------------------------------------------------------
ElementAllocatedFromPool - 04_forward_errs.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 01_forward.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 02_reverse.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 03_forward_errs.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 05_hostsystem_cap.py: PASS
--------------------------------------------------------------------
ElementConforms - 01_forward.py: PASS
--------------------------------------------------------------------
ElementConforms - 02_reverse.py: PASS
--------------------------------------------------------------------
ElementConforms - 03_ectp_fwd_errs.py: PASS
--------------------------------------------------------------------
ElementConforms - 04_ectp_rev_errs.py: PASS
--------------------------------------------------------------------
ElementSettingData - 01_forward.py: PASS
--------------------------------------------------------------------
ElementSettingData - 03_esd_assoc_with_rasd_errs.py: PASS
--------------------------------------------------------------------
EnabledLogicalElementCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
EnabledLogicalElementCapabilities - 02_elecap_gi_errs.py: PASS
--------------------------------------------------------------------
HostSystem - 01_enum.py: PASS
--------------------------------------------------------------------
HostSystem - 02_hostsystem_to_rasd.py: PASS
--------------------------------------------------------------------
HostSystem - 03_hs_to_settdefcap.py: PASS
--------------------------------------------------------------------
HostSystem - 04_hs_to_EAPF.py: PASS
--------------------------------------------------------------------
HostSystem - 05_hs_gi_errs.py: PASS
--------------------------------------------------------------------
HostSystem - 06_hs_to_vsms.py: PASS
--------------------------------------------------------------------
HostedAccessPoint - 01_forward.py: PASS
--------------------------------------------------------------------
HostedAccessPoint - 02_reverse.py: PASS
--------------------------------------------------------------------
HostedDependency - 01_forward.py: PASS
--------------------------------------------------------------------
HostedDependency - 02_reverse.py: PASS
--------------------------------------------------------------------
HostedDependency - 03_enabledstate.py: PASS
--------------------------------------------------------------------
HostedDependency - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
HostedResourcePool - 01_forward.py: PASS
--------------------------------------------------------------------
HostedResourcePool - 02_reverse.py: PASS
--------------------------------------------------------------------
HostedResourcePool - 03_forward_errs.py: PASS
--------------------------------------------------------------------
HostedResourcePool - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
HostedService - 01_forward.py: PASS
--------------------------------------------------------------------
HostedService - 02_reverse.py: PASS
--------------------------------------------------------------------
HostedService - 03_forward_errs.py: PASS
--------------------------------------------------------------------
HostedService - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
KVMRedirectionSAP - 01_enum_KVMredSAP.py: FAIL
ERROR - Exception details: 'ElementName' Value Mismatch, Expected 5900:-1, Got 5900:0
ERROR - Exception: Failed to verify information for the defined dom:test_kvmredsap_dom
--------------------------------------------------------------------
LogicalDisk - 01_disk.py: PASS
--------------------------------------------------------------------
LogicalDisk - 02_nodevs.py: SKIP
ERROR - System has defined domains; unable to run
--------------------------------------------------------------------
LogicalDisk - 03_ld_gi_errs.py: PASS
--------------------------------------------------------------------
Memory - 01_memory.py: PASS
--------------------------------------------------------------------
Memory - 02_defgetmem.py: PASS
--------------------------------------------------------------------
Memory - 03_mem_gi_errs.py: PASS
--------------------------------------------------------------------
NetworkPort - 01_netport.py: PASS
--------------------------------------------------------------------
NetworkPort - 02_np_gi_errors.py: PASS
--------------------------------------------------------------------
NetworkPort - 03_user_netport.py: PASS
--------------------------------------------------------------------
Processor - 01_processor.py: PASS
--------------------------------------------------------------------
Processor - 02_definesys_get_procs.py: PASS
--------------------------------------------------------------------
Processor - 03_proc_gi_errs.py: PASS
--------------------------------------------------------------------
Profile - 01_enum.py: PASS
--------------------------------------------------------------------
Profile - 02_profile_to_elec.py: PASS
--------------------------------------------------------------------
Profile - 03_rprofile_gi_errs.py: PASS
--------------------------------------------------------------------
RASD - 01_verify_rasd_fields.py: PASS
--------------------------------------------------------------------
RASD - 02_enum.py: PASS
--------------------------------------------------------------------
RASD - 03_rasd_errs.py: PASS
--------------------------------------------------------------------
RASD - 04_disk_rasd_size.py: PASS
--------------------------------------------------------------------
RASD - 05_disk_rasd_emu_type.py: PASS
--------------------------------------------------------------------
RASD - 06_parent_net_pool.py: PASS
--------------------------------------------------------------------
RASD - 07_parent_disk_pool.py: PASS
--------------------------------------------------------------------
RedirectionService - 01_enum_crs.py: PASS
--------------------------------------------------------------------
RedirectionService - 02_enum_crscap.py: PASS
--------------------------------------------------------------------
RedirectionService - 03_RedirectionSAP_errs.py: PASS
--------------------------------------------------------------------
ReferencedProfile - 01_verify_refprof.py: PASS
--------------------------------------------------------------------
ReferencedProfile - 02_refprofile_errs.py: PASS
--------------------------------------------------------------------
ResourceAllocationFromPool - 01_forward.py: PASS
--------------------------------------------------------------------
ResourceAllocationFromPool - 02_reverse.py: PASS
--------------------------------------------------------------------
ResourceAllocationFromPool - 03_forward_errs.py: PASS
--------------------------------------------------------------------
ResourceAllocationFromPool - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
ResourceAllocationFromPool - 05_RAPF_err.py: PASS
--------------------------------------------------------------------
ResourcePool - 01_enum.py: PASS
--------------------------------------------------------------------
ResourcePool - 02_rp_gi_errors.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationCapabilities - 02_rpcc_gi_errs.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 01_enum.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 02_rcps_gi_errors.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 03_CreateResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 04_CreateChildResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 05_AddResourcesToResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 06_RemoveResourcesFromResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 07_DeleteResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 08_CreateDiskResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 09_DeleteDiskPool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 10_create_storagevolume.py: PASS
--------------------------------------------------------------------
ServiceAccessBySAP - 01_forward.py: PASS
--------------------------------------------------------------------
ServiceAccessBySAP - 02_reverse.py: PASS
--------------------------------------------------------------------
ServiceAffectsElement - 01_forward.py: PASS
--------------------------------------------------------------------
ServiceAffectsElement - 02_reverse.py: PASS
--------------------------------------------------------------------
SettingsDefine - 01_forward.py: PASS
--------------------------------------------------------------------
SettingsDefine - 02_reverse.py: PASS
--------------------------------------------------------------------
SettingsDefine - 03_sds_fwd_errs.py: PASS
--------------------------------------------------------------------
SettingsDefine - 04_sds_rev_errs.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 01_forward.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 03_forward_errs.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 04_forward_vsmsdata.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 05_reverse_vsmcap.py: PASS
--------------------------------------------------------------------
SystemDevice - 01_forward.py: PASS
--------------------------------------------------------------------
SystemDevice - 02_reverse.py: PASS
--------------------------------------------------------------------
SystemDevice - 03_fwderrs.py: PASS
--------------------------------------------------------------------
VSSD - 01_enum.py: PASS
--------------------------------------------------------------------
VSSD - 02_bootldr.py: SKIP
--------------------------------------------------------------------
VSSD - 03_vssd_gi_errs.py: PASS
--------------------------------------------------------------------
VSSD - 04_vssd_to_rasd.py: PASS
--------------------------------------------------------------------
VSSD - 05_set_uuid.py: PASS
--------------------------------------------------------------------
VSSD - 06_duplicate_uuid.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementCapabilities - 02_vsmcap_gi_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 01_definesystem_name.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 02_destroysystem.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 03_definesystem_ess.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 04_definesystem_ers.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 05_destroysystem_neg.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 06_addresource.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 07_addresource_neg.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 08_modifyresource.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 09_procrasd_persist.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 10_hv_version.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 11_define_memrasdunits.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 12_referenced_config.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 13_refconfig_additional_devs.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 14_define_sys_disk.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 15_mod_system_settings.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 16_removeresource.py: XFAIL
ERROR - 0 RASD insts for domain/mouse:ps2
CIM_ERR_NOT_FOUND: No such instance (no device domain/mouse:ps2)
Bug:<00014>
--------------------------------------------------------------------
VirtualSystemManagementService - 17_removeresource_neg.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 18_define_sys_bridge.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 19_definenetwork_ers.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 20_verify_vnc_password.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 21_createVS_verifyMAC.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 22_addmulti_brg_interface.py: XFAIL
ERROR - Error invoking AddRS: add_net_res
ERROR - (1, u"CIM_ERR_FAILED: Unable to change (0) device: this function is not supported by the hypervisor: bridge/network interface attach not supported: qemu 'getfd' monitor command not available")
ERROR - Failed to destroy Virtual Network 'my_network1'
InvokeMethod(AddResourceSettings): CIM_ERR_FAILED: Unable to change (0) device: this function is not supported by the hypervisor: bridge/network interface attach not supported: qemu 'getfd' monitor command not available
Bug:<00015>
--------------------------------------------------------------------
VirtualSystemManagementService - 23_verify_duplicate_mac_err.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationCapabilities - 02_vsmc_gi_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationService - 01_migratable_host.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 02_host_migrate_type.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 05_migratable_host_errs.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 06_remote_live_migration.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 07_remote_offline_migration.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationSettingData - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationSettingData - 02_vsmsd_gi_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 01_forward.py: PASS
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 02_reverse.py: PASS
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 03_vssdc_fwd_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 04_vssdc_rev_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemSnapshotService - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemSnapshotService - 02_vs_sservice_gi_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemSnapshotService - 03_create_snapshot.py: PASS
--------------------------------------------------------------------
VirtualSystemSnapshotServiceCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemSnapshotServiceCapabilities - 02_vs_sservicecap_gi_errs.py: PASS
--------------------------------------------------------------------

--
Thanks and Regards,
Deepti B.
Kalakeri
IBM Linux Technology Center
deeptik at linux.vnet.ibm.com

From deeptik at linux.vnet.ibm.com Thu Sep 3 07:16:57 2009
From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri)
Date: Thu, 03 Sep 2009 12:46:57 +0530
Subject: [Libvirt-cim] Re: Test Run Summary (Sep 03 2009): KVM on Fedora release 11 (Leonidas) with Pegasus
In-Reply-To: <4A9F6800.9020806@linux.vnet.ibm.com>
References: <4A9F6800.9020806@linux.vnet.ibm.com>
Message-ID: <4A9F6D69.8010209@linux.vnet.ibm.com>

Deepti B Kalakeri wrote:
> =================================================
> Test Run Summary (Sep 03 2009): KVM on Fedora release 11 (Leonidas)
> with Pegasus
> =================================================
> Distro: Fedora release 11 (Leonidas)
> Kernel: 2.6.27.5-117.fc10.x86_64
> libvirt: 0.7.0
> Hypervisor: QEMU 0.10.1
> CIMOM: Pegasus 2.9.0
> Libvirt-cim revision: 973
> Libvirt-cim changeset: 9c8eb2dfae84
> Cimtest revision: 775
> Cimtest changeset: 30196cc506c0
> =================================================
> FAIL : 1
> XFAIL : 4
> SKIP : 10
> PASS : 154
> -----------------
> Total : 169
> =================================================
> FAIL Test Summary:
> KVMRedirectionSAP - 01_enum_KVMredSAP.py: FAIL

This test passed when run manually.

--
Thanks and Regards,
Deepti B. Kalakeri
IBM Linux Technology Center
deeptik at linux.vnet.ibm.com

From deeptik at linux.vnet.ibm.com Thu Sep 3 11:03:40 2009
From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri)
Date: Thu, 03 Sep 2009 16:33:40 +0530
Subject: [Libvirt-cim] [PATCH] [TEST] Add try / except to VSMS 15
In-Reply-To:
References:
Message-ID: <4A9FA28C.6080302@linux.vnet.ibm.com>

Kaitlin Rupert wrote:
> # HG changeset patch
> # User Kaitlin Rupert
> # Date 1251828184 25200
> # Node ID ddb880e221d36151a9f91c3b0ab95f9cca97c2fa
> # Parent 95fa64bf447e5bc2bab501564e3d9336edef997d
> [TEST] Add try / except to VSMS 15
>
> This will catch any unexpected exceptions.
Otherwise, the exception isn't
> caught and the guest may not be properly undefined
>
> Signed-off-by: Kaitlin Rupert
>
> diff -r 95fa64bf447e -r ddb880e221d3 suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py
> --- a/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py Thu Aug 27 16:39:53 2009 -0700
> +++ b/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py Tue Sep 01 11:03:04 2009 -0700
> @@ -74,72 +74,71 @@
>
Though it is not part of the changes in this patch, can you remove the unused import statement for default_network_name from the tc?

>     cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu)
>     service = vsms.get_vsms_class(options.virt)(options.ip)
>
> -    for case in test_cases:
> -        #Each time through, define guest using a default XML
> -        cxml.undefine(options.ip)
> -        cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu)
> -        ret = cxml.cim_define(options.ip)
> -        if not ret:
> -            logger.error("Failed to define the dom: %s", default_dom)
> -            cleanup_env(options.ip, cxml)
> -            return FAIL
> +    try:
>
> -        if case == "start":
> -            ret = cxml.start(options.ip)
> +        for case in test_cases:
> +            #Each time through, define guest using a default XML
> +            cxml.undefine(options.ip)
> +            cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu)
> +            ret = cxml.cim_define(options.ip)
>             if not ret:
> -                logger.error("Failed to start %s", default_dom)
> -                cleanup_env(options.ip, cxml)
> -                return FAIL
> +                raise Exception("Failed to define the dom: %s", default_dom)
>
Remove the comma in the Exception statement and use "% default_dom" instead; otherwise the format string won't be substituted and the exception will be printed as follows:

ERROR - ('Failed to define the dom: %s', 'rstest_domain')

Instead, the above could be:

raise Exception("Failed to define the dom: %s" % default_dom)

which would print the exception as below:

ERROR - Failed to define the dom: rstest_domain

> -        status, inst = get_vssd(options.ip, options.virt, True)
> -        if status != PASS:
> -            logger.error("Failed to get the VSSD instance for %s", default_dom)
> -            cleanup_env(options.ip, cxml)
> -            return FAIL
> +            if case == "start":
> +                ret = cxml.start(options.ip)
> +                if not ret:
> +                    raise Exception("Failed to start %s", default_dom)
>
Same here: remove the comma in the Exception statement and use % after the closing quote.

> -        inst['AutomaticRecoveryAction'] = pywbem.cim_types.Uint16(RECOVERY_VAL)
> -        vssd = inst_to_mof(inst)
> +            status, inst = get_vssd(options.ip, options.virt, True)
> +            if status != PASS:
> +                raise Expcetion("Failed to get the VSSD instance for %s",
> +                                default_dom)
>
>
Same here: remove the comma in the Exception statement and use % after the closing quote.

> -        ret = service.ModifySystemSettings(SystemSettings=vssd)
> -        curr_cim_rev, changeset = get_provider_version(options.virt, options.ip)
> -        if curr_cim_rev >= libvirt_modify_setting_changes:
> -            if ret[0] != 0:
> -                logger.error("Failed to modify dom: %s", default_dom)
> -                cleanup_env(options.ip, cxml)
> -                return FAIL
> +            val = pywbem.cim_types.Uint16(RECOVERY_VAL)
> +            inst['AutomaticRecoveryAction'] = val
> +            vssd = inst_to_mof(inst)
>
> -        if case == "start":
> -            #This should be replaced with a RSC to shutdownt he guest
> -            cxml.destroy(options.ip)
> -            status, cs = poll_for_state_change(options.ip, options.virt,
> -                                               default_dom, DEFINED_STATE)
> +            ret = service.ModifySystemSettings(SystemSettings=vssd)
> +            curr_cim_rev, changeset = get_provider_version(options.virt,
> +                                                           options.ip)
> +            if curr_cim_rev >= libvirt_modify_setting_changes:
> +                if ret[0] != 0:
> +                    raise Exception("Failed to modify dom: %s", default_dom)
> +
> +            if case == "start":
> +                #This should be replaced with a RSC to shutdownt he guest
> +                cxml.destroy(options.ip)
> +                status, cs = poll_for_state_change(options.ip, options.virt,
> +                                                   default_dom, DEFINED_STATE)
>
You can use cim_destroy() instead.

> +                if status != PASS:
> +                    raise Exception("Failed to destroy %s", default_dom)
>
Same here: remove the comma in the Exception statement and use % after the closing quote.

> +
> +            status, inst = get_vssd(options.ip, options.virt, False)
>             if status != PASS:
> -                logger.error("Failed to destroy %s", default_dom)
> -                cleanup_env(options.ip, cxml)
> -                return FAIL
> +                raise Exception("Failed to get the VSSD instance for %s",
> +                                default_dom)
>
>
Same here: remove the comma in the Exception statement and use % after the closing quote.

> -        status, inst = get_vssd(options.ip, options.virt, False)
> -        if status != PASS:
> -            logger.error("Failed to get the VSSD instance for %s", default_dom)
> -            cleanup_env(options.ip, cxml)
> -            return FAIL
> +            if inst.AutomaticRecoveryAction != RECOVERY_VAL:
> +                logger.error("Exp AutomaticRecoveryAction=%d, got %d",
> +                             RECOVERY_VAL, inst.AutomaticRecoveryAction)
> +                raise Exception("%s not updated properly.", default_dom)
>
>
Same here: remove the comma in the Exception statement and use % after the closing quote.

> -        if inst.AutomaticRecoveryAction != RECOVERY_VAL:
> -            logger.error("%s not updated properly.", default_dom)
> -            logger.error("Exp AutomaticRecoveryAction=%d, got %d", RECOVERY_VAL,
> -                         inst.AutomaticRecoveryAction)
> -            cleanup_env(options.ip, cxml)
> -            curr_cim_rev, changeset = get_provider_version(options.virt, options.ip)
> -            if curr_cim_rev <= libvirt_f9_revision and options.virt == "KVM":
> -                return XFAIL_RC(f9_bug)
> +        status = PASS
>
> -            if options.virt == "LXC":
> -                return XFAIL_RC(bug)
> -            return FAIL
> +    except Exception, details:
> +        logger.error(details)
> +        status = FAIL
>
>     cleanup_env(options.ip, cxml)
>
> -    return PASS
> +    curr_cim_rev, changeset = get_provider_version(options.virt, options.ip)
> +    if curr_cim_rev <= libvirt_f9_revision and options.virt == "KVM":
> +        return XFAIL_RC(f9_bug)
> +
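The comma-versus-% point raised in the review comments above can be demonstrated in isolation. This is a standalone sketch, not part of the patch; the domain name 'rstest_domain' is taken from the error output quoted earlier:

```python
# Sketch of the Exception-formatting pitfall discussed in the review above.
# logger.error("...%s", default_dom) works because the logging module applies
# the arguments to the format string for you; Exception() does not -- it just
# stores the extra argument, so str() of the exception shows the args tuple.

def raise_styles(name):
    """Return str() of an exception raised each way."""
    try:
        # Comma style: Exception stores ("...%s", name) as its args tuple.
        raise Exception("Failed to define the dom: %s", name)
    except Exception as exc:
        comma_style = str(exc)

    try:
        # %-style: the string is formatted before the exception is created.
        raise Exception("Failed to define the dom: %s" % name)
    except Exception as exc:
        percent_style = str(exc)

    return comma_style, percent_style

if __name__ == "__main__":
    comma_style, percent_style = raise_styles("rstest_domain")
    print(comma_style)    # ('Failed to define the dom: %s', 'rstest_domain')
    print(percent_style)  # Failed to define the dom: rstest_domain
```

The same distinction explains why the pre-patch `logger.error()` calls were fine as written while the new `raise Exception(...)` calls need the `%` operator.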
> +    if options.virt == "LXC":
> +        return XFAIL_RC(bug)
> +
> +    return status
>
> if __name__ == "__main__":
>     sys.exit(main())
>
> _______________________________________________
> Libvirt-cim mailing list
> Libvirt-cim at redhat.com
> https://www.redhat.com/mailman/listinfo/libvirt-cim
>

--
Thanks and Regards,
Deepti B. Kalakeri
IBM Linux Technology Center
deeptik at linux.vnet.ibm.com

From deeptik at linux.vnet.ibm.com Thu Sep 3 12:59:30 2009
From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri)
Date: Thu, 03 Sep 2009 18:29:30 +0530
Subject: [Libvirt-cim] [PATCH] [TEST] Fix VSMS to do a proper check of ref config, also remove test_xml import
In-Reply-To: <03e78e8b7a06296eba99.1251842910@elm3b151.beaverton.ibm.com>
References: <03e78e8b7a06296eba99.1251842910@elm3b151.beaverton.ibm.com>
Message-ID: <4A9FBDB2.9060901@linux.vnet.ibm.com>

Kaitlin Rupert wrote:
> # HG changeset patch
> # User Kaitlin Rupert
> # Date 1251842877 25200
> # Node ID 03e78e8b7a06296eba99e1329840ae6ee521f357
> # Parent a0185245b9894f195227c12af621151623972573
> [TEST] Fix VSMS to do a proper check of ref config, also remove test_xml import
>
> This test was originally designed to do the following:
>
> 1) Create a guest with a MAC interface
> 2) Create a second guest based on the first guest - second guest has an
>    additional MAC defined. Pass a reference to the first guest during the
>    DefineSystem()
> 3) Verify the second guest was created with two MACs - one that is identical to
>    the first guest and one that is different
>
> The providers no longer allow a guest to have the same MAC as an existing guest.
> Each MAC needs to be unique. Therefore, this test needs to use a different
> setting - disk source works for this.
>
> Also, remove the dependency on test_xml.py - that module is now obsolete.

The test case seems to be adding 2 MACs as well to the second domain.
See the XML below, obtained from the debug message:

rstest_domain2 destroy destroy 8529544b-9655-4e3b-b702-d7fe64167b71 hvm 131072 131072 1 /usr/bin/qemu-kvm

The MAC in the above XML shows that it is generated by the libvirt-CIM provider, as it has the prefix "00:16:3e". I think we get two MACs because the referenced domain passed to DefineSystem() would also have interface information? The debug messages do not directly imply how this might be getting added. Probably we can include more debug statements in the libvirt-cim provider going forward. I think we should not be adding two net interfaces when we are not asking for them.

> Signed-off-by: Kaitlin Rupert
>
> diff -r a0185245b989 -r 03e78e8b7a06 suites/libvirt-cim/cimtest/VirtualSystemManagementService/12_referenced_config.py
> --- a/suites/libvirt-cim/cimtest/VirtualSystemManagementService/12_referenced_config.py Tue Sep 01 14:23:12 2009 -0700
> +++ b/suites/libvirt-cim/cimtest/VirtualSystemManagementService/12_referenced_config.py Tue Sep 01 15:07:57 2009 -0700
> @@ -33,19 +33,16 @@
> import sys
> from XenKvmLib.common_util import get_cs_instance
> from CimTest.Globals import logger
> -from XenKvmLib.const import do_main, get_provider_version
> +from XenKvmLib.const import do_main, KVM_secondary_disk_path
> from CimTest.ReturnCodes import FAIL, PASS
> from XenKvmLib.classes import get_typed_class, inst_to_mof
> from XenKvmLib.assoc import AssociatorNames
> -from XenKvmLib.test_xml import dumpxml
> from XenKvmLib.vxml import get_class
> from XenKvmLib.rasd import get_default_rasds
>
> sup_types = ['Xen', 'XenFV', 'KVM']
> test_dom = 'rstest_domain'
> test_dom2 = 'rstest_domain2'
> -mac = "aa:aa:aa:00:00:00"
> -libvirt_mac_ref_changes = 935
>
> def setup_first_guest(ip, virt, cxml):
>     ret = cxml.cim_define(ip)
> @@ -76,22 +73,23 @@
>     return vssd[0]
>
> def setup_second_guest(ip, virt, cxml2, ref):
> -    nrasd_cn = get_typed_class(virt, "NetResourceAllocationSettingData")
> +    drasd_cn = get_typed_class(virt, "DiskResourceAllocationSettingData")
>
>     rasds = get_default_rasds(ip, virt)
>
>     rasd_list = {}
>
>     for rasd in rasds:
> -        if rasd.classname == nrasd_cn:
> -            rasd['Address'] = mac
> -            rasd['NetworkType'] = "network"
> -            rasd_list[nrasd_cn] = inst_to_mof(rasd)
> +        if rasd.classname == drasd_cn:
> +            rasd['Address'] = KVM_secondary_disk_path
> +            rasd['VirtualDevice '] = "hdb"
> +            rasd_list[drasd_cn] = inst_to_mof(rasd)
> +            break
>         else:
>             rasd_list[rasd.classname] = None
>
> -    if rasd_list[nrasd_cn] is None:
> -        logger.error("Unable to get template NetRASD")
> +    if rasd_list[drasd_cn] is None:
> +        logger.error("Unable to get template DiskRASD")
>         return FAIL
>
>     cxml2.set_res_settings(rasd_list)
> @@ -103,20 +101,21 @@
>
>     return PASS, "define"
>
> -def get_dom_macs(server, dom, virt):
> -    mac_list = []
> +def get_dom_disk_src(xml, ip):
> +    disk_list = []
>
> -    myxml = dumpxml(dom, server, virt=virt)
> +    xml.dumpxml(ip)
> +    myxml = xml.get_formatted_xml()
>
>     lines = myxml.splitlines()
>     for l in lines:
> -        if l.find("mac address=") != -1:
> -            mac = l.split('=')[1]
> -            mac = mac.lstrip('\'')
> -            mac = mac.rstrip('\'/>')
> -            mac_list.append(mac)
> +        if l.find("source file=") != -1:
> +            disk = l.split('=')[1]
> +            disk = disk.lstrip('\'')
> +            disk = disk.rstrip('\'/>')
> +            disk_list.append(disk)
>
> -    return mac_list
> +    return disk_list
>
> @do_main(sup_types)
> def main():
> @@ -143,26 +142,23 @@
>         if status != PASS:
>             raise Exception("Unable to define %s" % test_dom2)
>
> -        dom1_mac_list = get_dom_macs(ip, test_dom, virt)
> -        if len(dom1_mac_list) != 1:
> -            raise Exception("%s has %d macs, expected 1" % (test_dom,
> -                                                            len(dom1_mac_list)))
> +        g1_disk_list = get_dom_disk_src(cxml, ip)
> +        if len(g1_disk_list) != 1:
> +            raise Exception("%s has %d disks, expected 1" % (test_dom,
> +                                                             len(g1_disk_list)))
>
> -        dom2_mac_list = get_dom_macs(ip, test_dom2, virt)
> -        if len(dom2_mac_list) != 2:
> -            raise Exception("%s has %d macs, expected 2" % (test_dom2,
> -                                                            len(dom2_mac_list)))
> +        g2_disk_list = get_dom_disk_src(cxml2, ip)
> +        if len(g2_disk_list) != 2:
> +            raise Exception("%s has %d disks, expected 2" % (test_dom2,
> +                                                             len(g2_disk_list)))
>
> -        curr_cim_rev, changeset = get_provider_version(virt, ip)
> -        if curr_cim_rev < libvirt_mac_ref_changes:
> -            for item in dom2_mac_list:
> -                if item != mac and item != dom1_mac_list[0]:
> -                    raise Exception("%s has unexpected mac value, exp: %s %s" \
> -                                    % (item, mac, dom1_mac_list[0]))
> -        elif curr_cim_rev >= libvirt_mac_ref_changes:
> -            if not mac in dom2_mac_list:
> -                raise Exception("Did not find the mac information given to "\
> -                                "the domain '%s'" % test_dom2)
> +        if g2_disk_list[0] != g1_disk_list[0]:
> +            raise Exception("%s has unexpected disk source, exp: %s, got %s" \
> +                            % (test_dom2, g2_disk_list[0], g1_disk_list[0]))
> +
> +        if g2_disk_list[1] == g1_disk_list[0]:
> +            raise Exception("%s has unexpected disk source, exp: %s, got %s" \
> +                            % (test_dom2, g2_disk_list[1], g1_disk_list[0]))
>
Better would be g2_disk_list[1] in g1_disk_list[0].

>         status = PASS
>
>
> _______________________________________________
> Libvirt-cim mailing list
> Libvirt-cim at redhat.com
> https://www.redhat.com/mailman/listinfo/libvirt-cim
>

--
Thanks and Regards,
Deepti B. Kalakeri
IBM Linux Technology Center
deeptik at linux.vnet.ibm.com

From kaitlin at linux.vnet.ibm.com Thu Sep 3 17:18:37 2009
From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert)
Date: Thu, 03 Sep 2009 10:18:37 -0700
Subject: [Libvirt-cim] Re: Test Run Summary (Sep 03 2009): KVM on Fedora release 11 (Leonidas) with Pegasus
In-Reply-To: <4A9F6D69.8010209@linux.vnet.ibm.com>
References: <4A9F6800.9020806@linux.vnet.ibm.com> <4A9F6D69.8010209@linux.vnet.ibm.com>
Message-ID: <4A9FFA6D.4020504@linux.vnet.ibm.com>

>> =================================================
>> FAIL Test Summary:
>> KVMRedirectionSAP - 01_enum_KVMredSAP.py: FAIL
> This test passed when run manually.
>>
>>
>
Do you know why the test originally failed?
-- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 3 18:15:12 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 03 Sep 2009 11:15:12 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Add try / except to VSMS 15 In-Reply-To: <4A9FA28C.6080302@linux.vnet.ibm.com> References: <4A9FA28C.6080302@linux.vnet.ibm.com> Message-ID: <4AA007B0.4010703@linux.vnet.ibm.com> Ah, oops! Careless errors there. Thanks for catching these Deepti =) Deepti B Kalakeri wrote: > > > Kaitlin Rupert wrote: >> # HG changeset patch >> # User Kaitlin Rupert >> # Date 1251828184 25200 >> # Node ID ddb880e221d36151a9f91c3b0ab95f9cca97c2fa >> # Parent 95fa64bf447e5bc2bab501564e3d9336edef997d >> [TEST] Add try / except to VSMS 15 >> >> This will catch any unexpected exceptions. Otherwise, the exception >> isn't >> caught and the guest may not be properly undefined >> >> Signed-off-by: Kaitlin Rupert >> >> diff -r 95fa64bf447e -r ddb880e221d3 >> suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py >> >> --- >> a/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py >> Thu Aug 27 16:39:53 2009 -0700 >> +++ >> b/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py >> Tue Sep 01 11:03:04 2009 -0700 >> @@ -74,72 +74,71 @@ >> > Though it is not part of the changes in this patch, can you remove the > following import statements from the tc: > > remove the import statement for default_network_name > > >> cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) >> service = vsms.get_vsms_class(options.virt)(options.ip) >> >> - for case in test_cases: >> - #Each time through, define guest using a default XML >> - cxml.undefine(options.ip) >> - cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) >> - ret = cxml.cim_define(options.ip) >> - if not ret: >> - logger.error("Failed to define the dom: %s", default_dom) >> - 
cleanup_env(options.ip, cxml) >> - return FAIL >> + try: >> >> - if case == "start": >> - ret = cxml.start(options.ip) >> + for case in test_cases: >> + #Each time through, define guest using a default XML >> + cxml.undefine(options.ip) >> + cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) >> + ret = cxml.cim_define(options.ip) >> if not ret: >> - logger.error("Failed to start %s", default_dom) >> - cleanup_env(options.ip, cxml) >> - return FAIL >> + raise Exception("Failed to define the dom: %s", >> default_dom) >> >> > Remove the comma in the Exception statement and use % default instead, > otherwise the exception will be printed as follows, the format string > wont be substituted properly. > ERROR - ('Failed to define the dom: %s', 'rstest_domain') > > Instead the above could be > > raise Exception("Failed to define the dom: %s" % default_dom) > > which would print the exception as below: > > ERROR - Failed to define the dom: rstest_domain > > > >> - status, inst = get_vssd(options.ip, options.virt, True) >> - if status != PASS: >> - logger.error("Failed to get the VSSD instance for %s", >> default_dom) >> - cleanup_env(options.ip, cxml) >> - return FAIL >> + if case == "start": >> + ret = cxml.start(options.ip) >> + if not ret: >> + raise Exception("Failed to start %s", default_dom) >> > Same here Remove the comma in the Exception statement and use % after " >> - inst['AutomaticRecoveryAction'] = >> pywbem.cim_types.Uint16(RECOVERY_VAL) >> - vssd = inst_to_mof(inst) >> + status, inst = get_vssd(options.ip, options.virt, True) >> + if status != PASS: >> + raise Expcetion("Failed to get the VSSD instance for >> %s", + default_dom) >> >> > Same here Remove the comma in the Exception statement and use % after " >> - ret = service.ModifySystemSettings(SystemSettings=vssd) >> - curr_cim_rev, changeset = get_provider_version(options.virt, >> options.ip) >> - if curr_cim_rev >= libvirt_modify_setting_changes: >> - if ret[0] != 0: >> - logger.error("Failed to 
modify dom: %s", default_dom) >> - cleanup_env(options.ip, cxml) >> - return FAIL >> + val = pywbem.cim_types.Uint16(RECOVERY_VAL) >> + inst['AutomaticRecoveryAction'] = val >> + vssd = inst_to_mof(inst) >> >> - if case == "start": >> - #This should be replaced with a RSC to shutdownt he guest >> - cxml.destroy(options.ip) >> - status, cs = poll_for_state_change(options.ip, >> options.virt, - >> default_dom, DEFINED_STATE) >> + ret = service.ModifySystemSettings(SystemSettings=vssd) >> + curr_cim_rev, changeset = >> get_provider_version(options.virt, >> + options.ip) >> + if curr_cim_rev >= libvirt_modify_setting_changes: >> + if ret[0] != 0: >> + raise Exception("Failed to modify dom: %s", >> default_dom) >> + >> + if case == "start": >> + #This should be replaced with a RSC to shutdownt he >> guest >> + cxml.destroy(options.ip) >> + status, cs = poll_for_state_change(options.ip, >> options.virt, + >> default_dom, DEFINED_STATE) >> > you can use cim_destroy() instead. >> + if status != PASS: >> + raise Exception("Failed to destroy %s", default_dom) >> > > Same here Remove the comma in the Exception statement and use % after " >> + >> + status, inst = get_vssd(options.ip, options.virt, False) >> if status != PASS: >> - logger.error("Failed to destroy %s", default_dom) >> - cleanup_env(options.ip, cxml) >> - return FAIL >> + raise Exception("Failed to get the VSSD instance for >> %s", + default_dom) >> >> > Same here Remove the comma in the Exception statement and use % after " >> - status, inst = get_vssd(options.ip, options.virt, False) >> - if status != PASS: >> - logger.error("Failed to get the VSSD instance for %s", >> default_dom) >> - cleanup_env(options.ip, cxml) >> - return FAIL >> + if inst.AutomaticRecoveryAction != RECOVERY_VAL: >> + logger.error("Exp AutomaticRecoveryAction=%d, got >> %d", + RECOVERY_VAL, >> inst.AutomaticRecoveryAction) >> + raise Exception("%s not updated properly.", default_dom) >> >> > Same here Remove the comma in the Exception 
statement and use % after " >> - if inst.AutomaticRecoveryAction != RECOVERY_VAL: >> - logger.error("%s not updated properly.", default_dom) >> - logger.error("Exp AutomaticRecoveryAction=%d, got %d", >> RECOVERY_VAL, >> - inst.AutomaticRecoveryAction) >> - cleanup_env(options.ip, cxml) >> - curr_cim_rev, changeset = >> get_provider_version(options.virt, options.ip) >> - if curr_cim_rev <= libvirt_f9_revision and options.virt >> == "KVM": >> - return XFAIL_RC(f9_bug) >> + status = PASS >> >> - if options.virt == "LXC": >> - return XFAIL_RC(bug) >> - return FAIL + except Exception, details: >> + logger.error(details) >> + status = FAIL >> >> cleanup_env(options.ip, cxml) >> >> - return PASS + curr_cim_rev, changeset = >> get_provider_version(options.virt, options.ip) >> + if curr_cim_rev <= libvirt_f9_revision and options.virt == "KVM": >> + return XFAIL_RC(f9_bug) >> + >> + if options.virt == "LXC": >> + return XFAIL_RC(bug) >> + >> + return status >> if __name__ == "__main__": >> sys.exit(main()) >> >> _______________________________________________ >> Libvirt-cim mailing list >> Libvirt-cim at redhat.com >> https://www.redhat.com/mailman/listinfo/libvirt-cim >> > -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 3 18:20:41 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 03 Sep 2009 11:20:41 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] #2 Add try / except to VSMS 15 Message-ID: <54bf724a87d4dcf370ca.1252002041@elm3b151.beaverton.ibm.com> # HG changeset patch # User Kaitlin Rupert # Date 1252002019 25200 # Node ID 54bf724a87d4dcf370ca68714809cfaaf55457ca # Parent 30196cc506c07d81642c94a01fc65b34421c0714 [TEST] #2 Add try / except to VSMS 15 This will catch any unexpected exceptions. 
Otherwise, the exception isn't caught and the guest may not be properly undefined Updates: -Fix Exception() calls to use % instead of a , when specifying arguments -Remove import of default_network_name -Replace destroy() with cim_destroy() Signed-off-by: Kaitlin Rupert diff -r 30196cc506c0 -r 54bf724a87d4 suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py --- a/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py Wed Sep 02 05:11:16 2009 -0700 +++ b/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py Thu Sep 03 11:20:19 2009 -0700 @@ -26,7 +26,7 @@ from XenKvmLib import vxml from CimTest.Globals import logger from CimTest.ReturnCodes import PASS, FAIL, XFAIL_RC -from XenKvmLib.const import do_main, default_network_name +from XenKvmLib.const import do_main from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.enumclass import GetInstance from XenKvmLib.common_util import poll_for_state_change @@ -74,72 +74,70 @@ cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) service = vsms.get_vsms_class(options.virt)(options.ip) - for case in test_cases: - #Each time through, define guest using a default XML - cxml.undefine(options.ip) - cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) - ret = cxml.cim_define(options.ip) - if not ret: - logger.error("Failed to define the dom: %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + try: - if case == "start": - ret = cxml.start(options.ip) + for case in test_cases: + #Each time through, define guest using a default XML + cxml.undefine(options.ip) + cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) + ret = cxml.cim_define(options.ip) if not ret: - logger.error("Failed to start %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + raise Exception("Failed to define the dom: %s" % default_dom) - status, inst = get_vssd(options.ip, options.virt, True) - if status != 
PASS: - logger.error("Failed to get the VSSD instance for %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + if case == "start": + ret = cxml.start(options.ip) + if not ret: + raise Exception("Failed to start %s" % default_dom) - inst['AutomaticRecoveryAction'] = pywbem.cim_types.Uint16(RECOVERY_VAL) - vssd = inst_to_mof(inst) + status, inst = get_vssd(options.ip, options.virt, True) + if status != PASS: + raise Expcetion("Failed to get the VSSD instance for %s", + default_dom) - ret = service.ModifySystemSettings(SystemSettings=vssd) - curr_cim_rev, changeset = get_provider_version(options.virt, options.ip) - if curr_cim_rev >= libvirt_modify_setting_changes: - if ret[0] != 0: - logger.error("Failed to modify dom: %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + val = pywbem.cim_types.Uint16(RECOVERY_VAL) + inst['AutomaticRecoveryAction'] = val + vssd = inst_to_mof(inst) - if case == "start": - #This should be replaced with a RSC to shutdownt he guest - cxml.destroy(options.ip) - status, cs = poll_for_state_change(options.ip, options.virt, - default_dom, DEFINED_STATE) + ret = service.ModifySystemSettings(SystemSettings=vssd) + curr_cim_rev, changeset = get_provider_version(options.virt, + options.ip) + if curr_cim_rev >= libvirt_modify_setting_changes: + if ret[0] != 0: + raise Exception("Failed to modify dom: %s" % default_dom) + + if case == "start": + cxml.cim_destroy(options.ip) + status, cs = poll_for_state_change(options.ip, options.virt, + default_dom, DEFINED_STATE) + if status != PASS: + raise Exception("Failed to destroy %s" % default_dom) + + status, inst = get_vssd(options.ip, options.virt, False) if status != PASS: - logger.error("Failed to destroy %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + raise Exception("Failed to get the VSSD instance for %s" % \ + default_dom) - status, inst = get_vssd(options.ip, options.virt, False) - if status != PASS: - logger.error("Failed to get the VSSD instance for 
%s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL + if inst.AutomaticRecoveryAction != RECOVERY_VAL: + logger.error("Exp AutomaticRecoveryAction=%d, got %d", + RECOVERY_VAL, inst.AutomaticRecoveryAction) + raise Exception("%s not updated properly" % default_dom) - if inst.AutomaticRecoveryAction != RECOVERY_VAL: - logger.error("%s not updated properly.", default_dom) - logger.error("Exp AutomaticRecoveryAction=%d, got %d", RECOVERY_VAL, - inst.AutomaticRecoveryAction) - cleanup_env(options.ip, cxml) - curr_cim_rev, changeset = get_provider_version(options.virt, options.ip) - if curr_cim_rev <= libvirt_f9_revision and options.virt == "KVM": - return XFAIL_RC(f9_bug) + status = PASS - if options.virt == "LXC": - return XFAIL_RC(bug) - return FAIL + except Exception, details: + logger.error(details) + status = FAIL cleanup_env(options.ip, cxml) - return PASS + curr_cim_rev, changeset = get_provider_version(options.virt, options.ip) + if curr_cim_rev <= libvirt_f9_revision and options.virt == "KVM": + return XFAIL_RC(f9_bug) + + if options.virt == "LXC": + return XFAIL_RC(bug) + + return status if __name__ == "__main__": sys.exit(main()) From kaitlin at linux.vnet.ibm.com Thu Sep 3 19:27:33 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 03 Sep 2009 12:27:33 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Fix VSMS to do a proper check of ref config, also remove test_xml import In-Reply-To: <4A9FBDB2.9060901@linux.vnet.ibm.com> References: <03e78e8b7a06296eba99.1251842910@elm3b151.beaverton.ibm.com> <4A9FBDB2.9060901@linux.vnet.ibm.com> Message-ID: <4AA018A5.7070100@linux.vnet.ibm.com> Deepti B Kalakeri wrote: > > > Kaitlin Rupert wrote: >> # HG changeset patch >> # User Kaitlin Rupert >> # Date 1251842877 25200 >> # Node ID 03e78e8b7a06296eba99e1329840ae6ee521f357 >> # Parent a0185245b9894f195227c12af621151623972573 >> [TEST] Fix VSMS to do a proper check of ref config, also remove >> test_xml import >> >> This test was originally 
designed to do the following:
>>
>> 1) Create a guest with a MAC interface
>> 2) Create a second guest based on the first guest - the second guest has an
>> additional MAC defined. Pass a reference to the first guest during the
>> DefineSystem()
>> 3) Verify the second guest was created with two MACs - one that is identical
>> to the first guest and one that is different
>>
>> The providers no longer allow a guest to have the same MAC as an existing
>> guest. Each MAC needs to be unique. Therefore, this test needs to use a
>> different setting - disk source works for this.
>>
>> Also, remove the dependency on test_xml.py - that module is now obsolete.
>>
> The test case seems to be adding two MACs as well to the second domain.
> See the xml below obtained from the debug message:
> [domain XML lost in archiving]
> The MAC in the above XML shows that it's generated by the libvirt-CIM
> provider, as it has the prefix "00:16:3e".
> I think we get two MACs because the referenced domain that was passed to
> DefineSystem() would also have interface information.
> The debug messages do not directly show how this is getting added.
> Probably we can include more debug statements in the libvirt-cim provider.
>
> I think we should not be adding two net interfaces when we are not asking
> for it.
>
Right, that's correct. This is the intended behavior. If you pass a value for the ReferencedConfiguration parameter, DefineSystem() will use that guest as a basis for the new guest. If you look at the RASDs we pass in the test case, you'll see that the test case specifies a NetRASD (see the array of RASDs we pass below). Since the guest we specified for the ReferencedConfiguration already had a network interface, the NetRASD we specified adds an additional network interface to the guest. So the test case is really asking for two interfaces. This was the original purpose of the test.
['instance of KVM_DiskResourceAllocationSettingData {\n\tPoolID = "DiskPool/cimtest-diskpool";\n\tVirtualDevice = "hda";\n\tResourceType = 17;\n\tAddress = "/tmp/default-kvm-dimage.2ND";\n\tVirtualQuantityUnits = "count";\n\tInstanceID = "Default";\n\tEmulatedType = 0;\n\tVirtualQuantity = 4096;\n\tVirtualDevice = "hdb";\n};\n', 'instance of KVM_ProcResourceAllocationSettingData {\nResourceType = 3;\nVirtualQuantity = 1;\nInstanceID = "rstest_domain2/proc";\n};', 'instance of KVM_NetResourceAllocationSettingData {\nResourceType = 10;\nNetworkType = "network";\nPoolID = "NetworkPool/cimtest-networkpool";\n};'] -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 3 19:53:20 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 03 Sep 2009 12:53:20 -0700 Subject: [Libvirt-cim] [PATCH] Use Qumranet's OUI for KVM guests instead of using Xensource's Message-ID: <234141bf7f0368531c88.1252007600@elm3b151.beaverton.ibm.com> # HG changeset patch # User Kaitlin Rupert # Date 1252007567 25200 # Node ID 234141bf7f0368531c884334b1da5b94cc038758 # Parent 9c8eb2dfae84ed67999657d8f238a8bd777e1c36 Use Qumranet's OUI for KVM guests instead of using Xensource's. 
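For reference, the C patch that follows picks an OUI prefix based on the hypervisor's class prefix and appends three random octets. A minimal Python sketch of the same scheme (the two prefix strings come from the patch; the function name and the use of `random` are illustrative, not the provider's code):

```python
import random

# OUI prefixes from the patch: Xensource for Xen guests, Qumranet for KVM guests.
XEN_MAC_PREFIX = "00:16:3e"
KVM_MAC_PREFIX = "00:1A:4A"

def net_rand_mac(classname):
    """Mirror _net_rand_mac(): choose the OUI from the reference's class
    prefix, then fill the remaining three octets with random values."""
    prefix = KVM_MAC_PREFIX if classname.startswith("KVM") else XEN_MAC_PREFIX
    return "%s:%02x:%02x:%02x" % (prefix,
                                  random.randint(0, 255),
                                  random.randint(0, 255),
                                  random.randint(0, 255))
```

A KVM class name such as KVM_ComputerSystem yields a "00:1A:4A:..." address; any other class name falls back to the Xen prefix, matching the patch's else branch.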
Signed-off-by: Kaitlin Rupert diff -r 9c8eb2dfae84 -r 234141bf7f03 src/Virt_VirtualSystemManagementService.c --- a/src/Virt_VirtualSystemManagementService.c Tue Sep 01 11:34:21 2009 -0700 +++ b/src/Virt_VirtualSystemManagementService.c Thu Sep 03 12:52:47 2009 -0700 @@ -57,7 +57,8 @@ #include "config.h" -#define DEFAULT_MAC_PREFIX "00:16:3e" +#define XEN_MAC_PREFIX "00:16:3e" +#define KVM_MAC_PREFIX "00:1A:4A" #define DEFAULT_XEN_WEIGHT 1024 #define BRIDGE_TYPE "bridge" #define NETWORK_TYPE "network" @@ -530,7 +531,7 @@ return poolid; } -static const char *_net_rand_mac(void) +static const char *_net_rand_mac(const CMPIObjectPath *ref) { int r; int ret; @@ -540,6 +541,8 @@ CMPIString *str = NULL; CMPIStatus status; struct timeval curr_time; + const char *mac_prefix = NULL; + char *cn_prefix = NULL; ret = gettimeofday(&curr_time, NULL); if (ret != 0) @@ -549,9 +552,18 @@ s = curr_time.tv_usec; r = rand_r(&s); + cn_prefix = class_prefix_name(CLASSNAME(ref)); + + if (STREQ(cn_prefix, "KVM")) + mac_prefix = KVM_MAC_PREFIX; + else + mac_prefix = XEN_MAC_PREFIX; + + free(cn_prefix); + ret = asprintf(&mac, "%s:%02x:%02x:%02x", - DEFAULT_MAC_PREFIX, + mac_prefix, r & 0xFF, (r & 0xFF00) >> 8, (r & 0xFF0000) >> 16); @@ -646,9 +658,16 @@ const char *val = NULL; const char *msg = NULL; char *network = NULL; + CMPIObjectPath *op = NULL; + + op = CMGetObjectPath(inst, NULL); + if (op == NULL) { + msg = "Unable to determine classname of NetRASD"; + goto out; + } if (cu_get_str_prop(inst, "Address", &val) != CMPI_RC_OK) { - val = _net_rand_mac(); + val = _net_rand_mac(op); if (val == NULL) { msg = "Unable to generate a MAC address"; goto out; @@ -1379,7 +1398,7 @@ free((*domain)->dev_net->dev.net.mac); (*domain)->dev_net->dev.net.mac = NULL; - mac = _net_rand_mac(); + mac = _net_rand_mac(ref); if (mac == NULL) { cu_statusf(_BROKER, &s, CMPI_RC_ERR_INVALID_PARAMETER, From kaitlin at linux.vnet.ibm.com Thu Sep 3 22:15:30 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) 
Date: Thu, 03 Sep 2009 15:15:30 -0700 Subject: [Libvirt-cim] [PATCH] Add timestamps to main.py to calculate run time of tests Message-ID: # HG changeset patch # User Kaitlin Rupert # Date 1252016104 25200 # Node ID fbedb0f125546bf16bc7a4b915a25e4042be0ac7 # Parent db3af9cb2c9affb0a32a8ea3a2c23648c5efe91e Add timestamps to main.py to calculate run time of tests These changes allow the user to specify the --print-exec-time flag, which will print the execution time of each test. If this flag isn't specified, the total run time of the test is still printed. Signed-off-by: Kaitlin Rupert diff -r db3af9cb2c9a -r fbedb0f12554 suites/libvirt-cim/main.py --- a/suites/libvirt-cim/main.py Thu Sep 03 13:03:52 2009 -0700 +++ b/suites/libvirt-cim/main.py Thu Sep 03 15:15:04 2009 -0700 @@ -22,6 +22,7 @@ # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA # +from time import time from optparse import OptionParser import os import sys @@ -64,6 +65,9 @@ help="Duplicate the output to stderr") parser.add_option("--report", dest="report", help="Send report using mail info: --report=") +parser.add_option("--print-exec-time", action="store_true", + dest="print_exec_time", + help="Print execution time of each test") TEST_SUITE = 'cimtest' CIMTEST_RCFILE = '%s/.cimtestrc' % os.environ['HOME'] @@ -146,6 +150,27 @@ return PASS +def print_exec_time(testsuite, exec_time): + + #Convert run time from seconds to hours + tmp = exec_time / (60 * 60) + h = int(tmp) + + #Subtract out hours and convert remainder to minutes + tmp = (tmp - h) * 60 + m = int(tmp) + + #Subtract out minutes and convert remainder to seconds + tmp = (tmp - m) * 60 + s = int(tmp) + + #Subtract out seconds and convert remainder to milliseconds + tmp = (tmp - s) * 1000 + msec = int(tmp) + + testsuite.debug(" Execution time: %sh %smin %ssec %smsec" % + (h, m, s, msec)) + def main(): (options, args) = parser.parse_args() to_addr = None @@ -213,6 +238,8 @@ print "\nTesting " + options.virt + " hypervisor" + 
test_run_time_total = 0 + for test in test_list: testsuite.debug(div) t_path = os.path.join(TEST_SUITE, test['group']) @@ -222,13 +249,25 @@ options.virt, dbg, options.t_url) cmd = cdto + ' && ' + ' ' + run + start_time = time() status, output = commands.getstatusoutput(cmd) + end_time = time() os_status = os.WEXITSTATUS(status) testsuite.print_results(test['group'], test['test'], os_status, output) + exec_time = end_time - start_time + test_run_time_total = test_run_time_total + exec_time + + if options.print_exec_time: + print_exec_time(testsuite, exec_time) + testsuite.debug("%s\n" % div) + testsuite.debug("Total test execution: ") + print_exec_time(testsuite, test_run_time_total) + testsuite.debug("\n") + testsuite.finish() status = cleanup_env(options.ip, options.virt) From rmaciel at linux.vnet.ibm.com Thu Sep 3 22:16:43 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Thu, 03 Sep 2009 19:16:43 -0300 Subject: [Libvirt-cim] [PATCH] Use Qumranet's OUI for KVM guests instead of using Xensource's In-Reply-To: <234141bf7f0368531c88.1252007600@elm3b151.beaverton.ibm.com> References: <234141bf7f0368531c88.1252007600@elm3b151.beaverton.ibm.com> Message-ID: <4AA0404B.5090205@linux.vnet.ibm.com> +1 On 09/03/2009 04:53 PM, Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1252007567 25200 > # Node ID 234141bf7f0368531c884334b1da5b94cc038758 > # Parent 9c8eb2dfae84ed67999657d8f238a8bd777e1c36 > Use Qumranet's OUI for KVM guests instead of using Xensource's. 
> > Signed-off-by: Kaitlin Rupert > > diff -r 9c8eb2dfae84 -r 234141bf7f03 src/Virt_VirtualSystemManagementService.c > --- a/src/Virt_VirtualSystemManagementService.c Tue Sep 01 11:34:21 2009 -0700 > +++ b/src/Virt_VirtualSystemManagementService.c Thu Sep 03 12:52:47 2009 -0700 > @@ -57,7 +57,8 @@ > > #include "config.h" > > -#define DEFAULT_MAC_PREFIX "00:16:3e" > +#define XEN_MAC_PREFIX "00:16:3e" > +#define KVM_MAC_PREFIX "00:1A:4A" > #define DEFAULT_XEN_WEIGHT 1024 > #define BRIDGE_TYPE "bridge" > #define NETWORK_TYPE "network" > @@ -530,7 +531,7 @@ > return poolid; > } > > -static const char *_net_rand_mac(void) > +static const char *_net_rand_mac(const CMPIObjectPath *ref) > { > int r; > int ret; > @@ -540,6 +541,8 @@ > CMPIString *str = NULL; > CMPIStatus status; > struct timeval curr_time; > + const char *mac_prefix = NULL; > + char *cn_prefix = NULL; > > ret = gettimeofday(&curr_time, NULL); > if (ret != 0) > @@ -549,9 +552,18 @@ > s = curr_time.tv_usec; > r = rand_r(&s); > > + cn_prefix = class_prefix_name(CLASSNAME(ref)); > + > + if (STREQ(cn_prefix, "KVM")) > + mac_prefix = KVM_MAC_PREFIX; > + else > + mac_prefix = XEN_MAC_PREFIX; > + > + free(cn_prefix); > + > ret = asprintf(&mac, > "%s:%02x:%02x:%02x", > - DEFAULT_MAC_PREFIX, > + mac_prefix, > r& 0xFF, > (r& 0xFF00)>> 8, > (r& 0xFF0000)>> 16); > @@ -646,9 +658,16 @@ > const char *val = NULL; > const char *msg = NULL; > char *network = NULL; > + CMPIObjectPath *op = NULL; > + > + op = CMGetObjectPath(inst, NULL); > + if (op == NULL) { > + msg = "Unable to determine classname of NetRASD"; > + goto out; > + } > > if (cu_get_str_prop(inst, "Address",&val) != CMPI_RC_OK) { > - val = _net_rand_mac(); > + val = _net_rand_mac(op); > if (val == NULL) { > msg = "Unable to generate a MAC address"; > goto out; > @@ -1379,7 +1398,7 @@ > free((*domain)->dev_net->dev.net.mac); > (*domain)->dev_net->dev.net.mac = NULL; > > - mac = _net_rand_mac(); > + mac = _net_rand_mac(ref); > if (mac == NULL) { > 
cu_statusf(_BROKER,&s, > CMPI_RC_ERR_INVALID_PARAMETER, > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Fri Sep 4 00:04:35 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 03 Sep 2009 17:04:35 -0700 Subject: [Libvirt-cim] [PATCH] Add timestamps to main.py to calculate run time of tests In-Reply-To: References: Message-ID: <4AA05993.7040108@linux.vnet.ibm.com> Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1252016104 25200 > # Node ID fbedb0f125546bf16bc7a4b915a25e4042be0ac7 > # Parent db3af9cb2c9affb0a32a8ea3a2c23648c5efe91e > Add timestamps to main.py to calculate run time of tests > > These changes allow the user to specify the --print-exec-time flag, which will > print the execution time of each test. If this flag isn't specified, the > total run time of the test is still printed. > > Signed-off-by: Kaitlin Rupert > > diff -r db3af9cb2c9a -r fbedb0f12554 suites/libvirt-cim/main.py This should have been sent with [TEST] in the subject. Resending this patch. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Fri Sep 4 00:05:39 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 03 Sep 2009 17:05:39 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Add timestamps to main.py to calculate run time of tests Message-ID: <2d852ba88fd24102ec98.1252022739@elm3b151.beaverton.ibm.com> # HG changeset patch # User Kaitlin Rupert # Date 1252022738 25200 # Node ID 2d852ba88fd24102ec988145e464a13f5faae5c0 # Parent db3af9cb2c9affb0a32a8ea3a2c23648c5efe91e [TEST] Add timestamps to main.py to calculate run time of tests These changes allow the user to specify the --print-exec-time flag, which will print the execution time of each test. 
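The print_exec_time() helper in the diff below splits a float of elapsed seconds into hours, minutes, seconds, and milliseconds by repeatedly subtracting out the integer part; the same split can be written with divmod (a standalone sketch, not the patch code):

```python
def split_exec_time(exec_time):
    """Break elapsed seconds into (hours, minutes, seconds, milliseconds)."""
    msec_total = int(exec_time * 1000)         # work in whole milliseconds
    sec_total, msec = divmod(msec_total, 1000)
    min_total, s = divmod(sec_total, 60)
    h, m = divmod(min_total, 60)
    return h, m, s, msec

# e.g. 3725.5 elapsed seconds -> 1h 2min 5sec 500msec
```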
If this flag isn't specified, the total run time of the test is still printed. Signed-off-by: Kaitlin Rupert diff -r db3af9cb2c9a -r 2d852ba88fd2 suites/libvirt-cim/main.py --- a/suites/libvirt-cim/main.py Thu Sep 03 13:03:52 2009 -0700 +++ b/suites/libvirt-cim/main.py Thu Sep 03 17:05:38 2009 -0700 @@ -22,6 +22,7 @@ # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA # +from time import time from optparse import OptionParser import os import sys @@ -64,6 +65,9 @@ help="Duplicate the output to stderr") parser.add_option("--report", dest="report", help="Send report using mail info: --report=") +parser.add_option("--print-exec-time", action="store_true", + dest="print_exec_time", + help="Print execution time of each test") TEST_SUITE = 'cimtest' CIMTEST_RCFILE = '%s/.cimtestrc' % os.environ['HOME'] @@ -146,6 +150,27 @@ return PASS +def print_exec_time(testsuite, exec_time): + + #Convert run time from seconds to hours + tmp = exec_time / (60 * 60) + h = int(tmp) + + #Subtract out hours and convert remainder to minutes + tmp = (tmp - h) * 60 + m = int(tmp) + + #Subtract out minutes and convert remainder to seconds + tmp = (tmp - m) * 60 + s = int(tmp) + + #Subtract out seconds and convert remainder to milliseconds + tmp = (tmp - s) * 1000 + msec = int(tmp) + + testsuite.debug(" Execution time: %sh %smin %ssec %smsec" % + (h, m, s, msec)) + def main(): (options, args) = parser.parse_args() to_addr = None @@ -213,6 +238,8 @@ print "\nTesting " + options.virt + " hypervisor" + test_run_time_total = 0 + for test in test_list: testsuite.debug(div) t_path = os.path.join(TEST_SUITE, test['group']) @@ -222,13 +249,25 @@ options.virt, dbg, options.t_url) cmd = cdto + ' && ' + ' ' + run + start_time = time() status, output = commands.getstatusoutput(cmd) + end_time = time() os_status = os.WEXITSTATUS(status) testsuite.print_results(test['group'], test['test'], os_status, output) + exec_time = end_time - start_time + test_run_time_total = test_run_time_total 
+ exec_time + + if options.print_exec_time: + print_exec_time(testsuite, exec_time) + testsuite.debug("%s\n" % div) + testsuite.debug("Total test execution: ") + print_exec_time(testsuite, test_run_time_total) + testsuite.debug("\n") + testsuite.finish() status = cleanup_env(options.ip, options.virt) From dayne.medlyn at hp.com Fri Sep 4 19:23:07 2009 From: dayne.medlyn at hp.com (Medlyn, Dayne (VSL - Ft Collins)) Date: Fri, 4 Sep 2009 19:23:07 +0000 Subject: [Libvirt-cim] What is the latest version of libvirt-CIM that worked with libvirt-0.3.3? Message-ID: What is the latest version of libvirt-CIM that worked with libvirt-0.3.3? Thanks. Dayne From kaitlin at linux.vnet.ibm.com Fri Sep 4 20:49:23 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Fri, 04 Sep 2009 13:49:23 -0700 Subject: [Libvirt-cim] [PATCH] get_disk_pool() is only valid for newer versions of libvirt Message-ID: <2b627ddf93c9d303b87f.1252097363@elm3b151.beaverton.ibm.com> # HG changeset patch # User Kaitlin Rupert # Date 1252098766 25200 # Node ID 2b627ddf93c9d303b87fd186a6d6334465a9a14c # Parent 23572a8bc37d425291732467773f46224f640b72 get_disk_pool() is only valid for newer versions of libvirt This patch fixes a compile issue with older versions of libvirt.
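The guard in the diff below keys off LIBVIR_VERSION_NUMBER, which libvirt encodes as major * 1,000,000 + minor * 1,000 + release; 4000 therefore corresponds to version 0.4.0, so the storage-pool code is only compiled in for releases after that. A small Python sketch of the encoding (helper names are illustrative):

```python
def encode_libvirt_version(major, minor, release):
    """LIBVIR_VERSION_NUMBER layout: major * 1000000 + minor * 1000 + release."""
    return major * 1000000 + minor * 1000 + release

def decode_libvirt_version(number):
    """Invert the encoding back into a (major, minor, release) tuple."""
    major, rest = divmod(number, 1000000)
    minor, release = divmod(rest, 1000)
    return major, minor, release
```

By this scheme the libvirt 0.3.3 asked about earlier in the thread encodes to 3003, which fails the > 4000 check, while 0.4.1 encodes to 4001 and passes.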
diff -r 23572a8bc37d -r 2b627ddf93c9 src/Virt_DevicePool.h --- a/src/Virt_DevicePool.h Thu Sep 03 16:42:54 2009 -0700 +++ b/src/Virt_DevicePool.h Fri Sep 04 14:12:46 2009 -0700 @@ -28,6 +28,12 @@ #include "pool_parsing.h" +#if LIBVIR_VERSION_NUMBER > 4000 +# define VIR_USE_LIBVIRT_STORAGE 1 +#else +# define VIR_USE_LIBVIRT_STORAGE 0 +#endif + /** * Get the InstanceID of a pool that a given RASD id (for type) is in * @@ -135,6 +141,7 @@ uint16_t type, CMPIStatus *status); +#if VIR_USE_LIBVIRT_STORAGE /** * Get the configuration settings of a given storage pool * @@ -143,6 +150,7 @@ * @returns An int that indicates whether the function was successful */ int get_disk_pool(virStoragePoolPtr poolptr, struct virt_pool **pool); +#endif #endif From kaitlin at linux.vnet.ibm.com Fri Sep 4 20:57:59 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Fri, 4 Sep 2009 16:57:59 -0400 Subject: [Libvirt-cim] Test Run Summary (Sep 04 2009): Xen on Red Hat Enterprise Linux Server release 5.3 (Tikanga) with Pegasus Message-ID: <200909042057.n84KvxYJ011227@d01av02.pok.ibm.com> ================================================= Test Run Summary (Sep 04 2009): Xen on Red Hat Enterprise Linux Server release 5.3 (Tikanga) with Pegasus ================================================= Distro: Red Hat Enterprise Linux Server release 5.3 (Tikanga) Kernel: 2.6.18-128.el5xen libvirt: 0.3.3 Hypervisor: Xen 3.1.0 CIMOM: Pegasus 2.7.1 Libvirt-cim revision: 975 Libvirt-cim changeset: 169f5703b2e9+ Cimtest revision: Cimtest changeset: ================================================= FAIL : 16 XFAIL : 2 SKIP : 4 PASS : 147 ----------------- Total : 169 ================================================= FAIL Test Summary: ElementAllocatedFromPool - 01_forward.py: FAIL ResourceAllocationFromPool - 01_forward.py: FAIL ResourceAllocationFromPool - 02_reverse.py: FAIL ResourcePoolConfigurationService - 07_DeleteResourcePool.py: FAIL ResourcePoolConfigurationService - 09_DeleteDiskPool.py: FAIL 
ResourcePoolConfigurationService - 10_create_storagevolume.py: FAIL
VirtualSystemManagementService - 15_mod_system_settings.py: FAIL
VirtualSystemMigrationService - 01_migratable_host.py: FAIL
VirtualSystemMigrationService - 02_host_migrate_type.py: FAIL
VirtualSystemMigrationService - 06_remote_live_migration.py: FAIL
VirtualSystemMigrationService - 07_remote_offline_migration.py: FAIL
VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: FAIL
VirtualSystemSettingDataComponent - 02_reverse.py: FAIL
VirtualSystemSettingDataComponent - 03_vssdc_fwd_errs.py: FAIL
VirtualSystemSettingDataComponent - 04_vssdc_rev_errs.py: FAIL
VirtualSystemSnapshotService - 03_create_snapshot.py: FAIL
=================================================
XFAIL Test Summary:
ComputerSystem - 33_suspend_reboot.py: XFAIL
VirtualSystemManagementService - 16_removeresource.py: XFAIL
=================================================
SKIP Test Summary:
ComputerSystem - 02_nosystems.py: SKIP
LogicalDisk - 02_nodevs.py: SKIP
NetworkPort - 03_user_netport.py: SKIP
ResourcePoolConfigurationService - 08_CreateDiskResourcePool.py: SKIP
=================================================
Full report:
--------------------------------------------------------------------
AllocationCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
AllocationCapabilities - 02_alloccap_gi_errs.py: PASS
--------------------------------------------------------------------
ComputerSystem - 01_enum.py: PASS
--------------------------------------------------------------------
ComputerSystem - 02_nosystems.py: SKIP
--------------------------------------------------------------------
ComputerSystem - 03_defineVS.py: PASS
--------------------------------------------------------------------
ComputerSystem - 04_defineStartVS.py: PASS
--------------------------------------------------------------------
ComputerSystem - 05_activate_defined_start.py: PASS
--------------------------------------------------------------------
ComputerSystem - 06_paused_active_suspend.py: PASS
--------------------------------------------------------------------
ComputerSystem - 22_define_suspend.py: PASS
--------------------------------------------------------------------
ComputerSystem - 23_pause_pause.py: PASS
--------------------------------------------------------------------
ComputerSystem - 27_define_pause_errs.py: PASS
--------------------------------------------------------------------
ComputerSystem - 32_start_reboot.py: PASS
--------------------------------------------------------------------
ComputerSystem - 33_suspend_reboot.py: XFAIL
ERROR - Got CIM error CIM_ERR_NOT_SUPPORTED: State not supported with return code 7
ERROR - Exception: Unable Suspend dom 'test_domain'
InvokeMethod(RequestStateChange): CIM_ERR_NOT_SUPPORTED: State not supported
Bug:<00012>
--------------------------------------------------------------------
ComputerSystem - 34_start_disable.py: PASS
--------------------------------------------------------------------
ComputerSystem - 35_start_reset.py: PASS
--------------------------------------------------------------------
ComputerSystem - 40_RSC_start.py: PASS
--------------------------------------------------------------------
ComputerSystem - 41_cs_to_settingdefinestate.py: PASS
--------------------------------------------------------------------
ComputerSystem - 42_cs_gi_errs.py: PASS
--------------------------------------------------------------------
ComputerSystemIndication - 01_created_indication.py: PASS
--------------------------------------------------------------------
ComputerSystemMigrationJobIndication - 01_csmig_ind_for_offline_mig.py: PASS
--------------------------------------------------------------------
ElementAllocatedFromPool - 01_forward.py: FAIL
ERROR - Xen_ElementAllocatedFromPool returned 0 ResourcePool objects for domain 'hd_domain'
--------------------------------------------------------------------
ElementAllocatedFromPool - 02_reverse.py: PASS
--------------------------------------------------------------------
ElementAllocatedFromPool - 03_reverse_errs.py: PASS
--------------------------------------------------------------------
ElementAllocatedFromPool - 04_forward_errs.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 01_forward.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 02_reverse.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 03_forward_errs.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
ElementCapabilities - 05_hostsystem_cap.py: PASS
--------------------------------------------------------------------
ElementConforms - 01_forward.py: PASS
--------------------------------------------------------------------
ElementConforms - 02_reverse.py: PASS
--------------------------------------------------------------------
ElementConforms - 03_ectp_fwd_errs.py: PASS
--------------------------------------------------------------------
ElementConforms - 04_ectp_rev_errs.py: PASS
--------------------------------------------------------------------
ElementSettingData - 01_forward.py: PASS
--------------------------------------------------------------------
ElementSettingData - 03_esd_assoc_with_rasd_errs.py: PASS
--------------------------------------------------------------------
EnabledLogicalElementCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
EnabledLogicalElementCapabilities - 02_elecap_gi_errs.py: PASS
--------------------------------------------------------------------
HostSystem - 01_enum.py: PASS
--------------------------------------------------------------------
HostSystem - 02_hostsystem_to_rasd.py: PASS
--------------------------------------------------------------------
HostSystem - 03_hs_to_settdefcap.py: PASS
--------------------------------------------------------------------
HostSystem - 04_hs_to_EAPF.py: PASS
--------------------------------------------------------------------
HostSystem - 05_hs_gi_errs.py: PASS
--------------------------------------------------------------------
HostSystem - 06_hs_to_vsms.py: PASS
--------------------------------------------------------------------
HostedAccessPoint - 01_forward.py: PASS
--------------------------------------------------------------------
HostedAccessPoint - 02_reverse.py: PASS
--------------------------------------------------------------------
HostedDependency - 01_forward.py: PASS
--------------------------------------------------------------------
HostedDependency - 02_reverse.py: PASS
--------------------------------------------------------------------
HostedDependency - 03_enabledstate.py: PASS
--------------------------------------------------------------------
HostedDependency - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
HostedResourcePool - 01_forward.py: PASS
--------------------------------------------------------------------
HostedResourcePool - 02_reverse.py: PASS
--------------------------------------------------------------------
HostedResourcePool - 03_forward_errs.py: PASS
--------------------------------------------------------------------
HostedResourcePool - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
HostedService - 01_forward.py: PASS
--------------------------------------------------------------------
HostedService - 02_reverse.py: PASS
--------------------------------------------------------------------
HostedService - 03_forward_errs.py: PASS
--------------------------------------------------------------------
HostedService - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
KVMRedirectionSAP - 01_enum_KVMredSAP.py: PASS
--------------------------------------------------------------------
LogicalDisk - 01_disk.py: PASS
--------------------------------------------------------------------
LogicalDisk - 02_nodevs.py: SKIP
ERROR - System has defined domains; unable to run
--------------------------------------------------------------------
LogicalDisk - 03_ld_gi_errs.py: PASS
--------------------------------------------------------------------
Memory - 01_memory.py: PASS
--------------------------------------------------------------------
Memory - 02_defgetmem.py: PASS
--------------------------------------------------------------------
Memory - 03_mem_gi_errs.py: PASS
--------------------------------------------------------------------
NetworkPort - 01_netport.py: PASS
--------------------------------------------------------------------
NetworkPort - 02_np_gi_errors.py: PASS
--------------------------------------------------------------------
NetworkPort - 03_user_netport.py: SKIP
--------------------------------------------------------------------
Processor - 01_processor.py: PASS
--------------------------------------------------------------------
Processor - 02_definesys_get_procs.py: PASS
--------------------------------------------------------------------
Processor - 03_proc_gi_errs.py: PASS
--------------------------------------------------------------------
Profile - 01_enum.py: PASS
--------------------------------------------------------------------
Profile - 02_profile_to_elec.py: PASS
--------------------------------------------------------------------
Profile - 03_rprofile_gi_errs.py: PASS
--------------------------------------------------------------------
RASD - 01_verify_rasd_fields.py: PASS
--------------------------------------------------------------------
RASD - 02_enum.py: PASS
--------------------------------------------------------------------
RASD - 03_rasd_errs.py: PASS
--------------------------------------------------------------------
RASD - 04_disk_rasd_size.py: PASS
--------------------------------------------------------------------
RASD - 05_disk_rasd_emu_type.py: PASS
--------------------------------------------------------------------
RASD - 06_parent_net_pool.py: PASS
--------------------------------------------------------------------
RASD - 07_parent_disk_pool.py: PASS
--------------------------------------------------------------------
RedirectionService - 01_enum_crs.py: PASS
--------------------------------------------------------------------
RedirectionService - 02_enum_crscap.py: PASS
--------------------------------------------------------------------
RedirectionService - 03_RedirectionSAP_errs.py: PASS
--------------------------------------------------------------------
ReferencedProfile - 01_verify_refprof.py: PASS
--------------------------------------------------------------------
ReferencedProfile - 02_refprofile_errs.py: PASS
--------------------------------------------------------------------
ResourceAllocationFromPool - 01_forward.py: FAIL
ERROR - No RASD associated with NetworkPool/cimtest-networkpool
--------------------------------------------------------------------
ResourceAllocationFromPool - 02_reverse.py: FAIL
ERROR - No associated pool with RAFP_dom/00:16:3e:bd:06:6a
CIM_ERR_FAILED: Unable to determine pool of `RAFP_dom/00:16:3e:bd:06:6a'
--------------------------------------------------------------------
ResourceAllocationFromPool - 03_forward_errs.py: PASS
--------------------------------------------------------------------
ResourceAllocationFromPool - 04_reverse_errs.py: PASS
--------------------------------------------------------------------
ResourceAllocationFromPool - 05_RAPF_err.py: PASS
--------------------------------------------------------------------
ResourcePool - 01_enum.py: PASS
--------------------------------------------------------------------
ResourcePool - 02_rp_gi_errors.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationCapabilities - 02_rpcc_gi_errs.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 01_enum.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 02_rcps_gi_errors.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 03_CreateResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 04_CreateChildResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 05_AddResourcesToResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 06_RemoveResourcesFromResourcePool.py: PASS
--------------------------------------------------------------------
ResourcePoolConfigurationService - 07_DeleteResourcePool.py: FAIL
ERROR - Exception in create_pool()
ERROR - Exception details: (1, u'CIM_ERR_FAILED: Pool with that name already exists')
ERROR - Error in networkpool creation
InvokeMethod(CreateChildResourcePool): CIM_ERR_FAILED: Pool with that name already exists
--------------------------------------------------------------------
ResourcePoolConfigurationService - 08_CreateDiskResourcePool.py: SKIP
--------------------------------------------------------------------
ResourcePoolConfigurationService - 09_DeleteDiskPool.py: FAIL
ERROR - Exception in create_pool()
ERROR - Exception details: (1, u'CIM_ERR_FAILED: Settings Error: Storage pool creation not supported in this version of libvirt')
ERROR - Failed to create diskpool 'dp_pool'
InvokeMethod(CreateChildResourcePool): CIM_ERR_FAILED: Settings Error: Storage pool creation not supported in this version of libvirt
--------------------------------------------------------------------
ResourcePoolConfigurationService - 10_create_storagevolume.py: FAIL
ERROR - Exception details: (1, u'CIM_ERR_FAILED: Unable to get attributes for resource: This function does not support this resource type')
InvokeMethod(CreateResourceInPool): CIM_ERR_FAILED: Unable to get attributes for resource: This function does not support this resource type
--------------------------------------------------------------------
ServiceAccessBySAP - 01_forward.py: PASS
--------------------------------------------------------------------
ServiceAccessBySAP - 02_reverse.py: PASS
--------------------------------------------------------------------
ServiceAffectsElement - 01_forward.py: PASS
--------------------------------------------------------------------
ServiceAffectsElement - 02_reverse.py: PASS
--------------------------------------------------------------------
SettingsDefine - 01_forward.py: PASS
--------------------------------------------------------------------
SettingsDefine - 02_reverse.py: PASS
--------------------------------------------------------------------
SettingsDefine - 03_sds_fwd_errs.py: PASS
--------------------------------------------------------------------
SettingsDefine - 04_sds_rev_errs.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 01_forward.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 03_forward_errs.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 04_forward_vsmsdata.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 05_reverse_vsmcap.py: PASS
--------------------------------------------------------------------
SystemDevice - 01_forward.py: PASS
--------------------------------------------------------------------
SystemDevice - 02_reverse.py: PASS
--------------------------------------------------------------------
SystemDevice - 03_fwderrs.py: PASS
--------------------------------------------------------------------
VSSD - 01_enum.py: PASS
--------------------------------------------------------------------
VSSD - 02_bootldr.py: PASS
--------------------------------------------------------------------
VSSD - 03_vssd_gi_errs.py: PASS
--------------------------------------------------------------------
VSSD - 04_vssd_to_rasd.py: PASS
--------------------------------------------------------------------
VSSD - 05_set_uuid.py: PASS
--------------------------------------------------------------------
VSSD - 06_duplicate_uuid.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementCapabilities - 02_vsmcap_gi_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 01_definesystem_name.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 02_destroysystem.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 03_definesystem_ess.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 04_definesystem_ers.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 05_destroysystem_neg.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 06_addresource.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 07_addresource_neg.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 08_modifyresource.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 09_procrasd_persist.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 10_hv_version.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 11_define_memrasdunits.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 12_referenced_config.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 13_refconfig_additional_devs.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 14_define_sys_disk.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 15_mod_system_settings.py: FAIL
ERROR - CS instance not returned for rstest_domain.
ERROR - Failed to destroy rstest_domain
ERROR - Got CIM error CIM_ERR_NOT_FOUND: Referenced domain `rstest_domain' does not exist: Domain not found: xenUnifiedDomainLookupByName with return code 6
CIM_ERR_NOT_FOUND: Referenced domain `rstest_domain' does not exist: Domain not found: xenUnifiedDomainLookupByName
InvokeMethod(DestroySystem): CIM_ERR_NOT_FOUND: Referenced domain `rstest_domain' does not exist: Domain not found: xenUnifiedDomainLookupByName
--------------------------------------------------------------------
VirtualSystemManagementService - 16_removeresource.py: XFAIL
ERROR - 0 RASD insts for domain/mouse:xen
CIM_ERR_NOT_FOUND: No such instance (no device domain/mouse:xen)
Bug:<00014>
--------------------------------------------------------------------
VirtualSystemManagementService - 17_removeresource_neg.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 18_define_sys_bridge.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 19_definenetwork_ers.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 20_verify_vnc_password.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 21_createVS_verifyMAC.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 22_addmulti_brg_interface.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 23_verify_duplicate_mac_err.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationCapabilities - 02_vsmc_gi_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationService - 01_migratable_host.py: FAIL
ERROR - Error create domain dom_migrate
--------------------------------------------------------------------
VirtualSystemMigrationService - 02_host_migrate_type.py: FAIL
ERROR - Migration verification for 'dom_migrate' failed
--------------------------------------------------------------------
VirtualSystemMigrationService - 05_migratable_host_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationService - 06_remote_live_migration.py: FAIL
ERROR - Migration timed out....
ERROR - Increase timeout > 40 and try again..
ERROR - Got CIM error CIM_ERR_NOT_FOUND: Referenced domain `VM_frm_elm3b43.beaverton.ibm.com' does not exist: invalid argument in __virGetDomain with return code 6
InvokeMethod(DestroySystem): CIM_ERR_NOT_FOUND: Referenced domain `VM_frm_elm3b43.beaverton.ibm.com' does not exist: invalid argument in __virGetDomain
--------------------------------------------------------------------
VirtualSystemMigrationService - 07_remote_offline_migration.py: FAIL
ERROR - Got CIM error CIM_ERR_FAILED: Failed to lookup resulting system with return code 1
ERROR - Error define domain VM_frm_elm3b43.beaverton.ibm.com
ERROR - Error setting up the guest
InvokeMethod(DefineSystem): CIM_ERR_FAILED: Failed to lookup resulting system
--------------------------------------------------------------------
VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: FAIL
ERROR - Got CIM error CIM_ERR_FAILED: Failed to lookup resulting system with return code 1
ERROR - Error define domain VM_frm_elm3b43.beaverton.ibm.com
ERROR - Error setting up the guest
InvokeMethod(DefineSystem): CIM_ERR_FAILED: Failed to lookup resulting system
--------------------------------------------------------------------
VirtualSystemMigrationSettingData - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationSettingData - 02_vsmsd_gi_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 01_forward.py: PASS
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 02_reverse.py: FAIL
ERROR - Got CIM error CIM_ERR_FAILED: ResourceSettings Error: Conflicting MAC Addresses with return code 1
ERROR - Failed to define the dom: VSSDC_dom
InvokeMethod(DefineSystem): CIM_ERR_FAILED: ResourceSettings Error: Conflicting MAC Addresses
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 03_vssdc_fwd_errs.py: FAIL
ERROR - Got CIM error CIM_ERR_FAILED: ResourceSettings Error: Conflicting MAC Addresses with return code 1
ERROR - Unable to define domain domu1
InvokeMethod(DefineSystem): CIM_ERR_FAILED: ResourceSettings Error: Conflicting MAC Addresses
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 04_vssdc_rev_errs.py: FAIL
ERROR - Got CIM error CIM_ERR_FAILED: Unable to start domain: POST operation failed: (xend.err 'Device 51712 (vbd) could not be connected.\nFile /tmp/default-xen-dimage is loopback-mounted through /dev/loop5,\nwhich is mounted in a guest domain,\nand so cannot be mounted now.') with return code 1
ERROR - Unable to start domain domu1
InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Unable to start domain: POST operation failed: (xend.err 'Device 51712 (vbd) could not be connected.\nFile /tmp/default-xen-dimage is loopback-mounted through /dev/loop5,\nwhich is mounted in a guest domain,\nand so cannot be mounted now.')
--------------------------------------------------------------------
VirtualSystemSnapshotService - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemSnapshotService - 02_vs_sservice_gi_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemSnapshotService - 03_create_snapshot.py: FAIL
ERROR - Got CIM error CIM_ERR_FAILED: Unable to start domain: POST operation failed: (xend.err 'Device 51712 (vbd) could not be connected.\nFile /tmp/default-xen-dimage is loopback-mounted through /dev/loop5,\nwhich is mounted in a guest domain,\nand so cannot be mounted now.') with return code 1
ERROR - Exception: Failed to start the defined domain: snapshot_vm
ERROR - Failed to remove snapshot file for snapshot_vm
InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Unable to start domain: POST operation failed: (xend.err 'Device 51712 (vbd) could not be connected.\nFile /tmp/default-xen-dimage is loopback-mounted through /dev/loop5,\nwhich is mounted in a guest domain,\nand so cannot be mounted now.')
--------------------------------------------------------------------
VirtualSystemSnapshotServiceCapabilities - 01_enum.py: PASS
--------------------------------------------------------------------
VirtualSystemSnapshotServiceCapabilities - 02_vs_sservicecap_gi_errs.py: PASS
--------------------------------------------------------------------

From kaitlin at linux.vnet.ibm.com Fri Sep 4 21:25:14 2009
From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert)
Date: Fri, 04 Sep 2009 14:25:14 -0700
Subject: [Libvirt-cim] What is the latest version of libvirt-CIM that worked with libvirt-0.3.3?
In-Reply-To:
References:
Message-ID: <4AA185BA.4090609@linux.vnet.ibm.com>

Hi Dayne,

It looks like it's been a while since we've done a Xen test run, and I don't have results from the latest libvirt-cim release. However, I just checked out the sources and did a build. It looks like there is a compile issue, which I've submitted a patch for.

I also just sent a test run using libvirt 0.3.3 and paravirt Xen. There are more failures than I'd like to see, but I think some of them might be test case issues.

What issue are you hitting? Is this an issue with paravirt Xen or full virt Xen?

Medlyn, Dayne (VSL - Ft Collins) wrote:
> What is the latest version of libvirt-CIM that worked with libvirt-0.3.3?
>
> Thanks.
>
> Dayne
>
> _______________________________________________
> Libvirt-cim mailing list
> Libvirt-cim at redhat.com
> https://www.redhat.com/mailman/listinfo/libvirt-cim

--
Kaitlin Rupert
IBM Linux Technology Center
kaitlin at linux.vnet.ibm.com

From deeptik at linux.vnet.ibm.com Tue Sep 8 07:22:23 2009
From: deeptik at linux.vnet.ibm.com (Deepti B.
Kalakeri)
Date: Tue, 08 Sep 2009 07:22:23 -0000
Subject: [Libvirt-cim] [PATCH 1 of 2] [TEST] Modified pool.py to support RPCS CreateResourceInPool
In-Reply-To:
References:
Message-ID:

# HG changeset patch
# User Deepti B. Kalakeri
# Date 1252394005 25200
# Node ID fdc0d9aef3427500032bbd35caba0e5977be47f6
# Parent 30196cc506c07d81642c94a01fc65b34421c0714
[TEST] Modified pool.py to support RPCS CreateResourceInPool.

Added the following two functions which are used in RPCS/10*py and RPCS/11*py:
1) get_stovol_rasd_from_sdc() to get the stovol rasd from sdc
2) get_stovol_default_settings() to get default sto vol settings

Also, modified common_util.py to remove the backed up exportfs file.
Added RAW_VOL_TYPE which is the FormatType supported by RPCS currently.
Once this patch gets accepted we can modify RPCS/10*py to refer to these functions.
Tested with KVM and current sources on SLES11.
Signed-off-by: Deepti B. Kalakeri

diff -r 30196cc506c0 -r fdc0d9aef342 suites/libvirt-cim/lib/XenKvmLib/common_util.py
--- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py	Wed Sep 02 05:11:16 2009 -0700
+++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py	Tue Sep 08 00:13:25 2009 -0700
@@ -531,7 +531,7 @@
     # Remove the temp dir created .
     clean_temp_files(server, src_dir, dst_dir)
-
+
     # Restore the original exports file.
     if os.path.exists(back_exports_file):
         os.remove(exports_file)
@@ -551,6 +551,8 @@
     try:
         # Backup the original exports file.
         if (os.path.exists(exports_file)):
+            if os.path.exists(back_exports_file):
+                os.remove(back_exports_file)
             move_file(exports_file, back_exports_file)
         fd = open(exports_file, "w")
         line = "\n %s %s(rw)" %(src_dir_for_mnt, server)
diff -r 30196cc506c0 -r fdc0d9aef342 suites/libvirt-cim/lib/XenKvmLib/pool.py
--- a/suites/libvirt-cim/lib/XenKvmLib/pool.py	Wed Sep 02 05:11:16 2009 -0700
+++ b/suites/libvirt-cim/lib/XenKvmLib/pool.py	Tue Sep 08 00:13:25 2009 -0700
@@ -34,6 +34,7 @@
 from CimTest.CimExt import CIMClassMOF
 from XenKvmLib.vxml import NetXML, PoolXML
 from XenKvmLib.xm_virt_util import virsh_version
+from XenKvmLib.vsms import RASD_TYPE_STOREVOL

 cim_errno = pywbem.CIM_ERR_NOT_SUPPORTED
 cim_mname = "CreateChildResourcePool"
@@ -48,6 +49,9 @@
 LOGICAL_POOL = 6L
 SCSI_POOL = 7L

+#Volume types
+RAW_VOL_TYPE = 1
+
 def pool_cn_to_rasd_cn(pool_cn, virt):
     if pool_cn.find('ProcessorPool') >= 0:
         return get_typed_class(virt, "ProcResourceAllocationSettingData")
@@ -297,3 +301,41 @@
         status = PASS

     return status
+
+def get_stovol_rasd_from_sdc(virt, server, dp_inst_id):
+    rasd = None
+    ac_cn = get_typed_class(virt, "AllocationCapabilities")
+    an_cn = get_typed_class(virt, "SettingsDefineCapabilities")
+    key_list = {"InstanceID" : dp_inst_id}
+
+    try:
+        inst = GetInstance(server, ac_cn, key_list)
+        rasd = Associators(server, an_cn, ac_cn, InstanceID=inst.InstanceID)
+    except Exception, detail:
+        logger.error("Exception: %s", detail)
+        return FAIL, None
+
+    return PASS, rasd
+
+def get_stovol_default_settings(virt, server, dp_cn,
+                                pool_name, path, vol_name):
+
+    dp_inst_id = "%s/%s" % (dp_cn, pool_name)
+    status, dp_rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id)
+    if status != PASS:
+        logger.error("Failed to get the StorageVol RASD's")
+        return None
+
+    for dpool_rasd in dp_rasds:
+        if dpool_rasd['ResourceType'] == RASD_TYPE_STOREVOL and \
+           'Default' in dpool_rasd['InstanceID']:
+
+            dpool_rasd['PoolID'] = dp_inst_id
+            dpool_rasd['Path'] = path
+            dpool_rasd['VolumeName'] = vol_name
+            break
+
+    if not pool_name in dpool_rasd['PoolID']:
+        return None
+
+    return dpool_rasd

From deeptik at linux.vnet.ibm.com Tue Sep 8 07:22:24 2009
From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri)
Date: Tue, 08 Sep 2009 07:22:24 -0000
Subject: [Libvirt-cim] [PATCH 2 of 2] [TEST] Added new tc to verify the RPCS error values
In-Reply-To:
References:
Message-ID: <465cfe3802c691e2315d.1252394544@elm3a148.beaverton.ibm.com>

# HG changeset patch
# User Deepti B. Kalakeri
# Date 1252394401 25200
# Node ID 465cfe3802c691e2315dc47eb07790df6c96fb77
# Parent fdc0d9aef3427500032bbd35caba0e5977be47f6
[TEST] Added new tc to verify the RPCS error values.

This test case verifies that creating a StorageVol using the CreateResourceInPool method of RPCS returns an error when invalid values are passed.
The test case checks for the errors when:
1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE
2) Trying to create a Vol in a netfs storage pool
3) Trying to create 2 Vol in the same Path
Tested with KVM and current sources on SLES11.
Signed-off-by: Deepti B. Kalakeri

diff -r fdc0d9aef342 -r 465cfe3802c6 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_storagevolume_errs.py
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_storagevolume_errs.py	Tue Sep 08 00:20:01 2009 -0700
@@ -0,0 +1,226 @@
+#!/usr/bin/python
+#
+# Copyright 2009 IBM Corp.
+#
+# Authors:
+#    Deepti B. Kalakeri
+#
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the creation of the StorageVol using the +# CreateResourceInPool method of RPCS returns an error when invalid values +# are passed. +# The test case checks for the errors when: +# 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE +# 2) Trying to create a Vol in a netfs storage pool +# 3) Trying to create 2 Vol in the same Path +# +# -Date: 04-09-2009 + +import sys +import os +from VirtLib import utils +from random import randint +from pywbem.cim_types import Uint64 +from pywbem import CIM_ERR_FAILED, CIMError +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ + get_provider_version +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes +from XenKvmLib import rpcs_service +from XenKvmLib.assoc import Associators +from XenKvmLib.enumclass import GetInstance, EnumNames +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool, nfs_netfs_setup, \ + netfs_cleanup +from XenKvmLib.pool import create_pool, undefine_diskpool, RAW_VOL_TYPE, \ + DIR_POOL, NETFS_POOL, \ + get_stovol_rasd_from_sdc, get_stovol_default_settings + +dir_pool_attr = { "Path" : "/tmp" } +vol_name = "cimtest-vol.img" + +INVALID_FTYPE = RAW_VOL_TYPE + randint(20,100) +exp_err_no = CIM_ERR_FAILED +exp_err_values = { 'INVALID_FTYPE': { 'msg' : "Unable to generate XML "\ + "for new resource" }, + 'NETFS_POOL' : { 'msg' : "This function does not "\ + "support this resource type"}, + 'DUP_VOL_PATH' : { 'msg' : "Unable to create storage volume"} + } + +def 
get_pool_attr(server, pool_type, dp_types): + pool_attr = dir_pool_attr + + if pool_type == dp_types['NETFS_POOL']: + status , src_mnt_dir, dir_mnt_dir = nfs_netfs_setup(server) + if status != PASS: + logger.error("Failed to get pool_attr for NETFS diskpool type") + return FAIL, pool_attr + + pool_attr['SourceDirectory'] = src_mnt_dir + pool_attr['Host'] = server + pool_attr['Path'] = dir_mnt_dir + + return PASS, pool_attr + +def get_diskpool(server, virt, dp_cn, pool_name): + dp_inst = None + dpool_cn = get_typed_class(virt, dp_cn) + pools = EnumNames(server, dpool_cn) + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + for pool in pools: + if pool['InstanceID'] == dp_inst_id: + dp_inst = pool + break + + return dp_inst + +def verify_vol_err(server, virt, sv_settings, dp_inst, key): + status = FAIL + res = [FAIL] + try: + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + # For duplicate vol path verfication we should have been able to + # create the first dir pool successfully before attempting the next + if key == 'DUP_VOL_PATH' and res[0] == PASS: + # Trying to create the vol in the same vol path should return + # an error + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + except CIMError, (err_no, err_desc): + if exp_err_values[key]['msg'] in err_desc and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' with '%s'", + err_desc, key) + status=PASS + else: + logger.error("Failed to get the error message '%s'", + exp_err_values[key]['msg']) + + if status != PASS: + logger.error("Should not have been able to create Vol %s", vol_name) + + return status + +def cleanup_pool_vol(server, virt, pool_name, clean_pool, exp_vol_path): + try: + if clean_pool == True: + status = destroy_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to destroy 
diskpool '%s'" % pool_name) + else: + status = undefine_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to undefine diskpool '%s'" \ + % pool_name) + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + logger.info("'%s' was not removed, please remove it manually", + exp_vol_path) + return PASS + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_storagepool_changes: + logger.info("Storage Volume creation support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_storagepool_changes) + return SKIP + + dp_types = { "NETFS_POOL" : NETFS_POOL } + dp_types['DUP_VOL_PATH'] = dp_types['INVALID_FTYPE'] = DIR_POOL + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (dir_pool_attr['Path'], vol_name) + + for pool_name, pool_type in dp_types.iteritems(): + status = FAIL + clean_pool=True + try: + status, pool_attr = get_pool_attr(server, pool_type, dp_types) + if status != PASS: + return status + + # err_key will contain either INVALID_FTYPE/DUP_VOL_PATH/NETFS_POOL + # to be able access the err mesg + err_key = pool_name + + if pool_type == DIR_POOL: + pool_name = default_pool_name + clean_pool=False + else: + # Creating NETFS pool to verify RPCS error + status = create_pool(server, virt, pool_name, pool_attr, + mode_type=pool_type, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + 
if err_key == "INVALID_FTYPE": + sv_rasd['FormatType'] = Uint64(INVALID_FTYPE) + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + status = verify_vol_err(server, virt, sv_settings, dp_inst, err_key) + if status != PASS : + raise Exception("Failed to verify the Invlaid '%s' ", err_key) + + if err_key == 'NETFS_POOL': + netfs_cleanup(server, pool_attr) + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + if err_key == 'NETFS_POOL': + netfs_cleanup(server, pool_attr) + break + + cleanup_pool_vol(server, virt, pool_name, clean_pool, exp_vol_path) + + return status +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Tue Sep 8 07:22:22 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 08 Sep 2009 07:22:22 -0000 Subject: [Libvirt-cim] [PATCH 0 of 2] [TEST] Added new tc to verify RPCS error values. Message-ID: From deeptik at linux.vnet.ibm.com Tue Sep 8 19:24:36 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 08 Sep 2009 19:24:36 -0000 Subject: [Libvirt-cim] [PATCH 3 of 3] [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: References: Message-ID: <616c8e4217a138a001a9.1252437876@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1252437748 25200 # Node ID 616c8e4217a138a001a9223363c3fdd2bb448f13 # Parent c127b5047569b1a7fbb7e2a266e8e8fea71e762e [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. 
Kalakeri diff -r c127b5047569 -r 616c8e4217a1 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/13_delete_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/13_delete_storagevolume_errs.py Tue Sep 08 12:22:28 2009 -0700 @@ -0,0 +1,191 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the deletion of the StorageVol using the +# DeleteResourceInPool method of RPCS returns error when invalid values are +# passed. 
+# +# -Date: 08-09-2009 + +import sys +import os +from VirtLib import utils +from CimTest.Globals import logger +from pywbem import CIM_ERR_FAILED, CIM_ERR_INVALID_PARAMETER, CIMError +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ + get_provider_version +from XenKvmLib import rpcs_service +from XenKvmLib.assoc import Associators +from XenKvmLib.enumclass import GetInstance, EnumNames +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool +from XenKvmLib.pool import create_pool, undefine_diskpool, DIR_POOL, \ + libvirt_rasd_spool_del_changes, get_diskpool, \ + get_stovol_default_settings, \ + get_stovol_rasd_from_sdc + +pool_attr = { 'Path' : "/tmp" } +vol_name = "cimtest-vol.img" +invalid_scen = { "INVALID_ADDRESS" : { 'val' : 'Junkvol_path', + 'msg' : 'no storage vol with '\ + 'matching path' }, + "NO_ADDRESS" : { 'msg' :'Missing Address in '\ + 'resource RASD' }, + "MISSING_RESOURCE" : { 'msg' :"Missing argument `Resource'"}, + "MISSING_POOL" : { 'msg' :"Missing argument `Pool'"} + } + + +def get_sto_vol_rasd(virt, server, dp_cn, pool_name, exp_vol_path): + dv_rasds = None + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol for '%s' vol", exp_vol_path) + return FAIL + + for item in rasds: + if item['Address'] == exp_vol_path and item['PoolID'] == dp_inst_id: + dv_rasds = item + break + + return dv_rasds + + +def verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, pool_name, + exp_vol_path, dp_inst): + for err_scen in invalid_scen.keys(): + logger.info("Verifying errors for '%s'....", err_scen) + status = FAIL + del_res = [FAIL] + try: + res_settings = get_sto_vol_rasd(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings == None: + raise 
Exception("Failed to get the resource settings for '%s'" \ + " Vol" % vol_name) + if not "MISSING" in err_scen: + exp_err_no = CIM_ERR_FAILED + if "NO_ADDRESS" in err_scen: + del res_settings['Address'] + elif "INVALID_ADDRESS" in err_scen: + res_settings['Address'] = invalid_scen[err_scen]['val'] + + resource = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource, + Pool=dp_inst) + else: + exp_err_no = CIM_ERR_INVALID_PARAMETER + if err_scen == "MISSING_RESOURCE": + del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) + elif err_scen == "MISSING_POOL": + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) + + except CIMError, (err_no, err_desc): + if invalid_scen[err_scen]['msg'] in err_desc \ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' for '%s'", + err_desc, err_scen) + status=PASS + else: + logger.error("Failed to get the error message '%s'", + invalid_scen[err_scen]['msg']) + + if del_res[0] == PASS: + logger.error("Should not have been able to delete Vol %s", vol_name) + return FAIL + + return status + +def cleanup_pool_vol(server, exp_vol_path): + try: + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it " \ + "manually" % exp_vol_path) + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: + logger.info("Storage Volume deletion support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_spool_del_changes) + return SKIP + + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % 
(pool_attr['Path'], vol_name) + + pool_name = default_pool_name + pool_type = DIR_POOL + status = FAIL + res = del_res = [FAIL] + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + if res[0] != PASS: + raise Exception("Failed to create the Vol %s" % vol_name) + + status = verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, + pool_name, exp_vol_path, dp_inst) + if status != PASS : + raise Exception("Failed to verify the error") + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + ret = cleanup_pool_vol(server, exp_vol_path) + + return status +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Tue Sep 8 19:24:34 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 08 Sep 2009 19:24:34 -0000 Subject: [Libvirt-cim] [PATCH 1 of 3] [TEST] Adding get_diskpool() to pool.py In-Reply-To: References: Message-ID: <4c9b50a928295e90904b.1252437874@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1252412311 25200 # Node ID 4c9b50a928295e90904b2f560334cd2c398808af # Parent 465cfe3802c691e2315dc47eb07790df6c96fb77 [TEST] Adding get_diskpool() to pool.py Added get_diskpool() definition to pool.py as this will be referenced by RPCS/10*py, RPCS/11*py and RPCS/12*py. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. 
Kalakeri diff -r 465cfe3802c6 -r 4c9b50a92829 suites/libvirt-cim/lib/XenKvmLib/pool.py --- a/suites/libvirt-cim/lib/XenKvmLib/pool.py Tue Sep 08 00:20:01 2009 -0700 +++ b/suites/libvirt-cim/lib/XenKvmLib/pool.py Tue Sep 08 05:18:31 2009 -0700 @@ -25,7 +25,7 @@ from CimTest.ReturnCodes import PASS, FAIL, SKIP from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.const import get_provider_version, default_pool_name -from XenKvmLib.enumclass import EnumInstances, GetInstance +from XenKvmLib.enumclass import EnumInstances, GetInstance, EnumNames from XenKvmLib.assoc import Associators from VirtLib.utils import run_remote from XenKvmLib.xm_virt_util import virt2uri, net_list @@ -40,6 +40,7 @@ cim_mname = "CreateChildResourcePool" input_graphics_pool_rev = 757 libvirt_cim_child_pool_rev = 837 +libvirt_rasd_spool_del_changes = 971 DIR_POOL = 1L FS_POOL = 2L @@ -339,3 +340,16 @@ return None return dpool_rasd + +def get_diskpool(server, virt, dp_cn, pool_name): + dp_inst = None + dpool_cn = get_typed_class(virt, dp_cn) + pools = EnumNames(server, dpool_cn) + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + for pool in pools: + if pool['InstanceID'] == dp_inst_id: + dp_inst = pool + break + + return dp_inst From deeptik at linux.vnet.ibm.com Tue Sep 8 19:24:33 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 08 Sep 2009 19:24:33 -0000 Subject: [Libvirt-cim] [PATCH 0 of 3] [TEST] Added new tc to verify RPCS DeleteResourceInPool() Message-ID: This patchset depends on the "Added new tc to verify RPCS error values" changes. The patches should be applied on top of the "Added new tc to verify RPCS error values" patches. From deeptik at linux.vnet.ibm.com Tue Sep 8 19:24:35 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 08 Sep 2009 19:24:35 -0000 Subject: [Libvirt-cim] [PATCH 2 of 3] [TEST] Add new tc to verify the DeleteResourceInPool() In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B.
Kalakeri # Date 1252413103 25200 # Node ID c127b5047569b1a7fbb7e2a266e8e8fea71e762e # Parent 4c9b50a928295e90904b2f560334cd2c398808af [TEST] Add new tc to verify the DeleteResourceInPool(). Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 4c9b50a92829 -r c127b5047569 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_delete_storagevolume.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_delete_storagevolume.py Tue Sep 08 05:31:43 2009 -0700 @@ -0,0 +1,177 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the deletion of the StorageVol using the +# DeleteResourceInPool method of RPCS. 
+# +# -Date: 08-09-2009 + +import sys +import os +from VirtLib import utils +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ + get_provider_version +from XenKvmLib import rpcs_service +from XenKvmLib.assoc import Associators +from XenKvmLib.enumclass import GetInstance, EnumNames +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool +from XenKvmLib.pool import create_pool, undefine_diskpool, DIR_POOL, \ + libvirt_rasd_spool_del_changes, get_diskpool, \ + get_stovol_default_settings, \ + get_stovol_rasd_from_sdc + +pool_attr = { 'Path' : "/tmp" } +vol_name = "cimtest-vol.img" + +def get_sto_vol_rasd(virt, server, dp_cn, pool_name, exp_vol_path): + dv_rasds = None + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol for '%s' vol", exp_vol_path) + return FAIL + + for item in rasds: + if item['Address'] == exp_vol_path and item['PoolID'] == dp_inst_id: + dv_rasds = item + break + + return dv_rasds + +def cleanup_pool_vol(server, virt, pool_name, clean_vol, exp_vol_path): + try: + if clean_vol == True: + status = destroy_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to destroy diskpool '%s'" % pool_name) + else: + status = undefine_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to undefine diskpool '%s'" \ + % pool_name) + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it " \ + "manually" % exp_vol_path) + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + + + at 
do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: + logger.info("Storage Volume deletion support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_spool_del_changes) + return SKIP + + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) + + # For now the test case support only the deletion of dir type based + # vol, we can extend dp_types to include netfs etc ..... + dp_types = { "DISK_POOL_DIR" : DIR_POOL } + + for pool_name, pool_type in dp_types.iteritems(): + status = FAIL + res = del_res = [FAIL] + clean_pool=True + try: + if pool_type == DIR_POOL: + pool_name = default_pool_name + clean_pool=False + else: + status = create_pool(server, virt, pool_name, pool_attr, + mode_type=pool_type, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." 
+ rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + if res[0] != PASS: + raise Exception("Failed to create the Vol %s" % vol_name) + + res_settings = get_sto_vol_rasd(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings == None: + raise Exception("Failed to get the resource settings for '%s'" \ + " Vol" % vol_name) + + resource_setting = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource_setting, + Pool=dp_inst) + + res_settings = get_sto_vol_rasd(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings != None: + raise Exception("'%s' vol of '%s' pool was not deleted" \ + % (vol_name, pool_name)) + else: + logger.info("Vol '%s' of '%s' pool deleted successfully by " + "DeleteResourceInPool()", vol_name, pool_name) + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + ret = cleanup_pool_vol(server, virt, pool_name, + clean_pool, exp_vol_path) + if del_res[0] == PASS and ret == PASS : + status = PASS + else: + return FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) From dayne.medlyn at hp.com Tue Sep 8 21:03:45 2009 From: dayne.medlyn at hp.com (Medlyn, Dayne (VSL - Ft Collins)) Date: Tue, 8 Sep 2009 21:03:45 +0000 Subject: [Libvirt-cim] What is the latest version of libvirt-CIM that worked with libvirt-0.3.3? In-Reply-To: <4AA185BA.4090609@linux.vnet.ibm.com> References: <4AA185BA.4090609@linux.vnet.ibm.com> Message-ID: Thanks Kaitlin, I guess I assumed that libvirt-cim was highly dependent on the underlying libvirt version, as in it would not build/install with the wrong version of libvirt. From what you are saying there is no hard dependency, though I suspect there will be some functionality holes in libvirt-cim on older versions of libvirt. This is what I needed to know. Thanks for your help.
Dayne > -----Original Message----- > From: libvirt-cim-bounces at redhat.com [mailto:libvirt-cim- > bounces at redhat.com] On Behalf Of Kaitlin Rupert > Sent: Friday, September 04, 2009 3:25 PM > To: List for discussion and development of libvirt CIM > Subject: Re: [Libvirt-cim] What is the latest version of libvirt-CIM > that worked with libvirt-0.3.3? > > Hi Dayne, > > Looks like it's been awhile since we've done a Xen test run. I don't > have results from the latest libvirt-cim release. > > However, I just checked out sources and did a build. Looks like the is > a > compile issue, which I've submitted a patch for. I also just sent a > test run using libvirt 0.3.3 and paravirt Xen. There's more failures > than I'd like to see, but I think some of them might be test case > issues. > > What issue are you're hitting? Is this an issue with paravirt Xen or > full virt Xen? > > > Medlyn, Dayne (VSL - Ft Collins) wrote: > > What is the latest version of libvirt-CIM that worked with libvirt- > 0.3.3? > > > > Thanks. > > > > Dayne > > > > _______________________________________________ > > Libvirt-cim mailing list > > Libvirt-cim at redhat.com > > https://www.redhat.com/mailman/listinfo/libvirt-cim > > > -- > Kaitlin Rupert > IBM Linux Technology Center > kaitlin at linux.vnet.ibm.com > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim From kaitlin at linux.vnet.ibm.com Tue Sep 8 21:46:25 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 08 Sep 2009 14:46:25 -0700 Subject: [Libvirt-cim] What is the latest version of libvirt-CIM that worked with libvirt-0.3.3? 
In-Reply-To: References: <4AA185BA.4090609@linux.vnet.ibm.com> Message-ID: <4AA6D0B1.1030307@linux.vnet.ibm.com> Medlyn, Dayne (VSL - Ft Collins) wrote: > Thanks Kaitlin, > > I guess I assumed that libvirt-cim was highly dependent on the underlying libvirt version, as in it would not build/install with the wrong version of libvirt. > >>From what you are saying there is no hard dependency, though I suspect there will be some functionality wholes in libvirt-cim on older versions of libvirt. This is what I needed to know. Yes, this is correct. There are some cases we have to careful of - older versions of libvirt don't have storage pool support. This means that all of the recent storage pool related functionality in libvirt-cim doesn't work with older versions of libvirt, since older versions of libvirt doesn't have a notion of a storage pool. But for the majority of functionality, we work to make the providers as backward compatible as possible. And in general, we've tried to do regular test runs on both upstream versions of libvirt as well as 0.3.3. > > Thanks for your help. > > Dayne > > >> -----Original Message----- >> From: libvirt-cim-bounces at redhat.com [mailto:libvirt-cim- >> bounces at redhat.com] On Behalf Of Kaitlin Rupert >> Sent: Friday, September 04, 2009 3:25 PM >> To: List for discussion and development of libvirt CIM >> Subject: Re: [Libvirt-cim] What is the latest version of libvirt-CIM >> that worked with libvirt-0.3.3? >> >> Hi Dayne, >> >> Looks like it's been awhile since we've done a Xen test run. I don't >> have results from the latest libvirt-cim release. >> >> However, I just checked out sources and did a build. Looks like the is >> a >> compile issue, which I've submitted a patch for. I also just sent a >> test run using libvirt 0.3.3 and paravirt Xen. There's more failures >> than I'd like to see, but I think some of them might be test case >> issues. >> >> What issue are you're hitting? 
Is this an issue with paravirt Xen or >> full virt Xen? >> >> >> Medlyn, Dayne (VSL - Ft Collins) wrote: >>> What is the latest version of libvirt-CIM that worked with libvirt- >> 0.3.3? >>> Thanks. >>> >>> Dayne >>> >>> _______________________________________________ >>> Libvirt-cim mailing list >>> Libvirt-cim at redhat.com >>> https://www.redhat.com/mailman/listinfo/libvirt-cim >> >> -- >> Kaitlin Rupert >> IBM Linux Technology Center >> kaitlin at linux.vnet.ibm.com >> >> _______________________________________________ >> Libvirt-cim mailing list >> Libvirt-cim at redhat.com >> https://www.redhat.com/mailman/listinfo/libvirt-cim > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Tue Sep 8 22:05:51 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 08 Sep 2009 15:05:51 -0700 Subject: [Libvirt-cim] [PATCH 2 of 2] [TEST] Added new tc to verify the RPCS error values In-Reply-To: <465cfe3802c691e2315d.1252394544@elm3a148.beaverton.ibm.com> References: <465cfe3802c691e2315d.1252394544@elm3a148.beaverton.ibm.com> Message-ID: <4AA6D53F.7000008@linux.vnet.ibm.com> Deepti B. Kalakeri wrote: > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1252394401 25200 > # Node ID 465cfe3802c691e2315dc47eb07790df6c96fb77 > # Parent fdc0d9aef3427500032bbd35caba0e5977be47f6 > [TEST] Added new tc to verify the RPCS error values. > > This test case verifies the creation of the StorageVol using the > CreateResourceInPool method of RPCS returns an error when invalid values > are passed. 
> The test case checks for the errors when: > 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE > 2) Trying to create a Vol in a netfs storage pool > 3) Trying to create 2 Vol in the same Path > > Tested with KVM and current sources on SLES11. > Signed-off-by: Deepti B. Kalakeri > > diff -r fdc0d9aef342 -r 465cfe3802c6 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_storagevolume_errs.py > --- /dev/null Thu Jan 01 00:00:00 1970 +0000 > +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_storagevolume_errs.py Tue Sep 08 00:20:01 2009 -0700 > @@ -0,0 +1,226 @@ > +#!/usr/bin/python > +# > +# Copyright 2009 IBM Corp. > +# > +# Authors: > +# Deepti B. Kalakeri > +# > +# > +# This library is free software; you can redistribute it and/or > +# modify it under the terms of the GNU General Public > +# License as published by the Free Software Foundation; either > +# version 2.1 of the License, or (at your option) any later version. > +# > +# This library is distributed in the hope that it will be useful, > +# but WITHOUT ANY WARRANTY; without even the implied warranty of > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > +# General Public License for more details. > +# > +# You should have received a copy of the GNU General Public > +# License along with this library; if not, write to the Free Software > +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA > +# > +# > +# This test case verifies the creation of the StorageVol using the > +# CreateResourceInPool method of RPCS returns an error when invalid values > +# are passed. 
> +# The test case checks for the errors when: > +# 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE > +# 2) Trying to create a Vol in a netfs storage pool > +# 3) Trying to create 2 Vol in the same Path > +# > +# -Date: 04-09-2009 > + > +import sys > +import os > +from VirtLib import utils > +from random import randint > +from pywbem.cim_types import Uint64 > +from pywbem import CIM_ERR_FAILED, CIMError > +from CimTest.Globals import logger > +from CimTest.ReturnCodes import FAIL, PASS, SKIP > +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ > + get_provider_version > +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes > +from XenKvmLib import rpcs_service > +from XenKvmLib.assoc import Associators > +from XenKvmLib.enumclass import GetInstance, EnumNames > +from XenKvmLib.xm_virt_util import virsh_version > +from XenKvmLib.classes import get_typed_class, inst_to_mof > +from XenKvmLib.common_util import destroy_diskpool, nfs_netfs_setup, \ > + netfs_cleanup > +from XenKvmLib.pool import create_pool, undefine_diskpool, RAW_VOL_TYPE, \ > + DIR_POOL, NETFS_POOL, \ > + get_stovol_rasd_from_sdc, get_stovol_default_settings > + > +dir_pool_attr = { "Path" : "/tmp" } > +vol_name = "cimtest-vol.img" > + > +INVALID_FTYPE = RAW_VOL_TYPE + randint(20,100) > +exp_err_no = CIM_ERR_FAILED > +exp_err_values = { 'INVALID_FTYPE': { 'msg' : "Unable to generate XML "\ > + "for new resource" }, > + 'NETFS_POOL' : { 'msg' : "This function does not "\ > + "support this resource type"}, > + 'DUP_VOL_PATH' : { 'msg' : "Unable to create storage volume"} Can you line up the third entry with the other two? 
> + } > + > +@do_main(platform_sup) > +def main(): > + options = main.options > + server = options.ip > + virt = options.virt > + > + libvirt_ver = virsh_version(server, virt) > + cim_rev, changeset = get_provider_version(virt, server) > + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_storagepool_changes: > + logger.info("Storage Volume creation support is available with Libvirt" > + "version >= 0.4.1 and Libvirt-CIM rev '%s'", > + libvirt_rasd_storagepool_changes) > + return SKIP > + > + dp_types = { "NETFS_POOL" : NETFS_POOL } > + dp_types['DUP_VOL_PATH'] = dp_types['INVALID_FTYPE'] = DIR_POOL > + dp_cn = "DiskPool" > + exp_vol_path = "%s/%s" % (dir_pool_attr['Path'], vol_name) > + > + for pool_name, pool_type in dp_types.iteritems(): > + status = FAIL > + clean_pool=True > + try: > + status, pool_attr = get_pool_attr(server, pool_type, dp_types) > + if status != PASS: > + return status > + > + # err_key will contain either INVALID_FTYPE/DUP_VOL_PATH/NETFS_POOL > + # to be able to access the err mesg > + err_key = pool_name > + > + if pool_type == DIR_POOL: > + pool_name = default_pool_name > + clean_pool=False > + else: > + # Creating NETFS pool to verify RPCS error > + status = create_pool(server, virt, pool_name, pool_attr, > + mode_type=pool_type, pool_type=dp_cn) A netfs pool requires an NFS server running on the system, and not all systems have NFS installed. So I wouldn't use the netfs type pool in this test. > + > + if status != PASS: > + logger.error("Failed to create pool '%s'", pool_name) > + return status > + > + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, > + exp_vol_path, vol_name) > + if sv_rasd == None: > + raise Exception("Failed to get the default StorageVolRASD info") > + > + if err_key == "INVALID_FTYPE": > + sv_rasd['FormatType'] = Uint64(INVALID_FTYPE) > + > + sv_settings = inst_to_mof(sv_rasd) Looks like you don't use sv_settings elsewhere in the test, so include this line in verify_vol_err(). 
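The error checks in this patch hinge on the exp_err_values table defined earlier (keyed by INVALID_FTYPE / NETFS_POOL / DUP_VOL_PATH). A minimal, self-contained sketch of that lookup pattern is below; the CIMError class is a stand-in for pywbem's exception and matches_expected() is a hypothetical helper, not part of cimtest:

```python
# Sketch of the expected-error lookup pattern used by the test.
# CIMError is a stand-in for pywbem.CIMError; matches_expected()
# is illustrative only.

CIM_ERR_FAILED = 1  # numeric value of pywbem.CIM_ERR_FAILED

exp_err_values = {
    'INVALID_FTYPE': {'msg': "Unable to generate XML for new resource"},
    'NETFS_POOL':    {'msg': "This function does not support this resource type"},
    'DUP_VOL_PATH':  {'msg': "Unable to create storage volume"},
}

class CIMError(Exception):
    """Stand-in carrying (return_code, description), like pywbem's."""

def matches_expected(err, err_key, exp_rc=CIM_ERR_FAILED):
    """True when a raised CIMError carries the return code and the
    message text recorded for err_key in the expected-error table."""
    rc, desc = err.args[0], err.args[1]
    return rc == exp_rc and exp_err_values[err_key]['msg'] in desc
```

With something like this in place, the test invokes CreateResourceInPool with the bad settings, catches the CIMError, and asserts matches_expected(err, err_key) for the scenario under test.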
> + > + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) > + if dp_inst == None: > + raise Exception("DiskPool instance for '%s' not found!" \ > + % pool_name) > + > + status = verify_vol_err(server, virt, sv_settings, dp_inst, err_key) > + if status != PASS : > + raise Exception("Failed to verify the Invalid '%s' ", err_key) > + > + if err_key == 'NETFS_POOL': > + netfs_cleanup(server, pool_attr) > + > + except Exception, details: > + logger.error("Exception details: %s", details) > + status = FAIL > + if err_key == 'NETFS_POOL': > + netfs_cleanup(server, pool_attr) > + break I would have the try / except block outside of the for loop so you don't need to break from the loop. Raising the exception should be enough to break you out of the loop. > + > + cleanup_pool_vol(server, virt, pool_name, clean_pool, exp_vol_path) > + > + return status > +if __name__ == "__main__": > + sys.exit(main()) > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Tue Sep 8 22:08:19 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 08 Sep 2009 15:08:19 -0700 Subject: [Libvirt-cim] [PATCH 1 of 2] [TEST] Modified pool.py to support RPCS CreateResourceInPool In-Reply-To: References: Message-ID: <4AA6D5D3.1080306@linux.vnet.ibm.com> > +def get_stovol_rasd_from_sdc(virt, server, dp_inst_id): > + rasd = None > + ac_cn = get_typed_class(virt, "AllocationCapabilities") > + an_cn = get_typed_class(virt, "SettingsDefineCapabilities") > + key_list = {"InstanceID" : dp_inst_id} > + > + try: > + inst = GetInstance(server, ac_cn, key_list) What if inst is None - should return an error here. > + rasd = Associators(server, an_cn, ac_cn, InstanceID=inst.InstanceID) Should also check to make sure rasd is of some length - otherwise, return an error. 
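The two hardening suggestions above — bail out when GetInstance() comes back None, and when the Associators() call returns an empty list — could be folded in along these lines. This is an illustrative sketch with the CIM lookups injected as plain callables; in cimtest the real GetInstance and Associators come from XenKvmLib.enumclass and XenKvmLib.assoc:

```python
# Sketch of get_stovol_rasd_from_sdc() with the review comments
# applied.  get_instance / associators are injected stand-ins for
# the cimtest helpers, so the control flow can be shown on its own.

PASS, FAIL = 0, 1  # mirrors CimTest.ReturnCodes

def get_stovol_rasd_from_sdc(get_instance, associators, dp_inst_id):
    """Fetch StorageVol RASDs via SettingsDefineCapabilities,
    failing early on a missing instance or an empty association."""
    try:
        inst = get_instance(dp_inst_id)
        if inst is None:        # reviewer: GetInstance() may return None
            return FAIL, None
        rasd = associators(inst)
        if not rasd:            # reviewer: make sure rasd has some length
            return FAIL, None
    except Exception:
        return FAIL, None
    return PASS, rasd
```

For example, a lookup whose GetInstance stand-in yields None now reports (FAIL, None) instead of raising an AttributeError on inst.InstanceID inside the Associators call.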
> + except Exception, detail: > + logger.error("Exception: %s", detail) > + return FAIL, None > + > + return PASS, rasd > + -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Wed Sep 9 06:57:00 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Wed, 09 Sep 2009 12:27:00 +0530 Subject: [Libvirt-cim] Test Run Summary (Sep 09 2009): KVM on SUSE Linux Enterprise Server 11 (i586) with sfcb Message-ID: <4AA751BC.7070302@linux.vnet.ibm.com> ================================================= Test Run Summary (Sep 09 2009): KVM on SUSE Linux Enterprise Server 11 (i586) with sfcb ================================================= Distro: SUSE Linux Enterprise Server 11 (i586) Kernel: 2.6.27.19-5-pae libvirt: 0.4.6 Hypervisor: QEMU 0.9.1 CIMOM: sfcb sfcbd 1.3.2 Libvirt-cim revision: 974 Libvirt-cim changeset: 234141bf7f03 Cimtest revision: 775 Cimtest changeset: 30196cc506c0 ================================================= FAIL : 1 XFAIL : 5 SKIP : 11 PASS : 152 ----------------- Total : 169 ================================================= FAIL Test Summary: ComputerSystemIndication - 01_created_indication.py: FAIL ================================================= XFAIL Test Summary: ComputerSystem - 32_start_reboot.py: XFAIL ComputerSystem - 33_suspend_reboot.py: XFAIL VirtualSystemManagementService - 09_procrasd_persist.py: XFAIL VirtualSystemManagementService - 16_removeresource.py: XFAIL VirtualSystemManagementService - 22_addmulti_brg_interface.py: XFAIL ================================================= SKIP Test Summary: ComputerSystem - 02_nosystems.py: SKIP ComputerSystemMigrationJobIndication - 01_csmig_ind_for_offline_mig.py: SKIP HostSystem - 05_hs_gi_errs.py: SKIP LogicalDisk - 02_nodevs.py: SKIP VSSD - 02_bootldr.py: SKIP VirtualSystemMigrationService - 01_migratable_host.py: SKIP VirtualSystemMigrationService - 02_host_migrate_type.py: SKIP VirtualSystemMigrationService - 
05_migratable_host_errs.py: SKIP VirtualSystemMigrationService - 06_remote_live_migration.py: SKIP VirtualSystemMigrationService - 07_remote_offline_migration.py: SKIP VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: SKIP ================================================= Full report: -------------------------------------------------------------------- AllocationCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- AllocationCapabilities - 02_alloccap_gi_errs.py: PASS -------------------------------------------------------------------- ComputerSystem - 01_enum.py: PASS -------------------------------------------------------------------- ComputerSystem - 02_nosystems.py: SKIP ERROR - System has defined domains; unable to run -------------------------------------------------------------------- ComputerSystem - 03_defineVS.py: PASS -------------------------------------------------------------------- ComputerSystem - 04_defineStartVS.py: PASS -------------------------------------------------------------------- ComputerSystem - 05_activate_defined_start.py: PASS -------------------------------------------------------------------- ComputerSystem - 06_paused_active_suspend.py: PASS -------------------------------------------------------------------- ComputerSystem - 22_define_suspend.py: PASS -------------------------------------------------------------------- ComputerSystem - 23_pause_pause.py: PASS -------------------------------------------------------------------- ComputerSystem - 27_define_pause_errs.py: PASS -------------------------------------------------------------------- ComputerSystem - 32_start_reboot.py: XFAIL ERROR - Got CIM error Unable to reboot domain: this function is not supported by the hypervisor: virDomainReboot with return code 1 ERROR - Exception: Unable reboot dom 'cs_test_domain' InvokeMethod(RequestStateChange): Unable to reboot domain: this function is not supported by the 
hypervisor: virDomainReboot Bug:<00005> -------------------------------------------------------------------- ComputerSystem - 33_suspend_reboot.py: XFAIL ERROR - Got CIM error State not supported with return code 7 ERROR - Exception: Unable Suspend dom 'test_domain' InvokeMethod(RequestStateChange): State not supported Bug:<00012> -------------------------------------------------------------------- ComputerSystem - 34_start_disable.py: PASS -------------------------------------------------------------------- ComputerSystem - 35_start_reset.py: PASS -------------------------------------------------------------------- ComputerSystem - 40_RSC_start.py: PASS -------------------------------------------------------------------- ComputerSystem - 41_cs_to_settingdefinestate.py: PASS -------------------------------------------------------------------- ComputerSystem - 42_cs_gi_errs.py: PASS -------------------------------------------------------------------- ComputerSystemIndication - 01_created_indication.py: FAIL ERROR - Waited too long for define indication ERROR - Waited too long for start indication ERROR - Waited too long for destroy indication -------------------------------------------------------------------- ComputerSystemMigrationJobIndication - 01_csmig_ind_for_offline_mig.py: SKIP -------------------------------------------------------------------- ElementAllocatedFromPool - 01_forward.py: PASS -------------------------------------------------------------------- ElementAllocatedFromPool - 02_reverse.py: PASS -------------------------------------------------------------------- ElementAllocatedFromPool - 03_reverse_errs.py: PASS -------------------------------------------------------------------- ElementAllocatedFromPool - 04_forward_errs.py: PASS -------------------------------------------------------------------- ElementCapabilities - 01_forward.py: PASS -------------------------------------------------------------------- ElementCapabilities - 02_reverse.py: 
PASS -------------------------------------------------------------------- ElementCapabilities - 03_forward_errs.py: PASS -------------------------------------------------------------------- ElementCapabilities - 04_reverse_errs.py: PASS -------------------------------------------------------------------- ElementCapabilities - 05_hostsystem_cap.py: PASS -------------------------------------------------------------------- ElementConforms - 01_forward.py: PASS -------------------------------------------------------------------- ElementConforms - 02_reverse.py: PASS -------------------------------------------------------------------- ElementConforms - 03_ectp_fwd_errs.py: PASS -------------------------------------------------------------------- ElementConforms - 04_ectp_rev_errs.py: PASS -------------------------------------------------------------------- ElementSettingData - 01_forward.py: PASS -------------------------------------------------------------------- ElementSettingData - 03_esd_assoc_with_rasd_errs.py: PASS -------------------------------------------------------------------- EnabledLogicalElementCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- EnabledLogicalElementCapabilities - 02_elecap_gi_errs.py: PASS -------------------------------------------------------------------- HostSystem - 01_enum.py: PASS -------------------------------------------------------------------- HostSystem - 02_hostsystem_to_rasd.py: PASS -------------------------------------------------------------------- HostSystem - 03_hs_to_settdefcap.py: PASS -------------------------------------------------------------------- HostSystem - 04_hs_to_EAPF.py: PASS -------------------------------------------------------------------- HostSystem - 05_hs_gi_errs.py: SKIP -------------------------------------------------------------------- HostSystem - 06_hs_to_vsms.py: PASS -------------------------------------------------------------------- 
HostedAccessPoint - 01_forward.py: PASS -------------------------------------------------------------------- HostedAccessPoint - 02_reverse.py: PASS -------------------------------------------------------------------- HostedDependency - 01_forward.py: PASS -------------------------------------------------------------------- HostedDependency - 02_reverse.py: PASS -------------------------------------------------------------------- HostedDependency - 03_enabledstate.py: PASS -------------------------------------------------------------------- HostedDependency - 04_reverse_errs.py: PASS -------------------------------------------------------------------- HostedResourcePool - 01_forward.py: PASS -------------------------------------------------------------------- HostedResourcePool - 02_reverse.py: PASS -------------------------------------------------------------------- HostedResourcePool - 03_forward_errs.py: PASS -------------------------------------------------------------------- HostedResourcePool - 04_reverse_errs.py: PASS -------------------------------------------------------------------- HostedService - 01_forward.py: PASS -------------------------------------------------------------------- HostedService - 02_reverse.py: PASS -------------------------------------------------------------------- HostedService - 03_forward_errs.py: PASS -------------------------------------------------------------------- HostedService - 04_reverse_errs.py: PASS -------------------------------------------------------------------- KVMRedirectionSAP - 01_enum_KVMredSAP.py: PASS -------------------------------------------------------------------- LogicalDisk - 01_disk.py: PASS -------------------------------------------------------------------- LogicalDisk - 02_nodevs.py: SKIP ERROR - System has defined domains; unable to run -------------------------------------------------------------------- LogicalDisk - 03_ld_gi_errs.py: PASS 
-------------------------------------------------------------------- Memory - 01_memory.py: PASS -------------------------------------------------------------------- Memory - 02_defgetmem.py: PASS -------------------------------------------------------------------- Memory - 03_mem_gi_errs.py: PASS -------------------------------------------------------------------- NetworkPort - 01_netport.py: PASS -------------------------------------------------------------------- NetworkPort - 02_np_gi_errors.py: PASS -------------------------------------------------------------------- NetworkPort - 03_user_netport.py: PASS -------------------------------------------------------------------- Processor - 01_processor.py: PASS -------------------------------------------------------------------- Processor - 02_definesys_get_procs.py: PASS -------------------------------------------------------------------- Processor - 03_proc_gi_errs.py: PASS -------------------------------------------------------------------- Profile - 01_enum.py: PASS -------------------------------------------------------------------- Profile - 02_profile_to_elec.py: PASS -------------------------------------------------------------------- Profile - 03_rprofile_gi_errs.py: PASS -------------------------------------------------------------------- RASD - 01_verify_rasd_fields.py: PASS -------------------------------------------------------------------- RASD - 02_enum.py: PASS -------------------------------------------------------------------- RASD - 03_rasd_errs.py: PASS -------------------------------------------------------------------- RASD - 04_disk_rasd_size.py: PASS -------------------------------------------------------------------- RASD - 05_disk_rasd_emu_type.py: PASS -------------------------------------------------------------------- RASD - 06_parent_net_pool.py: PASS -------------------------------------------------------------------- RASD - 07_parent_disk_pool.py: PASS 
-------------------------------------------------------------------- RedirectionService - 01_enum_crs.py: PASS -------------------------------------------------------------------- RedirectionService - 02_enum_crscap.py: PASS -------------------------------------------------------------------- RedirectionService - 03_RedirectionSAP_errs.py: PASS -------------------------------------------------------------------- ReferencedProfile - 01_verify_refprof.py: PASS -------------------------------------------------------------------- ReferencedProfile - 02_refprofile_errs.py: PASS -------------------------------------------------------------------- ResourceAllocationFromPool - 01_forward.py: PASS -------------------------------------------------------------------- ResourceAllocationFromPool - 02_reverse.py: PASS -------------------------------------------------------------------- ResourceAllocationFromPool - 03_forward_errs.py: PASS -------------------------------------------------------------------- ResourceAllocationFromPool - 04_reverse_errs.py: PASS -------------------------------------------------------------------- ResourceAllocationFromPool - 05_RAPF_err.py: PASS -------------------------------------------------------------------- ResourcePool - 01_enum.py: PASS -------------------------------------------------------------------- ResourcePool - 02_rp_gi_errors.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationCapabilities - 02_rpcc_gi_errs.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 01_enum.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 02_rcps_gi_errors.py: PASS -------------------------------------------------------------------- 
ResourcePoolConfigurationService - 03_CreateResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 04_CreateChildResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 05_AddResourcesToResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 06_RemoveResourcesFromResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 07_DeleteResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 08_CreateDiskResourcePool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 09_DeleteDiskPool.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationService - 10_create_storagevolume.py: PASS -------------------------------------------------------------------- ServiceAccessBySAP - 01_forward.py: PASS -------------------------------------------------------------------- ServiceAccessBySAP - 02_reverse.py: PASS -------------------------------------------------------------------- ServiceAffectsElement - 01_forward.py: PASS -------------------------------------------------------------------- ServiceAffectsElement - 02_reverse.py: PASS -------------------------------------------------------------------- SettingsDefine - 01_forward.py: PASS -------------------------------------------------------------------- SettingsDefine - 02_reverse.py: PASS -------------------------------------------------------------------- SettingsDefine - 03_sds_fwd_errs.py: PASS -------------------------------------------------------------------- SettingsDefine - 04_sds_rev_errs.py: PASS -------------------------------------------------------------------- 
SettingsDefineCapabilities - 01_forward.py: PASS -------------------------------------------------------------------- SettingsDefineCapabilities - 03_forward_errs.py: PASS -------------------------------------------------------------------- SettingsDefineCapabilities - 04_forward_vsmsdata.py: PASS -------------------------------------------------------------------- SettingsDefineCapabilities - 05_reverse_vsmcap.py: PASS -------------------------------------------------------------------- SystemDevice - 01_forward.py: PASS -------------------------------------------------------------------- SystemDevice - 02_reverse.py: PASS -------------------------------------------------------------------- SystemDevice - 03_fwderrs.py: PASS -------------------------------------------------------------------- VSSD - 01_enum.py: PASS -------------------------------------------------------------------- VSSD - 02_bootldr.py: SKIP -------------------------------------------------------------------- VSSD - 03_vssd_gi_errs.py: PASS -------------------------------------------------------------------- VSSD - 04_vssd_to_rasd.py: PASS -------------------------------------------------------------------- VSSD - 05_set_uuid.py: PASS -------------------------------------------------------------------- VSSD - 06_duplicate_uuid.py: PASS -------------------------------------------------------------------- VirtualSystemManagementCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- VirtualSystemManagementCapabilities - 02_vsmcap_gi_errs.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 01_definesystem_name.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 02_destroysystem.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 03_definesystem_ess.py: PASS 
-------------------------------------------------------------------- VirtualSystemManagementService - 04_definesystem_ers.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 05_destroysystem_neg.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 06_addresource.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 07_addresource_neg.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 08_modifyresource.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 09_procrasd_persist.py: XFAIL -------------------------------------------------------------------- VirtualSystemManagementService - 10_hv_version.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 11_define_memrasdunits.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 12_referenced_config.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 13_refconfig_additional_devs.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 14_define_sys_disk.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 15_mod_system_settings.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 16_removeresource.py: XFAIL ERROR - 0 RASD insts for domain/mouse:ps2 No such instance (no device domain/mouse:ps2) Bug:<00014> -------------------------------------------------------------------- VirtualSystemManagementService - 17_removeresource_neg.py: PASS 
-------------------------------------------------------------------- VirtualSystemManagementService - 18_define_sys_bridge.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 19_definenetwork_ers.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 20_verify_vnc_password.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 21_createVS_verifyMAC.py: PASS -------------------------------------------------------------------- VirtualSystemManagementService - 22_addmulti_brg_interface.py: XFAIL ERROR - Error invoking AddRS: add_net_res ERROR - (1, u'Unable to change (0) device: this function is not supported by the hypervisor: this device type cannot be attached') ERROR - Failed to destroy Virtual Network 'my_network1' InvokeMethod(AddResourceSettings): Unable to change (0) device: this function is not supported by the hypervisor: this device type cannot be attached Bug:<00015> -------------------------------------------------------------------- VirtualSystemManagementService - 23_verify_duplicate_mac_err.py: PASS -------------------------------------------------------------------- VirtualSystemMigrationCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- VirtualSystemMigrationCapabilities - 02_vsmc_gi_errs.py: PASS -------------------------------------------------------------------- VirtualSystemMigrationService - 01_migratable_host.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationService - 02_host_migrate_type.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationService - 05_migratable_host_errs.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationService - 06_remote_live_migration.py: SKIP 
-------------------------------------------------------------------- VirtualSystemMigrationService - 07_remote_offline_migration.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: SKIP -------------------------------------------------------------------- VirtualSystemMigrationSettingData - 01_enum.py: PASS -------------------------------------------------------------------- VirtualSystemMigrationSettingData - 02_vsmsd_gi_errs.py: PASS -------------------------------------------------------------------- VirtualSystemSettingDataComponent - 01_forward.py: PASS -------------------------------------------------------------------- VirtualSystemSettingDataComponent - 02_reverse.py: PASS -------------------------------------------------------------------- VirtualSystemSettingDataComponent - 03_vssdc_fwd_errs.py: PASS -------------------------------------------------------------------- VirtualSystemSettingDataComponent - 04_vssdc_rev_errs.py: PASS -------------------------------------------------------------------- VirtualSystemSnapshotService - 01_enum.py: PASS -------------------------------------------------------------------- VirtualSystemSnapshotService - 02_vs_sservice_gi_errs.py: PASS -------------------------------------------------------------------- VirtualSystemSnapshotService - 03_create_snapshot.py: PASS -------------------------------------------------------------------- VirtualSystemSnapshotServiceCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- VirtualSystemSnapshotServiceCapabilities - 02_vs_sservicecap_gi_errs.py: PASS -------------------------------------------------------------------- -- Thanks and Regards, Deepti B. 
Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Wed Sep 9 07:17:22 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Wed, 09 Sep 2009 12:47:22 +0530 Subject: [Libvirt-cim] [PATCH 2 of 2] [TEST] Added new tc to verify the RPCS error values In-Reply-To: <4AA6D53F.7000008@linux.vnet.ibm.com> References: <465cfe3802c691e2315d.1252394544@elm3a148.beaverton.ibm.com> <4AA6D53F.7000008@linux.vnet.ibm.com> Message-ID: <4AA75682.5000006@linux.vnet.ibm.com> Kaitlin Rupert wrote: > Deepti B. Kalakeri wrote: >> # HG changeset patch >> # User Deepti B. Kalakeri >> # Date 1252394401 25200 >> # Node ID 465cfe3802c691e2315dc47eb07790df6c96fb77 >> # Parent fdc0d9aef3427500032bbd35caba0e5977be47f6 >> [TEST] Added new tc to verify the RPCS error values. >> >> This test case verifies the creation of the StorageVol using the >> CreateResourceInPool method of RPCS returns an error when invalid values >> are passed. >> The test case checks for the errors when: >> 1) FormatType field in the StoragePoolRASD set to value other than >> RAW_TYPE >> 2) Trying to create a Vol in a netfs storage pool >> 3) Trying to create 2 Vol in the same Path >> >> Tested with KVM and current sources on SLES11. >> Signed-off-by: Deepti B. Kalakeri >> >> diff -r fdc0d9aef342 -r 465cfe3802c6 >> suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_storagevolume_errs.py >> >> --- /dev/null Thu Jan 01 00:00:00 1970 +0000 >> +++ >> b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_storagevolume_errs.py >> Tue Sep 08 00:20:01 2009 -0700 >> @@ -0,0 +1,226 @@ >> +#!/usr/bin/python >> +# >> +# Copyright 2009 IBM Corp. >> +# >> +# Authors: >> +# Deepti B. 
Kalakeri +# +# >> +# This library is free software; you can redistribute it and/or >> +# modify it under the terms of the GNU General Public >> +# License as published by the Free Software Foundation; either >> +# version 2.1 of the License, or (at your option) any later version. >> +# >> +# This library is distributed in the hope that it will be useful, >> +# but WITHOUT ANY WARRANTY; without even the implied warranty of >> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU >> +# General Public License for more details. >> +# >> +# You should have received a copy of the GNU General Public >> +# License along with this library; if not, write to the Free Software >> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA >> 02111-1307 USA >> +# >> +# >> +# This test case verifies the creation of the StorageVol using the >> +# CreateResourceInPool method of RPCS returns an error when invalid >> values >> +# are passed. >> +# The test case checks for the errors when: >> +# 1) FormatType field in the StoragePoolRASD set to value other than >> RAW_TYPE >> +# 2) Trying to create a Vol in a netfs storage pool >> +# 3) Trying to create 2 Vol in the same Path >> +# >> +# -Date: 04-09-2009 >> + >> +import sys >> +import os >> +from VirtLib import utils >> +from random import randint >> +from pywbem.cim_types import Uint64 >> +from pywbem import CIM_ERR_FAILED, CIMError >> +from CimTest.Globals import logger >> +from CimTest.ReturnCodes import FAIL, PASS, SKIP >> +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ >> + get_provider_version >> +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes >> +from XenKvmLib import rpcs_service >> +from XenKvmLib.assoc import Associators >> +from XenKvmLib.enumclass import GetInstance, EnumNames >> +from XenKvmLib.xm_virt_util import virsh_version >> +from XenKvmLib.classes import get_typed_class, inst_to_mof >> +from XenKvmLib.common_util import destroy_diskpool, nfs_netfs_setup, \ >> + 
netfs_cleanup >> +from XenKvmLib.pool import create_pool, undefine_diskpool, >> RAW_VOL_TYPE, \ >> + DIR_POOL, NETFS_POOL, \ >> + get_stovol_rasd_from_sdc, get_stovol_default_settings >> + >> +dir_pool_attr = { "Path" : "/tmp" } >> +vol_name = "cimtest-vol.img" >> + >> +INVALID_FTYPE = RAW_VOL_TYPE + randint(20,100) >> +exp_err_no = CIM_ERR_FAILED >> +exp_err_values = { 'INVALID_FTYPE': { 'msg' : "Unable to generate >> XML "\ >> + "for new resource" }, >> + 'NETFS_POOL' : { 'msg' : "This function does not "\ >> + "support this resource type"}, >> + 'DUP_VOL_PATH' : { 'msg' : "Unable to create storage volume"} > > Can you line up the third entry with the other two? > >> + } >> + > >> + >> + at do_main(platform_sup) >> +def main(): >> + options = main.options >> + server = options.ip >> + virt = options.virt >> + >> + libvirt_ver = virsh_version(server, virt) >> + cim_rev, changeset = get_provider_version(virt, server) >> + if libvirt_ver < "0.4.1" and cim_rev < >> libvirt_rasd_storagepool_changes: >> + logger.info("Storage Volume creation support is available with >> Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + >> libvirt_rasd_storagepool_changes) >> + return SKIP >> + >> + dp_types = { "NETFS_POOL" : NETFS_POOL } >> + dp_types['DUP_VOL_PATH'] = dp_types['INVALID_FTYPE'] = DIR_POOL >> + dp_cn = "DiskPool" >> + exp_vol_path = "%s/%s" % (dir_pool_attr['Path'], vol_name) >> + + for pool_name, pool_type in dp_types.iteritems(): >> + status = FAIL + clean_pool=True >> + try: >> + status, pool_attr = get_pool_attr(server, pool_type, dp_types) >> + if status != PASS: >> + return status >> + >> + # err_key will contain either INVALID_FTYPE/DUP_VOL_PATH/NETFS_POOL >> + # to be able access the err mesg >> + err_key = pool_name + >> + if pool_type == DIR_POOL: >> + pool_name = default_pool_name >> + clean_pool=False >> + else: >> + # Creating NETFS pool to verify RPCS error >> + status = create_pool(server, virt, pool_name, pool_attr, + >> mode_type=pool_type, 
pool_type=dp_cn) > > A netfs pool requires an NFS server running on the system, and not all > systems have nfs installed. So I wouldn't use the netfs type pool in > this test. Other pool types would require the user to give inputs. So I found netfs the only option to verify the error. Any suggestion for the pool types ? > >> + >> + if status != PASS: >> + logger.error("Failed to create pool '%s'", pool_name) >> + return status >> + >> + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, >> pool_name, + exp_vol_path, vol_name) >> + if sv_rasd == None: >> + raise Exception("Failed to get the default StorageVolRASD info") >> + >> + if err_key == "INVALID_FTYPE": >> + sv_rasd['FormatType'] = Uint64(INVALID_FTYPE) >> + >> + sv_settings = inst_to_mof(sv_rasd) > > Looks like you don't use sv_settings elsewhere in the test, so include > this line in verify_vol_err(). > >> + >> + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) >> + if dp_inst == None: >> + raise Exception("DiskPool instance for '%s' not found!" \ >> + % pool_name) >> + >> + status = verify_vol_err(server, virt, sv_settings, dp_inst, err_key) >> + if status != PASS : >> + raise Exception("Failed to verify the invalid '%s'" % err_key) >> + >> + if err_key == 'NETFS_POOL': >> + netfs_cleanup(server, pool_attr) >> + + except Exception, details: >> + logger.error("Exception details: %s", details) >> + status = FAIL >> + if err_key == 'NETFS_POOL': >> + netfs_cleanup(server, pool_attr) >> + break > > I would have the try / except block outside of the for loop so you > don't need to break from the loop. Raising the exception should be > enough to break you out of the loop.
> >> + >> + cleanup_pool_vol(server, virt, pool_name, clean_pool, exp_vol_path) >> + + return status >> +if __name__ == "__main__": >> + sys.exit(main()) >> >> _______________________________________________ >> Libvirt-cim mailing list >> Libvirt-cim at redhat.com >> https://www.redhat.com/mailman/listinfo/libvirt-cim > > -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From rmaciel at linux.vnet.ibm.com Wed Sep 9 13:54:42 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Wed, 09 Sep 2009 10:54:42 -0300 Subject: [Libvirt-cim] [PATCH] get_disk_pool() is only valid for newer versions of libvirt In-Reply-To: <2b627ddf93c9d303b87f.1252097363@elm3b151.beaverton.ibm.com> References: <2b627ddf93c9d303b87f.1252097363@elm3b151.beaverton.ibm.com> Message-ID: <4AA7B3A2.6070300@linux.vnet.ibm.com> On 09/04/2009 05:49 PM, Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1252098766 25200 > # Node ID 2b627ddf93c9d303b87fd186a6d6334465a9a14c > # Parent 23572a8bc37d425291732467773f46224f640b72 > get_disk_pool() is only valid for newer versions of libvirt > > This patches fixes a compile issue with older versions of libvirt. 
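The version guard in this patch keys off libvirt's integer version encoding: LIBVIR_VERSION_NUMBER is major * 1,000,000 + minor * 1,000 + micro, so the `> 4000` test reads as "newer than 0.4.0", the point at which the storage pool APIs the patch gates on became usable. A quick sketch of that encoding (the helper name is ours, not libvirt's):

```python
def libvir_version_number(major, minor, micro):
    # libvirt encodes its version as a single integer:
    #   major * 1,000,000 + minor * 1,000 + micro
    return major * 1000000 + minor * 1000 + micro

# "LIBVIR_VERSION_NUMBER > 4000" is therefore true for 0.4.1 and later
print(libvir_version_number(0, 4, 1))  # -> 4001
```

The same scheme is why cimtest scripts compare against rev numbers like 4.1 rather than version strings.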
> > diff -r 23572a8bc37d -r 2b627ddf93c9 src/Virt_DevicePool.h > --- a/src/Virt_DevicePool.h Thu Sep 03 16:42:54 2009 -0700 > +++ b/src/Virt_DevicePool.h Fri Sep 04 14:12:46 2009 -0700 > @@ -28,6 +28,12 @@ > > #include "pool_parsing.h" > > +#if LIBVIR_VERSION_NUMBER> 4000 > +# define VIR_USE_LIBVIRT_STORAGE 1 > +#else > +# define VIR_USE_LIBVIRT_STORAGE 0 > +#endif > + > /** > * Get the InstanceID of a pool that a given RASD id (for type) is in > * > @@ -135,6 +141,7 @@ > uint16_t type, > CMPIStatus *status); > > +#if VIR_USE_LIBVIRT_STORAGE > /** > * Get the configuration settings of a given storage pool > * > @@ -143,6 +150,7 @@ > * @returns An int that indicates whether the function was successful > */ > int get_disk_pool(virStoragePoolPtr poolptr, struct virt_pool **pool); > +#endif Duplicated #endif? > > #endif > > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Wed Sep 9 17:17:56 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Wed, 09 Sep 2009 10:17:56 -0700 Subject: [Libvirt-cim] [PATCH] get_disk_pool() is only valid for newer versions of libvirt In-Reply-To: <4AA7B3A2.6070300@linux.vnet.ibm.com> References: <2b627ddf93c9d303b87f.1252097363@elm3b151.beaverton.ibm.com> <4AA7B3A2.6070300@linux.vnet.ibm.com> Message-ID: <4AA7E344.1090709@linux.vnet.ibm.com> Richard Maciel wrote: > On 09/04/2009 05:49 PM, Kaitlin Rupert wrote: >> # HG changeset patch >> # User Kaitlin Rupert >> # Date 1252098766 25200 >> # Node ID 2b627ddf93c9d303b87fd186a6d6334465a9a14c >> # Parent 23572a8bc37d425291732467773f46224f640b72 >> get_disk_pool() is only valid for newer versions of libvirt >> >> This patches fixes a compile issue with older versions of libvirt. 
>> >> diff -r 23572a8bc37d -r 2b627ddf93c9 src/Virt_DevicePool.h >> --- a/src/Virt_DevicePool.h Thu Sep 03 16:42:54 2009 -0700 >> +++ b/src/Virt_DevicePool.h Fri Sep 04 14:12:46 2009 -0700 >> @@ -28,6 +28,12 @@ >> >> #include "pool_parsing.h" >> >> +#if LIBVIR_VERSION_NUMBER> 4000 >> +# define VIR_USE_LIBVIRT_STORAGE 1 >> +#else >> +# define VIR_USE_LIBVIRT_STORAGE 0 >> +#endif >> + >> /** >> * Get the InstanceID of a pool that a given RASD id (for type) is in >> * >> @@ -135,6 +141,7 @@ >> uint16_t type, >> CMPIStatus *status); >> >> +#if VIR_USE_LIBVIRT_STORAGE >> /** >> * Get the configuration settings of a given storage pool >> * >> @@ -143,6 +150,7 @@ >> * @returns An int that indicates whether the function was successful >> */ >> int get_disk_pool(virStoragePoolPtr poolptr, struct virt_pool **pool); >> +#endif > > Duplicated #endif? The #endif above is for the "if VIR_USE_LIBVIRT_STORAGE" statement. The #endif below is for the "#ifndef __RES_POOLS_H" statement. > >> >> #endif >> >> >> _______________________________________________ >> Libvirt-cim mailing list >> Libvirt-cim at redhat.com >> https://www.redhat.com/mailman/listinfo/libvirt-cim > > -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Wed Sep 9 21:01:24 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Wed, 09 Sep 2009 14:01:24 -0700 Subject: [Libvirt-cim] [PATCH 2 of 2] [TEST] Added new tc to verify the RPCS error values In-Reply-To: <4AA75682.5000006@linux.vnet.ibm.com> References: <465cfe3802c691e2315d.1252394544@elm3a148.beaverton.ibm.com> <4AA6D53F.7000008@linux.vnet.ibm.com> <4AA75682.5000006@linux.vnet.ibm.com> Message-ID: <4AA817A4.6090807@linux.vnet.ibm.com> >>> + # Creating NETFS pool to verify RPCS error >>> + status = create_pool(server, virt, pool_name, pool_attr, + >>> mode_type=pool_type, pool_type=dp_cn) >> >> A netfs pool requires a nfsserver running on the system, and not all >> systems have nfs installed. 
So I wouldn't use the netfs type pool in >> this test. > > Other pool types would require user to give inputs. So I found netfs the > only option to verify the error. > Any suggestion for the pool types ? >> Why not use a directory pool? The case you're trying to test is whether libvirt-cim returns an error if a pool with that name has already been specified. For this case, the pool type doesn't matter. The provider does a look up in libvirt based on the pool name - it doesn't even consider the pool type. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Wed Sep 9 21:01:50 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Wed, 09 Sep 2009 14:01:50 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Fix VSMS to do a proper check of ref config, also remove test_xml import In-Reply-To: <03e78e8b7a06296eba99.1251842910@elm3b151.beaverton.ibm.com> References: <03e78e8b7a06296eba99.1251842910@elm3b151.beaverton.ibm.com> Message-ID: <4AA817BE.2080809@linux.vnet.ibm.com> Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1251842877 25200 > # Node ID 03e78e8b7a06296eba99e1329840ae6ee521f357 > # Parent a0185245b9894f195227c12af621151623972573 > [TEST] Fix VSMS to do a proper check of ref config, also remove test_xml import > > This test was originally designed to do the following: > > 1) Create a guest with a MAC interface > 2) Create a second guest based on the first guest - second guest has an > additional MAC defined. Pass a reference to the first guest during the > DefineSystem() > 3) Verify the second guest was created with two MACs - one that is identical to > the first guest and one that is different > > The providers no longer allow a guest to have the same MAC as an existing guest. > Each MAC needs to be unique. Therefore, this test needs to use a different > setting - disk source works for this. > > Also, remove the dependency on test_xml.py - that module is now obsolete.
> > Signed-off-by: Kaitlin Rupert > > diff -r a0185245b989 -r 03e78e8b7a06 suites/libvirt-cim/cimtest/VirtualSystemManagementService/12_referenced_config.py Can someone take a look at this patch? Thanks! -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Wed Sep 9 21:02:22 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Wed, 09 Sep 2009 14:02:22 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] #2 Add try / except to VSMS 15 In-Reply-To: <54bf724a87d4dcf370ca.1252002041@elm3b151.beaverton.ibm.com> References: <54bf724a87d4dcf370ca.1252002041@elm3b151.beaverton.ibm.com> Message-ID: <4AA817DE.7020609@linux.vnet.ibm.com> Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1252002019 25200 > # Node ID 54bf724a87d4dcf370ca68714809cfaaf55457ca > # Parent 30196cc506c07d81642c94a01fc65b34421c0714 > [TEST] #2 Add try / except to VSMS 15 > > This will catch any unexpected exceptions. Otherwise, the exception isn't > caught and the guest may not be properly undefined > > Updates: > -Fix Exception() calls to use % instead of a , when specifying arguments > -Remove import of default_network_name > -Replace destroy() with cim_destroy() > > Signed-off-by: Kaitlin Rupert > > diff -r 30196cc506c0 -r 54bf724a87d4 suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py And this patch as well - can someone do a review? Thanks!
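Both patches awaiting review apply the same restructuring: run the test body under a single try/except so any failing step raises, then do cleanup exactly once on the way out. A minimal sketch of that pattern (helper names are stand-ins, and modern `except ... as` syntax is used for brevity):

```python
PASS, FAIL = "PASS", "FAIL"

def run_case(steps, cleanup):
    # Each step returns True on success. Any failure raises, and the
    # cleanup callback runs exactly once whether or not an exception
    # occurred -- so the guest always gets undefined.
    status = PASS
    try:
        for step in steps:
            if not step():
                raise Exception("step %s failed" % step.__name__)
    except Exception as details:
        print("ERROR - %s" % details)
        status = FAIL
    cleanup()
    return status
```

Compare with the original flow, where every error path had to call cleanup_env() and return FAIL by hand, and an unexpected exception skipped cleanup entirely.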
-- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Thu Sep 10 05:00:07 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Thu, 10 Sep 2009 10:30:07 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Fix VSMS to do a proper check of ref config, also remove test_xml import In-Reply-To: <4AA817BE.2080809@linux.vnet.ibm.com> References: <03e78e8b7a06296eba99.1251842910@elm3b151.beaverton.ibm.com> <4AA817BE.2080809@linux.vnet.ibm.com> Message-ID: <4AA887D7.3030808@linux.vnet.ibm.com> Oops ! sorry the changes for these were in a different directory marked as Spam. Kaitlin Rupert wrote: > Kaitlin Rupert wrote: >> # HG changeset patch >> # User Kaitlin Rupert >> # Date 1251842877 25200 >> # Node ID 03e78e8b7a06296eba99e1329840ae6ee521f357 >> # Parent a0185245b9894f195227c12af621151623972573 >> [TEST] Fix VSMS to do a proper check of ref config, also remove >> test_xml import >> >> This test was originally designed to do the following: >> >> 1) Create a guest with a MAC interface >> 2) Create a second guest based on the first guest - second guest has an >> additional MAC defined. Pass a reference to the first guest during the >> DefineSystem() >> 3) Verify the second guest was created with two MACs - one that is >> identical to >> the first guest and one that is different >> >> The providers no longer allow a guest to have the same MAC as an >> existing guest. >> Each MAC needs to be unique. Therefore, this test needs to use a >> different >> setting - disk source works for this. >> >> Also, remove the dependency on test_xml.py - that module is not >> obsolete. >> >> Signed-off-by: Kaitlin Rupert >> >> diff -r a0185245b989 -r 03e78e8b7a06 >> suites/libvirt-cim/cimtest/VirtualSystemManagementService/12_referenced_config.py >> > > Can someone take a look a this patch? Thanks! > -- Thanks and Regards, Deepti B. 
Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Thu Sep 10 05:04:26 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Thu, 10 Sep 2009 10:34:26 +0530 Subject: [Libvirt-cim] [PATCH 2 of 2] [TEST] Added new tc to verify the RPCS error values In-Reply-To: <4AA817A4.6090807@linux.vnet.ibm.com> References: <465cfe3802c691e2315d.1252394544@elm3a148.beaverton.ibm.com> <4AA6D53F.7000008@linux.vnet.ibm.com> <4AA75682.5000006@linux.vnet.ibm.com> <4AA817A4.6090807@linux.vnet.ibm.com> Message-ID: <4AA888DA.6090702@linux.vnet.ibm.com> Kaitlin Rupert wrote: > >>>> + # Creating NETFS pool to verify RPCS error >>>> + status = create_pool(server, virt, pool_name, pool_attr, + >>>> mode_type=pool_type, pool_type=dp_cn) >>> >>> A netfs pool requires a nfsserver running on the system, and not all >>> systems have nfs installed. So I wouldn't use the netfs type pool in >>> this test. >> >> Other pool types would require user to give inputs. So I found netfs >> the only option to verify the error. >> Any suggestion for the pool types ? >>> > > Why not use a directory pool? The case you're trying to test is > whether libvirt-cim returns an error if a pool with that name has > already been specified. > > For this case, the pool type doesn't matter. The provider does a look > up in libvirt based on the pool name - it doesn't even consider the > pool type. > yes we can use the dir pool for the duplicate path verification but for verifying the unsupported error we would require a pool other than dir pool. -- Thanks and Regards, Deepti B. 
Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 10 05:21:02 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Wed, 09 Sep 2009 22:21:02 -0700 Subject: [Libvirt-cim] [PATCH 2 of 2] [TEST] Added new tc to verify the RPCS error values In-Reply-To: <4AA888DA.6090702@linux.vnet.ibm.com> References: <465cfe3802c691e2315d.1252394544@elm3a148.beaverton.ibm.com> <4AA6D53F.7000008@linux.vnet.ibm.com> <4AA75682.5000006@linux.vnet.ibm.com> <4AA817A4.6090807@linux.vnet.ibm.com> <4AA888DA.6090702@linux.vnet.ibm.com> Message-ID: <4AA88CBE.70300@linux.vnet.ibm.com> Deepti B Kalakeri wrote: > > > Kaitlin Rupert wrote: >> >>>>> + # Creating NETFS pool to verify RPCS error >>>>> + status = create_pool(server, virt, pool_name, pool_attr, + >>>>> mode_type=pool_type, pool_type=dp_cn) >>>> >>>> A netfs pool requires a nfsserver running on the system, and not all >>>> systems have nfs installed. So I wouldn't use the netfs type pool in >>>> this test. >>> >>> Other pool types would require user to give inputs. So I found netfs >>> the only option to verify the error. >>> Any suggestion for the pool types ? >>>> >> >> Why not use a directory pool? The case you're trying to test is >> whether libvirt-cim returns an error if a pool with that name has >> already been specified. >> >> For this case, the pool type doesn't matter. The provider does a look >> up in libvirt based on the pool name - it doesn't even consider the >> pool type. >> > yes we can use the dir pool for the duplicate path verification but for > verifying the unsupported error we would require a pool other than dir > pool. > Ah, right - good point. I'm wondering if the unsupported error should be a different test. I'll need to give this some thought.. ordinarily, I would suggest having the test skip if there isn't a nfsserver.. but in this case, I would like to see the test report pass/failure status on the other conditions of the test. 
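If the netfs sub-case stays in the test, one compromise in line with the discussion above is to probe for a reachable NFS server first and skip only that sub-case. A sketch under the assumption that `showmount` is installed on the host (cimtest itself may probe differently):

```python
import subprocess

def nfs_server_available(host="localhost"):
    # 'showmount -e' exits non-zero when no NFS server answers, and
    # raises OSError when the tool isn't installed; treat both as
    # "no NFS available" rather than failing the whole test.
    try:
        rc = subprocess.call(["showmount", "-e", host],
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
    except OSError:
        return False
    return rc == 0
```

The netfs sub-case could then report SKIP instead of FAIL on hosts without NFS, while the dir-pool error checks still report pass/failure.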
-- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Thu Sep 10 09:14:51 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Thu, 10 Sep 2009 14:44:51 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] #2 Add try / except to VSMS 15 In-Reply-To: <54bf724a87d4dcf370ca.1252002041@elm3b151.beaverton.ibm.com> References: <54bf724a87d4dcf370ca.1252002041@elm3b151.beaverton.ibm.com> Message-ID: <4AA8C38B.2050005@linux.vnet.ibm.com> Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1252002019 25200 > # Node ID 54bf724a87d4dcf370ca68714809cfaaf55457ca > # Parent 30196cc506c07d81642c94a01fc65b34421c0714 > [TEST] #2 Add try / except to VSMS 15 > > This will catch any unexpected exceptions. Otherwise, the exception isn't > caught and the guest may not be properly undefined > > Updates: > -Fix Exception() calls to use % instead of a , when specifying arguments > -Remove import of default_network_name > -Replace destroy() with cim_destroy() > > Signed-off-by: Kaitlin Rupert > > diff -r 30196cc506c0 -r 54bf724a87d4 suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py > --- a/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py Wed Sep 02 05:11:16 2009 -0700 > +++ b/suites/libvirt-cim/cimtest/VirtualSystemManagementService/15_mod_system_settings.py Thu Sep 03 11:20:19 2009 -0700 > @@ -26,7 +26,7 @@ > from XenKvmLib import vxml > from CimTest.Globals import logger > from CimTest.ReturnCodes import PASS, FAIL, XFAIL_RC > -from XenKvmLib.const import do_main, default_network_name > +from XenKvmLib.const import do_main > from XenKvmLib.classes import get_typed_class, inst_to_mof > from XenKvmLib.enumclass import GetInstance > from XenKvmLib.common_util import poll_for_state_change > @@ -74,72 +74,70 @@ > cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) > service = vsms.get_vsms_class(options.virt)(options.ip) > > - for 
case in test_cases: > - #Each time through, define guest using a default XML > - cxml.undefine(options.ip) > - cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) > - ret = cxml.cim_define(options.ip) > - if not ret: > - logger.error("Failed to define the dom: %s", default_dom) > - cleanup_env(options.ip, cxml) > - return FAIL > + try: > > - if case == "start": > - ret = cxml.start(options.ip) > + for case in test_cases: > + #Each time through, define guest using a default XML > + cxml.undefine(options.ip) > + cxml = vxml.get_class(options.virt)(default_dom, vcpus=cpu) > + ret = cxml.cim_define(options.ip) > if not ret: > - logger.error("Failed to start %s", default_dom) > - cleanup_env(options.ip, cxml) > - return FAIL > + raise Exception("Failed to define the dom: %s" % default_dom) > > - status, inst = get_vssd(options.ip, options.virt, True) > - if status != PASS: > - logger.error("Failed to get the VSSD instance for %s", default_dom) > - cleanup_env(options.ip, cxml) > - return FAIL > + if case == "start": > + ret = cxml.start(options.ip) > Sorry I missed this one last time, we can use cim_start() instead of cxml.start() > + if not ret: > + raise Exception("Failed to start %s" % default_dom) > > - inst['AutomaticRecoveryAction'] = pywbem.cim_types.Uint16(RECOVERY_VAL) > - vssd = inst_to_mof(inst) > + status, inst = get_vssd(options.ip, options.virt, True) > + if status != PASS: > + raise Exception("Failed to get the VSSD instance for %s", > + default_dom) > > Need to remove the comma and also use %
The tc fails with the following error:
VirtualSystemManagementService - 15_mod_system_settings.py: FAIL
--------------------------------------------------------------------
ERROR - CS instance not returned for rstest_domain.
ERROR - Failed to destroy rstest_domain ERROR - Got CIM error Referenced domain `rstest_domain' does not exist: Domain not found with return code 6 InvokeMethod(DestroySystem): Referenced domain `rstest_domain' does not exist: Domain not found -------------------------------------------------------------------- This is because the cim_destroy() is destroying and undefining the VM. The call to DestroySystem() should only destroy the VM and not undefine the VM. We have not seen this problem in the tests till now because we have never verified if DestroySystem() only destroys the domain or undefines it as well. This test case needed the VM to be in the defined state after DestroySystem() and hence we caught hold of this error. Here is the debug message: misc_util.c(75): Connecting to libvirt with uri `qemu:///system' misc_util.c(75): Connecting to libvirt with uri `qemu:///system' misc_util.c(202): URI of connection is: qemu:///system misc_util.c(202): URI of connection is: qemu:///system device_parsing.c(273): Disk node: disk infostore.c(88): Path is /etc/libvirt/cim/QEMU_rstest_domain Virt_ComputerSystemIndication.c(722): triggered std_invokemethod.c(305): Method `ModifySystemSettings' returned 0 misc_util.c(75): Connecting to libvirt with uri `qemu:///system' misc_util.c(202): URI of connection is: qemu:///system Virt_HostSystem.c(203): SBLIM: Returned instance std_invokemethod.c(279): Method `DestroySystem' execution attempted std_invokemethod.c(230): Method parameter `AffectedSystem' validated type 0x1100 std_invokemethod.c(303): Executing handler for method `DestroySystem' misc_util.c(75): Connecting to libvirt with uri `qemu:///system' Virt_VirtualSystemManagementService.c(1602): Domain successfully destroyed and undefined Virt_ComputerSystemIndication.c(722): triggered std_invokemethod.c(305): Method `DestroySystem' returned 0 > - ret = service.ModifySystemSettings(SystemSettings=vssd) > - curr_cim_rev, changeset = get_provider_version(options.virt, 
options.ip) > - if curr_cim_rev >= libvirt_modify_setting_changes: > - if ret[0] != 0: > - logger.error("Failed to modify dom: %s", default_dom) > - cleanup_env(options.ip, cxml) > - return FAIL > + val = pywbem.cim_types.Uint16(RECOVERY_VAL) > + inst['AutomaticRecoveryAction'] = val > + vssd = inst_to_mof(inst) > > - if case == "start": > - #This should be replaced with a RSC to shutdownt he guest > - cxml.destroy(options.ip) > - status, cs = poll_for_state_change(options.ip, options.virt, > - default_dom, DEFINED_STATE) > + ret = service.ModifySystemSettings(SystemSettings=vssd) > + curr_cim_rev, changeset = get_provider_version(options.virt, > + options.ip) > + if curr_cim_rev >= libvirt_modify_setting_changes: > + if ret[0] != 0: > + raise Exception("Failed to modify dom: %s" % default_dom) > + > + if case == "start": > + cxml.cim_destroy(options.ip) > + status, cs = poll_for_state_change(options.ip, options.virt, > + default_dom, DEFINED_STATE) > + if status != PASS: > + raise Exception("Failed to destroy %s" % default_dom) > + > + status, inst = get_vssd(options.ip, options.virt, False) > if status != PASS: > - logger.error("Failed to destroy %s", default_dom) > - cleanup_env(options.ip, cxml) > - return FAIL > + raise Exception("Failed to get the VSSD instance for %s" % \ > + default_dom) > > - status, inst = get_vssd(options.ip, options.virt, False) > - if status != PASS: > - logger.error("Failed to get the VSSD instance for %s", default_dom) > - cleanup_env(options.ip, cxml) > - return FAIL > + if inst.AutomaticRecoveryAction != RECOVERY_VAL: > + logger.error("Exp AutomaticRecoveryAction=%d, got %d", > + RECOVERY_VAL, inst.AutomaticRecoveryAction) > + raise Exception("%s not updated properly" % default_dom) > > - if inst.AutomaticRecoveryAction != RECOVERY_VAL: > - logger.error("%s not updated properly.", default_dom) > - logger.error("Exp AutomaticRecoveryAction=%d, got %d", RECOVERY_VAL, > - inst.AutomaticRecoveryAction) > - cleanup_env(options.ip, cxml) 
> - curr_cim_rev, changeset = get_provider_version(options.virt, options.ip) > - if curr_cim_rev <= libvirt_f9_revision and options.virt == "KVM": > - return XFAIL_RC(f9_bug) > + status = PASS > > - if options.virt == "LXC": > - return XFAIL_RC(bug) > - return FAIL > + except Exception, details: > + logger.error(details) > + status = FAIL > > cleanup_env(options.ip, cxml) > > - return PASS > + curr_cim_rev, changeset = get_provider_version(options.virt, options.ip) > + if curr_cim_rev <= libvirt_f9_revision and options.virt == "KVM": > + return XFAIL_RC(f9_bug) > + > + if options.virt == "LXC": > + return XFAIL_RC(bug) > + > + return status > > if __name__ == "__main__": > sys.exit(main()) > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim > -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Thu Sep 10 09:29:54 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Thu, 10 Sep 2009 14:59:54 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Fix VSMS to do a proper check of ref config, also remove test_xml import In-Reply-To: <03e78e8b7a06296eba99.1251842910@elm3b151.beaverton.ibm.com> References: <03e78e8b7a06296eba99.1251842910@elm3b151.beaverton.ibm.com> Message-ID: <4AA8C712.6010005@linux.vnet.ibm.com> +1 for me. -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Thu Sep 10 12:17:49 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Thu, 10 Sep 2009 12:17:49 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] Adding verification for DestroySystem() of the domain Message-ID: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. 
Kalakeri # Date 1252590021 14400 # Node ID 53b05fc42fbc04ce45eea4a09ad84881fbcf6d3e # Parent 30196cc506c07d81642c94a01fc65b34421c0714 [TEST] Adding verification for DestroySystem() of the domain. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 30196cc506c0 -r 53b05fc42fbc suites/libvirt-cim/cimtest/VirtualSystemManagementService/02_destroysystem.py --- a/suites/libvirt-cim/cimtest/VirtualSystemManagementService/02_destroysystem.py Wed Sep 02 05:11:16 2009 -0700 +++ b/suites/libvirt-cim/cimtest/VirtualSystemManagementService/02_destroysystem.py Thu Sep 10 09:40:21 2009 -0400 @@ -27,8 +27,9 @@ from VirtLib import utils from XenKvmLib.xm_virt_util import domain_list, active_domain_list from XenKvmLib import vsms, vxml +from XenKvmLib.common_util import poll_for_state_change from XenKvmLib.classes import get_typed_class -from XenKvmLib.const import do_main +from XenKvmLib.const import do_main, CIM_DISABLE from CimTest.Globals import logger from CimTest.ReturnCodes import PASS, FAIL @@ -45,44 +46,49 @@ service = vsms.get_vsms_class(options.virt)(options.ip) cxml = vxml.get_class(options.virt)(default_dom) - ret = cxml.cim_define(options.ip) - if not ret: - logger.error("Failed to define the dom: %s", default_dom) - return FAIL - ret = cxml.start(options.ip) - if not ret: - logger.error("Failed to start the dom: %s", default_dom) - cleanup_env(options.ip, cxml) - return FAIL - classname = get_typed_class(options.virt, 'ComputerSystem') - cs_ref = CIMInstanceName(classname, keybindings = { - 'Name':default_dom, - 'CreationClassName':classname}) - list_before = domain_list(options.ip, options.virt) - if default_dom not in list_before: - logger.error("Domain not in domain list") - cleanup_env(options.ip, cxml) - return FAIL + try: + ret = cxml.cim_define(options.ip) + if not ret: + logger.error("Failed to define the dom: %s", default_dom) + return FAIL + + ret = cxml.cim_start(options.ip) + if ret: + logger.error("Failed to start the 
dom: %s", default_dom) + cxml.undefine(options.ip) + return FAIL + + list_before = domain_list(options.ip, options.virt) + if default_dom not in list_before: + raise Exception("Domain not in domain list") - try: - service.DestroySystem(AffectedSystem=cs_ref) + ret = cxml.cim_destroy(options.ip) + if not ret: + raise Exception("Failed to destroy dom '%s'" % default_dom) + + list_after = domain_list(options.ip, options.virt) + + if default_dom in list_after: + raise Exception("Domain '%s' not destroyed: provider didn't " \ + "return error" % default_dom) + + status, dom_cs = poll_for_state_change(options.ip, options.virt, + default_dom, CIM_DISABLE) + if status != PASS: + raise Exception("RequestedState for dom '%s' is not '%s'"\ + % (default_dom, CIM_DISABLE)) + + ret = cxml.undefine(options.ip) + if not ret: + logger.error("Failed to undefine domain '%s'", default_dom) + return FAIL + except Exception, details: - logger.error('Unknow exception happened') logger.error(details) cleanup_env(options.ip, cxml) return FAIL - list_after = domain_list(options.ip, options.virt) - - if default_dom in list_after: - logger.error("Domain %s not destroyed: provider didn't return error", - default_dom) - cleanup_env(options.ip, cxml) - status = FAIL - else: - status = PASS - return status From rmaciel at linux.vnet.ibm.com Thu Sep 10 13:33:52 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Thu, 10 Sep 2009 10:33:52 -0300 Subject: [Libvirt-cim] [PATCH] get_disk_pool() is only valid for newer versions of libvirt In-Reply-To: <2b627ddf93c9d303b87f.1252097363@elm3b151.beaverton.ibm.com> References: <2b627ddf93c9d303b87f.1252097363@elm3b151.beaverton.ibm.com> Message-ID: <4AA90040.6030906@linux.vnet.ibm.com> +1 On 09/04/2009 05:49 PM, Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1252098766 25200 > # Node ID 2b627ddf93c9d303b87fd186a6d6334465a9a14c > # Parent 23572a8bc37d425291732467773f46224f640b72 > get_disk_pool() is only valid for 
newer versions of libvirt > > This patches fixes a compile issue with older versions of libvirt. > > diff -r 23572a8bc37d -r 2b627ddf93c9 src/Virt_DevicePool.h > --- a/src/Virt_DevicePool.h Thu Sep 03 16:42:54 2009 -0700 > +++ b/src/Virt_DevicePool.h Fri Sep 04 14:12:46 2009 -0700 > @@ -28,6 +28,12 @@ > > #include "pool_parsing.h" > > +#if LIBVIR_VERSION_NUMBER> 4000 > +# define VIR_USE_LIBVIRT_STORAGE 1 > +#else > +# define VIR_USE_LIBVIRT_STORAGE 0 > +#endif > + > /** > * Get the InstanceID of a pool that a given RASD id (for type) is in > * > @@ -135,6 +141,7 @@ > uint16_t type, > CMPIStatus *status); > > +#if VIR_USE_LIBVIRT_STORAGE > /** > * Get the configuration settings of a given storage pool > * > @@ -143,6 +150,7 @@ > * @returns An int that indicates whether the function was successful > */ > int get_disk_pool(virStoragePoolPtr poolptr, struct virt_pool **pool); > +#endif > > #endif > > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 10 20:24:50 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 10 Sep 2009 13:24:50 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Add timestamps to main.py to calculate run time of tests In-Reply-To: <2d852ba88fd24102ec98.1252022739@elm3b151.beaverton.ibm.com> References: <2d852ba88fd24102ec98.1252022739@elm3b151.beaverton.ibm.com> Message-ID: <4AA96092.1050508@linux.vnet.ibm.com> Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1252022738 25200 > # Node ID 2d852ba88fd24102ec988145e464a13f5faae5c0 > # Parent db3af9cb2c9affb0a32a8ea3a2c23648c5efe91e > [TEST] Add timestamps to main.py to calculate run time of tests > > These changes allow the user to specify the --print-exec-time flag, which will > print the execution time of each test. 
If this flag isn't specified, the > total run time of the test is still printed. > > Signed-off-by: Kaitlin Rupert > > diff -r db3af9cb2c9a -r 2d852ba88fd2 suites/libvirt-cim/main.py I think this one missed a review as well. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 10 20:35:16 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 10 Sep 2009 13:35:16 -0700 Subject: [Libvirt-cim] [PATCH 2 of 3] [TEST] Add new tc to verify the DeleteResourceInPool() In-Reply-To: References: Message-ID: <4AA96304.2030306@linux.vnet.ibm.com> > + at do_main(platform_sup) > +def main(): > + options = main.options > + server = options.ip > + virt = options.virt > + > + libvirt_ver = virsh_version(server, virt) > + cim_rev, changeset = get_provider_version(virt, server) > + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: > + logger.info("Storage Volume deletion support is available with Libvirt" > + "version >= 0.4.1 and Libvirt-CIM rev '%s'", > + libvirt_rasd_spool_del_changes) > + return SKIP > + > + dp_cn = "DiskPool" > + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) > + > + # For now the test case support only the deletion of dir type based > + # vol, we can extend dp_types to include netfs etc ..... > + dp_types = { "DISK_POOL_DIR" : DIR_POOL } > + > + for pool_name, pool_type in dp_types.iteritems(): > + status = FAIL > + res = del_res = [FAIL] > + clean_pool=True > + try: > + if pool_type == DIR_POOL: > + pool_name = default_pool_name > + clean_pool=False Need spaces here around the = sign. 
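A side note on the version gate in the quoted test: `libvirt_ver < "0.4.1"` compares version strings lexicographically, which misorders releases once a component reaches two digits (e.g. "0.10.0" sorts before "0.4.1" as a string). A minimal sketch of a safer tuple-based comparison — the `version_tuple` helper here is illustrative only, not part of cimtest:

```python
# Illustrative only, not part of cimtest. Shows why lexicographic string
# comparison of version numbers is unreliable.

def version_tuple(ver):
    # "0.10.0" -> (0, 10, 0); tuples compare element-wise as integers
    return tuple(int(part) for part in ver.split("."))

# Lexicographically, "0.10.0" sorts before "0.4.1" (since "1" < "4")...
assert "0.10.0" < "0.4.1"
# ...but as release numbers, 0.10.0 is the newer version.
assert version_tuple("0.10.0") > version_tuple("0.4.1")
```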
-- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 10 20:45:33 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 10 Sep 2009 13:45:33 -0700 Subject: [Libvirt-cim] [PATCH 3 of 3] [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: <616c8e4217a138a001a9.1252437876@elm3a148.beaverton.ibm.com> References: <616c8e4217a138a001a9.1252437876@elm3a148.beaverton.ibm.com> Message-ID: <4AA9656D.7080203@linux.vnet.ibm.com> > +def verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, pool_name, > + exp_vol_path, dp_inst): > + for err_scen in invalid_scen.keys(): > + logger.info("Verifying errors for '%s'....", err_scen) > + status = FAIL > + del_res = [FAIL] > + try: I would put the try / except outside of the for loop. This will save you some indentation. > + res_settings = get_sto_vol_rasd(virt, server, dp_cn, > + pool_name, exp_vol_path) > + if res_settings == None: > + raise Exception("Failed to get the resource settings for '%s'" \ > + " Vol" % vol_name) > + if not "MISSING" in err_scen: > + exp_err_no = CIM_ERR_FAILED > + if "NO_ADDRESS" in err_scen: > + del res_settings['Address'] > + elif "INVALID_ADDRESS" in err_scen: > + res_settings['Address'] = invalid_scen[err_scen]['val'] > + > + resource = inst_to_mof(res_settings) > + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource, > + Pool=dp_inst) > + else: > + exp_err_no = CIM_ERR_INVALID_PARAMETER > + if err_scen == "MISSING_RESOURCE": > + del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) > + elif err_scen == "MISSING_POOL": > + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) Will invalid_scen.keys() already return the keys in the same order? I'm wondering if it is possible for resource to be undefined here since it only gets defined if "if not "MISSING" in err_scen:" has passed in a prior iteration of the loop.
If "if not "MISSING" in err_scen:" fails the first time through the loop, resource will be undefined. > + > + except CIMError, (err_no, err_desc): > + if invalid_scen[err_scen]['msg'] in err_desc \ > + and exp_err_no == err_no: > + logger.error("Got the expected error message: '%s' for '%s'", > + err_desc, err_scen) > + status=PASS Spaces between the = here. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 10 21:14:43 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 10 Sep 2009 14:14:43 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Adding verification for DestroySystem() of the domain In-Reply-To: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> References: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> Message-ID: <4AA96C43.5090005@linux.vnet.ibm.com> Deepti B. Kalakeri wrote: > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1252590021 14400 > # Node ID 53b05fc42fbc04ce45eea4a09ad84881fbcf6d3e > # Parent 30196cc506c07d81642c94a01fc65b34421c0714 > [TEST] Adding verification for DestroySystem() of the domain. > > Tested with KVM and current sources on SLES11. > Signed-off-by: Deepti B. Kalakeri > > diff -r 30196cc506c0 -r 53b05fc42fbc suites/libvirt-cim/cimtest/VirtualSystemManagementService/02_destroysystem.py I get the following failure: Starting test suite: libvirt-cim Cleaned log files. Testing KVM hypervisor -------------------------------------------------------------------- VirtualSystemManagementService - 02_destroysystem.py: FAIL ERROR - CS instance not returned for test_domain. 
ERROR - RequestedState for dom 'test_domain' is not '3' Referenced domain `test_domain' does not exist: Domain not found (message repeated ~30 times) -------------------------------------------------------------------- However, the test passes for me if the patch isn't applied. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 10 23:15:36 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Thu, 10 Sep 2009 19:15:36 -0400 Subject: [Libvirt-cim] Test Run Summary (Sep 10 2009): KVM on Fedora release 11 (Leonidas) with sfcb Message-ID: <200909102315.n8ANFa0p027951@d01av04.pok.ibm.com> ================================================= Test Run Summary (Sep 10 2009): KVM on Fedora release 11 (Leonidas) with sfcb ================================================= Distro: Fedora release 11 (Leonidas) Kernel: 2.6.30.5-28.rc2.fc11.x86_64 libvirt: 0.6.2 Hypervisor: QEMU 0.10.6 CIMOM: sfcb sfcbd 1.3.5preview Libvirt-cim revision: 973 Libvirt-cim changeset: 9c8eb2dfae84 Cimtest revision: 776 Cimtest changeset: 9e08670a3c37 ================================================= FAIL : 33 XFAIL : 5 SKIP : 10 PASS : 121 ----------------- Total : 169 ================================================= FAIL Test Summary: ReferencedProfile - 01_verify_refprof.py: FAIL ReferencedProfile - 02_refprofile_errs.py: FAIL ResourceAllocationFromPool - 01_forward.py: FAIL ResourceAllocationFromPool - 02_reverse.py: FAIL ResourceAllocationFromPool - 03_forward_errs.py: FAIL ResourceAllocationFromPool - 04_reverse_errs.py: FAIL ResourceAllocationFromPool - 05_RAPF_err.py: FAIL ResourcePoolConfigurationCapabilities - 01_enum.py: FAIL ResourcePoolConfigurationCapabilities - 02_rpcc_gi_errs.py: FAIL ResourcePoolConfigurationService - 01_enum.py: FAIL ResourcePoolConfigurationService - 02_rcps_gi_errors.py: FAIL ResourcePoolConfigurationService - 03_CreateResourcePool.py: FAIL ResourcePoolConfigurationService - 04_CreateChildResourcePool.py: FAIL ResourcePoolConfigurationService - 06_RemoveResourcesFromResourcePool.py: FAIL ResourcePoolConfigurationService -
07_DeleteResourcePool.py: FAIL ResourcePoolConfigurationService - 08_CreateDiskResourcePool.py: FAIL ResourcePoolConfigurationService - 09_DeleteDiskPool.py: FAIL ResourcePoolConfigurationService - 10_create_storagevolume.py: FAIL ServiceAccessBySAP - 02_reverse.py: FAIL ServiceAffectsElement - 01_forward.py: FAIL ServiceAffectsElement - 02_reverse.py: FAIL VSSD - 03_vssd_gi_errs.py: FAIL VirtualSystemManagementCapabilities - 01_enum.py: FAIL VirtualSystemManagementCapabilities - 02_vsmcap_gi_errs.py: FAIL VirtualSystemMigrationCapabilities - 01_enum.py: FAIL VirtualSystemMigrationCapabilities - 02_vsmc_gi_errs.py: FAIL VirtualSystemMigrationSettingData - 01_enum.py: FAIL VirtualSystemMigrationSettingData - 02_vsmsd_gi_errs.py: FAIL VirtualSystemSnapshotService - 01_enum.py: FAIL VirtualSystemSnapshotService - 02_vs_sservice_gi_errs.py: FAIL VirtualSystemSnapshotService - 03_create_snapshot.py: FAIL VirtualSystemSnapshotServiceCapabilities - 01_enum.py: FAIL VirtualSystemSnapshotServiceCapabilities - 02_vs_sservicecap_gi_errs.py: FAIL ================================================= XFAIL Test Summary: ComputerSystem - 32_start_reboot.py: XFAIL ComputerSystem - 33_suspend_reboot.py: XFAIL ResourcePoolConfigurationService - 05_AddResourcesToResourcePool.py: XFAIL VirtualSystemManagementService - 16_removeresource.py: XFAIL VirtualSystemManagementService - 22_addmulti_brg_interface.py: XFAIL ================================================= SKIP Test Summary: ComputerSystem - 02_nosystems.py: SKIP ComputerSystemMigrationJobIndication - 01_csmig_ind_for_offline_mig.py: SKIP LogicalDisk - 02_nodevs.py: SKIP VSSD - 02_bootldr.py: SKIP VirtualSystemMigrationService - 01_migratable_host.py: SKIP VirtualSystemMigrationService - 02_host_migrate_type.py: SKIP VirtualSystemMigrationService - 05_migratable_host_errs.py: SKIP VirtualSystemMigrationService - 06_remote_live_migration.py: SKIP VirtualSystemMigrationService - 07_remote_offline_migration.py: SKIP 
VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: SKIP ================================================= Full report: -------------------------------------------------------------------- AllocationCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- AllocationCapabilities - 02_alloccap_gi_errs.py: PASS -------------------------------------------------------------------- ComputerSystem - 01_enum.py: PASS -------------------------------------------------------------------- ComputerSystem - 02_nosystems.py: SKIP ERROR - System has defined domains; unable to run -------------------------------------------------------------------- ComputerSystem - 03_defineVS.py: PASS -------------------------------------------------------------------- ComputerSystem - 04_defineStartVS.py: PASS -------------------------------------------------------------------- ComputerSystem - 05_activate_defined_start.py: PASS -------------------------------------------------------------------- ComputerSystem - 06_paused_active_suspend.py: PASS -------------------------------------------------------------------- ComputerSystem - 22_define_suspend.py: PASS -------------------------------------------------------------------- ComputerSystem - 23_pause_pause.py: PASS -------------------------------------------------------------------- ComputerSystem - 27_define_pause_errs.py: PASS -------------------------------------------------------------------- ComputerSystem - 32_start_reboot.py: XFAIL ERROR - Got CIM error Unable to reboot domain: this function is not supported by the hypervisor: virDomainReboot with return code 1 ERROR - Exception: Unable reboot dom 'cs_test_domain' InvokeMethod(RequestStateChange): Unable to reboot domain: this function is not supported by the hypervisor: virDomainReboot Bug:<00005> -------------------------------------------------------------------- ComputerSystem - 33_suspend_reboot.py: XFAIL ERROR - Got CIM 
error State not supported with return code 7 ERROR - Exception: Unable Suspend dom 'test_domain' InvokeMethod(RequestStateChange): State not supported Bug:<00012> -------------------------------------------------------------------- ComputerSystem - 34_start_disable.py: PASS -------------------------------------------------------------------- ComputerSystem - 35_start_reset.py: PASS -------------------------------------------------------------------- ComputerSystem - 40_RSC_start.py: PASS -------------------------------------------------------------------- ComputerSystem - 41_cs_to_settingdefinestate.py: PASS -------------------------------------------------------------------- ComputerSystem - 42_cs_gi_errs.py: PASS -------------------------------------------------------------------- ComputerSystemIndication - 01_created_indication.py: PASS -------------------------------------------------------------------- ComputerSystemMigrationJobIndication - 01_csmig_ind_for_offline_mig.py: SKIP -------------------------------------------------------------------- ElementAllocatedFromPool - 01_forward.py: PASS -------------------------------------------------------------------- ElementAllocatedFromPool - 02_reverse.py: PASS -------------------------------------------------------------------- ElementAllocatedFromPool - 03_reverse_errs.py: PASS -------------------------------------------------------------------- ElementAllocatedFromPool - 04_forward_errs.py: PASS -------------------------------------------------------------------- ElementCapabilities - 01_forward.py: PASS -------------------------------------------------------------------- ElementCapabilities - 02_reverse.py: PASS -------------------------------------------------------------------- ElementCapabilities - 03_forward_errs.py: PASS -------------------------------------------------------------------- ElementCapabilities - 04_reverse_errs.py: PASS -------------------------------------------------------------------- 
ElementCapabilities - 05_hostsystem_cap.py: PASS -------------------------------------------------------------------- ElementConforms - 01_forward.py: PASS -------------------------------------------------------------------- ElementConforms - 02_reverse.py: PASS -------------------------------------------------------------------- ElementConforms - 03_ectp_fwd_errs.py: PASS -------------------------------------------------------------------- ElementConforms - 04_ectp_rev_errs.py: PASS -------------------------------------------------------------------- ElementSettingData - 01_forward.py: PASS -------------------------------------------------------------------- ElementSettingData - 03_esd_assoc_with_rasd_errs.py: PASS -------------------------------------------------------------------- EnabledLogicalElementCapabilities - 01_enum.py: PASS -------------------------------------------------------------------- EnabledLogicalElementCapabilities - 02_elecap_gi_errs.py: PASS -------------------------------------------------------------------- HostSystem - 01_enum.py: PASS -------------------------------------------------------------------- HostSystem - 02_hostsystem_to_rasd.py: PASS -------------------------------------------------------------------- HostSystem - 03_hs_to_settdefcap.py: PASS -------------------------------------------------------------------- HostSystem - 04_hs_to_EAPF.py: PASS -------------------------------------------------------------------- HostSystem - 05_hs_gi_errs.py: PASS -------------------------------------------------------------------- HostSystem - 06_hs_to_vsms.py: PASS -------------------------------------------------------------------- HostedAccessPoint - 01_forward.py: PASS -------------------------------------------------------------------- HostedAccessPoint - 02_reverse.py: PASS -------------------------------------------------------------------- HostedDependency - 01_forward.py: PASS 
-------------------------------------------------------------------- HostedDependency - 02_reverse.py: PASS -------------------------------------------------------------------- HostedDependency - 03_enabledstate.py: PASS -------------------------------------------------------------------- HostedDependency - 04_reverse_errs.py: PASS -------------------------------------------------------------------- HostedResourcePool - 01_forward.py: PASS -------------------------------------------------------------------- HostedResourcePool - 02_reverse.py: PASS -------------------------------------------------------------------- HostedResourcePool - 03_forward_errs.py: PASS -------------------------------------------------------------------- HostedResourcePool - 04_reverse_errs.py: PASS -------------------------------------------------------------------- HostedService - 01_forward.py: PASS -------------------------------------------------------------------- HostedService - 02_reverse.py: PASS -------------------------------------------------------------------- HostedService - 03_forward_errs.py: PASS -------------------------------------------------------------------- HostedService - 04_reverse_errs.py: PASS -------------------------------------------------------------------- KVMRedirectionSAP - 01_enum_KVMredSAP.py: PASS -------------------------------------------------------------------- LogicalDisk - 01_disk.py: PASS -------------------------------------------------------------------- LogicalDisk - 02_nodevs.py: SKIP ERROR - System has defined domains; unable to run -------------------------------------------------------------------- LogicalDisk - 03_ld_gi_errs.py: PASS -------------------------------------------------------------------- Memory - 01_memory.py: PASS -------------------------------------------------------------------- Memory - 02_defgetmem.py: PASS -------------------------------------------------------------------- Memory - 03_mem_gi_errs.py: PASS 
-------------------------------------------------------------------- NetworkPort - 01_netport.py: PASS -------------------------------------------------------------------- NetworkPort - 02_np_gi_errors.py: PASS -------------------------------------------------------------------- NetworkPort - 03_user_netport.py: PASS -------------------------------------------------------------------- Processor - 01_processor.py: PASS -------------------------------------------------------------------- Processor - 02_definesys_get_procs.py: PASS -------------------------------------------------------------------- Processor - 03_proc_gi_errs.py: PASS -------------------------------------------------------------------- Profile - 01_enum.py: PASS -------------------------------------------------------------------- Profile - 02_profile_to_elec.py: PASS -------------------------------------------------------------------- Profile - 03_rprofile_gi_errs.py: PASS -------------------------------------------------------------------- RASD - 01_verify_rasd_fields.py: PASS -------------------------------------------------------------------- RASD - 02_enum.py: PASS -------------------------------------------------------------------- RASD - 03_rasd_errs.py: PASS -------------------------------------------------------------------- RASD - 04_disk_rasd_size.py: PASS -------------------------------------------------------------------- RASD - 05_disk_rasd_emu_type.py: PASS -------------------------------------------------------------------- RASD - 06_parent_net_pool.py: PASS -------------------------------------------------------------------- RASD - 07_parent_disk_pool.py: PASS -------------------------------------------------------------------- RedirectionService - 01_enum_crs.py: PASS -------------------------------------------------------------------- RedirectionService - 02_enum_crscap.py: PASS -------------------------------------------------------------------- RedirectionService - 
03_RedirectionSAP_errs.py: PASS -------------------------------------------------------------------- ReferencedProfile - 01_verify_refprof.py: FAIL ERROR - KVM_ReferencedProfile returned 0 Profiles objects, expected atleast 1 Provider not found or not loadable -------------------------------------------------------------------- ReferencedProfile - 02_refprofile_errs.py: FAIL ERROR - Unexpected rc code 6 and description Provider not found or not loadable ERROR - ------ FAILED: to verify INVALID_Instid_KeyName.------ -------------------------------------------------------------------- ResourceAllocationFromPool - 01_forward.py: FAIL ERROR - No RASD associated with GraphicsPool/0 Provider not found or not loadable -------------------------------------------------------------------- ResourceAllocationFromPool - 02_reverse.py: FAIL ERROR - No associated pool with RAFP_dom/hda Provider not found or not loadable -------------------------------------------------------------------- ResourceAllocationFromPool - 03_forward_errs.py: FAIL ERROR - Unexpected rc code 6 and description Provider not found or not loadable -------------------------------------------------------------------- ResourceAllocationFromPool - 04_reverse_errs.py: FAIL ERROR - Unexpected rc code 6 and description Provider not found or not loadable -------------------------------------------------------------------- ResourceAllocationFromPool - 05_RAPF_err.py: FAIL ERROR - Unexpected rc code 6 and description Provider not found or not loadable ERROR - ------FAILED: to verify the RAFP.------ -------------------------------------------------------------------- ResourcePool - 01_enum.py: PASS -------------------------------------------------------------------- ResourcePool - 02_rp_gi_errors.py: PASS -------------------------------------------------------------------- ResourcePoolConfigurationCapabilities - 01_enum.py: FAIL ERROR - KVM_ResourcePoolConfigurationCapabilities return 0 instances, excepted only 1 
instance Provider not found or not loadable -------------------------------------------------------------------- ResourcePoolConfigurationCapabilities - 02_rpcc_gi_errs.py: FAIL ERROR - Unexpected errno 6 and desc Provider not found or not loadable ERROR - Expected No such instance (InstanceID) 6 ERROR - ------ FAILED: Invalid InstanceID Key Value.------ -------------------------------------------------------------------- ResourcePoolConfigurationService - 01_enum.py: FAIL ERROR - Too many service error Class not found Provider not found or not loadable -------------------------------------------------------------------- ResourcePoolConfigurationService - 02_rcps_gi_errors.py: FAIL ERROR - No KVM_ResourcePoolConfigurationService instances returned Provider not found or not loadable -------------------------------------------------------------------- ResourcePoolConfigurationService - 03_CreateResourcePool.py: FAIL ERROR - Unexpected rc code 6 and description Provider not found or not loadable InvokeMethod(CreateResourcePool): Provider not found or not loadable -------------------------------------------------------------------- ResourcePoolConfigurationService - 04_CreateChildResourcePool.py: FAIL ERROR - Exception in create_pool() ERROR - Exception details: (6, u'Provider not found or not loadable') ERROR - Error in networkpool creation InvokeMethod(CreateChildResourcePool): Provider not found or not loadable -------------------------------------------------------------------- ResourcePoolConfigurationService - 05_AddResourcesToResourcePool.py: XFAIL ERROR - Unexpected rc code 6 and description Provider not found or not loadable InvokeMethod(AddResourcesToResourcePool): Provider not found or not loadable Provider not found or not loadable Bug:<92173> -------------------------------------------------------------------- ResourcePoolConfigurationService - 06_RemoveResourcesFromResourcePool.py: FAIL ERROR - Unexpected rc code 6 and description Provider not found or 
not loadable InvokeMethod(RemoveResourcesFromResourcePool): Provider not found or not loadable -------------------------------------------------------------------- ResourcePoolConfigurationService - 07_DeleteResourcePool.py: FAIL ERROR - Exception in create_pool() ERROR - Exception details: (6, u'Provider not found or not loadable') ERROR - Error in networkpool creation InvokeMethod(CreateChildResourcePool): Provider not found or not loadable -------------------------------------------------------------------- ResourcePoolConfigurationService - 08_CreateDiskResourcePool.py: FAIL ERROR - Exception in create_pool() ERROR - Exception details: (6, u'Provider not found or not loadable') ERROR - Exception details: Failed to create 'DISK_POOL_NETFS' type diskpool 'DISK_POOL_NETFS' InvokeMethod(CreateChildResourcePool): Provider not found or not loadable -------------------------------------------------------------------- ResourcePoolConfigurationService - 09_DeleteDiskPool.py: FAIL ERROR - Exception in create_pool() ERROR - Exception details: (6, u'Provider not found or not loadable') ERROR - Failed to create diskpool 'dp_pool' InvokeMethod(CreateChildResourcePool): Provider not found or not loadable -------------------------------------------------------------------- ResourcePoolConfigurationService - 10_create_storagevolume.py: FAIL ERROR - Exception details: (6, u'Provider not found or not loadable') InvokeMethod(CreateResourceInPool): Provider not found or not loadable -------------------------------------------------------------------- ServiceAccessBySAP - 01_forward.py: PASS -------------------------------------------------------------------- ServiceAccessBySAP - 02_reverse.py: FAIL ERROR - Association didn't return any redirection service instance Traceback (most recent call last): File "/usr/lib64/python2.6/logging/__init__.py", line 754, in emit msg = self.format(record) File "/usr/lib64/python2.6/logging/__init__.py", line 637, in format return 
fmt.format(record) File "/usr/lib64/python2.6/logging/__init__.py", line 425, in format record.message = record.getMessage() File "/usr/lib64/python2.6/logging/__init__.py", line 295, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting Traceback (most recent call last): File "/usr/lib64/python2.6/logging/__init__.py", line 754, in emit msg = self.format(record) File "/usr/lib64/python2.6/logging/__init__.py", line 637, in format return fmt.format(record) File "/usr/lib64/python2.6/logging/__init__.py", line 425, in format record.message = record.getMessage() File "/usr/lib64/python2.6/logging/__init__.py", line 295, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting Traceback (most recent call last): File "/usr/lib64/python2.6/logging/__init__.py", line 754, in emit msg = self.format(record) File "/usr/lib64/python2.6/logging/__init__.py", line 637, in format return fmt.format(record) File "/usr/lib64/python2.6/logging/__init__.py", line 425, in format record.message = record.getMessage() File "/usr/lib64/python2.6/logging/__init__.py", line 295, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting Provider not found or not loadable -------------------------------------------------------------------- ServiceAffectsElement - 01_forward.py: FAIL 01_forward.py:51: DeprecationWarning: the sets module is deprecated from sets import Set ERROR - Exception in fn verify_assoc() ERROR - Exception details: Failed to get insts for domain SAE_dom Provider not found or not loadable -------------------------------------------------------------------- ServiceAffectsElement - 02_reverse.py: FAIL 02_reverse.py:47: DeprecationWarning: the sets module is deprecated from sets import Set ERROR - Exception : Got '0' records for 'KVM_ServiceAffectsElement' association with 'KVM_ComputerSystem',expected 1 Provider not found or not loadable 
--------------------------------------------------------------------
SettingsDefine - 01_forward.py: PASS
--------------------------------------------------------------------
SettingsDefine - 02_reverse.py: PASS
--------------------------------------------------------------------
SettingsDefine - 03_sds_fwd_errs.py: PASS
--------------------------------------------------------------------
SettingsDefine - 04_sds_rev_errs.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 01_forward.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 03_forward_errs.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 04_forward_vsmsdata.py: PASS
--------------------------------------------------------------------
SettingsDefineCapabilities - 05_reverse_vsmcap.py: PASS
--------------------------------------------------------------------
SystemDevice - 01_forward.py: PASS
--------------------------------------------------------------------
SystemDevice - 02_reverse.py: PASS
--------------------------------------------------------------------
SystemDevice - 03_fwderrs.py: PASS
--------------------------------------------------------------------
VSSD - 01_enum.py: PASS
--------------------------------------------------------------------
VSSD - 02_bootldr.py: SKIP
--------------------------------------------------------------------
VSSD - 03_vssd_gi_errs.py: FAIL
ERROR - Unexpected errno 6 and desc Provider not found or not loadable
ERROR - Expected No such instance (InstanceID) 6
ERROR - ------ FAILED: Invalid InstanceID Key Value.------
--------------------------------------------------------------------
VSSD - 04_vssd_to_rasd.py: PASS
--------------------------------------------------------------------
VSSD - 05_set_uuid.py: PASS
--------------------------------------------------------------------
VSSD - 06_duplicate_uuid.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementCapabilities - 01_enum.py: FAIL
01_enum.py:26: DeprecationWarning: the sets module is deprecated
  from sets import Set
ERROR - 'KVM_VirtualSystemManagementCapabilities' returned '0' instance, excepted only 1
Provider not found or not loadable
--------------------------------------------------------------------
VirtualSystemManagementCapabilities - 02_vsmcap_gi_errs.py: FAIL
ERROR - Unexpected errno 6 and desc Provider not found or not loadable
ERROR - Expected No such instance (InstanceID) 6
ERROR - ------ FAILED: Invalid InstanceID Key Value.------
--------------------------------------------------------------------
VirtualSystemManagementService - 01_definesystem_name.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 02_destroysystem.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 03_definesystem_ess.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 04_definesystem_ers.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 05_destroysystem_neg.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 06_addresource.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 07_addresource_neg.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 08_modifyresource.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 09_procrasd_persist.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 10_hv_version.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 11_define_memrasdunits.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 12_referenced_config.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 13_refconfig_additional_devs.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 14_define_sys_disk.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 15_mod_system_settings.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 16_removeresource.py: XFAIL
ERROR - 0 RASD insts for domain/mouse:ps2
No such instance (no device domain/mouse:ps2)
Bug:<00014>
--------------------------------------------------------------------
VirtualSystemManagementService - 17_removeresource_neg.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 18_define_sys_bridge.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 19_definenetwork_ers.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 20_verify_vnc_password.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 21_createVS_verifyMAC.py: PASS
--------------------------------------------------------------------
VirtualSystemManagementService - 22_addmulti_brg_interface.py: XFAIL
ERROR - Error invoking AddRS: add_net_res
ERROR - (1, u"Unable to change (0) device: this function is not supported by the hypervisor: device type 'interface' cannot be attached")
ERROR - Failed to destroy Virtual Network 'my_network1'
InvokeMethod(AddResourceSettings): Unable to change (0) device: this function is not supported by the hypervisor: device type 'interface' cannot be attached
Bug:<00015>
--------------------------------------------------------------------
VirtualSystemManagementService - 23_verify_duplicate_mac_err.py: PASS
--------------------------------------------------------------------
VirtualSystemMigrationCapabilities - 01_enum.py: FAIL
ERROR - KVM_VirtualSystemMigrationCapabilities return 0 instances, excepted only 1 instance
Provider not found or not loadable
--------------------------------------------------------------------
VirtualSystemMigrationCapabilities - 02_vsmc_gi_errs.py: FAIL
ERROR - Unexpected errno 6 and desc Provider not found or not loadable
ERROR - Expected No such instance (InstanceID) 6
ERROR - ------ FAILED: Invalid InstanceID Key Value.------
--------------------------------------------------------------------
VirtualSystemMigrationService - 01_migratable_host.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 02_host_migrate_type.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 05_migratable_host_errs.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 06_remote_live_migration.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 07_remote_offline_migration.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: SKIP
--------------------------------------------------------------------
VirtualSystemMigrationSettingData - 01_enum.py: FAIL
ERROR - KVM_VirtualSystemMigrationSettingData return 0 instances, excepted only 1 instance
Provider not found or not loadable
--------------------------------------------------------------------
VirtualSystemMigrationSettingData - 02_vsmsd_gi_errs.py: FAIL
ERROR - Unexpected errno 6 and desc Provider not found or not loadable
ERROR - Expected No such instance (InstanceID) 6
ERROR - ------ FAILED: Invalid InstanceID Key Value.------
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 01_forward.py: PASS
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 02_reverse.py: PASS
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 03_vssdc_fwd_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemSettingDataComponent - 04_vssdc_rev_errs.py: PASS
--------------------------------------------------------------------
VirtualSystemSnapshotService - 01_enum.py: FAIL
ERROR - KVM_VirtualSystemSnapshotService return 0 instances, excepted only 1 instance
Class not found
Provider not found or not loadable
--------------------------------------------------------------------
VirtualSystemSnapshotService - 02_vs_sservice_gi_errs.py: FAIL
ERROR - list index out of range
Provider not found or not loadable
--------------------------------------------------------------------
VirtualSystemSnapshotService - 03_create_snapshot.py: FAIL
ERROR - Exp at least one KVM_VirtualSystemSnapshotServiceCapabilities
ERROR - Exception: Unable to get VSSSC instance
ERROR - Failed to remove snapshot file for snapshot_vm
Provider not found or not loadable
--------------------------------------------------------------------
VirtualSystemSnapshotServiceCapabilities - 01_enum.py: FAIL
ERROR - KVM_VirtualSystemSnapshotServiceCapabilities return 0 instances, excepted only 1 instance
Provider not found or not loadable
--------------------------------------------------------------------
VirtualSystemSnapshotServiceCapabilities - 02_vs_sservicecap_gi_errs.py: FAIL
ERROR - Unexpected errno 6 and desc Provider not found or not loadable
ERROR - Expected No such instance (InstanceID) 6
ERROR - ------ FAILED: Invalid InstanceID Key Value.------
--------------------------------------------------------------------

From snmishra at us.ibm.com Thu Sep 10 23:31:42 2009
From: snmishra at us.ibm.com (Sharad Mishra)
Date: Thu, 10 Sep 2009 16:31:42 -0700
Subject: [Libvirt-cim] [PATCH 0 of 6] Add support for resource indication provider.
Message-ID:

Add support for resource indication provider. This provider adds support
for raising resource indications whenever resource(s) are created,
deleted, or modified.

Signed-off-by: Sharad Mishra

From snmishra at us.ibm.com Thu Sep 10 23:39:30 2009
From: snmishra at us.ibm.com (Sharad Mishra)
Date: Thu, 10 Sep 2009 16:39:30 -0700
Subject: [Libvirt-cim] [PATCH 6 of 6] Virt_VirtualSystemManagementService updated to add support for resource indication provider.
Message-ID:

# HG changeset patch
# User snmishra at us.ibm.com
# Date 1252601888 25200
# Node ID c3163e536ea95f2b846eca4ca2a3cefd4ae6a4a7
# Parent 234141bf7f0368531c884334b1da5b94cc038758
Virt_VirtualSystemManagementService updated to add support for resource
indication provider.
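The indication classes this series introduces come in one variant per hypervisor prefix (Xen, KVM, LXC), derived from a shared base name. The naming convention the providers rely on can be sketched in a few lines of Python; this is an illustrative reimplementation of the convention, not the project's actual `get_typed_class()` code:

```python
# Sketch of the <Prefix>_<BaseType> class-naming convention used for
# the per-hypervisor RASD indication classes (illustrative only).
PREFIXES = {"xen": "Xen", "kvm": "KVM", "lxc": "LXC"}
EVENTS = ("Created", "Deleted", "Modified")

def typed_class(virt, base_type):
    """Map a hypervisor name and a base class name to the typed class name."""
    return "%s_%s" % (PREFIXES[virt.lower()], base_type)

def rasd_indication_classes(virt):
    """All three RASD indication class names for one hypervisor."""
    return [typed_class(virt,
                        "ResourceAllocationSettingData%sIndication" % e)
            for e in EVENTS]

names = rasd_indication_classes("kvm")
# names[0] == "KVM_ResourceAllocationSettingDataCreatedIndication"
```

The nine names this produces (three events times three prefixes) match the filter declarations and MOF classes added later in the series.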
Signed-off-by: Sharad Mishra

diff -r 234141bf7f03 -r c3163e536ea9 src/Virt_VirtualSystemManagementService.c
--- a/src/Virt_VirtualSystemManagementService.c	Thu Sep 03 12:52:47 2009 -0700
+++ b/src/Virt_VirtualSystemManagementService.c	Thu Sep 10 09:58:08 2009 -0700
@@ -63,6 +63,9 @@
 #define BRIDGE_TYPE "bridge"
 #define NETWORK_TYPE "network"
 #define USER_TYPE "user"
+#define CREATED "ResourceAllocationSettingDataCreatedIndication"
+#define DELETED "ResourceAllocationSettingDataDeletedIndication"
+#define MODIFIED "ResourceAllocationSettingDataModifiedIndication"

 const static CMPIBroker *_BROKER;

@@ -442,7 +445,7 @@
         ret = cu_get_str_prop(inst, "VirtualSystemIdentifier", &val);
         if (ret != CMPI_RC_OK)
                 goto out;
-
+
         free(domain->name);
         domain->name = strdup(val);

@@ -1416,7 +1419,69 @@
         return s;
 }

-static CMPIInstance *create_system(CMPIInstance *vssd,
+static CMPIStatus raise_rasd_indication(const CMPIContext *context,
+                                        const char *base_type,
+                                        CMPIInstance *prev_inst,
+                                        const CMPIObjectPath *ref,
+                                        struct inst_list *list)
+{
+        char *type;
+        CMPIStatus s = {CMPI_RC_OK, NULL};
+        CMPIInstance *instc = NULL;
+        CMPIInstance *ind = NULL;
+        CMPIObjectPath *op = NULL;
+        int i;
+
+        CU_DEBUG("raise_rasd_indication");
+
+        type = get_typed_class(CLASSNAME(ref), base_type);
+        ind = get_typed_instance(_BROKER,
+                                 CLASSNAME(ref),
+                                 base_type,
+                                 NAMESPACE(ref));
+        if (ind == NULL) {
+                CU_DEBUG("Failed to get indication instance");
+                s.rc = CMPI_RC_ERR_FAILED;
+                goto out;
+        }
+
+        /* PreviousInstance is set only for modify case. */
+        if (prev_inst != NULL)
+                CMSetProperty(ind,
+                              "PreviousInstance",
+                              (CMPIValue *)&prev_inst,
+                              CMPI_instance);
+
+        for (i = 0; i < list->cur; i++) {
+                instc = list->list[i];
+                op = CMGetObjectPath(instc, NULL);
+                CMPIString *str = CMGetClassName(op, NULL);
+
+                CU_DEBUG("class name is %s\n", CMGetCharsPtr(str, NULL));
+
+                CMSetProperty(ind,
+                              "SourceInstance",
+                              (CMPIValue *)&instc,
+                              CMPI_instance);
+                set_source_inst_props(_BROKER, context, ref, ind);
+
+                s = stdi_raise_indication(_BROKER,
+                                          context,
+                                          type,
+                                          NAMESPACE(ref),
+                                          ind);
+        }
+
+out:
+        free(type);
+        return s;
+}
+
+static CMPIInstance *create_system(const CMPIContext *context,
+                                   CMPIInstance *vssd,
                                    CMPIArray *resources,
                                    const CMPIObjectPath *ref,
                                    const CMPIObjectPath *refconf,
@@ -1427,6 +1492,9 @@
         const char *msg = NULL;
         virConnectPtr conn = NULL;
         virDomainPtr dom = NULL;
+        struct inst_list list;
+        const char *props[] = {NULL};
         struct domain *domain = NULL;
+        inst_list_init(&list);

@@ -1477,18 +1544,40 @@
         CU_DEBUG("System XML:\n%s", xml);

         inst = connect_and_create(xml, ref, s);
-        if (inst != NULL)
+        if (inst != NULL) {
                 update_dominfo(domain, CLASSNAME(ref));
+
+                *s = enum_rasds(_BROKER,
+                                ref,
+                                domain->name,
+                                CIM_RES_TYPE_ALL,
+                                props,
+                                &list);
+
+                if (s->rc != CMPI_RC_OK) {
+                        CU_DEBUG("Failed to enumerate rasd\n");
+                        goto out;
+                }
+
+                raise_rasd_indication(context,
+                                      CREATED,
+                                      NULL,
+                                      ref,
+                                      &list);
+        }

 out:
         cleanup_dominfo(&domain);
         free(xml);
         virDomainFree(dom);
         virConnectClose(conn);
+        inst_list_free(&list);

         return inst;
 }

 static bool trigger_indication(const CMPIContext *context,
                                const char *base_type,
                                const CMPIObjectPath *ref)
@@ -1530,7 +1620,7 @@
         if (s.rc != CMPI_RC_OK)
                 goto out;

-        sys = create_system(vssd, res, reference, refconf, &s);
+        sys = create_system(context, vssd, res, reference, refconf, &s);
         if (sys == NULL)
                 goto out;

@@ -1564,12 +1654,15 @@
         CMPIObjectPath *sys;
         virConnectPtr conn = NULL;
         virDomainPtr dom = NULL;
+        struct inst_list list;
+        const char *props[] = {NULL};
+        inst_list_init(&list);

         conn = connect_by_classname(_BROKER, CLASSNAME(reference), &status);
         if (conn == NULL) {
-                rc = -1;
+                rc = IM_RC_NOT_SUPPORTED;
                 goto error;
         }

@@ -1580,6 +1672,18 @@
         if (dom_name == NULL)
                 goto error;

+        status = enum_rasds(_BROKER,
+                            reference,
+                            dom_name,
+                            CIM_RES_TYPE_ALL,
+                            props,
+                            &list);
+
+        if (status.rc != CMPI_RC_OK) {
+                CU_DEBUG("Failed to enumerate rasd");
+                goto error;
+        }
+
         dom = virDomainLookupByName(conn, dom_name);
         if (dom == NULL) {
                 CU_DEBUG("No such domain `%s'", dom_name);
@@ -1605,11 +1710,17 @@
 error:
         if (rc == IM_RC_SYS_NOT_FOUND)
-                virt_set_status(_BROKER, &status,
+                virt_set_status(_BROKER,
+                                &status,
                                 CMPI_RC_ERR_NOT_FOUND,
                                 conn,
                                 "Referenced domain `%s' does not exist",
                                 dom_name);
+        else if (rc == IM_RC_NOT_SUPPORTED)
+                virt_set_status(_BROKER, &status,
+                                CMPI_RC_ERR_NOT_FOUND,
+                                conn,
+                                "Unable to raise resource indication");
         else if (rc == IM_RC_FAILED)
                 virt_set_status(_BROKER, &status,
                                 CMPI_RC_ERR_NOT_FOUND,
@@ -1617,6 +1728,7 @@
                                 "Unable to retrieve domain name");
         else if (rc == IM_RC_OK) {
                 status = (CMPIStatus){CMPI_RC_OK, NULL};
+                raise_rasd_indication(context, DELETED, NULL, reference, &list);
                 trigger_indication(context,
                                    "ComputerSystemDeletedIndication",
                                    reference);
@@ -1625,7 +1737,7 @@
         virDomainFree(dom);
         virConnectClose(conn);
         CMReturnData(results, &rc, CMPI_uint32);
-
+        inst_list_free(&list);
         return status;
 }

@@ -2071,7 +2183,8 @@
         return s;
 }

-static CMPIStatus _update_resources_for(const CMPIObjectPath *ref,
+static CMPIStatus _update_resources_for(const CMPIContext *context,
+                                        const CMPIObjectPath *ref,
                                         virDomainPtr dom,
                                         const char *devid,
                                         CMPIInstance *rasd,
@@ -2081,7 +2194,14 @@
         struct domain *dominfo = NULL;
         uint16_t type;
         char *xml = NULL;
+        char *indication = NULL;
         CMPIObjectPath *op;
+        struct inst_list list;
+        CMPIInstance *prev_inst = NULL;
+        const char *props[] = {NULL};
+        const char *inst_id;
+        int i, ret;
+        inst_list_init(&list);

         if (!get_dominfo(dom, &dominfo)) {
                 virt_set_status(_BROKER, &s,
@@ -2106,6 +2225,7 @@
                 goto out;
         }

+
         s = func(dominfo, rasd, type, devid, NAMESPACE(ref));
         if (s.rc != CMPI_RC_OK) {
                 CU_DEBUG("Resource transform function failed");
@@ -2116,6 +2236,54 @@
         if (xml != NULL) {
                 CU_DEBUG("New XML:\n%s", xml);
                 connect_and_create(xml, ref, &s);
+
+                if (func == &resource_add) {
+                        indication = strdup(CREATED);
+                }
+                else if (func == &resource_del) {
+                        indication = strdup(DELETED);
+                }
+                else {
+                        indication = strdup(MODIFIED);
+
+                        s = enum_rasds(_BROKER,
+                                       ref,
+                                       dominfo->name,
+                                       type,
+                                       props,
+                                       &list);
+                        if (s.rc != CMPI_RC_OK) {
+                                CU_DEBUG("Failed to enumerate rasd");
+                                goto out;
+                        }
+
+                        for (i = 0; i < list.cur; i++) {
+                                prev_inst = list.list[i];
+                                ret = cu_get_str_prop(prev_inst,
+                                                      "InstanceID",
+                                                      &inst_id);
+
+                                if (ret != CMPI_RC_OK)
+                                        continue;
+
+                                if (STREQ(inst_id,
+                                          get_fq_devid(dominfo->name,
+                                                       (char *)devid)))
+                                        break;
+                        }
+                }
+
+                inst_list_init(&list);
+                if (inst_list_add(&list, rasd) == 0) {
+                        CU_DEBUG("Unable to add RASD instance to the list\n");
+                        goto out;
+                }
+                raise_rasd_indication(context,
+                                      indication,
+                                      prev_inst,
+                                      ref,
+                                      &list);
         } else {
                 cu_statusf(_BROKER, &s,
                            CMPI_RC_ERR_FAILED,
@@ -2125,6 +2294,8 @@
 out:
         cleanup_dominfo(&dominfo);
         free(xml);
+        free(indication);
+        inst_list_free(&list);

         return s;
 }

@@ -2153,7 +2324,8 @@
         return s;
 }

-static CMPIStatus _update_resource_settings(const CMPIObjectPath *ref,
+static CMPIStatus _update_resource_settings(const CMPIContext *context,
+                                            const CMPIObjectPath *ref,
                                             const char *domain,
                                             CMPIArray *resources,
                                             const CMPIResult *results,
@@ -2208,9 +2380,14 @@
                         goto end;
                 }

-                s = _update_resources_for(ref, dom, devid, inst, func);
+                s = _update_resources_for(context,
+                                          ref,
+                                          dom,
+                                          devid,
+                                          inst,
+                                          func);

- end:
+ end:
                 free(name);
                 free(devid);
                 virDomainFree(dom);
@@ -2310,7 +2487,9 @@
                 return s;
         }

-        if (cu_get_ref_arg(argsin, "AffectedConfiguration", &sys) != CMPI_RC_OK) {
+        if (cu_get_ref_arg(argsin,
+                           "AffectedConfiguration",
+                           &sys) != CMPI_RC_OK) {
                 cu_statusf(_BROKER, &s,
                            CMPI_RC_ERR_INVALID_PARAMETER,
                            "Missing AffectedConfiguration parameter");
@@ -2324,11 +2503,13 @@
                 return s;
         }

-        s = _update_resource_settings(reference,
+        s = _update_resource_settings(context,
+                                      reference,
                                       domain,
                                       arr,
                                       results,
                                       resource_add);
+
         free(domain);

         return s;
@@ -2351,7 +2532,8 @@
                 return s;
         }

-        return _update_resource_settings(reference,
+        return _update_resource_settings(context,
+                                         reference,
                                          NULL,
                                          arr,
                                          results,
@@ -2384,7 +2566,8 @@
         if (s.rc != CMPI_RC_OK)
                 goto out;

-        s = _update_resource_settings(reference,
+        s = _update_resource_settings(context,
+                                      reference,
                                       NULL,
                                       resource_arr,
                                       results,

From snmishra at us.ibm.com Thu Sep 10 23:40:03 2009
From: snmishra at us.ibm.com (Sharad Mishra)
Date: Thu, 10 Sep 2009 16:40:03 -0700
Subject: [Libvirt-cim] [PATCH 5 of 6] Add resource indication provider.
Message-ID:

# HG changeset patch
# User snmishra at us.ibm.com
# Date 1252482482 25200
# Node ID 430f148ad7035083035f4ba3a0975e0f43a88196
# Parent 14910082e1d791b092dcb43e067d91b400e09aa2
Add resource indication provider

Signed-off-by: Sharad Mishra

diff -r 14910082e1d7 -r 430f148ad703 src/Virt_ResourceAllocationSettingDataIndication.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/Virt_ResourceAllocationSettingDataIndication.c	Wed Sep 09 00:48:02 2009 -0700
@@ -0,0 +1,155 @@
+/*
+ * Copyright IBM Corp. 2007
+ *
+ * Authors:
+ *  Sharad Mishra
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#include
+#include
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+static const CMPIBroker *_BROKER;
+
+DECLARE_FILTER(xen_created,
+               "Xen_ResourceAllocationSettingDataCreatedIndication");
+DECLARE_FILTER(xen_deleted,
+               "Xen_ResourceAllocationSettingDataDeletedIndication");
+DECLARE_FILTER(xen_modified,
+               "Xen_ResourceAllocationSettingDataModifiedIndication");
+DECLARE_FILTER(kvm_created,
+               "KVM_ResourceAllocationSettingDataCreatedIndication");
+DECLARE_FILTER(kvm_deleted,
+               "KVM_ResourceAllocationSettingDataDeletedIndication");
+DECLARE_FILTER(kvm_modified,
+               "KVM_ResourceAllocationSettingDataModifiedIndication");
+DECLARE_FILTER(lxc_created,
+               "LXC_ResourceAllocationSettingDataCreatedIndication");
+DECLARE_FILTER(lxc_deleted,
+               "LXC_ResourceAllocationSettingDataDeletedIndication");
+DECLARE_FILTER(lxc_modified,
+               "LXC_ResourceAllocationSettingDataModifiedIndication");
+
+static struct std_ind_filter *filters[] = {
+        &xen_created,
+        &xen_deleted,
+        &xen_modified,
+        &kvm_created,
+        &kvm_deleted,
+        &kvm_modified,
+        &lxc_created,
+        &lxc_deleted,
+        &lxc_modified,
+        NULL,
+};
+
+static CMPIStatus raise_indication(const CMPIBroker *broker,
+                                   const CMPIContext *ctx,
+                                   const CMPIInstance *ind)
+{
+        struct std_indication_ctx *_ctx = NULL;
+        CMPIStatus s = {CMPI_RC_OK, NULL};
+        struct ind_args *args = NULL;
+        CMPIObjectPath *ref = NULL;
+
+        _ctx = malloc(sizeof(struct std_indication_ctx));
+        if (_ctx == NULL) {
+                cu_statusf(broker, &s,
+                           CMPI_RC_ERR_FAILED,
+                           "Unable to allocate indication context");
+                goto out;
+        }
+
+        _ctx->brkr = broker;
+        _ctx->handler = NULL;
+        _ctx->filters = filters;
+        _ctx->enabled = 1;
+
+        args = malloc(sizeof(struct ind_args));
+        if (args == NULL) {
+                cu_statusf(broker, &s,
+                           CMPI_RC_ERR_FAILED,
+                           "Unable to allocate ind_args");
+                goto out;
+        }
+
+        ref = CMGetObjectPath(ind, &s);
+        if (ref == NULL) {
+                cu_statusf(broker, &s,
+                           CMPI_RC_ERR_FAILED,
+                           "Got a null object path");
+                goto out;
+        }
+
+        /* FIXME: This is a Pegasus work around. Pegasus loses the namespace
+           when an ObjectPath is pulled from an instance */
+        CMSetNameSpace(ref, "root/virt");
+        args->ns = strdup(NAMESPACE(ref));
+        args->classname = strdup(CLASSNAME(ref));
+        args->_ctx = _ctx;
+
+        s = stdi_deliver(broker, ctx, args, (CMPIInstance *)ind);
+        if (s.rc == CMPI_RC_OK) {
+                CU_DEBUG("Indication delivered");
+        } else {
+                CU_DEBUG("Not delivered: %s", CMGetCharPtr(s.msg));
+        }
+
+ out:
+        return s;
+}
+
+static struct std_indication_handler rasdi = {
+        .raise_fn = raise_indication,
+        .trigger_fn = NULL,
+        .activate_fn = NULL,
+        .deactivate_fn = NULL,
+        .enable_fn = NULL,
+        .disable_fn = NULL,
+};
+
+DEFAULT_IND_CLEANUP();
+DEFAULT_AF();
+DEFAULT_MP();
+
+STDI_IndicationMIStub(,
+                      Virt_ResourceAllocationSettingDataIndicationProvider,
+                      _BROKER,
+                      libvirt_cim_init(),
+                      &rasdi,
+                      filters);
+
+/*
+ * Local Variables:
+ * mode: C
+ * c-set-style: "K&R"
+ * tab-width: 8
+ * c-basic-offset: 8
+ * indent-tabs-mode: nil
+ * End:
+ */

From snmishra at us.ibm.com Thu Sep 10 23:40:31 2009
From: snmishra at us.ibm.com (Sharad Mishra)
Date: Thu, 10 Sep 2009 16:40:31 -0700
Subject: [Libvirt-cim] [PATCH 4 of 6] Modify Virt_RASD so that rasd_from_vdev() can be used by other providers.
Message-ID:

# HG changeset patch
# User snmishra at us.ibm.com
# Date 1252482482 25200
# Node ID 14910082e1d791b092dcb43e067d91b400e09aa2
# Parent 2632c5204a9a6a485f5406ea016957340895f69f
Modify Virt_RASD so that rasd_from_vdev() can be used by other providers

Signed-off-by: Sharad Mishra

diff -r 2632c5204a9a -r 14910082e1d7 src/Virt_RASD.c
--- a/src/Virt_RASD.c	Wed Sep 09 00:48:02 2009 -0700
+++ b/src/Virt_RASD.c	Wed Sep 09 00:48:02 2009 -0700
@@ -368,7 +368,7 @@
         return s;
 }

-static CMPIInstance *rasd_from_vdev(const CMPIBroker *broker,
+CMPIInstance *rasd_from_vdev(const CMPIBroker *broker,
                              struct virt_device *dev,
                              const char *host,
                              const CMPIObjectPath *ref,
diff -r 2632c5204a9a -r 14910082e1d7 src/Virt_RASD.h
--- a/src/Virt_RASD.h	Wed Sep 09 00:48:02 2009 -0700
+++ b/src/Virt_RASD.h	Wed Sep 09 00:48:02 2009 -0700
@@ -66,6 +66,13 @@
                        const uint16_t type,
                        const char *host,
                        struct virt_device **list);
+
+CMPIInstance *rasd_from_vdev(const CMPIBroker *broker,
+                             struct virt_device *dev,
+                             const char *host,
+                             const CMPIObjectPath *ref,
+                             const char **properties);
+
 #endif
 /*

From snmishra at us.ibm.com Thu Sep 10 23:40:59 2009
From: snmishra at us.ibm.com (Sharad Mishra)
Date: Thu, 10 Sep 2009 16:40:59 -0700
Subject: [Libvirt-cim] [PATCH 3 of 6] Modify Virt_CS so set_source_inst_props() can be used by other providers.
Message-ID:

# HG changeset patch
# User snmishra at us.ibm.com
# Date 1252482482 25200
# Node ID 2632c5204a9a6a485f5406ea016957340895f69f
# Parent 639d5782a9f3f195b0ca88878ee169cc0efd0f18
Modify Virt_CS so set_source_inst_props() can be used by other providers

Signed-off-by: Sharad Mishra

diff -r 639d5782a9f3 -r 2632c5204a9a src/Virt_ComputerSystemIndication.c
--- a/src/Virt_ComputerSystemIndication.c	Wed Sep 09 00:48:02 2009 -0700
+++ b/src/Virt_ComputerSystemIndication.c	Wed Sep 09 00:48:02 2009 -0700
@@ -192,9 +192,9 @@
         return ret;
 }

-static void set_source_inst_props(const CMPIBroker *broker,
+void set_source_inst_props(const CMPIBroker *broker,
                            const CMPIContext *context,
-                           CMPIObjectPath *ref,
+                           const CMPIObjectPath *ref,
                            CMPIInstance *ind)
 {
         const char *host;
diff -r 639d5782a9f3 -r 2632c5204a9a src/Virt_ComputerSystemIndication.h
--- a/src/Virt_ComputerSystemIndication.h	Wed Sep 09 00:48:02 2009 -0700
+++ b/src/Virt_ComputerSystemIndication.h	Wed Sep 09 00:48:02 2009 -0700
@@ -29,6 +29,10 @@
                          const CMPIObjectPath *newsystem,
                          char *type);

+void set_source_inst_props(const CMPIBroker *broker,
+                           const CMPIContext *context,
+                           const CMPIObjectPath *ref,
+                           CMPIInstance *ind);
 #endif
 /*

From snmishra at us.ibm.com Thu Sep 10 23:41:23 2009
From: snmishra at us.ibm.com (Sharad Mishra)
Date: Thu, 10 Sep 2009 16:41:23 -0700
Subject: [Libvirt-cim] [PATCH 2 of 6] Add resource indication mof and registration files.
Message-ID:

# HG changeset patch
# User snmishra at us.ibm.com
# Date 1252482482 25200
# Node ID 639d5782a9f3f195b0ca88878ee169cc0efd0f18
# Parent cba20af2b6748dbd0bf32fad6941ae69425694dd
Add resource indication mof and registration files.
Signed-off-by: Sharad Mishra

diff -r cba20af2b674 -r 639d5782a9f3 schema/ResourceAllocationSettingDataIndication.mof
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/schema/ResourceAllocationSettingDataIndication.mof	Wed Sep 09 00:48:02 2009 -0700
@@ -0,0 +1,66 @@
+// Copyright IBM Corp. 2007
+
+[Description ("Xen_ResourceAllocationSettingData created"),
+ Provider("cmpi::Virt_ResourceAllocationSettingDataIndication")
+]
+class Xen_ResourceAllocationSettingDataCreatedIndication : CIM_InstCreation
+{
+};
+
+[Description ("Xen_ResourceAllocationSettingData deleted"),
+ Provider("cmpi::Virt_ResourceAllocationSettingDataIndication")
+]
+class Xen_ResourceAllocationSettingDataDeletedIndication : CIM_InstDeletion
+{
+};
+
+[Description ("Xen_ResourceAllocationSettingData modified"),
+ Provider("cmpi::Virt_ResourceAllocationSettingDataIndication")
+]
+class Xen_ResourceAllocationSettingDataModifiedIndication : CIM_InstModification
+{
+};
+
+[Description ("KVM_ResourceAllocationSettingData created"),
+ Provider("cmpi::Virt_ResourceAllocationSettingDataIndication")
+]
+class KVM_ResourceAllocationSettingDataCreatedIndication : CIM_InstCreation
+{
+};
+
+[Description ("KVM_ResourceAllocationSettingData deleted"),
+ Provider("cmpi::Virt_ResourceAllocationSettingDataIndication")
+]
+class KVM_ResourceAllocationSettingDataDeletedIndication : CIM_InstDeletion
+{
+};
+
+[Description ("KVM_ResourceAllocationSettingData modified"),
+ Provider("cmpi::Virt_ResourceAllocationSettingDataIndication")
+]
+class KVM_ResourceAllocationSettingDataModifiedIndication : CIM_InstModification
+{
+};
+
+[Description ("LXC_ResourceAllocationSettingData created"),
+ Provider("cmpi::Virt_ResourceAllocationSettingDataIndication")
+]
+class LXC_ResourceAllocationSettingDataCreatedIndication : CIM_InstCreation
+{
+};
+
+[Description ("LXC_ResourceAllocationSettingData deleted"),
+ Provider("cmpi::Virt_ResourceAllocationSettingDataIndication")
+]
+class LXC_ResourceAllocationSettingDataDeletedIndication : CIM_InstDeletion
+{
+};
+
+[Description ("LXC_ResourceAllocationSettingData modified"),
+ Provider("cmpi::Virt_ResourceAllocationSettingDataIndication")
+]
+class LXC_ResourceAllocationSettingDataModifiedIndication : CIM_InstModification
+{
+};
diff -r cba20af2b674 -r 639d5782a9f3 schema/ResourceAllocationSettingDataIndication.registration
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/schema/ResourceAllocationSettingDataIndication.registration	Wed Sep 09 00:48:02 2009 -0700
@@ -0,0 +1,11 @@
+# Copyright IBM Corp. 2007
+# Classname Namespace ProviderName ProviderModule ProviderTypes
+Xen_ResourceAllocationSettingDataCreatedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method
+Xen_ResourceAllocationSettingDataDeletedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method
+Xen_ResourceAllocationSettingDataModifiedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method
+KVM_ResourceAllocationSettingDataCreatedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method
+KVM_ResourceAllocationSettingDataDeletedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method
+KVM_ResourceAllocationSettingDataModifiedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method
+LXC_ResourceAllocationSettingDataCreatedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method
+LXC_ResourceAllocationSettingDataDeletedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method
+LXC_ResourceAllocationSettingDataModifiedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method

From snmishra at us.ibm.com Thu Sep 10 23:41:53 2009
From: snmishra at us.ibm.com (Sharad Mishra)
Date: Thu, 10 Sep 2009 16:41:53 -0700
Subject: [Libvirt-cim] [PATCH 1 of 6] Add resource indication feature Makefile changes.
Message-ID:

# HG changeset patch
# User snmishra at us.ibm.com
# Date 1252482482 25200
# Node ID cba20af2b6748dbd0bf32fad6941ae69425694dd
# Parent 234141bf7f0368531c884334b1da5b94cc038758
Add resource indication feature Makefile changes.

MOF and Registration files for the new indication provider were added.
Changes were made to src/Makefile.am to build the new resource indication
provider.

Signed-off-by: Sharad Mishra

diff -r 234141bf7f03 -r cba20af2b674 Makefile.am
--- a/Makefile.am	Thu Sep 03 12:52:47 2009 -0700
+++ b/Makefile.am	Wed Sep 09 00:48:02 2009 -0700
@@ -27,6 +27,7 @@
 	schema/RegisteredProfile.mof \
 	schema/ElementConformsToProfile.mof \
 	schema/ComputerSystemIndication.mof \
+	schema/ResourceAllocationSettingDataIndication.mof \
 	schema/ComputerSystemMigrationIndication.mof \
 	schema/Virt_ResourceAllocationSettingData.mof \
 	schema/ResourceAllocationSettingData.mof \
@@ -101,6 +102,7 @@
 	schema/DiskPool.registration \
 	schema/HostedResourcePool.registration \
 	schema/ComputerSystemIndication.registration \
+	schema/ResourceAllocationSettingDataIndication.registration \
 	schema/ComputerSystemMigrationIndication.registration \
 	schema/ResourceAllocationSettingData.registration \
 	schema/ResourcePoolConfigurationService.registration \
diff -r 234141bf7f03 -r cba20af2b674 src/Makefile.am
--- a/src/Makefile.am	Thu Sep 03 12:52:47 2009 -0700
+++ b/src/Makefile.am	Wed Sep 09 00:48:02 2009 -0700
@@ -48,6 +48,7 @@
 	libVirt_VirtualSystemSnapshotServiceCapabilities.la \
 	libVirt_SystemDevice.la \
 	libVirt_ComputerSystemIndication.la \
+	libVirt_ResourceAllocationSettingDataIndication.la \
 	libVirt_ComputerSystemMigrationIndication.la \
 	libVirt_VirtualSystemManagementCapabilities.la \
 	libVirt_AllocationCapabilities.la \
@@ -86,6 +87,10 @@
 libVirt_ComputerSystemIndication_la_SOURCES = Virt_ComputerSystemIndication.c
 libVirt_ComputerSystemIndication_la_LIBADD = -lVirt_ComputerSystem -lVirt_HostSystem -lpthread -lrt

+libVirt_ResourceAllocationSettingDataIndication_la_DEPENDENCIES = libVirt_ComputerSystem.la
+libVirt_ResourceAllocationSettingDataIndication_la_SOURCES = Virt_ResourceAllocationSettingDataIndication.c
+libVirt_ResourceAllocationSettingDataIndication_la_LIBADD = -lVirt_ComputerSystem
+
 libVirt_ComputerSystemMigrationIndication_la_DEPENDENCIES = libVirt_ComputerSystem.la
 libVirt_ComputerSystemMigrationIndication_la_SOURCES = Virt_ComputerSystemMigrationIndication.c
 libVirt_ComputerSystemMigrationIndication_la_LIBADD = -lVirt_ComputerSystem

From kaitlin at linux.vnet.ibm.com Thu Sep 10 23:56:50 2009
From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert)
Date: Thu, 10 Sep 2009 16:56:50 -0700
Subject: [Libvirt-cim] Test Run Summary (Sep 10 2009): KVM on Fedora release 11 (Leonidas) with sfcb
In-Reply-To: <200909102315.n8ANFa0p027951@d01av04.pok.ibm.com>
References: <200909102315.n8ANFa0p027951@d01av04.pok.ibm.com>
Message-ID: <4AA99242.6010003@linux.vnet.ibm.com>

> --------------------------------------------------------------------
> VirtualSystemSnapshotServiceCapabilities - 01_enum.py: FAIL
> ERROR - KVM_VirtualSystemSnapshotServiceCapabilities return 0 instances, excepted only 1 instance
> Provider not found or not loadable
> --------------------------------------------------------------------

Looks like a setup issue on this system. Will resend this test run.
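"Provider not found or not loadable" failures like the one quoted above usually point at registration rather than provider code: the class must be listed in a .registration file that was actually installed into the CIMOM. The five-column format used in this patch series can be checked mechanically. The helper below is hypothetical (not part of cimtest); the sample line is taken from the ResourceAllocationSettingDataIndication registration file:

```python
# Hypothetical checker for libvirt-cim style .registration files:
# five whitespace-separated columns, '#' starts a comment line.
SAMPLE = """\
# Classname Namespace ProviderName ProviderModule ProviderTypes
KVM_ResourceAllocationSettingDataCreatedIndication root/virt \
Virt_ResourceAllocationSettingDataIndicationProvider \
Virt_ResourceAllocationSettingDataIndication indication method
"""

def parse_registration(text):
    """Return {classname: {namespace, provider, module, types}}."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        cls, ns, provider, module = fields[:4]
        entries[cls] = {"namespace": ns,
                        "provider": provider,
                        "module": module,
                        "types": fields[4:]}
    return entries

regs = parse_registration(SAMPLE)
cls = "KVM_ResourceAllocationSettingDataCreatedIndication"
assert regs[cls]["namespace"] == "root/virt"
assert "indication" in regs[cls]["types"]
```

If a class a test expects is missing from the parsed set, the provider was never registered, which produces exactly the "not found or not loadable" symptom regardless of whether the shared library built correctly.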
-- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Fri Sep 11 11:40:28 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Fri, 11 Sep 2009 17:10:28 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Adding verification for DestroySystem() of the domain In-Reply-To: <4AA96C43.5090005@linux.vnet.ibm.com> References: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> <4AA96C43.5090005@linux.vnet.ibm.com> Message-ID: <4AAA372C.4090102@linux.vnet.ibm.com> Kaitlin Rupert wrote: > Deepti B. Kalakeri wrote: >> # HG changeset patch >> # User Deepti B. Kalakeri >> # Date 1252590021 14400 >> # Node ID 53b05fc42fbc04ce45eea4a09ad84881fbcf6d3e >> # Parent 30196cc506c07d81642c94a01fc65b34421c0714 >> [TEST] Adding verification for DestroySystem() of the domain. >> >> Tested with KVM and current sources on SLES11. >> Signed-off-by: Deepti B. Kalakeri >> >> diff -r 30196cc506c0 -r 53b05fc42fbc >> suites/libvirt-cim/cimtest/VirtualSystemManagementService/02_destroysystem.py >> > > I get the following failure: > > Starting test suite: libvirt-cim > Cleaned log files. > > Testing KVM hypervisor > -------------------------------------------------------------------- > VirtualSystemManagementService - 02_destroysystem.py: FAIL > ERROR - CS instance not returned for test_domain. 
> ERROR - RequestedState for dom 'test_domain' is not '3' > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not found > Referenced domain `test_domain' does not exist: Domain not 
found > Referenced domain `test_domain' does not exist: Domain not found > -------------------------------------------------------------------- > > > However, the test passes for me if the patch isn't applied. Yes! This test fails with the changes because DestroySystem() is not just destroying the domain but also undefining it. The VSMS/15*py tc with the new changes also fails for the same reason. I am not sure if you got a chance to look at the comments to "#2 Add try / except to VSMS 15" patch. Should I make a note of this on libvirt.org and XFAIL this test? -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Fri Sep 11 12:10:26 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Fri, 11 Sep 2009 17:40:26 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Add timestamps to main.py to calculate run time of tests In-Reply-To: <2d852ba88fd24102ec98.1252022739@elm3b151.beaverton.ibm.com> References: <2d852ba88fd24102ec98.1252022739@elm3b151.beaverton.ibm.com> Message-ID: <4AAA3E32.3030003@linux.vnet.ibm.com> Good one. Kaitlin Rupert wrote: > # HG changeset patch > # User Kaitlin Rupert > # Date 1252022738 25200 > # Node ID 2d852ba88fd24102ec988145e464a13f5faae5c0 > # Parent db3af9cb2c9affb0a32a8ea3a2c23648c5efe91e > [TEST] Add timestamps to main.py to calculate run time of tests > > These changes allow the user to specify the --print-exec-time flag, which will > print the execution time of each test. If this flag isn't specified, the > total run time of the test is still printed.
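The timing scheme this patch describes — capture `time.time()` before and after each test command and accumulate the difference into a running total — can be sketched in isolation. This is a minimal sketch only: `run_timed` is an illustrative name, not a function from main.py (which invokes tests through Python 2's `commands.getstatusoutput`).

```python
from time import time
import subprocess

def run_timed(cmd):
    """Run a shell command; return (exit status, stdout, elapsed seconds)."""
    start = time()
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    elapsed = time() - start
    return proc.returncode, proc.stdout, elapsed

# Accumulate a suite total the same way the patch accumulates
# test_run_time_total across the test loop.
total = 0.0
for cmd in ["echo one", "echo two"]:
    status, output, elapsed = run_timed(cmd)
    total += elapsed
```

The per-test deltas are summed rather than timing the whole loop once, so the total excludes any work done outside the test commands themselves.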
> > Signed-off-by: Kaitlin Rupert > > diff -r db3af9cb2c9a -r 2d852ba88fd2 suites/libvirt-cim/main.py > --- a/suites/libvirt-cim/main.py Thu Sep 03 13:03:52 2009 -0700 > +++ b/suites/libvirt-cim/main.py Thu Sep 03 17:05:38 2009 -0700 > @@ -22,6 +22,7 @@ > # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA > # > > +from time import time > from optparse import OptionParser > import os > import sys > @@ -64,6 +65,9 @@ > help="Duplicate the output to stderr") > parser.add_option("--report", dest="report", > help="Send report using mail info: --report=") > +parser.add_option("--print-exec-time", action="store_true", > + dest="print_exec_time", > + help="Print execution time of each test") > > TEST_SUITE = 'cimtest' > CIMTEST_RCFILE = '%s/.cimtestrc' % os.environ['HOME'] > @@ -146,6 +150,27 @@ > > return PASS > > +def print_exec_time(testsuite, exec_time): > + > + #Convert run time from seconds to hours > + tmp = exec_time / (60 * 60) > + h = int(tmp) > + > + #Subtract out hours and convert remainder to minutes > + tmp = (tmp - h) * 60 > + m = int(tmp) > + > + #Subtract out minutes and convert remainder to seconds > + tmp = (tmp - m) * 60 > + s = int(tmp) > + > + #Subtract out seconds and convert remainder to milliseconds > + tmp = (tmp - s) * 1000 > + msec = int(tmp) > + > + testsuite.debug(" Execution time: %sh %smin %ssec %smsec" % > + (h, m, s, msec)) > You can remove the blank space from the above log, so that the message is aligned with the test case log messages. You can also include some delimiters between the time values to make them clearer; printing the hr, min, sec labels as H, MIN, SEC would also be good. Something like this: testsuite.debug(" ---------------------------") testsuite.debug("Execution time: %sh | %smin |%ssec |%smsec|" % (h, m, s, msec)) This will print the information in the following format.
Starting test suite: libvirt-cim -------------------------------------------------------------------- ComputerSystem - 04_defineStartVS.py: PASS --------------------------- Execution time: 0H | 0MIN |1SEC |638MSEC| -------------------------------------------------------------------- Total test execution: --------------------------- Execution time: 0H | 0MIN |1SEC |638MSEC| Testing KVM hypervisor -------------------------------------------------------------------- ComputerSystem - 04_defineStartVS.py: PASS --------------------------- Execution time: 0h | 0min |1sec |663msec| -------------------------------------------------------------------- Total test execution: --------------------------- Execution time: 0h | 0min |1sec |663msec| Do we require the milliseconds information? Can we print the total time as part of the Summary information in the test run report? Otherwise we will have to go to the bottom of the results to find the total time details. > + > def main(): > (options, args) = parser.parse_args() > to_addr = None > @@ -213,6 +238,8 @@ > > print "\nTesting " + options.virt + " hypervisor" > > + test_run_time_total = 0 > + > for test in test_list: > testsuite.debug(div) > t_path = os.path.join(TEST_SUITE, test['group']) > @@ -222,13 +249,25 @@ > options.virt, dbg, > options.t_url) > cmd = cdto + ' && ' + ' ' + run > + start_time = time() > status, output = commands.getstatusoutput(cmd) > + end_time = time() > > os_status = os.WEXITSTATUS(status) > > testsuite.print_results(test['group'], test['test'], os_status, output) > > + exec_time = end_time - start_time > + test_run_time_total = test_run_time_total + exec_time > + > + if options.print_exec_time: > + print_exec_time(testsuite, exec_time) > + > testsuite.debug("%s\n" % div) > + testsuite.debug("Total test execution: ") > + print_exec_time(testsuite, test_run_time_total) > + testsuite.debug("\n") > + > testsuite.finish() > > status = cleanup_env(options.ip, options.virt) > >
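As a footnote to the hour/minute/second arithmetic reviewed above, the same breakdown can be written with `divmod`, which avoids the repeated floating-point subtraction in print_exec_time. This is a sketch only, not part of the patch; `format_exec_time` is an illustrative name.

```python
def format_exec_time(exec_time):
    """Break a duration in seconds into h/min/sec/msec and format it."""
    # Fractional part becomes milliseconds
    msec = int(round((exec_time - int(exec_time)) * 1000))
    # divmod peels off seconds, then minutes, from the whole-second count
    m, s = divmod(int(exec_time), 60)
    h, m = divmod(m, 60)
    return "%sh %smin %ssec %smsec" % (h, m, s, msec)

print(format_exec_time(3723.5))  # 1h 2min 3sec 500msec
```

Because `divmod` works on the integer second count, each field is exact and no rounding error accumulates across the hour/minute/second conversions.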
_______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim > -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Fri Sep 11 15:07:50 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Fri, 11 Sep 2009 08:07:50 -0700 Subject: [Libvirt-cim] [PATCH 0 of 6] Add support for resource indication provider. In-Reply-To: References: Message-ID: <4AAA67C6.5050602@linux.vnet.ibm.com> Patch 1, 3, 4, and 6 fail to apply for me. Sharad Mishra wrote: > Add support for resource indication provider. > > This provider add support for raising resource indication whenever > resource(s) are created, deleted or modified. > > Signed-off-by: Sharad Mishra > > > ------------------------------------------------------------------------ > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From rmaciel at linux.vnet.ibm.com Fri Sep 11 15:12:46 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Fri, 11 Sep 2009 12:12:46 -0300 Subject: [Libvirt-cim] [PATCH 0 of 6] Add support for resource indication provider. In-Reply-To: <4AAA67C6.5050602@linux.vnet.ibm.com> References: <4AAA67C6.5050602@linux.vnet.ibm.com> Message-ID: <4AAA68EE.4070508@linux.vnet.ibm.com> On 09/11/2009 12:07 PM, Kaitlin Rupert wrote: > Patch 1, 3, 4, and 6 fail to apply for me. Just to complement, you probably need to update your local repository > > > Sharad Mishra wrote: >> Add support for resource indication provider. >> >> This provider add support for raising resource indication whenever >> resource(s) are created, deleted or modified. 
>> >> Signed-off-by: Sharad Mishra >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> Libvirt-cim mailing list >> Libvirt-cim at redhat.com >> https://www.redhat.com/mailman/listinfo/libvirt-cim > > -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com From snmishra at us.ibm.com Fri Sep 11 16:50:49 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Fri, 11 Sep 2009 09:50:49 -0700 Subject: [Libvirt-cim] [PATCH 0 of 6] Add a new resource indication provider. Message-ID: This provider will raise resource indication whenever resource(s) are created, deleted or modified. Signed-off-by: Sharad Mishra From snmishra at us.ibm.com Fri Sep 11 16:50:51 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Fri, 11 Sep 2009 09:50:51 -0700 Subject: [Libvirt-cim] [PATCH 2 of 6] Modify Virt_CS so set_source-inst_props() can be used by other providers In-Reply-To: References: Message-ID: <44e2c3144f199c7e552e.1252687851@elm3b24.beaverton.ibm.com> # HG changeset patch # User snmishra at us.ibm.com # Date 1252684847 25200 # Node ID 44e2c3144f199c7e552e3f5066186289b424b5db # Parent 92570a0539103628c8ccf0166983e9d85bb7431d Modify Virt_CS so set_source-inst_props() can be used by other providers. 
Signed-off-by: Sharad Mishra diff -r 92570a053910 -r 44e2c3144f19 src/Virt_ComputerSystemIndication.c --- a/src/Virt_ComputerSystemIndication.c Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_ComputerSystemIndication.c Fri Sep 11 09:00:47 2009 -0700 @@ -192,9 +192,9 @@ return ret; } -static void set_source_inst_props(const CMPIBroker *broker, +void set_source_inst_props(const CMPIBroker *broker, const CMPIContext *context, - CMPIObjectPath *ref, + const CMPIObjectPath *ref, CMPIInstance *ind) { const char *host; diff -r 92570a053910 -r 44e2c3144f19 src/Virt_ComputerSystemIndication.h --- a/src/Virt_ComputerSystemIndication.h Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_ComputerSystemIndication.h Fri Sep 11 09:00:47 2009 -0700 @@ -29,6 +29,10 @@ const CMPIObjectPath *newsystem, char *type); +void set_source_inst_props(const CMPIBroker *broker, + const CMPIContext *context, + const CMPIObjectPath *ref, + CMPIInstance *ind); #endif /* From snmishra at us.ibm.com Fri Sep 11 16:50:52 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Fri, 11 Sep 2009 09:50:52 -0700 Subject: [Libvirt-cim] [PATCH 3 of 6] Modify Virt_RASD so that rasd_from_vdev() can be used by other providers In-Reply-To: References: Message-ID: <74607c71855e6baeeb49.1252687852@elm3b24.beaverton.ibm.com> # HG changeset patch # User snmishra at us.ibm.com # Date 1252684847 25200 # Node ID 74607c71855e6baeeb49bbc134b773acc39675fb # Parent 44e2c3144f199c7e552e3f5066186289b424b5db Modify Virt_RASD so that rasd_from_vdev() can be used by other providers. 
Signed-off-by: Sharad Mishra diff -r 44e2c3144f19 -r 74607c71855e src/Virt_RASD.c --- a/src/Virt_RASD.c Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_RASD.c Fri Sep 11 09:00:47 2009 -0700 @@ -368,7 +368,7 @@ return s; } -static CMPIInstance *rasd_from_vdev(const CMPIBroker *broker, +CMPIInstance *rasd_from_vdev(const CMPIBroker *broker, struct virt_device *dev, const char *host, const CMPIObjectPath *ref, diff -r 44e2c3144f19 -r 74607c71855e src/Virt_RASD.h --- a/src/Virt_RASD.h Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_RASD.h Fri Sep 11 09:00:47 2009 -0700 @@ -66,6 +66,13 @@ const uint16_t type, const char *host, struct virt_device **list); + +CMPIInstance *rasd_from_vdev(const CMPIBroker *broker, + struct virt_device *dev, + const char *host, + const CMPIObjectPath *ref, + const char **properties); + #endif /* From snmishra at us.ibm.com Fri Sep 11 16:50:55 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Fri, 11 Sep 2009 09:50:55 -0700 Subject: [Libvirt-cim] [PATCH 6 of 6] Add resource indication provider In-Reply-To: References: Message-ID: <43076113ae79f638317b.1252687855@elm3b24.beaverton.ibm.com> # HG changeset patch # User snmishra at us.ibm.com # Date 1252687805 25200 # Node ID 43076113ae79f638317b0b7a0669c59a864e7904 # Parent 18b62ae07a118517ae81529fa8a9c10757a02a9b Add resource indication provider. Signed-off-by: Sharad Mishra diff -r 18b62ae07a11 -r 43076113ae79 src/Virt_ResourceAllocationSettingDataIndication.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/src/Virt_ResourceAllocationSettingDataIndication.c Fri Sep 11 09:50:05 2009 -0700 @@ -0,0 +1,155 @@ +/* + * Copyright IBM Corp. 2007 + * + * Authors: + * Sharad Mishra + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. 
+ * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ +#include +#include + +#include +#include +#include + +#include +#include +#include +#include +#include + +static const CMPIBroker *_BROKER; + +DECLARE_FILTER(xen_created, + "Xen_ResourceAllocationSettingDataCreatedIndication"); +DECLARE_FILTER(xen_deleted, + "Xen_ResourceAllocationSettingDataDeletedIndication"); +DECLARE_FILTER(xen_modified, + "Xen_ResourceAllocationSettingDataModifiedIndication"); +DECLARE_FILTER(kvm_created, + "KVM_ResourceAllocationSettingDataCreatedIndication"); +DECLARE_FILTER(kvm_deleted, + "KVM_ResourceAllocationSettingDataDeletedIndication"); +DECLARE_FILTER(kvm_modified, + "KVM_ResourceAllocationSettingDataModifiedIndication"); +DECLARE_FILTER(lxc_created, + "LXC_ResourceAllocationSettingDataCreatedIndication"); +DECLARE_FILTER(lxc_deleted, + "LXC_ResourceAllocationSettingDataDeletedIndication"); +DECLARE_FILTER(lxc_modified, + "LXC_ResourceAllocationSettingDataModifiedIndication"); + +static struct std_ind_filter *filters[] = { + &xen_created, + &xen_deleted, + &xen_modified, + &kvm_created, + &kvm_deleted, + &kvm_modified, + &lxc_created, + &lxc_deleted, + &lxc_modified, + NULL, +}; + + +static CMPIStatus raise_indication(const CMPIBroker *broker, + const CMPIContext *ctx, + const CMPIInstance *ind) +{ + struct std_indication_ctx *_ctx = NULL; + CMPIStatus s = {CMPI_RC_OK, NULL}; + struct ind_args *args = NULL; + CMPIObjectPath *ref = NULL; + + _ctx = malloc(sizeof(struct std_indication_ctx)); + if (_ctx == NULL) { + cu_statusf(broker, &s, + 
CMPI_RC_ERR_FAILED, + "Unable to allocate indication context"); + goto out; + } + + _ctx->brkr = broker; + _ctx->handler = NULL; + _ctx->filters = filters; + _ctx->enabled = 1; + + args = malloc(sizeof(struct ind_args)); + if (args == NULL) { + cu_statusf(broker, &s, + CMPI_RC_ERR_FAILED, + "Unable to allocate ind_args"); + goto out; + } + + ref = CMGetObjectPath(ind, &s); + if (ref == NULL) { + cu_statusf(broker, &s, + CMPI_RC_ERR_FAILED, + "Got a null object path"); + goto out; + } + + /* FIXME: This is a Pegasus work around. Pegsus loses the namespace + when an ObjectPath is pulled from an instance */ + + + CMSetNameSpace(ref, "root/virt"); + args->ns = strdup(NAMESPACE(ref)); + args->classname = strdup(CLASSNAME(ref)); + args->_ctx = _ctx; + + s = stdi_deliver(broker, ctx, args, (CMPIInstance *)ind); + if (s.rc == CMPI_RC_OK) { + CU_DEBUG("Indication delivered"); + } else { + CU_DEBUG("Not delivered: %s", CMGetCharPtr(s.msg)); + } + + out: + return s; +} + +static struct std_indication_handler rasdi = { + .raise_fn = raise_indication, + .trigger_fn = NULL, + .activate_fn = NULL, + .deactivate_fn = NULL, + .enable_fn = NULL, + .disable_fn = NULL, +}; + +DEFAULT_IND_CLEANUP(); +DEFAULT_AF(); +DEFAULT_MP(); + +STDI_IndicationMIStub(, + Virt_ResourceAllocationSettingDataIndicationProvider, + _BROKER, + libvirt_cim_init(), + &rasdi, + filters); + +/* + * Local Variables: + * mode: C + * c-set-style: "K&R" + * tab-width: 8 + * c-basic-offset: 8 + * indent-tabs-mode: nil + * End: + */ From snmishra at us.ibm.com Fri Sep 11 16:50:53 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Fri, 11 Sep 2009 09:50:53 -0700 Subject: [Libvirt-cim] [PATCH 4 of 6] Support for resource indication was added to Virt_VirtualSystemManagementService In-Reply-To: References: Message-ID: # HG changeset patch # User snmishra at us.ibm.com # Date 1252684847 25200 # Node ID dd96ae8f1ec71af57012fcb37932d6a7b1f270fa # Parent 74607c71855e6baeeb49bbc134b773acc39675fb Support for resource 
indication was added to Virt_VirtualSystemManagementService Code added to call resource indication when resources are added or deleted or modified. Signed-off-by: Sharad Mishra diff -r 74607c71855e -r dd96ae8f1ec7 src/Virt_VirtualSystemManagementService.c --- a/src/Virt_VirtualSystemManagementService.c Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_VirtualSystemManagementService.c Fri Sep 11 09:00:47 2009 -0700 @@ -63,6 +63,9 @@ #define BRIDGE_TYPE "bridge" #define NETWORK_TYPE "network" #define USER_TYPE "user" +#define CREATED "ResourceAllocationSettingDataCreatedIndication" +#define DELETED "ResourceAllocationSettingDataDeletedIndication" +#define MODIFIED "ResourceAllocationSettingDataModifiedIndication" const static CMPIBroker *_BROKER; @@ -442,7 +445,7 @@ ret = cu_get_str_prop(inst, "VirtualSystemIdentifier", &val); if (ret != CMPI_RC_OK) goto out; - + free(domain->name); domain->name = strdup(val); @@ -1416,7 +1419,69 @@ return s; } -static CMPIInstance *create_system(CMPIInstance *vssd, +static CMPIStatus raise_rasd_indication(const CMPIContext *context, + const char *base_type, + CMPIInstance *prev_inst, + const CMPIObjectPath *ref, + struct inst_list *list) +{ + char *type; + CMPIStatus s = {CMPI_RC_OK, NULL}; + CMPIInstance *instc = NULL; + CMPIInstance *ind = NULL; + CMPIObjectPath *op = NULL; + int i; + + CU_DEBUG("raise_rasd_indication"); + + type = get_typed_class(CLASSNAME(ref), base_type); + ind = get_typed_instance(_BROKER, + CLASSNAME(ref), + base_type, + NAMESPACE(ref)); + if (ind == NULL) { + CU_DEBUG("Failed to get indication instance"); + s.rc = CMPI_RC_ERR_FAILED; + goto out; + } + + /* PreviousInstance is set only for modify case. 
*/ + if (prev_inst != NULL) + CMSetProperty(ind, + "PreviousInstance", + (CMPIValue *)&prev_inst, + CMPI_instance); + + for (i=0; i < list->cur; i++) { + instc = list->list[i]; + op = CMGetObjectPath(instc, NULL); + CMPIString *str = CMGetClassName(op, NULL); + + CU_DEBUG("class name is %s\n", CMGetCharsPtr(str, NULL)); + + CMSetProperty(ind, + "SourceInstance", + (CMPIValue *)&instc, + CMPI_instance); + set_source_inst_props(_BROKER, context, ref, ind); + + s = stdi_raise_indication(_BROKER, + context, + type, + NAMESPACE(ref), + ind); + } + +out: + free(type); + return s; + +} + + + +static CMPIInstance *create_system(const CMPIContext *context, + CMPIInstance *vssd, CMPIArray *resources, const CMPIObjectPath *ref, const CMPIObjectPath *refconf, @@ -1427,9 +1492,13 @@ const char *msg = NULL; virConnectPtr conn = NULL; virDomainPtr dom = NULL; + struct inst_list list; + const char *props[] = {NULL}; struct domain *domain = NULL; + inst_list_init(&list); + if (refconf != NULL) { *s = get_reference_domain(&domain, ref, refconf); if (s->rc != CMPI_RC_OK) @@ -1477,18 +1546,40 @@ CU_DEBUG("System XML:\n%s", xml); inst = connect_and_create(xml, ref, s); - if (inst != NULL) + if (inst != NULL) { update_dominfo(domain, CLASSNAME(ref)); + *s = enum_rasds(_BROKER, + ref, + domain->name, + CIM_RES_TYPE_ALL, + props, + &list); + + if (s->rc != CMPI_RC_OK) { + CU_DEBUG("Failed to enumerate rasd\n"); + goto out; + } + + raise_rasd_indication(context, + CREATED, + NULL, + ref, + &list); + } + + out: cleanup_dominfo(&domain); free(xml); virDomainFree(dom); virConnectClose(conn); + inst_list_free(&list); return inst; } + static bool trigger_indication(const CMPIContext *context, const char *base_type, const CMPIObjectPath *ref) @@ -1530,7 +1621,7 @@ if (s.rc != CMPI_RC_OK) goto out; - sys = create_system(vssd, res, reference, refconf, &s); + sys = create_system(context, vssd, res, reference, refconf, &s); if (sys == NULL) goto out; @@ -1564,12 +1655,15 @@ CMPIObjectPath *sys; 
virConnectPtr conn = NULL; virDomainPtr dom = NULL; + struct inst_list list; + const char *props[] = {NULL}; + inst_list_init(&list); conn = connect_by_classname(_BROKER, CLASSNAME(reference), &status); if (conn == NULL) { - rc = -1; + rc = IM_RC_NOT_SUPPORTED; goto error; } @@ -1580,6 +1674,18 @@ if (dom_name == NULL) goto error; + status = enum_rasds(_BROKER, + reference, + dom_name, + CIM_RES_TYPE_ALL, + props, + &list); + + if (status.rc != CMPI_RC_OK) { + CU_DEBUG("Failed to enumerate rasd"); + goto error; + } + dom = virDomainLookupByName(conn, dom_name); if (dom == NULL) { CU_DEBUG("No such domain `%s'", dom_name); @@ -1605,11 +1711,17 @@ error: if (rc == IM_RC_SYS_NOT_FOUND) - virt_set_status(_BROKER, &status, + virt_set_status(_BROKER, + &status, CMPI_RC_ERR_NOT_FOUND, conn, "Referenced domain `%s' does not exist", dom_name); + else if (rc == IM_RC_NOT_SUPPORTED) + virt_set_status(_BROKER, &status, + CMPI_RC_ERR_NOT_FOUND, + conn, + "Unable to raise resource indication"); else if (rc == IM_RC_FAILED) virt_set_status(_BROKER, &status, CMPI_RC_ERR_NOT_FOUND, @@ -1617,6 +1729,7 @@ "Unable to retrieve domain name"); else if (rc == IM_RC_OK) { status = (CMPIStatus){CMPI_RC_OK, NULL}; + raise_rasd_indication(context, DELETED, NULL, reference, &list); trigger_indication(context, "ComputerSystemDeletedIndication", reference); @@ -1625,7 +1738,7 @@ virDomainFree(dom); virConnectClose(conn); CMReturnData(results, &rc, CMPI_uint32); - + inst_list_free(&list); return status; } @@ -2071,7 +2184,8 @@ return s; } -static CMPIStatus _update_resources_for(const CMPIObjectPath *ref, +static CMPIStatus _update_resources_for(const CMPIContext *context, + const CMPIObjectPath *ref, virDomainPtr dom, const char *devid, CMPIInstance *rasd, @@ -2081,8 +2195,15 @@ struct domain *dominfo = NULL; uint16_t type; char *xml = NULL; + char *indication = NULL; CMPIObjectPath *op; + struct inst_list list; + CMPIInstance *prev_inst = NULL; + const char *props[] = {NULL}; + const char 
*inst_id; + int i, ret; + inst_list_init(&list); if (!get_dominfo(dom, &dominfo)) { virt_set_status(_BROKER, &s, CMPI_RC_ERR_FAILED, @@ -2106,6 +2227,7 @@ goto out; } + s = func(dominfo, rasd, type, devid, NAMESPACE(ref)); if (s.rc != CMPI_RC_OK) { CU_DEBUG("Resource transform function failed"); @@ -2116,6 +2238,54 @@ if (xml != NULL) { CU_DEBUG("New XML:\n%s", xml); connect_and_create(xml, ref, &s); + + if (func == &resource_add) { + indication = strdup(CREATED); + } + else if (func == &resource_del) { + indication = strdup(DELETED); + } + else { + indication = strdup(MODIFIED); + + s = enum_rasds(_BROKER, + ref, + dominfo->name, + type, + props, + &list); + if (s.rc != CMPI_RC_OK) { + CU_DEBUG("Failed to enumerate rasd"); + goto out; + } + + for(i=0; i < list.cur; i++) { + prev_inst = list.list[i]; + ret = cu_get_str_prop(prev_inst, + "InstanceID", + &inst_id); + + if (ret != CMPI_RC_OK) + continue; + + if (STREQ(inst_id, + get_fq_devid(dominfo->name, + (char *)devid))) + break; + } + + } + + inst_list_init(&list); + if (inst_list_add(&list, rasd) == 0) { + CU_DEBUG("Unable to add RASD instance to the list\n"); + goto out; + } + raise_rasd_indication(context, + indication, + prev_inst, + ref, + &list); } else { cu_statusf(_BROKER, &s, CMPI_RC_ERR_FAILED, @@ -2125,6 +2295,8 @@ out: cleanup_dominfo(&dominfo); free(xml); + free(indication); + inst_list_free(&list); return s; } @@ -2153,7 +2325,8 @@ return s; } -static CMPIStatus _update_resource_settings(const CMPIObjectPath *ref, +static CMPIStatus _update_resource_settings(const CMPIContext *context, + const CMPIObjectPath *ref, const char *domain, CMPIArray *resources, const CMPIResult *results, @@ -2208,9 +2381,14 @@ goto end; } - s = _update_resources_for(ref, dom, devid, inst, func); + s = _update_resources_for(context, + ref, + dom, + devid, + inst, + func); - end: + end: free(name); free(devid); virDomainFree(dom); @@ -2310,7 +2488,9 @@ return s; } - if (cu_get_ref_arg(argsin, "AffectedConfiguration", &sys) 
!= CMPI_RC_OK) { + if (cu_get_ref_arg(argsin, + "AffectedConfiguration", + &sys) != CMPI_RC_OK) { cu_statusf(_BROKER, &s, CMPI_RC_ERR_INVALID_PARAMETER, "Missing AffectedConfiguration parameter"); @@ -2324,11 +2504,13 @@ return s; } - s = _update_resource_settings(reference, + s = _update_resource_settings(context, + reference, domain, arr, results, resource_add); + free(domain); return s; @@ -2351,7 +2533,8 @@ return s; } - return _update_resource_settings(reference, + return _update_resource_settings(context, + reference, NULL, arr, results, @@ -2384,7 +2567,8 @@ if (s.rc != CMPI_RC_OK) goto out; - s = _update_resource_settings(reference, + s = _update_resource_settings(context, + reference, NULL, resource_arr, results, From snmishra at us.ibm.com Fri Sep 11 16:50:50 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Fri, 11 Sep 2009 09:50:50 -0700 Subject: [Libvirt-cim] [PATCH 1 of 6] Add resource indication feature Makefile changes In-Reply-To: References: Message-ID: <92570a0539103628c8cc.1252687850@elm3b24.beaverton.ibm.com> # HG changeset patch # User snmishra at us.ibm.com # Date 1252684847 25200 # Node ID 92570a0539103628c8ccf0166983e9d85bb7431d # Parent f4e1a60c1d64888c6f8e53c9ed4ea15651825a69 Add resource indication feature Makefile changes. MOF and Registration files for the resource indication provider were added. Changes were made to src/Makefile.am to build resource indication provider. 
Signed-off-by: Sharad Mishra diff -r f4e1a60c1d64 -r 92570a053910 Makefile.am --- a/Makefile.am Fri Sep 04 14:12:46 2009 -0700 +++ b/Makefile.am Fri Sep 11 09:00:47 2009 -0700 @@ -27,6 +27,7 @@ schema/RegisteredProfile.mof \ schema/ElementConformsToProfile.mof \ schema/ComputerSystemIndication.mof \ + schema/ResourceAllocationSettingDataIndication.mof \ schema/ComputerSystemMigrationIndication.mof \ schema/Virt_ResourceAllocationSettingData.mof \ schema/ResourceAllocationSettingData.mof \ @@ -101,6 +102,7 @@ schema/DiskPool.registration \ schema/HostedResourcePool.registration \ schema/ComputerSystemIndication.registration \ + schema/ResourceAllocationSettingDataIndication.registration \ schema/ComputerSystemMigrationIndication.registration \ schema/ResourceAllocationSettingData.registration \ schema/ResourcePoolConfigurationService.registration \ diff -r f4e1a60c1d64 -r 92570a053910 src/Makefile.am --- a/src/Makefile.am Fri Sep 04 14:12:46 2009 -0700 +++ b/src/Makefile.am Fri Sep 11 09:00:47 2009 -0700 @@ -48,6 +48,7 @@ libVirt_VirtualSystemSnapshotServiceCapabilities.la \ libVirt_SystemDevice.la \ libVirt_ComputerSystemIndication.la \ + libVirt_ResourceAllocationSettingDataIndication.la \ libVirt_ComputerSystemMigrationIndication.la \ libVirt_VirtualSystemManagementCapabilities.la \ libVirt_AllocationCapabilities.la \ @@ -86,6 +87,10 @@ libVirt_ComputerSystemIndication_la_SOURCES = Virt_ComputerSystemIndication.c libVirt_ComputerSystemIndication_la_LIBADD = -lVirt_ComputerSystem -lVirt_HostSystem -lpthread -lrt +libVirt_ResourceAllocationSettingDataIndication_la_DEPENDENCIES = libVirt_ComputerSystem.la +libVirt_ResourceAllocationSettingDataIndication_la_SOURCES = Virt_ResourceAllocationSettingDataIndication.c +libVirt_ResourceAllocationSettingDataIndication_la_LIBADD = -lVirt_ComputerSystem + libVirt_ComputerSystemMigrationIndication_la_DEPENDENCIES = libVirt_ComputerSystem.la libVirt_ComputerSystemMigrationIndication_la_SOURCES = 
Virt_ComputerSystemMigrationIndication.c libVirt_ComputerSystemMigrationIndication_la_LIBADD = -lVirt_ComputerSystem From snmishra at us.ibm.com Fri Sep 11 16:50:54 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Fri, 11 Sep 2009 09:50:54 -0700 Subject: [Libvirt-cim] [PATCH 5 of 6] Add the mof and reg files needed to register the resource indication provider In-Reply-To: References: Message-ID: <18b62ae07a118517ae81.1252687854@elm3b24.beaverton.ibm.com> # HG changeset patch # User snmishra at us.ibm.com # Date 1252687743 25200 # Node ID 18b62ae07a118517ae81529fa8a9c10757a02a9b # Parent dd96ae8f1ec71af57012fcb37932d6a7b1f270fa Add the mof and reg files needed to register the resource indication provider Signed-off-by: Sharad Mishra diff -r dd96ae8f1ec7 -r 18b62ae07a11 schema/ResourceAllocationSettingDataIndication.mof --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/schema/ResourceAllocationSettingDataIndication.mof Fri Sep 11 09:49:03 2009 -0700 @@ -0,0 +1,66 @@ +// Copyright IBM Corp. 
2007 + +[Description ("Xen_ResourceAllocationSettingData created"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class Xen_ResourceAllocationSettingDataCreatedIndication : CIM_InstCreation +{ +}; + +[Description ("Xen_ResourceAllocationSettingData deleted"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class Xen_ResourceAllocationSettingDataDeletedIndication : CIM_InstDeletion +{ +}; + +[Description ("Xen_ResourceAllocationSettingData modified"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class Xen_ResourceAllocationSettingDataModifiedIndication : CIM_InstModification +{ +}; + + +[Description ("KVM_ResourceAllocationSettingData created"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class KVM_ResourceAllocationSettingDataCreatedIndication : CIM_InstCreation +{ +}; + +[Description ("KVM_ResourceAllocationSettingData deleted"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class KVM_ResourceAllocationSettingDataDeletedIndication : CIM_InstDeletion +{ +}; + +[Description ("KVM_ResourceAllocationSettingData modified"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class KVM_ResourceAllocationSettingDataModifiedIndication : CIM_InstModification +{ +}; + + +[Description ("LXC_ResourceAllocationSettingData created"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class LXC_ResourceAllocationSettingDataCreatedIndication : CIM_InstCreation +{ +}; + +[Description ("LXC_ResourceAllocationSettingData deleted"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class LXC_ResourceAllocationSettingDataDeletedIndication : CIM_InstDeletion +{ +}; + +[Description ("LXC_ResourceAllocationSettingData modified"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class LXC_ResourceAllocationSettingDataModifiedIndication : CIM_InstModification +{ +}; diff -r dd96ae8f1ec7 -r 18b62ae07a11 
schema/ResourceAllocationSettingDataIndication.registration --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/schema/ResourceAllocationSettingDataIndication.registration Fri Sep 11 09:49:03 2009 -0700 @@ -0,0 +1,11 @@ +# Copyright IBM Corp. 2007 +# Classname Namespace ProviderName ProviderModule ProviderTypes +Xen_ResourceAllocationSettingDataCreatedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +Xen_ResourceAllocationSettingDataDeletedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +Xen_ResourceAllocationSettingDataModifiedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +KVM_ResourceAllocationSettingDataCreatedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +KVM_ResourceAllocationSettingDataDeletedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +KVM_ResourceAllocationSettingDataModifiedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +LXC_ResourceAllocationSettingDataCreatedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +LXC_ResourceAllocationSettingDataDeletedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +LXC_ResourceAllocationSettingDataModifiedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method From rmaciel at linux.vnet.ibm.com Fri Sep 11 18:28:10 2009 From: rmaciel at 
linux.vnet.ibm.com (Richard Maciel) Date: Fri, 11 Sep 2009 15:28:10 -0300 Subject: [Libvirt-cim] [PATCH] Fixed Storage Volume RASD template path Message-ID: <53acca5af901b37e1818.1252693690@localhost.localdomain> # HG changeset patch # User Richard Maciel # Date 1252693562 10800 # Node ID 53acca5af901b37e1818c161ca00ff3b937b910c # Parent 697757e558c1f6fde30288d762fd86a4dabdc8f8 Fixed Storage Volume RASD template path Signed-off-by: Richard Maciel diff -r 697757e558c1 -r 53acca5af901 src/Virt_SettingsDefineCapabilities.c --- a/src/Virt_SettingsDefineCapabilities.c Fri Sep 04 14:12:46 2009 -0700 +++ b/src/Virt_SettingsDefineCapabilities.c Fri Sep 11 15:26:02 2009 -0300 @@ -1089,7 +1089,7 @@ name = "tmp.img"; CMSetProperty(inst, "VolumeName", (CMPIValue *)name, CMPI_chars); - path = "/var/lib/libvirt/images/"; + path = pool->pool_info.disk.path; CMSetProperty(inst, "Path", (CMPIValue *)path, CMPI_chars); alloc = 0; From deeptik at linux.vnet.ibm.com Fri Sep 11 19:33:24 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Fri, 11 Sep 2009 19:33:24 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] Modifying common_util.py for netnfs Message-ID: <083b2af038f14bf24b91.1252697604@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1252697570 25200 # Node ID 083b2af038f14bf24b9117e048ae836b638ad711 # Parent 53b05fc42fbc04ce45eea4a09ad84881fbcf6d3e [TEST] Modifying common_util.py for netnfs. Modifying common_util.py to use existing nfs setup if configuring the new one fails. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. 
Kalakeri diff -r 53b05fc42fbc -r 083b2af038f1 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/08_CreateDiskResourcePool.py --- a/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/08_CreateDiskResourcePool.py Thu Sep 10 09:40:21 2009 -0400 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/08_CreateDiskResourcePool.py Fri Sep 11 12:32:50 2009 -0700 @@ -66,13 +66,13 @@ if rev >= libvirt_netfs_pool_support and \ pool_type == dp_types['DISK_POOL_NETFS']: - status , src_mnt_dir, dir_mnt_dir = nfs_netfs_setup(server) + status , host_addr, src_mnt_dir, dir_mnt_dir = nfs_netfs_setup(server) if status != PASS: logger.error("Failed to get pool_attr for NETFS diskpool type") - return FAIL, pool_attr + return status, pool_attr + pool_attr['Host'] = host_addr pool_attr['SourceDirectory'] = src_mnt_dir - pool_attr['Host'] = server pool_attr['Path'] = dir_mnt_dir return PASS, pool_attr @@ -103,6 +103,7 @@ return SKIP status = FAIL + pool_attr = None # For now the test case support only the creation of # dir type disk pool, netfs later change to fs and disk pooltypes etc for key, value in dp_types.iteritems(): @@ -147,7 +148,7 @@ logger.error("Exception details: %s", details) if key == 'DISK_POOL_NETFS': netfs_cleanup(server, pool_attr) - return status + return FAIL return status diff -r 53b05fc42fbc -r 083b2af038f1 suites/libvirt-cim/lib/XenKvmLib/common_util.py --- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py Thu Sep 10 09:40:21 2009 -0400 +++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py Fri Sep 11 12:32:50 2009 -0700 @@ -25,6 +25,8 @@ import random from time import sleep from tempfile import mkdtemp +from commands import getstatusoutput +from socket import gethostbyaddr from distutils.file_util import move_file from XenKvmLib.test_xml import * from XenKvmLib.test_doms import * @@ -517,20 +519,50 @@ return PASS -def clean_temp_files(server, src_dir_for_mnt, dest_dir_to_mnt): - cmd = "rm -rf %s %s" % (src_dir_for_mnt, 
dest_dir_to_mnt) +def check_existing_nfs(): + host_addr = src_dir = None + s, o = getstatusoutput("mount") + lines = o.splitlines() + for line in lines: + if "nfs" == line.split()[-2]: + addr, src_dir = line.split()[0].split(":") + host_addr = gethostbyaddr(addr)[0] + + return host_addr, src_dir + +def clean_temp_files(server, src_dir_for_mnt, dest_dir_to_mnt, cmd): rc, out = utils.run_remote(server, cmd) if rc != PASS: logger.error("Please delete %s %s if present on %s", src_dir_for_mnt, dest_dir_to_mnt, server) +def check_haddr_is_localhost(server, host_addr): + # This function determines whether we set up a new nfs + # server or are using an existing one. + new_nfs_server_setup = False + local_addr = gethostbyaddr(server) + if host_addr in local_addr: + new_nfs_server_setup = True + + return new_nfs_server_setup def netfs_cleanup(server, pool_attr): - src_dir = os.path.basename(pool_attr['SourceDirectory']) + src_dir = pool_attr['SourceDirectory'] dst_dir = pool_attr['Path'] + host_addr = pool_attr['Host'] + + # Determine if we are using existing nfs setup or configured a new one + new_nfs_server_setup = check_haddr_is_localhost(server, host_addr) + if new_nfs_server_setup == False: + cmd = "rm -rf %s " % (dst_dir) + else: + cmd = "rm -rf %s %s" % (src_dir, dst_dir) # Remove the temp dirs created. - clean_temp_files(server, src_dir, dst_dir) + clean_temp_files(server, src_dir, dst_dir, cmd) + + if new_nfs_server_setup == False: + return # Restore the original exports file. if os.path.exists(back_exports_file): @@ -544,9 +576,8 @@ if rc != PASS: logger.error("Could not restart NFS server on '%s'" % server) -def netfs_config(server, nfs_server_bin): +def netfs_config(server, nfs_server_bin, dest_dir_to_mnt): src_dir_for_mnt = mkdtemp() - dest_dir_to_mnt = mkdtemp() try: # Backup the original exports file.
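The existing-mount scan added in check_existing_nfs() above can be sketched stand-alone. This is a hedged illustration, not the patch's code: find_nfs_mount() and the sample line are hypothetical, and the patch itself shells out to mount via getstatusoutput() and resolves the address with gethostbyaddr(). The key idea is the same: on a typical Linux mount(8) line, the second-to-last field is the filesystem type, and the first field of an NFS entry is "host:/exported/dir".

```python
# Illustrative sketch of the check_existing_nfs() scan above.
# find_nfs_mount() and the sample line are hypothetical helpers.
def find_nfs_mount(mount_output):
    """Return (host, src_dir) for the last NFS entry, or (None, None)."""
    host = src_dir = None
    for line in mount_output.splitlines():
        fields = line.split()
        # Typical Linux mount(8) line:
        #   host:/export on /mnt/point type nfs (rw,addr=...)
        if len(fields) >= 2 and fields[-2] == "nfs":
            host, src_dir = fields[0].split(":", 1)
    return host, src_dir

sample = "elm3a148:/var/exports on /mnt/nfs type nfs (rw,addr=9.47.66.148)"
print(find_nfs_mount(sample))  # ('elm3a148', '/var/exports')
```

Note that, like the patch, this keeps the *last* matching entry if several NFS mounts are present, and returns (None, None) when no NFS mount exists.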
@@ -572,23 +603,32 @@ except Exception, detail: logger.error("Exception details : %s", detail) - clean_temp_files(server, src_dir_for_mnt, dest_dir_to_mnt) - return FAIL, None, None + cmd = "rm -rf %s %s " % (src_dir_for_mnt,dest_dir_to_mnt) + clean_temp_files(server, src_dir_for_mnt, dest_dir_to_mnt, cmd) + return SKIP, None - return PASS, src_dir_for_mnt, dest_dir_to_mnt + return PASS, src_dir_for_mnt def nfs_netfs_setup(server): nfs_server_bin = get_nfs_bin(server) + dest_dir = mkdtemp() + # Before going ahead verify that nfs server is available on machine.. ret = nfs_config(server, nfs_server_bin) if ret != PASS: logger.error("Failed to configure NFS on '%s'", server) - return FAIL, None, None - - ret, src_dir, destr_dir = netfs_config(server, nfs_server_bin) - if ret != PASS: - logger.error("Failed to configure netfs on '%s'", server) - return FAIL, None, None - - return PASS, src_dir, destr_dir + logger.info("Trying to look for nfs mounted dir on '%s'...", server) + server, src_dir = check_existing_nfs() + if server == None or src_dir == None: + logger.error("No nfs mount information on '%s' ", server) + return SKIP, None, None, None + else: + return PASS, server, src_dir, dest_dir + else: + ret, src_dir = netfs_config(server, nfs_server_bin, dest_dir) + if ret != PASS: + logger.error("Failed to configure netfs on '%s'", server) + return ret, None, None, None + + return PASS, server, src_dir, dest_dir From deeptik at linux.vnet.ibm.com Mon Sep 14 11:40:51 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Mon, 14 Sep 2009 17:10:51 +0530 Subject: [Libvirt-cim] [PATCH 3 of 3] [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: <4AA9656D.7080203@linux.vnet.ibm.com> References: <616c8e4217a138a001a9.1252437876@elm3a148.beaverton.ibm.com> <4AA9656D.7080203@linux.vnet.ibm.com> Message-ID: <4AAE2BC3.2080800@linux.vnet.ibm.com> Kaitlin Rupert wrote: > >> +def verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, 
pool_name, + >> exp_vol_path, dp_inst): >> + for err_scen in invalid_scen.keys(): + logger.info("Verifying >> errors for '%s'....", err_scen) >> + status = FAIL >> + del_res = [FAIL] + try: > > I would put the try / except outside of the for loop. This will save > you some indentation. I would need the try: except block so that I can catch the errors for each of the invalid delete() scenarios. > >> + res_settings = get_sto_vol_rasd(virt, server, dp_cn, + pool_name, >> exp_vol_path) >> + if res_settings == None: >> + raise Exception("Failed to get the resource settings for '%s'" \ >> + " Vol" % vol_name) >> + if not "MISSING" in err_scen: >> + exp_err_no = CIM_ERR_FAILED >> + if "NO_ADDRESS" in err_scen: >> + del res_settings['Address'] + elif "INVALID_ADDRESS" in err_scen: >> + res_settings['Address'] = invalid_scen[err_scen]['val'] > > > >> + >> + resource = inst_to_mof(res_settings) + del_res = >> rpcs_conn.DeleteResourceInPool(Resource=resource, >> + Pool=dp_inst) >> + else: >> + exp_err_no = CIM_ERR_INVALID_PARAMETER >> + if err_scen == "MISSING_RESOURCE": >> + del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) >> + elif err_scen == "MISSING_POOL": >> + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) > > Will invalid_scen.keys() already return the keys in the same order? > I'm wondering if it is possible for resource to be undefined here > since it only gets defined if "if not "MISSING" in err_scen:" has > passed in a prior iteration of the loop. > > If "if not "MISSING" in err_scen:" fails the first time through the > loop, resource will be undefined. > I am not sure I understand the comment here. >> + >> + except CIMError, (err_no, err_desc): >> + if invalid_scen[err_scen]['msg'] in err_desc \ >> + and exp_err_no == err_no: >> + logger.error("Got the expected error message: '%s' for '%s'", + >> err_desc, err_scen) >> + status=PASS > > Spaces between the = here. Oh! yeah, thanks!! Done. > > -- Thanks and Regards, Deepti B.
Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From rmaciel at linux.vnet.ibm.com Mon Sep 14 17:12:48 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Mon, 14 Sep 2009 14:12:48 -0300 Subject: [Libvirt-cim] [PATCH 4 of 6] Support for resource indication was added to Virt_VirtualSystemManagementService In-Reply-To: References: Message-ID: <4AAE7990.7000309@linux.vnet.ibm.com> On 09/11/2009 01:50 PM, Sharad Mishra wrote: > # HG changeset patch > # User snmishra at us.ibm.com > # Date 1252684847 25200 > # Node ID dd96ae8f1ec71af57012fcb37932d6a7b1f270fa > # Parent 74607c71855e6baeeb49bbc134b773acc39675fb > Support for resource indication was added to Virt_VirtualSystemManagementService > > Code added to call resource indication when resources are added or deleted or modified. > > Signed-off-by: Sharad Mishra > > diff -r 74607c71855e -r dd96ae8f1ec7 src/Virt_VirtualSystemManagementService.c > --- a/src/Virt_VirtualSystemManagementService.c Fri Sep 11 09:00:47 2009 -0700 > +++ b/src/Virt_VirtualSystemManagementService.c Fri Sep 11 09:00:47 2009 -0700 > @@ -63,6 +63,9 @@ > #define BRIDGE_TYPE "bridge" > #define NETWORK_TYPE "network" > #define USER_TYPE "user" > +#define CREATED "ResourceAllocationSettingDataCreatedIndication" > +#define DELETED "ResourceAllocationSettingDataDeletedIndication" > +#define MODIFIED "ResourceAllocationSettingDataModifiedIndication" > > const static CMPIBroker *_BROKER; > > @@ -442,7 +445,7 @@ > ret = cu_get_str_prop(inst, "VirtualSystemIdentifier",&val); > if (ret != CMPI_RC_OK) > goto out; > - > + Don't add or remove spaces from the code if you don't need to change it. 
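The unbound-variable hazard Kaitlin raises in the verify_rpcs_err_val() review above is easy to demonstrate in isolation. This is a minimal, hypothetical sketch (run_scenarios() and its scenario stubs are illustrative, not the test's actual code): a name assigned only on the non-"MISSING" branch is unbound if a "MISSING_*" key happens to be iterated first, which is possible because plain dict key order is arbitrary on the Python 2 interpreters cimtest runs under.

```python
# Hypothetical sketch of the unbound-variable hazard discussed above.
# run_scenarios() stands in for verify_rpcs_err_val(); bodies are stubs.
def run_scenarios(order):
    results = []
    for err_scen in order:
        try:
            if "MISSING" not in err_scen:
                resource = "rasd-for-%s" % err_scen   # bound only here
                results.append(("call", resource))
            else:
                # relies on `resource` surviving from a *previous* iteration
                results.append(("call-missing", resource))
        except NameError:
            results.append(("unbound", err_scen))
    return results

# If a MISSING_* scenario is iterated first, `resource` is still unbound:
print(run_scenarios(["MISSING_POOL", "INVALID_ADDRESS"]))
# [('unbound', 'MISSING_POOL'), ('call', 'rasd-for-INVALID_ADDRESS')]
```

Iterating a fixed list of scenario names (or binding resource = None before the loop and checking it) would remove the ordering dependence.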
> free(domain->name); > domain->name = strdup(val); > > @@ -1416,7 +1419,69 @@ > return s; > } > > -static CMPIInstance *create_system(CMPIInstance *vssd, > +static CMPIStatus raise_rasd_indication(const CMPIContext *context, > + const char *base_type, > + CMPIInstance *prev_inst, > + const CMPIObjectPath *ref, > + struct inst_list *list) > +{ > + char *type; > + CMPIStatus s = {CMPI_RC_OK, NULL}; > + CMPIInstance *instc = NULL; > + CMPIInstance *ind = NULL; > + CMPIObjectPath *op = NULL; > + int i; > + > + CU_DEBUG("raise_rasd_indication"); > + > + type = get_typed_class(CLASSNAME(ref), base_type); > + ind = get_typed_instance(_BROKER, > + CLASSNAME(ref), > + base_type, > + NAMESPACE(ref)); > + if (ind == NULL) { > + CU_DEBUG("Failed to get indication instance"); > + s.rc = CMPI_RC_ERR_FAILED; > + goto out; > + } > + > + /* PreviousInstance is set only for modify case. */ > + if (prev_inst != NULL) > + CMSetProperty(ind, > + "PreviousInstance", > + (CMPIValue *)&prev_inst, > + CMPI_instance); > + > + for (i=0; i< list->cur; i++) { > + instc = list->list[i]; > + op = CMGetObjectPath(instc, NULL); > + CMPIString *str = CMGetClassName(op, NULL); > + > + CU_DEBUG("class name is %s\n", CMGetCharsPtr(str, NULL)); > + > + CMSetProperty(ind, > + "SourceInstance", > + (CMPIValue *)&instc, > + CMPI_instance); > + set_source_inst_props(_BROKER, context, ref, ind); > + > + s = stdi_raise_indication(_BROKER, > + context, > + type, > + NAMESPACE(ref), > + ind); > + } > + > +out: Our code style uses a single horizontal space before a label. 
> + free(type); > + return s; > + > +} > + > + Our code style uses a single vertical space to separate functions > +static CMPIInstance *create_system(const CMPIContext *context, > + CMPIInstance *vssd, > CMPIArray *resources, > const CMPIObjectPath *ref, > const CMPIObjectPath *refconf, > @@ -1427,9 +1492,13 @@ > const char *msg = NULL; > virConnectPtr conn = NULL; > virDomainPtr dom = NULL; > + struct inst_list list; > + const char *props[] = {NULL}; > > struct domain *domain = NULL; > > + inst_list_init(&list); > + > if (refconf != NULL) { > *s = get_reference_domain(&domain, ref, refconf); > if (s->rc != CMPI_RC_OK) > @@ -1477,18 +1546,40 @@ > CU_DEBUG("System XML:\n%s", xml); > > inst = connect_and_create(xml, ref, s); > - if (inst != NULL) > + if (inst != NULL) { > update_dominfo(domain, CLASSNAME(ref)); > > + *s = enum_rasds(_BROKER, > + ref, > + domain->name, > + CIM_RES_TYPE_ALL, > + props, > +&list); > + > + if (s->rc != CMPI_RC_OK) { > + CU_DEBUG("Failed to enumerate rasd\n"); > + goto out; > + } > + > + raise_rasd_indication(context, > + CREATED, > + NULL, > + ref, > +&list); > + } > + > + > out: > cleanup_dominfo(&domain); > free(xml); > virDomainFree(dom); > virConnectClose(conn); > + inst_list_free(&list); > > return inst; > } > > + Remove this space > static bool trigger_indication(const CMPIContext *context, > const char *base_type, > const CMPIObjectPath *ref) > @@ -1530,7 +1621,7 @@ > if (s.rc != CMPI_RC_OK) > goto out; > > - sys = create_system(vssd, res, reference, refconf,&s); > + sys = create_system(context, vssd, res, reference, refconf,&s); > if (sys == NULL) > goto out; > > @@ -1564,12 +1655,15 @@ > CMPIObjectPath *sys; > virConnectPtr conn = NULL; > virDomainPtr dom = NULL; > + struct inst_list list; > + const char *props[] = {NULL}; > > + inst_list_init(&list); > conn = connect_by_classname(_BROKER, > CLASSNAME(reference), > &status); > if (conn == NULL) { > - rc = -1; > + rc = IM_RC_NOT_SUPPORTED; > goto error; > } > > @@ -1580,6
+1674,18 @@ > if (dom_name == NULL) > goto error; > > + status = enum_rasds(_BROKER, > + reference, > + dom_name, > + CIM_RES_TYPE_ALL, > + props, > +&list); > + > + if (status.rc != CMPI_RC_OK) { > + CU_DEBUG("Failed to enumerate rasd"); > + goto error; > + } > + > dom = virDomainLookupByName(conn, dom_name); > if (dom == NULL) { > CU_DEBUG("No such domain `%s'", dom_name); > @@ -1605,11 +1711,17 @@ > > error: > if (rc == IM_RC_SYS_NOT_FOUND) > - virt_set_status(_BROKER,&status, > + virt_set_status(_BROKER, > +&status, > CMPI_RC_ERR_NOT_FOUND, > conn, > "Referenced domain `%s' does not exist", > dom_name); > + else if (rc == IM_RC_NOT_SUPPORTED) > + virt_set_status(_BROKER,&status, > + CMPI_RC_ERR_NOT_FOUND, > + conn, > + "Unable to raise resource indication"); > else if (rc == IM_RC_FAILED) > virt_set_status(_BROKER,&status, > CMPI_RC_ERR_NOT_FOUND, > @@ -1617,6 +1729,7 @@ > "Unable to retrieve domain name"); > else if (rc == IM_RC_OK) { > status = (CMPIStatus){CMPI_RC_OK, NULL}; > + raise_rasd_indication(context, DELETED, NULL, reference,&list); > trigger_indication(context, > "ComputerSystemDeletedIndication", > reference); > @@ -1625,7 +1738,7 @@ > virDomainFree(dom); > virConnectClose(conn); > CMReturnData(results,&rc, CMPI_uint32); > - > + inst_list_free(&list); > return status; > } > > @@ -2071,7 +2184,8 @@ > return s; > } > > -static CMPIStatus _update_resources_for(const CMPIObjectPath *ref, > +static CMPIStatus _update_resources_for(const CMPIContext *context, > + const CMPIObjectPath *ref, > virDomainPtr dom, > const char *devid, > CMPIInstance *rasd, > @@ -2081,8 +2195,15 @@ > struct domain *dominfo = NULL; > uint16_t type; > char *xml = NULL; > + char *indication = NULL; > CMPIObjectPath *op; > + struct inst_list list; > + CMPIInstance *prev_inst = NULL; > + const char *props[] = {NULL}; > + const char *inst_id; > + int i, ret; > > + inst_list_init(&list); > if (!get_dominfo(dom,&dominfo)) { > virt_set_status(_BROKER,&s, > CMPI_RC_ERR_FAILED, > @@ 
-2106,6 +2227,7 @@ > goto out; > } > > + Remove this vertical space > s = func(dominfo, rasd, type, devid, NAMESPACE(ref)); > if (s.rc != CMPI_RC_OK) { > CU_DEBUG("Resource transform function failed"); > @@ -2116,6 +2238,54 @@ > if (xml != NULL) { > CU_DEBUG("New XML:\n%s", xml); > connect_and_create(xml, ref,&s); > + > + if (func ==&resource_add) { > + indication = strdup(CREATED); > + } > + else if (func ==&resource_del) { > + indication = strdup(DELETED); > + } > + else { > + indication = strdup(MODIFIED); > + > + s = enum_rasds(_BROKER, > + ref, > + dominfo->name, > + type, > + props, > +&list); > + if (s.rc != CMPI_RC_OK) { > + CU_DEBUG("Failed to enumerate rasd"); > + goto out; > + } > + > + for(i=0; i< list.cur; i++) { > + prev_inst = list.list[i]; > + ret = cu_get_str_prop(prev_inst, > + "InstanceID", > +&inst_id); > + > + if (ret != CMPI_RC_OK) If it fails and you can't leave, at least print a debug message before ignoring the error. > + continue; > + > + if (STREQ(inst_id, > + get_fq_devid(dominfo->name, > + (char *)devid))) > + break; > + } > + > + } > + > + inst_list_init(&list); > + if (inst_list_add(&list, rasd) == 0) { > + CU_DEBUG("Unable to add RASD instance to the list\n"); > + goto out; > + } > + raise_rasd_indication(context, > + indication, > + prev_inst, > + ref, > +&list); > } else { > cu_statusf(_BROKER,&s, > CMPI_RC_ERR_FAILED, > @@ -2125,6 +2295,8 @@ > out: > cleanup_dominfo(&dominfo); > free(xml); > + free(indication); > + inst_list_free(&list); > > return s; > } > @@ -2153,7 +2325,8 @@ > return s; > } > > -static CMPIStatus _update_resource_settings(const CMPIObjectPath *ref, > +static CMPIStatus _update_resource_settings(const CMPIContext *context, > + const CMPIObjectPath *ref, > const char *domain, > CMPIArray *resources, > const CMPIResult *results, > @@ -2208,9 +2381,14 @@ > goto end; > } > > - s = _update_resources_for(ref, dom, devid, inst, func); > + s = _update_resources_for(context, > + ref, > + dom, > + devid, > + inst, > + 
func); > > - end: > + end: > free(name); > free(devid); > virDomainFree(dom); > @@ -2310,7 +2488,9 @@ > return s; > } > > - if (cu_get_ref_arg(argsin, "AffectedConfiguration",&sys) != CMPI_RC_OK) { > + if (cu_get_ref_arg(argsin, > + "AffectedConfiguration", > +&sys) != CMPI_RC_OK) { > cu_statusf(_BROKER,&s, > CMPI_RC_ERR_INVALID_PARAMETER, > "Missing AffectedConfiguration parameter"); > @@ -2324,11 +2504,13 @@ > return s; > } > > - s = _update_resource_settings(reference, > + s = _update_resource_settings(context, > + reference, > domain, > arr, > results, > resource_add); > + > free(domain); > > return s; > @@ -2351,7 +2533,8 @@ > return s; > } > > - return _update_resource_settings(reference, > + return _update_resource_settings(context, > + reference, > NULL, > arr, > results, > @@ -2384,7 +2567,8 @@ > if (s.rc != CMPI_RC_OK) > goto out; > > - s = _update_resource_settings(reference, > + s = _update_resource_settings(context, > + reference, > NULL, > resource_arr, > results, > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Mon Sep 14 17:55:02 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Mon, 14 Sep 2009 17:55:02 -0000 Subject: [Libvirt-cim] [PATCH 1 of 5] [TEST] #2 Modified pool.py to support RPCS CreateResourceInPool In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1252925907 25200 # Node ID c998364d0064aa7e6c2481335f11b26baed4a4e4 # Parent 34b2abf04c101dfa45651e201476bb2055b4654c [TEST] #2 Modified pool.py to support RPCS CreateResourceInPool. 
Patch 2:
-------
1) Added check in get_stovol_rasd_from_sdc()
2) Added get_diskpool() to pool.py as it is used in 10*py/11*py, RPCS/12*py and will be useful for further tests as well
3) Added rev for storagevol deletion
NOTE: Please base this patch on the patch "Modifying common_util.py for netnfs"

Patch 1:
--------
Added the following two functions which are used in RPCS/10*py and RPCS/11*py
1) get_stovol_rasd_from_sdc() to get the stovol rasd from sdc
2) get_stovol_default_settings() to get default sto vol settings
Also, modified common_util.py to remove the backed up exportfs file
Added RAW_VOL_TYPE which is the FormatType supported by RPCS currently
Once this patch gets accepted we can modify RPCS/10*py to refer to these functions.
Tested with KVM and current sources on SLES11.

Signed-off-by: Deepti B. Kalakeri

diff -r 34b2abf04c10 -r c998364d0064 suites/libvirt-cim/lib/XenKvmLib/common_util.py --- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py Sun Sep 13 23:46:16 2009 -0700 +++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py Mon Sep 14 03:58:27 2009 -0700 @@ -582,6 +582,8 @@ try: # Backup the original exports file.
if (os.path.exists(exports_file)): + if os.path.exists(back_exports_file): + os.remove(back_exports_file) move_file(exports_file, back_exports_file) fd = open(exports_file, "w") line = "\n %s %s(rw)" %(src_dir_for_mnt, server) diff -r 34b2abf04c10 -r c998364d0064 suites/libvirt-cim/lib/XenKvmLib/pool.py --- a/suites/libvirt-cim/lib/XenKvmLib/pool.py Sun Sep 13 23:46:16 2009 -0700 +++ b/suites/libvirt-cim/lib/XenKvmLib/pool.py Mon Sep 14 03:58:27 2009 -0700 @@ -25,7 +25,7 @@ from CimTest.ReturnCodes import PASS, FAIL, SKIP from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.const import get_provider_version, default_pool_name -from XenKvmLib.enumclass import EnumInstances, GetInstance +from XenKvmLib.enumclass import EnumInstances, GetInstance, EnumNames from XenKvmLib.assoc import Associators from VirtLib.utils import run_remote from XenKvmLib.xm_virt_util import virt2uri, net_list @@ -34,11 +34,13 @@ from CimTest.CimExt import CIMClassMOF from XenKvmLib.vxml import NetXML, PoolXML from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.vsms import RASD_TYPE_STOREVOL cim_errno = pywbem.CIM_ERR_NOT_SUPPORTED cim_mname = "CreateChildResourcePool" input_graphics_pool_rev = 757 libvirt_cim_child_pool_rev = 837 +libvirt_rasd_spool_del_changes = 971 DIR_POOL = 1L FS_POOL = 2L @@ -48,6 +50,9 @@ LOGICAL_POOL = 6L SCSI_POOL = 7L +#Volume types +RAW_VOL_TYPE = 1 + def pool_cn_to_rasd_cn(pool_cn, virt): if pool_cn.find('ProcessorPool') >= 0: return get_typed_class(virt, "ProcResourceAllocationSettingData") @@ -297,3 +302,61 @@ status = PASS return status + +def get_stovol_rasd_from_sdc(virt, server, dp_inst_id): + rasd = None + ac_cn = get_typed_class(virt, "AllocationCapabilities") + an_cn = get_typed_class(virt, "SettingsDefineCapabilities") + key_list = {"InstanceID" : dp_inst_id} + + try: + inst = GetInstance(server, ac_cn, key_list) + if inst == None: + raise Exception("Failed to GetInstance for %s" % dp_inst_id) + + rasd = 
Associators(server, an_cn, ac_cn, InstanceID=inst.InstanceID) + if len(rasd) < 4: + raise Exception("Failed to get default StorageVolRASD , "\ + "Expected at least 4, Got '%s'" % len(rasd)) + + except Exception, detail: + logger.error("Exception: %s", detail) + return FAIL, None + + return PASS, rasd + +def get_stovol_default_settings(virt, server, dp_cn, + pool_name, path, vol_name): + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, dp_rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol RASD's") + return None + + for dpool_rasd in dp_rasds: + if dpool_rasd['ResourceType'] == RASD_TYPE_STOREVOL and \ + 'Default' in dpool_rasd['InstanceID']: + + dpool_rasd['PoolID'] = dp_inst_id + dpool_rasd['Path'] = path + dpool_rasd['VolumeName'] = vol_name + break + + if not pool_name in dpool_rasd['PoolID']: + return None + + return dpool_rasd + +def get_diskpool(server, virt, dp_cn, pool_name): + dp_inst = None + dpool_cn = get_typed_class(virt, dp_cn) + pools = EnumNames(server, dpool_cn) + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + for pool in pools: + if pool['InstanceID'] == dp_inst_id: + dp_inst = pool + break + + return dp_inst From deeptik at linux.vnet.ibm.com Mon Sep 14 17:55:01 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Mon, 14 Sep 2009 17:55:01 -0000 Subject: [Libvirt-cim] [PATCH 0 of 5] [TEST] Added tc to verify StorageVol deletion and creation/deletion errors Message-ID: Please base this patch on the patch "Modifying common_util.py for netnfs" From deeptik at linux.vnet.ibm.com Mon Sep 14 17:55:05 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Mon, 14 Sep 2009 17:55:05 -0000 Subject: [Libvirt-cim] [PATCH 4 of 5] [TEST] Add new tc to verify the DeleteResourceInPool() In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B.
Kalakeri # Date 1252950606 25200 # Node ID c97d63289d40f9b64c8ab3b2a2c33538b9ad5907 # Parent dec604e54eceb2c28f9dce3c9b22d87b152eb614 [TEST] Add new tc to verify the DeleteResourceInPool(). Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri From deeptik at linux.vnet.ibm.com Mon Sep 14 17:55:03 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Mon, 14 Sep 2009 17:55:03 -0000 Subject: [Libvirt-cim] [PATCH 2 of 5] [TEST] Added new tc to verify the RPCS error values with dir type pool In-Reply-To: References: Message-ID: <10f8d110cd079ed7be68.1252950903@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1252933683 25200 # Node ID 10f8d110cd079ed7be6875efa0c65ea47e810cb5 # Parent c998364d0064aa7e6c2481335f11b26baed4a4e4 [TEST] Added new tc to verify the RPCS error values with dir type pool. This test case verifies the creation of the StorageVol using the CreateResourceInPool method of RPCS returns an error when invalid values are passed. The test case checks for the errors when: 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE 2) Trying to create 2 Vol in the same Path Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r c998364d0064 -r 10f8d110cd07 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_dir_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_dir_storagevolume_errs.py Mon Sep 14 06:08:03 2009 -0700 @@ -0,0 +1,172 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the creation of the StorageVol using the +# CreateResourceInPool method of RPCS returns an error when invalid values +# are passed. +# The test case checks for the errors when: +# 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE +# 2) Trying to create 2 Vol in the same Path +# +# -Date: 04-09-2009 + +import sys +import os +from VirtLib import utils +from random import randint +from pywbem.cim_types import Uint64 +from pywbem import CIM_ERR_FAILED, CIMError +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ + get_provider_version +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes +from XenKvmLib import rpcs_service +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool +from XenKvmLib.pool import create_pool, undefine_diskpool, RAW_VOL_TYPE, \ + get_diskpool, get_stovol_rasd_from_sdc,\ + get_stovol_default_settings + +dir_pool_attr = { "Path" : "/tmp" } +vol_name = "cimtest-vol.img" + +INVALID_FTYPE = RAW_VOL_TYPE + randint(20,100) +exp_err_no = CIM_ERR_FAILED +exp_err_values = { 'INVALID_FTYPE': { 'msg' : "Unable to generate XML "\ + "for new resource" }, + 'DUP_VOL_PATH' : { 'msg' : "Unable to create storage volume"} + } + +def get_inputs(virt, server, dp_cn, key, exp_vol_path): + sv_rasd = dp_inst = None + pool_name = 
default_pool_name + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, + pool_name, exp_vol_path, + vol_name) + if sv_rasd == None: + raise Exception("Failed to get the default StorageVolRASD info") + + if key == "INVALID_FTYPE": + sv_rasd['FormatType'] = Uint64(INVALID_FTYPE) + + sv_settings = inst_to_mof(sv_rasd) + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" % pool_name) + + except Exception, details: + logger.error("In get_inputs() Exception details: %s", details) + return FAIL, None, None + + return PASS, sv_settings, dp_inst + +def verify_vol_err(virt, server, dp_cn, key, exp_vol_path): + status, sv_settings, dp_inst = get_inputs(virt, server, dp_cn, + key, exp_vol_path) + if status != PASS: + return status + + status = FAIL + res = ret = [FAIL] + try: + logger.info("Verifying err for '%s'...", key) + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service."
+ rpcs)(server) + ret = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + # For duplicate vol path verification we should have been able to + # create the first dir pool successfully before attempting the next + if key == 'DUP_VOL_PATH' and ret[0] == PASS: + # Trying to create the vol in the same vol path should return + # an error + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + except CIMError, (err_no, err_desc): + if res[0] != PASS and exp_err_values[key]['msg'] in err_desc \ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' with '%s'", + err_desc, key) + return PASS + else: + logger.error("Failed to get the error message '%s'", + exp_err_values[key]['msg']) + + if res[0] == PASS: + logger.error("Should not have been able to create Vol %s", vol_name) + cleanup_vol(server, exp_vol_path) + return FAIL + +def cleanup_vol(server, exp_vol_path): + try: + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it "\ + "manually" % exp_vol_path) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_storagepool_changes: + logger.info("Storage Volume creation support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_storagepool_changes) + return SKIP + + dp_types = ['DUP_VOL_PATH', 'INVALID_FTYPE'] + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (dir_pool_attr['Path'], vol_name) + + try: + # err_key will contain either INVALID_FTYPE/DUP_VOL_PATH + # to be able to access the err mesg + for err_key in dp_types:
+ status = FAIL + status = verify_vol_err(virt, server, dp_cn, err_key, exp_vol_path) + if status != PASS : + raise Exception("Failed to verify the invalid '%s'" % err_key) + + except Exception, details: + logger.error("In main() Exception details: %s", details) + status = FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Mon Sep 14 17:55:04 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Mon, 14 Sep 2009 17:55:04 -0000 Subject: [Libvirt-cim] [PATCH 3 of 5] [TEST] Added new tc to verify the RPCS error values for netfs pool In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1252950136 25200 # Node ID dec604e54eceb2c28f9dce3c9b22d87b152eb614 # Parent 10f8d110cd079ed7be6875efa0c65ea47e810cb5 [TEST] Added new tc to verify the RPCS error values for netfs pool. This test case verifies the creation of the StorageVol using the CreateResourceInPool method of RPCS returns an error when invalid values are passed. The test case checks for the errors when trying to create a Vol in a netfs storage pool. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 10f8d110cd07 -r dec604e54ece suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py Mon Sep 14 10:42:16 2009 -0700 @@ -0,0 +1,193 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version.
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the creation of the StorageVol using the +# CreateResourceInPool method of RPCS returns an error when invalid values +# are passed. +# The test case checks for the errors when, +# Trying to create a Vol in a netfs storage pool +# +# -Date: 04-09-2009 + +import sys +import os +from VirtLib import utils +from pywbem import CIM_ERR_FAILED, CIMError +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.const import do_main, platform_sup, get_provider_version +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes +from XenKvmLib import rpcs_service +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool, nfs_netfs_setup, \ + netfs_cleanup +from XenKvmLib.pool import create_pool, undefine_diskpool, NETFS_POOL, \ + get_diskpool, get_stovol_rasd_from_sdc, \ + get_stovol_default_settings + +vol_name = "cimtest-vol.img" +vol_path = "/tmp/" + +exp_err_no = CIM_ERR_FAILED +exp_err_values = { 'NETFS_POOL' : { 'msg' : "This function does not "\ + "support this resource type"} + } + +def get_pool_attr(server, pool_type): + pool_attr = { } + status , host_addr, src_mnt_dir, dir_mnt_dir = nfs_netfs_setup(server) + if status != PASS: + logger.error("Failed to get pool_attr for NETFS diskpool type") + return status, pool_attr + + pool_attr['Host'] = host_addr + pool_attr['SourceDirectory'] = src_mnt_dir + pool_attr['Path'] = dir_mnt_dir + + 
return PASS, pool_attr + +def get_inputs(virt, server, dp_cn, pool_name, exp_vol_path): + sv_rasd = dp_inst = None + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the default StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL, sv_rasd, dp_inst + + return PASS, sv_settings, dp_inst + +def verify_vol_err(server, virt, dp_cn, pool_name, exp_vol_path): + + status, sv_settings, dp_inst = get_inputs(virt, server, dp_cn, + pool_name, exp_vol_path) + if status != PASS: + return status + + status = FAIL + res = [FAIL] + try: + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + except CIMError, (err_no, err_desc): + if res[0] != PASS and exp_err_values[pool_name]['msg'] in err_desc \ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' with '%s'", + err_desc, pool_name) + return PASS + else: + logger.error("Failed to get the error message '%s'", + exp_err_values[pool_name]['msg']) + if res[0] == PASS: + logger.error("Should not have been able to create the StorageVol '%s'", + vol_name) + + return FAIL + +def cleanup_pool_vol(server, virt, pool_name, exp_vol_path): + try: + status = destroy_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to destroy diskpool '%s'" % pool_name) + else: + status = undefine_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to undefine diskpool '%s'" % pool_name) + + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out =
utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it "\ + "manually" % exp_vol_path) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_storagepool_changes: + logger.info("Storage Volume creation support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_storagepool_changes) + return SKIP + + pool_name = "NETFS_POOL" + pool_type = NETFS_POOL + exp_vol_path = "%s/%s" % (vol_path, vol_name) + dp_cn = "DiskPool" + + try: + status = FAIL + status, pool_attr = get_pool_attr(server, pool_type) + if status != PASS: + return status + + # Creating NETFS pool to verify RPCS error + status = create_pool(server, virt, pool_name, pool_attr, + mode_type=pool_type, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + status = verify_vol_err(server, virt, dp_cn, pool_name, exp_vol_path) + if status != PASS : + raise Exception("Failed to verify the invalid '%s' " % pool_name) + + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + netfs_cleanup(server, pool_attr) + ret = cleanup_pool_vol(server, virt, pool_name, exp_vol_path) + if status != PASS or ret != PASS : + return FAIL + + return PASS +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Mon Sep 14 17:55:06 2009 From: deeptik at linux.vnet.ibm.com (Deepti B.
Kalakeri) Date: Mon, 14 Sep 2009 17:55:06 -0000 Subject: [Libvirt-cim] [PATCH 5 of 5] [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1252950808 25200 # Node ID dd9934ef45513152cea0848d3caab08440199c43 # Parent c97d63289d40f9b64c8ab3b2a2c33538b9ad5907 [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r c97d63289d40 -r dd9934ef4551 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/14_delete_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/14_delete_storagevolume_errs.py Mon Sep 14 10:53:28 2009 -0700 @@ -0,0 +1,193 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the deletion of the StorageVol using the +# DeleteResourceInPool method of RPCS returns error when invalid values are +# passed. 
+# +# -Date: 08-09-2009 + +import sys +import os +from VirtLib import utils +from CimTest.Globals import logger +from pywbem import CIM_ERR_FAILED, CIM_ERR_INVALID_PARAMETER, CIMError +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ + get_provider_version +from XenKvmLib import rpcs_service +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool +from XenKvmLib.pool import create_pool, undefine_diskpool, DIR_POOL, \ + libvirt_rasd_spool_del_changes, get_diskpool, \ + get_stovol_default_settings, \ + get_stovol_rasd_from_sdc + +pool_attr = { 'Path' : "/tmp" } +vol_name = "cimtest-vol.img" +invalid_scen = { "INVALID_ADDRESS" : { 'val' : 'Junkvol_path', + 'msg' : 'no storage vol with '\ + 'matching path' }, + "NO_ADDRESS_FIELD" : { 'msg' :'Missing Address in '\ + 'resource RASD' }, + "MISSING_RESOURCE" : { 'msg' :"Missing argument `Resource'"}, + "MISSING_POOL" : { 'msg' :"Missing argument `Pool'"} + } + + +def get_sto_vol_rasd(virt, server, dp_cn, pool_name, exp_vol_path): + dv_rasds = None + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol for '%s' vol", exp_vol_path) + return FAIL + + for item in rasds: + if item['Address'] == exp_vol_path and item['PoolID'] == dp_inst_id: + dv_rasds = item + break + + return dv_rasds + + +def verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, pool_name, + exp_vol_path, dp_inst): + for err_scen in invalid_scen.keys(): + logger.info("Verifying errors for '%s'....", err_scen) + status = FAIL + del_res = [FAIL] + try: + res_settings = get_sto_vol_rasd(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings == None: + raise Exception("Failed to get the resource settings for '%s'" \ + " Vol" % vol_name) + + if not "MISSING" 
in err_scen: + exp_err_no = CIM_ERR_FAILED + if "NO_ADDRESS_FIELD" in err_scen: + del res_settings['Address'] + elif "INVALID_ADDRESS" in err_scen: + res_settings['Address'] = invalid_scen[err_scen]['val'] + + resource = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource, + Pool=dp_inst) + else: + exp_err_no = CIM_ERR_INVALID_PARAMETER + if err_scen == "MISSING_RESOURCE": + del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) + elif err_scen == "MISSING_POOL": + resource = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) + + except CIMError, (err_no, err_desc): + if del_res[0] != PASS and invalid_scen[err_scen]['msg'] in err_desc\ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' for '%s'", + err_desc, err_scen) + status = PASS + else: + logger.error("Failed to get the error message '%s'", + invalid_scen[err_scen]['msg']) + + if del_res[0] == PASS: + logger.error("Should not have been able to delete Vol %s", vol_name) + return FAIL + + return status + +def cleanup_vol(server, exp_vol_path): + try: + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it " \ + "manually" % exp_vol_path) + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: + logger.info("Storage Volume deletion support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_spool_del_changes) + return SKIP + + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) + + 
pool_name = default_pool_name + pool_type = DIR_POOL + status = FAIL + res = del_res = [FAIL] + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the default StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + if res[0] != PASS: + raise Exception("Failed to create the Vol %s" % vol_name) + + status = verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, + pool_name, exp_vol_path, dp_inst) + if status != PASS : + raise Exception("Failed to verify the error") + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + ret = cleanup_vol(server, exp_vol_path) + if status != PASS or ret != PASS: + return FAIL + + return PASS +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Mon Sep 14 18:09:54 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Mon, 14 Sep 2009 23:39:54 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Adding verification for DestroySystem() of the domain In-Reply-To: <4AAA372C.4090102@linux.vnet.ibm.com> References: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> <4AA96C43.5090005@linux.vnet.ibm.com> <4AAA372C.4090102@linux.vnet.ibm.com> Message-ID: <4AAE86F2.3000306@linux.vnet.ibm.com> Deepti B Kalakeri wrote: > > > Kaitlin Rupert wrote: >> Deepti B. Kalakeri wrote: >>> # HG changeset patch >>> # User Deepti B.
Kalakeri >>> # Date 1252590021 14400 >>> # Node ID 53b05fc42fbc04ce45eea4a09ad84881fbcf6d3e >>> # Parent 30196cc506c07d81642c94a01fc65b34421c0714 >>> [TEST] Adding verification for DestroySystem() of the domain. >>> >>> Tested with KVM and current sources on SLES11. >>> Signed-off-by: Deepti B. Kalakeri >>> >>> diff -r 30196cc506c0 -r 53b05fc42fbc >>> suites/libvirt-cim/cimtest/VirtualSystemManagementService/02_destroysystem.py >>> >> >> I get the following failure: >> >> Starting test suite: libvirt-cim >> Cleaned log files. >> >> Testing KVM hypervisor >> -------------------------------------------------------------------- >> VirtualSystemManagementService - 02_destroysystem.py: FAIL >> ERROR - CS instance not returned for test_domain. >> ERROR - RequestedState for dom 'test_domain' is not '3' >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' 
does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> Referenced domain `test_domain' does not exist: Domain not found >> -------------------------------------------------------------------- >> >> >> However, the test passes for me if the patch isn't applied. > Yes! This test fails with the changes, this is because the > DestroySystem() is not just destroying the domain but also undefining it. > The VSMS/15*py tc with the new changes also fails for the same reason. > I am not sure if you got chance to look at the comments to "#2 Add try > / except to VSMS 15" patch. > Should I make a note of this in the libvirt.org and XFAIL this test ? > Any further comments on this ? -- Thanks and Regards, Deepti B. 
Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Mon Sep 14 18:19:48 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Mon, 14 Sep 2009 11:19:48 -0700 Subject: [Libvirt-cim] [PATCH 3 of 3] [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: <4AAE2BC3.2080800@linux.vnet.ibm.com> References: <616c8e4217a138a001a9.1252437876@elm3a148.beaverton.ibm.com> <4AA9656D.7080203@linux.vnet.ibm.com> <4AAE2BC3.2080800@linux.vnet.ibm.com> Message-ID: <4AAE8944.2040406@linux.vnet.ibm.com> Deepti B Kalakeri wrote: > > > Kaitlin Rupert wrote: >> >>> +def verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, pool_name, + >>> exp_vol_path, dp_inst): >>> + for err_scen in invalid_scen.keys(): + logger.info("Verifying >>> errors for '%s'....", err_scen) >>> + status = FAIL >>> + del_res = [FAIL] + try: >> >> I would put the try / except outside of the for loop. This will save >> you some indentation. > I would need the try: except block .. so that I can catch the errors for > each of the invalid delete() scenarios. Agreed. Your code does something like: + for err_scen in invalid_scen.keys(): + try: + + except CIMError, (err_no, err_desc): Why not do: try: for except CIMError, (err_no, err_desc): except Exception, details: This would save you some indentation, and allow you to catch any unexpected errors in addition to the errors thrown by the delete call. >>> + resource = inst_to_mof(res_settings) + del_res = >>> rpcs_conn.DeleteResourceInPool(Resource=resource, >>> + Pool=dp_inst) >>> + else: >>> + exp_err_no = CIM_ERR_INVALID_PARAMETER >>> + if err_scen == "MISSING_RESOURCE": >>> + del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) >>> + elif err_scen == "MISSING_POOL": >>> + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) >> >> Will invalid_scen.keys() already return the keys in the same order?
>> I'm wondering if it is possible for resource to be undefined here >> since it only gets defined if "if not "MISSING" in err_scen:" has >> passed in a prior iteration of the loop. >> >> If "if not "MISSING" in err_scen:" fails the first time through the >> loop, resource will be undefined. >> > I am not sure I understand the comment here. If you look at the Python documentation, the keys are returned in an arbitrary order (http://docs.python.org/library/stdtypes.html#dict.items). So taking a look at your code, let's say keys() returns err_scen == MISSING_POOL the first time through the loop... if not "MISSING" in err_scen: This check fails else: exp_err_no = CIM_ERR_INVALID_PARAMETER if err_scen == "MISSING_RESOURCE": del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) elif err_scen == "MISSING_POOL": del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) This code is executed, but resource hasn't been set yet. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Mon Sep 14 19:29:20 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Mon, 14 Sep 2009 12:29:20 -0700 Subject: [Libvirt-cim] [PATCH 5 of 5] [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: References: Message-ID: <4AAE9990.5030003@linux.vnet.ibm.com> Deepti B. Kalakeri wrote: > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1252950808 25200 > # Node ID dd9934ef45513152cea0848d3caab08440199c43 > # Parent c97d63289d40f9b64c8ab3b2a2c33538b9ad5907 > [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() > > Tested with KVM and current sources on SLES11. > Signed-off-by: Deepti B. Kalakeri > > diff -r c97d63289d40 -r dd9934ef4551 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/14_delete_storagevolume_errs.py See comments in other email.
Also, this test fails with the following: ResourcePoolConfigurationService - 14_delete_storagevolume_errs.py: FAIL ERROR - Exception details: (1, u'Unable to create storage volume: invalid storage pool pointer in storage vol already exists') InvokeMethod(CreateResourceInPool): Unable to create storage volume: invalid storage pool pointer in storage vol already exists -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Mon Sep 14 19:29:37 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Mon, 14 Sep 2009 12:29:37 -0700 Subject: [Libvirt-cim] [PATCH 4 of 5] [TEST] Add new tc to verify the DeleteResourceInPool() In-Reply-To: References: Message-ID: <4AAE99A1.7010102@linux.vnet.ibm.com> Deepti B. Kalakeri wrote: > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1252950606 25200 > # Node ID c97d63289d40f9b64c8ab3b2a2c33538b9ad5907 > # Parent dec604e54eceb2c28f9dce3c9b22d87b152eb614 > [TEST] Add new tc to verify the DeleteResourceInPool(). > > Tested with KVM and current sources on SLES11. > Signed-off-by: Deepti B. Kalakeri > Looks like this patch didn't send properly. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Mon Sep 14 19:40:06 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Mon, 14 Sep 2009 12:40:06 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Adding verification for DestroySystem() of the domain In-Reply-To: <4AAE86F2.3000306@linux.vnet.ibm.com> References: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> <4AA96C43.5090005@linux.vnet.ibm.com> <4AAA372C.4090102@linux.vnet.ibm.com> <4AAE86F2.3000306@linux.vnet.ibm.com> Message-ID: <4AAE9C16.9060608@linux.vnet.ibm.com> >>> >>> Testing KVM hypervisor >>> -------------------------------------------------------------------- >>> VirtualSystemManagementService - 02_destroysystem.py: FAIL >>> ERROR - CS instance not returned for test_domain. 
>>> ERROR - RequestedState for dom 'test_domain' is not '3' >>> Referenced domain `test_domain' does not exist: Domain not found >>> >>> However, the test passes for me if the patch isn't applied. >> Yes! This test fails with the changes, this is because the >> DestroySystem() is not just destroying the domain but also undefining it. >> The VSMS/15*py tc with the new changes also fails for the same reason. >> I am not sure if you got chance to look at the comments to "#2 Add try >> / except to VSMS 15" patch. >> Should I make a note of this in the libvirt.org and XFAIL this test ? >> > Any further comments on this ? > I didn't see your previous email - sorry for the delay here. My original mail was in hopes that you knew why your changes caused the test to fail. Last time I tested VSMS 15 with the changes I submitted, I didn't see it fail. I haven't had a chance to revisit that patch. I took a look at this test, and you're right - the reason it's failing is because DestroySystem() is also undefining the guest. So the answer here is to modify the test so that it doesn't call undefine(). Also, make sure the guest isn't in the inactive domain list either. Not sure why you want to XFAIL the test, as DestroySystem() is doing what is expected. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Mon Sep 14 20:48:30 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Mon, 14 Sep 2009 13:48:30 -0700 Subject: [Libvirt-cim] [PATCH 4 of 6] Support for resource indication was added to Virt_VirtualSystemManagementService In-Reply-To: References: Message-ID: <4AAEAC1E.2060002@linux.vnet.ibm.com> > +#define CREATED "ResourceAllocationSettingDataCreatedIndication" > +#define DELETED "ResourceAllocationSettingDataDeletedIndication" > +#define MODIFIED "ResourceAllocationSettingDataModifiedIndication" These are generic names - can you make them more specific to RASD indications?
> + else if (rc == IM_RC_NOT_SUPPORTED) > + virt_set_status(_BROKER, &status, > + CMPI_RC_ERR_NOT_FOUND, > + conn, > + "Unable to raise resource indication"); IM_RC_NOT_SUPPORTED is set because a connection to libvirt cannot be made. Can you change the error message here to reflect this? > else if (rc == IM_RC_FAILED) > virt_set_status(_BROKER, &status, > CMPI_RC_ERR_NOT_FOUND, > @@ -2116,6 +2238,54 @@ > if (xml != NULL) { > CU_DEBUG("New XML:\n%s", xml); > connect_and_create(xml, ref, &s); > + > + if (func == &resource_add) { > + indication = strdup(CREATED); > + } > + else if (func == &resource_del) { > + indication = strdup(DELETED); > + } > + else { > + indication = strdup(MODIFIED); > + > + s = enum_rasds(_BROKER, > + ref, > + dominfo->name, > + type, > + props, > + &list); > + if (s.rc != CMPI_RC_OK) { > + CU_DEBUG("Failed to enumerate rasd"); > + goto out; > + } > + > + for(i=0; i < list.cur; i++) { Space needed between i and =, also between = and 0. > + prev_inst = list.list[i]; > + ret = cu_get_str_prop(prev_inst, > + "InstanceID", > + &inst_id); > + > + if (ret != CMPI_RC_OK) > + continue; > + > + if (STREQ(inst_id, > + get_fq_devid(dominfo->name, > + (char *)devid))) > + break; > + } Can you break this out into a separate function? There's lots of indentation here, and this else statement makes the function quite long. > + > + } > + > + inst_list_init(&list); > + if (inst_list_add(&list, rasd) == 0) { > + CU_DEBUG("Unable to add RASD instance to the list\n"); > + goto out; > + } > + raise_rasd_indication(context, > + indication, > + prev_inst, > + ref, > + &list); > } else { > cu_statusf(_BROKER, &s, > CMPI_RC_ERR_FAILED, > @@ -2125,6 +2295,8 @@ > out: > cleanup_dominfo(&dominfo); > free(xml); > + free(indication); Why not make indication a const char *? Then you wouldn't need to free it.
-- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Mon Sep 14 21:55:24 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Mon, 14 Sep 2009 14:55:24 -0700 Subject: [Libvirt-cim] [PATCH 6 of 6] Add resource indication provider In-Reply-To: <43076113ae79f638317b.1252687855@elm3b24.beaverton.ibm.com> References: <43076113ae79f638317b.1252687855@elm3b24.beaverton.ibm.com> Message-ID: <4AAEBBCC.2060109@linux.vnet.ibm.com> Sharad Mishra wrote: > # HG changeset patch > # User snmishra at us.ibm.com Can you fix your user id? You can do this by modifying your ~/.hgrc: Add / modify the following: [ui] username = Kaitlin Rupert Thanks! > # Date 1252687805 25200 > # Node ID 43076113ae79f638317b0b7a0669c59a864e7904 > # Parent 18b62ae07a118517ae81529fa8a9c10757a02a9b > Add resource indication provider. > > Signed-off-by: Sharad Mishra -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Tue Sep 15 05:40:52 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Tue, 15 Sep 2009 11:10:52 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Adding verification for DestroySystem() of the domain In-Reply-To: <4AAE9C16.9060608@linux.vnet.ibm.com> References: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> <4AA96C43.5090005@linux.vnet.ibm.com> <4AAA372C.4090102@linux.vnet.ibm.com> <4AAE86F2.3000306@linux.vnet.ibm.com> <4AAE9C16.9060608@linux.vnet.ibm.com> Message-ID: <4AAF28E4.9010205@linux.vnet.ibm.com> Kaitlin Rupert wrote: >>>> >>>> Testing KVM hypervisor >>>> -------------------------------------------------------------------- >>>> VirtualSystemManagementService - 02_destroysystem.py: FAIL >>>> ERROR - CS instance not returned for test_domain. 
>>>> ERROR - RequestedState for dom 'test_domain' is not '3' >>>> Referenced domain `test_domain' does not exist: Domain not found > >>>> >>>> However, the test passes for me if the patch isn't applied. >>> Yes! This test fails with the changes, this is because the >>> DestroySystem() is not just destroying the domain but also >>> undefining it. >>> The VSMS/15*py tc with the new changes also fails for the same >>> reason. I am not sure if you got a chance to look at the comments to >>> the "#2 Add try / except to VSMS 15" patch. >>> Should I make a note of this on libvirt.org and XFAIL this test? >>> >> Any further comments on this ? >> > > I didn't see your previous email - sorry for the delay here. My > original mail was in hopes that you knew why your changes caused the > test to fail. No problem, I was wondering if I had to change the test case. > > Last time I tested VSMS 15 with the changes I submitted, I didn't see > it fail. I haven't had a chance to revisit that patch. > > I took a look at this test, and you're right - the reason it's failing > is because DestroySystem() is also undefining the guest. So the answer > here is to modify the test so that it doesn't call undefine(). Also, > make sure the guest isn't in the inactive domain list either. > > Not sure why you want to XFAIL the test, as DestroySystem() is doing > what is expected. > I thought DestroySystem() was equivalent to the "virsh destroy" command, which would just destroy a running domain that was defined and started. -- Thanks and Regards, Deepti B. 
Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Tue Sep 15 08:40:54 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Tue, 15 Sep 2009 14:10:54 +0530 Subject: [Libvirt-cim] [PATCH 3 of 3] [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: <4AAE8944.2040406@linux.vnet.ibm.com> References: <616c8e4217a138a001a9.1252437876@elm3a148.beaverton.ibm.com> <4AA9656D.7080203@linux.vnet.ibm.com> <4AAE2BC3.2080800@linux.vnet.ibm.com> <4AAE8944.2040406@linux.vnet.ibm.com> Message-ID: <4AAF5316.4020807@linux.vnet.ibm.com> Kaitlin Rupert wrote: > Deepti B Kalakeri wrote: >> >> >> Kaitlin Rupert wrote: >>> >>>> +def verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, pool_name, >>>> + exp_vol_path, dp_inst): >>>> + for err_scen in invalid_scen.keys(): + logger.info("Verifying >>>> errors for '%s'....", err_scen) >>>> + status = FAIL >>>> + del_res = [FAIL] + try: >>> >>> I would put the try / except outside of the for loop. This will save >>> you some indentation. >> I would need the try: except block .. so that I can catch the errors >> for each of the invalid delete() scenarios. > > Agreed. Your code does something like: > > + for err_scen in invalid_scen.keys(): > > > > + try: > > > > + > + except CIMError, (err_no, err_desc): > > Why not do: > > try: > > for > > except CIMError, (err_no, err_desc): > > except Exception, details: If I change the existing code to try: for except CIMError, (err_no, err_desc): except Exception, details: Then I will be able to execute the for loop only one time, and the execution will come out with a suitable message from verify_error*(). > This would save you some indentation, and allow you to catch any > unexpected errors in addition to the errors thrown by the delete call. > Good point!! I will include the exception handling for the other cases apart from the DeleteResourceInPool(). 
> >>>> + resource = inst_to_mof(res_settings) + del_res = >>>> rpcs_conn.DeleteResourceInPool(Resource=resource, >>>> + Pool=dp_inst) >>>> + else: >>>> + exp_err_no = CIM_ERR_INVALID_PARAMETER >>>> + if err_scen == "MISSING_RESOURCE": >>>> + del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) >>>> + elif err_scen == "MISSING_POOL": >>>> + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) >>> >>> Will invalid_scen.keys() already return the keys in the same order? >>> I'm wondering if it is possible for resource to be undefined here >>> since it only gets defined if "if not "MISSING" in err_scen:" has >>> passed in a prior iteration of the loop. >>> >>> If "if not "MISSING" in err_scen:" fails the first time through the >>> loop, resource will be undefined. >>> >> I am not sure I understand the comment here. > > If you look at the Python documentation, the keys are returned in an > arbitrary order (http://docs.python.org/library/stdtypes.html#dict.items). Yeah, that's correct, keys() would not come back in a particular order unless sorted. > > So taking a look at your code, let's say keys() returns err_scen == > MISSING_POOL the first time through the loop... > > if not "MISSING" in err_scen: > This check fails > > else: > exp_err_no = CIM_ERR_INVALID_PARAMETER > if err_scen == "MISSING_RESOURCE": > del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) > elif err_scen == "MISSING_POOL": > del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) > > This code is executed, but resource hasn't been set yet. > > Yeah!! The old patch did not set the resource for the MISSING_POOL case, and that was a mistake. This has been included in the new patch that was sent yesterday. -- Thanks and Regards, Deepti B. 
Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Tue Sep 15 09:30:23 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Tue, 15 Sep 2009 15:00:23 +0530 Subject: [Libvirt-cim] [PATCH 4 of 5] [TEST] Add new tc to verify the DeleteResourceInPool() In-Reply-To: <4AAE99A1.7010102@linux.vnet.ibm.com> References: <4AAE99A1.7010102@linux.vnet.ibm.com> Message-ID: <4AAF5EAF.7050501@linux.vnet.ibm.com> Kaitlin Rupert wrote: > Deepti B. Kalakeri wrote: >> # HG changeset patch >> # User Deepti B. Kalakeri >> # Date 1252950606 25200 >> # Node ID c97d63289d40f9b64c8ab3b2a2c33538b9ad5907 >> # Parent dec604e54eceb2c28f9dce3c9b22d87b152eb614 >> [TEST] Add new tc to verify the DeleteResourceInPool(). >> >> Tested with KVM and current sources on SLES11. >> Signed-off-by: Deepti B. Kalakeri >> > > > Looks like this patch didn't send properly. > Oops! Sorry My error while using qpop qpush commands .. I seem to have forgotten to do hg add the test before I submitted it.. Sorry for the inconvenience. -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Tue Sep 15 09:50:53 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 15 Sep 2009 09:50:53 -0000 Subject: [Libvirt-cim] [PATCH 1 of 5] [TEST] #3 Modified pool.py to support RPCS CreateResourceInPool In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253007144 25200 # Node ID df6777d1b6327e85aee0080be2cd7f296a301d36 # Parent 34b2abf04c101dfa45651e201476bb2055b4654c [TEST] #3 Modified pool.py to support RPCS CreateResourceInPool. 
Patch 3: -------- 1) Moved get_sto_vol_rasd() to pool.py as get_sto_vol_rasd_for_pool(), since it is used in RPCS/13*py and RPCS/14*py Patch 2: ------- 1) Added check in get_stovol_rasd_from_sdc() 2) Added get_diskpool() to pool.py as it is used in 10*py/11*py, RPCS/12*py and will be useful for further tests as well 3) Added rev for storagevol deletion NOTE: Please base this patch on the patch "Modifying common_util.py for netnfs" Patch 1: -------- Added the following two functions which are used in RPCS/10*py and RPCS/11*py 1) get_stovol_rasd_from_sdc() to get the stovol rasd from sdc 2) get_stovol_default_settings() to get default sto vol settings Also, modified common_util.py to remove the backed up exportfs file Added RAW_VOL_TYPE which is the FormatType supported by RPCS currently Once this patch gets accepted we can modify RPCS/10*py to refer to these functions. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 34b2abf04c10 -r df6777d1b632 suites/libvirt-cim/lib/XenKvmLib/common_util.py --- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py Sun Sep 13 23:46:16 2009 -0700 +++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py Tue Sep 15 02:32:24 2009 -0700 @@ -582,6 +582,8 @@ try: # Backup the original exports file. 
if (os.path.exists(exports_file)): + if os.path.exists(back_exports_file): + os.remove(back_exports_file) move_file(exports_file, back_exports_file) fd = open(exports_file, "w") line = "\n %s %s(rw)" %(src_dir_for_mnt, server) diff -r 34b2abf04c10 -r df6777d1b632 suites/libvirt-cim/lib/XenKvmLib/pool.py --- a/suites/libvirt-cim/lib/XenKvmLib/pool.py Sun Sep 13 23:46:16 2009 -0700 +++ b/suites/libvirt-cim/lib/XenKvmLib/pool.py Tue Sep 15 02:32:24 2009 -0700 @@ -25,7 +25,7 @@ from CimTest.ReturnCodes import PASS, FAIL, SKIP from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.const import get_provider_version, default_pool_name -from XenKvmLib.enumclass import EnumInstances, GetInstance +from XenKvmLib.enumclass import EnumInstances, GetInstance, EnumNames from XenKvmLib.assoc import Associators from VirtLib.utils import run_remote from XenKvmLib.xm_virt_util import virt2uri, net_list @@ -34,11 +34,13 @@ from CimTest.CimExt import CIMClassMOF from XenKvmLib.vxml import NetXML, PoolXML from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.vsms import RASD_TYPE_STOREVOL cim_errno = pywbem.CIM_ERR_NOT_SUPPORTED cim_mname = "CreateChildResourcePool" input_graphics_pool_rev = 757 libvirt_cim_child_pool_rev = 837 +libvirt_rasd_spool_del_changes = 971 DIR_POOL = 1L FS_POOL = 2L @@ -48,6 +50,9 @@ LOGICAL_POOL = 6L SCSI_POOL = 7L +#Volume types +RAW_VOL_TYPE = 1 + def pool_cn_to_rasd_cn(pool_cn, virt): if pool_cn.find('ProcessorPool') >= 0: return get_typed_class(virt, "ProcResourceAllocationSettingData") @@ -297,3 +302,77 @@ status = PASS return status + +def get_stovol_rasd_from_sdc(virt, server, dp_inst_id): + rasd = None + ac_cn = get_typed_class(virt, "AllocationCapabilities") + an_cn = get_typed_class(virt, "SettingsDefineCapabilities") + key_list = {"InstanceID" : dp_inst_id} + + try: + inst = GetInstance(server, ac_cn, key_list) + if inst == None: + raise Exception("Failed to GetInstance for %s" % dp_inst_id) + + rasd = 
Associators(server, an_cn, ac_cn, InstanceID=inst.InstanceID) + if len(rasd) < 4: + raise Exception("Failed to get default StorageVolRASD , "\ + "Expected atleast 4, Got '%s'" % len(rasd)) + + except Exception, detail: + logger.error("Exception: %s", detail) + return FAIL, None + + return PASS, rasd + +def get_stovol_default_settings(virt, server, dp_cn, + pool_name, path, vol_name): + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, dp_rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol RASD's") + return None + + for dpool_rasd in dp_rasds: + if dpool_rasd['ResourceType'] == RASD_TYPE_STOREVOL and \ + 'Default' in dpool_rasd['InstanceID']: + + dpool_rasd['PoolID'] = dp_inst_id + dpool_rasd['Path'] = path + dpool_rasd['VolumeName'] = vol_name + break + + if not pool_name in dpool_rasd['PoolID']: + return None + + return dpool_rasd + +def get_diskpool(server, virt, dp_cn, pool_name): + dp_inst = None + dpool_cn = get_typed_class(virt, dp_cn) + pools = EnumNames(server, dpool_cn) + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + for pool in pools: + if pool['InstanceID'] == dp_inst_id: + dp_inst = pool + break + + return dp_inst + +def get_sto_vol_rasd_for_pool(virt, server, dp_cn, pool_name, exp_vol_path): + dv_rasds = None + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol for '%s' vol", exp_vol_path) + return FAIL + + for item in rasds: + if item['Address'] == exp_vol_path and item['PoolID'] == dp_inst_id: + dv_rasds = item + break + + return dv_rasds + From deeptik at linux.vnet.ibm.com Tue Sep 15 09:50:52 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. 
Kalakeri) Date: Tue, 15 Sep 2009 09:50:52 -0000 Subject: [Libvirt-cim] [PATCH 0 of 5] [TEST] #2 Added tc to verify StorageVol deletion and creation/deletion errors Message-ID: Please base this patch on the patch "Modifying common_util.py for netnfs" From deeptik at linux.vnet.ibm.com Tue Sep 15 09:50:56 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 15 Sep 2009 09:50:56 -0000 Subject: [Libvirt-cim] [PATCH 4 of 5] [TEST] #2 Add new tc to verify the DeleteResourceInPool() In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253007574 25200 # Node ID e609ddcebda460ccf547ccae0bca195c52804f66 # Parent e19295361dd7f52fbd9a56dd33e0306b1b71b245 [TEST] #2 Add new tc to verify the DeleteResourceInPool(). Patch2: ------ 1) Added the missing test case. 2) Included get_sto_vol_rasd_for_pool() from pool.py Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r e19295361dd7 -r e609ddcebda4 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/13_delete_storagevolume.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/13_delete_storagevolume.py Tue Sep 15 02:39:34 2009 -0700 @@ -0,0 +1,160 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. 
+# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the deletion of the StorageVol using the +# DeleteResourceInPool method of RPCS. +# +# -Date: 08-09-2009 + +import sys +import os +from VirtLib import utils +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ + get_provider_version +from XenKvmLib import rpcs_service +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool +from XenKvmLib.pool import create_pool, undefine_diskpool, DIR_POOL, \ + libvirt_rasd_spool_del_changes, get_diskpool, \ + get_stovol_default_settings, \ + get_stovol_rasd_from_sdc, get_sto_vol_rasd_for_pool + +pool_attr = { 'Path' : "/tmp" } +vol_name = "cimtest-vol.img" + +def cleanup_pool_vol(server, virt, pool_name, clean_vol, exp_vol_path): + try: + if clean_vol == True: + status = destroy_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to destroy diskpool '%s'" % pool_name) + else: + status = undefine_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to undefine diskpool '%s'" \ + % pool_name) + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it " \ + "manually" % exp_vol_path) + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + 
if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: + logger.info("Storage Volume deletion support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_spool_del_changes) + return SKIP + + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) + + # For now the test case support only the deletion of dir type based + # vol, we can extend dp_types to include netfs etc ..... + dp_types = { "DISK_POOL_DIR" : DIR_POOL } + + for pool_name, pool_type in dp_types.iteritems(): + status = FAIL + res = del_res = [FAIL] + clean_pool = True + try: + if pool_type == DIR_POOL: + pool_name = default_pool_name + clean_pool = False + else: + status = create_pool(server, virt, pool_name, pool_attr, + mode_type=pool_type, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." 
+ rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + if res[0] != PASS: + raise Exception("Failed to create the Vol %s" % vol_name) + + res_settings = get_sto_vol_rasd_for_pool(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings == None: + raise Exception("Failed to get the resource settings for '%s'" \ + " Vol" % vol_name) + + resource_setting = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource_setting, + Pool=dp_inst) + + res_settings = get_sto_vol_rasd_for_pool(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings != None: + raise Exception("'%s' vol of '%s' pool was not deleted" \ + % (vol_name, pool_name)) + else: + logger.info("Vol '%s' of '%s' pool deleted successfully by " + "DeleteResourceInPool()", vol_name, pool_name) + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + ret = cleanup_pool_vol(server, virt, pool_name, + clean_pool, exp_vol_path) + if del_res[0] == PASS and ret == PASS : + status = PASS + else: + return FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Tue Sep 15 09:50:54 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 15 Sep 2009 09:50:54 -0000 Subject: [Libvirt-cim] [PATCH 2 of 5] [TEST] Added new tc to verify the RPCS error values with dir type pool In-Reply-To: References: Message-ID: <60213fdefc689d3bea45.1253008254@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253007197 25200 # Node ID 60213fdefc689d3bea45fca5964edc01879dad12 # Parent df6777d1b6327e85aee0080be2cd7f296a301d36 [TEST] Added new tc to verify the RPCS error values with dir type pool. This test case verifies the creation of the StorageVol using the CreateResourceInPool method of RPCS returns an error when invalid values are passed. 
The test case checks for the errors when: 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE 2) Trying to create 2 Vol in the same Path Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r df6777d1b632 -r 60213fdefc68 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_dir_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_dir_storagevolume_errs.py Tue Sep 15 02:33:17 2009 -0700 @@ -0,0 +1,172 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the creation of the StorageVol using the +# CreateResourceInPool method of RPCS returns an error when invalid values +# are passed. 
+# The test case checks for the errors when: +# 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE +# 2) Trying to create 2 Vol in the same Path +# +# -Date: 04-09-2009 + +import sys +import os +from VirtLib import utils +from random import randint +from pywbem.cim_types import Uint64 +from pywbem import CIM_ERR_FAILED, CIMError +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ + get_provider_version +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes +from XenKvmLib import rpcs_service +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool +from XenKvmLib.pool import create_pool, undefine_diskpool, RAW_VOL_TYPE, \ + get_diskpool, get_stovol_rasd_from_sdc,\ + get_stovol_default_settings + +dir_pool_attr = { "Path" : "/tmp" } +vol_name = "cimtest-vol.img" + +INVALID_FTYPE = RAW_VOL_TYPE + randint(20,100) +exp_err_no = CIM_ERR_FAILED +exp_err_values = { 'INVALID_FTYPE': { 'msg' : "Unable to generate XML "\ + "for new resource" }, + 'DUP_VOL_PATH' : { 'msg' : "Unable to create storage volume"} + } + +def get_inputs(virt, server, dp_cn, key, exp_vol_path): + sv_rasd = dp_inst = None + pool_name = default_pool_name + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, + pool_name, exp_vol_path, + vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + if key == "INVALID_FTYPE": + sv_rasd['FormatType'] = Uint64(INVALID_FTYPE) + + sv_settings = inst_to_mof(sv_rasd) + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" 
% pool_name) + + except Exception, details: + logger.error("In get_inputs() Exception details: %s", details) + return FAIL, None, None + + return PASS, sv_settings, dp_inst + +def verify_vol_err(virt, server, dp_cn, key, exp_vol_path): + status, sv_settings, dp_inst = get_inputs(virt, server, dp_cn, + key, exp_vol_path) + if status != PASS: + return status + + status = FAIL + res = ret = [FAIL] + try: + logger.info("Verifying err for '%s'...", key) + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + ret = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + # For duplicate vol path verfication we should have been able to + # create the first dir pool successfully before attempting the next + if key == 'DUP_VOL_PATH' and ret[0] == PASS: + # Trying to create the vol in the same vol path should return + # an error + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + except CIMError, (err_no, err_desc): + if res[0] != PASS and exp_err_values[key]['msg'] in err_desc \ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' with '%s'", + err_desc, key) + return PASS + else: + logger.error("Failed to get the error message '%s'", + exp_err_values[key]['msg']) + + if res[0] == PASS: + logger.error("Should not have been able to create Vol %s", vol_name) + cleanup_vol(server, exp_vol_path) + return FAIL + +def cleanup_vol(server, exp_vol_path): + try: + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it "\ + "manually" % exp_vol_path) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, 
changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_storagepool_changes: + logger.info("Storage Volume creation support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_storagepool_changes) + return SKIP + + dp_types = ['DUP_VOL_PATH', 'INVALID_FTYPE'] + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (dir_pool_attr['Path'], vol_name) + + try: + # err_key will contain either INVALID_FTYPE/DUP_VOL_PATH + # to be able access the err mesg + for err_key in dp_types: + status = FAIL + status = verify_vol_err(virt, server, dp_cn, err_key, exp_vol_path) + if status != PASS : + raise Exception("Failed to verify the Invlaid '%s'" % err_key) + + except Exception, details: + logger.error("In main() Exception details: %s", details) + status = FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Tue Sep 15 09:50:57 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 15 Sep 2009 09:50:57 -0000 Subject: [Libvirt-cim] [PATCH 5 of 5] [TEST] #2 Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: References: Message-ID: <8dd1bd4ad31da0334a78.1253008257@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253008070 25200 # Node ID 8dd1bd4ad31da0334a78ca7d1b7b9af58899c272 # Parent e609ddcebda460ccf547ccae0bca195c52804f66 [TEST] #2 Add new tc to verify the err values for RPCS DeleteResourceInPool() Patch 2: -------- 1) Added exception to verify_rpcs_err_val() to catch exceptions returned other than for DeleteResourceInPool() 2) Included get_sto_vol_rasd_for_pool() from pool.py Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. 
Kalakeri diff -r e609ddcebda4 -r 8dd1bd4ad31d suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/14_delete_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/14_delete_storagevolume_errs.py Tue Sep 15 02:47:50 2009 -0700 @@ -0,0 +1,186 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the deletion of the StorageVol using the +# DeleteResourceInPool method of RPCS returns error when invalid values are +# passed. 
+# +# -Date: 08-09-2009 + +import sys +import os +from VirtLib import utils +from CimTest.Globals import logger +from pywbem import CIM_ERR_FAILED, CIM_ERR_INVALID_PARAMETER, CIMError +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ + get_provider_version +from XenKvmLib import rpcs_service +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool +from XenKvmLib.pool import create_pool, undefine_diskpool, DIR_POOL, \ + libvirt_rasd_spool_del_changes, get_diskpool, \ + get_stovol_default_settings, \ + get_sto_vol_rasd_for_pool + +pool_attr = { 'Path' : "/tmp" } +vol_name = "cimtest-vol.img" +invalid_scen = { "INVALID_ADDRESS" : { 'val' : 'Junkvol_path', + 'msg' : 'no storage vol with '\ + 'matching path' }, + "NO_ADDRESS_FIELD" : { 'msg' :'Missing Address in '\ + 'resource RASD' }, + "MISSING_RESOURCE" : { 'msg' :"Missing argument `Resource'"}, + "MISSING_POOL" : { 'msg' :"Missing argument `Pool'"} + } + + +def verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, pool_name, + exp_vol_path, dp_inst): + + for err_scen in invalid_scen.keys(): + logger.info("Verifying errors for '%s'....", err_scen) + status = FAIL + del_res = [FAIL] + try: + res_settings = get_sto_vol_rasd_for_pool(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings == None: + raise Exception("Failed getting resource settings for '%s' vol"\ + " when executing '%s'" % (vol_name, err_scen)) + + if not "MISSING" in err_scen: + exp_err_no = CIM_ERR_FAILED + + if "NO_ADDRESS_FIELD" in err_scen: + del res_settings['Address'] + elif "INVALID_ADDRESS" in err_scen: + res_settings['Address'] = invalid_scen[err_scen]['val'] + + resource = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource, + Pool=dp_inst) + else: + exp_err_no = CIM_ERR_INVALID_PARAMETER + + if err_scen == 
"MISSING_RESOURCE": + del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) + elif err_scen == "MISSING_POOL": + resource = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) + + except CIMError, (err_no, err_desc): + if del_res[0] != PASS and invalid_scen[err_scen]['msg'] in err_desc\ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' for '%s'", + err_desc, err_scen) + status = PASS + else: + logger.error("Unexpected error msg, Expected '%s'-'%s', Got" + "'%s'-'%s'", exp_err_no, + invalid_scen[err_scen]['msg'], err_no, err_desc) + return FAIL + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + if del_res[0] == PASS or status != PASS: + logger.error("Should not have been able to delete Vol %s", vol_name) + return FAIL + + return status + +def cleanup_vol(server, exp_vol_path): + try: + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it " \ + "manually" % exp_vol_path) + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: + logger.info("Storage Volume deletion support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_spool_del_changes) + return SKIP + + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) + + pool_name = default_pool_name + pool_type = DIR_POOL + status = FAIL + res = del_res = [FAIL] + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: 
+ raise Exception("Failed to get the defualt StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + if res[0] != PASS: + raise Exception("Failed to create the Vol %s" % vol_name) + + status = verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, + pool_name, exp_vol_path, dp_inst) + if status != PASS : + raise Exception("Verification Failed for DeleterResourceInPool()") + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + ret = cleanup_vol(server, exp_vol_path) + if status != PASS or ret != PASS: + return FAIL + + return PASS +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Tue Sep 15 09:50:55 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 15 Sep 2009 09:50:55 -0000 Subject: [Libvirt-cim] [PATCH 3 of 5] [TEST] Added new tc to verify the RPCS error values for netfs pool In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253007202 25200 # Node ID e19295361dd7f52fbd9a56dd33e0306b1b71b245 # Parent 60213fdefc689d3bea45fca5964edc01879dad12 [TEST] Added new tc to verify the RPCS error values for netfs pool. This test case verifies the creation of the StorageVol using the CreateResourceInPool method of RPCS returns an error when invalid values are passed. The test case checks for the errors when, Trying to create a Vol in a netfs storage pool. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. 
Kalakeri diff -r 60213fdefc68 -r e19295361dd7 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py Tue Sep 15 02:33:22 2009 -0700 @@ -0,0 +1,193 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the creation of the StorageVol using the +# CreateResourceInPool method of RPCS returns an error when invalid values +# are passed. 
+# The test case checks for the errors when, +# Trying to create a Vol in a netfs storage pool +# +# -Date: 04-09-2009 + +import sys +import os +from VirtLib import utils +from pywbem import CIM_ERR_FAILED, CIMError +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.const import do_main, platform_sup, get_provider_version +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes +from XenKvmLib import rpcs_service +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import destroy_diskpool, nfs_netfs_setup, \ + netfs_cleanup +from XenKvmLib.pool import create_pool, undefine_diskpool, NETFS_POOL, \ + get_diskpool, get_stovol_rasd_from_sdc, \ + get_stovol_default_settings + +vol_name = "cimtest-vol.img" +vol_path = "/tmp/" + +exp_err_no = CIM_ERR_FAILED +exp_err_values = { 'NETFS_POOL' : { 'msg' : "This function does not "\ + "support this resource type"} + } + +def get_pool_attr(server, pool_type): + pool_attr = { } + status , host_addr, src_mnt_dir, dir_mnt_dir = nfs_netfs_setup(server) + if status != PASS: + logger.error("Failed to get pool_attr for NETFS diskpool type") + return status, pool_attr + + pool_attr['Host'] = host_addr + pool_attr['SourceDirectory'] = src_mnt_dir + pool_attr['Path'] = dir_mnt_dir + + return PASS, pool_attr + +def get_inputs(virt, server, dp_cn, pool_name, exp_vol_path): + sv_rasd = dp_inst = None + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the default StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!"
\ + % pool_name) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL, sv_rasd, dp_inst + + return PASS, sv_settings, dp_inst + +def verify_vol_err(server, virt, dp_cn, pool_name, exp_vol_path): + + status, sv_settings, dp_inst = get_inputs(virt, server, dp_cn, + pool_name, exp_vol_path) + if status != PASS: + return status + + status = FAIL + res = [FAIL] + try: + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + except CIMError, (err_no, err_desc): + if res[0] != PASS and exp_err_values[pool_name]['msg'] in err_desc \ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' with '%s'", + err_desc, pool_name) + return PASS + else: + logger.error("Failed to get the error message '%s'", + exp_err_values[pool_name]['msg']) + if res[0] == PASS: + logger.error("Should not have been able to create the StorageVol '%s'", + vol_name) + + return FAIL + +def cleanup_pool_vol(server, virt, pool_name, exp_vol_path): + try: + status = destroy_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to destroy diskpool '%s'" % pool_name) + else: + status = undefine_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to undefine diskpool '%s'" % pool_name) + + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it "\ + "manually" % exp_vol_path) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + +@do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1"
and cim_rev < libvirt_rasd_storagepool_changes: + logger.info("Storage Volume creation support is available with Libvirt " + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_storagepool_changes) + return SKIP + + pool_name = "NETFS_POOL" + pool_type = NETFS_POOL + exp_vol_path = "%s/%s" % (vol_path, vol_name) + dp_cn = "DiskPool" + + try: + status = FAIL + status, pool_attr = get_pool_attr(server, pool_type) + if status != PASS: + return status + + # Creating NETFS pool to verify RPCS error + status = create_pool(server, virt, pool_name, pool_attr, + mode_type=pool_type, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + status = verify_vol_err(server, virt, dp_cn, pool_name, exp_vol_path) + if status != PASS : + raise Exception("Failed to verify the invalid '%s' " % pool_name) + + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + netfs_cleanup(server, pool_attr) + ret = cleanup_pool_vol(server, virt, pool_name, exp_vol_path) + if status != PASS or ret != PASS : + return FAIL + + return PASS +if __name__ == "__main__": + sys.exit(main()) From sandeep.ms at hp.com Tue Sep 15 13:17:56 2009 From: sandeep.ms at hp.com (Subba Rao, Sandeep M (STSD)) Date: Tue, 15 Sep 2009 13:17:56 +0000 Subject: [Libvirt-cim] Live move b/w RHEL 5.3 hosts fail through libvirt-cim - xm migrate --live works Message-ID: Hi, I'm trying to perform a live move of a domU using libvirt-cim.
The hosts are RHEL 5.3 hosts with the following: [root@RHEL53Xen1 tmp]# rpm -q tog-pegasus tog-pegasus-2.7.0-2.el5 [root@RHEL53Xen1 tmp]# rpm -q libvirt-cim libvirt-cim-0.5.1-4.el5 [root@RHEL53Xen1 tmp]# rpm -q sblim-cmpi-base sblim-cmpi-base-1.5.5-31.el5 [root@RHEL53Xen1 tmp]# cat /etc/Pegasus/cimserver_planned.conf | grep repositoryIsDefaultInstanceProvider repositoryIsDefaultInstanceProvider=true [root@RHEL53Xen1 tmp]# The destination host is also an RHEL 5.3 with similar configuration. So, I enabled the libvirt-cim log on the source host, and the following is an excerpt from the log file. The log shows the operation failed. Am I missing something here? Any help is much appreciated. infostore.c(88): Path is /etc/libvirt/cim/Xen_Domain-0 misc_util.c(199): URI of connection is: xen:/// misc_util.c(199): URI of connection is: xen:/// device_parsing.c(257): Disk node: disk infostore.c(88): Path is /etc/libvirt/cim/Xen_Copy_RHELVM3 misc_util.c(72): Connecting to libvirt with uri `xen' misc_util.c(199): URI of connection is: xen:/// instance_util.c(127): Number of keys: 1 instance_util.c(140): Comparing key 0: `InstanceID' std_invokemethod.c(279): Method `MigrateVirtualSystemToHost' execution attempted std_invokemethod.c(230): Method parameter `ComputerSystem' validated type 0x1100 std_invokemethod.c(230): Method parameter `DestinationHost' validated type 0x1600 std_invokemethod.c(215): No optional parameter supplied for `MigrationSettingData' std_invokemethod.c(230): Method parameter `MigrationSettingData' validated type 0x1000 std_invokemethod.c(303): Executing handler for method `MigrateVirtualSystemToHost' misc_util.c(72): Connecting to libvirt with uri `xen' Virt_VSMigrationService.c(102): Using default values for MigrationSettingData param Virt_VSMigrationService.c(1351): Prepared migration job a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3 Virt_VSMigrationService.c(1283): Creating instance:
root/virt:Virt_MigrationJob.InstanceID="a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3" Virt_VSMigrationService.c(783): Creating indication. misc_util.c(72): Connecting to libvirt with uri `xen' misc_util.c(199): URI of connection is: xen:/// Virt_VSMigrationService.c(757): Setting SourceInstance std_indication.c(70): Indications disabled for this provider std_invokemethod.c(305): Method `MigrateVirtualSystemToHost' returned 0 Virt_VSMigrationService.c(1184): Migration Job a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3 started Virt_VSMigrationService.c(833): MigrationJob ref: root/virt:Virt_MigrationJob.InstanceID="a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3" Virt_VSMigrationService.c(783): Creating indication. Virt_VSMigrationService.c(806): Setting PreviousInstance Virt_VSMigrationService.c(896): Modifying job a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3 (4:Running) misc_util.c(72): Connecting to libvirt with uri `xen' misc_util.c(199): URI of connection is: xen:/// Virt_VSMigrationService.c(757): Setting SourceInstance std_indication.c(70): Indications disabled for this provider misc_util.c(72): Connecting to libvirt with uri `xen' Virt_VSMigrationService.c(1135): Live migration Virt_VSMigrationService.c(937): Migrating Copy_RHELVM3 Virt_VSMigrationService.c(940): Migration failed Virt_VSMigrationService.c(833): MigrationJob ref: root/virt:Virt_MigrationJob.InstanceID="a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3" Virt_VSMigrationService.c(783): Creating indication. misc_util.c(72): Connecting to libvirt with uri `xen' misc_util.c(199): URI of connection is: xen:/// Virt_VSMigrationService.c(757): Setting SourceInstance std_indication.c(70): Indications disabled for this provider Virt_VSMigrationService.c(1189): Migration Job a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3 finished: 1 Virt_VSMigrationService.c(833): MigrationJob ref: root/virt:Virt_MigrationJob.InstanceID="a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3" Virt_VSMigrationService.c(783): Creating indication. 
Virt_VSMigrationService.c(806): Setting PreviousInstance Virt_VSMigrationService.c(896): Modifying job a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3 (7:Migration Failed) misc_util.c(72): Connecting to libvirt with uri `xen' misc_util.c(199): URI of connection is: xen:/// Thanks, Sandeep From kaitlin at linux.vnet.ibm.com Tue Sep 15 14:42:23 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 15 Sep 2009 07:42:23 -0700 Subject: [Libvirt-cim] Live move b/w RHEL 5.3 hosts fail through libvirt-cim - xm migrate --live works In-Reply-To: References: Message-ID: <4AAFA7CF.1090907@linux.vnet.ibm.com> Subba Rao, Sandeep M (STSD) wrote: > Hi, > > I'm trying to perform a live move of a domU using libvirt-cim. The hosts are RHEL 5.3 hosts with the following: > > [root at RHEL53Xen1 tmp]# rpm -q tog-pegasus > tog-pegasus-2.7.0-2.el5 > [root at RHEL53Xen1 tmp]# rpm -q libvirt-cim > libvirt-cim-0.5.1-4.el5 > [root at RHEL53Xen1 tmp]# rpm -q sblim-cmpi-base > sblim-cmpi-base-1.5.5-31.el5 > [root at RHEL53Xen1 tmp]# cat /etc/Pegasus/cimserver_planned.conf | grep repositoryIsDefaultInstanceProvider > repositoryIsDefaultInstanceProvider=true > [root at RHEL53Xen1 tmp]# > > The destination host is also an RHEL 5.3 with similar configuration. So, I enabled the libvirt cim log on the source host and following is a excerpt from the log file. > The log shows the operation failed. > > I'm I missing something here. Any help is much appreciated. > Virt_VSMigrationService.c(1135): Live migration > Virt_VSMigrationService.c(937): Migrating Copy_RHELVM3 > Virt_VSMigrationService.c(940): Migration failed Hi Sandeep, Hmm.. the provider log isn't too helpful here. Are you able to migrate the guest through libvirt using virsh? virsh migrate --live Copy_RHELVM3 xen+ssh://HostB Also, can you look in /var/log/messages to see if you see any libvirt related errors? 
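When the provider log is this terse, it can help to pull the per-job status lines out before digging through /var/log/messages. A rough sketch, not part of libvirt-cim or cimtest — the regex simply matches the UUID-tagged "job" lines in the excerpt above:

```python
import re

def summarize_migration_jobs(log_text):
    """Group libvirt-cim provider log lines by migration job UUID.

    Matches lines such as 'Migration Job <uuid> started' and
    'Modifying job <uuid> (7:Migration Failed)' and returns a dict
    mapping each job UUID to the list of status fragments seen.
    """
    jobs = {}
    pattern = re.compile(r"job ([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})\s*(.*)",
                         re.IGNORECASE)
    for line in log_text.splitlines():
        m = pattern.search(line)
        if m:
            jobs.setdefault(m.group(1), []).append(m.group(2).strip())
    return jobs

sample = (
    "Virt_VSMigrationService.c(1184): Migration Job "
    "a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3 started\n"
    "Virt_VSMigrationService.c(896): Modifying job "
    "a92bb2ac-7dcf-4d74-bc47-4afa6e2047b3 (7:Migration Failed)\n"
)
for job, events in summarize_migration_jobs(sample).items():
    print(job, "->", events)
```

Seeing all of a job's transitions together makes it easier to tell whether the failure came from the provider or from libvirt underneath.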
-- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Tue Sep 15 15:29:01 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 15 Sep 2009 08:29:01 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Adding verification for DestroySystem() of the domain In-Reply-To: <4AAF28E4.9010205@linux.vnet.ibm.com> References: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> <4AA96C43.5090005@linux.vnet.ibm.com> <4AAA372C.4090102@linux.vnet.ibm.com> <4AAE86F2.3000306@linux.vnet.ibm.com> <4AAE9C16.9060608@linux.vnet.ibm.com> <4AAF28E4.9010205@linux.vnet.ibm.com> Message-ID: <4AAFB2BD.1000904@linux.vnet.ibm.com> >> >> I took a look at this test, and you're right - the reason it's failing >> is because DestroySystem() is also undefining the guest. So the answer >> here is to modify the test so that it doesn't call undefine(). Also, >> make sure the guest isn't in the inactive domain list either. >> >> Not sure why you want to XFAIL the test, as DestroySystem() is doing >> what is expected. >> > I thought DestroySystem() is equivalent to "virsh destroy" command which > would just destroy a running domain which was defined and started. > > Nope, DestroySystem() does a "virsh destroy" and "virsh undefine". If you look at the System Virtualization Profile (DSP1042) under the heading " 8.2.2 CIM_VirtualSystemManagementService.DestroySystem( ) Method (Conditional)", DestroySystem() is defined as: "The execution of the DestroySystem( ) method shall effect the destruction of the referenced virtual system and all related virtual system configurations, including snapshots."
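The suggested fix boils down to one post-condition: after DestroySystem() the guest must be gone from both the running and the inactive (defined) domain lists. A minimal sketch of that check (a hypothetical helper, not cimtest code; `active` and `inactive` stand in for the output of `virsh list` and `virsh list --inactive`):

```python
def guest_fully_destroyed(dom_name, active, inactive):
    """True only if the domain is absent from both lists, i.e. the
    provider did the equivalent of 'virsh destroy' followed by
    'virsh undefine', as DSP1042 requires for DestroySystem()."""
    if dom_name in active:
        return False      # guest still running: destroy failed outright
    if dom_name in inactive:
        return False      # guest merely destroyed, still defined
    return True           # destroyed and undefined

# After a successful DestroySystem() only the other domains remain:
print(guest_fully_destroyed("Copy_RHELVM3", ["Domain-0"], []))                   # True
print(guest_fully_destroyed("Copy_RHELVM3", ["Domain-0"], ["Copy_RHELVM3"]))     # False
```

A test written this way passes without calling undefine() itself, which is exactly the change discussed above.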
-- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Tue Sep 15 15:30:27 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 15 Sep 2009 08:30:27 -0700 Subject: [Libvirt-cim] [PATCH 3 of 3] [TEST] Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: <4AAF5316.4020807@linux.vnet.ibm.com> References: <616c8e4217a138a001a9.1252437876@elm3a148.beaverton.ibm.com> <4AA9656D.7080203@linux.vnet.ibm.com> <4AAE2BC3.2080800@linux.vnet.ibm.com> <4AAE8944.2040406@linux.vnet.ibm.com> <4AAF5316.4020807@linux.vnet.ibm.com> Message-ID: <4AAFB313.6030801@linux.vnet.ibm.com> >> >> Why not do: >> >> try: >> >> for >> >> except CIMError, (err_no, err_desc): >> >> except Exception, details: > If I change the existing code to > try: > > for > except CIMError, (err_no, err_desc): > > except Exception, details: > > Then, I will be able to execute the for loop only one time and the > execution will come out with suitable message from verify_error*(). > >> This would save you some indentation, and allow you to catch any >> unexpected errors in addition to the errors thrown by the delete call. >> Ah, yes - that's a fair point. That's my mistake here. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From snmishra at us.ibm.com Tue Sep 15 16:19:54 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Tue, 15 Sep 2009 09:19:54 -0700 Subject: [Libvirt-cim] [PATCH 1 of 6] Add resource indication feature Makefile changes In-Reply-To: References: Message-ID: <92570a0539103628c8cc.1253031594@elm3b24.beaverton.ibm.com> # HG changeset patch # User snmishra at us.ibm.com # Date 1252684847 25200 # Node ID 92570a0539103628c8ccf0166983e9d85bb7431d # Parent f4e1a60c1d64888c6f8e53c9ed4ea15651825a69 Add resource indication feature Makefile changes. MOF and Registration files for the resource indication provider were added. 
Changes were made to src/Makefile.am to build resource indication provider. Signed-off-by: Sharad Mishra diff -r f4e1a60c1d64 -r 92570a053910 Makefile.am --- a/Makefile.am Fri Sep 04 14:12:46 2009 -0700 +++ b/Makefile.am Fri Sep 11 09:00:47 2009 -0700 @@ -27,6 +27,7 @@ schema/RegisteredProfile.mof \ schema/ElementConformsToProfile.mof \ schema/ComputerSystemIndication.mof \ + schema/ResourceAllocationSettingDataIndication.mof \ schema/ComputerSystemMigrationIndication.mof \ schema/Virt_ResourceAllocationSettingData.mof \ schema/ResourceAllocationSettingData.mof \ @@ -101,6 +102,7 @@ schema/DiskPool.registration \ schema/HostedResourcePool.registration \ schema/ComputerSystemIndication.registration \ + schema/ResourceAllocationSettingDataIndication.registration \ schema/ComputerSystemMigrationIndication.registration \ schema/ResourceAllocationSettingData.registration \ schema/ResourcePoolConfigurationService.registration \ diff -r f4e1a60c1d64 -r 92570a053910 src/Makefile.am --- a/src/Makefile.am Fri Sep 04 14:12:46 2009 -0700 +++ b/src/Makefile.am Fri Sep 11 09:00:47 2009 -0700 @@ -48,6 +48,7 @@ libVirt_VirtualSystemSnapshotServiceCapabilities.la \ libVirt_SystemDevice.la \ libVirt_ComputerSystemIndication.la \ + libVirt_ResourceAllocationSettingDataIndication.la \ libVirt_ComputerSystemMigrationIndication.la \ libVirt_VirtualSystemManagementCapabilities.la \ libVirt_AllocationCapabilities.la \ @@ -86,6 +87,10 @@ libVirt_ComputerSystemIndication_la_SOURCES = Virt_ComputerSystemIndication.c libVirt_ComputerSystemIndication_la_LIBADD = -lVirt_ComputerSystem -lVirt_HostSystem -lpthread -lrt +libVirt_ResourceAllocationSettingDataIndication_la_DEPENDENCIES = libVirt_ComputerSystem.la +libVirt_ResourceAllocationSettingDataIndication_la_SOURCES = Virt_ResourceAllocationSettingDataIndication.c +libVirt_ResourceAllocationSettingDataIndication_la_LIBADD = -lVirt_ComputerSystem + libVirt_ComputerSystemMigrationIndication_la_DEPENDENCIES = libVirt_ComputerSystem.la 
libVirt_ComputerSystemMigrationIndication_la_SOURCES = Virt_ComputerSystemMigrationIndication.c libVirt_ComputerSystemMigrationIndication_la_LIBADD = -lVirt_ComputerSystem From snmishra at us.ibm.com Tue Sep 15 16:19:53 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Tue, 15 Sep 2009 09:19:53 -0700 Subject: [Libvirt-cim] [PATCH 0 of 6] (#2) Add resource indication feature. Message-ID: This patch adds feature to raise indications when resource(s) are added/deleted or modified. Signed-off-by: Sharad Mishra From snmishra at us.ibm.com Tue Sep 15 16:19:57 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Tue, 15 Sep 2009 09:19:57 -0700 Subject: [Libvirt-cim] [PATCH 4 of 6] (#2) Support for resource indication was added to Virt_VirtualSystemManagementService In-Reply-To: References: Message-ID: <25932f2392c13145502b.1253031597@elm3b24.beaverton.ibm.com> # HG changeset patch # User snmishra at us.ibm.com # Date 1252684847 25200 # Node ID 25932f2392c13145502b7e45baadffc0d4dff431 # Parent 74607c71855e6baeeb49bbc134b773acc39675fb (#2) Support for resource indication was added to Virt_VirtualSystemManagementService #2 - Took care of some coding style issues - Added debug message when failed to get InstanceID of an instance in _update_resources_for function. - Broke down _update_resources_for function into two functions for better readability. - Added "RES_IND_" to CREATED, DELETED and MODIFIED constants. - Added check and debug message for prev_inst returned in get_previous_instance(). Code added to call resource indication when resources are added or deleted or modified. 
Signed-off-by: Sharad Mishra diff -r 74607c71855e -r 25932f2392c1 src/Virt_VirtualSystemManagementService.c --- a/src/Virt_VirtualSystemManagementService.c Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_VirtualSystemManagementService.c Fri Sep 11 09:00:47 2009 -0700 @@ -63,6 +63,9 @@ #define BRIDGE_TYPE "bridge" #define NETWORK_TYPE "network" #define USER_TYPE "user" +#define RASD_IND_CREATED "ResourceAllocationSettingDataCreatedIndication" +#define RASD_IND_DELETED "ResourceAllocationSettingDataDeletedIndication" +#define RASD_IND_MODIFIED "ResourceAllocationSettingDataModifiedIndication" const static CMPIBroker *_BROKER; @@ -442,7 +445,7 @@ ret = cu_get_str_prop(inst, "VirtualSystemIdentifier", &val); if (ret != CMPI_RC_OK) goto out; - + free(domain->name); domain->name = strdup(val); @@ -1416,7 +1419,67 @@ return s; } -static CMPIInstance *create_system(CMPIInstance *vssd, +static CMPIStatus raise_rasd_indication(const CMPIContext *context, + const char *base_type, + CMPIInstance *prev_inst, + const CMPIObjectPath *ref, + struct inst_list *list) +{ + char *type; + CMPIStatus s = {CMPI_RC_OK, NULL}; + CMPIInstance *instc = NULL; + CMPIInstance *ind = NULL; + CMPIObjectPath *op = NULL; + int i; + + CU_DEBUG("raise_rasd_indication"); + + type = get_typed_class(CLASSNAME(ref), base_type); + ind = get_typed_instance(_BROKER, + CLASSNAME(ref), + base_type, + NAMESPACE(ref)); + if (ind == NULL) { + CU_DEBUG("Failed to get indication instance"); + s.rc = CMPI_RC_ERR_FAILED; + goto out; + } + + /* PreviousInstance is set only for modify case. 
*/ + if (prev_inst != NULL) + CMSetProperty(ind, + "PreviousInstance", + (CMPIValue *)&prev_inst, + CMPI_instance); + + for (i = 0; i < list->cur; i++) { + instc = list->list[i]; + op = CMGetObjectPath(instc, NULL); + CMPIString *str = CMGetClassName(op, NULL); + + CU_DEBUG("class name is %s\n", CMGetCharsPtr(str, NULL)); + + CMSetProperty(ind, + "SourceInstance", + (CMPIValue *)&instc, + CMPI_instance); + set_source_inst_props(_BROKER, context, ref, ind); + + s = stdi_raise_indication(_BROKER, + context, + type, + NAMESPACE(ref), + ind); + } + + out: + free(type); + return s; + +} + +static CMPIInstance *create_system(const CMPIContext *context, + CMPIInstance *vssd, CMPIArray *resources, const CMPIObjectPath *ref, const CMPIObjectPath *refconf, @@ -1427,9 +1490,13 @@ const char *msg = NULL; virConnectPtr conn = NULL; virDomainPtr dom = NULL; + struct inst_list list; + const char *props[] = {NULL}; struct domain *domain = NULL; + inst_list_init(&list); + if (refconf != NULL) { *s = get_reference_domain(&domain, ref, refconf); if (s->rc != CMPI_RC_OK) @@ -1477,14 +1544,35 @@ CU_DEBUG("System XML:\n%s", xml); inst = connect_and_create(xml, ref, s); - if (inst != NULL) + if (inst != NULL) { update_dominfo(domain, CLASSNAME(ref)); + *s = enum_rasds(_BROKER, + ref, + domain->name, + CIM_RES_TYPE_ALL, + props, + &list); + + if (s->rc != CMPI_RC_OK) { + CU_DEBUG("Failed to enumerate rasd\n"); + goto out; + } + + raise_rasd_indication(context, + RASD_IND_CREATED, + NULL, + ref, + &list); + } + + out: cleanup_dominfo(&domain); free(xml); virDomainFree(dom); virConnectClose(conn); + inst_list_free(&list); return inst; } @@ -1530,7 +1618,7 @@ if (s.rc != CMPI_RC_OK) goto out; - sys = create_system(vssd, res, reference, refconf, &s); + sys = create_system(context, vssd, res, reference, refconf, &s); if (sys == NULL) goto out; @@ -1564,12 +1652,15 @@ CMPIObjectPath *sys; virConnectPtr conn = NULL; virDomainPtr dom = NULL; + struct inst_list list; + const char *props[] = 
{NULL}; + inst_list_init(&list); conn = connect_by_classname(_BROKER, CLASSNAME(reference), &status); if (conn == NULL) { - rc = -1; + rc = IM_RC_NOT_SUPPORTED; goto error; } @@ -1580,6 +1671,18 @@ if (dom_name == NULL) goto error; + status = enum_rasds(_BROKER, + reference, + dom_name, + CIM_RES_TYPE_ALL, + props, + &list); + + if (status.rc != CMPI_RC_OK) { + CU_DEBUG("Failed to enumerate rasd"); + goto error; + } + dom = virDomainLookupByName(conn, dom_name); if (dom == NULL) { CU_DEBUG("No such domain `%s'", dom_name); @@ -1605,11 +1708,17 @@ error: if (rc == IM_RC_SYS_NOT_FOUND) - virt_set_status(_BROKER, &status, + virt_set_status(_BROKER, + &status, CMPI_RC_ERR_NOT_FOUND, conn, "Referenced domain `%s' does not exist", dom_name); + else if (rc == IM_RC_NOT_SUPPORTED) + virt_set_status(_BROKER, &status, + CMPI_RC_ERR_NOT_FOUND, + conn, + "Unable to connect to libvirt"); else if (rc == IM_RC_FAILED) virt_set_status(_BROKER, &status, CMPI_RC_ERR_NOT_FOUND, @@ -1617,6 +1726,11 @@ "Unable to retrieve domain name"); else if (rc == IM_RC_OK) { status = (CMPIStatus){CMPI_RC_OK, NULL}; + raise_rasd_indication(context, + RASD_IND_DELETED, + NULL, + reference, + &list); trigger_indication(context, "ComputerSystemDeletedIndication", reference); @@ -1625,7 +1739,7 @@ virDomainFree(dom); virConnectClose(conn); CMReturnData(results, &rc, CMPI_uint32); - + inst_list_free(&list); return status; } @@ -2071,7 +2185,51 @@ return s; } -static CMPIStatus _update_resources_for(const CMPIObjectPath *ref, +static CMPIInstance *get_previous_instance(struct domain *dominfo, + const CMPIObjectPath *ref, + uint16_t type, + const char *devid) +{ + CMPIStatus s; + const char *props[] = {NULL}; + const char *inst_id; + struct inst_list list; + CMPIInstance *prev_inst = NULL; + int i, ret; + + inst_list_init(&list); + s = enum_rasds(_BROKER, ref, dominfo->name, type, props, &list); + if (s.rc != CMPI_RC_OK) { + CU_DEBUG("Failed to enumerate rasd"); + goto out; + } + + for(i = 0; i < 
list.cur; i++) { + prev_inst = list.list[i]; + ret = cu_get_str_prop(prev_inst, + "InstanceID", + &inst_id); + + if (ret != CMPI_RC_OK) { + CU_DEBUG("Cannot get InstanceID ... ignoring"); + continue; + } + + if (STREQ(inst_id, get_fq_devid(dominfo->name, (char *)devid))) + break; + } + + if (prev_inst == NULL) + CU_DEBUG("PreviousInstance is NULL"); + + out: + inst_list_free(&list); + + return prev_inst; +} + +static CMPIStatus _update_resources_for(const CMPIContext *context, + const CMPIObjectPath *ref, virDomainPtr dom, const char *devid, CMPIInstance *rasd, @@ -2081,8 +2239,12 @@ struct domain *dominfo = NULL; uint16_t type; char *xml = NULL; + const char *indication; CMPIObjectPath *op; + struct inst_list list; + CMPIInstance *prev_inst = NULL; + inst_list_init(&list); if (!get_dominfo(dom, &dominfo)) { virt_set_status(_BROKER, &s, CMPI_RC_ERR_FAILED, @@ -2116,6 +2278,27 @@ if (xml != NULL) { CU_DEBUG("New XML:\n%s", xml); connect_and_create(xml, ref, &s); + + if (func == &resource_add) { + indication = strdup(RASD_IND_CREATED); + } + else if (func == &resource_del) { + indication = strdup(RASD_IND_DELETED); + } + else { + indication = strdup(RASD_IND_MODIFIED); + prev_inst = get_previous_instance(dominfo, ref, type, devid); + } + + if (inst_list_add(&list, rasd) == 0) { + CU_DEBUG("Unable to add RASD instance to the list\n"); + goto out; + } + raise_rasd_indication(context, + indication, + prev_inst, + ref, + &list); } else { cu_statusf(_BROKER, &s, CMPI_RC_ERR_FAILED, @@ -2125,6 +2308,7 @@ out: cleanup_dominfo(&dominfo); free(xml); + inst_list_free(&list); return s; } @@ -2153,7 +2337,8 @@ return s; } -static CMPIStatus _update_resource_settings(const CMPIObjectPath *ref, +static CMPIStatus _update_resource_settings(const CMPIContext *context, + const CMPIObjectPath *ref, const char *domain, CMPIArray *resources, const CMPIResult *results, @@ -2208,9 +2393,14 @@ goto end; } - s = _update_resources_for(ref, dom, devid, inst, func); + s = 
_update_resources_for(context, + ref, + dom, + devid, + inst, + func); - end: + end: free(name); free(devid); virDomainFree(dom); @@ -2310,7 +2500,9 @@ return s; } - if (cu_get_ref_arg(argsin, "AffectedConfiguration", &sys) != CMPI_RC_OK) { + if (cu_get_ref_arg(argsin, + "AffectedConfiguration", + &sys) != CMPI_RC_OK) { cu_statusf(_BROKER, &s, CMPI_RC_ERR_INVALID_PARAMETER, "Missing AffectedConfiguration parameter"); @@ -2324,11 +2516,13 @@ return s; } - s = _update_resource_settings(reference, + s = _update_resource_settings(context, + reference, domain, arr, results, resource_add); + free(domain); return s; @@ -2351,7 +2545,8 @@ return s; } - return _update_resource_settings(reference, + return _update_resource_settings(context, + reference, NULL, arr, results, @@ -2384,7 +2579,8 @@ if (s.rc != CMPI_RC_OK) goto out; - s = _update_resource_settings(reference, + s = _update_resource_settings(context, + reference, NULL, resource_arr, results, From snmishra at us.ibm.com Tue Sep 15 16:19:58 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Tue, 15 Sep 2009 09:19:58 -0700 Subject: [Libvirt-cim] [PATCH 5 of 6] Add the mof and reg files needed to register the resource indication provider In-Reply-To: References: Message-ID: # HG changeset patch # User Sharad Mishra # Date 1253031557 25200 # Node ID cbcf788b362077e7ed289895dc1ce851e405b1ae # Parent 25932f2392c13145502b7e45baadffc0d4dff431 Add the mof and reg files needed to register the resource indication provider Signed-off-by: Sharad Mishra diff -r 25932f2392c1 -r cbcf788b3620 schema/ResourceAllocationSettingDataIndication.mof --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/schema/ResourceAllocationSettingDataIndication.mof Tue Sep 15 09:19:17 2009 -0700 @@ -0,0 +1,66 @@ +// Copyright IBM Corp. 
2007 + +[Description ("Xen_ResourceAllocationSettingData created"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class Xen_ResourceAllocationSettingDataCreatedIndication : CIM_InstCreation +{ +}; + +[Description ("Xen_ResourceAllocationSettingData deleted"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class Xen_ResourceAllocationSettingDataDeletedIndication : CIM_InstDeletion +{ +}; + +[Description ("Xen_ResourceAllocationSettingData modified"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class Xen_ResourceAllocationSettingDataModifiedIndication : CIM_InstModification +{ +}; + + +[Description ("KVM_ResourceAllocationSettingData created"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class KVM_ResourceAllocationSettingDataCreatedIndication : CIM_InstCreation +{ +}; + +[Description ("KVM_ResourceAllocationSettingData deleted"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class KVM_ResourceAllocationSettingDataDeletedIndication : CIM_InstDeletion +{ +}; + +[Description ("KVM_ResourceAllocationSettingData modified"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class KVM_ResourceAllocationSettingDataModifiedIndication : CIM_InstModification +{ +}; + + +[Description ("LXC_ResourceAllocationSettingData created"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class LXC_ResourceAllocationSettingDataCreatedIndication : CIM_InstCreation +{ +}; + +[Description ("LXC_ResourceAllocationSettingData deleted"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class LXC_ResourceAllocationSettingDataDeletedIndication : CIM_InstDeletion +{ +}; + +[Description ("LXC_ResourceAllocationSettingData modified"), + Provider("cmpi::Virt_ResourceAllocationSettingDataIndication") +] +class LXC_ResourceAllocationSettingDataModifiedIndication : CIM_InstModification +{ +}; diff -r 25932f2392c1 -r cbcf788b3620 
schema/ResourceAllocationSettingDataIndication.registration --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/schema/ResourceAllocationSettingDataIndication.registration Tue Sep 15 09:19:17 2009 -0700 @@ -0,0 +1,11 @@ +# Copyright IBM Corp. 2007 +# Classname Namespace ProviderName ProviderModule ProviderTypes +Xen_ResourceAllocationSettingDataCreatedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +Xen_ResourceAllocationSettingDataDeletedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +Xen_ResourceAllocationSettingDataModifiedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +KVM_ResourceAllocationSettingDataCreatedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +KVM_ResourceAllocationSettingDataDeletedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +KVM_ResourceAllocationSettingDataModifiedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +LXC_ResourceAllocationSettingDataCreatedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +LXC_ResourceAllocationSettingDataDeletedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method +LXC_ResourceAllocationSettingDataModifiedIndication root/virt Virt_ResourceAllocationSettingDataIndicationProvider Virt_ResourceAllocationSettingDataIndication indication method From snmishra at us.ibm.com Tue Sep 15 16:19:56 2009 From: snmishra at us.ibm.com 
(Sharad Mishra) Date: Tue, 15 Sep 2009 09:19:56 -0700 Subject: [Libvirt-cim] [PATCH 3 of 6] Modify Virt_RASD so that rasd_from_vdev() can be used by other providers In-Reply-To: References: Message-ID: <74607c71855e6baeeb49.1253031596@elm3b24.beaverton.ibm.com> # HG changeset patch # User snmishra at us.ibm.com # Date 1252684847 25200 # Node ID 74607c71855e6baeeb49bbc134b773acc39675fb # Parent 44e2c3144f199c7e552e3f5066186289b424b5db Modify Virt_RASD so that rasd_from_vdev() can be used by other providers. Signed-off-by: Sharad Mishra diff -r 44e2c3144f19 -r 74607c71855e src/Virt_RASD.c --- a/src/Virt_RASD.c Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_RASD.c Fri Sep 11 09:00:47 2009 -0700 @@ -368,7 +368,7 @@ return s; } -static CMPIInstance *rasd_from_vdev(const CMPIBroker *broker, +CMPIInstance *rasd_from_vdev(const CMPIBroker *broker, struct virt_device *dev, const char *host, const CMPIObjectPath *ref, diff -r 44e2c3144f19 -r 74607c71855e src/Virt_RASD.h --- a/src/Virt_RASD.h Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_RASD.h Fri Sep 11 09:00:47 2009 -0700 @@ -66,6 +66,13 @@ const uint16_t type, const char *host, struct virt_device **list); + +CMPIInstance *rasd_from_vdev(const CMPIBroker *broker, + struct virt_device *dev, + const char *host, + const CMPIObjectPath *ref, + const char **properties); + #endif /* From snmishra at us.ibm.com Tue Sep 15 16:19:55 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Tue, 15 Sep 2009 09:19:55 -0700 Subject: [Libvirt-cim] [PATCH 2 of 6] Modify Virt_CS so set_source_inst_props() can be used by other providers In-Reply-To: References: Message-ID: <44e2c3144f199c7e552e.1253031595@elm3b24.beaverton.ibm.com> # HG changeset patch # User snmishra at us.ibm.com # Date 1252684847 25200 # Node ID 44e2c3144f199c7e552e3f5066186289b424b5db # Parent 92570a0539103628c8ccf0166983e9d85bb7431d Modify Virt_CS so set_source_inst_props() can be used by other providers.
Signed-off-by: Sharad Mishra diff -r 92570a053910 -r 44e2c3144f19 src/Virt_ComputerSystemIndication.c --- a/src/Virt_ComputerSystemIndication.c Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_ComputerSystemIndication.c Fri Sep 11 09:00:47 2009 -0700 @@ -192,9 +192,9 @@ return ret; } -static void set_source_inst_props(const CMPIBroker *broker, +void set_source_inst_props(const CMPIBroker *broker, const CMPIContext *context, - CMPIObjectPath *ref, + const CMPIObjectPath *ref, CMPIInstance *ind) { const char *host; diff -r 92570a053910 -r 44e2c3144f19 src/Virt_ComputerSystemIndication.h --- a/src/Virt_ComputerSystemIndication.h Fri Sep 11 09:00:47 2009 -0700 +++ b/src/Virt_ComputerSystemIndication.h Fri Sep 11 09:00:47 2009 -0700 @@ -29,6 +29,10 @@ const CMPIObjectPath *newsystem, char *type); +void set_source_inst_props(const CMPIBroker *broker, + const CMPIContext *context, + const CMPIObjectPath *ref, + CMPIInstance *ind); #endif /* From snmishra at us.ibm.com Tue Sep 15 16:19:59 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Tue, 15 Sep 2009 09:19:59 -0700 Subject: [Libvirt-cim] [PATCH 6 of 6] Add resource indication provider In-Reply-To: References: Message-ID: <335b5e307df79e4e4cfd.1253031599@elm3b24.beaverton.ibm.com> # HG changeset patch # User Sharad Mishra # Date 1253031558 25200 # Node ID 335b5e307df79e4e4cfdfc15d13424c759da8b53 # Parent cbcf788b362077e7ed289895dc1ce851e405b1ae Add resource indication provider. Signed-off-by: Sharad Mishra diff -r cbcf788b3620 -r 335b5e307df7 src/Virt_ResourceAllocationSettingDataIndication.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/src/Virt_ResourceAllocationSettingDataIndication.c Tue Sep 15 09:19:18 2009 -0700 @@ -0,0 +1,155 @@ +/* + * Copyright IBM Corp. 
2007 + * + * Authors: + * Sharad Mishra + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ +#include +#include + +#include +#include +#include + +#include +#include +#include +#include +#include + +static const CMPIBroker *_BROKER; + +DECLARE_FILTER(xen_created, + "Xen_ResourceAllocationSettingDataCreatedIndication"); +DECLARE_FILTER(xen_deleted, + "Xen_ResourceAllocationSettingDataDeletedIndication"); +DECLARE_FILTER(xen_modified, + "Xen_ResourceAllocationSettingDataModifiedIndication"); +DECLARE_FILTER(kvm_created, + "KVM_ResourceAllocationSettingDataCreatedIndication"); +DECLARE_FILTER(kvm_deleted, + "KVM_ResourceAllocationSettingDataDeletedIndication"); +DECLARE_FILTER(kvm_modified, + "KVM_ResourceAllocationSettingDataModifiedIndication"); +DECLARE_FILTER(lxc_created, + "LXC_ResourceAllocationSettingDataCreatedIndication"); +DECLARE_FILTER(lxc_deleted, + "LXC_ResourceAllocationSettingDataDeletedIndication"); +DECLARE_FILTER(lxc_modified, + "LXC_ResourceAllocationSettingDataModifiedIndication"); + +static struct std_ind_filter *filters[] = { + &xen_created, + &xen_deleted, + &xen_modified, + &kvm_created, + &kvm_deleted, + &kvm_modified, + &lxc_created, + &lxc_deleted, + &lxc_modified, + NULL, +}; + + +static CMPIStatus raise_indication(const CMPIBroker *broker, + const 
CMPIContext *ctx, + const CMPIInstance *ind) +{ + struct std_indication_ctx *_ctx = NULL; + CMPIStatus s = {CMPI_RC_OK, NULL}; + struct ind_args *args = NULL; + CMPIObjectPath *ref = NULL; + + _ctx = malloc(sizeof(struct std_indication_ctx)); + if (_ctx == NULL) { + cu_statusf(broker, &s, + CMPI_RC_ERR_FAILED, + "Unable to allocate indication context"); + goto out; + } + + _ctx->brkr = broker; + _ctx->handler = NULL; + _ctx->filters = filters; + _ctx->enabled = 1; + + args = malloc(sizeof(struct ind_args)); + if (args == NULL) { + cu_statusf(broker, &s, + CMPI_RC_ERR_FAILED, + "Unable to allocate ind_args"); + goto out; + } + + ref = CMGetObjectPath(ind, &s); + if (ref == NULL) { + cu_statusf(broker, &s, + CMPI_RC_ERR_FAILED, + "Got a null object path"); + goto out; + } + + /* FIXME: This is a Pegasus workaround. Pegasus loses the namespace + when an ObjectPath is pulled from an instance */ + + + CMSetNameSpace(ref, "root/virt"); + args->ns = strdup(NAMESPACE(ref)); + args->classname = strdup(CLASSNAME(ref)); + args->_ctx = _ctx; + + s = stdi_deliver(broker, ctx, args, (CMPIInstance *)ind); + if (s.rc == CMPI_RC_OK) { + CU_DEBUG("Indication delivered"); + } else { + CU_DEBUG("Not delivered: %s", CMGetCharPtr(s.msg)); + } + + out: + return s; +} + +static struct std_indication_handler rasdi = { + .raise_fn = raise_indication, + .trigger_fn = NULL, + .activate_fn = NULL, + .deactivate_fn = NULL, + .enable_fn = NULL, + .disable_fn = NULL, +}; + +DEFAULT_IND_CLEANUP(); +DEFAULT_AF(); +DEFAULT_MP(); + +STDI_IndicationMIStub(, + Virt_ResourceAllocationSettingDataIndicationProvider, + _BROKER, + libvirt_cim_init(), + &rasdi, + filters); + +/* + * Local Variables: + * mode: C + * c-set-style: "K&R" + * tab-width: 8 + * c-basic-offset: 8 + * indent-tabs-mode: nil + * End: + */ From kaitlin at linux.vnet.ibm.com Wed Sep 16 03:37:49 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 15 Sep 2009 20:37:49 -0700 Subject: [Libvirt-cim] [PATCH 2 of 5] [TEST]
Added new tc to verify the RPCS error values with dir type pool In-Reply-To: <60213fdefc689d3bea45.1253008254@elm3a148.beaverton.ibm.com> References: <60213fdefc689d3bea45.1253008254@elm3a148.beaverton.ibm.com> Message-ID: <4AB05D8D.8090307@linux.vnet.ibm.com> > + > + except CIMError, (err_no, err_desc): > + if res[0] != PASS and exp_err_values[key]['msg'] in err_desc \ > + and exp_err_no == err_no: > + logger.error("Got the expected error message: '%s' with '%s'", > + err_desc, key) > + return PASS In the case where you're attempting to create a volume with a name that is already in use, this returns and the volume isn't cleaned up properly. Sorry I missed this in previous reviews. > + else: > + logger.error("Failed to get the error message '%s'", > + exp_err_values[key]['msg']) > + > + if res[0] == PASS: > + logger.error("Should not have been able to create Vol %s", vol_name) > + cleanup_vol(server, exp_vol_path) -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Wed Sep 16 06:03:45 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Wed, 16 Sep 2009 11:33:45 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Adding verification for DestroySystem() of the domain In-Reply-To: <4AAFB2BD.1000904@linux.vnet.ibm.com> References: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> <4AA96C43.5090005@linux.vnet.ibm.com> <4AAA372C.4090102@linux.vnet.ibm.com> <4AAE86F2.3000306@linux.vnet.ibm.com> <4AAE9C16.9060608@linux.vnet.ibm.com> <4AAF28E4.9010205@linux.vnet.ibm.com> <4AAFB2BD.1000904@linux.vnet.ibm.com> Message-ID: <4AB07FC1.2010505@linux.vnet.ibm.com> Kaitlin Rupert wrote: >>> >>> I took a look at this test, and you're right - the reason it's >>> failing is because DestroySystem() is also undefined the guest. So >>> the answer here is to modify the test so that it doesn't call >>> undefine(). Also, make sure the guest isn't in the inactive domain >>> list either. 
>>> >>> Not sure why you want to XFAIL the test, as DestroySystem() is doing >>> what is expected. >>> >> I thought DestroySystem() is equivalent to "virsh destroy" command >> which would just destroy a running domain which was defined and started. >> >> > > Nope, DestroySystem() does a "virsh destroy" and "virsh undefine". If > you look at the System Virtualization Profile (DSP1042) under the > heading " 8.2.2 CIM_VirtualSystemManagementService.DestroySystem( ) > Method (Conditional)", DestroySystem() is defined as: > > "The execution of the DestroySystem( ) method shall effect the > destruction of the referenced virtual system > and all related virtual system configurations, including snapshots." Oh! Yeah, I read this today. It's been a long time since I read the DSP1042. Thanks for the clarifications. Updated patch on its way. -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Wed Sep 16 09:17:02 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 16 Sep 2009 09:17:02 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] Updating RPCS/10_create_storagevolume.py Message-ID: <0a64f90aabb5dd63ac2a.1253092622@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253097826 14400 # Node ID 0a64f90aabb5dd63ac2ab677981a939b2fcf5eeb # Parent 9e08670a3c3749738a65fec7f2faa4c2b68a7092 [TEST] Updating RPCS/10_create_storagevolume.py Updating RPCS/10_create_storagevolume.py to create and use its own dir pool for StorageVol. If we try to use the default_pool_name then this will cause a regression in the further tests which refer to the /tmp/cimtest-vol.img as all the information regarding this will get cleared only when the pool under which it is created is destroyed. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B.
Kalakeri diff -r 9e08670a3c37 -r 0a64f90aabb5 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py --- a/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py Thu Sep 10 09:32:01 2009 -0700 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py Wed Sep 16 06:43:46 2009 -0400 @@ -31,8 +31,7 @@ from VirtLib import utils from CimTest.Globals import logger from CimTest.ReturnCodes import FAIL, PASS, SKIP -from XenKvmLib.const import do_main, platform_sup, default_pool_name, \ - get_provider_version +from XenKvmLib.const import do_main, platform_sup, get_provider_version from XenKvmLib.vsms import RASD_TYPE_STOREVOL from XenKvmLib.rasd import libvirt_rasd_storagepool_changes from XenKvmLib import rpcs_service @@ -129,17 +128,15 @@ return PASS -def cleanup_pool_vol(server, virt, pool_name, clean_vol, exp_vol_path): +def cleanup_pool_vol(server, virt, pool_name, exp_vol_path): try: - if clean_vol == True: - status = destroy_diskpool(server, virt, pool_name) + status = destroy_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to destroy diskpool '%s'" % pool_name) + else: + status = undefine_diskpool(server, virt, pool_name) if status != PASS: - raise Exception("Unable to destroy diskpool '%s'" % pool_name) - else: - status = undefine_diskpool(server, virt, pool_name) - if status != PASS: - raise Exception("Unable to undefine diskpool '%s'" \ - % pool_name) + raise Exception("Unable to undefine diskpool '%s'" % pool_name) except Exception, details: logger.error("Exception details: %s", details) return FAIL @@ -177,18 +174,13 @@ status = FAIL res = [FAIL] found = 0 - clean_pool=True try: - if pool_type == DIR_POOL: - pool_name = default_pool_name - clean_pool=False - else: - status = create_pool(server, virt, pool_name, pool_attr, - mode_type=pool_type, pool_type="DiskPool") + status = create_pool(server, virt, pool_name, pool_attr, + 
mode_type=pool_type, pool_type="DiskPool") - if status != PASS: - logger.error("Failed to create pool '%s'", pool_name) - return status + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status dp_inst_id = "%s/%s" % (dp_cn, pool_name) stovol_settings = get_stovol_settings(server, virt, @@ -211,18 +203,18 @@ found = verify_vol(server, virt, pool_name, exp_vol_path, found) stovol_status = verify_sto_vol_rasd(virt, server, dp_inst_id, exp_vol_path) + + ret = cleanup_pool_vol(server, virt, pool_name, exp_vol_path) + if res[0] == PASS and found == 1 and \ + ret == PASS and stovol_status == PASS: + status = PASS + else: + return FAIL except Exception, details: logger.error("Exception details: %s", details) status = FAIL - ret = cleanup_pool_vol(server, virt, pool_name, - clean_pool, exp_vol_path) - if res[0] == PASS and found == 1 and \ + ret == PASS and stovol_status == PASS: - status = PASS - else: - return FAIL return status if __name__ == "__main__": From deeptik at linux.vnet.ibm.com Wed Sep 16 11:37:14 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 16 Sep 2009 11:37:14 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] #2 Adding verification for DestroySystem() of the domain Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253106219 14400 # Node ID a53276163e6a79ecc9c0d103bac4a256c2a0d031 # Parent 7d3c17e1c691b46c8b770f09ddad72f0d839f5aa [TEST] #2 Adding verification for DestroySystem() of the domain. Patch 2: -------- 1) Removed unnecessary import stmts 2) Removed the undefine() call after the DestroySystem() call 3) Improved the log messages 4) Put a one-line description in the test case Tested with KVM and current sources on SLES11 and F11. Signed-off-by: Deepti B.
Kalakeri diff -r 7d3c17e1c691 -r a53276163e6a suites/libvirt-cim/cimtest/VirtualSystemManagementService/02_destroysystem.py --- a/suites/libvirt-cim/cimtest/VirtualSystemManagementService/02_destroysystem.py Wed Sep 16 08:58:46 2009 -0400 +++ b/suites/libvirt-cim/cimtest/VirtualSystemManagementService/02_destroysystem.py Wed Sep 16 09:03:39 2009 -0400 @@ -5,6 +5,7 @@ # Authors: # Guolian Yun # Zhengang Li +# Deepti B. Kalakeri # # This library is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public @@ -20,13 +21,13 @@ # License along with this library; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA # +# Test case to verify DestroySystem() of VSMS provider. +# +# import sys -import pywbem -from pywbem.cim_obj import CIMInstanceName -from VirtLib import utils from XenKvmLib.xm_virt_util import domain_list, active_domain_list -from XenKvmLib import vsms, vxml +from XenKvmLib import vxml from XenKvmLib.classes import get_typed_class from XenKvmLib.const import do_main from CimTest.Globals import logger @@ -43,47 +44,48 @@ def main(): options = main.options - service = vsms.get_vsms_class(options.virt)(options.ip) cxml = vxml.get_class(options.virt)(default_dom) - ret = cxml.cim_define(options.ip) - if not ret: - logger.error("Failed to define the dom: %s", default_dom) - return FAIL - ret = cxml.start(options.ip) - if not ret: - logger.error("Failed to start the dom: %s", default_dom) + + try: + ret = cxml.cim_define(options.ip) + if not ret: + logger.error("Failed to define the domain '%s'", default_dom) + return FAIL + + defined_domains = domain_list(options.ip, options.virt) + if default_dom not in defined_domains: + logger.error("Failed to find defined domain '%s'", default_dom) + return FAIL + + ret = cxml.cim_start(options.ip) + if ret: + logger.error("Failed to start the domain '%s'", default_dom) + cxml.undefine(options.ip) + return FAIL + + list_before 
= active_domain_list(options.ip, options.virt) + if default_dom not in list_before: + raise Exception("Domain '%s' is not in active domain list" \ + % default_dom) + + ret = cxml.cim_destroy(options.ip) + if not ret: + raise Exception("Failed to destroy domain '%s'" % default_dom) + + list_after = domain_list(options.ip, options.virt) + if default_dom in list_after: + raise Exception("DestroySystem() failed to destroy domain '%s'.." \ + "Provider did not return any error" % default_dom) + else: + logger.info("DestroySystem() successfully destroyed and undefined"\ + " domain '%s'", default_dom) + + except Exception, details: + logger.error("Exception details: %s", details) cleanup_env(options.ip, cxml) return FAIL - classname = get_typed_class(options.virt, 'ComputerSystem') - cs_ref = CIMInstanceName(classname, keybindings = { - 'Name':default_dom, - 'CreationClassName':classname}) - list_before = domain_list(options.ip, options.virt) - if default_dom not in list_before: - logger.error("Domain not in domain list") - cleanup_env(options.ip, cxml) - return FAIL - - try: - service.DestroySystem(AffectedSystem=cs_ref) - except Exception, details: - logger.error('Unknow exception happened') - logger.error(details) - cleanup_env(options.ip, cxml) - return FAIL - - list_after = domain_list(options.ip, options.virt) - - if default_dom in list_after: - logger.error("Domain %s not destroyed: provider didn't return error", - default_dom) - cleanup_env(options.ip, cxml) - status = FAIL - else: - status = PASS - - return status + return PASS if __name__ == "__main__": From deeptik at linux.vnet.ibm.com Wed Sep 16 17:51:34 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 16 Sep 2009 17:51:34 -0000 Subject: [Libvirt-cim] [PATCH 1 of 5] [TEST] #4 Modified pool.py to support RPCS CreateResourceInPool In-Reply-To: References: Message-ID: <741c93090d6f7cffadc7.1253123494@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. 
Kalakeri # Date 1253106314 25200 # Node ID 741c93090d6f7cffadc7684563590282b073149d # Parent d35d1a2956a81c7798712ec3b6d4a3906c75e480 [TEST] #4 Modified pool.py to support RPCS CreateResourceInPool. Patch 4: -------- 1) Moved cleanup_pool_vol() to pool.py as it is referenced by a couple of tests PS: will update RPCS/10*py to reference cleanup_pool_vol() from pool.py once these patches get accepted. Patch 3: -------- 1) Moved get_sto_vol_rasd() to pool.py as get_sto_vol_rasd_for_pool(), since it is used in RPCS/13*py and RPCS/14*py Patch 2: -------- 1) Added check in get_stovol_rasd_from_sdc() 2) Added get_diskpool() to pool.py as it is used in 10*py/11*py, RPCS/12*py and will be useful for further tests as well 3) Added rev for storagevol deletion NOTE: Please base this patch on the patch "Modifying common_util.py for netnfs" Patch 1: -------- Added the following two functions which are used in RPCS/10*py and RPCS/11*py 1) get_stovol_rasd_from_sdc() to get the stovol rasd from sdc 2) get_stovol_default_settings() to get default sto vol settings Also, modified common_util.py to remove the backed up exportfs file Added RAW_VOL_TYPE which is the FormatType supported by RPCS currently Once this patch gets accepted we can modify RPCS/10*py to refer to these functions. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r d35d1a2956a8 -r 741c93090d6f suites/libvirt-cim/lib/XenKvmLib/common_util.py --- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py Wed Sep 16 06:05:10 2009 -0700 +++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py Wed Sep 16 06:05:14 2009 -0700 @@ -582,6 +582,8 @@ try: # Backup the original exports file.
if (os.path.exists(exports_file)): + if os.path.exists(back_exports_file): + os.remove(back_exports_file) move_file(exports_file, back_exports_file) fd = open(exports_file, "w") line = "\n %s %s(rw)" %(src_dir_for_mnt, server) diff -r d35d1a2956a8 -r 741c93090d6f suites/libvirt-cim/lib/XenKvmLib/pool.py --- a/suites/libvirt-cim/lib/XenKvmLib/pool.py Wed Sep 16 06:05:10 2009 -0700 +++ b/suites/libvirt-cim/lib/XenKvmLib/pool.py Wed Sep 16 06:05:14 2009 -0700 @@ -21,11 +21,13 @@ # import sys +import os +from VirtLib import utils from CimTest.Globals import logger, CIM_NS from CimTest.ReturnCodes import PASS, FAIL, SKIP from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.const import get_provider_version, default_pool_name -from XenKvmLib.enumclass import EnumInstances, GetInstance +from XenKvmLib.enumclass import EnumInstances, GetInstance, EnumNames from XenKvmLib.assoc import Associators from VirtLib.utils import run_remote from XenKvmLib.xm_virt_util import virt2uri, net_list @@ -34,11 +36,14 @@ from CimTest.CimExt import CIMClassMOF from XenKvmLib.vxml import NetXML, PoolXML from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.vsms import RASD_TYPE_STOREVOL +from XenKvmLib.common_util import destroy_diskpool cim_errno = pywbem.CIM_ERR_NOT_SUPPORTED cim_mname = "CreateChildResourcePool" input_graphics_pool_rev = 757 libvirt_cim_child_pool_rev = 837 +libvirt_rasd_spool_del_changes = 971 DIR_POOL = 1L FS_POOL = 2L @@ -48,6 +53,9 @@ LOGICAL_POOL = 6L SCSI_POOL = 7L +#Volume types +RAW_VOL_TYPE = 1 + def pool_cn_to_rasd_cn(pool_cn, virt): if pool_cn.find('ProcessorPool') >= 0: return get_typed_class(virt, "ProcResourceAllocationSettingData") @@ -297,3 +305,100 @@ status = PASS return status + +def get_stovol_rasd_from_sdc(virt, server, dp_inst_id): + rasd = None + ac_cn = get_typed_class(virt, "AllocationCapabilities") + an_cn = get_typed_class(virt, "SettingsDefineCapabilities") + key_list = {"InstanceID" : dp_inst_id} + + try: + inst 
= GetInstance(server, ac_cn, key_list) + if inst == None: + raise Exception("Failed to GetInstance for %s" % dp_inst_id) + + rasd = Associators(server, an_cn, ac_cn, InstanceID=inst.InstanceID) + if len(rasd) < 4: + raise Exception("Failed to get default StorageVolRASD, "\ + "Expected at least 4, Got '%s'" % len(rasd)) + + except Exception, detail: + logger.error("Exception: %s", detail) + return FAIL, None + + return PASS, rasd + +def get_stovol_default_settings(virt, server, dp_cn, + pool_name, path, vol_name): + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, dp_rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol RASD's") + return None + + for dpool_rasd in dp_rasds: + if dpool_rasd['ResourceType'] == RASD_TYPE_STOREVOL and \ + 'Default' in dpool_rasd['InstanceID']: + + dpool_rasd['PoolID'] = dp_inst_id + dpool_rasd['Path'] = path + dpool_rasd['VolumeName'] = vol_name + break + + if not pool_name in dpool_rasd['PoolID']: + return None + + return dpool_rasd + +def get_diskpool(server, virt, dp_cn, pool_name): + dp_inst = None + dpool_cn = get_typed_class(virt, dp_cn) + pools = EnumNames(server, dpool_cn) + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + for pool in pools: + if pool['InstanceID'] == dp_inst_id: + dp_inst = pool + break + + return dp_inst + +def get_sto_vol_rasd_for_pool(virt, server, dp_cn, pool_name, exp_vol_path): + dv_rasds = None + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol for '%s' vol", exp_vol_path) + return FAIL + + for item in rasds: + if item['Address'] == exp_vol_path and item['PoolID'] == dp_inst_id: + dv_rasds = item + break + + return dv_rasds + +def cleanup_pool_vol(server, virt, pool_name, exp_vol_path): + try: + status = destroy_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to destroy
diskpool '%s'" % pool_name) + else: + status = undefine_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to undefine diskpool '%s'" % pool_name) + + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + ret, out = utils.run_remote(server, cmd) + if ret != 0: + raise Exception("'%s' was not removed, please remove it "\ + "manually" % exp_vol_path) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + return PASS + From deeptik at linux.vnet.ibm.com Wed Sep 16 17:51:36 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 16 Sep 2009 17:51:36 -0000 Subject: [Libvirt-cim] [PATCH 3 of 5] [TEST] #2 Added new tc to verify the RPCS error values for netfs pool In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253106990 25200 # Node ID a8323b02007b6210f78bf92183c939e9e52b1f60 # Parent 681307f145a51198349c2915c39bc6064da886b2 [TEST] #2 Added new tc to verify the RPCS error values for netfs pool. Patch 2: -------- 1) Used the cleanup_pool_vol() from pool.py Patch 1: -------- This test case verifies the creation of the StorageVol using the CreateResourceInPool method of RPCS returns an error when invalid values are passed. The test case checks for the errors when, Trying to create a Vol in a netfs storage pool. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 681307f145a5 -r a8323b02007b suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py Wed Sep 16 06:16:30 2009 -0700 @@ -0,0 +1,167 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. 
Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the creation of the StorageVol using the +# CreateResourceInPool method of RPCS returns an error when invalid values +# are passed. +# The test case checks for the errors when, +# Trying to create a Vol in a netfs storage pool +# +# -Date: 04-09-2009 + +import sys +from pywbem import CIM_ERR_FAILED, CIMError +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.const import do_main, platform_sup, get_provider_version +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes +from XenKvmLib import rpcs_service +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import nfs_netfs_setup, netfs_cleanup +from XenKvmLib.pool import create_pool, NETFS_POOL, get_diskpool, \ + get_stovol_default_settings, cleanup_pool_vol + +vol_name = "cimtest-vol.img" +vol_path = "/tmp/" + +exp_err_no = CIM_ERR_FAILED +exp_err_values = { 'NETFS_POOL' : { 'msg' : "This function does not "\ + "support this resource type"} + } + +def get_pool_attr(server, pool_type): + pool_attr = { } + status , host_addr, src_mnt_dir, dir_mnt_dir = nfs_netfs_setup(server) + if status != PASS: + logger.error("Failed to get 
pool_attr for NETFS diskpool type") + return status, pool_attr + + pool_attr['Host'] = host_addr + pool_attr['SourceDirectory'] = src_mnt_dir + pool_attr['Path'] = dir_mnt_dir + + return PASS, pool_attr + +def get_inputs(virt, server, dp_cn, pool_name, exp_vol_path): + sv_rasd = dp_inst = None + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the default StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL, sv_rasd, dp_inst + + return PASS, sv_settings, dp_inst + +def verify_vol_err(server, virt, dp_cn, pool_name, exp_vol_path): + + status, sv_settings, dp_inst = get_inputs(virt, server, dp_cn, + pool_name, exp_vol_path) + if status != PASS: + return status + + status = FAIL + res = [FAIL] + try: + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service."
+ rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + except CIMError, (err_no, err_desc): + if res[0] != PASS and exp_err_values[pool_name]['msg'] in err_desc \ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' with '%s'", + err_desc, pool_name) + return PASS + else: + logger.error("Failed to get the error message '%s'", + exp_err_values[pool_name]['msg']) + if res[0] == PASS: + logger.error("Should not have been able to create the StorageVol '%s'", + vol_name) + + return FAIL + + +@do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_storagepool_changes: + logger.info("Storage Volume creation support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_storagepool_changes) + return SKIP + + pool_name = "NETFS_POOL" + pool_type = NETFS_POOL + exp_vol_path = "%s/%s" % (vol_path, vol_name) + dp_cn = "DiskPool" + + try: + status = FAIL + status, pool_attr = get_pool_attr(server, pool_type) + if status != PASS: + return status + + # Creating NETFS pool to verify RPCS error + status = create_pool(server, virt, pool_name, pool_attr, + mode_type=pool_type, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + status = verify_vol_err(server, virt, dp_cn, pool_name, exp_vol_path) + if status != PASS : + raise Exception("Failed to verify the Invalid '%s' " % pool_name) + + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + netfs_cleanup(server, pool_attr) + ret = cleanup_pool_vol(server, virt, pool_name, exp_vol_path) + if status != PASS or ret != PASS : + return FAIL + + return PASS +if __name__ == "__main__": + sys.exit(main()) From deeptik at
linux.vnet.ibm.com Wed Sep 16 17:51:33 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 16 Sep 2009 17:51:33 -0000 Subject: [Libvirt-cim] [PATCH 0 of 5] [TEST] #3 Added tc to verify StorageVol deletion and creation/deletion errors Message-ID: Please base this patch on the patch "Modifying common_util.py for netnfs" From deeptik at linux.vnet.ibm.com Wed Sep 16 17:51:37 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 16 Sep 2009 17:51:37 -0000 Subject: [Libvirt-cim] [PATCH 4 of 5] [TEST] #3 Add new tc to verify the DeleteResourceInPool() In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253107262 25200 # Node ID fac3bdc5d14cd1e71bb21bf73c2a43bf83217439 # Parent a8323b02007b6210f78bf92183c939e9e52b1f60 [TEST] #3 Add new tc to verify the DeleteResourceInPool(). Patch 3: -------- 1) Used the cleanup_pool_vol() from pool.py Patch2: ------ 1) Added the missing test case. 2) Included get_sto_vol_rasd_for_pool() from pool.py Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r a8323b02007b -r fac3bdc5d14c suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/13_delete_storagevolume.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/13_delete_storagevolume.py Wed Sep 16 06:21:02 2009 -0700 @@ -0,0 +1,127 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the deletion of the StorageVol using the +# DeleteResourceInPool method of RPCS. +# +# -Date: 08-09-2009 + +import sys +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.const import do_main, platform_sup, get_provider_version +from XenKvmLib import rpcs_service +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.pool import create_pool, DIR_POOL, \ + libvirt_rasd_spool_del_changes, get_diskpool, \ + get_stovol_default_settings, cleanup_pool_vol,\ + get_sto_vol_rasd_for_pool + +pool_attr = { 'Path' : "/tmp" } +vol_name = "cimtest-vol.img" + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: + logger.info("Storage Volume deletion support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_spool_del_changes) + return SKIP + + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) + + # For now the test case support only the deletion of dir type based + # vol, we can extend dp_types to include netfs etc ..... 
+ dp_types = { "DISK_POOL_DIR" : DIR_POOL } + + for pool_name, pool_type in dp_types.iteritems(): + status = FAIL + res = del_res = [FAIL] + try: + status = create_pool(server, virt, pool_name, pool_attr, + mode_type=pool_type, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + if res[0] != PASS: + raise Exception("Failed to create the Vol %s" % vol_name) + + res_settings = get_sto_vol_rasd_for_pool(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings == None: + raise Exception("Failed to get the resource settings for '%s'" \ + " Vol" % vol_name) + + resource_setting = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource_setting, + Pool=dp_inst) + + res_settings = get_sto_vol_rasd_for_pool(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings != None: + raise Exception("'%s' vol of '%s' pool was not deleted" \ + % (vol_name, pool_name)) + else: + logger.info("Vol '%s' of '%s' pool deleted successfully by " + "DeleteResourceInPool()", vol_name, pool_name) + + ret = cleanup_pool_vol(server, virt, pool_name, exp_vol_path) + if del_res[0] == PASS and ret == PASS : + status = PASS + else: + return FAIL + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + + return status +if __name__ == "__main__": + sys.exit(main()) From deeptik at 
linux.vnet.ibm.com Wed Sep 16 17:51:38 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 16 Sep 2009 17:51:38 -0000 Subject: [Libvirt-cim] [PATCH 5 of 5] [TEST] #3 Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253123447 25200 # Node ID bfe0b7369b698ec8ed47bca99bf0e1581dc00d65 # Parent fac3bdc5d14cd1e71bb21bf73c2a43bf83217439 [TEST] #3 Add new tc to verify the err values for RPCS DeleteResourceInPool() Patch 3: ------- 1) Included cleanup_pool_vol() of pool.py 2) Created a new dir pool Patch 2: -------- 1) Added exception to verify_rpcs_err_val() to catch exceptions returned other than for DeleteResourceInPool() 2) Included get_sto_vol_rasd_for_pool() from pool.py Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r fac3bdc5d14c -r bfe0b7369b69 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/14_delete_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/14_delete_storagevolume_errs.py Wed Sep 16 10:50:47 2009 -0700 @@ -0,0 +1,175 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. 
+# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the deletion of the StorageVol using the +# DeleteResourceInPool method of RPCS returns error when invalid values are +# passed. +# +# -Date: 08-09-2009 + +import sys +import os +from VirtLib import utils +from CimTest.Globals import logger +from pywbem import CIM_ERR_FAILED, CIM_ERR_INVALID_PARAMETER, CIMError +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.const import do_main, platform_sup, get_provider_version +from XenKvmLib import rpcs_service +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.pool import create_pool, DIR_POOL, \ + libvirt_rasd_spool_del_changes, get_diskpool, \ + get_stovol_default_settings, cleanup_pool_vol, \ + get_sto_vol_rasd_for_pool + +pool_attr = { 'Path' : "/tmp" } +vol_name = "cimtest-vol.img" +invalid_scen = { "INVALID_ADDRESS" : { 'val' : 'Junkvol_path', + 'msg' : 'no storage vol with '\ + 'matching path' }, + "NO_ADDRESS_FIELD" : { 'msg' :'Missing Address in '\ + 'resource RASD' }, + "MISSING_RESOURCE" : { 'msg' :"Missing argument `Resource'"}, + "MISSING_POOL" : { 'msg' :"Missing argument `Pool'"} + } + + +def verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, pool_name, + exp_vol_path, dp_inst): + + for err_scen in invalid_scen.keys(): + logger.info("Verifying errors for '%s'....", err_scen) + status = FAIL + del_res = [FAIL] + try: + res_settings = get_sto_vol_rasd_for_pool(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings == None: + raise Exception("Failed getting resource settings for '%s' vol"\ + " when executing '%s'" % (vol_name, err_scen)) + + if not "MISSING" in err_scen: + exp_err_no = CIM_ERR_FAILED + + if "NO_ADDRESS_FIELD" in err_scen: + del res_settings['Address'] + 
elif "INVALID_ADDRESS" in err_scen: + res_settings['Address'] = invalid_scen[err_scen]['val'] + + resource = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource, + Pool=dp_inst) + else: + exp_err_no = CIM_ERR_INVALID_PARAMETER + + if err_scen == "MISSING_RESOURCE": + del_res = rpcs_conn.DeleteResourceInPool(Pool=dp_inst) + elif err_scen == "MISSING_POOL": + resource = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) + + except CIMError, (err_no, err_desc): + if del_res[0] != PASS and invalid_scen[err_scen]['msg'] in err_desc\ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' for '%s'", + err_desc, err_scen) + status = PASS + else: + logger.error("Unexpected error msg, Expected '%s'-'%s', Got" + "'%s'-'%s'", exp_err_no, + invalid_scen[err_scen]['msg'], err_no, err_desc) + return FAIL + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + if del_res[0] == PASS or status != PASS: + logger.error("Should not have been able to delete Vol %s", vol_name) + return FAIL + + return status + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: + logger.info("Storage Volume deletion support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_spool_del_changes) + return SKIP + + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) + + pool_name = 'DIR_POOL_VOL' + status = FAIL + res = del_res = [FAIL] + try: + status = create_pool(server, virt, pool_name, pool_attr, + mode_type=DIR_POOL, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + sv_rasd = get_stovol_default_settings(virt, 
server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + if res[0] != PASS: + raise Exception("Failed to create the Vol %s" % vol_name) + + status = verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, + pool_name, exp_vol_path, dp_inst) + if status != PASS : + raise Exception("Verification Failed for DeleteResourceInPool()") + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + ret = cleanup_pool_vol(server, virt, pool_name, exp_vol_path) + if status != PASS or ret != PASS: + return FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Wed Sep 16 17:51:35 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Wed, 16 Sep 2009 17:51:35 -0000 Subject: [Libvirt-cim] [PATCH 2 of 5] [TEST] #2 Added new tc to verify the RPCS error values with dir type pool In-Reply-To: References: Message-ID: <681307f145a51198349c.1253123495@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253106316 25200 # Node ID 681307f145a51198349c2915c39bc6064da886b2 # Parent 741c93090d6f7cffadc7684563590282b073149d [TEST] #2 Added new tc to verify the RPCS error values with dir type pool. Patch 2: -------- 1) cleaned the pool at the end the verify_vol_err() 2) Created new dir pool to vefify the errors 3) Moved clean_pool_vol() to pool.py as this is refernced in RPCS/10*py RPCS/11*py and will be handy for future tests as well. 
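The RPCS error tests in this series all share a table-driven pattern: each invalid scenario maps to an expected CIM error code plus a message substring, the RPCS method is expected to raise, and the caught (err_no, err_desc) pair is matched against the table. A minimal standalone sketch of that pattern follows; the `CIMError` class stands in for pywbem's exception, the stub call and scenario names are illustrative, and the error-code values are the standard DMTF CIM status codes.

```python
# Table-driven verification of expected CIM errors, mirroring the
# pattern of verify_rpcs_err_val()/verify_vol_err() in these tests.
CIM_ERR_FAILED = 1             # DMTF CIM status codes
CIM_ERR_INVALID_PARAMETER = 4

class CIMError(Exception):
    """Stand-in for pywbem.CIMError: carries (err_no, err_desc)."""
    def __init__(self, err_no, err_desc):
        super().__init__(err_no, err_desc)
        self.err_no, self.err_desc = err_no, err_desc

SCENARIOS = {
    "INVALID_ADDRESS":  (CIM_ERR_FAILED, "no storage vol with matching path"),
    "MISSING_RESOURCE": (CIM_ERR_INVALID_PARAMETER,
                         "Missing argument `Resource'"),
}

def fake_delete_resource_in_pool(scenario):
    # Stand-in for the real DeleteResourceInPool() call; here it always
    # raises the error the scenario table expects.
    exp_no, exp_msg = SCENARIOS[scenario]
    raise CIMError(exp_no, exp_msg)

def verify_error(scenario):
    exp_no, exp_msg = SCENARIOS[scenario]
    try:
        fake_delete_resource_in_pool(scenario)
    except CIMError as e:
        # PASS only if both the error number and a message substring match
        return e.err_no == exp_no and exp_msg in e.err_desc
    return False  # no exception at all: the call wrongly succeeded

results = {name: verify_error(name) for name in SCENARIOS}
print(results)
```

The real tests additionally FAIL when the method returns success, which is what the `return False` branch models here.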
Patch 1: ------- This test case verifies the creation of the StorageVol using the CreateResourceInPool method of RPCS returns an error when invalid values are passed. The test case checks for the errors when: 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE 2) Trying to create 2 Vol in the same Path Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 741c93090d6f -r 681307f145a5 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_dir_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_dir_storagevolume_errs.py Wed Sep 16 06:05:16 2009 -0700 @@ -0,0 +1,165 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the creation of the StorageVol using the +# CreateResourceInPool method of RPCS returns an error when invalid values +# are passed. 
+# The test case checks for the errors when: +# 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE +# 2) Trying to create 2 Vol in the same Path +# +# -Date: 04-09-2009 + +import sys +import os +from random import randint +from CimTest.Globals import logger +from XenKvmLib import rpcs_service +from pywbem.cim_types import Uint64 +from pywbem import CIM_ERR_FAILED, CIMError +from XenKvmLib.xm_virt_util import virsh_version +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes +from XenKvmLib.const import do_main, platform_sup, get_provider_version +from XenKvmLib.pool import create_pool, RAW_VOL_TYPE, DIR_POOL, get_diskpool,\ + get_stovol_default_settings, cleanup_pool_vol + +dir_pool_attr = { "Path" : "/tmp" } +vol_name = "cimtest-vol.img" + +INVALID_FTYPE = RAW_VOL_TYPE + randint(20,100) +exp_err_no = CIM_ERR_FAILED +exp_err_values = { 'INVALID_FTYPE': { 'msg' : "Unable to generate XML "\ + "for new resource" }, + 'DUP_VOL_PATH' : { 'msg' : "Unable to create storage volume"} + } + +def get_inputs(virt, server, dp_cn, key, exp_vol_path): + sv_rasd = dp_inst = None + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, + key, exp_vol_path, + vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + if key == "INVALID_FTYPE": + sv_rasd['FormatType'] = Uint64(INVALID_FTYPE) + + sv_settings = inst_to_mof(sv_rasd) + dp_inst = get_diskpool(server, virt, dp_cn, key) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" 
% key) + + except Exception, details: + logger.error("In get_inputs() Exception details: %s", details) + return FAIL, None, None + + return PASS, sv_settings, dp_inst + +def verify_vol_err(virt, server, dp_cn, key, exp_vol_path): + status, sv_settings, dp_inst = get_inputs(virt, server, dp_cn, key, + exp_vol_path) + if status != PASS: + return status + + status = FAIL + res = ret = [FAIL] + try: + logger.info("Verifying err for '%s'...", key) + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + ret = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + # For duplicate vol path verfication we should have been able to + # create the first dir pool successfully before attempting the next + if key == 'DUP_VOL_PATH' and ret[0] == PASS: + # Trying to create the vol in the same vol path should return + # an error + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + except CIMError, (err_no, err_desc): + if res[0] != PASS and exp_err_values[key]['msg'] in err_desc \ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' with '%s'", + err_desc, key) + status = PASS + else: + logger.error("Failed to get the error message '%s'", + exp_err_values[key]['msg']) + + if (res[0] == PASS and key == 'DUP_VOL_PATH') or \ + (ret[0] == PASS and key == 'INVALID_FTYPE'): + logger.error("Should not have been able to create Vol %s", vol_name) + + return status + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_storagepool_changes: + logger.info("Storage Volume creation support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_storagepool_changes) + return SKIP + + dp_types = ['DUP_VOL_PATH', 
'INVALID_FTYPE'] + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (dir_pool_attr['Path'], vol_name) + + try: + # pool_name will contain either INVALID_FTYPE/DUP_VOL_PATH + # to be able access the err mesg + for pool_name in dp_types: + status = create_pool(server, virt, pool_name, dir_pool_attr, + mode_type=DIR_POOL, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + status = FAIL + status = verify_vol_err(virt, server, dp_cn, pool_name, exp_vol_path) + if status != PASS : + raise Exception("Failed to verify the Invlaid '%s'" % pool_name) + + ret = cleanup_pool_vol(server, virt, pool_name, exp_vol_path) + if ret != PASS: + raise Exception("Failed to clean the env") + + except Exception, details: + logger.error("In main() Exception details: %s", details) + status = FAIL + + + return status +if __name__ == "__main__": + sys.exit(main()) From snmishra at us.ibm.com Wed Sep 16 18:50:23 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Wed, 16 Sep 2009 11:50:23 -0700 Subject: [Libvirt-cim] [PATCH] Patch to fix property value for PreviousInstance Message-ID: # HG changeset patch # User Sharad Mishra # Date 1253126961 25200 # Node ID fc50acd35fe7f344e296441a88a00f42a7636ad6 # Parent 335b5e307df79e4e4cfdfc15d13424c759da8b53 Patch to fix property value for PreviousInstance. When resource(s) are modified, "PreviousInstance" is set with the original instance. This property was incorrectly being set. This patch moves the code block to get this property before the instance in modified. 
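The bug this patch fixes is purely an ordering one: the snapshot used for PreviousInstance must be captured before the transform function mutates the domain, otherwise the indication reports the already-modified state. A schematic Python sketch of that ordering rule (the provider itself is C; the dict-based store and names here are illustrative only):

```python
import copy

def modify_resource(store, dev_id, new_props):
    # Capture the snapshot BEFORE mutating -- the ordering this patch
    # restores.  Taking it after the update would record the new state.
    prev_inst = copy.deepcopy(store[dev_id])
    store[dev_id].update(new_props)
    return {"type": "MODIFIED",
            "PreviousInstance": prev_inst,
            "SourceInstance": store[dev_id]}

store = {"disk0": {"VirtualQuantity": 1}}
ind = modify_resource(store, "disk0", {"VirtualQuantity": 2})
print(ind["PreviousInstance"])   # {'VirtualQuantity': 1}
```

The deep copy matters: snapshotting a reference to the live instance would silently track the modification, which is the same class of bug in another guise.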
Signed-off-by: Sharad Mishra

diff -r 335b5e307df7 -r fc50acd35fe7 src/Virt_VirtualSystemManagementService.c
--- a/src/Virt_VirtualSystemManagementService.c	Tue Sep 15 09:19:18 2009 -0700
+++ b/src/Virt_VirtualSystemManagementService.c	Wed Sep 16 11:49:21 2009 -0700
@@ -2268,6 +2268,17 @@
                 goto out;
         }
 
+        if (func == &resource_add) {
+                indication = strdup(RASD_IND_CREATED);
+        }
+        else if (func == &resource_del) {
+                indication = strdup(RASD_IND_DELETED);
+        }
+        else {
+                indication = strdup(RASD_IND_MODIFIED);
+                prev_inst = get_previous_instance(dominfo, ref, type, devid);
+        }
+
         s = func(dominfo, rasd, type, devid, NAMESPACE(ref));
         if (s.rc != CMPI_RC_OK) {
                 CU_DEBUG("Resource transform function failed");
@@ -2279,17 +2290,6 @@
         CU_DEBUG("New XML:\n%s", xml);
         connect_and_create(xml, ref, &s);
 
-        if (func == &resource_add) {
-                indication = strdup(RASD_IND_CREATED);
-        }
-        else if (func == &resource_del) {
-                indication = strdup(RASD_IND_DELETED);
-        }
-        else {
-                indication = strdup(RASD_IND_MODIFIED);
-                prev_inst = get_previous_instance(dominfo, ref, type, devid);
-        }
-
         if (inst_list_add(&list, rasd) == 0) {
                 CU_DEBUG("Unable to add RASD instance to the list\n");
                 goto out;

From kaitlin at linux.vnet.ibm.com Wed Sep 16 22:37:10 2009
From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert)
Date: Wed, 16 Sep 2009 15:37:10 -0700
Subject: [Libvirt-cim] [PATCH] [TEST] Adding verification for DestroySystem() of the domain
In-Reply-To: <4AB07FC1.2010505@linux.vnet.ibm.com>
References: <53b05fc42fbc04ce45ee.1252585069@elm3a148.beaverton.ibm.com> <4AA96C43.5090005@linux.vnet.ibm.com> <4AAA372C.4090102@linux.vnet.ibm.com> <4AAE86F2.3000306@linux.vnet.ibm.com> <4AAE9C16.9060608@linux.vnet.ibm.com> <4AAF28E4.9010205@linux.vnet.ibm.com> <4AAFB2BD.1000904@linux.vnet.ibm.com> <4AB07FC1.2010505@linux.vnet.ibm.com>
Message-ID: <4AB16896.5030908@linux.vnet.ibm.com>

Deepti B Kalakeri wrote:
>
>
> Kaitlin Rupert wrote:
>>>>
>>>> I took a look at this test, and you're right - the reason it's
>>>> failing
is because DestroySystem() is also undefining the guest. So
>>>> the answer here is to modify the test so that it doesn't call
>>>> undefine(). Also, make sure the guest isn't in the inactive domain
>>>> list either.
>>>>
>>>> Not sure why you want to XFAIL the test, as DestroySystem() is doing
>>>> what is expected.
>>>>
>>> I thought DestroySystem() was equivalent to the "virsh destroy" command,
>>> which would just destroy a running domain that was defined and started.
>>>
>>>
>>
>> Nope, DestroySystem() does a "virsh destroy" and "virsh undefine". If
>> you look at the System Virtualization Profile (DSP1042) under the
>> heading "8.2.2 CIM_VirtualSystemManagementService.DestroySystem( )
>> Method (Conditional)", DestroySystem() is defined as:
>>
>> "The execution of the DestroySystem( ) method shall effect the
>> destruction of the referenced virtual system
>> and all related virtual system configurations, including snapshots."
> Oh! Yeah, I read this today. It's been a long time since I read DSP1042.
> Thanks for the clarifications.
> Updated patch on its way.
>
No problem =) Thanks for updating the patch!

-- 
Kaitlin Rupert
IBM Linux Technology Center
kaitlin at linux.vnet.ibm.com

From kaitlin at linux.vnet.ibm.com Wed Sep 16 23:11:14 2009
From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert)
Date: Wed, 16 Sep 2009 16:11:14 -0700
Subject: [Libvirt-cim] [PATCH] [TEST] Updating RPCS/10_create_storagevolume.py
In-Reply-To: <0a64f90aabb5dd63ac2a.1253092622@elm3a148.beaverton.ibm.com>
References: <0a64f90aabb5dd63ac2a.1253092622@elm3a148.beaverton.ibm.com>
Message-ID: <4AB17092.2090404@linux.vnet.ibm.com>

Deepti B. Kalakeri wrote:
> # HG changeset patch
> # User Deepti B. Kalakeri
> # Date 1253097826 14400
> # Node ID 0a64f90aabb5dd63ac2ab677981a939b2fcf5eeb
> # Parent 9e08670a3c3749738a65fec7f2faa4c2b68a7092
> [TEST] Updating RPCS/10_create_storagevolume.py
>
> Updating RPCS/10_create_storagevolume.py to create and use its own dir pool for StorageVol.
> If we try to use the default_pool_name then this will cause regression in the further tests which
> refer to the /tmp/cimtest-vol.img, as all the information regarding this will get cleared only
> when the pool under which it is created is destroyed.

Not sure I understand what you mean here. Why not make sure /tmp/cimtest-vol.img is removed before the test exits? Then you can create the image in the default pool. Just remove the volume when you're done.

I think you can still use the default pool if you'd like, but either approach is fine.

However, it looks like you create a new diskpool, but the storage volume is still being created in /tmp. When I print the disk RASD:

root/virt:KVM_DiskPool.InstanceID="DiskPool/DISK_POOL_DIR"

When I print the exp_vol_path:

/tmp/cimtest-vol.img

The test still passed because vol_list() returns the list of volumes in /tmp, even though the pool_name is DISK_POOL_DIR - so something is off here. Can you take a look to see what might be causing this?

-- 
Kaitlin Rupert
IBM Linux Technology Center
kaitlin at linux.vnet.ibm.com

From kaitlin at linux.vnet.ibm.com Thu Sep 17 00:50:46 2009
From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert)
Date: Wed, 16 Sep 2009 17:50:46 -0700
Subject: [Libvirt-cim] [PATCH 4 of 5] [TEST] #3 Add new tc to verify the DeleteResourceInPool()
In-Reply-To: 
References: 
Message-ID: <4AB187E6.8090605@linux.vnet.ibm.com>

> +import sys
> +from CimTest.Globals import logger
> +from CimTest.ReturnCodes import FAIL, PASS, SKIP
> +from XenKvmLib.xm_virt_util import virsh_version
> +from XenKvmLib.const import do_main, platform_sup, get_provider_version
> +from XenKvmLib import rpcs_service
> +from XenKvmLib.classes import get_typed_class, inst_to_mof
> +from XenKvmLib.pool import create_pool, DIR_POOL, \
> +                           libvirt_rasd_spool_del_changes, get_diskpool, \
> +                           get_stovol_default_settings, cleanup_pool_vol,\
> +                           get_sto_vol_rasd_for_pool
> +
> +pool_attr = { 'Path' : "/tmp" }
> +vol_name = "cimtest-vol.img"
> +
> + at
do_main(platform_sup) > +def main(): > + options = main.options > + server = options.ip > + virt = options.virt > + > + libvirt_ver = virsh_version(server, virt) > + cim_rev, changeset = get_provider_version(virt, server) > + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: > + logger.info("Storage Volume deletion support is available with Libvirt" > + "version >= 0.4.1 and Libvirt-CIM rev '%s'", > + libvirt_rasd_spool_del_changes) > + return SKIP > + > + dp_cn = "DiskPool" > + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) I think this test suffers from the same problem 10 does. The path for your volume is "/tmp/cimtest-vol.img" but you want to create it in DIR_POOL. So I think you have a mismatch here. In that case, you don't even need to create a new pool - just create the volume in the default pool and then clean it up when the test exits. How about holding off on submitting this test until 10 is fixed correctly? > + > + # For now the test case support only the deletion of dir type based > + # vol, we can extend dp_types to include netfs etc ..... 
> + dp_types = { "DISK_POOL_DIR" : DIR_POOL } > + > + for pool_name, pool_type in dp_types.iteritems(): > + status = FAIL > + res = del_res = [FAIL] > + try: > + status = create_pool(server, virt, pool_name, pool_attr, > + mode_type=pool_type, pool_type=dp_cn) > + -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Thu Sep 17 00:57:21 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Wed, 16 Sep 2009 17:57:21 -0700 Subject: [Libvirt-cim] [PATCH 5 of 5] [TEST] #3 Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: References: Message-ID: <4AB18971.2070802@linux.vnet.ibm.com> > + > +pool_attr = { 'Path' : "/tmp" } > +vol_name = "cimtest-vol.img" > + > + at do_main(platform_sup) > +def main(): > + options = main.options > + server = options.ip > + virt = options.virt > + > + libvirt_ver = virsh_version(server, virt) > + cim_rev, changeset = get_provider_version(virt, server) > + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: > + logger.info("Storage Volume deletion support is available with Libvirt" > + "version >= 0.4.1 and Libvirt-CIM rev '%s'", > + libvirt_rasd_spool_del_changes) > + return SKIP > + > + dp_cn = "DiskPool" > + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) > + > + pool_name = 'DIR_POOL_VOL' This test has the same problem as test 10. You're creating the volume in /tmp, and the default pool already uses this location. So you might as well create this volume in the default pool. No need to create a new pool just for this volume. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Thu Sep 17 08:56:41 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. 
Kalakeri) Date: Thu, 17 Sep 2009 08:56:41 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py Message-ID: <5158d71836cdd6127473.1253177801@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253182950 14400 # Node ID 5158d71836cdd6127473221731f203c3af221b21 # Parent 26357e57d207c3437a06a0730e99c942111901f3 [TEST] [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py 1) Adding vol_delete() to xm_virt_util.py to delete a volume of a pool. 2) Updating RPCS/10_create_storagevolume.py to include vol_delete. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 26357e57d207 -r 5158d71836cd suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py --- a/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py Wed Sep 16 09:03:39 2009 -0400 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py Thu Sep 17 06:22:30 2009 -0400 @@ -38,7 +38,7 @@ from XenKvmLib import rpcs_service from XenKvmLib.assoc import Associators from XenKvmLib.enumclass import GetInstance, EnumNames -from XenKvmLib.xm_virt_util import virsh_version, vol_list +from XenKvmLib.xm_virt_util import virsh_version, vol_list, vol_delete from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.common_util import destroy_diskpool from XenKvmLib.pool import create_pool, undefine_diskpool, DIR_POOL @@ -129,9 +129,14 @@ return PASS -def cleanup_pool_vol(server, virt, pool_name, clean_vol, exp_vol_path): +def cleanup_pool_vol(server, virt, pool_name, clean_pool, exp_vol_path): + status = FAIL try: - if clean_vol == True: + ret = vol_delete(server, virt, vol_name, pool_name) + if ret == None: + logger.error("Failed to delete the volume '%s'", vol_name) + + if clean_pool == True: status = destroy_diskpool(server, virt, pool_name) if status != PASS: raise 
Exception("Unable to destroy diskpool '%s'" % pool_name) @@ -140,16 +145,21 @@ if status != PASS: raise Exception("Unable to undefine diskpool '%s'" \ % pool_name) + + if os.path.exists(exp_vol_path): + cmd = "rm -rf %s" % exp_vol_path + res, out = utils.run_remote(server, cmd) + if res != 0: + raise Exception("'%s' was not removed, please remove it "\ + "manually", exp_vol_path) + except Exception, details: logger.error("Exception details: %s", details) + status = FAIL + + if ret == None or (clean_pool == True and status != PASS): return FAIL - - if os.path.exists(exp_vol_path): - cmd = "rm -rf %s" % exp_vol_path - ret, out = utils.run_remote(server, cmd) - if ret != 0: - logger.info("'%s' was not removed, please remove it manually", - exp_vol_path) + return PASS @do_main(platform_sup) @@ -211,18 +221,18 @@ found = verify_vol(server, virt, pool_name, exp_vol_path, found) stovol_status = verify_sto_vol_rasd(virt, server, dp_inst_id, exp_vol_path) + + ret = cleanup_pool_vol(server, virt, pool_name, + clean_pool, exp_vol_path) + if res[0] == PASS and found == 1 and \ + ret == PASS and stovol_status == PASS: + status = PASS + else: + return FAIL except Exception, details: logger.error("Exception details: %s", details) status = FAIL - - ret = cleanup_pool_vol(server, virt, pool_name, - clean_pool, exp_vol_path) - if res[0] == PASS and found == 1 and \ - ret == PASS and stovol_status == PASS: - status = PASS - else: - return FAIL return status if __name__ == "__main__": diff -r 26357e57d207 -r 5158d71836cd suites/libvirt-cim/lib/XenKvmLib/xm_virt_util.py --- a/suites/libvirt-cim/lib/XenKvmLib/xm_virt_util.py Wed Sep 16 09:03:39 2009 -0400 +++ b/suites/libvirt-cim/lib/XenKvmLib/xm_virt_util.py Thu Sep 17 06:22:30 2009 -0400 @@ -238,9 +238,9 @@ return names def vol_list(server, virt="KVM", pool_name=None): - """ Function to list the volumes part of a pool""" + """ Function to list the volumes of a pool""" - cmd = " virsh -c %s vol-list %s | sed -e '1,2 d' -e '$ d'" \ + 
cmd = "virsh -c %s vol-list %s | sed -e '1,2 d' -e '$ d'" \ % (virt2uri(virt), pool_name) ret, out = utils.run_remote(server, cmd) if ret != 0: @@ -248,6 +248,18 @@ return out +def vol_delete(server, virt="KVM", vol_name=None, pool_name=None): + """ Function to delete the volume of a pool""" + + cmd = "virsh -c %s vol-delete %s --pool %s"\ + % (virt2uri(virt), vol_name, pool_name) + ret, out = utils.run_remote(server, cmd) + if ret != 0: + return None + + return out + + def virsh_vcpuinfo(server, dom, virt="Xen"): cmd = "virsh -c %s vcpuinfo %s | grep VCPU | wc -l" % (virt2uri(virt), dom) From deeptik at linux.vnet.ibm.com Thu Sep 17 09:53:44 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Thu, 17 Sep 2009 15:23:44 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py In-Reply-To: <5158d71836cdd6127473.1253177801@elm3a148.beaverton.ibm.com> References: <5158d71836cdd6127473.1253177801@elm3a148.beaverton.ibm.com> Message-ID: <4AB20728.7020201@linux.vnet.ibm.com> Please ignore this patch... Updated one on its way... Deepti B. Kalakeri wrote: > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1253182950 14400 > # Node ID 5158d71836cdd6127473221731f203c3af221b21 > # Parent 26357e57d207c3437a06a0730e99c942111901f3 > [TEST] [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py > > 1) Adding vol_delete() to xm_virt_util.py to delete a volume of a pool. > 2) Updating RPCS/10_create_storagevolume.py to include vol_delete. > > Tested with KVM and current sources on SLES11. > Signed-off-by: Deepti B. 
Kalakeri > [...] > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim > -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Thu Sep 17 09:45:50 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Thu, 17 Sep 2009 09:45:50 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py Message-ID: <72616c6b52fe29ec35ac.1253180750@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253185946 14400 # Node ID 72616c6b52fe29ec35acd0f3c262b6c4247135ef # Parent 26357e57d207c3437a06a0730e99c942111901f3 [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py 1) Adding vol_delete() to xm_virt_util.py to delete a volume of a pool. 2) Updating RPCS/10_create_storagevolume.py to include vol_delete. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B.
Kalakeri diff -r 26357e57d207 -r 72616c6b52fe suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py --- a/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py Wed Sep 16 09:03:39 2009 -0400 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py Thu Sep 17 07:12:26 2009 -0400 @@ -38,7 +38,7 @@ from XenKvmLib import rpcs_service from XenKvmLib.assoc import Associators from XenKvmLib.enumclass import GetInstance, EnumNames -from XenKvmLib.xm_virt_util import virsh_version, vol_list +from XenKvmLib.xm_virt_util import virsh_version, vol_list, vol_delete from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.common_util import destroy_diskpool from XenKvmLib.pool import create_pool, undefine_diskpool, DIR_POOL @@ -129,9 +129,22 @@ return PASS -def cleanup_pool_vol(server, virt, pool_name, clean_vol, exp_vol_path): +def cleanup_pool_vol(server, virt, pool_name, clean_pool, vol_path): + status = res = FAIL + ret = None try: - if clean_vol == True: + ret = vol_delete(server, virt, vol_name, pool_name) + if ret == None: + logger.error("Failed to delete the volume '%s'", vol_name) + + if os.path.exists(vol_path): + cmd = "rm -rf %s" % vol_path + res, out = utils.run_remote(server, cmd) + if res != 0: + logger.error("'%s' was not removed, please remove it " + "manually", vol_path) + + if clean_pool == True: status = destroy_diskpool(server, virt, pool_name) if status != PASS: raise Exception("Unable to destroy diskpool '%s'" % pool_name) @@ -140,16 +153,18 @@ if status != PASS: raise Exception("Unable to undefine diskpool '%s'" \ % pool_name) + + except Exception, details: logger.error("Exception details: %s", details) + status = FAIL + + if (ret == None and res != PASS) or (clean_pool == True and status != PASS): + logger.error("Failed to clean the env.....") return FAIL - - if os.path.exists(exp_vol_path): - cmd = "rm -rf %s" % exp_vol_path - ret, out 
= utils.run_remote(server, cmd) - if ret != 0: - logger.info("'%s' was not removed, please remove it manually", - exp_vol_path) + else: + logger.info("DEBUG PAssed ") + return PASS @do_main(platform_sup) @@ -211,18 +226,19 @@ found = verify_vol(server, virt, pool_name, exp_vol_path, found) stovol_status = verify_sto_vol_rasd(virt, server, dp_inst_id, exp_vol_path) + + ret = cleanup_pool_vol(server, virt, pool_name, + clean_pool, exp_vol_path) + if res[0] == PASS and found == 1 and \ + ret == PASS and stovol_status == PASS: + status = PASS + else: + return FAIL except Exception, details: logger.error("Exception details: %s", details) status = FAIL - ret = cleanup_pool_vol(server, virt, pool_name, - clean_pool, exp_vol_path) - if res[0] == PASS and found == 1 and \ - ret == PASS and stovol_status == PASS: - status = PASS - else: - return FAIL return status if __name__ == "__main__": diff -r 26357e57d207 -r 72616c6b52fe suites/libvirt-cim/lib/XenKvmLib/xm_virt_util.py --- a/suites/libvirt-cim/lib/XenKvmLib/xm_virt_util.py Wed Sep 16 09:03:39 2009 -0400 +++ b/suites/libvirt-cim/lib/XenKvmLib/xm_virt_util.py Thu Sep 17 07:12:26 2009 -0400 @@ -238,9 +238,9 @@ return names def vol_list(server, virt="KVM", pool_name=None): - """ Function to list the volumes part of a pool""" + """ Function to list the volumes of a pool""" - cmd = " virsh -c %s vol-list %s | sed -e '1,2 d' -e '$ d'" \ + cmd = "virsh -c %s vol-list %s | sed -e '1,2 d' -e '$ d'" \ % (virt2uri(virt), pool_name) ret, out = utils.run_remote(server, cmd) if ret != 0: @@ -248,6 +248,18 @@ return out +def vol_delete(server, virt="KVM", vol_name=None, pool_name=None): + """ Function to delete the volume of a pool""" + + cmd = "virsh -c %s vol-delete %s --pool %s"\ + % (virt2uri(virt), vol_name, pool_name) + ret, out = utils.run_remote(server, cmd) + if ret != 0: + return None + + return out + + def virsh_vcpuinfo(server, dom, virt="Xen"): cmd = "virsh -c %s vcpuinfo %s | grep VCPU | wc -l" % (virt2uri(virt), dom) 
From deeptik at linux.vnet.ibm.com Thu Sep 17 09:58:57 2009 From: deeptik at linux.vnet.ibm.com (Deepti B Kalakeri) Date: Thu, 17 Sep 2009 15:28:57 +0530 Subject: [Libvirt-cim] [PATCH] [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py In-Reply-To: <72616c6b52fe29ec35ac.1253180750@elm3a148.beaverton.ibm.com> References: <72616c6b52fe29ec35ac.1253180750@elm3a148.beaverton.ibm.com> Message-ID: <4AB20861.8000900@linux.vnet.ibm.com> Please ignore this patch ... Update patch on its way.. Deepti B. Kalakeri wrote: > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1253185946 14400 > # Node ID 72616c6b52fe29ec35acd0f3c262b6c4247135ef > # Parent 26357e57d207c3437a06a0730e99c942111901f3 > [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py > > 1) Adding vol_delete() to xm_virt_util.py to delete a volume of a pool. > 2) Updating RPCS/10_create_storagevolume.py to include vol_delete. > > Tested with KVM and current sources on SLES11. > Signed-off-by: Deepti B. 
Kalakeri > [...] > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim > -- Thanks and Regards, Deepti B. Kalakeri IBM Linux Technology Center deeptik at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Thu Sep 17 09:57:27 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Thu, 17 Sep 2009 09:57:27 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py Message-ID: <0387cadda7d381253e26.1253181447@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253186627 14400 # Node ID 0387cadda7d381253e2645a0bd9ff8bfd9990fa6 # Parent 26357e57d207c3437a06a0730e99c942111901f3 [TEST] Adding vol_delete and modifying RPCS/10_create_storagevolume.py 1) Adding vol_delete() to xm_virt_util.py to delete a volume of a pool. 2) Updating RPCS/10_create_storagevolume.py to include vol_delete. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B.
Kalakeri diff -r 26357e57d207 -r 0387cadda7d3 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py --- a/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py Wed Sep 16 09:03:39 2009 -0400 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/10_create_storagevolume.py Thu Sep 17 07:23:47 2009 -0400 @@ -38,7 +38,7 @@ from XenKvmLib import rpcs_service from XenKvmLib.assoc import Associators from XenKvmLib.enumclass import GetInstance, EnumNames -from XenKvmLib.xm_virt_util import virsh_version, vol_list +from XenKvmLib.xm_virt_util import virsh_version, vol_list, vol_delete from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.common_util import destroy_diskpool from XenKvmLib.pool import create_pool, undefine_diskpool, DIR_POOL @@ -129,9 +129,22 @@ return PASS -def cleanup_pool_vol(server, virt, pool_name, clean_vol, exp_vol_path): +def cleanup_pool_vol(server, virt, pool_name, clean_pool, vol_path): + status = res = FAIL + ret = None try: - if clean_vol == True: + ret = vol_delete(server, virt, vol_name, pool_name) + if ret == None: + logger.error("Failed to delete the volume '%s'", vol_name) + + if os.path.exists(vol_path): + cmd = "rm -rf %s" % vol_path + res, out = utils.run_remote(server, cmd) + if res != 0: + logger.error("'%s' was not removed, please remove it " + "manually", vol_path) + + if clean_pool == True: status = destroy_diskpool(server, virt, pool_name) if status != PASS: raise Exception("Unable to destroy diskpool '%s'" % pool_name) @@ -140,16 +153,16 @@ if status != PASS: raise Exception("Unable to undefine diskpool '%s'" \ % pool_name) + + except Exception, details: logger.error("Exception details: %s", details) + status = FAIL + + if (ret == None and res != PASS) or (clean_pool == True and status != PASS): + logger.error("Failed to clean the env.....") return FAIL - - if os.path.exists(exp_vol_path): - cmd = "rm -rf %s" % exp_vol_path - ret, out 
= utils.run_remote(server, cmd) - if ret != 0: - logger.info("'%s' was not removed, please remove it manually", - exp_vol_path) + return PASS @do_main(platform_sup) @@ -211,18 +224,19 @@ found = verify_vol(server, virt, pool_name, exp_vol_path, found) stovol_status = verify_sto_vol_rasd(virt, server, dp_inst_id, exp_vol_path) + + ret = cleanup_pool_vol(server, virt, pool_name, + clean_pool, exp_vol_path) + if res[0] == PASS and found == 1 and \ + ret == PASS and stovol_status == PASS: + status = PASS + else: + return FAIL except Exception, details: logger.error("Exception details: %s", details) status = FAIL - ret = cleanup_pool_vol(server, virt, pool_name, - clean_pool, exp_vol_path) - if res[0] == PASS and found == 1 and \ - ret == PASS and stovol_status == PASS: - status = PASS - else: - return FAIL return status if __name__ == "__main__": diff -r 26357e57d207 -r 0387cadda7d3 suites/libvirt-cim/lib/XenKvmLib/xm_virt_util.py --- a/suites/libvirt-cim/lib/XenKvmLib/xm_virt_util.py Wed Sep 16 09:03:39 2009 -0400 +++ b/suites/libvirt-cim/lib/XenKvmLib/xm_virt_util.py Thu Sep 17 07:23:47 2009 -0400 @@ -238,9 +238,9 @@ return names def vol_list(server, virt="KVM", pool_name=None): - """ Function to list the volumes part of a pool""" + """ Function to list the volumes of a pool""" - cmd = " virsh -c %s vol-list %s | sed -e '1,2 d' -e '$ d'" \ + cmd = "virsh -c %s vol-list %s | sed -e '1,2 d' -e '$ d'" \ % (virt2uri(virt), pool_name) ret, out = utils.run_remote(server, cmd) if ret != 0: @@ -248,6 +248,18 @@ return out +def vol_delete(server, virt="KVM", vol_name=None, pool_name=None): + """ Function to delete the volume of a pool""" + + cmd = "virsh -c %s vol-delete %s --pool %s"\ + % (virt2uri(virt), vol_name, pool_name) + ret, out = utils.run_remote(server, cmd) + if ret != 0: + return None + + return out + + def virsh_vcpuinfo(server, dom, virt="Xen"): cmd = "virsh -c %s vcpuinfo %s | grep VCPU | wc -l" % (virt2uri(virt), dom) From deeptik at linux.vnet.ibm.com Thu 
Sep 17 18:53:37 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Thu, 17 Sep 2009 18:53:37 -0000 Subject: [Libvirt-cim] [PATCH 3 of 5] [TEST] #3 Added new tc to verify the RPCS error values for netfs pool In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253213411 25200 # Node ID a3f6a0260872024f4da24ffeedb9c385baa931c2 # Parent 70bbb0c0ff907c0b4643a35c9e878c08d505944a [TEST] #3 Added new tc to verify the RPCS error values for netfs pool. PAtch 3: -------- Used the updated cleanup_pool_vol() from pool.py Patch 2: -------- 1) Used the cleanup_pool_vol() from pool.py Patch 1: -------- This test case verifies the creation of the StorageVol using the CreateResourceInPool method of RPCS returns an error when invalid values are passed. The test case checks for the errors when, Trying to create a Vol in a netfs storage pool. Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 70bbb0c0ff90 -r a3f6a0260872 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py Thu Sep 17 11:50:11 2009 -0700 @@ -0,0 +1,170 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. 
+# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the creation of the StorageVol using the +# CreateResourceInPool method of RPCS returns an error when invalid values +# are passed. +# The test case checks for the errors when, +# Trying to create a Vol in a netfs storage pool +# +# -Date: 04-09-2009 + +import sys +from pywbem import CIM_ERR_FAILED, CIMError +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.const import do_main, platform_sup, get_provider_version +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes +from XenKvmLib import rpcs_service +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.common_util import nfs_netfs_setup, netfs_cleanup +from XenKvmLib.pool import create_pool, NETFS_POOL, get_diskpool, \ + get_stovol_default_settings, cleanup_pool_vol + +vol_name = "cimtest-vol.img" +vol_path = "/tmp/" + +exp_err_no = CIM_ERR_FAILED +exp_err_values = { 'NETFS_POOL' : { 'msg' : "This function does not "\ + "support this resource type"} + } + +def get_pool_attr(server, pool_type): + pool_attr = { } + status , host_addr, src_mnt_dir, dir_mnt_dir = nfs_netfs_setup(server) + if status != PASS: + logger.error("Failed to get pool_attr for NETFS diskpool type") + return status, pool_attr + + pool_attr['Host'] = host_addr + pool_attr['SourceDirectory'] = src_mnt_dir + pool_attr['Path'] = dir_mnt_dir + + return PASS, pool_attr + +def get_inputs(virt, server, dp_cn, pool_name, exp_vol_path): + sv_rasd = dp_inst = None + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + sv_settings = 
inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL, sv_rasd, dp_inst + + return PASS, sv_settings, dp_inst + +def verify_vol_err(server, virt, dp_cn, pool_name, exp_vol_path): + + status, sv_settings, dp_inst = get_inputs(virt, server, dp_cn, + pool_name, exp_vol_path) + if status != PASS: + return status + + status = FAIL + res = [FAIL] + try: + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + except CIMError, (err_no, err_desc): + if res[0] != PASS and exp_err_values[pool_name]['msg'] in err_desc \ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' with '%s'", + err_desc, pool_name) + return PASS + else: + logger.error("Failed to get the error message '%s'", + exp_err_values[pool_name]['msg']) + if res[0] == PASS: + logger.error("Should not have been able to create the StorageVol '%s'", + vol_name) + + return FAIL + + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_storagepool_changes: + logger.info("Storage Volume creation support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_storagepool_changes) + return SKIP + + pool_name = "NETFS_POOL" + pool_type = NETFS_POOL + exp_vol_path = "%s/%s" % (vol_path, vol_name) + dp_cn = "DiskPool" + clean_pool = False + + try: + status = FAIL + status, pool_attr = get_pool_attr(server, pool_type) + if status != PASS: + return status + + # Creating NETFS pool to verify RPCS error + 
status = create_pool(server, virt, pool_name, pool_attr, + mode_type=pool_type, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + clean_pool = True + status = verify_vol_err(server, virt, dp_cn, pool_name, exp_vol_path) + if status != PASS : + raise Exception("Failed to verify the Invlaid '%s' " % pool_name) + + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + ret = cleanup_pool_vol(server, virt, pool_name, vol_name, exp_vol_path, + clean_pool) + netfs_cleanup(server, pool_attr) + if status != PASS or ret != PASS : + return FAIL + + return PASS +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Thu Sep 17 18:53:34 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Thu, 17 Sep 2009 18:53:34 -0000 Subject: [Libvirt-cim] [PATCH 0 of 5] #4 Added tc to verify StorageVol deletion and creation/deletion errors Message-ID: From deeptik at linux.vnet.ibm.com Thu Sep 17 18:53:38 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Thu, 17 Sep 2009 18:53:38 -0000 Subject: [Libvirt-cim] [PATCH 4 of 5] [TEST] #3 Add new tc to verify the DeleteResourceInPool() In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253213416 25200 # Node ID a223739ebb2f9e8b4857a9f0a0d6a5e9bf0904eb # Parent a3f6a0260872024f4da24ffeedb9c385baa931c2 [TEST] #3 Add new tc to verify the DeleteResourceInPool(). PAtch 3: -------- Used the updated cleanup_pool_vol() from pool.py Patch2: ------ 1) Added the missing test case. 2) Included get_sto_vol_rasd_for_pool() from pool.py Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. 
Kalakeri diff -r a3f6a0260872 -r a223739ebb2f suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/13_delete_storagevolume.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/13_delete_storagevolume.py Thu Sep 17 11:50:16 2009 -0700 @@ -0,0 +1,138 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the deletion of the StorageVol using the +# DeleteResourceInPool method of RPCS. 
+# +# -Date: 08-09-2009 + +import sys +from CimTest.Globals import logger +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.const import do_main, platform_sup, get_provider_version, \ + default_pool_name +from XenKvmLib import rpcs_service +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.pool import create_pool, DIR_POOL, \ + libvirt_rasd_spool_del_changes, get_diskpool, \ + get_stovol_default_settings, cleanup_pool_vol,\ + get_sto_vol_rasd_for_pool + +pool_attr = { 'Path' : "/tmp" } +vol_name = "cimtest-vol.img" + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: + logger.info("Storage Volume deletion support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_spool_del_changes) + return SKIP + + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) + + # For now the test case support only the deletion of dir type based + # vol, we can extend dp_types to include netfs etc ..... 
+ dp_types = { "DISK_POOL_DIR" : DIR_POOL } + + for pool_name, pool_type in dp_types.iteritems(): + status = FAIL + res = del_res = [FAIL] + clean_pool = True + clean_vol = False + try: + if pool_type == DIR_POOL: + pool_name = default_pool_name + clean_pool = False + else: + status = create_pool(server, virt, pool_name, pool_attr, + mode_type=pool_type, pool_type=dp_cn) + + if status != PASS: + logger.error("Failed to create pool '%s'", pool_name) + return status + + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + if res[0] != PASS: + raise Exception("Failed to create the Vol %s" % vol_name) + + res_settings = get_sto_vol_rasd_for_pool(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings == None: + raise Exception("Failed to get the resource settings for '%s'" \ + " Vol" % vol_name) + + resource_setting = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource_setting, + Pool=dp_inst) + + res_settings = get_sto_vol_rasd_for_pool(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings != None: + clean_vol = True + raise Exception("'%s' vol of '%s' pool was not deleted" \ + % (vol_name, pool_name)) + else: + logger.info("Vol '%s' of '%s' pool deleted successfully by " + "DeleteResourceInPool()", vol_name, pool_name) + + ret = cleanup_pool_vol(server, virt, pool_name, vol_name, + exp_vol_path, clean_pool, clean_vol) + if del_res[0] == PASS and ret == PASS : + status = PASS + else: + return 
FAIL + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + cleanup_pool_vol(server, virt, pool_name, vol_name, + exp_vol_path, clean_pool, clean_vol) + + + return status +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Thu Sep 17 18:53:35 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Thu, 17 Sep 2009 18:53:35 -0000 Subject: [Libvirt-cim] [PATCH 1 of 5] [TEST] #5 Modified pool.py to support RPCS CreateResourceInPool In-Reply-To: References: Message-ID: <3e03b0796a05ce2890a4.1253213615@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253188171 25200 # Node ID 3e03b0796a05ce2890a4f47763a91af1675ab59b # Parent 0387cadda7d381253e2645a0bd9ff8bfd9990fa6 [TEST] #5 Modified pool.py to support RPCS CreateResourceInPool. Patch 5: -------- 1) Modified cleanup_pool_vol() in pool.py Patch 4: -------- 1) Moved cleanup_pool_vol() to pool.py as it referenced by couple of tests PS: will update RPCS/10*py to reference cleanup_pool_vol() from pool.py once these patches get accepted. Patch 3: -------- 1) Moved get_sto_vol_rasd() to pool.py as get_sto_vol_rasd_for_pool(), since it is used in RPCS/13*py and RPCS/14*py Patch 2: ------- 1) Added check in get_stovol_rasd_from_sdc() 2) Added get_diskpool() to pool.py as it is used in 10*py/11*py, RPCS/12*py and will be useful for further tests as well 3) Added rev for storagevol deletion NOTE: Please base this patch on the patch "Modifying common_util.py for netnfs" Patch 1: -------- Added the following two functions which are used in RPCS/10*py and RPCS/11*py 1) get_stovol_rasd_from_sdc() to get the stovol rasd from sdc 2) get_stovol_default_settings() to get default sto vol settings Also, modified common_util.py to remove the backed up exportfs file Added RAW_VOL_TYPE which is the FormatType supported by RPCS currently Once this patch gets accepted we can modify RPCS/10*py to refer to these functions. 
Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 0387cadda7d3 -r 3e03b0796a05 suites/libvirt-cim/lib/XenKvmLib/common_util.py --- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py Thu Sep 17 07:23:47 2009 -0400 +++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py Thu Sep 17 04:49:31 2009 -0700 @@ -582,6 +582,8 @@ try: # Backup the original exports file. if (os.path.exists(exports_file)): + if os.path.exists(back_exports_file): + os.remove(back_exports_file) move_file(exports_file, back_exports_file) fd = open(exports_file, "w") line = "\n %s %s(rw)" %(src_dir_for_mnt, server) diff -r 0387cadda7d3 -r 3e03b0796a05 suites/libvirt-cim/lib/XenKvmLib/pool.py --- a/suites/libvirt-cim/lib/XenKvmLib/pool.py Thu Sep 17 07:23:47 2009 -0400 +++ b/suites/libvirt-cim/lib/XenKvmLib/pool.py Thu Sep 17 04:49:31 2009 -0700 @@ -21,24 +21,29 @@ # import sys +import os +from VirtLib import utils from CimTest.Globals import logger, CIM_NS from CimTest.ReturnCodes import PASS, FAIL, SKIP from XenKvmLib.classes import get_typed_class, inst_to_mof from XenKvmLib.const import get_provider_version, default_pool_name -from XenKvmLib.enumclass import EnumInstances, GetInstance +from XenKvmLib.enumclass import EnumInstances, GetInstance, EnumNames from XenKvmLib.assoc import Associators from VirtLib.utils import run_remote -from XenKvmLib.xm_virt_util import virt2uri, net_list +from XenKvmLib.xm_virt_util import virt2uri, net_list, vol_delete from XenKvmLib import rpcs_service import pywbem from CimTest.CimExt import CIMClassMOF from XenKvmLib.vxml import NetXML, PoolXML from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.vsms import RASD_TYPE_STOREVOL +from XenKvmLib.common_util import destroy_diskpool cim_errno = pywbem.CIM_ERR_NOT_SUPPORTED cim_mname = "CreateChildResourcePool" input_graphics_pool_rev = 757 libvirt_cim_child_pool_rev = 837 +libvirt_rasd_spool_del_changes = 971 DIR_POOL = 1L FS_POOL = 2L @@ -48,6 +53,9 @@ LOGICAL_POOL = 6L 
SCSI_POOL = 7L +#Volume types +RAW_VOL_TYPE = 1 + def pool_cn_to_rasd_cn(pool_cn, virt): if pool_cn.find('ProcessorPool') >= 0: return get_typed_class(virt, "ProcResourceAllocationSettingData") @@ -297,3 +305,116 @@ status = PASS return status + +def get_stovol_rasd_from_sdc(virt, server, dp_inst_id): + rasd = None + ac_cn = get_typed_class(virt, "AllocationCapabilities") + an_cn = get_typed_class(virt, "SettingsDefineCapabilities") + key_list = {"InstanceID" : dp_inst_id} + + try: + inst = GetInstance(server, ac_cn, key_list) + if inst == None: + raise Exception("Failed to GetInstance for %s" % dp_inst_id) + + rasd = Associators(server, an_cn, ac_cn, InstanceID=inst.InstanceID) + if len(rasd) < 4: + raise Exception("Failed to get default StorageVolRASD , "\ + "Expected atleast 4, Got '%s'" % len(rasd)) + + except Exception, detail: + logger.error("Exception: %s", detail) + return FAIL, None + + return PASS, rasd + +def get_stovol_default_settings(virt, server, dp_cn, + pool_name, path, vol_name): + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, dp_rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol RASD's") + return None + + for dpool_rasd in dp_rasds: + if dpool_rasd['ResourceType'] == RASD_TYPE_STOREVOL and \ + 'Default' in dpool_rasd['InstanceID']: + + dpool_rasd['PoolID'] = dp_inst_id + dpool_rasd['Path'] = path + dpool_rasd['VolumeName'] = vol_name + break + + if not pool_name in dpool_rasd['PoolID']: + return None + + return dpool_rasd + +def get_diskpool(server, virt, dp_cn, pool_name): + dp_inst = None + dpool_cn = get_typed_class(virt, dp_cn) + pools = EnumNames(server, dpool_cn) + + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + for pool in pools: + if pool['InstanceID'] == dp_inst_id: + dp_inst = pool + break + + return dp_inst + +def get_sto_vol_rasd_for_pool(virt, server, dp_cn, pool_name, exp_vol_path): + dv_rasds = None + dp_inst_id = "%s/%s" % (dp_cn, pool_name) + status, 
rasds = get_stovol_rasd_from_sdc(virt, server, dp_inst_id) + if status != PASS: + logger.error("Failed to get the StorageVol for '%s' vol", exp_vol_path) + return FAIL + + for item in rasds: + if item['Address'] == exp_vol_path and item['PoolID'] == dp_inst_id: + dv_rasds = item + break + + return dv_rasds + +def cleanup_pool_vol(server, virt, pool_name, vol_name, + vol_path, clean_pool=False, clean_vol=False): + status = res = FAIL + ret = None + try: + + if clean_vol == True: + ret = vol_delete(server, virt, vol_name, pool_name) + if ret == None: + logger.error("Failed to delete the volume '%s'", vol_name) + + if os.path.exists(vol_path): + cmd = "rm -rf %s" % vol_path + res, out = utils.run_remote(server, cmd) + if res != 0: + logger.error("'%s' was not removed, please remove it " + "manually", vol_path) + + if clean_pool == True: + status = destroy_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to destroy diskpool '%s'" % pool_name) + else: + status = undefine_diskpool(server, virt, pool_name) + if status != PASS: + raise Exception("Unable to undefine diskpool '%s'" \ + % pool_name) + + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + if (clean_vol == True and ret == None) or \ + (clean_pool == True and status != PASS): + logger.error("Failed to clean the env.....") + return FAIL + + return PASS From deeptik at linux.vnet.ibm.com Thu Sep 17 18:53:36 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Thu, 17 Sep 2009 18:53:36 -0000 Subject: [Libvirt-cim] [PATCH 2 of 5] [TEST] #3 Added new tc to verify the RPCS error values with dir type pool In-Reply-To: References: Message-ID: <70bbb0c0ff907c0b4643.1253213616@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. 
Kalakeri # Date 1253213395 25200 # Node ID 70bbb0c0ff907c0b4643a35c9e878c08d505944a # Parent 3e03b0796a05ce2890a4f47763a91af1675ab59b [TEST] #3 Added new tc to verify the RPCS error values with dir type pool. Patch 3: -------- 1) Revert back to using default pool for dir type 2) Used modified cleanup_pool_vol() Patch 2: -------- 1) cleaned the pool at the end the verify_vol_err() 2) Created new dir pool to vefify the errors 3) Moved clean_pool_vol() to pool.py as this is refernced in RPCS/10*py RPCS/11*py and will be handy for future tests as well. Patch 1: ------- This test case verifies the creation of the StorageVol using the CreateResourceInPool method of RPCS returns an error when invalid values are passed. The test case checks for the errors when: 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE 2) Trying to create 2 Vol in the same Path Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. Kalakeri diff -r 3e03b0796a05 -r 70bbb0c0ff90 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_dir_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/11_create_dir_storagevolume_errs.py Thu Sep 17 11:49:55 2009 -0700 @@ -0,0 +1,164 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. 
+# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the creation of the StorageVol using the +# CreateResourceInPool method of RPCS returns an error when invalid values +# are passed. +# The test case checks for the errors when: +# 1) FormatType field in the StoragePoolRASD set to value other than RAW_TYPE +# 2) Trying to create 2 Vol in the same Path +# +# -Date: 04-09-2009 + +import sys +from random import randint +from CimTest.Globals import logger +from XenKvmLib import rpcs_service +from pywbem.cim_types import Uint64 +from pywbem import CIM_ERR_FAILED, CIMError +from XenKvmLib.xm_virt_util import virsh_version +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.rasd import libvirt_rasd_storagepool_changes +from XenKvmLib.const import do_main, platform_sup, get_provider_version, \ + default_pool_name +from XenKvmLib.pool import RAW_VOL_TYPE, get_diskpool,\ + get_stovol_default_settings, cleanup_pool_vol + +dir_pool_attr = { "Path" : "/tmp" } +vol_name = "cimtest-vol.img" + +INVALID_FTYPE = RAW_VOL_TYPE + randint(20,100) +exp_err_no = CIM_ERR_FAILED +exp_err_values = { 'INVALID_FTYPE': { 'msg' : "Unable to generate XML "\ + "for new resource" }, + 'DUP_VOL_PATH' : { 'msg' : "Unable to create storage volume"} + } + +def get_inputs(virt, server, dp_cn, key, exp_vol_path, pool_name): + sv_rasd = dp_inst = None + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, + pool_name, exp_vol_path, + vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + if key == "INVALID_FTYPE": + sv_rasd['FormatType'] = Uint64(INVALID_FTYPE) + + sv_settings = inst_to_mof(sv_rasd) + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise 
Exception("DiskPool instance for '%s' not found!" % pool_name) + + except Exception, details: + logger.error("In get_inputs() Exception details: %s", details) + return FAIL, None, None + + return PASS, sv_settings, dp_inst + +def verify_vol_err(virt, server, dp_cn, key, exp_vol_path, pool_name): + status, sv_settings, dp_inst = get_inputs(virt, server, dp_cn, key, + exp_vol_path, pool_name) + if status != PASS: + return status + + status = FAIL + res = ret = [FAIL] + try: + logger.info("Verifying err for '%s'...", key) + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." + rpcs)(server) + ret = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + # For duplicate vol path verfication we should have been able to + # create the first dir pool successfully before attempting the next + if key == 'DUP_VOL_PATH' and ret[0] == PASS: + # Trying to create the vol in the same vol path should return + # an error + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + + except CIMError, (err_no, err_desc): + if res[0] != PASS and exp_err_values[key]['msg'] in err_desc \ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' with '%s'", + err_desc, key) + status = PASS + else: + logger.error("Failed to get the error message '%s'", + exp_err_values[key]['msg']) + + if (res[0] == PASS and key == 'DUP_VOL_PATH') or \ + (ret[0] == PASS and key == 'INVALID_FTYPE'): + logger.error("Should not have been able to create Vol %s", vol_name) + + return status + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_storagepool_changes: + logger.info("Storage Volume creation support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + 
libvirt_rasd_storagepool_changes) + return SKIP + + dp_types = ['DUP_VOL_PATH', 'INVALID_FTYPE'] + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (dir_pool_attr['Path'], vol_name) + pool_name = default_pool_name + + try: + # err_key will contain either INVALID_FTYPE/DUP_VOL_PATH + # to be able access the err mesg + for err_key in dp_types: + clean_vol = False + status = FAIL + status = verify_vol_err(virt, server, dp_cn, + err_key, exp_vol_path, pool_name) + if status != PASS : + raise Exception("Failed to verify the Invlaid '%s'" % err_key) + + if err_key == 'DUP_VOL_PATH': + clean_vol = True + + ret = cleanup_pool_vol(server, virt, pool_name, vol_name, + exp_vol_path, clean_vol=clean_vol) + if ret != PASS: + raise Exception("Failed to clean the env") + + except Exception, details: + logger.error("In main() Exception details: %s", details) + status = FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) From deeptik at linux.vnet.ibm.com Thu Sep 17 18:53:39 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Thu, 17 Sep 2009 18:53:39 -0000 Subject: [Libvirt-cim] [PATCH 5 of 5] [TEST] #4 Add new tc to verify the err values for RPCS DeleteResourceInPool() In-Reply-To: References: Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253213419 25200 # Node ID f5c62f54d1204d38ce15e48d269d3e887da69937 # Parent a223739ebb2f9e8b4857a9f0a0d6a5e9bf0904eb [TEST] #4 Add new tc to verify the err values for RPCS DeleteResourceInPool() PAtch 4: ------- 1) Using the updated cleanup_pool_vol() Patch 3: ------- 1) Included cleanup_pool_vol() of pool.py 2) Created a new dir pool Patch 2: -------- 1) Added exception to verify_rpcs_err_val() to catch exceptions returned other than for DeleteResourceInPool() 2) Included get_sto_vol_rasd_for_pool() from pool.py Tested with KVM and current sources on SLES11. Signed-off-by: Deepti B. 
Kalakeri diff -r a223739ebb2f -r f5c62f54d120 suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/14_delete_storagevolume_errs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/14_delete_storagevolume_errs.py Thu Sep 17 11:50:19 2009 -0700 @@ -0,0 +1,173 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This test case verifies the deletion of the StorageVol using the +# DeleteResourceInPool method of RPCS returns error when invalid values are +# passed. 
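The test drives all four failure scenarios from one table (invalid_scen), expecting CIM_ERR_INVALID_PARAMETER when a whole argument is missing and CIM_ERR_FAILED otherwise. That table-driven pattern can be sketched with a stand-in CIMError class and a stubbed DeleteResourceInPool; the stub's behavior models what the test expects of the provider and is an assumption, not provider code:

```python
class CIMError(Exception):
    """Stand-in for pywbem.CIMError: args are (status_code, description)."""
    pass

CIM_ERR_FAILED, CIM_ERR_INVALID_PARAMETER = 1, 4  # DMTF CIM status codes

invalid_scen = {
    "INVALID_ADDRESS": "no storage vol with matching path",
    "MISSING_RESOURCE": "Missing argument `Resource'",
}

def fake_delete_resource(resource=None):
    # Stub: reject bad input the way the test expects DeleteResourceInPool to.
    if resource is None:
        raise CIMError(CIM_ERR_INVALID_PARAMETER, "Missing argument `Resource'")
    raise CIMError(CIM_ERR_FAILED, "no storage vol with matching path")

def verify(scen):
    # MISSING_* scenarios omit an argument entirely; the rest pass junk data.
    exp_no = CIM_ERR_INVALID_PARAMETER if "MISSING" in scen else CIM_ERR_FAILED
    try:
        fake_delete_resource(None if "MISSING" in scen else "Junkvol_path")
    except CIMError as e:
        err_no, err_desc = e.args
        return err_no == exp_no and invalid_scen[scen] in err_desc
    return False  # the call succeeding at all is itself a failure

results = {scen: verify(scen) for scen in invalid_scen}
print(results)
```

As in the real test, a scenario only passes when both the numeric status and a substring of the expected description match.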
+# +# -Date: 08-09-2009 + +import sys +import os +from VirtLib import utils +from CimTest.Globals import logger +from pywbem import CIM_ERR_FAILED, CIM_ERR_INVALID_PARAMETER, CIMError +from CimTest.ReturnCodes import FAIL, PASS, SKIP +from XenKvmLib.xm_virt_util import virsh_version +from XenKvmLib.const import do_main, platform_sup, get_provider_version,\ + default_pool_name +from XenKvmLib import rpcs_service +from XenKvmLib.classes import get_typed_class, inst_to_mof +from XenKvmLib.pool import create_pool, DIR_POOL, \ + libvirt_rasd_spool_del_changes, get_diskpool, \ + get_stovol_default_settings, cleanup_pool_vol, \ + get_sto_vol_rasd_for_pool + +pool_attr = { 'Path' : "/tmp" } +vol_name = "cimtest-vol.img" +invalid_scen = { "INVALID_ADDRESS" : { 'val' : 'Junkvol_path', + 'msg' : 'no storage vol with '\ + 'matching path' }, + "NO_ADDRESS_FIELD" : { 'msg' :'Missing Address in '\ + 'resource RASD' }, + "MISSING_RESOURCE" : { 'msg' :"Missing argument `Resource'"}, + "MISSING_POOL" : { 'msg' :"Missing argument `Pool'"} + } + + +def verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, pool_name, + exp_vol_path, dp_inst): + + for err_scen in invalid_scen.keys(): + logger.info("Verifying errors for '%s'....", err_scen) + status = FAIL + del_res = [FAIL] + try: + res_settings = get_sto_vol_rasd_for_pool(virt, server, dp_cn, + pool_name, exp_vol_path) + if res_settings == None: + raise Exception("Failed getting resource settings for '%s' vol"\ + " when executing '%s'" % (vol_name, err_scen)) + + if not "MISSING" in err_scen: + exp_err_no = CIM_ERR_FAILED + + if "NO_ADDRESS_FIELD" in err_scen: + del res_settings['Address'] + elif "INVALID_ADDRESS" in err_scen: + res_settings['Address'] = invalid_scen[err_scen]['val'] + + resource = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource, + Pool=dp_inst) + else: + exp_err_no = CIM_ERR_INVALID_PARAMETER + + if err_scen == "MISSING_RESOURCE": + del_res = 
rpcs_conn.DeleteResourceInPool(Pool=dp_inst) + elif err_scen == "MISSING_POOL": + resource = inst_to_mof(res_settings) + del_res = rpcs_conn.DeleteResourceInPool(Resource=resource) + + except CIMError, (err_no, err_desc): + if del_res[0] != PASS and invalid_scen[err_scen]['msg'] in err_desc\ + and exp_err_no == err_no: + logger.error("Got the expected error message: '%s' for '%s'", + err_desc, err_scen) + status = PASS + else: + logger.error("Unexpected error msg, Expected '%s'-'%s', Got" + "'%s'-'%s'", exp_err_no, + invalid_scen[err_scen]['msg'], err_no, err_desc) + return FAIL + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL + + if del_res[0] == PASS or status != PASS: + logger.error("Should not have been able to delete Vol %s", vol_name) + return FAIL + + return status + + at do_main(platform_sup) +def main(): + options = main.options + server = options.ip + virt = options.virt + + libvirt_ver = virsh_version(server, virt) + cim_rev, changeset = get_provider_version(virt, server) + if libvirt_ver < "0.4.1" and cim_rev < libvirt_rasd_spool_del_changes: + logger.info("Storage Volume deletion support is available with Libvirt" + "version >= 0.4.1 and Libvirt-CIM rev '%s'", + libvirt_rasd_spool_del_changes) + return SKIP + + dp_cn = "DiskPool" + exp_vol_path = "%s/%s" % (pool_attr['Path'], vol_name) + + pool_name = default_pool_name + status = FAIL + res = del_res = [FAIL] + clean_vol = False + + try: + sv_rasd = get_stovol_default_settings(virt, server, dp_cn, pool_name, + exp_vol_path, vol_name) + if sv_rasd == None: + raise Exception("Failed to get the defualt StorageVolRASD info") + + sv_settings = inst_to_mof(sv_rasd) + + dp_inst = get_diskpool(server, virt, dp_cn, pool_name) + if dp_inst == None: + raise Exception("DiskPool instance for '%s' not found!" \ + % pool_name) + + rpcs = get_typed_class(virt, "ResourcePoolConfigurationService") + rpcs_conn = eval("rpcs_service." 
+ rpcs)(server) + res = rpcs_conn.CreateResourceInPool(Settings=sv_settings, + Pool=dp_inst) + if res[0] != PASS: + raise Exception("Failed to create the Vol %s" % vol_name) + + status = verify_rpcs_err_val(virt, server, rpcs_conn, dp_cn, + pool_name, exp_vol_path, dp_inst) + if status != PASS : + clean_vol = True + raise Exception("Verification Failed for DeleteResourceInPool()") + + except Exception, details: + logger.error("Exception details: %s", details) + status = FAIL + + ret = cleanup_pool_vol(server, virt, pool_name, vol_name, exp_vol_path, + clean_vol) + if status != PASS or ret != PASS: + return FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) From snmishra at us.ibm.com Fri Sep 18 16:54:15 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Fri, 18 Sep 2009 09:54:15 -0700 Subject: [Libvirt-cim] [PATCH] This patch replaces get_previous_instance function with get_rasd_by_name() Message-ID: # HG changeset patch # User Sharad Mishra # Date 1253292817 25200 # Node ID f916b221ea7e21b091d36ef841eb3bde1813798d # Parent fc50acd35fe7f344e296441a88a00f42a7636ad6 This patch replaces get_previous_instance function with get_rasd_by_name(). 
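The removed helper walked every RASD returned by enum_rasds() and compared InstanceID against the fully-qualified device id, "<domain>/<device>"; the replacement builds that name once (the asprintf above) and asks get_rasd_by_name() for the single matching instance. Note also that the old loop left prev_inst pointing at the last enumerated instance when nothing matched, rather than NULL. Roughly, in Python, with dicts standing in for CMPI instances (data is illustrative):

```python
def get_fq_devid(dom, devid):
    # "<domain>/<device>" naming used for device RASD InstanceIDs
    return "%s/%s" % (dom, devid)

def get_previous_instance(rasds, dom, devid):
    # Old approach: linear scan over the full enum_rasds() output.
    want = get_fq_devid(dom, devid)
    for inst in rasds:
        if inst.get('InstanceID') == want:
            return inst
    return None

def get_rasd_by_name(index, name):
    # New approach: direct lookup by name (a dict stands in for the
    # provider-side get_rasd_by_name() call).
    return index.get(name)

rasds = [{'InstanceID': 'guest1/hda'}, {'InstanceID': 'guest1/vnet0'}]
index = {r['InstanceID']: r for r in rasds}
assert get_previous_instance(rasds, 'guest1', 'hda') is get_rasd_by_name(index, 'guest1/hda')
```

Both paths resolve the same instance; the direct lookup just skips enumerating every RASD of the type.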
Signed-off-by: Sharad Mishra diff -r fc50acd35fe7 -r f916b221ea7e src/Virt_VirtualSystemManagementService.c --- a/src/Virt_VirtualSystemManagementService.c Wed Sep 16 11:49:21 2009 -0700 +++ b/src/Virt_VirtualSystemManagementService.c Fri Sep 18 09:53:37 2009 -0700 @@ -2185,49 +2185,6 @@ return s; } -static CMPIInstance *get_previous_instance(struct domain *dominfo, - const CMPIObjectPath *ref, - uint16_t type, - const char *devid) -{ - CMPIStatus s; - const char *props[] = {NULL}; - const char *inst_id; - struct inst_list list; - CMPIInstance *prev_inst = NULL; - int i, ret; - - inst_list_init(&list); - s = enum_rasds(_BROKER, ref, dominfo->name, type, props, &list); - if (s.rc != CMPI_RC_OK) { - CU_DEBUG("Failed to enumerate rasd"); - goto out; - } - - for(i = 0; i < list.cur; i++) { - prev_inst = list.list[i]; - ret = cu_get_str_prop(prev_inst, - "InstanceID", - &inst_id); - - if (ret != CMPI_RC_OK) { - CU_DEBUG("Cannot get InstanceID ... ignoring"); - continue; - } - - if (STREQ(inst_id, get_fq_devid(dominfo->name, (char *)devid))) - break; - } - - if (prev_inst == NULL) - CU_DEBUG("PreviousInstance is NULL"); - - out: - inst_list_free(&list); - - return prev_inst; -} - static CMPIStatus _update_resources_for(const CMPIContext *context, const CMPIObjectPath *ref, virDomainPtr dom, @@ -2276,7 +2233,24 @@ } else { indication = strdup(RASD_IND_MODIFIED); - prev_inst = get_previous_instance(dominfo, ref, type, devid); + char *dummy_name = NULL; + + if (asprintf(&dummy_name, "%s/%s",dominfo->name, devid) == -1) { + CU_DEBUG("Unable to set name"); + goto out; + } + s = get_rasd_by_name(_BROKER, + ref, + dummy_name, + type, + NULL, + &prev_inst); + free(dummy_name); + + if (s.rc != CMPI_RC_OK) { + CU_DEBUG("Failed to get Previous Instance"); + goto out; + } } s = func(dominfo, rasd, type, devid, NAMESPACE(ref)); From rmaciel at linux.vnet.ibm.com Mon Sep 21 13:45:43 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Mon, 21 Sep 2009 10:45:43 -0300 
Subject: [Libvirt-cim] [PATCH] This patch replaces get_previous_instance function with get_rasd_by_name() In-Reply-To: References: Message-ID: <4AB78387.7050205@linux.vnet.ibm.com> Well, I don't really know why, but this patch solves the problem with the PreviousInstance fields not being filled properly. From what I could dig, when you execute enum_rasds, the BusType property cannot be extracted using cu_get_str_prop (the function returns a CMPI_RC_ERR_NO_SUCH_PROPERTY). On 09/18/2009 01:54 PM, Sharad Mishra wrote: > # HG changeset patch > # User Sharad Mishra > # Date 1253292817 25200 > # Node ID f916b221ea7e21b091d36ef841eb3bde1813798d > # Parent fc50acd35fe7f344e296441a88a00f42a7636ad6 > This patch replaces get_previous_instance function with get_rasd_by_name(). > [...] -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com
From rmaciel at linux.vnet.ibm.com Mon Sep 21 15:08:21 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Mon, 21 Sep 2009 12:08:21 -0300 Subject: [Libvirt-cim] [PATCH] This patch replaces get_previous_instance function with get_rasd_by_name() In-Reply-To: References: Message-ID: <4AB796E5.2050307@linux.vnet.ibm.com> +1 On 09/18/2009 01:54 PM, Sharad Mishra wrote: > # HG changeset patch > # User Sharad Mishra > # Date 1253292817 25200 > # Node ID f916b221ea7e21b091d36ef841eb3bde1813798d > # Parent fc50acd35fe7f344e296441a88a00f42a7636ad6 > This patch replaces get_previous_instance function with get_rasd_by_name(). > [...] -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com
From deeptik at linux.vnet.ibm.com Mon Sep 21 18:46:20 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Mon, 21 Sep 2009 18:46:20 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] Add new tc RASDIndications/01_guest_states_rasd_ind.py Message-ID: <7b7fa4294f3602db5aca.1253558780@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253558547 25200 # Node ID 7b7fa4294f3602db5aca1d5958ebfd6dc849ef46 # Parent f5c62f54d1204d38ce15e48d269d3e887da69937 [TEST] Add new tc RASDIndications/01_guest_states_rasd_ind.py To verify the Add|Deleted RASDIndication for the guest. Tested with Xen and current sources on RHEL5.3. Signed-off-by: Deepti B.
Kalakeri diff -r f5c62f54d120 -r 7b7fa4294f36 suites/libvirt-cim/cimtest/RASDIndications/01_guest_states_rasd_ind.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/RASDIndications/01_guest_states_rasd_ind.py Mon Sep 21 11:42:27 2009 -0700 @@ -0,0 +1,157 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This testcase is used to verify the Created|Deleted +# RASD Indications for a guest. 
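The test below forks a listener child and the parent, after triggering the guest state change, polls until the child reports the indication or gives up. The waiting half of that handshake can be reduced to stdlib pieces; here a plain list stands in for the indication sink, and the names and timeouts are illustrative only:

```python
import time

def poll_for_ind(received, ind_name, timeout=5.0, interval=0.1):
    """Return True once ind_name shows up in `received`, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ind_name in received:
            return True
        time.sleep(interval)
    return False

inbox = []
inbox.append('ResourceAllocationSettingDataCreatedIndication')  # simulated delivery
print(poll_for_ind(inbox, 'ResourceAllocationSettingDataCreatedIndication'))  # True
```

In the real test the child process does the listening and exits 0 or 1, and the parent's poll_for_ind() watches that exit status instead of a list, killing the child with SIGKILL if the indication never arrives.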
+# +# Date : 21-09-2009 +# + +import sys +from signal import SIGKILL +from socket import gethostname +from os import kill, fork, _exit +from XenKvmLib.vxml import get_class +from XenKvmLib.xm_virt_util import active_domain_list +from CimTest.Globals import logger +from XenKvmLib.const import do_main, CIM_ENABLE, CIM_DISABLE +from CimTest.ReturnCodes import PASS, FAIL +from XenKvmLib.common_util import poll_for_state_change +from XenKvmLib.indications import sub_ind, handle_request, poll_for_ind + +sup_types = ['KVM', 'Xen'] + +def create_guest(test_dom, ip, virt, cxml, ind_name): + try: + ret = cxml.cim_define(ip) + if not ret: + raise Exception("Failed to define domain %s" % test_dom) + + status, dom_cs = poll_for_state_change(ip, virt, test_dom, + CIM_DISABLE) + if status != PASS: + raise Exception("Dom '%s' not in expected state '%s'" \ + % (test_dom, CIM_DISABLE)) + + ret = cxml.cim_start(ip) + if ret: + raise Exception("Failed to start the domain '%s'" % test_dom) + cxml.undefine(ip) + + status, dom_cs = poll_for_state_change(ip, virt, test_dom, + CIM_ENABLE) + if status != PASS: + raise Exception("Dom '%s' not in expected state '%s'" \ + % (test_dom, CIM_ENABLE)) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL, cxml + + return PASS, cxml + +def gen_indication(test_dom, s_sysname, virt, cxml, ind_name): + status = FAIL + try: + active_doms = active_domain_list(s_sysname, virt) + if test_dom not in active_doms: + status, cxml = create_guest(test_dom, s_sysname, virt, cxml, ind_name) + if status != PASS: + raise Exception("Error setting up the guest '%s'" % test_dom) + + if ind_name == "delete": + ret = cxml.cim_destroy(s_sysname) + if not ret: + raise Exception("Failed to destroy domain '%s'" % test_dom) + + except Exception, details: + logger.error("Exception details :%s", details) + return FAIL, cxml + + return PASS, cxml + + at do_main(sup_types) +def main(): + options = main.options + virt = options.virt + 
s_sysname = options.ip + + status = FAIL + test_dom = 'VM_' + gethostname() + ind_names = { + 'create' : 'ResourceAllocationSettingDataCreatedIndication', + 'delete' : 'ResourceAllocationSettingDataDeletedIndication' + } + + virt_xml = get_class(virt) + cxml = virt_xml(test_dom) + sub_list, ind_names, dict = sub_ind(s_sysname, virt, ind_names) + for ind in ind_names.keys(): + sub = sub_list[ind] + ind_name = ind_names[ind] + logger.info("\n Verifying '%s' indications ....", ind_name) + + try: + pid = fork() + if pid == 0: + status = handle_request(sub, ind_name, dict, + len(ind_names.keys())) + if status != PASS: + _exit(1) + _exit(0) + else: + try: + status, cxml = gen_indication(test_dom, s_sysname, + virt, cxml, ind) + if status != PASS: + kill(pid, SIGKILL) + raise Exception("Unable to generate indication") + + status = poll_for_ind(pid, ind_name) + except Exception, details: + kill(pid, SIGKILL) + raise Exception(details) + + except Exception, details: + logger.error("Exception: %s", details) + status = FAIL + + if status != PASS: + break + + #Make sure all subscriptions are really unsubscribed + for ind, sub in sub_list.iteritems(): + sub.unsubscribe(dict['default_auth']) + logger.info("Cancelling subscription for %s", ind_names[ind]) + + active_doms = active_domain_list(s_sysname, virt) + if test_dom in active_doms: + ret = cxml.cim_destroy(s_sysname) + if not ret: + logger.error("Failed to Destroy the domain") + return FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) + From deeptik at linux.vnet.ibm.com Mon Sep 21 18:46:54 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Mon, 21 Sep 2009 18:46:54 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] Add new tc RASDIndications/02_guest_add_mod_rem_rasd_ind.py Message-ID: <20f8f3d7e3ef6d943e3b.1253558814@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. 
Kalakeri # Date 1253558705 25200 # Node ID 20f8f3d7e3ef6d943e3bbab928f8c9e5108262c6 # Parent 7b7fa4294f3602db5aca1d5958ebfd6dc849ef46 [TEST] Add new tc RASDIndications/02_guest_add_mod_rem_rasd_ind.py To verify the Add|Modify|Deleted RASDIndication for the guest. Tested with Xen and current sources on RHEL5.3. Signed-off-by: Deepti B. Kalakeri diff -r 7b7fa4294f36 -r 20f8f3d7e3ef suites/libvirt-cim/cimtest/RASDIndications/02_guest_add_mod_rem_rasd_ind.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/RASDIndications/02_guest_add_mod_rem_rasd_ind.py Mon Sep 21 11:45:05 2009 -0700 @@ -0,0 +1,229 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This testcase is used to verify the Created|Modified|Deleted +# RASD Indications for a guest. 
+# +# Date : 21-09-2009 +# + +import sys +from signal import SIGKILL +from XenKvmLib import vsms +from XenKvmLib import vsms_util +from XenKvmLib.classes import get_typed_class +from XenKvmLib.enumclass import EnumNames +from socket import gethostname +from os import kill, fork, _exit +from XenKvmLib.vxml import get_class +from CimTest.Globals import logger +from XenKvmLib.const import do_main, CIM_DISABLE, CIM_ENABLE +from CimTest.ReturnCodes import PASS, FAIL +from XenKvmLib.common_util import poll_for_state_change +from XenKvmLib.indications import sub_ind, handle_request, poll_for_ind + +sup_types = ['KVM', 'Xen'] + +nmem = 256 +nmac = '00:11:22:33:44:55' + +def create_guest(test_dom, ip, virt, cxml): + try: + ret = cxml.cim_define(ip) + if not ret: + raise Exception("Failed to define domain %s" % test_dom) + + status, dom_cs = poll_for_state_change(ip, virt, test_dom, + CIM_DISABLE) + if status != PASS: + raise Exception("Dom '%s' not in expected state '%s'" \ + % (test_dom, CIM_DISABLE)) + + ret = cxml.cim_start(ip) + if ret: + raise Exception("Failed to start the domain '%s'" % test_dom) + cxml.undefine(ip) + + status, dom_cs = poll_for_state_change(ip, virt, test_dom, + CIM_ENABLE) + if status != PASS: + raise Exception("Dom '%s' not in expected state '%s'" \ + % (test_dom, CIM_ENABLE)) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL, cxml + + return PASS, cxml + + +def get_rasd_rec(virt, cn, s_sysname, inst_id): + classname = get_typed_class(virt, cn) + recs = EnumNames(s_sysname, classname) + rasd = None + for rasd_rec in recs: + ret_pool = rasd_rec['InstanceID'] + if ret_pool == inst_id: + rasd = rasd_rec + break + + return rasd + +def gen_indication(test_dom, s_sysname, virt, cxml, service, ind_name, + rasd=None, nmem_disk=None): + status = FAIL + try: + + if ind_name == "add": + cn = 'VirtualSystemSettingData' + inst_id = '%s:%s' % (virt, test_dom) + classname = get_typed_class(virt, cn) + vssd_ref = 
get_rasd_rec(virt, cn, s_sysname, inst_id) + + if vssd_ref == None: + raise Exception("Failed to get vssd_ref for '%s'" % test_dom) + + status = vsms_util.add_disk_res(s_sysname, service, cxml, + vssd_ref, rasd, nmem_disk) + + elif ind_name == "modify": + status = vsms_util.mod_mem_res(s_sysname, service, cxml, + rasd, nmem_disk) + + elif ind_name == 'delete': + cn = 'GraphicsResourceAllocationSettingData' + inst_id = '%s/%s' % (test_dom, "graphics") + classname = get_typed_class(virt, cn) + nrasd = get_rasd_rec(virt, cn, s_sysname, inst_id) + + if nrasd == None: + raise Exception("Failed to get nrasd for '%s'" % test_dom) + + res = service.RemoveResourceSettings(ResourceSettings=[nrasd]) + status = res[0] + + except Exception, details: + logger.error("Exception details :%s", details) + return FAIL + + return status + +@do_main(sup_types) +def main(): + options = main.options + virt = options.virt + s_sysname = options.ip + + status = FAIL + test_dom = 'VM_' + gethostname() + ind_names = { + 'add' : 'ResourceAllocationSettingDataCreatedIndication', + 'modify' : 'ResourceAllocationSettingDataModifiedIndication', + 'delete' : 'ResourceAllocationSettingDataDeletedIndication' + } + + sub_list, ind_names, dict = sub_ind(s_sysname, virt, ind_names) + virt_xml = get_class(virt) + cxml = virt_xml(test_dom, mac=nmac) + service = vsms.get_vsms_class(options.virt)(options.ip) + ndpath = cxml.secondary_disk_path + + if virt == 'KVM': + nddev = 'hdb' + else: + nddev = 'xvdb' + + disk_attr = { 'nddev' : nddev, + 'src_path' : ndpath + } + dasd = vsms.get_dasd_class(options.virt)(dev=nddev, + source=cxml.secondary_disk_path, + name=test_dom) + masd = vsms.get_masd_class(options.virt)(megabytes=nmem, name=test_dom) + rasd_info = { 'add' : [dasd, disk_attr], + 'modify' : [masd, nmem] + } + + status, cxml = create_guest(test_dom, s_sysname, virt, cxml) + if status != PASS: + logger.error("Error setting up the guest '%s'" % test_dom) + return FAIL + + for ind in ind_names.keys(): + 
sub = sub_list[ind] + ind_name = ind_names[ind] + logger.info("\n Verifying '%s' indications ....", ind_name) + + try: + pid = fork() + if pid == 0: + status = handle_request(sub, ind_name, dict, + len(ind_names.keys())) + if status != PASS: + _exit(1) + + _exit(0) + else: + try: + if ind != 'delete': + rasd = rasd_info[ind][0] + val = rasd_info[ind][1] + status = gen_indication(test_dom, s_sysname, + virt, cxml, service, + ind, rasd, val) + else: + status = gen_indication(test_dom, s_sysname, + virt, cxml, service, + ind) + if status != PASS: + raise Exception("Unable to generate indication") + + status = poll_for_ind(pid, ind_name) + if status != PASS: + raise Exception("Poll for indication Failed") + + except Exception, details: + kill(pid, SIGKILL) + raise Exception(details) + + except Exception, details: + logger.error("Exception: %s", details) + status = FAIL + + if status != PASS: + break + + #Make sure all subscriptions are really unsubscribed + for ind, sub in sub_list.iteritems(): + sub.unsubscribe(dict['default_auth']) + logger.info("Cancelling subscription for %s", ind_names[ind]) + + ret = cxml.cim_destroy(s_sysname) + if not ret: + logger.error("Failed to destroy the domain '%s'", test_dom) + return FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) + From jfehlig at novell.com Tue Sep 22 18:15:50 2009 From: jfehlig at novell.com (Jim Fehlig) Date: Tue, 22 Sep 2009 12:15:50 -0600 Subject: [Libvirt-cim] [PATCH] Cleanup _get_rasds() in Virt_RASD.c Message-ID: <81b6cd4ae355024303a8.1253643350@jfehlig3.provo.novell.com> # HG changeset patch # User Jim Fehlig # Date 1253641563 21600 # Node ID 81b6cd4ae355024303a8459817b4f15339d17111 # Parent 7c5106b0b092147c521ef1f462b9a41a44a313f8 Cleanup _get_rasds() in Virt_RASD.c I received a bug report about a memory leak in _get_rasds(). While fixing the leak, I took the opportunity to do some other tidying in this function. 
Signed-off-by: Jim Fehlig diff -r 7c5106b0b092 -r 81b6cd4ae355 src/Virt_RASD.c --- a/src/Virt_RASD.c Wed Sep 16 11:49:21 2009 -0700 +++ b/src/Virt_RASD.c Tue Sep 22 11:46:03 2009 -0600 @@ -664,6 +664,7 @@ int count; int i; struct virt_device *devs = NULL; + const char *host = NULL; count = get_devices(dom, &devs, type); if (count <= 0) @@ -672,8 +673,13 @@ /* Bit hackish, but for proc we need to cut list down to one. */ if (type == CIM_RES_TYPE_PROC) { struct virt_device *tmp_dev = NULL; - tmp_dev = calloc(1, sizeof(*tmp_dev)); tmp_dev = virt_device_dup(&devs[count - 1]); + if (tmp_dev == NULL) { + cu_statusf(broker, &s, + CMPI_RC_ERR_FAILED, + "Failed to allocate memory for proc RASD"); + goto out; + } tmp_dev->id = strdup("proc"); @@ -685,15 +691,16 @@ count = 1; } + host = virDomainGetName(dom); + if (host == NULL) { + cu_statusf(broker, &s, + CMPI_RC_ERR_FAILED, + "Failed to get domain name"); + goto out; + } + for (i = 0; i < count; i++) { CMPIInstance *dev = NULL; - const char *host = NULL; - - host = virDomainGetName(dom); - if (host == NULL) { - cleanup_virt_device(&devs[i]); - continue; - } dev = rasd_from_vdev(broker, &devs[i], From kaitlin at linux.vnet.ibm.com Tue Sep 22 20:48:54 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 22 Sep 2009 13:48:54 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Add new tc RASDIndications/01_guest_states_rasd_ind.py In-Reply-To: <7b7fa4294f3602db5aca.1253558780@elm3a148.beaverton.ibm.com> References: <7b7fa4294f3602db5aca.1253558780@elm3a148.beaverton.ibm.com> Message-ID: <4AB93836.3060707@linux.vnet.ibm.com> > +sup_types = ['KVM', 'Xen'] This test should be identical for XenFV. Go ahead and put it in the support list. 
-- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From kaitlin at linux.vnet.ibm.com Tue Sep 22 21:03:11 2009 From: kaitlin at linux.vnet.ibm.com (Kaitlin Rupert) Date: Tue, 22 Sep 2009 14:03:11 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] Add new tc RASDIndications/02_guest_add_mod_rem_rasd_ind.py In-Reply-To: <20f8f3d7e3ef6d943e3b.1253558814@elm3a148.beaverton.ibm.com> References: <20f8f3d7e3ef6d943e3b.1253558814@elm3a148.beaverton.ibm.com> Message-ID: <4AB93B8F.1040501@linux.vnet.ibm.com> > +sup_types = ['KVM', 'Xen'] Have this support XenFV as well. > + > +nmem = 256 > +nmac = '00:11:22:33:44:55' > + > +def create_guest(test_dom, ip, virt, cxml): > + try: > + ret = cxml.cim_define(ip) > + if not ret: > + raise Exception("Failed to define domain %s" % test_dom) > + > + status, dom_cs = poll_for_state_change(ip, virt, test_dom, > + CIM_DISABLE) > + if status != PASS: > + raise Exception("Dom '%s' not in expected state '%s'" \ > + % (test_dom, CIM_DISABLE)) > + > + ret = cxml.cim_start(ip) > + if ret: > + raise Exception("Failed to start the domain '%s'" % test_dom) > + cxml.undefine(ip) > + > + status, dom_cs = poll_for_state_change(ip, virt, test_dom, > + CIM_ENABLE) > + if status != PASS: > + raise Exception("Dom '%s' not in expected state '%s'" \ > + % (test_dom, CIM_ENABLE)) > + > + except Exception, details: > + logger.error("Exception details: %s", details) > + return FAIL, cxml > + > + return PASS, cxml > + > + > +def get_rasd_rec(virt, cn, s_sysname, inst_id): > + classname = get_typed_class(virt, cn) > + recs = EnumNames(s_sysname, classname) > + rasd = None > + for rasd_rec in recs: > + ret_pool = rasd_rec['InstanceID'] > + if ret_pool == inst_id: > + rasd = rasd_rec > + break > + > + return rasd > + > +def gen_indication(test_dom, s_sysname, virt, cxml, service, ind_name, > + rasd=None, nmem_disk=None): > + status = FAIL > + try: > + > + if ind_name == "add": > + cn = 'VirtualSystemSettingData' > + inst_id = '%s:%s' % (virt, 
test_dom) > + classname = get_typed_class(virt, cn) > + vssd_ref = get_rasd_rec(virt, cn, s_sysname, inst_id) > + > + if vssd_ref == None: > + raise Exception("Failed to get vssd_ref for '%s'" % test_dom) > + > + status = vsms_util.add_disk_res(s_sysname, service, cxml, > + vssd_ref, rasd, nmem_disk) > + > + elif ind_name == "modify": > + status = vsms_util.mod_mem_res(s_sysname, service, cxml, > + rasd, nmem_disk) KVM doesn't support ballooning of memory while the guest is running. You'll need to do this when the guest is shutdown. -- Kaitlin Rupert IBM Linux Technology Center kaitlin at linux.vnet.ibm.com From deeptik at linux.vnet.ibm.com Tue Sep 22 21:17:50 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 22 Sep 2009 21:17:50 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] #2 Add new tc RASDIndications/01_guest_states_rasd_ind.py Message-ID: # HG changeset patch # User Deepti B. Kalakeri # Date 1253654207 25200 # Node ID faf86189f60a2b7e5321996540c390c0598929c9 # Parent f5c62f54d1204d38ce15e48d269d3e887da69937 [TEST] #2 Add new tc RASDIndications/01_guest_states_rasd_ind.py Patch 2: -------- 1) Checked for RASDIndication support in libvirt-cim 2) Included support for XenFV To verify the Add|Deleted RASDIndication for the guest. Tested with Xen and current sources on RHEL5.3 and KVM with F10. Signed-off-by: Deepti B. Kalakeri diff -r f5c62f54d120 -r faf86189f60a suites/libvirt-cim/cimtest/RASDIndications/01_guest_states_rasd_ind.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/RASDIndications/01_guest_states_rasd_ind.py Tue Sep 22 14:16:47 2009 -0700 @@ -0,0 +1,164 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This testcase is used to verify the Created|Deleted +# RASD Indications for a guest. +# +# Date : 21-09-2009 +# + +import sys +from signal import SIGKILL +from socket import gethostname +from os import kill, fork, _exit +from XenKvmLib.vxml import get_class +from XenKvmLib.xm_virt_util import active_domain_list +from CimTest.Globals import logger +from XenKvmLib.const import do_main, CIM_ENABLE, CIM_DISABLE, \ + get_provider_version +from CimTest.ReturnCodes import PASS, FAIL, SKIP +from XenKvmLib.common_util import poll_for_state_change +from XenKvmLib.indications import sub_ind, handle_request, poll_for_ind + +sup_types = ['KVM', 'Xen', 'XenFV'] +libvirt_guest_rasd_indication_rev = 980 + +def create_guest(test_dom, ip, virt, cxml, ind_name): + try: + ret = cxml.cim_define(ip) + if not ret: + raise Exception("Failed to define domain %s" % test_dom) + + status, dom_cs = poll_for_state_change(ip, virt, test_dom, + CIM_DISABLE) + if status != PASS: + raise Exception("Dom '%s' not in expected state '%s'" \ + % (test_dom, CIM_DISABLE)) + + ret = cxml.cim_start(ip) + if ret: + raise Exception("Failed to start the domain '%s'" % test_dom) + cxml.undefine(ip) + + status, dom_cs = poll_for_state_change(ip, virt, test_dom, + CIM_ENABLE) + if status != PASS: + raise Exception("Dom '%s' not in expected state '%s'" \ + % (test_dom, CIM_ENABLE)) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL, cxml + + return PASS, cxml + +def gen_indication(test_dom, s_sysname, virt, cxml, 
ind_name): + status = FAIL + try: + active_doms = active_domain_list(s_sysname, virt) + if test_dom not in active_doms: + status, cxml = create_guest(test_dom, s_sysname, virt, cxml, ind_name) + if status != PASS: + raise Exception("Error setting up the guest '%s'" % test_dom) + + if ind_name == "delete": + ret = cxml.cim_destroy(s_sysname) + if not ret: + raise Exception("Failed to destroy domain '%s'" % test_dom) + + except Exception, details: + logger.error("Exception details :%s", details) + return FAIL, cxml + + return PASS, cxml + +@do_main(sup_types) +def main(): + options = main.options + virt = options.virt + s_sysname = options.ip + + cim_rev, changeset = get_provider_version(virt, s_sysname) + if cim_rev < libvirt_guest_rasd_indication_rev: + logger.info("Support for Guest Resource Indications is available in " + "Libvirt-CIM rev '%s'", libvirt_guest_rasd_indication_rev) + return SKIP + + status = FAIL + test_dom = 'VM_' + gethostname() + ind_names = { + 'create' : 'ResourceAllocationSettingDataCreatedIndication', + 'delete' : 'ResourceAllocationSettingDataDeletedIndication' + } + + virt_xml = get_class(virt) + cxml = virt_xml(test_dom) + sub_list, ind_names, dict = sub_ind(s_sysname, virt, ind_names) + for ind in ind_names.keys(): + sub = sub_list[ind] + ind_name = ind_names[ind] + logger.info("\n Verifying '%s' indications ....", ind_name) + + try: + pid = fork() + if pid == 0: + status = handle_request(sub, ind_name, dict, + len(ind_names.keys())) + if status != PASS: + _exit(1) + _exit(0) + else: + try: + status, cxml = gen_indication(test_dom, s_sysname, + virt, cxml, ind) + if status != PASS: + raise Exception("Unable to generate indication") + + status = poll_for_ind(pid, ind_name) + except Exception, details: + kill(pid, SIGKILL) + raise Exception(details) + + except Exception, details: + logger.error("Exception: %s", details) + status = FAIL + + if status != PASS: + break + + #Make sure all subscriptions are really unsubscribed + for ind, sub
in sub_list.iteritems(): + sub.unsubscribe(dict['default_auth']) + logger.info("Cancelling subscription for %s", ind_names[ind]) + + active_doms = active_domain_list(s_sysname, virt) + if test_dom in active_doms: + ret = cxml.cim_destroy(s_sysname) + if not ret: + logger.error("Failed to Destroy the domain") + return FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) + From deeptik at linux.vnet.ibm.com Tue Sep 22 21:33:16 2009 From: deeptik at linux.vnet.ibm.com (Deepti B. Kalakeri) Date: Tue, 22 Sep 2009 21:33:16 -0000 Subject: [Libvirt-cim] [PATCH] [TEST] #2 [TEST] Add new tc RASDIndications/02_guest_add_mod_rem_rasd_ind.py Message-ID: <215cbc24f8f95f95543a.1253655196@elm3a148.beaverton.ibm.com> # HG changeset patch # User Deepti B. Kalakeri # Date 1253655145 25200 # Node ID 215cbc24f8f95f95543a24ecc7e3b1d80594ecdd # Parent faf86189f60a2b7e5321996540c390c0598929c9 [TEST] #2 [TEST] Add new tc RASDIndications/02_guest_add_mod_rem_rasd_ind.py Patch 2: -------- 1) Checked for RASDIndication support in libvirt-cim 2) Included support for XenFV 3) Removed cim_start() from the testcase 4) Undefined the guest at the end of the test. To verify the Add|Modify|Deleted RASDIndication for the guest. Tested with Xen and current sources on RHEL5.3 and with KVM on F10. Signed-off-by: Deepti B. Kalakeri diff -r faf86189f60a -r 215cbc24f8f9 suites/libvirt-cim/cimtest/RASDIndications/02_guest_add_mod_rem_rasd_ind.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/suites/libvirt-cim/cimtest/RASDIndications/02_guest_add_mod_rem_rasd_ind.py Tue Sep 22 14:32:25 2009 -0700 @@ -0,0 +1,225 @@ +#!/usr/bin/python +# +# Copyright 2009 IBM Corp. +# +# Authors: +# Deepti B. Kalakeri +# +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# +# This testcase is used to verify the Created|Modified|Deleted +# RASD Indications for a guest. +# +# Date : 21-09-2009 +# + +import sys +from signal import SIGKILL +from XenKvmLib import vsms +from XenKvmLib import vsms_util +from XenKvmLib.classes import get_typed_class +from XenKvmLib.enumclass import EnumNames +from socket import gethostname +from os import kill, fork, _exit +from XenKvmLib.vxml import get_class +from CimTest.Globals import logger +from XenKvmLib.const import do_main, CIM_DISABLE, get_provider_version +from CimTest.ReturnCodes import PASS, FAIL, SKIP +from XenKvmLib.common_util import poll_for_state_change +from XenKvmLib.indications import sub_ind, handle_request, poll_for_ind + +sup_types = ['KVM', 'Xen', 'XenFV'] +libvirt_guest_rasd_indication_rev = 980 + +nmem = 256 +nmac = '00:11:22:33:44:55' + +def create_guest(test_dom, ip, virt, cxml): + try: + ret = cxml.cim_define(ip) + if not ret: + raise Exception("Failed to define domain %s" % test_dom) + + status, dom_cs = poll_for_state_change(ip, virt, test_dom, + CIM_DISABLE) + if status != PASS: + raise Exception("Dom '%s' not in expected state '%s'" \ + % (test_dom, CIM_DISABLE)) + + except Exception, details: + logger.error("Exception details: %s", details) + return FAIL, cxml + + return PASS, cxml + + +def get_rasd_rec(virt, cn, s_sysname, inst_id): + classname = get_typed_class(virt, cn) + recs = EnumNames(s_sysname, classname) + rasd = None + for rasd_rec in recs: + ret_pool = rasd_rec['InstanceID'] + if ret_pool == inst_id: + rasd 
= rasd_rec + break + + return rasd + +def gen_indication(test_dom, s_sysname, virt, cxml, service, ind_name, + rasd=None, nmem_disk=None): + status = FAIL + try: + + if ind_name == "add": + cn = 'VirtualSystemSettingData' + inst_id = '%s:%s' % (virt, test_dom) + classname = get_typed_class(virt, cn) + vssd_ref = get_rasd_rec(virt, cn, s_sysname, inst_id) + + if vssd_ref == None: + raise Exception("Failed to get vssd_ref for '%s'" % test_dom) + + status = vsms_util.add_disk_res(s_sysname, service, cxml, + vssd_ref, rasd, nmem_disk) + + elif ind_name == "modify": + status = vsms_util.mod_mem_res(s_sysname, service, cxml, + rasd, nmem_disk) + + elif ind_name == 'delete': + cn = 'GraphicsResourceAllocationSettingData' + inst_id = '%s/%s' % (test_dom, "graphics") + classname = get_typed_class(virt, cn) + nrasd = get_rasd_rec(virt, cn, s_sysname, inst_id) + + if nrasd == None: + raise Exception("Failed to get nrasd for '%s'" % test_dom) + + res = service.RemoveResourceSettings(ResourceSettings=[nrasd]) + status = res[0] + + except Exception, details: + logger.error("Exception details :%s", details) + return FAIL + + return status + +@do_main(sup_types) +def main(): + options = main.options + virt = options.virt + s_sysname = options.ip + + cim_rev, changeset = get_provider_version(virt, s_sysname) + if cim_rev < libvirt_guest_rasd_indication_rev: + logger.info("Support for Guest Resource Indications is available in " + "Libvirt-CIM rev '%s'", libvirt_guest_rasd_indication_rev) + return SKIP + + status = FAIL + test_dom = 'VM_' + gethostname() + ind_names = { + 'add' : 'ResourceAllocationSettingDataCreatedIndication', + 'modify' : 'ResourceAllocationSettingDataModifiedIndication', + 'delete' : 'ResourceAllocationSettingDataDeletedIndication' + } + + sub_list, ind_names, dict = sub_ind(s_sysname, virt, ind_names) + virt_xml = get_class(virt) + cxml = virt_xml(test_dom, mac=nmac) + service = vsms.get_vsms_class(options.virt)(options.ip) + ndpath =
cxml.secondary_disk_path + + if virt == 'KVM': + nddev = 'hdb' + else: + nddev = 'xvdb' + + disk_attr = { 'nddev' : nddev, + 'src_path' : ndpath + } + dasd = vsms.get_dasd_class(options.virt)(dev=nddev, + source=cxml.secondary_disk_path, + name=test_dom) + masd = vsms.get_masd_class(options.virt)(megabytes=nmem, name=test_dom) + rasd_info = { 'add' : [dasd, disk_attr], + 'modify' : [masd, nmem] + } + + status, cxml = create_guest(test_dom, s_sysname, virt, cxml) + if status != PASS: + logger.error("Error setting up the guest '%s'" % test_dom) + return FAIL + + for ind in ind_names.keys(): + sub = sub_list[ind] + ind_name = ind_names[ind] + logger.info("\n Verifying '%s' indications ....", ind_name) + + try: + pid = fork() + if pid == 0: + status = handle_request(sub, ind_name, dict, + len(ind_names.keys())) + if status != PASS: + _exit(1) + + _exit(0) + else: + try: + if ind != 'delete': + rasd = rasd_info[ind][0] + val = rasd_info[ind][1] + status = gen_indication(test_dom, s_sysname, + virt, cxml, service, + ind, rasd, val) + else: + status = gen_indication(test_dom, s_sysname, + virt, cxml, service, + ind) + if status != PASS: + raise Exception("Unable to generate indication") + + status = poll_for_ind(pid, ind_name) + if status != PASS: + raise Exception("Poll for indication Failed") + + except Exception, details: + kill(pid, SIGKILL) + raise Exception(details) + + except Exception, details: + logger.error("Exception: %s", details) + status = FAIL + + if status != PASS: + break + + #Make sure all subscriptions are really unsubscribed + for ind, sub in sub_list.iteritems(): + sub.unsubscribe(dict['default_auth']) + logger.info("Cancelling subscription for %s", ind_names[ind]) + + ret = cxml.undefine(s_sysname) + if not ret: + logger.error("Failed to undefine the domain '%s'", test_dom) + return FAIL + + return status +if __name__ == "__main__": + sys.exit(main()) + From rmaciel at linux.vnet.ibm.com Wed Sep 23 01:02:07 2009 From: rmaciel at linux.vnet.ibm.com 
(Richard Maciel) Date: Tue, 22 Sep 2009 22:02:07 -0300 Subject: [Libvirt-cim] [PATCH] [TEST] #2 [TEST] Add new tc RASDIndications/02_guest_add_mod_rem_rasd_ind.py In-Reply-To: <215cbc24f8f95f95543a.1253655196@elm3a148.beaverton.ibm.com> References: <215cbc24f8f95f95543a.1253655196@elm3a148.beaverton.ibm.com> Message-ID: <4AB9738F.3010704@linux.vnet.ibm.com> I've got the following message when executing this test: Testing KVM hypervisor -------------------------------------------------------------------- RASDIndications - 02_guest_add_mod_rem_rasd_ind.py: FAIL ERROR - Error invoking AddRS: add_disk_res ERROR - (1, u'CIM_ERR_FAILED: Internal error (xml generation failed)') ERROR - Exception: Unable to generate indication InvokeMethod(AddResourceSettings): CIM_ERR_FAILED: Internal error (xml generation failed) -------------------------------------------------------------------- I'm executing it on a Fedora 10 system with Pegasus. The command line used is: [root@F10 cimtest]# CIM_NS=root/virt CIM_USER=root CIM_PASS=1mud2ar3 ./runtests libvirt-cim -i localhost -c -d -v KVM -g RASDIndications -t 02_guest_add_mod_rem_rasd_ind.py --------------------------------------------------------- The log (after the system was created) is below. 
I marked some interesting messages with a <--- std_invokemethod.c(305): Method `DefineSystem' returned 0 misc_util.c(75): Connecting to libvirt with uri `qemu:///system' misc_util.c(202): URI of connection is: qemu:///system misc_util.c(202): URI of connection is: qemu:///system device_parsing.c(273): Disk node: disk infostore.c(88): Path is /etc/libvirt/cim/QEMU_VM_F10 instance_util.c(127): Number of keys: 2 instance_util.c(140): Comparing key 0: `CreationClassName' instance_util.c(140): Comparing key 1: `Name' std_indication.c(204): stdi_set_ind_filter_state std_indication.c(48): Ind Filter name (param): KVM_ResourceAllocationSettingDataCreatedIndication std_indication.c(50): Ind Filter name (list): Xen_ResourceAllocationSettingDataCreatedIndication std_indication.c(50): Ind Filter name (list): Xen_ResourceAllocationSettingDataDeletedIndication std_indication.c(50): Ind Filter name (list): Xen_ResourceAllocationSettingDataModifiedIndication std_indication.c(50): Ind Filter name (list): KVM_ResourceAllocationSettingDataCreatedIndication misc_util.c(75): Connecting to libvirt with uri `qemu:///system' misc_util.c(202): URI of connection is: qemu:///system device_parsing.c(273): Disk node: disk device_parsing.c(273): Disk node: disk Virt_VSSD.c(59): bootlist_ct = 1 Virt_VSSD.c(80): BootList[0]=hd Virt_VSSD.c(246): Unknown domain type 3 for creating VSSD <----- misc_util.c(202): URI of connection is: qemu:///system device_parsing.c(273): Disk node: disk Virt_VSSD.c(246): Unknown domain type -1 for creating VSSD device_parsing.c(1037): Unknown domain type -1 misc_util.c(202): URI of connection is: qemu:///system device_parsing.c(273): Disk node: disk Virt_VSSD.c(59): bootlist_ct = 1 Virt_VSSD.c(80): BootList[0]=hd Virt_VSSD.c(246): Unknown domain type 3 for creating VSSD std_invokemethod.c(279): Method `AddResourceSettings' execution attempted std_invokemethod.c(230): Method parameter `AffectedConfiguration' validated type 0x1100 eo_parser.c(100): Parsing MOF-style EI 
std_invokemethod.c(230): Method parameter `ResourceSettings' validated type 0x3000 std_invokemethod.c(303): Executing handler for method `AddResourceSettings' misc_util.c(75): Connecting to libvirt with uri `qemu:///system' device_parsing.c(273): Disk node: disk misc_util.c(75): Connecting to libvirt with uri `qemu:///system' misc_util.c(75): Connecting to libvirt with uri `qemu:///system' infostore.c(88): Path is /etc/libvirt/cim/QEMU_VM_F10 misc_util.c(409): Type is KVM Virt_VirtualSystemManagementService.c(1943): VS `VM_F10' not online; skipping dynamic update <------- xmlgen.c(728): Using existing UUID: 9041cf48-21f4-4d4e-a4cd-4387f27fe669 xmlgen.c(146): Disk: 2 /tmp/default-kvm-dimage hda xmlgen.c(146): Disk: 0 /tmp/default-kvm-dimage.2ND hdb xmlgen.c(791): Failed to create XML: Unknown disk type <---------- std_invokemethod.c(305): Method `AddResourceSettings' returned 1 misc_util.c(75): Connecting to libvirt with uri `qemu:///system' misc_util.c(202): URI of connection is: qemu:///system Virt_HostSystem.c(203): SBLIM: Returned instance On 09/22/2009 06:33 PM, Deepti B. Kalakeri wrote: > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1253655145 25200 > # Node ID 215cbc24f8f95f95543a24ecc7e3b1d80594ecdd > # Parent faf86189f60a2b7e5321996540c390c0598929c9 > [TEST] #2 [TEST] Add new tc RASDIndications/02_guest_add_mod_rem_rasd_ind.py > > Patch 2: > -------- > 1) Checked for RASDIndication support in libvirt-cim > 2) Included support for XenFV > 3) Removed cim_start() fromt the testcase > 4) Undefined the guest at the end of the test. > > To verify the Add|Modify|Deleted RASDIndication for the guest. > > Tested with Xen and current sources on RHEL5.3 and with KVM on F10. > Signed-off-by: Deepti B. 
Kalakeri > > diff -r faf86189f60a -r 215cbc24f8f9 suites/libvirt-cim/cimtest/RASDIndications/02_guest_add_mod_rem_rasd_ind.py > --- /dev/null Thu Jan 01 00:00:00 1970 +0000 > +++ b/suites/libvirt-cim/cimtest/RASDIndications/02_guest_add_mod_rem_rasd_ind.py Tue Sep 22 14:32:25 2009 -0700 > @@ -0,0 +1,225 @@ > +#!/usr/bin/python > +# > +# Copyright 2009 IBM Corp. > +# > +# Authors: > +# Deepti B. Kalakeri > +# > +# > +# This library is free software; you can redistribute it and/or > +# modify it under the terms of the GNU General Public > +# License as published by the Free Software Foundation; either > +# version 2.1 of the License, or (at your option) any later version. > +# > +# This library is distributed in the hope that it will be useful, > +# but WITHOUT ANY WARRANTY; without even the implied warranty of > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > +# General Public License for more details. > +# > +# You should have received a copy of the GNU General Public > +# License along with this library; if not, write to the Free Software > +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA > +# > +# > +# This testcase is used to verify the Created|Modified|Deleted > +# RASD Indications for a guest. 
> +# > +# Date : 21-09-2009 > +# > + > +import sys > +from signal import SIGKILL > +from XenKvmLib import vsms > +from XenKvmLib import vsms_util > +from XenKvmLib.classes import get_typed_class > +from XenKvmLib.enumclass import EnumNames > +from socket import gethostname > +from os import kill, fork, _exit > +from XenKvmLib.vxml import get_class > +from CimTest.Globals import logger > +from XenKvmLib.const import do_main, CIM_DISABLE, get_provider_version > +from CimTest.ReturnCodes import PASS, FAIL, SKIP > +from XenKvmLib.common_util import poll_for_state_change > +from XenKvmLib.indications import sub_ind, handle_request, poll_for_ind > + > +sup_types = ['KVM', 'Xen', 'XenFV'] > +libvirt_guest_rasd_indication_rev = 980 > + > +nmem = 256 > +nmac = '00:11:22:33:44:55' > + > +def create_guest(test_dom, ip, virt, cxml): > + try: > + ret = cxml.cim_define(ip) > + if not ret: > + raise Exception("Failed to define domain %s" % test_dom) > + > + status, dom_cs = poll_for_state_change(ip, virt, test_dom, > + CIM_DISABLE) > + if status != PASS: > + raise Exception("Dom '%s' not in expected state '%s'" \ > + % (test_dom, CIM_DISABLE)) > + > + except Exception, details: > + logger.error("Exception details: %s", details) > + return FAIL, cxml > + > + return PASS, cxml > + > + > +def get_rasd_rec(virt, cn, s_sysname, inst_id): > + classname = get_typed_class(virt, cn) > + recs = EnumNames(s_sysname, classname) > + rasd = None > + for rasd_rec in recs: > + ret_pool = rasd_rec['InstanceID'] > + if ret_pool == inst_id: > + rasd = rasd_rec > + break > + > + return rasd > + > +def gen_indication(test_dom, s_sysname, virt, cxml, service, ind_name, > + rasd=None, nmem_disk=None): > + status = FAIL > + try: > + > + if ind_name == "add": > + cn = 'VirtualSystemSettingData' > + inst_id = '%s:%s' % (virt, test_dom) > + classname = get_typed_class(virt, cn) > + vssd_ref = get_rasd_rec(virt, cn, s_sysname, inst_id) > + > + if vssd_ref == None: > + raise Exception("Failed to get vssd_ref for '%s'" % test_dom) > + > + status = vsms_util.add_disk_res(s_sysname, service, cxml, > + vssd_ref, rasd, nmem_disk) > + > + elif ind_name == "modify": > + status = vsms_util.mod_mem_res(s_sysname, service, cxml, > + rasd, nmem_disk) > + > + elif ind_name == 'delete': > + cn = 'GraphicsResourceAllocationSettingData' > + inst_id = '%s/%s' % (test_dom, "graphics") > + classname = get_typed_class(virt, cn) > + nrasd = get_rasd_rec(virt, cn, s_sysname, inst_id) > + > + if nrasd == None: > + raise Exception("Failed to get nrasd for '%s'" % test_dom) > + > + res = service.RemoveResourceSettings(ResourceSettings=[nrasd]) > + status = res[0] > + > + except Exception, details: > + logger.error("Exception details :%s", details) > + return FAIL > + > + return status > + > +@do_main(sup_types) > +def main(): > + options = main.options > + virt = options.virt > + s_sysname = options.ip > + > + cim_rev, changeset = get_provider_version(virt, s_sysname) > + if cim_rev < libvirt_guest_rasd_indication_rev: > + logger.info("Support for Guest Resource Indications is available in " > + "Libvirt-CIM rev '%s'", libvirt_guest_rasd_indication_rev) > + return SKIP > + > + status = FAIL > + test_dom = 'VM_' + gethostname() > + ind_names = { > + 'add' : 'ResourceAllocationSettingDataCreatedIndication', > + 'modify' : 'ResourceAllocationSettingDataModifiedIndication', > + 'delete' : 'ResourceAllocationSettingDataDeletedIndication' > + } > + > + sub_list, ind_names, dict = sub_ind(s_sysname, virt, ind_names) > + virt_xml = get_class(virt) > + cxml = virt_xml(test_dom, mac=nmac) > + service = vsms.get_vsms_class(options.virt)(options.ip) > + ndpath = cxml.secondary_disk_path > + > + if virt == 'KVM': > + nddev = 'hdb' > + else: > + nddev = 'xvdb' > + > + disk_attr = { 'nddev' : nddev, > + 'src_path' : ndpath > + } > + dasd = vsms.get_dasd_class(options.virt)(dev=nddev, > + source=cxml.secondary_disk_path, > + name=test_dom) > + masd = vsms.get_masd_class(options.virt)(megabytes=nmem,
name=test_dom) > + rasd_info = { 'add' : [dasd, disk_attr], > + 'modify' : [masd, nmem] > + } > + > + status, cxml = create_guest(test_dom, s_sysname, virt, cxml) > + if status != PASS: > + logger.error("Error setting up the guest '%s'" % test_dom) > + return FAIL > + > + for ind in ind_names.keys(): > + sub = sub_list[ind] > + ind_name = ind_names[ind] > + logger.info("\n Verifying '%s' indications ....", ind_name) > + > + try: > + pid = fork() > + if pid == 0: > + status = handle_request(sub, ind_name, dict, > + len(ind_names.keys())) > + if status != PASS: > + _exit(1) > + > + _exit(0) > + else: > + try: > + if ind != 'delete': > + rasd = rasd_info[ind][0] > + val = rasd_info[ind][1] > + status = gen_indication(test_dom, s_sysname, > + virt, cxml, service, > + ind, rasd, val) > + else: > + status = gen_indication(test_dom, s_sysname, > + virt, cxml, service, > + ind) > + if status != PASS: > + raise Exception("Unable to generate indication") > + > + status = poll_for_ind(pid, ind_name) > + if status != PASS: > + raise Exception("Poll for indication Failed") > + > + except Exception, details: > + kill(pid, SIGKILL) > + raise Exception(details) > + > + except Exception, details: > + logger.error("Exception: %s", details) > + status = FAIL > + > + if status != PASS: > + break > + > + #Make sure all subscriptions are really unsubscribed > + for ind, sub in sub_list.iteritems(): > + sub.unsubscribe(dict['default_auth']) > + logger.info("Cancelling subscription for %s", ind_names[ind]) > + > + ret = cxml.undefine(s_sysname) > + if not ret: > + logger.error("Failed to undefine the domain '%s'", test_dom) > + return FAIL > + > + return status > +if __name__ == "__main__": > + sys.exit(main()) > + > -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com From snmishra at us.ibm.com Wed Sep 23 06:17:23 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Tue, 22 Sep 2009 23:17:23 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] #2 [TEST] Add new tc 
RASDIndications/02_guest_add_mod_rem_rasd_ind.py In-Reply-To: <215cbc24f8f95f95543a.1253655196@elm3a148.beaverton.ibm.com> References: <215cbc24f8f95f95543a.1253655196@elm3a148.beaverton.ibm.com> Message-ID: I am seeing the following error - [root at elm3a148 cimtest]# CIM_NS=root/virt CIM_USER=root CIM_PASS=elm3a148 ./runtests libvirt-cim -i localhost -c -d -v KVM -g RASDIndications -t 02_guest_add_mod_rem_rasd_ind.py Starting test suite: libvirt-cim Cleaned log files. Testing KVM hypervisor -------------------------------------------------------------------- RASDIndications - 02_guest_add_mod_rem_rasd_ind.py: FAIL ERROR - Did not recieve indication KVM_ResourceAllocationSettingDataModifiedIndication ERROR - Received Indication error: '256' ERROR - Exception: [Errno 3] No such process -------------------------------------------------------------------- Running on Fedora 11 with sfcb. Thanks Sharad Mishra System x Enablement Linux Technology Center IBM libvirt-cim-bounces at redhat.com wrote on 09/22/2009 02:33:16 PM: > "Deepti B. Kalakeri" > Sent by: libvirt-cim-bounces at redhat.com > > 09/22/2009 02:33 PM > > Please respond to > List for discussion and development of libvirt CIM > > To > > libvirt-cim at redhat.com > > cc > > Subject > > [Libvirt-cim] [PATCH] [TEST] #2 [TEST] Add new tc RASDIndications/02_guest_add_mod_rem_rasd_ind.py > > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1253655145 25200 > # Node ID 215cbc24f8f95f95543a24ecc7e3b1d80594ecdd > # Parent faf86189f60a2b7e5321996540c390c0598929c9 > [TEST] #2 [TEST] Add new tc RASDIndications/02_guest_add_mod_rem_rasd_ind.py > > Patch 2: > -------- > 1) Checked for RASDIndication support in libvirt-cim > 2) Included support for XenFV > 3) Removed cim_start() from the testcase > 4) Undefined the guest at the end of the test. > > To verify the Add|Modify|Deleted RASDIndication for the guest. > > Tested with Xen and current sources on RHEL5.3 and with KVM on F10. > Signed-off-by: Deepti B.
Kalakeri > > diff -r faf86189f60a -r 215cbc24f8f9 suites/libvirt-cim/cimtest/ > RASDIndications/02_guest_add_mod_rem_rasd_ind.py > --- /dev/null Thu Jan 01 00:00:00 1970 +0000 > +++ b/suites/libvirt-cim/cimtest/RASDIndications/ > 02_guest_add_mod_rem_rasd_ind.py Tue Sep 22 14:32:25 2009 -0700 > @@ -0,0 +1,225 @@ > +#!/usr/bin/python > +# > +# Copyright 2009 IBM Corp. > +# > +# Authors: > +# Deepti B. Kalakeri > +# > +# > +# This library is free software; you can redistribute it and/or > +# modify it under the terms of the GNU General Public > +# License as published by the Free Software Foundation; either > +# version 2.1 of the License, or (at your option) any later version. > +# > +# This library is distributed in the hope that it will be useful, > +# but WITHOUT ANY WARRANTY; without even the implied warranty of > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > +# General Public License for more details. > +# > +# You should have received a copy of the GNU General Public > +# License along with this library; if not, write to the Free Software > +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA > +# > +# > +# This testcase is used to verify the Created|Modified|Deleted > +# RASD Indications for a guest. 
> +# > +# Date : 21-09-2009 > +# > + > +import sys > +from signal import SIGKILL > +from XenKvmLib import vsms > +from XenKvmLib import vsms_util > +from XenKvmLib.classes import get_typed_class > +from XenKvmLib.enumclass import EnumNames > +from socket import gethostname > +from os import kill, fork, _exit > +from XenKvmLib.vxml import get_class > +from CimTest.Globals import logger > +from XenKvmLib.const import do_main, CIM_DISABLE, get_provider_version > +from CimTest.ReturnCodes import PASS, FAIL, SKIP > +from XenKvmLib.common_util import poll_for_state_change > +from XenKvmLib.indications import sub_ind, handle_request, poll_for_ind > + > +sup_types = ['KVM', 'Xen', 'XenFV'] > +libvirt_guest_rasd_indication_rev = 980 > + > +nmem = 256 > +nmac = '00:11:22:33:44:55' > + > +def create_guest(test_dom, ip, virt, cxml): > + try: > + ret = cxml.cim_define(ip) > + if not ret: > + raise Exception("Failed to define domain %s" % test_dom) > + > + status, dom_cs = poll_for_state_change(ip, virt, test_dom, > + CIM_DISABLE) > + if status != PASS: > + raise Exception("Dom '%s' not in expected state '%s'" \ > + % (test_dom, CIM_DISABLE)) > + > + except Exception, details: > + logger.error("Exception details: %s", details) > + return FAIL, cxml > + > + return PASS, cxml > + > + > +def get_rasd_rec(virt, cn, s_sysname, inst_id): > + classname = get_typed_class(virt, cn) > + recs = EnumNames(s_sysname, classname) > + rasd = None > + for rasd_rec in recs: > + ret_pool = rasd_rec['InstanceID'] > + if ret_pool == inst_id: > + rasd = rasd_rec > + break > + > + return rasd > + > +def gen_indication(test_dom, s_sysname, virt, cxml, service, ind_name, > + rasd=None, nmem_disk=None): > + status = FAIL > + try: > + > + if ind_name == "add": > + cn = 'VirtualSystemSettingData' > + inst_id = '%s:%s' % (virt, test_dom) > + classname = get_typed_class(virt, cn) > + vssd_ref = get_rasd_rec(virt, cn, s_sysname, inst_id) > + > + if vssd_ref == None: > + raise Exception("Failed to get vssd_ref for '%s'" % test_dom) > + > + status = vsms_util.add_disk_res(s_sysname, service, cxml, > + vssd_ref, rasd, nmem_disk) > + > + elif ind_name == "modify": > + status = vsms_util.mod_mem_res(s_sysname, service, cxml, > + rasd, nmem_disk) > + > + elif ind_name == 'delete': > + cn = 'GraphicsResourceAllocationSettingData' > + inst_id = '%s/%s' % (test_dom, "graphics") > + classname = get_typed_class(virt, cn) > + nrasd = get_rasd_rec(virt, cn, s_sysname, inst_id) > + > + if nrasd == None: > + raise Exception("Failed to get nrasd for '%s'" % test_dom) > + > + res = service.RemoveResourceSettings(ResourceSettings=[nrasd]) > + status = res[0] > + > + except Exception, details: > + logger.error("Exception details :%s", details) > + return FAIL > + > + return status > + > +@do_main(sup_types) > +def main(): > + options = main.options > + virt = options.virt > + s_sysname = options.ip > + > + cim_rev, changeset = get_provider_version(virt, s_sysname) > + if cim_rev < libvirt_guest_rasd_indication_rev: > + logger.info("Support for Guest Resource Indications is available in " > + "Libvirt-CIM rev '%s'", libvirt_guest_rasd_indication_rev) > + return SKIP > + > + status = FAIL > + test_dom = 'VM_' + gethostname() > + ind_names = { > + 'add' : 'ResourceAllocationSettingDataCreatedIndication', > + 'modify' : 'ResourceAllocationSettingDataModifiedIndication', > + 'delete' : 'ResourceAllocationSettingDataDeletedIndication' > + } > + > + sub_list, ind_names, dict = sub_ind(s_sysname, virt, ind_names) > + virt_xml = get_class(virt) > + cxml = virt_xml(test_dom, mac=nmac) > + service = vsms.get_vsms_class(options.virt)(options.ip) > + ndpath = cxml.secondary_disk_path > + > + if virt == 'KVM': > + nddev = 'hdb' > + else: > + nddev = 'xvdb' > + > + disk_attr = { 'nddev' : nddev, > + 'src_path' : ndpath > + } > + dasd = vsms.get_dasd_class(options.virt)(dev=nddev, > + source=cxml.secondary_disk_path, > + name=test_dom) > + masd = vsms.get_masd_class(options.virt)(megabytes=nmem,
name=test_dom) > + rasd_info = { 'add' : [dasd, disk_attr], > + 'modify' : [masd, nmem] > + } > + > + status, cxml = create_guest(test_dom, s_sysname, virt, cxml) > + if status != PASS: > + logger.error("Error setting up the guest '%s'" % test_dom) > + return FAIL > + > + for ind in ind_names.keys(): > + sub = sub_list[ind] > + ind_name = ind_names[ind] > + logger.info("\n Verifying '%s' indications ....", ind_name) > + > + try: > + pid = fork() > + if pid == 0: > + status = handle_request(sub, ind_name, dict, > + len(ind_names.keys())) > + if status != PASS: > + _exit(1) > + > + _exit(0) > + else: > + try: > + if ind != 'delete': > + rasd = rasd_info[ind][0] > + val = rasd_info[ind][1] > + status = gen_indication(test_dom, s_sysname, > + virt, cxml, service, > + ind, rasd, val) > + else: > + status = gen_indication(test_dom, s_sysname, > + virt, cxml, service, > + ind) > + if status != PASS: > + raise Exception("Unable to generate indication") > + > + status = poll_for_ind(pid, ind_name) > + if status != PASS: > + raise Exception("Poll for indication Failed") > + > + except Exception, details: > + kill(pid, SIGKILL) > + raise Exception(details) > + > + except Exception, details: > + logger.error("Exception: %s", details) > + status = FAIL > + > + if status != PASS: > + break > + > + #Make sure all subscriptions are really unsubscribed > + for ind, sub in sub_list.iteritems(): > + sub.unsubscribe(dict['default_auth']) > + logger.info("Cancelling subscription for %s", ind_names[ind]) > + > + ret = cxml.undefine(s_sysname) > + if not ret: > + logger.error("Failed to undefine the domain '%s'", test_dom) > + return FAIL > + > + return status > +if __name__ == "__main__": > + sys.exit(main()) > + > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From snmishra at us.ibm.com Wed Sep 23 06:23:26 2009 From: snmishra at us.ibm.com (Sharad Mishra) Date: Tue, 22 Sep 2009 23:23:26 -0700 Subject: [Libvirt-cim] [PATCH] [TEST] #2 Add new tc RASDIndications/01_guest_states_rasd_ind.py In-Reply-To: References: Message-ID: Seeing this error with FC11 and sfcb. [root at elm3a148 cimtest]# CIM_NS=root/virt CIM_USER=root CIM_PASS=elm3a148 ./runtests libvirt-cim -i localhost -c -d -v KVM -g RASDIndications -t 01_guest_states_rasd_ind.py Starting test suite: libvirt-cim Cleaned log files. Testing KVM hypervisor -------------------------------------------------------------------- RASDIndications - 01_guest_states_rasd_ind.py: FAIL ERROR - Did not recieve indication KVM_ResourceAllocationSettingDataDeletedIndication ERROR - Received Indication error: '256' -------------------------------------------------------------------- Thanks Sharad Mishra System x Enablement Linux Technology Center IBM libvirt-cim-bounces at redhat.com wrote on 09/22/2009 02:17:50 PM: > "Deepti B. Kalakeri" > Sent by: libvirt-cim-bounces at redhat.com > > 09/22/2009 02:17 PM > > Please respond to > List for discussion and development of libvirt CIM > > To > > libvirt-cim at redhat.com > > cc > > Subject > > [Libvirt-cim] [PATCH] [TEST] #2 Add new tc RASDIndications/ > 01_guest_states_rasd_ind.py > > # HG changeset patch > # User Deepti B. Kalakeri > # Date 1253654207 25200 > # Node ID faf86189f60a2b7e5321996540c390c0598929c9 > # Parent f5c62f54d1204d38ce15e48d269d3e887da69937 > [TEST] #2 Add new tc RASDIndications/01_guest_states_rasd_ind.py > > Patch 2: > -------- > 1) Checked for RASDIndication support in libvirt-cim > 2) Included support for XenFV > > To verify the Add|Deleted RASDIndication for the guest. > Tested with Xen and current sources on RHEL5.3 and KVM with F10. > Signed-off-by: Deepti B. 
Kalakeri > > diff -r f5c62f54d120 -r faf86189f60a suites/libvirt-cim/cimtest/ > RASDIndications/01_guest_states_rasd_ind.py > --- /dev/null Thu Jan 01 00:00:00 1970 +0000 > +++ b/suites/libvirt-cim/cimtest/RASDIndications/ > 01_guest_states_rasd_ind.py Tue Sep 22 14:16:47 2009 -0700 > @@ -0,0 +1,164 @@ > +#!/usr/bin/python > +# > +# Copyright 2009 IBM Corp. > +# > +# Authors: > +# Deepti B. Kalakeri > +# > +# > +# This library is free software; you can redistribute it and/or > +# modify it under the terms of the GNU General Public > +# License as published by the Free Software Foundation; either > +# version 2.1 of the License, or (at your option) any later version. > +# > +# This library is distributed in the hope that it will be useful, > +# but WITHOUT ANY WARRANTY; without even the implied warranty of > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > +# General Public License for more details. > +# > +# You should have received a copy of the GNU General Public > +# License along with this library; if not, write to the Free Software > +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA > +# > +# > +# This testcase is used to verify the Created|Deleted > +# RASD Indications for a guest. 
> +# > +# Date : 21-09-2009 > +# > + > +import sys > +from signal import SIGKILL > +from socket import gethostname > +from os import kill, fork, _exit > +from XenKvmLib.vxml import get_class > +from XenKvmLib.xm_virt_util import active_domain_list > +from CimTest.Globals import logger > +from XenKvmLib.const import do_main, CIM_ENABLE, CIM_DISABLE, \ > + get_provider_version > +from CimTest.ReturnCodes import PASS, FAIL, SKIP > +from XenKvmLib.common_util import poll_for_state_change > +from XenKvmLib.indications import sub_ind, handle_request, poll_for_ind > + > +sup_types = ['KVM', 'Xen', 'XenFV'] > +libvirt_guest_rasd_indication_rev = 980 > + > +def create_guest(test_dom, ip, virt, cxml, ind_name): > + try: > + ret = cxml.cim_define(ip) > + if not ret: > + raise Exception("Failed to define domain %s" % test_dom) > + > + status, dom_cs = poll_for_state_change(ip, virt, test_dom, > + CIM_DISABLE) > + if status != PASS: > + raise Exception("Dom '%s' not in expected state '%s'" \ > + % (test_dom, CIM_DISABLE)) > + > + ret = cxml.cim_start(ip) > + if ret: > + raise Exception("Failed to start the domain '%s'" % test_dom) > + cxml.undefine(ip) > + > + status, dom_cs = poll_for_state_change(ip, virt, test_dom, > + CIM_ENABLE) > + if status != PASS: > + raise Exception("Dom '%s' not in expected state '%s'" \ > + % (test_dom, CIM_ENABLE)) > + > + except Exception, details: > + logger.error("Exception details: %s", details) > + return FAIL, cxml > + > + return PASS, cxml > + > +def gen_indication(test_dom, s_sysname, virt, cxml, ind_name): > + status = FAIL > + try: > + active_doms = active_domain_list(s_sysname, virt) > + if test_dom not in active_doms: > + status, cxml = create_guest(test_dom, s_sysname, virt, cxml, ind_name) > + if status != PASS: > + raise Exception("Error setting up the guest '%s'" % test_dom) > + > + if ind_name == "delete": > + ret = cxml.cim_destroy(s_sysname) > + if not ret: > + raise Exception("Failed to destroy domain '%s'" % test_dom) > + > + except Exception, details: > + logger.error("Exception details :%s", details) > + return FAIL, cxml > + > + return PASS, cxml > + > +@do_main(sup_types) > +def main(): > + options = main.options > + virt = options.virt > + s_sysname = options.ip > + > + cim_rev, changeset = get_provider_version(virt, s_sysname) > + if cim_rev < libvirt_guest_rasd_indication_rev: > + logger.info("Support for Guest Resource Indications is available in " > + "Libvirt-CIM rev '%s'", libvirt_guest_rasd_indication_rev) > + return SKIP > + > + status = FAIL > + test_dom = 'VM_' + gethostname() > + ind_names = { > + 'create' : 'ResourceAllocationSettingDataCreatedIndication', > + 'delete' : 'ResourceAllocationSettingDataDeletedIndication' > + } > + > + virt_xml = get_class(virt) > + cxml = virt_xml(test_dom) > + sub_list, ind_names, dict = sub_ind(s_sysname, virt, ind_names) > + for ind in ind_names.keys(): > + sub = sub_list[ind] > + ind_name = ind_names[ind] > + logger.info("\n Verifying '%s' indications ....", ind_name) > + > + try: > + pid = fork() > + if pid == 0: > + status = handle_request(sub, ind_name, dict, > + len(ind_names.keys())) > + if status != PASS: > + _exit(1) > + _exit(0) > + else: > + try: > + status, cxml = gen_indication(test_dom, s_sysname, > + virt, cxml, ind) > + if status != PASS: > + raise Exception("Unable to generate indication") > + > + status = poll_for_ind(pid, ind_name) > + except Exception, details: > + kill(pid, SIGKILL) > + raise Exception(details) > + > + except Exception, details: > + logger.error("Exception: %s", details) > + status = FAIL > + > + if status != PASS: > + break > + > + #Make sure all subscriptions are really unsubscribed > + for ind, sub in sub_list.iteritems(): > + sub.unsubscribe(dict['default_auth']) > + logger.info("Cancelling subscription for %s", ind_names[ind]) > + > + active_doms = active_domain_list(s_sysname, virt) > + if test_dom in active_doms: > + ret = cxml.cim_destroy(s_sysname) > + if not ret: > +
logger.error("Failed to Destroy the domain") > + return FAIL > + > + return status > +if __name__ == "__main__": > + sys.exit(main()) > + > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmaciel at linux.vnet.ibm.com Wed Sep 23 14:49:31 2009 From: rmaciel at linux.vnet.ibm.com (Richard Maciel) Date: Wed, 23 Sep 2009 11:49:31 -0300 Subject: [Libvirt-cim] [PATCH] Cleanup _get_rasds() in Virt_RASD.c In-Reply-To: <81b6cd4ae355024303a8.1253643350@jfehlig3.provo.novell.com> References: <81b6cd4ae355024303a8.1253643350@jfehlig3.provo.novell.com> Message-ID: <4ABA357B.8090904@linux.vnet.ibm.com> +1 On 09/22/2009 03:15 PM, Jim Fehlig wrote: > # HG changeset patch > # User Jim Fehlig > # Date 1253641563 21600 > # Node ID 81b6cd4ae355024303a8459817b4f15339d17111 > # Parent 7c5106b0b092147c521ef1f462b9a41a44a313f8 > Cleanup _get_rasds() in Virt_RASD.c > > I received a bug report about a memory leak in _get_rasds(). While > fixing the leak, I took the opportunity to do some other tidying in > this function. > > Signed-off-by: Jim Fehlig > > diff -r 7c5106b0b092 -r 81b6cd4ae355 src/Virt_RASD.c > --- a/src/Virt_RASD.c Wed Sep 16 11:49:21 2009 -0700 > +++ b/src/Virt_RASD.c Tue Sep 22 11:46:03 2009 -0600 > @@ -664,6 +664,7 @@ > int count; > int i; > struct virt_device *devs = NULL; > + const char *host = NULL; > > count = get_devices(dom,&devs, type); > if (count<= 0) > @@ -672,8 +673,13 @@ > /* Bit hackish, but for proc we need to cut list down to one. 
*/ > if (type == CIM_RES_TYPE_PROC) { > struct virt_device *tmp_dev = NULL; > - tmp_dev = calloc(1, sizeof(*tmp_dev)); > tmp_dev = virt_device_dup(&devs[count - 1]); > + if (tmp_dev == NULL) { > + cu_statusf(broker,&s, > + CMPI_RC_ERR_FAILED, > + "Failed to allocate memory for proc RASD"); > + goto out; > + } > > tmp_dev->id = strdup("proc"); > > @@ -685,15 +691,16 @@ > count = 1; > } > > + host = virDomainGetName(dom); > + if (host == NULL) { > + cu_statusf(broker,&s, > + CMPI_RC_ERR_FAILED, > + "Failed to get domain name"); > + goto out; > + } > + > for (i = 0; i< count; i++) { > CMPIInstance *dev = NULL; > - const char *host = NULL; > - > - host = virDomainGetName(dom); > - if (host == NULL) { > - cleanup_virt_device(&devs[i]); > - continue; > - } > > dev = rasd_from_vdev(broker, > &devs[i], > > _______________________________________________ > Libvirt-cim mailing list > Libvirt-cim at redhat.com > https://www.redhat.com/mailman/listinfo/libvirt-cim -- Richard Maciel, MSc IBM Linux Technology Center rmaciel at linux.vnet.ibm.com