Resolution: Use the 64-bit version of statvfs(), that is, statvfs64(), to obtain the information about the mounted file systems. (SR: QXCR1001108631)

SYMANTEC Incident Number: 2231022 (2029

VxVM is not able to view the attached disk (which was disconnected earlier) when the disk is reconnected to the host. (SR: QXCR1001012439)

SYMANTEC Incident Number: 1907668 (1742702) When the Logical

The pipe between the script and the vxrecover(1M) command is not closed properly, and it keeps calling the script while waiting for the vxrecover(1M) command to complete.
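The fix above swaps the 32-bit statvfs() interface for statvfs64(), whose 64-bit block counters do not overflow on very large file systems. A minimal sketch of the 64-bit call follows; the helper name and the byte-size calculation are illustrative assumptions, not code from the patch:

```c
/* Query mounted-file-system capacity with the 64-bit statvfs64()
 * interface, avoiding the overflow that the 32-bit statvfs() can hit
 * on very large file systems. Illustrative sketch only. */
#define _LARGEFILE64_SOURCE   /* expose statvfs64() on glibc */
#include <sys/statvfs.h>

/* Hypothetical helper: returns the total size of the file system
 * holding `path`, in bytes, or 0 on error. */
unsigned long long fs_total_bytes(const char *path)
{
    struct statvfs64 sb;
    if (statvfs64(path, &sb) != 0)
        return 0;
    /* f_frsize is the fundamental block size; f_blocks is a 64-bit
     * count here, so the product does not wrap for multi-terabyte
     * file systems the way a 32-bit fsblkcnt_t product can. */
    return (unsigned long long)sb.f_blocks * sb.f_frsize;
}
```

On a modern system the same effect is commonly obtained by compiling with 64-bit file offsets and calling plain statvfs(); the patch names statvfs64() explicitly, so that is what is shown here.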
Thin provisioning support is not enabled for the P9500 array. (SR: QXCR1001178905)

SYMANTEC Incident Number: 2578845 (2233611) The Hitachi Array Support Library (ASL), libvxhdsusp, does not claim the VSP R700 array.

Problem: After adding 2 new nodes to a 5-node RAC cluster, an error is

Therefore, it is not possible to determine whether the CVM reconfiguration process is hung or progressing, and up to what state.
Resolution: The code is modified to abort the kernel-land transaction immediately to avoid the hang when it fails on the master during CVM reconfiguration. (SR: QXCR1001178906)

SYMANTEC Incident Number: 2578858

The reason I ask is that it appears that your fencing disks are imported on node 1.

The problem is visible in the presence of the log subdisks.

VxVM is not able to view the attached disk (which was disconnected earlier) when the disk is reconnected to the host. (SR: QXCR1001123243)

SYMANTEC Incident Number: 2262429 (1742702) When the Logical
  Filesystem has been resized to 2097153 sectors
  vxfs fsadm: ERROR: V-3-23643: Retry the operation after freeing up some space
  VxVM vxresize ERROR V-5-1-7514 Problem running fsadm command for volume testvol, in

VxVM vxdg ERROR V-5-1-10978 Disk group import failed

In some cases, a *.dginfo file has two lines starting with "dgid:". Then another configuration change in the same private DG results in a duplicate record in the VxVM configuration database, leading to a core dump or a DG import issue.

To uninstall SFCFS:
  cd /opt/VRTS/install
  ./uninstallinfr

Troubleshooting: watch the logs:
  cd /var/VRTSvcs/log
  ls -ltr
  tail -n 500 -F engine_A.log

VxVM vxdisksetup ERROR V-5-2-3461 sdb: Disk does not have
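The duplicate-record condition above starts with a backup file that carries two "dgid:" lines. Detecting that state amounts to a small scan of the file; the function name, and the decision to treat more than one "dgid:" line as the corruption symptom, are assumptions made for illustration and are not part of the vxconfigbackup(1M) tooling:

```c
/* Count lines beginning with "dgid:" in a .dginfo-style backup file.
 * A healthy file is expected to carry exactly one such line; two or
 * more match the duplicated-record symptom described above.
 * Illustrative sketch only. */
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: returns the number of "dgid:" lines,
 * or -1 if the file cannot be opened. */
int count_dgid_lines(const char *dginfo_path)
{
    FILE *fp = fopen(dginfo_path, "r");
    if (fp == NULL)
        return -1;

    char line[1024];
    int count = 0;
    while (fgets(line, sizeof line, fp) != NULL) {
        /* Match only lines that start with the key, not ones that
         * merely contain it somewhere in a value. */
        if (strncmp(line, "dgid:", 5) == 0)
            count++;
    }
    fclose(fp);
    return count;
}
```

A return value greater than 1 would flag the backup file as suspect before it is fed to dgcfgrestore(1M).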
The core dump displays the following stack trace:
  req_dg_get_info_common+0x14e0
  req_dg_get_info_name+0x210
  request_loop+0x23f0
  main+0x2f30
(SR: QXCR1000792590)

SYMANTEC Incident Number: 1314248 (1237998) When a storage enclosure is disconnected from the server, running "vxdmpadm listenclosure

The startup script "/sbin/init.d/vxvm-startup2" does not check for the presence of the "install-db" file and starts the vxesd(1M) daemon. (SR: QXCR1001190574)

SYMANTEC Incident Number: 2658078 (2070531) When the "siteconsistent" attribute is

(SR: QXCR1001108622) SYMANTEC Incident Number: 2231009 (1528932) In the Cluster Volume Manager (CVM) environment, when the "vxdg destroy
What is going on? The coordinator DG should be deported on all the nodes.

"cannot create: Error in cluster processing"

However, it is possible to set the "siteconsistent" attribute.

vxconfigd is currently disabled.

When the vxconfigd(1M)-level CVM join is hung in the user layer, 'vxdctl(1M) -c mode' on the SLAVE node displays the following:
  bash-3.00# vxdctl -c mode
  mode: enabled: cluster active -
The three different scenarios under which the command dumps core are as follows:

In the first scenario, the vradmind daemon dumps core when the log owner node is switched frequently.

In the third scenario, the operation of repeatedly creating and deleting a Replicator Volume Group (RVG) along with multiple "vrstat -n 2" commands causes the vradmind daemon to dump core. (SR:

The target path h/w path is 0/6/1/0.0x50001fe1500ccc59
  Nov 27 23:24:07 hp3600b vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x990 belonging to the dmpnode 5/0x20
  Nov 27 23:24:07 hp3600b vmunix: NOTICE: VxVM

Disk private region contents are invalid.

To remove a volume:
  # vxassist remove volume vol0
You should now be able to access the volume from any node in the cluster.

"Remove the install-db file to proceed" (SR: QXCR1001158817)

SYMANTEC Incident Number: 2280641 (2280624) At least two sites must be added in the Disk Group (DG) before setting the 'siteconsistent' flag to
The '-n' option reads the configuration from the default configuration file, and dgcfgrestore(1M) fails. This causes the vxconfigbackup(1M) command to fail.
But the "install-db" file exists on the system for a VxVM rootable system, and this leads to the system being unbootable.

For example:
  # vxdisk list cXtXdXs2
  Device: diskX_p2
  devicetag: cXtXdXs2
  ..

This leads to a memory leak.

PHCO_38412: (SR: QXCR1000863582) SYMANTEC Incident Number: 1413261 The issue occurs in a CVM campus cluster setup with two sites wherein the master node and the slave node are each located
Hence, the command acts on the incorrect device.

The presence of the install-db file prevents vxconfigd from starting up, thereby making the system unbootable.

The vxconfigd daemon is found in a tight loop with the following stack trace:
  msgtail()
  msg()
  send_slaves()
  master_send_abort()
  send_slaves()
  master_get_results()
  commit()
  req_vol_commit()
  request_loop()
  main()
  __start()

The following messages can be seen

This causes backup failure.
Resolution: The code is modified to verify whether the "install-db" file exists on the system.

The log volumes are recovered in a serial manner by design.

To destroy a DG:
  # vxdg destroy group0

Create a volume: on only one node, create a new volume with that disk (50 GB here):
  # vxassist maxsize
  # vxassist make vol0 104820736

If they are removed, the problem is not seen. (SR: QXCR1001030289)

SYMANTEC Incident Number: 1969412 (786357) The voldrl_volumemax_drtregs variable, which sets the maximum number of dirty regions allowed in a volume, should
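The resolution above amounts to an existence test for the "install-db" marker before any VxVM daemon is started. A sketch of that test is shown below in C; the marker path (/etc/vx/reconfig.d/state.d/install-db is the conventional location) and the function name are assumptions for illustration, since the actual fix lives in the "/sbin/init.d/vxvm-startup2" startup script:

```c
/* Check for the VxVM "install-db" marker file before starting a
 * daemon: when the marker is present, vxconfigd and vxesd(1M) must
 * not be started. Illustrative sketch; the real fix is an existence
 * test added to the /sbin/init.d/vxvm-startup2 startup script. */
#include <unistd.h>

/* Hypothetical helper: returns 1 if the marker path exists, else 0. */
int install_db_present(const char *marker_path)
{
    /* F_OK asks only whether the path exists, which is exactly the
     * condition the startup script needs to gate on. */
    return access(marker_path, F_OK) == 0;
}
```

In the startup script itself the equivalent guard is a plain file-existence test performed before vxesd(1M) is launched.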
In this specific case, other commands like "vxdmpadm(1M) getsubpaths dmpnode=<>"