###########sample 1

OCR corruption messages are reported in crsd.log and the automatic OCR backup is failing. ocrcheck complains "Device/File integrity check failed":

[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
        Version                  :          3
        Total space (kbytes)     :     262120
        Used space (kbytes)      :       3372
        Available space (kbytes) :     258748
        ID                       : 1423232882
        Device/File Name         :   +DBFS_DG
                                   Device/File integrity check failed <<<<<<<<<<<<<<<<<<<<<<<<

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check failed

Logical corruption check bypassed due to insufficient quorum
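The failure shown above can also be detected non-interactively by scanning captured ocrcheck output for the failure string. A minimal sketch, assuming the output has been saved to a file (the /tmp path here is illustrative, not an Oracle convention):

```shell
# Simulate captured ocrcheck output containing the failure lines above,
# then scan it for integrity failures before relying on the registry.
cat > /tmp/ocrcheck.out <<'EOF'
Device/File Name         :   +DBFS_DG
                           Device/File integrity check failed
Cluster registry integrity check failed
EOF

if grep -q "integrity check failed" /tmp/ocrcheck.out; then
  echo "OCR corruption detected - restore from backup required"
fi
```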

Note that for 11.2.0.x versions, the logs mentioned below are found under <GRID_HOME>/log; for versions 12.1.0.2 and higher, they are found under <GRID_BASE>/diag.

alert<racnode1>.log shows:

[crsd(77158)]CRS-1006:The OCR location +DBFS_DG is inaccessible. Details in /u01/app/11.2.0.3/grid/log/racnode1/crsd/crsd.log.
2014-07-28 19:12:18.023: 
[/u01/app/11.2.0.3/grid/bin/orarootagent.bin(77413)]CRS-5822:Agent '/u01/app/11.2.0.3/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:2:6} in /u01/app/11.2.0.3/grid/log/racnode1/agent/crsd/orarootagent_root/orarootagent_root.log.
2014-07-28 19:12:47.718: 
[ohasd(40904)]CRS-2765:Resource 'ora.crsd' has failed on server 'racnode1'.
2014-07-28 19:12:54.369: 
[crsd(9099)]CRS-1012:The OCR service started on node racnode1.
2014-07-28 19:12:55.702: 
[crsd(9099)]CRS-1201:CRSD started on node racnode1.
2014-07-29 03:45:36.471: 
[crsd(9099)]CRS-1006:The OCR location +DBFS_DG is inaccessible. Details in /u01/app/11.2.0.3/grid/log/racnode1/crsd/crsd.log.

crsd.log shows:

2014-07-31 07:13:09.240: [  OCRRAW][2175183168]proprior:1 Retrying buffer read from another mirror for disk group [+DBFS_DG] for block at offset [7696384]
2014-07-31 07:13:09.244: [  OCRASM][2175183168]proprasmres: Block from mirror #1 is same as buffer passed
2014-07-31 07:13:09.254: [  OCRASM][2175183168]proprasmres: Block from mirror #2 is same as buffer passed
2014-07-31 07:13:09.278: [  OCRASM][2175183168]proprasmres: Total 2 mirrors detected
2014-07-31 07:13:09.278: [  OCRASM][2175183168]proprasmres: Block from mirror #1 same as block from mirror #2
2014-07-31 07:13:09.278: [  OCRASM][2175183168]proprasmres: 2 mirrors found in this disk group.
2014-07-31 07:13:09.278: [  OCRASM][2175183168]proprasmres: The buffer passed matches the buffers read from all 2 mirrors.
2014-07-31 07:13:09.278: [  OCRASM][2175183168]proprasmres: Need to invoke checkdg. The buffer passed matches with buffer from all mirrors.
2014-07-31 07:13:09.488: [  OCRASM][2175183168]proprasmres: Successfully returned after calling Check DG.
2014-07-31 07:13:09.488: [  OCRRAW][2175183168]proprior:1 ASM re silver returned [22]
2014-07-31 07:13:09.488: [  OCRRAW][2175183168]gst: Dev/Page/Block [0/843/1904] is CORRUPT (header)       <<<
2014-07-31 07:13:09.488: [  OCRRAW][2175183168]rbkp:2: Problem [26]. Could not read the free list
2014-07-31 07:13:09.488: [  OCRRAW][2175183168]gst:could not read fcl page 1 
2014-07-31 07:13:09.488: [  OCRRAW][2175183168]rbkp:2: Problem [26]. Could not read the free list
2014-07-31 07:13:09.488: [  OCRRAW][2175183168]gst:could not read fcl page 2
2014-07-31 07:13:09.488: [  OCRSRV][2175183168]th_snap:6''':Failed corruption check reading device [+DBFS_DG]. Not taking backup.  <<<
2014-07-31 07:13:09.488: [  OCRSRV][2175183168]th_snap:8:failed to take backup retval [0] corruption [1]
2014-07-31 07:13:36.549: [UiServer][2156271936] CS(0x2628b20)set Properties ( grid,0x7f747c01ff30)

CHANGES

No recent changes.

CAUSE

The OCR is corrupted; the root cause is unknown. The corruption also causes the automatic OCR backup to fail.

SOLUTION

Restoring the OCR from a good backup is the only way to move forward. Please refer to Note 1062983.1 How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems for details. Here are the simplified steps for restoring the OCR only:

1. Locate the latest automatic OCR backup; check all nodes in the cluster:

   ls -lrt <GRID_HOME>/cdata/<clustername>/

2. Make sure the Grid Infrastructure is shut down on all nodes, as root user:

  # <GRID_HOME>/bin/crsctl stop crs -f

3. Start the CRS stack in exclusive mode on the node where the ocr backup is located:

  # <GRID_HOME>/bin/crsctl start crs -excl -nocrs        (for 11.2.0.2+)

4. Restore the latest OCR backup

  # cd <GRID_HOME>/cdata/<clustername>/
  # <GRID_HOME>/bin/ocrconfig -restore backup00.ocr  << replace the backup00.ocr with a proper ocr backup file name

5. Shutdown and restart Grid Infrastructure (on all nodes)

  # <GRID_HOME>/bin/crsctl stop crs -f  
  # <GRID_HOME>/bin/crsctl start crs

6. Rerun ocrcheck command to verify it now reports "Device/File integrity check succeeded"
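Step 1 above (locating the most recent automatic backup) can be scripted. A minimal sketch that picks the newest backup file by modification time; the directory and file names are illustrative stand-ins for the real <GRID_HOME>/cdata/<clustername> layout:

```shell
# Simulate a cdata directory holding automatic backups of different ages,
# then pick the newest one - the candidate for 'ocrconfig -restore'.
backup_dir=/tmp/cdata_demo
mkdir -p "$backup_dir"
touch -d '2 hours ago' "$backup_dir/backup02.ocr"
touch -d '1 hour ago'  "$backup_dir/backup01.ocr"
touch                  "$backup_dir/backup00.ocr"

latest=$(ls -t "$backup_dir"/*.ocr | head -1)
echo "Latest OCR backup: $latest"
```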

###########sample 2

OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)

The goal of this note is to provide steps to add, remove, replace or move an Oracle Cluster Repository (OCR) and/or Voting Disk in Oracle Clusterware 10gR2, 11gR1 and 11gR2 environments. It also provides steps to move OCR/voting and ASM devices from raw devices to block devices. For Oracle Clusterware 12c, please refer to Document 1558920.1 Software Patch Level and 12c Grid Infrastructure OCR Backup/Restore.

This article is intended for DBAs and Support Engineers who need to modify or move OCR and voting disk files, and for customers with an existing clustered environment deployed on a storage array who may want to migrate to a new storage array with minimal downtime.

Typically, one would simply cp or dd the files once the new storage has been presented to the hosts. In this case it is a little more difficult because:

1. The Oracle Clusterware has the OCR and voting disks open and is actively using them (both primary and mirrors).
2. There is an API provided for this function (ocrconfig and crsctl), which is the appropriate interface rather than typical cp and/or dd commands.

It is highly recommended to take a backup of the voting disk, and OCR device before making any changes.

Note: while the OCR and Voting disk files may be stored together, such as in OCFS (for example in pre-11.2 Clusterware environments) or in the same ASM diskgroup (for example in 11.2 Oracle Clusterware environments), OCR and Voting disk files are in fact two separate files or entities and so if the intention is to modify or move both OCR and Voting disk files, then one must follow steps provided for both of these types of files.

SOLUTION

Prepare the disks


For OCR or voting disk addition or replacement, new disks need to be prepared. Please refer to the Clusterware/Grid Infrastructure installation guide for your platform for disk requirements and preparation.

1. Size

For 10.1:
OCR device minimum size (each): 100M
Voting disk minimum size (each): 20M

For 10.2:
OCR device minimum size (each): 256M
Voting disk minimum size (each): 256M

For 11.1:
OCR device minimum size (each): 280M
Voting disk minimum size (each): 280M

For 11.2:
OCR device minimum size (each): 300M
Voting disk minimum size (each): 300M

2. For raw or block device (pre 11.2)

Please refer to the Clusterware installation guide for your platform for more details.
On the Windows platform, the new raw device link is created via $CRS_HOME\bin\GUIOracleOBJManager.exe, for example:
\\.\VOTEDSK2
\\.\OCR2

3. For ASM disks (11.2+)

On the Windows platform, please refer to Document 331796.1 How to setup ASM on Windows.
On the Linux platform, please refer to Document 580153.1 How To Setup ASM on Linux Using ASMLIB Disks, Raw Devices or Block Devices?
For other platforms, please refer to the Clusterware/Grid Infrastructure installation guide on OTN (Chapter: Oracle Automatic Storage Management Storage Configuration).

4. For cluster file system

If the OCR is on a cluster file system, the new OCR or OCRMIRROR file must be touched before the add/replace command can be issued. Otherwise PROT-21: Invalid parameter (for 10.2/11.1) or PROT-30: The Oracle Cluster Registry location to be added is not accessible (for 11.2) will occur.

As root user
# touch /cluster_fs/ocrdisk.dat
# touch /cluster_fs/ocrmirror.dat
# chown root:oinstall /cluster_fs/ocrdisk.dat  /cluster_fs/ocrmirror.dat
# chmod 640 /cluster_fs/ocrdisk.dat  /cluster_fs/ocrmirror.dat

It is not required to pre-touch the voting disk file on a cluster file system.

After the delete command is issued, the OCR/voting files on the cluster file system must be removed manually.
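The pre-touch and permission steps can be verified before issuing the add/replace command. A minimal sketch using a temporary directory in place of the real cluster file system (the chown to root is omitted here since it requires root privileges):

```shell
# Pre-create OCR placeholder files on a (simulated) cluster file system
# and confirm the 640 mode that the add/replace command expects.
cfs=/tmp/cluster_fs_demo
mkdir -p "$cfs"
touch "$cfs/ocrdisk.dat" "$cfs/ocrmirror.dat"
chmod 640 "$cfs/ocrdisk.dat" "$cfs/ocrmirror.dat"

for f in "$cfs"/ocr*.dat; do
  echo "$f mode=$(stat -c '%a' "$f")"
done
```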

5. Permissions

For OCR device:
chown root:oinstall <OCR device>
chmod 640 <OCR device>

For Voting device:
chown <crs/grid>:oinstall <Voting device>
chmod 644 <Voting device>

For ASM disks used for OCR/Voting disk:
chown griduser:asmadmin <asm disks>
chmod 660 <asm disks>

6. Redundancy

For voting disks (never use an even number of voting disks):
External redundancy requires a minimum of 1 voting disk (or 1 failure group)
Normal redundancy requires a minimum of 3 voting disks (or 3 failure groups)
High redundancy requires a minimum of 5 voting disks (or 5 failure groups)

Insufficient failure groups relative to the redundancy requirement can cause voting disk creation to fail. For example: ORA-15274: Not enough failgroups (3) to create voting files

For OCR:
10.2 and 11.1: maximum of 2 OCR devices (OCR and OCRMIRROR)
11.2+: up to 5 OCR devices can be added.

For more information, please refer to platform specific Oracle® Grid Infrastructure Installation Guide.
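The redundancy rules above can be captured in a small helper for scripting sanity checks. A sketch; the function name is made up for illustration:

```shell
# Minimum number of voting disks (or failure groups) required per
# diskgroup redundancy level, per the table above.
min_votedisks() {
  case "$1" in
    external) echo 1 ;;
    normal)   echo 3 ;;
    high)     echo 5 ;;
    *)        echo "unknown redundancy: $1" >&2; return 1 ;;
  esac
}

min_votedisks normal   # prints 3
```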

ADD/REMOVE/REPLACE/MOVE OCR Device

Note: You must be logged in as the root user, because root owns the OCR files. The "ocrconfig -replace" command can only be issued when CRS is running; otherwise "PROT-1: Failed to initialize ocrconfig" will occur.

Please ensure CRS is running on ALL cluster nodes during this operation; otherwise the change will not be reflected on the node where CRS is down, and CRS will have problems starting on that node. The "ocrconfig -repair" option will then be required to fix the ocr.loc file on the node where CRS was down.

For 11.2+ with OCR on an ASM diskgroup, due to unpublished Bug 8604794 - FAIL TO CHANGE OCR LOCATION TO DG WITH 'OCRCONFIG -REPAIR -REPLACE', using "ocrconfig -repair" to change the OCR location to a different ASM diskgroup does not currently work. The workaround is to manually edit /etc/oracle/ocr.loc or /var/opt/oracle/ocr.loc, or the Windows registry key HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\ocr, to point to the desired diskgroup.

If there is any issue with OLR, please refer to How to restore OLR in 11.2 Grid Infrastructure Note 1193643.1.

Make sure there is a recent copy of the OCR file before making any changes:

ocrconfig -showbackup

If there is not a recent backup copy of the OCR file, an export of the current OCR can be taken. Use the following command to generate an export of the online OCR file:

In 10.2

# ocrconfig -export <OCR export_filename> -s online

In 11.1 and 11.2

# ocrconfig -manualbackup
node1 2008/08/06 06:11:58 /crs/cdata/crs/backup_20080807_003158.ocr

To recover using this file, the following command can be used:

# ocrconfig -import <OCR export_filename>

From 11.2+, please also refer to How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems Document 1062983.1

To see whether the OCR is healthy, run ocrcheck, which should return output like the following:

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 497928
Used space (kbytes) : 312
Available space (kbytes) : 497616
ID : 576761409
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2
Device/File integrity check succeeded

Cluster registry integrity check succeeded

For 11.1+, ocrcheck as root user should also show:
Logical corruption check succeeded

1. To add an OCRMIRROR device when only the OCR device is defined:

To add an OCR mirror device, provide the full path including file name. 
10.2 and 11.1:

# ocrconfig -replace ocrmirror <filename>
eg:
# ocrconfig -replace ocrmirror /dev/raw/raw2
# ocrconfig -replace ocrmirror /dev/sdc1
# ocrconfig -replace ocrmirror /cluster_fs/ocrdisk.dat
> ocrconfig -replace ocrmirror \\.\OCRMIRROR2  - for Windows

11.2+: From 11.2 onwards, up to 4 OCR mirrors can be added

# ocrconfig -add <filename>
eg:
# ocrconfig -add +OCRVOTE2
# ocrconfig -add /cluster_fs/ocrdisk.dat

2. To remove an OCR device

To remove an OCR device: 
10.2 and 11.1:

# ocrconfig -replace ocr

11.2+:

# ocrconfig -delete <filename>
eg:
# ocrconfig -delete +OCRVOTE1
* Once an OCR device is removed, the ocrmirror device automatically becomes the OCR device.
* It is not allowed to remove the OCR device if only 1 OCR device is defined; the command will return PROT-16.

To remove an OCR mirror device: 
10.2 and 11.1:

# ocrconfig -replace ocrmirror

11.2+:

# ocrconfig -delete <ocrmirror filename>
eg:
# ocrconfig -delete +OCRVOTE2

After removal, the old OCR/OCRMIRROR files can be deleted if they are on a cluster file system.

3. To replace or move the location of an OCR device

Note: 1. An ocrmirror must be in place before trying to replace the OCR device; ocrconfig will fail with PROT-16 if no ocrmirror exists.
2. If an OCR device is replaced with a device of a different size, the size of the new device will not be reflected until the clusterware is restarted.

10.2 and 11.1:
To replace the OCR device with <filename>, provide the full path including file name.

# ocrconfig -replace ocr <filename>
eg:
# ocrconfig -replace ocr /dev/sdd1
$ ocrconfig -replace ocr \\.\OCR2 - for Windows

To replace the OCR mirror device with <filename>, provide the full path including file name.

# ocrconfig -replace ocrmirror <filename>
eg:
# ocrconfig -replace ocrmirror /dev/raw/raw4
# ocrconfig -replace ocrmirror \\.\OCRMIRROR2  - for Windows

11.2+:
The command is the same for replacing either the OCR or OCR mirrors (at least 2 OCR devices must exist for the replace command to work):

# ocrconfig -replace <current filename> -replacement <new filename>
eg:
# ocrconfig -replace /cluster_file/ocr.dat -replacement +OCRVOTE
# ocrconfig -replace +CRS -replacement +OCRVOTE

4. To restore an OCR when clusterware is down

When the OCR is not accessible, the CRSD process will not start, and hence the clusterware stack will not start completely. Restoring access to the OCR device and good OCR content is required.
To view the automatic OCR backup:

# ocrconfig -showbackup

To restore the OCR backup:

# ocrconfig -restore <path/filename of OCR backup>

For 11.2+: If the OCR is located on an ASM disk and the ASM disk is also lost, please check:
How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems Document 1062983.1
How to Restore OCR After the 1st ASM Diskgroup is Lost on Windows Document 1294915.1

If no valid backup of the OCR is available, reinitializing the OCR and voting disks is required.
For 10.2 and 11.1:
Please refer to How to Recreate OCR/Voting Disk Accidentally Deleted Document 399482.1

For 11.2+:
Deconfiguring the clusterware stack and rerunning root.sh on all nodes is required.

ADD/DELETE/MOVE Voting Disk

Note: 1. crsctl votedisk commands must be run as root for 10.2 and 11.1, but can be run as the grid user for 11.2+.
2. For 11.2+, when using ASM disks for OCR and voting, the commands are the same on Windows and Unix platforms.

For pre-11.2, to take a backup of a voting disk:

$ dd if=voting_disk_name of=backup_file_name

For Windows:

ocopy \\.\votedsk1 o:\backup\votedsk1.bak
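The dd backup can be rehearsed against an ordinary file before touching a real device. A minimal sketch using a scratch file in place of a real voting device (all paths here are illustrative):

```shell
# Create a scratch "voting disk", back it up with dd, and verify the
# copy is byte-identical - the same pattern applies to a real device.
vote=/tmp/votedisk_demo
backup=/tmp/votedisk_demo.bak
dd if=/dev/zero of="$vote" bs=1024 count=16 2>/dev/null
dd if="$vote" of="$backup" 2>/dev/null
cmp "$vote" "$backup" && echo "backup verified"
```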

For 11.2+, it is no longer required to back up the voting disk. The voting disk data is automatically backed up in OCR as part of any configuration change. The voting disk files are backed up automatically by Oracle Clusterware if the contents of the files have changed in the following ways:

  • Configuration parameters, for example misscount, have been added or modified

  • After performing voting disk add or delete operations

The voting disk contents are restored from a backup automatically when a new voting disk is added or replaced.

For 10gR2 release

Shut down the Oracle Clusterware (crsctl stop crs as root) on all nodes before making any modifications to the voting disks. Determine the current voting disk location using:
crsctl query css votedisk

1. To add a Voting Disk, provide the full path including file name:

# crsctl add css votedisk <VOTEDISK_LOCATION> -force
eg:
# crsctl add css votedisk /dev/raw/raw1 -force
# crsctl add css votedisk /cluster_fs/votedisk.dat -force
> crsctl add css votedisk \\.\VOTEDSK2 -force   - for windows

2. To delete a Voting Disk, provide the full path including file name:

# crsctl delete css votedisk <VOTEDISK_LOCATION> -force
eg:
# crsctl delete css votedisk /dev/raw/raw1 -force
# crsctl delete css votedisk /cluster_fs/votedisk.dat -force
> crsctl delete css votedisk \\.\VOTEDSK1 -force   - for windows

3. To move a Voting Disk, provide the full path including file name; add the new device before deleting the old one:

# crsctl add css votedisk <NEW_LOCATION> -force
# crsctl delete css votedisk <OLD_LOCATION> -force
eg:
# crsctl add css votedisk /dev/raw/raw4 -force
# crsctl delete css votedisk /dev/raw/raw1 -force

After modifying the voting disk, start the Oracle Clusterware stack on all nodes

# crsctl start crs

Verify the voting disk location using

# crsctl query css votedisk

For 11gR1 release

Starting with 11.1.0.6, the commands below can be performed online (CRS is up and running).

1. To add a Voting Disk, provide the full path including file name:

# crsctl add css votedisk <VOTEDISK_LOCATION>
eg:
# crsctl add css votedisk /dev/raw/raw1
# crsctl add css votedisk /cluster_fs/votedisk.dat
> crsctl add css votedisk \\.\VOTEDSK2        - for windows

2. To delete a Voting Disk, provide the full path including file name:

# crsctl delete css votedisk <VOTEDISK_LOCATION>
eg:
# crsctl delete css votedisk /dev/raw/raw1
# crsctl delete css votedisk /cluster_fs/votedisk.dat
> crsctl delete css votedisk \\.\VOTEDSK1     - for windows

3. To move a Voting Disk, provide the full path including file name:

# crsctl add css votedisk <NEW_LOCATION> 
# crsctl delete css votedisk <OLD_LOCATION>
eg:
# crsctl add css votedisk /dev/raw/raw4
# crsctl delete css votedisk /dev/raw/raw1

Verify the voting disk location:

# crsctl query css votedisk

For 11gR2 release and later

From 11.2, the votedisk can be stored either on an ASM diskgroup or on a cluster file system. The following commands can only be executed when Grid Infrastructure is running. As the grid user:

1. To add a Voting Disk
a. When votedisk is on cluster file system:

$ crsctl add css votedisk <cluster_fs/filename>

b. When the votedisk is on an ASM diskgroup, no add option is available.
The number of votedisks is determined by the diskgroup redundancy. If more copies of the votedisk are desired, one can move the votedisk to a diskgroup with higher redundancy. See step 4.
If a votedisk was removed from a normal or high redundancy diskgroup for an abnormal reason, it can be added back using:

alter diskgroup <vote diskgroup name> add disk '</path/name>' force;

2. To delete a Voting Disk
a. When votedisk is on cluster file system:

$ crsctl delete css votedisk <cluster_fs/filename>
or
$ crsctl delete css votedisk <vdiskGUID>     (vdiskGUID is the File Universal Id from 'crsctl query css votedisk')
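The vdiskGUID (File Universal Id) can be pulled out of the query output with awk. A minimal sketch run against sample output of the form shown in this note (the /tmp path is illustrative):

```shell
# Extract the File Universal Id of each ONLINE voting disk from
# 'crsctl query css votedisk' style output.
cat > /tmp/votedisk.out <<'EOF'
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   5e391d339a594fc7bf11f726f9375095 (ORCL:ASMDG02) [+OCRVOTE]
Located 1 voting disk(s).
EOF

awk '$2 == "ONLINE" { print $3 }' /tmp/votedisk.out
```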

b. When the votedisk is on ASM, no delete option is available; one can only replace the existing votedisk diskgroup with another ASM diskgroup.

3. To move a Voting Disk on cluster file system

$ crsctl add css votedisk <new_cluster_fs/filename>
$ crsctl delete css votedisk <old_cluster_fs/filename>
or
$ crsctl delete css votedisk <vdiskGUID>

4. To move a voting disk on ASM from one diskgroup to another due to a redundancy change or disk location change

$ crsctl replace votedisk <+diskgroup>|<vdisk>

The example here moves from the external redundancy +OCRVOTE diskgroup to the normal redundancy +CRS diskgroup:

1. create new diskgroup +CRS as desired

2. $ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   5e391d339a594fc7bf11f726f9375095 (ORCL:ASMDG02) [+OCRVOTE]
Located 1 voting disk(s).

3. $ crsctl replace votedisk +CRS
Successful addition of voting disk 941236c324454fc0bfe182bd6ebbcbff.
Successful addition of voting disk 07d2464674ac4fabbf27f3132d8448b0.
Successful addition of voting disk 9761ccf221524f66bff0766ad5721239.
Successful deletion of voting disk 5e391d339a594fc7bf11f726f9375095.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced

4. $ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   941236c324454fc0bfe182bd6ebbcbff (ORCL:CRSD1) [CRS]
 2. ONLINE   07d2464674ac4fabbf27f3132d8448b0 (ORCL:CRSD2) [CRS]
 3. ONLINE   9761ccf221524f66bff0766ad5721239 (ORCL:CRSD3) [CRS]
Located 3 voting disk(s).

5. To move voting disk between ASM diskgroup and cluster file system
a. Move from ASM diskgroup to cluster file system:

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   6e5850d12c7a4f62bf6e693084460fd9 (ORCL:CRSD1) [CRS]
 2. ONLINE   56ab5c385ce34f37bf59580232ea815f (ORCL:CRSD2) [CRS]
 3. ONLINE   4f4446a59eeb4f75bfdfc4be2e3d5f90 (ORCL:CRSD3) [CRS]
Located 3 voting disk(s).

$ crsctl replace votedisk /rac_shared/oradata/vote.test3
Now formatting voting disk: /rac_shared/oradata/vote.test3.
CRS-4256: Updating the profile
Successful addition of voting disk 61c4347805b64fd5bf98bf32ca046d6c.
Successful deletion of voting disk 6e5850d12c7a4f62bf6e693084460fd9.
Successful deletion of voting disk 56ab5c385ce34f37bf59580232ea815f.
Successful deletion of voting disk 4f4446a59eeb4f75bfdfc4be2e3d5f90.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   61c4347805b64fd5bf98bf32ca046d6c (/rac_shared/oradata/vote.disk) []
Located 1 voting disk(s). 

b. Move from cluster file system to ASM diskgroup

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   61c4347805b64fd5bf98bf32ca046d6c (/rac_shared/oradata/vote.disk) []
Located 1 voting disk(s).

$ crsctl replace votedisk +CRS
CRS-4256: Updating the profile
Successful addition of voting disk 41806377ff804fc1bf1d3f0ec9751ceb.
Successful addition of voting disk 94896394e50d4f8abf753752baaa5d27.
Successful addition of voting disk 8e933621e2264f06bfbb2d23559ba635.
Successful deletion of voting disk 61c4347805b64fd5bf98bf32ca046d6c.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced

[oragrid@auw2k4 crsconfig]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   41806377ff804fc1bf1d3f0ec9751ceb (ORCL:CRSD1) [CRS]
 2. ONLINE   94896394e50d4f8abf753752baaa5d27 (ORCL:CRSD2) [CRS]
 3. ONLINE   8e933621e2264f06bfbb2d23559ba635 (ORCL:CRSD3) [CRS]
Located 3 voting disk(s).
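After a replace, the expected disk count can be checked mechanically. A sketch counting ONLINE entries in saved query output like the above; for a normal redundancy diskgroup the expected count is 3 (the /tmp path is illustrative):

```shell
# Count ONLINE voting disks in saved 'crsctl query css votedisk' output.
cat > /tmp/votedisk_after.out <<'EOF'
 1. ONLINE   41806377ff804fc1bf1d3f0ec9751ceb (ORCL:CRSD1) [CRS]
 2. ONLINE   94896394e50d4f8abf753752baaa5d27 (ORCL:CRSD2) [CRS]
 3. ONLINE   8e933621e2264f06bfbb2d23559ba635 (ORCL:CRSD3) [CRS]
EOF

count=$(awk '$2 == "ONLINE"' /tmp/votedisk_after.out | wc -l)
echo "ONLINE voting disks: $count"
```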

6. To verify:

$ crsctl query css votedisk

 

For online OCR/Voting diskgroup change

For disk storage migration, if an ASM diskgroup is used and the size and diskgroup redundancy remain the same, one can add a failure group containing the new storage and drop the failure group containing the old storage to achieve the change online.

For more information, please refer to How to Swap Voting Disks Across Storage in a Diskgroup (Doc ID 1558007.1) and Exact Steps To Migrate ASM Diskgroups To Another SAN/Disk-Array/DAS/etc Without Downtime. (Doc ID 837308.1)

For Voting disk maintenance in Extended Cluster

Please refer to Oracle White paper: Oracle Clusterware 11g Release 2 (11.2) – Using standard NFS to support a third voting file for extended cluster configurations

If there is any issue using asmca tool, please refer to How to Manually Add NFS voting disk to an Extended Cluster using ASM in 11.2 Note 1421588.1 for detailed commands.


#######sample 3  Replacing the voting disk and OCR in 10g

Part 1: Replacing the voting disk
Note: the voting disk replacement is performed while CRS and the RDBMS are in a normal state.
1. raw4 is used for the new voting disk and raw5 for the new OCR disk; note their permissions:
[root@node1 ~]# ls -l /dev/raw/raw4
crw-rw---- 1 oracle dba 162, 4 Feb 28 15:22 /dev/raw/raw4
[root@node1 ~]# ls -l /dev/raw/raw5
crw-rw---- 1 oracle dba 162, 5 Feb 28 15:22 /dev/raw/raw5
2. Check the current voting disk in the system:
[root@node1 oracle]# crsctl query css votedisk 
0.     0    /dev/raw/raw2

located 1 votedisk(s).

3. Add the new voting disk:
[root@node1 oracle]# crsctl add css votedisk /dev/raw/raw4 
Cluster is not in a ready state for online disk addition

[root@node1 oracle]# crsctl add css votedisk /dev/raw/raw4 -force 
Now formatting voting disk: /dev/raw/raw4 
successful addition of votedisk /dev/raw/raw4.

4. Delete the old voting disk:
[root@node1 oracle]# crsctl delete css votedisk /dev/raw/raw2 
Cluster is not in a ready state for online disk removal 
[root@node1 oracle]# crsctl delete css votedisk /dev/raw/raw2 -force 
successful deletion of votedisk /dev/raw/raw2.

5. The voting disk has been replaced successfully:
[root@node1 oracle]#  crsctl query css votedisk 
0.     0    /dev/raw/raw4

located 1 votedisk(s).

Part 2: Replacing the OCR
Replacing the OCR requires CRS to be shut down; the brief steps follow.
1. Stop CRS on both nodes:
[root@node1 oracle]# crsctl stop crs 
Stopping resources. This could take several minutes. 
Successfully stopped CRS resources. 
Stopping CSSD. 
Shutting down CSS daemon. 
Shutdown request successfully issued. 
2. Check the OCR:
[root@node1 oracle]# ocrcheck 
Status of Oracle Cluster Registry is as follows : 
         Version                  :          2 
         Total space (kbytes)     :     511744 
         Used space (kbytes)      :       3832 
         Available space (kbytes) :     507912 
         ID                       : 1127674663 
         Device/File Name         : /dev/raw/raw1 
                                    Device/File integrity check succeeded

Device/File not configured 
         Cluster registry integrity check succeeded 
         
         
3. Export the OCR contents:
[root@node1 oracle]# ocrconfig -export /tmp/ocrfile.dmp 
[root@node1 oracle]# ls -l /tmp/ocrfile.dmp 
-rw-r--r-- 1 root root 85125 Feb 28 15:40 /tmp/ocrfile.dmp

4. On both nodes, edit /etc/oracle/ocr.loc to point the OCR location at the new OCR disk:
[root@node1 oracle]# cat /etc/oracle/ocr.loc 
ocrconfig_loc=/dev/raw/raw5 
local_only=FALSE
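The edit to ocr.loc can be done with sed. A minimal sketch against a working copy of the file, using the device paths from this example (the /tmp copy is illustrative; the real file is /etc/oracle/ocr.loc):

```shell
# Rewrite ocrconfig_loc in a working copy of ocr.loc to point at the
# new OCR device, as done on both nodes in this step.
cat > /tmp/ocr.loc <<'EOF'
ocrconfig_loc=/dev/raw/raw1
local_only=FALSE
EOF

sed -i 's|^ocrconfig_loc=.*|ocrconfig_loc=/dev/raw/raw5|' /tmp/ocr.loc
cat /tmp/ocr.loc
```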

5. Import the OCR contents into the new disk:
[root@node1 oracle]# ocrconfig -import /tmp/ocrfile.dmp

6. Verify that the new OCR location has taken effect:
[root@node1 oracle]# ocrcheck 
Status of Oracle Cluster Registry is as follows : 
         Version                  :          2 
         Total space (kbytes)     :     511744 
         Used space (kbytes)      :       3832 
         Available space (kbytes) :     507912 
         ID                       : 1884769518 
         Device/File Name         : /dev/raw/raw5 
                                    Device/File integrity check succeeded

Device/File not configured

Cluster registry integrity check succeeded
