Vishal Gupta's Blog

Archive for the ‘Exadata Patching’ Category

Exadata 11.2.2.4.0 – Flash Card firmware Issue

Posted by Vishal Gupta on Nov 6, 2011

After patching an Exadata cell to 11.2.2.4.0, running CheckHWnFWProfile or Exachk reports that the firmware version of the Flash Card is not the expected one.


# /opt/oracle.SupportTools/CheckHWnFWProfile
[WARNING] The hardware and firmware are not supported. See details below

[PCISlot:HBA:LSIModel:LSIhw:MPThw:LSIfw:MPTBios:DOM:OSDevice:DOMMake:DOMModel:DOMfw:CountAuraCountDOM]
Requires:
   AllSlots_AllHBAs SAS1068E B3orC0 105 011b5c00 06.26.00.00 AllDOMs_NotApplicable MARVELL SD88SA02 D20Y 4_16
Found:
   AllSlots_AllHBAs SAS1068E B3orC0 105 011b5b00 06.26.00.00 AllDOMs_NotApplicable MARVELL SD88SA02 D20Y 4_16

[WARNING] The hardware and firmware are not supported. See details above

The CheckHWnFWProfile script expects firmware version 011b5c00 but finds 011b5b00, so it reports a check failure. This is due to BUG 13088963: not all flash card firmware versions are being upgraded, owing to a bug in the CheckHWnFWProfile script, which is also responsible for the firmware upgrade on the cells.
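
To see which cells in the rack are affected, the profile check can be run across all of them with dcli. A minimal sketch, assuming the standard cell_group file (one cell hostname per line) created during deployment:

# Run the hardware/firmware profile check on every cell
dcli -l root -g cell_group /opt/oracle.SupportTools/CheckHWnFWProfile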

MOS Note 1372320.1  – Problem : [Warning] The Hardware And Firmware Are Not Supported on 11.2.2.4

Patch 13089037 is available for this problem. This update will:

  • Update ibdiagtools to the latest revision (Bug 13089037).
    • STORAGE EXPANSION RACK SUPPORT AMISS IN IBDIAGTOOLS IN 11.2.2.4.0
  • Correct the flash firmware update regression (Bug 13088963).
    • CHECKHWNFWPROFILE HARDWARE/FIRMWARE NOT SUPPORTED REQUIRES LSIFW 011B5C00
  • Correct the missing X4800 support in the special BIOS update package (Bug 13089050).
    • FLASH_BIOS PACKAGE DOES NOT WORK FOR X4800* MODELS

Posted in Exadata Patching, Oracle, Oracle Exadata | Leave a Comment »

Exadata 11.2.2.4.0 – Critical Bug

Posted by Vishal Gupta on Nov 5, 2011

The following issue has been identified in Exadata Storage Server software patch 11.2.2.4.0. It only affects the compute node minimal pack on X2-2 and X2-8 racks with 10GigE network ports. It does not affect the V1 or V2 models, as they don't have 10GigE network ports. For more information please refer to MOS Note 1348647.1.

Critical Issues Discovered Post-Release

1) Bug 13083530 – 10Gb Ethernet network interfaces shut down unexpectedly.

For environments where the database server hosts running Oracle Linux are configured with 10GigE Ethernet, do NOT apply the 11.2.2.4 minimal pack to the database server hosts. A loss-of-connectivity problem on 10GigE was reported and confirmed. An update will be available soon and will be published in the patch README and the corresponding patch MOS Note. Customers who are not already running 11.2.2.3.5 on their compute nodes are advised to apply the 11.2.2.3.5 minimal pack (available in Patch 12849110) until further updates are available.

Apply the 11.2.2.4 cell patch as usual; the cells do not use 10GigE. Running the latest 11.2.2.4 version on the cells together with an earlier minimal pack release is supported.
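
Before deciding whether the compute nodes need the minimal pack at all, it is worth confirming what image version they are currently running. A minimal check, assuming the standard dbs_group file of compute node hostnames:

# Report the active image version on every compute node
dcli -l root -g dbs_group "imageinfo | grep 'Active image version'"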


Posted in Exadata Patching, Oracle, Oracle Exadata | 2 Comments »

Exadata Storage/Compute Node remote re-imaging

Posted by Vishal Gupta on Oct 23, 2011

When you have an Exadata machine in the lab and you are testing lots of different things, or giving hands-on training on the lab Exadata to production DBAs to familiarize them with Exadata patching, one frequently has to start from scratch, i.e. from a particular storage/compute node image. Wouldn't it be nice if you could write a script to re-image the servers? But alas, that is not possible with Exadata; one has to do all the re-imaging manually. Even with the manual process, one has to insert an external USB stick into each storage/compute node and then remove it before the reboot to complete the process. With ILOM (Integrated Lights Out Management) it is possible to mount a remote CD-ROM image (*.iso) file as a virtual CD-ROM. One can create a storage/compute node image ISO using the computeImageMaker_<imageversion>.x86_64.tar or cellImageMaker_<imageversion>.x86_64.tar file.

For example, for a compute node (note that we navigate into the dl380 directory):

# tar -pxvf computeImageMaker_11.2.1.2.3_LINUX.X64_100305-1.x86_64.tar
# cd dl380
# makeImageMedia.sh computeNodeImage.iso

Or for a cell node (note that we navigate into the dl180 directory):

# tar -pxvf cellImageMaker_11.2.1.2.3_LINUX.X64_100305-1.x86_64.tar
# cd dl180
# makeImageMedia.sh cellImage.iso

Wouldn't it have been nice if one could just mount this ISO file via ILOM as a virtual CD-ROM, change the boot order by booting into the BIOS (which can be forced either via the ILOM GUI or via the /usr/bin/biosconfig -set_boot_override <xmlfile> command), and choose the virtual CD-ROM as the first boot device? The problem with this approach is that the cell/compute node imaging process resets the ILOM, which means our virtual CD-ROM ISO image is detached during the process, and the imaging does not complete properly.
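
For reference, the one-time boot override can also be set from the ILOM command line. This is a sketch based on the generic Sun ILOM 3.x /HOST property, so verify it against the ILOM version on your rack:

-> set /HOST boot_device=cdrom    # one-time boot override to the (virtual) CD-ROM
-> reset /SYS                     # power-cycle the host to pick up the override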

One could try leaving an external USB stick permanently attached to the lab Exadata and then re-image via ILOM, so that the boot image device is not detached during the ILOM reset and one can simply reconnect to the ILOM to continue answering the on-screen prompts. But one problem with this approach is that, as part of imaging, the external USB stick has to be removed, otherwise the automated configuration scripts don't function properly. I tried it on the lab Exadata and could not get the cell to re-image properly with the external USB still attached. Even after manually changing the boot order to internal USB and then hard drive (which is what the cell checks for during each reboot validation), the cell just sat at a blinking prompt without going any further. I left the cell overnight as well, thinking that perhaps some processing was going on, but with no luck. Once I removed the external USB stick, the automated configuration scripts were able to complete the imaging process properly, up to the automated run of the ipconf script, which sets the various settings on the cell at first boot after re-imaging.

Wouldn't it have been nice if all of this could be done remotely via ILOM once the ILOM has been connected to the network, since the ILOM network configuration is not reset during the re-imaging process? But that would be wishful thinking!

I was trying this with the 11.2.1.2.3 image, as that was my starting image on the compute/cell nodes in prod/DR/test, and I wanted to replicate the same history. I have not yet tried later image versions to see if the process has improved in this regard; that trial is for some time later.

Regards,

Vishal



Posted in Exadata Installation, Exadata Patching, Oracle Exadata | 4 Comments »

Exadata Storage Server 11.2.2.4.0 Patching

Posted by Vishal Gupta on Oct 22, 2011


Non-interactive shell issue for Database Host minimal pack 

Recently I set about patching Exadata Storage Server software from 11.2.2.x.x to 11.2.2.4.0, which is the latest patch from Oracle Corporation. I was testing and documenting the process for one of my clients and wanted to automate it as much as possible, as in the past the people actually executing the commands had missed running a few commands on certain nodes. As with any Exadata storage server software patch, there is a cell node component, which is applied using patchmgr in either rolling or non-rolling fashion, and a database host component, called the database minimal pack. The 11.2.2.4.0 release note asks you to install the patch (after running some prerequisites) using ./install.sh -force.

My approach was to install the patch on one cell node and, if successful, apply it on the rest of the cell nodes in parallel; similarly, apply the database minimal pack on one compute node and, if successful, apply it on the rest of the compute nodes in parallel. And what could be more convenient for running a command in parallel on many nodes than dcli? So I programmatically created a dbs_group_withoutfirstnode file containing all the compute nodes apart from the first one (see the sketch below). I installed the patch on the first compute node, which was successful. After that, using dcli, I transferred the patch to the other nodes and extracted its contents in parallel, then ran (cd <patch_directory>; ./install.sh -force) on the rest of the compute nodes.

But guess what: the compute node patch does not like being run via dcli. In simple terms, dcli runs a command on a remote host using the "ssh command" method (though it is slightly more complex than that). The effect is that all commands run in a non-interactive session, i.e. without a tty terminal or standard output/error. This means that if your script does not redirect all standard output and standard error messages to a file, it will exit with a non-zero (i.e. unsuccessful) exit code. The install.sh script calls dopatch.sh, which in turn calls a series of functions. One of these functions tries to update the image version and add it to the image history, and it writes its error messages explicitly to the /dev/stderr device. As a result, if the compute node patch is run via an automated script, it exits at this step and fails to run any further steps, which include the firmware update to ILOM, the BIOS upgrade, etc.
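
As mentioned above, the dbs_group_withoutfirstnode file can be generated from the standard onecommand group file. A minimal sketch, assuming dbs_group lists one compute node hostname per line with the first node on the first line:

# Build a group file of all compute nodes except the first
cd /opt/oracle.Support/onecommand
tail -n +2 dbs_group > dbs_group_withoutfirstnode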

Once this has happened, the imageinfo command will show the new version, but with an empty status and activation date. imagehistory will also not show the new image version. If you try to roll back the patch using ./install.sh -rollback-ib, it will complain that the version is not valid, as it has not been set with a success status. If you then run /opt/oracle.cellos/imagestatus -set success, it will also complain, but you can force it with /opt/oracle.cellos/imagestatus -set success -force db_patch. After this you will be able to roll back, and then install the patch again from an interactive shell.
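
Putting the recovery together, the sequence on an affected compute node looks like this (the commands exactly as discussed above, run from an interactive root shell in the patch directory):

# Force the image status so that the rollback is accepted
/opt/oracle.cellos/imagestatus -set success -force db_patch

# Roll back the half-applied minimal pack
./install.sh -rollback-ib

# Re-install the patch from an interactive shell
./install.sh -force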

grub.conf Symbolic Link Issue

I also noticed that the symbolic link /etc/grub.conf, which should point to /boot/grub/grub.conf, is missing on OEL 5.5 compute/cell nodes. OEL 5.5 is installed starting with the 11.2.1.3.1 cell image.
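
The missing link is easy to check for and recreate. A minimal sketch:

# Recreate /etc/grub.conf if the symlink is missing
[ -e /etc/grub.conf ] || ln -s /boot/grub/grub.conf /etc/grub.conf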

Suggestions for Oracle Exadata Development

The Exadata development team could write their upgrade/patching scripts to be compatible with dcli; that would allow the patching to be automated and save a lot of hassle.

Summary

– Don't use a non-interactive shell or dcli to run the compute node patching commands (but see the update below).
– Check that the /etc/grub.conf symbolic link exists and points to /boot/grub/grub.conf.

Hopefully this will save some hassle for someone out there patching production Exadata machines.

[Update, 05-Nov-2011]

If you redirect all standard output and standard error to a file, it is possible to run install.sh to install the compute node minimal pack via dcli:

# Create the patch staging directory on all compute nodes
cd /opt/oracle.Support/onecommand/
dcli -l root -g dbs_group "mkdir -p /opt/oracle.Support/onecommand/patches/patch_11.2.2.4.0.110929"

# Transfer the compute node minimal patch file
dcli -l root -g dbs_group -d /opt/oracle.Support/onecommand/patches/patch_11.2.2.4.0.110929/ -f /opt/oracle.Support/onecommand/patches/patch_11.2.2.4.0.110929/db_patch_11.2.2.4.0.110929.zip

# Unzip the compute node patch file
dcli -l root -g dbs_group  "(cd /opt/oracle.Support/onecommand/patches/patch_11.2.2.4.0.110929/; unzip -o db_patch_11.2.2.4.0.110929.zip)"

# Run the compute node patch
dcli -l root -g dbs_group "(cd /opt/oracle.Support/onecommand/patches/patch_11.2.2.4.0.110929/db_patch_11.2.2.4.0.110929 ; ./install.sh >> install.sh.log 2>&1)"
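
Afterwards it is worth verifying that the patch actually completed on every node, since a failure inside install.sh is easy to miss in this setup. A minimal check using imageinfo, as discussed above:

# Confirm the new image version and status on every compute node
dcli -l root -g dbs_group "imageinfo | grep -i 'image version\|image status'"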

Cheers,
Vishal Gupta

Posted in Exadata Patching, Oracle, Oracle Exadata | Leave a Comment »
