Tuesday, September 8, 2009

Learning ASM: Part 1

Oracle's ASM (Automatic Storage Management) may well be one of the most significant additions to the Oracle database. It first appeared in 10g, and it becomes even more interesting in 11g Release 2. I wanted to learn it at home, before I run into it at a client, and because it is extremely interesting technology. This post logs my experiences setting it up for the first time. This may become my personal HOWTO reference on the topic - expect to see extreme detail as we get into it.

About ASM:

If you are unfamiliar with ASM, it is storage management software that comes with the database. It provides the performance of using RAW devices without the need to manage a storage volume through the operating system or a logical volume manager. For the DBA this translates into more direct control over the storage and more efficiency. For the business, this may translate into reduced costs, if it saves on a license for 3rd-party volume management.

You provide ASM with a pool of storage devices, and then tell ASM how to divide that up logically. You can add or remove devices as needed, without interrupting the database, and ASM will automatically spread your data and load across all the devices. This can be a viable substitute for RAID systems, eliminating another layer of software and hardware. (However, if you have a solid RAID strategy in place already, you can use ASM for managing the volumes, and still let the RAID handle redundancy.)
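As a taste of what that management looks like, here is a sketch of adding and dropping a disk from a live disk group. The disk group name "data" and the ASMLib label "DATA7" are hypothetical here; the commands run in SQL*Plus against the ASM instance, and ASM rebalances automatically in both directions.

```sh
# sketch only -- "data" and "DATA7" are made-up names for illustration
sqlplus / as sysasm <<'EOF'
ALTER DISKGROUP data ADD DISK 'ORCL:DATA7';
ALTER DISKGROUP data DROP DISK data7;  -- ASM migrates extents off the disk first
EOF
```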

In 11g Release 2 it gets even more interesting with the introduction of ASM Cluster File System (ACFS). ACFS provides an operating-system-usable file system running on top of Oracle ASM. "Why would I do that?" To further simplify the life of the DBA. In 10g, ASM could not be used for files outside the database, such as binaries, log files, etc. With ACFS all of those files can also be moved into ASM storage. You can set up a cluster of machines running ASM, and it can pool storage across the entire cluster and present it to Oracle and/or the filesystem as if it were a big pool of local storage.

The setup:

I'm slow to adopt new hardware at home, so to make this more interesting I am going to be exploring this on a single-CPU Celeron machine, circa 2005. I don't expect speed or performance. I just want to learn the syntax and management. And yes, Oracle will run on this... just don't expect to do much with it besides self-education. This is a home-class machine, so I have 2 hard drives: one with some older data in a Windows partition I want to save, the other a larger drive with a Windows partition, a Linux partition, and a bunch of extra space to play with.

Hardware: Single-CPU Celeron
RAM: 1.5 GB
Storage: 1 250GB hard drive, in multiple partitions

ASM is designed to pool storage across multiple devices. I don't have multiple devices, but can simulate that for the sake of education by using multiple partitions. I fully expect this to perform slowly since these are on the same physical device.


I'm going with CentOS 5.3, which is equivalent to RHEL 5. I had previously installed 11g Release 1 on this machine, so I know the Oracle Database will run, at least.

Here is my partition layout; yours may be different.

/dev/hda1     Windows partition              NTFS    102398 MB
/dev/hda5     /                              ext3     23454 MB
/dev/hda6-15  10 "virtual" physical devices  LVM PV   10236 MB each
/dev/hda16    swap                           swap     10236 MB

In a real system I wouldn't lump root, /home, etc all on the same partition, but for this exercise I'm going simple. I made my partitions roughly 10GB in size for simplicity later.
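Before handing partitions to ASM, it is worth double-checking a layout like this with read-only commands (run as root; the device name hda matches my IDE disk, not necessarily yours):

```sh
fdisk -l /dev/hda      # print the partition table for this disk
cat /proc/partitions   # the kernel's view of all block devices
```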

After this I finished the Linux installation, and made sure all the required packages for the Oracle Database were installed, as directed in the 11gR2 Installation Guide for Linux.

Post-install configuration included setting the following options:
  • Opened TCP port 1521 in the firewall.
  • Set SELinux to Permissive
  • Created a non-root user (bbontrag, in my case) to log in as
  • Configured Linux to connect to my local network, and verified Internet access
  • Verified that boot parameters are only attaching drives 0 and 4 (/ and swap, or /dev/hda1 and /dev/hda5). This was verified in /boot/grub/grub.conf and by viewing the partitions in CentOS Logical Volume Management.
  • Rebooted at this point to ensure networking was fully operational
  • Used the Package Manager to verify all required packages from the Oracle Install Guide.
  • On a normal development or production system I would install O/S patches at this point. For this machine, I am skipping that step, so I can practice patching AFTER a database is installed. (But that's another post).
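For reference, the firewall and SELinux steps above can also be done from the command line on CentOS 5. A sketch, to be checked against your own security policy:

```sh
# open the default listener port, then persist the rule across reboots
iptables -I INPUT -p tcp --dport 1521 -j ACCEPT
service iptables save
# switch SELinux to Permissive for the current boot...
setenforce 0
# ...and persistently, by setting SELINUX=permissive in /etc/selinux/config
```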

In reviewing the documentation for 11gR2 I found this little nugget... "Oracle Database 11g Release 2 introduces the Grid Infrastructure installation. For single instance databases, the grid infrastructure includes Automatic Storage Management, the listener, and Oracle Restart."..."If you want to use grid infrastructure for a standalone server, then you must install the oracle software from the grid infrastructure media before you install the database." In 11gR1 ASM was included on the Database Install media. In this release it is separate.
(What's New in Oracle Database 11g)

(If you are following this as a HOWTO, hold off on this section until AFTER Oracle Grid Infrastructure is installed in the next step. I got ahead of myself - don't get yourself into the same trouble I did.) I prepared to install the Oracle 11gR2 software according to the Quick Install Guide.

  • Checked installed packages
  • Setup groups and users, since this was the first install of Oracle software
  • Set kernel parameters and other configuration parameters
  • I created oracle11r2 instead of oracle as the oracle User Id, to allow a future install of previous versions to practice upgrades.
  • I had trouble with the default profile script trying to use ksh as the default shell for oracle11r2. I changed the default shell to bash and it worked fine.
At this point I copied the install media onto the server, but did not install the database yet.
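The group, user, and kernel parameter steps above look roughly like this. Treat it as a sketch, and take the actual sysctl values from the tables in the 11gR2 install guide rather than from me:

```sh
# groups and the install user (names match this post's choices)
groupadd oinstall
groupadd dba
useradd -g oinstall -G dba -s /bin/bash oracle11r2
passwd oracle11r2
# kernel parameters from the install guide go in /etc/sysctl.conf,
# then reload them without rebooting
sysctl -p
```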


I installed ASM according to the Oracle Grid Infrastructure Install Guide.

  • created user oragrid, where the instructions called for "oracle". This will allow me to understand the difference between grid infrastructure files and normal database files. (I may get into trouble by doing this... we'll see). Note: after creating new users I had to logout and log back in, to restart the X server. New users did not have authority to run X apps, such as the OUI, so this was my workaround.
  • Since permissions for oracle were already set on /u01/app I reset ownership to be what the Infrastructure guide called for.
  • Since my installation is not registered with the Unbreakable Linux Network, I had the joy of installing ASMLIB manually, which is a set of Linux libraries for interacting with ASM. The steps are documented in the Install Guide. Follow them carefully. On my first attempt I downloaded the wrong version. On my second attempt I still missed the oracleasm driver for my kernel. There are 3 packages to install, if the install guide is not clear on that.
  • Install the oracleasm package first, then the other packages. The oracleasmlib package depends on it.
  • Once the packages successfully installed I ran oracleasm configure -i as instructed.
  • The install guide is not clear what to enter here. I used user=oragrid, group=oinstall, Start on boot=y, Scan on boot=y
  • My first attempt at the following step failed with an error "Instantiating disk: failed". I found through a quick Google search that ASM was not initialized, so I started ASM by issuing oracleasm init. After that the following createdisk commands worked as expected.
  • Next I set my empty partitions as candidate groups for ASM.
oracleasm createdisk data1 /dev/hda6
oracleasm createdisk data2 /dev/hda7
oracleasm createdisk data3 /dev/hda8
oracleasm createdisk data4 /dev/hda9
oracleasm createdisk data5 /dev/hda10
oracleasm createdisk data6 /dev/hda11
  • I only created 6 at first, so I can play with adding more later. Verify with oracleasm listdisks: 6 disks are displayed, DATA1-DATA6 (ASMLib uppercases the labels). Great.
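Putting the ASMLib steps above together, the sequence on my machine was roughly this (as root; the answers to configure -i are the ones I chose above):

```sh
oracleasm configure -i    # prompts for user (oragrid), group (oinstall),
                          # start on boot (y), scan on boot (y)
oracleasm init            # load the kernel driver -- fixes "Instantiating disk: failed"
oracleasm createdisk data1 /dev/hda6   # repeat for each empty partition
oracleasm scandisks       # pick up labels created elsewhere, if any
oracleasm listdisks       # verify the labels are visible
```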

At this point I unzipped my install media, and ran the Grid Infrastructure installer. Note: The documentation does not say to switch to the 'oracle' user here, but runInstaller will not run as root, so I switched to the new oragrid user created earlier.
  • This is where I hit my first permissions hiccup of having separate IDs for Infrastructure and Database. I happened to have the media files under /home/oracle11r2, not under /home/oragrid. Once I changed the group on /home/oracle11r2 to oinstall and granted group read/execute on it, oragrid could run the installer.
  • Once OUI started, I selected "Install and Configure Grid Infrastructure for a Standalone Server"
  • I picked English as my language.
  • Next I was prompted to set up DISK GROUPS. Let's review...
ASM Disk Groups are the logical grouping of physical disks, representing the basic unit of storage that ASM presents either to the database for use, or to the operating system, via ACFS. Each Disk Group has a redundancy level, which sets the level of mirroring that ASM manages for you. High redundancy keeps 3 or more copies of the data within the disk group. Normal redundancy keeps 2 copies of data within the disk group (basic mirroring).
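The OUI screen drives this step for you, but the same disk group can be created by hand, including explicit failure groups. A hedged sketch (the names are mine, not a recommendation):

```sh
sqlplus / as sysasm <<'EOF'
-- NORMAL redundancy with two explicit failure groups; ASM mirrors each
-- extent across failure groups, so losing all of fg1 loses no data
CREATE DISKGROUP asmdata NORMAL REDUNDANCY
  FAILGROUP fg1 DISK 'ORCL:DATA1', 'ORCL:DATA2', 'ORCL:DATA3'
  FAILGROUP fg2 DISK 'ORCL:DATA4', 'ORCL:DATA5', 'ORCL:DATA6';
EOF
```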
  • First, I named my disk group ASMDATA, selected Normal redundancy, and selected all 6 devices as Candidate Disks. The failure groups for redundancy may be defined later? We'll find out.
  • I set passwords for the SYS and ASMSNMP accounts. Let's review again...
ASM uses Oracle Database technology to keep track of the storage it manages. ASM is essentially an instance of the Oracle Database, specially tailored for managing storage. Its overhead on the system is small compared to an Oracle Database serving an application. Many of the concepts of maintaining an Oracle database instance carry over to managing ASM, such as the SYS password in this step. Treat the ASM SYS and other passwords as securely as you do SYS for your Oracle Database.
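Because the ASM instance speaks SQL, you can poke at it the same way you would a database instance. A sketch, assuming the grid home used in this install:

```sh
export ORACLE_SID=+ASM
export ORACLE_HOME=/u01/app/oragrid/product/11.2.0/grid
$ORACLE_HOME/bin/sqlplus / as sysasm <<'EOF'
SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;
EOF
```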
  • I assigned all the authentication groups to "oinstall" (default).
  • Oracle Base = /u01/app/oragrid (default)
  • Software Location = /u01/app/oragrid/product/11.2.0/grid (default, Very OFA-ish, no?)
  • Inventory Directory = /u01/app/oraInventory (default)
  • OUI checked for installation requirements, and here is where my home machine starts to fall short. The Physical Memory check failed... 1.5GB was required, and my system reports 1.48GB. It also did not recognize my swap space.
  • There was a package the installer expected (pdksh-5.2.14) which I found, downloaded, and tried to install. It conflicted with the ksh and bash already installed, so I left things as-is.
  • The max file descriptor limit was set too low. OUI was able to generate a fixup script to take care of this one.
  • I told OUI to ignore the remaining prerequisite checks and continue anyway... The goal is to learn so if this screws something up I will learn even more because of it.
  • I elected to save a response file. I'm not setting up a cluster, but this will be helpful to study, even on this standalone install.
  • Installation continued without incident, and I ran the 2 scripts as root when prompted.
  • ps -fu oragrid shows all the processes I would expect to see from an Oracle database instance
  • Pulling up the System Monitor, I see a little bit of load and memory usage, but nothing that concerns me.
  • /etc/mtab now contains a mount point for oracleasmfs
  • I modified /etc/oratab to automatically start the instance "+ASM" on startup.
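A few quick health checks I leaned on at this point (srvctl and asmcmd live in the grid home's bin directory; asmcmd is ASM's command-line shell):

```sh
ps -fu oragrid          # the ASM background processes (pmon, smon, ...)
srvctl status asm       # Oracle Restart's view of the ASM resource
srvctl status listener  # the listener now lives in the grid home too
asmcmd lsdg             # disk groups with total and usable space
```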


There are several interesting things in the post-install steps.

Of particular interest:
  • In a real system, patching is important. The first post-install instruction is to apply patches. Do so. I'm not doing that here, as it is out of scope for this exercise; patching will be tried later.
  • The 11gR2 Grid Infrastructure includes the listener. When managing the listener the ORACLE_HOME for the ASM/Grid Infrastructure installation must be used. I have to say I like this. I was never a fan of multiple listeners on a single node. I understood the reasoning of the DBAs who chose to run that way, but disagreed on the need.
  • The 11gR2 Grid Infrastructure will support databases as old as 9.2.
  • Back up the orainstRoot.sh script in /u01/app/oraInventory for future reference and troubleshooting. OK, done.
  • We set up a single disk group during the installation. For installing 11gR2 we will want another disk group for the Flash Recovery Area. I'm already planning to pare down the 6 disks to maybe 4. We still have 4 partitions which we can use for an ACFS playground.
  • The ASM utility binaries do not seem to be in the location expected in the postinstallation steps. This symbolic link is a helpful shortcut: ln -s /u01/app/oragrid/product/11.2.0/grid/bin /u01/app/11.2.0/grid/bin
  • Before adding new disk groups, I need to enable the other partitions for ASM, with oracleasm (as root). Done. data7-data10 now exist.
  • Logged in as oragrid, I go to /u01/app/11.2.0/grid/bin and run ./asmca to manage disk groups.
  • I go to create a new group. The HEADER STATUS column shows the state of all the devices available. I see my 4 new partitions as PROVISIONED, and if I click "Show All" under Member Disks, I see my other 6 have a status of MEMBER.
  • I created a disk group FRA from DATA7 and DATA8, with normal redundancy.
  • I created a disk group FS from DATA9 and DATA10, with normal redundancy.
  • We notice on each of these that the Usable capacity is half the Size: normal redundancy is mirroring the data.


At this point, ASM has been successfully installed, and we can add and remove disks from disk groups at will. In the next installments we are ready to add a Database to the mix, try out the new-fangled File System, and, oh yeah, shut down and start everything up cleanly. I have 8 or 9 hours invested at this point, and that includes installing the operating system and writing this blog along the way.
