Some time ago I came across a discussion on OTN about ASM and thin-provisioned volumes. The original poster described the problem as follows:
Hi there, sorry for the x-post from database-general, but it was suggested that I do so. Anyhow, we've got 11g (11.1.0.7 with the 6851110 ASM patch recently applied) running on OEL 5 x86_64, with ASM connected to a raw, thin-provisioned iSCSI volume partitioned for +DATA and +FRA. In every case where we do so, the SAN device reports within a few weeks that the whole volume has been allocated, even though the database (configured with autoextend on) holds only about one tenth of the available space on the device. In systems terms this means that ASM is somehow writing to nearly every block on the drive, if only momentarily.
In the original thread, there was speculation that a process of indexing AUs had led to the dirtying of the whole volume, but this would make more sense if the whole disk had been allocated immediately rather than over the course of a few weeks. My question is: what else could account for this behavior, and what steps can I take to help ensure that ASM behaves correctly on a thin-provisioned volume? (By "correctly" I mean: writes contiguous blocks of data and doesn't dirty the whole thing.)
Yesterday I had some spare time available to dig a little bit deeper. I had a Windows-based system at hand and performed some small tests.
As storage I used an OpenSolaris system with ZFS thin provisioning. On the database and ASM side I used an 11g R2 database with 11g R2 ASM running on Windows. I created two LUNs and exported them via iSCSI. On the ASM side I formed a disk group with external redundancy out of them and created one bigfile tablespace with approximately 15 GB total size in it.
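For reference, the setup can be sketched roughly like this. The pool and volume names are taken from the zfs output below; the iSCSI export uses the legacy shareiscsi property of older OpenSolaris builds, and the ASM disk paths and tablespace name are purely illustrative placeholders, not a transcript of what I actually typed:

```shell
# Create two 15 GB thin-provisioned (sparse) ZFS volumes
zfs create -s -V 15G pool1/iscsi-racwin-temp05
zfs create -s -V 15G pool1/iscsi-racwin-temp06

# Export them via iSCSI (legacy shareiscsi property; newer
# OpenSolaris builds use COMSTAR instead)
zfs set shareiscsi=on pool1/iscsi-racwin-temp05
zfs set shareiscsi=on pool1/iscsi-racwin-temp06

# On the ASM instance: one disk group with external redundancy
# (Windows disk labels shown here are examples only)
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '\\.\ORCLDISKDATA0', '\\.\ORCLDISKDATA1';
EOF

# On the database instance: one ~15 GB bigfile tablespace
sqlplus / as sysdba <<'EOF'
CREATE BIGFILE TABLESPACE testts DATAFILE '+DATA' SIZE 15G;
EOF
```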
After disk group and tablespace creation, the storage system reported the LUNs as follows (output of zfs get volsize,usedbydataset):
NAME PROPERTY VALUE SOURCE
pool1/iscsi-racwin-temp05 volsize 15G local
pool1/iscsi-racwin-temp05 usedbydataset 7.45G -
pool1/iscsi-racwin-temp06 volsize 15G local
pool1/iscsi-racwin-temp06 usedbydataset 7.45G -
The total reported size ("volsize") is 15 GB per LUN, while 7.45 GB are currently allocated ("usedbydataset") on each. That is the expected result of creating a 15 GB tablespace in the disk group: with external redundancy, ASM stripes the datafile evenly across the two disks, so each LUN holds roughly half of it.
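As a quick sanity check on the numbers, striping a 15 GB file across two disks should allocate about 7.5 GB per LUN, which matches the 7.45 GB reported above (the small difference being file headers and ASM metadata):

```shell
# 15 GB tablespace striped evenly across 2 ASM disks
tablespace_mb=$((15 * 1024))
disks=2
per_lun_mb=$((tablespace_mb / disks))
echo "${per_lun_mb} MB per LUN"   # 7680 MB, i.e. about 7.5 GB
```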
During the past day and night I ran a simple test script which imported a schema (approximately 12 GB total size) and dropped it again afterwards, in an endless loop.
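The loop was essentially of the following shape. This is only a sketch: the credentials, schema name, directory object, and dump file are placeholders, not the ones from my test system:

```shell
# Endless import/drop cycle against the test database
# (all names below are placeholders)
while true; do
  # Import the ~12 GB test schema with Data Pump
  impdp system/password schemas=TESTUSER \
        directory=DATA_PUMP_DIR dumpfile=testuser.dmp

  # Drop the schema again so the next iteration starts clean
  sqlplus -s / as sysdba <<'EOF'
DROP USER testuser CASCADE;
EOF
done
```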
After running for more than 24 hours, the thin-provisioned disks looked like this:
NAME PROPERTY VALUE SOURCE
pool1/iscsi-racwin-temp05 volsize 15G local
pool1/iscsi-racwin-temp05 usedbydataset 7.47G -
pool1/iscsi-racwin-temp06 volsize 15G local
pool1/iscsi-racwin-temp06 usedbydataset 7.47G -
As you can see, there is only an extremely small growth in allocated size (from 7.45 GB to 7.47 GB). I observed this growth shortly after starting the very first import; subsequent imports did not increase the actually allocated volume size.
At this point of my investigation I cannot confirm the poster's claim, but I admit I changed several aspects of the test case. If we rule out the storage as the source of this behavior, it may be that ASM behaves differently in 11g R2 than in 11g R1. The patch applied by the poster and the different operating system might also change the behavior.
If I find some spare time again, I will try to reproduce the poster's environment in detail…