A brief background on In-Memory OLTP Acceleration (RAM Cache)
In this brief blog post I will present a feature introduced in Exadata Storage Server image 18.1. You have probably heard a lot about the new Exadata X8M and the Intel Optane DC Persistent Memory (PMem) in its storage servers. This new architecture allows database processes running on the database servers to remotely read and write from/to the PMem cards in the storage servers through a protocol called Remote Direct Memory Access (RDMA). RDMA has existed in the Exadata architecture since the beginning, so this is, by far, not new:
RDMA was introduced to Exadata with InfiniBand and is a foundational part of Exadata's high-performance architecture. (Introducing Exadata X8M: In-Memory Performance with All the Benefits of Shared Storage for both OLTP and Analytics)
What you probably do not know is that there is a feature called In-Memory OLTP Acceleration (or simply RAM Cache) which was introduced in Exadata Storage Server image 18.1.0.0.0 when the Exadata X7 was released. That feature allows read access to the Storage Server RAM on any Exadata system (X6 or higher) running that version or above. It is not the same thing as PMem, since RAM is not persistent, but it is very cool indeed because you can take advantage of the RAM available in the Storage Servers.
Modern generations of Exadata Storage Servers come with a lot of RAM available. X8 and X7 come with 192GB of RAM by default while X6 used to come with 128GB of RAM.
Unfortunately this feature is only available on X6 or newer Storage Servers, and these are the requirements:
Oracle Exadata System Software 18c (18.1.0)
Oracle Exadata Storage Server X6, X7 or X8
Oracle Database version 12.2.0.1 with the April 2018 DBRU, or 18.1 or higher
That large amount of RAM is rarely fully utilized by the Exadata Storage Servers. With the RAM Cache feature you can therefore use all or part of the available RAM in the Storage Servers to extend your database buffer cache, for read operations, into the Storage Servers' RAM.
In the new Exadata X8M, read I/O latency is under 19µs thanks to the PMem cache combined with the RoCE network. In the Exadata X7/X8, read latency with RAM Cache using RDMA over InfiniBand is around 100µs; without RAM Cache, reading directly from the Flash Cache, it goes up to 250µs.
For OLTP workloads Exadata uniquely implements In-Memory OLTP Acceleration. This feature utilizes the memory installed in Exadata Storage Servers as an extension of the memory cache (buffer cache) on database servers. Specialized algorithms transfer data between the cache on database servers and the in-memory cache on storage servers. This reduces the I/O latency to 100µs for all I/Os served from the in-memory cache. Exadata uniquely keeps only one in-memory copy of data across database and storage servers, avoiding the memory wastage of caching the same block multiple times. This greatly improves both efficiency and capacity and is only possible because of Exadata's unique end-to-end integration.
How I set it up in the Exadata Storage Servers
As I mentioned previously, recent generations of Exadata Storage Servers come with a lot of RAM which is normally not fully used by the cellsrv services and features. Having said that, what I normally take into consideration is the amount of free memory (RAM) in the Storage Servers. I pick the Storage Server that is using the most RAM (i.e., the one with the least free memory) and then do the math: freemem * 0.7 = RAM Cache value. In other words, I set the RAM Cache to 70% of the free memory of the busiest Storage Server. I do that because the Storage Servers might need more memory for storage indexes or something else in the future, so avoid using all the free memory for the RAM Cache.
Let's say my busiest Storage Server has 73GB of free memory. Applying the formula we get: 73 * 0.7 = 51.1GB.
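As a quick sketch, the sizing rule above can be scripted; the 73GB figure is just the example value from this environment:

```shell
# Sizing rule: ramCacheMaxSize = 70% of the free memory on the busiest cell.
free_gb=73   # example: free memory (GB) on the Storage Server using the most RAM
awk -v f="$free_gb" 'BEGIN{printf "Suggested ramCacheMaxSize: %.1fG\n", f*0.7}'
```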
The Exadata architecture was built to spread the workload evenly across the whole storage grid, so you will notice that the Storage Servers use pretty much the same amount of memory (RAM).
Here comes the action and fun. To implement this we must first check how much memory is available in our Storage Servers by running this from dcli (make sure your cell_group file is up-to-date):
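A sketch of that check, assuming the usual dcli setup (a cell_group file listing all Storage Servers and passwordless SSH as root):

```shell
dcli -g ~/cell_group -l root "free -g | grep ^Mem"
```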
In my case cel01 is the Storage Server using the most memory. Let's check some details of this Storage Server:
From the output below we can see that the parameter ramCacheMode is set to Auto while ramCacheMaxSize and ramCacheSize are 0. These are the default values and mean the RAM Cache feature is not enabled.
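Something along these lines, from CellCLI on the cell (the output shown is how the defaults typically look):

```
CellCLI> LIST CELL ATTRIBUTES ramCacheMode, ramCacheMaxSize, ramCacheSize
         Auto    0       0
```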
This Storage Server has ~73GB of free/available memory (RAM):
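For example, on the cell's operating system (columns trimmed for readability; the numbers are illustrative):

```shell
[root@cel01 ~]# free -g
              total        used        free
Mem:            187         114          73
```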
Now we can enable the RAM Cache feature by changing the parameter ramCacheMode to On:
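The change is a single ALTER CELL from CellCLI, run on each cell (or via dcli):

```
CellCLI> ALTER CELL ramCacheMode=on
Cell cel01 successfully altered
```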
Immediately after the change we check the free/available memory (RAM) on the Storage Server operating system:
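Running the same check again on the cell (values illustrative):

```shell
[root@cel01 ~]# free -g
              total        used        free
Mem:            187         115          72
```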
Notice that not much has changed. That is because the memory is available for the Storage Server to use for RAM Cache, but it is not allocated as soon as we enable the feature.
We can see that only 10GB was defined in the ramCacheMaxSize and ramCacheSize parameters:
To easily notice that we can run the following query from cellcli:
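Something like the following, assuming CellCLI's attribute listing (the values shown are from my run and will differ in yours):

```
CellCLI> LIST CELL ATTRIBUTES ramCacheMaxSize, ramCacheSize
         10G     0
```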
To reduce the memory used by the RAM Cache feature we can simply change the ramCacheMaxSize parameter:
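For example (5G here is just an illustrative value):

```
CellCLI> ALTER CELL ramCacheMaxSize=5G
Cell cel01 successfully altered
```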
If we check the values of the RAM Cache parameters we will see this:
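Assuming the max size was reduced to an illustrative 5G, the attributes would look roughly like this:

```
CellCLI> LIST CELL ATTRIBUTES ramCacheMode, ramCacheMaxSize, ramCacheSize
         On      5G      0
```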
As soon as the database blocks start being copied to the RAM Cache we will see the ramCacheSize value increasing:
Increasing a bit more:
Checking again: it takes a while for cellsrv to populate the RAM Cache with blocks copied from the Flash Cache:
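A simple way to watch the population from a database server is to poll the attribute via dcli (cel01 and the interval are just examples):

```shell
while true; do
  dcli -c cel01 -l root "cellcli -e list cell attributes ramCacheSize"
  sleep 60
done
```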
Setting it back to Auto clears everything again:
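That is just the mode attribute again:

```
CellCLI> ALTER CELL ramCacheMode=auto
Cell cel01 successfully altered
```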
Now we will adjust to the value we got from our calculation of 70% of the free memory:
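A sketch of the final setting, rounding the 51.1GB from the calculation down to 51G (check whether your image version accepts fractional sizes before using a decimal value):

```
CellCLI> ALTER CELL ramCacheMode=on
CellCLI> ALTER CELL ramCacheMaxSize=51G
```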
With that configuration in place, in case we want to be notified when the Storage Server is running out of memory, we can quickly create a threshold on the Cell Memory Utilization (CL_MEMUT) metric to alert us when memory utilization goes beyond 95%:
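A sketch of such a threshold using CellCLI's CREATE THRESHOLD on the CL_MEMUT metric (the occurrences and observation values are illustrative; tune them to your alerting needs):

```
CellCLI> CREATE THRESHOLD CL_MEMUT comparison='>', critical=95, occurrences=5, observation=10
```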
To sum up, RAM Cache, originally named In-Memory OLTP Acceleration, is a feature available only on Oracle Exadata Database Machine X6 or higher running at least the 18.1 image, and only for Oracle Database 12.2.0.1 with the April 2018 DBRU or higher. It lets us extend the database buffer cache into the free RAM of the Storage Servers, but only for read operations, since RAM is not persistent. For persistent memory, Oracle introduced the Persistent Memory Cache with the Oracle Exadata Database Machine X8M.
It is worth mentioning that a database will only leverage the RAM Cache when there is pressure on the database buffer cache. The data blocks present in the RAM Cache are persistently stored in the Storage Server's Flash Cache, so when a server process on the database side requests a block that is no longer in the database buffer cache but is in the RAM Cache, cellsrv sends that block from the RAM Cache to the buffer cache for the server process to read, which is faster than reading it from the Flash Cache or from disk.
I consider the In-Memory OLTP Acceleration feature just a plus for our Exadata environment, not a magic solution. Since we almost always see free memory in the Storage Servers, it is a way of using resources we have already paid for.
Happy caching! See you next time!