Friday, December 18, 2020

Oracle Long Term and Innovation Releases.... And... Whatever Happened to Oracle Database 20c?

This post discusses how Oracle handles Oracle Database releases. Oracle's database release model can be confusing and even frustrating at times. It can also lead to a lot of questions about what version you should be on, and what version you should target to upgrade to in the future.

Well, let me see if I can help you with these questions.

The Oracle Database Release Model

Oracle database software releases are now categorized in three different ways:

1.      Long Term Release

2.      Innovation Release

3.      Preview Releases

Let me explain what these categorizations mean.

Long Term Releases

Long Term Releases are for you if you are primarily interested in:

·         A stable database environment

·         Longer periods of Premier and Extended Support

Designed for the more traditional production database platform, a Long Term Release database will offer five years of Premier Support (assuming no extensions). This will be followed by an additional three years of Extended Support (again, assuming no extensions). This longer support model helps to relieve the pressure to constantly upgrade the database purely for support purposes.

However, this comes with a price. Long Term Releases will roll in many of the new features introduced by previous Innovation Releases (which I will discuss in a second). But since the timeframe between Long Term Releases is longer, the introduction of newer features will also be fewer and farther between in these databases. For many organizations, this will probably be a more than acceptable trade-off.

Currently, Oracle classifies database version 19c as a Long Term Release. Premier Support for 19c is slated to end 4/30/2024, with Extended Support slated to end 4/30/2027. I expect that we will see Long Term Releases coming every three to five years at this point.

Innovation Releases

In between Long Term Releases of the Oracle Database, Oracle will be releasing additional versions of the Oracle Database product called Innovation Releases.

Innovation Releases trade off database stability for the introduction of new features. The support paradigm for Innovation Releases is quite different from that of a Long Term Release. Oracle offers two years of Premier Support for an Innovation Release, without any offering for Extended Support. As a result, if you are running on an Innovation Release, you're going to be forced to upgrade much more often.

Currently, Oracle considers 18c and 21c to be Innovation Releases. Oracle Database 18c will have support available until 6/30/2021. Oracle Database 21c will have support available until 6/30/2023.

Note that Oracle Database 21c (as of the time of this writing) is not generally available on-premises. It became available in the cloud in December 2020. I would not expect to see Oracle Database 21c available on-premises until sometime in 2021. I would also expect that we will see yearly Innovation Releases.

Preview Releases

I decided to throw in the Preview Release here even though Oracle does not clearly define what a Preview Release really is. At least, nowhere that I could find. 

It seems that a Preview Release is a "maybe, just maybe we will release it" affair. Oracle released Oracle Database 20c into the cloud and had even published estimated release dates for on-prem back in early 2020. However, Oracle has since determined that Oracle Database 20c will not be released for on-prem at all. I think the messaging here is that if you're going to muck around with a Preview Release, you do so at your own peril. My understanding is that the 20c Preview version is to be totally removed from the cloud (if it is not already).

The upshot here: be careful with Preview Releases. Don't count on them becoming generally available. Also, I would not count on being able to upgrade any Preview Release database to a future database version, and I'm not even sure I'd count on utilities like Oracle Data Pump working between this version and other database versions.

What Does This Mean to Me?

What are the considerations when deciding what release you are going to land on? Here are a few thoughts:

1.       Obviously, the application you will be running on the database may drive the decision. If it requires a specific version of the database, then the decision is made for you.

2.       In my mind, using a Long Term Release version of the database makes the most sense for most production databases for these reasons:

a.       Stability – Most organizations prefer stability over leading-edge feature sets, especially in production.

b.       Reduced costs – There are significant costs associated with any database upgrade project. Long Term Releases reduce the number of support-driven upgrade projects.

c.       Solid new feature sets – There will be a slower adoption timeline for new feature sets when using the Long Term Release versions of the database. However, given that a new Long Term Release will be available every three to five years, you won't have to wait that long for that great new feature to become available. The greatest benefit is that the new feature will have had time to stabilize.
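As a starting point for any of this, it helps to know exactly which release and patch level a given database is running today. A quick sanity check from SQL*Plus (note that the VERSION_FULL column only exists in 18c and later):

-- Works on any supported release:
select banner from v$version;

-- 18c and later report the full patch-level version here:
select version_full from v$instance;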

What About Oracle Database 20c?

It seems that Oracle Database 20c has surrendered to COVID. Oracle has stated the following:

Oracle is mindful of the unprecedented economic and business disruption our customers faced in 2020. We decided to not promote our Database 20c Preview version to General Availability. Instead, we incorporated all the 20c functionality into the 21c release and made 21c available in our Autonomous Database Free Tier so that all can test drive the new features and enhancements for free.

This seems to make it clear that Oracle Database 20c is the release that almost was. As I stated above, it was always touted as a "Preview Release", whatever that officially means. However, for a long time Oracle stated its intention to release it to GA, and the decision not to do so has caused a bit of confusion.

Friday, November 20, 2020

ORA-27301, Oracle Database 12.2 and later running on Linux 7 and later...

There is always something to learn, isn't there? A few fun notes to share with you from some support-related adventures we have had this week….

RHEL 7 and the MTU size of the lo Interface. Database Outages…

Ran into an issue this week that caused an outage of one node of a 2-node Oracle Database 12.2 RAC cluster. The database started to show signs of memory exhaustion. Processes could not be started and other processes were failing. Error messages were appearing in the alert logs faster than hotdogs being consumed in an eating contest.

This database had been experiencing heavy loads, as they had been doing significant load testing on it over a period of several weeks. They had also performed some high-intensity operations such as mass data deletes and highly parallelized index rebuilds. Looking at OEM, we found periods where they had clearly been CPU bound – but there was no indication that they were anywhere near memory starvation.

I kept hearing Margo say, "... and why is the carpet all wet Todd?". 

"I don't KNOW Margo!"

Researching the problem, I discovered that the answer lay not in memory exhaustion, but in memory fragmentation.

What the MTU?

Upon some more investigation, we discovered a known problem involving RHEL 7 and the Oracle database that has to do with severely fragmented memory. This problem impacts both new and existing processes:

  • New processes fail their initial memory requests at startup because there is no contiguous chunk of memory available to satisfy the allocation.
  • Existing processes fail when they try to allocate additional memory and no contiguous chunk is available to satisfy the request.

It turns out that this problem has to do with the MTU size of the lo interface. By default, the MTU size of this interface is set to 65536. If you look in Bugzilla, it appears that this problem has been around in the Linux kernel for some time, but that it did not impact the Oracle database until RHEL 7. I can't find any information that indicates what changed, but I'd guess something in the Linux kernel changed in RHEL 7 that now impacts Oracle.

Reducing the MTU?

The solution to this problem is to reduce the MTU size on the lo interface from the default of 65536 to 16384, which is the recommended value for RHEL 7 running the Oracle database.

You can see the current MTU size for lo by running the netstat command as seen here:

[root@localhost ~]# netstat -i

Kernel Interface table
Iface             MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0             1500    50796      0      0 0         35754      0      0      0 BMRU
lo              65536   199576      0      0 0        199576      0      0      0 LRU
virbr0           1500        0      0      0 0             0      0      0      0 BMU

Use the ifconfig command to change the MTU size of the lo interface as seen here:

[root@localhost ~]# ifconfig lo mtu 16384

And we see the MTU size has changed:

[root@localhost ~]# netstat -i
Kernel Interface table
Iface             MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0             1500    50796      0      0 0         35754      0      0      0 BMRU
lo              16384   199590      0      0 0        199590      0      0      0 LRU
virbr0           1500        0      0      0 0             0      0      0      0 BMU

A couple of notes: you will need root privileges to execute the ifconfig command to change the interface setting. The netstat -i command will work from most Linux accounts, such as the grid or oracle software-owning accounts.
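One more note: a change made with ifconfig does not survive a reboot. Here is a hedged sketch of how you might make it persistent on RHEL 7; this assumes the stock initscripts-style network configuration, so verify against your own environment before rolling it out:

# The iproute2 equivalent of the ifconfig command above:
[root@localhost ~]# ip link set dev lo mtu 16384

# Persist the setting across reboots by adding an MTU directive
# to the loopback interface configuration file:
[root@localhost ~]# echo "MTU=16384" >> /etc/sysconfig/network-scripts/ifcfg-lo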

How to Tell if there is Fragmentation of Memory

So, you might wonder… "How do I tell if there is memory fragmentation? Can I monitor for it?" I'm glad you asked! It actually leads to my next topic – the Magic SysRq key in Linux!

The Magic SysRq Key in Linux – Do you know about it?

So, how do you determine the extent of memory fragmentation on Linux 7? Well, the Magic SysRq key functionality in Linux came to the rescue! Not aware of this functionality? Well, now you are!

You can read about the SysRq functionality here! You can read how I used it to look at memory fragmentation in Linux here.

Here is an example of dumping the current memory information, including fragmentation. The results will be in /var/log/messages:

[root@localhost ~]# echo m > /proc/sysrq-trigger
[root@localhost ~]# dmesg

I won't fill this post with the lengthy output from dmesg, but it will provide you with the memory distribution on a line that looks like this:

[2732874.679539] Node 0 Normal: 15149*4kB (UEM) 10237*8kB (UEM) 3244*16kB (MR) 1*32kB (R) 2*64kB (R) 2*128kB (R) 1*256kB (R) 0*512kB 0*1024kB 1*2048kB (R) 0*4096kB = 197116kB

Note the large number of 4kB chunks here, and the lack of bigger chunk sizes…. This might indicate memory fragmentation on the system. Essentially, the problem is that you have a lot of small chunks of memory available, but few, if any, larger chunks available to service larger memory allocations.
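As a side note, if you want to keep an eye on fragmentation without using SysRq, the kernel exposes the same per-order free-list counts in /proc/buddyinfo, which can be read at any time:

# Each column after the zone name is the count of free blocks of order
# 0 through 10 (4kB, 8kB, 16kB ... 4MB on x86_64). Lots of free blocks
# in the low orders but zeroes in the high orders suggests fragmentation.
[root@localhost ~]# cat /proc/buddyinfo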

One possible solution is to force some memory compaction:

#force memory compaction
# echo 1 > /proc/sys/vm/compact_memory

(see https://www.kernel.org/doc/Documentation/sysctl/vm.txt for more on using this command to force memory compaction).

Note that once you get to the point that you have started generating ORA-27301 errors, memory compaction probably won't solve the issue. You're probably going to have to restart the impacted database servers to get back in business quickly. That's right, the server. It's probably not going to be enough to just stop and restart the cluster services.

Anyway…. Hope you find this useful!

Tuesday, October 13, 2020

Oracle support for GDA2020

Australia is moving from the GDA94 datum to the new GDA2020 datum. To support the new datum, you can run the following scripts in Oracle Database versions 11.2, 12.1, and 12.2.

These scripts must be run as MDSYS. You may need to unlock the MDSYS account to use it, and you might also need to change the password - both of which are fine. It's a good idea to lock the account again after running these scripts.
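For reference, the unlock/lock dance looks something like this (the password here is a placeholder; substitute your own):

-- As a DBA, before running the scripts:
alter user mdsys account unlock;
alter user mdsys identified by "SomeTemporaryPassword1";

-- ... connect as MDSYS, run the scripts, then lock the account back up:
alter user mdsys account lock;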

Legal note: I take no responsibility for these scripts. I've not tested them, I have not used them, and I don't guarantee that they work in any way.... they are here just for your amusement and perusal, and to give you some guidelines on using such scripts...

First, if you're running 11.2.0.4, you will need to run this script:

insert into "MDSYS"."SDO_COORD_OPS" values
(17359,'SA Lambert (EPSG OP 17359)','CONVERSION',NULL,NULL,NULL,NULL,9802,NULL,NULL,
'South Australia Department for Environment and Heritage, Spatial Information Committee (SICOM); http://www.environment.sa.gov.au/mapland/sicom/sicom/lambert.html',
'OGP',1,'FALSE',NULL,1,1,1);

Then, to update the spatial reference data... run these scripts....

update sdo_datums
set    shift_x = 0.06155,
       shift_y = -0.01087,
       shift_z = -0.04019,
       rotate_x = 0.0394924,
       rotate_y = 0.0327221,
       rotate_z = 0.0328979,
       scale_adjust = -0.009994
where  datum_id = 6283;

INSERT INTO SDO_DATUMS(DATUM_ID,DATUM_NAME,DATUM_TYPE,ELLIPSOID_ID,PRIME_MERIDIAN_ID,INFORMATION_SOURCE,
DATA_SOURCE,IS_LEGACY)
VALUES
(1168,'Geocentric Datum of Australia 2020','GEODETIC',7019,8901,'Geodesy Section, Geoscience Australia.',
'IOGP','FALSE' );

commit;

INSERT INTO SDO_COORD_REF_SYSTEM
(SRID,COORD_REF_SYS_NAME,COORD_REF_SYS_KIND,COORD_SYS_ID,DATUM_ID,GEOG_CRS_DATUM_ID,
INFORMATION_SOURCE,DATA_SOURCE,IS_LEGACY,IS_VALID,SUPPORTS_SDO_GEOMETRY)
VALUES
(7843,'GDA2020','GEOGRAPHIC3D',6423,1168,1168,'Geodesy Section, Geoscience Australia.','IOGP','FALSE',
'TRUE','TRUE');

INSERT INTO SDO_COORD_REF_SYSTEM
(SRID,COORD_REF_SYS_NAME,COORD_REF_SYS_KIND,COORD_SYS_ID,DATUM_ID,GEOG_CRS_DATUM_ID,
INFORMATION_SOURCE,DATA_SOURCE,IS_LEGACY,IS_VALID,SUPPORTS_SDO_GEOMETRY)
VALUES
(7844,'GDA2020','GEOGRAPHIC2D',6422,1168,1168,'Geodesy Section, Geoscience Australia.','IOGP','FALSE',
'TRUE','TRUE');

insert into MDSYS.SDO_COORD_REF_SYSTEM
(SRID,COORD_REF_SYS_NAME,COORD_REF_SYS_KIND,COORD_SYS_ID,DATUM_ID,GEOG_CRS_DATUM_ID,SOURCE_GEOG_SRID,
PROJECTION_CONV_ID,CMPD_HORIZ_SRID,CMPD_VERT_SRID,INFORMATION_SOURCE,DATA_SOURCE,IS_LEGACY,LEGACY_CODE,LEGACY_WKTEXT,LEGACY_CS_BOUNDS)
VALUES
(7852,'GDA2020 / MGA zone 52','PROJECTED',4400,NULL,1168,7844,17352,NULL,NULL,'Geodesy Section, Geoscience Australia.',
'IOGP','FALSE',NULL,NULL,NULL);

insert into MDSYS.SDO_COORD_REF_SYSTEM
(SRID,COORD_REF_SYS_NAME,COORD_REF_SYS_KIND,COORD_SYS_ID,DATUM_ID,GEOG_CRS_DATUM_ID,SOURCE_GEOG_SRID,PROJECTION_CONV_ID,
CMPD_HORIZ_SRID,CMPD_VERT_SRID,INFORMATION_SOURCE,DATA_SOURCE,IS_LEGACY,LEGACY_CODE,LEGACY_WKTEXT,LEGACY_CS_BOUNDS)
VALUES
(7853,'GDA2020 / MGA zone 53','PROJECTED',4400,NULL,1168,7844,17353,NULL,NULL,'Geodesy Section, Geoscience Australia.',
'IOGP','FALSE',NULL,NULL,NULL);

call sdo_cs.update_wkts_for_epsg_datum(datum_id => 6283);

insert into MDSYS.SDO_COORD_REF_SYSTEM
(SRID, COORD_REF_SYS_NAME,COORD_REF_SYS_KIND,COORD_SYS_ID, DATUM_ID,GEOG_CRS_DATUM_ID,SOURCE_GEOG_SRID,PROJECTION_CONV_ID,CMPD_HORIZ_SRID,CMPD_VERT_SRID,INFORMATION_SOURCE,DATA_SOURCE,IS_LEGACY,
LEGACY_CODE,LEGACY_WKTEXT,LEGACY_CS_BOUNDS)
VALUES
(7854,'GDA2020 / MGA zone 54','PROJECTED',4400,NULL,1168,7844,17354,NULL,NULL,'Geodesy Section, Geoscience Australia.',
'IOGP','FALSE',NULL,NULL,NULL);

insert into MDSYS.SDO_COORD_REF_SYSTEM
(SRID,COORD_REF_SYS_NAME,COORD_REF_SYS_KIND,COORD_SYS_ID,DATUM_ID,GEOG_CRS_DATUM_ID,SOURCE_GEOG_SRID,PROJECTION_CONV_ID,CMPD_HORIZ_SRID,
CMPD_VERT_SRID,INFORMATION_SOURCE,DATA_SOURCE,IS_LEGACY,LEGACY_CODE,LEGACY_WKTEXT,LEGACY_CS_BOUNDS)
VALUES
(8059,'GDA2020 / SA Lambert','PROJECTED',4400,NULL,1168,7844,17359,NULL,NULL,'Development Division, South Australia Department of Planning, Transport and Infrastructure.','IOGP','FALSE',NULL,NULL,NULL);

commit;

and to test the scripts...

-- Test:
select sdo_cs.transform(sdo_geometry(3001, 4939, sdo_point_type(133.88551329, -23.67012389, 603.3466), null, null), 7843) from dual;
select sdo_cs.transform(sdo_geometry(3001, 4939, sdo_point_type(129.8102423, -17.78524706, 450.404), null, null), 7843) from dual;

select srid, wktext from cs_srs where srid in (7852, 7853) order by srid;

select sdo_cs.transform(sdo_geometry(3001, 7852, sdo_point_type(500000, 10000000, null), null, null), 7844) from dual;
select sdo_cs.transform(sdo_geometry(3001, 7853, sdo_point_type(500000, 10000000, null), null, null), 7844) from dual;

Friday, September 18, 2020

RMAN Cross Check Command Against the MML Layer

So, we have had a couple of cases open recently where we had RMAN backup set piece files present on the remote MML layer storage, but there was no record of those files in the control file or the Recovery Catalog. This leads to a slight issue when recovering databases: you need a way to catalog files on the remote media back into the Recovery Catalog or the database control file.

If the backup set piece files are on disk, this is easy with the RMAN catalog command. However, if you read through the documentation (prior to 19c), it's unclear how to catalog backup piece files if you are using the MML layer. Fortunately, there is an undocumented way to catalog MML backup set piece files using the catalog command!
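For comparison, the on-disk case really is that easy; the file and directory names below are made up for illustration:

# Catalog a single backup piece on disk:
catalog backuppiece '/u01/app/oracle/backup/o1_mf_annnn_TAG_1_1.bkp';

# Or catalog everything RMAN finds under a directory:
catalog start with '/u01/app/oracle/backup/';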

In Oracle Database 19c, Oracle started to document the availability of the device type parameter in the RMAN catalog command. This parameter, followed by the backuppiece parameter, allows you to catalog specific backup set pieces that exist in the MML layer. The only catch is that you have to know the backup set piece name first, which you will either need to get from backup logs or by going directly to the MML catalog to find the name of the backup set piece you want.

While this parameter is documented in 19c, it appears to have been available in an undocumented manner since Oracle Database version 10.1.

For example, let's say that you needed to catalog some missing MML files. You might do something like the following:

configure channel device type 'SBT_TAPE'
parms 'SBT_LIBRARY=/app/oracle/backup/lib/libobk.so,ENV=(OB_MEDIA_FAMILY=RMAN-DEFAULT)';

catalog device type 'SBT_TAPE' backuppiece '0ggp1_1_1';

Obviously, you will need to put in the correct ENV parameter settings for the MML layer you are working with. For example, NetBackup would expect you to set the NB_ORA_CLASS parameter in the configure channel command.
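For instance, a NetBackup flavor of the same configure command might look something like the sketch below. The SBT_LIBRARY path is the typical NetBackup client location, and the policy name is made up; check your own NetBackup configuration before using anything like this:

configure channel device type 'SBT_TAPE'
parms 'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so,ENV=(NB_ORA_CLASS=my_oracle_policy)';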