23c – Lock-free reservations… LOCKED????

Continuing to explore and test the new 23c feature Lock-Free Reservations, which I already covered here and here.

Today I noticed something I had not seen before. Take a look:

create table t1 (
id number primary key,
nores1 number,
res1 number reservable
);

Table created.

insert into t1 values (1,1,1);

1 row created.

commit;

Commit complete.

So, I created a simple table with a regular (non-reservable) column NORES1 and a reservable column RES1.

Session 1:

select userenv('sid');

USERENV('SID')
--------------
           300

1 row selected.

update t1 set nores1 = 99 where id=1;

1 row updated.

Session 2:

select userenv('sid');

USERENV('SID')
--------------
            49

update t1 set res1 = res1 + 1 where id=1;

1 row updated.

commit;

*** LOCKED !!! ***

WOW, LOCKED!!! I was not expecting this! In Session 1 I updated only the non-reservable column, and in Session 2 only the reservable one. When I tried to commit Session 2, with the transaction still pending in Session 1, I got stuck on a row lock:

select sid, event from v$session where sid=49;

   SID EVENT
------ -----------------------------------
    49 enq: TX - row lock contention

So, as opposed to what I mentioned in the first post about lock-free reservations, the reservable columns are not completely exempt from row locks! If a transaction changes a non-reservable column in a row, that row will be locked even if another transaction updates only reservable columns. The difference is that this lock will occur at COMMIT time instead of at statement time.
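
By the way, if you hit this in the wild, V$SESSION can also tell you who is holding the lock. A quick sketch, reusing the SIDs from my test (here BLOCKING_SESSION should point to Session 1's SID, 300):

select sid, blocking_session, event
from v$session
where sid = 49;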

Thinking more deeply about it, this is the correct behavior and keeps transaction consistency intact: when you issue the update in Session 1, even though it changes only the non-reservable column, the update is instantly visible in that session, and it shows the reservable column unchanged:

select userenv('sid');

USERENV('SID')
--------------
           300

1 row selected.

select * from t1;

        ID     NORES1       RES1
---------- ---------- ----------
         1         99          1

1 row selected.

So, in order to keep the DURABILITY of the transaction (the D of ACID 🙂 ), we should expect that right after the commit the values persist in the database, and that only new changes from transactions started AFTER that one could change the data again! If Oracle allowed Session 2 to commit before Session 1, the change would take place in Session 2 while the transaction was still active in Session 1, which would violate ATOMICITY and ISOLATION as well ;-). Thanks, Oracle, well done!

I am editing the original post and correcting it.

So, be aware: reservable columns are exempt from locks only if the transactions involved do not touch non-reservable columns.

I think there is more to study and understand about this feature. It seems the more I test it, the more I get confused 🙂

See you soon.

23c – Continuing exploration of Lock-free reservations

Hello,

Days ago I posted here about one of the nicest features of Oracle 23c: the lock-free reservations. Please read that post if you haven’t yet 🙂 .

One of the things that left me intrigued was understanding how Oracle manages the journal table, why a session can only see its own reservations, and whether there is a way of reading all the pending reservations.

So, I went a little further in the investigation. I checked whether there was a VPD policy, generated a 10046 trace of a query against the journal table to see if Oracle was adding a filter predicate in the background, and finally dumped a block of the journal table to see its content.

There in the dump, I saw the two transactions in the ITL, and then realized the obvious: of course, the INSERT into the journal table and the pending UPDATE to the main table it represents are both part of the same transaction!

This makes total sense: as has always been the case, only the session executing a transaction can see its modified data. Other sessions cannot, because that would be a dirty read, which is never allowed in Oracle.

So, this settled my understanding of how Oracle manages the pending updates to a reservable column.

But, you know, this is kind of weird in a way, because as opposed to what happens in a regular transaction with regular row locks, a transaction with reservations is not totally isolated: yes, you read that right! In a totally isolated transaction, you don't need to know what is happening in other sessions, since they are completely isolated. One transaction can only change the same data after the previous one finishes manipulating that row. This, via row locks, is what guarantees isolation.

In a transaction with lock-free reservations, on the other hand, your transaction needs to see what is pending in all other transactions for that table in order to decide, for example, whether your update will work or will receive an error for violating a constraint.

So, I still think it would be important to have a global view of all pending reservations in a table. Consider, for example, that you issue an UPDATE decreasing by 5 a reservable column that has a value of 20, and receive an error for violating the check constraint that prevents negative numbers. You go and check again: the value is 20, I'm decreasing 5, the final result should be 15, so why am I receiving an error?

Yes, technically we know this is because 16 or more units are reserved across all other transactions, but at this point we simply cannot determine which transactions are reserving them. If we knew, a DBA could, for example, kill the reserving session to release the reservation if my transaction were for some reason more important than the others.

The fact is that there is a lack of visibility into pending reservations, which nonetheless affect other transactions. And I think this should be improved.

Let me explain in more detail the difference between a regular transaction and one with lock-free reservations:

Regular Transaction without lock-free reservations

If you instruct Oracle to update a non-reservable column with the following statement:

update my_table set my_column = my_column + 1;

What Oracle will really do is:

  1. Read the CURRENT value of MY_COLUMN. This is a consistent read, so it considers the value of the last committed transaction. Any ongoing transactions are ignored (remember the ACID acronym for transactions? Theoretically a transaction occurs entirely in one atomic moment which is the COMMIT time; before that moment, that transaction does not exist).
  2. Calculate the new value of MY_COLUMN, adding 1 to the current value.
  3. Check if the new value would violate any existing constraint. If yes, throw an error; if not, store the new value in MY_COLUMN and lock the row.

So, let’s say the value was 3. Your update will add 1, so put 4 in the column.

Before the COMMIT, other transactions could only see the “old” value, before your update (consistent read), and would be blocked if they tried to update it. After the COMMIT the row will be unlocked, meaning any other session can now read the new value and also change it in a new transaction if needed.
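
To picture it, here is a minimal two-session timeline (a sketch only; the MY_TABLE single-row scenario is an assumption):

-- Session 1
update my_table set my_column = my_column + 1;   -- row is now locked

-- Session 2 (a different connection)
update my_table set my_column = my_column + 1;   -- blocks here, waiting on the row lock

-- Session 1
commit;                                          -- lock released; Session 2's update proceeds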

Transaction on lock-free reservations columns

This is not the case for lock-free reservations! If you instruct Oracle to do a similar update but now on a reservable column:

update my_table set my_reservable_column = my_reservable_column + 1;

this is what Oracle will do:

  1. Read the CURRENT value of MY_RESERVABLE_COLUMN.
  2. Get all the pending reserved updates FROM ALL TRANSACTIONS to derive the theoretical current value. So, Oracle is “invading” others’ transactions to check what they are doing.
  3. Calculate the theoretical final value considering all pending reservations and your new update.
  4. Check if this theoretical final value would violate an existing constraint. If yes, throw an error; if not, store the new reservation in the journal table.

Then, when you COMMIT the transaction:

  1. Read again the CURRENT value of MY_RESERVABLE_COLUMN. It may have changed since the first verification.
  2. Please note that the constraint validation does not need to be rechecked since the update was reserved.
  3. Calculate the new value with your increment/decrement.
  4. Store the new value in MY_RESERVABLE_COLUMN.
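
And the reservable counterpart of the same two-session timeline (again a sketch; it assumes MY_RESERVABLE_COLUMN is declared RESERVABLE and ID is the primary key, since reservable updates must filter by it):

-- Session 1
update my_table set my_reservable_column = my_reservable_column + 1 where id = 1;  -- reservation only, no row lock

-- Session 2
update my_table set my_reservable_column = my_reservable_column + 1 where id = 1;  -- also succeeds immediately

-- Both sessions can COMMIT in any order; each increment is applied
-- against the value current at its own COMMIT time.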

I hope this made the differences clearer to you.

So, what’s my point when I said it’s weird not being able to see all the pending reservations?

Let’s imagine a very common case that would benefit from lock-free reservations: controlling product inventory. I would have a table with my products and a reservable column INVENTORY containing the number of available units of each particular product. This way, you could start a new order, put a lot of different products in the cart, and from the moment you put a certain quantity in your cart, that quantity is reserved for you until you issue the order (COMMIT your transaction).

This is the perfect case for lock-free reservations. You don’t block other people from putting the same product in their carts. Each order can reserve the quantity it needs, with the reservation guaranteed, avoiding the frustration of an “out of stock” error message at checkout. Perfect, amazing.

But now, let’s say you want to purchase all the available inventory of a certain product, no matter how many units there are. The ideal flow in this case would be: see how many units are in stock, deduct all the reservations (meaning, people with the item in their carts), and create an order for all the remaining units.

For example, if you have a current inventory of 20 and 6 units reserved in other people’s carts, your ideal order would be of 14 units, to bring the inventory down to 0.

Well, this is not exactly possible with lock-free reservations because you simply cannot see all the pending reservations.

So, translating it back to Oracle, if I have a CHECK constraint preventing the inventory from going negative, and I see there are 20 units in stock, I can TRY to put all 20 in the cart. This would translate into an UPDATE like:

update products set inventory = inventory - 20;

As explained above, Oracle would verify the current value and all the pending reservations and decide whether my update is allowed. Considering the example with 6 units already reserved, my UPDATE would violate the CHECK constraint, and if I really wanted to buy everything available, I would need to guess values, decreasing the quantity one by one, until I no longer violate the constraint.

These would be blind tests! Why? Because I cannot see the total number of reservations.
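
Just to make these “blind tests” concrete, this is roughly the loop an application would be reduced to (a sketch only; the PRODUCTS table keyed by PRODUCT_ID, the sample PK value 42, the reservable INVENTORY column and the non-negative CHECK constraint are all assumptions based on the example above):

set serveroutput on;

declare
  l_qty number := 20;                      -- the inventory we can see
begin
  while l_qty > 0 loop
    begin
      update products
      set    inventory = inventory - l_qty
      where  product_id = 42;              -- reservable updates must filter by PK
      dbms_output.put_line('Reserved ' || l_qty || ' units');
      exit;
    exception
      when others then                     -- constraint violated: guess one less
        l_qty := l_qty - 1;
    end;
  end loop;
end;
/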

You may argue that my examples are not strong enough, or that this is a situation that would not exist in the real world. Maybe, but I still think there should be a way to see all the pending reservations. I don’t think it would be considered a dirty read per se, since it’s not a change executed but not yet committed; in this case, it’s a change NOT executed yet, just reserved to take place at COMMIT.

That’s it.

I know, a lot of text and not much sample code this time, but I think this was necessary.

Let me know your thoughts. Do you agree with my point? Was I clear enough in explaining it?

See you on the next one!

23c – Sample schemas (or just objects) easier and more flexible

Prior to 23c, installing the Oracle sample schemas that are frequently referenced in the documentation examples was neither very user-friendly nor flexible, as Connor McDonald explains here.

For example, take a look at the README file for Oracle 21c here, or see the names of the scripts in the human_resources folder:

Kind of scary, right 🙂 ?

With Oracle 23c, the scripts are much easier to understand and use, and they are also more flexible. Again, Connor talks about it here. For comparison, check out the README and the files in the human_resources folder:

Better, right?

Now, you can easily install just one of the schemas if you want. You don’t need SYSDBA anymore. Moreover, you can create the objects under your own schema instead of having users like HR, OE, and SH in your database. For the sake of security, it’s better not to have a “common” user that anyone could use, right?

So, if you take a look at the README and the scripts, you will see the following scripts:

  • hr_install.sql: just like in previous versions, it will create the HR user and all the necessary objects and data under it. It calls hr_create.sql, hr_populate.sql and hr_code.sql.
  • hr_create.sql: creates the tables, indexes, sequences and views.
  • hr_populate.sql: inserts data into the tables.
  • hr_code.sql: creates the PL/SQL objects.
  • hr_uninstall.sql: drops the HR user and removes everything under it.

So, if you want to have the HR user created in your database, just go ahead and run the script hr_install.sql.

But if you want to create the tables under your own user, so as not to have the HR user at all, now you easily can: just skip the hr_install script and run hr_create, hr_populate and hr_code yourself, one after the other. Of course, you will need the privileges to create the objects, but you don’t need to be a DBA. If you just have the new 23c DB_DEVELOPER_ROLE and quota on a tablespace, you are good to go.

Let me show you how to do that from scratch.

First, if you haven’t done this yet, download the 23c sample schemas zip file here.

Then, go to the unzipped folder and connect to your user. As I said, your user must have the proper privileges.

For the sake of this example, I will create a new user that will hold the objects, but it will be my named user, not the standard HR user.

eduardo.claro human_resources % pwd
/Users/eduardo.claro/Downloads/db-sample-schemas-23.1/human_resources
eduardo.claro human_resources % sqlplus test/test@//localhost:1522/FREEPDB1

...
Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

test@FREE>
grant DB_DEVELOPER_ROLE to eduardo identified by eduardo;

Grant succeeded.

alter user eduardo quota unlimited on users;

User altered.

conn eduardo/eduardo@//localhost:1522/FREEPDB1
Connected.

show user
USER is "EDUARDO"

select * from session_roles;

ROLE
-------------------
DB_DEVELOPER_ROLE
SODA_APP
CTXAPP

Now I can run the scripts (the output is edited for brevity):

@hr_create
******  Creating REGIONS table ....

Table created.

...
...
...

Comment created.

@hr_populate

Session altered.

****** Populating REGIONS table ....

...
...
...

Table altered.

@hr_code

Procedure created.

Trigger created.

Trigger altered.

Procedure created.

Trigger created.

Commit complete.

Done, now we have all the objects and data of Oracle’s HR sample schema, but under my own user:

show user
USER is "EDUARDO"
select object_name,object_type from user_objects;

OBJECT_NAME                         OBJECT_TYPE
----------------------------------- ---------------
REGIONS                             TABLE
REG_ID_PK                           INDEX
COUNTRIES                           TABLE
COUNTRY_C_ID_PK                     INDEX
LOCATIONS                           TABLE
LOC_ID_PK                           INDEX
LOCATIONS_SEQ                       SEQUENCE
DEPARTMENTS                         TABLE
DEPT_ID_PK                          INDEX
DEPARTMENTS_SEQ                     SEQUENCE
JOBS                                TABLE
JOB_ID_PK                           INDEX
EMPLOYEES                           TABLE
EMP_EMAIL_UK                        INDEX
EMP_EMP_ID_PK                       INDEX
EMPLOYEES_SEQ                       SEQUENCE
JOB_HISTORY                         TABLE
JHIST_EMP_ID_ST_DATE_PK             INDEX
EMP_DETAILS_VIEW                    VIEW
EMP_DEPARTMENT_IX                   INDEX
EMP_JOB_IX                          INDEX
EMP_MANAGER_IX                      INDEX
EMP_NAME_IX                         INDEX
DEPT_LOCATION_IX                    INDEX
JHIST_JOB_IX                        INDEX
JHIST_EMPLOYEE_IX                   INDEX
JHIST_DEPARTMENT_IX                 INDEX
LOC_CITY_IX                         INDEX
LOC_STATE_PROVINCE_IX               INDEX
LOC_COUNTRY_IX                      INDEX
SECURE_DML                          PROCEDURE
SECURE_EMPLOYEES                    TRIGGER
ADD_JOB_HISTORY                     PROCEDURE
UPDATE_JOB_HISTORY                  TRIGGER

34 rows selected.

select count(1) from employees;

  COUNT(1)
----------
       107

1 row selected.

And that’s it for now. I hope you like it.

See you on the next one.

23c DEFAULT ON NULL for UPDATEs

Another nice little feature of 23c is that the DEFAULT ON NULL clause was extended to UPDATEs as well, not only INSERTs anymore.

This feature avoids the boring “cannot insert NULL” errors you get when you inadvertently leave empty a column that does not accept NULLs.

So, when you INSERT or UPDATE a row and leave empty a column that has DEFAULT ON NULL configured, the default value is automatically put in the column to avoid the error.

Here is an example:

create table t1 (
col1 number not null,
col2 number default on null 0 not null,
col3 number default on null for insert only 0,
col4 number default on null for insert and update 0);

Table created.

desc t1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                      NOT NULL NUMBER
 COL2                                      NOT NULL NUMBER
 COL3                                      NOT NULL NUMBER
 COL4                                      NOT NULL NUMBER

Please note some things:

  • I intentionally mentioned the NOT NULL clause only for the columns COL1 and COL2, but all were created as NOT NULL because the DEFAULT ON NULL clause assumes the column cannot accept NULLs.
  • Pre 23c, the DEFAULT ON NULL clause only worked for INSERTs, so the clauses FOR INSERT ONLY or FOR INSERT AND UPDATE did not exist.
  • If you don’t mention the FOR clause, Oracle assumes it is FOR INSERT ONLY to keep backwards compatibility. So in my example, COL2 and COL3 are the same.

Let me test each of the columns for INSERTs:

insert into t1 values (NULL,1,1,1);
insert into t1 values (NULL,1,1,1)
                       *
ERROR at line 1:
ORA-01400: cannot insert NULL into ("TEST"."T1"."COL1")


insert into t1 values (1,NULL,1,1);

1 row created.

insert into t1 values (1,1,NULL,1);

1 row created.

insert into t1 values (1,1,1,NULL);

1 row created.

test@FREE> select * from t1;

      COL1       COL2       COL3       COL4
---------- ---------- ---------- ----------
         1          0          1          1
         1          1          0          1
         1          1          1          0

3 rows selected.

Comments on the INSERTs:

  • COL1 doesn’t have a DEFAULT ON NULL clause, so I received an error when I tried to insert NULL into it.
  • COL2, COL3 and COL4 all have DEFAULT ON NULL, and all accept it for INSERTs, so the NULL insert on each of them worked, and the values were properly replaced with the default value 0.

Now, let me test UPDATEs:

update t1 set col1 = null;
update t1 set col1 = null
              *
ERROR at line 1:
ORA-01407: cannot update ("TEST"."T1"."COL1") to NULL


update t1 set col2 = null;
update t1 set col2 = null
              *
ERROR at line 1:
ORA-01407: cannot update ("TEST"."T1"."COL2") to NULL


update t1 set col3 = null;
update t1 set col3 = null
              *
ERROR at line 1:
ORA-01407: cannot update ("TEST"."T1"."COL3") to NULL


update t1 set col4 = null;

3 rows updated.

Comments on the UPDATEs:

  • COL1, COL2 and COL3 were created either without DEFAULT ON NULL, or with it but only for INSERTs. So all three updates failed.
  • Only COL4 was created with DEFAULT ON NULL FOR INSERT AND UPDATE. So this was the only update that worked.

Here is the data dictionary view of the columns:

col column_name for a15
col data_default for a12
col DEF_INS for a7
col DEF_UPD for a7
select column_name, nullable, data_default, default_on_null as DEF_INS, default_on_null_upd as DEF_UPD
from user_tab_columns where table_name='T1';

COLUMN_NAME     N DATA_DEFAULT DEF_INS DEF_UPD
--------------- - ------------ ------- -------
COL1            N              NO      NO
COL2            N 0            YES     NO
COL3            N 0            YES     NO
COL4            N 0            YES     YES

4 rows selected.

That’s it, enjoy. See you soon!

Are you sure all your NOT NULL constraints are the same?

Some time ago, I was analyzing the tables in a system to determine which tables should be partitioned, and the best partitioning key and method for each of them. For some tables, I decided to go with REF partitioning.

Then I built a script to automate the process, and while testing I came across some issues. As you may know, to be able to create a REF partitioned table, both the primary key on the parent table and the foreign key on the child table must be enabled and validated. Additionally, the foreign key column must be NOT NULL. Since our model had some constraints as NOVALIDATE and some FK columns accepting NULLs, I first had to fix this in order to continue.
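
Before attempting the conversion, a quick dictionary check helps verify those prerequisites. A sketch (MY_CHILD is a placeholder child-table name):

-- The FK ('R') and the parent's PK ('P') must show ENABLED/VALIDATED
select constraint_name, constraint_type, status, validated
from   user_constraints
where  table_name = 'MY_CHILD'
and    constraint_type in ('P', 'R');

-- The FK columns must be NOT NULL (NULLABLE = 'N')
select column_name, nullable
from   user_tab_columns
where  table_name = 'MY_CHILD';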

But, even after doing all these fixes, for some tables I was still receiving the following error: “ORA-14652: reference partitioning foreign key is not supported“.

After some more investigation, I found the problem: some of the NOT NULL constraints were created as CHECK constraints directly. And here starts my demo today 🙂 .

Have you ever noticed that all the NOT NULL constraints you create are actually created as CHECK constraints?

Basically, you can create NOT NULL constraints in three different ways:

create table T1 (
	not_null_no_name number not null, 
	not_null_name number constraint nnn not null, 
	not_null_check number constraint nnc check (not_null_check is not null)
);

Table created.

desc T1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 NOT_NULL_NO_NAME                          NOT NULL NUMBER
 NOT_NULL_NAME                             NOT NULL NUMBER
 NOT_NULL_CHECK                                     NUMBER

Notice that I created three constraints that prevent NULL values in the columns: the first simply mentions NOT NULL after the data type (the most common way, I would say); the second does the same, but I gave the constraint a name; and the third I created directly as a CHECK constraint, also preventing NULL values. Also notice that the first two show up when I issue a DESCRIBE of the table, but the third does not.

Nevertheless, all three do avoid NULL values:

insert into T1 values (NULL,1,1);
insert into T1 values (NULL,1,1)
                       *
ERROR at line 1:
ORA-01400: cannot insert NULL into ("CLAROE"."T1"."NOT_NULL_NO_NAME")


insert into T1 values (1,NULL,1);
insert into T1 values (1,NULL,1)
                         *
ERROR at line 1:
ORA-01400: cannot insert NULL into ("CLAROE"."T1"."NOT_NULL_NAME")


insert into T1 values (1,1,NULL);
insert into T1 values (1,1,NULL)
*
ERROR at line 1:
ORA-02290: check constraint (CLAROE.NNC) violated

Note that the two created as simple NOT NULL show a message indicating that you tried to insert NULL, while the third, created as CHECK, simply says you violated the constraint.

Let’s see how these constraints show up in the data dictionary:

col constraint_name for a15
col search_condition for a30
select constraint_name, constraint_type, status, validated, search_condition
from user_constraints where table_name = 'T1';

CONSTRAINT_NAME C STATUS   VALIDATED     SEARCH_CONDITION
--------------- - -------- ------------- ------------------------------
SYS_C002171496  C ENABLED  VALIDATED     "NOT_NULL_NO_NAME" IS NOT NULL
NNN             C ENABLED  VALIDATED     "NOT_NULL_NAME" IS NOT NULL
NNC             C ENABLED  VALIDATED     not_null_check is not null

3 rows selected.

As I mentioned, all three constraints were created as CHECK constraints in the background. Oracle generated a name for the one I didn’t name. Note the search condition is the same, the only difference being that the ones created as NOT NULL are formatted, while the CHECK one reads exactly as I typed it.
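
If you want to hunt for these impostors in your own schema, something along these lines should do it (a sketch; it relies on SEARCH_CONDITION_VC, available since 12c, and on the fact that a genuine NOT NULL marks the column as non-nullable in USER_TAB_COLUMNS, while a hand-written CHECK leaves it nullable):

select c.table_name, c.constraint_name, c.search_condition_vc
from   user_constraints  c
join   user_cons_columns cc on cc.constraint_name = c.constraint_name
join   user_tab_columns  tc on tc.table_name  = cc.table_name
                           and tc.column_name = cc.column_name
where  c.constraint_type = 'C'
and    upper(c.search_condition_vc) like '%IS NOT NULL%'
and    tc.nullable = 'Y';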

Ok, this is the situation I had in my environment as I mentioned in the beginning.

Now I will try to REF partition the table using each of the three columns as the key and see what happens. But first I need a parent table, the foreign keys, and some data in both tables:

create table PARENT_T1 (id number primary key)
partition by range (id)
(partition p1 values less than (5),
partition p2 values less than (MAXVALUE));

Table created.

alter table T1 add (
constraint FK_not_null_no_name FOREIGN KEY (not_null_no_name) references PARENT_T1,
constraint FK_not_null_name    FOREIGN KEY (not_null_name   ) references PARENT_T1,
constraint FK_not_null_check   FOREIGN KEY (not_null_check  ) references PARENT_T1
);

Table altered.

insert into PARENT_T1 select rownum from dual connect by level <= 10;

10 rows created.

insert into T1 select mod(rownum,10)+1, mod(rownum,10)+1, mod(rownum,10)+1 from dual connect by level <= 100;

100 rows created.

commit;

Commit complete.

Now let’s try to partition with each of the columns:

alter table T1 modify partition by reference (FK_not_null_no_name) ;

Table altered.

alter table T1 modify partition by reference (FK_not_null_name) ;

Table altered.

alter table T1 modify partition by reference (FK_not_null_check) ;
alter table T1 modify partition by reference (FK_not_null_check)
                                              *
ERROR at line 1:
ORA-14652: reference partitioning foreign key is not supported

Aha! So the two constraints created as simple NOT NULL worked for REF partitioning, but the one created as CHECK failed! Although it does prevent NULL values, it’s not embedded in the table structure, and that is required for REF partitioning.

If I simply drop the CHECK constraint and modify the column to NOT NULL, then I can REF partition by that third column:

alter table t1 drop constraint nnc;

Table altered.

alter table t1 modify (not_null_check not null);

Table altered.

alter table T1 modify partition by reference (FK_not_null_check) ;

Table altered.

Now it worked as expected!

And that’s it. I suppose there are other things that would not work well if you have a CHECK constraint instead of a real NOT NULL constraint. Let me know if you have faced this or other related issues.

Enjoy, see you soon!

NUM_ROWS=0 in DBA_INDEXES for a Partitioned Index?

Recently a client I support had severe performance issues in a lower environment which, after some investigation, we determined were caused by indexes with wrong statistics. Consequently, bad plans were being generated, causing the performance degradation.

There is a process we run to mask sensitive data in lower environments. This is a long, complex process that involves a lot of phases, and in order to speed up some phases of this process, indexes are put in UNUSABLE state during the data changes and movements, and rebuilt afterwards.

During the execution of this masking process weeks ago, a problem occurred with the script and some of the local partitioned indexes ended up not being rebuilt, so they remained UNUSABLE.

Somebody from the team detected the problem later and rebuilt all the UNUSABLE index partitions (it must be done partition by partition; it’s not possible to rebuild a partitioned index as a whole).
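
For the record, that rebuild can be scripted with a simple loop over the dictionary (a sketch, assuming the indexes belong to your own schema):

begin
  for p in (select index_name, partition_name
            from   user_ind_partitions
            where  status = 'UNUSABLE') loop
    execute immediate 'alter index ' || p.index_name ||
                      ' rebuild partition ' || p.partition_name;
  end loop;
end;
/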

But even after the rebuild, people testing the application kept complaining about poor performance, and we found it was being caused by those indexes that had had UNUSABLE partitions. Even after rebuilding the partitions, the statistics at the index level were still wrong, causing the performance problems.

I know, long explanation, hard to figure out. So I’ve created a test case to reproduce the issue at will and to better understand it.

First, I will create a partitioned table with some data, a local partitioned index, and gather stats:

create table t1
partition by range (category)
(
partition p1 values less than (3),
partition p2 values less than (6),
partition p3 values less than (MAXVALUE)
) as
select rownum as ID, case when rownum <= 6 then rownum else 99 end as CATEGORY, lpad('X',1000,'X') as NAME
from dual connect by level <= 50000;

Table created.

create index i1 on t1 (category) local;

Index created.

exec dbms_stats.gather_table_stats(user,'T1',cascade=>TRUE)

PL/SQL procedure successfully completed.

Here is the data generated and the stats for the index and its partitions:

select category, count(1) from t1 group by category order by 1;

  CATEGORY   COUNT(1)
---------- ----------
         1          1
         2          1
         3          1
         4          1
         5          1
         6          1
        99      49994

7 rows selected.

INDEX_NAME                          PARTITION_NAME       STATUS     NUM_ROWS LAST_ANALYZED
----------------------------------- -------------------- -------- ---------- -------------------
I1                                                       N/A           50000 2023-04-19 17:23:18
I1                                  P1                   USABLE            2 2023-04-19 17:23:18
I1                                  P2                   USABLE            3 2023-04-19 17:23:18
I1                                  P3                   USABLE        49995 2023-04-19 17:23:18

4 rows selected.

Please note that N/A in the STATUS simply means that the index is partitioned, so the status that counts is at the partition level.

Since the data I’ve created is skewed, different predicates on CATEGORY generate different execution plans:

set feedback only
select * from t1 where category = 1;

1 row selected.

set feedback off
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from t1 where category = 1

Plan hash value: 3860318639

-----------------------------------------------------------
| Id  | Operation                                  | Name |
-----------------------------------------------------------
|   0 | SELECT STATEMENT                           |      |
|   1 |  PARTITION RANGE SINGLE                    |      |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1   |
|   3 |    INDEX RANGE SCAN                        | I1   |
-----------------------------------------------------------

set feedback only
select * from t1 where category = 99;

49994 rows selected.

set feedback on
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from t1 where category = 99

Plan hash value: 1737577267

---------------------------------------
| Id  | Operation              | Name |
---------------------------------------
|   0 | SELECT STATEMENT       |      |
|   1 |  PARTITION RANGE SINGLE|      |
|   2 |   TABLE ACCESS FULL    | T1   |
---------------------------------------

set feedback only
select * from t1 where category <= 100;

50000 rows selected.

set feedback on
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from t1 where category <= 100

Plan hash value: 589593414

------------------------------------
| Id  | Operation           | Name |
------------------------------------
|   0 | SELECT STATEMENT    |      |
|   1 |  PARTITION RANGE ALL|      |
|   2 |   TABLE ACCESS FULL | T1   |
------------------------------------

Now the fun begins :-). I will simulate what that masking process does in order to reproduce the issue.

First, I will make the index UNUSABLE. This is a very common step to speed up processes that make bulk data loads or changes. It’s much faster to do all the data changes and rebuild the index at the end than to let Oracle maintain the index for every single change.

alter index i1 unusable;

Index altered.

select index_name, partition_name, status, num_rows, last_analyzed from user_ind_partitions where index_name='I1'
UNION ALL
select index_name, NULL partition_name, status, num_rows, last_analyzed from user_indexes where index_name='I1'
order by 2 nulls first;

INDEX_NAME                          PARTITION_NAME       STATUS     NUM_ROWS LAST_ANALYZED
----------------------------------- -------------------- -------- ---------- -------------------
I1                                                       N/A           50000 2023-04-19 17:23:18
I1                                  P1                   UNUSABLE          2 2023-04-19 17:23:18
I1                                  P2                   UNUSABLE          3 2023-04-19 17:23:18
I1                                  P3                   UNUSABLE      49995 2023-04-19 17:23:18

4 rows selected.

Note that the status of the index partitions changed to UNUSABLE, but the index-level status remains N/A.

Now I will simulate data changes (the “masking” process):

alter table t1 enable row movement;

Table altered.

update t1 set id = 1000000-id;

50000 rows updated.

commit;

Commit complete.

I had to enable row movement for the table because the update I executed moved rows among partitions.

Now, instead of rebuilding the UNUSABLE partitions first, I will “forget” that step and just gather new statistics for the index:

exec dbms_stats.gather_index_stats(user,'I1')

PL/SQL procedure successfully completed.

select index_name, partition_name, status, num_rows, last_analyzed from user_ind_partitions where index_name='I1'
UNION ALL
select index_name, NULL partition_name, status, num_rows, last_analyzed from user_indexes where index_name='I1'
order by 2 nulls first;

INDEX_NAME                          PARTITION_NAME       STATUS     NUM_ROWS LAST_ANALYZED
----------------------------------- -------------------- -------- ---------- -------------------
I1                                                       N/A               0 2023-04-19 17:28:53
I1                                  P1                   UNUSABLE          0 2023-04-19 17:28:53
I1                                  P2                   UNUSABLE          0 2023-04-19 17:28:53
I1                                  P3                   UNUSABLE          0 2023-04-19 17:28:53

4 rows selected.

Note that the NUM_ROWS both at the partitions and the index level are all 0.

I am not showing it here, but all the other statistics-related columns are 0 as well.

I would argue Oracle should leave these columns as NULL, since having 0 there can be misleading. Let me know if you agree or disagree.

Let’s see how it impacts the execution plans:

set feedback only
select * from t1 where category = 1;

1 row selected.

set feedback on
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from t1 where category = 1

Plan hash value: 1737577267

---------------------------------------
| Id  | Operation              | Name |
---------------------------------------
|   0 | SELECT STATEMENT       |      |
|   1 |  PARTITION RANGE SINGLE|      |
|   2 |   TABLE ACCESS FULL    | T1   |
---------------------------------------


14 rows selected.

set feedback only
select * from t1 where category = 99;

49994 rows selected.

set feedback on
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from t1 where category = 99

Plan hash value: 1737577267

---------------------------------------
| Id  | Operation              | Name |
---------------------------------------
|   0 | SELECT STATEMENT       |      |
|   1 |  PARTITION RANGE SINGLE|      |
|   2 |   TABLE ACCESS FULL    | T1   |
---------------------------------------


14 rows selected.

set feedback only
select * from t1 where category <= 100;

50000 rows selected.

set feedback on
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from t1 where category <= 100

Plan hash value: 589593414

------------------------------------
| Id  | Operation           | Name |
------------------------------------
|   0 | SELECT STATEMENT    |      |
|   1 |  PARTITION RANGE ALL|      |
|   2 |   TABLE ACCESS FULL | T1   |
------------------------------------


14 rows selected.

Wait, WHAT??? All the queries doing FULL SCAN now? Why?

Oh, OK, of course: since the index is UNUSABLE, Oracle is skipping it. If I set the parameter SKIP_UNUSABLE_INDEXES to FALSE, I will receive an error instead:

alter session set skip_unusable_indexes = FALSE;

Session altered.

select * from t1 where category = 1;
select * from t1 where category = 1
*
ERROR at line 1:
ORA-01502: index 'CLAROE.I1' or partition of such index is in unusable state

alter session set skip_unusable_indexes = TRUE;

Session altered.

OK, now I will solve the problem by rebuilding the index partitions:

alter index i1 rebuild partition p1;

Index altered.

alter index i1 rebuild partition p2;

Index altered.

alter index i1 rebuild partition p3;

Index altered.

select index_name, partition_name, status, num_rows, last_analyzed from user_ind_partitions where index_name='I1'
UNION ALL
select index_name, NULL partition_name, status, num_rows, last_analyzed from user_indexes where index_name='I1'
order by 2 nulls first;

INDEX_NAME                          PARTITION_NAME       STATUS     NUM_ROWS LAST_ANALYZED
----------------------------------- -------------------- -------- ---------- -------------------
I1                                                       N/A               0 2023-04-19 17:28:53
I1                                  P1                   USABLE            2 2023-04-19 17:34:09
I1                                  P2                   USABLE            3 2023-04-19 17:34:11
I1                                  P3                   USABLE        49995 2023-04-19 17:34:14

4 rows selected.

Great, now the index partitions are USABLE again. But notice that NUM_ROWS at the index level is still 0! The rebuilt partitions had their stats updated automatically, but the index-level stats did not change.
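
A quick way to spot other indexes in this state is to compare the index-level NUM_ROWS with the sum over the partitions (a sketch against the USER_ dictionary views):

select i.index_name, i.num_rows as index_num_rows,
       sum(p.num_rows) as sum_part_num_rows
from   user_indexes i
join   user_ind_partitions p on p.index_name = i.index_name
where  i.partitioned = 'YES'
group  by i.index_name, i.num_rows
having i.num_rows <> sum(p.num_rows);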

So, this is exactly the situation we were facing in our environment, causing weird performance problems.

Here is an example:

set feedback only
select * from t1 where category <= 100;

50000 rows selected.

set feedback on
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from t1 where category <= 100

Plan hash value: 1976518054

-----------------------------------------------------------
| Id  | Operation                                  | Name |
-----------------------------------------------------------
|   0 | SELECT STATEMENT                           |      |
|   1 |  PARTITION RANGE ALL                       |      |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1   |
|   3 |    INDEX RANGE SCAN                        | I1   |
-----------------------------------------------------------


15 rows selected.

Did you notice that??? My predicate is CATEGORY <= 100, which brings ALL 50000 rows, so clearly a FULL SCAN would be preferable. But the Optimizer chose to go via the index, most likely because the index-level stats are messed up.

Let me finally fix this and see what happens:

exec dbms_stats.gather_index_stats(user,'I1')

PL/SQL procedure successfully completed.

select index_name, partition_name, status, num_rows, last_analyzed from user_ind_partitions where index_name='I1'
UNION ALL
select index_name, NULL partition_name, status, num_rows, last_analyzed from user_indexes where index_name='I1'
order by 2 nulls first;

INDEX_NAME                          PARTITION_NAME       STATUS     NUM_ROWS LAST_ANALYZED
----------------------------------- -------------------- -------- ---------- -------------------
I1                                                       N/A           50000 2023-04-19 17:41:27
I1                                  P1                   USABLE            2 2023-04-19 17:41:27
I1                                  P2                   USABLE            3 2023-04-19 17:41:27
I1                                  P3                   USABLE        49995 2023-04-19 17:41:27

4 rows selected.

Finally, after rebuilding the index partitions AND re-gathering statistics on the index, everything seems normal again.

Let me see if my query now picks the right plan, first removing the plan from memory to make sure the Optimizer will recalculate it:

begin
	for i in (
		select address || ',' || hash_value as add_hash
		from V$SQLAREA where sql_text like 'select * from t1 where category%'
	) loop
		sys.DBMS_SHARED_POOL.PURGE (i.add_hash, 'C');
	end loop;
end;
/

PL/SQL procedure successfully completed.

select address || ',' || hash_value as add_hash
from V$SQLAREA where sql_text like 'select * from t1 where category%';

no rows selected

set feedback only
select * from t1 where category = 1;

1 row selected.

set feedback on
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from t1 where category = 1

Plan hash value: 3860318639

-----------------------------------------------------------
| Id  | Operation                                  | Name |
-----------------------------------------------------------
|   0 | SELECT STATEMENT                           |      |
|   1 |  PARTITION RANGE SINGLE                    |      |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1   |
|   3 |    INDEX RANGE SCAN                        | I1   |
-----------------------------------------------------------


15 rows selected.

set feedback only
select * from t1 where category <= 100;

50000 rows selected.

set feedback on
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from t1 where category <= 100

Plan hash value: 589593414

------------------------------------
| Id  | Operation           | Name |
------------------------------------
|   0 | SELECT STATEMENT    |      |
|   1 |  PARTITION RANGE ALL|      |
|   2 |   TABLE ACCESS FULL | T1   |
------------------------------------


14 rows selected.

Wonderful, now the queries are again picking the right plan, everything is OK!

NOTE: this was tested in Oracle versions 19.18 and 23.2, with the same results.

And that’s it. I hope you understood how dangerous this situation is, performance-wise. If you find something similar in the future, you already know: re-gather stats at the index level AFTER rebuilding the UNUSABLE partitions, and everything is going to be OK.

I hope you enjoy.

See you on the next one!

23c Expansion of the CASE syntax in PL/SQL

In the new Oracle version 23c, the CASE statement in PL/SQL was extended to comply with the “new” SQL ANSI 2003 Standard.

This makes it easier to write CASE statements that use comparison operators other than equality. But note that the new syntax is only valid in PL/SQL: the SQL CASE expression DID NOT receive the new features. Probably something to be implemented in the next release 😉 .

When using CASE, you pretty much have two syntax options, which the documentation calls Simple and Searched.

Basically, a Simple CASE statement is (or was, before 23c) the one where you have a single expression that needs to be tested against several possible values, using only the EQUALITY operator. The left expression goes right after the CASE keyword, and then you include your WHENs, THENs and the optional ELSE clause. The CASE does an equality comparison for each WHEN, so VALUE = 1, VALUE = 2 and so on in the example below:

set serveroutput on;

declare
  VALUE number := 2;
  text varchar2(100);
begin
  text := 
    case VALUE
      when 1 then 'ONE'
      when 2 then 'TWO'
      when 3 then 'THREE'
      else 'NOT ONE, TWO NOR THREE'
    end;
  dbms_output.put_line(text);
end;
/

TWO

PL/SQL procedure successfully completed.

If I want to do a comparison using something other than equality, like “>”, “<” or even BETWEEN or IN, then I need to use the Searched CASE syntax:

set serveroutput on;

declare
  VALUE number := 2;
  text varchar2(100);
begin
  text := 
    case 
      when VALUE < 0 OR VALUE > 5 then 'OUT OF RANGE'
      when VALUE in (1,2) then 'ONE OR TWO'
      when VALUE between 0 and 3 then 'BETWEEN 0 AND 3 BUT NOT 1 NOR 2'
      else 'OTHER'
    end;
  dbms_output.put_line(text);
end;
/

ONE OR TWO

PL/SQL procedure successfully completed.

With the Searched CASE syntax, by the way, you can not only use any comparison operator, but also use completely different expressions on the left and right of each comparison.

Now with 23c, the Simple CASE syntax was expanded to allow what they call Dangling Predicates that can use any comparison operator:

select VERSION_FULL from v$instance;

VERSION_FULL
-----------------
23.2.0.0.0

set serveroutput on;

declare
  VALUE number := 2;
  text varchar2(100);
begin
  text := 
    case VALUE
      when < 0, > 5 then 'OUT OF RANGE'
      when in (1,2) then 'ONE OR TWO'
      when between 0 and 3 then 'BETWEEN 0 AND 3 BUT NOT 1 NOR 2'
      else 'OTHER'
    end;
  dbms_output.put_line(text);
end;
/

ONE OR TWO

PL/SQL procedure successfully completed.

Note that the comma separator means OR for the comparisons. You cannot use AND, OR or NOT operators:

select VERSION_FULL from v$instance;

VERSION_FULL
-----------------
23.2.0.0.0

set serveroutput on;

declare
  VALUE number := 2;
  text varchar2(100);
begin
  text := 
    case VALUE
      when < 0 OR > 5 then 'OUT OF RANGE'
    end;
  dbms_output.put_line(text);
end;
/

      when < 0 OR > 5 then 'OUT OF RANGE'
               *
ERROR at line 7:
ORA-06550: line 7, column 16:
PLS-00103: Encountered the symbol "OR" when expecting one of the following:
, * & - + / at mod remainder rem then <an exponent (**)> ||
multiset
The symbol ", was inserted before "OR" to continue.

It’s also important to note that the Searched CASE syntax is still the only option when, for example, you need to use different expressions on the left side of each comparison, like this:

set serveroutput on;

declare
  CHR varchar2(3) := 'ABC';
  VALUE number := 5;
  text varchar2(100);
begin
  text := 
    case 
      when CHR like 'X%' then 'STARTS WITH AN X'
      when CHR = 'ABC' and VALUE = 5 then 'ABC,5'
      when CHR is NULL then 'NULL'
      else 'OTHER'
    end;
  dbms_output.put_line(text);
end;
/

ABC,5

PL/SQL procedure successfully completed.

Again, this new, extended syntax of the Simple CASE does not work in pure SQL, only PL/SQL:

select VERSION_FULL from v$instance;

VERSION_FULL
-----------------
23.2.0.0.0

WITH v as (select 2 as grade from dual)
select
    case grade
      when < 0, > 5 then 'OUT OF RANGE'
      when in (1,2) then 'ONE OR TWO'
      when between 0 and 3 then 'BETWEEN 0 AND 3 BUT NOT 1 NOR 2'
      else 'OTHER'
    end
from v;

      when < 0, > 5 then 'OUT OF RANGE'
           *
ERROR at line 4:
ORA-00936: missing expression

That’s it, enjoy.

Let me know if you have any questions.

See you on the next one!

23c Interval datatype aggregations

Hello,

Oracle 23c brought an interesting improvement when dealing with INTERVAL data types: the possibility of using them directly in aggregations, specifically SUM and AVG.

Let me explain better with an example.

I don’t use INTERVAL data types on a daily basis, but I do end up dealing with them, mostly when doing math with TIMESTAMPs. For example, if I want to query DBA_HIST_SNAPSHOT and get the duration of a snapshot, I subtract BEGIN_INTERVAL_TIME from END_INTERVAL_TIME and the result is an INTERVAL value:

select end_interval_time - begin_interval_time as snapshot_interval
from dba_hist_snapshot
order by snap_id desc fetch first 1 rows only;

SNAPSHOT_INTERVAL
---------------------------------------------------------------------------
+000000000 01:00:01.441

1 row selected.

So the result shows that my last snapshot interval was about 1h (the default).

But the problem comes when I want to do math with these INTERVALs, or aggregations (which are math as well). Let me create sample data to show it.

create table t1 (begin_time timestamp, end_time timestamp);

Table created.

insert into t1 values (systimestamp-1, systimestamp-1/2);

1 row created.

insert into t1 values (systimestamp-3/24, systimestamp-1/24);

1 row created.

commit;

Commit complete.

Now let me check the intervals of my data:

col begin_time for a30
col end_time for a30
col interval_time for a30
select begin_time, end_time, end_time - begin_time as interval_time
from t1;

BEGIN_TIME                     END_TIME                       INTERVAL_TIME
------------------------------ ------------------------------ ------------------------------
2023-04-17 23:28:48.000000     2023-04-18 11:28:48.000000     +000000000 12:00:00.000000
2023-04-18 20:28:50.000000     2023-04-18 22:28:50.000000     +000000000 02:00:00.000000

2 rows selected.

So, I’ve created two intervals: the first of 12 hours and the second of 2 hours.

If I now want to get the sum and average of these intervals, of course it would return 14 and 7 hours, right? WRONG, because before Oracle 23c it was not possible to use INTERVAL types in aggregate functions.

Then, this query would return an error in pre-23c, since these aggregate functions don’t support INTERVAL types:

select VERSION_FULL from v$instance;

VERSION_FULL
-----------------
19.18.0.0.0

1 row selected.

select 
sum(end_time - begin_time) as total_interval,
avg(end_time - begin_time) as avg_interval
from t1;
avg(end_time - begin_time) as avg_interval
             *
ERROR at line 3:
ORA-00932: inconsistent datatypes: expected NUMBER got INTERVAL DAY TO SECOND

Before showing how 23c makes our life easier here, let me first show what we would need to do to get the desired result pre-23c:

First, I need to convert the INTERVAL data type to a numeric one (seconds, for example), do the aggregations, and then convert back to INTERVAL. Below I show each of these three steps in separate SQL statements; the result we want is in the last one:

select begin_time, end_time, 86400 * (sysdate + (end_time - begin_time) - sysdate) as seconds from t1;

BEGIN_TIME                     END_TIME                          SECONDS
------------------------------ ------------------------------ ----------
2023-04-17 15:13:01.000000     2023-04-18 03:13:01.000000          43200
2023-04-18 12:13:05.000000     2023-04-18 14:13:05.000000           7200

2 rows selected.

WITH A as (select 86400 * (sysdate + (end_time - begin_time) - sysdate) as seconds from t1)
select sum(SECONDS), avg(SECONDS) from A;

SUM(SECONDS) AVG(SECONDS)
------------ ------------
       50400        25200

1 row selected.

col sum_interval for a30
col avg_interval for a30
WITH A as (select 86400 * (sysdate + (end_time - begin_time) - sysdate) as seconds from t1)
select 
	NUMTODSINTERVAL(sum(SECONDS), 'second') as sum_interval,
	NUMTODSINTERVAL(avg(SECONDS), 'second') as avg_interval
from A;

SUM_INTERVAL                   AVG_INTERVAL
------------------------------ ------------------------------
+000000000 14:00:00.000000000  +000000000 07:00:00.000000000

1 row selected.
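
As an aside, another pre-23c idiom does the conversion with EXTRACT instead of the SYSDATE arithmetic (a sketch against the same T1):

select NUMTODSINTERVAL(
         sum( extract(day    from (end_time - begin_time)) * 86400
            + extract(hour   from (end_time - begin_time)) * 3600
            + extract(minute from (end_time - begin_time)) * 60
            + extract(second from (end_time - begin_time)) ), 'second') as sum_interval
from t1;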

Now, the easier way in 23c, where INTERVAL types are accepted for aggregations:

select VERSION_FULL from v$instance;

VERSION_FULL
-----------------
23.2.0.0.0

1 row selected.

col sum_interval for a30
col avg_interval for a30
select 
sum(end_time - begin_time) as sum_interval,
avg(end_time - begin_time) as avg_interval
from t1;

SUM_INTERVAL                   AVG_INTERVAL
------------------------------ ------------------------------
+000000000 14:00:00.000000000  +000000000 07:00:00.000000000

1 row selected.

That’s it, enjoy!

See you soon.

Testing Oracle 23c Lock-Free Reservations

Hello,

In this post I will test in practice how the new 23c feature Lock-Free Reservations works.

If you are not aware of it, here is a summary of what Lock-Free Reservations is:

  • It allows a session to update a specific column in a table row without locking the row.
  • The column must be defined with the new attribute RESERVABLE.
  • This is only valid for NUMERIC columns.
  • The UPDATE statement must change ONLY reservable columns, and filter the rows using the primary key.
  • The column value must be increased or decreased. You cannot specify a literal value.
  • The updated value after the COMMIT will be based on the value available at COMMIT time, not at transaction start time. The value could have been modified by other sessions after your transaction started, using lock-free reservation updates.
  • The proposed update is “promised” and reserved, but will only be effective at COMMIT time.
  • The pending updates are registered in a journal.

So, let’s test it in the real world.

Creating a table and inserting data

First, I will create a sample table with two reservable columns, and insert some data. By the way, my INSERT statement is inserting 3 rows in a single VALUES clause, thanks to another nice new 23c feature called Table Value Constructors:

create table t1 (
id number primary key, 
value number,
res1 number reservable constraint ck_res1 check (res1 >= 0),
res2 number reservable constraint ck_res2 check (res2 >= 0));

Table created.

insert into t1 values 
(1, 0, 10, 10),
(2, 0, 10, 10),
(3, 0, 10, 10)
;

3 rows created.

commit;

Commit complete.

Updates on reservable columns must increase or decrease the value

Note below that the first update fails because I tried to put a specific value in the reservable column. The second one works because I used an increment:

update t1 set res1 = 102 where id=1;
update t1 set res1 = 102 where id=1
       *
ERROR at line 1:
ORA-55746: Reservable column update statement only supports + or - operations on a reservable column.

update t1 set res1 = res1 + 1 where id=1;

1 row updated.

Updates cannot mix reservable and non-reservable columns

Again, below, the first update fails because I tried to update reservable and non-reservable columns in the same statement. The second one works and shows that I can update multiple reservable columns at once:

update t1 set value = 1, res1 = res1 + 1 where id=1;
update t1 set value = 1, res1 = res1 + 1 where id=1
       *
ERROR at line 1:
ORA-55735: Reservable and non-reservable columns cannot be updated in the same statement.

update t1 set res1 = res1 + 1, res2 = res2 + 1 where id=1;

1 row updated.

Updates on reservable columns are only effective at COMMIT time

When you update a reservable column and it succeeds, you actually receive a promise that the update will be applied at COMMIT time. So you won’t see your change right after issuing it, which is important to note for application purposes:

rollback;

Rollback complete.

select * from t1 where id=1;

        ID      VALUE       RES1       RES2
---------- ---------- ---------- ----------
         1          0         10         10

1 row selected.

update t1 set res1 = res1 + 1, res2 = res2 + 1 where id=1;

1 row updated.

select * from t1 where id=1;

        ID      VALUE       RES1       RES2
---------- ---------- ---------- ----------
         1          0         10         10

commit;

Commit complete.

select * from t1 where id=1;

        ID      VALUE       RES1       RES2
---------- ---------- ---------- ----------
         1          0         11         11

1 row selected.

Updates on reservable columns must filter rows by Primary Key

If I don’t mention the primary key in the WHERE clause, an error is raised:

update t1 set res1 = res1 + 1;
update t1 set res1 = res1 + 1
       *
ERROR at line 1:
ORA-55732: Reservable column update should specify all the primary key columns in the WHERE clause.

update t1 set res1 = res1 + 1 where value=1;
update t1 set res1 = res1 + 1 where value=1
       *
ERROR at line 1:
ORA-55732: Reservable column update should specify all the primary key columns in the WHERE clause.

Updates on reservable columns cannot use other columns’ values, even if the mentioned column is also reservable

And I cannot even reference the column itself, in an update that would effectively do nothing 🙂 :

update t1 set res1 = res2 + 1 where id=1;
update t1 set res1 = res2 + 1 where id=1
       *
ERROR at line 1:
ORA-55747: The SET clause of a reservable update statement requires the amount to be added or subtracted from the same reservable column, "RES2".

update t1 set res1 = res1 where id=1;
update t1 set res1 = res1 where id=1
       *
ERROR at line 1:
ORA-55746: Reservable column update statement only supports + or - operations on a reservable column.

The same transaction can update reservable and non-reservable columns

You cannot mix updates on reservable and non-reservable columns in the same statement, but you can in the same transaction.

EDIT 04/25/2023: as opposed to what I initially wrote here, if you update a non-reservable column, the updated row will be locked for other transactions, even if they only try to update reservable columns. I explain this better here.

In this case, the UPDATE on the non-reservable column holds a lock as usual and prevents other transactions from updating that row, even those touching only the reservable columns.

Session 1:

select userenv('sid');

USERENV('SID')
--------------
           301

1 row selected.

update t1 set res1=res1+1 where id=1;

1 row updated.

update t1 set value=1 where id=1;

1 row updated.

Session 2:

select userenv('sid');

USERENV('SID')
--------------
           300

1 row selected.

update t1 set res1=res1+2 where id=1;

1 row updated.

update t1 set value=1 where id=1;

(LOCKED...)

Checking the journal for pending reservations

When you create a table with reservable columns, Oracle creates an associated table to record the journal of pending updates on these columns. The name of the table is SYS_RESERVJRNL_<object_id>.
Please note that each session can only see its own reservations, not everyone’s. I could not find a way to see all pending reservations from all sessions, nor could I find anything about it in the documentation.
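
Instead of looking up the OBJECT_ID by hand, the journal table name can be derived in a single query. A small convenience sketch (it assumes T1 belongs to your own schema):

select 'SYS_RESERVJRNL_' || object_id as journal_table
from user_objects
where object_name = 'T1'
and object_type = 'TABLE';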

Session 1:

select userenv('sid');

USERENV('SID')
--------------
           301

1 row selected.

select object_id from user_objects where object_name='T1';

 OBJECT_ID
----------
     78433

1 row selected.

select * from SYS_RESERVJRNL_78433;

ORA_SAGA_ID$                     ORA_TXN_ID$      ORA_STATUS$  ORA_STMT_TYPE$           ID RES1_OP RES1_RESERVED RES2_OP RES2_RESERVED
-------------------------------- ---------------- ------------ ---------------- ---------- ------- ------------- ------- -------------
                                 0800080003020000 ACTIVE       UPDATE                    1 +                   1

1 row selected.

commit;

Commit complete.

Session 2:

select userenv('sid');

USERENV('SID')
--------------
           300

1 row selected.

select * from SYS_RESERVJRNL_78433;

ORA_SAGA_ID$                     ORA_TXN_ID$      ORA_STATUS$  ORA_STMT_TYPE$           ID RES1_OP RES1_RESERVED RES2_OP RES2_RESERVED
-------------------------------- ---------------- ------------ ---------------- ---------- ------- ------------- ------- -------------
                                 0900110009020000 ACTIVE       UPDATE                    1 +                   2

1 row selected.

commit;

Commit complete.

Note that each session sees only its own transaction’s reservation (the TXN_IDs are different).
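
If you want to correlate a journal row with your own transaction, ORA_TXN_ID$ appears to hold the same 8-byte value as the transaction ID exposed in V$TRANSACTION. That is an assumption based on its format; I found nothing documented about it:

-- assumption: ORA_TXN_ID$ matches V$TRANSACTION.XID for the current transaction
select t.xid
from v$transaction t
join v$session s on s.taddr = t.addr
where s.sid = userenv('sid');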

The journal table behaves like a temporary table, in that each session sees only its own rows, although it is not one:

select temporary from dba_tables where table_name='SYS_RESERVJRNL_78433';

T
-
N

1 row selected.
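
If you are curious whether this per-session visibility comes from a VPD policy, you can check the policy views. A quick sketch to rule that assumption out (the filtering may well happen at a lower level):

-- check whether a VPD/RLS policy is defined on the journal table
select object_owner, object_name, policy_name
from dba_policies
where object_name like 'SYS_RESERVJRNL%';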

The constraints are validated at statement time

Since the updates reserve the requested amounts, the constraints are validated at statement time and will raise an error if your update, combined with all the pending reservations, would violate the constraint.
Note that in this case the validation is still pessimistic: if some or all of the reservations are never confirmed (their transactions roll back), your update could have made it through; but since that is not guaranteed at that point, the constraint has to raise an error.

Session 1:

The first UPDATE receives an error because it alone would violate the constraint, since I’m trying to decrease the value by 20, which would result in a negative value.

The second one works since it’s decreasing the value by 10 and the resulting value would not violate the constraint.

select userenv('sid');

USERENV('SID')
--------------
           301

1 row selected.

rollback;

Rollback complete.

select * from t1 where id=1;

        ID      VALUE       RES1       RES2
---------- ---------- ---------- ----------
         1          0         14         11

1 row selected.

update t1 set res1 = res1 - 20 where id=1;
update t1 set res1 = res1 - 20 where id=1
*
ERROR at line 1:
ORA-02290: check constraint (TEST.CK_RES1) violated

update t1 set res1 = res1 - 10 where id=1;

1 row updated.

Session 2:

Note that here the same update that worked for Session 1 fails, because now the combined reservations would total a decrease of 20, again resulting in a negative value that the constraint does not allow.

select userenv('sid');

USERENV('SID')
--------------
           300

1 row selected.

rollback;

Rollback complete.

update t1 set res1 = res1 - 10 where id=1;
update t1 set res1 = res1 - 10 where id=1
*
ERROR at line 1:
ORA-02290: check constraint (TEST.CK_RES1) violated

Triggers on reservable columns fire at COMMIT time

A row-level trigger on a non-reservable column always fires at statement time, but this is not the case for reservable columns.
To demonstrate, I will create two triggers, one on the column VALUE (not reservable) and another on RES1 (reservable), plus a log table for the triggers to populate:

create table t1_log_reservations 
(id number, timestamp date, column_name varchar2(30), value number);

Table created.

create or replace TRIGGER trg_on_value
after UPDATE of value ON t1 
for each row
BEGIN
	insert into t1_log_reservations
	values (:new.id, sysdate, 'VALUE', :new.value);
END;
/

Trigger created.

create or replace TRIGGER trg_on_res1
after UPDATE of res1 ON t1 
for each row
BEGIN
	insert into t1_log_reservations
	values (:new.id, sysdate, 'RES1', :new.res1);
END;
/

Trigger created.

Now I will update the two columns and check my log table that is populated by the triggers:

update t1 set res1=res1+1 where id=1;

1 row updated.

update t1 set value=10 where id=1;

1 row updated.

select * from t1_log_reservations;

        ID TIMESTAMP           COLUMN_NAME                         VALUE
---------- ------------------- ------------------------------ ----------
         1 2023-04-18 16:54:58 VALUE                                  10

1 row selected.

Note that only the trigger on VALUE was triggered so far.

Now, let’s COMMIT and check again:

commit;

Commit complete.

select * from t1_log_reservations;

        ID TIMESTAMP           COLUMN_NAME                         VALUE
---------- ------------------- ------------------------------ ----------
         1 2023-04-18 16:54:58 VALUE                                  10
         1 2023-04-18 16:57:06 RES1                                    2

2 rows selected.

Now the trigger on RES1 was triggered.

You cannot drop a table with reservable columns

If you try to drop a table with reservable columns, you get an error. You first need to ALTER the columns to NOT RESERVABLE, and then you can drop the table:

drop table t1;
drop table t1
           *
ERROR at line 1:
ORA-55764: Cannot DROP or MOVE tables with reservable columns. First run "ALTER TABLE  MODIFY ( NOT RESERVABLE)" and then DROP or MOVE the table.

alter table t1 modify (res1 not reservable, res2 not reservable);

Table altered.

drop table t1;

Table dropped.

I have probably missed some important points about this new, amazing feature of Oracle 23c.

Let me know in the comments and I can update this post or write a supplemental one.

I hope it can be useful.

See you soon!

How to easily get the corresponding AWR snapshot interval for a date interval

Hello,

This is another short script to help you translate a provided date interval into the corresponding AWR snapshot interval.

I don’t know if you also struggle with this, but sometimes I have difficulty selecting the exact begin and end SNAP_ID for a specific date interval. It’s because all those BEGIN_INTERVAL_TIME and END_INTERVAL_TIME columns confuse me a little bit 🙂 .

For example, if I want to get an AWR from yesterday 10 am until today 10 am, it’s not uncommon for me to select as the initial snapshot the one with BEGIN_INTERVAL_TIME at yesterday 10 am, which would have its END_INTERVAL_TIME at 11 am (assuming the default 1h snapshot interval). And then, when I extract the AWR, I realize its time window actually starts at 11 am instead of 10 am. This is because we need to use the SNAP_ID that ENDS at the starting point of our desired window.
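
In other words, the begin snapshot is the one whose END_INTERVAL_TIME falls on the start of the desired window. A minimal sketch of that lookup, assuming hourly snapshots, instance 1, and a window starting yesterday at 10 am:

-- the snapshot that ENDS at the start of the window is the correct BEGIN_SNAP
select snap_id, end_interval_time
from dba_hist_snapshot
where instance_number = 1
and trunc(end_interval_time,'mi') = trunc(sysdate-1) + 10/24;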

I know, I know, it’s confusing. And this is why I wrote this script, to simplify things and make it easy.

The script just receives as parameters the BEGIN_DATE, the END_DATE, and the INST_ID of the instance. It then returns the corresponding BEGIN_SNAP and END_SNAP, plus a flag showing whether the database was restarted during the provided time window.

If the database was restarted, you CANNOT extract an AWR for that interval.

If there was no restart during the interval, you can extract your AWR by providing the returned BEGIN_SNAP and END_SNAP, maybe using the script I shared before that generates the AWR with a better file name 😉 .

So, here is the script dates_to_snaps.sql:

/*
dates_to_snaps.sql
This script shows the snapshot interval corresponding to a provided date/time interval, for a particular instance
Created by Eduardo Claro
*/

undef begin_time
undef end_time
undef inst_id

col min_time for a30
col max_time for a30
col restarted for a10

select min(SNAP_ID) begin_snap, max(SNAP_ID) end_snap, 
	min(END_INTERVAL_TIME) min_time, max(END_INTERVAL_TIME) max_time,
	case when count(distinct STARTUP_TIME) > 1 then 'YES' else 'NO' end RESTARTED
from (
	select
		a.*, lead(in_the_interval) over (order by SNAP_ID) as next_in_the_interval
	from (
		select SNAP_ID, END_INTERVAL_TIME, STARTUP_TIME,
			case when trunc(END_INTERVAL_TIME,'mi') > &&begin_time
				  and trunc(BEGIN_INTERVAL_TIME,'mi') < &&end_time
				then 1 else 0 end in_the_interval
		from dba_hist_snapshot s
		join v$database d using (DBID)
		where INSTANCE_NUMBER = nvl('&&inst_id','1')
		order by SNAP_ID
		) a
	) b
where in_the_interval = 1 or next_in_the_interval = 1;

Please note that I am truncating the snapshot dates at the minute level, disregarding the seconds. This is because it takes a few seconds for the snapshot to be generated: if you have a new snapshot at the top of every hour, your snapshot times will most likely look like '07:00:23' instead of the “ideal” '07:00:00'.

Another thing to note is that I am considering ANY snapshot that has at least a partial overlap with the provided time window. For example, if your time window goes from 3:30 pm until 5:30 pm and you have hourly snapshots, the returned snapshot interval would run from 3 pm until 6 pm, thus covering your entire window. In order to get this “expanded” window, I compare END_INTERVAL_TIME with the provided begin time, and BEGIN_INTERVAL_TIME with the provided end time. Again, I know it sounds weird, but it works ;-).

And here is a sample run:

SQL> select sysdate from dual;

SYSDATE
-------------------
2023-04-13 14:20:08

1 row selected.

Elapsed: 00:00:00.16
SQL> @blog_posts/dates_to_snaps
Enter value for begin_time: sysdate-1
Enter value for end_time: sysdate
Enter value for inst_id: 1

BEGIN_SNAP   END_SNAP MIN_TIME                       MAX_TIME                       RESTARTED
---------- ---------- ------------------------------ ------------------------------ ----------
     57646      57742 2023-04-12 14:15:41.111        2023-04-13 14:15:19.739        NO

1 row selected.

In this example, I can run an AWR for the last 24 hours because there was no restart during the interval, and I would get it by providing the SNAP_IDs 57646 and 57742:

[Screenshot: AWR report extracted for SNAP_IDs 57646 to 57742]
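
For reference, the report could also be generated straight from SQL with DBMS_WORKLOAD_REPOSITORY instead of going through the awrrpt.sql dialog. A sketch, assuming a single-instance database and the SNAP_IDs above; spool the output to an .html file to read it in a browser:

select w.output
from v$database d,
     table(dbms_workload_repository.awr_report_html(
        d.dbid,   -- database id
        1,        -- instance number
        57646,    -- begin snapshot
        57742     -- end snapshot
     )) w;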

That’s it, I hope it can be useful.

Let me know if you have any questions, I would be happy to answer.

See you soon.