Storing LOBs In and Out of a Row

By default, up to 8,000 bytes of a LOB column are stored inline with the table row; prior to 23c, the limit was 4,000 bytes. If the LOB is larger than 8,000 bytes, Oracle automatically stores it outside the row data. The main advantage of storing a LOB in the row is that small LOBs require less I/O, because Oracle does not have to read outside the row for the LOB data.

However, storing LOB data in a row is not always desirable. The disadvantage of storing LOBs in a row is that the table rows are potentially longer, which can affect the performance of full-table scans, range scans, and updates to columns other than the LOB column. In these situations, you may want to disable storage in the row.

For example, you can explicitly instruct Oracle to store the LOB outside the row with the DISABLE STORAGE IN ROW clause:

SQL> create table patchmain(
       patch_id   number,
       patch_desc clob,
       lob_file   blob)
     lob(patch_desc, lob_file)
     store as (tablespace lob_data disable storage in row);

ENABLE STORAGE IN ROW is the default and stores up to 8,000 bytes of a LOB in the table row. The LOB locator is always stored inline with the row, even if you specify DISABLE STORAGE IN ROW.

You cannot modify the LOB storage in a row after the table has been created. The only ways to alter storage in a row are to move the LOB column or drop and re-create the table. This example alters the storage in a row by moving the LOB column:

SQL> alter table patchmain move lob(patch_desc)
     store as (enable storage in row);

You can verify the in-row storage via the IN_ROW column of USER_LOBS:

SQL> select table_name, column_name, tablespace_name, in_row from user_lobs;

TABLE_NAME   COLUMN_NAME   TABLESPACE_NAME   IN_ROW
----------   -----------   ---------------   ------
PATCHMAIN    PATCH_DESC    LOB_DATA          YES
PATCHMAIN    LOB_FILE      LOB_DATA          YES

Viewing LOB Metadata

You can use any of the DBA/ALL/USER_LOBS views to display information about LOBs in your database:

SQL> select table_name, column_name, index_name, tablespace_name from all_lobs order by table_name;

Also keep in mind that a LOB segment has a corresponding index segment:

SQL> select segment_name, segment_type, tablespace_name
     from user_segments
     where segment_name like 'SYS_LOB%'
     or segment_name like 'SYS_IL%';

In this way, you can query both the segment and the index in the DBA/ALL/USER_SEGMENTS views for LOB information.

You can use the DBA/ALL/USER_SEGMENTS views to get the space consumed by the LOB segment. Here is a sample query:

SQL> select segment_name, segment_type, segment_subtype, bytes/1024/1024 meg_bytes from user_segments;

You can modify the query to report on only LOBs by joining to the DBA/ALL/USER_LOBS view:

SQL> select a.table_name, a.column_name, a.segment_name, a.index_name
           ,b.bytes/1024/1024 meg_bytes
     from user_lobs a, user_segments b
     where a.segment_name = b.segment_name;

You can also use the SPACE_USAGE procedure of the DBMS_SPACE package to report on the blocks being used by a LOB.
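Here is a sketch of how that call can look, using the SecureFiles overload of DBMS_SPACE.SPACE_USAGE; the LOB segment name shown is hypothetical, so substitute the segment name reported by USER_LOBS:

SQL> set serveroutput on
SQL> declare
       l_seg_blocks   number;
       l_seg_bytes    number;
       l_used_blocks  number;
       l_used_bytes   number;
       l_exp_blocks   number;
       l_exp_bytes    number;
       l_unexp_blocks number;
       l_unexp_bytes  number;
     begin
       -- SecureFiles overload of DBMS_SPACE.SPACE_USAGE
       dbms_space.space_usage(
         segment_owner       => user,
         segment_name        => 'SYS_LOB0000022340C00002$$', -- hypothetical; use your own
         segment_type        => 'LOB',
         segment_size_blocks => l_seg_blocks,
         segment_size_bytes  => l_seg_bytes,
         used_blocks         => l_used_blocks,
         used_bytes          => l_used_bytes,
         expired_blocks      => l_exp_blocks,
         expired_bytes       => l_exp_bytes,
         unexpired_blocks    => l_unexp_blocks,
         unexpired_bytes     => l_unexp_bytes);
       dbms_output.put_line('Segment bytes: ' || l_seg_bytes);
       dbms_output.put_line('Used bytes:    ' || l_used_bytes);
     end;
     /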

Creating a Table with a LOB Column

The default underlying LOB architecture is SecureFiles, and it is recommended that you create LOBs as SecureFiles. As discussed previously, SecureFiles allows you to use features such as compression and encryption.

Setting the DB_SECUREFILE parameter is not required, but you can explicitly state which LOB architecture to implement in the CREATE TABLE statement:

SQL> create table patchmain(
       patch_id   number,
       patch_desc clob)
     lob(patch_desc) store as securefile (tablespace lob_data);

If the tablespace is not specified, the LOB segment is stored in the same tablespace as its table.

Before viewing the data dictionary details regarding the LOB columns, insert a record into the table to ensure that segment information is available; for example,

SQL> insert into patchmain values(1,'clob text');

You can now verify a LOB’s architecture by querying the USER_SEGMENTS view:

SQL> select segment_name, segment_type, segment_subtype from user_segments;

Here is some sample output, indicating that a LOB segment is a SecureFiles type:

SEGMENT_NAME                SEGMENT_TYPE   SEGMENT_SU
--------------------------  -------------  ----------
PATCHMAIN                   TABLE          ASSM
SYS_IL0000022340C00002$$    LOBINDEX       ASSM
SYS_LOB0000022340C00002$$   LOBSEGMENT     SECUREFILE
You can also query the USER_LOBS view to verify the SecureFiles LOB architecture:

SQL> select table_name, segment_name, index_name, securefile, in_row from user_lobs;

Here is the output:

TABLE_NAME   SEGMENT_NAME                INDEX_NAME                 SEC   IN_
PATCHMAIN    SYS_LOB0000022340C00002$$   SYS_IL0000022340C00002$$   YES   YES

You can create a partitioned table that has a LOB column. Doing so lets you spread a LOB across multiple tablespaces. Such partitioning helps with balancing I/O, maintenance, and backup and recovery operations.

All partitioning schemes supported by Oracle are fully supported on LOBs. The next example creates a LIST-partitioned table in which LOB column data is stored in tablespaces separate from those of the table data:

SQL> CREATE TABLE patchmain(
       patch_id   NUMBER,
       region     VARCHAR2(16),
       patch_desc CLOB)
     LOB(patch_desc) STORE AS (TABLESPACE patch1)
     PARTITION BY LIST (region)
     (PARTITION p1 VALUES ('EAST')
        LOB(patch_desc) STORE AS SECUREFILE (TABLESPACE patch1 COMPRESS HIGH)
        TABLESPACE inv_data1,
      PARTITION p2 VALUES ('WEST')
        LOB(patch_desc) STORE AS SECUREFILE (TABLESPACE patch2 DEDUPLICATE NOCOMPRESS)
        TABLESPACE inv_data2,
      PARTITION p3 VALUES (DEFAULT)
        LOB(patch_desc) STORE AS SECUREFILE (TABLESPACE patch3 COMPRESS LOW)
        TABLESPACE inv_data3);

Note that each LOB partition is created with its own storage options. Specifying them is optional; by default, a LOB partition is stored in the same tablespace as its table partition. You can view the details about the LOB partitions as shown here:

SQL> select table_name, column_name, partition_name, tablespace_name
           ,compression, deduplication
     from user_lob_partitions;

Here is some sample output:

TABLE_NAME   COLUMN_NAME   PARTITION_NAME   TABLESPACE_NAME   COMPRESSION   DEDUPLICATION
----------   -----------   --------------   ---------------   -----------   -------------
PATCHMAIN    PATCH_DESC    P1               PATCH1            HIGH          NO
PATCHMAIN    PATCH_DESC    P2               PATCH2            NO            LOB
PATCHMAIN    PATCH_DESC    P3               PATCH3            LOW           NO

You can also view DBA/ALL/USER_PART_LOBS for information about partitioned LOBs.

You can change the storage characteristics of a partitioned LOB column after it is created. To do so, use the ALTER TABLE ... MODIFY PARTITION statement. You can modify compression and deduplication settings as well:

SQL> alter table patchmain modify partition p2 lob(patch_desc) (compress high);

SQL> alter table patchmain modify partition p3 lob(patch_desc) (deduplicate lob);

You can transparently encrypt a SecureFiles LOB column (just like any other column), or you can use tablespace encryption to encrypt the tablespace used for your table and LOBs. The ENCRYPT clause enables SecureFiles encryption, using Oracle Transparent Data Encryption (TDE). To encrypt at the column level or tablespace level, you need to enable a common keystore for the CDB and PDBs using united mode so you can manage the keys in the common keystore.

The Oracle Base Database service creates the keystore for TDE when it creates the database. The parameters WALLET_ROOT and TDE_CONFIGURATION hold the details of the configuration.

If the database was not created with a keystore, you can create one with the following command:

SQL> administer key management create keystore '/opt/oracle/dcs/commonstore/wallets/db23c_db23c/tde' identified by "Cr4zyPa$$word1";

Then the PDB will need to have the keystore enabled and open:

SQL> administer key management set keystore open identified by "Cr4zyPa$$word1";

After the TDE wallet configuration is all set up, a table can be created with an encrypted column:

SQL> create table patchmain(
       patch_id   number,
       patch_desc clob encrypt)
     lob(patch_desc) store as securefile;

You can also alter a column to enable encryption:

SQL> alter table patchmain modify (patch_desc clob encrypt);

The tablespace can be created with encryption:

SQL> create tablespace lob_data encryption using 'AES256' encrypt;

Note that partitioning, advanced compression, and advanced security, which have been discussed in this chapter, are extra-cost options that are available only with Oracle Enterprise Edition.

Showing Primary Key and Foreign Key Relationships

Sometimes when you are diagnosing constraint issues, it is useful to display data dictionary information regarding what primary key constraint is associated with a foreign key constraint.

For example, perhaps you are attempting to insert into a child table, an error is thrown indicating that the parent key does not exist, and you want to display more information about the parent key constraint.

The following script queries the DBA_CONSTRAINTS data dictionary view to determine the parent primary key constraints that are related to child foreign key constraints. You need to provide as input to the script the owner of the table and the child table for which you want to display primary key constraints:

SQL> select a.constraint_type cons_type
           ,a.table_name child_table
           ,a.constraint_name child_cons
           ,b.table_name parent_table
           ,b.constraint_name parent_cons
           ,b.constraint_type cons_type
     from dba_constraints a
         ,dba_constraints b
     where a.owner = upper('&owner')
     and a.table_name = upper('&table_name')
     and a.constraint_type = 'R'
     and a.r_owner = b.owner
     and a.r_constraint_name = b.constraint_name;

The preceding script prompts you for two SQL*Plus ampersand variables (OWNER, TABLE_NAME); if you are not using SQL*Plus, then you may need to modify the script with the appropriate values before you run it.

The following output shows that there are two foreign key constraints. It also shows the parent table primary key constraints:

C CHILD_TABLE CHILD_CONS PARENT_TABLE PARENT_CONS C

R REG_COMPANIES REG_COMPANIES_FK2 D_COMPANIES D_COMPANIES_PK P

R REG_COMPANIES REG_COMPANIES_FK1 CLUSTER_BUCKETS CLUSTER_BUCKETS_PK P

When the CONSTRAINT_TYPE column (of DBA/ALL/USER_CONSTRAINTS) contains an R value, this indicates that the row describes a referential integrity constraint, which means that the child table constraint references a primary key constraint.

You use the technique of joining to the same table twice to retrieve the primary key constraint information. The child constraint columns (R_OWNER, R_CONSTRAINT_NAME) match with another row in the DBA_CONSTRAINTS view that contains the primary key information.

You can also do the reverse of the prior query in this section; for a primary key constraint, you want to find the foreign key columns (if any) that correlate to it.

The next script takes the primary key record and looks to see if it has any child records with a constraint type of R. When you run this script, you are prompted for the primary key table owner and name:

SQL> select b.table_name primary_key_table
           ,a.table_name fk_child_table
           ,a.constraint_name fk_child_table_constraint
     from dba_constraints a
         ,dba_constraints b
     where a.r_constraint_name = b.constraint_name
     and a.r_owner = b.owner
     and a.constraint_type = 'R'
     and b.owner = upper('&table_owner')
     and b.table_name = upper('&table_name');

Here is some sample output:

PRIMARY_KEY_TABLE   FK_CHILD_TABLE   FK_CHILD_TABLE_CONSTRAINT
-----------------   --------------   -------------------------
CLUSTER_BUCKETS     REG_COMPANIES    REG_COMPANIES_FK1

A Different View of Metadata

Logical objects are viewable from SQL only after the database has been started. In contrast, physical objects can be viewed via OS utilities even if the instance is not started.

Figure 10-3 does not show all the relationships of all logical and physical aspects of an Oracle database. Rather, it focuses on components that you are most likely to encounter on a daily basis. This base relational diagram forms a foundation for leveraging Oracle’s data dictionary infrastructure.

Keep the image of Figure 10-3 in your mind; now add Figure 10-4 to it.

Figure 10-4. Relationships of commonly used data dictionary views

Voilà, these data dictionary views map very closely to almost all the logical and physical elements of an Oracle database. Figure 10-4 does not show every data dictionary view. Indeed, the figure barely scratches the surface. However, this diagram does provide you with a secure foundation on which to build your understanding of how to leverage the data dictionary views to get the data you need to do your job.

The diagram shows the relationships between views, but it does not specify which columns to use when joining views together. You will have to describe the tables and make an educated guess as to how the views should be joined.

Use the diagram as a guide for where to start looking for information and how to write SQL queries that will provide answers to problems and expand your knowledge of Oracle’s internal architecture and inner workings. This anchors your problem-solving skills on a solid foundation. Once you firmly understand the relationships of Oracle’s logical and physical components and how this relates to the data dictionary, you can confidently address any type of database issue.

Note there are several thousand CDB/DBA/ALL/USER static views and more than 900 V$ dynamic performance views.

A Few Creative Uses of the Data Dictionary

In every chapter of this book, you will find several SQL examples of how to leverage the data dictionary to better understand concepts and resolve problems. Having said that, it is worth showing a few offbeat examples of how DBAs leverage the data dictionary. The next few sections do just that. Keep in mind that this is just the tip of the iceberg: there is an endless number of queries and techniques that DBAs employ to extract and use data dictionary information.

Dynamic Performance Views

The dynamic performance data dictionary views are colloquially referred to as the V$ and GV$ views. These views are constantly updated by Oracle and reflect the current condition of the instance and database. Dynamic views are critical for diagnosing real-time performance issues.

The V$ and GV$ views are indirectly based on underlying X$ tables, which are internal memory structures that are instantiated when you start your Oracle instance. Some of the V$ views are available the moment the Oracle instance is started.

For example, V$PARAMETER contains meaningful data after the STARTUP NOMOUNT command has been issued and does not require the database to be mounted or open. Other dynamic views (such as V$CONTROLFILE) depend on information in the control file and therefore contain significant information only after the database has been mounted.

Some V$ views (such as V$DB) provide kernel-processing information and thus have useful results only after the database has been opened.

At the top layer, the V$ views are actually synonyms that point to underlying SYS.V_$ views. At the next layer down, the SYS.V_$ objects are views created on top of another layer of SYS.V$ views. The SYS.V$ views in turn are based on the SYS.GV$ views. At the bottom layer, the SYS.GV$ views are based on the X$ memory structures.

The top-level V$ synonyms and SYS.V_$ views are created when you run the catalog.sql script, which you usually do after the database is initially created. Figure 10-2 shows the process for creating the V$ dynamic performance views.

Figure 10-2. Creating the V$ dynamic performance data dictionary views

Accessing the V$ views through the topmost synonyms is usually adequate for dynamic performance information needs. On rare occasions, you will want to query internal information that may not be available through the V$ views. In these situations, it is critical to understand the X$ underpinnings.

If you work with Oracle Real Application Clusters (RACs), you should be familiar with the GV$ global views. These views provide global dynamic performance information regarding all instances in a cluster (whereas the V$ views are instance specific). The GV$ views contain an INST_ID column for identifying specific instances in a clustered environment.
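For example, in a RAC environment you can group a GV$ view by INST_ID to summarize activity per instance; this is a minimal sketch (the view and columns are standard, but the counts will vary by environment):

SQL> select inst_id, count(*) session_count
     from gv$session
     group by inst_id
     order by inst_id;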

You can display the V$ and GV$ view definitions by querying the VIEW_DEFINITION column of the V$FIXED_VIEW_DEFINITION view. For instance, this query displays the definition of the V$CONTROLFILE:

SQL> select view_definition from v$fixed_view_definition
     where view_name = 'V$CONTROLFILE';

VIEW_DEFINITION
---------------
select STATUS, NAME, IS_RECOVERY_DEST_FILE, BLOCK_SIZE, FILE_SIZE_BLKS, CON_ID
from GV$CONTROLFILE where inst_id = USERENV('Instance')

Data Dictionary Architecture

If you inherit a database and are asked to maintain and manage it, typically you will inspect the contents of the data dictionary to determine the physical structure of the database and see what events are currently transacting. Besides figuring out what you inherited, these views help to automate processes and troubleshoot problems. Toward this end, Oracle provides two general categories of read-only data dictionary views:

•     The contents of your database, such as users, tables, indexes, constraints, privileges, and other objects. These are sometimes referred to as the static CDB/DBA/ALL/USER data dictionary views, and they are based on internal tables stored in the SYSTEM tablespace. The term static, in this sense, means that the information within these views changes only as you make changes to your database, such as adding a user, creating a table, or modifying a column.

•     A real-time view of activity in the database, such as users connected to the database, SQL currently executing, memory usage, locks, and I/O statistics. These views are based on virtual memory tables and are referred to as the dynamic performance views. The information in these views is continuously updated by Oracle as events take place within the database. The views are also sometimes called the V$ or GV$ views. GV$ views are global views across all nodes in the database system and normally have an additional column to let you know which node they are referring to.

These types of data dictionary views are described in further detail in the next two sections.

Static Views

Oracle refers to a subset of the data dictionary views as static; these views are based on physical tables maintained internally by Oracle. The term static can sometimes be a misnomer. For example, the DBA_SEGMENTS and DBA_EXTENTS views change dynamically as the amount of data in your database grows and shrinks. Regardless, Oracle has made the distinction between static and dynamic, and it is important to understand this architecture nuance when querying the data dictionary. There are four levels of static views:
• USER

• ALL

• DBA

• CDB
The USER views contain information available to the current user. For example, the USER_TABLES view contains information about tables owned by the current user. No special privileges are required to select from the USER-level views.


At the next level are the ALL static views. The ALL views show you all object information the current user has access to. For example, the ALL_TABLES view displays all database tables on which the current user can perform any type of DML operation. No special privileges are required to query from the ALL-level views.


Next are the DBA static views. The DBA views contain metadata describing all objects in the database (regardless of ownership or access privilege). To access the DBA views, a DBA role or SELECT_CATALOG_ROLE must be granted to the current user.


The CDB-level views provide information about all pluggable databases within a container database. The CDB-level views report across all containers (root, seed, and all pluggable databases) in a CDB. For instance, if you wanted to view all users within a CDB database, you would do so from the root container, by querying CDB_USERS.

You will notice that many of the static data dictionary and dynamic performance views have a new column, CON_ID. This column uniquely identifies each pluggable database within a container database. The root container has a CON_ID of 1. The seed has a CON_ID of 2. Each new pluggable database created within the CDB is assigned a unique sequential container ID.
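For instance, the following queries (sketches run from the root container) show how CON_ID appears in container-aware views:

SQL> select con_id, name, open_mode from v$containers order by con_id;
SQL> select con_id, username from cdb_users order by con_id, username;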


The static views are based on internal Oracle tables, such as USER$, TAB$, and IND$. If you have access to the SYS schema, you can view underlying tables directly via SQL. For most situations, you need to access only the static views that are based on the underlying internal tables.


The data dictionary tables (such as USER$, TAB$, and IND$) are created during the execution of the CREATE DATABASE command. As part of creating a database, the sql.bsq file is executed, which builds these internal data dictionary tables.

The sql.bsq file is generally located in the ORACLE_HOME/rdbms/admin directory; you can view it via an OS editing utility (such as vi in Linux/Unix or Notepad in Windows).

The static views are created when you run the catalog.sql script (usually, you run this script once the CREATE DATABASE operation succeeds). The catalog.sql script is located in the ORACLE_HOME/rdbms/admin directory. Figure 10-1 shows the process of creating the static data dictionary views.

Figure 10-1. Creating the static data dictionary views

Creating a Fast Refreshable MV Based on a Complex Query

In many situations, when you base an MV on a query that joins multiple tables, it is deemed complex and therefore is available only for a complete refresh. However, in some scenarios, you can create a fast refreshable MV when you reference two tables that are joined together in the MV query.

This section describes how to use the EXPLAIN_MVIEW procedure in DBMS_MVIEW to determine whether it is possible to fast refresh a complex query. To help you completely understand the example, this section shows the SQL used to create the base tables. Say you have two base tables, defined as follows:

SQL> create table region(
       region_id number,
       reg_desc  varchar2(30),
       constraint region_pk primary key(region_id));

SQL> create table sales(
       sales_id  number,
       sales_amt number,
       region_id number,
       sales_dtt date,
       constraint sales_pk primary key(sales_id),
       constraint sales_fk1 foreign key (region_id) references region(region_id));

Additionally, REGION and SALES have MV logs created on them, as shown here:

SQL> create materialized view log on region with primary key;
SQL> create materialized view log on sales with primary key;

Also, for this example, the base tables have these data inserted into them:

SQL> insert into region values(10,'East'), (20,'West'), (30,'South'), (40,'North');
SQL> insert into sales values(1,100,10,sysdate), (2,200,20,sysdate-20), (3,300,30,sysdate-30);

Suppose you want to create an MV that joins the REGION and SALES base tables as follows:

SQL> create materialized view sales_mv as
     select a.sales_id, b.reg_desc
     from sales a, region b
     where a.region_id = b.region_id;

Next, let’s attempt to fast refresh the MV:

SQL> exec dbms_mview.refresh('SALES_MV','F');

This error is thrown:

ORA-12032: cannot use rowid column from materialized view log…

The error indicates that the MV has issues and cannot be fast refreshed. To determine whether this MV can become fast refreshable, use the output of the EXPLAIN_MVIEW procedure of the DBMS_MVIEW package. This procedure requires that you first create an MV_CAPABILITIES_TABLE. Oracle provides a script to do this. Run this script as the owner of the MV:

SQL> @?/rdbms/admin/utlxmv.sql

After you create the table, run the EXPLAIN_MVIEW procedure to populate it:

SQL> exec dbms_mview.explain_mview(mv=>'SALES_MV',stmt_id=>'100');

Now, query MV_CAPABILITIES_TABLE to see what potential issues this MV may have:

SQL> select capability_name, possible, msgtxt, related_text
     from mv_capabilities_table
     where capability_name like 'REFRESH_FAST_AFTER%'
     and statement_id = '100'
     order by 1;

Next is a partial listing of the output. The P (for possible) column contains an N (for no) for every fast refresh possibility:

CAPABILITY_NAME             P  MSGTXT                            RELATED_TEXT
--------------------------  -  --------------------------------  ---------------
REFRESH_FAST_AFTER_INSERT   N  the SELECT list does not have
                               the rowids of all the detail
                               tables
REFRESH_FAST_AFTER_INSERT   N  mv log must have ROWID            MV_MAINT.REGION
REFRESH_FAST_AFTER_INSERT   N  mv log must have ROWID            MV_MAINT.SALES

MSGTXT indicates the issues: the MV logs must be ROWID based, and the ROWIDs of the detail tables must appear in the SELECT clause. So, first drop and re-create the MV logs with ROWID (instead of a primary key):

SQL> drop materialized view log on region;
SQL> drop materialized view log on sales;
SQL> create materialized view log on region with rowid;
SQL> create materialized view log on sales with rowid;
SQL> drop materialized view sales_mv;
SQL> create materialized view sales_mv as
     select a.rowid sales_rowid
           ,b.rowid region_rowid
           ,a.sales_id
           ,b.reg_desc
     from sales a, region b
     where a.region_id = b.region_id;

Next, reset the MV_CAPABILITIES_TABLE, and repopulate it via the EXPLAIN_MVIEW procedure:

SQL> delete from mv_capabilities_table where statement_id=100;
SQL> exec dbms_mview.explain_mview(mv=>'SALES_MV',stmt_id=>'100');

The output shows that it is now possible to fast refresh the MV:

CAPABILITY_NAME                P  MSGTXT   RELATED_TEXT
-----------------------------  -  -------  ------------
REFRESH_FAST_AFTER_ANY_DML     Y
REFRESH_FAST_AFTER_INSERT      Y
REFRESH_FAST_AFTER_ONETAB_DML  Y

Execute the following statement to see if the fast refresh works:

SQL> exec dbms_mview.refresh('SALES_MV','F');

PL/SQL procedure successfully completed.

The EXPLAIN_MVIEW procedure is a powerful tool that allows you to determine whether a refresh capability is possible and, if it is not possible, why it is not and how to potentially resolve the issue.

Oracle Database 23c also supports fast refresh on ANSI join syntax for MVs. Our query in the example had the following:

from sales a, region b
where a.region_id = b.region_id

But now this syntax is also valid:

from sales a join region b
on (a.region_id = b.region_id)

JSON Relational Duality Views

You might see this as a developer topic inserted into a database administration book, but this idea of having relational tables and delivering data in different formats while maintaining data consistency and performance is too important to keep just to the developers or just to DBAs. You can use SQL, graph syntax, PL/SQL, JavaScript, or your favorite programming language to access data from the database.

JSON relational duality views make it easy to maintain JSON documents without repeated round-trips to the database for IDs, or updates against other documents, to keep data consistent across the board. JSON relational duality views leverage relational tables, providing JSON documents built from the data you already use in other applications or in the relational tables that handle your transactions.

Also, if you have gotten into the practice of providing data services from the database through APIs or views, this is going to show you how easy it is to use JSON in the Oracle Database and provide JSON documents for applications to read, insert, and modify.

The first part of this chapter demonstrated the basic view concepts with a few examples to implement, create, and maintain views as database objects. Now let's dive into how we can create JSON duality views on some of the same types of tables. Here is the create table statement for these examples:

SQL> create table emp(
       emp_id      number primary key,
       emp_name    varchar2(30),
       emp_email   varchar2(30),
       job_profile varchar2(30));

You can also simply select from the table in JSON formatting:

Now, we can create a simple duality view on just the emp table:

Here we used the SQL syntax to create the view, and the insert, update, and delete operations allow us to perform those actions against the view. The relational table will be updated as a result.

Select from the view to see the JSON format:
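A duality view exposes a single DATA column containing the JSON document, so a select can look like this (a sketch; JSON_SERIALIZE with PRETTY simply makes the output readable):

SQL> select json_serialize(data pretty) from emp_v;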

You will notice the emp_id is the primary key and additional fields of metadata have been added. The etag can be used for optimistic locking, and it represents the current state of the object. The asof metadata is the system change number (SCN) for consistent reads. If you update the emp, the state of the object changes for those updated records, and the etag then changes. The etag provides the checks and validations for updates using optimistic locking, with no pessimistic locks held. It really makes it easy to work with high-concurrency systems and scales very nicely.

From the select statement, you can see every row is one employee, and each employee is a JSON document in the view.

Now let’s look at a duality view on multiple tables. Also, notice that this is using the graph syntax that is built into the Oracle Database. This is another way of querying the data. You can also build the view with SQL syntax like we did for the emp_v.
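A sketch of what such a multitable view could look like in the graph (GraphQL) syntax follows. This assumes a hypothetical teams parent table and a team_id foreign key column on emp, neither of which appears in the create statement shown earlier, so treat the names here as illustrative only:

SQL> create or replace json relational duality view emp_dept_v as
     teams @insert @update @delete
     {
       _id        : team_id
       team_name  : team_name
       department : dept
       manager    : manager
       emp_info   : emp @insert @update @delete
         {
           emp_id   : emp_id
           emp_name : emp_name
         }
     };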

The JSON document is being made up from different entities, and the data is shared in the views. With the insert/update/delete, you can perform these actions on the view, and that update or insert will occur in the relational tables that populate these views.

This eliminates managing all of the JSON documents to update every document with the change. Also, having everything available in the JSON document will avoid making more round-trips to the database to fetch IDs as things change because it is based on the relational table data.

The JSON duality view has all of the data that is needed available, including the changes, as data is modified through the view to the tables. As you can see, I excluded a couple of columns from the view, and not all columns from the table are required in the duality view.

So, what does this view look like? The first regular SQL statement just returns the rows, and then pretty print makes the JSON data more readable:

Now let’s insert a row into the emp_v for the new employee:
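As a sketch of that insert (the values are made up): you supply the whole JSON document, and the duality view writes it through to the emp table:

SQL> insert into emp_v values ('{"_id"         : 150,
                                 "emp_name"    : "New Employee",
                                 "emp_email"   : "new.emp@example.com",
                                 "job_profile" : "DBA"}');
SQL> commit;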

Now let’s insert a row into the emp_dept_v with a new team name:
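Again as a sketch (the field names assume a hypothetical emp_dept_v structure over teams and emp; the document carries the new team and its employees in one insert):

SQL> insert into emp_dept_v values ('{"_id"        : 10,
                                      "team_name"  : "Database Team",
                                      "department" : "IT",
                                      "manager"    : "Maria",
                                      "emp_info"   : [{"emp_id" : 150, "emp_name" : "New Employee"}]}');
SQL> commit;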

Selecting from the emp table will show another entry:

Selecting from the teams table shows the inserted row with the department and manager:

Finally, the JSON duality view shows us the JSON data:
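For instance (a sketch; the view name assumes the hypothetical emp_dept_v used in this example):

SQL> select json_serialize(data pretty) from emp_dept_v;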

Not all of the fields were inserted, but we also didn’t provide that data through the view. Depending on the application and how the data is being used, those are areas that can be handled programmatically or through triggers or just by including the data in the insert.

There are also data dictionary views that provide metadata about the duality views: DBA/ALL/USER_JSON_DUALITY_VIEWS, DBA/ALL/USER_JSON_DUALITY_VIEW_TABS, DBA/ALL/USER_JSON_DUALITY_VIEW_TAB_COLS, and DBA/ALL/USER_JSON_DUALITY_VIEW_LINKS.

SQL> select view_name from user_json_duality_views;

VIEW_NAME
--------------------------------------------
EMP_V
EMP_DEPT_V

With Oracle Database 23c, there are simplified ways to handle JSON documents. There are JSON data types and JSON schemas, along with functions that make the JSON format easier to view. You can use SQL or graph syntax for easier-to-read JSON; and now JSON duality views give applications ways to get and put data, read data, and modify it when necessary while sharing the same data source and avoiding costly integrations and data consistency issues.

Table Locks and Foreign Keys

Here is a simple example that demonstrates the locking issue when foreign key columns are not indexed. First, create two tables (DEPT and EMP) and associate them with a foreign key constraint:

SQL> create table emp(emp_id number primary key, dept_id number);
SQL> create table dept(dept_id number primary key);

SQL> alter table emp add constraint emp_fk1 foreign key (dept_id) references dept(dept_id);

Next, insert some data:

SQL> insert into dept values(10), (20),(30);

SQL> insert into emp values(1,10), (2,20),(3,10);

SQL> commit;

Open two terminal sessions. From one, delete one record from the child table (do not commit):

SQL> delete from emp where dept_id = 10;

Then, attempt to delete from the parent table some data not affected by the child table delete:

SQL> delete from dept where dept_id = 30;

The delete from the parent table hangs until the child table transaction is committed.

Without a regular B-tree index on the foreign key column in the child table, a delete from the parent table (or an update of its key) must take a table-level lock on the child table to verify that no child rows reference the affected key. If another session has an uncommitted transaction against the child table, the parent-table operation blocks until that transaction completes.
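While the parent-table delete is hanging, you can confirm the blocking from a third session. A minimal sketch using the V$SESSION view (BLOCKING_SESSION is populated for the waiting session):

SQL> select sid, blocking_session, event
     from   v$session
     where  blocking_session is not null;

The EVENT column shows the wait event the blocked session is experiencing, and BLOCKING_SESSION identifies the session holding the child-table transaction.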

Now, run the prior experiment, except this time, additionally create an index on the foreign key column of the child table:

SQL> create index emp_fk1 on emp(dept_id);

You should be able to run the prior two delete statements independently. With a B-tree index on the foreign key column, Oracle can use the index to check for referencing child rows, so deleting from the parent table no longer requires a full table lock on the child table.

Indexes are important performance tools when querying data. There is some cost to creating and maintaining them, but with careful analysis of the SQL statements and planning for storage and maintenance, indexes allow much faster data access.

Implementing an Index on a Foreign Key Column- Tables and Constraints

Say you have a requirement that every record in the ADDRESS table be assigned a corresponding CUST_ID column from the CUST table. To enforce this relationship, you create the ADDRESS table and a foreign key constraint, as follows:

SQL> create table address(
  address_id    number,
  cust_address  varchar2(2000),
  cust_id       number);

SQL> alter table address add constraint addr_fk1 foreign key (cust_id) references cust(cust_id);

Note that a foreign key column must reference a column in the parent table that has a primary key or unique key constraint defined on it. Otherwise, you will receive the error ORA-02270: no matching unique or primary key for this column-list.
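For reference, the parent CUST table (whose definition is not shown here) must look something like the following minimal sketch; the CUST_NAME column is only an assumption for illustration, but the primary key on CUST_ID is required for the foreign key to be valid:

SQL> create table cust(
  cust_id    number primary key,
  cust_name  varchar2(100));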

You realize that the foreign key column is used extensively when joining the CUST and ADDRESS tables and that an index on the foreign key column will increase performance. In this situation, you have to create the index manually; for instance, the following creates a regular B-tree index on the CUST_ID foreign key column of the ADDRESS table.

SQL> create index addr_fk1 on address(cust_id);

You do not have to name the index the same name as the foreign key (as I did in these lines of code). It is a personal preference as to whether you do that. I feel it is easier to maintain environments when the constraint and corresponding index have the same name.

When creating an index, if you do not specify the tablespace name, Oracle places the index in the user’s default tablespace. It is usually a good idea to explicitly specify which tablespace the index should be placed in; for example,

SQL> create index addr_fk1 on address(cust_id) tablespace reporting_index;
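You can confirm where the index landed by querying USER_INDEXES, which includes a TABLESPACE_NAME column:

SQL> select index_name, tablespace_name
     from   user_indexes
     where  index_name = 'ADDR_FK1';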

Determining Whether Foreign Key Columns Are Indexed

If you are creating an application from scratch, it is fairly easy to create the code and ensure that each foreign key constraint has a corresponding index. However, if you have inherited a database, it is prudent to check if the foreign key columns are indexed.

You can use data dictionary views to verify if all columns of a foreign key constraint have a corresponding index. The task is not as simple as it might first seem. For example, here is a query that gets you started in the right direction:

SQL> SELECT DISTINCT
  a.owner                                  owner
  ,a.constraint_name                       cons_name
  ,a.table_name                            tab_name
  ,b.column_name                           cons_column
  ,NVL(c.column_name,'***Check index***')  ind_column
FROM dba_constraints  a
    ,dba_cons_columns b
    ,dba_ind_columns  c
WHERE constraint_type = 'R'
AND a.owner = UPPER('&&user_name')
AND a.owner = b.owner
AND a.constraint_name = b.constraint_name
AND b.column_name = c.column_name(+)
AND b.table_name  = c.table_name(+)
AND b.position    = c.column_position(+)
ORDER BY tab_name, ind_column;

This query, while simple and easy to understand, does not correctly report on unindexed foreign keys for all situations.

For example, in the case of multicolumn foreign keys, it does not matter if the constraint is defined in an order different from that of the index columns, as long as the columns defined in the constraint are in the leading edge of the index.

In other words, for a constraint on (COL1, COL2), an index on either (COL1, COL2) or (COL2, COL1) will work; the order of the same set of columns does not matter. An index on (COL1, COL2, COL3) will also work, because an extra index column is fine as long as the leading columns match.

SQL> create index column_test_idx on table1 (col2, col1);

Or, as an alternative (created instead of the prior index; two indexes cannot share a name):

SQL> create index column_test_idx2 on table1 (col1, col2, col3);

Another issue is that a B-tree index protects you from locking issues, but a bitmap index does not. In this situation, the query should also check the index type.
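As a quick sanity check before running the more sophisticated query that follows, you can list any bitmap indexes in the schema via the INDEX_TYPE column of USER_INDEXES:

SQL> select index_name, table_name, index_type
     from   user_indexes
     where  index_type = 'BITMAP';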

In these scenarios, you need a more sophisticated query to detect indexing issues related to foreign key columns: one that checks the index type and matches the columns of each foreign key constraint against the indexed columns of the corresponding tables.

The following example is a more complex query that uses the LISTAGG analytic function to compare columns (returned as a string in one row) in a foreign key constraint with corresponding indexed columns:

SQL> SELECT
  CASE WHEN ind.index_name IS NOT NULL THEN
         CASE WHEN ind.index_type IN ('BITMAP') THEN '** Bitmp idx **'
              ELSE 'indexed'
         END
       ELSE '** Check idx **'
  END checker
  ,ind.index_type
  ,cons.owner, cons.table_name, ind.index_name, cons.constraint_name, cons.cols
FROM (SELECT
        c.owner, c.table_name, c.constraint_name
        ,LISTAGG(cc.column_name, ',') WITHIN GROUP (ORDER BY cc.column_name) cols
      FROM dba_constraints  c
          ,dba_cons_columns cc
      WHERE c.owner = cc.owner
      AND   c.owner = UPPER('&&schema')
      AND   c.constraint_name = cc.constraint_name
      AND   c.constraint_type = 'R'
      GROUP BY c.owner, c.table_name, c.constraint_name) cons
LEFT OUTER JOIN
     (SELECT
        table_owner, table_name, index_name, index_type, cbr
        ,LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY column_name) cols
      FROM (SELECT
              ic.table_owner, ic.table_name, ic.index_name
              ,ic.column_name, ic.column_position, i.index_type
              ,CONNECT_BY_ROOT(ic.column_name) cbr
            FROM dba_ind_columns ic
                ,dba_indexes     i
            WHERE ic.table_owner = UPPER('&&schema')
            AND   ic.table_owner = i.table_owner
            AND   ic.table_name  = i.table_name
            AND   ic.index_name  = i.index_name
            CONNECT BY PRIOR ic.column_position - 1 = ic.column_position
            AND PRIOR ic.index_name = ic.index_name)
      GROUP BY table_owner, table_name, index_name, index_type, cbr) ind
ON  cons.cols = ind.cols
AND cons.table_name = ind.table_name
AND cons.owner = ind.table_owner
ORDER BY checker, cons.owner, cons.table_name;

This query will prompt you for a schema name and then will display all foreign key constraints that do not have corresponding indexes. This query also checks for the index type; as previously stated, bitmap indexes may exist on foreign key columns but do not prevent locking issues.