Storing LOBs In and Out of a Row

By default, up to 8,000 bytes of a LOB column are stored inline with the table row (prior to 23c, the limit was 4,000 bytes). If the LOB is larger than 8,000 bytes, then Oracle automatically stores it outside the row data. The main advantage of storing a LOB in a row is that small LOBs require less I/O, because Oracle does not have to read outside the row for the LOB data.

However, storing LOB data in a row is not always desirable. The disadvantage of storing LOBs in a row is that the table row sizes are potentially longer.

This can affect the performance of full-table scans, range scans, and updates to columns other than the LOB column. In these situations, you may want to disable storage in the row.

For example, you can explicitly instruct Oracle to store the LOB outside the row with the DISABLE STORAGE IN ROW clause:

SQL> create table patchmain(
  patch_id   number,
  patch_desc clob,
  lob_file   blob)
lob(patch_desc, lob_file)
store as (tablespace lob_data disable storage in row);

ENABLE STORAGE IN ROW is the default and stores up to 8,000 bytes of a LOB in the table row. The LOB locator is always stored inline with the row, even if you specify DISABLE STORAGE IN ROW.

You cannot directly modify the in-row LOB storage setting after the table has been created. The only ways to alter it are to move the LOB column or to drop and re-create the table. This example alters the storage in row by moving the LOB column:

SQL> alter table patchmain move lob(patch_desc)
store as (enable storage in row);

You can verify the in-row storage via the IN_ROW column of USER_LOBS:

SQL> select table_name, column_name, tablespace_name, in_row from user_lobs;

TABLE_NAME   COLUMN_NAME   TABLESPACE_NAME   IN_ROW

PATCHMAIN    PATCH_DESC    LOB_DATA          YES
PATCHMAIN    LOB_FILE      LOB_DATA          YES

Viewing LOB Metadata

You can use any of the DBA/ALL/USER_LOBS views to display information about LOBs in your database:

SQL> select table_name, column_name, index_name, tablespace_name from all_lobs order by table_name;

Also keep in mind that a LOB segment has a corresponding index segment:

SQL> select segment_name, segment_type, tablespace_name from user_segments where segment_name like 'SYS_LOB%' or segment_name like 'SYS_IL%';

In this way, you can query both the segment and the index in the DBA/ALL/USER_SEGMENTS views for LOB information.

You can use the DBA/ALL/USER_SEGMENTS views to get the space consumed by the LOB segment. Here is a sample query:

SQL> select segment_name, segment_type, segment_subtype, bytes/1024/1024 meg_bytes from user_segments;

You can modify the query to report on only LOBs by joining to the DBA/ALL/USER_LOBS view:

SQL> select a.table_name, a.column_name, a.segment_name, a.index_name
       ,b.bytes/1024/1024 meg_bytes
     from user_lobs a, user_segments b
     where a.segment_name = b.segment_name;

You can also use the SPACE_USAGE procedure of the DBMS_SPACE package to report on the blocks being used by a LOB.
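For instance, you can call the SecureFiles overload of DBMS_SPACE.SPACE_USAGE in a short PL/SQL block. This is a minimal sketch; the LOB segment name shown is a hypothetical placeholder (look up the actual SYS_LOB name in USER_LOBS or USER_SEGMENTS first):

SQL> set serveroutput on
DECLARE
  l_seg_blocks   NUMBER; l_seg_bytes   NUMBER;
  l_used_blocks  NUMBER; l_used_bytes  NUMBER;
  l_exp_blocks   NUMBER; l_exp_bytes   NUMBER;
  l_unexp_blocks NUMBER; l_unexp_bytes NUMBER;
BEGIN
  dbms_space.space_usage(
    segment_owner       => user,
    segment_name        => 'SYS_LOB0000012345C00002$$', -- placeholder; query USER_LOBS
    segment_type        => 'LOB',
    segment_size_blocks => l_seg_blocks,
    segment_size_bytes  => l_seg_bytes,
    used_blocks         => l_used_blocks,
    used_bytes          => l_used_bytes,
    expired_blocks      => l_exp_blocks,
    expired_bytes       => l_exp_bytes,
    unexpired_blocks    => l_unexp_blocks,
    unexpired_bytes     => l_unexp_bytes);
  dbms_output.put_line('Used bytes: ' || l_used_bytes);
END;
/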

Illustrating LOBs, Locators, Indexes, and Chunks

Internal LOBs (CLOB, NCLOB, BLOB) store data in pieces called chunks. A chunk is the smallest unit of allocation for a LOB and is made up of one or more database blocks. LOB locators are stored in rows containing a LOB column. The LOB locator points to a LOB index. The LOB index stores information regarding the location of LOB chunks. When a table is queried, the database uses the LOB locator and associated LOB index to locate the appropriate LOB chunks. Figure 11-1 shows the relationship between a table, a row, a LOB locator, and a LOB locator’s associated index and chunks.

Figure 11-1. Relationship of table, row, LOB locator, LOB index, and LOB segment

The LOB locator for a BFILE stores the directory path and filename on the OS. Figure 11-2 shows a BFILE LOB locator that references a file on the OS.

Figure 11-2. The BFILE LOB locator contains information for locating a file on the OS

Note that the DBMS_LOB package performs operations on LOBs through the LOB locator.

SecureFiles

LOBs have two different storage architectures: the original BasicFiles and the newer SecureFiles. With Oracle 23c, you should be using SecureFiles. Unless you are running an ancient version of Oracle, always use the SecureFiles architecture, which has better performance and advanced features, including the following:

•     Encryption (requires Oracle Advanced Security Option)

•     Compression (requires Oracle Advanced Compression Option)

•     Deduplication (requires Oracle Advanced Compression Option)

SecureFiles encryption lets you transparently encrypt LOB data (just like other data types). The compression feature allows for significant space savings. The deduplication feature eliminates duplicate LOBs that otherwise would be stored multiple times.
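For example, assuming the Advanced Compression Option is licensed, you could create a SecureFiles LOB with compression and deduplication enabled (the table and tablespace names here are illustrative):

SQL> create table patch_hist(
  patch_id   number,
  patch_desc clob)
lob(patch_desc) store as securefile
(tablespace lob_data compress low deduplicate);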

You need to do a small amount of planning before using SecureFiles. Specifically, using SecureFiles requires the following:

•     A SecureFiles LOB must be stored in a tablespace using ASSM.

•     The DB_SECUREFILE initialization parameter controls whether a SecureFiles file can be used and also defines the default LOB architecture for your database.

To create an ASSM-enabled tablespace, specify the SEGMENT SPACE MANAGEMENT AUTO clause, which is the default; for example,

SQL> create tablespace lob_data
datafile '/u01/dbfile/db23c/lob_data01.dbf' size 1000m
extent management local uniform size 1m
segment space management auto;

If you have existing tablespaces, you can verify the use of ASSM by querying the DBA_TABLESPACES view. The SEGMENT_SPACE_MANAGEMENT column should have a value of AUTO for any tablespaces that you want to use with SecureFiles:

SQL> select tablespace_name, segment_space_management from dba_tablespaces;

Also, SecureFiles usage is governed by the DB_SECUREFILE database parameter. You can use either ALTER SYSTEM or ALTER SESSION to modify the value of DB_SECUREFILE. Table 11-2 describes the valid values for DB_SECUREFILE.

Table 11-2. Description of DB_SECUREFILE Settings

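For instance, to require SecureFiles for new LOBs database-wide, you could set the parameter to ALWAYS (adjust the scope to your needs):

SQL> alter system set db_securefile='ALWAYS' scope=both;

SQL> show parameter db_securefile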

Displaying Table Row Counts

When you are investigating performance or space issues, it is useful to display each table’s row count. To calculate row counts manually, you would write a query such as this for each table that you own:

SQL> select count(*) from <table_name>;

Manually crafting the SQL is time-consuming and error prone. In this situation, it is more efficient to use SQL to generate the SQL required to solve the problem.

To that end, this next example dynamically selects the required text, based on information in the DBA_TABLES view. Create a SQL script file named tabcount_generator.sql with the following code. In SQL*Plus, run that script as a DBA-privileged user with a command like @tabcount_generator.sql.

The script prompts you for a username each time, generates a new filename named tabcount_<user>.sql that contains all the necessary SELECT COUNT(*) commands for that schema, and then runs that new script and displays the table names and row counts.

UNDEFINE user
SPOOL tabcount_&&user..sql
SET LINESIZE 132 PAGESIZE 0 TRIMSPOOL OFF VERIFY OFF FEED OFF TERM OFF
SELECT
  'SELECT RPAD(' || '''' || table_name || '''' || ',30)'
  || ',' || ' COUNT(1) FROM &&user..' || table_name || ';'
FROM dba_tables
WHERE owner = UPPER('&&user')
ORDER BY 1;
SPOOL OFF
SET TERM ON
@@tabcount_&&user..sql
SET VERIFY ON FEED ON

If the username you provide to the script is INVUSER, then you can manually run the generated script as follows:

SQL> @tabcount_invuser.sql

Keep in mind that if the table row counts are high, this script can take a long time to run (several minutes).

Developers and DBAs often use SQL to generate SQL statements. This is a useful technique when you need to apply the same SQL process (repetitively) to many different objects, such as all tables in a schema. If you do not have access to DBA-level views, you can query the USER_TABLES view; for example,

SPOOL tabcount.sql
SET LINESIZE 132 PAGESIZE 0 TRIMSPOOL OFF VERIFY OFF FEED OFF TERM OFF
SELECT
  'SELECT RPAD(' || '''' || table_name || '''' || ',30)'
  || ',' || ' COUNT(*) FROM ' || table_name || ';'
FROM user_tables
ORDER BY 1;
SPOOL OFF
SET TERM ON
@@tabcount.sql
SET VERIFY ON FEED ON

If you have accurate statistics, you can query the NUM_ROWS column of the CDB/DBA/ALL/USER_TABLES views. This column normally holds a close approximation of the row count if statistics are gathered on a regular basis. The following query selects NUM_ROWS from the USER_TABLES view:

SQL> select table_name, num_rows from user_tables;
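If the statistics are stale, you can refresh them before relying on NUM_ROWS; for example, assuming you have the privileges to gather statistics on your own schema:

SQL> exec dbms_stats.gather_schema_stats(ownname => user)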

One final note: if you have partitioned tables and want to show row counts by partition, use the next few lines of SQL and PL/SQL code to generate the SQL required:

UNDEFINE user
SET SERVEROUT ON SIZE 1000000 VERIFY OFF
SPOOL part_count_&&user..txt
DECLARE
  counter  NUMBER;
  sql_stmt VARCHAR2(1000);
  CURSOR c1 IS
    SELECT table_name, partition_name
    FROM dba_tab_partitions
    WHERE table_owner = UPPER('&&user');
BEGIN
  FOR r1 IN c1 LOOP
    sql_stmt := 'SELECT COUNT(1) FROM &&user..' || r1.table_name
                || ' PARTITION (' || r1.partition_name || ')';
    EXECUTE IMMEDIATE sql_stmt INTO counter;
    DBMS_OUTPUT.PUT_LINE(
      RPAD(r1.table_name || '(' || r1.partition_name || ')', 30)
      || ' ' || TO_CHAR(counter));
  END LOOP;
END;
/
SPOOL OFF

A Different View of Metadata

DBAs commonly face the following types of database issues:

•     The database refuses connections because the maximum number of sessions has been exceeded.

•     An application is hung, apparently because of some sort of locking issue.

•     An insert into a table fails because a tablespace cannot extend.

•     A PL/SQL statement is failing with a memory error.

•     A user is trying to update a record, but a unique key constraint violation is thrown.

•     A SQL statement has been running for hours longer than normal.

•     Application users have reported that performance seems sluggish and that something must be wrong with the database.

The prior list is a small sample of the typical issues a DBA encounters on a daily basis. A certain amount of knowledge is required to be able to efficiently diagnose and handle these types of problems. A fundamental piece of that knowledge is an understanding of Oracle’s physical structures and corresponding logical components.

For example, if a table cannot extend because a tablespace is full, what knowledge do you rely on to solve this problem? You need to understand that when a database is created, it contains multiple logical space containers called tablespaces. Each tablespace consists of one or more physical data files. Each data file consists of many OS blocks. Each table consists of a segment, and every segment contains one or more extents. As a segment needs space, it allocates additional extents within a physical data file.

Once you understand the logical and physical concepts involved, you intuitively look in data dictionary views such as DBA_TABLES, DBA_SEGMENTS, DBA_TABLESPACES, and DBA_DATA_FILES to pinpoint the issue and add space as required.
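For example, two quick queries against those views show the space allocated and the space still free in each tablespace:

SQL> select tablespace_name, sum(bytes)/1024/1024 alloc_mb from dba_data_files group by tablespace_name;

SQL> select tablespace_name, sum(bytes)/1024/1024 free_mb from dba_free_space group by tablespace_name;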

In a wide variety of troubleshooting scenarios, your understanding of the relationships of various logical and physical constructs will allow you to focus on querying views that will help you quickly resolve the problem at hand.

To that end, inspect Figure 10-3. This diagram describes the relationships between logical and physical structures in an Oracle database.

The rounded rectangle shapes represent logical constructs, and the sharp-cornered rectangles are physical files.

Figure 10-3. Oracle database logical and physical structure relationships

Dynamic Performance Views

The dynamic performance data dictionary views are colloquially referred to as the V$ and GV$ views. These views are constantly updated by Oracle and reflect the current condition of the instance and database. Dynamic views are critical for diagnosing real-time performance issues.

The V$ and GV$ views are indirectly based on underlying X$ tables, which are internal memory structures that are instantiated when you start your Oracle instance. Some of the V$ views are available the moment the Oracle instance is started.

For example, V$PARAMETER contains meaningful data after the STARTUP NOMOUNT command has been issued and does not require the database to be mounted or open. Other dynamic views (such as V$CONTROLFILE) depend on information in the control file and therefore contain significant information only after the database has been mounted.

Some V$ views (such as V$DB) provide kernel-processing information and thus have useful results only after the database has been opened.

At the top layer, the V$ views are actually synonyms that point to underlying SYS.V_$ views. At the next layer down, the SYS.V_$ objects are views created on top of another layer of SYS.V$ views. The SYS.V$ views in turn are based on the SYS.GV$ views. At the bottom layer, the SYS.GV$ views are based on the X$ memory structures.

The top-level V$ synonyms and SYS.V_$ views are created when you run the catalog.sql script, which you usually do after the database is initially created. Figure 10-2 shows the process for creating the V$ dynamic performance views.

Figure 10-2. Creating the V$ dynamic performance data dictionary views

Accessing the V$ views through the topmost synonyms is usually adequate for dynamic performance information needs. On rare occasions, you will want to query internal information that may not be available through the V$ views. In these situations, it is critical to understand the X$ underpinnings.

If you work with Oracle Real Application Clusters (RACs), you should be familiar with the GV$ global views. These views provide global dynamic performance information regarding all instances in a cluster (whereas the V$ views are instance specific). The GV$ views contain an INST_ID column for identifying specific instances in a clustered environment.
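For example, to see how sessions are distributed across the instances of a cluster, group a GV$ view by INST_ID:

SQL> select inst_id, count(*) sess_cnt from gv$session group by inst_id order by inst_id;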

You can display the V$ and GV$ view definitions by querying the VIEW_DEFINITION column of the V$FIXED_VIEW_DEFINITION view. For instance, this query displays the definition of the V$CONTROLFILE:

SQL> select view_definition from v$fixed_view_definition where view_name='V$CONTROLFILE';

VIEW_DEFINITION

select STATUS, NAME, IS_RECOVERY_DEST_FILE, BLOCK_SIZE, FILE_SIZE_BLKS, CON_ID from GV$CONTROLFILE where inst_id = USERENV('Instance')

Data Dictionary Architecture

If you inherit a database and are asked to maintain and manage it, typically you will inspect the contents of the data dictionary to determine the physical structure of the database and see what events are currently transacting. Besides figuring out what you inherited, these views help to automate processes and troubleshoot problems. Toward this end, Oracle provides two general categories of read-only data dictionary views:

•     The contents of your database, such as users, tables, indexes, constraints, privileges, and other objects. These are sometimes referred to as the static CDB/DBA/ALL/USER data dictionary views, and they are based on internal tables stored in the SYSTEM tablespace. The term static, in this sense, means that the information within these views changes only as you make changes to your database, such as adding a user, creating a table, or modifying a column.

•     A real-time view of activity in the database, such as users connected to the database, SQL currently executing, memory usage, locks, and I/O statistics. These views are based on virtual memory tables and are referred to as the dynamic performance views. The information in these views is continuously updated by Oracle as events take place within the database. The views are also sometimes called the V$ or GV$ views. GV$ views are global views across all nodes in the database system and normally have an additional column to let you know which node they are referring to.

These types of data dictionary views are described in further detail in the next two sections.

Static Views

Oracle refers to a subset of the data dictionary views as static; these views are based on physical tables maintained internally by Oracle. The term static can sometimes be a misnomer. For example, the DBA_SEGMENTS and DBA_EXTENTS views change dynamically as the amount of data in your database grows and shrinks. Regardless, Oracle has made the distinction between static and dynamic, and it is important to understand this architecture nuance when querying the data dictionary. There are four levels of static views:
•     USER

•     ALL

•     DBA

•     CDB

The USER views contain information available to the current user. For example, the USER_TABLES view contains information about tables owned by the current user. No special privileges are required to select from the USER-level views.


At the next level are the ALL static views. The ALL views show you all object information the current user has access to. For example, the ALL_TABLES view displays all database tables on which the current user can perform any type of DML operation. No special privileges are required to query from the ALL-level views.


Next are the DBA static views. The DBA views contain metadata describing all objects in the database (regardless of ownership or access privilege). To access the DBA views, the current user must be granted the DBA role or the SELECT_CATALOG_ROLE role.


The CDB-level views provide information about all pluggable databases within a container database. The CDB-level views report across all containers (root, seed, and all pluggable databases) in a CDB. For instance, if you wanted to view all users within a CDB database, you would do so from the root container, by querying CDB_USERS.
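For example, connected to the root container, you could count the users in each container:

SQL> select con_id, count(*) user_cnt from cdb_users group by con_id order by con_id;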

You will notice that many of the static data dictionary and dynamic performance views have a column named CON_ID. This column uniquely identifies each pluggable database within a container database. The root container has a CON_ID of 1. The seed has a CON_ID of 2. Each new pluggable database created within the CDB is assigned a unique sequential container ID.


The static views are based on internal Oracle tables, such as USER$, TAB$, and IND$. If you have access to the SYS schema, you can view underlying tables directly via SQL. For most situations, you need to access only the static views that are based on the underlying internal tables.


The data dictionary tables (such as USER$, TAB$, and IND$) are created during the execution of the CREATE DATABASE command. As part of creating a database, the sql.bsq file is executed, which builds these internal data dictionary tables.

The sql.bsq file is generally located in the ORACLE_HOME/rdbms/admin directory; you can view it via an OS editing utility (such as vi, in Linux/Unix, or Notepad in Windows).

The static views are created when you run the catalog.sql script (usually, you run this script once the CREATE DATABASE operation succeeds). The catalog.sql script is located in the ORACLE_HOME/rdbms/admin directory. Figure 10-1 shows the process of creating the static data dictionary views.

Figure 10-1. Creating the static data dictionary views

Creating a Never Refreshable MV

You may never want an MV to be refreshed. For example, you may want to guarantee that you have a snapshot of a table at a point in time for auditing purposes. Specify the NEVER REFRESH clause when you create the MV to achieve this:

SQL> create materialized view sales_mv
never refresh
as select sales_id, sales_amt from sales;

If you attempt to refresh a nonrefreshable MV, you receive this error:

ORA-23538: cannot explicitly refresh a NEVER REFRESH materialized view

You can alter a never refreshable view to be refreshable. Use the ALTER MATERIALIZED VIEW statement to do this:

SQL> alter materialized view sales_mv refresh on demand complete;
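Once the MV is refreshable, you can trigger a manual complete refresh through the DBMS_MVIEW package (the 'C' argument requests a complete refresh):

SQL> exec dbms_mview.refresh('SALES_MV', 'C')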

You can verify the refresh mode and method with the following query:

SQL> select mview_name, refresh_mode, refresh_method from user_mviews;

Creating MVs for Query Rewrite

The query rewrite feature allows the optimizer to recognize that an MV can be used to fulfill the requirements of a query instead of using the underlying master (base) tables. If you have an environment in which users frequently write their own queries and are unaware of the available MVs, this feature can greatly help with performance. There are three prerequisites for enabling query rewrite:

•     Oracle Enterprise Edition

•     Setting the database initialization parameter QUERY_REWRITE_ENABLED to TRUE

•     MV either created or altered with the ENABLE QUERY REWRITE clause

This example creates an MV with query rewrite enabled:

SQL> create materialized view sales_daily_mv
segment creation immediate
refresh complete on demand
enable query rewrite
as
select sum(sales_amt) sales_amt
,trunc(sales_dtt) sales_dtt
from sales
group by trunc(sales_dtt);

You can verify that query rewrite is in use by examining a query’s explain plan via the autotrace utility:

SQL> set autotrace trace explain

Now, suppose a user runs the following query, unaware that an MV exists that already aggregates the required data:

SQL> select sum(sales_amt) sales_amt, trunc(sales_dtt) sales_dtt from sales group by trunc(sales_dtt);

Here is a partial listing of autotrace output that verifies that query rewrite is in use:

-----------------------------------------------------------------------------
| Id | Operation                    | Name           | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |                |     3   (0)| 00:00:01 |
|  1 | MAT_VIEW REWRITE ACCESS FULL | SALES_DAILY_MV |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------

As you can see from the prior output, even though the user selected directly from the SALES table, the optimizer determined that it could more efficiently satisfy the results of the query by accessing the MV.

You can tell if query rewrite is enabled for an MV by selecting the REWRITE_ENABLED column from USER_MVIEWS:

SQL> select mview_name, rewrite_enabled, rewrite_capability
from user_mviews
where mview_name = 'SALES_DAILY_MV';

If for any reason a query is not using the query rewrite functionality and you think it should be, use the EXPLAIN_REWRITE procedure of the DBMS_MVIEW package to diagnose issues.
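Here is a minimal sketch of that diagnosis. It assumes you first create the REWRITE_TABLE output table by running the utlxrw.sql script shipped under ORACLE_HOME:

SQL> @?/rdbms/admin/utlxrw.sql

SQL> exec dbms_mview.explain_rewrite('select sum(sales_amt), trunc(sales_dtt) from sales group by trunc(sales_dtt)', 'SALES_DAILY_MV')

SQL> select message from rewrite_table;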

Modifying and Dropping a View

If you need to modify the SQL query on which a view is based, then either drop and re-create the view or use the CREATE OR REPLACE syntax, as in the previous examples.

For instance, say you add a REGION column to the SALES table:

SQL> alter table sales add (region varchar2(30));

SQL> create or replace view sales_rockies as
select sales_id, amnt, state, region
from sales
where state in ('CO', 'UT', 'WY', 'ID', 'AZ')
with read only;

The advantage of using the CREATE OR REPLACE method is that you do not have to reestablish access to the view for users with previously granted permissions.

Also, you don’t have to re-create the view if you do not want to include the new column in the view.

However, if you remove a column that the view is using, the view will compile with errors, and you will have to re-create the view without the column.

Renaming a view is also possible with the RENAME statement:

SQL> rename sales_rockies to sales_rockies_old;

You should see this message:

Table renamed.

It would make more sense if it said “View renamed,” but the message in this case does not exactly match the operation.

Dropping a view makes sense if you are no longer using it.

SQL> drop view sales_rockies_old;

Keep in mind that when you drop a view, any dependent views, materialized views, and synonyms become invalid. Additionally, any grants associated with the dropped view are also removed.
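After dropping a view, you can check for dependent objects that were invalidated:

SQL> select object_name, object_type, status from user_objects where status = 'INVALID';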

Table Locks and Foreign Keys

Here is a simple example that demonstrates the locking issue when foreign key columns are not indexed. First, create two tables (DEPT and EMP) and associate them with a foreign key constraint:

SQL> create table emp(emp_id number primary key, dept_id number);

SQL> create table dept(dept_id number primary key);

SQL> alter table emp add constraint emp_fk1 foreign key (dept_id) references dept(dept_id);

Next, insert some data:

SQL> insert into dept values (10), (20), (30);

SQL> insert into emp values (1,10), (2,20), (3,10);

SQL> commit;

Open two terminal sessions. From one, delete one record from the child table (do not commit):

SQL> delete from emp where dept_id = 10;

Then, from the other session, attempt to delete from the parent table some data not affected by the child table delete:

SQL> delete from dept where dept_id = 30;

The delete from the parent table hangs until the child table transaction is committed.

Without a regular B-tree index on the foreign key column in the child table, any time you attempt to insert or delete in the child table, a table-wide lock is placed on the parent table; this prevents deletes or updates in the parent table until the child table transaction completes.
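While the parent table delete is hanging, you can identify the blocking session from a third session:

SQL> select sid, blocking_session, event, seconds_in_wait from v$session where blocking_session is not null;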

Now, run the prior experiment, except this time, additionally create an index on the foreign key column of the child table:

SQL> create index emp_fk1 on emp(dept_id);

You should be able to run the prior two delete statements independently. When there is a B-tree index on the foreign key column, deleting from the child table no longer causes Oracle to take the restrictive lock on all rows of the parent table.

Indexes are important tools for query performance. There is some cost involved in creating and maintaining them, but with careful analysis of your SQL statements and planning for storage and maintenance, indexes allow for much faster data access.

Implementing an Index on a Foreign Key Column

Say you have a requirement that every record in the ADDRESS table be assigned a corresponding CUST_ID column from the CUST table. To enforce this relationship, you create the ADDRESS table and a foreign key constraint, as follows:

SQL> create table address(
  address_id   number
 ,cust_address varchar2(2000)
 ,cust_id      number);

SQL> alter table address add constraint addr_fk1 foreign key (cust_id) references cust(cust_id);

Note that a foreign key column must reference a column in the parent table that has a primary key or unique key constraint defined on it. Otherwise, you will receive the error ORA-02270: no matching unique or primary key for this column-list.
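For the prior statements to succeed, a parent CUST table with a primary key must already exist; here is a minimal sketch (the CUST_NAME column is illustrative):

SQL> create table cust(cust_id number primary key, cust_name varchar2(100));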

You realize that the foreign key column is used extensively when joining the CUST and ADDRESS tables and that an index on the foreign key column will increase performance. In this situation, you have to create the index manually. For instance, the following creates a regular B-tree index on the CUST_ID foreign key column of the ADDRESS table:

SQL> create index addr_fk1 on address(cust_id);

You do not have to name the index the same name as the foreign key (as I did in these lines of code). It is a personal preference as to whether you do that. I feel it is easier to maintain environments when the constraint and corresponding index have the same name.

When creating an index, if you do not specify the tablespace name, Oracle places the index in the user’s default tablespace. It is usually a good idea to explicitly specify which tablespace the index should be placed in; for example,

SQL> create index addr_fk1 on address(cust_id) tablespace reporting_index;

Determining Whether Foreign Key Columns Are Indexed

If you are creating an application from scratch, it is fairly easy to create the code and ensure that each foreign key constraint has a corresponding index. However, if you have inherited a database, it is prudent to check if the foreign key columns are indexed.

You can use data dictionary views to verify if all columns of a foreign key constraint have a corresponding index. The task is not as simple as it might first seem. For example, here is a query that gets you started in the right direction:

SQL> SELECT DISTINCT
  a.owner            owner
 ,a.constraint_name  cons_name
 ,a.table_name       tab_name
 ,b.column_name      cons_column
 ,NVL(c.column_name,'***Check index****') ind_column
FROM dba_constraints a
    ,dba_cons_columns b
    ,dba_ind_columns c
WHERE constraint_type = 'R'
AND a.owner = UPPER('&&user_name')
AND a.owner = b.owner
AND a.constraint_name = b.constraint_name
AND b.column_name = c.column_name(+)
AND b.table_name = c.table_name(+)
AND b.position = c.column_position(+)
ORDER BY tab_name, ind_column;

This query, while simple and easy to understand, does not correctly report on unindexed foreign keys for all situations.

For example, in the case of multicolumn foreign keys, it does not matter if the constraint is defined in an order different from that of the index columns, as long as the columns defined in the constraint are in the leading edge of the index.

In other words, for a constraint on (COL1, COL2), an index on either (COL1, COL2) or (COL2, COL1) will work; the order of the same set of columns does not matter. An index on (COL1, COL2, COL3) will also work, because an extra trailing index column is fine as long as the leading columns match.

SQL> create index column_test_idx1 on table1 (col2, col1);

SQL> create index column_test_idx2 on table1 (col1, col2, col3);

Another issue is that a B-tree index protects you from locking issues, but a bitmap index does not. In this situation, the query should also check the index type.

In these scenarios, you need a more sophisticated query to detect indexing issues related to foreign key columns: one that checks the index type and matches each constraint's column set against the column sets of the indexes on the same table.

The following example is a more complex query that uses the LISTAGG analytic function to compare columns (returned as a string in one row) in a foreign key constraint with corresponding indexed columns:

SQL> SELECT
  CASE WHEN ind.index_name IS NOT NULL THEN
    CASE WHEN ind.index_type IN ('BITMAP') THEN '** Bitmp idx **'
    ELSE 'indexed' END
  ELSE
    '** Check idx **'
  END checker
 ,ind.index_type
 ,cons.owner, cons.table_name, ind.index_name, cons.constraint_name, cons.cols
FROM (SELECT
        c.owner, c.table_name, c.constraint_name
       ,LISTAGG(cc.column_name, ',') WITHIN GROUP (ORDER BY cc.column_name) cols
      FROM dba_constraints c
          ,dba_cons_columns cc
      WHERE c.owner = cc.owner
      AND c.owner = UPPER('&&schema')
      AND c.constraint_name = cc.constraint_name
      AND c.constraint_type = 'R'
      GROUP BY c.owner, c.table_name, c.constraint_name) cons
LEFT OUTER JOIN
     (SELECT
        table_owner, table_name, index_name, index_type, cbr
       ,LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY column_name) cols
      FROM (SELECT
              ic.table_owner, ic.table_name, ic.index_name
             ,ic.column_name, ic.column_position, i.index_type
             ,CONNECT_BY_ROOT(ic.column_name) cbr
            FROM dba_ind_columns ic
                ,dba_indexes i
            WHERE ic.table_owner = UPPER('&&schema')
            AND ic.table_owner = i.table_owner
            AND ic.table_name = i.table_name
            AND ic.index_name = i.index_name
            CONNECT BY PRIOR ic.column_position-1 = ic.column_position
            AND PRIOR ic.index_name = ic.index_name)
      GROUP BY table_owner, table_name, index_name, index_type, cbr) ind
ON cons.cols = ind.cols
AND cons.table_name = ind.table_name
AND cons.owner = ind.table_owner
ORDER BY checker, cons.owner, cons.table_name;

This query will prompt you for a schema name and then will display all foreign key constraints that do not have corresponding indexes. This query also checks for the index type; as previously stated, bitmap indexes may exist on foreign key columns but do not prevent locking issues.