Storing LOBs In and Out of a Row- Large Objects

By default, up to 8,000 bytes of a LOB column are stored inline with the table row (prior to 23c, the limit was 4,000 bytes). If the LOB is larger than that, Oracle automatically stores it outside the row. The main advantage of storing a LOB in the row is that small LOBs require less I/O, because Oracle does not have to read outside the row to retrieve the LOB data.

However, storing LOB data in a row is not always desirable. The disadvantage of storing LOBs in a row is that the table row sizes are potentially longer.

This can affect the performance of full-table scans, range scans, and updates to columns other than the LOB column. In these situations, you may want to disable storage in the row.

For example, you can explicitly instruct Oracle to store the LOB outside the row with the DISABLE STORAGE IN ROW clause:

SQL> create table patchmain(
     patch_id   number,
     patch_desc clob,
     lob_file   blob)
     lob(patch_desc, lob_file)
     store as (tablespace lob_data disable storage in row);

ENABLE STORAGE IN ROW is the default and stores up to 8,000 bytes of a LOB in the table row. The LOB locator is always stored inline with the row, even if you specify DISABLE STORAGE IN ROW.

You cannot modify the LOB storage in a row after the table has been created. The only ways to alter storage in a row are to move the LOB column or drop and re-create the table. This example alters the storage in a row by moving the LOB column:

SQL> alter table patchmain move lob(patch_desc)
     store as (enable storage in row);

You can verify the in-row storage via the IN_ROW column of USER_LOBS:

SQL> select table_name, column_name, tablespace_name, in_row from user_lobs;

TABLE_NAME   COLUMN_NAME   TABLESPACE_NAME   IN_ROW
PATCHMAIN    PATCH_DESC    LOB_DATA          YES
PATCHMAIN    LOB_FILE      LOB_DATA          YES

Viewing LOB Metadata

You can use any of the DBA/ALL/USER_LOBS views to display information about LOBs in your database:

SQL> select table_name, column_name, index_name, tablespace_name from all_lobs order by table_name;

Also keep in mind that a LOB segment has a corresponding index segment:

SQL> select segment_name, segment_type, tablespace_name from user_segments where segment_name like 'SYS_LOB%' or segment_name like 'SYS_IL%';

In this way, you can query both the segment and the index in the DBA/ALL/USER_SEGMENTS views for LOB information.

You can use the DBA/ALL/USER_SEGMENTS views to get the space consumed by the LOB segment. Here is a sample query:

SQL> select segment_name, segment_type, segment_subtype, bytes/1024/1024 meg_bytes from user_segments;

You can modify the query to report only on LOBs by joining to the DBA/ALL/USER_LOBS view:

SQL> select a.table_name, a.column_name, a.segment_name, a.index_name, b.bytes/1024/1024 meg_bytes
     from user_lobs a, user_segments b
     where a.segment_name = b.segment_name;

You can also use the SPACE_USAGE procedure in the DBMS_SPACE package to report on the blocks being used by a LOB.
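For instance, a PL/SQL block along the following lines (a sketch that uses the SecureFiles overload of DBMS_SPACE.SPACE_USAGE; the SYS_LOB segment name shown is hypothetical and should be replaced with the SEGMENT_NAME reported by USER_LOBS) reports the allocated and used space for a LOB segment:

SQL> set serveroutput on
SQL> declare
       l_seg_blocks   number; l_seg_bytes   number;
       l_used_blocks  number; l_used_bytes  number;
       l_exp_blocks   number; l_exp_bytes   number;
       l_unexp_blocks number; l_unexp_bytes number;
     begin
       dbms_space.space_usage(
         segment_owner       => user,
         segment_name        => 'SYS_LOB0000012345C00002$$', -- hypothetical; use your LOB segment name
         segment_type        => 'LOB',
         segment_size_blocks => l_seg_blocks,
         segment_size_bytes  => l_seg_bytes,
         used_blocks         => l_used_blocks,
         used_bytes          => l_used_bytes,
         expired_blocks      => l_exp_blocks,
         expired_bytes       => l_exp_bytes,
         unexpired_blocks    => l_unexp_blocks,
         unexpired_bytes     => l_unexp_bytes);
       dbms_output.put_line('Allocated bytes: ' || l_seg_bytes);
       dbms_output.put_line('Used bytes:      ' || l_used_bytes);
     end;
     /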

Maintaining LOB Columns- Large Objects

The following sections describe some common maintenance tasks that are performed on LOB columns or that otherwise involve LOB columns, including moving columns between tablespaces and adding new LOB columns to a table.

Moving a LOB Column

As mentioned previously, if you create a table with a LOB column and do not specify a tablespace, then, by default, the LOB is created in the same tablespace as its table. If the LOB column has started to consume large amounts of disk space and this was not planned for up front, you can use the ALTER TABLE … MOVE … STORE AS statement to move the LOB column to a different tablespace.

SQL> alter table patchmain move lob(patch_desc)

store as securefile (tablespace lob_data2);

You can verify that the LOB was moved by querying USER_LOBS:

SQL> select table_name, column_name, tablespace_name from user_lobs;

To summarize, if the LOB column is populated with large amounts of data, you almost always want to store the LOB in a tablespace separate from that of the rest of the table data. In these scenarios, the LOB data has different growth and storage requirements and is best maintained in its own tablespace.

Adding a LOB Column

If you have an existing table to which you want to add a LOB column, use the ALTER TABLE … ADD statement. The next statement adds the INV_IMAGE column to a table:

SQL> alter table patchmain add(inv_image blob);

This statement is fine for quickly adding a LOB column to a development environment. For anything else, you should specify the storage characteristics. For instance, this command specifies the LOB tablespace:

SQL> alter table patchmain add(inv_image blob) lob(inv_image) store as securefile(tablespace lob_data);

Removing a LOB Column

You may have a scenario in which your business requirements change and you no longer need a column. Before you remove a column, consider renaming it so that you can better identify whether any applications or users are still accessing it. LOBs tend to contain large amounts of data, and restoring them can take longer.

SQL> alter table patchmain rename column patch_desc to patch_desc_old;

After you determine that nobody is using the column, use the ALTER TABLE … DROP statement to just remove that column:

SQL> alter table patchmain drop (patch_desc_old);

You can also remove a LOB column by dropping and re-creating a table (without the LOB column). This, of course, permanently removes any data as well.

Also, keep in mind that if your recycle bin is enabled and you drop a table without the PURGE clause, the dropped table still consumes space. If you want to remove the space associated with the table, use the PURGE clause or purge the recycle bin after dropping the table.
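For example, either of the following removes the space (the first drops the table and bypasses the recycle bin; the second empties your recycle bin after a normal drop):

SQL> drop table patchmain purge;

SQL> purge recyclebin;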

Caching LOBs

By default, when reading and writing LOB columns, Oracle does not cache LOBs in memory. You can change the default behavior by setting the cache-related storage options. This example specifies that Oracle should cache a LOB column in memory:

SQL> create table patchmain ( patch_id number, patch_desc clob)

lob(patch_desc) store as (tablespace lob_data cache);

You can verify LOB caching with this query:

SQL> select table_name, column_name, cache from user_lobs;

Here is some sample output:

TABLE_NAME           COLUMN_NAME          CACHE
PATCHMAIN            PATCH_DESC           YES

Displaying Object Dependencies

Say you need to drop a table, but before you drop it, you want to display any objects that are dependent on it. For example, you may have a table that has synonyms, views, materialized views, functions, procedures, and triggers that rely on it. Before making the change, you want to review what other objects are dependent on the table. You can use the DBA_DEPENDENCIES data dictionary view to display object dependencies. The following query prompts you for a username and an object name:
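Here is a sketch of such a query (the formatting in the original script may differ; &&object_owner and &&object_name are SQL*Plus substitution variables for which you are prompted):

SQL> select '+' || lpad(' ',level+2) || type || ' ' || owner || '.' || name dep_tree
     from dba_dependencies
     connect by prior owner = referenced_owner
            and prior name  = referenced_name
            and prior type  = referenced_type
     start with referenced_owner = upper('&&object_owner')
            and referenced_name  = upper('&&object_name');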

In the output, each object listed has a dependency on the object you entered. Lines are indented to show the dependency of an object on the object in the preceding line:

DEP_TREE

+   TRIGGER STAR2.D_COMPANIES_BU_TR1

+ MATERIALIZED VIEW CIA.CB_RAD_COUNTS

+ SYNONYM STAR1.D_COMPANIES

+ SYNONYM CIA.D_COMPANIES

+ MATERIALIZED VIEW CIA.CB_RAD_COUNTS

In this example, the object being analyzed is a table named D_COMPANIES. Several synonyms, materialized views, and one trigger are dependent on this table. For instance, the materialized view CB_RAD_COUNTS, owned by CIA, is dependent on the synonym D_COMPANIES, owned by CIA, which in turn is dependent on the D_COMPANIES synonym, owned by STAR1.

The DBA_DEPENDENCIES view records a hierarchical relationship between the OWNER, NAME, and TYPE columns and their referenced counterparts: REFERENCED_OWNER, REFERENCED_NAME, and REFERENCED_TYPE.

Oracle provides a number of constructs to perform hierarchical queries. For instance, START WITH and CONNECT BY allow you to identify a starting point in a tree and walk either up or down the hierarchical relationship.

The previous SQL query in this section operates on only one object. If you want to inspect every object in a schema, you can use SQL to generate SQL to create scripts that display all dependencies for a schema’s objects.

The piece of code in the next example does that. For formatting and output, the code uses some constructs specific to SQL*Plus, such as setting the page sizes and line size and spooling the output:

UNDEFINE owner
SET LINESIZE 132 PAGESIZE 0 VERIFY OFF FEEDBACK OFF TIMING OFF
SPO dep_dyn_&&owner..sql
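The remainder of the script is sketched below (the object filtering and formatting in the original may differ; here it is restricted to tables for brevity). It writes a second script that, when run, spools its own output to dep_dyn_&&owner..txt:

SELECT 'SPO dep_dyn_&&owner..txt' FROM dual;
--
-- Generate one hierarchical dependency query per table owned by &&owner
SELECT
  'PROMPT TABLE: ' || table_name || CHR(10) ||
  'SELECT ''+'' || LPAD('' '',level+2) || type || '' '' || owner || ''.'' || name' || CHR(10) ||
  'FROM dba_dependencies' || CHR(10) ||
  'CONNECT BY PRIOR owner = referenced_owner AND PRIOR name = referenced_name' || CHR(10) ||
  'AND PRIOR type = referenced_type' || CHR(10) ||
  'START WITH referenced_owner = ''' || owner || '''' ||
  ' AND referenced_name = ''' || table_name || ''';'
FROM dba_tables
WHERE owner = UPPER('&&owner');
--
SELECT 'SPO OFF;' FROM dual;
SPO OFF
SET VERIFY ON FEEDBACK ON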

You should now have a script named dep_dyn_<owner>.sql, created in the same directory from which you ran the script. This script contains all the SQL required to display dependencies on objects in the owner you entered. Run the script to display object dependencies. In this example, the owner is CIA:

SQL> @dep_dyn_cia.sql

When the script runs, it spools a file with the format dep_dyn_<owner>.txt. You can open that text file with an OS editor to view its contents. Here is a sample of the output from this example:

TABLE: DOMAIN_NAMES

+ FUNCTION STAR2.GET_DERIVED_COMPANY

+ TRIGGER STAR2.DOMAIN_NAMES_BU_TR1

+ SYNONYM CIA_APP.DOMAIN_NAMES

This output shows that the table DOMAIN_NAMES has three objects that are dependent on it: a function, a trigger, and a synonym.

Derivable Documentation- Data Dictionary Fundamentals

Sometimes, if you are troubleshooting an issue and are under pressure, you need to quickly extract information from the data dictionary to help resolve the problem.

However, you may not know the exact name of a data dictionary view or its associated columns. If you are like me, it is impossible to keep all the data dictionary view names and column names in your head. Additionally, if you work with databases in different versions, it is sometimes difficult to keep track of which view may be available with a given release of Oracle.

I queried DBA_OBJECTS to get a count of the CDB-level views in 23c:

SQL> select count(1) from dba_objects where object_name like 'CDB%';

COUNT(1)
----------

Books, posters, and Google searches can provide this information, but if you cannot find exactly what you are looking for, you can use the documentation contained in the data dictionary itself. You can query from four views, in particular:

•     CDB_OBJECTS

•     DBA_OBJECTS

•     DICTIONARY

•     DICT_COLUMNS

If you know roughly the name of the view from which you want to select information, you can first query from DBA_OBJECTS. For instance, if you are troubleshooting an issue regarding materialized views and you cannot remember the exact names of the data dictionary views associated with materialized views, you can do this:

SQL> select object_name from dba_objects where object_name like 'DBA_MV%';

OBJECT_NAME
DBA_MVIEW_ANALYSIS
DBA_MVIEW_ANALYSIS
DBA_MVIEW_AGGREGATES
DBA_MVIEW_AGGREGATES
DBA_MVIEW_DETAIL_RELATIONS
DBA_MVIEW_DETAIL_RELATIONS
DBA_MVIEW_KEYS
DBA_MVIEW_KEYS
DBA_MVIEW_JOINS
DBA_MVREF_STATS
DBA_MVREF_STATS

38 rows selected.

That may be enough to get you in the ballpark or give you a short list to look through. But often you need more information about each view. This is when the DICTIONARY and DICT_COLUMNS views can be invaluable. The DICTIONARY view stores the names of the data dictionary views. It has two columns: TABLE_NAME (the view name) and COMMENTS (a description of the view).

Again, we can look at the dictionary for a description of the MV data dictionary views:
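One way to do so (a sketch) is:

SQL> select table_name, comments from dictionary where table_name like 'DBA_MV%';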

DBA_MVIEW_ANALYSIS    Description of the materialized views accessible to dba

In this manner, you can quickly determine which view you need to access. If you want further information about the view, you can describe it; for example,

SQL> desc dba_mviews

If that does not give you enough information regarding the column names, you can query the DICT_COLUMNS view. This view provides comments about the columns of a data dictionary view; for example,
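Here is a sketch of such a query (DBA_MVIEWS is used for illustration):

SQL> select column_name, comments from dict_columns where table_name = 'DBA_MVIEWS';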

COLUMN_NAME       COMMENTS
OWNER             Owner of the materialized view
MVIEW_NAME        Name of the materialized view
CONTAINER_NAME    Name of the materialized view container table
QUERY             The defining query that the materialized view instantiates

In this way, you can generate and view documentation regarding most data dictionary objects. The technique allows you to quickly identify appropriate views and the columns that may help you in a troubleshooting situation.

Data Dictionary Architecture- Data Dictionary Fundamentals

If you inherit a database and are asked to maintain and manage it, typically you will inspect the contents of the data dictionary to determine the physical structure of the database and see what events are currently transacting. Besides figuring out what you inherited, these views help to automate processes and troubleshoot problems. Toward this end, Oracle provides two general categories of read-only data dictionary views:

•     The contents of your database, such as users, tables, indexes, constraints, privileges, and other objects. These are sometimes referred to as the static CDB/DBA/ALL/USER data dictionary views, and they are based on internal tables stored in the SYSTEM tablespace. The term static, in this sense, means that the information within these views changes only as you make changes to your database, such as adding a user, creating a table, or modifying a column.

•     A real-time view of activity in the database, such as users connected to the database, SQL currently executing, memory usage, locks, and I/O statistics. These views are based on virtual memory tables and are referred to as the dynamic performance views. The information in these views is continuously updated by Oracle as events take place within the database. The views are also sometimes called the V$ or GV$ views. GV$ views are global views across all nodes in the database system and normally have an additional column to let you know which node they are referring to.

These types of data dictionary views are described in further detail in the next two sections.

Static Views

Oracle refers to a subset of the data dictionary views as static because they are based on physical tables maintained internally by Oracle. The term static can sometimes be a misnomer. For example, the DBA_SEGMENTS and DBA_EXTENTS views change dynamically as the amount of data in your database grows and shrinks. Regardless, Oracle has made the distinction between static and dynamic, and it is important to understand this architecture nuance when querying the data dictionary. There are four levels of static views:
• USER

• ALL

• DBA

• CDB
The USER views contain information available to the current user. For example, the USER_TABLES view contains information about tables owned by the current user. No special privileges are required to select from the USER-level views.


At the next level are the ALL static views. The ALL views show you all object information the current user has access to. For example, the ALL_TABLES view displays all database tables on which the current user can perform any type of DML operation. No special privileges are required to query from the ALL-level views.


Next are the DBA static views. The DBA views contain metadata describing all objects in the database (regardless of ownership or access privilege). To access the DBA views, a DBA role or SELECT_CATALOG_ROLE must be granted to the current user.


The CDB-level views provide information about all pluggable databases within a container database. The CDB-level views report across all containers (root, seed, and all pluggable databases) in a CDB. For instance, if you wanted to view all users within a CDB database, you would do so from the root container, by querying CDB_USERS.
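For instance, a query like the following (run as a common user connected to the root container) lists the users in each container:

SQL> select con_id, username from cdb_users order by con_id, username;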

You will notice that many of the static data dictionary and dynamic performance views have a new column, CON_ID. This column uniquely identifies each pluggable database within a container database. The root container has a CON_ID of 1. The seed has a CON_ID of 2. Each new pluggable database created within the CDB is assigned a unique sequential container ID.


The static views are based on internal Oracle tables, such as USER$, TAB$, and IND$. If you have access to the SYS schema, you can view underlying tables directly via SQL. For most situations, you need to access only the static views that are based on the underlying internal tables.


The data dictionary tables (such as USER$, TAB$, and IND$) are created during the execution of the CREATE DATABASE command. As part of creating a database, the sql.bsq file is executed, which builds these internal data dictionary tables.

The sql.bsq file is generally located in the ORACLE_HOME/rdbms/admin directory; you can view it via an OS editing utility (such as vi, in Linux/Unix, or Notepad in Windows).

The static views are created when you run the catalog.sql script (usually, you run this script once the CREATE DATABASE operation succeeds). The catalog.sql script is located in the ORACLE_HOME/rdbms/admin directory. Figure 10-1 shows the process of creating the static data dictionary views.

Figure 10-1. Creating the static data dictionary views

Real-Time Materialized Views- Views, Duality Views, and Materialized Views

As discussed previously, materialized views can be fast refreshed, but there can still be a lag. A real-time MV rolls the data forward based on the MV logs, which is like doing a fast refresh, except that it happens in real time when the MV is queried.

Real-time functionality is available if the following is true:

•     QUERY_REWRITE_INTEGRITY is set to ENFORCED or TRUSTED

•    MV is not set to REFRESH … ON COMMIT

•     MV must be able to do a fast refresh

•     ENABLE ON QUERY COMPUTATION is specified (see the example after this list)
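Here is a sketch of creating such an MV (the SALES table, its REGION and AMOUNT columns, and the MV log definition are assumptions for illustration; the MV log must support fast refresh):

SQL> create materialized view log on sales
     with rowid, sequence (region, amount) including new values;

SQL> create materialized view sales_mv
     refresh fast on demand
     enable on query computation
     enable query rewrite
     as
     select region, sum(amount) sum_amt, count(amount) cnt_amt, count(*) cnt
     from sales
     group by region;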

Now, when the MV is queried, Oracle rolls the stale data forward using the MV logs, making the data appear fresh to the statement. The changes are not persisted until the MV does a refresh.

Oracle Views

This chapter covered several database objects that are all called views, although they are quite different. A view gives you a way to logically present data to an application, reporting tool, or other database and data management tools. So, even though regular views, JSON duality views, and materialized views provide different functionality and have different purposes, they all expose data stored in relational tables. A JSON duality view gives you the data as JSON documents that are generated on demand and organized both relationally and hierarchically.

Materialized views store a snapshot of the data, provide better performance, and can be used to precompute and store the results of analytical queries.

The simplicity of using native database commands to create various views and set up access to these views through normal database security is a great tool to provide the needed data for applications and APIs.

Now, with Oracle 23c, JSON-relational duality views provide standardized, straightforward access to all sorts of data, including JSON, in support of state-of-the-art analytics, machine learning, and reporting.

Materialized Views- Views, Duality Views, and Materialized Views

This topic fits into our data management coverage and preparing the data for reporting, other applications, and data services. Materialized views are valuable tools to use in your database environment.

An MV allows you to execute a SQL query at a point in time and store the result set in a table (either locally or in a remote database). After the MV is initially populated, you can later rerun the MV query and store the fresh results in the underlying table. There are ways to automate refreshes as well as real-time MVs.
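As a simple sketch (assuming an INV table with INV_ID and INV_DESC columns), you can create an MV and later refresh it on demand with DBMS_MVIEW ('C' requests a complete refresh):

SQL> create materialized view inv_mv
     as select inv_id, inv_desc from inv;

SQL> exec dbms_mview.refresh('INV_MV','C');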

There are three main uses for MVs:

•     Replicating data to offload query workloads to separate reporting databases

•     Improving performance of queries by periodically computing and storing the results of complex aggregations of data, which lets users query point-in-time results

•     Stopping a query from executing if query rewrite does not happen

The MV can be based on a query against tables, views, and other MVs. The base tables are often referred to as master tables. When you create an MV, Oracle internally creates a table (with the same name as the MV) as well as an MV object (visible in DBA/ALL/USER_OBJECTS).

MV Terminology

There are many terms related to refreshing MVs. You should be familiar with these terms before implementing the features. Table 9-1 defines the various terms relevant to MVs.

Table 9-1. MV Terminology

This table will serve as a good reference as you read the rest of the chapter. The examples will further explain these terms and concepts.

Just like with other objects in the database and what we saw with JSON duality views, there are data dictionary views that are helpful when working with MVs. Table 9-2 describes the MV-related data dictionary views.

Table 9-2. MV Data Dictionary View Definitions (includes, among others, DBA/ALL/USER_REFRESH, DBA_RGROUP, and DBA_RCHILD)

Making Indexes Unusable- Tables and Constraints

If you have identified an index that is no longer being used, you can mark it UNUSABLE. From that point forward, Oracle will not maintain the index, nor will the optimizer consider the index for use in SELECT statements.

The advantage of marking the index UNUSABLE (rather than dropping it) is that if you later determine that the index is being used, you can alter it to a USABLE state and rebuild it without needing the DDL on hand to re-create it.

Here is an example of marking an index UNUSABLE:

SQL> alter index cust_idx1 unusable;

You can verify that it is unusable via this query:

SQL> select index_name, status from user_indexes;

The index has an UNUSABLE status:

INDEX_NAME           STATUS
-------------------- --------
CUST_IDX1            UNUSABLE

If you determine that the index is needed (before you drop it), then it must be rebuilt to become usable again:

SQL> alter index cust_idx1 rebuild;

Another common scenario for marking indexes UNUSABLE is that you are performing a large data load. When you want to maximize table-loading performance, you can mark the indexes UNUSABLE before performing the load. After you have loaded the table, you must rebuild the indexes to make them usable again.

Note: The alternative to setting an index to UNUSABLE is to drop and re-create it. This approach requires having the CREATE INDEX DDL on hand.

Dropping an Index

If you have determined that an index is not being used, then it is a good idea to drop it. Unused indexes take up space and can potentially slow down DML statements (because the index must be maintained as part of those DML operations). You can always test the impact by making the index invisible first, before dropping it. Remember that creating a large index on a heavily used table takes time and resources, so this validation is a good safeguard against hours of poor performance while you re-create an index you actually needed. Use the DROP INDEX statement to drop an index:

SQL> drop index cust_idx1;

Dropping an index is a permanent DDL operation; there is no way to undo an index drop other than to re-create the index. Before you drop an index, it does not hurt to quickly capture the DDL required to re-create the index. Doing so will allow you to re-create the index in the event you subsequently discover that you did need it after all.

Indexing Foreign Key Columns

Foreign key constraints ensure that when inserting into a child table, a corresponding parent table record exists. This is the mechanism for guaranteeing that data conforms to parent–child business relationship rules. Foreign keys are also known as referential integrity constraints.

Unlike primary key and unique key constraints, Oracle does not automatically create indexes on foreign key columns. Therefore, you must create a foreign key index manually, based on the columns defined as the foreign key constraint. In most scenarios, you should create indexes on columns associated with a foreign key. Here are two good reasons:

•     Oracle can often make use of an index on foreign key columns to improve the performance of queries that join a parent table and child table (using the foreign key columns).

•     If no B-tree index exists on the foreign key columns, then when you update or delete a parent table key, Oracle locks the entire child table for the duration of the statement. For applications that actively modify both the parent and child tables, this can cause locking and deadlock issues, even though the lock is held only for the duration of the statement.

One could argue that if you know your application well enough to predict that queries joining tables on the foreign key columns will never be issued and that the update/delete scenarios that result in entire tables being locked will never be encountered, then, by all means, do not place an index on the foreign key columns.

In my experience, however, this is seldom the case: developers rarely think about how the “black-box database” might lock tables; some DBAs are equally unaware of common causes of locking; teams experience high turnover rates, and the DBA du jour is left holding the bag for issues of poor database performance and hung sessions.

Considering the time and resources spent chasing down locking and performance issues, it does not cost that much to put an index on each foreign key column in your application.

I know some purists will argue against this, but I tend to avoid pain, and an unindexed foreign key column is a ticking bomb.

Having made my recommendation, I’ll first cover creating a B-tree index on a foreign key column. Then, I’ll show you some techniques for detecting unindexed foreign key columns.
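As a quick sketch (assuming a CUST parent table and an INV child table whose CUST_ID column is the foreign key referencing CUST), a foreign key index is just a regular B-tree index on the foreign key column:

SQL> create index inv_cust_id_idx on inv(cust_id);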

Maintaining Indexes- Tables and Constraints

As applications age, you invariably have to perform some maintenance activities on existing indexes. You may need to rename an index to conform to newly implemented standards, or you may need to rebuild a large index to move it to a different tablespace that better suits the index’s storage requirements. The following list shows common tasks associated with index maintenance:

•     Renaming an index

•     Displaying the DDL for an index

•     Rebuilding an index

•     Setting indexes to unusable

•     Monitoring an index

•     Dropping an index

Each of these items is discussed in the following sections.

Renaming an Index

Sometimes you need to rename an index. The index may have been erroneously named when it was created, or perhaps you want a name that better conforms to naming standards. Use the ALTER INDEX … RENAME TO statement to rename an index:

SQL> alter index cust_idx1 rename to cust_index1;

You can verify that the index was renamed by querying the data dictionary:

SQL> select table_name ,index_name ,index_type ,tablespace_name ,status from user_indexes order by table_name, index_name;

Displaying Code to Re-create an Index

You may be performing routine maintenance activities, such as moving an index to a different tablespace, and before you do so, you want to verify the current storage settings. You can use the DBMS_METADATA package to display the DDL required to re-create an index. Here is an example:

SQL> set long 10000

SQL> select dbms_metadata.get_ddl('INDEX','CUST_IDX1') from dual;

Here is a partial listing of the output:

CREATE INDEX "MV_MAINT"."CUST_IDX1" ON "MV_MAINT"."CUST" ("CUST_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 INVISIBLE COMPUTE STATISTICS

To show all index DDL for a user, run this query:

SQL> select dbms_metadata.get_ddl('INDEX',index_name) from user_indexes;

You can also display the DDL for an index owned by a particular user. You must provide as input to the GET_DDL function the object type, object name, and schema; for example,

SQL> select
     dbms_metadata.get_ddl(object_type=>'INDEX', name=>'CUST_IDX1', schema=>'INV')
     from dual;

Rebuilding an Index

There are a couple of good reasons to rebuild an index:

•     Modifying storage characteristics, such as changing the tablespace

•     Rebuilding an index that was previously marked unusable to make it usable again

Use the REBUILD clause to rebuild an index. This example rebuilds an index named CUST_IDX1:

SQL> alter index cust_idx1 rebuild;

Oracle attempts to acquire a lock on the table and rebuild the index. If there are any active transactions that haven’t committed, Oracle will not be able to obtain a lock, and the following error will be thrown:

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

In this scenario, you can either wait until there is little activity in the database or try setting the DDL_LOCK_TIMEOUT parameter:

SQL> alter session set ddl_lock_timeout=15;

The DDL_LOCK_TIMEOUT initialization parameter instructs Oracle to repeatedly attempt to obtain a lock (for 15 seconds, in this case).

Transactions can block an index rebuild, and the rebuild itself can block other transactions until it is complete. To prevent the rebuild from blocking transactions, use the ONLINE keyword:

SQL> alter index cust_idx1 rebuild online;

If no tablespace is specified, Oracle rebuilds the index in the tablespace in which the index currently exists. Specify a tablespace if you want the index rebuilt in a different tablespace:

SQL> alter index cust_idx1 rebuild tablespace reporting_index;

If you are working with a large index, you may want to consider using features such as NOLOGGING or PARALLEL, or both. This next example rebuilds an index in parallel, while generating a minimal amount of redo:

SQL> alter index cust_idx1 rebuild parallel nologging;

Making an Existing Index Invisible- Tables and Constraints

Suppose you have identified an index that is not being used and are considering dropping it. In earlier releases of Oracle, you could mark the index UNUSABLE and then later drop indexes that you were certain weren’t being used. If you later determined that you needed an unusable index, the only way to re-enable the index was to rebuild it. For large indexes, this could take a great amount of time and database resources.

Making an index invisible has the advantage of telling only the optimizer not to use the index. The invisible index is still maintained as the underlying table has records inserted, updated, and deleted. If you decide that you later need the index, there is no need to rebuild it; you simply make it visible again.

You can create an index as invisible or alter an existing index to be invisible; for example,

SQL> create index cust_idx2 on cust(first_name) invisible;
SQL> alter index cust_idx1 invisible;

You can verify the visibility of an index via this query:

SQL> select index_name, status, visibility from user_indexes;

Here is some sample output:

INDEX_NAME           STATUS   VISIBILITY
-------------------- -------- ----------
CUST_IDX1            VALID    INVISIBLE
CUST_IDX2            VALID    INVISIBLE
USERS_IDX1           VALID    VISIBLE

Use the VISIBLE clause to make an invisible index visible to the optimizer again:

SQL> alter index cust_idx1 visible;

Note If you have a B-tree index on a foreign key column and you decide to make it invisible, Oracle can still use the index to prevent certain locking issues. Before you drop an index on a column associated with a foreign key constraint, ensure that it is not used by Oracle to prevent locking issues. See the section “Indexing Foreign Key Columns,” later in this chapter, for details.

Guaranteeing Application Behavior Is Unchanged When You Add an Index

You can also use an invisible index when you are working with third-party applications. Often, third-party vendors do not support customers adding their own indexes to an application. However, there may be a scenario in which you are certain you can increase a query’s performance without affecting other queries in the application.

You can create the index as invisible and then use the OPTIMIZER_USE_INVISIBLE_INDEXES parameter to instruct the optimizer to consider invisible indexes. This parameter can be set at the system or session level. Here is an example:

SQL> create index cust_idx1 on cust(cust_id) invisible;

Now, set the OPTIMIZER_USE_INVISIBLE_INDEXES database parameter to TRUE. This instructs the optimizer to consider invisible indexes for the currently connected session:

SQL> alter session set optimizer_use_invisible_indexes=true;

You can verify that the index is being used by setting AUTOTRACE to on and running the SELECT statement:

SQL> set autotrace trace explain;

SQL> select cust_id from cust where cust_id = 3;

Here is some sample output, indicating that the optimizer chose to use the invisible index:

---------------------------------------------------------------------------
| Id | Operation        | Name      | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|  0 | SELECT STATEMENT |           |    1 |     5 |    1   (0)| 00:00:01 |
|* 1 |  INDEX RANGE SCAN| CUST_IDX1 |    1 |     5 |    1   (0)| 00:00:01 |
---------------------------------------------------------------------------

Keep in mind that an invisible index is simply an index the optimizer cannot see. Just like any other index, an invisible index consumes space and resources during DML statements.