Storing LOBs In and Out of a Row

By default, up to 8,000 bytes of a LOB column are stored inline with the table row (prior to Oracle Database 23c, the limit was 4,000 bytes). If the LOB is larger than 8,000 bytes, then Oracle automatically stores it outside the row data. The main advantage of storing a LOB in the row is that small LOBs require less I/O, because Oracle does not have to read a separate out-of-row segment to retrieve the LOB data.

However, storing LOB data in a row is not always desirable. The disadvantage of storing LOBs in a row is that the table rows become potentially much larger. This can affect the performance of full-table scans, range scans, and updates to columns other than the LOB column. In these situations, you may want to disable storage in the row.

For example, you can explicitly instruct Oracle to store the LOB outside the row with the DISABLE STORAGE IN ROW clause:

SQL> create table patchmain(
       patch_id   number,
       patch_desc clob,
       lob_file   blob)
     lob(patch_desc, lob_file)
     store as (tablespace lob_data disable storage in row);

ENABLE STORAGE IN ROW is the default and stores up to 8,000 bytes of a LOB in the table row. The LOB locator is always stored inline with the row, even if you specify DISABLE STORAGE IN ROW.

You cannot directly modify a LOB's in-row storage setting after the table has been created. The only ways to change it are to move the LOB column or to drop and re-create the table. This example changes the in-row storage by moving the LOB column:

SQL> alter table patchmain move lob(patch_desc)
     store as (enable storage in row);

You can verify the in-row storage via the IN_ROW column of USER_LOBS:

SQL> select table_name, column_name, tablespace_name, in_row from user_lobs;

TABLE_NAME   COLUMN_NAME   TABLESPACE_NAME   IN_ROW
----------   -----------   ---------------   ------
PATCHMAIN    PATCH_DESC    LOB_DATA          YES
PATCHMAIN    LOB_FILE      LOB_DATA          YES

Viewing LOB Metadata

You can use any of the DBA/ALL/USER_LOBS views to display information about LOBs in your database:

SQL> select table_name, column_name, index_name, tablespace_name from all_lobs order by table_name;

Also keep in mind that a LOB segment has a corresponding index segment:

SQL> select segment_name, segment_type, tablespace_name
     from user_segments
     where segment_name like 'SYS_LOB%'
     or segment_name like 'SYS_IL%';

In this way, you can query both the LOB segment and the LOB index in the DBA/ALL/USER_SEGMENTS views for LOB information.

You can use the DBA/ALL/USER_SEGMENTS views to get the space consumed by the LOB segment. Here is a sample query:

SQL> select segment_name, segment_type, segment_subtype, bytes/1024/1024 meg_bytes from user_segments;

You can modify the query to report on only LOBs by joining to the DBA/ALL/USER_LOBS view:

SQL> select a.table_name, a.column_name, a.segment_name, a.index_name,
     b.bytes/1024/1024 meg_bytes
     from user_lobs a, user_segments b
     where a.segment_name = b.segment_name;

You can also use the SPACE_USAGE procedure in the DBMS_SPACE package to report on the blocks being used by a LOB.
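Here is a minimal sketch of calling the SecureFiles overload of DBMS_SPACE.SPACE_USAGE for a LOB segment; the segment name below is an assumption and should be replaced with the SEGMENT_NAME reported by USER_LOBS:

SQL> set serveroutput on
SQL> declare
       l_seg_blocks   number;  l_seg_bytes   number;
       l_used_blocks  number;  l_used_bytes  number;
       l_exp_blocks   number;  l_exp_bytes   number;
       l_unexp_blocks number;  l_unexp_bytes number;
     begin
       -- segment_name is hypothetical; take it from USER_LOBS.SEGMENT_NAME
       dbms_space.space_usage(
         segment_owner       => user,
         segment_name        => 'SYS_LOB0000022340C00002$$',
         segment_type        => 'LOB',
         segment_size_blocks => l_seg_blocks,
         segment_size_bytes  => l_seg_bytes,
         used_blocks         => l_used_blocks,
         used_bytes          => l_used_bytes,
         expired_blocks      => l_exp_blocks,
         expired_bytes       => l_exp_bytes,
         unexpired_blocks    => l_unexp_blocks,
         unexpired_bytes     => l_unexp_bytes);
       dbms_output.put_line('Segment blocks: ' || l_seg_blocks ||
                            '  Used bytes: '   || l_used_bytes);
     end;
     /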

Maintaining LOB Columns

The following sections describe some common maintenance tasks that are performed on LOB columns or that otherwise involve LOB columns, including moving columns between tablespaces and adding new LOB columns to a table.

Moving a LOB Column

As mentioned previously, if you create a table with a LOB column and do not specify a tablespace, then, by default, the LOB is created in the same tablespace as its table. If the LOB column has started to consume large amounts of disk space and was not given its own tablespace up front, you can use the ALTER TABLE … MOVE … STORE AS statement to move the LOB column to a different tablespace.

SQL> alter table patchmain move lob(patch_desc)
     store as securefile (tablespace lob_data2);

You can verify that the LOB was moved by querying USER_LOBS:

SQL> select table_name, column_name, tablespace_name from user_lobs;

To summarize, if the LOB column is populated with large amounts of data, you almost always want to store the LOB in a tablespace separate from that of the rest of the table data. In these scenarios, the LOB data has different growth and storage requirements and is best maintained in its own tablespace.

Adding a LOB Column

If you have an existing table to which you want to add a LOB column, use the ALTER TABLE … ADD statement. The next statement adds the INV_IMAGE column to a table:

SQL> alter table patchmain add(inv_image blob);

This statement is fine for quickly adding a LOB column to a development environment. For anything else, you should specify the storage characteristics. For instance, this command specifies the LOB tablespace:

SQL> alter table patchmain add(inv_image blob)
     lob(inv_image) store as securefile(tablespace lob_data);

Removing a LOB Column

You may have a scenario in which your business requirements change and you no longer need a column. Before you remove a column, consider renaming it so that you can better identify whether any applications or users are still accessing it. LOB columns contain large amounts of data, so restoring a dropped column can take a long time.

SQL> alter table patchmain rename column patch_desc to patch_desc_old;

After you determine that nobody is using the column, use the ALTER TABLE … DROP statement to remove the column:

SQL> alter table patchmain drop (patch_desc_old);

You can also remove a LOB column by dropping and re-creating a table (without the LOB column). This, of course, permanently removes any data as well.

Also, keep in mind that if the recycle bin is enabled and you drop a table without the PURGE clause, the dropped table still consumes space. If you want to reclaim the space associated with the table, use the PURGE clause, or purge the recycle bin after dropping the table.
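For example, either approach reclaims the space (sticking with the PATCHMAIN table used in this chapter):

SQL> drop table patchmain purge;

Or, if the table has already been dropped without PURGE:

SQL> purge recyclebin;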

Caching LOBs

By default, when reading and writing LOB columns, Oracle does not cache LOBs in memory. You can change the default behavior by setting the cache-related storage options. This example specifies that Oracle should cache a LOB column in memory:

SQL> create table patchmain(
       patch_id   number,
       patch_desc clob)
     lob(patch_desc) store as (tablespace lob_data cache);

You can verify LOB caching with this query:

SQL> select table_name, column_name, cache from user_lobs;

Here is some sample output:

TABLE_NAME         COLUMN_NAME       CACHE

PATCHMAIN             PATCH_DESC            YES

Creating a Table with a LOB Column

The default underlying LOB architecture is SecureFiles, and it is recommended that you create LOBs as SecureFiles. As discussed previously, SecureFiles allows you to use features such as compression and encryption.

Setting the DB_SECUREFILE parameter is not required; you can explicitly state which LOB architecture to use in the CREATE TABLE statement:

SQL> create table patchmain(
       patch_id   number,
       patch_desc clob)
     lob(patch_desc) store as securefile (tablespace lob_data);

If the tablespace is not specified, the LOB segment is stored in the same tablespace as its table.

Before viewing the data dictionary details regarding the LOB columns, insert a record into the table to ensure that segment information is available; for example,

SQL> insert into patchmain values(1, 'clob text');

You can now verify a LOB’s architecture by querying the USER_SEGMENTS view:

SQL> select segment_name, segment_type, segment_subtype from user_segments;

Here is some sample output, indicating that a LOB segment is a SecureFiles type:

SEGMENT_NAME                   SEGMENT_TYPE    SEGMENT_SU
------------------------------ --------------- ----------
PATCHMAIN                      TABLE           ASSM
SYS_IL0000022340C00002$$       LOBINDEX        ASSM
SYS_LOB0000022340C00002$$      LOBSEGMENT      SECUREFILE

You can also query the USER_LOBS view to verify the SecureFiles LOB architecture:

SQL> select table_name, segment_name, index_name, securefile, in_row from user_lobs;

Here is the output:

TABLE_NAME   SEGMENT_NAME                INDEX_NAME                 SECUREFILE   IN_ROW
----------   -------------------------   ------------------------   ----------   ------
PATCHMAIN    SYS_LOB0000022340C00002$$   SYS_IL0000022340C00002$$   YES          YES

You can create a partitioned table that has a LOB column. Doing so lets you spread a LOB across multiple tablespaces. Such partitioning helps with balancing I/O, maintenance, and backup and recovery operations.

All partitioning schemes supported by Oracle are fully supported on LOBs. The next example creates a LIST-partitioned table in which LOB column data is stored in tablespaces separate from those of the table data:

SQL> CREATE TABLE patchmain(
       patch_id   NUMBER,
       region     VARCHAR2(16),
       patch_desc CLOB)
     LOB(patch_desc) STORE AS (TABLESPACE patch1)
     PARTITION BY LIST (region) (
       PARTITION p1 VALUES ('EAST')
         LOB(patch_desc) STORE AS SECUREFILE (TABLESPACE patch1 COMPRESS HIGH)
         TABLESPACE inv_data1,
       PARTITION p2 VALUES ('WEST')
         LOB(patch_desc) STORE AS SECUREFILE (TABLESPACE patch2 DEDUPLICATE NOCOMPRESS)
         TABLESPACE inv_data2,
       PARTITION p3 VALUES (DEFAULT)
         LOB(patch_desc) STORE AS SECUREFILE (TABLESPACE patch3 COMPRESS LOW)
         TABLESPACE inv_data3
     );

Note that each LOB partition is created with its own storage options. Specifying storage options per partition is optional; the default is to store the LOB in the same tablespace as the table. You can view the details about the LOB partitions as shown here:

SQL> select table_name, column_name, partition_name, tablespace_name,
     compression, deduplication
     from user_lob_partitions;

Here is some sample output:

TABLE_NAME   COLUMN_NAME   PARTITION_NAME   TABLESPACE_NAME   COMPRESSION   DEDUPLICATION
----------   -----------   --------------   ---------------   -----------   -------------
PATCHMAIN    PATCH_DESC    P1               PATCH1            HIGH          NO
PATCHMAIN    PATCH_DESC    P2               PATCH2            NO            LOB
PATCHMAIN    PATCH_DESC    P3               PATCH3            LOW           NO

You can also view DBA/ALL/USER_PART_LOBS for information about partitioned LOBs.

You can change the storage characteristics of a partitioned LOB column after it is created, including its compression and deduplication settings. To do so, use the ALTER TABLE … MODIFY PARTITION statement:

SQL> alter table patchmain modify partition p2 lob(patch_desc) (compress high);

SQL> alter table patchmain modify partition p3 lob(patch_desc) (deduplicate lob);

You can transparently encrypt a SecureFiles LOB column (just like any other column), or you can use tablespace encryption to encrypt the tablespace used for your table and LOBs. The ENCRYPT clause enables SecureFiles encryption, using Oracle Transparent Data Encryption (TDE). To do this at the column level or tablespace level, you need to enable a common keystore for the CDB and PDBs using united mode so you can manage the keys in the common keystore.

The Oracle Base Database service creates the keystore for TDE when it creates the database. The WALLET_ROOT and TDE_CONFIGURATION parameters hold the details of the configuration.

If the database was not created with a keystore, you can create one with the following command:

SQL> administer key management create keystore
     '/opt/oracle/dcs/commonstore/wallets/db23c_db23c/tde'
     identified by "Cr4zyPa$$word1";

Then the PDB will need to have the keystore enabled and open:

SQL> administer key management set keystore open identified by "Cr4zyPa$$word1";

After the TDE wallet configuration is all set up, a table can be created with an encrypted column:

SQL> create table patchmain(
       patch_id   number,
       patch_desc clob encrypt)
     lob(patch_desc) store as securefile;

You can also alter a column to enable encryption:

SQL> alter table patchmain modify (patch_desc clob encrypt);

The tablespace can be created with encryption:

SQL> create tablespace lob_data encryption using 'AES256' encrypt;

Note that partitioning, Advanced Compression, and Advanced Security, which have been discussed in this chapter, are extra-cost options that are available only with Oracle Enterprise Edition.

Illustrating LOBs, Locators, Indexes, and Chunks

Internal LOBs (CLOB, NCLOB, BLOB) store data in pieces called chunks. A chunk is the smallest unit of allocation for a LOB and is made up of one or more database blocks. LOB locators are stored in rows containing a LOB column. The LOB locator points to a LOB index. The LOB index stores information regarding the location of LOB chunks. When a table is queried, the database uses the LOB locator and associated LOB index to locate the appropriate LOB chunks. Figure 11-1 shows the relationship between a table, a row, a LOB locator, and a LOB locator’s associated index and chunks.

Figure 11-1. Relationship of table, row, LOB locator, LOB index, and LOB segment

The LOB locator for a BFILE stores the directory path and filename on the OS. Figure 11-2 shows a BFILE LOB locator that references a file on the OS.

Figure 11-2. The BFILE LOB locator contains information for locating a file on the OS

Note that the DBMS_LOB package performs operations on LOBs through the LOB locator.
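As a minimal sketch (assuming the PATCHMAIN table from this chapter contains a row with PATCH_ID = 1), you select the locator into a PL/SQL variable and pass it to DBMS_LOB functions:

SQL> set serveroutput on
SQL> declare
       l_loc clob;
     begin
       -- the locator, not the LOB data itself, is fetched into l_loc
       select patch_desc into l_loc from patchmain where patch_id = 1;
       dbms_output.put_line('Length:     ' || dbms_lob.getlength(l_loc));
       dbms_output.put_line('Chunk size: ' || dbms_lob.getchunksize(l_loc));
     end;
     /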

SecureFiles

LOBs have two different storage architectures: the original BasicFiles and the newer SecureFiles. With Oracle 23c, you should be using SecureFiles. Unless you are using a very old version of Oracle, always use the SecureFiles architecture, which has better performance and advanced features, including the following:

•     Encryption (requires Oracle Advanced Security Option)

•     Compression (requires Oracle Advanced Compression Option)

•     Deduplication (requires Oracle Advanced Compression Option)

SecureFiles encryption lets you transparently encrypt LOB data (just like other data types). The compression feature allows for significant space savings. The deduplication feature eliminates duplicate LOBs that otherwise would be stored multiple times.

You need to do a small amount of planning before using SecureFiles. Specifically, using SecureFiles requires the following:

•     A SecureFiles LOB must be stored in a tablespace using ASSM.

•     The DB_SECUREFILE initialization parameter controls whether SecureFiles LOBs can be used and also defines the default LOB architecture for your database.

To create an ASSM-enabled tablespace, specify the SEGMENT SPACE MANAGEMENT AUTO clause, which is the default; for example,

SQL> create tablespace lob_data

datafile '/u01/dbfile/db23c/lob_data01.dbf' size 1000m

extent management local uniform size 1m

segment space management auto;

If you have existing tablespaces, you can verify the use of ASSM by querying the DBA_TABLESPACES view. The SEGMENT_SPACE_MANAGEMENT column should have a value of AUTO for any tablespaces that you want to use with SecureFiles:

SQL> select tablespace_name, segment_space_management from dba_tablespaces;

Also, SecureFiles usage is governed by the DB_SECUREFILE database parameter. You can use either ALTER SYSTEM or ALTER SESSION to modify the value of DB_SECUREFILE. Table 11-2 describes the valid values for DB_SECUREFILE.

Table 11-2. Description of DB_SECUREFILE Settings
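Here is a quick sketch of viewing and changing the parameter (the values named in the comments are the documented DB_SECUREFILE settings; PREFERRED has been the default in recent releases):

SQL> show parameter db_securefile

SQL> -- valid settings include PREFERRED, ALWAYS, PERMITTED, NEVER, IGNORE, and FORCE
SQL> alter system set db_securefile = 'ALWAYS' scope=both;

SQL> -- or change it only for the current session
SQL> alter session set db_securefile = 'PREFERRED';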

Describing LOB Types

The ability to store large files in the database improved vastly in earlier versions of Oracle with the introduction of the CLOB, NCLOB, BLOB, and BFILE data types. These LOB data types let you store much more data, with greater functionality. Table 11-1 summarizes the types of Oracle LOBs available and their descriptions.

Table 11-1. Oracle Large Object Data Types

* NCLOB and the other national character set types are not required in order to store multibyte characters. I normally use CLOB and VARCHAR2, just making sure the sizing is large enough to handle multibyte characters.

As you can see from Table 11-1, the maximum size of a LOB can range from 8 TB to 128 TB depending on how the database is configured.

Character data such as text or XML is stored in a CLOB; JSON can be stored either in a CLOB or in the native JSON data type. Also, as we saw in a previous chapter, you can use relational data to create a JSON duality view.

BLOBs are not human readable. Typical uses for a BLOB are spreadsheets, documents, images, and audio and video data.

CLOBs, NCLOBs, and BLOBs are known as internal LOBs. This is because these data types are stored inside the Oracle database in data files. Internal LOBs participate in transactions and are covered by Oracle’s database security as well as its backup and recovery features.

BFILEs are known as external LOBs. BFILE columns store a pointer to a file on the OS that is outside the database.

You can think of a BFILE as a mechanism for providing read-only access to large binary files outside the database on the OS file system.

Sometimes, the question arises as to whether you should use a BLOB or a BFILE. BLOBs participate in database transactions and can be backed up, restored, and recovered by Oracle.

BFILEs do not participate in the database transactions, are read-only, and are not covered by backup and recovery of the database.

BFILEs are more appropriate for large binary files that are read-only and that do not change while an application is running.

For instance, you may have large binary video files that are referenced by a database application. In this scenario, the business determines that you do not need to create and maintain a 500TB database when all the application really needs is a pointer (stored in the database) to the locations of the large files on disk.
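As a brief sketch of how a BFILE pointer works (the directory name, OS path, table name, and file name here are illustrative assumptions), the column simply stores a reference built from a database directory object and a file name:

SQL> create directory video_dir as '/u01/app/videos';

SQL> create table video_catalog(
       video_id   number,
       video_file bfile);

SQL> insert into video_catalog
     values(1, bfilename('VIDEO_DIR','intro.mp4'));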

The Dual Table

The DUAL table is part of the data dictionary. This table contains one row and one column and is useful when you want to return exactly one row and do not need to retrieve data from a particular table. In other words, you just want to return a value.

For example, you can perform arithmetic operations, as follows:

SQL> select 34*.15 from dual;

    34*.15
----------
       5.1

SQL> select sysdate from dual;

SYSDATE
---------
25-JUN-23

With Oracle 23c the dual table is still available; however, it is no longer needed for such queries. You can now do a SELECT without a FROM clause:

SQL> select 34*.15;

    34*.15
----------
       5.1

SQL> select sysdate;

SYSDATE
---------
25-JUN-23

The data dictionary is very useful for viewing the objects and configuration of the database.

The static CDB/DBA/ALL/USER views provide a ton of detail about the objects in the database, information that can even be used by applications.

The dynamic performance views offer a real-time window into events currently transacting in the database.

These views provide information about currently connected users, SQL executing, where resources are being consumed, and so on.

DBAs use these views extensively to monitor and troubleshoot performance issues.

Displaying Object Dependencies

Say you need to drop a table, but before you drop it, you want to display any objects that are dependent on it. For example, you may have a table that has synonyms, views, materialized views, functions, procedures, and triggers that rely on it. Before making the change, you want to review what other objects are dependent on the table. You can use the DBA_DEPENDENCIES data dictionary view to display object dependencies. The following query prompts you for a username and an object name:
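A minimal sketch of such a query follows; it walks DBA_DEPENDENCIES with START WITH and CONNECT BY, and the DEP_TREE alias and indentation match the sample output shown next (a production version would also guard against cycles):

SQL> select lpad(' ', level) || '+ ' || type || ' ' || owner || '.' || name dep_tree
     from dba_dependencies
     start with referenced_owner = upper('&owner')
            and referenced_name  = upper('&object_name')
     connect by prior owner = referenced_owner
            and prior name  = referenced_name
            and prior type  = referenced_type;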

In the output, each object listed has a dependency on the object you entered. Lines are indented to show the dependency of an object on the object in the preceding line:

DEP_TREE
--------------------------------------------------
 + TRIGGER STAR2.D_COMPANIES_BU_TR1
 + MATERIALIZED VIEW CIA.CB_RAD_COUNTS
 + SYNONYM STAR1.D_COMPANIES
  + SYNONYM CIA.D_COMPANIES
   + MATERIALIZED VIEW CIA.CB_RAD_COUNTS

In this example, the object being analyzed is a table named D_COMPANIES. Several synonyms, materialized views, and one trigger are dependent on this table. For instance, the materialized view CB_RAD_COUNTS, owned by CIA, is dependent on the synonym D_COMPANIES, owned by CIA, which in turn is dependent on the D_COMPANIES synonym, owned by STAR1.

The DBA_DEPENDENCIES view contains a hierarchical relationship between the OWNER, NAME, and TYPE columns and their referenced column names of REFERENCED_OWNER, REFERENCED_NAME, and REFERENCED_TYPE.

Oracle provides a number of constructs to perform hierarchical queries. For instance, START WITH and CONNECT BY allow you to identify a starting point in a tree and walk either up or down the hierarchical relationship.

The previous SQL query in this section operates on only one object. If you want to inspect every object in a schema, you can use SQL to generate SQL to create scripts that display all dependencies for a schema’s objects.

The piece of code in the next example does that. For formatting and output, the code uses some constructs specific to SQL*Plus, such as setting the page sizes and line size and spooling the output:

UNDEFINE owner
SET LINESIZE 132 PAGESIZE 0 VERIFY OFF FEEDBACK OFF TIMING OFF
SPO dep_dyn_&&owner..sql

You should now have a script named dep_dyn_<owner>.sql, created in the same directory from which you ran the script. This script contains all the SQL required to display dependencies on objects in the owner you entered. Run the script to display object dependencies. In this example, the owner is CIA:

SQL> @dep_dyn_cia.sql

When the script runs, it spools a file with the format dep_dyn_<owner>.txt. You can open that text file with an OS editor to view its contents. Here is a sample of the output from this example:

TABLE: DOMAIN_NAMES

+ FUNCTION STAR2.GET_DERIVED_COMPANY

+ TRIGGER STAR2.DOMAIN_NAMES_BU_TR1

+ SYNONYM CIA_APP.DOMAIN_NAMES

This output shows that the table DOMAIN_NAMES has three objects that are dependent on it: a function, a trigger, and a synonym.

Showing Primary Key and Foreign Key Relationships

Sometimes when you are diagnosing constraint issues, it is useful to display data dictionary information regarding what primary key constraint is associated with a foreign key constraint.

For example, perhaps you are attempting to insert into a child table, an error is thrown indicating that the parent key does not exist, and you want to display more information about the parent key constraint.

The following script queries the DBA_CONSTRAINTS data dictionary view to determine the parent primary key constraints that are related to child foreign key constraints. You need to provide as input to the script the owner of the table and the child table for which you want to display primary key constraints:

SQL> select a.constraint_type cons_type
     ,a.table_name      child_table
     ,a.constraint_name child_cons
     ,b.table_name      parent_table
     ,b.constraint_name parent_cons
     ,b.constraint_type cons_type
     from dba_constraints a
     ,dba_constraints b
     where a.owner = upper('&owner')
     and a.table_name = upper('&table_name')
     and a.constraint_type = 'R'
     and a.r_owner = b.owner
     and a.r_constraint_name = b.constraint_name;

The preceding script prompts you for two SQL*Plus ampersand variables (OWNER, TABLE_NAME); if you are not using SQL*Plus, then you may need to modify the script with the appropriate values before you run it.

The following output shows that there are two foreign key constraints. It also shows the parent table primary key constraints:

C CHILD_TABLE    CHILD_CONS         PARENT_TABLE     PARENT_CONS        C
- -------------- ------------------ ---------------- ------------------ -
R REG_COMPANIES  REG_COMPANIES_FK2  D_COMPANIES      D_COMPANIES_PK     P
R REG_COMPANIES  REG_COMPANIES_FK1  CLUSTER_BUCKETS  CLUSTER_BUCKETS_PK P

When the CONSTRAINT_TYPE column (of DBA/ALL/USER_CONSTRAINTS) contains an R value, this indicates that the row describes a referential integrity constraint, which means that the child table constraint references a primary key constraint.

You use the technique of joining to the same table twice to retrieve the primary key constraint information. The child constraint columns (R_OWNER, R_CONSTRAINT_NAME) match with another row in the DBA_CONSTRAINTS view that contains the primary key information.

You can also do the reverse of the prior query in this section; for a primary key constraint, you want to find the foreign key columns (if any) that correlate to it.

The next script takes the primary key record and looks to see if it has any child records with a constraint type of R. When you run this script, you are prompted for the primary key table owner and name:

SQL> select b.table_name   primary_key_table
     ,a.table_name         fk_child_table
     ,a.constraint_name    fk_child_table_constraint
     from dba_constraints a
     ,dba_constraints b
     where a.r_constraint_name = b.constraint_name
     and a.r_owner = b.owner
     and a.constraint_type = 'R'
     and b.owner = upper('&table_owner')
     and b.table_name = upper('&table_name');

Here is some sample output:

PRIMARY_KEY_TABLE   FK_CHILD_TABLE   FK_CHILD_TABLE_CONSTRAINT
-----------------   --------------   -------------------------
CLUSTER_BUCKETS     REG_COMPANIES    REG_COMPANIES_FK1

Displaying Table Row Counts

When you are investigating performance or space issues, it is useful to display each table’s row count. To calculate row counts manually, you would write a query such as this for each table that you own:

SQL> select count(*) from <table_name>;

Manually crafting the SQL is time-consuming and error prone. In this situation, it is more efficient to use SQL to generate the SQL required to solve the problem.

To that end, this next example dynamically selects the required text, based on information in the DBA_TABLES view. Create a SQL script file named tabcount_generator.sql with the following code. In SQL*Plus, run that script as a DBA-privileged user with a command like @tabcount_generator.sql.

The script prompts you for a username each time, generates a new filename named tabcount_<user>.sql that contains all the necessary SELECT COUNT(*) commands for that schema, and then runs that new script and displays the table names and row counts.

UNDEFINE user
SPOOL tabcount_&&user..sql
SET LINESIZE 132 PAGESIZE 0 TRIMSPOOL OFF VERIFY OFF FEED OFF TERM OFF
SELECT
  'SELECT RPAD(' || '''' || table_name || '''' || ',30)'
  || ',' || ' COUNT(1) FROM &&user..' || table_name || ';'
FROM dba_tables
WHERE owner = UPPER('&&user')
ORDER BY 1;
SPOOL OFF
SET TERM ON
@@tabcount_&&user..sql
SET VERIFY ON FEED ON

If the username you provide to the script is INVUSER, then you can manually run the generated script as follows:

SQL> @tabcount_invuser.sql

Keep in mind that if the table row counts are high, this script can take a long time to run (several minutes).

Developers and DBAs often use SQL to generate SQL statements. This is a useful technique when you need to apply the same SQL process (repetitively) to many different objects, such as all tables in a schema. If you do not have access to DBA-level views, you can query the USER_TABLES view; for example,

SPOOL tabcount.sql
SET LINESIZE 132 PAGESIZE 0 TRIMSPOOL OFF VERIFY OFF FEED OFF TERM OFF
SELECT
  'SELECT RPAD(' || '''' || table_name || '''' || ',30)'
  || ',' || ' COUNT(*) FROM ' || table_name || ';'
FROM user_tables
ORDER BY 1;
SPOOL OFF
SET TERM ON
@@tabcount.sql
SET VERIFY ON FEED ON

If you have accurate statistics, you can query the NUM_ROWS column of the CDB/DBA/ALL/USER_TABLES views. This column normally has a row count close to the actual value if statistics are gathered on a regular basis. The following query selects NUM_ROWS from the USER_TABLES view:

SQL> select table_name, num_rows from user_tables;

One final note: if you have partitioned tables and want to show row counts by partition, use the next few lines of SQL and PL/SQL code to generate the SQL required:

UNDEFINE user
SET SERVEROUT ON SIZE 1000000 VERIFY OFF
SPOOL part_count_&&user..txt
DECLARE
  counter  NUMBER;
  sql_stmt VARCHAR2(1000);
  CURSOR c1 IS
    SELECT table_name, partition_name
    FROM dba_tab_partitions
    WHERE table_owner = UPPER('&&user');
BEGIN
  FOR r1 IN c1 LOOP
    sql_stmt := 'SELECT COUNT(1) FROM &&user..' || r1.table_name
      || ' PARTITION ( ' || r1.partition_name || ' )';
    EXECUTE IMMEDIATE sql_stmt INTO counter;
    DBMS_OUTPUT.PUT_LINE(RPAD(r1.table_name
      || '(' || r1.partition_name || ')', 30) || ' ' || TO_CHAR(counter));
  END LOOP;
END;
/
SPOOL OFF

Displaying User Information

You may find yourself in an environment that contains hundreds of databases located on dozens of different servers. In such a scenario, you want to ensure that you do not run the wrong commands or connect to the wrong database, or both. When performing DBA tasks, it is prudent to verify that you are connected as the appropriate account and to the correct database. You can run the following types of SQL commands to verify the currently connected user and database:

SQL> show user;
SQL> select * from user_users;
SQL> select name from v$database;
SQL> select instance_name, host_name from v$instance;
SQL> show pdbs;

An efficient way of staying aware of your environment is to set your prompt automatically, via the login.sql script, to display user and instance information. This example manually sets the SQL prompt:

SQL> set sqlprompt '&_USER.@&_CONNECT_IDENTIFIER.> '

Here is what the SQL prompt now looks like:

SYS@mmdb23>

You can also use the SYS_CONTEXT built-in SQL function to extract information from the data dictionary regarding details about your currently connected session. The general syntax for this function is as follows:

SYS_CONTEXT('<namespace>','<parameter>',[length])

This example displays the user, authentication method, host, and instance:

SYS@mmdb23> select sys_context('USERENV','CURRENT_USER') usr
  ,sys_context('USERENV','AUTHENTICATION_METHOD') auth_mth
  ,sys_context('USERENV','HOST') host
  ,sys_context('USERENV','INSTANCE_NAME') inst
  from dual;

USERENV is a built-in Oracle namespace. More than 50 parameters are available when you use the USERENV namespace with the SYS_CONTEXT function. See the Oracle SQL Language Reference, freely available on the Oracle documentation site (https://docs.oracle.com/database), for a complete list of parameters.

Determining Your Environment’s Details

Sometimes, when deploying code through various development, test, beta, and production environments, it is handy to be prompted as to whether you are in the correct environment.

The technique for accomplishing this requires two files: answer_yes.sql and answer_no.sql. Here are the contents of answer_yes.sql:

-- answer_yes.sql
PROMPT
PROMPT Continuing...

And here is answer_no.sql:

-- answer_no.sql
PROMPT
PROMPT Quitting and discarding changes...
ROLLBACK;
EXIT;

Now, you can insert the following code into the first part of your deployment script; the code will prompt you as to whether you are in the right environment and if you want to continue:

WHENEVER SQLERROR EXIT FAILURE ROLLBACK;
WHENEVER OSERROR EXIT FAILURE ROLLBACK;
select host_name from v$instance;
select name as db_name from v$database;
SHOW user;
SET ECHO OFF;
PROMPT
ACCEPT answer PROMPT 'Correct environment? Enter yes to continue: '
@@answer_&answer..sql

If you type in yes, then the answer_yes.sql script will execute, and you will continue to run any other scripts you call. If you type in no, then the answer_no.sql script will run, and you will exit from SQL*Plus and end up at the OS prompt. If you press the Enter key without typing either, you will also exit and return to the OS prompt.