Oracle8 Parallel Server Concepts & Administration Release 8.0 A58238-01
This chapter provides an overview of the locking mechanisms that are internal to the parallel server.
You must understand locking mechanisms if you are to effectively harness parallel processing and parallel database capabilities. You can influence each kind of locking through the way you set initialization parameters, administer the system, and design applications. If you do not use locks effectively, your system may spend so much time synchronizing shared resources that it achieves neither speedup nor scaleup; your parallel system could even suffer performance degradation compared to a single instance system.
Locks are used for two main purposes in Oracle Parallel Server:
Transaction locks are used to implement row level locking for transaction consistency. Row level locking is supported in both single instance Oracle and Oracle Parallel Server.
Instance locks (also commonly known as distributed locks) guarantee cache coherency. They ensure that data and other resources distributed among multiple instances belonging to the same database remain consistent. Instance locks include PCM and non-PCM locks.
Figure 7-1 shows latches and enqueues: locking mechanisms which are synchronized within a single instance. These are used in Oracle with or without the Parallel Server Option, and whether parallel server is enabled or disabled.
* The mount lock is obtained if the Parallel Server Option has been linked in to your Oracle executable.
Latches are simple, low level serialization mechanisms to protect in-memory data structures in the SGA. Latches do not protect datafiles. They are entirely automatic, are held for a very short time, and can only be held in exclusive mode. Being local to the node, internal locks and latches do not provide internode synchronization.
Enqueues are shared memory structures which serialize access to resources in the database. These locks can be local to one instance or global to a database. They are associated with a session or transaction, and can be in any mode: shared, exclusive, protected read, protected write, concurrent read, concurrent write, or null.
Enqueues are held longer than latches, have more granularity and more modes, and protect more resources in the database. For example, if you request a table lock (a DML lock) you will receive an enqueue.
Certain enqueues are local to a single instance when parallel server is disabled. When parallel server is enabled, however, those enqueues can no longer be managed at the instance level: they must be maintained system-wide by the Integrated Distributed Lock Manager (IDLM).
When parallel server is enabled, most of the local enqueues become global enqueues. This is reflected in Figure 7-1 and Figure 7-2. They all appear as enqueues in the fixed tables; no distinction is made there between local and global enqueues. Global enqueues are handled in a distributed fashion.
Note: Transaction locks are simply a subset of enqueues.
Figure 7-2 illustrates the instance locks that come into play when Oracle Parallel Server is enabled. In OPS implementations, the status of all Oracle locking mechanisms is tracked and coordinated by the Integrated DLM component.
Instance locks (other than the mount lock) only come into existence if you start an Oracle instance with parallel server enabled. They synchronize between instances, communicating the current status of a resource among the instances of an Oracle Parallel Server.
Instance locks are held by background processes of instances, rather than by transactions. An instance owns an instance lock that protects a resource (such as a data block or data dictionary entry) when the resource enters its SGA.
The Integrated DLM component of Oracle handles locking only for resources accessed by more than one instance of a Parallel Server, to ensure cache coherency. The IDLM communicates requests for instance locks and the status of the locks between the lock processes of each instance. The V$DLM_LOCKS view lists information on all locks currently known to the IDLM.
Instance locks are of two types: parallel cache management (PCM) locks and non-PCM locks.
Parallel cache management (PCM) locks are instance locks that cover one or more data blocks (table or index blocks) in the buffer cache. PCM locks do not lock any rows on behalf of transactions. PCM locks are implemented in two ways: hashed locking and fine grain locking.
With hashed locking, an instance never disowns a PCM lock unless another instance asks for it. This minimizes the overhead of instance lock operations in systems that have relatively low contention for resources. With fine grain locking, once the block is released, the lock is released. (Note that non-PCM locks are disowned.)
Non-PCM locks of many different kinds control access to datafiles and control files, control the library and dictionary caches, and perform various types of communication between instances. These locks do not protect datafile blocks. Examples are DML enqueues (table locks), transaction enqueues, and DDL or dictionary locks. The System Change Number (SCN) lock and the mount lock are global locks, not enqueues.
Note: In an Oracle Parallel Server environment, most local enqueues become global; they can still be seen in the fixed tables and views that show enqueues (such as V$LOCK). The V$LOCK table does not, however, show instance locks such as SCN locks, mount locks, and PCM locks.
Although PCM locks are typically far more numerous than non-PCM locks, non-PCM locks are still numerous enough that you must carefully plan adequate IDLM capacity for them. Typically 5% to 10% of locks are non-PCM. Non-PCM locks do not grow in volume the way that PCM locks do.
You control PCM locks in detail by setting initialization parameters to allocate the desired number of locks. You have almost no control over non-PCM locks, however. You can try to eliminate the need for table locks by setting DML_LOCKS = 0 or by using the ALTER TABLE ENABLE/DISABLE TABLE LOCK command, but other non-PCM locks still persist.
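For example, table locks can be disabled for an individual table with the command named above. This is only an illustrative fragment; the table name EMP is a placeholder, and whether disabling table locks is appropriate depends on your application's DDL requirements:

```sql
-- Stop acquiring table (DML) locks for one table. While table locks
-- are disabled, DDL against the table is blocked.
ALTER TABLE emp DISABLE TABLE LOCK;

-- Restore the default behavior later.
ALTER TABLE emp ENABLE TABLE LOCK;
```

Alternatively, setting DML_LOCKS = 0 in the initialization parameter file disables DML locks for the whole instance.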
See Also: Chapter 16, "Ensuring IDLM Capacity for All Resources & Locks"
With the Oracle Parallel Server, up to ten Lock processes (LCK0 through LCK9) provide inter-instance locking.
LCK processes manage most of the locks used by an instance and coordinate requests for those locks by other instances. LCK processes maintain all of the PCM locks (hashed or fine grain) and some of the non-PCM locks (such as row cache or library cache locks). LCK0 will handle PCM as well as non-PCM locks. Additional lock processes, LCK1 through LCK9, are available for systems that require exceptionally high throughput of instance lock requests; they will only handle PCM locks. Multiple LCK processes can improve recovery time and startup time.
Although instance locks are mainly handled by the LCK processes, some instance locks are directly acquired by other background or foreground (shadow) processes. In general, if a background process such as LCK owns an instance lock, it is for the whole instance. If a foreground process owns an instance lock, it is just for that particular process. For example, the log writer (LGWR) gets the SCN instance lock, and the database writer (DBWR) gets the media recovery lock. The bulk of these locks, however, are handled by the LCK processes.
Attention: Foreground processes obtain transaction locks; LCK processes do not. Transaction locks are associated with a session or transaction, not with a process.
See Also: Oracle8 Concepts for more information about the LCKn processes.
The LMON and LMD0 processes implement the global lock management subsystem of Oracle Parallel Server. LMON performs lock cleanup and lock invalidation after the death of an Oracle shadow process or another Oracle instance. It also reconfigures and redistributes the global locks as Oracle Parallel Server instances are started and stopped.
The LMD0 process handles remote lock requests for global locks (that is, lock requests originating from another instance for a lock owned by the current instance). All messages pertaining to global locks that are directed to an Oracle Parallel Server instance are handled by the LMD0 process of that instance.
To use locks effectively, you must carefully evaluate their relative expense. As a rule of thumb, latches are cheap; local enqueues are more expensive; instance locks and global enqueues are quite expensive. In general, instance locks and global enqueues have equivalent performance impact. (When parallel server is disabled, all enqueues are local; when parallel server is enabled, most are global.)
Table 7-1 dramatizes the relative expense of latches, enqueues, and instance locks. The elapsed time required per lock varies by system; the values used in the "Actual Time Required" column are only examples.
Microseconds, milliseconds, and tenths of a second all sound like negligible units of time. However, if you imagine the cost of locks using grossly exaggerated values such as those listed in the "Relative Time Required" column, you can grasp the need to carefully calibrate the use of locks in your system and applications. In a big OLTP situation, for example, unregulated use of instance locks would be impermissible. Imagine waiting hours or days to complete a transaction in real life!
Stored procedures are available for analyzing the number of PCM locks an application will use if it performs particular functions. You can set values for your initialization parameters and then call the stored procedure to see the projected expenditure in terms of locks.
All Oracle enqueues and instance locks are named using one of the following formats:
type ID1 ID2
or type, ID1, ID2
or type (ID1, ID2)
where:

type - A two-character type name for the lock type, as described in the V$LOCK table, and listed in Table 7-2 and Table 7-3.

ID1 - The first lock identifier, used by the IDLM. The convention for this identifier differs from one lock type to another.

ID2 - The second lock identifier, used by the IDLM. The convention for this identifier differs from one lock type to another.
For example, a space management lock might be named ST 1 0. A PCM lock might be named BL 1 900.
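Because every name follows the type ID1 ID2 convention, the parts can be split mechanically. The following sketch is plain string handling, not an Oracle API:

```python
def parse_lock_name(name):
    """Split a lock name such as 'ST 1 0' or 'BL 1 900' into its
    two-character type and its two numeric identifiers."""
    lock_type, id1, id2 = name.split()
    return lock_type, int(id1), int(id2)

# The space management and PCM examples from the text:
print(parse_lock_name("ST 1 0"))    # -> ('ST', 1, 0)
print(parse_lock_name("BL 1 900"))  # -> ('BL', 1, 900)
```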
The V$LOCK table contains a list of local and global Oracle enqueues currently held or requested by the local instance. The "lock name" is actually the name of the resource; locks are taken out against the resource.
All PCM locks are Buffer Cache Management locks.
Type | Lock Name
---- | ------------------------
BL   | Buffer Cache Management
The syntax of PCM lock names is type ID1 ID2, where type is BL, ID1 is the class of the block, and ID2 is the lock element number (for hashed locks) or the database address (for fine grain locks).
For example, BL 1 900 is a valid PCM lock name.
Non-PCM locks have many different names.
See Also: Oracle8 Reference for descriptions of all these non-PCM locks.
The Integrated DLM component is a distributed resource manager that is internal to the Oracle Parallel Server. This section explains how the IDLM coordinates locking mechanisms that are internal to Oracle. Chapter 8, "Integrated Distributed Lock Manager: Access to Resources" presents a detailed description of IDLM features and functions.
In Oracle Parallel Server implementations, the Integrated DLM facility keeps an inventory of all the Oracle instance locks and global enqueues held against the resources of your system. It acts as a referee when conflicting lock requests arise.
In Figure 7-3 the IDLM is represented as an inventory sheet listing resources and the current status of locks on each resource across the parallel server. Locks are represented as follows: S for shared mode, N for null mode, X for exclusive mode.
This inventory includes all instances. For example, resource BL 1, 101 is held by three instances with shared locks and three instances with null locks. Since the table reflects up to 6 locks on one resource, at least 6 instances are evidently running on this system.
Oracle database resources are mapped to IDLM resources, with the necessary mapping performed by the instance. For example, a hashed lock on an Oracle database block with a given data block address (such as file 2 block 10) is translated into a BL resource identified by the class of the block and the lock element number (such as BL 9 1). The data block address (DBA) is translated from the Oracle resource level to the IDLM resource level; the hashing function used depends on the GC_* parameter settings. The IDLM resource name identifies the physical resource in views such as V$LOCK.
Note: For DBA fine grain locking, the database address is used as the second identifier, rather than the lock element number.
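The hashed mapping described above can be sketched as follows. The modulo hash, the function name, and the locks_per_file parameter are illustrative stand-ins, since the real hashing function depends on the GC_* parameter settings:

```python
def dba_to_bl_resource(file_id, block_id, block_class, locks_per_file):
    """Illustrative mapping of an Oracle data block address (DBA) to an
    IDLM 'BL' resource name. Many blocks hash to the same lock element,
    so a single hashed PCM lock covers multiple blocks."""
    # Hypothetical hash: Oracle actually derives the lock element from
    # the GC_* initialization parameter settings, not a bare modulo.
    lock_element = block_id % locks_per_file
    return f"BL {block_class} {lock_element}"

# File 2, block 10, block class 9, with (say) 100 hashed locks per file:
print(dba_to_bl_resource(2, 10, 9, 100))   # -> BL 9 10

# Block 110 of the same file hashes to the same lock element,
# illustrating that one hashed lock covers many blocks:
print(dba_to_bl_resource(2, 110, 9, 100))  # -> BL 9 10
```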
Figure 7-5 illustrates the way in which IDLM locks and PCM locks relate. For Instance B to read the value of data at data block address x, it must first check for locks on that data. The instance translates the block's database resource name to the IDLM resource name, and asks the IDLM for a shared lock in order to read the data.
As illustrated in the following conceptual diagram, the IDLM checks all the outstanding locks on the granted queue and determines that there are already two shared locks on the resource BL 1,441. Since shared locks are compatible with read-only requests, the IDLM grants a shared lock to Instance B. The instance then proceeds to query the database to read the data at data block address x. The database returns the data.
Note: The global lock space is managed in distributed fashion by the LMDs of all the instances cooperatively.
If another instance already held an exclusive lock on the required block, Instance B would have to wait for that lock to be released. The IDLM would place Instance B's shared lock request on the convert queue, notify the instance when the exclusive lock was removed, and then grant the request for a shared lock.
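The grant-or-queue decision just described can be sketched with a toy model. This is not the real IDLM, only the compatibility rule that shared locks coexist while an exclusive lock excludes everything else; the class and method names are illustrative:

```python
class IdlmResource:
    """Toy model of one IDLM resource with a granted queue and a
    convert (waiting) queue."""

    COMPATIBLE = {("S", "S")}  # every other (held, requested) pairing conflicts

    def __init__(self):
        self.granted = []  # list of (instance, mode) currently held
        self.convert = []  # waiting requests, in arrival order

    def request(self, instance, mode):
        # Grant only if the requested mode is compatible with every held lock.
        if all((held, mode) in self.COMPATIBLE for _, held in self.granted):
            self.granted.append((instance, mode))
            return "granted"
        self.convert.append((instance, mode))
        return "queued"

    def release(self, instance):
        self.granted = [(i, m) for i, m in self.granted if i != instance]
        # Retry waiters in order now that locks may have been freed.
        waiting, self.convert = self.convert, []
        for i, m in waiting:
            self.request(i, m)

res = IdlmResource()
res.request("A", "S")          # shared locks coexist
res.request("C", "S")
print(res.request("B", "S"))   # -> granted

res2 = IdlmResource()
res2.request("A", "X")         # exclusive lock held by Instance A
print(res2.request("B", "S"))  # -> queued (waits on the convert queue)
res2.release("A")
print(res2.granted)            # -> [('B', 'S')]
```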
The term IDLM lock refers simply to the IDLM's notations for tracking and coordinating the outstanding locks on a resource.
The IDLM provides one lock per instance on a PCM resource. As illustrated in Figure 7-6, if you have a four-instance system and require a buffer lock on a single resource, you actually end up with four locks, one per instance.
The number of non-PCM locks may depend on the type of lock.
See Also: Chapter 10, "Non-PCM Instance Locks"