7.23- Lock Based Protocols- Granularity of Locks | Concurrency Control Techniques | DBMS Free Course
Notes Link:
http://www.tutorialsspace.com/Downloa...
Complete Playlist:
(Eng) DBMS Tutorials | Sql Tutorials | RDBMS Lectures
DBMS - Data Base Management System Tutorials
[With Notes & PDF File] | Database Management System In HINDI
Transactions & Concurrency Control In DBMS | Serializability | Recoverability | recovery System | Time stamp
Lock-Based Protocols
To maintain consistency, a data item currently being accessed by one transaction must not be modified by any other transaction. These restrictions are enforced by placing locks on data items: before accessing a data item, a transaction must obtain a lock on it.
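To make this concrete, here is a minimal sketch (not from the notes; all names are illustrative) of a lock manager that grants shared (S) locks for reading and exclusive (X) locks for writing, following the usual compatibility rule that only S locks may coexist:

```python
# Illustrative lock manager sketch: shared (S) and exclusive (X) locks.
# Two S locks on the same item are compatible; X conflicts with everything
# held by another transaction.

class LockManager:
    def __init__(self):
        # item -> (mode, set of transaction ids holding the lock)
        self.table = {}

    def acquire(self, txn, item, mode):
        """Try to lock `item` for `txn` in mode 'S' or 'X'.
        Returns True if granted, False if the request must wait."""
        held = self.table.get(item)
        if held is None:
            self.table[item] = (mode, {txn})
            return True
        held_mode, holders = held
        # S is compatible only with S.
        if mode == "S" and held_mode == "S":
            holders.add(txn)
            return True
        # If the requester is the sole holder, it may keep or upgrade its lock.
        if holders == {txn}:
            self.table[item] = ("X" if mode == "X" else held_mode, {txn})
            return True
        return False

    def release(self, txn, item):
        held = self.table.get(item)
        if held and txn in held[1]:
            held[1].discard(txn)
            if not held[1]:
                del self.table[item]
```

For example, two transactions can both S-lock an item and read it concurrently, but an X-lock request from a third transaction is refused until both release their locks.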
(a) Coarse Granularity (Table, File, or Database Locks): The data manager could lock
at a coarse granularity such as tables or files. If the locks are at coarse granularity,
the data manager doesn’t have to set many locks, because each lock covers a great
amount of data. Thus, the overhead of setting and releasing locks is low. However,
by locking large chunks of data, the data manager is usually locking more data
than a transaction needs. For example, even if a transaction T accesses only a few
records of a table, a data manager that locks at the granularity of tables will lock the
whole table, thereby preventing other transactions from locking any other records of
the table, most of which are not needed by transaction T. This reduces the number
of transactions that can run concurrently, which both reduces the throughput and
increases the response time of transactions.
(b) Fine Granularity (Records or Fields Locks): The data manager could lock at a fine
granularity, such as records or fields. If the locks are at fine granularity, the data
manager only locks the specific data actually accessed by a transaction. These locks
do not artificially interfere with other transactions, as coarse-grained locks do. However,
the data manager must now lock every piece of data accessed by a transaction,
which can generate much locking overhead. For example, if a transaction issues
an SQL query that accesses tens of thousands of records, a data manager that does
record-granularity locking would set tens of thousands of locks, which can be quite
costly. In addition to the record locks, locks on associated indexes are also needed,
which compounds the problem. There is thus a fundamental tradeoff between the amount of
concurrency and the locking overhead, depending on the granularity of locking: coarse-grained
locking has low overhead but low concurrency, while fine-grained locking has high
concurrency but high overhead.
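The tradeoff can be sketched numerically (an illustrative toy model, not from the notes; the function names are invented for this example): at table granularity a transaction sets one lock regardless of how many records it touches, but any two writers on the same table conflict; at record granularity the lock count grows with the records touched, but writers on disjoint records can proceed concurrently.

```python
# Toy model of the granularity tradeoff: lock count vs. concurrency.

def locks_needed(records_touched, granularity):
    """Locks a transaction must set for `records_touched` records."""
    return 1 if granularity == "table" else records_touched

def can_run_concurrently(t1_records, t2_records, granularity):
    """Two writing transactions conflict if any lock they set overlaps."""
    if granularity == "table":
        return False  # both would lock the whole table
    return not (set(t1_records) & set(t2_records))
```

For instance, a query touching 10,000 records sets a single table lock at coarse granularity but 10,000 record locks at fine granularity; conversely, two updates to different records block each other under a table lock but run concurrently under record locks.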