Fine-grained locking and synchronization are advanced techniques in concurrent programming aimed at improving performance and reducing contention by minimizing the scope and duration of locks.
Understanding Fine-Grained Locking:
Fine-grained locking involves dividing the synchronization scope into smaller, more manageable units. Instead of using a single lock for an entire object or data structure, you use multiple locks to protect smaller sections of shared state. This approach aims to:
- Reduce Contention: Allow multiple threads to access different parts of the shared resource concurrently.
- Improve Scalability: Minimize the time threads spend waiting for locks, thus increasing overall throughput.
- Enhance Responsiveness: Ensure that operations that do not conflict can proceed independently.
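Before looking at the specific techniques, it helps to see the coarse-grained baseline they all improve on. The following is a minimal, illustrative sketch (the class name CoarseGrainedCounter is hypothetical, not used in the examples that follow): because both methods are declared synchronized, they share the object's single monitor, so a thread updating one field blocks a thread updating the other even though the two fields are independent.

// CoarseGrainedCounter.java (illustrative sketch, not part of the later examples)
// Coarse-grained baseline: one monitor guards every field, so threads
// updating unrelated fields still block each other.
class CoarseGrainedCounter {
    private int field1;
    private int field2;

    // synchronized methods lock on 'this', so a thread inside incrementField1()
    // prevents another thread from entering decrementField2(), even though the
    // two fields are independent.
    public synchronized void incrementField1() {
        field1++;
    }

    public synchronized void decrementField2() {
        field2--;
    }
}

The techniques below all start from this observation and shrink the lock scope so that only operations which actually touch the same state contend with each other.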
1. Field-Level Locking
Field-level locking involves using separate locks for different fields within an object or data structure. This allows different threads to access and modify different fields concurrently without blocking each other.
In this example, we’ll use separate locks for different fields within an object to allow concurrent access to each field independently.
//FieldLevelLockingDemo.java
class MyClass {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();
    int field1;
    int field2;

    // Constructor to initialize fields
    public MyClass(int field1, int field2) {
        this.field1 = field1;
        this.field2 = field2;
    }

    public void method1() {
        synchronized (lock1) {
            // Access or modify field1
            field1++;
            System.out.println("Thread " + Thread.currentThread().getName() + " - Field1: " + field1);
        }
    }

    public void method2() {
        synchronized (lock2) {
            // Access or modify field2
            field2--;
            System.out.println("Thread " + Thread.currentThread().getName() + " - Field2: " + field2);
        }
    }
}

public class FieldLevelLockingDemo {
    public static void main(String[] args) {
        // Initialize MyClass with starting values
        MyClass obj = new MyClass(10, 20);

        // Create two threads to access method1 and method2
        Thread thread1 = new Thread(() -> {
            obj.method1();
        }, "Thread-1");
        Thread thread2 = new Thread(() -> {
            obj.method2();
        }, "Thread-2");

        // Start both threads
        thread1.start();
        thread2.start();

        // Wait for both threads to finish
        try {
            thread1.join();
            thread2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // Final values of field1 and field2
        System.out.println("Final Field1: " + obj.field1);
        System.out.println("Final Field2: " + obj.field2);
    }
}
/*
C:\>javac FieldLevelLockingDemo.java

C:\>java FieldLevelLockingDemo
Thread Thread-1 - Field1: 11
Thread Thread-2 - Field2: 19
Final Field1: 11
Final Field2: 19
*/
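The same field-level idea can also be expressed with explicit ReentrantLock objects instead of intrinsic locks, which is convenient when you later need features such as tryLock or timed acquisition. The sketch below is illustrative only; the class name MyLockBasedClass and its method names are not part of the example above or of any library.

// Illustrative sketch: field-level locking with one explicit ReentrantLock per field.
import java.util.concurrent.locks.ReentrantLock;

class MyLockBasedClass {
    private final ReentrantLock lock1 = new ReentrantLock();
    private final ReentrantLock lock2 = new ReentrantLock();
    private int field1;
    private int field2;

    public void incrementField1() {
        lock1.lock();          // guards only field1
        try {
            field1++;
        } finally {
            lock1.unlock();
        }
    }

    public void decrementField2() {
        lock2.lock();          // guards only field2
        try {
            field2--;
        } finally {
            lock2.unlock();
        }
    }
}

The behavior is the same as the synchronized version: each lock guards exactly one field, so threads working on different fields never block each other.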
2. Striped Locks
Striped locking divides the shared resource into multiple stripes or segments, each protected by its own lock. This technique is useful when the shared resource can be partitioned into independent parts, allowing threads that access different parts to operate concurrently without contention.
In this example, we’ll implement striped locking using an array of locks to manage access to a shared resource across different stripes.
//StripedLockDemo.java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class StripedLockTable<T> {
    private final int numOfLocks = 16;
    private final Lock[] locks = new ReentrantLock[numOfLocks];

    public StripedLockTable() {
        for (int i = 0; i < numOfLocks; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    public void performOperation(T key) {
        int hash = key.hashCode();
        int lockIndex = (hash & 0x7fffffff) % numOfLocks;
        locks[lockIndex].lock();
        try {
            // Perform operation on key
            System.out.println("Performing operation on key: " + key);
        } finally {
            locks[lockIndex].unlock();
        }
    }
}

public class StripedLockDemo {
    public static void main(String[] args) {
        StripedLockTable<Integer> table = new StripedLockTable<>();
        for (int i = 0; i < 10; i++) {
            final int key = i;
            new Thread(() -> {
                table.performOperation(key);
            }).start();
        }
    }
}
/*
C:\>javac StripedLockDemo.java

C:\>java StripedLockDemo
Performing operation on key: 7
Performing operation on key: 4
Performing operation on key: 3
Performing operation on key: 0
Performing operation on key: 5
Performing operation on key: 1
Performing operation on key: 6
Performing operation on key: 8
Performing operation on key: 9
Performing operation on key: 2

C:\>java StripedLockDemo
Performing operation on key: 3
Performing operation on key: 8
Performing operation on key: 9
Performing operation on key: 4
Performing operation on key: 0
Performing operation on key: 7
Performing operation on key: 2
Performing operation on key: 5
Performing operation on key: 1
Performing operation on key: 6

C:\>java StripedLockDemo
Performing operation on key: 7
Performing operation on key: 0
Performing operation on key: 3
Performing operation on key: 1
Performing operation on key: 8
Performing operation on key: 2
Performing operation on key: 9
Performing operation on key: 5
Performing operation on key: 6
Performing operation on key: 4
*/
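The demo above locks and unlocks but does not actually guard any shared state. To show striping protecting real data, here is an illustrative sketch of a striped map: an array of HashMap buckets, each guarded by its own ReentrantLock, so operations on keys that hash to different stripes proceed in parallel. The class name StripedMap and the stripe count of 8 are assumptions made for this sketch, not part of the example above or of any library API.

// Illustrative sketch: a map partitioned into stripes, each stripe guarded by
// its own lock. Keys that hash to different stripes can be accessed concurrently.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

class StripedMap<K, V> {
    private static final int STRIPES = 8;                      // illustrative stripe count
    private final Map<K, V>[] buckets;
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];

    @SuppressWarnings("unchecked")
    public StripedMap() {
        buckets = (Map<K, V>[]) new Map[STRIPES];
        for (int i = 0; i < STRIPES; i++) {
            buckets[i] = new HashMap<>();
            locks[i] = new ReentrantLock();
        }
    }

    private int stripeFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;         // same indexing trick as performOperation above
    }

    public void put(K key, V value) {
        int s = stripeFor(key);
        locks[s].lock();                                        // only this stripe is locked
        try {
            buckets[s].put(key, value);
        } finally {
            locks[s].unlock();
        }
    }

    public V get(K key) {
        int s = stripeFor(key);
        locks[s].lock();
        try {
            return buckets[s].get(key);
        } finally {
            locks[s].unlock();
        }
    }
}

Compared with a single lock around one HashMap, per-key operations now contend only when they land on the same stripe. The trade-off is that any operation needing a consistent view of the whole map (for example, a size or clear operation) would have to acquire every stripe lock.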
3. Read-Write Locks (ReadWriteLock)
Read-write locks differentiate between read and write operations on a shared resource. Multiple threads can read concurrently, while write operations are exclusive. This approach is effective when reads are more frequent than writes and can significantly improve performance over exclusive locks.
In this example, we’ll use ReadWriteLock to differentiate between read and write operations on a shared resource.
//ReadWriteLockDemo.java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class MyResource {
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Lock readLock = rwLock.readLock();
    private final Lock writeLock = rwLock.writeLock();
    private String data;

    public String readData() {
        readLock.lock();
        try {
            return data;
        } finally {
            readLock.unlock();
        }
    }

    public void writeData(String newData) {
        writeLock.lock();
        try {
            data = newData;
        } finally {
            writeLock.unlock();
        }
    }
}

public class ReadWriteLockDemo {
    public static void main(String[] args) {
        MyResource resource = new MyResource();

        // One thread writing data first
        new Thread(() -> {
            resource.writeData("Lotus Java Prince");
            System.out.println("Write operation completed.");
        }).start();

        // Introduce a small delay to ensure the write happens before the reads
        try {
            Thread.sleep(100); // Delay to ensure the write happens first
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // Multiple threads reading data concurrently after the write
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                System.out.println("Read data: " + resource.readData());
            }).start();
        }
    }
}
/*
C:\>javac ReadWriteLockDemo.java

C:\>java ReadWriteLockDemo
Write operation completed.
Read data: Lotus Java Prince
Read data: Lotus Java Prince
Read data: Lotus Java Prince
Read data: Lotus Java Prince
Read data: Lotus Java Prince
*/
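A common real-world use of ReadWriteLock is a read-mostly cache. The sketch below is illustrative only (the class name ReadMostlyCache and the getOrCompute method are assumptions for this example, not a standard API). It shows the usual pattern: try the lookup under the read lock, and only if the value is missing release the read lock, acquire the write lock, and re-check before computing, because another thread may have filled the entry in between. ReentrantReadWriteLock does not allow upgrading a held read lock to a write lock, which is why the read lock must be released first.

// Illustrative sketch: a read-mostly cache built on ReentrantReadWriteLock.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Function;

class ReadMostlyCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    public V getOrCompute(K key, Function<K, V> loader) {
        // Fast path: many readers may hold the read lock at the same time.
        rwLock.readLock().lock();
        try {
            V value = map.get(key);
            if (value != null) {
                return value;
            }
        } finally {
            rwLock.readLock().unlock();
        }

        // Slow path: exclusive write lock. Re-check because another thread
        // may have inserted the value after we released the read lock.
        rwLock.writeLock().lock();
        try {
            V value = map.get(key);
            if (value == null) {
                value = loader.apply(key);
                map.put(key, value);
            }
            return value;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}

The split pays off only when lookups vastly outnumber misses; if writes are frequent, the write lock becomes the bottleneck just as a plain exclusive lock would.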
Fine-grained locking and synchronization are advanced techniques used in concurrent programming to maximize parallelism while preserving data consistency. Rather than locking entire objects or large code blocks—which can create contention and degrade performance—fine-grained locking involves applying locks to the smallest possible units of data or operations. This minimizes the time each thread holds a lock and allows more threads to proceed concurrently without interference.