A race condition occurs when two or more threads access shared data and try to modify it at the same time. The final result of the operation depends on the relative timing of the threads’ execution, which can lead to unpredictable and incorrect results.
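For example, a plain counter++ is actually a read-modify-write sequence, so two threads can read the same value and one update is silently lost. The following sketch (the class name and thread counts are just illustrative) usually prints a total below the expected 20000:

public class RaceConditionExample {
    private int counter = 0;

    public void increment() {
        counter++; // read, add, write: not atomic, so updates can be lost
    }

    public static void main(String[] args) throws InterruptedException {
        RaceConditionExample example = new RaceConditionExample();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                example.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Often prints a value below 20000 because some increments are lost
        System.out.println("Counter: " + example.counter);
    }
}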
A class or method is considered thread-safe if it can be safely used by multiple threads without causing any data inconsistency or other concurrency issues. Thread-safe classes typically use synchronization mechanisms to protect shared resources.
Synchronization is the process of coordinating the access of multiple threads to shared resources. By using synchronization mechanisms, we can ensure that only one thread can access a shared resource at a time, thus preventing race conditions.
public class SynchronizedBlockExample {
    private int counter = 0;
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {
            counter++;
        }
    }

    public int getCounter() {
        synchronized (lock) {
            return counter;
        }
    }
}
In this example, the synchronized block ensures that only one thread can execute the code inside the block at a time, protecting the counter variable from race conditions.
public class SynchronizedMethodExample {
    private int counter = 0;

    public synchronized void increment() {
        counter++;
    }

    public synchronized int getCounter() {
        return counter;
    }
}
Here, the synchronized keyword is used on methods. When a thread calls a synchronized instance method, it acquires the intrinsic lock of that object, preventing other threads from calling any other synchronized instance methods on the same object until the lock is released.
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockExample {
    private int counter = 0;
    private final ReentrantLock lock = new ReentrantLock();

    public void increment() {
        lock.lock();
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    public int getCounter() {
        lock.lock();
        try {
            return counter;
        } finally {
            lock.unlock();
        }
    }
}
ReentrantLock provides more flexibility than synchronized blocks. It allows for more advanced locking features such as timed waits, interruptible lock acquisition, and fair locking.
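For example, tryLock with a timeout lets a thread give up instead of blocking indefinitely. The sketch below reuses the counter from the previous example; the 100 ms timeout and the fairness flag are arbitrary choices for illustration:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockExample {
    private int counter = 0;
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock

    public boolean tryIncrement() throws InterruptedException {
        // Wait at most 100 ms for the lock instead of blocking indefinitely
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                counter++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // could not acquire the lock in time
    }
}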
import java.util.concurrent.Semaphore;

public class SemaphoreExample {
    private final Semaphore semaphore = new Semaphore(1);
    private int counter = 0;

    public void increment() throws InterruptedException {
        // Acquire the permit before touching the counter; if acquire() is
        // interrupted, the permit was never taken and must not be released,
        // so release() belongs in a finally block that runs only after a
        // successful acquire.
        semaphore.acquire();
        try {
            counter++;
        } finally {
            semaphore.release();
        }
    }

    public int getCounter() throws InterruptedException {
        semaphore.acquire();
        try {
            return counter;
        } finally {
            semaphore.release();
        }
    }
}
A Semaphore is used to control the number of threads that can access a shared resource simultaneously. In this example, the semaphore has a single permit, so only one thread can access the counter variable at a time.
import java.util.concurrent.CountDownLatch;

public class CountDownLatchExample {
    public static void main(String[] args) throws InterruptedException {
        int threadCount = 3;
        CountDownLatch latch = new CountDownLatch(threadCount);

        for (int i = 0; i < threadCount; i++) {
            new Thread(() -> {
                try {
                    // Simulate some work
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    latch.countDown();
                }
            }).start();
        }

        latch.await();
        System.out.println("All threads have completed their work.");
    }
}
CountDownLatch is used to make one or more threads wait until a set of operations being performed in other threads completes.
As shown in the previous examples, using synchronization mechanisms to protect shared resources such as variables, collections, and files is a common practice. This ensures that multiple threads can access and modify these resources safely.
import java.util.LinkedList;
import java.util.Queue;

public class ProducerConsumerExample {
    private final Queue<Integer> queue = new LinkedList<>();
    private final int capacity = 5;

    public synchronized void produce() throws InterruptedException {
        while (queue.size() == capacity) {
            wait();
        }
        int item = (int) (Math.random() * 100);
        queue.add(item);
        System.out.println("Produced: " + item);
        notifyAll();
    }

    public synchronized void consume() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();
        }
        int item = queue.poll();
        System.out.println("Consumed: " + item);
        notifyAll();
    }
}
The producer-consumer problem is a classic concurrency problem. The producer thread adds items to a shared queue, and the consumer thread removes items from the queue. Synchronization and the wait() and notifyAll() methods are used to ensure that the producer does not add items when the queue is full and the consumer does not remove items when the queue is empty.
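A minimal driver for the class above could start one producer thread and one consumer thread; the loop count of 10 items is arbitrary:

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        ProducerConsumerExample pc = new ProducerConsumerExample();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    pc.produce();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    pc.consume();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}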
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
In the singleton pattern, we want to ensure that only one instance of a class is created. By using the double-checked locking mechanism with synchronization and a volatile field, we can achieve thread-safe singleton creation.
Synchronization comes with a performance overhead. To minimize this overhead, we should only synchronize the code that absolutely needs to be synchronized. For example, if only a small part of a method accesses shared resources, we should use a synchronized block instead of a synchronized method.
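For example, an expensive computation that only uses local data can run outside the lock, with only the update to shared state synchronized (doExpensiveLocalWork is a hypothetical placeholder):

public class NarrowSynchronizationExample {
    private final Object lock = new Object();
    private int sharedTotal = 0;

    public void process(int input) {
        // Expensive, thread-local computation runs outside the lock
        int result = doExpensiveLocalWork(input);

        // Only the update to shared state is synchronized
        synchronized (lock) {
            sharedTotal += result;
        }
    }

    private int doExpensiveLocalWork(int input) {
        return input * input; // placeholder for real work
    }
}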
Java provides many high-level concurrency utilities such as ConcurrentHashMap, CopyOnWriteArrayList, and ExecutorService. These utilities are designed to be thread-safe and can simplify the development of multithreaded applications.
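As an illustration (the word list and pool size are arbitrary), an ExecutorService can distribute updates to a ConcurrentHashMap without any explicit locking, because the map's merge operation is atomic:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrencyUtilitiesExample {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> wordCounts = new ConcurrentHashMap<>();
        ExecutorService executor = Executors.newFixedThreadPool(4);

        String[] words = {"alpha", "beta", "alpha", "gamma", "beta", "alpha"};
        for (String word : words) {
            // merge() on ConcurrentHashMap is atomic, so no external lock is needed
            executor.submit(() -> wordCounts.merge(word, 1, Integer::sum));
        }

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(wordCounts); // e.g. {alpha=3, beta=2, gamma=1}
    }
}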
Deadlocks occur when two or more threads are blocked forever, each waiting for the other to release a lock. To avoid deadlocks, we should acquire locks in a consistent order, use timed locks, and avoid nested locks whenever possible.
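One simple way to enforce a consistent order is to always acquire the locks in the same sequence, as in this sketch (lockA and lockB stand in for two hypothetical shared resources):

public class LockOrderingExample {
    private final Object lockA = new Object();
    private final Object lockB = new Object();

    // Both methods acquire lockA before lockB, so no circular wait can occur.
    public void firstOperation() {
        synchronized (lockA) {
            synchronized (lockB) {
                // ... work that needs both locks ...
            }
        }
    }

    public void secondOperation() {
        synchronized (lockA) {
            synchronized (lockB) {
                // ... other work that needs both locks ...
            }
        }
    }
}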
Java multithreading synchronization mechanisms are essential for developing thread-safe applications. By understanding the fundamental concepts, usage methods, common practices, and best practices, developers can write efficient and reliable multithreaded code. Whether using simple synchronized blocks or more advanced concurrency utilities, the key is to ensure that shared resources are accessed and modified in a thread-safe manner.