How to detect and fix Java memory leaks in production

Introduction

Java applications are generally the safe choice for memory management. You've got automatic garbage collection, battle-tested runtimes, and decades of tooling. Then a service that used to idle at 400 MB starts climbing after every deploy until the node begins swapping and latency goes sideways. Heap usage keeps growing, garbage collection pauses get longer, and eventually the dreaded OutOfMemoryError: Java heap space crashes everything.

This is the classic memory leak pattern in Java. Despite automatic garbage collection handling most memory management for you, leaks still happen regularly in production systems.

In this guide, you'll learn how to spot the warning signs early, identify memory leaks using heap dump analysis and memory profiling, and apply fixes that prevent potential memory leak problems from reaching production.

What is a Java memory leak?

A Java memory leak occurs when your application holds references to Java objects that are no longer needed, preventing the garbage collector from reclaiming that heap memory.

Unlike languages such as C or C++ where you manually allocate and free memory, Java relies on garbage collection to automatically clean up unused objects. The problem is that the garbage collector can only remove objects that have no active references pointing to them.

The Java Virtual Machine (JVM) divides Java heap space into various memory pools for different object lifecycles.

When you allocate memory for new Java objects, they initially land in the young generation. Objects that survive multiple garbage collection cycles get promoted to the old generation, where they consume heap memory until they're no longer referenced.

The garbage collector periodically scans these memory pools, and any objects that can't be reached from a garbage collector root (GC root) -- such as static fields, local variables on thread stacks, or active thread references -- get garbage collected.

Objects in the heap fall into two categories: referenced objects, which can still be reached through a chain of references from a GC root, and unreferenced objects, which have no path back to any root.

GC roots include local variables on thread stacks, static variables in loaded classes, and active thread objects themselves. The garbage collector efficiently finds and removes unreferenced objects during its collection cycles.

Memory leaks happen when Java objects are no longer logically needed by your application but remain technically referenced. Consider a cache that grows without bounds, or event listeners that are never unregistered. Such objects sit in heap memory indefinitely because the garbage collector sees valid references to them and assumes they're still in use.
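The listener case can be sketched with a hypothetical event bus (the class and method names here are illustrative, not from any particular library). Each registered listener, and everything it references, stays reachable from the static list forever:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical event bus; names are illustrative
class EventBus {
    private static final List<Consumer<String>> listeners = new ArrayList<>();

    static void register(Consumer<String> listener) {
        listeners.add(listener);
    }

    // Without a corresponding unregister() method, every listener
    // remains reachable from a GC root (the static field) forever
    static int listenerCount() {
        return listeners.size();
    }
}

public class ListenerLeakDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            EventBus.register(event -> { /* handle event */ });
        }
        // All 1000 listeners are still strongly reachable
        System.out.println(EventBus.listenerCount());
    }
}
```

The garbage collector sees nothing wrong here: every listener has a valid reference chain back to the static `listeners` field, so none of them are eligible for collection.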

The key distinction from manual memory management languages is that Java memory leaks are typically caused by unintentional reference retention rather than forgetting to call free(). You don't lose track of the memory address; you lose track of the logical lifetime of objects your code is holding onto.

Beyond heap memory, Java applications also consume native memory for JVM internal allocations, compressed class space, and native code execution.

A native memory leak, where native memory allocations grow unbounded, won't show up in your standard heap size metrics. If you're using Java Native Interface (JNI) or native libraries, you may need to track native memory allocated separately and diagnose native memory leaks using platform-native tools.

Native memory tracking can be enabled with JVM flags to get native memory usage details, helping you monitor memory consumption outside the Java heap.

Symptoms of memory leaks in Java

So when is a memory leak actually the problem? Several warning signs indicate your Java applications may be suffering from potential memory leaks.

The most obvious symptom is an OutOfMemoryError: Java heap space exception.

When your application exhausts available heap memory because leaked objects have accumulated over time, the JVM throws this error and typically terminates the affected thread. By the time you see this error in production, the leak has already consumed all the memory resources available to your application. In extreme cases, the system may even run out of swap space as the JVM attempts to allocate memory beyond physical RAM.

Performance degradation over time is a subtler but equally important indicator. As heap memory fills up with leaked objects, the garbage collector runs more frequently and takes longer to complete. You might notice response times gradually increasing over days or weeks, with periodic spikes during full GC pauses. An application that ran fine after a fresh restart but shows progressively declining performance is a strong candidate for memory leak investigation.

Monitoring your garbage collection logs helps you identify patterns that indicate potential memory leak issues. If you see the live set -- the amount of memory still in use after a full garbage collection -- growing steadily over time, that's a clear signal. Healthy applications maintain a relatively stable live set, while leaking applications show a rising baseline that never returns to previous levels. Enable verbose garbage collection logging to capture this data.

Another common symptom is resource exhaustion, such as running out of database connections. When connections, file handles, or network sockets are opened but never properly closed, you'll eventually exhaust your connection pool. The application starts throwing exceptions about being unable to acquire new connections, even though previous operations should have released theirs.

When the garbage collector spends more time running but reclaims less heap space each cycle, leaked objects are likely accumulating. The JVM works harder to find reclaimable memory blocks while the leak continues consuming Java heap allocation capacity.

Common causes of Java memory leaks

Memory leaks in Java typically stem from a handful of recurring patterns. Understanding these common causes helps you avoid memory leaks from the start and quickly diagnose memory leaks during debugging.

Static fields holding object references

Static fields live for the entire lifetime of the application -- or more precisely, for as long as the class remains loaded. When static variables reference collections that continuously grow without elements being removed, you've created a textbook memory leak that will eventually consume all the memory available to your heap.

public class MetricsCollector {
    // This list grows forever
    private static final List<Metric> allMetrics = new ArrayList<>();

    public void recordMetric(Metric metric) {
        allMetrics.add(metric);
    }
}

Every call to recordMetric() adds an object to a list that will never be garbage collected. The fix involves removing entries when they're no longer needed, using bounded collections, or switching to weak references so that entries can be garbage collected once nothing else references them.

public class MetricsCollector {
    private static final int MAX_METRICS = 10000;
    private static final Deque<Metric> recentMetrics = new ArrayDeque<>();

    public synchronized void recordMetric(Metric metric) {
        if (recentMetrics.size() >= MAX_METRICS) {
            recentMetrics.removeFirst();
        }
        recentMetrics.addLast(metric);
    }
}

Unclosed resources

Database connections, file streams, network sockets, and other resources that implement AutoCloseable allocate memory and often hold onto native memory outside the Java heap. Failing to close these memory resources causes leaks that can manifest as both heap memory growth and native heap exhaustion.

// Leaky code - connection never closed if exception occurs
public List<User> getUsers() throws SQLException {
    Connection conn = dataSource.getConnection();
    PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users");
    ResultSet rs = stmt.executeQuery();

    List<User> users = new ArrayList<>();
    while (rs.next()) {
        users.add(mapRow(rs));
    }
    return users;
    // Connection leaked if we reach here or throw earlier
}

Each leaked connection consumes heap memory for the Java object, as well as native memory for the underlying socket and buffers. Over time, you'll exhaust both your connection pool and available memory resources. The best fix uses try-with-resources to guarantee cleanup regardless of whether the operation succeeds or throws an exception.

public List<User> getUsers() throws SQLException {
    try (Connection conn = dataSource.getConnection();
         PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users");
         ResultSet rs = stmt.executeQuery()) {

        List<User> users = new ArrayList<>();
        while (rs.next()) {
            users.add(mapRow(rs));
        }
        return users;
    }
}

Improper equals() and hashCode() implementations

When you use custom objects as keys in hash-based collections like HashMap or HashSet without properly overriding equals() and hashCode(), the collection can't recognize duplicate entries. Each insertion creates a new entry even when you intended to update an existing one, causing the collection to leak memory continuously.

public class CacheKey {
    private final String userId;
    private final String sessionId;

    public CacheKey(String userId, String sessionId) {
        this.userId = userId;
        this.sessionId = sessionId;
    }
    // No equals() or hashCode() override
}

// Every put() creates a new entry, even with identical values
Map<CacheKey, UserSession> cache = new HashMap<>();
for (int i = 0; i < 1000; i++) {
    cache.put(new CacheKey("user123", "session456"), new UserSession());
}
System.out.println(cache.size()); // Prints 1000, not 1

Without equals() and hashCode(), the HashMap falls back to Object's default implementations, which compare object identity. Two CacheKey instances with identical field values are treated as different keys. These objects accumulate in the collection, and you leak memory with every duplicate insertion. The fix requires implementing both methods consistently.

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof CacheKey)) return false;
    CacheKey other = (CacheKey) o;
    return Objects.equals(userId, other.userId)
        && Objects.equals(sessionId, other.sessionId);
}

@Override
public int hashCode() {
    return Objects.hash(userId, sessionId);
}

Non-static inner classes referencing outer classes

Non-static inner classes in Java hold an implicit reference to their enclosing outer class instance. This hidden reference can prevent the outer class from being garbage collected, even when you think you've released all references to it. The inner class maintains a strong reference to the outer class, keeping all the memory associated with it alive.

public class DataProcessor {
    private byte[] largeBuffer = new byte[10 * 1024 * 1024]; // 10MB

    public Runnable createTask() {
        // This anonymous inner class holds implicit reference to DataProcessor
        return new Runnable() {
            @Override
            public void run() {
                System.out.println("Processing...");
            }
        };
    }
}

Even though the Runnable doesn't use largeBuffer, anonymous and non-static inner classes reference their outer class through a hidden field (recent javac versions can omit the reference when the enclosing instance is provably unused, but you shouldn't rely on that). If these tasks are queued for later execution, the outer instances accumulate in memory.

Each queued task keeps its DataProcessor alive, and you effectively leak memory equal to that 10MB buffer for every task created.
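You can observe the hidden reference with reflection. This sketch has the inner class touch an outer field so the compiler is forced to capture the enclosing instance; the synthetic field is conventionally named this$0, but the check below matches on type rather than name to stay compiler-agnostic:

```java
import java.lang.reflect.Field;

public class OuterRefDemo {
    private final byte[] largeBuffer = new byte[1024];

    public Runnable createTask() {
        // References an outer field, so the enclosing instance must be captured
        return new Runnable() {
            @Override
            public void run() {
                System.out.println(largeBuffer.length);
            }
        };
    }

    public static void main(String[] args) {
        Runnable task = new OuterRefDemo().createTask();
        boolean holdsOuter = false;
        for (Field f : task.getClass().getDeclaredFields()) {
            if (f.getType() == OuterRefDemo.class) {
                holdsOuter = true; // the hidden reference, typically this$0
            }
        }
        System.out.println("holds outer reference: " + holdsOuter);
    }
}
```

As long as the Runnable is alive, so is its OuterRefDemo instance, including the buffer.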

The solution is to use a static nested class when you don't need access to the outer class's instance members. Static nested classes don't hold an implicit reference to the enclosing instance.

public class DataProcessor {
    private byte[] largeBuffer = new byte[10 * 1024 * 1024];

    public Runnable createTask() {
        return new ProcessingTask();
    }

    private static class ProcessingTask implements Runnable {
        @Override
        public void run() {
            System.out.println("Processing...");
        }
    }
}

ThreadLocal variables without cleanup

ThreadLocal provides thread-isolated storage where each thread maintains its own independent copy of a variable. This gives you thread safety without synchronization, since each thread only ever sees its own value. The problem arises when modern application servers reuse threads from a pool rather than creating new ones for each request.

public class RequestContext {
    private static final ThreadLocal<UserSession> currentSession = new ThreadLocal<>();

    public static void setSession(UserSession session) {
        currentSession.set(session);
    }

    public static UserSession getSession() {
        return currentSession.get();
    }
    // No cleanup method!
}

When a thread finishes handling one request and returns to the pool, the ThreadLocal value persists because the thread itself remains alive. The next request using that thread might see stale data, and the UserSession objects from previous requests remain reachable, and therefore in heap memory, for as long as the pooled thread lives.
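You can reproduce the stale-data half of this with a single-thread executor standing in for an application server's worker pool. The first "request" sets a value and never calls remove(); the second runs on the same reused thread and reads it:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalStaleDemo {
    private static final ThreadLocal<String> SESSION = new ThreadLocal<>();

    public static void main(String[] args) throws Exception {
        // A single-thread pool stands in for a server's worker pool
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // "Request 1" sets a value but never calls remove()
        pool.submit(() -> SESSION.set("user-from-request-1")).get();

        // "Request 2" runs on the same pooled thread and sees the stale value
        String stale = pool.submit(SESSION::get).get();
        System.out.println("request 2 sees: " + stale);

        pool.shutdown();
    }
}
```

The same mechanism is what keeps the leaked objects alive: the value is reachable from the live pooled thread's ThreadLocal map.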

The solution is simple but easy to forget: always call remove() in a finally block.

public class RequestContext {
    private static final ThreadLocal<UserSession> currentSession = new ThreadLocal<>();

    public static void setSession(UserSession session) {
        currentSession.set(session);
    }

    public static UserSession getSession() {
        return currentSession.get();
    }

    public static void clear() {
        currentSession.remove();
    }
}

// In your request filter or interceptor
try {
    RequestContext.setSession(loadSession(request));
    chain.doFilter(request, response);
} finally {
    RequestContext.clear();
}

With these common causes in mind, let's look at the tools that help you find leaks when they occur.

Java memory leak detection tools

Finding memory leaks requires visibility into what's happening inside the JVM. Several tools and techniques give you that visibility, ranging from simple logging to sophisticated memory profiling.

Verbose garbage collection logging

The easiest way to start investigating memory issues is by enabling verbose garbage collection. Add the following JVM parameters to your application startup to capture garbage collection logs:

# Java 9+
-Xlog:gc*:file=gc.log:time,uptime,level,tags

# Java 8
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log

Garbage collection logs show you when collections occur, how long they take, and how much heap memory is reclaimed. A healthy application shows stable memory usage after full collections.

A leaking application shows the post-GC heap size trending upward over time, indicating that the garbage collector cannot reclaim all the memory being consumed.

Tools like GCeasy analyze these logs and visualize the trends, making it easier to spot gradual memory growth that might not be obvious from raw log output. You can identify potential memory leaks by watching whether the live set (memory remaining after garbage collection) increases over successive full GC cycles.

Memory profilers

Memory profiling tools attach to a running JVM and provide real-time visibility into memory allocations, object counts, and reference chains. Several options support the diagnosis of memory leaks, depending on your environment and budget.

VisualVM is a free Java profiler that ships separately from the JDK (as of Java 9). It provides heap monitoring, garbage collection visualization, and basic profiling capabilities. Connect to a running Java process, switch to the Monitor tab to watch heap memory usage, or use the Sampler tab for allocation profiling. It's lightweight enough for development but can introduce overhead that makes it unsuitable for production memory profiling.

Eclipse Memory Analyzer, commonly called MAT, excels at heap dump analysis. This powerful tool can process dumps containing hundreds of millions of Java objects and automatically identifies leak suspects. MAT's dominator tree view shows which objects are keeping the most memory alive, while its path-to-GC-roots feature reveals why specific objects can't be collected. The tool generates Eclipse memory leak warnings that highlight suspicious memory consumption patterns.

Commercial profilers like YourKit and JProfiler offer more sophisticated analysis with lower overhead. They're particularly useful for production profiling where minimizing impact on your application's performance is critical. Java Mission Control paired with Java Flight Recorder provides similar capabilities; Flight Recorder has shipped with OpenJDK since Java 11 and captures detailed runtime data with minimal overhead, making it suitable for always-on production monitoring.

For native memory leak investigation, you'll need additional native tools. Enable native memory tracking with -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail to track native memory allocated by the JVM.

Use jcmd $PID VM.native_memory to view native memory usage details including JVM internal allocations, compressed class space consumption, and native heap usage.

Heap dumps

A heap dump is a snapshot of all objects in the JVM at a specific moment. It contains every object instance, its field values, and the reference graph connecting them. Heap dump analysis is the most direct way to identify memory leaks and understand what's consuming your Java heap space.

You can generate heap dumps in several ways:

  • On-demand using jmap -- jmap -dump:live,format=b,file=heap.hprof
  • Automatically on OutOfMemoryError (add to JVM arguments) -- -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/app/
  • Via Java Management Extensions (JMX) using jconsole or programmatically
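Heap dumps can also be triggered from inside the application through the HotSpotDiagnosticMXBean, which is what JMX-based tooling uses under the hood. A minimal sketch (note that the target file must not exist yet and must end in .hprof):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class HeapDumpDemo {
    public static void main(String[] args) throws Exception {
        // dumpHeap refuses to overwrite and requires the .hprof suffix
        Path dir = Files.createTempDirectory("dumps");
        Path dumpFile = dir.resolve("heap.hprof");

        HotSpotDiagnosticMXBean diagnostics = ManagementFactory.newPlatformMXBeanProxy(
            ManagementFactory.getPlatformMBeanServer(),
            "com.sun.management:type=HotSpotDiagnostic",
            HotSpotDiagnosticMXBean.class);

        // true = dump only live objects, like jmap's "live" option
        diagnostics.dumpHeap(dumpFile.toString(), true);

        System.out.println("dump size: " + Files.size(dumpFile) + " bytes");
    }
}
```

This is handy for capturing a dump from an admin endpoint or a scheduled job when attaching jmap to the process isn't convenient.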

The -XX:+HeapDumpOnOutOfMemoryError flag is essential for production systems. When your Java process runs out of heap memory, you get a dump showing exactly what filled up the heap, even if no one is watching when the crash happens. This automatic capture is often your only chance to diagnose memory leaks that occur unpredictably.

Once you have a dump file, open it in Eclipse Memory Analyzer for detailed heap dump analysis. The Leak Suspects report automatically analyzes the dump and highlights objects consuming disproportionate amounts of memory. The histogram view shows object counts and shallow sizes by class. If you see millions of instances of a class that should only have hundreds, you've likely found where memory leaks happen.

MAT's "path to GC roots" feature shows why specific Java objects remain in heap memory instead of being garbage collected. To see where the problematic objects are allocated, pair the dump with allocation profiling (for example, Java Flight Recorder's allocation events), since standard heap dumps don't record allocation stack traces.

Code reviews and IDE warnings

Prevention beats detection. Modern IDEs can flag potential resource leaks during development. In Eclipse, go to Preferences > Java > Compiler > Errors/Warnings and configure Eclipse memory leak warnings by setting resource leak warnings to Error. IntelliJ IDEA provides similar inspections under Settings > Editor > Inspections > Java > Resource management.

// Eclipse will flag this as: Resource leak: 'stream' is never closed
InputStream stream = new FileInputStream("data.txt");
byte[] data = stream.readAllBytes();
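The fix the IDE is nudging you toward is the same try-with-resources pattern shown earlier, applied to the stream:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamCloseDemo {
    static byte[] readData(Path path) throws IOException {
        // try-with-resources guarantees the stream is closed, even on exceptions
        try (InputStream stream = Files.newInputStream(path)) {
            return stream.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("data", ".txt");
        Files.writeString(file, "hello");
        System.out.println(new String(readData(file)));
    }
}
```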

Code reviews should specifically look for the patterns described earlier: static fields that reference growing collections, unclosed database connections and streams, non-static inner classes whose instances might outlive their enclosing objects, and ThreadLocal usage without corresponding remove() calls.

How to fix memory leaks in Java

Once you've identified the source of a leak using the tools above, applying the fix is usually straightforward. The challenge is often just finding the right place in the code where Java objects are being retained unnecessarily.

For static field leaks, either remove the static modifier if the data doesn't truly need application-wide scope, or implement a cleanup strategy such as bounded size limits, time-based expiration, or weak references. The WeakHashMap class automatically removes entries when keys are no longer strongly referenced elsewhere.
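A sketch of the WeakHashMap behavior: once the last strong reference to a key disappears, the entry becomes eligible for removal. The System.gc() loop here is only a demonstration aid to make collection happen promptly, not something to rely on in production code:

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, "cached value");

        key = null; // drop the last strong reference to the key

        // Nudge the collector until the weakly referenced entry is expunged
        for (int i = 0; i < 50 && !cache.isEmpty(); i++) {
            System.gc();
            Thread.sleep(10);
        }
        System.out.println("entries left: " + cache.size());
    }
}
```

Note that WeakHashMap weakens the keys, not the values; if a value strongly references its own key, the entry can never be collected.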

For unclosed resources, wrap all AutoCloseable instances in try-with-resources blocks, as this ensures database connections, file handles, and network sockets are closed properly even when exceptions occur. If you're working with legacy code that doesn't use this pattern, add explicit close() calls in finally blocks to guarantee cleanup and prevent the application from continuing to leak memory.

For collections used with custom key classes, ensure both equals() and hashCode() are overridden consistently. Use your IDE's generation feature or the Objects.hash() and Objects.equals() helpers to avoid manual implementation errors.

For inner class issues, convert anonymous or non-static inner classes to static inner classes when they don't need access to the outer instance. If they do need outer class access, ensure the outer class isn't held longer than necessary. Breaking the implicit reference between inner and outer classes is often the simplest fix.

For ThreadLocal leaks, establish a cleanup pattern that calls remove() at the end of each unit of work. In web applications running on modern application servers, a servlet filter is the natural place for this cleanup. Always clear the current thread's value before the thread returns to the pool.

Monitoring Java memory in production

Production environments typically can't run profilers continuously due to overhead concerns. Instead, you need to monitor memory metrics passively and react when trends indicate potential memory leak problems.

JVM monitoring through JMX exposes memory pool sizes, garbage collection statistics, and other metrics that observability platforms can collect. Key metrics to track include:

  • Heap memory usage after GC (live set trend over time)
  • GC pause times and frequency from garbage collection logs
  • Old generation occupancy after collections
  • Metaspace usage for class loading issues
  • Native memory consumption via native memory tracking
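The same metrics are readable in-process through the java.lang.management API, which is what most observability agents wrap. A minimal sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MemoryMetricsDemo {
    public static void main(String[] args) {
        // Current heap usage
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("heap used: " + heap.getUsed() + " bytes");

        // Post-GC usage per pool; getCollectionUsage() returns null for
        // pools that aren't managed by a collector (e.g. the code cache)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage afterGc = pool.getCollectionUsage();
            if (afterGc != null) {
                System.out.println(pool.getName() + " after GC: " + afterGc.getUsed());
            }
        }

        // GC counts and cumulative pause time
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                + " collections, " + gc.getCollectionTime() + " ms");
        }
    }
}
```

Sampling the post-GC pool usage on a schedule gives you exactly the live-set trend described above, without any external tooling.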

A steadily increasing live set over days or weeks signals a leak, even if memory usage appears stable during normal operations. The garbage collector keeps things running by working harder, but eventually it can't reclaim enough space and your Java applications crash with OutOfMemoryError.

Set alerts for heap size exceeding percentage thresholds after full GC. If your application typically settles at 40% heap usage after collection but suddenly stays at 70%, investigate before it reaches 100% and exhausts all the memory available.

Java Flight Recorder deserves special mention for production monitoring. It collects detailed performance data, including memory allocations, garbage collection activity, and thread behavior, with minimal overhead (typically under 2%). You can run continuous recordings and analyze them later when issues arise, or configure automatic dump triggers based on thresholds.

Browser automation and Java memory leaks

If your Java applications orchestrate headless browsers for web scraping, testing, or PDF generation, you're dealing with a particularly challenging category of memory leaks. Each browser instance consumes substantial native memory -- often hundreds of megabytes -- and improper session management can quickly exhaust available memory resources.

Headless Chrome and Firefox processes spawned from Java through tools like Selenium or Playwright require explicit cleanup. When a browser session crashes or times out without a proper shutdown, the child process may continue running, holding onto native heap memory that the JVM can't reclaim. Your Java heap might look fine while native memory allocations grow unchecked outside the monitored memory pools.

Common patterns that cause problems include:

  • Creating browser instances without closing them in finally blocks
  • Holding onto page or context references longer than needed
  • Failing to terminate zombie browser processes after timeouts
  • Not implementing proper error handling that ensures cleanup on exceptions

The solution involves rigorous lifecycle management: always close browser instances in finally blocks, implement timeout handlers that forcibly terminate hung processes, and monitor memory usage for both JVM heap and native memory consumption.
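The timeout-handling idea, terminating work that hangs instead of letting it hold resources forever, can be sketched with plain JDK concurrency primitives (the task here just sleeps to simulate a hung browser call; real code would also call the browser library's close method in the cancellation path):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutCleanupDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Simulates a browser operation that hangs
        Future<String> result = pool.submit(() -> {
            Thread.sleep(60_000);
            return "never reached";
        });

        try {
            result.get(200, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Forcibly cancel the hung task so its resources can be released
            result.cancel(true);
            System.out.println("timed out, cancelled: " + result.isCancelled());
        } finally {
            pool.shutdownNow();
        }
    }
}
```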

If you're running browser automation at scale, managing the infrastructure yourself becomes increasingly painful. You're not just dealing with Java memory anymore; you're handling Chrome process crashes, version compatibility, resource exhaustion under load, and the operational burden of keeping everything running smoothly.

When browser automation runs inside your Java infrastructure, potential memory leak issues compound quickly. You're managing the Java heap space, Chrome's native memory footprint, native process lifecycles, and the interactions between all three. Browserless offloads the browser execution entirely, eliminating the most unpredictable source of memory leaks from your stack.

Isolated browser sessions with automatic cleanup

With Browserless, each automation task runs in an isolated browser session on managed infrastructure. When your Java code connects to Browserless via WebSocket, the browser instance lives on Browserless servers rather than consuming memory resources on your application servers. Session timeouts and cleanup happen automatically. If a script hangs or a connection drops, Browserless terminates the browser and reclaims all the memory without intervention.

Here's a typical Java integration using Playwright:

import com.microsoft.playwright.*;
import java.nio.file.Paths;

public class BrowserlessExample {
    public static void main(String[] args) {
        try (Playwright playwright = Playwright.create()) {
            String token = System.getenv("BROWSERLESS_TOKEN");

            // Browser runs on Browserless, not your server
            Browser browser = playwright.chromium().connectOverCDP(
                String.format("wss://production-sfo.browserless.io?token=%s", token)
            );

            BrowserContext context = browser.newContext();
            Page page = context.newPage();

            page.navigate("https://example.com");
            page.screenshot(new Page.ScreenshotOptions()
                .setPath(Paths.get("screenshot.png")));

            browser.close();
        }
    }
}

Your Java application maintains a lightweight WebSocket connection. The heavy lifting -- rendering engines, JavaScript execution, DOM manipulation -- happens remotely. If your JVM experiences memory pressure, it doesn't cascade into browser crashes. If Chrome has issues, your application continues running.

No zombie processes or orphaned memory

Self-hosted browser automation frequently leaves behind zombie Chrome processes. A timeout occurs, an exception isn't handled properly, or the Java process terminates unexpectedly. The browser processes keep running, consuming native memory until someone notices or the server runs out of memory resources.

Browserless manages browser lifecycles independently. Configurable session timers automatically terminate browsers that exceed their allotted time. Health checks detect unresponsive sessions and clean them up. The reconnect API lets you persist sessions intentionally when needed while ensuring everything else gets cleaned up on schedule.

This architectural separation means your Java applications never accumulate orphaned browser processes. Native memory growth from browser automation becomes a non-issue because browsers don't run in your native heap. You can focus on preventing memory leaks in your Java code without worrying about Chrome's memory behavior.

Scaling without memory management overhead

When you scale browser automation on your own infrastructure, memory becomes the primary constraint. Each concurrent browser instance needs hundreds of megabytes of native memory. Running ten parallel sessions might require dedicating 4-8GB just to Chrome processes, separate from your JVM heap. Autoscaling based on request volume risks overwhelming available memory before the CPU becomes a bottleneck.

Browserless handles scaling and load balancing internally. Request surges queue automatically rather than spawning unbounded browser instances that would leak memory under load. You configure concurrency limits on your Browserless account rather than engineering memory-aware autoscaling for your application servers. The result is predictable memory usage on your side -- just the overhead of maintaining WebSocket connections -- while Browserless absorbs the variable load.

Monitoring without profiler overhead

Profiling tools like VisualVM work well for JVM memory analysis, but can't easily observe memory consumption in spawned Chrome processes. You end up cobbling together JVM metrics, OS-level process monitoring via native tools, and custom instrumentation to get a complete picture of memory usage during browser automation.

Browserless provides built-in monitoring for browser sessions. Dashboard metrics show session durations, queue times, success rates, and for enterprise customers, CPU and memory usage on dedicated workers. When something goes wrong, the session replay feature lets you watch exactly what happened in the browser. You don't need to correlate heap dumps with Chrome process memory to understand where memory resources went.

When to move browser automation to Browserless

If your Java applications experience any of these patterns, Browserless likely solves the underlying problem:

  • Memory usage climbs steadily when browser automation is active, even with proper cleanup code
  • Zombie Chrome processes accumulate between deployments or after exceptions
  • GC pauses spike during parallel browser sessions
  • Native memory errors occur despite healthy JVM heap metrics
  • Operations spends significant time debugging browser-related resource exhaustion

The integration is straightforward: replace local browser launches with WebSocket connections to Browserless endpoints. Your automation logic stays the same. Selenium and Playwright both work with simple connection string changes, and Java applications using either library can migrate incrementally, starting with the most problematic automation tasks.

Conclusion

Memory leaks in Java are rarely catastrophic bugs that crash your application immediately. They're slow accumulations that degrade your application's performance over weeks or months, making them easy to miss until they become critical. By understanding the common causes -- static field retention, unclosed resources, missing equals() and hashCode() implementations, inner class references to outer classes, and ThreadLocal misuse -- you can avoid memory leaks from the start.

When leaks do occur, verbose garbage collection logs, Java profiler tools like VisualVM, and heap dump analysis with Eclipse Memory Analyzer give you the visibility to diagnose memory leaks effectively. Production monitoring with passive metrics collection catches problems early, before they impact users.

For teams running browser automation alongside Java applications, the memory management challenges multiply. Every local browser instance adds unpredictable native memory consumption that's difficult to profile and even harder to control at scale.

Browserless eliminates this entire category of problems by moving browser execution off your servers.

You get the same automation capabilities through Playwright, but the memory-hungry browser processes run on managed infrastructure with automatic cleanup, session timeouts, and built-in monitoring. Instead of debugging native memory leaks in your Java infrastructure, you connect to browsers that scale automatically and clean up after themselves. Sign up for a free Browserless account and connect your Java automation in minutes.

Java memory leak FAQs

How to check memory leaks in Java applications?

Start by enabling GC logging with -Xlog:gc* (Java 9+) or -verbose:gc (Java 8) to monitor heap usage trends. If the post-GC heap size grows steadily over time, you likely have a leak. For deeper investigation, capture a heap dump using jmap -dump:live,format=b,file=heap.hprof $PID and analyze it with Eclipse Memory Analyzer to identify which objects are consuming memory and why they can't be collected.

How to check memory leaks in Java Spring Boot?

Spring Boot applications can use the same techniques as any Java application. Add the JVM argument -XX:+HeapDumpOnOutOfMemoryError to automatically capture dumps on crash. Spring Boot Actuator exposes JVM memory metrics at /actuator/metrics/jvm.memory.used that you can monitor over time.

Common Spring-specific leak sources include beans with singleton scope holding request-scoped data, unclosed JdbcTemplate connections, and @Async tasks that reference large objects.

How to handle memory leaks in Java?

Once identified, fixing a leak typically involves one of these approaches:

  • Removing unnecessary static references and implementing cleanup for collections that must be static.
  • Wrapping all AutoCloseable resources in try-with-resources blocks.
  • Implementing proper equals() and hashCode() methods for objects used as collection keys.
  • Converting non-static inner classes to static nested classes when outer class access isn't needed.
  • Calling ThreadLocal.remove() at the end of each request or task.
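For the first item in that list, a bounded cache is often enough. LinkedHashMap's removeEldestEntry hook gives you a simple LRU cache with a hard size cap:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCacheDemo {
    // An LRU cache that never grows beyond maxEntries
    static <K, V> Map<K, V> lruCache(int maxEntries) {
        // accessOrder=true makes iteration order least-recently-used first
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = lruCache(100);
        for (int i = 0; i < 10_000; i++) {
            cache.put(i, "value-" + i);
        }
        // The cache stays bounded no matter how many entries are inserted
        System.out.println("size: " + cache.size());
    }
}
```

Note that this implementation is not thread-safe; wrap it with Collections.synchronizedMap() or use a purpose-built cache library if it's shared across threads.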

How to debug a memory leak in Java?

Debugging memory leaks follows a systematic process.

  1. Reproduce the issue by running the application under load until heap usage grows, and capture a heap dump during the growth phase.
  2. Open the dump in Eclipse MAT and run the Leak Suspects report.
  3. Examine the dominator tree to see which object graphs retain the most memory.
  4. Use the path-to-GC-roots feature to understand why specific objects can't be collected.

The reference chain usually points directly to the problematic code holding onto objects too long.