
Netty source code analysis and memory overflow ideas


After troubleshooting a memory-overflow incident, I read through the relevant Netty source code and would like to share what I learned.

Ways to troubleshoot memory overflow
#

You can use the jcmd command to view Native Memory Tracking (NMT) data; note that the JVM must be started with -XX:NativeMemoryTracking=summary (or detail) for this to report anything. For example:

jcmd {pid} VM.native_memory summary scale=MB

Netty source code analysis
#

Netty Allocator
#

The Netty allocator comes in two flavors: pooled and unpooled. The type can be selected with the VM option io.netty.allocator.type, and the default is pooled. In either mode you can allocate heap (in-heap) or direct (off-heap) buffers.
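A minimal usage sketch (not from the original article, assuming Netty 4.1 on the classpath) of how these choices surface in application code:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;

public class AllocatorChoiceDemo {
    public static void main(String[] args) {
        // ByteBufAllocator.DEFAULT resolves to pooled unless -Dio.netty.allocator.type=unpooled is set
        ByteBufAllocator def = ByteBufAllocator.DEFAULT;
        System.out.println(def.getClass().getSimpleName()); // PooledByteBufAllocator by default

        // The two concrete allocators can also be used explicitly
        ByteBufAllocator pooled = PooledByteBufAllocator.DEFAULT;
        ByteBufAllocator unpooled = UnpooledByteBufAllocator.DEFAULT;

        // Either allocator can hand out heap (in-heap) or direct (off-heap) buffers
        ByteBuf heapBuf = pooled.heapBuffer(256);
        ByteBuf directBuf = unpooled.directBuffer(256);

        heapBuf.release();
        directBuf.release();
    }
}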

The concrete implementations of the pooled and unpooled modes are PooledByteBufAllocator and UnpooledByteBufAllocator. The inheritance relationship is shown in the figure below:

AbstractByteBufAllocator
#

AbstractByteBufAllocator mainly does two things:

  1. Decides whether to allocate heap or direct buffers. The default is direct, i.e. off-heap memory. The VM parameters io.netty.noUnsafe and io.netty.noPreferDirect control whether heap or off-heap memory is used.
  2. Monitors for memory leaks. After each new ByteBuf is created, the toLeakAwareBuffer() method is called to wrap the buffer with leak tracking, so that buffers at risk of leaking can be reported. A small sketch of both behaviors follows this list.
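As a hedged observation-level sketch (assuming Netty 4.1; this shows the effects from user code, not the allocator internals):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.util.ResourceLeakDetector;
import io.netty.util.internal.PlatformDependent;

public class AbstractAllocatorBehaviorDemo {
    public static void main(String[] args) {
        // (1) Heap or direct: PlatformDependent.directBufferPreferred() reflects the
        // io.netty.noUnsafe / io.netty.noPreferDirect settings and drives what buffer() returns.
        System.out.println("prefer direct: " + PlatformDependent.directBufferPreferred());

        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(128);
        System.out.println("allocated direct buffer: " + buf.isDirect());

        // (2) Leak monitoring: toLeakAwareBuffer() wraps buffers according to the current
        // leak detection level (SIMPLE by default; raise with -Dio.netty.leakDetection.level=paranoid).
        System.out.println("leak detection level: " + ResourceLeakDetector.getLevel());

        buf.release();
    }
}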

PooledByteBufAllocator
#

PooledByteBufAllocator is the allocator for pooled mode. It defines many pool-related parameters, such as chunkSize, pageSize, smallCacheSize, and normalCacheSize; I will not go into the details here.
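As a hedged illustration (assuming Netty 4.1), the default values of these parameters can be inspected through PooledByteBufAllocator's static accessors; the chunk size is derived as pageSize << maxOrder:

import io.netty.buffer.PooledByteBufAllocator;

public class PoolDefaultsDemo {
    public static void main(String[] args) {
        System.out.println("numHeapArenas   = " + PooledByteBufAllocator.defaultNumHeapArena());
        System.out.println("numDirectArenas = " + PooledByteBufAllocator.defaultNumDirectArena());
        System.out.println("pageSize        = " + PooledByteBufAllocator.defaultPageSize());   // typically 8192
        System.out.println("maxOrder        = " + PooledByteBufAllocator.defaultMaxOrder());
        System.out.println("smallCacheSize  = " + PooledByteBufAllocator.defaultSmallCacheSize());
        System.out.println("normalCacheSize = " + PooledByteBufAllocator.defaultNormalCacheSize());
        // chunkSize = pageSize << maxOrder
        System.out.println("chunkSize       = "
                + (PooledByteBufAllocator.defaultPageSize() << PooledByteBufAllocator.defaultMaxOrder()));
    }
}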

The specific memory allocation in pooled mode uses the PoolArena class. Let’s first look at the UML diagram of the related classes:

The first things to note are:

  • Memory allocation is done by PooledByteBufAllocator calling PoolArena's allocate() method, which returns the PooledByteBuf class defined in Netty.
  • Memory release is implemented by the PooledByteBuf class calling PoolArena's reallocate() or free() method.
  • PoolArena returns a PooledByteBuf to the allocator, but the actual newChunk operation works on a generic type T: an NIO ByteBuffer in direct (off-heap) mode, or a byte[] array in heap (in-heap) mode.

Therefore, when allocate() runs, it creates a PooledByteBuf whose actual memory lives in an NIO ByteBuffer or a byte[] array, and memory release is likewise driven through that PooledByteBuf object. Allocation and release are explained in detail below:
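Before the details, a small hedged example (assuming Netty 4.1) of the allocate/release flow at the API level; the concrete PooledByteBuf subclass you see depends on whether Unsafe is available:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class PooledAllocateReleaseDemo {
    public static void main(String[] args) {
        PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;

        // allocate() hands back a PooledByteBuf backed by a ByteBuffer (direct) or byte[] (heap)
        ByteBuf direct = alloc.directBuffer(1024);
        ByteBuf heap = alloc.heapBuffer(1024);

        System.out.println(direct.getClass().getSimpleName()); // e.g. PooledUnsafeDirectByteBuf
        System.out.println(heap.getClass().getSimpleName());   // e.g. PooledUnsafeHeapByteBuf

        // release() drops the reference count; at zero the memory goes back to the PoolArena
        direct.release();
        heap.release();
    }
}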

direct off-heap memory

The PooledByteBuf implementations that manage direct off-heap memory fall into two categories, depending on whether Unsafe is available:

  • When Unsafe is not available, PooledDirectByteBuf objects are used for management
  • When Unsafe is available, PooledUnsafeDirectByteBuf objects are used for management

As we have already seen above, in the direct off-heap PooledByteBuf it is the NIO ByteBuffer that actually holds the memory. Allocation and release of that ByteBuffer are implemented in two ways, depending on whether the no-cleaner path is used:

  • In no-cleaner mode, allocation creates the ByteBuffer through its Constructor obtained via reflection;
  • In no-cleaner mode, release calls the UNSAFE native method freeMemory0(long address) directly;
  • In cleaner mode, allocation uses ByteBuffer.allocateDirect, i.e. new DirectByteBuffer;
  • In cleaner mode, release invokes the Cleaner of the NIO DirectByteBuffer via reflection.
static final class DirectArena extends PoolArena<ByteBuffer> {

    @Override
    protected PoolChunk<ByteBuffer> newChunk(int pageSize, int maxPageIdx,
        int pageShifts, int chunkSize) {
        if (directMemoryCacheAlignment == 0) {
            ByteBuffer memory = allocateDirect(chunkSize);
            return new PoolChunk<ByteBuffer>(this, memory, memory, pageSize, pageShifts,
                    chunkSize, maxPageIdx);
        }
        final ByteBuffer base = allocateDirect(chunkSize + directMemoryCacheAlignment);
        final ByteBuffer memory = PlatformDependent.alignDirectBuffer(base, directMemoryCacheAlignment);
        return new PoolChunk<ByteBuffer>(this, base, memory, pageSize,
                pageShifts, chunkSize, maxPageIdx);
    }

    private static ByteBuffer allocateDirect(int capacity) {
        return PlatformDependent.useDirectBufferNoCleaner() ?
            PlatformDependent.allocateDirectNoCleaner(capacity) : ByteBuffer.allocateDirect(capacity);
    }

    @Override
    protected void destroyChunk(PoolChunk<ByteBuffer> chunk) {
        if (PlatformDependent.useDirectBufferNoCleaner()) {
            PlatformDependent.freeDirectNoCleaner((ByteBuffer) chunk.base);
        } else {
            PlatformDependent.freeDirectBuffer((ByteBuffer) chunk.base);
        }
    }

    @Override
    protected PooledByteBuf<ByteBuffer> newByteBuf(int maxCapacity) {
        if (HAS_UNSAFE) {
            return PooledUnsafeDirectByteBuf.newInstance(maxCapacity);
        } else {
            return PooledDirectByteBuf.newInstance(maxCapacity);
        }
    }
}

heap memory

The PooledByteBuf implementations that manage heap memory are likewise split into two categories, depending on whether Unsafe is available:

  • When Unsafe is not available, PooledHeapByteBuf objects are used for management
  • When Unsafe is available, PooledUnsafeHeapByteBuf objects are used for management

As we learned above, heap memory is backed by a byte[] array. An important parameter here is io.netty.uninitializedArrayAllocationThreshold, the threshold at or above which byte arrays are allocated without zero-initialization. The default is 1024.
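This threshold behavior can be observed through PlatformDependent; a hedged example, assuming Netty 4.1:

import io.netty.util.internal.PlatformDependent;

public class UninitializedArrayDemo {
    public static void main(String[] args) {
        // Below io.netty.uninitializedArrayAllocationThreshold (default 1024) this is a plain,
        // zero-filled new byte[]; at or above it, Netty tries the JDK's uninitialized-array allocation.
        byte[] small = PlatformDependent.allocateUninitializedArray(512);
        byte[] large = PlatformDependent.allocateUninitializedArray(4096);
        System.out.println(small.length + " / " + large.length);
    }
}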

The specific allocation and release are as follows:

  • For memory allocation, if the requested array size does not exceed the threshold, the array is created directly with new byte[]; if it exceeds the threshold, an uninitialized byte array is created (reached via reflection)
  • Memory release simply relies on the GC
static final class HeapArena extends PoolArena<byte[]> {

    private static byte[] newByteArray(int size) {
        return PlatformDependent.allocateUninitializedArray(size);
    }

    @Override
    protected PoolChunk<byte[]> newChunk(int pageSize, int maxPageIdx, int pageShifts, int chunkSize) {
        return new PoolChunk<byte[]>(
                this, null, newByteArray(chunkSize), pageSize, pageShifts, chunkSize, maxPageIdx);
    }

    @Override
    protected void destroyChunk(PoolChunk<byte[]> chunk) {
        // Rely on GC.
    }

    @Override
    protected PooledByteBuf<byte[]> newByteBuf(int maxCapacity) {
        return HAS_UNSAFE ? PooledUnsafeHeapByteBuf.newUnsafeInstance(maxCapacity)
                : PooledHeapByteBuf.newInstance(maxCapacity);
    }

}

UnpooledByteBufAllocator
#

UnpooledByteBufAllocator is the allocator for unpooled mode. It defines several inner classes:

  • InstrumentedUnpooledUnsafeHeapByteBuf
  • InstrumentedUnpooledHeapByteBuf
  • InstrumentedUnpooledUnsafeNoCleanerDirectByteBuf
  • InstrumentedUnpooledUnsafeDirectByteBuf
  • InstrumentedUnpooledDirectByteBuf

direct off-heap memory

For direct off-heap memory, the classes to pay attention to are InstrumentedUnpooledDirectByteBuf, InstrumentedUnpooledUnsafeDirectByteBuf and InstrumentedUnpooledUnsafeNoCleanerDirectByteBuf. The three are distinguished as follows:

  • If Unsafe is not available, InstrumentedUnpooledDirectByteBuf is used
  • If Unsafe is available and cleaner mode is in effect, InstrumentedUnpooledUnsafeDirectByteBuf is used
  • If Unsafe is available and no-cleaner mode is in effect, InstrumentedUnpooledUnsafeNoCleanerDirectByteBuf is used

All three ultimately use the NIO ByteBuffer to allocate and release memory (a usage sketch follows the code excerpt below). The specific implementations are:

  • InstrumentedUnpooledDirectByteBuf: memory is allocated with new DirectByteBuffer (ByteBuffer.allocateDirect); memory is released by invoking the Cleaner of the NIO DirectByteBuffer via reflection.
  • InstrumentedUnpooledUnsafeDirectByteBuf: memory is allocated with new DirectByteBuffer (ByteBuffer.allocateDirect); memory is released by invoking the Cleaner of the NIO DirectByteBuffer via reflection.
  • InstrumentedUnpooledUnsafeNoCleanerDirectByteBuf: memory is allocated by creating the ByteBuffer through its Constructor obtained via reflection; memory is released directly with the UNSAFE native method freeMemory0(long address).

public class UnpooledUnsafeDirectByteBuf extends UnpooledDirectByteBuf {

    protected ByteBuffer allocateDirect(int initialCapacity) {
        return ByteBuffer.allocateDirect(initialCapacity);
    }

    protected void freeDirect(ByteBuffer buffer) {
        PlatformDependent.freeDirectBuffer(buffer);
    }
}

class UnpooledUnsafeNoCleanerDirectByteBuf extends UnpooledUnsafeDirectByteBuf {

    UnpooledUnsafeNoCleanerDirectByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
        super(alloc, initialCapacity, maxCapacity);
    }

    @Override
    protected ByteBuffer allocateDirect(int initialCapacity) {
        return PlatformDependent.allocateDirectNoCleaner(initialCapacity);
    }

    @Override
    protected void freeDirect(ByteBuffer buffer) {
        PlatformDependent.freeDirectNoCleaner(buffer);
    }

}
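A hedged usage sketch (assuming Netty 4.1) of how the three direct variants above are selected; the tryNoCleaner flag passed to the constructor and the availability of Unsafe decide which one you get:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.UnpooledByteBufAllocator;

public class UnpooledDirectDemo {
    public static void main(String[] args) {
        // preferDirect = true, disableLeakDetector = false, tryNoCleaner = true
        UnpooledByteBufAllocator alloc = new UnpooledByteBufAllocator(true, false, true);

        ByteBuf buf = alloc.directBuffer(512);
        // With Unsafe available this is the no-cleaner variant; without Unsafe it
        // falls back to the plain (cleaner-based) direct buffer.
        System.out.println(buf.getClass().getSimpleName());

        // Unpooled direct buffers must still be released explicitly so the native memory
        // is freed promptly (freeMemory / Cleaner) instead of waiting for GC.
        buf.release();
    }
}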

heap memory

For heap memory, the classes to pay attention to are InstrumentedUnpooledHeapByteBuf and InstrumentedUnpooledUnsafeHeapByteBuf.

Both are likewise backed by byte[] arrays. The specific allocation and release of memory is as follows:

  • InstrumentedUnpooledHeapByteBuf: memory is allocated directly with new byte[].
  • InstrumentedUnpooledUnsafeHeapByteBuf: memory is allocated as an uninitialized byte array (reached via reflection).
  • For both, memory release relies on the GC
public class UnpooledUnsafeHeapByteBuf extends UnpooledHeapByteBuf {

    @Override
    protected byte[] allocateArray(int initialCapacity) {
        return PlatformDependent.allocateUninitializedArray(initialCapacity);
    }

}

public class UnpooledHeapByteBuf extends AbstractReferenceCountedByteBuf {

    protected byte[] allocateArray(int initialCapacity) {
        return new byte[initialCapacity];
    }

    protected void freeArray(byte[] array) {
        // NOOP
    }
}

Netty ByteBuf
#

This part focuses on the implementation and inheritance relationships among the ByteBuf classes, which gives a deeper understanding of how ByteBuf relates to the underlying ByteBuffer or byte[] array. The UML diagram is as follows, and the key points are described below.

AbstractReferenceCountedByteBuf:

  1. Internally defines a ReferenceCountUpdater to maintain the reference count
  2. Implements the release() method, which decrements the reference count through the updater and, once it reaches zero, calls the subclass's deallocate() method to release the memory (see the snippet below)
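The reference counting can be illustrated with a small hedged example (assuming Netty 4.1):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class RefCountDemo {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(64);
        System.out.println(buf.refCnt()); // 1 right after allocation

        buf.retain();                     // another logical owner -> 2
        System.out.println(buf.refCnt());

        buf.release();                       // back to 1, memory still held
        boolean deallocated = buf.release(); // reaches 0 -> deallocate() runs
        System.out.println(deallocated);     // true
    }
}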

PooledByteBuf:

  1. Defines the generic field T memory that stores the byte[] or ByteBuffer
  2. Records the allocator
  3. Defines the pooled recyclerHandle; its four subclasses each define their own RECYCLER, which the handle in the parent class works with uniformly. The recycler here recycles the ByteBuf object itself.
  4. Implements the deallocate() method, which returns the memory to the pool and recycles the ByteBuf object through the RECYCLER

UnpooledDirectByteBuf:

  1. Defines the ByteBuffer used in direct off-heap memory
  2. Records the allocator
  3. Implements memory allocation and release in safe + cleaner mode; other cases are overridden and implemented by subclasses UnpooledUnsafeDirectByteBuf and UnpooledUnsafeNoCleanerDirectByteBuf.

UnpooledHeapByteBuf:

  1. Defines the byte[] array used in the heap memory
  2. Records the allocator
  3. Implements memory allocation in safe mode; unsafe mode is implemented by the subclass UnpooledUnsafeHeapByteBuf.

Java NIO ByteBuffer
#

After the above introduction, we know that in direct mode Netty's ByteBuf actually operates on Java NIO's ByteBuffer. Its key points are explained below.

MappedByteBuffer:

  • Defines a file descriptor fd for io operations on off-heap memory

DirectByteBuffer:

  • Defines the implementation of the cleaner used for memory release

Note: Netty's heap buffers do not use ByteBuffer directly; the byte[] array is only wrapped into a ByteBuffer when needed.
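A hedged illustration of this note (assuming Netty 4.1): a heap ByteBuf keeps a plain byte[] and only exposes a java.nio.ByteBuffer view on demand, while a direct ByteBuf is backed by a DirectByteBuffer from the start:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import java.nio.ByteBuffer;

public class NioViewDemo {
    public static void main(String[] args) {
        ByteBuf heap = Unpooled.buffer(16);          // backed by byte[]
        heap.writeInt(42);
        ByteBuffer view = heap.nioBuffer();          // byte[] wrapped into a ByteBuffer only here
        System.out.println(view.isDirect());         // false

        ByteBuf direct = Unpooled.directBuffer(16);  // backed by a DirectByteBuffer
        direct.writeInt(42);
        System.out.println(direct.nioBuffer().isDirect()); // true

        heap.release();
        direct.release();
    }
}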

Spring DataBuffer
#

When actually using Reactor Netty, we rarely come into contact with NIO's ByteBuffer or Netty's ByteBuf; instead we use the DataBuffer defined in the spring-core package.

Let’s take a look at the UML diagram related to DataBuffer

Memory leak risk: when using DataBuffer directly, if the implementation is NettyDataBuffer you need to pay attention to releasing it; improper use can lead to memory leaks.
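A hedged sketch of that concern (assuming spring-core and Netty 4.1 on the classpath): with a Netty-backed factory the DataBuffer wraps a reference-counted ByteBuf and must be released explicitly, for example via DataBufferUtils.release():

import io.netty.buffer.PooledByteBufAllocator;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.NettyDataBufferFactory;

import java.nio.charset.StandardCharsets;

public class DataBufferReleaseDemo {
    public static void main(String[] args) {
        NettyDataBufferFactory factory = new NettyDataBufferFactory(PooledByteBufAllocator.DEFAULT);

        DataBuffer buffer = factory.allocateBuffer(256);
        buffer.write("hello", StandardCharsets.UTF_8);

        // Without this release, the underlying pooled ByteBuf is never returned to the pool.
        DataBufferUtils.release(buffer);
    }
}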

The relationship between ByteBuffer/ByteBuf/DataBuffer
#

Finally, a diagram is used to illustrate the relationship between ByteBuffer/ByteBuf/DataBuffer.

As can be seen from the following diagram, there are two main implementations of DataBuffer:

  • DefaultDataBuffer holds a ByteBuffer member variable, i.e. it is implemented on top of the NIO object
  • NettyDataBuffer holds a ByteBuf member variable, i.e. it is implemented on top of the Netty object

Secondly, ByteBuf itself is also implemented on top of the NIO ByteBuffer underneath.

Reference links
#

https://juejin.cn/post/6844904037146443784

https://projectreactor.io/docs/netty/release/reference/index.html#faq.memory-leaks

https://netty.io/wiki/reference-counted-objects.html

https://tech.meituan.com/2018/10/18/netty-direct-memory-screening.html