A MemoryAwareThreadPoolExecutor which makes sure the events from the same Channel are executed sequentially.

NOTE: This thread pool inherits most characteristics of its super type, so please make sure to refer to MemoryAwareThreadPoolExecutor to understand how it works basically.
Event execution order
For example, let's say there are two executor threads that handle the events
from the two channels:
-------------------------------------> Timeline ------------------------------------>
Thread X: --- Channel A (Event A1) --. .-- Channel B (Event B2) --- Channel B (Event B3) --->
\ /
X
/ \
Thread Y: --- Channel B (Event B1) --' '-- Channel A (Event A2) --- Channel A (Event A3) --->
As you can see, the events from different channels are independent of each other. That is, an event of Channel B will not be blocked by an event of Channel A and vice versa, unless the thread pool is exhausted.
Also, it is guaranteed that the invocation will be made sequentially for the
events from the same channel. For example, the event A2 is never executed
before the event A1 is finished. (Although not recommended, if you want the
events from the same channel to be executed simultaneously, please use
MemoryAwareThreadPoolExecutor instead.)
However, it is not guaranteed that the invocation will be made by the same thread for the same channel. The events from the same channel can be executed by different threads. For example, the event A2 is executed by the thread Y while the event A1 was executed by the thread X.
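The per-key ordering described above can be illustrated with plain java.util.concurrent types. The sketch below is a simplified, hypothetical model of the technique, not the actual Netty implementation: a map of serial executors, one per key (standing in for the child executor map), each of which feeds its queued tasks to a shared pool one at a time, so events sharing a key never overlap while events with different keys may run in parallel.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of per-key serialized execution (not Netty API).
public class PerKeyOrderDemo {

    // One SerialExecutor per key, analogous to the child executor map.
    static final Map<Object, SerialExecutor> children = new ConcurrentHashMap<>();
    static final List<String> order = new ArrayList<>();

    static void execute(Executor pool, Object key, Runnable task) {
        children.computeIfAbsent(key, k -> new SerialExecutor(pool)).execute(task);
    }

    // Queues tasks and hands them to the backing pool one at a time.
    static final class SerialExecutor implements Executor {
        private final Queue<Runnable> tasks = new ArrayDeque<>();
        private final Executor pool;
        private Runnable active;

        SerialExecutor(Executor pool) { this.pool = pool; }

        public synchronized void execute(Runnable task) {
            tasks.add(() -> {
                try { task.run(); } finally { next(); }
            });
            if (active == null) next();
        }

        private synchronized void next() {
            if ((active = tasks.poll()) != null) pool.execute(active);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch done = new CountDownLatch(3);
        for (int i = 1; i <= 3; i++) {
            final int n = i;
            execute(pool, "channel-A", () -> {
                synchronized (order) { order.add("A" + n); }
                done.countDown();
            });
        }
        done.await();              // all events of channel-A finished
        pool.shutdown();
        System.out.println(order); // prints [A1, A2, A3] - order preserved
    }
}
```

Even though the two pool threads are free to pick up any queued task, the serial executor only releases the next task of a key after the previous one has finished, which is the ordering guarantee described above.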
Using a different key other than Channel to maintain event order
OrderedMemoryAwareThreadPoolExecutor uses a Channel as a key that is used for maintaining the event execution order, as explained in the previous section. Alternatively, you can extend it to change its behavior. For example, you can change the key to the remote IP address of the peer:
public class RemoteAddressBasedOMATPE extends OrderedMemoryAwareThreadPoolExecutor {

    ... Constructors ...

    @Override
    protected ConcurrentMap<Object, Executor> newChildExecutorMap() {
        // The default implementation returns a special ConcurrentMap that
        // uses identity comparison only (see IdentityHashMap).
        // Because SocketAddress does not work with identity comparison,
        // we need to employ a more generic implementation.
        return new ConcurrentHashMap<Object, Executor>();
    }

    @Override
    protected Object getChildExecutorKey(ChannelEvent e) {
        // Use the IP address of the remote peer as a key.
        return ((InetSocketAddress) e.getChannel().getRemoteAddress()).getAddress();
    }

    // Make public so that you can call it from anywhere.
    @Override
    public boolean removeChildExecutor(Object key) {
        return super.removeChildExecutor(key);
    }
}
Please be very careful about a memory leak of the child executor map. You must call removeChildExecutor(Object) when the life cycle of the key ends (e.g. all connections from the same IP were closed). Also, please keep in mind that the key can appear again after calling removeChildExecutor(Object) (e.g. a new connection could come in from the same old IP after removal). If in doubt, prune the old unused or stale keys from the child executor map periodically:
RemoteAddressBasedOMATPE executor = ...;
on every 3 seconds:
for (Iterator<Object> i = executor.getChildExecutorKeySet().iterator(); i.hasNext();) {
InetAddress ip = (InetAddress) i.next();
if (there is no active connection from 'ip' now &&
there has been no incoming connection from 'ip' for last 10 minutes) {
i.remove();
}
}
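A pruning pass along these lines can be sketched with JDK-only types. In this hypothetical sketch, the lastActivity map, the idle threshold, and the timestamps stand in for the real child executor key set and the connection-liveness checks, which depend on how your application tracks connections:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PruneDemo {
    // Hypothetical stand-in for the child executor map: tracks the last
    // activity timestamp (in millis) per remote IP key.
    static final Map<String, Long> lastActivity = new ConcurrentHashMap<>();

    // Remove keys that have been idle for longer than maxIdleMillis.
    static void prune(long now, long maxIdleMillis) {
        for (Iterator<Map.Entry<String, Long>> i =
                 lastActivity.entrySet().iterator(); i.hasNext();) {
            if (now - i.next().getValue() > maxIdleMillis) {
                i.remove(); // analogous to removeChildExecutor(key)
            }
        }
    }

    public static void main(String[] args) {
        lastActivity.put("10.0.0.1", 0L);     // long idle
        lastActivity.put("10.0.0.2", 9_000L); // recently active
        prune(10_000L, 5_000L);
        System.out.println(lastActivity.keySet()); // prints [10.0.0.2]
    }
}
```

In a real application the prune method would be scheduled periodically (for example with a ScheduledExecutorService) and would call removeChildExecutor(Object) for each pruned key.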
If the expected maximum number of keys is small and deterministic, you could
use a weak key map such as
ConcurrentWeakHashMap
or synchronized
WeakHashMap instead of managing the life cycle of the
keys by yourself.
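A synchronized WeakHashMap can be built from the JDK alone. This hypothetical sketch shows the idea: once a key is no longer strongly referenced anywhere else, its entry becomes eligible to be expunged by the garbage collector without an explicit removal call.

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class WeakKeyMapDemo {
    // Entries whose keys become unreachable may be expunged automatically.
    static final Map<Object, String> children =
            Collections.synchronizedMap(new WeakHashMap<Object, String>());

    // Strongly referenced key: its entry stays in the map.
    static final Object liveKey = new Object();

    public static void main(String[] args) {
        children.put(liveKey, "live");
        children.put(new Object(), "dead"); // no strong reference kept
        System.gc(); // the "dead" entry may vanish after collection
        System.out.println(children.get(liveKey)); // prints "live"
    }
}
```

Note that weak-key expunging happens at the garbage collector's discretion, so this approach trades deterministic cleanup for convenience.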