Using TCP
The code snippets throughout this section assume the following imports:
import java.net.InetSocketAddress;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;
import akka.io.Tcp;
import akka.io.Tcp.Bound;
import akka.io.Tcp.CommandFailed;
import akka.io.Tcp.Connected;
import akka.io.Tcp.ConnectionClosed;
import akka.io.Tcp.Received;
import akka.io.TcpMessage;
import akka.japi.Procedure;
import akka.util.ByteString;
All of the Akka I/O APIs are accessed through manager objects. When using an I/O API, the first step is to acquire a
reference to the appropriate manager. The code below shows how to acquire a reference to the Tcp
manager.
final ActorRef tcpManager = Tcp.get(getContext().system()).manager();
The manager is an actor that handles the underlying low level I/O resources (selectors, channels) and instantiates workers for specific tasks, such as listening to incoming connections.
Connecting
public class Client extends UntypedActor {

  final InetSocketAddress remote;
  final ActorRef listener;

  public static Props props(InetSocketAddress remote, ActorRef listener) {
    return Props.create(Client.class, remote, listener);
  }

  public Client(InetSocketAddress remote, ActorRef listener) {
    this.remote = remote;
    this.listener = listener;

    final ActorRef tcp = Tcp.get(getContext().system()).manager();
    tcp.tell(TcpMessage.connect(remote), getSelf());
  }

  @Override
  public void onReceive(Object msg) throws Exception {
    if (msg instanceof CommandFailed) {
      listener.tell("failed", getSelf());
      getContext().stop(getSelf());

    } else if (msg instanceof Connected) {
      listener.tell(msg, getSelf());
      getSender().tell(TcpMessage.register(getSelf()), getSelf());
      getContext().become(connected(getSender()));
    }
  }

  private Procedure<Object> connected(final ActorRef connection) {
    return new Procedure<Object>() {
      @Override
      public void apply(Object msg) throws Exception {
        if (msg instanceof ByteString) {
          connection.tell(TcpMessage.write((ByteString) msg), getSelf());

        } else if (msg instanceof CommandFailed) {
          // OS kernel socket buffer was full

        } else if (msg instanceof Received) {
          listener.tell(((Received) msg).data(), getSelf());

        } else if (msg.equals("close")) {
          connection.tell(TcpMessage.close(), getSelf());

        } else if (msg instanceof ConnectionClosed) {
          getContext().stop(getSelf());
        }
      }
    };
  }
}
The first step of connecting to a remote address is sending a Connect message to the TCP manager; in addition to the simplest form shown above it is also possible to specify a local InetSocketAddress to bind to and a list of socket options to apply.
Note
The SO_NODELAY (TCP_NODELAY on Windows) socket option defaults to true in Akka, independently of the OS default settings. This setting disables Nagle's algorithm, considerably improving latency for most applications. It can be overridden by passing SO.TcpNoDelay(false) in the list of socket options of the Connect message.
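As a sketch of that fuller Connect form, assuming the three-argument TcpMessage.connect overload and the TcpSO socket-option factory (check both against your Akka version), placed inside the Client constructor above:

final List<Inet.SocketOption> options = new ArrayList<Inet.SocketOption>();
options.add(TcpSO.tcpNoDelay(false)); // re-enable Nagle's algorithm
tcp.tell(
  TcpMessage.connect(remote,
    new InetSocketAddress("127.0.0.1", 0), // local address to bind to
    options),
  getSelf());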
The TCP manager will then reply either with a CommandFailed message or it will spawn an internal actor representing the new connection. This new actor will then send a Connected message to the original sender of the Connect message.
In order to activate the new connection a Register message must be sent to the connection actor, informing it who shall receive data from the socket. Before this step is done the connection cannot be used, and there is an internal timeout after which the connection actor will shut itself down if no Register message is received.
The connection actor watches the registered handler and closes the connection when that one terminates, thereby cleaning up all internal resources associated with that connection.
The actor in the example above uses become to switch from unconnected to connected operation, demonstrating the commands and events which are observed in that state. For a discussion on CommandFailed see Throttling Reads and Writes below. ConnectionClosed is a trait which marks the different connection close events. The last line handles all connection close events in the same way. It is possible to listen for more fine-grained connection close events, see Closing Connections below.
Accepting connections
public class Server extends UntypedActor {

  final ActorRef manager;

  public Server(ActorRef manager) {
    this.manager = manager;
  }

  public static Props props(ActorRef manager) {
    return Props.create(Server.class, manager);
  }

  @Override
  public void preStart() throws Exception {
    final ActorRef tcp = Tcp.get(getContext().system()).manager();
    tcp.tell(TcpMessage.bind(getSelf(),
      new InetSocketAddress("localhost", 0), 100), getSelf()); // 100 = backlog
  }

  @Override
  public void onReceive(Object msg) throws Exception {
    if (msg instanceof Bound) {
      manager.tell(msg, getSelf());

    } else if (msg instanceof CommandFailed) {
      getContext().stop(getSelf());

    } else if (msg instanceof Connected) {
      final Connected conn = (Connected) msg;
      manager.tell(conn, getSelf());
      final ActorRef handler = getContext().actorOf(
        Props.create(SimplisticHandler.class));
      getSender().tell(TcpMessage.register(handler), getSelf());
    }
  }
}
To create a TCP server and listen for inbound connections, a Bind command has to be sent to the TCP manager. This will instruct the TCP manager to listen for TCP connections on a particular InetSocketAddress; the port may be specified as 0 in order to bind to a random port.
The actor sending the Bind message will receive a Bound message signaling that the server is ready to accept incoming connections; this message also contains the InetSocketAddress to which the socket was actually bound (i.e. resolved IP address and correct port number).
From this point forward the process of handling connections is the same as for outgoing connections. The example demonstrates that handling the reads from a certain connection can be delegated to another actor by naming it as the handler when sending the Register message. Writes can be sent from any actor in the system to the connection actor (i.e. the actor which sent the Connected message). The simplistic handler is defined as:
public class SimplisticHandler extends UntypedActor {
  @Override
  public void onReceive(Object msg) throws Exception {
    if (msg instanceof Received) {
      final ByteString data = ((Received) msg).data();
      System.out.println(data);
      getSender().tell(TcpMessage.write(data), getSelf());
    } else if (msg instanceof ConnectionClosed) {
      getContext().stop(getSelf());
    }
  }
}
For a more complete sample which also takes into account the possibility of failures when sending, please see Throttling Reads and Writes below.
The only difference to outgoing connections is that the internal actor managing the listen port (the sender of the Bound message) watches the actor which was named as the recipient for Connected messages in the Bind message. When that actor terminates the listen port will be closed and all resources associated with it will be released; existing connections will not be terminated at this point.
Closing connections
A connection can be closed by sending one of the commands Close, ConfirmedClose or Abort to the connection actor.
Close will close the connection by sending a FIN message, but without waiting for confirmation from the remote endpoint. Pending writes will be flushed. If the close is successful, the listener will be notified with Closed.
ConfirmedClose will close the sending direction of the connection by sending a FIN message, but data will continue to be received until the remote endpoint closes the connection, too. Pending writes will be flushed. If the close is successful, the listener will be notified with ConfirmedClosed.
Abort will immediately terminate the connection by sending a RST message to the remote endpoint. Pending writes will not be flushed. If the close is successful, the listener will be notified with Aborted.
PeerClosed will be sent to the listener if the connection has been closed by the remote endpoint. By default, the connection will then automatically be closed from this endpoint as well. To support half-closed connections set the keepOpenOnPeerClosed member of the Register message to true, in which case the connection stays open until it receives one of the above close commands.
ErrorClosed will be sent to the listener whenever an error occurs that forces the connection to be closed.
All close notifications are sub-types of ConnectionClosed, so listeners who do not need fine-grained close events may handle all close events in the same way.
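Each of these close commands is sent to the connection actor like any other command message; a minimal sketch (with connection as in the earlier example):

connection.tell(TcpMessage.close(), getSelf());             // flush, then FIN
// connection.tell(TcpMessage.confirmedClose(), getSelf()); // half-close our writing side
// connection.tell(TcpMessage.abort(), getSelf());          // RST, no flushing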
Writing to a connection
Once a connection has been established data can be sent to it from any actor in the form of a Tcp.WriteCommand. Tcp.WriteCommand is an abstract class with three concrete implementations:
- Tcp.Write: The simplest WriteCommand implementation which wraps a ByteString instance and an "ack" event. A ByteString (as explained in this section) models one or more chunks of immutable in-memory data with a maximum (total) size of 2 GB (2^31 bytes). Sending both simple variants is sketched after this list.
- Tcp.WriteFile: If you want to send "raw" data from a file you can do so efficiently with the Tcp.WriteFile command. This allows you to designate a (contiguous) chunk of on-disk bytes for sending across the connection without the need to first load them into the JVM memory. As such Tcp.WriteFile can "hold" more than 2 GB of data and an "ack" event if required.
- Tcp.CompoundWrite: Sometimes you might want to group (or interleave) several Tcp.Write and/or Tcp.WriteFile commands into one atomic write command which gets written to the connection in one go. The Tcp.CompoundWrite allows you to do just that and offers three benefits:
  - As explained in the following section the TCP connection actor can only handle one single write command at a time. By combining several writes into one CompoundWrite you can have them be sent across the connection with minimum overhead and without the need to spoon-feed them to the connection actor via an ACK-based message protocol.
  - Because a WriteCommand is atomic you can be sure that no other actor can "inject" other writes into your series of writes if you combine them into one single CompoundWrite. In scenarios where several actors write to the same connection this can be an important feature which can be somewhat hard to achieve otherwise.
  - The "sub writes" of a CompoundWrite are regular Write or WriteFile commands that themselves can request "ack" events. These ACKs are sent out as soon as the respective "sub write" has been completed. This allows you to attach more than one ACK to a Write or WriteFile (by combining it with an empty write that itself requests an ACK) or to have the connection actor acknowledge the progress of transmitting the CompoundWrite by sending out intermediate ACKs at arbitrary points.
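As referenced in the list, here is a minimal sketch of issuing the two simple variants, assuming a connection actor reference named connection and with the event object and file path purely illustrative:

// any Tcp.Event can serve as the "ack" event
final Tcp.Event myAck = new Tcp.Event() {};
// write an in-memory chunk and request an ack once it has been sent
connection.tell(TcpMessage.write(ByteString.fromString("hello"), myAck), getSelf());
// send 1024 on-disk bytes starting at offset 0, without loading them into the JVM
connection.tell(TcpMessage.writeFile("/tmp/payload.bin", 0, 1024, myAck), getSelf());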
Throttling Reads and Writes
The basic model of the TCP connection actor is that it has no internal buffering (i.e. it can only process one write at a time, meaning it can buffer one write until it has been passed on to the O/S kernel in full). Congestion needs to be handled at the user level, for both writes and reads.
For back-pressuring writes there are three modes of operation:
- ACK-based: every Write command carries an arbitrary object, and if this object is not Tcp.NoAck then it will be returned to the sender of the Write upon successfully writing all contained data to the socket. If no other write is initiated before having received this acknowledgement then no failures can happen due to buffer overrun.
- NACK-based: every write which arrives while a previous write is not yet completed will be replied to with a CommandFailed message containing the failed write. Just relying on this mechanism requires the implemented protocol to tolerate skipping writes (e.g. if each write is a valid message on its own and it is not required that all are delivered). This mode is enabled by setting the useResumeWriting flag to false within the Register message during connection activation (a registration sketch follows below).
- NACK-based with write suspending: this mode is very similar to the NACK-based one, but once a single write has failed no further writes will succeed until a ResumeWriting message is received. This message will be answered with a WritingResumed message once the last accepted write has completed. If the actor driving the connection implements buffering and resends the NACK'ed messages after having awaited the WritingResumed signal then every message is delivered exactly once to the network socket.
These write models (with the exception of the second which is rather specialised) are demonstrated in complete examples below. The full and contiguous source is available on GitHub.
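Since the plain NACK-based mode is not demonstrated in the examples below, here is a minimal sketch of enabling it, reusing the three-argument register form shown later in this section (handler and connection as in the samples above):

// plain NACK-based mode: disable write suspending at registration time
connection.tell(TcpMessage.register(handler,
    false,  // keepOpenOnPeerClosed
    false), // useResumeWriting
  getSelf());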
For back-pressuring reads there are two modes of operation:
- Push-reading: in this mode the connection actor sends the registered reader actor incoming data as soon as available as Received events. Whenever the reader actor wants to signal back-pressure to the remote TCP endpoint it can send a SuspendReading message to the connection actor to indicate that it wants to suspend the reception of new data. No Received events will arrive until a corresponding ResumeReading is sent indicating that the receiver actor is ready again (both commands are sketched right after this list).
- Pull-reading: after sending a Received event the connection actor automatically suspends accepting data from the socket until the reader actor signals with a ResumeReading message that it is ready to process more input data. Hence new data is "pulled" from the connection by sending ResumeReading messages.
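Both push-reading commands referenced above are plain command messages; a minimal sketch from the reader actor's point of view:

// signal back-pressure: stop delivery of Received events
connection.tell(TcpMessage.suspendReading(), getSelf());
// ... later, indicate readiness for more data
connection.tell(TcpMessage.resumeReading(), getSelf());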
Note
It should be obvious that all these flow control schemes only work between one writer/reader and one connection actor; as soon as multiple actors send write commands to a single connection no consistent result can be achieved.
ACK-Based Write Back-Pressure
For proper function of the following example it is important to configure the connection to remain half-open when the remote side closed its writing end: this allows the example EchoHandler to write all outstanding data back to the client before fully closing the connection. This is enabled using a flag upon connection activation (observe the Register message):
connection.tell(TcpMessage.register(handler,
    true, // <-- keepOpenOnPeerClosed flag
    true), getSelf());
With this preparation let us dive into the handler itself:
public class SimpleEchoHandler extends UntypedActor {

  final LoggingAdapter log = Logging
    .getLogger(getContext().system(), getSelf());

  final ActorRef connection;
  final InetSocketAddress remote;

  public static final long maxStored = 100000000;
  public static final long highWatermark = maxStored * 5 / 10;
  public static final long lowWatermark = maxStored * 2 / 10;

  public SimpleEchoHandler(ActorRef connection, InetSocketAddress remote) {
    this.connection = connection;
    this.remote = remote;

    // sign death pact: this actor stops when the connection is closed
    getContext().watch(connection);
  }

  @Override
  public void onReceive(Object msg) throws Exception {
    if (msg instanceof Received) {
      final ByteString data = ((Received) msg).data();
      buffer(data);
      connection.tell(TcpMessage.write(data, ACK), getSelf());
      // now switch behavior to "waiting for acknowledgement"
      getContext().become(buffering, false);

    } else if (msg instanceof ConnectionClosed) {
      getContext().stop(getSelf());
    }
  }

  private final Procedure<Object> buffering = new Procedure<Object>() {
    @Override
    public void apply(Object msg) throws Exception {
      if (msg instanceof Received) {
        buffer(((Received) msg).data());

      } else if (msg == ACK) {
        acknowledge();

      } else if (msg instanceof ConnectionClosed) {
        if (((ConnectionClosed) msg).isPeerClosed()) {
          closing = true;
        } else {
          // could also be ErrorClosed, in which case we just give up
          getContext().stop(getSelf());
        }
      }
    }
  };

  // storage omitted ...
}
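The sample elides its storage members ("storage omitted"); a minimal sketch of what they might look like, with the exact ACK representation and field types being assumptions (java.util.Queue and java.util.LinkedList assumed imported):

// hypothetical definitions for the elided members used above
private static final Tcp.Event ACK = new Tcp.Event() {};                // singleton ack event
private final Queue<ByteString> storage = new LinkedList<ByteString>(); // unacknowledged chunks
private long stored = 0;           // bytes currently buffered
private long transferred = 0;      // bytes successfully echoed so far
private boolean suspended = false; // is reading currently suspended?
private boolean closing = false;   // peer closed; drain buffer, then stop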
The principle is simple: when having written a chunk always wait for the Ack to come back before sending the next chunk. While waiting we switch behavior such that new incoming data are buffered. The helper functions used are a bit lengthy but not complicated:
protected void buffer(ByteString data) {
  storage.add(data);
  stored += data.size();

  if (stored > maxStored) {
    log.warning("drop connection to [{}] (buffer overrun)", remote);
    getContext().stop(getSelf());

  } else if (stored > highWatermark) {
    log.debug("suspending reading");
    connection.tell(TcpMessage.suspendReading(), getSelf());
    suspended = true;
  }
}

protected void acknowledge() {
  final ByteString acked = storage.remove();
  stored -= acked.size();
  transferred += acked.size();

  if (suspended && stored < lowWatermark) {
    log.debug("resuming reading");
    connection.tell(TcpMessage.resumeReading(), getSelf());
    suspended = false;
  }

  if (storage.isEmpty()) {
    if (closing) {
      getContext().stop(getSelf());
    } else {
      getContext().unbecome();
    }
  } else {
    connection.tell(TcpMessage.write(storage.peek(), ACK), getSelf());
  }
}
The most interesting part is probably the last: an Ack removes the oldest data chunk from the buffer, and if that was the last chunk then we either close the connection (if the peer closed its half already) or return to the idle behavior; otherwise we just send the next buffered chunk and stay waiting for the next Ack.
Back-pressure can also be propagated across the reading side to the writer on the other end of the connection by sending the SuspendReading command to the connection actor. This will lead to no data being read from the socket anymore (although this does happen after a delay because it takes some time until the connection actor processes this command, hence appropriate head-room in the buffer should be present). In turn the O/S kernel buffer will fill up on our end, then the TCP window mechanism will stop the remote side from writing, filling up its write buffer, until finally the writer on the other side cannot push any data into the socket anymore. This is how end-to-end back-pressure is realized across a TCP connection.
NACK-Based Write Back-Pressure with Suspending
public class EchoHandler extends UntypedActor {

  final LoggingAdapter log = Logging
    .getLogger(getContext().system(), getSelf());

  final ActorRef connection;
  final InetSocketAddress remote;

  public static final long MAX_STORED = 100000000;
  public static final long HIGH_WATERMARK = MAX_STORED * 5 / 10;
  public static final long LOW_WATERMARK = MAX_STORED * 2 / 10;

  private static class Ack implements Event {
    public final int ack;

    public Ack(int ack) {
      this.ack = ack;
    }
  }

  public EchoHandler(ActorRef connection, InetSocketAddress remote) {
    this.connection = connection;
    this.remote = remote;

    // sign death pact: this actor stops when the connection is closed
    getContext().watch(connection);

    // start out in optimistic write-through mode
    getContext().become(writing);
  }

  private final Procedure<Object> writing = new Procedure<Object>() {
    @Override
    public void apply(Object msg) throws Exception {
      if (msg instanceof Received) {
        final ByteString data = ((Received) msg).data();
        connection.tell(TcpMessage.write(data, new Ack(currentOffset())), getSelf());
        buffer(data);

      } else if (msg instanceof Ack) {
        acknowledge(((Ack) msg).ack);

      } else if (msg instanceof CommandFailed) {
        final Write w = (Write) ((CommandFailed) msg).cmd();
        connection.tell(TcpMessage.resumeWriting(), getSelf());
        getContext().become(buffering((Ack) w.ack()));

      } else if (msg instanceof ConnectionClosed) {
        final ConnectionClosed cl = (ConnectionClosed) msg;
        if (cl.isPeerClosed()) {
          if (storage.isEmpty()) {
            getContext().stop(getSelf());
          } else {
            getContext().become(closing);
          }
        }
      }
    }
  };

  // buffering ...

  // closing ...

  // storage omitted ...
}
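The currentOffset() helper and the offset bookkeeping are likewise elided; given the acknowledge(int) helper shown further below, a plausible sketch (an assumption, not the original code) is:

protected int storageOffset = 0; // sequence number of the oldest buffered write

protected int currentOffset() {
  // the next write receives the offset directly after the newest buffered one
  return storageOffset + storage.size();
}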
The principle here is to keep writing until a CommandFailed is received, using acknowledgements only to prune the resend buffer. When such a failure is received, the actor transitions into a different state for handling it and for resending all queued data:
protected Procedure<Object> buffering(final Ack nack) {
  return new Procedure<Object>() {
    private int toAck = 10;
    private boolean peerClosed = false;

    @Override
    public void apply(Object msg) throws Exception {
      if (msg instanceof Received) {
        buffer(((Received) msg).data());

      } else if (msg instanceof WritingResumed) {
        writeFirst();

      } else if (msg instanceof ConnectionClosed) {
        if (((ConnectionClosed) msg).isPeerClosed())
          peerClosed = true;
        else
          getContext().stop(getSelf());

      } else if (msg instanceof Ack) {
        final int ack = ((Ack) msg).ack;
        acknowledge(ack);

        if (ack >= nack.ack) {
          // otherwise it was the ack of the last successful write

          if (storage.isEmpty()) {
            if (peerClosed)
              getContext().stop(getSelf());
            else
              getContext().become(writing);

          } else {
            if (toAck > 0) {
              // stay in ACK-based mode for a short while
              writeFirst();
              --toAck;
            } else {
              // then return to NACK-based again
              writeAll();
              if (peerClosed)
                getContext().become(closing);
              else
                getContext().become(writing);
            }
          }
        }
      }
    }
  };
}
It should be noted that all writes which are currently buffered have also been sent to the connection actor upon entering this state, which means that the ResumeWriting message is enqueued after those writes, leading to the reception of all outstanding CommandFailed messages (which are ignored in this state) before receiving the WritingResumed signal. That latter message is sent by the connection actor only once the internally queued write has been fully completed, meaning that a subsequent write will not fail. This is exploited by the EchoHandler to switch to an ACK-based approach for the first ten writes after a failure before resuming the optimistic write-through behavior.
protected Procedure<Object> closing = new Procedure<Object>() {
  @Override
  public void apply(Object msg) throws Exception {
    if (msg instanceof CommandFailed) {
      // the command can only have been a Write
      connection.tell(TcpMessage.resumeWriting(), getSelf());
      getContext().become(closeResend, false);

    } else if (msg instanceof Ack) {
      acknowledge(((Ack) msg).ack);
      if (storage.isEmpty())
        getContext().stop(getSelf());
    }
  }
};

protected Procedure<Object> closeResend = new Procedure<Object>() {
  @Override
  public void apply(Object msg) throws Exception {
    if (msg instanceof WritingResumed) {
      writeAll();
      getContext().unbecome();

    } else if (msg instanceof Ack) {
      acknowledge(((Ack) msg).ack);
    }
  }
};
Closing the connection while still sending all data is a bit more involved than in the ACK-based approach: the idea is to always send all outstanding messages and acknowledge all successful writes, and if a failure happens then switch behavior to await the WritingResumed event and start over.
The helper functions are very similar to the ACK-based case:
protected void buffer(ByteString data) {
  storage.add(data);
  stored += data.size();

  if (stored > MAX_STORED) {
    log.warning("drop connection to [{}] (buffer overrun)", remote);
    getContext().stop(getSelf());

  } else if (stored > HIGH_WATERMARK) {
    log.debug("suspending reading at {}", currentOffset());
    connection.tell(TcpMessage.suspendReading(), getSelf());
    suspended = true;
  }
}

protected void acknowledge(int ack) {
  assert ack == storageOffset;
  assert !storage.isEmpty();

  final ByteString acked = storage.remove();
  stored -= acked.size();
  transferred += acked.size();
  storageOffset += 1;

  if (suspended && stored < LOW_WATERMARK) {
    log.debug("resuming reading");
    connection.tell(TcpMessage.resumeReading(), getSelf());
    suspended = false;
  }
}
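The buffering and closing states also rely on writeFirst and writeAll helpers to resend from the buffer; their bodies are not shown in this section, so the following is a sketch under the same assumed bookkeeping:

// resend only the oldest buffered chunk (used while in ACK-based mode)
protected void writeFirst() {
  connection.tell(TcpMessage.write(storage.peek(), new Ack(storageOffset)), getSelf());
}

// resend every buffered chunk in order (returning to optimistic write-through)
protected void writeAll() {
  int i = 0;
  for (ByteString data : storage) {
    connection.tell(TcpMessage.write(data, new Ack(storageOffset + i++)), getSelf());
  }
}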
Read Back-Pressure with Pull Mode
When using push-based reading, data coming from the socket is sent to the actor as soon as it is available. In the case of the previous echo server example this meant that we needed to maintain a buffer of incoming data, since the rate of writing might be slower than the rate at which new data arrives.
With the pull mode this buffer can be completely eliminated, as the following snippet demonstrates:
@Override
public void preStart() throws Exception {
  connection.tell(TcpMessage.resumeReading(), getSelf());
}

@Override
public void onReceive(Object message) throws Exception {
  if (message instanceof Tcp.Received) {
    ByteString data = ((Tcp.Received) message).data();
    connection.tell(TcpMessage.write(data, new Ack()), getSelf());
  } else if (message instanceof Ack) {
    connection.tell(TcpMessage.resumeReading(), getSelf());
  }
}
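The Ack event used in this snippet is user-defined and its definition is elided; a minimal version might be:

// hypothetical event marking completion of the previous write
private static class Ack implements Tcp.Event {}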
The idea here is that reading is not resumed until the previous write has been completely acknowledged by the connection actor. Every pull mode connection actor starts from suspended state. To start the flow of data we send a ResumeReading in the preStart method to tell the connection actor that we are ready to receive the first chunk of data. Since we only resume reading when the previous data chunk has been completely written there is no need for maintaining a buffer.
To enable pull reading on an outbound connection the pullMode parameter of the Connect should be set to true:
final List<Inet.SocketOption> options = new ArrayList<Inet.SocketOption>();
tcp.tell(
  TcpMessage.connect(new InetSocketAddress("localhost", 3000),
    null,     // no explicit local address to bind to
    options,  // socket options
    null,     // no connect timeout
    true),    // pullMode
  getSelf()
);
Pull Mode Reading for Inbound Connections
The previous section demonstrated how to enable pull reading mode for outbound connections, but it is also possible to create a listener actor with this mode of reading by setting the pullMode parameter of the Bind command to true:
tcp = Tcp.get(getContext().system()).manager();
final List<Inet.SocketOption> options = new ArrayList<Inet.SocketOption>();
tcp.tell(
  TcpMessage.bind(getSelf(),
    new InetSocketAddress("localhost", 0),
    100,      // backlog
    options,  // socket options
    true),    // pullMode
  getSelf()
);
One of the effects of this setting is that all connections accepted by this listener actor will use pull mode reading. Another effect is that, in addition to setting all inbound connections to pull mode, accepting connections becomes pull-based, too: after handling one (or more) Connected events the listener actor has to be resumed by sending it a ResumeAccepting message.
Listener actors with pull mode start suspended, so to start accepting connections a ResumeAccepting command has to be sent to the listener actor after binding has succeeded:
public void onReceive(Object message) throws Exception {
  if (message instanceof Tcp.Bound) {
    listener = getSender();
    // Accept connections one by one
    listener.tell(TcpMessage.resumeAccepting(1), getSelf());

  } else if (message instanceof Tcp.Connected) {
    ActorRef handler = getContext().actorOf(
      Props.create(PullEcho.class, getSender()));
    getSender().tell(TcpMessage.register(handler), getSelf());
    // Resume accepting connections
    listener.tell(TcpMessage.resumeAccepting(1), getSelf());
  }
}
As shown in the example, after handling an incoming connection we need to resume accepting again. The ResumeAccepting message accepts a batchSize parameter that specifies how many new connections are accepted before the next ResumeAccepting message is needed to resume handling of new connections.