

We will first implement a synchronous application, and then an asynchronous one, so you can easily compare them:

Some of the following code has been trimmed to save space. You will find the full code in the code accompanying this book.

TCP Echo server/clients

For TCP, we can rely on an extra guarantee: each message ends in a line feed ('\n'). Coding Echo servers/clients synchronously is extremely easy.

We will present four programs: a synchronous client, a synchronous server, an asynchronous client, and an asynchronous server.

TCP synchronous client

In most non-trivial examples, it’s usually the client that is easier to code than the server (since the server needs to deal with multiple clients).

The following code shows an exception to the rule:

ip::tcp::endpoint ep( ip::address::from_string("127.0.0.1"), 8001);
size_t read_complete(char * buf, const error_code & err, size_t bytes) {
    if ( err) return 0;
    bool found = std::find(buf, buf + bytes, '\n') < buf + bytes;
    // we read one byte at a time until we reach '\n'; no buffering
    return found ? 0 : 1;
}
void sync_echo(std::string msg) {
    msg += "\n";
    ip::tcp::socket sock(service);
    sock.connect(ep);
    sock.write_some(buffer(msg));
    char buf[1024];
    int bytes = read(sock, buffer(buf), boost::bind(read_complete, buf, _1, _2));
    std::string copy(buf, bytes - 1);
    msg = msg.substr(0, msg.size() - 1);
    std::cout << "server echoed our " << msg << ": "
              << (copy == msg ? "OK" : "FAIL") << std::endl;
    sock.close();
}
int main(int argc, char* argv[]) {
    char* messages[] = { "John says hi", "so does James",
                         "Lucy just got home", "Boost.Asio is Fun!", 0 };
    boost::thread_group threads;
    for ( char ** message = messages; *message; ++message) {
        threads.create_thread( boost::bind(sync_echo, *message));
        boost::this_thread::sleep( boost::posix_time::millisec(100));
    }
    threads.join_all();
}

The function to watch for is sync_echo. It contains all the logic for connecting to a server, sending it a message and waiting for the echo back.

You’ll notice that, for reading, I’ve used the free function read(), because I want to read everything up to '\n'. The sock.read_some() function would not be enough, since that would only read what’s available, which is not necessarily the whole message.

The third argument to the read() function is a completion condition. It returns 0 once the full message has been read. Otherwise, it returns the maximum number of bytes to read in the next step (until the read is complete). In our case, this is always 1, because we never want to mistakenly read more than we need.
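
As an aside (not used in this article), Boost.Asio also provides the free function read_until(), which reads into a streambuf until a delimiter appears; a rough equivalent of the above, assuming the same socket sock, would be:

// alternative: let Asio scan for the delimiter itself
boost::asio::streambuf sb;
size_t bytes = boost::asio::read_until(sock, sb, '\n');
// "bytes" counts everything up to and including the '\n'
std::string line(boost::asio::buffers_begin(sb.data()),
                 boost::asio::buffers_begin(sb.data()) + bytes);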

In main(), we create several threads, one thread for each message to send to the server, and wait for them to complete. If you run the program, you’ll see the following output:

server echoed our John says hi: OK
server echoed our so does James: OK
server echoed our Lucy just got home: OK
server echoed our Boost.Asio is Fun!: OK

Notice that since we’re synchronous, there’s no need to call service.run().

TCP synchronous server

The Echo synchronous server is quite easy to write, as shown in the following code snippet:

io_service service;
size_t read_complete(char * buff, const error_code & err, size_t bytes) {
    if ( err) return 0;
    bool found = std::find(buff, buff + bytes, '\n') < buff + bytes;
    // we read one byte at a time until we reach '\n'; no buffering
    return found ? 0 : 1;
}
void handle_connections() {
    ip::tcp::acceptor acceptor(service, ip::tcp::endpoint(ip::tcp::v4(), 8001));
    char buff[1024];
    while ( true) {
        ip::tcp::socket sock(service);
        acceptor.accept(sock);
        int bytes = read(sock, buffer(buff), boost::bind(read_complete, buff, _1, _2));
        std::string msg(buff, bytes);
        sock.write_some(buffer(msg));
        sock.close();
    }
}
int main(int argc, char* argv[]) {
    handle_connections();
}

The logic of the server is in handle_connections(). Since we’re single-threaded, we accept a client, read the message it sends us, echo it back, and then wait for the next one. If two clients connect at the same time, the second one has to wait until the server has finished servicing the first.

Notice again that since we’re synchronous, there’s no need to call service.run().

TCP asynchronous client

Once we go asynchronous, the code becomes a bit more complicated. We’ll model the connection as a class, talk_to_svr.

By looking at the code snippets in this section, you will notice that every asynchronous operation starts a new asynchronous operation, keeping service.run() busy.

First, the core functionality is:

#define MEM_FN(x)       boost::bind(&self_type::x, shared_from_this())
#define MEM_FN1(x,y)    boost::bind(&self_type::x, shared_from_this(),y)
#define MEM_FN2(x,y,z)  boost::bind(&self_type::x, shared_from_this(),y,z)
class talk_to_svr : public boost::enable_shared_from_this<talk_to_svr>
                  , boost::noncopyable {
    typedef talk_to_svr self_type;
    talk_to_svr(const std::string & message)
        : sock_(service), started_(true), message_(message) {}
    void start(ip::tcp::endpoint ep) {
        sock_.async_connect(ep, MEM_FN1(on_connect,_1));
    }
public:
    typedef boost::system::error_code error_code;
    typedef boost::shared_ptr<talk_to_svr> ptr;
    static ptr start(ip::tcp::endpoint ep, const std::string & message) {
        ptr new_(new talk_to_svr(message));
        new_->start(ep);
        return new_;
    }
    void stop() {
        if ( !started_) return;
        started_ = false;
        sock_.close();
    }
    bool started() { return started_; }
    ...
private:
    ip::tcp::socket sock_;
    enum { max_msg = 1024 };
    char read_buffer_[max_msg];
    char write_buffer_[max_msg];
    bool started_;
    std::string message_;
};

We always want to use shared pointers to talk_to_svr, so that as long as there are asynchronous operations pending on an instance of talk_to_svr, that instance stays alive. To avoid mistakes, such as constructing an instance of talk_to_svr on the stack, I’ve made the constructor private and disallowed copy construction (by deriving from boost::noncopyable).

We have the core functions, such as start(), stop(), and started(), which do just what their names say. To construct a connection, just call talk_to_svr::start(endpoint, message). We also have one read buffer and one write buffer (read_buffer_ and write_buffer_).

The MEM_FN* macros are convenience macros; they enforce always using a shared pointer to *this, via the shared_from_this() function.

To see why this matters, compare the following two ways of setting up the same asynchronous call:

// equivalent to "sock_.async_connect(ep, MEM_FN1(on_connect,_1));"
sock_.async_connect(ep,
    boost::bind(&talk_to_svr::on_connect, shared_from_this(), _1));
// incorrect: binds a raw pointer instead of a shared pointer
sock_.async_connect(ep,
    boost::bind(&talk_to_svr::on_connect, this, _1));

In the former case, we’re creating the async_connect completion handler correctly; the handler holds a shared pointer to the talk_to_svr instance until it is invoked, thus making sure the instance is still alive when that happens.

In the latter case, we’re creating the completion handler incorrectly. By the time it gets called, the talk_to_svr instance could have been deleted!

To read from or write to the socket, you’ll use the following code:

void do_read() {
    async_read(sock_, buffer(read_buffer_),
        MEM_FN2(read_complete,_1,_2), MEM_FN2(on_read,_1,_2));
}
void do_write(const std::string & msg) {
    if ( !started() ) return;
    std::copy(msg.begin(), msg.end(), write_buffer_);
    sock_.async_write_some( buffer(write_buffer_, msg.size()),
        MEM_FN2(on_write,_1,_2));
}
size_t read_complete(const boost::system::error_code & err, size_t bytes) {
    // similar to the one shown in TCP Synchronous Client
}
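
The body of read_complete() is trimmed above; it would presumably mirror the synchronous version, only reading from the member read_buffer_ instead of a passed-in buffer. A minimal sketch, assuming the same one-byte-at-a-time approach:

size_t read_complete(const boost::system::error_code & err, size_t bytes) {
    if ( err) return 0;
    bool found = std::find(read_buffer_, read_buffer_ + bytes, '\n')
                 < read_buffer_ + bytes;
    // keep asking for one more byte until we see the newline
    return found ? 0 : 1;
}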

The do_read() function will make sure that we read a line from the server, at which point on_read() is called. The do_write() function will first copy the message into the buffer (since msg will probably go out of scope and be destroyed by the time the async_write actually takes place), and then make sure on_write() is called after the actual write takes place.

And now the most important functions, the ones that contain the main logic of the class:

void on_connect(const error_code & err) {
    if ( !err) do_write(message_ + "\n");
    else stop();
}
void on_read(const error_code & err, size_t bytes) {
    if ( !err) {
        std::string copy(read_buffer_, bytes - 1);
        std::cout << "server echoed our " << message_ << ": "
                  << (copy == message_ ? "OK" : "FAIL") << std::endl;
    }
    stop();
}
void on_write(const error_code & err, size_t bytes) {
    do_read();
}

After we’re connected, we send the message to the server with do_write(). When the write operation finishes, on_write() gets called, which initiates do_read(). When do_read() completes, on_read() gets called; here, we simply check that the message from the server is an echo of what we sent, and then stop.

We’ll send three messages to the server just to make it a bit more interesting:

int main(int argc, char* argv[]) {
    ip::tcp::endpoint ep( ip::address::from_string("127.0.0.1"), 8001);
    char* messages[] = { "John says hi", "so does James", "Lucy got home", 0 };
    for ( char ** message = messages; *message; ++message) {
        talk_to_svr::start( ep, *message);
        boost::this_thread::sleep( boost::posix_time::millisec(100));
    }
    service.run();
}

The preceding code snippet will generate the following output:

server echoed our John says hi: OK
server echoed our so does James: OK
server echoed our Lucy got home: OK

TCP asynchronous server

The core functionality is similar to the one from the asynchronous client, shown as follows:

class talk_to_client : public boost::enable_shared_from_this<talk_to_client>
                     , boost::noncopyable {
    typedef talk_to_client self_type;
    talk_to_client() : sock_(service), started_(false) {}
public:
    typedef boost::system::error_code error_code;
    typedef boost::shared_ptr<talk_to_client> ptr;
    void start() {
        started_ = true;
        do_read();
    }
    static ptr new_() {
        ptr new_(new talk_to_client);
        return new_;
    }
    void stop() {
        if ( !started_) return;
        started_ = false;
        sock_.close();
    }
    ip::tcp::socket & sock() { return sock_;}
    ...
private:
    ip::tcp::socket sock_;
    enum { max_msg = 1024 };
    char read_buffer_[max_msg];
    char write_buffer_[max_msg];
    bool started_;
};

Since we have a very simple Echo server, there is no need for an is_started() function. For each client, we just read its message, echo it back, and close the connection.

The do_read(), do_write() and read_complete() functions are exactly the same as in the TCP asynchronous client.

The main logic of the class is again in on_read() and on_write():

void on_read(const error_code & err, size_t bytes) {
    if ( !err) {
        std::string msg(read_buffer_, bytes);
        do_write(msg + "\n");
    }
    stop();
}
void on_write(const error_code & err, size_t bytes) {
    do_read();
}

Dealing with the clients is done as follows:

ip::tcp::acceptor acceptor(service, ip::tcp::endpoint(ip::tcp::v4(), 8001));
void handle_accept(talk_to_client::ptr client, const error_code & err) {
    client->start();
    talk_to_client::ptr new_client = talk_to_client::new_();
    acceptor.async_accept(new_client->sock(),
        boost::bind(handle_accept, new_client, _1));
}
int main(int argc, char* argv[]) {
    talk_to_client::ptr client = talk_to_client::new_();
    acceptor.async_accept(client->sock(),
        boost::bind(handle_accept, client, _1));
    service.run();
}

Each time a client connects to the server, handle_accept is called, which will asynchronously start reading from that client, and also asynchronously wait for a new client.

The code

You’ll find all four applications (TCP Echo Sync Client, TCP Echo Sync Server, TCP Echo Async Client, and TCP Echo Async Server) in the code accompanying this book. When testing, you can use any client/server combination (such as an asynchronous client against a synchronous server).

UDP Echo server/clients

Since in UDP not all messages reach the recipient, we can’t rely on the “message ends in newline” guarantee.

We simply echo back each message we receive; there is no socket to close (on the server side), since we’re using UDP.

UDP synchronous Echo client

The UDP Echo client is simpler than the TCP Echo client:

ip::udp::endpoint ep( ip::address::from_string("127.0.0.1"), 8001);
void sync_echo(std::string msg) {
    ip::udp::socket sock(service, ip::udp::endpoint(ip::udp::v4(), 0));
    sock.send_to(buffer(msg), ep);
    char buff[1024];
    ip::udp::endpoint sender_ep;
    int bytes = sock.receive_from(buffer(buff), sender_ep);
    std::string copy(buff, bytes);
    std::cout << "server echoed our " << msg << ": "
              << (copy == msg ? "OK" : "FAIL") << std::endl;
    sock.close();
}
int main(int argc, char* argv[]) {
    char* messages[] = { "John says hi", "so does James", "Lucy got home", 0 };
    boost::thread_group threads;
    for ( char ** message = messages; *message; ++message) {
        threads.create_thread( boost::bind(sync_echo, *message));
        boost::this_thread::sleep( boost::posix_time::millisec(100));
    }
    threads.join_all();
}

The whole logic is in sync_echo(): send the message to the server, receive the echo back, and close the socket.

UDP synchronous Echo server

The UDP Echo server is the easiest server you’ll ever write:

io_service service;
void handle_connections() {
    char buff[1024];
    ip::udp::socket sock(service, ip::udp::endpoint(ip::udp::v4(), 8001));
    while ( true) {
        ip::udp::endpoint sender_ep;
        int bytes = sock.receive_from(buffer(buff), sender_ep);
        std::string msg(buff, bytes);
        sock.send_to(buffer(msg), sender_ep);
    }
}
int main(int argc, char* argv[]) {
    handle_connections();
}

That’s simple, and quite self-explanatory.

I’ll leave the asynchronous UDP client and server as an exercise for the reader.
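
If you want a starting point for the server half, the following is a minimal sketch of my own (it is not from this book; the names start_receive and handle_receive are assumptions), which echoes each datagram back to its sender. It relies on the same trimmed-out declarations (such as using namespace boost::asio) as the other snippets, and to keep it short it sends the echo with a blocking send_to inside the handler:

io_service service;
ip::udp::socket sock(service, ip::udp::endpoint(ip::udp::v4(), 8001));
ip::udp::endpoint sender_ep;
char buff[1024];

void start_receive();
void handle_receive(const boost::system::error_code & err, size_t bytes) {
    if ( !err)
        // echo the datagram back to whoever sent it
        sock.send_to(buffer(buff, bytes), sender_ep);
    // wait for the next datagram, keeping service.run() busy
    start_receive();
}
void start_receive() {
    sock.async_receive_from(buffer(buff), sender_ep,
        boost::bind(handle_receive, _1, _2));
}
int main(int argc, char* argv[]) {
    start_receive();
    service.run();
}

The asynchronous client would follow the same pattern: send the message, asynchronously receive the echo, and compare the two in the receive handler.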

Summary

We’ve written full applications and finally put Boost.Asio to work. The Echo application is a very good way to start learning a library. You can always study and run the code shown in this article to easily remember the library’s fundamentals.
