Saturday, May 18, 2013

My take on serialization (Part III: deserialize)

After covering serialization, one fundamental piece is still missing: deserialization!

There is not much to say here: this is simply the inverse of the operation we performed during serialization... therefore similar patterns apply.
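
Roughly, the deserializer looks something like this (a simplified sketch: the names mirror the serialize_helper pattern from the previous post, counts are stored as size_t, and the full code on github differs in the details; the tuple case is discussed below):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

typedef std::vector<uint8_t> StreamType;

template <class T>
struct deserialize_helper {
    // POD / integral types: copy sizeof(T) raw bytes and advance the cursor
    static T apply(StreamType::const_iterator& begin,
                   StreamType::const_iterator end) {
        T val;
        std::memcpy(&val, &*begin, sizeof(T));
        begin += sizeof(T);
        return val;
    }
};

template <>
struct deserialize_helper<std::string> {
    static std::string apply(StreamType::const_iterator& begin,
                             StreamType::const_iterator end) {
        // the element count was prepended during serialization
        size_t size = deserialize_helper<size_t>::apply(begin, end);
        std::string str(begin, begin + size);
        begin += size;
        return str;
    }
};

template <class T>
struct deserialize_helper<std::vector<T>> {
    static std::vector<T> apply(StreamType::const_iterator& begin,
                                StreamType::const_iterator end) {
        size_t size = deserialize_helper<size_t>::apply(begin, end);
        std::vector<T> vec;
        vec.reserve(size);
        for (size_t i = 0; i < size; ++i)
            vec.push_back(deserialize_helper<T>::apply(begin, end));
        return vec;
    }
};

template <class T>
T deserialize(const StreamType& stream) {
    StreamType::const_iterator it = stream.begin();
    return deserialize_helper<T>::apply(it, stream.end());
}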

And we are done! 

In the std::tuple<T...> deserializer I would have liked to use the commented line. With that line I could remove the 20+ lines of code taken up by the deserialize_tuple method. However, that way the object is deserialized in reverse order: the type is correct, but since the arguments of make_tuple happen to be evaluated right-to-left here, the resulting elements of the tuple are inverted, so a serialized tuple (1,2,3) comes back as (3,2,1) :(. The reason is that the apply function has side effects and in C++ we cannot rely on the evaluation order of function arguments, so this code is not safe and it is better to stick with the verbose solution (even though the one-liner might happen to work on some C++ compilers).
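
To make the problem concrete, here is a sketch of the tuple case (it extends the snippet above; the name deserialize_tuple follows the actual code, the rest is simplified):

#include <tuple>

// Tempting one-liner:
//
//   return std::make_tuple(deserialize_helper<T>::apply(begin, end)...);
//
// apply() advances the stream cursor as a side effect and the evaluation
// order of function arguments is unspecified, so a serialized (1,2,3) may
// come back as (3,2,1). The verbose but safe alternative sequences the
// elements explicitly:

template <size_t Idx, class... T>
struct deserialize_tuple {
    static void apply(std::tuple<T...>& res,
                      StreamType::const_iterator& begin,
                      StreamType::const_iterator end) {
        typedef typename std::tuple_element<
            sizeof...(T) - Idx, std::tuple<T...>>::type ElemT;
        // deserialize the current element, then recurse on the remaining ones
        std::get<sizeof...(T) - Idx>(res) =
            deserialize_helper<ElemT>::apply(begin, end);
        deserialize_tuple<Idx - 1, T...>::apply(res, begin, end);
    }
};

// base case: no elements left
template <class... T>
struct deserialize_tuple<0, T...> {
    static void apply(std::tuple<T...>&, StreamType::const_iterator&,
                      StreamType::const_iterator) {}
};

template <class... T>
struct deserialize_helper<std::tuple<T...>> {
    static std::tuple<T...> apply(StreamType::const_iterator& begin,
                                  StreamType::const_iterator end) {
        std::tuple<T...> res;
        deserialize_tuple<sizeof...(T), T...>::apply(res, begin, end);
        return res;
    }
};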

Just to check that everything works fine, we write our usual test cases using google-test:
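
Something along these lines (a sketch: the test name and the message layout are made up, and it assumes the serialize(obj, stream) / deserialize<T>(stream) interface sketched above):

#include <gtest/gtest.h>
#include <cstdint>
#include <string>
#include <tuple>
#include <vector>

TEST(DeserializeTest, TupleRoundTrip) {
    typedef std::tuple<uint32_t, std::string, std::vector<uint16_t>> MsgT;
    MsgT in = std::make_tuple(42u, std::string("hello"),
                              std::vector<uint16_t>{1, 2, 3});
    StreamType stream;
    serialize(in, stream);
    // deserializing must give back the elements in the original order
    EXPECT_EQ(in, deserialize<MsgT>(stream));
}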

The last thing to do is run a complete benchmark where we serialize and deserialize an object and compare against boost::serialization. This time I compiled everything with optimizations enabled (-O3) and I am using gcc 4.8.0 20130502 (pre-release).


The code is similar to the one we saw in the previous post; this time I add a call to deserialize and a stupid if to make sure the compiler is not doing any dead-code elimination. The code for boost::serialization is similar, just trust me (I know, I am Italian... it might be difficult... but come on, as my supervisor says... "give me a break").
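
For reference, the benchmark loop looks roughly like this (a sketch: the iteration count and the message used here are placeholders, not the exact ones behind the numbers below):

#include <chrono>
#include <cstdint>
#include <iostream>
#include <string>
#include <tuple>
#include <vector>

// assumes the serialize/deserialize sketches from this series
int main() {
    typedef std::tuple<uint32_t, std::string, std::vector<uint16_t>> MsgT;
    const MsgT msg = std::make_tuple(7u, std::string("payload"),
                                     std::vector<uint16_t>{10, 20, 30});

    auto start = std::chrono::high_resolution_clock::now();
    for (size_t i = 0; i < 1000000u; ++i) {
        StreamType stream;
        serialize(msg, stream);
        MsgT copy = deserialize<MsgT>(stream);
        // the "stupid if": use the result so the compiler cannot
        // dead-code-eliminate the whole loop body
        if (std::get<0>(copy) != std::get<0>(msg)) {
            std::cerr << "round-trip mismatch" << std::endl;
            return 1;
        }
    }
    auto end = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(
                     end - start).count() << " ms" << std::endl;
    return 0;
}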

The result is, well... quite impressive. I didn't do this exercise with performance in mind; rather, my goal was to eliminate a dependency on the boost libraries. Now I realize that boost is definitely doing something really wrong in the serialization library: the extra type information it stores does not justify such a huge performance penalty. My solution is 20x faster! Since the messages I produce are half the size (thanks to the missing type info) I would expect boost to be only about twice as slow.

I am frankly quite pleased by what I saw within the libWater project after replacing boost::serialization with this solution: an overall 10% performance improvement, which in HPC is quite welcome.

The full code, plus the test cases are available on github (under the BSD license): https://github.com/motonacciu/meta-serialization
(contributions are welcome)

Read: PART I: get_size(...)


C++ <3

Friday, May 17, 2013

My take on serialization (Part II: serialize)

After discussing the get_size(...) functor, which given an object returns its serialized size in bytes, we can go on and write the serialize function.

We can follow the same pattern as get_size, but this time we have to write the content to a stream.

As before, we have a specialization of the serialize_helper template for tuples, vectors, strings and POD datatypes. In order to be efficient we pre-size the output vector in lines 66-67 of the full code (we know exactly how many bytes are needed to store the object) thanks to the get_size predicate.
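
A simplified sketch of the serializer (it relies on get_size from the first post; the real code on github differs in the details, and the tuple case is omitted here):

#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

typedef std::vector<uint8_t> StreamType;

template <class T> size_t get_size(const T& obj);  // from the previous post

template <class T>
struct serialize_helper {
    // POD / integral types: append the raw bytes of the object
    static void apply(const T& obj, StreamType& res) {
        const uint8_t* ptr = reinterpret_cast<const uint8_t*>(&obj);
        res.insert(res.end(), ptr, ptr + sizeof(T));
    }
};

template <>
struct serialize_helper<std::string> {
    static void apply(const std::string& obj, StreamType& res) {
        // prepend the number of characters, then the characters themselves
        serialize_helper<size_t>::apply(obj.length(), res);
        res.insert(res.end(), obj.begin(), obj.end());
    }
};

template <class T>
struct serialize_helper<std::vector<T>> {
    static void apply(const std::vector<T>& obj, StreamType& res) {
        // prepend the element count, then serialize each element
        serialize_helper<size_t>::apply(obj.size(), res);
        for (typename std::vector<T>::const_iterator it = obj.begin();
             it != obj.end(); ++it)
            serialize_helper<T>::apply(*it, res);
    }
};

// (the std::tuple<T...> specialization recurses over the elements, omitted here)

template <class T>
void serialize(const T& obj, StreamType& res) {
    res.reserve(res.size() + get_size(obj));  // pre-size the buffer thanks to get_size
    serialize_helper<T>::apply(obj, res);
}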

Let's write some unit tests (because they are very important):
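
For example, something like this (a sketch with made-up names, assuming the serialize/get_size interface above):

#include <gtest/gtest.h>
#include <cstdint>
#include <string>
#include <tuple>

TEST(SerializeTest, StreamSizeMatchesGetSize) {
    std::tuple<uint32_t, std::string> msg =
        std::make_tuple(1u, std::string("abc"));
    StreamType stream;
    serialize(msg, stream);
    // the stream must contain exactly the number of bytes predicted by get_size
    EXPECT_EQ(get_size(msg), stream.size());
}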


So it's time to run some benchmarks. YAY! Let's compare this serializer with boost and check whether spending time re-implementing this was worth it. Since I don't have much time, we only run one experiment... if you are interested you can run more :)


So, as far as the size of the serialized stream is concerned, boost uses 63 bytes while ours uses only 26 bytes (less than half). Performance-wise, boost::serialization needs 4.410 seconds to run the test while our serialization solution needs 0.975 seconds, more than 4 times faster. Of course we are not saying that boost::serialization sucks... as a matter of fact it solves a different problem, since it also stores the type information, which allows anyone (even a Java program) to reload the stream. Our serialization strategy (as I explained in the first post) relies on the fact that the type is known on the receiver side (therefore we can avoid storing type information).

Let's show some graphs comparing our serialization strategy with boost::serialization:

Thursday, May 16, 2013

My take on C++ serialization (Part I: get_size)

Long time, no see!

Lately I have been busy writing... and writing... and we know that: "All writing and no coding makes Simone a dull boy" :) so I needed to go back to C++ writing... which means new blog entries!

Lately, in a project (libwater, soon to be released) I have been using boost::serialization to transfer C++ objects from one node to another over the network. Boost serialization is pretty neat, especially when you are under a deadline, but releasing code with a dependency on boost is in most cases overkill, especially when the boost library you are using is not a "header-only" one, which means the user needs to have it installed on their system. Since the project targets large clusters, this is always a big problem: such libraries are missing in the majority of cases (or only a very old version is installed), meaning that the user has to do a lot of installation work just to try out our library.

Besides that, boost::serialization was doing more than we actually needed. I am in the particular case where I know exactly the type of the object I am receiving on the remote node, therefore I don't need to store the type information in the serialization stream as boost does. This is typical when, for example, you are implementing a network protocol where the structure of the messages is known: in such a case we can assign a tag to each message type and map each tag to a C++ type. Since the list of messages is known statically, we can use some meta-programming just to have some fun.
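
Just to illustrate the idea (this is not code from libwater, the tags and payload types are invented): every tag is mapped at compile time to the C++ type of its payload, so the receiver can call deserialize on the right type directly.

#include <cstdint>
#include <string>
#include <tuple>
#include <vector>

enum MessageTag : uint8_t { NODE_HELLO = 0, TASK_SUBMIT = 1 };

template <MessageTag Tag> struct payload;   // map tag -> C++ type

template <> struct payload<NODE_HELLO> {
    typedef std::tuple<uint32_t, std::string> type;           // rank, hostname
};
template <> struct payload<TASK_SUBMIT> {
    typedef std::tuple<uint32_t, std::vector<uint64_t>> type;  // id, arguments
};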

Actually I could literally halve the size of each transferred message by storing only the object content. Therefore I started to work on a "header-only" serialization interface... of course it had to involve template meta-programming! :) I am going to publish the various pieces of my take on object serialization over multiple posts (otherwise it may get too heavy to read).

The first thing (which I will introduce today) was defining a functor which, given an object, returns the size (in bytes) of its serialized representation; I called it size_t get_size(const T& obj). This is easy code which needs to be written anyway, so let's get to it.

Given that the types I deal with are (for now) restricted to std::string, integral types (uint8_t, uint16_t, uint32_t, uint64_t), std::vector<T> and std::tuple<T...>, this code should do the trick. While for tuples and integral types we directly account for the value itself, for vectors and strings we also prepend the number of elements to be stored (since this information is not explicit in the type).
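
A simplified sketch of what this code looks like (the names follow the post; the actual code on github differs in some details):

#include <cstddef>
#include <string>
#include <tuple>
#include <vector>

template <class T>
struct get_size_helper {
    // integral / POD types: just their raw size
    static size_t value(const T&) { return sizeof(T); }
};

template <>
struct get_size_helper<std::string> {
    static size_t value(const std::string& obj) {
        // element count + the characters themselves
        return sizeof(size_t) + obj.length() * sizeof(char);
    }
};

template <class T>
struct get_size_helper<std::vector<T>> {
    static size_t value(const std::vector<T>& obj) {
        size_t size = sizeof(size_t);               // element count
        for (typename std::vector<T>::const_iterator it = obj.begin();
             it != obj.end(); ++it)
            size += get_size_helper<T>::value(*it);
        return size;
    }
};

// tuples: sum the size of every element, recursing over the index
template <size_t Idx, class... T>
struct tuple_size_helper {
    static size_t value(const std::tuple<T...>& obj) {
        typedef typename std::tuple_element<
            Idx - 1, std::tuple<T...>>::type ElemT;
        return get_size_helper<ElemT>::value(std::get<Idx - 1>(obj))
               + tuple_size_helper<Idx - 1, T...>::value(obj);
    }
};
template <class... T>
struct tuple_size_helper<0, T...> {
    static size_t value(const std::tuple<T...>&) { return 0; }
};

template <class... T>
struct get_size_helper<std::tuple<T...>> {
    static size_t value(const std::tuple<T...>& obj) {
        return tuple_size_helper<sizeof...(T), T...>::value(obj);
    }
};

template <class T>
size_t get_size(const T& obj) { return get_size_helper<T>::value(obj); }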

For tuples I hoped to have a better solution using template pack expansion. This would be possible if, for example, get_size behaved more like sizeof..., i.e. if it could be computed based on the object type alone. That would allow me to write something like std::sum(sizeof(T)...).

However, since we may have strings and vectors as elements of the tuple, this is not possible: we would need to unpack the values of the tuple elements as well (not only their types). I would actually appreciate it if anyone could suggest a better solution for that.
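
For the record, this is the kind of type-only computation I had in mind (my own illustration: the hand-rolled sum stands in for the hypothetical std::sum, and it obviously stops working as soon as a string or vector appears in the tuple, because their serialized size depends on the runtime contents):

#include <cstddef>

constexpr size_t sum() { return 0; }
template <class... Sizes>
constexpr size_t sum(size_t first, Sizes... rest) { return first + sum(rest...); }

// works only when the serialized size depends on the types alone
template <class... T>
constexpr size_t static_tuple_size() { return sum(sizeof(T)...); }

// static_tuple_size<uint32_t, uint64_t>() == 12, computed at compile time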

A simple test:
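
For example (a sketch that assumes the get_size code above, where element counts are stored as size_t):

#include <cstdint>
#include <iostream>
#include <string>
#include <tuple>
#include <vector>

int main() {
    // 4 bytes (uint32_t) + sizeof(size_t) bytes (length prefix) + 5 characters
    std::tuple<uint32_t, std::string> msg =
        std::make_tuple(42u, std::string("hello"));
    std::cout << get_size(msg) << std::endl;  // 17 on a 64-bit machine

    // sizeof(size_t) bytes (element count) + 3 * 2 bytes of payload
    std::vector<uint16_t> vec{1, 2, 3};
    std::cout << get_size(vec) << std::endl;  // 14 on a 64-bit machine
    return 0;
}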



C++ <3