Last month I wrote my first report for GigaOM Research and it is now online: “The promise of next-generation WAN optimization”.

Here is the beginning of the executive summary:

Bandwidth, throughput, and latency aren’t issues when you are within the boundaries of a data center, but things drastically change when you have to move data over a distance. Applications are designed to process data and provide results as fast as possible, because users and business processes now require instant access to resources of all kinds. This is not easy to accomplish when data is physically far from where it is needed.

In the past decade, with the exponential growth of the internet, remote connectivity, and, later, large quantities of data, lack of bandwidth has become a major issue. A first generation of wide area network (WAN) optimization solutions appeared on the market with the intent of overcoming the constraints of limited-bandwidth connectivity. Sophisticated techniques such as compression, deduplication, traffic shaping, caching, and proxying were combined to minimize traffic between data centers and branch offices and for DC-to-DC communication. WAN optimization can contribute significantly to improving the quality and quantity of services delivered to branch offices, replicating storage over longer distances for disaster recovery (DR) or business continuity (BC), reducing WAN costs, and improving mobile connectivity.

Recently, things have changed significantly. Traditional WAN optimization was conceived mainly to solve a lack of bandwidth, at a time when legacy protocols were designed for local area network (LAN) connectivity, data was neither compressed nor encrypted, and computers were unable to manage huge amounts of complex data. Now things are the other way around…
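To give a rough feel for what data reduction buys on redundant traffic, here is a minimal sketch of deduplication plus compression in Python. The function name, the fixed 4 KB chunk size, and the sample log payload are illustrative choices only; real WAN optimization appliances typically use content-defined chunking, protocol-aware proxies, and shared caches on both ends of the link.

```python
import hashlib
import zlib


def dedupe_and_compress(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks, keep each unique chunk once
    (compressed), and reference repeated chunks by their hash.

    Returns (bytes_sent_over_wan, original_size)."""
    seen = {}           # chunk hash -> index into unique_chunks
    unique_chunks = []  # compressed payloads that would actually cross the WAN
    references = []     # per-chunk index, sent instead of the repeated data

    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen[digest] = len(unique_chunks)
            unique_chunks.append(zlib.compress(chunk))
        references.append(seen[digest])

    sent = sum(len(c) for c in unique_chunks)
    return sent, len(data)


if __name__ == "__main__":
    # Highly redundant sample data, e.g. a log file replicated to a branch office.
    payload = b"2024-01-01 INFO heartbeat ok\n" * 50000
    sent, original = dedupe_and_compress(payload)
    print(f"original: {original} bytes, sent over WAN: {sent} bytes")
```

On redundant data like the sample above, almost everything collapses into a handful of unique, compressed chunks; on already compressed or encrypted traffic the savings largely disappear, which is exactly the shift the report discusses.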

If you are interested in reading more, please follow this link.