
How Microsoft truly uses Spark and Cassandra for Big Data analytics


At the recent @Scale Conference in San Jose, Calif., leading figures and experts in computer engineering, coding and cloud computing gathered to share news, views, successes and failures of their profession. One of those experts, Arun Jayandra, software development lead at Microsoft, shared his experiences using Spark cluster computing and Cassandra database technologies for Big Data analytics.

As part of his work on Office 365, Jayandra and his team at Microsoft designed the online office productivity suite to run with three-nines and four-nines availability, that is, 99.9 and 99.99 percent reliability.

Office 365 tenants, or customers, were not satisfied with this level of performance, according to Jayandra. But another issue underlay customer satisfaction with the Office 365 experience: measuring the actual reliability of the applications. “To date, we’re not the most experienced at measuring the availability the tenant is getting,” Jayandra says.

Big believer in Big Data

Naturally, Jayandra’s Microsoft team wanted to use its internal IP to create a Big Data analytics engine for Office 365. But after trying to build the analytics engine with proprietary Microsoft technology, the development team turned to open source solutions to replace its own products.

They did so at least in part based on their need for real-time and batch-mode analytics. For these purposes, a week’s worth of user data insights seemed sufficient, according to Jayandra. But even seven days of user storage and retrieval information proved daunting. “With Office 365 data there is much data velocity,” Jayandra says. “It’s very high frequency data with 10 terabytes (TB) stored a day.”

Having so much customer data on hand posed a lot of risk for the Office 365 team, which needed a protection methodology with resilience and redundancy. “The customer data needed to be protected in multiple geographies replicated across datacenters,” Jayandra says.

He also spotted an issue relating to data signals. “A small set of signals tend to double every eight months. So we needed a model that can scale linearly.” In other words, Microsoft wanted Cassandra, with its “continuous availability, linear scale performance, operational simplicity and easy data distribution across multiple datacenters and cloud availability zones,” as its website notes.
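The talk doesn’t include Microsoft’s actual schema, but a minimal sketch of the kind of multi-datacenter replication being described here, written in Scala against the open source DataStax Java driver, might look like the following. The keyspace name, replication counts and datacenter names are hypothetical, not taken from the talk.

```scala
import com.datastax.oss.driver.api.core.CqlSession

object MultiDcKeyspace {
  def main(args: Array[String]): Unit = {
    // Hypothetical local datacenter name; with no explicit contact point,
    // the driver connects to localhost:9042.
    val session = CqlSession.builder()
      .withLocalDatacenter("us_east")
      .build()

    // NetworkTopologyStrategy keeps three replicas of every row in each
    // named datacenter: Cassandra's mechanism for "easy data distribution
    // across multiple datacenters."
    session.execute(
      """CREATE KEYSPACE IF NOT EXISTS o365_metrics
        |WITH replication = {
        |  'class': 'NetworkTopologyStrategy',
        |  'us_east': 3,
        |  'us_west': 3
        |}""".stripMargin)

    session.close()
  }
}
```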

Can’t start a fire without Spark

Able to run on top of Hadoop, standalone, or in the cloud, Spark is built for processing large volumes of data quickly. Jayandra was particularly interested in using Spark Streaming to build fault-tolerant compute clusters. “We spent time building fault tolerance and resilience,” he says.
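Jayandra doesn’t share code, but the standard fault-tolerance pattern in Spark Streaming is checkpoint-based recovery: the driver persists its state to durable storage and rebuilds the streaming context from it after a failure. A minimal Scala sketch, with a hypothetical checkpoint path and input source:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object ResilientStream {
  // Hypothetical checkpoint directory; Spark rebuilds the streaming
  // context from it after a driver failure.
  val checkpointDir = "hdfs:///checkpoints/availability"

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("availability-signals")
    val ssc  = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir)

    // Hypothetical source: availability events arriving on a socket.
    val events = ssc.socketTextStream("localhost", 9999)
    events.count().print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    // Recover from the checkpoint if one exists; otherwise start fresh.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}
```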

Using the Spark connector to Cassandra made Office 365’s performance better, according to Jayandra. For example, the gateway services for Azure, Microsoft’s own cloud computing solution, can pull data from Spark and push it into Cassandra. “In the cluster, we run Spark and Cassandra,” Jayandra says. “Analytics run in the other datacenter.”
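The pipeline code itself isn’t shown in the talk, but the pull-from-Spark, push-into-Cassandra step Jayandra describes maps naturally onto the open source spark-cassandra-connector. A hypothetical sketch (the keyspace, table, column names and host are illustrative, not Microsoft’s):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object PushToCassandra {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("gateway-metrics")
      .set("spark.cassandra.connection.host", "10.0.0.1") // hypothetical host
    val sc = new SparkContext(conf)

    // Hypothetical per-tenant availability figures computed in Spark...
    val availability = sc.parallelize(Seq(
      ("tenant-a", "2015-09-01", 99.95),
      ("tenant-b", "2015-09-01", 99.90)
    ))

    // ...written to an existing Cassandra table through the connector.
    availability.saveToCassandra("o365_metrics", "tenant_availability",
      SomeColumns("tenant_id", "day", "availability"))

    sc.stop()
  }
}
```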

However, this was only for batch-mode analytics. “We cannot have real-time apps,” Jayandra says. “Even Spark Streaming has no support to pull real time data.”

Data never rests

With geo-redundancy in Microsoft’s Spark strategy, it’s a matter of having a similar passive stack in a different region: one on the U.S. East Coast and one on the U.S. West Coast. “The web server that powers the interface can query both datacenters, depending on which the user is closest to,” Jayandra says. That said, Office 365 does not use the analytics cluster in the passive region.
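The talk doesn’t spell out how “query the datacenter the user is closest to” is implemented, but with the DataStax driver the usual approach is for each web server instance to pin its session to its local datacenter, so queries are routed to nearby replicas first. A hypothetical sketch:

```scala
import com.datastax.oss.driver.api.core.CqlSession
import java.net.InetSocketAddress

object NearestDatacenter {
  // Pin the session to the datacenter where this web server runs;
  // the driver then prefers that datacenter's replicas for queries.
  // Host and datacenter names are hypothetical.
  def sessionFor(localDc: String): CqlSession =
    CqlSession.builder()
      .addContactPoint(new InetSocketAddress("cassandra.example.com", 9042))
      .withLocalDatacenter(localDc) // e.g. "us_east" or "us_west"
      .build()
}
```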

In other cases, the analytics cluster cannot access data due to legal restrictions in some countries against storing customer data abroad. “So we have to replicate data in country to make data queries faster,” Jayandra says.

Lessons and mistakes with Spark, Cassandra

Overall, while building a 36-node Cassandra and Spark deployment, Jayandra came to several conclusions: it is not a low-maintenance process, and it cannot be built with open source Apache products alone. The team also needed to take bits from DataStax, a leading technology provider for Big Data application developers.

On what Jayandra says was Microsoft’s first open source project, the team made some rookie mistakes. For example, rows were too wide, which slowed compaction and triggered out-of-memory errors. Records became really big, and rows were too large to load into memory. “What was a stable system had to be remodeled after just three weeks,” Jayandra says.
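The remodeled schema isn’t described in the talk, but the textbook fix for overly wide rows is to fold a time bucket into the partition key so no single partition can grow without bound, and compaction never has to rewrite a giant partition. A hypothetical sketch, reusing the illustrative keyspace from above:

```scala
import com.datastax.oss.driver.api.core.CqlSession

object BucketedSchema {
  // Adding a per-day bucket to the partition key caps partition width:
  // each (tenant, day) pair gets its own partition instead of one
  // ever-growing row per tenant. All names here are hypothetical.
  def create(session: CqlSession): Unit =
    session.execute(
      """CREATE TABLE IF NOT EXISTS o365_metrics.signals (
        |  tenant_id text,
        |  day       date,
        |  ts        timestamp,
        |  value     double,
        |  PRIMARY KEY ((tenant_id, day), ts)
        |)""".stripMargin)
}
```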

Despite the Spark and Cassandra configuration passing stability tests, moving the project to bigger production servers really slowed the system, according to Jayandra. “You can’t test for this,” he says. “Instead of a manual update of tables, the admin created a state where it went up by hundreds of thousands. It got us into a state where there were 200,000 files per node.” A node cannot be allowed to get like that. “Because there’s no going back,” Jayandra says.

In Azure, only limited bandwidth exists between datacenters, making it impossible to rebuild a datacenter, according to Jayandra. “Instead, we need to back up and restore.” Monitoring is very important in scenarios where there are datacenter replication problems. Jayandra learned to take a datacenter out of the cluster if problems manifest themselves.

As it is today, Office 365 running on Spark and Cassandra is a low-volume activity, with only tens of jobs on a daily basis. “As we increase jobs, we see there is no good job server,” Jayandra says. “We have not had good luck with open source job servers.”

What they’ve done to compensate for the lack of reliable job server solutions is create an alert when performance drops by 10 to 15 percent. “That way we use Cassandra data as a deterministic test to check on the pipeline,” Jayandra says.
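The alerting logic isn’t shown in the talk; a bare-bones sketch of that threshold check, with hypothetical stand-in values for metrics that would actually come from Cassandra, might look like this:

```scala
object PipelineAlert {
  // Flag the pipeline when the latest run is 10 percent or more slower
  // than the trailing baseline (the talk cites a 10-to-15-percent band).
  def shouldAlert(baselineSecs: Double, latestSecs: Double,
                  threshold: Double = 0.10): Boolean =
    latestSecs > baselineSecs * (1.0 + threshold)

  def main(args: Array[String]): Unit = {
    val baseline = 120.0 // hypothetical trailing average runtime, seconds
    val latest   = 140.0 // hypothetical latest runtime, seconds
    if (shouldAlert(baseline, latest))
      println(f"ALERT: pipeline slowed ${(latest / baseline - 1) * 100}%.1f%%")
  }
}
```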

Photo via Derek Handova

Derek Handova
Derek Handova is a veteran journalist writing on various B2B vertical beats. He started out as associate editor of Micro Publishing News, a pioneer in coverage of the desktop publishing space and more recently as a freelance writer for Digital Journal, Economy Lead (finance and IR beats) and Intelligent Utility (electrical transmission and distribution beats).