Revolution Analytics Targets R Language, Platform at Growing Need to Handle 'Big Data' Crunching

Updated: August 04, 2010

"With RevoScaleR, we've focused on making analytical models not just scale to big data sets, but run the analysis in a fraction of the time compared to traditional systems," says David Smith, vice president of Community and Marketing at Revolution Analytics. "For example, the FAA publishes a data set that contains every commercial airline takeoff and landing between 1987 and 2008. That's more than 13 gigabytes of data. By analyzing that data, we can figure out the likelihood of airline delays in one second."
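For readers who want a feel for what that analysis looks like in practice, here is a minimal sketch in R. It assumes the FAA on-time data has already been imported into an XDF file named airline.xdf containing variables such as ArrDel15 (a 0/1 flag for arrivals more than 15 minutes late) and DayOfWeek; the file and variable names are illustrative, not Revolution Analytics' actual demo script.

    library(RevoScaleR)

    # Path to an XDF copy of the FAA on-time data (file name is a placeholder)
    airData <- "airline.xdf"

    # Binomial logistic regression: probability that a flight arrives
    # more than 15 minutes late, by day of week (variable names are assumed)
    delayModel <- rxLogit(ArrDel15 ~ DayOfWeek, data = airData)
    summary(delayModel)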

A rows-and-columns approach

One second to analyze 13 GB of data should turn some heads, given that the same analysis takes around 300 seconds with traditional methods. Under the hood, RevoScaleR relies on rapid-fire access to data. For example, RevoScaleR uses the XDF file format, a new binary big-data file format with an interface to the R language that offers high-speed access to arbitrary rows, blocks and columns of data.
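As an illustration of that rows-blocks-and-columns access, the sketch below imports a delimited text file into XDF and then pulls an arbitrary block of rows for a couple of columns back into an ordinary data frame. The file and column names are placeholders, and the call details reflect the RevoScaleR API as commonly documented rather than any code from the article.

    library(RevoScaleR)

    # Convert a delimited text file into the binary XDF format
    # (file names are placeholders)
    rxImport(inData = "flights.csv", outFile = "flights.xdf", overwrite = TRUE)

    # Inspect the variables and row count stored in the XDF file
    rxGetInfo("flights.xdf", getVarInfo = TRUE)

    # Pull an arbitrary block of rows for just two columns back into a data frame
    # (column names are assumed)
    block <- rxReadXdf("flights.xdf",
                       varsToKeep = c("ArrDelay", "DayOfWeek"),
                       startRow = 500001, numRows = 1000)
    head(block)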

"The new SQL movement was all about going from relational databases to a flat file on a disk that offers fast to access by columns. A lot of the technology that's behind things like Twitter and Facebook take this approach," Smith said. "We've taken that one step further to develop a system that accesses the database by rows and columns at the same time, which is really well-attuned to doing these statistical computations."

RevoScaleR also relies on a collection of the most common statistical algorithms optimized for big data, including high-performance implementations of summary statistics, linear regression, binomial logistic regression and crosstabs. Data reading and transformation tools let users interactively explore and prepare large data sets for analysis. And extensibility lets expert R users develop and extend their own statistical algorithms.
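To give a feel for those building blocks, here is a short sketch exercising the summary-statistics, regression and crosstab routines against the same hypothetical XDF file used above; again, the variable names are assumptions made for illustration, not part of the original article.

    library(RevoScaleR)

    airXdf <- "flights.xdf"   # placeholder XDF file

    # Out-of-core summary statistics for arrival delay
    rxSummary(~ ArrDelay, data = airXdf)

    # Linear regression of arrival delay on scheduled departure time and day of week
    fit <- rxLinMod(ArrDelay ~ CRSDepTime + DayOfWeek, data = airXdf)
    summary(fit)

    # One-way cross-tabulation of flight counts by day of week
    # (assumes DayOfWeek was stored as a factor at import time)
    rxCrossTabs(~ DayOfWeek, data = airXdf)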