Guest forum post by Yuanjen Chen

In short, in-memory computing takes advantage of physical memory, which processes data much faster than disk. In-place computing, on the other hand, fully exploits the address space of the 64-bit architecture. Both are gifts of modern computer science; both are at the heart of BigObject.

In-place computing became possible only with the introduction of 64-bit architecture, whose address space is large enough to hold the entire data set in most cases we deal with today. That lets us trade space for time and makes real-time big data analysis possible. Because the in-memory approach preloads the data into memory, it still hits a limit when the data set grows larger than the available memory and swap space, and performance drops drastically. This fortunately doesn't happen with in-place computing: the data is mapped into the address space and paged in on demand, so only the portion actually being touched needs to be resident.
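To make the distinction concrete, here is a minimal sketch in Python, not BigObject's actual implementation (which this post doesn't describe), contrasting reading a whole file into RAM with memory-mapping it into the address space. The file name `data.bin` and the byte-sum workload are assumptions for illustration only.

```python
import mmap
import os

PATH = "data.bin"  # hypothetical data file; any large binary file works

def load_in_memory(path):
    """In-memory style: materialize the entire file in physical memory.
    Swaps heavily (or fails) once the file exceeds available RAM."""
    with open(path, "rb") as f:
        return f.read()

def sum_bytes_in_place(path):
    """In-place style: map the file into the 64-bit address space.
    The OS pages bytes in on demand, so the resident working set
    stays small even if the file is far larger than physical memory."""
    total = 0
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            chunk = 1 << 20  # scan 1 MiB at a time; only touched pages load
            for off in range(0, len(mm), chunk):
                total += sum(mm[off:off + chunk])
    return total

if __name__ == "__main__":
    print(f"file size: {os.path.getsize(PATH)} bytes")
    print(f"byte sum: {sum_bytes_in_place(PATH)}")
```

Both functions read the same bytes, but only the first one requires the whole data set to fit in memory; the memory-mapped version relies on the vast 64-bit address space instead, which is the essence of the space-for-time trade described above.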

The mission of BigObject is to offer affordable computing power so that everyone can build big data applications. For the record, we used a laptop with 8 GB of memory to process 100 million records, and it took only 5 seconds. A tremendous investment in hardware shouldn't hold you back from diving into big data analysis anymore. As long as you possess data, you can build a model, and you are entitled to reveal its insights.

For more information and a free trial, please visit BigObject.
