Big Data is Going to Drive User Engagement

Posted by Kate Jurras on Tue, Nov 6, 2012

We're all about Big Data right now. We asked a group of exceptional data experts to enlighten us. They answered this tough question: “How will the recent explosion of big data affect marketing & advertising innovation and productivity in the future?” This post is by Jason Fields. Jason brings a well-rounded and forward-thinking vision of the industry to Agency Oasis. With a background in design, development, and experience architecture, he offers an expansive range of input on his accounts, bringing robust applications together from start to finish. Alongside his industry work, Jason holds a faculty position at Emerson College, instructing Master's candidates in web design, development, and maintenance. Jason holds a BA in Communication from UMass Amherst and an MA in Media Studies from New School University.



When I say big data I am not referring to the size of the data. Though the capital and operational costs of storing petabytes', exabytes', zettabytes', or yottabytes' worth of information keep dropping, storing information for information's sake isn't valuable. Kind of like the several terabytes of music I have sitting on external drives no longer plugged into a machine.

Instead, big data should be thought of in terms of Volume, Velocity, and Variability (discussion points at the O'Reilly Strata or GigaOm conferences). In actuality, and rooted in common sense, is the idea that 'it isn't how much you store, but what you do with it.' And for that you need tools, resources, and most of all logic. What will happen when the legacy systems of behemoth organizations are brought up to modern standards and take personal information into account when determining the content we should be seeing?

Two things are holding back the onslaught of this trend of personal or transactional data driving user experience across all devices. First, it's creepy (see the Target case study in Charles Duhigg's book “The Power of Habit”). Giving people exactly what they need without them knowing how you know it is uncomfortable.

The second reason is cost. For organizations so large that an upgrade from IE6 to IE7 is an operational expense in the high six figures, the idea of “dynamicising” big data isn't just a budget add-on; it is a shift in culture, value, channels, and operations. Dynamic content has one purpose, ROI, and ROI at this level of detail can cause changes in operations from budgeting to bonuses. You can see why it hasn't caught on as much as you might think.

The companies that are largely doing this well have been doing it from their inception. Amazon and Overstock are very good at leveraging their big data to shape the consumer experience, but they knew nothing else! They had no roots in Madison Avenue driving the market share of their business, so the idea of not utilizing that data is foreign to them.

Big business should take a lesson from a competition that Anand Rajaraman set up for his data mining class at Stanford. Given a database of 18k movies and asked to build the best algorithm to recommend movies, the winner wasn't the author of the most complicated piece of logic; instead it was a less robust algorithm that took advantage of a second data set (IMDB). The lesson: a simple algorithm with more data beats a complicated algorithm with less. (Google validated this approach with its PageRank logic.) The biggest risk that owners of big data face is themselves; don't try to do it all at once.
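To make that lesson concrete, here's a minimal sketch in Python, using entirely made-up movies and ratings. The recommender itself is deliberately trivial (count shared genres); the improvement comes from joining in a second data set of movie genres, standing in for the IMDB data the winning students used.

```python
# Primary data set (hypothetical): which movies each user has liked.
ratings = {
    "alice": {"Alien", "Blade Runner"},
    "bob": {"Alien", "Heat"},
}

# Second data set (hypothetical stand-in for IMDB): movie -> genres.
genres = {
    "Alien": {"sci-fi", "horror"},
    "Blade Runner": {"sci-fi", "noir"},
    "Heat": {"crime", "thriller"},
    "Moon": {"sci-fi"},
    "Se7en": {"crime", "thriller", "noir"},
}

def recommend(user, n=2):
    """Score unseen movies by genre overlap with the user's history."""
    seen = ratings[user]
    liked_genres = set().union(*(genres[m] for m in seen))
    candidates = [m for m in genres if m not in seen]
    # A simple count of shared genres -- no complicated model needed,
    # because the extra data set carries the signal.
    scored = sorted(
        candidates,
        key=lambda m: len(genres[m] & liked_genres),
        reverse=True,
    )
    return scored[:n]
```

With only the ratings, alice and bob share just one movie and there's little to go on; with the genre data joined in, the same trivial logic can rank every unseen title.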

For more details on this subject, please visit the original, lengthier post here.

Check out the last post in this series, by Chris Emme.