The term Big Data has been around since 2005, but what does it actually mean? Exactly how big is big? We create data every second. It’s generated across all industries and by a myriad of sources, from computers to industrial sensors to weather balloons. According to the recent Data Never Sleeps study, a quintillion bytes of data are generated each minute, and the forecast is that our data will only keep growing at an unprecedented rate.
We have also come to realize just how important data really is. Some liken its value to something as precious to our existence as water or oil, although those aren’t entirely valid comparisons. Water supplies can fall and petroleum stores can be depleted, but data isn’t going anywhere. It only continues to grow, not just in volume but in variety and velocity. Thankfully, over the past decade, data storage has become cheaper, faster, and more widely available, so where to store all this information is no longer the biggest concern. Industries working in the IoT and faster-payments space are now pushing data through at very high speed, and that data is constantly changing shape.
In essence, all this gives rise to a “data demon.” Our data has become so complex that normal techniques for harnessing it often fail, keeping us from realizing data’s full potential.
Most organizations currently treat data as a cost center. Each time a data project is spun off, there is an “expense” attached to it. It’s contradictory: on one side, we proclaim that data is our most valuable asset, but on the other, we perceive it as a liability. It’s time to change that perception, especially when it comes to banks. The volumes of data financial institutions hold can be used to create tremendous value. Note that I’m not talking about “selling the data,” but about leveraging it more effectively to provide crisp analytics that deliver knowledge and drive better business decisions.
What’s stopping people from converting data from an expense to an asset, then? The technology and talent exist, but the thought process is lacking.
Data warehouses have been around for a long time, and traditionally they were the only way to store large amounts of data for analytical and reporting purposes. However, a warehouse, as the name suggests, immediately brings to mind a rigid, finite structure. In a physical warehouse, you can store products in three dimensions: length, breadth, and height. Those dimensions, though, are fixed by your warehouse’s architecture. If you want to add more products than it can hold, you must go through a massive upgrade process. Technically, it’s doable, but it’s not ideal. Similarly, data warehouses are somewhat rigid when handling constantly changing data elements.
Data lakes are a more modern take on big data. When you think of a lake, you cannot define its shape and size, nor can you dictate what lives in it. Lakes just form; even man-made ones retain an element of randomness, and it’s this randomness that helps us in situations where the future is, well, somewhat unpredictable. Lakes expand and contract, they change over time, and they support an ecosystem that’s home to various animals and organisms. A lake can be a source of food (such as fish) or fresh water and can even be the locale for water-based adventures. Similarly, a data lake contains a vast body of data and is able to handle that data’s volume, velocity, and variety.
When data giants like Yahoo, Google, Facebook, and LinkedIn realized that their data and data usage were drastically different from anything that came before, and that traditional methods were almost useless for analyzing them, they had to innovate. This, in turn, gave rise to technologies like document-based databases and big data engines such as Hadoop, Spark, HPCC Systems, and others. These technologies were designed to provide the flexibility one needs when handling unpredictable data inputs.
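To make that flexibility a little more concrete, here is a minimal “schema on read” sketch using Spark, one of the engines named above: semi-structured records with differing fields are ingested as-is, and the engine infers a schema at query time instead of forcing every record into a predefined table. The file path, field names, and filter below are hypothetical, chosen only for illustration.

```python
# Minimal schema-on-read sketch with PySpark (hypothetical file and fields).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read-sketch").getOrCreate()

# events.json holds semi-structured records; some carry a "device" field,
# others a "sensor_reading" field -- no upfront schema is required.
events = spark.read.json("s3://example-bucket/raw/events.json")

# Spark infers a schema that is the union of all fields it has seen.
events.printSchema()

# Queries can still be expressed over whichever fields actually exist.
events.filter(events.event_type == "payment").groupBy("channel").count().show()
```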
“If you’re at the earliest stage of maturity, you’re used to asking questions of a SQL or NoSQL database or data warehouse in the form of reports,” said Flavio Villanustre, VP of Technology for HPCC Systems and CISO at LexisNexis Risk Solutions. “In a modern data lake that has a deep learning capability with anomaly detection, you also get new insights that could have a profound effect on your company or customers, such as the discovery of a security breach or other crimes in progress, the early warning signs of a disease outbreak or fraud.”
Jeff Lewis is SVP of Payments at Sutton Bank, a small community bank that’s challenging the status quo for other banks in the payments space. “Banks have to learn to move on from data warehouses to data lakes. The speed, accuracy, and flexibility of information coming out of a data lake is crucial to the increased operational efficiency of employees and to provide better regulatory oversight,” said Lewis. “Bankers are no longer old school and are ready to innovate with the FinTechs of the world. A data centric thought process and approach is crucial for success.”
Data lakes are a natural choice for handling the complexity of such data, and the application of machine learning and AI is becoming more common as well. From using AI to clean and augment incoming data to running complex algorithms that correlate different sources of information to detect sophisticated fraud, there is an algorithm for just about everything. And now, with the help of distributed processing, these algorithms can be run across a cluster, with the workload spread over many nodes.
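As a rough illustration of that last point, the sketch below, again in PySpark, joins two hypothetical sources (card transactions and device telemetry) and flags transactions far above an account’s historical average. Because the DataFrame operations are distributed, the same code runs unchanged whether the data sits on one node or across a cluster. The paths, column names, and threshold are assumptions for illustration, not a prescription.

```python
# Hypothetical sketch: correlating two data-lake sources and flagging outliers with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fraud-correlation-sketch").getOrCreate()

# Two raw zones of a (hypothetical) data lake.
transactions = spark.read.parquet("s3://example-lake/raw/transactions/")
devices = spark.read.parquet("s3://example-lake/raw/device_telemetry/")

# Correlate the sources: attach device telemetry to each transaction.
joined = transactions.join(devices, on="device_id", how="left")

# Per-account statistics, computed in parallel across the cluster.
stats = joined.groupBy("account_id").agg(
    F.mean("amount").alias("mean_amount"),
    F.stddev("amount").alias("std_amount"),
)

# Flag transactions more than three standard deviations above the account's mean.
flagged = (
    joined.join(stats, on="account_id")
    .withColumn(
        "is_suspicious",
        F.col("amount") > F.col("mean_amount") + 3 * F.col("std_amount"),
    )
)

flagged.filter("is_suspicious").show()
```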
One thing to remember is that you should be building a data lake, not a data swamp. A swamp is hard to control: you cannot drink from it, nor can you navigate it easily. So, when you set out to create a data lake, think about what its ecosystem looks like and who your consumers are. Then embark on the journey of building your own lake.