Not long ago, enterprises were reluctant to move their sensitive data to the cloud due to security concerns. Today, security is a core competency of most major cloud service providers and a major reason why organizations are migrating to the cloud. In fact, security, coupled with agility and cloud economics, has many CIOs slowing investment in on-premises storage and consolidating their data centers.

Along with major cloud initiatives, other modern IT priorities include predictive analytics, mobility, machine learning, automation, agility, and the Internet of Things (IoT). Collectively, these trends fall under the umbrella of digital transformation. It's our belief that cloud services will also need to transform for organizations to cost-effectively store the massive amounts of business-changing data that will be generated in this new era.

Here's why the world is ready for cloud storage 2.0:

1. Big data is getting bigger

When Amazon launched Amazon Web Services (AWS) in 2006, the world generated 161 exabytes of data a year, according to IDC. That's equivalent to 12 stacks of books stretching from the Earth to the Sun. Today, a little over a decade later, the world churns out an additional 2.5 exabytes every single day. That's like adding another stack of books every five days. Multiple sources estimate that by 2020, global digital data will total 44 zettabytes. That's 44 trillion gigabytes!

This is big data at a scale few people were thinking about a decade ago. And we're just getting started:

Video – one hour of footage shot in Ultra HD uses nearly seven terabytes of storage.
Machines – an oil rig can produce eight terabytes of operational data a day. A Boeing 787 generates 40 terabytes of data per hour of flight.
Science – one human genome requires approximately 200 gigabytes of storage.
Healthcare – high-resolution medical imaging and advanced microscopy are creating truly massive files. One cubic millimeter of brain tissue represents one petabyte of data.
Self-driving cars – one autonomous car will generate 1 gigabyte of data per second, according to the data storage consulting firm Coughlin Associates. At that rate, 30 seconds of driving would max out the storage of a typical iPhone.

2. A.I. and analytics are changing executive expectations for agility

Thanks to predictive analytics and machine learning, business executives now have access to data-driven insights in minutes. These rapid insights are driving the push for digital transformation and the need to store more data for longer. Once executives get a taste of that speed, they will never accept anything short of immediate results. That means the data you use to derive those insights will need to be readily accessible, not stored on tape or in glacially slow cloud services.

3. Increasing data volume and velocity create the need for speed

If you generate terabytes of data per day, but your storage system can only ingest a portion of what you produce, you have a problem. As data volumes grow, the additional cost of premium or accelerated services on top of already expensive cloud storage will be prohibitive for most organizations. Next-generation cloud storage must not force customers into a price/performance tradeoff.

4. Legacy cloud storage is too expensive

A petabyte of data stored with any of the major cloud services, such as Amazon S3, Microsoft Azure, or Google Cloud, costs approximately $250,000 per year.
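The quarter-million-dollar figure is easy to sanity-check with back-of-envelope math. The sketch below assumes an illustrative per-gigabyte monthly rate in the ballpark of list prices for frequent-access object storage; it is not a quote from any provider, and real bills also add request and egress fees that this ignores.

```python
# Back-of-envelope check of the "petabyte per year" figure above.
GB_PER_PB = 1_000_000          # decimal petabyte
RATE_PER_GB_MONTH = 0.021      # assumed $/GB/month (illustrative, storage only)

def yearly_storage_cost(petabytes: float,
                        rate_per_gb_month: float = RATE_PER_GB_MONTH) -> float:
    """Storage-only cost in dollars for one year at the given capacity."""
    return petabytes * GB_PER_PB * rate_per_gb_month * 12

print(f"${yearly_storage_cost(1):,.0f} per year")  # → $252,000 per year
```

At an assumed $0.021 per gigabyte per month, one petabyte works out to roughly $252,000 a year before any request or transfer charges, which lines up with the approximation above.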
There are, of course, lower-cost tiers of service, but they are designed for infrequent access: too slow for agile business analytics, and a minefield of hidden costs should you require more access to your data than you originally intended. And as the volume, variety, and velocity of data grow exponentially, so will storage decision complexity.

5. No time for storage tiers and lifecycle management

Trying to figure out complicated storage tiers and opaque pricing models is challenging enough today. (Check out these tables comparing the fee structures of AWS, Azure, and Google.) Now imagine having to calculate what to store in standard, infrequent, nearline, or cold storage as the number and variety of data sources, file sizes, and data velocity continue to expand. The data tsunami of the big data era will require a next-generation storage solution with one universal tier of service at one flat rate.

Cloud storage 2.0 is hot

Wasabi has a name for this next generation of cloud storage. We call it hot storage. Wasabi hot storage is significantly less expensive and markedly faster than frequent-access storage services like AWS S3, so it can be universally applied to any storage use case, such as active archiving. And it's fully compatible with S3 APIs, so it works seamlessly with your existing storage applications and backup and recovery tools.

In The New Economics of Cloud Storage, we spell out the exact differences in terms of price, performance, and protection, and compare hot storage to the different tiers of service offered by Amazon, Microsoft, and Google. Even if you have no interest in learning more about Wasabi hot storage, The New Economics of Cloud Storage is a fantastic resource for understanding the complex pricing models of the big three cloud providers.

More and more, the ability to gain the edge in the digital economy will depend on your ability to extract insight from multiple, growing sources of information. That data can't be mined if it can't be stored effortlessly and affordably.