One of the unavoidable side effects of the shift to Electronic Medical Records (EMR) has been a nearly exponential surge in the amount of digital storage occupied by the healthcare industry, which some are referring to as the “data deluge.” Frankly, this flood of epic proportions is something everyone should have seen coming: for years now, medical data has been consuming an ever-bigger slice of the global storage pie.
Back in 2012, the Ponemon Institute published a study finding that the healthcare industry occupied thirty percent of all the electronic data storage in the world. They also surveyed a number of healthcare providers, and forty-five percent of respondents said their facilities intended to expand their digital storage capacity by at least one Terabyte within the year.
A Terabyte or more... let’s put that into perspective, shall we?
A Terabyte of storage (1,000 Gigabytes) is equivalent to roughly 700,000 of those old 3-½” floppy disks. To translate that into physical terms, a single Terabyte is enough space to digitally store all of the books and journals in an entire library. And we are talking about one seriously big library, here… as in one that’s about ten floors high, each floor filled wall-to-wall with fully stocked bookshelves.
So, this raises a question: Why would a single healthcare facility need to upgrade to that much electronic storage space?
The answer is pretty simple, if you think about it. The amount of EMR data stored in any healthcare facility will never shrink, because the amount of data per individual patient will never shrink; it can only grow. Even after a patient leaves your care, you may still be required by law to retain his or her records, often for seven years or more. As a result, the data storage needs of healthcare facilities are forever expanding.
Some experts have estimated that each patient adds 4 megabytes of text data to EMR storage per year, so healthcare providers must make room for this new data while retaining everything that came before. In only a few years, this starts to get daunting: a first-year patient may require just 4 megabytes of storage, but after 4 years that same patient will require 16 megabytes. Factor in imaging, and each patient adds an estimated 76 megabytes per year on top of the text data. Combining imaging and text records, each patient adds 80 megabytes to your system annually, so a four-year patient now requires as much as 320 megabytes of storage.
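To make that arithmetic concrete, here is a minimal Python sketch. The 4 MB and 76 MB annual figures are the estimates quoted above, not measured values:

```python
# Estimated annual EMR growth per patient (figures quoted above, not measured).
TEXT_MB_PER_YEAR = 4      # clinical text records
IMAGING_MB_PER_YEAR = 76  # imaging data

def patient_storage_mb(years, include_imaging=True):
    """Cumulative EMR storage (in MB) one patient accumulates over `years` years."""
    per_year = TEXT_MB_PER_YEAR + (IMAGING_MB_PER_YEAR if include_imaging else 0)
    return years * per_year

print(patient_storage_mb(4, include_imaging=False))  # 16  (text alone after 4 years)
print(patient_storage_mb(4))                         # 320 (text plus imaging)
```

The takeaway: imaging dominates, multiplying the per-patient footprint twentyfold.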
Here is a fairly conservative example. Based on the figures above, say you have 100 patients who have been with your practice for 4 years. That works out to 32,000 megabytes (32 gigabytes). The next year, those same 100 patients will require 40,000 megabytes (40 gigabytes). In only a decade, you’re looking at 80 gigabytes for that set of patients alone, and this example doesn’t even factor in the data required for new patients.
Standard local server storage costs about 34 cents per gigabyte, or 68 cents per gigabyte if you add an offsite replication server (which is recommended). High-performance servers run about 55 cents per gigabyte, or 89 cents with replication to a standard offsite server. Over time, this adds up quickly.
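Putting the example practice and the per-gigabyte rates together, a quick back-of-the-envelope calculation (using only the figures quoted above) might look like this:

```python
MB_PER_PATIENT_PER_YEAR = 80  # text + imaging estimate from above

# Per-gigabyte rates quoted above, in dollars.
RATES = {
    "standard local": 0.34,
    "standard + offsite replication": 0.68,
    "high performance": 0.55,
    "high performance + replication": 0.89,
}

def practice_storage_gb(patients, years):
    """Total storage, in gigabytes, for `patients` patients after `years` years."""
    return patients * years * MB_PER_PATIENT_PER_YEAR / 1000

gb = practice_storage_gb(100, 10)  # the decade example: 80 GB
for tier, rate in RATES.items():
    print(f"{tier}: {gb:.0f} GB -> ${gb * rate:.2f}")
```

Even at the high-performance replicated rate, that decade of data for 100 patients costs well under a hundred dollars; the real pressure comes from scaling this across thousands of patients, plus the interoperability overhead discussed next.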
Of course, the problem of increasing data storage needs is not caused by patient data alone. A number of other factors are exacerbating it. In addition to storing patient records, many incentive programs and other regulations now require that doctors be able to electronically share patient data with other healthcare providers. This need for interoperability is estimated to consume as much as 195 Gigabytes a year for a single practice.
There is some good news, however, and it’s a little thing called “Moore’s Law.” It is based on the observations of Gordon E. Moore (co-founder of Intel), who stated in a 1965 paper that the number of components per integrated circuit would double every year over the following decade. In 1975, he published a new paper adjusting the rate, predicting that the doubling would occur every two years for the foreseeable future. In layman’s terms, this means the chips in our computers keep packing more components into less space, increasing in capacity as they shrink.
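As a rough illustration of what doubling every two years implies, here is a simplified projection (a toy model of the trend, not a hardware roadmap):

```python
def moores_law_factor(years, doubling_period=2):
    """Growth factor if capacity doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(10))  # 32.0 -- a 32x increase over a decade
print(moores_law_factor(40))  # 1048576.0 -- roughly a million-fold over 40 years
```

That compounding is exactly why storage hardware has stayed ahead of even the healthcare industry’s exponential appetite for it.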
Here we are, 40 years later, and Moore’s Law has held up remarkably well. And that, my dear readers, is the good news.
You may be wondering: Why is this good news?
Well, it means that the hardware available for data storage will continue to become smaller and cheaper, with ever-increasing capacity. This also means that Cloud computing, which is quickly becoming the standard for applications and data storage, will only become more affordable and sustainable as the world’s data needs evolve.
At least… that’s how things are expected to go. Everything should turn out fine for as long as Moore’s Law continues to hold (and many expect it will). In the spirit of full disclosure, though, I should mention that some very smart people (such as Dr. Michio Kaku) predict it will collapse within the next decade.
If that happens… well… the outcome is anybody's guess.
Then again, what’s the worst that could happen?