Data management

Automated Data Management Helps Take the Pain Out of Analytics

Public clouds are among the emerging technologies minimizing the cost of processing the oil and gas industry's big data. Among the hurdles to wide-scale adoption of the cloud are security concerns and the cost of accessing data.


It’s no secret that in this day of low oil prices, oil and gas leaders are looking to cut costs wherever possible. The challenge is to cut costs while still maximizing business value. Gartner noted in a recent report that even though CIOs have already cut costs by 30% to 50%, they are still being asked to cut more. Emerging technologies enable oil and gas companies to overcome this seemingly insurmountable task by automating data management, making it easier to put existing resources to work while saving costs with the cloud. Let’s examine how.

Accelerating E&P With Digital Drilling

Gartner analysts noted in another report, “The traditional oil and gas operating model was developed to support prolonged periods of high prices and rapid, parallel growth. Because of this, the traditional models present serious business risk due to increasingly dynamic markets. By 2020, 90% of oil and gas industry leaders will digitally innovate at scale, faster and more effectively than laggards.”

We are already witnessing this change in exploration and production (E&P), as many oil and gas companies now drill digitally first to determine how much to bid on a site and where to physically place a well. While this digital approach is increasing margins and revenues by improving the speed, accuracy, and quality of E&P operations, the data-intensive nature of these applications can be costly. Sophisticated modeling and simulation technology requires vast amounts of data and metadata analysis from both current and historical data points. It is common for a large exploration project to produce many petabytes of data, pushing the boundaries of traditional data management.

Figure: Full 3D acquisitions generate 8 to 20 GB/sec of seismic sensor data, totaling 250 terabytes to 1 petabyte per km². Left: seismic data collection; right: 3D model of subsurface features.

Fortunately, even as E&P applications are increasing top-line revenues, emerging technologies are minimizing the cost of processing data to increase the bottom line. Public clouds are one such technology, as they give companies a cost-effective way to store cold data. However, there are two hurdles to wide-scale adoption of the cloud in the oil and gas industry.

First, many companies in highly regulated industries, such as oil and gas, have concerns about cloud security, even though Gartner considers a correctly implemented cloud more secure than traditional data centers, and the rapidly growing cloud-security industry is improving protections further.

The second hurdle is the cost to access data in the cloud. Many cloud providers make it easy to get data into the cloud, but charge by bandwidth to retrieve data back to on-premises storage. This can become extremely costly if an entire LUN [logical unit number, or collection of physical or virtual storage devices], snapshot, or backup needs to come back from the cloud. Since some E&P applications analyze historical data, and it’s not easy to determine which data might be needed in the future, many companies might find it easier to just keep a lot of cold data on-premises, which means more money is spent buying hardware and keeping it running.
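To put those bandwidth charges in perspective, here is a back-of-the-envelope comparison in Python. The $0.09/GB rate and the data sizes are illustrative assumptions, not any provider's actual pricing:

```python
# A back-of-the-envelope egress-cost comparison. The per-GB rate and data
# sizes are illustrative assumptions, not any cloud provider's pricing.

EGRESS_RATE_PER_GB = 0.09          # assumed retrieval charge, USD per GB

def egress_cost(gigabytes: float) -> float:
    """Bandwidth charge to pull `gigabytes` back to on-premises storage."""
    return gigabytes * EGRESS_RATE_PER_GB

full_snapshot_gb = 50 * 1024       # restoring an entire 50-TB snapshot
needed_files_gb = 200              # retrieving only the files the app needs

print(f"Full snapshot restore: ${egress_cost(full_snapshot_gb):,.2f}")
print(f"Selective retrieval:   ${egress_cost(needed_files_gb):,.2f}")
# Full snapshot restore: $4,608.00
# Selective retrieval:   $18.00
```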

Metadata engine software solves the problem of cloud data access, as well as the problem of cost-effectively providing the horsepower needed for data-intensive analytics. Metadata is the data about your data—such as when a file was last opened, by whom, when it was changed, and so on—and it is the key to transforming enterprise efficiency with automated, intelligent data management.
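As a concrete illustration, the following Python sketch gathers the kind of per-file metadata described above from a file system; the /data/seismic path and .segy extension are hypothetical examples:

```python
import os
import time
from pathlib import Path

def describe(path: str) -> dict:
    """Gather the kind of per-file metadata a metadata engine indexes."""
    st = os.stat(path)
    return {
        "path": path,
        "size_bytes": st.st_size,
        "owner_uid": st.st_uid,                  # who owns it (POSIX uid)
        "last_opened": time.ctime(st.st_atime),  # last access time
        "last_changed": time.ctime(st.st_mtime), # last modification time
    }

# Walk a (hypothetical) seismic data directory and print each file's metadata.
for f in Path("/data/seismic").rglob("*.segy"):
    print(describe(str(f)))
```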

A metadata engine can nondisruptively place data on the ideal storage resource to meet application requirements at each step of the E&P pipeline. For example, it can place transient data, such as scratch and operating system (OS) swap space, on nonvolatile memory flash in application servers; active application data on performance network-attached storage; and less frequently used data on capacity storage or cloud/object storage. [Swap space is a portion of a hard disk drive (HDD) used as virtual memory, which extends main memory by holding portions of the OS, programs, and data that are in active or frequent use.] These metadata management capabilities can turn multiday E&P tasks into single-day tasks, accelerating productivity for geoscientists.
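The placement logic can be sketched as a simple policy function. This is a minimal sketch: the tier names and the 30-day threshold are illustrative assumptions, not a vendor's actual policy:

```python
from dataclasses import dataclass

@dataclass
class FileObjectives:
    transient: bool          # e.g., scratch or OS swap data
    days_since_access: int

def place(obj: FileObjectives) -> str:
    """Map a file's objectives to the storage tier that satisfies them."""
    if obj.transient:
        return "nvme-flash"        # transient data on server-side flash
    if obj.days_since_access <= 30:
        return "performance-nas"   # active data on performance NAS
    return "capacity-object"       # cold data on capacity or cloud storage

print(place(FileObjectives(transient=True, days_since_access=0)))    # nvme-flash
print(place(FileObjectives(transient=False, days_since_access=90)))  # capacity-object
```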

Importantly, IT maintains full control over data placement through objective-based management. IT can also see exactly which data are hot, warm, or cold, and have warm data move automatically to lower-cost storage and cold data to public cloud or on-premises object storage. This ensures that only cold data move to the cheaper, but slower, storage tier and, at the same time, enables companies to purchase expensive, high-performance storage capacity only for the data that require those capabilities.
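A minimal sketch of such temperature classification follows; the 7-day and 60-day thresholds are assumptions standing in for policies that IT would define:

```python
# Illustrative temperature thresholds; in practice IT defines these policies.
HOT_DAYS, WARM_DAYS = 7, 60

def temperature(days_since_access: float) -> str:
    """Classify data as hot, warm, or cold from its last-access age."""
    if days_since_access <= HOT_DAYS:
        return "hot"    # stays on performance storage
    if days_since_access <= WARM_DAYS:
        return "warm"   # move to lower-cost capacity storage
    return "cold"       # move to public cloud or on-premises object storage

for age in (2, 30, 180):
    print(age, "days:", temperature(age))  # hot, warm, cold
```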

By maintaining the visibility and accessibility of data stored in cloud or object stores, a metadata engine also eliminates the pain of typical archiving solutions, in which needed data take time to locate and retrieve. Bandwidth charges are minimized by retrieving just the files that applications actually need, rather than an entire backup, as traditional manual approaches require.
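For example, with an S3-compatible object store, an application (or a metadata engine acting on its behalf) can retrieve only the needed objects. In this sketch, the bucket and key names are hypothetical:

```python
import boto3

# The bucket and key names are hypothetical; any S3-compatible object
# store would work the same way.
s3 = boto3.client("s3")

needed_keys = [
    "project-alpha/survey-2016/inline-0042.segy",
    "project-alpha/survey-2016/inline-0043.segy",
]

for key in needed_keys:
    obj = s3.get_object(Bucket="cold-archive", Key=key)
    local_name = key.rsplit("/", 1)[-1]
    with open(local_name, "wb") as f:
        f.write(obj["Body"].read())  # egress charges apply to these files only
```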

Automation Enables CIOs To Do More With Less

As CIOs struggle to innovate with fewer staff than they are used to, cloud computing and a metadata engine can dramatically reduce the amount of work IT currently performs, helping them do more with less. Moving cold data into the cloud can rein in the storage sprawl that consumes ever more of IT’s maintenance time. A metadata engine can determine what is cold and then automate migrations and upgrades, activities that can easily consume half of IT staff time once asset procurement, deployment, and planning to minimize downtime are factored in.

The ability to automatically manage data by objectives enables enterprises to maximize the value of existing infrastructure and integrate easily with the cloud, while minimizing IT resource costs. A metadata engine makes this possible by collecting metadata about clients’ data access and about the input/output operations per second (IOPS), latency, bandwidth, and availability that storage provides to each client. Intelligent analytics are then applied to this metadata, enabling the software to match business requirements for performance, cost, and reliability; make real-time automated decisions; move data without application disruption to overcome or prevent outages; and maintain compliance with service-level agreements.
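A simplified sketch of that matching step follows; the field names and thresholds are illustrative assumptions, not the product's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Telemetry:                 # what a client actually experiences
    iops: int
    latency_ms: float
    availability: float          # fraction of successful requests

@dataclass
class Objective:                 # the business requirement for the data
    min_iops: int
    max_latency_ms: float
    min_availability: float

def meets_slo(t: Telemetry, o: Objective) -> bool:
    """True if current storage still satisfies the data's objectives."""
    return (t.iops >= o.min_iops
            and t.latency_ms <= o.max_latency_ms
            and t.availability >= o.min_availability)

observed = Telemetry(iops=1200, latency_ms=18.0, availability=0.9995)
required = Objective(min_iops=1000, max_latency_ms=10.0, min_availability=0.999)

if not meets_slo(observed, required):
    print("Objective violated: schedule a nondisruptive move to a faster tier.")
```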

Innovating with automated data management will be key to enabling oil and gas companies to thrive—not just survive—during this period of low prices. Cloud computing or on-premises object storage can help companies reduce costs to store increasing volumes of data. Technologies that automate resource allocation, such as metadata engine software, can help oil and gas companies increase service quality and reliability, while reducing service cost and risk.



Douglas Fallstrom is the vice president of product management at Primary Data. He oversees the development of the company’s DataSphere data orchestration platform. He has more than 2 decades of experience providing leadership in product management across storage and server virtualization, data center architecture, and storage management, including prior positions at Coho Data, Symantec, VERITAS, and Sun Microsystems.