Data Management for Oil & Gas: High Performance Computing

Data Management and Technology

The oil and gas industry is dealing with data management on a scale never seen before. One approach to getting at relevant data quickly is High Performance Computing (HPC).

HPC is dedicated to the rapid analysis and display of very large volumes of data that must be processed quickly to be useful.

One application is the analysis of structurally complex plays with intricate folding. Understanding the subsurface in these settings requires high-definition, three-dimensional images.

The effective use of HPC in unconventional oil and gas extraction is helping drive the frenetic pace of investment, growth and development that will provide international fuel reserves for the next 50 years. Oil and gas software supported by data intelligence drives productive unconventional operations.

Evolving Data Management Needs

As far back as 2008, the Microsoft High-Performance Computing Oil and Gas Industry Survey conducted by the Oil & Gas Journal Online Research Center indicated that many industry geoscientists and engineers have access to the computing performance levels they require.

However, computing needs are growing more complex, so significant room for improvement exists. Numerous respondents believe that making HPC available to more people industry-wide can increase production, enhance decision-making, reduce delays in drilling, and reduce the overall risk of oil and gas projects.

Chesapeake is the largest leasehold owner in the Marcellus Shale play, which stretches from southern New York to West Virginia. The company employs HPC in its shale and tight-sands operations.

3-D imaging enables technical staff to detect fine-scale fracturing and directional dependency characteristics. Seismic data provides a structural road map that helps identify dip changes, small faults and natural fracture orientation.

High Performance Computing in the Real World

Chesapeake routinely performs inversions of pre-stack and post-stack seismic data. Datasets for imaging and inversion support models that represent complex earth structures and physical parameters, validated against cases where the true inversion results are known.

Reservoir maps require constant updating. Advanced pre-stack 3-D techniques are used to extract detailed rock properties that aid in discriminating good rock from bad rock in the Marcellus.

Focusing on pre-stack data has significantly increased computational requirements. Depending on the acquisition method, collecting multicomponent 3-D data can increase data size by orders of magnitude.

Advanced algorithms provide results in a matter of days, making it possible to realistically deal with a lease schedule.

Clustered supercomputing systems are becoming affordable and scalable. HPC options are not only realistic but a requirement for independents who want to bring advanced processing capabilities in-house.

Check out this blog post on how oil and gas companies are using data management to improve processes.

Custom Software: DIY Advantages and Disadvantages

Good Planning a Key Differentiator for Custom Software

Custom software can be a great tool to match processes to your business. The recent proliferation of do-it-yourself tools makes this even easier because they allow people who aren’t professional programmers to create their own software.

This change in custom application development is part of the trend toward disruptive innovation, in which an innovation displaces an existing technology. Disruptive innovations often reshape how value is organized and delivered even though the early tools may have rough user interfaces and modest performance.

A recent article provides examples that illustrate the advantages and disadvantages of DIY software development.

DIY Custom Software Advantages

DIY custom software can be a great fit in cases where it delivers more value than traditional off-the-shelf software. One example would be a small business that needs only a basic level of reporting.

A well-set-up Excel spreadsheet shared across the organization would probably be a fine solution. Even better, the business could then upload that information into Tableau, bringing a visual component to the report with fairly little effort.

DIY Disadvantages

This approach isn’t always the answer, however. As more users start to use the spreadsheet, it becomes bloated and difficult to share. The lack of a good user interface also means the owner of the spreadsheet starts to spend more and more time explaining how to use it.

As more data is added, Tableau also benefits from the kind of sound data management strategies that the average business user is unfamiliar with.

Sound Strategic Thinking

There are other reasons that DIY software solutions may not be the best fit for your business. A lack of sound strategic thinking can also be a factor.

The following case illustrates why unassisted use of DIY tools doesn’t always work. As Mike from Brainzooming highlights, the organizer of an event created a post-event survey using SurveyMonkey for attendees to complete.

The categories began with “very satisfied” on the left and progressed toward “very dissatisfied” on the right. It’s not obvious to the layperson, but an expert in marketing research would immediately recognize that these categories were in the opposite order from how surveys typically present them.

Respondents completing the survey may have made their choices based on habit, instead of actually reading the category headings before making their selection.

As a result, the survey results are unusable: the organizer has no way of knowing whether the satisfaction ratings accurately reflect the respondents’ opinions. The DIY tool failed in this case because the application required expertise in marketing research.

The bottom line on DIY custom software is that you should use and even embrace this option when it can provide you with an advantage over traditional methods of software development. However, you need to employ strategic thinking to ensure that your efforts provide the desired result.

For more on this topic, check out our series on the custom software buy versus build spectrum.

Custom Software: Four Moments of Truth

Moments of Truth for Custom Software

During a recent leadership conference the Entrance team began brainstorming how to make our custom software consulting even better. The leadership team has since started an active conversation among our consultant team on this topic.

One of the main points the speaker made was that every business has moments of truth that make all the difference. For a restaurant, great food and service can be destroyed by a dirty floor or cockroaches. For a clothing store, the most stylish dresses can’t make up for long lines and unfriendly clerks.

One of the Entrance values is “Improve everything,” or as some of us say, “Suck less every day.” As a potential client, you may be wondering how we live out this value.

We see moments of truth as one huge opportunity to bring this value front and center. The list below comes directly from the Entrance custom software team. We see these as just a few of the places where we strive to improve the quality of our work every day!

Four Moments of Truth in Software

  • Any sprint demo

This is the first chance that clients have to see how the Agile methodology works. It isn’t just about selling an idea: the demo has to show that we can meet our client’s needs and efficiently deliver software that works.

  • Fixing custom software bugs

Every custom software application gets bugs once in a while. A good development team will identify the problem and fix it as quickly as possible. It’s just not acceptable to say a bug is fixed if it isn’t.

  • Owning mistakes

By the same token, every team makes mistakes. It’s how the team owns up to them and makes things right that defines this moment of truth.

For one client, the developer communicated his mistake directly and then quickly fixed it. As a result, the client appreciated his work even more than they would have if there had been no mistake at all!

  • Requirements sign-off

This is one of those steps near the end of the custom software process that can make all the difference in terms of satisfaction. The development team and the client sit down to review what was promised and what has been delivered.

This can help bring to the surface any gaps in the final deliverable. If any are discovered, the team can develop a plan for making it right.

Improve Everything with Custom Software

Improving everything is a value that the Entrance team must live out every day. In addition, all of these moments of truth involve a degree of transparency.

As a client, it’s your job to be clear about what you need and to stay engaged throughout the process. The result of transparency on Entrance’s side is that you always know where your project is and how we’re delivering on your business need.

For more on quality custom software check out our Agile series, “Getting the Most for Your Money.”

Business Intelligence Deployment Misconceptions

Deploying Business Intelligence

Business intelligence, commonly referred to as BI throughout the industry, is technology that allows a business to obtain actionable information it can then use in its day-to-day operations. While business intelligence solutions certainly have their fair share of advantages, it is important to realize that they are not the be-all, end-all source of guidance that many people think they are.

There are certain business intelligence deployment misconceptions that businesses fall into over and over again, to their detriment. Understanding these misconceptions will allow you to avoid them and use BI to its fullest potential.

The Benefits of Business Intelligence

  • The information provided is accurate, fast and most importantly visible to aid with making critical decisions relating to the growth of a business, as well as its movement.
  • Business intelligence can allow for automated report delivery using pre-calculated metrics.
  • Data can be delivered using real-time solutions that increase their accuracy and reduce the overall risk to the business owner.
  • The burden on business managers to consolidate information assets can be greatly reduced through the additional delivery and organizational benefits inherent in the proper implementation of business intelligence solutions.
  • The return on investment for organizations with regards to business intelligence is far reaching and significant.

Business Intelligence Deployment Misconceptions

One of the most prevalent misconceptions about business intelligence deployment is the idea that the systems are fully automated right out of the box. While it is true that the return on investment for such systems can be quite significant, that is only true if the systems have been designed, managed and deployed properly.

A common misconception is that a single business intelligence tool is all a company needs to get the relevant information to guide itself into the next phase of its operations. According to Rick Sherman, the founder of Athena IT Solutions, the average Fortune 1000 company implements no fewer than six different BI tools at any given time.

All of these systems are closely monitored and the information provided by them is then used to guide the business through its operations. No single system will have the accuracy, speed or power to get the job done on its own.

Another widespread misconception is the idea that all companies are already using business intelligence and that your company has all the information it needs to stay competitive. In reality, only about 25 percent of business users have reported using BI technology in the past few years, and that figure is actually a plateau: growth has been stagnant for some time.

One unfortunate misconception involves the idea that “self-service” business intelligence systems indicate that you only need to give users access to the available data to achieve success. In reality, self-service tools often need additional support over what most people plan for.

This support is also required on a continuing basis in order to prevent the systems from returning data that is both incomplete and inconsistent.

One surprising misconception about the deployment of business intelligence is that BI systems have completely replaced the spreadsheet as the ideal tool for analysis. In truth, many experts agree that spreadsheets are, and will continue to be, the most pervasive business intelligence tool for quite some time.

Spreadsheets, when used to track the right data and perform the proper analysis, have uses that shouldn’t be overlooked. Additionally, business users that find BI deployment too daunting or unwieldy will likely return to spreadsheets for all of their analysis needs.

According to the website Max Metrics, another common misconception is that business intelligence is a tool that is only to be used for basic reporting purposes.

In reality, BI can help business users identify customer behaviors and trends, locate areas of success that may have previously been overlooked and find new ways to effectively target a core audience. BI is a great deal more than just a simple collection of stats and numbers.

For more on this topic, check out our series, “What is business intelligence?”

Custom Software Delivers: Ten Reasons Your Project May be Value Challenged

Bringing Value to Custom Software

For the past few weeks, Entrance’s custom software consultants have been engaged in an ongoing discussion about producing quality final products that give our clients value. This week, I’d like to share a few things to look out for during a project that might indicate it is value challenged.

10 Warning Signs for Bad Software

In a collaborative Yammer discussion, several Entrance consultants shared the following ten things to look out for as your custom software project moves along.

  • Bloated backlog
    The list of additions to your custom software application is so big that the deadline for completion keeps getting pushed, or no one can prioritize what should come first.
  • Indecision about what to do next
    Maybe your team isn’t working in two week sprints, or perhaps the overall end goal of the project was never well defined in the first place. Either way, a lack of good planning can prevent your team from working in an organized way towards an agreed upon end-goal.
  • Huge gaps between what is expected and what is actually delivered
  • Rushing to deliver more and sacrificing quality to do so
  • Allowing excessive scope creep (or creating it)
    Last week your manager decided to add credit card processing to your custom web app; this week, the team decides to add another customer-facing feature. More functionality can be great, but it also pushes deadlines, stretches budgets, and can have unforeseen consequences in the long run that can affect the quality of your final product.
  • Inability to accurately estimate deliverables, causing delays or missed deadlines
  • Inconsistencies in design, coding, methodologies and performance
  • Inconsistent nomenclature throughout a given solution
    Every custom software application should be written in such a way that a new developer can come in and see how and why it was created that way. Consistent nomenclature also contributes to a good user experience because it makes software easier to navigate.
  • Forms, reports and dashboards that do not reflect the “As a ___” part of their respective user stories
  • Solutions that are created in a vacuum separate from the client

For more on producing custom software that has good value for your company, check out our three part series, “Agile and Custom Software: Getting the Most for Your Money.”

Data Management for Oil and Gas

Keeping Up With Data Management

Data management involves handling data throughout its life cycle in addition to the infrastructure upon which it resides. The amount of data that oil and gas companies generate is continually growing, as is the infrastructure upon which that data resides.

This is primarily due to the increase in data collection capability, although business reorganizations such as mergers, acquisitions, and joint ventures also contribute to the volume of data in this industry. Changes in storage methods such as cloud computing further complicate data management for oil and gas enterprises.

The best practices in data management often include an open strategy that allows multiple solutions to inter-operate effectively.


Modern 3D seismic surveys are one reason for larger data sets in the oil and gas industry. NetApp reports that this surveying method produces 16 times more data than a traditional 2D survey, and that a 4D survey, which monitors changes over time, requires a further 32-fold increase over a 3D survey.

New algorithms also increase data volume by using up to 40 well attributes, such as dip azimuth, event continuity, instantaneous frequency, and instantaneous phase. Rising oil prices contribute to data volume by increasing the number of fields that are now cost-effective to develop.

Technological advances such as gravity and magnetic surveying, ground-penetrating radar, and pore-pressure prediction also increase data sets by requiring surveyors to digitize more data. All of these factors result in an annual increase in data volume of about 30 to 70 percent, with a corresponding increase in data management costs.
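
As a rough illustration of how these multipliers compound, here is a short sketch assuming a hypothetical 1 TB 2D baseline and the midpoint of the 30 to 70 percent growth range cited above:

```python
# Sketch of seismic data-volume growth using the multipliers cited above.
# The 1 TB 2D baseline and 50% annual growth rate are hypothetical figures
# chosen only for illustration.

base_2d_tb = 1.0              # hypothetical 2D survey size, in terabytes
size_3d_tb = base_2d_tb * 16  # 3D survey: ~16x the data of a 2D survey
size_4d_tb = size_3d_tb * 32  # 4D survey: a further ~32x over 3D

annual_growth = 0.50          # midpoint of the 30-70% range

def projected_volume(current_tb, years, growth=annual_growth):
    """Compound the annual growth rate over a number of years."""
    return current_tb * (1 + growth) ** years

print(f"3D survey: {size_3d_tb:.0f} TB")
print(f"4D survey: {size_4d_tb:.0f} TB")
print(f"4D archive after 5 years: {projected_volume(size_4d_tb, 5):.0f} TB")
```

Even from a modest baseline, the survey multipliers and compounding annual growth quickly push archives toward petabyte scale.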


The classification of data in the oil and gas industry is a challenging data management task. The growth of the data sets and changes in infrastructure increase the difficulty of locating a specific piece of data, which often results in data duplication.

The turnover of personnel and changes in their responsibilities also lead to problems in identifying the owner of a particular data set. These factors contribute to the large amount of data that isn’t stored in a structured database, which some analysts estimate at 70 to 80 percent of the total.

This large quantity of unstructured data complicates the process of planning storage expenditures and archiving data.


The increasing frequency of business reorganizations further complicates data management in the oil and gas sector. The participants in these reorganizations typically perform an inventory of the data set that each party possesses to identify duplicate data.

They must also analyze these data sets to determine which should be archived or deleted. This process allows the business to avoid purchasing data it already owns, as in lease block sales.

Data Storage

The proliferation of storage techniques for big data also makes data management more difficult. The traditional solution for sharing data is the Network File System (NFS), a file-sharing protocol originally developed for UNIX operating systems.

However, data storage in oil and gas is currently trending toward the Common Internet File System (CIFS), a protocol most closely associated with Windows operating systems. A transition from one file-sharing protocol to the other involves the creation of duplicate data, which increases the cost of data storage.


Effective data management requires managing the infrastructure as well as the data itself. The current challenges in data management include the increasing size of the data set, frequent business reorganizations and migrating between file storage systems.

Additional tasks in modern data management include migrating to another storage tier, identifying data suitable for deletion and planning for disaster recovery. Challenges in infrastructure management include achieving the optimum balance between hardware and software to meet business needs such as cost reduction, flexibility, scalability and regulatory compliance.

For more, read this article on why analytics matter for effective oil and gas data management

Data Management for Oil and Gas: Unconventional Data versus Big Data

Unconventional Data Management

In Unconventional Hydrocarbons, Big Data, and Analytics, Allen Gilmer of Drilling Info (DI) explores unconventional data management in the hydrocarbon industry. Each day, about 50 experiments take place.

Whenever an experiment succeeds, property values change instantaneously across the board. Knowing and acting on the data resulting from these experiments, and incorporating it into your own information base, gives you more substantive quantification.

Incorporating geological and petrophysical signatures associated with hydrocarbon production enhances the success rate for discovering hidden plays.

In the last two years, DI has re-architected its latest system to integrate interpreted geological data while continuously updating statistical grading of acreage and operations in popular unconventional plays.

Over 100 terabytes of information from over two million historical inventories of scout cards and well logs from worldwide sources allow clients to accelerate their productivity.

Some of the numerous benefits of this emergent architecture are:

  • DI’s data and interpreted knowledge products will be more readily available on other platforms.
  • Users can create immersive, secure environments, enabling them to view work from distributed teams.
  • Cross-disciplinary collaborative workflows, through the DI web or as managed client components, will contribute to optimal data management effectiveness.

Improving Forecasting with Data Management

The Society of Petroleum Engineers published Probabilistic Performance Forecasting for Unconventional Reservoirs with Stretched-Exponential Model, which investigates the value of this approach in oil and gas exploration.

Unlike deterministic estimates, probabilistic approaches provide a measure of the uncertainty in reserve estimates. In a probabilistic model, statistical analysis is used to produce estimates based on historical data and a set of current traits, determining the probability of an event occurring again.
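
For concreteness, the stretched-exponential decline model expresses the production rate as q(t) = q_i · exp(−(t/τ)^n). Below is a minimal sketch of how a probabilistic forecast might wrap that model by sampling its parameters over plausible ranges; the initial rate and parameter ranges are hypothetical and are not taken from the SPE paper:

```python
# A minimal sketch of probabilistic decline forecasting with the
# stretched-exponential model q(t) = q_i * exp(-(t / tau) ** n).
# The initial rate and parameter ranges below are hypothetical,
# chosen only to illustrate the approach.
import math
import random

def rate(q_i, tau, n, t):
    """Production rate at month t under the stretched-exponential model."""
    return q_i * math.exp(-((t / tau) ** n))

def cumulative(q_i, tau, n, months):
    """Monthly summation as a simple stand-in for the analytic integral."""
    return sum(rate(q_i, tau, n, t) for t in range(months))

random.seed(42)
q_i = 10_000                      # initial rate (hypothetical units/month)
estimates = []
for _ in range(5_000):
    tau = random.uniform(5, 30)   # characteristic time in months (hypothetical)
    n = random.uniform(0.3, 0.8)  # stretching exponent (hypothetical)
    estimates.append(cumulative(q_i, tau, n, 120))  # 10-year cumulative

estimates.sort()
p90, p50, p10 = (estimates[int(len(estimates) * f)] for f in (0.1, 0.5, 0.9))
print(f"P90 / P50 / P10 cumulative estimates: {p90:,.0f} / {p50:,.0f} / {p10:,.0f}")
```

The spread between the P90 (conservative) and P10 (optimistic) estimates is exactly the measure of uncertainty that deterministic forecasts cannot provide.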

The Polish Geological Institute created a report assessing recoverable shale gas and oil resources, using a broad range of geological, geochemical, geophysical, and geomechanical data. For a specific basin under analysis, some key data are still to be determined.

Data such as the porosity and permeability of the shale reservoir and the gas composition fall into this category.

As a result, some assessment data is based partly on assumptions from analogue basins, which increases the analytical error bars in the calculation of hydrocarbon resources.

Data Lineage as part of Real Time Data Integration

Part of employing unconventional data may require using data lineage. With this technique, time becomes a characterizing aspect of data. Data lineage can trace data from its origin to its current state.

Many sources and transformations may have contributed to the final value within a given time segment. A selected instance of data may possess a lineage path that runs through cubes and database views, datamarts, intermediate staging tables and scripts.

The ability to view the lineage path visually fosters greater comprehension.
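
A minimal sketch of what such a lineage record might look like, where each derived value keeps references to its sources and to the transformation that produced it (the structure and names are illustrative, not any particular product's API):

```python
# A minimal sketch of recording data lineage: each derived value carries
# references to its sources and the transformation that produced it.
# Names and structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class LineageNode:
    name: str
    value: float
    transform: str = "origin"
    sources: list = field(default_factory=list)

    def trace(self, depth=0):
        """Walk the lineage path back to the original inputs."""
        lines = [f"{'  ' * depth}{self.name} = {self.value} ({self.transform})"]
        for src in self.sources:
            lines.extend(src.trace(depth + 1))
        return lines

# Example: a report metric derived from two daily readings via a staging table
raw_a = LineageNode("daily_reading_a", 120.0)
raw_b = LineageNode("daily_reading_b", 80.0)
staged = LineageNode("staging_table_row", 200.0, "sum", [raw_a, raw_b])
report = LineageNode("report_metric", 100.0, "average", [staged])

print("\n".join(report.trace()))
```

Calling trace() on the final report metric walks the lineage path back through the staging table to the original readings, the kind of visual path described above.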

A System for Integrated Management of Data, Uncertainty, and Lineage

Stanford’s InfoLab offers Trio, a new database management system designed specifically to process data, data uncertainty, and data lineage. Trio is based on an extended relational model for uncertainty and lineage called ULDB, and its SQL-based query language is TriQL.

Representative applications include scientific and sensor data management, information extraction systems, data cleansing, and approximate and hypothetical query processing.

Petris Technology Inc., a supplier of data management and geosciences applications to the global oil and gas industry, has implemented a Statoil project providing borehole data management technology.

Data management for unconventional resources provides the opportunity to create a new body of knowledge that will help guide producers, built on a history of operational field data. The extraction of oil from oil sands and oil shale requires the advanced operations necessary for the effective use of proppant and highly expensive fracturing fluids.

For more on data management in the field, check out this post on sorting out a common language for well data.

Mergers and Acquisitions: Getting the Numbers Right

Merger and Acquisitions and Valuation

Amassing the right data is paramount to being able to arrive at an accurate valuation during mergers and acquisitions. Without data, it is easy to make decisions based on inaccurate intelligence.

Take the example of Evolution Capital’s 2009 purchase of the Accurate Group, a small mortgage servicing company based in Charlotte, North Carolina. Evolution Capital, a small Cleveland-based private equity firm, saw potential in Accurate’s numbers, even though the physical appearance of the company left a lot to be desired.

First Impressions During an Acquisition

According to Inc. magazine, when Evolution co-owners Brendan Anderson and Jeff Kadlic first visited Accurate, the company’s servers, which powered its most promising asset, a software program capable of handling many thousands of real estate transactions at the same time, were kept dry by a canopy of blue tarps hung under the company’s leaking roof and cooled by discount-store oscillating fans.

Based on this inauspicious beginning, it was difficult to judge whether their assets added up to a good investment.

Doing the Math

When the pair at Evolution Capital began crunching the numbers, they found that although the company’s EBITDA (earnings before interest, tax, depreciation and amortization) was lower than Evolution’s target number, Accurate’s profit margin was a healthy 10 percent.

They bet that sales growth could beef up earnings. They also found an enthusiastic, knowledgeable CEO to run their new company, allowing Accurate’s owner to step aside and do what he preferred: spending time on his yacht. They were right in their estimation.

Evolution purchased Accurate in 2009 for $6 million. By 2012, the company’s EBITDA had increased from $350,000 to $7 million and Evolution began to prepare the company for sale.

They found a buyer in late 2012 and sold Accurate for $55 million, earning the company’s investors a healthy 100 percent return on their investment, boosting Evolution’s total return to 31 percent and bumping up the company’s reputation as a financial player.

Making Investments Work

The lesson of the Evolution/Accurate acquisition and subsequent sale is to trust your instincts, but also be sure of the numbers. Only after Anderson and Kadlic verified that the financials were good could they look beyond the pitiful computer room at Accurate to a product worth investing in.

For more on valuing assets before mergers and acquisitions, check out this case study!

The Data Management Business Case

Transforming the Numbers With Data Management

Increasingly complicated data sets, like the influx of production data for oil and gas, can require more robust data management strategies than we’re accustomed to. Excel or other small ad-hoc databases just aren’t up to the job of complex data warehousing. In addition, these more primitive solutions don’t provide the visibility or dashboards that are required by most decision makers.

So while the reasons for implementing data management solutions are clear, making the business case and getting the job done isn’t so easy.

As SearchDataManagement highlights in a white paper about how master data management requires true business sense, “This is not an IT refresh like updating phones with Windows 8 — to go at it like an IT initiative is the kiss of death…this is a business initiative aligned with the strategic direction of the company.”

Selling Stakeholders on Data Management

As I’ve highlighted in a previous post, establishing an understanding of your problem, and then developing a solution that matches the scale of that problem is very important.

In addition, many champions of data management will find that they must be good marketers of the solution. As SearchDataManagement highlights, the best recipe for success is, “ignoring the technical aspects of the situation, focusing on the bottom line, and explaining the consequences of inaction.”

Drilling Down Into the Business Problem

SearchDataManagement’s article goes on to highlight a few useful questions that every business should consider as they evaluate data management tools.

  • Why do we need this tool?

It’s tempting to go with a solution that feels comfortable, but you run the risk of choosing a solution that doesn’t fit the actual need at all. Check out this blog post on problem solving for software here.

  • How is it going to benefit the company?

Starting out with an idea of what success looks like for your data management project will make your efforts more focused. Read this blog post on user stories for more on establishing benefits for your company.

  • What is the payback?

Think broadly about payback. Hours saved or the ability to make better decisions due to dashboards all tie into the payback, and have a direct financial tie-in to your company’s bottom line. See this post on the power of business intelligence here.

  • How long will it take to realize value?

From the start, your team should have realistic expectations for your data management project and the timeline to success. For more on what data management success should look like, read this case study.

  • What happens if we fail to act?

Good data management practices can represent a true competitive differentiator. When your competitors have intelligence you don’t, you’re only falling behind. Read this blog for more on why the investment in data management is worth it.
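
To make the payback question above concrete, here is a small hypothetical sketch in which the only benefit counted is analyst hours saved; all figures are invented for illustration:

```python
# A simple sketch of the payback question: how long until cumulative
# savings cover the project cost? All figures are hypothetical.

def payback_months(project_cost, monthly_savings):
    """Months until cumulative savings exceed the up-front cost."""
    months = 0
    saved = 0.0
    while saved < project_cost:
        saved += monthly_savings
        months += 1
    return months

# Hypothetical example: 40 analyst-hours saved per month at $75/hour
hours_saved = 40
hourly_rate = 75
monthly_savings = hours_saved * hourly_rate  # $3,000/month
project_cost = 45_000                        # hypothetical build cost

print(f"Payback in {payback_months(project_cost, monthly_savings)} months")
```

In practice the savings side would also include harder-to-quantify benefits such as better decisions from dashboards, so a calculation like this is a floor, not a ceiling.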

Data Management for Upstream: Six Sigma and Control Charts

Data Management is an Evolving Strategy

The Upstream industry has proven with the shale boom that innovation is a gamble worth taking. As the industry settles into unprecedented drilling and production going forward, one initiative that can push E&P companies even further is data management.

John Weathington from TechRepublic published an article this week called, “The best big data strategy is the strategy that keeps adapting.” One of his main points was data is only a sample:

“All samples have error, but the most insidious sampling errors introduce themselves while you’re in operation.”

Control Charts and Data Management

One strategy Weathington offers for effectively managing data as it is being collected is the Six Sigma strategy of Control Charts. “What makes it a Control Chart, is that it also monitors the stability or consistency of your average and variance over time.”

To give an oil and gas example, say an Upstream company wants to know the number of days it takes to drill to target depth in a well. A Control Chart would track progress toward this goal.

When you set up the Control Chart, you would create “different rules that determine whether your information is following a stable pattern.” Any point outside the control limits, such as a drilling delay or less progress than expected in a given number of days, would indicate to the project owner that action should be taken.
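
As a minimal sketch, a Shewhart-style chart sets control limits at the baseline mean plus or minus three standard deviations and flags new wells that fall outside them; the well data here is hypothetical:

```python
# A minimal sketch of a Shewhart-style control chart for days-to-target-depth.
# Limits come from a baseline period (mean +/- 3 standard deviations), then
# new wells are checked against them. All well data here is hypothetical.
import statistics

baseline = [18, 21, 19, 20, 22, 19, 20, 18, 21, 20]  # historical days per well
mean = statistics.mean(baseline)
sigma = statistics.pstdev(baseline)
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

new_wells = {"Well A": 19, "Well B": 24, "Well C": 20, "Well D": 31}
for well, days in new_wells.items():
    status = "in control" if lcl <= days <= ucl else "OUT OF CONTROL -- investigate"
    print(f"{well}: {days} days ({status})")
```

A full Six Sigma implementation adds run rules (trends, shifts, cycles) on top of the simple limit check, but even this basic version tells the project owner which wells warrant a closer look.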

Upstream and the Value of Iteration

Data management is important for every organization. For Upstream in particular, using data to iterate and apply learnings to the next well can mean better production with less drilling.

Control Charts have the purpose of ensuring sustainable success over time. Based on this data, it is then possible to evaluate how a given drill bit or rock formation plays into time to target depth. When the company acquires a new property, it is possible to use these previous lessons learned to apply the best techniques to that well.

It may seem overwhelming to decide which factors to control for a given project. For E&P, applying data management principles to drilling times is an obvious choice because it ties back so clearly to profitability.

Implementing New Data Management Strategies

Once your company has decided to implement new data management control strategies, what roadblocks should you look out for?

1. Information Support

Is the data you need in an easily accessible dashboard? Are reports timely and accurate? How much time do you have to spend manually processing data before it is ready for decision making?

2. Collaboration Tools

How easily can you share information or assets with co-workers? Is versioning a problem? When the owner of a process leaves, is that process documented in such a way that it can be duplicated by someone else?

3. Company Culture

A culture of questioning and innovation is key to making data management control work. Is management open to change and a certain level of risk when the data supports it?

Big Data and Your Well

Big data is one of those buzzwords we hear a lot about without knowing how it applies to our business. Nowhere is the idea of big data more evident than in an active well. The stream of data is constant, and because it is both temporal and geo-located, Control Charts are one very practical way to monitor these multiple factors, their effect, and how improvements can be applied.

For more on making an organizational switch to data based decision making, check out this post on asking the right questions.

Read this blog about Six Sigma and master data management for more on DMAIC…