3 Reasons Why Data Management Should Be Strategic Rather Than Tactical


During the 18th International Conference on Petroleum Data Integration, Information and Data Management (commonly known as the “PNEC” conference) on May 20-22, 2014, there was a session on the first day dedicated to the topic of professional data management. During the panel discussion, an attendee asked the following question: why do we even need a professional role dedicated to managing data, given that data services play a supporting role to various operations?

Trudy Curtis, CEO of the PPDM Association, answered by emphasizing that data management (especially data content management) should not be viewed as a tactical function but as a strategic one, requiring focus and planning so that businesses can truly benefit from the potential of strong, quality data.

Many businesses indeed do not view data management as a strategic function. Below, I will give three reasons why data management deserves to be a key strategic function within any modern digital business.

Data is the lifeblood of modern business workflows

When considering the business processes, or workflows, of an organization, many think of them as consisting mainly of operational procedures. In many of these workflows, the role of data is supportive (i.e., data provides the inputs and artifacts of the workflow, but the data itself does not alter how the workflow is run). Enter the digital age, though, and you suddenly have important workflows that cannot run without putting data management into a much more strategic, proactive role.

Suppose an upstream company manages leases in which the division of interest changes once the wells on the land reach a certain level of production. For example, when the company produces X barrels of crude oil, a particular division of interest doubles (while the interests of other entities shrink proportionally). The lease data and the production accounting data are stored in separate places. In this case, two challenges arise (a rough sketch of this data-driven check follows the list below):

  • If the production level data is not accurate (a data quality issue), it may trigger the change in division of interest at the wrong level, or fail to trigger it at the right level. This brings losses to the company and/or damages relationships with customers.
  • If the production level data is not accessible to the lease management department (a data accessibility issue), then the whole workflow relies entirely on someone in the accounting department notifying the land department so the necessary change can be made. Not only is this cumbersome, but the probability of missing the change notice is very high.
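
As a rough sketch of the data-driven check implied by this scenario, the snippet below flags leases whose division of interest should change once cumulative production crosses a threshold. The lease fields, thresholds, and volumes are made up for illustration, not a specific vendor schema; the point is that the trigger is only as reliable as the lease and production data feeding it.

    # Minimal sketch of a production-driven division-of-interest check.
    # Lease fields, thresholds, and volumes are hypothetical illustrations.

    def check_doi_triggers(leases, production_totals):
        """Return leases whose cumulative production has crossed the
        threshold that changes the division of interest."""
        triggered = []
        for lease in leases:
            produced = production_totals.get(lease["lease_id"], 0.0)
            if produced >= lease["doi_threshold_bbl"] and not lease["doi_adjusted"]:
                triggered.append(lease["lease_id"])
        return triggered

    leases = [
        {"lease_id": "L-001", "doi_threshold_bbl": 100_000, "doi_adjusted": False},
        {"lease_id": "L-002", "doi_threshold_bbl": 250_000, "doi_adjusted": False},
    ]
    production_totals = {"L-001": 104_500, "L-002": 98_200}  # from production accounting
    print(check_doi_triggers(leases, production_totals))  # ['L-001']

If the production totals are wrong (data quality) or simply not available to the land department (data accessibility), a check like this either fires incorrectly or never runs at all.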

As you can see, today’s workflows increasingly depend on data quality, accessibility, governance, etc. to ensure the execution quality of the process. To minimize the negative impact of data issues, data management needs to happen at a strategic level, so that it can plan ahead and ensure that all processes in the company are well supported by the data they need. If there is no plan, the data will not be there when you need it.

Unplanned data cannot give meaningful information

One wave the industry is catching is Business Intelligence (BI). By utilizing data integration, dashboards, data warehouses, etc., BI provides a powerful platform for generating useful information and helping the business make better decisions. There is, though, not enough discussion about the underlying requirement: data quality.

Simply put, the data needs to be of a certain quality to support BI objectives. One common challenge is that a useful rollup of a dataset often requires data that was never captured. BI projects rely on well-captured data to be successful and useful; if the data has not been captured, a BI project will not miraculously fix that problem.

As the saying goes: “garbage in, garbage out.” BI projects also rely on accurate, precise data to do correct rollups and provide adequate, realistic information. In fact, the most costly portion of many BI projects is the data cleansing required to make them successful.
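
As a toy illustration of the point (the records and field names below are made up), a simple rollup silently splits or misattributes volumes when a grouping attribute is inconsistent or was never captured; closing exactly this kind of gap is what the cleansing effort pays for.

    # Toy "garbage in, garbage out" rollup. Records and field names are made up.
    records = [
        {"well": "A-1", "field": "EAGLE FORD", "oil_bbl": 1200},
        {"well": "A-2", "field": "Eagle Ford", "oil_bbl": 950},   # inconsistent spelling
        {"well": "B-1", "field": None,         "oil_bbl": 800},   # never captured
    ]

    rollup = {}
    for r in records:
        # Without this normalization step, "EAGLE FORD" and "Eagle Ford" land in
        # separate buckets, and missing values silently form their own bucket.
        key = (r["field"] or "UNKNOWN").strip().upper()
        rollup[key] = rollup.get(key, 0) + r["oil_bbl"]

    print(rollup)  # {'EAGLE FORD': 2150, 'UNKNOWN': 800}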

If the data has already been managed strategically, ensuring a certain level of quality, governance, and availability, the projects and operations that rely on it will be much more cost-efficient and successful.

Data maturity needs to grow with the business itself

Many people talk about data growth in terms of volume. Data volume is certainly a key factor, but it would be unwise to overlook the fact that data maturity needs to grow with the business itself as well. It will not magically catch up with the business, and ignoring it in the business roadmap can lead to negative impacts.

Realistically speaking, setting up a mature data model and strategy is costly and time-consuming. Small businesses need quick wins to maintain positive cash flow; most cannot afford high data maturity, so “getting the job done” is what they focus on.

As the organization grows, though, its data has to mature with it. Business requirements expand, or change altogether, and the original data model, quality, governance, etc. will no longer be able to support the growing operations.

Projects to improve data quality, set up governance, ensure accessibility, etc. are expensive and time-consuming; therefore, these data improvement projects need to be planned ahead, in accordance with the organization’s roadmap.

Moving forward

Just like IT, data management has traditionally been viewed as a supportive, tactical function. In this new digital age, however, it deserves better. Align data management with your company’s strategic roadmap, and your organization will have a head start on quality data, ensuring operational efficiency and cost savings in the long run.

The Importance of Aligning Business Units and IT Departments

During the PNEC conference on May 20-22, 2014, there were multiple presentations showcasing various levels of success in IT and data management projects. One key theme that kept appearing was “aligning business units with IT departments” for a joint implementation effort. However, most of the presentations offered no further explanation of how to make this happen.

This problem is widespread across industries, but it is particularly severe in the energy space. In many organizations, IT departments work in silos, and business units do not know how to manage them.

The traditional role of the IT department

Even though technology has evolved tremendously since it entered the corporation, the way IT departments are managed has not changed significantly. In many organizations, the main responsibility of the IT department is to support various operations, and the role of IT is tactical, not strategic.

These IT departments usually have a monolithic structure: by and large, the IT department is a distinct sub-organization within the corporation. With this structure, IT operates in a silo, out of sync with the organization’s development. While IT needs to support multiple business units, it does not blend well with any one of them, since it has its own special purpose and requires its own talent.

Because business units normally do not have a solid understanding of how IT works, the IT department is a mystery to the rest of the corporation. Operating as a black box (worse still if some or all IT functions are outsourced), there is no measure of how aligned the IT department is with the business, as long as the bare minimum of digital functions seems to work most of the time.

There are many articles on the internet about managing IT departments in new ways, for example decentralizing the monolithic IT department into smaller units spread across cross-functional teams. This may be an enormous task for many big corporations. However, we can start managing IT projects better by discerning their different purposes and functions.

Organize IT projects by purpose

There is definitely no single way to categorize IT projects. Here, I will put IT projects into four categories by purpose, for better clarity and ease of management:

Digital Fabric: the infrastructure

This is the backbone of every IT project and all digital capabilities. These projects include everything from hardware to networking to server software to technology toolkits. Their main goal is to provide the organization with the best capability for developing its digital assets. The talent championing this area is not necessarily end-user facing; rather, it needs deep technical excellence to ensure a robust, development-friendly platform that other teams can build on, with the best availability (uptime).

Common Services: cost- and operation-efficiency focus

IT departments have an inherent function to support end users in various ways. Many of these services are quite commonplace; they are about ensuring operational efficiency, reducing cost, and giving users (internal and external) non-invasive ways to continue their day-to-day operations. Implementing file-sharing services and shopping carts are examples. Talent in this area excels at understanding what users want, user interface design, and serviceability.

Enterprise Solutions: extension of business capabilities

The truly interesting IT projects are usually not those focused on cost reduction; they belong to this category instead. The end goal of these enterprise projects is to extend and expand the company’s capabilities in order to generate revenue and gain competitive advantage. Enterprise software projects, custom application development, master data management, etc. all fall under this category. These are also the projects that become deeply embedded in business processes; once implemented, the solution greatly enhances the process, achieving results far more efficient and capable than human effort alone. The cost, however, is that solutions this embedded in processes are not easily swappable, and the project duration tends to be longer. That is why these projects need to be championed by people who combine technical expertise with knowledge of the specific business operations, such as business analysts. Enterprise solution projects require the most strategic attention in planning and resource management.

Analytics: cutting-edge insights and research

These projects help the business make better decisions. Information and insights are the keys. Business Intelligence (BI) is one example. These projects are usually quite cutting-edge, experimenting with and discovering new ways of looking at data and information to help the business gain new insights, make better decisions, and discover new revenue streams. People who excel in this category are fearless and enthusiastic innovators.

 

How to align business units and IT departments

As the above shows, even though traditional IT departments are monolithic organizations, they handle various types of projects that have different goals and require different kinds of talent. One of the biggest hurdles for business units dealing with IT projects is the wrong project-talent match: the project may be about cost reduction, and IT professionals who are more enthusiastic about innovation may find it boring. Another hurdle is that business units often manage custom solution projects as if they were common services, not realizing the timeline, resources, and strategic attention these projects require.

Entrance has extensive experience in custom solution development and BI projects. We have a proven track record of implementing solutions that extend our clients’ business capabilities to unprecedented levels, unlocking competitive advantage and preparing for the organization’s next wave of growth. We understand how to tie solutions to business processes for optimal efficiency, and how to help business owners make the best decisions with our BI expertise.

3 Signs that your data is trapped and 3 simple steps to unleash it

I attended the 18th International Conference on Petroleum Data Integration, Information and Data Management (commonly known as the “PNEC” conference) on May 20-22, 2014.  As I reflected on the various papers and talks, it was clear that “Big Data” is both a significant challenge and an amazing opportunity for upstream oil and gas.  I was inspired by a talk on the topic of Big Data and it got me thinking about how you actually go about unlocking the power of Big Data.

3 signs that your data is trapped

  1. Too much data
    Consumers of data in the business typically lack the technical skill set required for data integration and analysis.  In order to effectively mine your data, you must be able to assemble the data in a readily consumable format.  Due to the volume of data being generated daily in a modern upstream oil and gas operation, there is so much data that any one business user won’t know what exists, where it is, or how to get it.
  2. Too many repositories
    As a result of the volume of data and the specialization of various departments (e.g., G&G, drilling, production, accounting, etc.), each department typically has its own preferred system of record.  While having specialized systems is beneficial, if not outright required, it results in “stovepipe systems” or data silos.  Additionally, because common business metadata is logically shared between departments, data is duplicated and data quality suffers.  With no single version of the truth, it is difficult to ascertain what a well name is, what the spud date was, or in many cases even what a “well” is (e.g., a completion, a wellbore, a wellbore completion, etc.).
  3. Unable to analyze effectively (can’t ask the “What if…” questions)
    Even if you have all of your data in one place, making it accessible to the business users in an easily consumable format requires specialized Business Intelligence software.  A modern BI platform allows you to put the power in the business users’ hands to get the insights they need without requiring IT to spoon-feed the information.  IT’s responsibility should be to put a solid and extensible platform in place to empower the creativity and natural curiosity of the subject matter experts within the business.

How do you unlock the value of your information assets?

  1. Identify the systems of record
    To get started, you have to know the systems in which the data resides and how it gets into those systems. This is a discovery process that is required because even if you think you know where everything is, more than likely you don’t have the full picture. Keep in mind that vast quantities of data may be locked-up in spreadsheet files on a user’s desktop or even on paper in a remote field office!
  2. Create an integrated repository
    Once you’ve identified the systems of record, the next step is to build a “Master Data Management” (MDM) solution that aggregates and synchronizes all of the key business metadata (a rough sketch of this consolidation step follows the list).  In addition, a comprehensive data warehouse should be layered on top of the MDM solution to provide a single location against which aggregate business intelligence analysis may be performed.  It is important to leverage standards when embarking on MDM and data warehousing projects (I talk about that more in another blog post).
  3. Use a modern Business Intelligence tool
    Excel, while it remains the Swiss Army Knife of business software, does not provide the powerful data visualization capabilities of a modern BI tool.  There are many great tools in the BI software space today.  Applications such as Tableau, Spotfire, Power BI and many others put business users in the driver’s seat with powerful yet easy-to-use data visualization features.  However, by themselves these tools are of limited use unless you’ve done your homework to build out MDM and a data warehouse for your business.  Ultimately, you want the business to be agile, i.e., able to quickly adapt and change direction based on insights from the organization’s collective data.  When the business users can ask the “what if” questions and easily get answers, then you know you’ve reached BI Zen.
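
As a rough sketch of the consolidation step in item 2, the snippet below pulls the same well’s metadata from two systems of record into one master record. The source systems, field names, and the “most recently updated wins” survivorship rule are hypothetical illustrations, not a specific MDM product.

    # Rough sketch of consolidating well metadata from two source systems
    # into a master record. Systems, fields, and the survivorship rule
    # ("newest update wins, but never overwrite a value with a blank") are
    # made up for illustration.

    land_system = {
        "WELL-17": {"well_name": "Smith 1H", "spud_date": "2013-11-02", "updated": "2014-01-10"},
    }
    production_system = {
        "0423100001": {"well_name": "SMITH #1H", "spud_date": None, "updated": "2014-03-05"},
    }
    # Cross-reference of (system, source key) pairs to a master well identifier
    xref = {("land", "WELL-17"): "MW-0001", ("prod", "0423100001"): "MW-0001"}

    def consolidate(sources, xref):
        master = {}
        for (system, key), master_id in xref.items():
            record = sources[system][key]
            current = master.setdefault(master_id, {"updated": ""})
            if record["updated"] >= current["updated"]:
                for field, value in record.items():
                    if value is not None:
                        current[field] = value
        return master

    sources = {"land": land_system, "prod": production_system}
    print(consolidate(sources, xref))
    # {'MW-0001': {'updated': '2014-03-05', 'well_name': 'SMITH #1H', 'spud_date': '2013-11-02'}}

A full MDM solution adds data quality rules, standard naming, and governance on top of this, and the data warehouse used for BI is then built against the consolidated master rather than against each silo.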

What does all of this mean in the end?

The ability to efficiently aggregate and analyze data is the goal of the “big data” movement. Buzzwords aside, ask yourself if you are extracting all of the insights that your data can provide. If not, then it’s time to Frac Your Data and Produce Insight. The energy software experts at Entrance have the experience and skills you need to unleash your trapped data.

How to leverage standards in the Oil and Gas Industry

The oil and gas industry is both blessed and cursed by the proliferation of various standards for data organization and exchange.  Standards are vital to efficiency in creating and managing a master data management or system integration initiative in any industry, none more so than oil and gas.  The Standards Leadership Council is an umbrella organization that represents the industry’s standards leaders.

Standards are leveraged in two key ways:

  1. Creating a single version of the truth within your business (master data management).
  2. Facilitating inter-system data exchange within your business or between multiple businesses (system integration).

Effectively implementing Master Data Management necessarily involves system integration, so it is important to consider both challenges together.  PPDM’s data model is the de facto standard for oil and gas data management.

How do you get started?

Oil and gas master data management always starts with:

  1.  Well naming and identification
    Human-friendly “text” names for wells frequently vary depending on the person, department or software referencing them, and they are never a good enterprise data key regardless of industry.  Standards such as the API Well Number are very useful but don’t fulfill the true need for a globally unique identifier, because API well numbers are typically only assigned after permitting.  How does your organization uniquely identify a well in its lifecycle before it is permitted?  A surrogate key must be established that is guaranteed to be unique.  Furthermore, you must establish a cross-reference table to disambiguate keys across all systems, including your master data management solution (a minimal sketch follows this list).  Have you identified all of the source systems in your organization, what they are used for and what data they share?
  2. Well data hierarchy
    Have you asked the question “What is a well?” in your organization?  If so, how many different answers did you get?  Probably more than one.  PPDM has led the way in defining industry standards for the well taxonomy.  Have you thought about the difference between Well Origin, Wellbore, Wellbore Segment, Wellbore Contact Interval, Wellbore Completion and Wellhead Stream?  The beauty of thinking about wells in this way is that every well necessarily has only one valid “Well Origin” at any point in time – this may seem simple, but it is vital to executing proper oil and gas data management.
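
Here is a minimal sketch of the surrogate key and cross-reference table described in item 1. The system names and identifiers below are made up for illustration; the point is that the master identifier exists from the moment the well enters any system, and the API number becomes just one more cross-referenced key once it is assigned.

    # Minimal sketch: assign a guaranteed-unique surrogate key to each well and
    # cross-reference it to the identifiers used in each source system.
    # System names and identifiers are made up for illustration.
    import uuid

    def new_master_well_id():
        """Surrogate key that is unique regardless of permit status or source system."""
        return str(uuid.uuid4())

    # Cross-reference table: (source_system, source_key) -> master well id
    xref = {}

    def register(source_system, source_key, master_id=None):
        """Link a source-system identifier to a master well, creating one if needed."""
        master_id = master_id or new_master_well_id()
        xref[(source_system, source_key)] = master_id
        return master_id

    # A well exists in the land system before it is permitted (no API number yet)...
    master_id = register("land_system", "WELL-17")
    # ...and the API well number is linked later, once the permit is assigned.
    register("regulatory", "42-231-00001", master_id)

    assert xref[("land_system", "WELL-17")] == xref[("regulatory", "42-231-00001")]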

Where do you go from here?

It depends on what your specific business objectives are for the master data management initiative.  Usually, normalizing the concept of “well status” is near the top of the list, because a status change in a well (or wellbore, wellbore completion, etc.) may be a workflow trigger in the lifecycle of the asset (e.g., a lease provision triggered by spudding or completion).

While the world of oil and gas standards may be daunting to contemplate, software experts such as those of us at Entrance can show you the way.

An Agile Approach to Data Management in Oil and Gas

I attended the PPDM Q2 Houston Luncheon last month, where Meena Sunderam of BP discussed data management challenges and emphasized the frequent disconnect between the business and IT.  He described the business’s common perceptions of the IT department when it is responsible for delivering data management projects:

  • Ultimately the project does not meet the business need.
  • IT is too slow to deliver.
  • The business feels compelled to use Excel as a work-around.
  • Inevitably “shadow IT” solutions are built without IT involvement.

I would like to propose an approach for addressing these issues by borrowing from the project management world. As a project manager, the challenges of “not meeting the need” and “too slow to deliver” strike a chord with me because they are classic symptoms of failed waterfall projects. Agile project management methodologies have evolved with the specific intent of addressing these commonly perceived failures of waterfall projects.

In waterfall methodology, the process is highly linear: sequential phases follow one after the other. Typically, requirements are gathered and documented up front. Once the requirements are defined, they are “set in stone” for the implementation phase.  For long-duration IT projects (multiple months), it is highly unlikely that nothing in the business will change over the course of the project. Furthermore, due to the human element of any project, it is also very likely that something will be inadvertently overlooked in the initial requirements, and that oversight could have a significant impact on the project outcome. An agile approach addresses these concerns in three key ways:

  1. Delivering something usable after every iteration.
  2. Involving the business stakeholders in the process continuously and frequently.
  3. Allowing the requirements to change after each iteration.

While agile is classically associated with software development projects, it is a conceptual methodology that is applicable to management of any project. Further adoption of agile approaches in the corporate IT environment would be very beneficial to the success of IT projects. At Entrance, we use agile scrum methodology across all of our projects because we strongly believe that it ultimately gets our clients what they really want in the most cost efficient and timely manner.

Entrance to present on Master Data Management (MDM) at PNEC 2014

Entrance is excited to be a sponsor and featured presenter at the 18th International Conference on Petroleum Data Integration, Information, and Data Management taking place in Houston, Texas, May 20-22, 2014 at the JW Marriott hotel.

The 2014 PNEC is a power-packed, two-and-a-half-day technical program featuring 48 in-depth technical presentations and panels led by data management professionals and experts from around the world. The sessions will focus on real-world issues, best practices, developments, and cross-discipline advances that address the ever-expanding and complex data demands in today’s oil and gas industry.
Learn More: PNEC 2014 Conference Program

WHY MDM PROJECTS FAIL AND AVOIDING THE MOST COMMON MISTAKES

Entrance President, Nate Richards, is presenting in the first Best Practices and Developments session on Tuesday, May 20th. Don’t miss Unexpected Insights from a Master Data Management Failure, which starts at 8:40 a.m.

Here is what Nate had to say about what attendees can expect to learn in his session:

 

“Providing workers with easy access to accurate, complete, and up-to-date information is often the critical factor between success and failure of an oil and gas initiative. Implementing a Master Data Management (MDM) solution is an important step toward ensuring the correct data is being used across the organization for critical decision-making. However, research demonstrates that the majority of MDM projects fail. In my session, I’ll highlight the main reasons why MDM projects fail and what the project owners and companies must do to avoid making the same costly mistakes.”

WE LOOK FORWARD TO SEEING YOU AT PNEC!

Entrance will also be exhibiting at PNEC. Visit us in booth 413 to learn how Entrance can help you “frac your data” by integrating your disparate data and information systems. In addition to our branded giveaways, we’ll also be distributing bound copies of our 2014 Energy Software Outlook.

Learn more:  Entrance Data Management Services

If you have something you would like to discuss specifically with Entrance at PNEC – do not leave it to chance. Schedule a meeting now by calling 832.786.6536 or emailing info@entrancesoftware.com.

Data Management for Oil & Gas: High Performance Computing

Data Management and Technology

The oil and gas industry is dealing with data management on a scale never seen before. One approach to quickly get at relevant data is with High Performance Computing (HPC).

HPC is dedicated to the analysis and display of very large amounts of data that needs to be processed rapidly for best use.

One application is the analysis of technical plays with complex folding. In order to understand the subsurface, three-dimensional, high-definition images are required.

The effective use of HPC in unconventional oil and gas extraction is helping drive the frenetic pace of investment, growth and development that will provide international fuel reserves for the next 50 years. Oil and gas software supported by data intelligence drives productive unconventional operations.

Evolving Data Management Needs

As far back as 2008, the Microsoft High-Performance Computing Oil and Gas Industry Survey conducted by the Oil & Gas Journal Online Research Center indicated that many industry geoscientists and engineers have access to the computing performance levels they require.

However, computing needs are growing more complex, so significant room for improvement exists. Numerous respondents believe that making HPC available to more people industry-wide can increase production, enhance decision-making, reduce delays in drilling, and reduce the overall risk of oil and gas projects.

Chesapeake is the largest leasehold owner in the Marcellus Shale play, which reaches from southern New York to West Virginia. They employ HPC in their shale and tight sands operations.

3-D imaging enables technical staff to detect fine-scale fracturing and directional dependency characteristics. Seismic data provides a structural road map that helps identify dip changes, small faults and natural fracture orientation.

High Performance Computing in the Real World

Chesapeake routinely performs inversions of pre-stack and post-stack data. Datasets for imaging and inversion support models that represent complex earth structures and physical parameters, where true inversion results are known.

Reservoir maps require constant updating. Advanced pre-stack 3-D techniques are used to extract detailed rock properties that aid in discriminating good rock from bad rock at Marcellus.

Focusing on pre-stack data management has significantly increased computational requirements. Depending on the acquisition method, collecting multicomponent 3-D data can increase data size by orders of magnitude.

Advanced algorithms provide results in a matter of days, making it possible to realistically deal with a lease schedule.

Clustered supercomputing systems are becoming affordable and scalable. HPC options are not only realistic, but a requirement for independents who want to bring advanced processing capabilities in-house.

Check out this blog post on how oil and gas companies are using data management to improve processes here…

Keys to Business Intelligence

Five key insights from business intelligence expert David Loshin

In a recent interview, David Loshin, president of business intelligence consultancy Knowledge Integrity, Inc., named five key things organizations can do to promote business intelligence success:

  • Design configurable business intelligence dashboards that can provide needed metrics in real time
  • Provide drill-down capabilities for metrics that are of specific concern for the business
  • Ensure agreement about performance goals and targets throughout the organization
  • Create a cultural understanding of how metrics should be used
  • Experiment with different analyses to determine which ones can provide business value

Design configurable business intelligence dashboards that can provide needed metrics in real time

According to Loshin, the key goal of any business intelligence program should be to provide performance metrics in a way that is informative, but not intrusive. In other words, business intelligence dashboards need to be highly configurable in order to make sure that business users are getting access to the exact data they need, without falling victim to data paralysis caused by having to sift through all the data they don’t need.

In addition, business intelligence dashboards need to be able to provide updates in real time, in order to ensure that business users are making decisions based on the most current view of metrics.

Provide drill-down capabilities for metrics that are of specific concern for the business

Every organization wants different insights from their business intelligence solutions. As a result, business intelligence dashboards should not be one-size-fits-all in the insights they provide.

If an organization knows in advance that a specific metric could be particularly helpful for their business, they should plan ahead to make sure their BI dashboard includes drill-down capabilities for that metric, so that they will be able to get a deeper level of insight when the need arises.

Ensure agreement about performance goals and targets throughout the organization

What are the most important insights that can be gained from a business intelligence solution? For some organizations, it’s figuring out the best way to generate new revenue. For others, it may be reducing costs or mitigating risks.

Either way, it’s important that all key stakeholders understand the values that matter most to the business, and know how BI metrics will be used to help meet those performance goals and targets.

Create a cultural understanding of how metrics should be used

An efficient business intelligence solution should allow individuals to take independent action, but there should also be an organization-wide understanding of how each individual is expected to use the insights provided by the BI solution.

C-level executives set the standard for what data is important to monitor, but they won’t be the ones actually drilling down into the data. As a result, it’s important that all business users have an understanding of how BI can help improve their decision-making.

Experiment with different analyses to determine which ones can provide business value

Business intelligence is most likely to be successful when it has executive support, but executives will probably only provide support for programs that have demonstrated value in the past. Loshin compares this situation to a chicken/egg problem: business users need executive support to implement quality BI solutions, but they often need to prove the value of business intelligence solutions before they can get executive support.

To overcome this problem, Loshin recommends undertaking a series of short experiments to find which BI analyses can provide business value, while weeding out the ones that can’t. It’s quite likely that many of the tested analyses won’t prove valuable, but the ones that do should provide sufficient return to make the experimentation worthwhile.

For more, read this post on the ROI for business intelligence

Business Intelligence: What is Hadoop?

Hadoop: The New Face of Business Intelligence

Big data has changed the way businesses handle their business intelligence initiatives, requiring them to capture, process, and analyze exponentially larger amounts of information. Traditional business intelligence tools like relational databases are no longer sufficient to handle the level of data businesses are experiencing.

If businesses are going to take advantage of the insights offered by big data—instead of drowning in the flood of useless and irrelevant data—they are going to need new tools to help them handle the data crush.

Enter Hadoop. In just a few short years, Hadoop has become one of the most powerful and widely used tools for turning big data into useful insights.

What is Hadoop exactly?

It may have a strange name, but there’s no reason to be intimidated or confused about what Hadoop actually is. Hadoop is simply an open-source software platform, produced by the non-profit Apache Software Foundation, for the storage and processing of massive data sets.

Hadoop is designed to spread files and workloads across clusters of hardware. This arrangement allows for the increased computational power needed to handle massive amounts of data, and helps organizations protect their workloads from hardware failure.

The Hadoop framework is made up of a number of different modules, including Hadoop Distributed File System (HDFS). HDFS distributes very large files across hardware clusters to ensure maximum aggregate bandwidth. Hadoop MapReduce is a programming model for processing very large data sets.
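
To make the MapReduce model a little more concrete, here is a minimal, self-contained Python sketch of the map, shuffle, and reduce steps for a word count. A real Hadoop job distributes this same pattern across a cluster rather than running it in one process, so treat this as an illustration of the programming model, not a Hadoop job itself.

    # Minimal sketch of the MapReduce programming model (illustrative only;
    # Hadoop distributes these steps across a cluster).
    from collections import defaultdict

    def map_phase(records):
        """Map step: emit (key, value) pairs - here, (word, 1) for each word."""
        for line in records:
            for word in line.split():
                yield word.lower(), 1

    def reduce_phase(pairs):
        """Shuffle/sort then reduce: group values by key and sum them."""
        grouped = defaultdict(list)
        for key, value in pairs:
            grouped[key].append(value)
        return {key: sum(values) for key, values in grouped.items()}

    lines = ["well spud date", "well completion date"]
    print(reduce_phase(map_phase(lines)))
    # {'well': 2, 'spud': 1, 'date': 2, 'completion': 1}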

Why do I need to learn about Hadoop?

Simply put, Hadoop has already experienced a very high level of adoption from the business world. It promises to be the standard tool for big data management going forward.

Hadoop is already being used by more than half of Fortune 50 companies, including major names like Yahoo! and Facebook. Eric Baldeschwieler, CEO of Hortonworks, has predicted that as much as half of the world’s data will be processed using Hadoop by the year 2017.

If your business works with data at all, you need to know the name Hadoop. It will touch your organization in some way, if it hasn’t done so already.

What are the advantages of Hadoop?

Hadoop gives your developers the power to conduct batch processing on data sets that include structured, unstructured, and semi-structured data. This makes it a perfect fit for the realities of today’s big data environment.

This flexibility allows Hadoop to succeed in ways that traditional business intelligence tools can’t. It is also highly scalable, and offers enterprise-level big data analytics at a price that midmarket companies can afford.

What are the disadvantages of Hadoop?

With so much fanfare around Hadoop, identifying its shortcomings might seem difficult, but they certainly exist. Hadoop isn’t the simple answer to all of your data management problems.

It’s important that you understand what it can and can’t do before you pursue a Hadoop-based big data solution for your business.

Hadoop is a tool aimed specifically at developers. As a result, it can segregate technical users from the business users who actually need to make use of data insights.

If the insights you gain from Hadoop data processing aren’t getting into the right hands, then your Hadoop deployment is just wasting your time and resources.

As an open-source framework, Hadoop should be looked at as a work in progress. Many industry analysts have suggested that the current iteration of Hadoop is not mature enough to provide real-time analytics or ensure the security of sensitive data. Businesses can gain a lot of value by using Hadoop, but they also need to learn about these limitations first.

For more on this topic, read our three part series on the components of a business intelligence solution

Lease Compliance and Disparate Data

Lease Compliance and Resolution of Data

Lease compliance issues in domestic shale have become critical to E&P asset protection. Lease jeopardy has grown into a major component of risk management of upstream resources.

It demands a well-defined, cohesive data management strategy that creates business intelligence from disparate sources.

Land Administration Information Systems Inadequate

Land Administration is a traditional role within an E&P enterprise.  Within its domain falls the safeguarding of all of a company’s land assets, among them the oil and gas leases.

Landmen are highly specialized members of an E&P exploration and production team whose duties include assisting in the analysis of deeds, leases and other contracts.  A landman’s responsibility typically includes ensuring that the company’s land assets database is maintained and that key attributes of lease provisions are recorded and updated.

Unfortunately, these talented and versatile members of the upstream resources team are increasingly handicapped in their lease administration duties. This is because information technology hasn’t kept pace with changing demands in the leasing arena.

The data they’re so meticulously maintaining sits in a static data repository. As a result, it goes unused by advanced oil and gas software with lease analysis and business intelligence capabilities.

Changing Aspects of Lease Compliance

Several factors are driving changes in shale lease compliance.  As competitors move rapidly in vying for a shrinking pool of unconventional land assets of increasingly significant value, lessors are reexamining the royalty structure of their existing leases.

This, along with falling natural gas prices, has motivated lessors, newly focused on their monthly royalty payments, to analyze their lease provisions, often with professional legal advice.  This has become a highly litigious area, with more and more attorneys building lucrative practices in oil and gas lease compliance and its growing body of case law.

“Royalty and land and lease rights disputes were the most common types of unconventional oil and gas litigation during 2012,” reports Navigant.

“Oil and gas companies involved in unconventional exploration continue to face not only the challenge of differing state laws, but also a constantly evolving legal landscape as landmark cases make their way through state courts,” noted Navigant, referring to Butler v. Powers Estate, litigation based upon the provisions of a 19th century oil and minerals’ lease.

If an E&P company isn’t performing due diligence on lease provision compliance, it is exposing itself to the potential of costly litigation and risking the devaluation or loss of key productive assets. Overnight what was regarded as a stable corporate asset can become a legal liability if lease oversight has failed.

A company lacking a consistently reliable system for proactive lease compliance is apt to be blindsided by an eager attorney who has found a non-compliant lease he can get his fingernails under.

Consolidating Disparate Lease Data into Business Intelligence

Most companies already have on hand the data needed to be proactive in their unconventional E&P lease compliance.  It often exists in various forms that convey different meanings across non-integrated data stacks. Examples include the property management database, the financials system, the reserves model, and drilling information.

With the right analytical methodology, this disparate data can be consolidated into an information system that provides reports and alerts that help lessen lease jeopardy.

Building a Viable Lease Compliance System

A multi-part, iterative process is used to build and deploy a viable lease compliance application.

First, leases are identified and reviewed, with their critical provisions defined, e.g., key dates, key quantities.  Next, the different systems of record are identified, and the key attributes that define the leases’ critical provisions are mapped to the destination system, with inconsistencies corrected.

At this point the source data is cleansed — incorrect or inaccurate data is modified, deleted or replaced. The resulting “good” data is synchronized to the master data set, with links established to the various source systems.

Integration with the initial lease provision inventory now occurs, with processes put into place to ensure the lease provision inventory remains accurate and current.  Lastly, lease provision tracking is established, with sensitivity cases defined.

Some of the critical sensitivity issues are royalty escalation, cessation of production, and continuous drilling. Business process owners are assigned to the system, and critical, sensitivity-driven alerts are established for key users.
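
As a minimal sketch of one such sensitivity-driven alert (the provision name, grace period, record layout, and owner address below are hypothetical, not any particular vendor’s system), a cessation-of-production check might look like this:

    # Minimal sketch of a sensitivity-driven lease alert check.
    # Provision names, thresholds, and record layouts are hypothetical.
    from datetime import date, timedelta

    def cessation_of_production_alerts(leases, last_production_date, today, grace_days=90):
        """Flag leases whose cessation-of-production provision may be at risk:
        no reported production within the allowed grace period."""
        alerts = []
        for lease in leases:
            last = last_production_date.get(lease["lease_id"])
            if last is None or (today - last) > timedelta(days=grace_days):
                alerts.append({
                    "lease_id": lease["lease_id"],
                    "provision": "cessation_of_production",
                    "owner": lease["process_owner"],  # business process owner who receives the alert
                })
        return alerts

    leases = [{"lease_id": "L-100", "process_owner": "land.admin@example.com"}]
    last_production_date = {"L-100": date(2014, 1, 15)}
    print(cessation_of_production_alerts(leases, last_production_date, today=date(2014, 6, 1)))

Checks for royalty escalation or continuous-drilling clauses follow the same pattern: a provision definition in the master lease inventory, the relevant operational data synchronized from its system of record, and an alert routed to the assigned business process owner.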

For more, read our post on leveraging unconventional information assets for Upstream…