Custom Software Apps & SharePoint Consulting

RFID Technology and Oil & Gas: Streamline with Custom Software

Oil and Gas RFID Software

We are all pretty familiar with RFID technology, since most big-ticket items we see at the store are protected by RFID tags. When it comes to the oil and gas industry, uptake has been much lower, which is surprising when you consider how much more these companies stand to lose. If a pipe goes missing, or even worse, doesn’t get needed repairs because nobody could find it, the stakes can run from thousands into millions of dollars.

Read More


The Landscape of Big Data Analytics and Visualization

Are you considering which of the big data analytics and visualization tools will work best for you? For our money, Tableau is leading the pack. However, there are several competitors who offer some interesting options. Here is a look at a few of those.

Please keep in mind this is a high-level sampling of capabilities mostly pulled from company websites and customer feedback where we can find it. This is not meant to be an in-depth study of features or a recommendation as to which tool would best meet your needs.

To get that, you’ve got to invest in a disciplined software selection process, something we highly recommend before putting money into a new tool that will play an important role in your business.

Sisense

When thinking about big data analytics and visualization tools, one of the first that comes to mind is Sisense. Sisense boasts a best-in-class in-chip analytics engine that it claims is 10 to 100 times faster than in-memory technology. It works through terabytes easily and eliminates most of the data prep work: you access your information through any data source, drag and drop the data you want to see, and clear out the data you no longer need when you’re done. Sisense turns raw data into usable data that can be managed instantly, letting you change or add data sources as you go. Some of the things you can do with Sisense include:

  • Blend large amounts of data instantly.
  • Analyze data without the need for hard coding, aggregating, or remodeling.
  • Unlock analytics with instant response time and smarter shortcuts.
  • Receive automatic alerts for your most important KPIs.
  • Choose a deployment strategy that works for you, including on-premises, cloud, or hybrid. Sisense provides managed service, as well.
  • Secure your data with comprehensive controls.

Sisense users are pleased with the way they’re able to take millions of lines of data, even from multiple social networks, and put it all together into one comprehensive information package. Additionally, users have commented that they like how Sisense’s software allows anyone to collect and manage data, even if they’re not a technology expert.

QlikView

With its Associative Difference tool, Qlik offers a way to explore your data without limiting you to query-based, linear exploration like other modern BI tools do. According to the Qlik website, QlikView’s Associative Difference enables you to:

  • Search and explore data analytics in any direction.
  • Gain quick insights with interactive selections.
  • Combine all your data insights, regardless of their size.
  • Calculate and aggregate with a single click.
  • Index all data relationships to find insights you wouldn’t catch on your own.
  • Rapidly build and deploy analytic apps.
  • Control analytic apps, permissions, and data with strong security.
  • Customize the exact tools you need for your organization.
  • Use global search to accelerate discovery.
  • Create consistent reports and templates.

Qlik has been providing its technology for a while, but it remains firmly in the pack of possibilities for an up-to-date data analytics and visualization tool. The company states that its mission is to give people, businesses, organizations, and governments the ability to explore and share all of the data available to them in a single platform.

Dundas BI

The Dundas BI platform looks a lot like Tableau to me. It has some cool features, including a data pipeline that allows various types of users to start their exploration of the data in accordance with their preferences. In addition, any content that a user creates may be shared and used across the entire organization. Some of the other features that Dundas BI offers include:

  • The use of modern HTML5 and fully open APIs to customize and meet your users’ design requirements.
  • The ability to connect, interact, and analyze data from any device, even mobile devices.
  • A wide range of visualizations and layout options.
  • Highly customizable visualizations, including interactive charts, gauges, scorecards, and maps.
  • The ability to run ad-hoc queries and quickly create reports and dashboards.
  • Drag-and-drop functionality to add your own data files.
  • Automatic data preparation and smart defaults to make quick work of your collection.
  • One-click setup of a wide range of statistical formulas and period-over-period comparisons, applied directly to your visualizations for instant results.

Users of Dundas BI say that the product is flexible, dynamic, and easy to use.

Offerings From Companies You Already Know


In addition to the three alternatives discussed above, a number of companies you already know and have experience with also offer an array of BI tools that may be what you need. Here are some suggestions to look at:

  • SAP: SAP Analytics Cloud provides simple cloud deployment, collaborative enterprise planning, advanced predictive analytics and machine learning, response to natural language queries, and the ability to easily model complex queries. Intuitive self-service features allow users to easily explore information from across the organization.
  • IBM Cognos: IBM Cognos allows for smart exploration that uses machine learning and pattern detection to analyze data. It also provides automated visualizations and storytelling capabilities that can be enhanced with media, web pages, images, shapes, and text, and content can be reused from existing dashboards and reports. Predictive analytics give users the ability to identify key patterns and variables, along with natural language insights.
  • Oracle Analytics: Oracle states that its Analytics offering provides faster insights without the need for help from IT, along with collaboration features that make analytics both shareable and traceable. Now in its fourth year of general availability, the data visualization tool was recently refreshed with autonomous analytics to help users find and compile compelling stories powered by machine learning.
  • Microsoft Power BI: Microsoft states that its Power BI allows you to connect to hundreds of data sources, prep the data, and create beautiful reports all in a matter of minutes. Millions of users worldwide are familiar with the Power Query based experience for data preparation, and the platform provides users with the ability to re-use their data preparation logic across multiple reports and dashboards. Users find the Power BI dataflows’ ability to handle large datasets to be an exciting new feature.

Learn More About Data Analytics and Visualization Tools

Your decision about which product to choose for your big data analytics and visualization needs depends on ease of deployment and use and, ultimately, on which features are most important to you. During your research, be sure to check out the demos these companies offer for a helpful glimpse of how each product works. When you pick one, drop us a line and let us know which one you chose and why!

AWS Lambda and Virtual Machines

AWS Lambda and Virtual Machines | Use Cases and Pricing

This article compares AWS Lambda and virtual machines, discussing when to use each and digging into pricing.

A virtual machine isn’t the only way to get computing power on AWS, and it isn’t always the most cost-effective. Sometimes you just need to set up a service that will perform a task on demand, and you don’t care about the file system or runtime environment. For cases like these, AWS Lambda may well be the better choice.

Amazon makes serious use of Lambda for internal purposes. It’s the preferred way to create “skills,” extended capabilities for its Alexa voice assistant. The range of potential uses is huge.

AWS Lambda and virtual machines both exist on a spectrum of abstraction wherein you take on less and less of the responsibility for managing and patching the thing running your code. For this reason, Lambda is usually the better bet when your use case is a good fit.

What is AWS Lambda?

Note: For a full analysis breaking down what AWS Lambda is with pricing examples, see our earlier post, “What is AWS Lambda – and Why You’re About to Become a Huge Fan.”

Lambda is a “serverless” service. It runs on a server, of course, like anything else on AWS. The “serverless” part means that you don’t see the server and don’t need to manage it. What you see are functions that will run when invoked.

You pay only per invocation. If there are no calls to the service for a day or a week, you pay nothing. There’s a generous zero-cost tier. How much each invocation costs depends on the amount of computing time and memory it uses.

The service scales automatically. If you make a burst of calls, each one runs separately from the others. Lambda is stateless; it doesn’t remember anything from one invocation to the next. It can call stateful services if necessary, such as Amazon S3 for storing and retrieving data. These services carry their own costs as usual.

Lambda supports programming in Node.js, Java, Go, C#, and Python.
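
To make the model concrete, here is a minimal sketch of a Lambda handler in Python. The event fields, bucket, and key are hypothetical; Lambda simply calls the handler with whatever event triggered it and returns the handler’s result to the caller.

    # A minimal AWS Lambda handler. Lambda invokes lambda_handler with the
    # triggering event and a context object; the return value goes back to the
    # caller. The S3 bucket and key shown here are hypothetical examples.
    import json
    import boto3

    s3 = boto3.client("s3")  # clients created outside the handler are reused across invocations

    def lambda_handler(event, context):
        # Parameters come from the event; Lambda keeps no state between calls.
        bucket = event.get("bucket", "example-bucket")
        key = event.get("key", "input/data.json")

        # Stateless compute can still call stateful services such as S3.
        obj = s3.get_object(Bucket=bucket, Key=key)
        records = json.loads(obj["Body"].read())

        # Do the actual work -- here, a trivial aggregation.
        return {"statusCode": 200, "body": json.dumps({"record_count": len(records)})}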

Comparison with EC2 instances

When you need access to a virtual machine, Amazon EC2 offers several ways to obtain one. There are three main ways to set up an instance, which is EC2’s term for a VM.

On-demand instances charge per second or per hour of usage, and there’s no compute charge while they’re stopped. The difference from Lambda is that the instance is a full computing environment. An application running on it can read and write local files, invoke services on the same machine, and maintain the state of a process. It has an IP address and can make network services available.

Reserved instances are committed to the customer for a set term, and billing covers that reservation period. They’re suitable for running ongoing processes or handling nearly continuous workloads.

Spot instances are discounted instances that run when spare capacity is available. They can be interrupted if AWS needs the capacity, and a suitably designed workload can pick up later from where it left off. This approach has something in common with Lambda, in that it’s used intermittently and charges only for usage, but a spot instance is still a full VM, with all the abilities that implies. Unlike Lambda, it’s not suitable for anything that needs real-time attention; it could be minutes or longer before a spot instance can run.


Use cases for Lambda

Making the right choice between AWS Lambda and virtual machines means considering your needs and making sure the use case matches the approach.

The best uses for Lambda are ones where you need “black box” functionality. You can read and write a database or invoke a remote service, but you don’t need any persistent local state for the operation. Parameters provide the state for each invocation. Cases where this functionality is a good fit include:

  • Complex numeric calculations, such as statistical analysis or multidimensional transformations
  • Heavy-duty encryption and decryption
  • Conversion of a file from one format to another
  • Generating thumbnail images
  • Performing bulk transformations on data
  • Generating analytics

Invoking a Lambda service is called “triggering.” This can mean calling a function directly, setting up an event which makes it run, or running on a schedule. With the Amazon API Gateway, it’s even possible to respond to HTTP requests.
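
Triggering can also be done straight from your own code through the AWS SDK. This is a rough sketch using boto3 for Python; the function name and payload are made up for illustration.

    # Triggering a Lambda function directly with boto3. The function name and
    # payload are hypothetical; an S3 event, an API Gateway request, or a
    # schedule could just as easily invoke the same function.
    import json
    import boto3

    lambda_client = boto3.client("lambda")

    response = lambda_client.invoke(
        FunctionName="convert-file-format",   # hypothetical function name
        InvocationType="RequestResponse",     # synchronous; use "Event" for asynchronous
        Payload=json.dumps({"bucket": "example-bucket", "key": "input/data.json"}),
    )

    result = json.loads(response["Payload"].read())  # Payload is returned as a streaming body
    print(result)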

AWS Step Functions, which are part of the AWS Serverless Platform, enhance what Lambda can do. They let a developer define an application as a series of steps, each of which can trigger a Lambda function. Step Functions implement a state machine, providing a way to get around Lambda’s statelessness. Applications can handle errors and perform retries. It’s not the full capability of a programming language, but this approach is suitable for many kinds of workflow automation.
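
A state machine is described in Amazon States Language, a JSON document in which each Task state can point at a Lambda function. The sketch below registers a two-step workflow with boto3; the function ARNs, role ARN, and names are placeholders, not real resources.

    # A minimal Step Functions state machine that chains two Lambda functions
    # and retries a failed step. All ARNs below are placeholders.
    import json
    import boto3

    definition = {
        "Comment": "Convert a file, then generate a thumbnail",
        "StartAt": "ConvertFile",
        "States": {
            "ConvertFile": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:convert-file-format",
                "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
                "Next": "MakeThumbnail",
            },
            "MakeThumbnail": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:generate-thumbnail",
                "End": True,
            },
        },
    }

    sfn = boto3.client("stepfunctions")
    sfn.create_state_machine(
        name="file-processing-workflow",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
    )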

AWS Lambda and Virtual Machines | Comparing costs

Check out Amazon’s pricing calculators for full details on Lambda pricing and EC2 pricing.

As with the other factors in the AWS Lambda versus virtual machines comparison, Lambda wins out on cost if your use case supports using it.

Lambda wins on cost when it’s employed for a suitable use case and when the amount of usage is relatively low. “Relatively low” leaves a lot of headroom. The first million requests per month, up to 400,000 GB-seconds, are free. Customers that don’t need more than that can use the free tier with no expiration date. If they use more, the cost at Amazon’s standard rates is $0.0000002 per request — that’s just 20 micro cents! — plus $0.00001667 per GB-second.

The lowest on-demand price for an EC2 instance is $0.0058 per hour. By simple division, neglecting the GB-second cost, a Lambda service can be triggered up to 29,000 times per hour and be more cost-effective.
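
That break-even figure is easy to reproduce. The quick calculation below uses the list prices quoted above and, like the estimate in the text, ignores the per-GB-second charge; real workloads should factor it in.

    # Back-of-the-envelope break-even point between Lambda request pricing and
    # the cheapest on-demand EC2 instance. GB-second charges are ignored here,
    # matching the rough estimate in the text.
    LAMBDA_PRICE_PER_REQUEST = 0.0000002   # USD, standard rate beyond the free tier
    EC2_PRICE_PER_HOUR = 0.0058            # USD, lowest on-demand instance price cited above

    break_even = EC2_PRICE_PER_HOUR / LAMBDA_PRICE_PER_REQUEST
    print(f"Lambda stays cheaper below roughly {break_even:,.0f} requests per hour")
    # -> Lambda stays cheaper below roughly 29,000 requests per hour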

Many factors come into play, of course. If each request involves a lot of processing, the costs will go up. A compute-heavy service on EC2 could require a more powerful instance, so the cost will be higher either way.

Some needs aren’t suitable for a Lambda environment. A business that needs detailed control over the runtime system will want to stay with a VM. Some cases can be managed with Lambda but will require external services at additional cost. When using a virtual machine, everything might be doable without paying for other AWS services.

The benefits of simplicity

When it comes to AWS Lambda and virtual machines, it comes down to using the simpler method as long as it meets your needs. If a serverless service is all that’s needed, then the simplicity of managing it offers many benefits beyond the monthly bill. There’s nothing to patch except your own code, and the automatic scaling feature means you don’t have to worry about whether you have enough processing power. It isn’t necessary to set up and maintain an SSL certificate. That frees up IT people to focus their attention elsewhere.

With the Lambda service, Amazon takes care of all security issues except for the customer’s own code. This can mean a safer environment with very little effort. It’s necessary to limit access to authorized users and to protect those accounts, but the larger runtime environment is invisible to the customer. Amazon puts serious effort into defending its servers, making sure all vulnerabilities are promptly fixed.

With low cost, simple operation, and built-in scalability, Lambda is an effective way to host many kinds of services on AWS.

Spotfire Dashboard Demo of a Shale Type Ternary Diagram, Well Map, and Source Wells by Formation

Since rock type has a bearing on completion and drilling practices, you definitely want to know the results of wells with similar rock types. Normally, geologists, petrophysicists, and reservoir engineers who need to find data from wells and formations by rock type have to plot the data in Excel, but making selections off an Excel chart is impossible, and filtering table data can be hit-or-miss. Instead, you can create a ternary diagram in TIBCO Spotfire, paired with a geographic map of the well locations, to perform formation evaluations. The Spotfire dashboard demo here provides interactive functionality with a well map and ternary diagram.

Click the video to play the demo:

As you can see, monitoring how the lithology of a formation changes as you move your area of interest has a direct bearing on what to pay for land in a shale oil or shale gas play.
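
Spotfire handles the plotting and the map linkage interactively, but the projection behind a ternary diagram is simple: normalize the three mineralogy fractions and map them into two dimensions. As a rough, hypothetical illustration (the wells and percentages are made up), a plain Python sketch might look like this:

    # A minimal ternary projection of shale mineralogy (quartz, carbonate, clay).
    # Well names and values are hypothetical; Spotfire performs the equivalent
    # projection interactively and links the points to a well map.
    import math
    import matplotlib.pyplot as plt

    wells = [
        # (well name, quartz %, carbonate %, clay %)
        ("Well A", 55, 20, 25),
        ("Well B", 30, 50, 20),
        ("Well C", 20, 25, 55),
    ]

    xs, ys, labels = [], [], []
    for name, quartz, carbonate, clay in wells:
        total = quartz + carbonate + clay
        q, c, k = quartz / total, carbonate / total, clay / total
        # Barycentric to Cartesian: quartz at the left vertex, carbonate at the
        # right vertex, clay at the top vertex.
        xs.append(c + 0.5 * k)
        ys.append(math.sqrt(3) / 2 * k)
        labels.append(name)

    plt.plot([0, 1, 0.5, 0], [0, 0, math.sqrt(3) / 2, 0], color="gray")  # triangle outline
    plt.scatter(xs, ys)
    for x, y, name in zip(xs, ys, labels):
        plt.annotate(name, (x, y))
    plt.axis("off")
    plt.show()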

To purchase a version of this Spotfire dashboard tailored to your own data, give us a call at 1-888-343-KNOW or send me a note using the form to the right.

3 Reasons Why Data Management Should Be Strategic Rather Than Tactical

During the 18th International Conference on Petroleum Data Integration, Information and Data Management (commonly known as the “PNEC” conference) on May 20-22, 2014, there was a session on the first day dedicated to the topic of professional data management. During the panel discussion, an attendee asked the following question: Why do we even need a professional role dedicated to managing data, since data service is a supportive role to various operations?

Read More

The Importance of Aligning Business Units and IT Department

During the PNEC conference on May 20-22, 2014, there were multiple presentations showcasing various levels of success in IT and data management projects. One key theme that kept appearing was “aligning business units with IT departments” for a joint implementation effort. However, most of the presentations offered no further explanation of how to make this happen.

This problem is epidemic across multiple industries, but it is particularly severe in the energy space. For many organizations, IT departments work in silos, and business units do not know how to manage them. Read More

Managing the Big Data Crisis

PNEC 2014 Recap: Dealing With The ‘Big Data’ Crisis

I attended the 18th International Conference on Petroleum Data Integration, Information and Data Management (commonly known as the “PNEC” conference) on May 20-22, 2014.  As I reflected on the various papers and talks, it was clear that “Big Data” is both a significant challenge and an amazing opportunity for upstream oil and gas.  I was inspired by a talk on the topic of Big Data and it got me thinking about how you actually go about unlocking the power of Big Data.

Read More

Entrance to present on Master Data Management (MDM) at PNEC 2014

Entrance is excited to be a sponsor and featured presenter at the 18th International Conference on Petroleum Data Integration, Information, and Data Management taking place in Houston, Texas, May 20-22, 2014 at the JW Marriott hotel.

The 2014 PNEC is a power-packed, two-and-a-half-day technical program featuring 48 in-depth technical presentations and panels led by data management professionals and experts from around the world. The sessions will focus on real-world issues, best practices, developments, and cross-discipline advances that address the ever-expanding and complex data demands in today’s oil and gas industry. Read More

Entrance Demonstrates How to Frac Your Data at Microsoft Global Energy Forum

Showcasing Custom Software Solutions

Entrance is a proud sponsor of the annual Microsoft Global Energy Forum, to take place on February 20th in Houston at the George R. Brown Convention Center. Now in its 11th year, this prestigious technology-focused energy industry event welcomes business leaders from both the Business Operations and Information Technology arenas.

As a long-standing Gold Microsoft Partner, Entrance will showcase custom software solutions for oil and gas. The SharePoint enterprise content management company, KnowledgeLake, will be joining as a co-sponsor to share how upstream companies can leverage digitization solutions.

Actionable Insight for the Field

Attendees of the Global Energy Forum can stop by booth number 116 to learn more about:

  • Business Intelligence and Data Management: Tying together disparate databases to reveal actionable insight for the field
  • SharePoint and KnowledgeLake: Instant, searchable, digitized access to any paper assets
  • Business Process Automation: Reducing overall lifting costs by automating data capture from the field

Entrance president Nate Richards commented on the upcoming event: “Information is the next unconventional play for oil and gas. Decision makers need to be able to frac their data across departments to access intelligence that is readily available and accurate. This is the key to driving up production and pushing down costs for the successful producer.”

The Move Towards Digitization

Now more than ever, the oil and gas industry is grappling with an overwhelming amount of unstructured data and paper. Entrance and KnowledgeLake have partnered for the Global Energy Forum to showcase a SharePoint-based digitization and well management solution for land and well files that will help drive true competitive advantage.

“We are excited to partner with Entrance Software to provide the next generation of Energy Information Management solutions based on Microsoft SharePoint.  By leveraging SharePoint to manage unstructured content and structured data, we are able to provide upstream customers with real time actionable information for Well File, Land and corporate records groups that was only before possible with multiple point solutions,” said Vice President of Engineering at KnowledgeLake, Ben Vierck.

Visit the team on February 20th for a demo, or to pick up a copy of the first annual Outlook on Energy.

For more, check out our presentation that explains how your company can frac your data to avoid lease jeopardy. Or find out more about the Microsoft Global Energy Forum.

Data Management for Oil & Gas: High Performance Computing

Data Management and Technology

The oil and gas industry is dealing with data management on a scale never seen before. One approach to quickly get at relevant data is with High Performance Computing (HPC).

HPC is dedicated to the analysis and display of very large amounts of data that needs to be processed rapidly for best use.

One application is the analysis of technical plays with complex folding. In order to understand the subsurface, three dimensional high definition images are required.

The effective use of HPC in unconventional oil and gas extraction is helping drive the frenetic pace of investment, growth and development that will provide international fuel reserves for the next 50 years. Oil and gas software supported by data intelligence drives productive unconventional operations. Read More

Business Intelligence: What is Hadoop?

Hadoop: The New Face of Business Intelligence

Big data has changed the way businesses handle their business intelligence initiatives, requiring them to capture, process, and analyze exponentially larger amounts of information. Traditional business intelligence tools like relational databases are no longer sufficient to handle the level of data businesses are experiencing.

If businesses are going to take advantage of the insights offered by big data—instead of drowning in the flood of useless and irrelevant data—they are going to need new tools to help them handle the data crush. Read More

Oil and Gas Software: Decision Making and Forecasting

Oil and Gas Software and the Accuracy of Decisions

Oil and gas software can be one key to integrated, well-structured, well-defined IT systems. Data support built on an oil-and-gas-specific framework leads to intelligent decision making and a reduction in overhead.

Case study: Oil and Gas Business Intelligence Framework

Data access, forecasting and analysis, reporting, and decision-making software for dedicated professionals, from the field to the boardroom, is available through solutions developed by Halliburton, Dell, Qualcomm, and Microsoft.

The Oil & Gas Business Intelligence framework, iLink, offers a comprehensive real-time overview of your business. You can track performance metrics that provide the information you need to make informed business decisions. Dashboards offer an easy-to-access overview, and resource optimization helps you make reliable forecasts based on the most current data. Managers and technicians in the field can communicate data, while tickets and alerts can be created for field personnel.

iLink’s key features are:

  • Identifying key performance metrics
  • Monitoring well performance
  • Creating and assigning well issue tickets and alerts
  • A global well map

Microsoft, in partnership with Halliburton, has developed the architecture for its upstream oil and gas framework, MURA. This architecture supports businesses in gaining maximum insight from business data and maximizing worker productivity. The framework supports real-time analytics, including robust statistical and analysis packages for data mining, research, and consumer reporting.

It also supports stream-processing engines capable of detecting and filtering real-time events, on site or in the cloud.

Information integration enables diverse data and software to seamlessly function without being trapped in a pipeline to nowhere. Workers need tools that help them gain deeper insights into ever growing quantities of relevant data.

Simplifying the process of finding, selecting, and exploring their data in flexible ways is essential. This process needs to be intuitive for them, and they should not have to rely on IT to fetch data or write custom reports.

You can deliver cost-effective, highly mobile solutions with MURA because it interoperates in the cloud and on-site. MURA supports industry standards, and an industry-wide interface makes implementations of standards such as PPDM understandable and manageable.

MURA interfaces are published for open industry use. All the elements of an interface are well defined so that applications can be independently developed once per function. Wasteful duplication of effort can be avoided.

Information models employ a consistent naming system for referring to assets. This makes sharing, exchanging, and understanding information a straightforward process.

Security implementation is robust and well defined, including authentication, identity lifecycle management, authorization, certificates, claims, and threat models. This facilitates secure, interoperable design and deployment. Solutions that integrate business processes, workers, workflows, and IT processes with essential, immediate data support intelligent workflow.

Business Intelligence in Action

K2 uses MURA to manage the declining-well process. The MURA framework provides guidance for oil and gas companies on creating a dashboard of applications that are integrated at the user interface layer.

This kind of oversight capability gives well operators the data and context they require to be alerted to issues that need timely corrective action. K2 optimizes the information provided to decision makers, while minimizing data that is superfluous or in an obscure format. IT departments appreciate MURA and K2 because costs are restrained while business demands are effectively met.

Demand forecasting simplifies and automates enterprise-wide forecasting and improves the quality and measurability of marketing planning. Significantly improving accuracy is likely to result in better capital decisions and more effective communication, ultimately reducing overhead and operating costs.

For more on how oil and gas software can help your business, read this post on the benefits of analytics.

Data Management Problems: Data Cleansing

Data Management and Quality

Many processes in data management affect the quality of data in a database. These processes fall into three categories: processes that add new data to the database, processes that manipulate existing data in the database (data cleansing among them), and processes that erode the accuracy of the data over time without actually changing it.

That last kind of accuracy loss is typically the result of changes in the real world that aren’t captured by the database’s collection processes.

Arkady Maydanchik describes the issue of data cleansing in the first chapter of his book Data Quality Assessment.

Cleansing in Data Management

Data cleansing is becoming increasingly common as more organizations incorporate this process into their data management policies. Traditional data cleansing is a relatively safe process since it’s performed manually, meaning that a staff member must review the data before making any corrections.

However, modern data cleansing techniques generally involve automatically making corrections to the data based on a set of rules. This rules-driven approach makes corrections more quickly than a manual process, but it also increases the risk of introducing inaccurate data since an automated process affects more records.

These rules are often implemented by computer programs, which represent an additional source of data inaccuracy, since the programs may have bugs of their own that affect the cleansing.
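
As a hypothetical illustration of this rules-driven approach (the field names and rules are invented, not drawn from any particular system), automated cleansing typically amounts to a set of rule functions applied to every record, which is exactly why one flawed rule, or one bug, can quietly alter a large share of the data:

    # A hypothetical rules-driven cleansing pass. Each rule runs against every
    # record, so a mistaken rule, or a bug in one, silently alters many rows --
    # the core risk described above. The field names are made up.
    from datetime import date

    def fix_missing_hire_date(record):
        # Rule: if hire_date is missing, fall back to the earliest pay-history date.
        if record.get("hire_date") is None and record.get("pay_history"):
            record["hire_date"] = min(p["date"] for p in record["pay_history"])
        return record

    def clear_future_hire_dates(record):
        # Rule: no hire date may lie in the future.
        if record.get("hire_date") and record["hire_date"] > date.today():
            record["hire_date"] = None
        return record

    RULES = [fix_missing_hire_date, clear_future_hire_dates]

    def cleanse(records):
        audit = []
        for record in records:
            before = dict(record)
            for rule in RULES:
                record = rule(record)
            if record != before:
                audit.append((before, dict(record)))  # keep an audit trail of every change
        return records, audit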

Problems with Data Cleansing

Part of the risk of automatic data cleansing is due to the complexity of the rules in a typical data management environment, which frequently fail to reflect the organization’s actual data requirements. The data may still be incorrect after executing the data-cleansing process, even when it complies with the theoretical data model. And because many data quality problems are complex and interrelated, cleansing one element can create additional problems in related data elements.

For example, employee data includes attributes that are closely related such as employment history, pay history and position history. Correcting one of these attributes is likely to make it inconsistent with the other employment data attributes.

Another factor that contributes to the problems with modern data cleansing is the complacency that data management personnel often exhibit after implementing this process. The combination of these factors often means that data cleansing creates more problems than it solves.

Case Study

The following case study from Maydanchik’s book illustrates the risks of data cleansing. It involved a large corporation with over 15,000 employees and a history of acquiring other businesses. This client needed to cleanse the employment history in its human resources system, primarily because of the large number of incorrect or missing hire dates for its employees.

These inaccuracies were a significant problem because the hire date was used to calculate retirement benefits for the client’s employees. Several sources of legacy data were available, allowing for the creation of several algorithms to cleanse the employment data.

However, many of these employees were hired by an acquired business rather than directly hired by the client corporation. The calculation of the retirement benefits was supposed to be based on the date that the client acquired the employee instead of the employee’s original hire date, but the original data specifications didn’t reflect this business requirement.

This discrepancy caused the data-cleansing process to apply many changes incorrectly. Fortunately, this process also produced a complete audit trail of the changes, which allowed the data analyst to correct these inconsistencies without too much difficulty.

This data-cleansing project was completed satisfactorily in a relatively short period of time, but many such projects create errors that remain in the database for years.

For more on solving data management issues, check out this post on managing data entry.

Business Intelligence Deployment Misconceptions

Deploying Business Intelligence

Business intelligence, commonly referred to as BI throughout the industry, is technology that allows a business to obtain actionable information it can then use throughout its day-to-day operations. While business intelligence solutions certainly have their fair share of advantages, it is also important to realize that they are not the be-all, end-all source of guidance that many people think they are.

There are certain business intelligence deployment misconceptions that businesses fall for over and over again, to their detriment. Understanding these misconceptions will allow you to avoid them and use BI to its fullest potential.

The Benefits of Business Intelligence

  • The information provided is accurate, fast, and, most importantly, visible, which aids in making critical decisions about the growth and direction of a business.
  • Business intelligence can allow for automated report delivery using pre-calculated metrics.
  • Data can be delivered using real-time solutions that increase their accuracy and reduce the overall risk to the business owner.
  • The burden on business managers to consolidate information assets can be greatly reduced through the additional delivery and organizational benefits inherent in the proper implementation of business intelligence solutions.
  • The return on investment for organizations with regard to business intelligence is far-reaching and significant.

Business Intelligence Deployment Misconceptions

One of the most prevalent misconceptions about business intelligence deployment is the idea that the systems are fully automated right out of the box. While it is true that the return on investment for such systems can be quite significant, that is only the case if the systems have been designed, managed, and deployed properly.

A common misconception is that a single business intelligence tool is all a company needs to get the relevant information to guide themselves into the next phase of their operations. According to Rick Sherman, the founder of Athena IT Solutions, the average Fortune 1000 company implements no less than six different BI tools at any given time.

All of these systems are closely monitored, and the information provided by them is then used to guide the business through its operations. No single system will have the accuracy, speed or power to get the job done on its own.

Another widespread misconception is the idea that all companies are already using business intelligence and that your company therefore has all the information it needs to stay competitive. In reality, only about 25 percent of all business users have been reported as using BI technology in the past few years, and that number is actually a plateau; growth has been stagnant for some time.

One unfortunate misconception involves the idea that “self-service” business intelligence systems mean you only need to give users access to the available data to achieve success. In reality, self-service tools often need more support than most people plan for.

This support is also required on a continuing basis in order to prevent the systems from returning data that is both incomplete and inconsistent.

One surprising misconception about the deployment of business intelligence is that BI systems have completely replaced the spreadsheet as the ideal tool for analysis. In truth, many experts agree that spreadsheets are and will continue to be the only pervasive business intelligence tool for quite some time.

Spreadsheets, when used to track the right data and perform the proper analysis, have uses that shouldn’t be overlooked. Additionally, business users that find BI deployment too daunting or unwieldy will likely return to spreadsheets for all of their analysis needs.

According to the website Max Metrics, another common misconception is that business intelligence is a tool that is only to be used for basic reporting purposes.

In reality, BI can help business users identify customer behaviors and trends, locate areas of success that may have previously been overlooked and find new ways to effectively target a core audience. BI is a great deal more than just a simple collection of stats and numbers.

For more on this topic, check out our series, “What is business intelligence?”

Oil and Gas Software: Real-Time Information Improves Collaboration

SCADA and Oil and Gas Software

Providing upstream SCADA information systems with near real-time field metrics using oil and gas software is essential to the efficient management of shale gas exploration and development. Rapid, informed response to fast-changing dynamics across multiple upstream sites requires reliable source data capture and quick availability.

Read More
