1 Challenges managing LAS files

Siloed datasets

  • Working with disparate, complex datasets under a traditional analysis model limits innovation and cannot deliver the speed required for unconventional plays

LAS File Volume

  • A single well can have tens or hundreds of LAS files, making it difficult to provide a consolidated view for analysis
  • Extrapolating that volume across thousands of wells requires an automated approach

Manual QC process

  • Identifying out-of-range data is time-consuming and challenging, even for experienced geoscientists and petrophysicists
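An automated QC pass can catch the obvious cases before a human ever looks at a curve. The sketch below is illustrative only (the mnemonic ranges and sample readings are assumptions, not the article's actual pipeline): it flags any reading that falls outside a plausible range for its mnemonic.

```python
# Minimal QC sketch: flag LAS curve readings outside a plausible range
# for their mnemonic. Ranges and data below are invented for illustration.

PLAUSIBLE_RANGES = {
    "GR": (0.0, 300.0),    # gamma ray, API units (assumed bounds)
    "RHOB": (1.0, 3.5),    # bulk density, g/cc (assumed bounds)
}

def flag_out_of_range(readings):
    """readings: list of (mnemonic, depth, value); returns offending tuples."""
    flagged = []
    for mnemonic, depth, value in readings:
        lo, hi = PLAUSIBLE_RANGES.get(mnemonic, (float("-inf"), float("inf")))
        if not (lo <= value <= hi):
            flagged.append((mnemonic, depth, value))
    return flagged

sample = [("GR", 1000.5, 85.2), ("GR", 1001.0, -999.25), ("RHOB", 1000.5, 2.45)]
print(flag_out_of_range(sample))  # catches the -999.25 null sentinel
```

A pass like this will not replace a petrophysicist's judgment, but it narrows thousands of curves down to the ones worth a manual look.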

Management and storage is expensive

  • What if storage cost could be reduced from $23/GB to $0.19/GB? At those rates, 55 GB costs roughly $1,200 on traditional storage versus roughly $10 on commodity storage
  • The delta is 1-2 orders of magnitude
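The comparison works out as a quick arithmetic check (the per-GB prices are the ones quoted above):

```python
import math

# Storage-cost comparison at the quoted per-GB rates.
gb = 55
traditional = 23.00 * gb   # $/GB on traditional storage
commodity = 0.19 * gb      # $/GB on commodity HDFS storage
print(round(traditional, 2), round(commodity, 2))  # 1265.0 10.45
print(round(math.log10(traditional / commodity), 2))  # ~2 orders of magnitude
```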

Download Sample Data Set

The wellbook concept is a single view of an oil well and its history: something akin to a "Facebook Wall" for oil wells.

This repo is built from data collected and made available by the North Dakota Industrial Commission.

I used the wellindex.csv file to obtain a list of well file numbers (file_no), scraped their respective Production, Injection, and Scout Ticket web pages plus any available LAS-format well log files, and loaded them into HDFS (/user/dev/wellbook/) for analysis.
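The first step can be sketched as follows. Only the file_no column name comes from the text; the sample CSV layout and other columns are assumptions for illustration:

```python
import csv
import io

# Illustrative stand-in for wellindex.csv; only the file_no column is
# taken from the article -- the other columns are invented.
sample_csv = """file_no,well_name,county
12345,SMITH 1-1,McKenzie
67890,JONES 2-2,Williams
"""

def load_file_numbers(fp):
    """Collect the file_no values used to locate each well's pages."""
    return [row["file_no"] for row in csv.DictReader(fp)]

file_nos = load_file_numbers(io.StringIO(sample_csv))
print(file_nos)  # ['12345', '67890']
```

Each file_no then drives the scrape of that well's Production, Injection, and Scout Ticket pages.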

To avoid the HDFS small-files problem, I used Apache Mahout's seqdirectory tool to combine my text files into SequenceFiles: the keys are the filenames and the values are the contents of each text file.
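Conceptually, seqdirectory packs a directory of small files into one key/value container. The real output is a binary Hadoop SequenceFile produced by Mahout; this Python sketch only illustrates the filename-to-contents mapping:

```python
import os
import tempfile

def pack_directory(path):
    """Illustrative analogue of the seqdirectory layout: map each
    filename (the SequenceFile key) to its text contents (the value)."""
    packed = {}
    for name in sorted(os.listdir(path)):
        with open(os.path.join(path, name), encoding="utf-8") as f:
            packed[name] = f.read()
    return packed

# Build two tiny stand-in LAS files and pack them.
with tempfile.TemporaryDirectory() as d:
    for name, body in [("w12345.las", "~Version ..."), ("w67890.las", "~Version ...")]:
        with open(os.path.join(d, name), "w", encoding="utf-8") as f:
            f.write(body)
    packed = pack_directory(d)
    print(sorted(packed))  # ['w12345.las', 'w67890.las']
```

One SequenceFile holding thousands of small LAS files keeps the NameNode's metadata load constant instead of growing per file.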

Then I used a combination of Hive queries and the pyquery Python library to parse the relevant fields out of the raw HTML pages.
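The article uses pyquery for this step; as a dependency-free illustration of the same idea (the HTML fragment and field names here are invented), the stdlib HTMLParser can pull text out of table cells much like a pyquery selector would:

```python
from html.parser import HTMLParser

class CellExtractor(HTMLParser):
    """Collect the text of every <td> cell -- roughly what a
    pyquery selector over table cells returns."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []
    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True
    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False
    def handle_data(self, data):
        if self.in_td and data.strip():
            self.cells.append(data.strip())

# Invented scout-ticket-style fragment for illustration only.
html = "<table><tr><td>File No</td><td>12345</td></tr></table>"
p = CellExtractor()
p.feed(html)
print(p.cells)  # ['File No', '12345']
```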

List of Tables:

  1. wellbook.wells -- well metadata including geolocation and owner
  2. wellbook.well_surveys -- borehole curve
  3. wellbook.production -- how much oil, gas, and water was produced for each well on a monthly basis
  4. wellbook.auctions -- how much was paid for each parcel of land at auction
  5. wellbook.injections -- how much fluid and gas was injected into each well (for enhanced oil recovery and disposal purposes)
  6. wellbook.log_metadata -- metadata for each LAS well log file
  7. wellbook.log_readings -- sensor readings for each depth step in all LAS well log files
  8. wellbook.log_key -- map of log mnemonics to their descriptions
  9. wellbook.formations -- manually annotated map of well depths to rock formations
  10. wellbook.formations_key -- descriptions of rock formations
  11. wellbook.water_sites -- metadata for water quality monitoring stations in North Dakota

2 Watch the video to get started

Automated Analysis of LAS Files

3 Join with Production / EOR / Auction data (Power BI)

Get a 360-degree view of the well

<Hive tables - Master>

a. Predictive Analytics (Linear Regression)

b. Visualize the data using YARN Ready applications
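The linear-regression step can be sketched in a few lines of pure Python. The toy data below is invented; in the demo, the inputs come from the joined Hive tables:

```python
# Ordinary least squares for a single predictor, e.g. relating a well
# attribute to its production. Toy data only -- not from the dataset.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]   # roughly y = 2x
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # 1.99 0.05
```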

4 Dynamic Well Logs

Query for multiple mnemonic readings per well or multiple wells in a given region. Normalize and graph data for specific depth steps on the fly.
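The on-the-fly normalization can be sketched as min-max scaling per mnemonic, so curves with very different units fit on one axis (the readings below are invented):

```python
def min_max_normalize(values):
    """Scale one curve's readings to [0, 1] so different mnemonics
    can be graphed together for the same depth steps."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

gr = [20.0, 80.0, 140.0, 200.0]   # invented gamma-ray readings
print(min_max_normalize(gr))  # [0.0, 0.333..., 0.666..., 1.0]
```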

5 Dynamic Time Warping

Run the algorithm per well, or for all wells and all mnemonics, and visualize the results to determine which readings belong to the same curve class. Supervised machine learning then enables automatic bucketing of mnemonics belonging to the same curve class.
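Dynamic time warping scores how similar two curves are after allowing local stretching and shifting along depth; the standard dynamic-programming formulation is short enough to show in full (the sample curves are invented):

```python
# Classic dynamic-programming DTW distance between two curves.
# Curve pairs with small DTW distance are candidates for the same class.
def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# A curve compared with a shifted copy of itself scores far lower than
# against an unrelated curve, because warping absorbs the shift.
curve = [0.0, 1.0, 2.0, 1.0, 0.0]
shifted = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
print(dtw_distance(curve, shifted))  # 0.0
```

This O(n*m) computation per curve pair is why the article runs it as a batch job across all wells and mnemonics rather than interactively.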

Build on your own

Clone the git repository below and follow the steps in the README to create your own demo.

$ git clone https://github.com/vedantja/wellbook.git

For questions, please contact Vedant Jain.

Special thanks to Randy Gelhausen and Ofer Mendelevitch for their work and help on this.

Last update: 09-22-2016 01:49 PM