Member since: 08-01-2016
Posts: 11
Kudos Received: 3
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7227 | 06-13-2016 02:43 PM
09-29-2016
08:02 AM
2 Kudos
Restarting the Cloudera Manager agent on the host solved the problem for me!
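For reference, a minimal sketch of the restart on a RHEL/CentOS 6 host (assuming the standard `cloudera-scm-agent` init script; systemd-based hosts would use `systemctl` instead):

```shell
# Restart the Cloudera Manager agent on the affected host (requires root)
sudo service cloudera-scm-agent restart
# Confirm the agent came back up
sudo service cloudera-scm-agent status
```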
06-13-2016
02:43 PM
Here is the solution. The HAWQ installation had changed a setting in /etc/sysctl.conf:

vm.overcommit_memory=2

Changing it back to the default value got rid of all the OOM errors:

vm.overcommit_memory=0
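A sketch of how the revert can be applied and persisted (these are configuration commands requiring root; the `sed` expression assumes the line exists in /etc/sysctl.conf as the post describes):

```shell
# Check the current overcommit policy (HAWQ had set this to 2 = strict accounting)
cat /proc/sys/vm/overcommit_memory
# Revert to the kernel's heuristic default immediately
sudo sysctl -w vm.overcommit_memory=0
# Persist the change across reboots by editing /etc/sysctl.conf
sudo sed -i 's/^vm.overcommit_memory.*/vm.overcommit_memory=0/' /etc/sysctl.conf
sudo sysctl -p
```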
05-20-2016
06:58 PM
Thank you for the response. I tried that, but it seems to have had no impact.
05-20-2016
06:56 PM
ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 274125
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
05-19-2016
06:15 PM
I have changed the configuration in Ambari; it resulted in the following:

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000400000000, 16106127360, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 16106127360 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /var/log/hbase/hs_err_pid35924.log
05-19-2016
05:40 PM
Please help me out; I am unable to figure out what the problem is while starting the region server. It exits with the following message:

starting regionserver, logging to /var/log/hbase/hbase-root-regionserver-ip-172-31-19-88.out
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000300000000, 20401094656, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 20401094656 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /var/log/hbase/hs_err_pid31016.log

Below is the `free -m` output:

                   total    used    free  shared  buffers  cached
Mem:               68555   21134   47420       0      181   14850
-/+ buffers/cache:  6102   62452
Swap:                  0       0       0

I am not sure if it is a Java error or an OS error. Below are the system configs:

RAM: 70 GB
OS: Red Hat / CentOS 6.7
Ambari: 2.2.2
HDP: 2.4.2
Java version (Ambari): 1.8.0_40
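Since `vm.overcommit_memory=2` turned out to be the cause, the mmap failure despite ~47 GB free is consistent with the kernel's commit limit, not physical RAM, being exhausted: in strict mode the kernel caps allocations at roughly swap + RAM × overcommit_ratio/100, which with zero swap can sit well below total memory. A quick diagnostic sketch using standard Linux procfs paths:

```shell
# Compare the kernel's commit ceiling against what is already committed;
# a JVM mmap fails in strict mode once Committed_AS would exceed CommitLimit
grep -E 'MemTotal|CommitLimit|Committed_AS' /proc/meminfo
# 2 = strict accounting, 0 = heuristic default
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio
```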
Labels:
- Apache HBase
04-26-2016
09:07 PM
#!/usr/local/bin/python2.7
import os, gzip, sys, csv, shutil, time, subprocess

# Read the TagListDataOnly.csv file
ifile = open(sys.argv[3], 'rU')
reader = csv.reader(ifile, delimiter=',')
# Build a dictionary with the tag code as key and the metric name as value
mydict = dict((str(rows[1]), str(rows[3])) for rows in reader)
ifile.close()

# Timestamp format of the input data
pattern = '%d-%m-%Y %H:%M:%S.%f'
finallines = []
inputfilename = str(sys.argv[6]) + str(sys.argv[2])
serialno = sys.argv[2].split('_')[-2][:-1] + '9'

with gzip.open(inputfilename, 'r') as data:
    for line in data:
        elements = line.split(',')
        epoch = int(time.mktime(time.strptime(elements[3].rstrip(), pattern)))
        origMetricName = mydict[elements[0]]
        # OPC quality code 192 means "good"
        quality = '0' if int(elements[2]) == 192 else '1'
        # Skip non-numeric values
        if not elements[1].rstrip().lstrip('-').replace('.', '', 1).isdigit():
            continue
        if origMetricName.rsplit('_', 1)[-1].isdigit():
            # Metric name ends in a numeric index: strip it off and emit it as a tag
            metricname = '_'.join(origMetricName.rsplit('_', 1)[0:-1])
            finalline = (metricname + ' ' + str(epoch) + ' ' + elements[1].rstrip() +
                         ' SerialNo=' + serialno +
                         ' Index=' + origMetricName.split('_')[-1] +
                         ' DataQuality=' + quality +
                         ' DataType=RAW DataSource=OPC \n')
        else:
            metricname = origMetricName
            finalline = (metricname + ' ' + str(epoch) + ' ' + elements[1].rstrip() +
                         ' SerialNo=' + serialno +
                         ' DataQuality=' + quality +
                         ' DataType=RAW DataSource=OPC \n')
        finallines.append(finalline)

outputcsv = str(sys.argv[4]) + str(sys.argv[2]).split('.')[0] + '9' + "_Changed.csv"
outputcsvgzdir = str(sys.argv[5]) + '/' + serialno + '/Sensor/'
if not os.path.exists(outputcsvgzdir):
    os.makedirs(outputcsvgzdir)
outputcsvgz = outputcsvgzdir + str(sys.argv[2]).split('.')[0] + '9' + "_Changed.csv.gz"

target = open(outputcsv, "w")
target.writelines(finallines)
target.close()

# Gzip the output file and remove the uncompressed copy
with open(outputcsv, 'rb') as f_in, gzip.open(outputcsvgz, 'wb') as f_out:
    shutil.copyfileobj(f_in, f_out)
os.remove(outputcsv)
#os.remove(inputfilename)

command = ("/opt/opentsdb/build/tsdb import --auto-metric --skip-errors "
           "--zkquorum=172.31.19.88:2181 --zkbasedir=/hbase-unsecure %s" % outputcsvgz)
process = subprocess.Popen(command, shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# communicate() drains stdout/stderr and waits for exit; calling wait() first
# with PIPEs attached risks a deadlock on a full pipe buffer
output, err = process.communicate()

debug = open('/import/debug/debug-' + sys.argv[2].split('.')[0] + '9' + '.txt', 'w')
debug.write('---err txt---\n')
debug.write(err)
debug.write('---op txt---\n')
debug.write(output)
debug.write(outputcsvgz)
debug.close()
print process.returncode
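To make the string concatenation in the script easier to follow, here is a minimal sketch of the line format it emits for OpenTSDB's `tsdb import` tool ("metric timestamp value tag=value ..."); `build_line` is a hypothetical helper for illustration, not part of the original script:

```python
def build_line(metric, ts, value, serialno, quality, index=None):
    # One OpenTSDB import line: metric, epoch seconds, value, then tags
    tags = 'SerialNo=%s' % serialno
    if index is not None:
        # Numeric suffix stripped from the metric name becomes an Index tag
        tags += ' Index=%s' % index
    tags += ' DataQuality=%s DataType=RAW DataSource=OPC' % quality
    return '%s %d %s %s' % (metric, ts, value, tags)

print(build_line('Sensor_Temp', 1461700000, '23.5', '123459', '0', index='2'))
# -> Sensor_Temp 1461700000 23.5 SerialNo=123459 Index=2 DataQuality=0 DataType=RAW DataSource=OPC
```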
Hi Joe, thank you for the quick response. I am trying to perform an ETL process that transforms the data and bulk-ingests it into OpenTSDB. The Python script is above, and the NiFi template is attached as etl-1.xml. Thanks, Raghu
04-26-2016
08:32 PM
1 Kudo
Unable to start a processor after stopping it once; on right-click there is no Start or edit-configuration option. Files get queued up and the Execute Stream processor won't pick them up. I am also unable to empty the queue using Empty Queue; NiFi has to be restarted. Hardware configuration: 8 cores, 60 GB RAM, 160 GB hard disk.
Labels:
- Apache NiFi