Member since: 08-17-2016
Posts: 9
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1513 | 08-18-2016 01:42 PM |
05-08-2018
08:23 AM
Thanks for the response, but as I said in my question, I have already read the similar issues, and the suggested solution is GC tuning. I did not find anything there that resolves the RegionServer GC pause problem.
05-07-2018
01:29 PM
Hi everyone, we are getting the exception below, and the RegionServer shuts down afterwards:

org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired

According to the technical notes, the problem is caused by garbage collector pause time. gc-log.pdf contains the GC stats from before the shutdown. We have long GC pauses and they happen very often; the cause is specifically young generation GC. So we decreased the young generation heap to 2500 MB and also added the GC parameters below, but that did not solve our problem.

-XX:+UseConcMarkSweepGC
-XX:PermSize=128m
-XX:MaxPermSize=128m
-XX:SurvivorRatio=4
-XX:+PerfDisableSharedMem
-XX:ParallelGCThreads=8
-XX:CMSInitiatingOccupancyFraction=50
-XX:+UseCMSInitiatingOccupancyOnly

What steps should I take to solve this? Thanks.
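For completeness, here is roughly how we apply those options, assuming they are passed through HBASE_REGIONSERVER_OPTS in hbase-env.sh; the GC log path and the extra GC logging flags are illustrative additions, not our exact configuration:

# hbase-env.sh (sketch; log path is an assumption)
# Note: on JDK 8 the PermSize/MaxPermSize flags are ignored, since PermGen was removed.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xmn2500m \
  -XX:+UseConcMarkSweepGC \
  -XX:PermSize=128m -XX:MaxPermSize=128m \
  -XX:SurvivorRatio=4 \
  -XX:+PerfDisableSharedMem \
  -XX:ParallelGCThreads=8 \
  -XX:CMSInitiatingOccupancyFraction=50 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hbase/gc-regionserver.log"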
Labels:
- Apache HBase
02-08-2018
06:55 AM
Hi all, our RegionServers crash with a Java error after two or three hours. I pasted the error log that we get when the issue occurs. Do you have any idea why this happens, and any suggestions on how to handle it?

redhat 6.9, kernel 2.6.32-696.13.2.el6.x86_64
java version: jdk1.8.0_60

/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh: line 214: 19819 Segmentation fault (core dumped) nice -n $HBASE_NICENESS "$HBASE_HOME"/bin/hbase --config "${HBASE_CONF_DIR}" $command "$@" start >> ${HBASE_LOGOUT} 2>&1
2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007fcd350521e9, pid=19819, tid=140518532314880
#
# JRE version: Java(TM) SE Runtime Environment (8.0_60-b27) (build 1.8.0_60-b27)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.60-b23 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# v ~StubRoutines::jbyte_disjoint_arraycopy
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /var/log/hbase/hs_err_pid19819.log
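In case it helps with gathering more data, the JVM report itself suggests enabling core dumps before the next crash. A minimal sketch, assuming the RegionServer is started via hbase-daemon.sh from the same shell and user, and that raising the core file limit is allowed on this host:

# Allow core dumps in the shell that launches the RegionServer (assumption: same user and session).
ulimit -c unlimited

# Restart the RegionServer so the new limit applies.
/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh stop regionserver
/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh start regionserver

# After the next SIGBUS, start from the JVM error report (the pid in the filename will differ).
ls /var/log/hbase/hs_err_pid*.log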
Labels:
- Apache HBase
04-11-2017
12:36 PM
Hi everyone, yesterday the root directory (/) on the NameNode server ran out of space, and after that the NameNode terminated. After restarting the NameNode, all the files in HDFS were reported as corrupted. However, if I run hdfs fsck / and then look up one of the blocks listed as corrupt on its DataNode, the block is reachable and healthy. How can I find the cause of the problem, and how can I recover from it? fsck also reports under-replicated blocks, but nothing has changed since yesterday. Thanks.
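For reference, a minimal sketch of the checks described above; /user/example/part-00000 is a made-up path, substitute a file that fsck actually reports as corrupt:

# List the files and block IDs the NameNode currently marks as corrupt.
hdfs fsck / -list-corruptfileblocks

# For one reported file, show its blocks, their locations, and replication status.
hdfs fsck /user/example/part-00000 -files -blocks -locations

# Cross-check DataNode health and remaining disk capacity cluster-wide.
hdfs dfsadmin -report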
Labels:
- Apache Hadoop
08-19-2016
08:49 AM
Hi, we are running into trouble on the line flowFile = session.write(flowFile, ModJSON()). The exception is:

javax.script.ScriptException: TypeError: write(): 1st arg can't be coerced to int, byte[] in <script> at line number 64

What do you think the problem is? The full script follows:

import json
import java.io
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback


class ModJSON(StreamCallback):

    def __init__(self):
        pass

    def process(self, inputStream, outputStream):
        text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        obj = json.loads(text)
        endorsements = obj['endorsement'].split(',')
        categoryIds = obj['categoryIds'].split(',')
        orderItemIds = obj['orderItemIds'].split(',')
        seq_num = 0
        seq_num1 = 1
        for endorsement in endorsements:
            if seq_num == 0 and seq_num1 == len(endorsements):
                # Only one element: open and close the JSON array in a single record.
                newObj = '[{"endorsement":' + endorsement \
                    + ',"eventId":"' + obj['eventId'] \
                    + '", "categoryId":"' + categoryIds[seq_num] \
                    + '", "createDate":"' + obj['createDate'] \
                    + '", "buyerId":"' + obj['buyerId'] \
                    + '", "channel":"' + obj['channel'] + '", "city":"' \
                    + obj['city'] + '", "orderItemId": "' + orderItemIds[seq_num] + '"}]'
                seq_num += 1
                seq_num1 += 1
            elif seq_num == 0:
                # First element: open the JSON array.
                newObj = '[{"endorsement":' + endorsement + ',"eventId":"' \
                    + obj['eventId'] + '", "categoryId":"' \
                    + categoryIds[seq_num] + '", "createDate":"' \
                    + obj['createDate'] + '", "buyerId":"' \
                    + obj['buyerId'] + '", "channel":"' + obj['channel'] \
                    + '", "city":"' + obj['city'] + '", "orderItemId": "' + orderItemIds[seq_num] + '"},'
                seq_num += 1
                seq_num1 += 1
            elif seq_num1 == len(endorsements):
                # Last element: close the JSON array.
                newObj += '{"endorsement":' + endorsement + ',"eventId":"' \
                    + obj['eventId'] + '", "categoryId":"' \
                    + categoryIds[seq_num] + '", "createDate":"' \
                    + obj['createDate'] + '", "buyerId":"' \
                    + obj['buyerId'] + '", "channel":"' + obj['channel'] \
                    + '", "city":"' + obj['city'] + '", "orderItemId": "' + orderItemIds[seq_num] + '"}]'
                seq_num += 1
                seq_num1 += 1
            else:
                # Middle element: append another object to the array being built.
                newObj += '{"endorsement":' + endorsement + ',"eventId":"' \
                    + obj['eventId'] + '", "categoryId":"' \
                    + categoryIds[seq_num] + '", "createDate":"' \
                    + obj['createDate'] + '", "buyerId":"' \
                    + obj['buyerId'] + '", "channel":"' + obj['channel'] \
                    + '", "city":"' + obj['city'] + '", "orderItemId": "' + orderItemIds[seq_num] + '"},'
                seq_num += 1
                seq_num1 += 1
        outputStream.write(bytearray(newObj.encode('utf-8')))


flowFile = session.get()
if flowFile != None:
    flowFile = session.write(flowFile, ModJSON())
    flowFile = session.putAttribute(flowFile, 'filename',
                                    flowFile.getAttribute('filename').split('.')[0] + '_translated.json')
    session.transfer(flowFile, REL_SUCCESS)
    session.commit()
08-18-2016
01:42 PM
I found the problem. The cause was on the HBase side: I was sending the same value as the row key for every record, so the rows overwrote each other and only one could exist. After changing the key values, everything works fine. Thanks.
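For anyone who hits the same thing, a minimal hbase shell sketch of the behaviour (the table 'endorsements' and column family 'cf' are made up): puts that reuse a row key overwrite each other, while distinct row keys keep separate rows.

# Same row key twice: the second put overwrites the first, so scan returns one row.
hbase shell <<'EOF'
create 'endorsements', 'cf'
put 'endorsements', 'event-1', 'cf:endorsement', '59.9'
put 'endorsements', 'event-1', 'cf:endorsement', '13.11'
scan 'endorsements'
EOF

# Distinct row keys (e.g. eventId plus orderItemId): both records are kept.
hbase shell <<'EOF'
put 'endorsements', 'event-1_item-1', 'cf:endorsement', '59.9'
put 'endorsements', 'event-1_item-2', 'cf:endorsement', '13.11'
scan 'endorsements'
EOF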
08-18-2016
07:34 AM
Because of the attachment limit, I am continuing in this message. The two split JSONs: The two PutSQL queries:
08-17-2016
06:01 PM
The splitting is done correctly. I am watching all the processors up to the last one, and according to data provenance the PutSQL processor receives the INSERT statements. But still, only the last JSON part is inserted as a record every time.
08-17-2016
02:13 PM
1 Kudo
Hi, I am using SplitJson to split a JSON array. After that I am using ConvertJSONToSQL and PutSQL, but NiFi only inserts the last part of the split JSON. I want to insert all parts. Do you have any idea why I can't do that? Thanks. Source JSON:
[ {
"endorsement" : 59.9,
"eventId" : "c54902db-50de-4dbd-b908-4317981931fc",
"categoryId" : "1000230",
"createDate" : "2016-08-18T10:15:00.361Z",
"buyerId" : "4748764",
"channel" : "MOBILE_IOS",
"city" : "Bursa"
}, {
"endorsement" : 13.11,
"eventId" : "c54902db-50de-4dbd-b908-4317981931fc",
"categoryId" : "1000491",
"createDate" : "2016-08-18T10:15:00.361Z",
"buyerId" : "4748764",
"channel" : "MOBILE_IOS",
"city" : "Bursa"
} ]
After SplitJson there are two JSONs:
Labels:
- Apache NiFi