Member since: 03-01-2016
Posts: 104
Kudos Received: 97
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1667 | 06-03-2018 09:22 PM |
|  | 28170 | 05-21-2018 10:31 PM |
|  | 2173 | 10-19-2016 07:13 AM |
04-03-2017
08:03 AM
@Rohan Pednekar Could you provide pointers to the HBase APIs that should be used while writing data, so that the data reaches only the desired nodes?
03-30-2017
06:32 PM
Connecting to HBase through Phoenix throws the exceptions below:

[root@hl1 hbase]# cd /usr/hdp/current/phoenix-client/bin/
[root@hl1 bin]# python sqlline.py localhost:2181:/hbase-unsecure
sun.misc.SignalHandler not found in gnu.gcj.runtime.SystemClassLoader{urls=[file:/etc/hbase/conf/,file:/usr/hdp/2.5.3.0-37/phoenix/bin/../phoenix-4.7.0.2.5.3.0-37-client.jar,file:./,file:/etc/hadoop/conf/,file:/usr/hdp/2.5.3.0-37/hadoop/conf/,file:/usr/hdp/2.5.3.0-37/hadoop-hdfs/./], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost:2181:/hbase-unsecure none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost:2181:/hbase-unsecure
java.lang.ClassFormatError: org.apache.phoenix.jdbc.PhoenixDriver (unrecognized class file version)
at java.lang.VMClassLoader.defineClass(libgcj.so.10)
at java.lang.ClassLoader.defineClass(libgcj.so.10)
at java.security.SecureClassLoader.defineClass(libgcj.so.10)
at java.net.URLClassLoader.findClass(libgcj.so.10)
at java.lang.ClassLoader.loadClass(libgcj.so.10)
at java.lang.ClassLoader.loadClass(libgcj.so.10)
at java.lang.Class.forName(libgcj.so.10)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:115)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at java.lang.reflect.Method.invoke(libgcj.so.10)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:804)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
sqlline version 1.1.8
java.lang.NullPointerException
at sqlline.SqlLine.begin(SqlLine.java:680)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
java.lang.NullPointerException
at sqlline.SqlLine.begin(SqlLine.java:680)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
java.lang.NullPointerException
at sqlline.SqlLine.begin(SqlLine.java:680)
at sqlline.SqlLine.start(SqlLine.java:398)

ROOT CAUSE: JAVA_HOME was not set, so the script was picking up GCJ (the GNU Java runtime), which cannot load classes built for a newer JDK, hence the ClassFormatError about an unrecognized class file version.

SOLUTION:
export JAVA_HOME=/usr/jdk64/jdk1.8.0_77/
export HBASE_CONF_PATH=/etc/hbase/conf/
export PATH=$JAVA_HOME/bin:$PATH
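A minimal sketch of applying the fix. The JDK path comes from this post's cluster and should be adjusted to whatever JDK is actually installed on your host:

```shell
# Point JAVA_HOME at a real JDK (path from the post; adjust to your install).
export JAVA_HOME=/usr/jdk64/jdk1.8.0_77
export HBASE_CONF_PATH=/etc/hbase/conf
export PATH="$JAVA_HOME/bin:$PATH"
# With the JDK's bin directory first on PATH, 'java' no longer resolves to GCJ.
echo "PATH now starts with: ${PATH%%:*}"
```

After this, rerunning `python sqlline.py localhost:2181:/hbase-unsecure` should load the Phoenix driver with the proper JVM.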
01-19-2017
04:47 PM
DESCRIPTION: Creating a Hive table backed by HBase fails:

CREATE TABLE EXAMPLE_REPORT1
(
  key string,
  claim_type_code string,
  yearservice string,
  monthservice string
)
STORED BY "org.apache.hadoop.hive.hbase.HBaseStorageHandler"
WITH SERDEPROPERTIES("hbase.columns.mapping" = ":key,claim_type_code:claim_type_code,yearservice:yearservice,monthservice:monthservice")
TBLPROPERTIES("hbase.table.name"="EXAMPLE_REPORT1");

The exception received was:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:208)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
[...]
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:395)
at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:562)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1192)

ROOT CAUSE: Hive attempted to contact ZooKeeper to get the HBase meta region location but could not retrieve the required information. This is a known issue, described in HBASE-16732.

WORKAROUND: In this instance, restarting ZooKeeper fixed the issue: it reinitialized the HBase znodes, after which Hive was able to get the required information.
01-19-2017
08:40 AM
1 Kudo
DESCRIPTION: Received frequent alerts for connection timeouts with a JournalNode. Checking the connectivity gives the output below:

curl -v http://123.example.com:8480 --max-time 4 | tail -4
* About to connect() to 123.example.com port 8480 (#0)
* Trying 10.24.16.11... connected
* Connected to 123.example.com (10.24.16.11) port 8480 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.18 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 123.example.com:8480
> Accept: */*
>
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
* Operation timed out after 4000 milliseconds with 0 bytes received
* Closing connection #0
curl: (28) Operation timed out after 4000 milliseconds with 0 bytes received

Running netstat for port 8480 shows a huge number of connections in CLOSE_WAIT:

[root@123 ~]# netstat -putane | grep -i 8480
tcp 0 0 0.0.0.0:8480 0.0.0.0:* LISTEN 72383 1586576877 1719/java
tcp 1 0 10.24.16.11:8480 10.24.17.11:46572 CLOSE_WAIT 72383 1587407492 1719/java
tcp 1 0 10.24.16.11:8480 10.24.17.11:57944 CLOSE_WAIT 72383 1586744345 1719/java
tcp 1 0 10.24.16.11:8480 10.24.17.11:57462 CLOSE_WAIT 72383 1586708412 1719/java

(CLOSE_WAIT means the remote end has closed the connection, but the local process has not yet closed its side of the socket.)

ROOT CAUSE: An edits_inprogress file had been stuck as an orphan for the last two months, even though the edits recorded in it were already captured in other finalized edits files. Because of this, sockets on port 8480 of the affected JournalNode process were left in CLOSE_WAIT, as they were never closed properly.

SOLUTION: Removed the orphan edits_inprogress file and restarted the JournalNodes.
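To quantify the leak, a small sketch that counts CLOSE_WAIT sockets on a given local port from `netstat -putane`-style output. Here it is fed the sample lines from this post; in practice you would pipe live netstat output into `count_close_wait`:

```shell
# Count sockets in CLOSE_WAIT whose local address ends in the given port.
# Fields in 'netstat -putane' output: $4 = local address, $6 = state.
count_close_wait() {
  awk -v port="$1" '$4 ~ (":" port "$") && $6 == "CLOSE_WAIT"' | wc -l
}

# Sample lines taken from the post (live usage: netstat -putane | count_close_wait 8480)
printf '%s\n' \
  'tcp 0 0 0.0.0.0:8480 0.0.0.0:* LISTEN 72383 1586576877 1719/java' \
  'tcp 1 0 10.24.16.11:8480 10.24.17.11:46572 CLOSE_WAIT 72383 1587407492 1719/java' \
  'tcp 1 0 10.24.16.11:8480 10.24.17.11:57944 CLOSE_WAIT 72383 1586744345 1719/java' \
  | count_close_wait 8480
```

A steadily growing count from this check is a good alert condition for the orphan-edits situation described above.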
12-28-2016
12:09 PM
Try increasing hbase.master.namespace.init.timeout to a larger value, say 2400000 (milliseconds, i.e. 40 minutes).
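For reference, this property is set in hbase-site.xml (on HDP, via Ambari's custom hbase-site section), roughly as follows, using the value suggested above:

```xml
<property>
  <name>hbase.master.namespace.init.timeout</name>
  <value>2400000</value>
</property>
```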
12-28-2016
11:59 AM
What other activity is happening before this exception? Is WAL splitting taking place? Have you checked the NameNode's health and its logs for the same time window?
12-28-2016
11:51 AM
Also, please share whether you are seeing any application-specific exceptions apart from this one.
12-28-2016
11:41 AM
Exit code 127 usually points to a user-application-specific issue; in shells, 127 means "command not found". https://issues.apache.org/jira/browse/YARN-3704
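A quick illustration of why 127 points at the job's own launch environment (missing binary, bad PATH in the container launch script) rather than at YARN itself. The command name below is deliberately nonexistent:

```shell
# Running a nonexistent command makes the shell return exit status 127.
rc=0
no_such_command_xyz 2>/dev/null || rc=$?
echo "exit status: $rc"    # prints: exit status: 127
```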
12-27-2016
06:30 PM
1 Kudo
SYMPTOMS: With MaxApplications=1000 set in the configuration, jobs start getting stuck in the ACCEPTED state after 500+ submissions. The following error is seen in the ResourceManager logs:

caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1452018403088_0506 to YARN :
org.apache.hadoop.security.AccessControlException: Queue root.hive1 already has 1000 applications, cannot accept submission of application: application_1452018403088_0506
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:271)
at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:291)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
... 18 more

ROOT CAUSE: This is a known issue, reported in internal Jira BUG-50642. As of this writing the issue is unresolved; however, the behavior is not seen in Hadoop 2.8.0.

WORKAROUND: Set yarn.scheduler.capacity.root.ordering-policy.fair.enable-size-based-weight=false
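The workaround property belongs in capacity-scheduler.xml. A sketch of the fragment, with the property name copied verbatim from this workaround (verify it against your queue layout, since ordering-policy settings are normally scoped per queue path):

```xml
<property>
  <name>yarn.scheduler.capacity.root.ordering-policy.fair.enable-size-based-weight</name>
  <value>false</value>
</property>
```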
12-27-2016
05:22 PM
ENVIRONMENT: HDP 2.5 with ResourceManager in HA.

SYMPTOMS:

gaurav@g1:~> hadoop jar /usr/hdp/2.5.0.0-1245/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 5 8
Number of Maps = 5
Samples per Map = 8
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Starting Job
11/02/16 12:20:01 INFO impl.TimelineClientImpl: Timeline service address: http://g1.openstacklocal:8188/ws/v1/timeline
11/02/16 12:20:02 INFO client.AHSProxy: Connecting to Application History server at /0.0.0.0:10200
11/02/16 12:20:05 WARN ipc.Client: Failed to connect to server: g2.openstacklocal:8032: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

ROOT CAUSE: This is a known behavior, reported in internal Jira BUG-65968. It is not a problem but expected behavior: regardless of which ResourceManager is currently active, the application always tries to connect to rm1 first, then rm2. Note that this message is just a warning and won't affect the job run in any way.
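The rm1-then-rm2 probing order follows the order of the RM IDs declared in yarn-site.xml. A typical HA fragment looks like the sketch below (the IDs are the standard rm1/rm2 convention; the hostname is a placeholder based on this post's cluster):

```xml
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>g2.openstacklocal</value>
</property>
```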