Member since: 04-20-2016
Posts: 86
Kudos Received: 27
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2960 | 03-13-2017 04:06 AM
 | 4198 | 03-09-2017 01:55 PM
 | 1687 | 01-05-2017 02:13 PM
 | 6180 | 12-29-2016 05:43 PM
 | 5179 | 12-28-2016 11:03 PM
12-28-2016 10:29 PM
Good to hear. Please accept the answer if it has helped to address the issue.
12-28-2016 09:52 PM
Can you check and compare the output of /etc/krb5.conf from the working node and the non-working node?
12-28-2016 07:45 PM
Also cross-verify /etc/krb5.conf to see if it is set up correctly.
12-28-2016 07:45 PM
1 Kudo
The error indicates that the KDC is not reachable. Is the KDC server up? You will need to check on that. Try doing a telnet or nc to the KDC server from the client node as below:
nc -vz kdc_host 88
Eg: [kafka@ambari-slave1 SimpleJava]$ nc -vz ambari-slave2 88
Connection to ambari-slave2 88 port [tcp/kerberos] succeeded!
[kafka@ambari-slave1 SimpleJava]$
[kafka@ambari-slave1 SimpleJava]$ telnet ambari-slave2 88
Trying 192.168.59.13...
Connected to ambari-slave2.
Escape character is '^]'.
^CConnection closed by foreign host.
[kafka@ambari-slave1 SimpleJava]$
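If you prefer to test from the client JVM itself, a plain socket probe gives the same information as the nc/telnet check above. This is just a minimal Scala sketch; the host name is a placeholder for the KDC host taken from your /etc/krb5.conf:

import java.net.{InetSocketAddress, Socket}

// Placeholder host name; replace with your KDC host from /etc/krb5.conf
val kdcHost = "ambari-slave2"
val socket = new Socket()
try {
  // Port 88 is the default Kerberos port; fail fast after 5 seconds
  socket.connect(new InetSocketAddress(kdcHost, 88), 5000)
  println(s"KDC port 88 is reachable on $kdcHost")
} catch {
  case e: java.io.IOException => println(s"KDC not reachable: ${e.getMessage}")
} finally {
  socket.close()
}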
12-28-2016 06:31 PM
Do you have the MIT KDC set up? If not, please install it as documented below: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Security_Guide/content/_optional_install_a_new_mit_kdc.html Once done, choose the option "Use an Existing MIT KDC".
12-28-2016 04:19 PM
2 Kudos
ISSUE
While trying to connect to the HBase cluster from an edge node or through a client HBase API, we get the exception "Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations".
SYMPTOM
The exact stack trace that we encounter is as below:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:312)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:151)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:821)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableAvailable(ConnectionManager.java:991)
at org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:1400)
at org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:1408)
at Table.main(Table.java:15)
ROOT CAUSE
This happens when the user has an incorrect value defined for "zookeeper.znode.parent" in the hbase-site.xml sourced on the client side, or, in the case of a custom API, when "zookeeper.znode.parent" was updated to point to a wrong location. For example, the default "zookeeper.znode.parent" is set to "/hbase-unsecure", but if you incorrectly specify it as, say, "/hbase", as opposed to what is set up in the cluster, you will encounter this exception while trying to connect to the HBase cluster.
RESOLUTION
The solution is to update the hbase-site.xml on the client (or source the same hbase-site.xml from the cluster), or to update the HBase API code so that the "zookeeper.znode.parent" value correctly points to the one configured in the HBase cluster.
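As an illustration (not the exact client code from this case), a minimal Scala sketch of an HBase client that sets "zookeeper.znode.parent" explicitly could look like the following. The ZooKeeper hosts and the table name are placeholders, and the HBase 1.x client API is assumed:

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory

val conf = HBaseConfiguration.create()
// Placeholder hosts; take these values from the cluster's hbase-site.xml
conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com")
conf.set("hbase.zookeeper.property.clientPort", "2181")
// Must match the value configured in the cluster (HDP unsecured default shown here)
conf.set("zookeeper.znode.parent", "/hbase-unsecure")

val connection = ConnectionFactory.createConnection(conf)
try {
  val admin = connection.getAdmin
  // "mytable" is just an example table name
  println(admin.isTableAvailable(TableName.valueOf("mytable")))
} finally {
  connection.close()
}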
12-28-2016 04:12 PM
1 Kudo
PROBLEM:
1. Create a source external Hive table as below:
CREATE EXTERNAL TABLE `casesclosed`(
`number` int,
`manager` string,
`owner` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://sumeshhdp/tmp/casesclosed'
TBLPROPERTIES (
'COLUMN_STATS_ACCURATE'='true',
'numFiles'='1',
'totalSize'='3693',
'transient_lastDdlTime'='1478557456')
2. Create an ORC table with CTAS from the source table as below:
CREATE TABLE casesclosed_mod
STORED AS ORC tblproperties("orc.compress"="ZLIB", "orc.compress.size"="8192")
AS
SELECT
cast(number as int) as number,
cast(manager as varchar(40)) as manager,
cast(owner as varchar(40)) as owner
FROM casesclosed;
3. On creating a Spark DataFrame against both the non-ORC (source) table and the ORC table, we are unable to list the column names for the ORC table:
scala> val df = sqlContext.table("default.casesclosed")
df: org.apache.spark.sql.DataFrame = number: int, manager: string, owner: string
scala> val df = sqlContext.table("default.casesclosed_mod")
16/11/07 22:41:48 INFO OrcRelation: Listing hdfs://sumeshhdp/apps/hive/warehouse/casesclosed_mod on driver
df: org.apache.spark.sql.DataFrame = _col0: int, _col1: string, _col2: string
TWO WORKAROUNDS:
- Use Spark to create the tables instead of Hive.
- Set sqlContext.setConf("spark.sql.hive.convertMetastoreOrc", "false") (see the sketch after this list).
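To illustrate the second workaround, a minimal spark-shell sketch (assuming Spark 1.6 with a HiveContext, as in the output above) would be:

// Disable Spark's built-in ORC conversion so the table is read through the
// Hive SerDe and the metastore column names are preserved
sqlContext.setConf("spark.sql.hive.convertMetastoreOrc", "false")

val df = sqlContext.table("default.casesclosed_mod")
// Should now print number, manager, owner instead of _col0, _col1, _col2
df.printSchema()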
ROOT CAUSE:
The table "casesclosed_mod" is "STORED AS ORC tblproperties("orc.compress"="ZLIB", "orc.compress.size"="8192")". Spark supports ORC data source format internally, and has its own logic/ method to deal with ORC format, which is different from Hive's. So in this bug, Spark can not "understand" the format of the ORC file created by Hive. In Hive, if create a table "casesclosed_mod" without "STORED AS ORC tblproperties("orc.compress"="ZLIB", "orc.compress.size"="8192")", everything works fine. In Hive:
hive> CREATE TABLE casesclosed_mod0007
> AS
> SELECT
> cast(number as int) as number,
> cast(manager as varchar(40)) as manager,
> cast(owner as varchar(40)) as owner
> FROM casesclosed007;
In Spark-shell:
scala> val df = sqlContext.table("casesclosed_mod0007") ;
df: org.apache.spark.sql.DataFrame = [number: int, manager: string, owner: string]
This is a known bug, tracked in the Apache JIRA:
https://issues.apache.org/jira/browse/SPARK-16628
12-28-2016 02:29 PM
1 Kudo
The steps are documented below: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Security_Guide/content/_installing_and_configuring_the_kdc.html I would not recommend uninstalling Kerberos once it has been installed, since that can potentially mess up your OS installation. Rather, follow the route of regenerating the principals and the keytabs; the steps are documented in the link above. If you want to manually generate the keytabs/principals, you can refer to the link below: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_installing_manually_book/content/creating_service_principals_and_keytab_files_for_hdp.html
12-28-2016 02:04 PM
1 Kudo
@kotesh banoth We need more information here.
12-28-2016 06:04 AM
1 Kudo
Please check which queue the job is getting submitted to and how many jobs are running in that queue. It is possible that the queue where the job is submitted does not have any resources available and hence is unable to allocate resources to the job. You can check the RM UI, which will give you a snapshot of the actual state of the YARN resource allocation.