Member since: 08-08-2013
Posts: 339
Kudos Received: 132
Solutions: 27

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 14966 | 01-18-2018 08:38 AM |
| | 1611 | 05-11-2017 06:50 PM |
| | 9278 | 04-28-2017 11:00 AM |
| | 3486 | 04-12-2017 01:36 AM |
| | 2872 | 02-14-2017 05:11 AM |
01-24-2016 08:03 PM
Hi @Ancil McBarnett, thank you so much! ...stupid me 😉
01-24-2016 07:54 PM
Hi, I am facing an error while running the Enable Kerberos wizard. Installation of the Kerberos client succeeded, but the "Test Kerberos client" step is failing. In the Ambari log I found the failing command (I executed it directly in a shell to see what happens):

```
$ sudo /usr/bin/kadmin -s b0d095j2.<domain> -p admin/admin@<realm> -w <pw> -r <realm> -q "get_principal admin/admin@<realm>"
Authenticating as principal admin/admin@<realm> with password.
kadmin: Communication failure with server while initializing kadmin interface
```

What is going on here?! The MIT Kerberos KDC is running:

```
$ sudo /etc/init.d/krb5kdc status
krb5kdc (pid 102972) is running...
$ sudo netstat -pant | grep 102972
tcp    0    0 0.0.0.0:88    0.0.0.0:*    LISTEN    102972/krb5kdc
```

In krb5.log there is one line for the above kadmin command:

```
Jan 24 20:53:30 b0d095j2 krb5kdc[102972](info): AS_REQ (4 etypes {18 17 16 23}) 10.41.27.13: ISSUE: authtime 1453665210, etypes {rep=18 tkt=18 ses=18}, admin/admin@<realm> for kadmin/b0d095j2.<domain>@<realm>
```

Any hint highly appreciated...
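For anyone hitting the same error: this particular kadmin failure usually points at the admin server (kadmind) rather than the KDC itself. A minimal diagnostic sketch, assuming an RHEL/CentOS-style layout matching the init scripts above (service names may differ on other distributions):

```bash
# kadmin talks to kadmind on TCP port 749, not to krb5kdc on port 88,
# so a running krb5kdc alone is not enough -- check the admin server too
sudo /etc/init.d/kadmin status      # the admin server's service is 'kadmin' on RHEL-style systems
sudo netstat -pant | grep ':749'    # kadmind's default port should show up as LISTEN

# kadmin.local bypasses kadmind entirely; if this works while the remote
# kadmin fails, the problem is the kadmind service or its port, not the principal
sudo kadmin.local -q "get_principal admin/admin"
```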
Labels:
- Apache Ambari
01-21-2016 03:17 PM
1 Kudo
Thanks @Neeraj. Just to give you feedback on another 'solution': in the meantime I got two more datanodes back (which had been failing during installation). After adding those hosts and restarting HDFS, the corrupt-block error disappeared without any further file deletion or HDFS re-formatting. Regards, Gerd
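A quick way to confirm that state after the datanodes rejoin (a sketch using the standard fsck tool):

```bash
# re-run fsck; the summary at the end should now report
# "The filesystem under path '/' is HEALTHY" with 0 corrupt blocks
sudo -u hdfs hdfs fsck / | tail -n 20
```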
01-21-2016 12:35 PM
Hi, during the installation of a cluster I hit some hardware issues, so I now have an (almost) running cluster, but with corrupt file blocks. The HDFS service is up and running in HA mode, but it is complaining about corrupt blocks:

```
FSCK started by hdfs (auth:SIMPLE) from /10.41.27.10 for path / at Thu Jan 21 13:22:00 CET 2016
..............
/hdp/apps/2.2.4.2-2/hive/hive.tar.gz: CORRUPT blockpool BP-1565025838-10.41.27.10-1452263064113 block blk_1073741862
/hdp/apps/2.2.4.2-2/hive/hive.tar.gz: MISSING 1 blocks of total size 83000677 B..
/hdp/apps/2.2.4.2-2/mapreduce/hadoop-streaming.jar: CORRUPT blockpool BP-1565025838-10.41.27.10-1452263064113 block blk_1073741863
/hdp/apps/2.2.4.2-2/mapreduce/hadoop-streaming.jar: MISSING 1 blocks of total size 104996 B..
/hdp/apps/2.2.4.2-2/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-1565025838-10.41.27.10-1452263064113 block blk_1073741827
/hdp/apps/2.2.4.2-2/mapreduce/mapreduce.tar.gz: CORRUPT blockpool BP-1565025838-10.41.27.10-1452263064113 block blk_1073741829
/hdp/apps/2.2.4.2-2/mapreduce/mapreduce.tar.gz: MISSING 2 blocks of total size 192697367 B..
/hdp/apps/2.2.4.2-2/pig/pig.tar.gz: CORRUPT blockpool BP-1565025838-10.41.27.10-1452263064113 block blk_1073741861
/hdp/apps/2.2.4.2-2/pig/pig.tar.gz: MISSING 1 blocks of total size 97548644 B..
/hdp/apps/2.2.4.2-2/tez/tez.tar.gz: CORRUPT blockpool BP-1565025838-10.41.27.10-1452263064113 block blk_1073741826
/hdp/apps/2.2.4.2-2/tez/tez.tar.gz: MISSING 1 blocks of total size 40658186 B..
/mr-history/done/2016/01/08/000000/job_1452263100546_0003-1452263260432-ambari%2Dqa-PigLatin%3ApigSmoke.sh-1452263277399-1-0-SUCCEEDED-default-1452263269870.jhist: CORRUPT blockpool BP-1565025838-10.41.27.10-1452263064113 block blk_1073742129
...
/user/ambari-qa/passwd: MISSING 1 blocks of total size 2637 B...
/user/ambari-qa/pigsmoke.out/part-v000-o000-r-00000: CORRUPT blockpool BP-1565025838-10.41.27.10-1452263064113 block blk_1073742141
/user/ambari-qa/pigsmoke.out/part-v000-o000-r-00000: MISSING 1 blocks of total size 358 B.
Status: CORRUPT
 Total size:    414892275 B
 Total dirs:    7291
 Total files:   38
 Total symlinks: 0
 Total blocks (validated): 35 (avg. block size 11854065 B)
  ********************************
  CORRUPT FILES:  23
  MISSING BLOCKS: 24
  MISSING SIZE:   414887859 B
  CORRUPT BLOCKS: 24
  ********************************
 Minimally replicated blocks: 11 (31.428572 %)
 Over-replicated blocks:      0 (0.0 %)
 Under-replicated blocks:     0 (0.0 %)
 Mis-replicated blocks:       0 (0.0 %)
 Default replication factor:  2
 Average block replication:   0.62857145
 Corrupt blocks:              24
 Missing replicas:            0 (0.0 %)
 Number of data-nodes:        4
 Number of racks:             1
FSCK ended at Thu Jan 21 13:22:00 CET 2016 in 157 milliseconds

The filesystem under path '/' is CORRUPT
```

What I want to do now is re-format HDFS to start with a blank filesystem, since this is a new installation and no data has been uploaded yet. How can I properly re-format HDFS to get rid of the corrupt blocks? I am hesitant to delete just the files fsck complains about: if I delete e.g. /hdp/apps/2.2.4.2-2/hive/hive.tar.gz, will it be re-deployed when the services restart, or how will those .gz and .jar files be provided afterwards?
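Since the cluster holds no user data yet, one commonly used alternative to a full re-format is to let fsck remove the corrupt files; a minimal sketch, assuming you run it as the hdfs superuser:

```bash
# list exactly which files have corrupt or missing blocks
sudo -u hdfs hdfs fsck / -list-corruptfileblocks

# delete the corrupt files (irreversible -- acceptable here only because
# no user data has been uploaded yet)
sudo -u hdfs hdfs fsck / -delete

# verify the filesystem reports HEALTHY afterwards
sudo -u hdfs hdfs fsck /
```

Whether the /hdp/apps tarballs (hive.tar.gz, mapreduce.tar.gz, ...) are re-uploaded automatically on service restart depends on the Ambari/HDP version, so that part of the question still stands.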
Labels:
- Apache Hadoop
01-19-2016 11:52 PM
Hi, just to extend @asinghal's answer, the whole solution was:
1. Add `export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_CLASSPATH` to the hadoop-env section in Ambari => HDFS => Configs (see the sketch below).
2. Restart HDFS (Ambari will flag all affected services for restart).
3. Restart Hive (this won't show up in Ambari, but Hive needs a restart to pick up the change from steps 1 and 2).
Regards....
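For reference, a sketch of what step 1 amounts to in the hadoop-env template:

```bash
# appended to hadoop-env (Ambari => HDFS => Configs); HBASE_CLASSPATH is
# assumed to be populated by the HBase client installed on the node --
# if it is empty, point it at the HBase lib directory first
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_CLASSPATH
```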
01-19-2016 12:01 PM
Hi, I am trying to access an HBase-backed Hive table via `select * from tbl_name`, but it seems some HBase jars are not in place. Any help highly appreciated 😉 Details:

```
0: jdbc:hive2://deala.corp:1> select * from tbl_name;
Error: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration (state=,code=0)
```

Regards, Gerd
PS: HDP 2.2.4.2, Ambari 2.1.2
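A quick sanity check (a sketch; the path assumes the standard HDP 2.2 layout) that the HBase client jars at least exist on the node, even though Hive evidently cannot see them:

```bash
# HBaseConfiguration lives in hbase-common; if this jar is missing, the
# problem is the HBase client install itself, not just the classpath
ls /usr/hdp/2.2.4.2-2/hbase/lib/hbase-common-*.jar
```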
Labels:
- Apache HBase
- Apache Hive
01-17-2016 09:05 PM
Hi @Neeraj Sabharwal, nope, unfortunately not 😉
01-17-2016 07:46 PM
And, in addition, the process itself also lists the hadoop.root.logger property as DEBUG,CLA. Is that setting applied from the file /usr/hdp/2.2.4.2-2/hadoop/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties, or from somewhere else?
01-17-2016 07:40 PM
Hello @Neeraj Sabharwal, thanks for your hint; that property is set to … I checked the process list and found that the processes use the file 'container-log4j.properties' as their log4j configuration. The only file with that name I could find is /usr/hdp/2.2.4.2-2/hadoop/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties ...and indeed, there is the property hadoop.root.logger=DEBUG,CLA. Do I have to edit that file directly, or is it just some kind of template?
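One way to tell whether that src copy is actually in play (a sketch; the jar path assumes a standard HDP layout, and the runtime copy is typically bundled inside the NodeManager jar, not read from the source tree):

```bash
# list every container-log4j.properties on the box; the copy under .../src/...
# is part of the shipped source tree, not a runtime config
find /usr/hdp/2.2.4.2-2 -name container-log4j.properties 2>/dev/null

# check whether the NodeManager jar carries the runtime copy
jar tf /usr/hdp/2.2.4.2-2/hadoop-yarn/hadoop-yarn-server-nodemanager*.jar | grep container-log4j
```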
01-17-2016 07:25 PM
1 Kudo
Hi, I am facing lots of "bad dir status" issues on the NodeManagers while running a huge job. The cause is container log files being flooded with [DEBUG] messages (in particular, the 'syslog' file grows very large). How can I change the log level of the container-specific logging to reduce the size of the syslog file? E.g.:

```
$ ls -alh
./container_e12_1453036276967_0004_01_000004:
total 7.4G
drwxr-s---  2 siad hadoop 4.0K Jan 17 18:39 .
drwxr-s--- 10 siad hadoop 4.0K Jan 17 20:21 ..
-rw-r-----  1 siad hadoop  222 Jan 17 18:47 stderr
-rw-r-----  1 siad hadoop    0 Jan 17 18:39 stdout
-rw-r-----  1 siad hadoop 7.4G Jan 17 18:47 syslog
```

In /etc/hadoop I cannot find any config setting containing the value 'debug', so where does this DEBUG output come from? Thanks, Gerd
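In case it helps others hitting the same wall: for MapReduce containers the log level is normally driven by job configuration rather than anything under /etc/hadoop. A sketch (my-job.jar and MyMain are hypothetical placeholders, and it assumes the job's main class uses ToolRunner so -D options are parsed; the three properties are standard Hadoop 2.x settings):

```bash
# submit with explicit per-container log levels instead of inheriting DEBUG
hadoop jar my-job.jar MyMain \
  -Dmapreduce.map.log.level=INFO \
  -Dmapreduce.reduce.log.level=INFO \
  -Dyarn.app.mapreduce.am.log.level=INFO
```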
Labels:
- Apache YARN