Member since: 02-20-2015
Posts: 10
Kudos Received: 1
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 41368 | 02-22-2015 02:18 AM |
03-11-2015 02:35 AM
Hi Tarek, you can try running this; it should fix the issue:

    service cloudera-scm-agent restart
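If you want to double-check that it worked, a quick sketch (assuming the missing command was just the parcel alternatives links that the agent manages):

```bash
# Restart the Cloudera Manager agent on the affected host (run as root)
service cloudera-scm-agent restart

# Confirm the agent came back up
service cloudera-scm-agent status

# The client command should resolve again once the links are rebuilt
hadoop version
```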
03-01-2015 10:34 PM
1 Kudo
Hi Gautam, I think my issue is a filled-up root partition. After troubleshooting I found that I was originally using /dfs/dn for HDFS block storage; later I added a non-OS partition under /home (/home/hdfs/dfs/dn) and then started importing hundreds of GB of data. It looks like my old path /dfs/dn had also stored some HDFS blocks and filled the root partition. So if I now change the configuration to remove /dfs/dn from dfs.data.dir and restart the cluster, will it automatically move the data to the only remaining location, /home/hdfs/dfs/dn, or how should I handle that? I guess this would fix my problem for now. I am not too worried about the data; whatever is quickest and works best will be fine.

    [root@hadoop-vm2 /]# du -sh ./*
    7.9M  ./bin
    61M   ./boot
    4.0K  ./cgroup
    196K  ./dev
    40G   ./dfs
    30M   ./etc
    55G   ./home
    12K   ./impala
    263M  ./lib
    27M   ./lib64
    16K   ./lost+found
    4.0K  ./media
    0     ./misc
    4.0K  ./mnt
    0     ./net
    3.5G  ./opt
    du: cannot access `./proc/20676/task/20676/fd/4': No such file or directory
    du: cannot access `./proc/20676/task/20676/fdinfo/4': No such file or directory
    du: cannot access `./proc/20676/fd/4': No such file or directory
    du: cannot access `./proc/20676/fdinfo/4': No such file or directory
    0     ./proc
    92K   ./root
    15M   ./sbin
    4.0K  ./selinux
    4.0K  ./srv
    0     ./sys
    1.2M  ./tmp
    2.9G  ./usr
    387M  ./var
    223M  ./yarn
    [root@hadoop-vm2 /]# ls /dfs/dn/current/
    BP-1505211549-172.28.172.30-1424252944658  VERSION
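To be clear about what I was picturing, a rough sketch of moving the blocks by hand rather than waiting on re-replication (the hdfs:hadoop ownership is my assumption, and the DataNode role on this host would be stopped in Cloudera Manager first):

```bash
# Assumption: the DataNode role on this host is already stopped via Cloudera Manager.
# Copy the old block data into the new data directory, preserving ownership/permissions.
cp -a /dfs/dn/. /home/hdfs/dfs/dn/
chown -R hdfs:hadoop /home/hdfs/dfs/dn

# After removing /dfs/dn from dfs.data.dir in CM and restarting HDFS,
# check block health before deleting anything:
sudo -u hdfs hdfs fsck / | tail -n 5

# Only once fsck reports the filesystem is HEALTHY, reclaim the root partition:
rm -rf /dfs/dn
```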
03-01-2015 09:36 PM
Hi, I am facing a critical warning on the CDH Manager interface for the log directory: "This role's log directory is on a filesystem with less than 5.0 GiB of its space free. /var/log/hadoop-hdfs (free: 119.0 MiB (0.24%), capacity: 49.1 GiB)." On my system I can see that the root filesystem is full, but I do have space on the /home filesystem. I would like to know which property I need to change so the logs go to /home instead of the small root partition. I didn't find any link that addresses this issue; if you can just point me to the right info it would be really helpful.

    [root@hadoop-vm2 subdir0]# df -h
    Filesystem                        Size  Used Avail Use% Mounted on
    /dev/mapper/vg_hadoopvm2-lv_root   50G   47G  111M 100% /
    tmpfs                              15G  8.0K   15G   1% /dev/shm
    /dev/sda1                         477M   63M  389M  14% /boot
    /dev/mapper/vg_hadoopvm2-lv_home  742G   55G  650G   8% /home
    cm_processes                       15G  5.3M   15G   1% /var/run/cloudera-scm-agent/process
    [root@hadoop-vm2 subdir0]#
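In case it clarifies what I'm after, a sketch of what I imagine the fix looks like (the /home/log path is just my own choice, and the Cloudera Manager property name is from memory, so treat both as assumptions):

```bash
# Assumption: the HDFS roles on this host are stopped before moving anything.
mkdir -p /home/log/hadoop-hdfs
chown hdfs:hadoop /home/log/hadoop-hdfs

# Move the existing logs off the full root filesystem
mv /var/log/hadoop-hdfs/* /home/log/hadoop-hdfs/

# Then, in Cloudera Manager, point each HDFS role's "Log Directory"
# property (e.g. the DataNode Log Directory) at /home/log/hadoop-hdfs
# and restart the role.
```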
02-22-2015 02:18 AM
Thanks a lot, Gautam, it's working fine now after the restart. I can access the Hadoop filesystem from the command line on the NameNode machine now; many, many thanks for your help here! By the way, should I run this command on the DataNodes as well? I logged in to one of the DataNodes and it does not recognize hadoop there:

    [root@hadoopvm1 ~]# hadoop fs -ls /
    -bash: hadoop: command not found

Should I run service cloudera-scm-agent restart there too?
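For reference, this is roughly what I plan to run across the DataNodes (host names are placeholders from my cluster; assumes root SSH access to each host):

```bash
# Restart the agent on each DataNode host, then confirm hadoop resolves
for host in hadoopvm1 hadoop-vm2 hadoop-vm3; do
  ssh root@"$host" 'service cloudera-scm-agent restart && hadoop version'
done
```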
02-22-2015 02:09 AM
Yes, the file does exist, and it has the content below:

    [root@hadoop-vm3 ~]# cat /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop
    #!/bin/bash
    # Reference: http://stackoverflow.com/questions/59895/can-a-bash-script-tell-what-directory-its-stored-in
    SOURCE="${BASH_SOURCE[0]}"
    BIN_DIR="$( dirname "$SOURCE" )"
    while [ -h "$SOURCE" ]
    do
      SOURCE="$(readlink "$SOURCE")"
      [[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE"
      BIN_DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
    done
    BIN_DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
    LIB_DIR=$BIN_DIR/../lib

    # Autodetect JAVA_HOME if not defined
    . $LIB_DIR/bigtop-utils/bigtop-detect-javahome

    export HADOOP_LIBEXEC_DIR=//$LIB_DIR/hadoop/libexec
    exec $LIB_DIR/hadoop/bin/hadoop "$@"
    [root@hadoop-vm3 ~]#
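Since calling the parcel binary directly works (see my earlier post), here is a quick sketch of how I'm tracing the chain that should lead to it; my understanding of the alternatives layout is an assumption on my part:

```bash
# Where does the command on the PATH point, if anywhere?
ls -l /usr/bin/hadoop

# The alternatives link it should go through
ls -l /etc/alternatives/hadoop

# Follow the whole symlink chain in one shot
readlink -f /usr/bin/hadoop

# Call the parcel binary directly to rule out the binary itself
/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop version
```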
02-21-2015 06:24 PM
Hi Gautam, please find the content below; let me know what's next.

    [root@hadoop-vm3 alternatives]# pwd
    /var/lib/alternatives
    [root@hadoop-vm3 alternatives]# cat hadoop
    auto
    /usr/bin/hadoop

    /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop
    10
    [root@hadoop-vm3 alternatives]#
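If it's any use, the state file above looks like it could be re-registered by hand; a sketch (the priority 10 comes from the file itself, the exact alternatives invocation is my assumption):

```bash
# Re-create the hadoop alternative that the state file describes
alternatives --install /usr/bin/hadoop hadoop \
  /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop 10

# Verify what the alternatives system now thinks
alternatives --display hadoop
```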
02-21-2015 12:18 PM
Hi Gautam, thanks a lot for your reply; I need some more info to fix this. Can this be fixed from Cloudera Manager, or do I need to fix it on each machine by running some command? I checked /etc/alternatives/hadoop and it points to /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop, and in /var/lib/alternatives I also found that hadoop is there. Now I am not sure what to delete here. So my question is how to restore this for all the Hadoop commands, like hadoop, hive, etc. Just to give you some background on when this issue started: I got a configuration warning in Cloudera Manager referring to an outdated parcel; I fixed that warning and this issue started. So is there any way I can fix this from Cloudera Manager? Your help will be really appreciated.

One interesting thing I noticed is that when I give the complete path, it is able to list files:

    [root@hadoop-vm3 bin]# /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop fs -ls /
    Found 4 items
    drwxr-xr-x - hbase hbase         0 2015-02-20 05:41 /hbase
    drwxrwxr-x - solr  solr          0 2015-02-18 04:50 /solr
    drwxrwxrwt - hdfs  supergroup    0 2015-02-20 06:44 /tmp
    drwxr-xr-x - hdfs  supergroup    0 2015-02-20 00:54 /user

    [root@hadoop-vm3 bin]# ls /var/lib/alternatives/
    avro-tools       hadoop-httpfs-conf  hiveserver2        jre_openjdk          mapred       solr-conf      sqoop-create-hive-table  sqoop-version
    beeline          hadoop-kms-conf     hive-webhcat-conf  kite-dataset         mta          solrctl        sqoop-eval               statestored
    catalogd         hbase               hue-conf           libnssckbi.so.x86_64 oozie        spark-conf     sqoop-export             whirr
    cli_mt           hbase-conf          impala-conf        links                oozie-conf   spark-executor sqoop-help               yarn
    cli_st           hbase-indexer       impalad            llama                pig          spark-shell    sqoop-import             zookeeper-client
    flume-ng         hbase-solr-conf     impala-shell       llamaadmin           pig-conf     spark-submit   sqoop-import-all-tables  zookeeper-conf
    flume-ng-conf    hcat                ip6tables.x86_64   llama-conf           print        sqoop          sqoop-job                zookeeper-server
    hadoop           hdfs                iptables.x86_64    load_gen             pyspark      sqoop2         sqoop-list-databases     zookeeper-server-cleanup
    hadoop-0.20      hive                java               mahout               sentry       sqoop2-conf    sqoop-list-tables        zookeeper-server-initialize
    hadoop-conf      hive-conf           jre_1.6.0          mahout-conf          sentry-conf  sqoop-codegen  sqoop-merge
    hadoop-fuse-dfs  hive-hcatalog-conf  jre_1.7.0          mail                 senty-conf   sqoop-conf     sqoop-metastore
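One thing I'm tempted to try on each affected host is below; note that the idea that the agent rebuilds these links on restart is purely my assumption, so I'd back everything up first:

```bash
# Back up the alternatives state before touching anything
cp -a /var/lib/alternatives/hadoop /root/hadoop.alternatives.bak

# Remove the (possibly stale) hadoop entry and its links
rm -f /var/lib/alternatives/hadoop /etc/alternatives/hadoop /usr/bin/hadoop

# Restart the agent, which manages the parcel alternatives
service cloudera-scm-agent restart

# Check whether the command resolves again
hadoop version
```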
02-20-2015 10:31 AM
Hi team, I am facing a very strange issue with the latest Cloudera installation. I am able to view the HDFS directories from the web interface, but when I run a simple hadoop fs -ls in a PuTTY shell, it says -bash: hadoop: command not found, even though I can see all the HDFS files from the web interface. Can you please help?

    [root@hadoop-vm3 log]# hadoop fs -ls /
    -bash: hadoop: command not found

Version: Cloudera Express 5.3.1 (#191 built by jenkins on 20150123-2020 git: b0377087cf605a686591e659eb14078923bc3c83)
Server Time: Feb 20, 2015 1:29:11 PM, Eastern Standard Time (EST)

Browsing /user/hive/warehouse/cdr_test_demo_self_partition from the web interface shows:

    Permission Owner Group Size Replication Block Size Name
    drwxrwxrwt admin hive  0 B  0           0 B        .hive-staging_hive_2015-02-20_04-40-09_720_8287848305105515146-1
    drwxrwxrwt root  hive  0 B  0           0 B        ttime=2015-02-20
    drwxrwxrwt root  hive  0 B  0           0 B        ttime=2015-02-21
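For anyone hitting the same symptom, these are the quick checks I'd start with (just a sketch; only the parcel directory is specific to my cluster):

```bash
# Is anything named hadoop on the PATH at all?
type hadoop

# Is the alternatives-managed symlink present?
ls -l /usr/bin/hadoop /etc/alternatives/hadoop

# Is the CDH parcel activated on this host?
ls /opt/cloudera/parcels/

# What is the shell actually searching?
echo "$PATH"
```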
Labels:
- Apache Hadoop
- Apache Hive
- HDFS