Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2245 | 12-06-2018 12:25 PM |
 | 2302 | 11-27-2018 06:00 PM |
 | 1792 | 11-22-2018 03:42 PM |
 | 2846 | 11-20-2018 02:00 PM |
 | 5166 | 11-19-2018 03:24 PM |
10-05-2018
02:27 PM
@Saravana V, it doesn't have any error logs. Can you check the file and share it if you see any errors?
10-05-2018
01:16 PM
@Saravana V, can you check the logs under the /var/log/hadoop/hdfs/ folder to see if there are any errors in the datanode logs? It would be great if you could attach the logs so we can investigate. -Aditya
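For example, a quick way to scan the datanode logs for recent problems (the hadoop-hdfs-datanode-*.log file name pattern is the usual HDP default; adjust it to match your host):

# List the latest errors/exceptions in the datanode logs
grep -iE 'error|exception' /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log | tail -n 50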
10-03-2018
05:51 PM
@Lakshmi Prathyusha, you can download the hadoop-aws jar, put it in the /usr/hdp/{hdp-version}/hadoop folder, and pass it while running the spark-shell command:

./spark-shell --master yarn --jars /usr/hdp/{hdp-version}/hadoop/hadoop-aws.jar ...

You can also try passing the --packages param to download the package at runtime, without downloading the jar beforehand. Example shown below:

./spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.3

Note: make sure to download all the dependent packages as well: https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws/2.7.3

Please "Accept" the answer if this helps.
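As a quick sanity check once the shell is up, you can try reading a file through the s3a connector. This is just a sketch: the bucket and path below are placeholders, and it assumes your AWS credentials are already configured for s3a:

scala> sc.textFile("s3a://my-bucket/some/file.txt").count()    // placeholder bucket/path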
10-03-2018
12:10 PM
1 Kudo
@Saravana V, can you try running the below command and check the output:

hdp-select status hadoop-hdfs-namenode

Also run the below command on both the namenode hosts and see if it passes:

yum reinstall -y hadoop*
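If you want to run the same checks on both namenode hosts in one pass, here is a minimal sketch (nn1 and nn2 are placeholder hostnames, and it assumes passwordless ssh as root):

for host in nn1 nn2; do    # placeholder namenode hostnames
  ssh root@$host 'hdp-select status hadoop-hdfs-namenode'
  ssh root@$host 'yum reinstall -y hadoop*'
done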
10-03-2018
10:16 AM
1 Kudo
@Saravana V, check the folders inside the /usr/hdp folder. If you have a folder named "2.6.1.0-129", try moving it to some other directory and retry the operation. Please "Accept" the answer if this helps 🙂
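For example (the backup location is arbitrary; any directory outside /usr/hdp works):

# Move the stale version directory out of the way before retrying
mv /usr/hdp/2.6.1.0-129 /tmp/hdp-2.6.1.0-129.bak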
10-03-2018
06:35 AM
@Amit, I suggest you start all the services from Ambari and not from the CLI. Starting them manually may change the permissions of the files depending on the user who starts the services. The Zookeeper data directory should be owned by the zookeeper user, and everything under /hadoop/zookeeper (except the myid file) should have zookeeper:hadoop ownership.
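A minimal sketch of fixing the ownership by hand, assuming /hadoop/zookeeper is your Zookeeper data directory:

# Restore the expected ownership on the Zookeeper data directory
chown -R zookeeper:hadoop /hadoop/zookeeper
ls -l /hadoop/zookeeper    # verify the result, including the myid file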
10-02-2018
04:02 AM
@subhash parise, in HDP 3.0, YARN has a new implementation of the Timeline Server called Timeline Server v2 (previously Timeline 1.5 was used). TS v2 uses HBase to store the information of all the applications. TS v2 is started either as a system service (which is a YARN application) or in embedded mode (running a standalone HBase server), depending on the resources. If the cluster has enough resources, it is started as a system service, i.e. a YARN application, which is true in your case. You can check the config (is_hbase_system_service) to see whether it is running as a YARN application or in standalone mode. You can read more about Timeline Server 2.0 here: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/data-operating-system/content/yarn_timeline_service_2.0_overview.html Please "Accept" if this helps 🙂
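When it runs as a system service, the backing HBase shows up as a YARN service named ats-hbase, so one quick check (run as a user allowed to query the service) is:

# Reports the state of the Timeline Service v2 backing HBase when it runs as a YARN service
yarn app -status ats-hbase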
10-02-2018
03:54 AM
1 Kudo
@Amit Samanta, it looks like your HBase got into an inconsistent state somehow and the namespace creation failed. You can check the logs under /var/log/hbase/hbase-master-xxx.log. You can try the below steps and see if this works.

For a non-kerberized environment:

# su hbase
# zookeeper-client -server {some-zookeeper-hostname}:2181
## rmr /hbase-unsecure    (run inside the zookeeper shell)
## quit

For a kerberized environment:

# kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase
# zookeeper-client -server {some-zookeeper-hostname}:2181
## rmr /hbase-secure    (run inside the zookeeper shell)
## quit

Restart HBase after performing the above steps. Please "Accept" the answer if this helps.
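Before and after clearing the znode, you can also ask HBase how inconsistent it actually is; hbck only reports by default and does not modify anything:

# Print a consistency report for HBase tables and regions
hbase hbck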
10-01-2018
01:21 PM
@Maryem Mary, you can use the MapReduce command-line tool to get the stats for each job. Use the below command to get all the stats of a job:

mapred job -status {job-id}

For example, to pull the aggregated CPU-time counter:

mapred job -status {job_id} | grep "CPU time"

If you have the YARN application id, replace the 'application_xxx' prefix with 'job_xxx' to get the MapReduce job id. If you have written the MapReduce application yourself, you can also add custom counters to print extra information; see the example in the link: https://acadgild.com/blog/counters-in-mapreduce Please "Accept" the answer if this helps.
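For example, a small sketch that pulls the CPU-time counter for several jobs at once (the job ids below are placeholders):

for jid in job_1538000000000_0001 job_1538000000000_0002; do    # placeholder job ids
  echo "$jid:"
  mapred job -status "$jid" | grep "CPU time"
done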
08-23-2018
08:46 AM
@Manikandan Jeyabal, yes, schema evolution is supported in ORC from Hive 2.1. Check the link below: https://www.slideshare.net/Hadoop_Summit/orc-file-optimizing-your-big-data -Aditya
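As a minimal illustration of the kind of evolution ORC supports, here is appending a column to an existing ORC table; the table and column names are made up for the example:

# Older ORC files are read back with NULL for the newly added column
hive -e "ALTER TABLE my_orc_table ADD COLUMNS (new_col STRING);"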