Member since: 02-08-2016
Posts: 793
Kudos Received: 669
Solutions: 85
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3141 | 06-30-2017 05:30 PM |
| | 4099 | 06-30-2017 02:57 PM |
| | 3404 | 05-30-2017 07:00 AM |
| | 3984 | 01-20-2017 10:18 AM |
| | 8629 | 01-11-2017 02:11 PM |
10-17-2016
11:40 AM
@Mourad Chahri Can you share the output of the commands below?
$ df -h
$ fdisk -l
From that output, let me know which disk the ambari-agent is on and which one is the new disk.
10-13-2016
07:46 AM
1 Kudo
@Mourad Chahri
1. Is your HDFS disk the same as the OS disk? If so, and the OS disk is on LVM, you can add the new disk and extend the OS volume.
2. If HDFS uses a separate disk (not the OS disk), you can simply mount the new disk on a filesystem path and update the HDFS configuration so that the new disk takes effect.
3. If the OS disk is not on LVM (per point 1), you can still mount the new disk on a filesystem path and add that disk to the HDFS configs by creating a "new config group".
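For points 2 and 3, the config change is typically to `dfs.datanode.data.dir` in hdfs-site.xml (or the equivalent field in Ambari). A minimal sketch, assuming the new disk is mounted at the hypothetical path /grid/1 and the existing data directory is /hadoop/hdfs/data:

```xml
<!-- hdfs-site.xml: append the new mount point to the DataNode data directories.
     /grid/1 is a hypothetical mount point chosen for this example. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data</value>
</property>
```

After updating the value, restart the affected DataNodes so the new directory is picked up and starts receiving blocks.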
10-12-2016
05:52 PM
@Hugo Schieck Ambari ships with an embedded PostgreSQL database, so it does not provide MySQL or other database packages in the Ambari repo. This is clearly mentioned in the link - https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-installation/content/set_up_the_ambari_server.html It seems the HDP-UTILS repo has MySQL packages for the Hive database. Let me know if that answers your question.
10-12-2016
04:05 PM
@Jasper Whenever you enable a Ranger plugin for the first time and restart the service (e.g. HDFS), it creates an HDFS repository named after the cluster in the Ranger Web UI. This repository contains the config parameters that indicate which cluster's HDFS service it connects to (in case there are multiple HDFS repositories in place). If your NameNode is later moved to a different machine, the policies will stop working, and you need to modify the configs on the HDFS repository page accordingly to get them working again. I see many reasons for this page; a few are: 1. It gives you the option to disable a repository if you don't want it any more. 2. If you have a Kerberized cluster, the default policies will not work; you need to adjust the repository settings properly to get the policies working. etc.
10-12-2016
12:49 PM
@Nitin Saraswat Can you log in to the VM and try the command below?
$ netstat -taupen | grep 8080
10-12-2016
12:00 PM
@Nitin Saraswat Are both machines running simultaneously? Can you try logging in to the Sandbox 2.5 VM, getting its IP using ifconfig, and accessing all the URLs using that IP?
10-11-2016
04:19 PM
You might need to enable Hadoop debug mode to get more visibility into the issue: set HADOOP_ROOT_LOGGER=DEBUG,console (note: `export hadoop.root.logger=DEBUG` is not valid shell, since dots are not allowed in variable names), then run the job from the CLI and test.
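A minimal sketch of that, assuming a standard HDP-era Hadoop install where the launcher scripts honor the HADOOP_ROOT_LOGGER environment variable (the commented `hadoop jar` line is a placeholder for your actual job):

```shell
# Raise client-side logging to DEBUG for this shell session only.
# HADOOP_ROOT_LOGGER overrides the log4j root logger used by Hadoop CLI processes.
export HADOOP_ROOT_LOGGER=DEBUG,console
echo "$HADOOP_ROOT_LOGGER"
# hadoop jar your-job.jar ...   # placeholder: re-run the failing job from the CLI
```

Since the variable is only exported for the current session, a plain re-login returns logging to its normal level.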
10-11-2016
01:05 PM
3 Kudos
@Frankie Bollaert Repositories for maintenance releases are available publicly, but patch-level repositories are available per subscription.
10-11-2016
12:35 PM
Also, what is the size of the data you are uploading to HDFS? I see the free HDFS space is approximately 450 MB.
10-11-2016
12:32 PM
Can you run:
$ cat /etc/hadoop/conf/dfs.exclude