Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2438 | 04-27-2020 03:48 AM |
 | 4866 | 04-26-2020 06:18 PM |
 | 3972 | 04-26-2020 06:05 PM |
 | 3209 | 04-13-2020 08:53 PM |
 | 4904 | 03-31-2020 02:10 AM |
01-29-2017
01:57 PM
@doron zukerman You can remove "--privileged" if you do not intend to use Kerberos. By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container, because by default a container is not allowed to access any devices; a "privileged" container is given access to all devices. Please see: https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities
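To make the difference concrete, here is a small sketch that builds the two variants of the `docker run` line. The image and container name `sandbox` are illustrative placeholders, not from the original post; actually running either command requires a Docker daemon.

```shell
# Print the docker invocation with or without --privileged.
# Pass "kerberos" to include the flag (needed for device access).
run_cmd() {
  if [ "$1" = "kerberos" ]; then
    echo "docker run --privileged -d --name sandbox sandbox:latest"
  else
    echo "docker run -d --name sandbox sandbox:latest"
  fi
}
run_cmd kerberos
run_cmd plain
```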
01-29-2017
03:51 AM
@Sunile Manjee Yes, Postgres is one of the dependencies for Ambari Server that should come from the OS repositories. The following GitHub file lists all the dependencies the Ambari installation expects from the OS repos (for RHEL/CentOS, SUSE, and Debian): https://github.com/apache/ambari/blob/release-2.4.2/ambari-server/src/main/package/dependencies.properties Example:
rpm.dependency.list=postgresql-server >= 8.1,\nRequires: openssl,\nRequires: python >= 2.6
rpm.dependency.list.suse=postgresql-server >= 8.1,\nRequires: openssl,\nRequires: python-xml,\nRequires: python >= 2.6
deb.dependency.list=openssl, postgresql (>= 8.1), python (>= 2.6), curl
So for a proper ambari-server installation, make sure the above packages are either already installed on the Ambari host or downloadable from the operating system repository during installation.
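As a quick way to see which of those packages still need to come from an OS repo, here is a minimal sketch. The `have` variable simulates the installed set so the snippet runs anywhere; on a real RHEL/CentOS host you would replace the case-check with `rpm -q "$p" >/dev/null`.

```shell
# Hedged sketch: find which required packages are missing.
required="postgresql-server openssl python"
have="openssl python"        # simulated installed set (illustrative)
missing=""
for p in $required; do
  case " $have " in
    *" $p "*) ;;                       # already present
    *) missing="$missing $p" ;;        # must come from an OS repo
  esac
done
echo "missing:$missing"
```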
01-27-2017
03:23 AM
@Ganesh Raju Before building Ambari Metrics, did you set the correct version for it, as described in https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Development ?
mvn versions:set -DnewVersion=${AMBARI_VERSION}
Also, you can look at the following repository to find the available versions: http://nexus-private.hortonworks.com/nexus/content/groups/public/org/apache/ambari/ambari-metrics-storm-sink-legacy/
01-26-2017
03:52 PM
@Volodymyr Ostapiv You may want to refer to the following article, which describes a feature added in Ambari 2.4 for dynamic auto-recovery; it allows the auto-start properties to be configured without an ambari-agent / ambari-server restart: https://community.hortonworks.com/content/kbentry/71748/how-do-i-enable-automatic-restart-recovery-of-serv.html Example:
curl -u admin:<password> -H "X-Requested-By: ambari" -X PUT 'http://localhost:8080/api/v1/clusters/<cluster_name>/components?ServiceComponentInfo/component_name.in(HBASE_REGIONSERVER)' -d '{"ServiceComponentInfo" : {"recovery_enabled":"true"}}'
In the above curl call you can substitute your own service component; this enables auto-start in case of a host reboot. For your custom service you can also customize the scripts under "package/scripts" to implement the start behavior you want. For reference, the "metainfo.xml" of AMS sets 'recovery_enabled' like this (snippet):
<component>
<name>METRICS_COLLECTOR</name>
<displayName>Metrics Collector</displayName>
<category>MASTER</category>
<recovery_enabled>true</recovery_enabled>
</component>
More information about this feature for Ambari 2.4.x (and prior versions) can be found at: https://cwiki.apache.org/confluence/display/AMBARI/Recovery%3A+auto+start+components
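For reuse with other services, the endpoint from the curl call above can be parameterized on the component name. This is only a URL-building sketch; the host, port, and `<cluster_name>` placeholder are illustrative and a running Ambari server is needed to actually issue the request.

```shell
# Build the Ambari REST endpoint used above for an arbitrary component.
build_url() {
  component="$1"
  echo "http://localhost:8080/api/v1/clusters/<cluster_name>/components?ServiceComponentInfo/component_name.in(${component})"
}
build_url DATANODE
# The PUT body stays the same for any component:
#   -d '{"ServiceComponentInfo" : {"recovery_enabled":"true"}}'
```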
01-25-2017
07:20 AM
@Punit kumar
Please run the following commands to verify the ownership of the HDFS root directory "/":
# su - hdfs
# hdfs dfs -stat "%u %g" /
hdfs hdfs
It should be owned by "hdfs:hdfs"; if it is not, you can fix it by running:
# hdfs dfs -chown hdfs:hdfs /
It is Hue code that explicitly checks for this default ownership: https://github.com/cloudera/hue/blob/9916b2b6389323ad8139d709e248156a7cf943f5/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py#L914-L916 And the default superuser is defined in "hue.ini" as follows:
default_hdfs_superuser=hdfs
https://github.com/cloudera/hue/blob/ef8812cb2033b628d90f5bffacc4ce7c27374e45/desktop/conf.dist/hue.ini#L62-L63
01-24-2017
06:29 PM
Wonderful.
01-24-2017
02:21 PM
@Baruch AMOUSSOU DJANGBAN The repo file contents you posted are not right; even the base URL is wrong. Where did you download them from?
According to [1], the content of "hdp.repo" should look like the following; see [2] for the list of repos for the different operating systems. #VERSION_NUMBER=2.5.3.0-37
[HDP-2.5.3.0]
name=HDP Version - HDP-2.5.3.0
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS Version - HDP-UTILS-1.1.0.21
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
Since you have mentioned that you do not have internet connectivity, you will need to configure a local offline repository as described in my previous comment. Please see [3].
[1] http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/hdp.repo
[2] https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/hdp_25_repositories.html
[3] https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-installation/content/setting_up_a_local_repository_with_no_internet_access.html
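A quick way to sanity-check a downloaded .repo file is to extract its baseurl and compare it against the expected one from [1]. The heredoc below recreates a fragment of the repo file so the sketch is self-contained; on a connected host you could also test the URL with curl, as noted in the comment.

```shell
# Hedged sketch: extract and inspect the baseurl of a .repo file.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[HDP-2.5.3.0]
name=HDP Version - HDP-2.5.3.0
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0
gpgcheck=1
EOF
grep -E '^baseurl=' "$repo" | cut -d= -f2-
# On a connected host, verify reachability (needs network, hence commented):
#   curl -sI "$(grep -E '^baseurl=' "$repo" | cut -d= -f2-)" | head -1
```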
01-24-2017
01:16 PM
@Baruch AMOUSSOU DJANGBAN
As you do not have internet connectivity, you should configure an HDP offline local repository. You can find more detailed information at: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-installation/content/setting_up_a_local_repository.html and https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-installation/content/setting_up_a_local_repository_with_no_internet_access.html
01-24-2017
01:10 PM
@Baruch AMOUSSOU DJANGBAN What is the content of "HDP-2.5.3.0.repo" and "HDP.repo"? There should not normally be duplicate repo files.
01-24-2017
12:26 PM
@Punit kumar Regarding your latest error:
java.io.IOException: Incompatible clusterIDs in /mnt/disk1/hadoop/hdfs/data: namenode clusterID = CID-297a140f-7cd6-4c73-afc8-bd0a7d01c0ee; datanode clusterID = CID-7591e6bd-ce9b-4b14-910c-c9603892a0f1
It looks like the VERSION files on the NameNode and DataNode contain different cluster IDs, which needs to be corrected. Please check:
cat <dfs.namenode.name.dir>/current/VERSION
cat <dfs.datanode.data.dir>/current/VERSION
Then copy the clusterID from the NameNode and put it in the VERSION file of the DataNode, and try again. Please refer to: http://www.dedunu.info/2015/05/how-to-fix-incompatible-clusterids-in.html
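The compare-and-copy step can be sketched as follows. The temp directories and one-line VERSION files are illustrative stand-ins for the real `current/VERSION` files (which contain more fields); the cluster IDs are the ones from the error above.

```shell
# Hedged sketch: detect a clusterID mismatch and copy the NameNode's ID
# into the DataNode's VERSION file.
nn_dir=$(mktemp -d)   # stand-in for <dfs.namenode.name.dir>/current
dn_dir=$(mktemp -d)   # stand-in for <dfs.datanode.data.dir>/current
printf 'clusterID=CID-297a140f-7cd6-4c73-afc8-bd0a7d01c0ee\n' > "$nn_dir/VERSION"
printf 'clusterID=CID-7591e6bd-ce9b-4b14-910c-c9603892a0f1\n' > "$dn_dir/VERSION"
nn_id=$(grep '^clusterID=' "$nn_dir/VERSION" | cut -d= -f2-)
dn_id=$(grep '^clusterID=' "$dn_dir/VERSION" | cut -d= -f2-)
if [ "$nn_id" != "$dn_id" ]; then
  echo "mismatch: copying NameNode clusterID into DataNode VERSION"
  # Real VERSION files have several lines; here the demo file holds only one,
  # so a plain overwrite stands in for editing the clusterID line.
  printf 'clusterID=%s\n' "$nn_id" > "$dn_dir/VERSION"
fi
```

After the fix, restarting the DataNode should let it register with the NameNode again.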