Member since: 11-03-2017
Posts: 94
Kudos Received: 13
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5066 | 04-11-2018 09:48 AM
 | 1821 | 12-08-2017 01:07 PM
 | 2317 | 12-01-2017 04:23 PM
 | 11511 | 11-06-2017 04:08 PM
01-05-2018
03:11 PM
@Jay Kumar SenSharma Regarding reinstalling the HBase service: how can I remove the HBase service from the Ambari UI, and how can I reinstall it reliably afterwards? Thanks
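For reference, a minimal sketch of doing this outside the UI via the Ambari REST API (AMBARI_HOST, CLUSTER_NAME, and the admin:admin credentials are placeholders, not values from this thread); the service must be stopped before it can be deleted, and HBase can then be reinstalled from the Ambari UI via Actions --> Add Service:
# Stop the HBASE service (state INSTALLED means "installed but stopped"):
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop HBASE"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HBASE
# Delete the service definition from the cluster:
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HBASE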
12-11-2017
01:39 AM
1 Kudo
The Secondary Namenode is one of the most poorly named components in Hadoop. Its name suggests that it is a backup for the Namenode, but in reality it is not. Many Hadoop beginners are confused about what exactly the SecondaryNamenode does and why it is present in HDFS. This article explains exactly how the SecondaryNamenode works: http://blog.madhukaraphatak.com/secondary-namenode---what-it-really-do/ Put very simply: the Standby Namenode is like streaming replication with integrated failover, while the SecondaryNamenode is like timed log shipping.
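As a rough illustration of that "timed log shipping" behavior: checkpointing by the SecondaryNamenode is driven by two standard hdfs-site.xml properties, and can be inspected or forced from the command line (values shown are the stock Hadoop defaults):
# dfs.namenode.checkpoint.period = 3600    (seconds between checkpoints)
# dfs.namenode.checkpoint.txns   = 1000000 (uncheckpointed txns that force one)
# Show the size of the edit log awaiting the next checkpoint:
hdfs secondarynamenode -geteditsize
# Trigger a checkpoint immediately instead of waiting for the timer:
hdfs secondarynamenode -checkpoint force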
12-08-2017
01:07 PM
The solution is two commands:
sudo chown -R zeppelin:zeppelin /var/log/zeppelin/
sudo chown -R zeppelin:zeppelin /home/zeppelin
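A quick sanity check (not part of the original fix) to confirm the ownership change took effect before restarting Zeppelin:
# Both directories should now show zeppelin:zeppelin as owner:group:
ls -ld /var/log/zeppelin /home/zeppelin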
12-01-2017
04:23 PM
1 Kudo
I fixed the problem by reinstalling the Ambari agent manually:
yum remove ambari-agent
yum install ambari-agent
I think Ambari could not remove and reinstall the agent itself because it was not installed correctly the first time, which is why doing it manually works.
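As a follow-up sketch (beyond what was in the original fix), after a manual reinstall the agent's server hostname should be re-checked and the agent started again:
# Verify the [server] hostname entry still points at the Ambari server:
grep -A1 '\[server\]' /etc/ambari-agent/conf/ambari-agent.ini
# Start the agent and confirm it is running:
ambari-agent start
ambari-agent status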
11-16-2018
06:36 AM
Use vi /etc/ambari-server/conf/ambari.properties, add the entry, then press Esc and type :wq to save and quit (or :q! to quit without saving).
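A non-interactive sketch of the same edit, for scripted setups (some.property=value is a placeholder, not a real Ambari property):
# Append the entry without opening an editor:
echo 'some.property=value' >> /etc/ambari-server/conf/ambari.properties
# Restart Ambari server so the new property is picked up:
ambari-server restart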
11-07-2017
01:43 PM
@yassine sihi Have you installed all the clients on your edge node? (That should push the required files to the edge node.) Ambari UI --> Hosts (tab) --> click the edge node link --> on the host page, click the "Install Clients" button.
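For reference, the same "Install Clients" action can typically be performed per component through the Ambari REST API; a sketch with placeholder names (AMBARI_HOST, CLUSTER_NAME, EDGE_NODE_FQDN) and HDFS_CLIENT as the example client component:
# 1) Register the client component on the edge node:
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/EDGE_NODE_FQDN/host_components/HDFS_CLIENT
# 2) Transition it to INSTALLED, which triggers the actual install:
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles":{"state":"INSTALLED"}}' http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/EDGE_NODE_FQDN/host_components/HDFS_CLIENT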
11-06-2017
10:57 PM
Thank you @Sonu Sahi for replying. The issue has been fixed another way, and it works fine with both services on the same machine.
11-06-2017
04:20 PM
@yassine sihi Thanks for sharing the solution. Check the password file first:
[root@sandbox ~]# cat /etc/ambari-server/conf/password.dat
bigdata
02-15-2018
06:06 AM
I had the same issue on RHEL 7, with the error below:
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_3_0_235-hdfs' returned 1.
Error: Package: hadoop_2_6_3_0_235-hdfs-2.7.3.2.6.3.0-235.x86_64 (HDP-2.6-repo-101)
Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
Solution: make sure the Red Hat Enterprise Linux Server 7 Optional (RPMs) repository is enabled on all nodes.
Check whether it is enabled or disabled:
# yum repolist all
!rhui-REGION-rhel-server-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional (RPMs) disabled
Enable the optional RPMs repository:
# yum-config-manager --enable rhui-REGION-rhel-server-optional
Cross-verify with the first command that the optional repository is now enabled:
# yum repolist all
!rhui-REGION-rhel-server-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional (RPMs) enabled: 13,201
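Once the repository is enabled, a quick confirmation that the missing dependency now resolves, reusing the package names from the error above:
# The dependency should now install cleanly from the optional repo:
yum install -y libtirpc-devel
# Then retry the package that originally failed:
yum install -y hadoop_2_6_3_0_235-hdfs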
01-04-2018
05:18 PM
Make sure this error was not caused by the tables Hive creates in your MySQL database. Check whether you see something that looks like this error:
Error: Index column size too large. The maximum column size is 767 bytes. (state=HY000,code=1709)
or just:
The maximum column size is 767 bytes
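If that is the error, one widely used workaround is to switch the Hive metastore database to a single-byte character set, since utf8 at 3 bytes per character overflows MySQL's 767-byte index limit; a sketch assuming the metastore database is named hive:
# 'hive' is an assumed metastore database name -- substitute your own:
mysql -u root -p -e "ALTER DATABASE hive CHARACTER SET latin1;"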