Member since: 08-02-2018
Posts: 46
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 8303 | 08-09-2018 03:05 AM
02-01-2019
10:58 AM
@Geoffrey Shelton Okot @Jean-François Vandemoortele Has this issue been resolved? If yes, can you please suggest the steps? Thanks in advance.
11-20-2018
02:45 PM
Thank you, it works for me.
11-20-2018
02:24 PM
If I want to give others access to the Resource Manager web UI (port 8088), do we need to add /etc/hosts entries on their laptops?
11-20-2018
11:18 AM
Hi everyone, I have created a 6-node cluster (3 masters and 3 workers) on AWS using private IPs and a VPN. No public IPs are assigned to the instances, so I can ping the instances by their private IPs only when I connect through the VPN. I have changed the hostnames and added all the private IPs and aliases to the /etc/hosts file. Ambari is on one of the master nodes, say "master01.abc.com", but when I try to access Ambari using "master01.abc.com:8080" it does not let me in. When I use the private IP instead of master01.abc.com, I can access it. My question is: how can I access Ambari through "master01.abc.com:8080"? Please help me resolve this. Thanks in advance.
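A minimal sketch of the client-side name resolution this setup depends on, assuming the laptop running the browser is on the same VPN: the /etc/hosts entries on the cluster nodes do not affect the client machine, so it needs matching entries of its own. The IP address below is a hypothetical placeholder.
# /etc/hosts on the machine running the browser -- the IP is a placeholder, use the node's real private IP
10.0.1.11   master01.abc.com   master01
# verify the name now resolves to the private IP reachable over the VPN
ping -c 1 master01.abc.com
# Ambari should then be reachable at http://master01.abc.com:8080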
Labels:
- Apache Ambari
10-25-2018
02:55 PM
Thank you so much @Akhil S Naik. I changed the mpack from HDF 3.1 to HDF 3.2, and my NiFi installation completed without upgrading HDP.
10-25-2018
10:13 AM
Thank you @Akhil S Naik. I have a question: I've upgraded to Ambari 2.7. 1) Do I need to upgrade HDP if I want to install HDF 3.2, or can I proceed with the HDF installation directly without upgrading HDP? Please let me know whether that is possible. Thanks in advance.
10-24-2018
07:54 PM
Hi everyone, I have a 4-node cluster with HDP installed. I am trying to install NiFi using Ambari and it is throwing an error. Steps followed: 1) I upgraded Ambari; the current version is Ambari 2.7, the HDP version is 2.6.1.0, and HDF is 3.1.2.0. I used the link below to upgrade Ambari: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/installing-hdf-on-hdp/content/hdf-upgrade-ambari-and-hdp.html 2) I installed the M-pack using ambari-server, and NiFi now appears in my Ambari add-services list. The problem comes when I try to add the NiFi service on one of the nodes in the cluster: it throws an error. I am attaching the error here.
std_err:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 231, in <module>
Master().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 56, in install
import params
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/params.py", line 284, in <module>
for host in config['clusterHostInfo']['zookeeper_hosts']:
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'zookeeper_hosts' was not found in configurations dictionary!
std_out:
2018-10-24 19:33:00,075 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-10-24 19:33:00,089 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-10-24 19:33:00,096 - Group['livy'] {}
2018-10-24 19:33:00,097 - Group['spark'] {}
2018-10-24 19:33:00,097 - Group['hdfs'] {}
2018-10-24 19:33:00,098 - Group['zeppelin'] {}
2018-10-24 19:33:00,098 - Group['hadoop'] {}
2018-10-24 19:33:00,099 - Group['nifi'] {}
2018-10-24 19:33:00,099 - Group['users'] {}
2018-10-24 19:33:00,099 - Group['knox'] {}
2018-10-24 19:33:00,101 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,108 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,110 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,112 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,117 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-10-24 19:33:00,119 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2018-10-24 19:33:00,121 - User['nifi'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['nifi'], 'uid': None}
2018-10-24 19:33:00,123 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2018-10-24 19:33:00,128 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2018-10-24 19:33:00,131 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-10-24 19:33:00,132 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,138 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2018-10-24 19:33:00,140 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,142 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,147 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,149 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,151 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None}
2018-10-24 19:33:00,153 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-10-24 19:33:00,158 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-24 19:33:00,160 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-10-24 19:33:00,167 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-10-24 19:33:00,168 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-10-24 19:33:00,169 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-24 19:33:00,171 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-10-24 19:33:00,172 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-10-24 19:33:00,187 - call returned (0, '1015')
2018-10-24 19:33:00,188 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-10-24 19:33:00,195 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] due to not_if
2018-10-24 19:33:00,195 - Group['hdfs'] {}
2018-10-24 19:33:00,196 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2018-10-24 19:33:00,197 - FS Type: HDFS
2018-10-24 19:33:00,197 - Directory['/etc/hadoop'] {'mode': 0755}
2018-10-24 19:33:00,236 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2018-10-24 19:33:00,239 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-10-24 19:33:00,272 - Repository['HDP-UTILS-2.6.1.0-129'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-2.6.1.0-129', 'mirror_list': ''}
2018-10-24 19:33:00,290 - File['/etc/yum.repos.d/HDP-2.6.1.0-129.repo'] {'content': '[HDP-UTILS-2.6.1.0-129]\nname=HDP-UTILS-2.6.1.0-129\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-10-24 19:33:00,291 - Writing File['/etc/yum.repos.d/HDP-2.6.1.0-129.repo'] because contents don't match
2018-10-24 19:33:00,291 - Repository['HDP-2.6.1.0-129'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.1.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-2.6.1.0-129', 'mirror_list': ''}
2018-10-24 19:33:00,301 - File['/etc/yum.repos.d/HDP-2.6.1.0-129.repo'] {'content': '[HDP-UTILS-2.6.1.0-129]\nname=HDP-UTILS-2.6.1.0-129\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6.1.0-129]\nname=HDP-2.6.1.0-129\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.1.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-10-24 19:33:00,301 - Writing File['/etc/yum.repos.d/HDP-2.6.1.0-129.repo'] because contents don't match
2018-10-24 19:33:00,302 - Repository['HDF-2.6.1.0-129'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/3.1.2.0', 'action': ['create'], 'components': [u'HDF', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-2.6.1.0-129', 'mirror_list': ''}
2018-10-24 19:33:00,311 - File['/etc/yum.repos.d/HDP-2.6.1.0-129.repo'] {'content': '[HDP-UTILS-2.6.1.0-129]\nname=HDP-UTILS-2.6.1.0-129\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6.1.0-129]\nname=HDP-2.6.1.0-129\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.1.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDF-2.6.1.0-129]\nname=HDF-2.6.1.0-129\nbaseurl=http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/3.1.2.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-10-24 19:33:00,311 - Writing File['/etc/yum.repos.d/HDP-2.6.1.0-129.repo'] because contents don't match
2018-10-24 19:33:00,312 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-10-24 19:33:00,530 - Skipping installation of existing package unzip
2018-10-24 19:33:00,534 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-10-24 19:33:00,558 - Skipping installation of existing package curl
2018-10-24 19:33:00,558 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-10-24 19:33:00,587 - Skipping installation of existing package hdp-select
2018-10-24 19:33:00,601 - The repository with version 2.6.1.0-129 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2018-10-24 19:33:00,621 - Skipping stack-select on NIFI because it does not exist in the stack-select package structure.
2018-10-24 19:33:01,101 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-10-24 19:33:01,151 - The repository with version 2.6.1.0-129 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2018-10-24 19:33:01,201 - Skipping stack-select on NIFI because it does not exist in the stack-select package structure.
Command failed after 1 tries
Please help me troubleshoot this issue. Thanks in advance.
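A hedged note on the failure: params.py is looking up config['clusterHostInfo']['zookeeper_hosts'], and that key is missing from the command Ambari sent, either because no ZooKeeper server is registered in the cluster or because the installed mpack expects a key name this Ambari version no longer provides (the 10-25-2018 post above indicates that moving from the HDF 3.1 mpack to the HDF 3.2 mpack resolved it). One way to confirm ZooKeeper itself is present is the Ambari REST API; the credentials, Ambari host, and cluster name below are placeholders, not values from this post:
# list the ZooKeeper server host components known to Ambari -- <ambari-host>, <cluster-name>, and admin:admin are placeholders
curl -u admin:admin 'http://<ambari-host>:8080/api/v1/clusters/<cluster-name>/services/ZOOKEEPER/components/ZOOKEEPER_SERVER?fields=host_components'
If this returns an error or an empty host_components list, the NiFi install will keep failing until a ZooKeeper server is available in the cluster.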
09-10-2018
05:35 AM
Hi, I am unable to connect from beeline to the Hive metastore. I am attaching the error that was thrown. Please help me resolve this issue. Thanks in advance.
[centos@e1 ~]$ beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://server1.abc.com:10000
Connecting to jdbc:hive2://server1.abc.com:10000
Enter username for jdbc:hive2://server1.abc.com:10000: hive
Enter password for jdbc:hive2://server1.abc.com:10000: ********
2018-09-10 12:26:25 INFO Utils:310 - Supplied authorities: server1.abc.com:10000
2018-09-10 12:26:25 INFO Utils:397 - Resolved authority: server1.abc.com:10000
2018-09-10 12:26:25 INFO HiveConnection:203 - Will try to open client transport with JDBC Uri: jdbc:hive2://server1.abc.com:10000
2018-09-10 12:26:25 ERROR HiveConnection:593 - Error opening session
org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:156)
at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:143)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:583)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:192)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:142)
at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:207)
at org.apache.hive.beeline.Commands.connect(Commands.java:1149)
at org.apache.hive.beeline.Commands.connect(Commands.java:1070)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:970)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:813)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:771)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:484)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:467)
Error: Could not establish connection to jdbc:hive2://server1.abc.com:10000: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default}) (state=08S01,code=0)
0: jdbc:hive2://server1.abc.com:10000 (closed)>
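A hedged sketch of one thing to try: the "Required field 'client_protocol' is unset" failure generally points to a Thrift protocol version mismatch between the beeline client (here the Spark-bundled 1.2.1.spark2 build) and the HiveServer2 it connects to. On an HDP node, the Hive-bundled beeline can be tried instead; the path below assumes a standard HDP client layout and reuses the connection details from the session above:
# use the HDP Hive client's beeline rather than the Spark one (path assumed from a standard HDP layout)
/usr/hdp/current/hive-client/bin/beeline -u jdbc:hive2://server1.abc.com:10000 -n hive
If that connects, the issue is the Spark beeline's client version rather than HiveServer2 itself.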
Labels:
- Apache Hive
- Apache Spark
08-19-2018
12:22 AM
Please tell me how to get the logs.
08-18-2018
09:41 AM
Can you please tell me the step-by-step procedure? Thanks in advance.