Member since: 12-12-2015
Posts: 14
Kudos Received: 6
Solutions: 0
01-06-2017
07:15 PM
Some people, instead of helping, only confuse things. This is a straight answer, right to the point. Congratulations and thank you.
12-29-2016
09:17 PM
1 Kudo
Renew the Knox Gateway SSL certificate by following this link: http://www-01.ibm.com/support/docview.wss?uid=swg21987527
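A minimal sketch of the usual renewal flow, assuming the default HDP layout (/usr/hdp/current/knox-server) and the Knox data directory that appears later in this thread (/var/lib/knox/data); the exact paths and the create-cert step are assumptions, so treat the linked document as authoritative:
# Stop the Knox gateway (e.g., from Ambari), then remove the expired identity keystore.
rm /var/lib/knox/data/security/keystores/gateway.jks
# Regenerate a self-signed identity certificate for this host, running as the knox user.
su - knox -c '/usr/hdp/current/knox-server/bin/knoxcli.sh create-cert --hostname $(hostname -f)'
# Start the Knox gateway again and re-import the new certificate wherever clients trust it.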
04-14-2016
03:31 AM
1 Kudo
Dear @ajaysingh, indeed, Hue is not supported on either CentOS 7 or RHEL 7 in HDP 2.3. However, if you want to install Hue on HDP 2.3, you must first remove the Hue 2.6 packages:
yum list hue*
yum remove hue*
After that, follow the procedure in "Installing Hue 3.9 on HDP 2.3 – Amazon EC2 RHEL 7": http://gethue.com/hadoop-hue-3-on-hdp-installation-tutorial. It worked for me on a 4-node HDP 2.3 cluster on Oracle Linux 7 on Azure, without Kerberos. Recommendations: 1) Complement the steps in the above link with https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/ch_installing_hue_chapter.html 2) Create the hue Linux user before installing Hue 3.9, for example:
id hue
uid=1023(hue) gid=54324(hadoop) groups=54324(hadoop),100(users),1023(hue)
3) Create the HDFS directories /user/admin (it must be created for the Ambari Hive View) and /user/hue:
hadoop fs -mkdir /user/hue
hadoop fs -chown hue:hadoop /user/hue
Best regards, JAG. Good luck!
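A minimal sketch combining steps 2) and 3), assuming the hadoop and users groups already exist and reusing the example uid/gid above (adjust to your environment; the HDFS commands run as the hdfs superuser):
# Create the hue group and user to match the example id output above.
groupadd -g 1023 hue
useradd -u 1023 -g hadoop -G users,hue hue
# Create the HDFS home directories needed by Hue and the Ambari Hive View.
sudo -u hdfs hadoop fs -mkdir -p /user/hue /user/admin
sudo -u hdfs hadoop fs -chown hue:hadoop /user/hue
sudo -u hdfs hadoop fs -chown admin:hadoop /user/admin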
04-06-2016
03:49 PM
Hi @Predrag Minovic, it's an elegant solution to hide the in-flight temporary files from Hive by using a dot (.) as the value of the hdfs.inUsePrefix attribute. This also solves some problems where running DDL to refresh a view or recreate tables ends in a file-not-found exception. Thank you very much! Best regards, JOSE GUILLEN
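For reference, a minimal sketch of that setting in the Flume agent configuration, where agent1 and hdfs-sink are placeholder names; files whose names start with a dot are skipped by Hive/MapReduce input scanning, so queries no longer see half-written files:
# Hide files that Flume is still writing from Hive by dot-prefixing them.
agent1.sinks.hdfs-sink.hdfs.inUsePrefix = .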
04-05-2016
11:33 PM
1 Kudo
Dear Hortonworks Community, I created an external table to process data coming from the Twitter Streaming API using Flume, with the following script:
ADD JAR /usr/hdp/2.3.2.0-2950/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;
USE dbtwitter;
DROP TABLE IF EXISTS dbtwitter.tweets_raw;
CREATE EXTERNAL TABLE IF NOT EXISTS dbtwitter.tweets_raw (
contributors string,
coordinates string,
...
)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/flume/twitter/landing/bod';
Everything was OK while only a few tweets were streaming. Once I had more than approximately 40,000 tweets (even fewer), the queries and every DDL command began to fail with Java exceptions. For example, when I ran SELECT count(*) FROM tweets_raw I got the desired results, but when I ran SELECT * FROM tweets_raw it began listing rows and then, after showing a lot of lines, failed with an exception similar to this:
java.io.IOException:java.io.FileNotFoundException: File does not exist: /user/flume/twitter/landing/bod/FlumeData.1459892472851.tmp
It seems that Hive is not able to handle the files that are still landing in the LOCATION while I'm executing queries or DDL commands. Please, any help would be appreciated. Best regards, JOSE GUILLEN
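For anyone hitting the same symptom, a quick diagnostic sketch (the path is the table LOCATION above): Flume's in-flight files carry a .tmp suffix by default and can vanish between query planning and execution, which is exactly what the FileNotFoundException points at.
# List the in-flight Flume files that Hive may trip over mid-query.
hadoop fs -ls /user/flume/twitter/landing/bod | grep '\.tmp$'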
Labels:
- Apache Flume
- Apache Hive
03-22-2016
05:10 PM
As Cedric Colpaert points out in his answer (the best one, in my view), you must change the params_linux.py file in both the agent and server directories:
/var/lib/ambari-agent/cache/common-services/KNOX/0.5.0.2.2/package/scripts
/var/lib/ambari-server/resources/common-services/KNOX/0.5.0.2.2/package/scripts
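A minimal sketch of the mechanics only, since the concrete change is described in Cedric Colpaert's answer: the agent-side copy is a cache populated from the server, so apply the same edit to both files and restart the agent to be sure the cached scripts are re-read.
# Apply the same fix to both copies of params_linux.py, then restart the agent.
vi /var/lib/ambari-server/resources/common-services/KNOX/0.5.0.2.2/package/scripts/params_linux.py
vi /var/lib/ambari-agent/cache/common-services/KNOX/0.5.0.2.2/package/scripts/params_linux.py
ambari-agent restart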
12-23-2015
08:05 PM
1 Kudo
Thanks @Kevin Minder, but I didn't install HDP 2.2; I have installed HDP-2.3.2.0-2950, the latest version available as far as I know. So, if the problem persists in this version (2.3), according to you the only solution is to downgrade the JDK?
12-23-2015
02:19 PM
1 Kudo
Thanks @Neeraj Sabharwal, but I don't understand what you proposed. When I installed the cluster, everything was OK, including Knox. I stopped all the services through Ambari, restarted the server node, and started all services back up; everything was OK except Knox. @Kevin Minder
12-23-2015
12:23 AM
1 Kudo
I restarted a one-node cluster with Ambari. All services started successfully except the Knox Gateway. Environment: Oracle Linux 7, Apache Ambari Version 2.1.1.
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/KNOX/0.5.0.2.2/package/scripts/knox_gateway.py", line 267, in <module>
KnoxGateway().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 218, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/KNOX/0.5.0.2.2/package/scripts/knox_gateway.py", line 146, in start
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/KNOX/0.5.0.2.2/package/scripts/knox_gateway.py", line 63, in configure
knox()
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/KNOX/0.5.0.2.2/package/scripts/knox.py", line 125, in knox
not_if=master_secret_exist,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 258, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/hdp/current/knox-server/bin/knoxcli.sh create-master --master [PROTECTED]' returned 1. Master secret is already present on disk. Please be aware that overwriting it will require updating other security artifacts. Use --force to overwrite the existing master secret.
ERROR: Invalid Command
Unrecognized option:create-master
A fatal exception has occurred. Program will exit.
stdout:
2015-12-22 18:46:33,793 - Directory['/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/'] {'recursive': True}
2015-12-22 18:46:33,800 - File['/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jce_policy-8.zip'] {'content': DownloadSource('http://BODTEST02.bod.com.ve:8080/resources//jce_policy-8.zip')}
2015-12-22 18:46:33,801 - Not downloading the file from http://BODTEST02.bod.com.ve:8080/resources//jce_policy-8.zip, because /var/lib/ambari-agent/data/tmp/jce_policy-8.zip already exists
2015-12-22 18:46:33,801 - Group['spark'] {'ignore_failures': False}
2015-12-22 18:46:33,801 - Group['hadoop'] {'ignore_failures': False}
2015-12-22 18:46:33,802 - Group['users'] {'ignore_failures': False}
2015-12-22 18:46:33,802 - Group['knox'] {'ignore_failures': False}
2015-12-22 18:46:33,802 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,803 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,803 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,804 - User['oozie'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-12-22 18:46:33,810 - User['atlas'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,810 - User['ams'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,811 - User['falcon'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-12-22 18:46:33,811 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-12-22 18:46:33,812 - User['accumulo'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,812 - User['mahout'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,813 - User['spark'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,813 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-12-22 18:46:33,814 - User['flume'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,820 - User['kafka'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,821 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,821 - User['sqoop'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,822 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,822 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,823 - User['hbase'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,823 - User['knox'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,824 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-12-22 18:46:33,830 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-12-22 18:46:33,831 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2015-12-22 18:46:33,901 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2015-12-22 18:46:33,901 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2015-12-22 18:46:33,902 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-12-22 18:46:33,903 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2015-12-22 18:46:33,913 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2015-12-22 18:46:33,913 - Group['hdfs'] {'ignore_failures': False}
2015-12-22 18:46:33,913 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', u'hdfs']}
2015-12-22 18:46:33,914 - Directory['/etc/hadoop'] {'mode': 0755}
2015-12-22 18:46:33,939 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2015-12-22 18:46:33,969 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2015-12-22 18:46:34,011 - Skipping Execute[('setenforce', '0')] due to not_if
2015-12-22 18:46:34,011 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2015-12-22 18:46:34,013 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2015-12-22 18:46:34,013 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2015-12-22 18:46:34,017 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2015-12-22 18:46:34,026 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2015-12-22 18:46:34,026 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-12-22 18:46:34,045 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-12-22 18:46:34,045 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2015-12-22 18:46:34,046 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2015-12-22 18:46:34,056 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2015-12-22 18:46:34,075 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2015-12-22 18:46:34,459 - Directory['/var/lib/knox/data'] {'owner': 'knox', 'group': 'knox', 'recursive': True}
2015-12-22 18:46:34,470 - Directory['/var/log/knox'] {'owner': 'knox', 'group': 'knox', 'recursive': True}
2015-12-22 18:46:34,470 - Directory['/var/run/knox'] {'owner': 'knox', 'group': 'knox', 'recursive': True}
2015-12-22 18:46:34,480 - Directory['/usr/hdp/current/knox-server/conf'] {'owner': 'knox', 'group': 'knox', 'recursive': True}
2015-12-22 18:46:34,488 - Directory['/usr/hdp/current/knox-server/conf/topologies'] {'owner': 'knox', 'group': 'knox', 'recursive': True}
2015-12-22 18:46:34,488 - XmlConfig['gateway-site.xml'] {'owner': 'knox', 'group': 'knox', 'conf_dir': '/usr/hdp/current/knox-server/conf', 'configuration_attributes': {}, 'configurations': ...}
2015-12-22 18:46:34,538 - Generating config: /usr/hdp/current/knox-server/conf/gateway-site.xml
2015-12-22 18:46:34,538 - File['/usr/hdp/current/knox-server/conf/gateway-site.xml'] {'owner': 'knox', 'content': InlineTemplate(...), 'group': 'knox', 'mode': None, 'encoding': 'UTF-8'}
2015-12-22 18:46:34,556 - Writing File['/usr/hdp/current/knox-server/conf/gateway-site.xml'] because contents don't match
2015-12-22 18:46:34,557 - File['/usr/hdp/current/knox-server/conf/gateway-log4j.properties'] {'content': ..., 'owner': 'knox', 'group': 'knox', 'mode': 0644}
2015-12-22 18:46:34,568 - File['/usr/hdp/current/knox-server/conf/topologies/default.xml'] {'content': InlineTemplate(...), 'owner': 'knox', 'group': 'knox'}
2015-12-22 18:46:34,568 - Execute[('chown', '-R', u'knox:knox', '/var/lib/knox/data', '/var/log/knox', u'/var/run/knox', '/usr/hdp/current/knox-server/conf', '/usr/hdp/current/knox-server/conf/topologies')] {'sudo': True}
2015-12-22 18:46:34,582 - Execute['/usr/hdp/current/knox-server/bin/knoxcli.sh create-master --master [PROTECTED]'] {'environment': {'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_40'}, 'not_if': "ambari-sudo.sh su knox -l -s /bin/bash -c 'test -f /var/lib/knox/data/security/master'", 'user': 'knox'}
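Note that the not_if guard on the Execute above should have skipped create-master once the master secret existed, yet the command ran and failed, and the trailing "Unrecognized option:create-master" suggests the CLI itself is misbehaving (elsewhere in this thread a JDK downgrade is discussed as the real fix). A possible workaround sketch, taken from the error text itself rather than a verified procedure:
# Either overwrite the existing master secret explicitly, running as the knox user...
su - knox -c '/usr/hdp/current/knox-server/bin/knoxcli.sh create-master --force --master <secret>'
# ...or remove the stale secret file and let Ambari recreate it on the next start:
# rm /var/lib/knox/data/security/master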
Labels:
- Apache Knox
12-14-2015
07:10 PM
Thanks to all of you.