Member since: 12-10-2015
Posts: 36
Kudos Received: 16
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 2062 | 02-01-2016 06:50 AM |
07-18-2018 07:40 AM
Synchronize the Tez configurations on all nodes and restart HiveServer2; it should then work fine.
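A minimal sketch of that procedure (the hostnames and config path are assumptions; adjust to your cluster): copy tez-site.xml from this node to the others, then restart HiveServer2 from the Ambari UI (Hive > Service Actions > Restart).
# sync the Tez config to the other nodes (hosts/path are placeholders)
>for host in node2 node3; do scp /etc/tez/conf/tez-site.xml "$host":/etc/tez/conf/; done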
01-12-2018 06:18 AM
I have found the reason for the problem: I changed the FQDN to lowercase, and then it worked. Thank you for your reply.
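Since Kerberos service principals embed the hostname verbatim, a mixed-case FQDN will fail to match its lowercase principal. A quick sanity check (the output should be entirely lowercase):
>hostname -f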
01-04-2018 04:46 PM
I have installed Ambari 2.6.0.0 with HDP 2.6.3 and added some of the services.
But while enabling Kerberos I am facing a weird issue. The error log is:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 361, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 970, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 99, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 234, in namenode
create_hdfs_directories()
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 301, in create_hdfs_directories
mode=0777,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 328, in action_delayed
self._assert_valid()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 287, in _assert_valid
self.target_status = self._get_file_status(target)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 430, in _get_file_status
list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 248, in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://a01-r03-i164-156-515w9ay.xxx.xxx:50070/webhdfs/v1/tmp?op=GETFILESTATUS'' returned status_code=403.
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 403 org.apache.hadoop.security.authentication.client.AuthenticationException</title>
</head>
<body><h2>HTTP ERROR 403</h2>
<p>Problem accessing /webhdfs/v1/tmp. Reason:
<pre> org.apache.hadoop.security.authentication.client.AuthenticationException</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
Any suggestions will be appreciated. Thanks!
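The failing call can be reproduced by hand after obtaining a Kerberos ticket; the keytab path and principal below are assumptions for illustration, not taken from this cluster:
>kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa
>curl -sS -L --negotiate -u : 'http://a01-r03-i164-156-515w9ay.xxx.xxx:50070/webhdfs/v1/tmp?op=GETFILESTATUS'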
Labels:
- Apache Ambari
08-07-2016 01:47 PM
Thank you very much for your help. Following the plan you provided, the problem was solved smoothly.
08-06-2016 09:29 AM
When I upgraded Ambari from 2.2.1 to 2.2.2 and then restarted all services, every service in the cluster fails with the error below:
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'managed_hdfs_resource_property_names' was not found in configurations dictionary!
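For later readers: this error usually means the upgrade did not add the new property to the cluster-env configuration. A sketch of the kind of workaround that resolves it, using Ambari's bundled configs.sh (AMBARI_HOST, CLUSTER_NAME, and the admin credentials are placeholders):
# add the missing property to cluster-env
>/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set AMBARI_HOST CLUSTER_NAME cluster-env managed_hdfs_resource_property_names ' '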
Labels:
- Apache Ambari
08-02-2016 08:06 AM
4 Kudos
Since version 0.6, Zeppelin has provided support for the R interpreter.
By default, the R interpreter appears as two Zeppelin interpreters, %r and %knitr.
To run Zeppelin with the R interpreter, the following must be in place (a minimal zeppelin-env.sh sketch follows this list):
- R (3.0+)
- JAVA_HOME (Oracle JDK 1.7+)
- SPARK_HOME (the best way to set this is by editing conf/zeppelin-env.sh; if it is not set, the R interpreter will not be able to interface with Spark)
You should also copy conf/zeppelin-site.xml.template to conf/zeppelin-site.xml, so that Zeppelin sees the R interpreter the first time it starts up.
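A minimal conf/zeppelin-env.sh sketch (the paths below are assumptions; point them at your own JDK and Spark installations):
# conf/zeppelin-env.sh
export JAVA_HOME=/usr/jdk64/jdk1.8.0_60
export SPARK_HOME=/usr/hdp/current/spark-client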
Then clone the Zeppelin repository and build it with the R profiles enabled:
>git clone https://github.com/apache/zeppelin.git
# enable the "r" and "sparkr" profiles
>mvn clean install -e -DskipTests -Dspark.version={spark_version} -Dhadoop.version={hadoop_version} -Pr -Psparkr -Pvendor-repo -Pexamples -Drat.skip=true -Dcheckstyle.skip=true -Dcobertura.skip=true
Next, install the SparkR package. In Spark 1.6 or earlier, SparkR needs to be installed manually:
>cd $SPARK_HOME
>./R/install-dev.sh
Now you can start Apache Zeppelin from the command line:
>cd $ZEPPELIN_HOME
>bin/zeppelin-daemon.sh start
After a successful start, visit http://localhost:8080 in your web browser, where you can execute commands just as in the CLI.
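To confirm the daemon came up before opening the UI, a quick check (assuming the same $ZEPPELIN_HOME as above):
>bin/zeppelin-daemon.sh status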
04-22-2016 04:26 PM
@Pierre Villard If I do not want to enable KMS and Ranger, what do I need to do now to ensure that the HDFS data remains readable? Thanks for your reply.
04-22-2016 04:25 PM
If I do not want to enable KMS and Ranger, what do I need to do now to ensure that the HDFS data remains readable? Thanks for your reply.
04-22-2016 04:04 PM
What I mean is: if I completely uninstall KMS and Ranger, will the files stored in HDFS still be readable?
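Whether the data stays readable depends mainly on whether any of it sits inside an HDFS encryption zone backed by KMS; one way to check (a sketch, run as the HDFS superuser):
>hdfs crypto -listZones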