Member since: 02-13-2017
Posts: 23
Kudos Received: 0
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 1374 | 02-13-2017 05:25 PM |
06-06-2019
08:35 AM
Hello, I have the issue on another cluster where I have two config groups for YARN.
06-03-2019
05:52 PM
@Geoffrey Shelton Okot Unfortunately, that is not the source of the issue. I am in the middle of an upgrade process, so the DSM installation comes after the Big SQL upgrade (https://www.ibm.com/support/knowledgecenter/SSCRJT_5.0.4/com.ibm.swg.im.bigsql.doc/doc/upgrade_process.html). As I said, I have the same issue with any component (Oozie, for example).
06-03-2019
04:24 PM
I am on an HDP 2.6.5 cluster. I am trying to add a service, and I always hit the same issue with any service on any host: I cannot configure the new service during the wizard. We can see that the service does not belong to any config group. I only have one config group, "Default"; I had made some tests with config groups a few months ago, but I removed them. When I click on "Manage Config Groups", nothing happens. If I select another service, I get this warning message: "You are changing not default group, please select config group to which you want to save dependent configs from other services", with the details below. Do you have any suggestion?
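In case it helps to narrow this down, one way to list the config groups Ambari still has registered is the REST API (a sketch only; the admin credentials, ambari-host and CLUSTER_NAME below are placeholders for my environment):
# List every config group the cluster still knows about
curl -s -u admin:admin -H 'X-Requested-By: ambari' "http://ambari-host:8080/api/v1/clusters/CLUSTER_NAME/config_groups"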
Labels:
- Hortonworks Data Platform (HDP)
01-09-2019
04:01 PM
It seems the web page needs some CSS located on the website maxcdn.bootstrapcdn.com.
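A quick way to confirm whether that CDN is even reachable from the network serving the page (just a sketch; it only checks the CDN root, not a specific stylesheet):
# HEAD request against the CDN root; a connection error or timeout points at a proxy/firewall issue
curl -sI https://maxcdn.bootstrapcdn.com | head -n 1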
01-09-2019
12:59 PM
I have installed HDP-3.1 (Ambari 2.7.3) and Knox with default parameters. I have activated the "Demo LDAP" process and the authentication to the Knox Admin UI is successful, but the page is empty. I have reinstalled Knox several times, but nothing changes. Any idea?
Labels:
01-09-2019
12:45 PM
Hello @Francesco, we have been using an Isilon for our Hadoop platform in production for two years. In production we run an HDP 2.6.5 cluster, and yesterday I was able to install an HDP 3.1 cluster. Can you explain your problem more precisely, maybe with some error messages?
... View more
11-19-2018
03:01 PM
I have this issue again with another upgrade. Do you have any idea how to fix it?
11-05-2018
01:42 PM
Hello,
it's not the first time I have had this issue, and I need a solution to reduce the downtime.
After each OS upgrade I am not able to start any service; each time I get this error message:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn_client.py", line 62, in <module>
YarnClient().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn_client.py", line 34, in install
self.install_packages(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 811, in install_packages
name = self.format_package_name(package['name'])
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 546, in format_package_name
raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hadoop_${stack_version}-yarn. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_5_0_292', 'accumulo_2_6_5_0_292-conf-standalone', 'accumulo_2_6_5_0_292-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_5_0_292', 'atlas-metadata_2_6_5_0_292-falcon-plugin', 'atlas-metadata_2_6_5_0_292-storm-plugin', 'bigtop-jsvc', 'bigtop-tomcat', 'datafu', 'druid', 'druid_2_6_5_0_292', 'falcon', 'falcon-doc', 'falcon_2_6_5_0_292', 'falcon_2_6_5_0_292-doc', 'flume', 'flume-agent', 'flume_2_6_5_0_292', 'flume_2_6_5_0_292-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_5_0_292-conf-pseudo', 'hadoop_2_6_5_0_292-doc', 'hadoop_2_6_5_0_292-hdfs-datanode', 'hadoop_2_6_5_0_292-hdfs-fuse', 'hadoop_2_6_5_0_292-hdfs-journalnode', 'hadoop_2_6_5_0_292-hdfs-namenode', 'hadoop_2_6_5_0_292-hdfs-secondarynamenode', 'hadoop_2_6_5_0_292-hdfs-zkfc', 'hadoop_2_6_5_0_292-httpfs', 'hadoop_2_6_5_0_292-httpfs-server', 'hadoop_2_6_5_0_292-mapreduce-historyserver', 'hadoop_2_6_5_0_292-source', 'hadoop_2_6_5_0_292-yarn-nodemanager', 'hadoop_2_6_5_0_292-yarn-proxyserver', 'hadoop_2_6_5_0_292-yarn-resourcemanager', 'hadoop_2_6_5_0_292-yarn-timelineserver', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_5_0_292-doc', 'hbase_2_6_5_0_292-master', 'hbase_2_6_5_0_292-regionserver', 'hbase_2_6_5_0_292-rest', 'hbase_2_6_5_0_292-thrift', 'hbase_2_6_5_0_292-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive_2_6_5_0_292-hcatalog-server', 'hive_2_6_5_0_292-metastore', 'hive_2_6_5_0_292-server', 'hive_2_6_5_0_292-server2', 'hive_2_6_5_0_292-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_6_5_0_292', 'knox', 'knox_2_6_5_0_292', 'livy', 'livy2', 'livy_2_6_5_0_292', 'mahout', 'mahout-doc', 'mahout_2_6_5_0_292', 'mahout_2_6_5_0_292-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'oozie_2_6_5_0_292', 'oozie_2_6_5_0_292-client', 'oozie_2_6_5_0_292-common', 'oozie_2_6_5_0_292-sharelib', 'oozie_2_6_5_0_292-sharelib-distcp', 'oozie_2_6_5_0_292-sharelib-hcatalog', 'oozie_2_6_5_0_292-sharelib-hive', 'oozie_2_6_5_0_292-sharelib-hive2', 'oozie_2_6_5_0_292-sharelib-mapreduce-streaming', 'oozie_2_6_5_0_292-sharelib-pig', 'oozie_2_6_5_0_292-sharelib-spark', 'oozie_2_6_5_0_292-sharelib-sqoop', 'oozie_2_6_5_0_292-webapp', 'phoenix', 'phoenix-queryserver', 'phoenix_2_6_5_0_292', 'phoenix_2_6_5_0_292-queryserver', 'pig', 'ranger-admin', 'ranger-atlas-plugin', 
'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_5_0_292-admin', 'ranger_2_6_5_0_292-atlas-plugin', 'ranger_2_6_5_0_292-kafka-plugin', 'ranger_2_6_5_0_292-kms', 'ranger_2_6_5_0_292-knox-plugin', 'ranger_2_6_5_0_292-solr-plugin', 'ranger_2_6_5_0_292-storm-plugin', 'ranger_2_6_5_0_292-tagsync', 'ranger_2_6_5_0_292-usersync', 'shc', 'slider', 'spark', 'spark-history-server', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-history-server', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_6_5_0_292-history-server', 'spark2_2_6_5_0_292-master', 'spark2_2_6_5_0_292-worker', 'spark_2_6_5_0_292', 'spark_2_6_5_0_292-history-server', 'spark_2_6_5_0_292-master', 'spark_2_6_5_0_292-python', 'spark_2_6_5_0_292-worker', 'spark_llap', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_5_0_292-metastore', 'storm', 'storm-slider-client', 'storm_2_6_5_0_292', 'superset', 'superset_2_6_5_0_292', 'tez', 'tez_hive2', 'zeppelin', 'zeppelin_2_6_5_0_292', 'zookeeper', 'zookeeper-server', 'Uploading', 'Report', 'langpacks,', 'R', 'R-core', 'R-core-devel', 'R-devel', 'R-java', 'R-java-devel', 'compat-readline5', 'epel-release', 'extjs', 'fping', 'ganglia-debuginfo', 'ganglia-devel', 'ganglia-gmetad', 'ganglia-gmond', 'ganglia-gmond-modules-python', 'ganglia-web', 'hadoop-lzo', 'hadoop-lzo-native', 'libRmath', 'libRmath-devel', 'libconfuse', 'libganglia', 'libgenders', 'lua-rrdtool', 'lucidworks-hdpsearch', 'lzo-debuginfo', 'lzo-devel', 'lzo-minilzo', 'nagios', 'nagios-debuginfo', 'nagios-devel', 'nagios-plugins', 'nagios-plugins-debuginfo', 'nagios-www', 'pdsh', 'perl-rrdtool', 'python-rrdtool', 'rrdtool', 'rrdtool-debuginfo', 'rrdtool-devel', 'ruby-rrdtool', 'snappy', 'snappy-devel', 'tcl-rrdtool', 'Uploading', 'Report', 'langpacks,', 'R', 'R-core', 'R-core-devel', 'R-devel', 'R-java', 'R-java-devel', 'compat-readline5', 'epel-release', 'extjs', 'fping', 'ganglia-debuginfo', 'ganglia-devel', 'ganglia-gmetad', 'ganglia-gmond', 'ganglia-gmond-modules-python', 'ganglia-web', 'hadoop-lzo', 'hadoop-lzo-native', 'libRmath', 'libRmath-devel', 'libconfuse', 'libganglia', 'libgenders', 'lua-rrdtool', 'lucidworks-hdpsearch', 'lzo-debuginfo', 'lzo-devel', 'lzo-minilzo', 'nagios', 'nagios-debuginfo', 'nagios-devel', 'nagios-plugins', 'nagios-plugins-debuginfo', 'nagios-www', 'pdsh', 'perl-rrdtool', 'python-rrdtool', 'rrdtool', 'rrdtool-debuginfo', 'rrdtool-devel', 'ruby-rrdtool', 'snappy', 'snappy-devel', 'tcl-rrdtool', 'Uploading', 'Report', 'langpacks,']
This error means that the package name is not found in the list of installed packages.
In my opinion the error comes from the name of the yum repository: when we upgrade the OS, the version of the previous OS is appended to the name of the (old) yum repository.
Example:
Before the OS upgrade:
yum list installed | grep hadoop_2_6_5
hadoop_2_6_5_0_292.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101
hadoop_2_6_5_0_292-client.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101
hadoop_2_6_5_0_292-hdfs.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101
hadoop_2_6_5_0_292-libhdfs.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101
hadoop_2_6_5_0_292-yarn.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101
After the OS upgrade:
yum list installed | grep hadoop_2_6_5
hadoop_2_6_5_0_292.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101/7.4
hadoop_2_6_5_0_292-client.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101/7.4
hadoop_2_6_5_0_292-hdfs.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101/7.4
hadoop_2_6_5_0_292-libhdfs.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101/7.4
hadoop_2_6_5_0_292-yarn.x86_64 2.7.3.2.6.5.0-292 @HDP-2.6-repo-101/7.4
Do you have a solution? Is it possible to tell Ambari not to check the repository version? Is it possible to refresh the yum database?
The only solution I have is to reinstall all the HDP packages 😞
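Update: a workaround I am considering before a full reinstall (a sketch only, assuming the problem really is the "/7.4" suffix that yum records in the from_repo attribute after the OS upgrade; yumdb comes with yum-utils, and the repository id and package pattern are the ones shown above):
# Rewrite the recorded source repository of the HDP packages back to the plain repo name,
# so "yum list installed" reports "@HDP-2.6-repo-101" again instead of "@HDP-2.6-repo-101/7.4"
for pkg in $(rpm -qa | grep '_2_6_5_0_292'); do
    yumdb set from_repo HDP-2.6-repo-101 "$pkg"
done
yum clean all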
Labels:
- Apache Accumulo
- Apache Ambari
- Apache Atlas
- Apache Falcon
- Apache Flume
- Apache Hadoop
- Apache HBase
- Apache HCatalog
- Apache Hive
- Apache Kafka
- Apache Knox
- Apache Oozie
- Apache Phoenix
- Apache Pig
- Apache Ranger
- Apache Solr
- Apache Spark
- Apache Sqoop
- Apache Storm
- Apache Tez
- Apache YARN
- Apache Zeppelin
- Apache Zookeeper
- Cloudera Hue
- HDFS
- Hortonworks Data Platform (HDP)
- MapReduce
10-01-2018
12:04 PM
We use NiFi 1.5.0 (HDF 3.1.2) on HDP 2.6.4. We access NiFi through Knox and we are able to create and use some data flows, but we have issues enabling or disabling controller services: we get a 404 error when we click the Enable/Disable button. With Fiddler, we can see that the URL used is wrong; the base URL of the Knox gateway is missing. In my opinion it's a bug in NiFi, but maybe I am wrong. If you have an idea or a workaround, please let me know.
Labels:
- Apache Knox
- Apache NiFi
04-04-2018
08:42 AM
Okay, nice to know. It is strange, though, because when I used IOP 4.2.5 (so with Ambari 2.4.2) this functionality was implemented. Thanks for your time.
03-26-2018
12:30 PM
Before the migration from IOP 4.2.5 to HDP 2.6.4, we used PAM authentication for Ranger Admin. Now we can only have UNIX, LDAP or AD. Why? PAM authentication is very useful.
Labels:
- Apache Ranger
03-19-2018
03:47 PM
@Geoffrey Shelton Okot thanks for your response. Why do you say that BIGSQL is not managed by Ambari? It was installed by Ambari because it is an extension. What method should I use to remove the service without losing the configuration?
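For reference, this is the kind of REST call I am considering for the removal (a sketch only; it does not preserve the configuration by itself, so I would export the current configs first; the credentials, host and cluster name are placeholders):
# Stop the service, then delete it through the Ambari REST API
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop BIGSQL"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' "http://ambari-host:8080/api/v1/clusters/CLUSTER_NAME/services/BIGSQL"
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE "http://ambari-host:8080/api/v1/clusters/CLUSTER_NAME/services/BIGSQL"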
03-19-2018
09:21 AM
Hello, I get this requirement failure when I try to launch the express upgrade: "Reason: The following services exist in HDP-2.6 but are not included in HDP-2.6. They must be removed before upgrading."
Failed on: BIGSQL
Is this the expected behavior? I didn't find anything about this. I am using Ambari 2.5.2 (I can't use Ambari 2.6.x).
Labels:
- IBM Db2 Big SQL
11-27-2017
02:18 PM
Hi, I have replaced the Ambari Files View JAR file under /var/lib/ambari-server/resources/views/ with the version from an Ambari 2.2.x. Regards, Florent
10-03-2017
07:11 AM
For the issue with the Knox gateway, can I modify something to fix it?
10-02-2017
11:41 AM
Hi @Jay SenSharma, yes, I use the "Local Cluster" option. As I said, that worked with a previous version of Ambari (Ambari 2.2.0), because before Ambari 2.4.0 the Ambari Files View used the native HDFS client; since 2.4.x the Files View works with WebHDFS. My issue probably comes from the implementation of the WebHDFS protocol in the Dell-EMC Isilon; it is the Isilon that demands a Content-Length.
10-02-2017
10:07 AM
Any suggestion?
09-29-2017
03:10 PM
I have an issue uploading a file with the Ambari Files View:
Server Status: 500
Server Message:
Unexpected HTTP response: code=411 != 201, op=CREATE, message=Length Required
Error Trace:
java.io.IOException: Unexpected HTTP response: code=411 != 201, op=CREATE, message=Length Required
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:356)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:98)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:821)
at org.apache.ambari.view.commons.hdfs.UploadService.uploadFile(UploadService.java:64)
at org.apache.amba...
(more...)
I made some tests with the curl command and I am able to reproduce the issue when I try to access WebHDFS through the Knox Gateway (it works fine directly):
hdfs@dal-b-23331 $ curl -ki -u d100989:$passwd "https://localhost:8443/gateway/default/webhdfs/v1/tmp/upload_test?op=CREATE" -X PUT
HTTP/1.1 307 Temporary Redirect
Date: Fri, 29 Sep 2017 14:59:08 GMT
Set-Cookie: JSESSIONID=1h9evu0uqv458t79tx7cgiq81;Path=/gateway/default;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Thu, 28-Sep-2017 14:59:09 GMT
Date: Fri, 29 Sep 2017 14:59:09 GMT
Server: Apache/2.2.31 (FreeBSD) mod_ssl/2.2.31 OpenSSL/1.0.2j-fips mod_fastcgi/2.4.6
Location: https://localhost:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/upload_test?_=AAAACAAAABAAAABwAyIkSO3dMnr2JnyCxF9WHh7Ll-F_0Kr6HtWkiDUiP_l_NQCUua97Q-cc2T7vHUtITOLQzFCu6PUc6r5QKtREU50pMCuSUcK5nG38oEu3dHAHL4MZM7INogHO99LtkB8xALE5_9ltH3qEHTauvJdoWCMAPBnTwXcGH7OTF1NtGH7Gv8degILTIQ
Keep-Alive: timeout=15, max=500
Content-Type: text/plain
Content-Length: 0
hdfs@dal-b-23331 $ curl -ki -u d100989:$passwd "https://localhost:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/upload_test?_=AAAACAAAABAAAABwAyIkSO3dMnr2JnyCxF9WHh7Ll-F_0Kr6HtWkiDUiP_l_NQCUua97Q-cc2T7vHUtITOLQzFCu6PUc6r5QKtREU50pMCuSUcK5nG38oEu3dHAHL4MZM7INogHO99LtkB8xALE5_9ltH3qEHTauvJdoWCMAPBnTwXcGH7OTF1NtGH7Gv8degILTIQ" -X PUT -T /tmp/test
HTTP/1.1 100 Continue
HTTP/1.1 411 Length Required
Date: Fri, 29 Sep 2017 15:01:21 GMT
Set-Cookie: JSESSIONID=1d4fl184ol30blh2vfqgu6gv9;Path=/gateway/default;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Thu, 28-Sep-2017 15:01:22 GMT
Date: Fri, 29 Sep 2017 15:01:22 GMT
Server: Apache/2.2.31 (FreeBSD) mod_ssl/2.2.31 OpenSSL/1.0.2j-fips mod_fastcgi/2.4.6
Content-Type: text/html; charset=ISO-8859-1
Connection: close
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>411 Length Required</title>
</head><body>
<h1>Length Required</h1>
<p>A request of the requested method PUT requires a valid Content-length.<br/>
</p>
</body></html>
If I add a "Content-Length: 0" header, it works:
hdfs@dal-b-23331 $ curl -ki -u d100989:$passwd "https://localhost:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/upload_test?_=AAAACAAAABAAAABwAyIkSO3dMnr2JnyCxF9WHh7Ll-F_0Kr6HtWkiDUiP_l_NQCUua97Q-cc2T7vHUtITOLQzFCu6PUc6r5QKtREU50pMCuSUcK5nG38oEu3dHAHL4MZM7INogHO99LtkB8xALE5_9ltH3qEHTauvJdoWCMAPBnTwXcGH7OTF1NtGH7Gv8degILTIQ" -X PUT -T /tmp/test -H "Content-Length: 0"
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Date: Fri, 29 Sep 2017 15:02:37 GMT
Set-Cookie: JSESSIONID=18tw47to3wrpbstd5zuw43lzf;Path=/gateway/default;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Thu, 28-Sep-2017 15:02:38 GMT
Date: Fri, 29 Sep 2017 15:02:38 GMT
Server: Apache/2.2.31 (FreeBSD) mod_ssl/2.2.31 OpenSSL/1.0.2j-fips mod_fastcgi/2.4.6
Content-Type: text/plain
Connection: close
Other important points for me: with Ambari 2.2.x it works, probably because before Ambari 2.4.x the Files View did not use WebHDFS but the HDFS client. We use the EMC Isilon for HDFS storage, and it is the Isilon that asks for the length information. How can I bypass the issue for the Ambari Files View? And for Knox, is it possible to modify the WebHDFS service definition to add a header? Regards, Florent
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Knox
08-24-2017
09:27 AM
I have the same issue; we are using an Isilon (8.0.1.1) too. It seems this command is not implemented in the Isilon. Other commands like "hdfs dfs -ls" work fine, our cluster isn't Kerberized and we aren't using Ranger. Isilon logs:
DAISICLUSTER0-3: 2017-08-24T11:24:35+02:00 <30.6> DAISICLUSTER0-3 hdfs[84852]: [hdfs] RPC V9 user: exception: org.apache.hadoop.security.authorize.AuthorizationException cause: Unknown protocol: org.apache.hadoop.tools.GetUserMappingsProtocol
02-13-2017
05:25 PM
Found it ... some change I had made in the java.security file ... Thanks for your time.
02-13-2017
01:29 PM
It looks good:
/usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/bin/jrunscript -e 'exit (javax.crypto.Cipher.getMaxAllowedKeyLength("RC5") >= 256);'; if [ $? -eq 1 ]; then echo "JCE Unlimited OK"; else echo "JCE NOT unlimited"; fi
JCE Unlimited OK
02-13-2017
12:40 PM
Hi Pierre Villard, I think I have JCE installed, because it should be included with OpenJDK. I have this lib with the JDK: /usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/jre/lib/jce.jar. Everything worked fine until I rebooted the server.
02-13-2017
11:18 AM
Hello, ambari-server can't start; I have this error:
cat /var/log/ambari-server/ambari-server.out
java.security.NoSuchAlgorithmException: PBKDF2WithHmacSHA1 SecretKeyFactory not available
at javax.crypto.SecretKeyFactory.<init>(SecretKeyFactory.java:122)
at javax.crypto.SecretKeyFactory.getInstance(SecretKeyFactory.java:160)
at org.apache.ambari.server.security.encryption.AESEncryptor.getKeyFromPassword(AESEncryptor.java:104)
at org.apache.ambari.server.security.encryption.AESEncryptor.getKeyFromPassword(AESEncryptor.java:97)
at org.apache.ambari.server.security.encryption.AESEncryptor.<init>(AESEncryptor.java:51)
at org.apache.ambari.server.security.encryption.MasterKeyServiceImpl.<clinit>(MasterKeyServiceImpl.java:44)
at org.apache.ambari.server.security.encryption.CredentialProvider.<init>(CredentialProvider.java:57)
at org.apache.ambari.server.configuration.Configuration.loadCredentialProvider(Configuration.java:858)
at org.apache.ambari.server.configuration.Configuration.readPasswordFromStore(Configuration.java:1494)
at org.apache.ambari.server.configuration.Configuration.getDatabasePassword(Configuration.java:1442)
at org.apache.ambari.server.controller.ControllerModule.buildJpaPersistModule(ControllerModule.java:368)
at org.apache.ambari.server.controller.ControllerModule.configure(ControllerModule.java:313)
at com.google.inject.AbstractModule.configure(AbstractModule.java:59)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:223)
at com.google.inject.spi.Elements.getElements(Elements.java:101)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:133)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:103)
at com.google.inject.Guice.createInjector(Guice.java:95)
at com.google.inject.Guice.createInjector(Guice.java:72)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:803)
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.ambari.server.security.encryption.CredentialProvider.<init>(CredentialProvider.java:57)
at org.apache.ambari.server.configuration.Configuration.loadCredentialProvider(Configuration.java:858)
at org.apache.ambari.server.configuration.Configuration.readPasswordFromStore(Configuration.java:1494)
at org.apache.ambari.server.configuration.Configuration.getDatabasePassword(Configuration.java:1442)
at org.apache.ambari.server.controller.ControllerModule.buildJpaPersistModule(ControllerModule.java:368)
at org.apache.ambari.server.controller.ControllerModule.configure(ControllerModule.java:313)
at com.google.inject.AbstractModule.configure(AbstractModule.java:59)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:223)
at com.google.inject.spi.Elements.getElements(Elements.java:101)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:133)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:103)
at com.google.inject.Guice.createInjector(Guice.java:95)
at com.google.inject.Guice.createInjector(Guice.java:72)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:803)
Caused by: java.lang.NullPointerException
at org.apache.ambari.server.security.encryption.AESEncryptor.<init>(AESEncryptor.java:52)
at org.apache.ambari.server.security.encryption.MasterKeyServiceImpl.<clinit>(MasterKeyServiceImpl.java:44)
... 15 more
Ambari is unable to read the encrypted passwords and I don't know why:
# /usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/bin/java -cp '/etc/ambari-server/conf:/usr/lib/ambari-server/*' org.apache.ambari.server.security.encryption.CredentialProvider GET ambari.db.password
java.security.NoSuchAlgorithmException: PBKDF2WithHmacSHA1 SecretKeyFactory not available
at javax.crypto.SecretKeyFactory.<init>(SecretKeyFactory.java:122)
at javax.crypto.SecretKeyFactory.getInstance(SecretKeyFactory.java:160)
at org.apache.ambari.server.security.encryption.AESEncryptor.getKeyFromPassword(AESEncryptor.java:104)
at org.apache.ambari.server.security.encryption.AESEncryptor.getKeyFromPassword(AESEncryptor.java:97)
at org.apache.ambari.server.security.encryption.AESEncryptor.<init>(AESEncryptor.java:51)
at org.apache.ambari.server.security.encryption.MasterKeyServiceImpl.<clinit>(MasterKeyServiceImpl.java:44)
at org.apache.ambari.server.security.encryption.CredentialProvider.<init>(CredentialProvider.java:57)
at org.apache.ambari.server.security.encryption.CredentialProvider.main(CredentialProvider.java:151)
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.ambari.server.security.encryption.CredentialProvider.<init>(CredentialProvider.java:57)
at org.apache.ambari.server.security.encryption.CredentialProvider.main(CredentialProvider.java:151)
Caused by: java.lang.NullPointerException
at org.apache.ambari.server.security.encryption.AESEncryptor.<init>(AESEncryptor.java:52)
at org.apache.ambari.server.security.encryption.MasterKeyServiceImpl.<clinit>(MasterKeyServiceImpl.java:44)
... 2 more
Any idea? Thanks.
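In the meantime, a small check I plan to run with jrunscript (a sketch: it only tells whether this JDK exposes the PBKDF2WithHmacSHA1 SecretKeyFactory at all, independently of Ambari; the JDK path is my installation's):
# A stack trace here would mean the algorithm/provider is missing, e.g. because of a broken java.security provider list
/usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/bin/jrunscript -e 'print(javax.crypto.SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1").getAlgorithm());'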
Labels: