Member since: 05-27-2016
Posts: 22
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 64 | 12-23-2020 12:05 AM
12-23-2020
12:05 AM
LLAP internally uses the MapReduce shuffle handler because the Tez code has a hard-coded constant TEZ_SHUFFLE_HANDLER_SERVICE_ID = "mapreduce_shuffle": https://github.com/apache/tez/blob/master/tez-api/src/main/java/org/apache/tez/dag/api/TezConstants.java So when tez_shuffle is used with LLAP, it throws a NullPointerException.
12-18-2020
05:57 AM
Why don't you write the 20 INSERT INTO queries in append mode in the same SQL file, separated by ';'? In that case you don't have to use UNION, and you probably won't encounter the heap-space issue.
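As a sketch of that suggestion (file, table, and source names below are hypothetical), the statements can live in one ';'-separated file and run sequentially:

```shell
# Hypothetical sketch: independent INSERT statements in one SQL file,
# separated by ';', instead of a single large UNION query.
cat > /tmp/append_inserts.sql <<'EOF'
INSERT INTO target_table SELECT * FROM source_table_01;
INSERT INTO target_table SELECT * FROM source_table_02;
INSERT INTO target_table SELECT * FROM source_table_03;
EOF
# Each INSERT then runs as its own job, e.g.:
#   hive -f /tmp/append_inserts.sql
grep -c 'INSERT INTO' /tmp/append_inserts.sql   # statement count in the file
```

Because each statement is a separate job, memory pressure stays bounded instead of one giant UNION plan being built.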
12-18-2020
05:49 AM
After setting up the Tez Shuffle Handler following the instructions at https://tez.apache.org/shuffle-handler.html, I'm getting the below error during query execution:
Vertex failed, vertexName=Map 2, vertexId=vertex_1608273679503_0002_2_01, diagnostics=[Task failed, taskId=task_1608273679503_0002_2_01_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1608273679503_0002_2_01_000000_0:java.lang.RuntimeException: java.lang.RuntimeException: Map operator initialization failed at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: Map operator initialization failed at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:363) at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266) ... 15 more Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Async Initialization failed. abortRequested=false at org.apache.hadoop.hive.ql.exec.Operator.completeInitialization(Operator.java:461) at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:395) at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:568) at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:520) at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:381) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:335) ... 16 more Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.tez.LlapObjectCache.retrieve(LlapObjectCache.java:118) at org.apache.hadoop.hive.ql.exec.tez.LlapObjectCache$1.call(LlapObjectCache.java:143) ... 4 more Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.vector.mapjoin.fast.VectorMapJoinFastHashTableLoader.load(VectorMapJoinFastHashTableLoader.java:113) at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTableInternal(MapJoinOperator.java:331) at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:400) at org.apache.hadoop.hive.ql.exec.MapJoinOperator.lambda$initializeOp$0(MapJoinOperator.java:207) at org.apache.hadoop.hive.ql.exec.tez.LlapObjectCache.retrieve(LlapObjectCache.java:116) ... 
5 more Caused by: java.lang.NullPointerException at org.apache.tez.runtime.api.impl.TezTaskContextImpl.getServiceConsumerMetaData(TezTaskContextImpl.java:190) at org.apache.tez.runtime.library.common.shuffle.impl.ShuffleManager.(ShuffleManager.java:264) at org.apache.tez.runtime.library.input.UnorderedKVInput.start(UnorderedKVInput.java:146) at org.apache.hadoop.hive.ql.exec.vector.mapjoin.fast.VectorMapJoinFastHashTableLoader.load(VectorMapJoinFastHashTableLoader.java:109) ... 9 more Can anyone help here please?
11-04-2020
07:08 AM
I've downloaded the HDP-3.1.0.78-4 tag source code from the HDP git repo.
While trying to build the code using:
mvn clean package -Phadoop-3,dist
I'm getting the below error:
[WARNING] The requested profile "hadoop-3" could not be activated because it does not exist.
[ERROR] Failed to execute goal on project hive-storage-api: Could not resolve dependencies for project org.apache.hive:hive-storage-api:jar:3.1.0.3.1.0.78-4: Could not find artifact org.apache.hadoop:hadoop-common:jar:3.1.1.3.1.0.78-4 in central (https://repo.maven.apache.org/maven2)
When I tried the below command:
mvn clean package -DskipTests
I'm getting the error below:
[WARNING] The POM for org.apache.hadoop:hadoop-common:jar:3.1.1.3.1.0.78-4 is missing, no dependency information available
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Hive 3.1.0.3.1.0.78-4:
[INFO]
[INFO] Hive Storage API ................................... FAILURE [ 0.340 s]
:
:
:
[ERROR] Failed to execute goal on project hive-storage-api: Could not resolve dependencies for project org.apache.hive:hive-storage-api:jar:3.1.0.3.1.0.78-4: Failure to find org.apache.hadoop:hadoop-common:jar:3.1.1.3.1.0.78-4 in https://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced ->
Can anyone help here?
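For what it's worth, the HDP-versioned artifacts (e.g. hadoop-common 3.1.1.3.1.0.78-4) were never published to Maven Central, so the build needs an extra repository in the POM or in ~/.m2/settings.xml. A hedged sketch follows; the repository URL is an assumption based on the historically published Hortonworks public repo, which may no longer resolve:

```xml
<!-- Sketch for ~/.m2/settings.xml: add a profile with the HDP artifact repo.
     The URL below is the historical Hortonworks public repo (assumption). -->
<settings>
  <profiles>
    <profile>
      <id>hdp</id>
      <repositories>
        <repository>
          <id>hortonworks-public</id>
          <url>https://repo.hortonworks.com/content/groups/public/</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>hdp</activeProfile>
  </activeProfiles>
</settings>
```

If Maven already cached the failed lookup, re-run with `mvn -U` to force an update of the cached resolution.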
10-30-2020
12:09 AM
Thank you! This was helpful.
10-28-2020
12:00 PM
I want to set the logging level of hiveserver2Interactive.log to DEBUG in HDP-3.1. Could anyone help here?
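In case it helps: in HDP-3.1, HiveServer2 Interactive logging is driven by its log4j2 template, editable in Ambari under Hive's advanced configs (the hiveserver2-interactive log4j2 section; section naming here is from memory, not verified). A minimal sketch of the relevant lines, assuming the stock Hive log4j2 properties template:

```properties
# hive-log4j2-style template: raise the Hive root logger to DEBUG
property.hive.log.level = DEBUG
property.hive.root.logger = DRFA
# Restart HiveServer2 Interactive after saving for the change to take effect.
```

Expect a very large hiveserver2Interactive.log at DEBUG; revert to INFO once done troubleshooting.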
04-20-2017
10:39 AM
What is the solution?
04-20-2017
10:27 AM
How was the issue resolved? Could you please provide some information?
04-20-2017
08:45 AM
The command "hdfs fsck / -delete" worked for me.
04-18-2017
05:24 AM
Why is the Solr service not present in the HDP 2.4 stack? The above image shows that Solr has been available since HDP 2.1. Can anyone please help me understand why I'm not able to find the Solr service in the HDP stack?
03-10-2017
04:36 PM
Just to add: if a component is already installed, its package will not be reinstalled. So if any file of that component has been removed from the /usr/hdp directory, the start of the service for that particular component will fail. Correct me if I'm wrong.
03-09-2017
05:23 PM
@Yu Song: You can check which service is giving this error. After that, remove the symlink for that service from the /usr/hdp/current/ directory and restart the installation of that service.
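A sketch of those steps on a scratch directory (on a real node the links live under /usr/hdp/current/; the path and service name below are illustrative only):

```shell
# Recreate the layout on a scratch path so the commands are safe to try:
mkdir -p /tmp/usr-hdp/2.4.0.0-169/hadoop
ln -sf /tmp/usr-hdp/2.4.0.0-169/hadoop /tmp/usr-hdp/current-hadoop-client
# The fix: remove only the stale symlink (the versioned dir stays intact),
# then re-run the service install from Ambari.
rm /tmp/usr-hdp/current-hadoop-client
ls /tmp/usr-hdp
```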
01-24-2017
11:35 AM
@Kuldeep Kulkarni, yes, it was a problem with my rpm command; that's why yum hung. I rebuilt the rpmdb and everything is working fine now. Thanks a ton. 🙂
01-24-2017
06:00 AM
@Kuldeep Kulkarni
I encountered a similar issue with HDFS_CLIENT. I've successfully deleted the client using the curl call mentioned in your post.
But the installation of HDFS_CLIENT is stuck at:
2017-01-24 08:44:43,873 - Package['unzip'] {}
And the installation failed with the following error message:
stderr: /var/lib/ambari-agent/data/errors-2631.txt
Python script has been killed due to timeout after waiting 1800 secs
Could you please suggest some pointers?
11-07-2016
06:30 AM
@jss I'm using Ambari 2.2.1. After creating the instance through the REST API, I'm able to see the FILES view:
curl -u admin:admin -H "X-Requested-By: ambari" -X POST http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/myfile
But I'm getting another error:
500 HdfsApi connection failed. Check "webhdfs.url" property
org.apache.ambari.view.utils.hdfs.HdfsApiException: HDFS070 fs.defaultFS is not configured
at org.apache.ambari.view.utils.hdfs.ConfigurationBuilder.getDefaultFS(ConfigurationBuilder.java:112)
at org.apache.ambari.view.utils.hdfs.ConfigurationBuilder.parseProperties(ConfigurationBuilder.java:83)
at org.apache.ambari.view.utils.hdfs.ConfigurationBuilder.buildConfig(ConfigurationBuilder.java:224)
at org.apache.ambari.view.utils.hdfs.HdfsApi.<init>(HdfsApi.java:65)
at org.apache.ambari.view.utils.hdfs.HdfsUtil.connectToHDFSApi(HdfsUtil.java:126)
at org.apache.ambari.view.filebrowser.HdfsService.getApi(HdfsService.java:79)
at org.apache.ambari.view.filebrowser.FileOperationService.listdir(FileOperationService.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:540)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:715)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1496)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:196)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:109)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:109)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:216)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:205)
at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:152)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745) files-view-error.png
The fs.defaultFS value is set in the HDFS configs (core-site.xml section). Am I missing something here?
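One hedged workaround sketch: set the view instance properties explicitly instead of relying on cluster auto-detection. The property name follows the Ambari Files view configuration; the host and port below are placeholders, not values from this thread:

```shell
# Build the PUT payload for the Files view instance (placeholder namenode URL):
cat > /tmp/files-view-props.json <<'EOF'
{
  "ViewInstanceInfo" : {
    "properties" : {
      "webhdfs.url" : "webhdfs://namenode.example.com:50070"
    }
  }
}
EOF
# Apply against a live Ambari server (commented out here):
#   curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
#     --data @/tmp/files-view-props.json \
#     http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/myfile
grep -o 'webhdfs\.url' /tmp/files-view-props.json
```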
11-07-2016
04:57 AM
1 Kudo
I've created a custom stack and added the following files in the /var/lib/ambari-server/resources/views directory:
capacity-scheduler-2.2.1.1.70.jar
files-2.2.1.1.70.jar
hive-2.2.1.1.70.jar
pig-2.2.1.1.70.jar
slider-2.2.1.1.70.jar
tez-view-2.2.1.1.70.jar
After that I restarted the ambari-server. Now I'm able to get the views list through the REST API:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/views/
{
"href" : "http://localhost:8080/api/v1/views/",
"items" : [
{
"href" : "http://localhost:8080/api/v1/views/ADMIN_VIEW",
"ViewInfo" : {
"view_name" : "ADMIN_VIEW"
}
},
{
"href" : "http://localhost:8080/api/v1/views/CAPACITY-SCHEDULER",
"ViewInfo" : {
"view_name" : "CAPACITY-SCHEDULER"
}
},
{
"href" : "http://localhost:8080/api/v1/views/FILES",
"ViewInfo" : {
"view_name" : "FILES"
}
},
{
"href" : "http://localhost:8080/api/v1/views/HIVE",
"ViewInfo" : {
"view_name" : "HIVE"
}
},
{
"href" : "http://localhost:8080/api/v1/views/PIG",
"ViewInfo" : {
"view_name" : "PIG"
}
},
{
"href" : "http://localhost:8080/api/v1/views/SLIDER",
"ViewInfo" : {
"view_name" : "SLIDER"
}
},
{
"href" : "http://localhost:8080/api/v1/views/TEZ",
"ViewInfo" : {
"view_name" : "TEZ"
}
}
]
}
But I'm not able to see the views in the web UI, and I'm not sure how to debug the problem. Are there any logs that can help? no-views.png
10-12-2016
05:12 AM
Only a restart of ambari-agent did the trick for me. Thanks.
10-04-2016
09:34 AM
After performing the mentioned steps I'm still facing the same issue. The alert is the same: "Connection failed: [Errno 111] Connection refused to localhost:6667"
curl -u admin:admin -H 'X-Requested-By: ambari' -X GET "http://localhost:8080/api/v1/clusters/XXXX/alert_definitions/17"
{
"href" : "http://localhost:8080/api/v1/clusters/flumetest/alert_definitions/17",
"AlertDefinition" : {
"cluster_name" : "XXXX",
"component_name" : "KAFKA_BROKER",
"description" : "This host-level alert is triggered if the Kafka Broker cannot be determined to be up.",
"enabled" : true,
"id" : 17,
"ignore_host" : false,
"interval" : 1,
"label" : "Kafka Broker Process",
"name" : "kafka_broker_process",
"scope" : "HOST",
"service_name" : "KAFKA",
"source" : {
"default_port" : 9092.0,
"reporting" : {
"critical" : {
"value" : 5.0,
"text" : "Connection failed: {0} to {1}:{2}"
},
"warning" : {
"value" : 1.5,
"text" : "TCP OK - {0:.3f}s response on port {1}"
},
"ok" : {
"text" : "TCP OK - {0:.3f}s response on port {1}"
}
},
"type" : "PORT",
"uri" : "{{kafka-broker/listeners}}"
}
}
}
After changing the listener port in the config to 9092, I'm getting: Connection failed: [Errno 111] Connection refused to localhost:9092
09-29-2016
04:14 AM
After performing: hadoop fs -rm -r /apps/accumulo/data (using the accumulo user)
1. Use the below command to initialize Accumulo: [accumulo@phdns01 ~]$ ACCUMULO_CONF_DIR=/etc/accumulo/conf/server accumulo init
2. Then perform: [accumulo@phdns01 ~]$ ACCUMULO_CONF_DIR=/etc/accumulo/conf/server accumulo init --reset-security
3. Change the password in the configs as listed in Ambari (so that the password changed in step 2 matches the one in the config).
4. Start Accumulo from Ambari.
06-21-2016
10:24 AM
@Joy The following rpms are installed on my machine:
# rpm -qa|grep openssl
openssl-1.0.1e-51.el7_2.5.x86_64
openssl098e-0.9.8e-29.el7.centos.2.x86_64
openssl-libs-1.0.1e-51.el7_2.5.x86_64
openssl-devel-1.0.1e-51.el7_2.5.x86_64
Also, you can validate whether your "openssl" has the "-create_serial" option available by running the following command (just pass any string after "openssl ca") to see the valid options. The following is the output on my machine:
# openssl ca kuldeep
unknown option kuldeep
usage: ca args
-verbose - Talk alot while doing things
-config file - A config file
-name arg - The particular CA definition to use
-gencrl - Generate a new CRL
-crldays days - Days is when the next CRL is due
-crlhours hours - Hours is when the next CRL is due
-startdate YYMMDDHHMMSSZ - certificate validity notBefore
-enddate YYMMDDHHMMSSZ - certificate validity notAfter (overrides -days)
-days arg - number of days to certify the certificate for
-md arg - md to use, see openssl dgst -h for list
-policy arg - The CA 'policy' to support
-keyfile arg - private key file
-keyform arg - private key file format (PEM or ENGINE)
-key arg - key to decode the private key if it is encrypted
-cert file - The CA certificate
-selfsign - sign a certificate with the key associated with it
-in file - The input PEM encoded certificate request(s)
-out file - Where to put the output file(s)
-outdir dir - Where to put output certificates
-infiles .... - The last argument, requests to process
-spkac file - File contains DN and signed public key and challenge
-ss_cert file - File contains a self signed cert to sign
-preserveDN - Don't re-order the DN
-noemailDN - Don't add the EMAIL field into certificate' subject
-batch - Don't ask questions
-msie_hack - msie modifications to handle all those universal strings
-revoke file - Revoke a certificate (given in file)
-subj arg - Use arg instead of request's subject
-utf8 - input characters are UTF8 (default ASCII)
-multivalue-rdn - enable support for multivalued RDNs
-extensions .. - Extension section (override value in config file)
-extfile file - Configuration file with X509v3 extentions to add
-crlexts .. - CRL extension section (override value in config file)
-engine e - use engine e, possibly a hardware device.
-status serial - Shows certificate status given the serial number
-updatedb - Updates db for expired certificates
It is not showing -create_serial in the usage options. However, when I run my command with the "-create_serial" option, it is not reported as an unknown option:
# openssl ca -create_serial kuldeep
unknown option kuldeep
usage: ca args
(… same option list as above, again without -create_serial shown …)
06-21-2016
10:17 AM
@vpoornalingam Could you please let me know how to verify whether SSL is enabled in the Ambari Server? Because of this problem the ca.crt file is not being generated, so keystore.p12 is not created, and as a result the Ambari server fails to start. I'm getting the below error message in /var/log/ambari-server/ambari-server.log:
ERROR [main] AmbariServer:820 - Failed to run the Ambari Server
MultiException[java.io.FileNotFoundException: /var/lib/ambari-server/keys/keystore.p12 (No such file or directory), java.io.FileNotFoundException: /var/lib/ambari-server/keys/keystore.p12 (No such file or directory)]
06-21-2016
07:43 AM
I've added a new stack to ambari-server and built it. Now, after installing the ambari-server rpm on a single host, when I start ambari-server, it gives the following error:
-----------Failure case-------------
openssl ca -create_serial -out /var/lib/ambari-server/keys/ca.crt -days 365 -keyfile /var/lib/ambari-server/keys/ca.key -key **** -selfsign -extensions jdk7_ca -config /var/lib/ambari-server/keys/ca.config -batch -infiles /var/lib/ambari-server/keys/ca.csr
was finished with exit code: 1 - an error occurred parsing the command options.
----------- logs are mentioned below -------------
20 Jun 2016 16:25:53,020 INFO [main] Configuration:1067 - Web App DIR test /usr/lib/ambari-server/web
20 Jun 2016 16:25:53,027 INFO [main] CertificateManager:68 - Initialization of root certificate
20 Jun 2016 16:25:53,027 INFO [main] CertificateManager:70 - Certificate exists:false
20 Jun 2016 16:25:53,027 INFO [main] CertificateManager:137 - Generation of server certificate
20 Jun 2016 16:25:55,627 INFO [main] ShellCommandUtil:44 - Command
openssl genrsa -des3 -passout pass:**** -out /var/lib/ambari-server/keys/ca.key 4096
was finished with exit code: 0 - the operation was completely successfully.
20 Jun 2016 16:25:55,644 INFO [main] ShellCommandUtil:44 - Command
openssl req -passin pass:**** -new -key /var/lib/ambari-server/keys/ca.key -out /var/lib/ambari-server/keys/ca.csr -batch
was finished with exit code: 0 - the operation was completely successfully.
20 Jun 2016 16:25:55,654 WARN [main] ShellCommandUtil:46 - Command
openssl ca -create_serial -out /var/lib/ambari-server/keys/ca.crt -days 365 -keyfile /var/lib/ambari-server/keys/ca.key -key **** -selfsign -extensions jdk7_ca -config /var/lib/ambari-server/keys/ca.config -batch -infiles /var/lib/ambari-server/keys/ca.csr
was finished with exit code: 1 - an error occurred parsing the command options.
20 Jun 2016 16:25:55,663 WARN [main] ShellCommandUtil:46 - Command
openssl pkcs12 -export -in /var/lib/ambari-server/keys/ca.crt -inkey /var/lib/ambari-server/keys/ca.key -certfile /var/lib/ambari-server/keys/ca.crt
-out /var/lib/ambari-server/keys/keystore.p12 -password pass:**** -passin pass:****
was finished with exit code: 1 - an error occurred parsing the command options.
Can anyone help with this?
I'm using CentOS 7 on the host for the Ambari installation.