Member since: 03-14-2016
Posts: 4721
Kudos Received: 1108
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1221 | 04-27-2020 03:48 AM |
| | 1855 | 04-26-2020 06:18 PM |
| | 1735 | 04-26-2020 06:05 PM |
| | 1264 | 04-13-2020 08:53 PM |
| | 1594 | 03-31-2020 02:10 AM |
03-28-2022
03:15 AM
What is stored inside these blockmgr-* files? Do they have any relation to the input files Spark is reading?
02-24-2022
07:34 PM
Hi! Did you resolve the question?
02-03-2022
01:09 AM
@prashanthshetty as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
02-02-2022
11:33 PM
@er_sharma_shant @jsensharma Can you please tell me how to resolve the HiveServer2 start issue by adding the znode name for HiveServer2 in the zkCli shell? The hiveserver2 znode exists in ZooKeeper, but it has no instance (znode name) registered under it, which is why HiveServer2 is failing to start.
Welcome to ZooKeeper!
2022-02-03 02:31:36,504 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1013] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2022-02-03 02:31:36,592 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@856] - Socket connection established, initiating session, client: /127.0.0.1:36762, server: localhost/127.0.0.1:2181
2022-02-03 02:31:36,609 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1273] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x27ebd79aecc014c, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, registry, controller, brokers, storm, infra-solr, zookeeper, hbase-unsecure, hadoop-ha, tracers, admin, isr_change_notification, log_dir_event_notification, accumulo, controller_epoch, hiveserver2, hiveserver2-leader, druid, rmstore, atsv2-hbase-unsecure, consumers, ambari-metrics-cluster, latest_producer_id_block, config]
[zk: localhost:2181(CONNECTED) 1] ls /hiveserver2
[]
[zk: localhost:2181(CONNECTED) 2]
Any help would be much appreciated! Thank you.
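For reference, here is a minimal zkCli sketch for checking (and, purely for testing, creating) an entry under /hiveserver2. The zkCli path and the example serverUri value are assumptions, not taken from this thread; a healthy HiveServer2 registers its own ephemeral serverUri znode here when it starts, so an empty /hiveserver2 usually points back to the HiveServer2 startup failure rather than to a missing manual znode.

```bash
# Connect to ZooKeeper (zkCli path and quorum are assumptions; adjust for your cluster)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181

# Inside the zkCli shell:
#   ls /hiveserver2        # lists registered HiveServer2 instances (serverUri=... entries)
#   get /hiveserver2       # shows the znode data
#
# Illustrative only -- HiveServer2 normally creates this ephemeral child itself on startup:
#   create "/hiveserver2/serverUri=myhost:10000;version=3.1.0;sequence=0000000001" ""
```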
01-17-2022
08:19 AM
I suggest you try to set your path with "\" instead of "/", for example: C:\Users\kiran\Downloads\mysql\mysql-connector-java-8.0.17.jar. It worked for me!
01-10-2022
11:47 PM
@yvettew In that pop-up window, copy the URL and paste it into a normal Chrome browser tab, then follow the steps below to get rid of the 500 error in Ambari:
1) Click the "i" (site information) symbol, then click "Site settings"; this redirects you to the settings for that particular site.
2) Click "Reset permissions" and then click "Clear data".
Now go back to the Ambari login page and click reload; the login page will appear and the error will be resolved. In short: reset permissions and clear data in the Chrome browser site settings.
12-17-2021
07:19 PM
Dennis, thanks for your kind answer, but I'm not a company, I'm a person trying to use open source software to learn and contribute like anyone else... If Cloudera claims that its software is open source, why is an agreement needed? It shouldn't be; that defeats the purpose completely. I'm not signing any "agreements"; either the code is free or NOT, as simple as that!! If NOT, then just say so... Stop using "open source" as a fashion flag; this simply diminishes the real coders who produce their code and make it available REALLY FOR FREE!! If Cloudera took the freely available code, built upon it, and now wants to sell it, OK, I have no problem with that, but DON'T say it is open source, because it isn't if you charge for it... Sorry, but this is so obvious to me... am I so blind, dumb, or missing anything here?
12-17-2021
08:00 AM
Hi @mike_bronson7, were you able to solve the issue with those steps? Thank you
11-02-2021
03:58 AM
@abbas_kurdistan Did you fix this problem? If yes, what was the solution?
10-11-2021
08:35 AM
To resolve the issue, import the Ambari certificate into the Ambari truststore. To import the certificate, do the following:
STEP 1:
Get the certificate from the Ambari server:
echo | openssl s_client -showcerts -connect <AMBARI_HOST>:<AMBARI_HTTPS_PORT> 2>&1 | sed --quiet '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/ambari_certificate.crt
STEP 2:
Get the path of the Ambari truststore and the truststore password from the Ambari properties:
cat /etc/ambari-server/conf/ambari.properties | grep truststore
As per your ambari.properties, the path and password will look like this:
ssl.trustStore.password=<value from your ambari.properties file>
ssl.trustStore.path=/etc/ambari-server/conf/ambari-server-truststore
STEP 3:
Import the certificate into the truststore found in STEP 2:
keytool -importcert -file /tmp/ambari_certificate.crt -keystore <truststore-path>
STEP 4:
Restart the Ambari server:
ambari-server restart
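Here is a rough end-to-end sketch of the same steps as a single script, assuming the default Ambari configuration paths; the host, port, and alias are placeholders you need to substitute for your environment, and keytool will prompt for the truststore password from Step 2.

```bash
#!/usr/bin/env bash
# Sketch: import the Ambari server certificate into the Ambari truststore.
# AMBARI_HOST, AMBARI_HTTPS_PORT, and the alias are assumptions/placeholders.
set -euo pipefail

AMBARI_HOST="ambari.example.com"
AMBARI_HTTPS_PORT="8443"
CERT=/tmp/ambari_certificate.crt

# 1) Fetch the server certificate
echo | openssl s_client -showcerts -connect "${AMBARI_HOST}:${AMBARI_HTTPS_PORT}" 2>&1 \
  | sed -n '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > "${CERT}"

# 2) Read the truststore path from ambari.properties
TRUSTSTORE=$(grep '^ssl.trustStore.path=' /etc/ambari-server/conf/ambari.properties | cut -d= -f2)

# 3) Import the certificate (keytool prompts for the truststore password)
keytool -importcert -alias ambari-server -file "${CERT}" -keystore "${TRUSTSTORE}"

# 4) Restart Ambari
ambari-server restart
```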
10-11-2021
08:30 AM
To resolve the issue, import the Ambari certificate into the Ambari truststore. To import the certificate, do the following:
STEP 1:
Get the certificate from the Ambari server:
echo | openssl s_client -showcerts -connect <AMBARI_HOST>:<AMBARI_HTTPS_PORT> 2>&1 | sed --quiet '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/ambari_certificate.crt
STEP 2:
Get the path of the Ambari truststore and the truststore password from the Ambari properties:
cat /etc/ambari-server/conf/ambari.properties | grep truststore
As per your ambari.properties, the path and password will look like this:
ssl.trustStore.password=<value from your ambari.properties file>
ssl.trustStore.path=/etc/ambari-server/conf/ambari-server-truststore
STEP 3:
Import the certificate into the truststore found in STEP 2:
keytool -importcert -file /tmp/ambari_certificate.crt -keystore <truststore-path>
STEP 4:
Restart the Ambari server:
ambari-server restart
10-11-2021
08:25 AM
To resolve the issue, import the Ambari certificate into the Ambari truststore. To import the certificate, do the following:
STEP 1:
Get the certificate from the Ambari server:
echo | openssl s_client -showcerts -connect <AMBARI_HOST>:<AMBARI_HTTPS_PORT> 2>&1 | sed --quiet '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/ambari_certificate.crt
STEP 2:
Get the path of the Ambari truststore and the truststore password from the Ambari properties:
cat /etc/ambari-server/conf/ambari.properties | grep truststore
As per your ambari.properties, the path and password will look like this:
ssl.trustStore.password=<value from your ambari.properties file>
ssl.trustStore.path=/etc/ambari-server/conf/ambari-server-truststore
STEP 3:
Import the certificate into the truststore found in STEP 2:
keytool -importcert -file /tmp/ambari_certificate.crt -keystore <truststore-path>
STEP 4:
Restart the Ambari server:
ambari-server restart
10-03-2021
01:13 AM
I was facing a similar error and got it resolved by adding the Hadoop users to the /etc/passwd file. The error was: resource_management.core.exceptions.ExecutionFailed: Execution of 'usermod -G hadoop -g hadoop hive' returned 6. usermod: user 'hive' does not exist in /etc/passwd
Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-59009.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-59009.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', ''] >> File location: /etc/passwd >> Fix: add the hadoop group and the missing Hadoop users (e.g. hive).
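A minimal sketch of what adding the missing users can look like on the failing host, assuming the standard hadoop group and the hive service user from the usermod command above; match the UIDs/GIDs and the list of users to the rest of your cluster.

```bash
# Create the hadoop group and the hive service user if they are missing
# (names follow the usermod command in the error above; UIDs/GIDs are left unspecified here)
getent group hadoop >/dev/null || groupadd hadoop
getent passwd hive  >/dev/null || useradd -g hadoop -G hadoop hive

# Re-check that the entry now exists in /etc/passwd
getent passwd hive
```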
08-24-2021
11:30 PM
@Bishal as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
08-19-2021
05:12 AM
Hi, I used a command like: mysql -u root -p cloudera
08-06-2021
05:04 AM
1 Kudo
Just tried repo.hortsonwork.com; it is accessible. I have noticed that most of the links other than nexus-private.hortonworks.com and dev.hortonwork.com.s3.awsamazon.com are no longer accessible, as all the public repo links are no longer public. Even the new links shared for CDH/HDP require credentials to get the repos. Still looking for an alternative; if anyone has found a solution, please guide us on the way forward.
- Tags:
- download
- public-repo
07-05-2021
09:52 AM
Hi @srinivasp, I see that you posted similar concerns in multiple posts. I recommend you start a new thread and provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question.
07-05-2021
05:04 AM
Hi, I have a requirement where I need to create a Hive policy with two groups: one group with "ALL" permissions for a user "x", and a second group with "select" permission for a user "y". I have created the policy through the REST API with one group with "all" permissions, but how do I specify the second group with "select" permission in the same create-policy call? Thanks in advance! Srini Podili
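A rough sketch of how two policy items can be expressed in a single Ranger policy payload, assuming the Ranger v2 public REST API; the host, credentials, service name, database, group, and user names are placeholders, so verify the exact fields against your Ranger version's documentation.

```bash
# Hypothetical example (not from this thread): one Hive policy with two policy items,
# created via the Ranger v2 public REST API. Adjust host, credentials, service and names.
curl -u admin:admin -H 'Content-Type: application/json' \
  -X POST 'http://ranger-host:6080/service/public/v2/api/policy' -d '{
    "service": "my_hive_service",
    "name": "two_group_policy",
    "resources": {
      "database": { "values": ["mydb"] },
      "table":    { "values": ["*"] },
      "column":   { "values": ["*"] }
    },
    "policyItems": [
      { "groups": ["group_x"], "users": ["x"],
        "accesses": [ { "type": "all",    "isAllowed": true } ] },
      { "groups": ["group_y"], "users": ["y"],
        "accesses": [ { "type": "select", "isAllowed": true } ] }
    ]
  }'
```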
06-15-2021
02:17 AM
@KPG1 as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
06-08-2021
02:31 PM
Hi @Winne, Your question went into a thread that was over three years old. You would have a better chance of receiving a prompt and satisfactory resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
06-02-2021
01:50 PM
@dmharshit Please have a look at my other post on keytabs: https://community.cloudera.com/t5/Support-Questions/Headless-Keytab-Vs-User-Keytab-Vs-Service-Keytab/m-p/175277/highlight/true#M137536 Having said that, you have switched to the hive user and are attempting to use the hdfs headless keytab; that's not possible. As the root user, run the following steps:
# su - hdfs
[hdfs@server-hdp ~]$ kinit -kt /etc/security/keytabs/hdfs.headless.keytab
Now you should have a valid ticket:
[hdfs@server-hdp ~]$ klist
Happy hadooping!!!
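As a side note, kinit with -kt usually also needs the principal stored in the keytab; here is a small sketch, assuming the standard HDP keytab location (the hdfs-mycluster@EXAMPLE.COM principal is only a placeholder):

```bash
# List the principals contained in the headless keytab
klist -kt /etc/security/keytabs/hdfs.headless.keytab

# Then kinit with the principal shown there (placeholder principal below)
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster@EXAMPLE.COM

# Verify the ticket
klist
```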
04-15-2021
09:30 AM
I followed the directions, but it is still not working 😞
04-09-2021
02:37 AM
Hi Team. Please find the below is application log details. Please do needful on this issue 2021-04-08 11:16:27,499 INFO - [main:] ~ Loading atlas-application.properties from file:/root/apache-atlas-2.0.0/conf/atlas-application.properties (ApplicationProperties:123) 2021-04-08 11:16:27,509 INFO - [main:] ~ Using graphdb backend 'janus' (ApplicationProperties:273) 2021-04-08 11:16:27,510 INFO - [main:] ~ Using storage backend 'hbase2' (ApplicationProperties:284) 2021-04-08 11:16:27,510 INFO - [main:] ~ Using index backend 'solr' (ApplicationProperties:295) 2021-04-08 11:16:27,513 INFO - [main:] ~ Setting solr-wait-searcher property 'true' (ApplicationProperties:301) 2021-04-08 11:16:27,513 INFO - [main:] ~ Setting index.search.map-name property 'false' (ApplicationProperties:305) 2021-04-08 11:16:27,513 INFO - [main:] ~ Property (set to default) atlas.graph.cache.db-cache = true (ApplicationProperties:318) 2021-04-08 11:16:27,513 INFO - [main:] ~ Property (set to default) atlas.graph.cache.db-cache-clean-wait = 20 (ApplicationProperties:318) 2021-04-08 11:16:27,513 INFO - [main:] ~ Property (set to default) atlas.graph.cache.db-cache-size = 0.5 (ApplicationProperties:318) 2021-04-08 11:16:27,514 INFO - [main:] ~ Property (set to default) atlas.graph.cache.tx-cache-size = 15000 (ApplicationProperties:318) 2021-04-08 11:16:27,514 INFO - [main:] ~ Property (set to default) atlas.graph.cache.tx-dirty-size = 120 (ApplicationProperties:318) 2021-04-08 11:16:27,537 INFO - [main:] ~ ######################################################################################## Atlas Server (STARTUP) project.name: apache-atlas project.description: Metadata Management and Data Governance Platform over Hadoop build.user: root build.epoch: 1617878818196 project.version: 2.0.0 build.version: 2.0.0 vc.revision: release vc.source.url: scm:git:git://git.apache.org/atlas.git/atlas-webapp ######################################################################################## (Atlas:215) 2021-04-08 11:16:27,537 INFO - [main:] ~ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (Atlas:216) 2021-04-08 11:16:27,537 INFO - [main:] ~ Server starting with TLS ? false on port 21000 (Atlas:217) 2021-04-08 11:16:27,537 INFO - [main:] ~ <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< (Atlas:218) 2021-04-08 11:16:28,445 INFO - [main:] ~ No authentication method configured. Defaulting to simple authentication (LoginProcessor:102) 2021-04-08 11:16:28,591 WARN - [main:] ~ Unable to load native-hadoop library for your platform... using builtin-java classes where applicable (NativeCodeLoader:60) 2021-04-08 11:16:28,618 INFO - [main:] ~ Logged in user root (auth:SIMPLE) (LoginProcessor:77) 2021-04-08 11:16:29,102 INFO - [main:] ~ Not running setup per configuration atlas.server.run.setup.on.start. 
(SetupSteps$SetupRequired:189) 2021-04-08 11:16:53,718 WARN - [main:] ~ org.apache.solr.client.solrj.impl.Krb5HttpClientBuilder is configured without specifying system property 'java.security.auth.login.config' (Krb5HttpClientBuilder:142) 2021-04-08 11:16:54,047 INFO - [main:] ~ Failed to obtain graph instance, retrying 3 times, error: java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.solr.Solr6Index (AtlasGraphProvider:100) 2021-04-08 11:17:24,369 WARN - [main:] ~ org.apache.solr.client.solrj.impl.Krb5HttpClientBuilder is configured without specifying system property 'java.security.auth.login.config' (Krb5HttpClientBuilder:142) 2021-04-08 11:17:24,379 WARN - [main:] ~ Failed to obtain graph instance on attempt 1 of 3 (AtlasGraphProvider:118) java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.solr.Solr6Index at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:64) at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:476) at org.janusgraph.diskstorage.Backend.getIndexes(Backend.java:463) at org.janusgraph.diskstorage.Backend.<init>(Backend.java:148) at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1840) at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:138) at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:160) at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:131) at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:111) at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.getGraphInstance(AtlasJanusGraphDatabase.java:165) at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.getGraph(AtlasJanusGraphDatabase.java:263) at org.apache.atlas.repository.graph.AtlasGraphProvider.getGraphInstance(AtlasGraphProvider.java:52) at org.apache.atlas.repository.graph.AtlasGraphProvider.retry(AtlasGraphProvider.java:114) at org.apache.atlas.repository.graph.AtlasGraphProvider.get(AtlasGraphProvider.java:102) at org.apache.atlas.repository.graph.AtlasGraphProvider$$EnhancerBySpringCGLIB$$ddd7ec69.CGLIB$get$1(<generated>) at org.apache.atlas.repository.graph.AtlasGraphProvider$$EnhancerBySpringCGLIB$$ddd7ec69$$FastClassBySpringCGLIB$$35af08e.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358) at org.apache.atlas.repository.graph.AtlasGraphProvider$$EnhancerBySpringCGLIB$$ddd7ec69.get(<generated>) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162) at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1181) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1075) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:208) at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1066) at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835) at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741) at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:189) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1201) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1103) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:208) at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1066) at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835) at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741) at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:189) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1201) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1103) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) at org.springframework.aop.framework.autoproxy.BeanFactoryAdvisorRetrievalHelper.findAdvisorBeans(BeanFactoryAdvisorRetrievalHelper.java:92) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findCandidateAdvisors(AbstractAdvisorAutoProxyCreator.java:102) at org.springframework.aop.aspectj.autoproxy.AspectJAwareAdvisorAutoProxyCreator.shouldSkip(AspectJAwareAdvisorAutoProxyCreator.java:103) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessBeforeInstantiation(AbstractAutoProxyCreator.java:248) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInstantiation(AbstractAutowireCapableBeanFactory.java:1045) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.resolveBeforeInstantiation(AbstractAutowireCapableBeanFactory.java:1019) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:473) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543) at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) at org.apache.atlas.web.setup.KerberosAwareListener.contextInitialized(KerberosAwareListener.java:31) at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:843) at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:533) at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:816) at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:345) at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1404) at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1366) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778) at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:520)
03-30-2021
11:28 AM
@Jona Great article! May I ask, can you use "custom spark2-metrics-properties" in the Spark config instead of "advanced spark2-metrics-properties"?
03-22-2021
02:43 AM
Hi @Priya09, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
03-13-2021
10:31 AM
On my version (6.3.3), it is not found in CDH/lib/hadoop/lib, where it gets looked for, but out in CDH/lib64 for some reason. A symlink from hadoop/native out to lib64 would solve it: cloudera/parcels/CDH/lib64/libhdfs.so
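A minimal sketch of that symlink, assuming the parcel lives under /opt/cloudera/parcels and that the loader searches CDH/lib/hadoop/lib; the exact source and target paths are assumptions, so check the path reported in your own error message first.

```bash
# Confirm where the library actually is (location described in the post above)
ls -l /opt/cloudera/parcels/CDH/lib64/libhdfs.so

# Link it into the directory where it gets looked for (target dir is an assumption)
ln -s /opt/cloudera/parcels/CDH/lib64/libhdfs.so \
      /opt/cloudera/parcels/CDH/lib/hadoop/lib/libhdfs.so
```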
12-07-2020
12:36 AM
@jvlearn as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
11-25-2020
02:25 AM
I appreciate your answer. Could you please tell me what "No JVM present" means, and what the default location of the NameNode log file is? Thank you.
11-16-2020
09:24 AM
@jsensharma Getting an error while running the query given by you. The response is an HTML error page titled "Runtime Error"; the rest of the page is only the server's generic error-page markup and styling.
11-10-2020
08:27 AM
In my case, I had a wrong `hadoop.registry.dns.bind-address` value in /etc/hadoop/conf/yarn-site.xml. It should have been 0.0.0.0, but there was another address.
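For anyone checking the same thing, a quick sketch of how to verify the value; the expected XML snippet in the comments is illustrative of the setting described above.

```bash
# Show the current setting and the line after it (the <value> element)
grep -A1 'hadoop.registry.dns.bind-address' /etc/hadoop/conf/yarn-site.xml

# Expected content, as described above:
#   <property>
#     <name>hadoop.registry.dns.bind-address</name>
#     <value>0.0.0.0</value>
#   </property>
```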