Member since: 10-20-2016
Posts: 106
Kudos Received: 0
Solutions: 0
12-31-2019
05:13 AM
@senthh It looks like Spark does not recognize the HDFS nameservice:

raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u'java.net.UnknownHostException: datalakedev'
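What I plan to check next (a sketch only, assuming the Hadoop client configs live under /etc/hadoop/conf and that Spark reads /etc/spark2/conf):

# Is the nameservice defined for the Hadoop clients on this node?
hdfs getconf -confKey dfs.nameservices

# Does the hdfs-site.xml that Spark sees also define it?
grep -l datalakedev /etc/hadoop/conf/hdfs-site.xml /etc/spark2/conf/hdfs-site.xml 2>/dev/null

# If /etc/spark2/conf has no (or an outdated) hdfs-site.xml, Spark cannot resolve the
# logical nameservice and falls back to DNS, which gives UnknownHostException: datalakedev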
12-31-2019
04:56 AM
@senthh I exported the same, but the issue persists.

>>> sqlContext.sql('select * from project.relationship_type_ext limit 10').show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/context.py", line 353, in sql
    return self.sparkSession.sql(sqlQuery)
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/session.py", line 716, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u'java.net.UnknownHostException: datalakedev'
>>>
Traceback (most recent call last):
  File "/usr/hdp/current/spark2-client/python/pyspark/context.py", line 256, in signal_handler
    raise KeyboardInterrupt()
KeyboardInterrupt
>>>
[1]+  Stopped                 pyspark
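For what it's worth, one thing I may try next is passing the HA client settings to pyspark directly (a sketch only; nn1host/nn2host and the port are placeholders for the real NameNode addresses):

pyspark \
  --conf spark.hadoop.dfs.nameservices=datalakedev \
  --conf spark.hadoop.dfs.ha.namenodes.datalakedev=nn1,nn2 \
  --conf spark.hadoop.dfs.namenode.rpc-address.datalakedev.nn1=nn1host:8020 \
  --conf spark.hadoop.dfs.namenode.rpc-address.datalakedev.nn2=nn2host:8020 \
  --conf spark.hadoop.dfs.client.failover.proxy.provider.datalakedev=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider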
12-31-2019
04:34 AM
@senthh I have placed hive-site.xml in /etc/spark2/conf, but I don't know how to map the other configuration files. Also, it works fine in the test environment; only dev has this problem.
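From what I have read, the usual approach seems to be making the HDFS client configs visible to Spark alongside hive-site.xml (a sketch, assuming the cluster client configs are under /etc/hadoop/conf):

# Copy (or symlink) the client configs into the Spark conf dir
cp /etc/hadoop/conf/core-site.xml /etc/spark2/conf/
cp /etc/hadoop/conf/hdfs-site.xml /etc/spark2/conf/

# Alternatively, point the shell at the Hadoop conf dir before launching pyspark/spark-sql
export HADOOP_CONF_DIR=/etc/hadoop/conf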
12-31-2019
01:57 AM
@Shelton The Hive table is not accessible from pyspark either. Could you please look into this?

>>> sqlContext.sql('select * from project.relationship_type_ext limit 10').show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/context.py", line 353, in sql
    return self.sparkSession.sql(sqlQuery)
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/session.py", line 716, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u'java.net.UnknownHostException: datalakedev'

describe on the same table does work, however:

>>> sqlContext.sql('describe table project.relationship_type_ext').show()
+--------------------+---------+-------+
|            col_name|data_type|comment|
+--------------------+---------+-------+
|                uuid|   string|   null|
|              source|   string|   null|
|        sourceobject|   string|   null|
|securityclassific...|   string|   null|
|timestampwithmill...|   string|   null|
|          activeflag|   string|   null|
|  versionofgenerator|   string|   null|
|          updated_by|   string|   null|
|          active_ind|   string|   null|
|relationship_type...|   string|   null|
|relationship_type...|   string|   null|
|relationship_type...|   string|   null|
|      horizontal_ind|   string|   null|
|          created_by|   string|   null|
|          created_on|   string|   null|
|          updated_on|   string|   null|
+--------------------+---------+-------+
12-30-2019
11:44 PM
Hi Team,
I am unable to access any Hive table from the spark-sql shell, although I can list the databases and tables from it.
It looks like spark-sql cannot find the HDFS nameservice. Kindly look into the error below.
(datalakedev is the HDFS nameservice.)
spark-sql> show tables;
19/12/31 02:39:46 INFO CodeGenerator: Code generated in 24.580491 ms
product    adjustment_type    false
product    adjustment_type_ext    false
product    co_financing_arrangement_type    false
product    co_financing_arrangement_type_ext    false
19/12/31 02:39:57 ERROR SparkSQLDriver: Failed in [select * from snapshot_table_list limit 10]
java.lang.IllegalArgumentException: java.net.UnknownHostException: datalakedev
  at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:445)
  at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132)
  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:353)
  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:287)
  at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:177)
  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
  at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
  at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
  at org.apache.spark.sql.execution.streaming.FileStreamSink$.ancestorIsMetadataDirectory(FileStreamSink.scala:68)
  at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$1.apply(InMemoryFileIndex.scala:61)
  at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$1.apply(InMemoryFileIndex.scala:61)
  at scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
  at scala.collection.immutable.List.foreach(List.scala:381)
  at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
  at scala.collection.TraversableLike$class.filterNot(TraversableLike.scala:267)
  at scala.collection.AbstractTraversable.filterNot(Traversable.scala:104)
  at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.<init>(InMemoryFileIndex.scala:61)
  at org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$9.apply(HiveMetastoreCatalog.scala:235)
  at org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$9.apply(HiveMetastoreCatalog.scala:233)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.hive.HiveMetastoreCatalog.org$apache$spark$sql$hive$HiveMetastoreCatalog$$inferIfNeeded(HiveMetastoreCatalog.scala:233)
  at org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6$$anonfun$7.apply(HiveMetastoreCatalog.scala:193)
  at org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6$$anonfun$7.apply(HiveMetastoreCatalog.scala:192)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6.apply(HiveMetastoreCatalog.scala:192)
  at org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$6.apply(HiveMetastoreCatalog.scala:185)
  at org.apache.spark.sql.hive.HiveMetastoreCatalog.withTableCreationLock(HiveMetastoreCatalog.scala:54)
  at org.apache.spark.sql.hive.HiveMetastoreCatalog.convertToLogicalRelation(HiveMetastoreCatalog.scala:185)
  at org.apache.spark.sql.hive.RelationConversions.org$apache$spark$sql$hive$RelationConversions$$convert(HiveStrategies.scala:212)
  at org.apache.spark.sql.hive.RelationConversions$$anonfun$apply$4.applyOrElse(HiveStrategies.scala:239)
  at org.apache.spark.sql.hive.RelationConversions$$anonfun$apply$4.applyOrElse(HiveStrategies.scala:228)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
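Checks that might help narrow this down (a sketch, assuming default HDP paths):

# Does the plain HDFS client resolve the nameservice?
hdfs dfs -ls hdfs://datalakedev/

# Which configuration directory is the Spark client actually using?
echo $SPARK_CONF_DIR
ls -l /etc/spark2/conf/core-site.xml /etc/spark2/conf/hdfs-site.xml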
Labels:
- Apache Hive
- Apache Spark
12-27-2019
03:17 AM
Hi Team,
I have been facing slowness in the NiFi Web UI for the last two months. As a workaround I restart the NiFi service, after which it works normally for a while and then starts responding slowly again. Kindly provide a permanent fix for this issue.
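In case it is heap pressure, my understanding is that one common adjustment is raising the NiFi JVM heap in bootstrap.conf (a sketch only; the values are illustrative, and on an Ambari-managed NiFi these are set through the NiFi configs in Ambari):

# bootstrap.conf
java.arg.2=-Xms4g
java.arg.3=-Xmx8g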
- Tags:
- NiFi
Labels:
- Apache NiFi
12-24-2019
03:15 AM
@Shelton I tried to set the attribute on the hadoop-yarn-nodemanager.pid file; however, the /var/run file system appears to be XFS, and per Red Hat the chattr command does not work on that file system. Please provide an alternate solution for this issue.

[root@w0lxdhdp05 yarn]# lsattr
lsattr: Inappropriate ioctl for device While reading flags on ./hadoop-yarn-nodemanager.pid
chattr: Inappropriate ioctl for device while reading flags on hadoop-yarn-nodemanager.pid

Please refer to this: https://access.redhat.com/solutions/184693
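Before looking for an alternative, it may be worth confirming what /var/run actually is on that node; on many systems it is a tmpfs mount (where lsattr/chattr report exactly this ioctl error) rather than XFS. A quick check:

df -T /var/run
stat -f -c %T /var/run    # prints the filesystem type, e.g. tmpfs or xfs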
12-23-2019
01:45 AM
I have created a new notebook in Zeppelin but I am unable to open it. Can someone help with this?
Attaching the screenshot
Labels:
- Apache Zeppelin
12-20-2019
05:00 AM
Hi Team,
I have installed the NiFi service in the existing HDP cluster and need to integrate it with AD, as the NiFi Web UI currently has no authentication. Can someone help with this?
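From the documentation, my understanding so far (a sketch only; the AD URL, bind DN, and search base below are placeholders, and on an Ambari-managed NiFi these files are edited through the NiFi configs in Ambari) is that NiFi must run over HTTPS and delegate logins to an ldap-provider in login-identity-providers.xml, which is then referenced from nifi.properties:

<!-- login-identity-providers.xml -->
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>
    <property name="Manager DN">cn=nifi-bind,ou=serviceaccounts,dc=example,dc=com</property>
    <property name="Manager Password">*****</property>
    <property name="Url">ldap://ad.example.com:389</property>
    <property name="User Search Base">ou=users,dc=example,dc=com</property>
    <property name="User Search Filter">sAMAccountName={0}</property>
    <property name="Identity Strategy">USE_USERNAME</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>

# nifi.properties
nifi.security.user.login.identity.provider=ldap-provider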
Labels:
- Apache NiFi
12-20-2019
03:53 AM
@Shelton I changed it to 644; however, after starting the NodeManager it reverts to 444.

Before:
-rw-r--r-- 1 yarn hadoop 6 Dec 20 05:00 hadoop-yarn-nodemanager.pid
After:
-r--r--r-- 1 yarn hadoop 6 Dec 20 05:00 hadoop-yarn-nodemanager.pid

I cannot find out why it changes back to 444 even though I set the permission manually.
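To find out what keeps resetting the mode, I am thinking of putting an audit watch on the file (a sketch, assuming auditd is available on the node):

auditctl -w /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid -p wa -k nm_pid
# restart the NodeManager, then see which process touched the file:
ausearch -k nm_pid --start recent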
12-20-2019
02:07 AM
@Shelton I tried the solution below, yet the pid file is still created with 444 permissions across multiple restarts.

-r--r--r-- 1 yarn hadoop 6 Dec 20 05:00 hadoop-yarn-nodemanager.pid

The issue above is still persisting:

resource_management.core.exceptions.ExecutionFailed: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/hdp/3.0.1.0-187/hadoop/libexec && /usr/hdp/3.0.1.0-187/hadoop-yarn/bin/yarn --config /usr/hdp/3.0.1.0-187/hadoop/conf --daemon start nodemanager' returned 1.
-bash: line 0: ulimit: core file size: cannot modify limit: Operation not permitted
/usr/hdp/3.0.1.0-187/hadoop/libexec/hadoop-functions.sh: line 1847: /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid: Permission denied
ERROR: Cannot write nodemanager pid /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid.
/usr/hdp/3.0.1.0-187/hadoop/libexec/hadoop-functions.sh: line 1866: /var/log/hadoop-yarn/yarn/hadoop-yarn-nodemanager-Hostname.org.out: Permission denied
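For reference, the errors point at /var/run/hadoop-yarn/yarn and /var/log/hadoop-yarn/yarn not being writable by the yarn user, so this is what I am verifying on the node (a sketch; it assumes yarn:hadoop is the correct owner and group, as in the listing above):

mkdir -p /var/run/hadoop-yarn/yarn /var/log/hadoop-yarn/yarn
chown -R yarn:hadoop /var/run/hadoop-yarn /var/log/hadoop-yarn
chmod 755 /var/run/hadoop-yarn/yarn /var/log/hadoop-yarn/yarn
rm -f /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid   # remove the stale read-only pid file before restarting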
12-19-2019
05:16 AM
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/nodemanager.py", line 102, in <module>
    Nodemanager().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 351, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/nodemanager.py", line 53, in start
    service('nodemanager', action='start')
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/service.py", line 93, in service
    Execute(daemon_cmd, user = usr, not_if = check_process)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/hdp/3.0.1.0-187/hadoop/libexec && /usr/hdp/3.0.1.0-187/hadoop-yarn/bin/yarn --config /usr/hdp/3.0.1.0-187/hadoop/conf --daemon start nodemanager' returned 1.
-bash: line 0: ulimit: core file size: cannot modify limit: Operation not permitted
/usr/hdp/3.0.1.0-187/hadoop/libexec/hadoop-functions.sh: line 1847: /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid: Permission denied
ERROR: Cannot write nodemanager pid /var/run/hadoop-yarn/yarn/hadoop-yarn-nodemanager.pid.
/usr/hdp/3.0.1.0-187/hadoop/libexec/hadoop-functions.sh: line 1866: /var/log/hadoop-yarn/yarn/hadoop-yarn-nodemanager
- Tags:
- node manager
- YARN
Labels:
- Apache YARN
12-19-2019
05:00 AM
I installed a DataNode and tried to start it via Ambari, but it throws an error. I checked the logs under /var/log/hadoop/hdfs and could not understand the issue. Has anyone faced this before?
  at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1417)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:500)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2782)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2690)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2732)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2876)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2900)
2019-12-19 07:42:35,127 INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@53812a9b{HTTP/1.1,[http/1.1]}{localhost:41704}
2019-12-19 07:42:35,127 INFO server.Server (Server.java:doStart(414)) - Started @3594ms
2019-12-19 07:42:35,131 INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@53812a9b{HTTP/1.1,[http/1.1]}{localhost:0}
2019-12-19 07:42:35,131 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@cb191ca{/,null,UNAVAILABLE}{/datanode}
2019-12-19 07:42:35,137 INFO datanode.DataNode (DataNode.java:shutdown(2134)) - Shutdown complete.
2019-12-19 07:42:35,137 ERROR datanode.DataNode (DataNode.java:secureMain(2883)) - Exception in secureMain
java.io.IOException: Problem starting http server
  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1165)
  at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:141)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:957)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1417)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:500)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2782)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2690)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2732)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2876)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2900)
Caused by: java.lang.NullPointerException
  at org.eclipse.jetty.util.IO.delete(IO.java:344)
  at org.eclipse.jetty.webapp.WebInfConfiguration.deconfigure(WebInfConfiguration.java:195)
  at org.eclipse.jetty.webapp.WebAppContext.stopContext(WebAppContext.java:1380)
  at org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:880)
  at org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:272)
  at org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:546)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:142)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:160)
  at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:73)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:142)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:160)
  at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:73)
  at org.eclipse.jetty.server.Server.doStop(Server.java:493)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1155)
  ... 9 more
2019-12-19 07:42:35,139 INFO util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: java.io.IOException: Problem starting http server
2019-12-19 07:42:35,143 INFO datanode.DataNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
Labels:
- Apache Ambari
12-10-2019
07:21 AM
Hi Team,
YARN is not allocating resources when users submit multiple queries. Currently I can see only two applications running, and when a user submits a third query, YARN does not allocate resources to it even though the queue still has capacity.
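For context, these are the Capacity Scheduler settings I am reviewing, since either the per-user limit or the AM resource share can hold an extra application in ACCEPTED even when the queue has headroom (a sketch; the queue name root.default and the values are only illustrative):

# capacity-scheduler.xml (managed via Ambari)
yarn.scheduler.capacity.root.default.user-limit-factor=2
yarn.scheduler.capacity.root.default.minimum-user-limit-percent=25
yarn.scheduler.capacity.maximum-am-resource-percent=0.5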
- Tags:
- YARN
Labels:
- Apache YARN
12-05-2019
12:18 AM
Hi Team,
I am getting a metastore exception while dropping a Hive external table. Can someone please look into this?
Error:
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Couldn't acquire the DB log notification lock because we reached the maximum # of retries: 5 retries. If this happens too often, then is recommended to increase the maximum number of retries on the hive.notification.sequence.lock.max.retries configuration :: Error executing SQL query "select "NEXT_EVENT_ID" from "NOTIFICATION_SEQUENCE" for update".)
INFO : Completed executing command(queryId=hive_20191205030920_435129bd-3f44-4ac0-8751-b811fc679a49); Time taken: 4.313 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Couldn't acquire the DB log notification lock because we reached the maximum # of retries: 5 retries. If this happens too often, then is recommended to increase the maximum number of retries on the hive.notification.sequence.lock.max.retries configuration :: Error executing SQL query "select "NEXT_EVENT_ID" from "NOTIFICATION_SEQUENCE" for update".) (state=08S01,code=1)
0: jdbc:hive2://w0lxdhdp02.ifc.org:2181,w0lxd>
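As the message itself suggests, I am considering raising the notification-lock retry settings in hive-site.xml via Ambari and restarting the Metastore/HiveServer2 (a sketch; the values are illustrative, and the sleep-interval property is my assumption of the companion setting):

hive.notification.sequence.lock.max.retries=10
hive.notification.sequence.lock.retry.sleep.interval=10s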
Labels:
- Apache Hive
11-28-2019
03:12 AM
@Shelton I can see some improvement in the UI after clearing the old logs from the DB.
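For anyone else hitting this, Ambari 2.7 ships a built-in purge that can do this kind of cleanup (a sketch; the cluster name and cut-off date are placeholders):

ambari-server db-purge-history --cluster-name MYCLUSTER --from-date 2019-06-01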
- Tags:
- shel
11-01-2019
12:45 AM
@Shelton I have tried everything, but I am still not able to access HDFS.
10-31-2019
05:00 AM
YARN is not utilizing the cluster resources and jobs are waiting in the ACCEPTED state.
- Tags:
- YARN
Labels:
- Apache YARN
10-31-2019
04:57 AM
The slowness is especially noticeable when clicking the service tabs, such as the HDFS configs, and it became normal again after an Ambari server restart. Kindly help me fix this issue.
- Tags:
- Ambari
10-31-2019
04:51 AM
The Ambari UI is running slow. Currently the cluster size is 6 nodes.
- Tags:
- Ambari
Labels:
- Apache Ambari
10-24-2019
05:59 AM
@Shelton I can't share the details here. Hope you understand.
10-24-2019
03:03 AM
@Shelton Kindly look into this.
10-23-2019
11:30 PM
@paras Please check the curl output above.