Member since: 04-25-2018
Posts: 19
Kudos Received: 0
Solutions: 0
12-31-2018
05:10 AM
Setting a quota will work; queries will fail with quota errors.
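In case it helps, here is a minimal sketch of setting quotas from the command line. This assumes the quota in question is an HDFS name/space quota on the table's warehouse directory; the path and sizes below are placeholders only.
+++++
# Set a name quota (max file/dir count) and a space quota on a directory
# (path and sizes are example values, not your actual ones)
hdfs dfsadmin -setQuota 100000 /user/hive/warehouse/mydb.db
hdfs dfsadmin -setSpaceQuota 500g /user/hive/warehouse/mydb.db

# Check the quotas and current usage
hdfs dfs -count -q -h /user/hive/warehouse/mydb.db

# Remove the quotas again if needed
hdfs dfsadmin -clrQuota /user/hive/warehouse/mydb.db
hdfs dfsadmin -clrSpaceQuota /user/hive/warehouse/mydb.db
+++++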
12-14-2018
03:23 AM
Do we have an option in CM to include the "concerning" health status in the alerts as well? I have already included services in "bad" status in the alert delivery.
Labels:
- Apache Impala
- Cloudera Manager
12-14-2018
02:47 AM
Hello rufusayeni, Thank you for the update. The issue is with only one user; others in the same group can access Navigator.
09-14-2018
08:21 AM
One of the user accounts is not able to access Cloudera Navigator. He is able to access Hue with the same credentials. Navigator throws the attached result, and the Cloudera SCM server logs report an authentication failure:
+++++
2018-09-14 15:56:48,378 INFO 169007312@scm-web-587:com.cloudera.server.web.cmf.CmfLdapAuthenticationProvider: LDAP/AD authentication failure for user
2018-09-14 15:56:48,379 INFO 169007312@scm-web-587:com.cloudera.server.web.cmf.AuthenticationFailureEventListener: Authentication failure for user: 'user' from
+++++
Role assignment is properly configured and all other accounts from the same group are able to access Navigator.
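For anyone hitting the same thing, one way to compare the failing account against a working one on the LDAP/AD side is an ldapsearch like the sketch below (the server URL, bind DN, base DN, and user names are placeholders, not our real values):
+++++
# Look up the affected account and its group memberships in AD
ldapsearch -LLL -H ldaps://ad.example.com:636 \
  -D "cn=bind_user,ou=service,dc=example,dc=com" -W \
  -b "dc=example,dc=com" \
  "(sAMAccountName=affected_user)" memberOf userAccountControl

# Repeat for a working account from the same group and diff the output
ldapsearch -LLL -H ldaps://ad.example.com:636 \
  -D "cn=bind_user,ou=service,dc=example,dc=com" -W \
  -b "dc=example,dc=com" \
  "(sAMAccountName=working_user)" memberOf userAccountControl
+++++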
Labels:
- Cloudera Navigator
09-04-2018
10:13 AM
Hello Harsh, Thank you for your reply. I was able to narrow down the cause. It was due to the membership of this user account in a specific group. Once we removed the user from that group, the issue was resolved.
08-07-2018
08:33 AM
- Do any of the outputs in the groups command you run return pure numeric results, instead of actual string names?
No.

- What's the exit code after you execute 'id -gn username' for the affected user? You may run 'echo $?' to grab exit code after the command.
$ id -gn user ; echo $?
1
$

- Please paste the full stack trace, which should include a trace of an IOException after the log message as an underlying 'Caused by'. This would explain the reason behind why the partial group resolution further fails.
+++++++
2018-08-07 15:17:35,638 WARN org.apache.sentry.provider.common.HadoopGroupMappingService: [HiveServer2-Handler-Pool: Thread-2934561]: Unable to obtain groups for <user>
java.io.IOException: No groups found for user <user>
        at org.apache.hadoop.security.Groups.noGroupsForUser(Groups.java:197)
        at org.apache.hadoop.security.Groups.getGroups(Groups.java:220)
        at org.apache.sentry.provider.common.HadoopGroupMappingService.getGroups(HadoopGroupMappingService.java:60)
        at org.apache.sentry.provider.common.ResourceAuthorizationProvider.getGroups(ResourceAuthorizationProvider.java:167)
        at org.apache.sentry.provider.common.ResourceAuthorizationProvider.doHasAccess(ResourceAuthorizationProvider.java:97)
        at org.apache.sentry.provider.common.ResourceAuthorizationProvider.hasAccess(ResourceAuthorizationProvider.java:91)
        at org.apache.sentry.binding.hive.authz.HiveAuthzBinding.authorize(HiveAuthzBinding.java:319)
        at org.apache.sentry.binding.hive.HiveAuthzBindingHook.filterShowDatabases(HiveAuthzBindingHook.java:907)
        at org.apache.sentry.binding.metastore.SentryMetaStoreFilterHook.filterDb(SentryMetaStoreFilterHook.java:131)
        at org.apache.sentry.binding.metastore.SentryMetaStoreFilterHook.filterDatabases(SentryMetaStoreFilterHook.java:59)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:1042)
        at sun.reflect.GeneratedMethodAccessor146.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:105)
        at com.sun.proxy.$Proxy19.getDatabases(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor146.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2034)
        at com.sun.proxy.$Proxy19.getDatabases(Unknown Source)
        at org.apache.hive.service.cli.operation.GetSchemasOperation.runInternal(GetSchemasOperation.java:59)
        at org.apache.hive.service.cli.operation.Operation.run(Operation.java:337)
        at org.apache.hive.service.cli.session.HiveSessionImpl.getSchemas(HiveSessionImpl.java:503)
        at org.apache.hive.service.cli.CLIService.getSchemas(CLIService.java:320)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.GetSchemas(ThriftCLIService.java:546)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$GetSchemas.getResult(TCLIService.java:1373)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$GetSchemas.getResult(TCLIService.java:1358)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:746)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
+++++++

- Is there any particular difference to this username vs. others? For ex., does it start with a special character instead of alpha-num, etc.?
Normal user account.
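For context, ShellBasedUnixGroupsMapping essentially shells out to the id command, so a non-zero exit from 'id -gn' like the one above is what surfaces as the IOException. A rough equivalent you can run manually on the affected node, sketched here under that assumption (the user name is a placeholder):
+++++
# Roughly what Hadoop's shell-based group mapping runs under the hood:
# primary group first, then the full group list.
id -gn affected_user; echo "primary group lookup exit code: $?"
id -Gn affected_user; echo "all groups lookup exit code: $?"

# Compare with what NSS/SSSD itself knows about the account
getent passwd affected_user
getent group | grep -w affected_user   # rough check of group membership entries
+++++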
07-24-2018
03:40 AM
INFO - 2018-07-24 05:16:24,511 INFO [main] retry.RetryInvocationHandler (RetryInvocationHandler.java:invoke(148)) - Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over namenode_host/IP after 1 fail over attempts. Trying to fail over immediately.
INFO - java.net.BindException: Problem binding to [scheduler_hostname/IP:0] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
INFO -   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
INFO -   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
INFO -   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
INFO -   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
INFO -   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:720)
INFO -   at org.apache.hadoop.ipc.Client.call(Client.java:1476)
INFO -   at org.apache.hadoop.ipc.Client.call(Client.java:1409)
INFO -   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
INFO -   at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
INFO -   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
INFO -   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO -   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
INFO -   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO -   at java.lang.reflect.Method.invoke(Method.java:606)
INFO -   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
INFO -   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
INFO -   at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
INFO -   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2123)
INFO -   at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1253)
INFO -   at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1249)
INFO -   at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
INFO -   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1249)
INFO -   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1417)
INFO - Caused by: java.net.BindException: Cannot assign requested address
INFO -   at sun.nio.ch.Net.connect0(Native Method)
INFO -   at sun.nio.ch.Net.connect(Net.java:465)
INFO -   at sun.nio.ch.Net.connect(Net.java:457)
INFO -   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
INFO -   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
INFO -   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
INFO -   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
INFO -   at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
INFO -   at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:714)
INFO -   at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
INFO -   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1525)
INFO -   at org.apache.hadoop.ipc.Client.call(Client.java:1448)
INFO -   ... 17 more
07-24-2018
03:37 AM
Hello, We are seeing the below error for some job failures:
+++++++
INFO - java.net.BindException: Problem binding to [hostname/IP:0] java.net.BindException: Cannot assign requested address;
+++++++
As per the Apache wiki:
++++++++++
If the port is "0", then the OS is looking for any free port - so the port-in-use and port-below-1024 problems are highly unlikely to be the cause of the problem. Hostname confusion and network setup are the likely causes.
++++++++++
The workflow job scheduler hostname is mentioned in the error above, and this happens during an HDFS command execution step. Any idea why it is happening?
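In case it narrows things down, these are the kinds of checks that usually apply when the port is 0, i.e. the bind is failing on the source address rather than on a specific port (hostnames below are placeholders):
+++++
# Does the scheduler hostname resolve to an address that is actually local?
hostname -f
getent hosts "$(hostname -f)"
ip addr show          # the resolved IP should be present on a local interface

# Or is the host running low on ephemeral ports / stuck connections?
cat /proc/sys/net/ipv4/ip_local_port_range
ss -s                 # summary of socket states (TIME_WAIT counts etc.)
+++++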
Labels:
- HDFS
07-23-2018
02:47 AM
Hello, I have the following settings for the Navigator metadata purge:
+++++
HDFS entities deleted more than 60 days ago will be purged
Select operations older than 60 days will be purged
The Metadata and Lineage purge is set up to run every Saturday at 12:00 AM.
+++++
However, I am not seeing any entries for completed purges. I have "nav.purge.enabled" set to true in the configuration, so I would expect the recent purge details to appear in the completed section. Any idea?
Labels:
- Cloudera Navigator
07-12-2018
08:39 AM
Hi, I am getting the same outputs on my NameNodes as well.
# groups <user ID>
Returns the proper group mapping.
# hdfs groups <user ID>
No groups returned.
This is happening only for a specific user account, and we are using ShellBasedUnixGroupsMapping. Sample log:
++++++++
org.apache.hadoop.security.ShellBasedUnixGroupsMapping: unable to return groups for user ID
PartialGroupNameException: can't execute the shell command to get the list of group id for user 'ID'
        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:228)
+++++++
07-11-2018
06:38 AM
Thanks, Harsh, for your reply. I am executing this from a gateway node. I am using SSSD and am able to fetch the right groups using the "groups <ID>" command. However, "hdfs groups" is not showing any groups. This is the same when checked from other nodes in the cluster as well, and it is happening to only one particular user.
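For completeness, these are the checks I would run on the NameNode hosts, on the assumption that with ShellBasedUnixGroupsMapping the resolution for "hdfs groups" happens on the NameNode side rather than on the gateway (the user name is a placeholder):
+++++
# On each NameNode host:
getent passwd problem_user        # is the account visible to NSS/SSSD there?
id -gn problem_user; echo $?      # a non-zero exit here breaks Hadoop's group mapping
id -Gn problem_user               # full group list

# If SSSD is caching a stale entry, refresh it for that user
sss_cache -u problem_user

# Then re-check what HDFS resolves
hdfs groups problem_user
+++++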
07-10-2018
01:55 PM
Hello, I have one user ID for which "hdfs groups <ID>" does not return any groups. However, "groups <ID>" gives the proper group mapping. Any thoughts?
Labels:
- HDFS
06-21-2018
08:08 AM
Also, if I use the MEM_LIMIT query option while running a query, will it bypass the "Max Memory" set in the admission control settings? For example, I have set Max Memory to 400 GB and use a MEM_LIMIT of 450 GB while running the query. Another question: if MEM_LIMIT is set at the pool level, the number of queries that can be executed will be reduced, right? Since the MEM_LIMIT amount of RAM will be reserved for each query.
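For reference, a per-query override can be tested quickly from impala-shell before changing anything at the pool level; this is only a sketch, and the host, table, and values below are examples, not our real ones:
+++++
# Run one query with an explicit per-query memory limit
impala-shell -i impalad_host.example.com \
  -q "SET MEM_LIMIT=450g; SELECT count(*) FROM my_db.my_table;"

# In an interactive session, SET MEM_LIMIT=0 afterwards should put the
# session back to the default (no per-query override).
+++++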
06-21-2018
06:10 AM
Thanks, Tim, for your reply. However, if set, Impala requests this amount of memory from each node, and the query does not proceed until that much memory is available. This can cause query failures, since the memory required varies from query to query. A query can fail if the specified memory is not available on the nodes, even if the query itself needs less than that.
06-20-2018
07:43 AM
No. I have only set "Max Memory". The Cloudera doc says:
++++++
Note: If you specify Max Memory for an Impala dynamic resource pool, you must also specify the Default Query Memory Limit. Max Memory relies on the Default Query Memory Limit to produce a reliable estimate of overall memory consumption for a query.
++++++
Is this what you meant?
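To make the note concrete with made-up numbers (these are assumptions, not our actual values): if Max Memory for the pool is 460 GB, the pool runs on 10 Impala nodes, and the Default Query Memory Limit is 4 GB, then admission control budgets roughly 4 GB x 10 nodes = 40 GB per query, so it would admit about 460 / 40 ≈ 11 queries at a time and queue the rest. Without a Default Query Memory Limit it has to fall back on planner estimates, which is what makes the Max Memory check alone unreliable.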
06-15-2018
08:52 AM
We have configured Impala admission control with a memory limit of 460 GB for a specific pool. However, we have noticed that a specific query was using far more memory than this.
++++++++++
SELECT
User: xxxx
Database: default
Query Type: QUERY
Coordinator:
Duration: 75.8m
Query Status: Memory limit exceeded
Admission Result: Admitted immediately
Admission Wait Time: 0ms
Aggregate Peak Memory Usage: 879.5 GiB
Estimated per Node Peak Memory: 2.2 GiB
HDFS Bytes Read: 105.7 MiB
Memory Accrual: 367 GiB hours
Memory Spilled: 38.1 GiB
Node with Peak Memory Usage: xxxx
Out of Memory: true
Per Node Peak Memory Usage: 102.6 GiB
Pool: root.impalaxxxxpool
Query State: EXCEPTION
Threads: CPU Time: 117.46s
++++++
Ideally, the query should fail once the aggregate memory for that pool crosses 460 GB, but here it seems it failed only once the total cluster memory was exhausted. Please advise.
Labels:
- Apache Impala
06-13-2018
03:20 AM
Is this active-standby transition logged somewhere in the logs? I am seeing lots of the below entries:
+++++
PriviledgedActionException as:user (auth:TOKEN) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
+++++
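For what it's worth, a quick way to see the current HA state and find the transition events (the nn1/nn2 service IDs and log paths below are placeholders; yours come from dfs.ha.namenodes.<nameservice> and your own log directory):
+++++
# Which NameNode is currently active/standby?
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Transition/failover events are logged by the NameNode and the
# Failover Controller (ZKFC) roles, e.g. on a CM-managed cluster:
grep -i "transition" /var/log/hadoop-hdfs/*NAMENODE*.log.out
grep -i "failover"   /var/log/hadoop-hdfs/*FAILOVERCONTROLLER*.log.out
+++++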
05-25-2018
05:10 AM
Hello Tim, Thank you for your help in this thread. Yes, for now I have set the scratch limit for the specific resource pool. I set it to zero to prevent spill-to-disk, and created a trigger to test whether it is working or not:
+++++
IF (SELECT queries_spilled_memory_rate WHERE serviceName=$SERVICENAME AND max(queries_spilled_memory_rate) > 1) DO health:concerning
+++++
Ideally, the trigger should not fire since the scratch limit is set to zero. Please advise.
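As a sanity check on the scratch-limit change (this is only a sketch; the host, table, and values below are placeholders): with the scratch limit at zero, a query that would normally spill should now fail with a memory error instead of spilling, and the "Memory Spilled" counter / queries_spilled_memory_rate metric for the pool should stay flat.
+++++
# Force the per-session options and re-run a query that previously spilled
impala-shell -i impalad_host.example.com -q "
  SET SCRATCH_LIMIT=0;
  SET MEM_LIMIT=1g;    -- deliberately small, example value only
  SELECT col, count(*) FROM my_db.big_table GROUP BY col ORDER BY 2 DESC;
"
+++++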