Member since: 06-10-2017
Posts: 39
Kudos Received: 1
Solutions: 3
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 620 | 02-20-2020 11:15 AM
 | 2763 | 06-11-2018 02:04 PM
 | 1846 | 06-06-2018 05:22 PM
12-13-2022
07:17 AM
Thanks for the responses. This is an old one and was resolved long back.
11-10-2021
09:23 AM
2021-11-02 09:27:36,555 WARN org.apache.hive.common.util.RetryUtilities$ExponentiallyDecayingBatchWork: [HiveServer2-Background-Pool: Thread-180]: Exception thrown while processing using a batch size 15
org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Expecting a partition with name extract_date=2018-02-15, but metastore is returning a partition with name extract_date=2018-02-15 .)
    at org.apache.hadoop.hive.ql.metadata.Hive.createPartitions(Hive.java:2201) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask$1.execute(DDLTask.java:2020) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask$1.execute(DDLTask.java:1999) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.common.util.RetryUtilities$ExponentiallyDecayingBatchWork.run(RetryUtilities.java:93) [hive-common-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask.createPartitionsInBatches(DDLTask.java:2027) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1918) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:413) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_231]
    at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_231]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.2.1.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_231]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_231]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_231]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_231]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_231]
Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Expecting a partition with name extract_date=2018-02-15, but metastore is returning a partition with name extract_date=2018-02-15 .
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$add_partitions_req_result$add_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java:64399) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$add_partitions_req_result$add_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java:64358) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$add_partitions_req_result.read(ThriftHiveMetastore.java:64281) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_add_partitions_req(ThriftHiveMetastore.java:1819) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.add_partitions_req(ThriftHiveMetastore.java:1806) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
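A hedged observation on the trace above: the two partition names in the MetaException look identical except that the one the metastore returns appears to end in a stray character before the closing period, which suggests a partition directory carrying trailing whitespace in its name. A minimal sketch to hunt for such directories; the table location is a placeholder:

```bash
# Placeholder location; substitute the real table path.
LOC=/user/hive/warehouse/mydb.db/mytable

# A listing line that ends in whitespace points at a partition directory
# whose name carries a trailing space, which MSCK REPAIR trips over.
hdfs dfs -ls "$LOC" | grep -E ' $' && echo "found trailing-space partition dirs"

# Hypothetical fix: rename the offending directory to the clean name,
# then re-run MSCK REPAIR TABLE on the affected table.
hdfs dfs -mv "$LOC/extract_date=2018-02-15 " "$LOC/extract_date=2018-02-15"
```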
11-09-2021
08:13 AM
Hi, we recently upgraded the cluster from CDH 5.16.2 to CDH 6.2.1, and since the upgrade MSCK REPAIR in Hive has been failing for random partitions on random tables. Does anyone have any insights on how to fix this issue? Thanks
Labels:
- Apache Hive
01-07-2021
01:26 PM
Hi All, the YARN RM UI and the History Server are throwing java.lang.IllegalArgumentException when we try to launch them. Java version: 1.8.0_241. CDH: 6.2.1. Any suggestions on how to overcome this error?
Labels:
- Apache YARN
11-30-2020
11:47 AM
Recently we added a few jars to the AUX jar location in our environments. I can see the desired functions from Hive, and these functions are accessible to all users who connect to Hadoop. Is there any way to restrict the newly added functions (the UDF jar was added on all Hive and HS2 servers) to a smaller set of users instead of everyone, using roles?
Current env: CDH 6.2.1.
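A hedged sketch of one route, assuming Sentry is the authorization service on this CDH 6 cluster: Sentry can gate UDF usage through a privilege on the jar's URI granted to a role, so only groups holding that role can reference the functions. The role, group, and jar path below are hypothetical.

```bash
# Run as a Sentry admin; JDBC_URL, role name, group, and jar path are placeholders.
beeline -u "$JDBC_URL" -e "
CREATE ROLE udf_users;
GRANT ROLE udf_users TO GROUP analysts;
-- The privilege on the jar's URI is what lets role members reference the UDFs.
GRANT ALL ON URI 'file:///opt/hive/aux_jars/my_udfs.jar' TO ROLE udf_users;
"
```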
06-29-2020
08:56 AM
Hi Everyone, we are currently on CDH 5.16.2 and planning to upgrade to 6.2.1. I'm searching for the notable changes between the current version and the new one, but cannot find any. Can anyone point out the notable changes from 5.16.2 to 6.2.1? Does CDH 6.2.x support erasure coding? Thanks, RV.
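On the erasure coding question: CDH 6 is based on Hadoop 3, which ships HDFS erasure coding. A minimal sketch of checking and applying a policy after the upgrade; the path and policy below are placeholders.

```bash
# List the erasure coding policies the cluster knows about.
hdfs ec -listPolicies

# Apply a policy to a directory (cold data is the usual candidate);
# files written there afterwards are erasure-coded instead of 3x-replicated.
hdfs ec -setPolicy -path /data/cold -policy RS-6-3-1024k
```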
05-20-2020
12:39 PM
Thanks @StevenOD. We have a similar layout, but more jobs are being added to the cluster every day, and as a result there are some resource constraints. We are not expanding the cluster this year, hence searching for other routes like adding NMs on top of the RM and NN. Is it good to add NMs on the master nodes where the NN and RM are hosted? Thanks, Raghu.
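A hedged sketch of how colocation is usually kept safe if you do go that route: put the master-host NodeManagers in their own configuration group and cap their resources so the NN/RM keep their headroom. The numbers below are placeholder values for a 512 GB node, not recommendations.

```bash
# Hypothetical yarn-site overrides for a NodeManager config group that
# contains only the master hosts (in CM: a separate role group):
#
#   yarn.nodemanager.resource.memory-mb = 131072   # ~128 GB to YARN;
#                                                  # the rest stays with NN/RM/OS
#   yarn.nodemanager.resource.cpu-vcores = 16      # likewise a reduced vcore cap
```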
05-19-2020
08:51 PM
Hi All, is it advisable to have both the ResourceManager and NodeManager on the same nodes? The cluster has 51 nodes and each node has 512 GB of memory. Thanks, Raghu.
Labels:
- Apache YARN
02-20-2020
11:15 AM
@venkatsambath Thanks for your inputs on this. I built the chart, and the page you redirected me to provided extra metrics.
02-20-2020
08:10 AM
Hi all,
Does anyone know how to create a timeseries graph for the HttpFS service from Cloudera Manager?
Currently we have the HttpFS service on 3 servers and we need to build a graph on top of it. Is there any query I can use to create such a graph?
Thanks,
Raghu.
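A hedged sketch: Cloudera Manager's Chart Builder (Charts > Chart Builder) takes a tsquery, and HTTPFS is a role type of the HDFS service, so something along these lines should plot all three HttpFS instances on one chart. The metric names are assumptions; pick real ones from the list CM auto-completes.

```
-- One line per HttpFS role instance; swap in whichever metrics you need.
SELECT cpu_user_rate, mem_rss WHERE roleType = HTTPFS
```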
Labels:
- Cloudera Manager
01-17-2020
06:32 AM
Thanks Rohit, I tried both options but neither helped. Apparently a Cloudera engineer is saying that this option (i.e., connecting to Hive via ZooKeeper) is not supported in the CDH 5.16.2 version. Is that true?

beeline -u "jdbc:hive2://ZK1:2181,ZK2:2181,ZK3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hive_zookeeper_namespace_hive;transportMode=http;httpPath=cliservice;"
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
scan complete in 1ms
Connecting to jdbc:hive2://ZK1:2181,ZK2:2181,ZK3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hive_zookeeper_namespace_hive;transportMode=http;httpPath=cliservice;
20/01/17 09:27:26 [main]: INFO jdbc.HiveConnection: Failed to connect to null:0
(the same INFO line repeats 14 more times at 09:27:27)
Error: Could not open client transport for any of the Server URI's in ZooKeeper: Unable to read HiveServer2 configs from ZooKeeper (state=08S01,code=0)
Beeline version 1.1.0-cdh5.16.2 by Apache Hive

beeline> beeline -u "jdbc:hive2://ZK1:2181,ZK2:2181,ZK3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hive_zookeeper_namespace_hive;"
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
scan complete in 1ms
Connecting to jdbc:hive2://ZK1:2181,ZK2:2181,ZK3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hive_zookeeper_namespace_hive;
java.lang.NullPointerException
Beeline version 1.1.0-cdh5.16.2 by Apache Hive
0: jdbc:hive2://ZK1:2181, (closed)>

Thanks,
Raghu.
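A hedged aside on the URL above: zooKeeperNamespace must match the znode HiveServer2 actually registered under, and "hive_zookeeper_namespace_hive" looks like the name of the CM configuration property rather than its value (commonly "hiveserver2"). A sketch to check what is really in ZooKeeper; hosts are placeholders:

```bash
# List the top-level znodes and look for the HiveServer2 discovery root.
zookeeper-client -server ZK1:2181 ls / | tr ',' '\n' | grep -i hive

# Then retry discovery with the namespace that actually exists.
beeline -u "jdbc:hive2://ZK1:2181,ZK2:2181,ZK3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
```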
01-15-2020
06:50 AM
Thanks Lewis, but it didn't work!

Caused by: java.lang.IllegalArgumentException: A HostProvider may not be empty!
    at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:72)
    at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
    at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
    at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
    at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
    at org.apache.curator.HandleHolder.internalClose(HandleHolder.java:128)
    at org.apache.curator.HandleHolder.closeAndClear(HandleHolder.java:71)
    at org.apache.curator.ConnectionState.close(ConnectionState.java:114)
    ... 30 more
Error: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper (state=,code=0)
Beeline version 1.1.0-cdh5.16.2 by Apache Hive
beeline>
01-15-2020
06:15 AM
Hi,
I'm trying to connect to Hive via ZooKeeper, and I'm getting a java NullPointerException with the beeline cmd. Currently using CDH 5.16.2.
I have a valid Kerberos ticket:

beeline --verbose=true -u "jdbc:hive2://ZK1:2181,ZK2:2181,ZK3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveServer2;transportMode=binary;httpPath=cliservice;"
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
issuing: !connect jdbc:hive2://ZK1:2181,ZK2:2181,ZK3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveServer2;transportMode=binary;httpPath=cliservice; '' [passwd stripped]
scan complete in 2ms
Connecting to jdbc:hive2://ZK1:2181,ZK2:2181,ZK3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveServer2;transportMode=binary;httpPath=cliservice;
java.lang.NullPointerException
    at org.apache.thrift.transport.TSocket.open(TSocket.java:209)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:266)
    at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
    at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:168)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:146)
    at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:211)
    at org.apache.hive.beeline.Commands.connect(Commands.java:1529)
    at org.apache.hive.beeline.Commands.connect(Commands.java:1424)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
    at org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1139)
    at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1178)
    at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:818)
    at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:898)
    at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:518)
    at org.apache.hive.beeline.BeeLine.main(BeeLine.java:501)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:226)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:141)
Beeline version 1.1.0-cdh5.16.2 by Apache Hive
0: jdbc:hive2://ZK1:2181, (closed)>
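A hedged isolation step: connect to one HiveServer2 directly, bypassing discovery. If this works, the problem sits in the ZooKeeper namespace/znodes rather than in HS2 or Kerberos. The host and realm below are placeholders.

```bash
# Direct (non-discovery) connection; with a valid ticket, the Kerberos
# principal of the HS2 service must be supplied in the URL.
beeline -u "jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM"
```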
08-21-2019
11:35 AM
Hi All,
New to Presto. Can anyone help with documentation on how to install Presto as an alternative to Hive in a CDH cluster, using CM or the CLI? We have a 50-node cluster.
TIA
Labels:
- Cloudera Manager
07-27-2018
05:53 PM
@Jay Kumar SenSharma Will it work for Postgres too? Thanks
06-11-2018
02:04 PM
Installed MariaDB and followed this doc for manually installing Hive using an existing database: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-administration/content/using_hive_with_mysql.html
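For reference, the linked doc essentially boils down to registering the JDBC driver with Ambari and pre-creating the Hive database and user. A condensed sketch, with placeholder credentials:

```bash
# Register the MySQL/MariaDB JDBC driver with Ambari.
yum install -y mysql-connector-java
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

# Pre-create the Hive metastore database and user (placeholder password).
mysql -u root -p -e "
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hivepassword';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;"
```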
06-11-2018
01:35 PM
Hi all, I have created a new 3-node cluster. I want to apply all the prod configs to the new cluster. Is there any way to deploy the prod configs on the newly built cluster using an Ambari blueprint?
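A hedged sketch of the blueprint route, using Ambari's REST API: export the prod cluster's configuration as a blueprint, register it with the new Ambari, then create the cluster from it with a host-mapping template. Hosts, cluster names, and credentials below are placeholders.

```bash
# 1) Export the prod cluster as a blueprint (configs included).
curl -u admin:admin -H 'X-Requested-By: ambari' \
  'http://prod-ambari:8080/api/v1/clusters/PROD?format=blueprint' -o prod_blueprint.json

# 2) Register the blueprint on the new Ambari server.
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d @prod_blueprint.json 'http://new-ambari:8080/api/v1/blueprints/prod-copy'

# 3) Create the new cluster from the blueprint plus a host-mapping file
#    (hostmapping.json maps the blueprint's host groups to the 3 new nodes).
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d @hostmapping.json 'http://new-ambari:8080/api/v1/clusters/NEWCLUSTER'
```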
Labels:
- Apache Ambari
06-07-2018
01:06 PM
@Jay Kumar SenSharma Jay, thanks for the heads-up on this. I tried the above cmd and am still facing the conflict. I installed MariaDB as an option earlier.

Loaded plugins: enabled_repos_upload, langpacks, package_upload, product-id, search-disabled-repos, subscription-manager
Examining mysql57-community-release-el7-8.noarch.rpm: mysql57-community-release-el7-8.noarch
Marking mysql57-community-release-el7-8.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package mysql57-community-release.noarch 0:el7-8 will be installed
--> Processing Conflict: mysql57-community-release-el7-8.noarch conflicts mysql-community-release
rhel-7-server-extras-rpms | 2.0 kB 00:00:00
rhel-7-server-fastrack-rpms | 1.7 kB 00:00:00
rhel-7-server-optional-rpms | 2.0 kB 00:00:00
rhel-7-server-rpms | 2.0 kB 00:00:00
rhel-7-server-satellite-tools-6.3-puppet4-rpms | 2.1 kB 00:00:00
rhel-7-server-satellite-tools-6.3-rpms | 2.1 kB 00:00:00
No package matched to upgrade: mysql57-community-release
--> Finished Dependency Resolution
Error: mysql57-community-release conflicts with mysql-community-release-el7-5.noarch
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Uploading Enabled Repositories Report
Loaded plugins: langpacks, product-id, subscription-manager

Tried the --skip-broken option too.
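A hedged reading of that transcript: an older mysql-community-release (el7-5) is already registered on the host and conflicts with the el7-8 package being installed. Removing the old release package first usually clears the transaction; a sketch:

```bash
# Drop the previously registered MySQL release package, then retry.
yum remove -y mysql-community-release
yum localinstall -y mysql57-community-release-el7-8.noarch.rpm
```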
06-07-2018
12:51 PM
@Felix Albani Thanks for the reply. I tried the above cmd, but no use:

# yum -d 0 -e 0 -y install mysql-community-server
Error: Nothing to do
Loaded plugins: langpacks, product-id, subscription-manager
06-06-2018
05:26 PM
I'm trying to install a 3-node cluster on Red Hat 7.5. I'm able to install all basic services except Hive. 1) Getting a mysql-community-server error. 2) Installed MariaDB to resolve this, and now I'm unable to install the MySQL server. Here is the log:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_server.py", line 64, in <module>
MysqlServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_server.py", line 33, in install
self.install_packages(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 708, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 53, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install mysql-community-server' returned 1. Error: Nothing to do
Loaded plugins: langpacks, product-id, subscription-manager
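A hedged note on the "Nothing to do" failure above: yum reports that when no enabled repository offers mysql-community-server on the host. A quick check before retrying the Ambari step:

```bash
# Is any MySQL repo enabled, and does it actually provide the package?
yum repolist enabled | grep -i mysql
yum list available mysql-community-server
```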
Labels:
- Apache Hive
06-06-2018
05:22 PM
Finally able to resolve this by downgrading Java to 1.8.0_161.
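For anyone retracing this, a hedged sketch of a Java downgrade on RHEL 7; the rpm file name is an assumption and depends on which JDK build you have on hand:

```bash
# Install the older JDK rpm (hypothetical file name), then switch the
# system default with alternatives.
yum localinstall -y jdk-8u161-linux-x64.rpm
alternatives --config java   # select the 1.8.0_161 entry interactively
```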
05-31-2018
06:01 PM
@Geoffrey Roll back to the previous version of the OS (7.4)?
05-31-2018
01:33 PM
@Geoffrey Shelton Okot We are not authorized to use the GUI. Is there any other way to overcome this issue?
05-31-2018
01:14 PM
@Geoffrey Shelton Okot HDP version: 2.6.2.0. OS type and version: Red Hat 7.5. Cluster type (single/multi-node): multi-node cluster (no Kerberos). Cluster size (affected nodes): 11 nodes (all). The cluster is up and running and we can retrieve data, but we cannot copy/put data into the cluster.
05-31-2018
12:52 PM
After OS patching, people are unable to write/copy data into the encrypted zones. Does anyone know how to resolve this? Here is the error:

copyFromLocal: java.util.concurrent.ExecutionException: java.io.IOException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: No KeyVersion exists for key 'encryptionkey'
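A hedged first check: that error means the client asked the KMS for the current version of the zone key and got nothing back, which after OS patching often points at the KMS (or its key material) rather than HDFS itself. A sketch; the key name is the one from the error:

```bash
# Does the KMS still know the zone key and its versions?
hadoop key list -metadata

# Which encryption zones exist and which keys back them?
hdfs crypto -listZones
```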
Labels:
- Cloudera Navigator Encrypt
05-01-2018
05:53 PM
All, I'm setting HiveServer2 authentication to LDAP. I changed the below properties:

hive.server2.authentication=LDAP
hive.server2.authentication.ldap.url=ldap://hostFQDN:636 (tried 389 too)
hive.server2.authentication.ldap.baseDN=ou=xxxxx,dc=xxxx

The HiveServer2 log produces the following error:

Caused by: javax.security.sasl.AuthenticationException: LDAP Authentication failed for user [Caused by javax.naming.ServiceUnavailableException: hostFQDN.com:636; socket closed]

Do I need to change any parameters apart from these three? Thanks in advance
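A hedged observation: port 636 is the LDAPS (TLS) port, so pairing it with an ldap:// scheme typically surfaces exactly as a "socket closed" ServiceUnavailableException. A sketch of the two usual variants; values are placeholders:

```
# Either plain LDAP on 389...
hive.server2.authentication.ldap.url=ldap://hostFQDN:389
# ...or TLS on 636 with the ldaps:// scheme (HS2 must trust the CA
# that signed the LDAP server's certificate).
hive.server2.authentication.ldap.url=ldaps://hostFQDN:636
```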
Labels:
- Apache Hive
03-27-2018
05:23 PM
@Ramesh Mani @vperiasamy Ramesh, I made the above changes. I got 6 different logs in the /ranger/audit/hdfs/ directory in HDFS, and I'm unable to see the content of those log files. I pasted the cat output of the log file below. Can you help me with this?

hdfs dfs -cat /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.1.log
cat: Cannot obtain block length for LocatedBlock{BP-211226024-10.224.60.23-1481061235494:blk_1091267231_17616185; getBlockSize()=1483776; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.224.60.21:50010,DS-d9d6b48a-2212-4529-a719-827215e3967a,DISK], DatanodeInfoWithStorage[10.224.60.22:50010,DS-04f30f6e-20b7-48af-9872-7d2782dff0ad,DISK], DatanodeInfoWithStorage[10.224.60.52:50010,DS-3f1ae50a-9ade-419f-9b39-fa3ac1d4f308,DISK]]}

[hdfs@instance-1 ~]$ hdfs dfs -ls /ranger/audit/hdfs/20180326
Found 6 items
-rw-r--r-- 3 hdfs hdfs 1419264 2018-03-26 23:56 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.1.log
-rw-r--r-- 3 hdfs hdfs 1894 2018-03-26 22:44 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.2.log
-rw-r--r-- 3 hdfs hdfs 59252 2018-03-26 22:56 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.3.log
-rw-r--r-- 3 hdfs hdfs 580608 2018-03-27 00:59 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.4.log
-rw-r--r-- 3 hdfs hdfs 29635 2018-03-26 23:58 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.5.log
-rw-r--r-- 3 hdfs hdfs 193536 2018-03-26 17:43 /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.log
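A hedged note on the cat failure above: "Cannot obtain block length for LocatedBlock" usually means the file is still open for write (the Ranger plugin hasn't closed it yet), not that it is corrupt. A sketch to confirm and, for files that stay stuck, force the lease closed:

```bash
# Which files in the audit directory are still open for write?
hdfs fsck /ranger/audit/hdfs/20180326 -files -openforwrite

# For a file that remains open after the writer is gone, recover its
# lease so reads work again.
hdfs debug recoverLease -path /ranger/audit/hdfs/20180326/hdfs_ranger_audit_instance-1.c.neat-pagoda-198122.internal.1.log
```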
03-22-2018
05:49 PM
Hi all, currently all our Ranger audit logs live under /ranger/audit/<service> in HDFS. Is there any possibility to disable storing audit logs in HDFS and instead store all the audit logs on a different server outside the cluster but on the same network?
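A hedged sketch: the Ranger plugins choose audit destinations via xasecure.audit.destination.* properties, so the HDFS destination can be switched off and an external Solr (on a host outside the cluster but on the same network) switched on. The Solr URL below is a placeholder:

```
xasecure.audit.destination.hdfs=false
xasecure.audit.destination.solr=true
xasecure.audit.destination.solr.urls=http://audit-solr.example.com:6083/solr/ranger_audits
```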
Labels:
- Apache Ranger
03-21-2018
03:59 PM
@Ramesh Mani Is it possible to copy the Ranger audit logs onto a different server which is on the same network but outside of the cluster?
03-20-2018
01:19 AM
The hosting team needs all the logs related to their user IDs. For example, if an audit log is written to /ranger/audit/hiveServer2, then that log automatically needs to be saved to a local directory as a second copy (without being deleted from the HDFS logs).
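A hedged sketch of the second-copy idea: a scheduled pull from HDFS to a staging directory, then a sync to the external host, leaving the HDFS originals untouched. Paths and the host are placeholders:

```bash
# Pull the HiveServer2 audit logs out of HDFS (originals stay put)...
hdfs dfs -get /ranger/audit/hiveServer2 /tmp/ranger_audit_stage

# ...and ship the copy to the external host on the same network.
rsync -a /tmp/ranger_audit_stage/ audithost.example.com:/var/log/ranger_audit/
```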