Member since: 07-24-2019
Posts: 21
Kudos Received: 0
Solutions: 0
06-27-2021
01:15 AM
Hi All, I am working with a customer to set up a DR environment. The customer has created DR as a separate Cloudera instance, and PROD is a different instance; replication between them is currently schedule-based. The customer is working on replicating Hive data and Kafka topics. Can we use the Kafka production offset locations in the DR instance? Will this work? And what is the ideal way to restore a streaming job in DR when PROD is down? Any best practices? Let me know if you need more details. Thanks in advance, Muthu
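For context on the offset question: as far as I understand, Kafka offsets are cluster-specific, so PROD consumer-group offsets generally cannot be reused as-is on a separately replicated DR cluster. A minimal sketch of repositioning a consumer group on DR by timestamp instead (the group, topic, and broker host below are placeholders, not the customer's real names):

# Reposition the consumer group on the DR cluster by timestamp rather than
# by copied PROD offsets (offsets differ between the two clusters)
kafka-consumer-groups.sh --bootstrap-server dr-broker:9092 \
  --group my-streaming-app --topic my-topic \
  --reset-offsets --to-datetime 2021-06-26T00:00:00.000 --execute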
Labels: Apache Kafka
08-04-2019
10:06 AM
Hi All
I am trying to achieve functionality similar to the Unix command below at the HDFS level:
find /temp -name '*.avro' -cnewer sample.avro
or, more generally, to retrieve files newer than a specific timestamp from HDFS.
From the Hadoop documentation, I understand that the HDFS find command has only limited functionality:
https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/FileSystemShell.html#find
Let me know how this can be achieved at the Hadoop level. Any workarounds?
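One workaround I am considering is filtering the ls output by the modification timestamp (a sketch; the cutoff date is an example, and the column positions assume the standard hdfs dfs -ls output format):

# Recursively list /temp and keep .avro files modified after a cutoff.
# In 'hdfs dfs -ls' output, $6 is the date, $7 the time, and $NF the path;
# ISO-style dates compare correctly as plain strings.
hdfs dfs -ls -R /temp | awk -v ts="2019-08-01 00:00" '$6" "$7 > ts && $NF ~ /\.avro$/ {print $NF}'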
Thanks - Muthu
Labels: Apache Hadoop
10-10-2018
10:22 AM
Hi Naresh, the comment above worked fine for the Hive logging, but the actual issue was due to permissions on the ./tmp/infa folder. Once I granted permissions on this folder, the issue was resolved. Thx Muthu
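For anyone hitting the same thing, this is roughly the fix I applied (the path and mode below are from my environment; if your scratch directory is on HDFS rather than local disk, use hdfs dfs -chown/-chmod instead):

# Give the infa user ownership of, and write access to, its scratch folder
sudo chown -R infa:infa /tmp/infa
sudo chmod -R 770 /tmp/infa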
10-04-2018
06:17 AM
Hi Karthick, I resolved this issue by formatting the NameNode, since I found a few corrupted blocks in HDFS. Mine is a single-node cluster, so formatting resolved the issue. Thx Muthu
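For reference, the command I used. Note that this is destructive and wipes all HDFS metadata; it was acceptable here only because this is a single-node test cluster:

# WARNING: destroys the existing filesystem metadata -- single-node sandbox only
sudo -u hdfs hdfs namenode -format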
09-28-2018
03:12 AM
Hi All, I am seeing a strange issue when I try to log in as the "infa" user; I have no issue with the "hive" user.

[infa@bdm ~]$ hive
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.3.0-235/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Permission denied
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:552)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.RuntimeException: java.io.IOException: Permission denied
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:545)
        ... 8 more
Caused by: java.io.IOException: Permission denied
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createTempFile(File.java:2024)
        at org.apache.hadoop.hive.common.FileUtils.createTempFile(FileUtils.java:885)
        at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:858)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:543)
        ... 8 more

I have checked write access for the infa user on the HDFS directories under /tmp; this is what I see:

drwxrwxrwx   - infa hdfs  0 2018-09-27 10:33 /tmp/hive
drwxrwxrwx   - hive hdfs  0 2018-09-25 15:01 /tmp/hive1
drwxr-xr-x   - hdfs hdfs  0 2018-09-06 21:21 /tmp/infa

Am I missing something here? Thx Muthu
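One more check I plan to run, on the local filesystem side, since the stack trace fails in java.io.UnixFileSystem (i.e. local disk, not HDFS); treating /tmp/infa as the relevant local directory is my assumption:

# Verify the infa user can actually create a file on the LOCAL filesystem
su - infa -c 'ls -ld /tmp /tmp/infa'
su - infa -c 'touch /tmp/infa/_writetest && rm /tmp/infa/_writetest'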
Labels: Apache Hive
09-06-2018
05:52 AM
I did some analysis and found that a few blocks are corrupted in HDFS. I even tried deleting the corrupted files, and once that was done it worked fine for a few hours. But the moment I restart the server, my NN comes up in safe mode again, and since the NN is in safe mode, the other services do not come up. Any suggestions? Thx Muthu
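For context, these are the standard fsck commands I used for that analysis:

# List the files with corrupt/missing blocks, then delete them so the
# NameNode can reach its block threshold and leave safe mode on its own
sudo -u hdfs hdfs fsck / -list-corruptfileblocks
sudo -u hdfs hdfs fsck / -delete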
09-05-2018
08:24 AM
Hi All, I have installed HDP 2.6 on RHEL 7.4 in Azure. The installation process completed, but when I start the cluster the NameNode goes into safe mode, and because of that the other services are not able to come up. If I manually exit safe mode and then restart the services, they all start fine. Any suggestions? Thanks in advance, Muthu

Error log:

2018-09-05 12:04:03,445 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://bdm.localdomain:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2018-09-05 12:04:15,700 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://bdm.localdomain:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2018-09-05 12:04:27,959 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://bdm.localdomain:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2018-09-05 12:04:40,239 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://bdm.localdomain:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2018-09-05 12:04:52,457 - The NameNode is still in Safemode. Please be careful with commands that need Safemode OFF.
2018-09-05 12:04:52,458 - HdfsResource['/tmp'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://bdm.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': None, 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0777}
2018-09-05 12:04:52,461 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://bdm.localdomain:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpE8fWXX 2>/tmp/tmptJfEmS''] {'logoutput': None, 'quiet': False}
2018-09-05 12:04:53,922 - call returned (0, '')
2018-09-05 12:04:53,923 - Skipping the operation for not managed DFS directory /tmp since immutable_paths contains it.
2018-09-05 12:04:53,924 - HdfsResource['/user/ambari-qa'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://bdm.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': None, 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0770}
2018-09-05 12:04:53,926 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://bdm.localdomain:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpujRH4r 2>/tmp/tmp4594S5''] {'logoutput': None, 'quiet': False}
2018-09-05 12:04:53,998 - call returned (0, '')
2018-09-05 12:04:53,999 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://bdm.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': None, 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2018-09-05 12:04:53,999 - Ranger Hdfs plugin is not enabled
Command completed successfully!
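To see why safe mode persists at startup, the NameNode itself reports how many blocks have been reported versus the threshold it is waiting for (the log path below is the usual HDP default and may differ on your system):

# Check the safe-mode status and the NameNode's own explanation for it
sudo -u hdfs hdfs dfsadmin -safemode get
grep -i 'safe mode' /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -n 5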
Labels: Apache Ambari, Apache Hadoop
08-03-2018
02:52 PM
Hi All, I am also facing the same issue, but my server's hard disk is new and there are no warnings/alerts at the Ambari level. The second job is allowed to execute only once my current job completes; if my first job runs for 60 minutes, my second job is on hold for that whole time. Any suggestions? Server capacity: 16 cores, 64 GB RAM. Thx Muthu
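In case it helps narrow this down, here is what I am checking on my side. My guess (an assumption, not confirmed) is the default cap on ApplicationMaster resources, which on a small cluster can hold a second job in ACCEPTED until the first finishes:

# Default yarn.scheduler.capacity.maximum-am-resource-percent is 0.1;
# on a single small queue that can serialize applications (my guess here)
grep -B1 -A2 'maximum-am-resource-percent' /etc/hadoop/conf/capacity-scheduler.xml
yarn application -list -appStates ACCEPTED,RUNNING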
08-03-2018
07:59 AM
Hi Jonathan, the issue is finally resolved now. Initially I tried HDP 2.6 on RHEL 7.2 and got the error, so I switched to RHEL 7.4; I suspect it was a compatibility issue, as everything is fine on RHEL 7.4. One quick question for you: is the compatibility matrix link available to partners on the community site? When I checked, it asked for a customer or employee login. Any suggestions? Thx Muthu
07-31-2018
01:40 PM
The latest update from my end. What I observed in my cluster is that startup fails partway through the process. The core issues are:
1. The NN comes up only after 25 minutes, and in safe mode.
2. Since the NN comes up in safe mode, the YARN Timeline Server is not able to come up, which aborts the startup process.
3. Once I manually issue "hdfs dfsadmin -safemode leave", I am able to start the other services manually, including the YARN Timeline Server, except the HBase Master.
Any suggestions? Thx Muthu
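For completeness, the exact sequence from step 3 (run as the hdfs superuser):

# Confirm safe mode, force-exit it, then restart services from Ambari
sudo -u hdfs hdfs dfsadmin -safemode get
sudo -u hdfs hdfs dfsadmin -safemode leave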