Member since: 04-18-2017
Posts: 39
Kudos Received: 2
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 2772 | 03-06-2018 05:40 PM
06-27-2018 10:24 AM
Hi Team, I have a table called 'test' whose owner is 'ben'. I need to change the ownership of that table from 'ben' to 'sam', and I don't have login access to 'sam'. How can I achieve this? Is it possible to change the ownership, and is there a command for it? Regards, Mathivanan
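A possible approach, depending on the Hive version: Hive 3.0 and later support an ALTER TABLE ... SET OWNER statement (HIVE-6113), which can be run by any user with sufficient metastore privileges, so no login as 'sam' is needed. A sketch, where the JDBC URL is a placeholder:

```
# Assumes Hive 3.0+; ALTER TABLE ... SET OWNER is not available in earlier releases.
# "jdbc:hive2://hiveserver:10000" is a placeholder connection string.
beeline -u "jdbc:hive2://hiveserver:10000" \
  -e "ALTER TABLE test SET OWNER USER sam;"
```

On older Hive releases the owner is only metadata in the metastore, and changing it would mean editing the metastore database directly, which is best avoided.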
Labels: Apache Hive
03-06-2018 05:40 PM
@Aymen Rahal Hi, when registering the host, go with Manual Installation, then edit /etc/ambari-agent/conf/ambari-agent.ini and set hostname to the Ambari server's hostname. Then click Retry Failed.
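For reference, that edit can also be scripted on the agent host; a sketch, where ambari.example.com is a placeholder for the actual Ambari server FQDN:

```
# Replace the hypothetical ambari.example.com with your Ambari server's FQDN.
sed -i 's/^hostname=.*/hostname=ambari.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent restart
```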
12-14-2017 06:16 AM
Hi All, I have upgraded from HDP 2.4.3 to 2.6.1. My Hive jobs are running, but I can't get their full log information: the Map and Reduce logs are missing, and I can only see the query execution time, the application job ID, and a few other details. The logs show "log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender", and I get the same warning while executing the hive command. I have tried setting maxFileSize to 256 in the HDFS, YARN, and Hive log4j configurations, but the warning persists. Please guide me on how to get rid of this warning and bring back all my log information. Regards, Mathivanan
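The warning itself points at the cause: org.apache.log4j.DailyRollingFileAppender has no maxFileSize property, so setting it has no effect and log4j complains. One hedged option, assuming the appender is named DRFA as in the default HDP log4j configurations, is to switch that appender to RollingFileAppender, which does support size-based rolling, e.g. in hive-log4j.properties:

```
# Sketch: RollingFileAppender understands MaxFileSize/MaxBackupIndex,
# unlike DailyRollingFileAppender, which triggers the warning.
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.MaxFileSize=256MB
log4j.appender.DRFA.MaxBackupIndex=10
```

Alternatively, removing the stray maxFileSize property from the DailyRollingFileAppender configuration silences the warning without changing the daily rolling behaviour.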
10-20-2017 09:15 AM
Hi All, I have a requirement to retrieve a particular rowkey from HBase tables and store it in HDFS as a backup. Is there any option to achieve this? I tried scan 'TABLENAME', {FILTER => "(PrefixFilter ('ROWKEY'))"} for retrieving it, but I don't know how to store that rowkey information in HDFS. Is there a good way to do this?
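One way to sketch this, assuming a reasonably recent hbase shell that supports non-interactive mode (-n): run the scan from the command line and pipe its output straight into HDFS. TABLENAME, ROWKEY and the target path are placeholders:

```
# Run the scan non-interactively and stream the result into an HDFS file.
echo "scan 'TABLENAME', {FILTER => \"(PrefixFilter ('ROWKEY'))\"}" \
  | hbase shell -n \
  | hdfs dfs -put - /backup/TABLENAME_rowkey.txt
```

For larger backups, the org.apache.hadoop.hbase.mapreduce.Export tool writes a table (optionally restricted to a row range) into HDFS sequence files that can later be re-imported with the matching Import tool.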
Labels: Apache Hadoop, Apache HBase
07-07-2017 02:30 PM
Hi, in my case I need to get a feed from JSON and stream it via Spark with Scala, so I need to know how to convert JSON data into a case class using Scala. For example: {"Id":"1","Name":"Babu","Address":"dsfjskkjfs"} to (1,Babu,dsfjskkjfs)
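A minimal sketch of one way to do this, assuming Spark 2.x with a SparkSession: read the JSON into a Dataset and map it onto a case class whose field names match the JSON keys. The Person class name and the inline sample record are illustrative, not from any fixed API:

```scala
import org.apache.spark.sql.SparkSession

// Field names must match the JSON keys exactly.
case class Person(Id: String, Name: String, Address: String)

object JsonToCaseClass {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("json-to-case-class").getOrCreate()
    import spark.implicits._

    // One inline record for illustration; in practice this would be the feed.
    val json = Seq("""{"Id":"1","Name":"Babu","Address":"dsfjskkjfs"}""").toDS()
    val people = spark.read.json(json).as[Person]

    people.collect().foreach(p => println((p.Id, p.Name, p.Address)))
    spark.stop()
  }
}
```

For a streaming feed, the same .as[Person] conversion applies to the DataFrame produced by spark.readStream once a schema is supplied.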
Labels: Apache Spark
06-14-2017 04:56 AM
hmas.txt regionserver.txt Hi nshelke, I had set these properties as you mentioned, but the HBase Master and RegionServer keep going down, and a backup process running behind this fails too. dfs.client.block.write.replace-datanode-on-failure.policy=ALWAYS dfs.client.block.write.replace-datanode-on-failure.best-effort=true Please find the attached logs of the HBase Master and RegionServer. Kindly suggest.
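For reference, the two settings quoted above are client-side properties in hdfs-site.xml; a sketch of the fragment:

```xml
<!-- With best-effort=true the write pipeline continues even when a
     replacement datanode cannot be found, so a block may temporarily
     hold fewer replicas than the configured factor until the NameNode
     re-replicates it. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>ALWAYS</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>
```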
05-24-2017 09:47 AM
@nshelke All of my datanodes are healthy and have enough space. My replication factor is the default of 3. Will setting dfs.client.block.write.replace-datanode-on-failure.best-effort=true result in data loss? Kindly suggest.
05-24-2017 06:50 AM
1 Kudo
hbase-error.txt We have a 7-node cluster, on which 5 nodes run RegionServers. On a daily basis one of the RegionServer nodes goes down, and I'm getting the same error from the respective nodes. Please find the attached log.
Labels: Apache HBase
05-05-2017 05:58 AM
Hi nshelke, yes, I have checked: it's healthy and there is no datanode failure, but I found missing replicas.
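Missing or under-replicated replicas can be confirmed from the command line; a sketch:

```
# Summarise block health; the NameNode normally re-replicates
# under-replicated blocks on its own once datanodes are healthy.
hdfs fsck / | egrep -i 'missing|under.?replicated'
```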