Member since: 11-15-2016
Posts: 50
Kudos Received: 2
Solutions: 0
07-29-2021 12:21 AM
I just ran into this myself. You should raise dfs.block.invalidate.limit in hdfs-site.xml:

<property>
  <name>dfs.block.invalidate.limit</name>
  <value>50000</value>
</property>

The default value is 1000, which makes block invalidation too slow. If you also see exceptions about the block report size, raise ipc.maximum.data.length as well:

<property>
  <name>ipc.maximum.data.length</name>
  <value>1073741824</value>
</property>
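These settings only take effect after a NameNode restart. As a minimal sketch for confirming the new values were picked up (assuming the HDFS client on your machine reads the same hdfs-site.xml as the NameNode):

# Confirm the configured values after restarting the NameNode.
hdfs getconf -confKey dfs.block.invalidate.limit    # expect 50000
hdfs getconf -confKey ipc.maximum.data.length       # expect 1073741824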
02-28-2018 03:04 PM
@kskp You might try it on a newer Hadoop version. HDP 2.6.1 ships Hadoop 2.7.3, which contains a known bug very similar to yours. Hope this helps!
10-05-2017 07:35 PM
Run DESC <table_name>; and you will see that your table has columns named _col0, _col1, _col2, _col3, and so on. What you need to do is add aliases in the query that creates the table, so that the columns get their actual names instead of the default _col0, _col1, etc.
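For example, a minimal sketch (the table and column names here are hypothetical) of a CTAS where computed columns need explicit aliases to avoid the generated _col names:

# Hypothetical example: without the AS aliases, the aggregate columns
# come out with generated names instead of real ones.
hive -e "
CREATE TABLE sales_summary AS
SELECT region,
       SUM(amount) AS total_amount,
       COUNT(*)    AS order_count
FROM sales
GROUP BY region;
"

# Verify that the new table shows the real column names.
hive -e "DESCRIBE sales_summary;"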
09-30-2017 11:22 AM
@Sree Kupp I see you are attempting something that a real HA setup, with an active and a standby NameNode, takes care of for you. You can use the command below to force a failover (it takes the service IDs of the current active and the desired active NameNode): $ hdfs haadmin -failover <serviceId-of-active> <serviceId-of-standby> Let me know whether that helps.
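A minimal sketch of checking the HA state and then forcing the failover (nn1 and nn2 are hypothetical NameNode IDs from dfs.ha.namenodes.<nameservice>; substitute the IDs from your own configuration):

# Show which NameNode is active and which is standby.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Fail over so that nn2 becomes the active NameNode.
hdfs haadmin -failover nn1 nn2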
09-29-2017 03:46 PM
Thanks @Sridhar Reddy. My NameNode is doing fine. I only wanted to move it, which I did using the "Move SNameNode" wizard in Ambari. Now I want to sync the NameNode up with the cluster, and I am not sure how to do that.
09-23-2017 12:32 AM
Thanks Sonu, that helped 🙂
09-22-2017 03:25 PM
Thanks a ton @Jay SenSharma. That really helped.
04-10-2017 05:53 PM
That was a great video @Kshitij Badani. Thanks for the pointers.
04-10-2017 05:54 PM
Thanks @Constantin Stanca It all makes sense now.
03-22-2017 12:55 AM
1 Kudo
@Sree Kupp,

1. "Both the Spark Thrift Servers keep failing suddenly, out of the blue. I am not sure if it is a configuration issue (like not having enough heap size, so even though it starts up when I start it, it eventually fails)."

A cluster can run the Spark1 and Spark2 Thrift Servers together. Are both Thrift Servers deployed on the same host? Can you please check what the error message is when the Spark Thrift Server fails?

2. "Can I have both Sparks running simultaneously? Or will that cause a memory overload on the cluster?"

Yes, you can have both Sparks running simultaneously. Regarding memory overload: if you run your Spark applications in yarn-client or yarn-cluster mode, they will not overload the client machine's memory.

3. "In the ODBC Driver DSN setup, when I click the "Test" option, it sometimes fails even when the Thrift Server is up and running. The error is: [Hortonworks][Hardy] (34) Error from server: connect() failed: errno = 10061."

I found a few good links for this issue; many people seem to have hit something similar. I hope these help:
http://kb.tableau.com/articles/issue/error-connect-failed-hadoop-hive
https://community.hortonworks.com/questions/33046/hortonworks-hive-odbc-driver-dsn-setup.html
https://community.hortonworks.com/questions/10192/facing-issue-with-odbc-connection.html
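Since errno 10061 means the TCP connection was refused, a quick sanity check from the client side can tell you whether the Thrift Server port is reachable at all. A minimal sketch; the host name "thrift-host" and the ports 10015/10016 (HDP's usual Spark1/Spark2 Thrift Server defaults) are assumptions, so substitute your own values:

# Check whether the Spark Thrift Server ports accept TCP connections.
# thrift-host, 10015 (Spark1) and 10016 (Spark2) are assumed values.
nc -vz thrift-host 10015
nc -vz thrift-host 10016

# If the port is open, try a JDBC connection with beeline to rule out
# problems on the ODBC-driver side:
beeline -u "jdbc:hive2://thrift-host:10015/default" -e "show databases;"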