Member since: 05-16-2016
Posts: 785
Kudos Received: 114
Solutions: 39

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2326 | 06-12-2019 09:27 AM |
| | 3568 | 05-27-2019 08:29 AM |
| | 5721 | 05-27-2018 08:49 AM |
| | 5237 | 05-05-2018 10:47 PM |
| | 3113 | 05-05-2018 07:32 AM |
09-11-2018
06:51 PM
Please check the link https://hortonworks.com/blog/update-hive-tables-easy-way/. I hope this helps.
09-08-2018
05:45 AM
It seems your NameNode is in safe mode, so you need to take it out of safe mode before you can insert data into HDFS or into an HBase table stored on HDFS. The following command leaves safe mode:

hadoop@Sanjeev:~$ hdfs dfsadmin -safemode leave

On successful execution this message is shown: Safe mode is OFF. After that the error no longer occurs and the HBase shell runs properly.
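In case it helps others: before forcing safe mode off, it is worth confirming that safe mode really is the problem. A quick check along these lines (standard `hdfs dfsadmin` subcommands; just a sketch) shows the current state:

```
# Check whether the NameNode is currently in safe mode
hdfs dfsadmin -safemode get

# If it reports "Safe mode is ON", leave it and verify again
hdfs dfsadmin -safemode leave
hdfs dfsadmin -safemode get
```

Also note that the NameNode can re-enter safe mode on its own (for example while blocks are under-replicated), so checking `hdfs dfsadmin -report` is safer than repeatedly forcing safe mode off.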
08-22-2018
11:43 PM
@vijithv, First, firewalls can easily block UDP and allow TCP; I mentioned that as a possible cause. Also, depending on how your /etc/krb5.conf is configured, a different KDC could have been contacted.

You can see distinctly in the failure via UDP that there is a socket timeout for each attempt to connect to the KDC. This is a failure on the networking side, where a client cannot connect to a server. Since no connection was ever made via UDP, there was no chance for it to know to try TCP. That "switching" is done based on a response of KRB5KRB_ERR_RESPONSE_TOO_BIG, I believe, so if no response is received, no "switching" to TCP will occur.

If you really want to get to the bottom of this, recreate the problem while capturing packets via tcpdump like this:

# tcpdump -i any -w ~/kerberos_broken.pcap port 88

Then, with the problem fixed, reproduce again while capturing packets:

# tcpdump -i any -w ~/kerberos_fixed.pcap port 88

Use Wireshark (it does a great job of decoding Kerberos packets) and you will be able to see the entire interaction. This will show us information to help determine the cause. Wireshark is here: https://www.wireshark.org/
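If capturing packets is not an option, another way to see each attempt from the client side is MIT Kerberos client tracing. This is only a sketch: KRB5_TRACE is the standard MIT libkrb5 environment variable, and the principal below is a placeholder.

```
# Trace every KDC lookup and connection attempt (UDP vs TCP) made by kinit
KRB5_TRACE=/dev/stdout kinit someuser@EXAMPLE.COM
```

The trace output lists each KDC address tried, the transport used, and any timeouts, which usually narrows the problem down to DNS, firewalling, or the KDC list in krb5.conf.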
08-11-2018
01:03 PM
I know this is an old post, but for the benefit of others... I was getting the same error while trying to execute Sqoop commands (the sqoop list command was working, but not exec and show) directly on the edge node. I was able to resolve it by reconnecting to the edge node; the connection had been open for a long time, so the session may have expired. For some reason I restricted myself to using the edge node rather than Hue-Oozie.
08-11-2018
04:07 AM
Do you have a gateway role or Hadoop client JARs on the Accumulo host so that it can reach HDFS? And what is your instance.volumes configured to?
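For context, something like the quick check below on the Accumulo host is what I mean; the config path and listing target are only typical defaults, so adjust them to your layout.

```
# Verify the Hadoop client is usable from the Accumulo host
hadoop version
hdfs dfs -ls /

# See what instance.volumes points to (path to accumulo-site.xml is a typical default)
grep -A1 "instance.volumes" /etc/accumulo/conf/accumulo-site.xml
```

If `hdfs dfs -ls` fails here, Accumulo will not be able to reach the configured volume either.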
08-07-2018
04:34 AM
Were you able to set a custom prefix? I want to do multiple inserts into the same partition, and I hope that if the custom prefix works I can do multiple inserts into the Hive table. Any suggestions appreciated.
07-27-2018
10:55 AM
I am seeing a similar issue with Service Monitor and Host Monitor when using Red Hat 6.8 (Santiago); CM/CDH is 5.11.1. After adding JAVA_TOOL_OPTIONS=-Xss2m to the Host Monitor and Service Monitor configuration it works fine. Is this a known issue with Red Hat 6.7 as well? (The link you mentioned is for CentOS, and it's 6.9.)
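For reference, this is roughly how I confirmed the option was actually picked up after restarting the two roles; the log path and the "firehose" process name are assumptions based on a typical CM install, so adjust as needed.

```
# The JVM echoes any JAVA_TOOL_OPTIONS it picks up at startup
grep -r "Picked up JAVA_TOOL_OPTIONS" /var/log/cloudera-scm-firehose/

# Or inspect the environment of the running monitor process directly
cat /proc/$(pgrep -f firehose | head -1)/environ | tr '\0' '\n' | grep JAVA_TOOL_OPTIONS
```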
07-20-2018
09:24 AM
Hello - I am unable to see any other details. It does pull the files for the first 1-2 hours, but after that it stops pulling logs from RabbitMQ into the HDFS path. After running for 1-2 hours it throws an error and no file gets created, so every time we need to stop the agent and start it again.
07-18-2018
01:26 AM
OK, I understand your point, but what if mappers are failing? YARN already sets up as many mappers as there are files; should I increase this further? Since only a minority of my jobs are failing, how can I tune YARN to use more mappers for these particular jobs?
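To make the question concrete, this is the kind of tuning I am considering for the failing jobs; the jar and class names are placeholders, the split size is only an example value, and I understand -D options like this are only honoured when the driver goes through ToolRunner.

```
# Lower the maximum split size so the input is cut into more, smaller map tasks
# (134217728 bytes = 128 MB; purely an example value)
hadoop jar my-job.jar com.example.MyJob \
  -D mapreduce.input.fileinputformat.split.maxsize=134217728 \
  /input/path /output/path
```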
07-10-2018
11:04 AM
Were you able to figure this out?