Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 742 | 06-04-2025 11:36 PM |
| | 1310 | 03-23-2025 05:23 AM |
| | 647 | 03-17-2025 10:18 AM |
| | 2374 | 03-05-2025 01:34 PM |
| | 1541 | 03-03-2025 01:09 PM |
05-05-2018
04:36 PM
@Geoffrey Shelton Okot Thanks
05-07-2018
07:37 AM
@Sim kaur Can you share the latest versions of these two files: /var/log/hive/*.err and /var/log/hive/*.log?
12-04-2018
09:08 PM
@harsha vardhan bandaru Can you accept the answer to close this long-overdue thread?
05-08-2018
09:55 PM
Adding all nodes to /etc/hosts on every host fixed the problem. Thanks!
05-01-2018
07:39 PM
2 Kudos
@Michael Bronson You can safely delete them
05-16-2019
06:46 AM
@Geoffrey Shelton Okot Please find the link to the new thread: https://community.hortonworks.com/questions/246319/failed-to-connect-to-kdc-failed-to-communicate-wit.html Please guide me on this; it's really critical for me.
04-26-2018
03:25 PM
@raj pati The error "Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=9, waitTime=10001, operationTimeout=10000 expired." is a timeout exception. What's the value of hbase.rpc.timeout?

hbase.client.scanner.timeout.period is a timeout specifically for RPCs issued by the HBase Scanner classes (e.g. ClientScanner), while hbase.rpc.timeout is the default timeout for any RPC. I believe hbase.client.scanner.timeout.period is also used by the RegionServers to define the lifetime of the lease (the cause of the LeaseException you're seeing).

Generally, when you see these kinds of exceptions while scanning data in HBase, it is just a factor of your hardware and current performance (in other words, how long it takes to read your data). I can't give a firm answer because it depends on your system's performance.

Could you change the values below through Ambari, restart the components with stale configs, and test?

<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>70000</value>
</property>

And also:

<property>
  <name>hbase.rpc.timeout</name>
  <value>70000</value>
</property>

It should then run successfully.
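As a quick sanity check after the restart, you can confirm the new values actually landed in hbase-site.xml. A minimal sketch; on a typical HDP node the file lives at /etc/hbase/conf/hbase-site.xml (an assumption), and here a temporary copy stands in for it so the commands are self-contained:

```shell
# Create a stand-in hbase-site.xml (on a real node, point CONF_DIR at
# /etc/hbase/conf instead of generating a sample file).
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/hbase-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hbase.client.scanner.timeout.period</name>
    <value>70000</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>70000</value>
  </property>
</configuration>
EOF

# Print each property name together with the line that follows it (its value).
grep -A1 '<name>hbase.client.scanner.timeout.period</name>' "$CONF_DIR/hbase-site.xml"
grep -A1 '<name>hbase.rpc.timeout</name>' "$CONF_DIR/hbase-site.xml"
```

Both greps should show a `<value>70000</value>` line; if they don't, the stale-config restart did not pick up the change.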
04-18-2018
08:36 PM
@Christian Lunesa Unfortunately, no, but passing the option --map-column-hive Date=Timestamp to Sqoop will definitely work.
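For reference, a sketch of how that option is passed on a Sqoop import; the connection string, credentials, and table name below are placeholders, not from the original thread:

```shell
# Illustrative only: force Sqoop to declare the "Date" column as a Hive
# TIMESTAMP instead of its default mapping during a Hive import.
sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --username dbuser -P \
  --table mytable \
  --hive-import \
  --map-column-hive Date=Timestamp
```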
04-11-2018
01:51 PM
@Jay Kumar SenSharma I have selected option 3 (custom). This path works for "Node1" and "Node2", but in the case of "Node3" it gives an error during the "Confirm Hosts" step. Thanks
04-09-2018
09:24 PM
1 Kudo
@Anurag Mishra

fs.defaultFS

fs.defaultFS makes HDFS a file abstraction over the cluster, so that its root is not the same as the local system's. You need to set this value in order to create the distributed file system. fs.defaultFS in core-site.xml gives the DataNodes the address of the NameNode; a DataNode looks here for the NameNode address and tries to contact it over RPC. Without fs.defaultFS set, an ls would initially show the local root filesystem, as below:

$ hadoop fs -ls /
Warning: fs.defaultFS is not set when running "ls" command.
Found 21 items
dr-xr-xr-x - root root 4096 2017-05-16 20:03 /boot
drwxr-xr-x - root root 3040 2017-06-07 18:31 /dev
drwxr-xr-x - root root 8192 2017-06-10 07:22 /etc
drwxr-xr-x - root root 56 2017-06-10 07:22 /home
................
.............
drwxr-xr-x - root root 167 2017-06-07 19:43 /usr
drwxr-xr-x - root root 4096 2017-06-07 19:46 /var

dfs.namenode.http-address

This is the location of the NameNode web UI, set in the hdfs-site.xml configuration file, e.g.:

<property>
  <name>dfs.namenode.http-address</name>
  <value>node1.texas.us:50070</value>
  <final>true</final>
</property>

The NameNode HTTP server address is controlled by the configuration property dfs.namenode.http-address in hdfs-site.xml. Typically this specifies a hostname or IP address that maps to a single network interface, like above, but you can tell the server to bind to all network interfaces by setting the property dfs.namenode.http-bind-host to 0.0.0.0 (the wildcard address, matching all interfaces). This is the base address and port the NameNode web UI listens on. Making the NameNode HTTP server listen on all interfaces by setting it to 0.0.0.0 can be convenient, but it requires a restart of the NameNode.

Hope that clarifies the difference for you.
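For completeness, fs.defaultFS itself is set in core-site.xml. A sketch, reusing the hostname from the example above; 8020 is the typical NameNode RPC port, an assumption here rather than something from the original thread:

```
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node1.texas.us:8020</value>
</property>
```

With this in place, `hadoop fs -ls /` resolves against the HDFS root on node1.texas.us instead of the local filesystem.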