Member since: 01-28-2015
Posts: 61
Kudos Received: 35
Solutions: 0
02-25-2016 12:48 PM
Yes, I am using / for Hadoop. These partitions were created automatically by Ambari, so I want to increase the HDFS size. For the POC I wanted to import a table of about 70 GB, but because of the current HDFS size I can import only 30+ GB before the job hangs, with alerts all over Ambari about disk usage.
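A quick way to confirm how much HDFS capacity is actually free before starting a large import (standard commands, nothing specific to my cluster):

# Per-DataNode configured capacity, DFS used, and DFS remaining
hdfs dfsadmin -report
# Same numbers as the Ambari widget, from the command line
hdfs dfs -df -h /

Keep in mind the import has to fit within DFS Remaining times the replication factor, so a 70 GB table at replication 3 needs roughly 210 GB of raw DFS space.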
02-25-2016 12:22 PM
1 Kudo
My /home partition is already 130 GB, but as per the widget mentioned above I am not able to use it for HDFS. My concern is that it should not hamper the HDP installation I have already done on it.
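As I understand it, the usual way to let the DataNode use space on /home is to create a directory there and append it to dfs.datanode.data.dir under HDFS > Configs in Ambari; the path below is only an example, not something from this thread:

# Example path on the /home partition; must be owned by the hdfs user
mkdir -p /home/hadoop/hdfs/data
chown -R hdfs:hadoop /home/hadoop/hdfs/data
# In Ambari, append it (comma-separated) to dfs.datanode.data.dir, e.g.
#   /hadoop/hdfs/data,/home/hadoop/hdfs/data
# then restart the DataNode. Existing blocks stay where they are.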
02-25-2016 11:55 AM
1 Kudo
Yes, the cluster is on VMware vSphere.
02-25-2016 11:45 AM
3 Kudos
In the 3-node cluster installation for a POC, my 3rd node is the DataNode; it has about 200 GB of disk space. As per the widget, my current HDFS usage is as follows: DFS Used: 512.8 MB (1.02%); non-DFS used: 8.1 GB (16.52%); remaining: 40.4 GB (82.46%). When I run df -h to check the disk size, I can see a lot of space is taken by tmpfs, as shown in the following screenshot. How can I increase my HDFS disk size?
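Note that the tmpfs entries in df -h are RAM-backed mounts and do not consume disk, so they are not what limits HDFS. To see which partition actually bounds HDFS, compare the configured DataNode directories with their mount points (both commands are standard):

# Directories where this DataNode stores blocks
hdfs getconf -confKey dfs.datanode.data.dir
# Check which mount (and how much free space) each directory sits on
df -h /hadoop/hdfs/data    # substitute the path(s) printed above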
Labels:
- Apache Hadoop
02-25-2016 09:08 AM
1 Kudo
I guess! I'll wait for the driver release.
02-24-2016 12:19 PM
To be honest, I did a lot of steps and I'm not sure which one actually resolved it! I'll list everything I did: I installed Knox manually, changed the firewall settings, and checked that all ports were accessible from all the nodes. I think changing the port number for fs.default.name from 50070 to 8020 is what resolved it.
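For anyone landing here later: fs.default.name is the deprecated name of fs.defaultFS in core-site.xml, and in HDP 2.x it must point at the NameNode RPC port (8020), not the web UI port (50070). A quick check (the hostname below is a placeholder):

# Verify the client-side filesystem URI
hdfs getconf -confKey fs.defaultFS
# Expected form: hdfs://node1.example.com:8020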
02-24-2016 11:15 AM
@Neeraj Sabharwal Yes, I was able to read that from my laptop; only the ones on node2 and node3 were not accessible.
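A quick reachability check from the laptop would look like this (hostnames and ports are examples based on HDP 2.x defaults):

# Is the NameNode web UI port open on node2?
nc -zv node2 50070
# Is the DataNode web port open on node3?
nc -zv node3 50075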
02-24-2016 10:26 AM
1 Kudo
Yes, I have created a symlink, but after creating it I still get the same error while installing the driver.
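One way to confirm the symlink is actually visible to the dynamic loader, rather than just present on disk (the path is the usual CentOS 7 one, not taken from this thread):

# Does the loader cache resolve libsasl2.so.2?
ldconfig -p | grep libsasl2
# Inspect the link itself
ls -l /usr/lib64/libsasl2.so.2
# Refresh the cache if the link was created recently
ldconfig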
02-24-2016 09:48 AM
1 Kudo
The documentation says it should have CentOS 5.0 or 6.0. I also referred to https://community.hortonworks.com/questions/2210/libsasl-version-fro-hive-obdc-rhel-7.html. I have a 3-node HDP 2.3 cluster on CentOS 7. When I try installing, I get the following output:
[root@node1 enggusr]# yum --nogpgcheck localinstall hive-odbc-native-2.0.5.1005-1.el6.x86_64.rpm
Loaded plugins: fastestmirror, priorities
Examining hive-odbc-native-2.0.5.1005-1.el6.x86_64.rpm: hive-odbc-native-2.0.5.1005-1.x86_64
Marking hive-odbc-native-2.0.5.1005-1.el6.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package hive-odbc-native.x86_64 0:2.0.5.1005-1 will be installed
--> Processing Dependency: cyrus-sasl-gssapi(x86-64) >= 2.1.22 for package: hive-odbc-native-2.0.5.1005-1.x86_64
HDP-2.3 | 2.9 kB 00:00
HDP-UTILS-1.1.0.20 | 2.9 kB 00:00
Updates-ambari-2.2.0.0 | 2.9 kB 00:00
base | 3.6 kB 00:00
extras | 3.4 kB 00:00
mysql-connectors-community | 2.5 kB 00:00
mysql-tools-community | 2.5 kB 00:00
mysql56-community | 2.5 kB 00:00
updates | 3.4 kB 00:00
Loading mirror speeds from cached hostfile
* base: ftp.iitm.ac.in
* extras: ftp.iitm.ac.in
* updates: ftp.iitm.ac.in
--> Processing Dependency: cyrus-sasl-plain(x86-64) >= 2.1.22 for package: hive-odbc-native-2.0.5.1005-1.x86_64
--> Processing Dependency: libsasl2.so.2()(64bit) for package: hive-odbc-native-2.0.5.1005-1.x86_64
--> Running transaction check
---> Package cyrus-sasl-gssapi.x86_64 0:2.1.26-20.el7_2 will be installed
---> Package cyrus-sasl-plain.x86_64 0:2.1.26-20.el7_2 will be installed
---> Package hive-odbc-native.x86_64 0:2.0.5.1005-1 will be installed
--> Processing Dependency: libsasl2.so.2()(64bit) for package: hive-odbc-native-2.0.5.1005-1.x86_64
--> Finished Dependency Resolution
Error: Package: hive-odbc-native-2.0.5.1005-1.x86_64 (/hive-odbc-native-2.0.5.1005-1.el6.x86_64)
Requires: libsasl2.so.2()(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
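The workaround usually suggested for installing this el6 package on CentOS 7, including in the thread linked above, is to symlink the newer SASL library to the old soname and install ignoring the dependency check. This is a sketch; the exact library version on a given box may differ:

# CentOS 7 ships libsasl2.so.3; point the old soname at it
ln -s /usr/lib64/libsasl2.so.3 /usr/lib64/libsasl2.so.2
ldconfig
# Install the RPM, bypassing the unresolved libsasl2.so.2 requirement
rpm -ivh --nodeps hive-odbc-native-2.0.5.1005-1.el6.x86_64.rpm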
Labels:
- Apache Hadoop
- Apache Hive
02-24-2016 09:00 AM
Thank you all! @Geoffrey Shelton Okot, @Artem Ervits, and @Neeraj Sabharwal, thanks for the expert solutions here!