Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 998 | 06-04-2025 11:36 PM |
|  | 1567 | 03-23-2025 05:23 AM |
|  | 783 | 03-17-2025 10:18 AM |
|  | 2817 | 03-05-2025 01:34 PM |
|  | 1860 | 03-03-2025 01:09 PM |
01-06-2020
11:07 PM
@Chittu Can you share your code example? There should be an option to specify mode='overwrite' when saving a DataFrame:

myDataFrame.save(path='/output/folder/path', source='parquet', mode='overwrite')

Please revert.
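If that older DataFrame.save API is not available in your Spark version, here is a minimal PySpark sketch of the same overwrite behaviour using the DataFrameWriter API (the output path and sample data are placeholders):

```python
from pyspark.sql import SparkSession

# Reuse the existing session in pyspark/spark-submit, or build one for a standalone script.
spark = SparkSession.builder.appName("overwrite-example").getOrCreate()

# Tiny placeholder DataFrame; replace with your own data.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# mode("overwrite") replaces whatever already exists at the target path.
df.write.mode("overwrite").parquet("/output/folder/path")
```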
01-06-2020
01:15 PM
@md88 Are you trying to install Ambari on an existing HDP 2.6 cluster, or are you upgrading Ambari? If you have already installed HDP 2.6 as you stated, how is the registration of the host failing? Once that is clear I can share the steps to resolve your issue. Can you please rephrase your question? As written, I honestly can't follow it.
01-05-2020
04:28 PM
@sow Impala does not allow binary data. What you can do is use a serialize/deserialize approach: convert your image to a string format that still contains all the information needed to transform it back. When you need to retrieve an image from HDFS, you deserialize it, i.e. convert the string back to the original format. I found an example using Python; it would work like this:

import base64

def img_to_string(image_path):
    with open(image_path, "rb") as imageFile:
        image_string = base64.b64encode(imageFile.read())
    return image_string

def string_to_img(image_string):
    with open("new_image.png", "wb") as imageFile:
        imageFile.write(base64.b64decode(image_string))
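For completeness, a small self-contained round-trip sketch (file names are placeholders); decoding the base64 bytes to text gives you a plain str that can be stored in an Impala STRING column:

```python
import base64

# Encode an image file to a text-safe base64 string (placeholder file names).
with open("photo.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")  # plain str, fits a STRING column

# Decode the string back to the original binary image.
with open("photo_copy.png", "wb") as f:
    f.write(base64.b64decode(encoded))
```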
01-02-2020
11:54 AM
@shyamshaw I am already answering a similar question; see this thread: https://community.cloudera.com/t5/Support-Questions/Unable-to-start-Node-Manager/td-p/285976 Please go through it and update me with what isn't working. I will reply in both threads soon.
01-02-2020
09:06 AM
@Uppal Great, if all went well. We usually run MSCK REPAIR TABLE daily, once new partitions have been loaded into the HDFS location. Why do you need to run the MSCK REPAIR TABLE statement after every ingestion? Hive stores a list of partitions for each table in its metastore. If new partitions are added directly to HDFS, the metastore (and hence Hive) will not be aware of them unless you register them, either with MSCK REPAIR TABLE or with ALTER TABLE ... ADD PARTITION. MSCK adds metadata to the Hive metastore for any partitions that don't already have it. If you find you need it, remember to run it; otherwise accept the answer and close the thread :)
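A minimal sketch of both options through Spark's Hive support (the database, table, partition column, and paths below are placeholders; running the same statements from beeline or the Hive CLI works just as well):

```python
from pyspark.sql import SparkSession

# Assumes a Spark build with Hive support and access to the same metastore Hive uses.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Option 1: scan the table location and register every partition directory it finds.
spark.sql("MSCK REPAIR TABLE mydb.sales")

# Option 2: register one newly written partition explicitly.
spark.sql(
    "ALTER TABLE mydb.sales ADD IF NOT EXISTS "
    "PARTITION (ingest_date='2020-01-02') "
    "LOCATION '/data/sales/ingest_date=2020-01-02'"
)
```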
01-02-2020
07:47 AM
@Uppal Great, that worked out better for you. Did you run MSCK REPAIR TABLE table_name; on the target table? If you found this answer addressed your initial question, please take a moment to log in and click "accept" on the answer. Happy hadooping!
01-02-2020
03:13 AM
@saivenkatg55 In hadoop-yarn-nodemanager-w0lxdhdp05.ifc.org.log I see errors pointing to:

"Unable to start NodeManager: Could not load library. Reasons: [no leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, no leveldbjni in java.library.path, /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-6279667856305652637.8 (Permission denied)]"

My suspicion: /tmp on the host has the noexec mount option set. You can verify this by running /bin/mount and checking the mount options. If you are able to, remount /tmp without noexec and try starting the NodeManager again. I am fairly sure this is a noexec-on-/tmp issue. See my sample output:

[root@tokyo ~]# /bin/mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=7167976k,nr_inodes=1791994,mode=755)
.......
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=15609)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
/dev/sda1 on /boot type ext4 (rw,relatime,data=ordered)
/dev/sda5 on /opt type ext4 (rw,relatime,data=ordered)
/dev/sda8 on /home type ext4 (rw,relatime,data=ordered)
/dev/sda11 on /u02 type ext4 (rw,relatime,data=ordered)
/dev/sda6 on /var type ext4 (rw,relatime,data=ordered)
/dev/sda10 on /u01 type ext4 (rw,relatime,data=ordered)
/dev/sda9 on /tmp type ext4 (rw,relatime,data=ordered)

This issue occurs when the user running the Hadoop NodeManager start process does not have the necessary rights and cannot create temporary files under the /tmp directory. Solution:
- Allow the user running the NodeManager startup process read/write/execute access on /tmp
- Remove the noexec parameter when mounting /tmp
- Change the execution rights on /tmp, i.e. sudo chmod 777 /tmp

In /var/log/messages I can also see:

Jan 2 05:14:23 w0lxdhdp05 abrt-server: Package 'ambari-agent' isn't signed with proper key
Jan 2 05:14:23 w0lxdhdp05 abrt-server: 'post-create' on '/var/spool/abrt/Python-2020-01-02-05:14:22-11897' exited with 1
Jan 2 05:14:23 w0lxdhdp05 abrt-server: Deleting problem directory '/var/spool/abrt/Python-2020-01-02-05:14:22-11897'

Please edit /etc/abrt/abrt-action-save-package-data.conf and change the value of OpenGPGCheck from yes to no:

OpenGPGCheck = no

It might also be necessary to change the value of limit coredumpsize:

limit coredumpsize unlimited

After editing the file, restart the process with the following command:

# service abrtd restart

Restart the NodeManager and share your joy!
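If you want a quick scripted check instead of reading the full mount output, here is a minimal sketch that parses /proc/mounts (standard on Linux; nothing HDP-specific is assumed):

```python
#!/usr/bin/env python
# Report whether /tmp is mounted with the noexec option by parsing /proc/mounts.

def mount_options(mount_point="/tmp"):
    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == mount_point:
                return fields[3].split(",")
    return []  # no dedicated mount; /tmp inherits the root filesystem's options

options = mount_options("/tmp")
if "noexec" in options:
    print("/tmp is mounted noexec - remount it without noexec before starting the NodeManager")
else:
    print("/tmp mount options look fine: " + (",".join(options) or "(inherited from /)"))
```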
01-01-2020
11:51 AM
@pra_big The hbase user is the admin user of HBase. You connect to a running HBase instance with the hbase shell command, located in the bin/ directory of your HBase install. (The version information printed when you start the HBase shell is omitted here.) The HBase shell prompt ends with a > character.

As the hbase user:

$ ./bin/hbase shell
hbase(main):001:0>

All of the methods below will give you access to the HBase shell as the admin user [hbase].

If you have root access:

# su - hbase

That gives you the same prompt as above. If you have sudo privileges:

# sudo su hbase -l

I don't see the reason for changing to bash, or did I misunderstand your question?
01-01-2020
11:06 AM
@Uppal Any updates on this thread?
01-01-2020
11:04 AM
@saivenkatg55 You didn't respond to this answer. Do you still need help, or was it resolved? If so, please accept the answer and close the thread.