Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 665 | 06-04-2025 11:36 PM |
| | 1241 | 03-23-2025 05:23 AM |
| | 613 | 03-17-2025 10:18 AM |
| | 2263 | 03-05-2025 01:34 PM |
| | 1463 | 03-03-2025 01:09 PM |
04-07-2023
01:14 AM
@Sanchari It would help to share a snippet of your code. Logically, the copy runs FROM --> TO. Below is the function being used:

```
fs.copyFromLocalFile(new Path(src_HDFSPath), new Path(dest_edgePath))
```

Note that copyFromLocalFile copies from the local filesystem to HDFS; given the variable names (an HDFS source and an edge-node destination), copyToLocalFile may be what you actually want. Disclaimer: I am not a Spark/Python developer.
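For reference, the two copy directions also exist as `hdfs dfs` subcommands, mirroring the FileSystem API calls. A minimal sketch (the paths and the helper function are illustrative, not from the original thread):

```shell
#!/bin/sh
# Sketch: pick the right hdfs dfs subcommand for the copy direction.
# "to_hdfs"   = local edge node -> HDFS (like fs.copyFromLocalFile)
# "from_hdfs" = HDFS -> local edge node (like fs.copyToLocalFile)

copy_cmd() {
  # $1 = direction, $2 = source path, $3 = destination path.
  case $1 in
    to_hdfs)   printf 'hdfs dfs -copyFromLocal %s %s' "$2" "$3" ;;
    from_hdfs) printf 'hdfs dfs -copyToLocal %s %s' "$2" "$3" ;;
    *)         return 1 ;;
  esac
}

# Print (not run) the command for an HDFS -> edge-node copy:
copy_cmd from_hdfs /user/me/file.csv /data/local/file.csv
```

Getting the direction wrong is exactly the failure mode described above: the source is interpreted on the wrong filesystem.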
04-06-2023
02:15 PM
@BrianChan You will need to manually perform the checkpoint on the faulty node. If the standby NameNode is faulty for a long time, the generated edit logs accumulate. A subsequent restart of HDFS or the active NameNode then takes a long time, and can even fail, because the active NameNode has to read a large amount of unmerged edit log. Is your NameNode setup active/standby? For the steps below you can also use the CM UI to perform the tasks.

Quickest solution 1: I have had occasions when a simple rolling restart of the ZooKeepers would resolve this, but I see your checkpoint lag is > 2 days.

Solution 2: Check which NameNode is most up to date by comparing the dates of the files in the metadata directory:

```
$ ls -lrt /dfs/nn/current/
```

On the active NameNode with the latest edit logs, as the hdfs user:

```
$ hdfs dfsadmin -safemode enter
$ hdfs dfsadmin -saveNamespace
```

Check whether the timestamp of the newly generated fsimage is the current time. If yes, the merge was executed correctly and is complete. Then leave safe mode:

```
$ hdfs dfsadmin -safemode leave
```

Before restarting HDFS or the active NameNode, perform this manual checkpoint to merge the active NameNode's metadata. Then restart the standby; the newly generated files should be shipped and synced automatically. This can take a while (< 5 minutes), and your NameNodes should then all be green.
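The "check whether the latest fsimage is fresh" step above can be scripted. A minimal sketch, assuming the standard NameNode metadata layout (zero-padded `fsimage_<txid>` files); the directory path and the 60-second freshness window are illustrative:

```shell
#!/bin/sh
# Sketch: verify that a fresh fsimage was written after `hdfs dfsadmin -saveNamespace`.

latest_fsimage() {
  # Print the name of the newest fsimage_* file in the given directory, if any.
  # Lexicographic sort works because transaction IDs are zero-padded.
  ls -1 "$1"/fsimage_* 2>/dev/null | sort | tail -n 1
}

fsimage_is_fresh() {
  # Succeed if the newest fsimage in $1 was modified within the last $2 seconds.
  dir=$1; max_age=$2
  img=$(latest_fsimage "$dir")
  [ -n "$img" ] || return 1
  now=$(date +%s)
  mtime=$(stat -c %Y "$img" 2>/dev/null || stat -f %m "$img" 2>/dev/null)
  [ $((now - mtime)) -le "$max_age" ]
}

# Intended usage on the NameNode (not run here):
#   fsimage_is_fresh /dfs/nn/current 60 && echo "checkpoint completed"
```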
04-06-2023
01:19 PM
@pankshiv1809 Can you share the spark-submit conf for the UPSS_PROMO_PROMOTIONS Spark job? JConsole helps detect performance problems in the code, including java.lang.OutOfMemoryError. Depending on the memory available on your cluster, you can then re-adjust as suggested by @RangaReddy
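When re-adjusting memory, keep in mind that on YARN the container Spark requests is larger than `--executor-memory`: by default `spark.executor.memoryOverhead` is max(384 MiB, 10% of executor memory). A minimal sketch of that arithmetic (the job sizing is an example, not from the original thread):

```shell
#!/bin/sh
# Sketch: compute the default container size Spark asks YARN for per executor,
# using Spark's default overhead of max(384 MiB, 10% of executor memory).

executor_container_mb() {
  # $1 = spark.executor.memory in MiB; prints memory + overhead in MiB.
  mem=$1
  overhead=$((mem / 10))
  if [ "$overhead" -lt 384 ]; then overhead=384; fi
  echo $((mem + overhead))
}

# e.g. --executor-memory 4g -> 4096 + 409 = 4505 MiB requested from YARN
executor_container_mb 4096
```

If the sum exceeds `yarn.scheduler.maximum-allocation-mb`, the container request is rejected before the executor even starts.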
04-06-2023
12:58 PM
@pankshiv1809 Can you share a more detailed log and some background on your environment, Python version, etc.? Geoffrey
04-06-2023
12:49 PM
@SSandhu The first question is whether you have an HDP subscription. If unsure, you can independently run the commands below to see whether the repo URL is valid and reachable.

Display the enabled HDP software repositories, clean all packages and metadata from the cache, refresh the packages on your system, and reinstall the Ambari Metrics Collector:

```
# yum repolist
# yum clean all
# yum update
# yum reinstall ambari-metrics-collector
```

The above steps should help you resolve "AMC does not exist in the stack-select package".
03-29-2023
04:47 PM
2 Kudos
@vciampa I tried to recreate your scenario using HDP and the Cloudera ODBC driver. I created a Hive database and table and used a non-SSL connection, as my HDP cluster is not secured, though the ODBC driver does ship a certificate bundle at C:\Program Files\Cloudera ODBC Driver for Apache Hive\lib\cacerts.pem. (Screenshots: ping from Windows, DSN config, ODBC config, SSL config, table creation, connect test.) So it does work for non-TLS, but when I enable TLS I get the error below because TLS is not enabled on my HDP cluster. I will try to install a self-signed certificate and revert. Geoffrey
03-24-2023
06:53 AM
@ambari275 These are the steps to follow, assuming you are logged in as root:

```
# su - hdfs
$ klist -kt /etc/security/keytabs/hdfs-headless.keytab
```

The output should give you the principal to use:

```
$ kinit -kt /etc/security/keytabs/hdfs-headless.keytab <principal>
```
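The "read the principal off the klist output" step can be automated. A minimal sketch, assuming the usual `klist -kt` table layout (KVNO, timestamp, principal); the keytab path and principal below are examples only:

```shell
#!/bin/sh
# Sketch: pull the first principal out of `klist -kt` output so it can be
# passed straight to kinit.

first_principal() {
  # Print the last field of the first data row: KVNO is numeric and the
  # principal contains an "@" (user@REALM).
  awk '$1 ~ /^[0-9]+$/ && $NF ~ /@/ {print $NF; exit}'
}

# Intended usage on a real node (not run here):
#   kt=/etc/security/keytabs/hdfs-headless.keytab
#   principal=$(klist -kt "$kt" | first_principal)
#   kinit -kt "$kt" "$principal"
```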
03-23-2023
11:02 AM
@BrianChan HUE uses a back-end database, and it would be interesting if you shared the creation steps. The DB configuration is in hue.ini; check the [database] section for the path to the DB (or look at /dump_config/, desktop, database). If it is MySQL/MariaDB, please check the configuration in /etc/my.cnf:

```
[mysqld]
...
bind-address=0.0.0.0
default-storage-engine=innodb
sql_mode=STRICT_ALL_TABLES
```

Verify the DB connectivity:

```
$ mysql -u hue -p
Enter password: <password>
mysql> quit
```

The error suggests the HUE database file is gone or not readable.
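Inspecting the [database] section of hue.ini can also be scripted. A minimal sketch with a generic ini parser in awk; the file path and the `engine`/`name` keys are examples of what a Hue install typically sets:

```shell
#!/bin/sh
# Sketch: extract a key from the [database] section of a hue.ini-style file.

ini_db_value() {
  # $1 = ini file, $2 = key; prints the key's value from [database], if set.
  awk -F= -v key="$2" '
    /^\[/ { in_db = ($0 == "[database]") }          # track current section
    in_db && $1 ~ "^[ \t]*"key"[ \t]*$" {
      gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2; exit
    }' "$1"
}

# Intended usage (not run here):
#   ini_db_value /etc/hue/conf/hue.ini engine
```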
03-23-2023
07:15 AM
@deepikaw What is the version of RHEL/CentOS? Try removing the `quiet` kernel parameter in /etc/grub.conf. Memory looks okay for QuickStart, though the higher the better.

This message can occur when kickstarting RHEL: you need to provide another "vmlinuz" and "initrd.img" to kickstart the machine. When kickstarting RHEL x.3, for example, I simply used the vmlinuz and initrd.img from the higher version, RHEL x.4.

For "MP-BIOS bug: 8254 timer not connected to IO-APIC" there is a kernel workaround; I believe booting with the kernel parameter "noapic" could work.
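The `noapic` workaround amounts to appending one word to the kernel line in grub.conf. A minimal sketch of that edit as a text filter (it only transforms lines on stdin; the grub.conf path and kernel line are examples, and you should back up the real file before changing it):

```shell
#!/bin/sh
# Sketch: append "noapic" to any GRUB kernel line that does not already have it.

add_noapic() {
  # Reads grub.conf-style lines on stdin, writes the modified lines to stdout.
  awk '/^[ \t]*kernel/ && $0 !~ /(^| )noapic( |$)/ { $0 = $0 " noapic" } { print }'
}

# Intended usage (not run here):
#   add_noapic < /etc/grub.conf > /tmp/grub.conf.new
```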
03-23-2023
05:39 AM
@hive1 What is the Hive version? Can you try the below and revert? Set these two properties before executing the rename partition:

```
hive> set fs.hdfs.impl.disable.cache=false;
hive> set fs.file.impl.disable.cache=false;
```

Then run the rename partition command (note the PARTITION keyword before the old partition spec):

```
hive> ALTER TABLE <table_name> PARTITION (code='YATHAH%0188QW') RENAME TO PARTITION (code='new_name');
```

Share the output for further analysis.