Member since
01-31-2019
26
Posts
7
Kudos Received
4
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7080 | 01-30-2020 08:10 AM |
| | 3606 | 08-02-2019 02:35 AM |
| | 1271 | 04-24-2019 10:07 AM |
| | 5688 | 04-24-2019 02:27 AM |
07-13-2022
12:10 AM
Hi @pszabados - regarding "via Spark, a timestamp will be converted to UTC in Kudu (however, you can change this behavior in the Spark configuration)": please, can you share the option to set?
03-30-2022
12:42 PM
It worked perfectly for me, installing from scratch with Cloudera Manager 7.4.4 and CDP 7.1.7.
12-16-2021
05:53 AM
2 Kudos
Hello Becky, you can either reduce the maximum split size in order to get more mappers: SET mapreduce.input.fileinputformat.split.maxsize=<bytes>; Or you can try setting the number of map tasks directly: SET mapreduce.job.maps=XX; For the second option, you may also need to disable the merging of map files: SET hive.merge.mapfiles=false; Let me know if either solution works for you. Good luck!
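As a minimal sketch, the two approaches above could be applied from the command line via beeline. The HiveServer2 URL and the 256 MB split size are placeholder assumptions, not values from the original post:

```shell
# Placeholder HiveServer2 URL; adjust the host and port for your cluster.
HS2_URL="jdbc:hive2://hiveserver2-host:10000"

# Option 1: reduce the max input split size (268435456 bytes = 256 MB is
# just an example value) so the input is divided into more splits, and
# therefore processed by more mappers.
beeline -u "$HS2_URL" -e "SET mapreduce.input.fileinputformat.split.maxsize=268435456;"

# Option 2: request a number of map tasks directly, and disable map-file
# merging so the extra mapper outputs are not merged away afterwards.
beeline -u "$HS2_URL" -e "SET mapreduce.job.maps=50; SET hive.merge.mapfiles=false;"
```

Note that SET statements issued this way only affect that beeline session; the queries that should pick them up would need to run in the same session.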
02-11-2021
11:10 PM
Since you are using Ambari, you can try the Rebalance HDFS action, or run the Hadoop Balancer tool directly.
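For the second option, a minimal sketch of invoking the balancer from the command line (the 10% threshold is an example value, not a recommendation for any particular cluster):

```shell
# Run the HDFS balancer as the hdfs user. -threshold is the allowed
# deviation (in percent) of each DataNode's utilization from the cluster
# average; 10 is the default, and lower values balance more aggressively.
sudo -u hdfs hdfs balancer -threshold 10
```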
01-18-2021
02:50 AM
Hello, try to find your log4j.properties file (in my case /etc/hadoop/conf.cloudera.hdfs/log4j.properties) and add these two lines:
log4j.appender.RFA=org.apache.log4j.ConsoleAppender
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
Good luck!
01-18-2021
02:16 AM
Hello, to get a clearer picture of what you're trying to do, can you tell us whether you want to connect your host to an already installed cluster, or just install Hadoop on a single (standalone) machine? Also, your server needs repositories to install the components from: you have to configure a local repository on the server itself, or on another server it can reach (via a private IP, for example).
09-21-2020
08:44 AM
I have tested the backup/restore solution and it seems to work like a charm with Spark:
- First, check and record the table names as listed by the Kudu master (or by the elected leader master, in a multi-master setup): http://Master1:8051/tables
- Download the kudu-backupX.X.jar if you can't find it in /opt/cloudera/parcels/CDH-X.Xcdh.XX/lib/, and put it there.
- In kuduMasterAddresses, put the name of your Kudu master, or the names of your three masters separated by ','.
- Backup: sudo -u hdfs spark2-submit --class org.apache.kudu.backup.KuduBackup /opt/cloudera/parcels/CDH-X.Xcdh.XX/lib/kudu-backup2_2.11-1.13.0.jar --kuduMasterAddresses MASTER1(,MASTER2,..) --rootPath hdfs:///PATH_HDFS impala::DB.TABLE
- Copy: sudo -u hdfs hadoop distcp -i hdfs:///PATH_HDFS/DB.TABLE hdfs://XXX:8020/kudu_backups/
- Restore: sudo -u hdfs spark2-submit --class org.apache.kudu.backup.KuduRestore /opt/cloudera/parcels/CDH-X.Xcdh.XX/lib/kudu-backup2_2.11-1.13.0.jar --kuduMasterAddresses MASTER1(,MASTER2,..) --rootPath hdfs:///PATH_HDFS impala::DB.TABLE
- Finally, run INVALIDATE METADATA in Impala.
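The final INVALIDATE METADATA step can be issued from the shell via impala-shell. Here 'impala-host' is a placeholder for your Impala coordinator, and DB.TABLE is the same placeholder used in the commands above:

```shell
# Refresh Impala's view of the restored table's metadata so the table
# becomes queryable. Limiting INVALIDATE METADATA to one table is cheaper
# than invalidating the whole catalog.
impala-shell -i impala-host -q "INVALIDATE METADATA DB.TABLE;"
```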
03-24-2020
08:08 AM
1 Kudo
Hi! Thanks for your response, and good luck to you too! 🙂 Rosita
08-22-2019
01:50 PM
Hi, Kudu requires that the machine clocks of the master and tablet server nodes be synchronized using NTP: https://kudu.apache.org/docs/troubleshooting.html#ntp Kudu is tested with ntpd, but I guess chronyd might work as well. Whether you use ntpd or chronyd, it's necessary to make sure the machine's clock is synchronized, so that the ntp_adjtime() Linux system call doesn't return an error (see http://man7.org/linux/man-pages/man2/adjtimex.2.html for the technical details). It's not enough to just have ntpd (or chronyd) running; the clock must actually be synchronized. I would verify that the NTP daemon is properly configured and tracking the clocks of the reference servers. For instructions on checking the sync status of the machine's clock, see https://kudu.apache.org/docs/troubleshooting.html#ntp if using ntpd, or https://docs.fedoraproject.org/en-US/Fedora/18/html/System_Administrators_Guide/sect-Checking_if_chrony_is_synchronized.html for chronyd. Hope this helps, Alexey
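As a quick sketch, the sync status can be checked from the shell; which command applies depends on whether ntpd or chronyd is running on the node:

```shell
# With ntpd: the peer line starting with '*' is the server the clock is
# synced to; if no line starts with '*', the clock is not yet synchronized.
ntpq -pn

# With chronyd: 'Leap status' should read 'Normal' once synchronized.
chronyc tracking

# On systemd-based distributions, this summary also reflects whether the
# kernel considers the clock synchronized.
timedatectl status
```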
08-13-2019
04:44 PM
Hi @Harish19, There is an SSL Options button somewhere in the ODBC driver configuration window; please click through and confirm whether you have SSL enabled on the client side. Cheers, Eric