Member since: 09-02-2016
Posts: 523
Kudos Received: 89
Solutions: 42
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2724 | 08-28-2018 02:00 AM |
| | 2696 | 07-31-2018 06:55 AM |
| | 5686 | 07-26-2018 03:02 AM |
| | 2982 | 07-19-2018 02:30 AM |
| | 6466 | 05-21-2018 03:42 AM |
07-28-2017
10:00 AM
1 Kudo
@alexe Yes, you can log in with pyspark (or spark-shell) and run your commands one by one to get the result. All they need is your result and the code you used to generate it, so it is not necessary to build a .jar from your code and execute it (that would also take additional time).
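For example, a minimal interactive session could look like the sketch below (the input path is a placeholder; on the Spark 1.x shells that ship with CDH 5, the entry point is sc rather than spark):

```shell
# Launch the interactive shell - no jar packaging step is needed
pyspark

# Then run your statements one at a time inside the shell, e.g.:
#   >>> rdd = sc.textFile("/user/alexe/input.txt")   # hypothetical path
#   >>> rdd.count()                                  # the result prints immediately
```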
07-25-2017
01:45 PM
2 Kudos
@ponypony No, it is not supported. The doc below refers to CDH 5.11.x; it lists the relational-database and Hive features that are not available in Impala, and indexing is one of them. https://www.cloudera.com/documentation/enterprise/5-11-x/topics/impala_faq.html#faq_features__faq_unsupported
07-25-2017
06:22 AM
@syamsri How much have you increased it? It looks like the current value is still not sufficient in your case; increase it further and retry your update & delete operations.
07-24-2017
03:19 PM
@cllearner There could be many reasons:
1. In CM -> Hosts, check that the hosts are green.
2. Log in as root and start the ntpd service if it did not start automatically:
service ntpd status
service ntpd start
chkconfig --list ntpd
chkconfig ntpd on
Wait a few minutes and check the host status again. Post more logs if you are still facing the issue.
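To confirm that the daemon is actually syncing time (not just running), a quick check like the following may help; ntpq ships with the ntp package and its output format varies slightly by distro:

```shell
# Sanity checks after starting ntpd (run on each host)
ntpq -p    # peer table; a leading '*' marks the selected time source
date       # compare clocks across hosts - they should agree closely
```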
07-23-2017
01:32 PM
@syamsri Go to YARN -> Configuration and search for "yarn.nodemanager.resource.memory-mb". If it is at the 1 GB default, increase it to 2 GB, save, and restart YARN (sometimes HUE may need a restart as well). Then try again.
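If you want to verify the effective value from a shell instead, something like this should work (the config path below is the usual CDH client default and may differ on your cluster):

```shell
# Show the NodeManager memory setting from the client configuration
grep -A 1 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml
```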
07-22-2017
07:25 AM
@SKumar2000 There could be multiple reasons for this issue:
1. The MySQL JDBC connector/driver is missing - you can rule this out, since your first command is working.
2. Your sqoop command. Please specify the target database and table in your import command as follows and try again (note that --hive-import is required for the --hive-database/--hive-table options to take effect):
sqoop import \
  --connect "jdbc:mysql://host:3306/dbname" \
  --username uid \
  --password pwd \
  --table table_name \
  --delete-target-dir \
  --hive-import \
  --hive-database hive_db_name \
  --hive-table hive_tablename \
  --split-by col1 \
  --target-dir 'dir_path'
07-20-2017
07:34 PM
1 Kudo
@ponypony This is due to a Java heap space issue. Please try the steps below:
1. Check the current mapreduce.map.memory.mb and mapreduce.reduce.memory.mb values. There are different ways to check: Cloudera Manager -> YARN -> Configuration, the hive/beeline CLI, or mapred-site.xml / yarn-site.xml.
2. Increase the Java heap space temporarily (by 1 or 2 GB); I have already shared the details in the link below. NOTE: the link refers to a different issue, but the solution applies to yours too, as sqoop uses MapReduce. http://community.cloudera.com/t5/Hadoop-101-Training-Quickstart/Map-and-Reduce-Error-Java-heap-space/m-p/47023#M4622
3. Try the sqoop job now. If that fixes the issue, work with your admin to increase the value permanently.
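As a quick temporary test, you can also override the memory for a single sqoop run with generic -D options (the connection details below are placeholders; generic -D options must come before the tool-specific ones):

```shell
# One-off memory bump for a single job; size -Xmx to roughly 80% of the container
sqoop import \
  -Dmapreduce.map.memory.mb=2048 \
  -Dmapreduce.map.java.opts='-Xmx1638m' \
  --connect "jdbc:mysql://host:3306/dbname" \
  --username uid --password pwd \
  --table table_name
```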
07-19-2017
01:35 PM
@keeblerh In general, the NameNode web port is 50070 and the RPC port is 8020; both can be customized, so double-check them in CM -> HDFS -> Configuration. For distcp over hftp you need to use namenode1:50070 instead of namenode1:8020 (check the port on both source & target). Also, I saw you mentioned CDH 5, but make sure both source and target are the same version (including minor versions like 5.2, 5.3, 5.7, etc.). If they are not the same version, you need to use different commands; you can get more details at the link below: https://www.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_admin_distcp_data_cluster_migrate.html#topic_7_2
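For reference, the invocations usually look like the sketch below (hostnames and paths are placeholders); on CDH 5 the documented pattern for mixed versions is to read the source over hftp and write to the destination over hdfs:

```shell
# Same CDH version on both sides: hdfs:// with the RPC port
hadoop distcp hdfs://namenode1:8020/src/path hdfs://namenode2:8020/dst/path

# Different CDH versions: read the source over HTTP (hftp, web port 50070)
hadoop distcp hftp://namenode1:50070/src/path hdfs://namenode2:8020/dst/path
```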
07-16-2017
11:08 AM
@vikash145 When do you get this error: when you try to launch the spark shell, when you start the history server, or in a different situation? How did you start all the other services: with the same user? Also, I don't see the keyword 'error' anywhere in your log; it shows only warnings.
07-16-2017
09:35 AM
@vikash145 According to this log, your user ID doesn't have permission to start the server. Log in as root and try again. Additionally, you can check 'service <service name> status' and try to start the service, and check whether it is enabled at boot by running chkconfig --list.
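Putting those checks together (replace <service-name> with your actual service; chkconfig applies to SysV-init systems such as RHEL/CentOS 6):

```shell
# Run these as root
service <service-name> status   # check the current state
service <service-name> start    # start it if it is stopped
chkconfig --list                # list boot-time run-level settings for all services
chkconfig <service-name> on     # enable automatic start at boot
```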