Member since: 07-30-2018
Posts: 60
Kudos Received: 14
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1359 | 06-20-2019 10:14 AM
 | 13551 | 06-11-2019 07:04 AM
 | 1387 | 03-05-2019 07:25 AM
 | 3184 | 01-03-2019 10:42 AM
 | 8018 | 12-04-2018 11:59 PM
03-12-2019 09:26 AM

Hi Naveen,

If you have a limited number of ports available, you can assign a fixed port to each application:

--conf "spark.driver.port=4050" --conf "spark.executor.port=51001" --conf "spark.ui.port=4005"

Hope it helps.

Thanks
Jerry
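For instance, a full submit command with all three ports pinned might look like the sketch below; the class name and jar are placeholders, not values from the original thread:

spark-submit \
  --conf "spark.driver.port=4050" \
  --conf "spark.executor.port=51001" \
  --conf "spark.ui.port=4005" \
  --class com.example.MyApp \
  my-app.jar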
03-05-2019 07:25 AM

Hi Sandeep,

It seems the data has been imported, since the row counts match. However, the datatypes can differ between the MS-SQL and Hive tables, which results in NULL values. Let's check the datatypes on both tables; please also share the sqoop command for further checking.

Thanks
Jerry
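For example, the column types can be compared side by side; the table and server names below are placeholders:

# Hive side (via beeline):
beeline -u "jdbc:hive2://localhost:10000" -e "DESCRIBE FORMATTED my_table;"

# MS-SQL side (via sqlcmd):
sqlcmd -S my_sql_server -Q "EXEC sp_columns my_table;"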
02-12-2019 06:34 AM

Hi,

If there are any changes in the Hive metadata, please run msck repair table <tablename> to get it back in sync.

Reference link: https://www.cloudera.com/documentation/enterprise/5-13-x/topics/cdh_ig_hive_troubleshooting.html

Thanks
Jerry
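For example, from beeline (the connection string and table name here are placeholders):

beeline -u "jdbc:hive2://localhost:10000" -e "MSCK REPAIR TABLE my_table;"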
01-29-2019 06:47 AM

Hi Tulasi,

Could you check the value of the Container Executor Group property in the "container-executor.cfg" file and cross-check it against the CM configuration?

Thanks
Jerry
01-28-2019 10:33 AM

Hi Tulasi,

Could you please verify that "container-executor.group" is the same in both Cloudera Manager (Yarn -> Configuration -> Container Executor Group) and /etc/hadoop/conf.cloudera.yarn/container-executor.cfg (on the NodeManager host)?

Let us know if you have questions.

Thanks
Jerry
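A quick way to read the value on a NodeManager host, using the path from the post above:

grep -i "group" /etc/hadoop/conf.cloudera.yarn/container-executor.cfg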
01-03-2019 10:42 AM
1 Kudo
Hi,

When importing an empty table from Teradata to HDFS via Sqoop using the --table option, we get the exception below:

com.teradata.connector.common.exception.ConnectorException: Input source table is empty

It's a bug on the Teradata end, and the fix has not been released yet. Until the fix is available, we recommend using sqoop import --query as a workaround instead of --table.

Hope it helps. Let us know if you have any questions.

Thanks
Jerry
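The workaround would look roughly like this; the connection string, credentials, query, and target directory are placeholders, not values from the original thread:

sqoop import \
  --connect jdbc:teradata://td-host/DATABASE=my_db \
  --username my_user -P \
  --query "SELECT * FROM my_table WHERE \$CONDITIONS" \
  --target-dir /user/my_user/my_table \
  --num-mappers 1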
12-20-2018 01:10 AM

Hi,

This can be caused by a missing /var/lib/alternatives/hadoop-conf on a specific host. Did you try restarting the Cloudera agent service? That can rebuild the alternatives.

Run the script below to check that the alternatives are linked properly:

ls -lart /etc/alternatives | grep "CDH" | while read a b c d e f g h i j k
do
  alternatives --display $i
done

Let us know if you have any questions.

Thanks
Jerry
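For the agent restart mentioned above, the standard service name on CDH hosts is cloudera-scm-agent (verify the name on your distribution):

service cloudera-scm-agent restart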
12-14-2018 08:27 AM
2 Kudos
Hi,

yarn.scheduler.maximum-allocation-mb is specified as 20 GB, which is the largest amount of physical memory that can be requested for a container; yarn.scheduler.minimum-allocation-mb is the smallest amount of physical memory that can be requested for a container.

When we submit an MR job, the requested container memory is taken from "mapreduce.map.memory.mb", which defaults to 1 GB. If it is not specified, the map task is given a 1 GB container (same for the reducer). This can be verified in the YARN logs:

mapreduce.map.memory.mb - requested container memory, 1 GB:
INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:1024, vCores:1>

mapreduce.map.java.opts - by default 80% of the container memory:
org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m

1 GB is the default and it is quite low. I recommend reading the link below. It gives a good understanding of the YARN and MR memory settings, how they relate, and how to set some baseline settings based on the cluster node size (disk, memory, and cores).

https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_yarn_tuning.html

Hope it helps. Let us know if you have any questions.

Thanks
Jerry
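For example, larger map containers can be requested for a single job from the command line; the jar, class, and paths are placeholders, and the job is assumed to use ToolRunner so the -D options are picked up. Note -Xmx1638m is roughly 80% of the 2048 MB container:

hadoop jar my-job.jar com.example.MyJob \
  -Dmapreduce.map.memory.mb=2048 \
  -Dmapreduce.map.java.opts=-Xmx1638m \
  /input/path /output/path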
12-11-2018 01:07 AM

I think you might have configured a load balancer between the Impala daemons.
12-10-2018 09:54 AM
1 Kudo
Hi,

Impala uses port 21000 to transmit commands and receive results from impala-shell and some ODBC drivers, and port 21050 to transmit commands and receive results from applications, such as Business Intelligence tools, using JDBC.

Could you try the below:

impala-shell -i hdp01.***.***.ru:21000 -k -f /tmp/qql.sql

Thanks
Jerry
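For a JDBC client, the Cloudera Impala driver would target 21050 instead. With Kerberos (the -k above), a connection URL looks roughly like the sketch below; the realm and service name are assumptions, not values from this thread:

jdbc:impala://hdp01.***.***.ru:21050;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hdp01.***.***.ru;KrbServiceName=impala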