Member since: 09-02-2016 · 523 Posts · 89 Kudos Received · 42 Solutions

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 2309 | 08-28-2018 02:00 AM
 | 2160 | 07-31-2018 06:55 AM
 | 5070 | 07-26-2018 03:02 AM
 | 2433 | 07-19-2018 02:30 AM
 | 5863 | 05-21-2018 03:42 AM
04-12-2017
11:01 PM
1 Kudo
Please use the Impala JDBC driver, not the Hive JDBC driver, like this:
[root@nightly59-unsecure-3 ~]# mkdir -p /root/impala-jdbc/jdbc
[root@nightly59-unsecure-3 ~]# hadoop classpath
/etc/hadoop/conf:/opt/cloudera/parcels/CDH-5.9.3-1.cdh5.9.3.p0.56/lib/hadoop/libexec/../../hadoop/lib/*:/opt/clo...
[root@nightly59-unsecure-3 ~]# cd /root/impala-jdbc/jdbc/
[root@nightly59-unsecure-3 jdbc]# ll
total 13584
-rwxr-xr-x 1 root root 1554773 Apr 10 23:46 ImpalaJDBC4.jar
-rwxr-xr-x 1 root root 1307923 Apr 10 23:46 TCLIServiceClient.jar
-rwxr-xr-x 1 root root 46725 Apr 10 23:45 commons-codec-1.3.jar
-rwxr-xr-x 1 root root 60686 Apr 10 23:45 commons-logging-1.1.1.jar
-rwxr-xr-x 1 root root 7670596 Apr 10 23:45 hive_metastore.jar
-rwxr-xr-x 1 root root 596600 Apr 10 23:45 hive_service.jar
-rwxr-xr-x 1 root root 352585 Apr 10 23:46 httpclient-4.1.3.jar
-rwxr-xr-x 1 root root 181201 Apr 10 23:46 httpcore-4.1.3.jar
-rwxr-xr-x 1 root root 275186 Apr 10 23:46 libfb303-0.9.0.jar
-rwxr-xr-x 1 root root 347531 Apr 10 23:46 libthrift-0.9.0.jar
-rwxr-xr-x 1 root root 367444 Apr 10 23:46 log4j-1.2.14.jar
-rwxr-xr-x 1 root root 294796 Apr 10 23:46 ql.jar
-rwxr-xr-x 1 root root 23671 Apr 10 23:46 slf4j-api-1.5.11.jar
-rwxr-xr-x 1 root root 9693 Apr 10 23:46 slf4j-log4j12-1.5.11.jar
-rwxr-xr-x 1 root root 792964 Apr 10 23:46 zookeeper-3.4.6.jar
[root@nightly59-unsecure-3 jdbc]# export HADOOP_CLASSPATH=`hadoop classpath`:/opt/cloudera/parcels/CDH-*/lib/hive/lib:/root/impala-jdbc/jdbc/*
[root@nightly59-unsecure-3 jdbc]# beeline
beeline> !connect 'jdbc:impala://nightly59-unsecure-3.gce.cloudera.com:21050;AuthMech=0'
Connecting to jdbc:impala://nightly59-unsecure-3.gce.cloudera.com:21050;AuthMech=0
Enter username for jdbc:impala://nightly59-unsecure-3.gce.cloudera.com:21050;AuthMech=0:
Enter password for jdbc:impala://nightly59-unsecure-3.gce.cloudera.com:21050;AuthMech=0:
Connected to: Impala (version 2.7.0-cdh5.9.x)
Driver: ImpalaJDBC (version 02.05.37.1057)
Error: [Simba][JDBC](11975) Unsupported transaction isolation level: 4. (state=HY000,code=11975)
0: jdbc:impala://nightly59-unsecure-3.gce.clo> show tables;
+------------+--+
| name |
+------------+--+
| customers |
| sample_07 |
| sample_08 |
| web_logs |
+------------+--+
4 rows selected (0.177 seconds)
0: jdbc:impala://nightly59-unsecure-3.gce.clo> select * from customers limit 5;
+--------+---------------------+--+
| id | name |
+--------+---------------------+--+
| 75012 | Dorothy Wilk |
| 17254 | Martin Johnson |
| 12532 | Melvin Garcia |
| 42632 | Raymond S. Vestal |
| 77913 | Betty J. Giambrone |
+--------+---------------------+--+
5 rows selected (0.523 seconds)
For detailed information, please check this document: https://www.cloudera.com/documentation/enterprise/latest/topics/impala_jdbc.html
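The setup above can be condensed into a short script. JDBC_DIR is the directory from the session above; the `command -v hadoop` check is only a guard so the sketch also works on hosts where the hadoop CLI is absent:

```shell
# Build HADOOP_CLASSPATH from the cluster classpath plus the Impala JDBC jars.
JDBC_DIR=/root/impala-jdbc/jdbc
if command -v hadoop >/dev/null 2>&1; then
  HADOOP_CP=$(hadoop classpath)   # cluster-provided classpath
else
  HADOOP_CP=""                    # hadoop CLI not on this host
fi
export HADOOP_CLASSPATH="${HADOOP_CP}:${JDBC_DIR}/*"
echo "$HADOOP_CLASSPATH"
```

Then launch beeline and !connect with the jdbc:impala:// URL exactly as in the transcript above.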
04-12-2017
02:28 PM
There is probably an issue with the client connecting to the DataNode. The error reports that you have one live DataNode, but the client is failing to place any replica on it. I would expect the client to get a different error if it were failing to write out the first replica. Check the NameNode UI to validate that the DataNode is live, and check the NameNode and DataNode logs to see if there is more information on what the issue is.
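The liveness check can be sketched as follows; `hdfs dfsadmin -report` is a standard way to see how many DataNodes the NameNode considers live, and the guard handles machines without the hdfs CLI:

```shell
# Ask the NameNode how many DataNodes it considers live.
if command -v hdfs >/dev/null 2>&1; then
  MSG=$(hdfs dfsadmin -report)    # lists live/dead DataNodes and per-node capacity
else
  MSG="hdfs CLI not available on this host"
fi
echo "$MSG"
```

If the report shows the DataNode as live but writes still fail, the DataNode log usually names the concrete reason (disk full, bad volume, hostname resolution, etc.).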
04-12-2017
01:36 AM
Sorry for any confusion. In parallel, the "Sentry Service" got disabled for Solr because of the issue posted here. I will close this issue, since the linked issue is the real one 😉
04-06-2017
10:48 AM
1 Kudo
@Vitali1, If you have "Dump Heap When Out of Memory" enabled, a heap dump will be generated when the process runs out of heap. The generated files can be deleted, as they are only for debugging purposes; Navigator does not require them. That said, if you are running out of heap, increasing the heap size is likely a good first step to resolve that issue. -Ben
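For context, the "Dump Heap When Out of Memory" checkbox maps, as far as I know, to the standard JVM flags below; the dump path and heap size are illustrative assumptions, not Navigator's actual defaults:

```
# Illustrative JVM options (values are assumptions, not Navigator defaults)
-XX:+HeapDumpOnOutOfMemoryError       # write a .hprof dump when heap is exhausted
-XX:HeapDumpPath=/tmp/navigator.hprof # where the dump file lands
-Xmx4g                                # increased maximum heap, per the advice above
```

The .hprof files can grow to roughly the size of the configured heap, which is why cleaning them up after debugging is safe and often necessary.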
03-24-2017
05:48 AM
I do think this is a defect; I am not sure how Cloudera will see it. To be fair, though, this particular way of inserting data into a table (the VALUES syntax) is pretty much limited to small-scale testing.
03-22-2017
01:56 PM
@Shafiullah Yes, Hue has dependencies on HDFS, YARN, Hive, and Oozie. So before you remove Hive, you have to remove Hue.
03-22-2017
01:51 PM
@dmishraoc You can get the parameters mentioned in steps 1 & 2 from yarn-site.xml, then follow step 3: go to the path /var/log/hadoop-mapreduce. Note: if you have 10 history files plus one current file, and each file is 201 MB, then you are good. The files are purged automatically, so you don't need to purge anything yourself.
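For reference, the rotation pattern described above (one current file plus a fixed number of rolled files of roughly 200 MB each) is typically controlled by a log4j RollingFileAppender; a hypothetical log4j.properties fragment (the appender name RFA is an assumption, check your own config) might look like:

```properties
# Illustrative log4j 1.x settings; the appender name RFA is hypothetical
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.MaxFileSize=200MB
log4j.appender.RFA.MaxBackupIndex=10
```

With these values, each log rolls at about 200 MB and at most 10 rolled files are kept, which matches the "10 history files plus one current" pattern above.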
03-20-2017
09:28 PM
5 Kudos
Zeppelin is an open-source product, so you can use it with any Hadoop distribution: https://zeppelin.apache.org/ There is documentation online on how to install Zeppelin on Cloudera, for example: http://blog.cloudera.com/blog/2015/07/how-to-install-apache-zeppelin-on-cdh/
03-15-2017
04:46 PM
@dwill Please go to the YARN application monitor page that you mentioned above; there is a link called 'Finished' on the left side.
03-13-2017
07:27 AM
There seem to be issues around the update-alternatives command. This is often caused by a broken alternatives link under /etc/alternatives/ or a bad (zero-length, see [0]) alternatives configuration file under /var/lib/alternatives; based on your description, it appears to be the former.
The root cause is that the Cloudera Manager Agent relies on the OS-provided update-alternatives binary, but that binary doesn't relay feedback on bad entries or problems, so we have to resort to manually rectifying issues like these. We have an internal improvement JIRA, OPSAPS-39415, to explore options for making alternatives updates during upgrades more resilient.
To recover from the issue, you need to remove the CDH-related entries from the alternatives configuration files.
[0] https://bugzilla.redhat.com/show_bug.cgi?id=1016725
= = = = = = =
# Stop CM agent service on node
service cloudera-scm-agent stop
# Delete the Hadoop-related /etc/alternatives entries - the command below displays the rm commands you'll need to issue.
ls -l /etc/alternatives/ | grep "\/opt\/cloudera" | awk {'print $9'} | while read m; do if [[ -e /var/lib/alternatives/${m} ]]; then echo "rm -fv /var/lib/alternatives/${m}"; fi; echo "rm -fv /etc/alternatives/${m}"; done
# Remove 0-byte files from /var/lib/alternatives
cd /var/lib/alternatives
find . -maxdepth 1 -type f -size 0
# The command above lists all 0-byte files in /var/lib/alternatives.
# Review the list, then pass the file names to rm -f, e.g.:
# rm -f <file1> <file2>
# Start CM agent
service cloudera-scm-agent start
= = = = = = =
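The zero-byte cleanup can be sanity-checked in a scratch directory before touching /var/lib/alternatives; the file names below are hypothetical stand-ins:

```shell
# Demonstrate the zero-byte cleanup against a scratch directory
# (a stand-in for /var/lib/alternatives -- review the find output
# before using -delete on the real directory).
DIR=$(mktemp -d)
echo data > "$DIR/hadoop-conf"           # healthy entry, non-empty
touch "$DIR/hive-conf" "$DIR/solr-conf"  # zero-byte entries, as in the bug
find "$DIR" -maxdepth 1 -type f -size 0 -delete
ls "$DIR"   # only the non-empty file survives
```

The same `find ... -size 0` expression, without -delete, is a safe way to audit the real directory first.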