Member since: 11-23-2017
Posts: 15
Kudos Received: 5
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1748 | 05-15-2019 02:07 AM
 | 27330 | 04-24-2019 02:42 PM
 | 2286 | 07-25-2018 12:11 PM
 | 1932 | 07-25-2018 12:01 PM
 | 7508 | 07-03-2018 07:16 AM
05-15-2019
02:07 AM
1 Kudo
There were some network configuration problems, which were solved by our Unix admin.
05-15-2019
01:58 AM
@EricL You are right - thank you for pointing this out.
04-24-2019
02:42 PM
3 Kudos
Hi,
This command moves column_name after column_name2:
alter table table_name change column column_name column_name column_name_type after column_name2;
You have to write the column_name twice (or use a new second name to rename the column) and give the column's type.
Regards
Andrzej
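A concrete instance of the command, with hypothetical table and column names (`customers`, `city`, `name`); note that the type you give must match the column's actual type unless you also intend to change it:

```sql
-- Move column 'city' so that it comes directly after 'name'.
-- The column name is written twice (old name, new name); keeping
-- both the same leaves the name unchanged and only moves the column.
ALTER TABLE customers CHANGE COLUMN city city STRING AFTER name;
```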
11-28-2018
05:48 AM
Hi,
We have CDH version 5.15.0 with an HA configuration using Corosync and Pacemaker.
We had to stop the cluster - all components, including the VMs, were stopped.
When I restarted it, the Cloudera Management Services did not start.
We also sometimes have problems where some processes (Spark, Hive from Oozie) do not see a Hive database, although the database exists.
I'm not able to find any meaningful information in the logs.
I found that after the restart the corosync log contains only rows with 'lrmd'; there are no rows with 'crmd' or 'pengine', although such rows were present in this log before.
Log before restart:
[root@bdp-lb1 cluster]# more corosync.log-20181106
Nov 05 04:16:55 [2000] bdp-lb1.zzz.com lrmd: notice: operation_finished: cloudera_haproxy_status_60000:10507:stderr [ /etc/init.d/haproxy: line 26: [: =: unary operator expected ]
Nov 05 04:18:35 [2003] bdp-lb1.zzz.com crmd: info: crm_timer_popped: PEngine Recheck Timer (I_PE_CALC) just popped (900000ms)
Nov 05 04:18:35 [2003] bdp-lb1.zzz.com crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Nov 05 04:18:35 [2002] bdp-lb1.zzz.com pengine: info: process_pe_message: Input has not changed since last time, not saving to disk
Nov 05 04:18:35 [2002] bdp-lb1.zzz.com pengine: info: determine_online_status_fencing: Node onair-tel-bdp-lb1 is active
...
Log after restart:
[root@lb1 cluster]# more corosync.log
Nov 28 03:28:51 [2208] bdp-lb1.zzz.com lrmd: notice: operation_finished: cloudera_haproxy_status_60000:17261:stderr [ /etc/init.d/haproxy: line 26: [: =: unary operator expected ]
Nov 28 03:29:51 [2208] bdp-lb1.zzz.com lrmd: notice: operation_finished: cloudera_haproxy_status_60000:18015:stderr [ /etc/init.d/haproxy: line 26: [: =: unary operator expected ]
Nov 28 03:30:51 [2208] bdp-lb1.zzz.com lrmd: notice: operation_finished: cloudera_haproxy_status_60000:18786:stderr [ /etc/init.d/haproxy: line 26: [: =: unary operator expected ]
Nov 28 03:31:51 [2208] bdp-lb1.zzz.com lrmd: notice: operation_finished: cloudera_haproxy_status_60000:19541:stderr [ /etc/init.d/haproxy: line 26: [: =: unary operator expected ]
Nov 28 03:32:51 [2208] bdp-lb1.zzz.com lrmd: notice: operation_finished: cloudera_haproxy_status_60000:20293:stderr [ /etc/init.d/haproxy: line 26: [: =: unary operator expected ]
...
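Incidentally, the repeating haproxy stderr line ("[: =: unary operator expected") is the classic symptom of an unquoted variable in a shell `[` test: when the variable is empty, `[ $VAR = value ]` collapses to `[ = value ]`. A minimal sketch of the safe, quoted form (the variable name is made up; this is not taken from the actual init script):

```shell
# With VAR empty, an unquoted test like `[ $VAR = started ]` would
# expand to `[ = started ]` and fail with "unary operator expected".
# Quoting the variable keeps the test well-formed in the empty case.
VAR=""
if [ "$VAR" = "started" ]; then
  status="running"
else
  status="stopped"
fi
echo "$status"
```

This does not explain the missing crmd/pengine rows (the lrmd lines were appearing before the restart too), but fixing the quoting in the init script would make the log easier to read.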
Thank you
Andrzej
Labels:
- Apache Hive
- Cloudera Manager
07-25-2018
12:11 PM
1 Kudo
Hi,
For example:
* load this data into Hive,
* run queries such as:
select category, count(*) number_of_videos from youtube_data group by category order by number_of_videos desc limit 5;
Regards
Andrzej
07-25-2018
12:01 PM
Hi @Sayan,
From the documentation: "The Cloudera Manager minor version must always be equal to or greater than the CDH minor version because older versions of Cloudera Manager may not support features in newer versions of CDH. For example, if you want to upgrade to CDH 5.4.8 you must first upgrade to Cloudera Manager 5.4 or higher."
So, if you want to upgrade CDH to 5.15, you first have to upgrade Cloudera Manager to 5.15.
Best regards
Andrzej
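The rule can be checked mechanically with a version sort; a small sketch with hypothetical version strings (uses GNU `sort -V`):

```shell
# CM's version must be equal to or greater than CDH's version.
cm_version="5.15.0"     # hypothetical values for illustration
cdh_version="5.15.0"
# Whichever version sorts first is the lower one; it must be CDH's.
lower=$(printf '%s\n%s\n' "$cm_version" "$cdh_version" | sort -V | head -n1)
if [ "$lower" = "$cdh_version" ]; then
  result="OK: CM >= CDH, the CDH upgrade can proceed"
else
  result="Upgrade Cloudera Manager first"
fi
echo "$result"
```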
07-20-2018
02:18 AM
Hi @lsdbreaker,
I had the same situation after an upgrade. Using Cloudera Manager I added HDFS Gateway roles on the hosts where I wanted to use the hdfs command. After that the hdfs command started to run properly.
Regards
Andrzej
07-20-2018
01:46 AM
@bgooley Thank you for your help. We finished the upgrade successfully. After some cleanup in the cluster we started the upgrade using the CM Upgrade page. We completed all steps up to the new parcel activation. When we tried to do the next steps from the CM Upgrade page we got an error box without any message. We redeployed the configuration and restarted the services manually using CM. At the end we had to redeploy the Oozie shared libraries and the SQL Server JDBC driver.
07-10-2018
09:11 AM
@bgooley Thank you for your answers. They are very helpful. I'm cleaning our cluster according to your advice. I have a question concerning (2): removing, for example, the Impala parcel will not impact the Impala service we are using? The upgrade of Impala is done by the upgrade of CDH, so a separate Impala parcel is not required; am I correct? I also have an additional question. On Friday the CDH 5.9.3 parcel was in status Downloaded and I had a few possible actions for it (I do not remember exactly, but at least distribute and remove). Now, although I have done nothing with this parcel, its status is "Undistributing 63%" with no available action. Why did this state change when I did nothing with the cluster? What should I do in such a case? Thanks in advance, Andrzej
07-06-2018
06:52 AM
Hi,
I tried to upgrade CDH from 5.9.0 to 5.9.3.
I used Cloudera Manager:
Downloading, distributing and unpacking the new parcels went without problems.
Stopping the services went without problems.
Activation of the new parcel also went without problems; in CM I got the info "Activating parcel - Successfully activated parcel."
But the first service to start - ZooKeeper - did not start. I got the message: "This role requires the following additional parcels to be activated before it can start: [cdh]"
I was not able to finish, so I reverted to 5.9.0.
This screenshot shows the error messages:
I'm trying to find the reason for the upgrade failure.
What should I check?
While checking the different elements I have some questions.
Thanks for any answers or advice.
1. Is it OK that the .flood directory owner is cloudera-scm? The other directories in the parcels directory are owned by root.
[root@etl1 parcels]# ls -al
total 0
drwxr-xr-x 5 root root 125 Jul 6 12:11 .
drwxr-xr-x 4 cloudera-scm cloudera-scm 39 Nov 9 2016 ..
lrwxrwxrwx 1 root root 26 Jul 3 13:23 CDH -> CDH-5.9.0-1.cdh5.9.0.p0.23
drwxr-xr-x 11 root root 110 Oct 21 2016 CDH-5.9.0-1.cdh5.9.0.p0.23
drwxr-xr-x 2 cloudera-scm cloudera-scm 6 Jul 5 09:58 .flood
lrwxrwxrwx 1 root root 43 Jul 3 09:58 SPARK2 -> SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658
drwxr-xr-x 6 root root 47 Sep 25 2017 SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658
2. In Cloudera Manager / Parcels I see such errors all the time. Should I be worried about them?
Example: Error for parcel SPARK-0.9.0-1.cdh4.6.0.p0.98-el7: Parcel not available for OS Distribution RHEL7.
I have CentOS 7, not Red Hat; could that be the reason?
3. I removed the 5.9.3 parcel from Cloudera Manager;
the parcel CDH 5.9.3-1.cdh5.9.3.p0.4 changed status in CM from Distributed to Downloaded.
I expected that after this the /opt/cloudera directories on all servers would no longer contain 5.9.3 files, but I still have such files in:
* all parcel-cache directories,
* in some parcels directories.
Should I remove them?
4. I noticed that some 5.9.3 files in the parcels directories that were not removed are older, so a previous admin must have done something with this upgrade. Could that cause a problem?
[root@etl2 cloudera]# ls -al parcels
total 4
drwxr-xr-x 6 root root 4096 Jul 3 13:23 .
drwxr-xr-x 4 cloudera-scm cloudera-scm 39 Nov 9 2016 ..
lrwxrwxrwx 1 root root 26 Jul 3 13:23 CDH -> CDH-5.9.0-1.cdh5.9.0.p0.23
drwxr-xr-x 11 root root 110 Oct 21 2016 CDH-5.9.0-1.cdh5.9.0.p0.23
drwxr-xr-x 11 root root 110 Jun 28 2017 CDH-5.9.3-1.cdh5.9.3.p0.4
drwxr-xr-x 2 cloudera-scm cloudera-scm 6 Jul 4 12:45 .flood
lrwxrwxrwx 1 root root 43 Jun 16 05:18 SPARK2 -> SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658
drwxr-xr-x 6 root root 47 Sep 25 2017 SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658
5. Should I remove the 5.9.3 files from the parcel-repo directory on the Cloudera Manager server?
I think the Delete command for that parcel in Cloudera Manager should remove them.
[root@cms1 parcel-repo]# ls -al
total 3091828
drwxr-xr-x. 2 cloudera-scm cloudera-scm 4096 Jul 3 11:20 .
drwxr-xr-x. 4 cloudera-scm cloudera-scm 34 Nov 4 2016 ..
-rw-r----- 1 cloudera-scm cloudera-scm 1492922238 Nov 10 2016 CDH-5.9.0-1.cdh5.9.0.p0.23-el7.parcel
-rw-r----- 1 cloudera-scm cloudera-scm 41 Nov 10 2016 CDH-5.9.0-1.cdh5.9.0.p0.23-el7.parcel.sha
-rw-r----- 1 cloudera-scm cloudera-scm 57125 Nov 10 2016 CDH-5.9.0-1.cdh5.9.0.p0.23-el7.parcel.torrent
-rw-r----- 1 cloudera-scm cloudera-scm 1500799059 Jul 3 11:19 CDH-5.9.3-1.cdh5.9.3.p0.4-el7.parcel
-rw-r----- 1 cloudera-scm cloudera-scm 41 Jul 3 11:19 CDH-5.9.3-1.cdh5.9.3.p0.4-el7.parcel.sha
-rw-r----- 1 cloudera-scm cloudera-scm 57424 Jul 3 11:20 CDH-5.9.3-1.cdh5.9.3.p0.4-el7.parcel.torrent
-rw-r----- 1 cloudera-scm cloudera-scm 172161150 Jan 29 14:35 SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658-el7.parcel
-rw-r----- 1 cloudera-scm cloudera-scm 41 Jan 29 14:35 SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658-el7.parcel.sha
-rw-r----- 1 cloudera-scm cloudera-scm 6760 Jan 29 15:17 SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658-el7.parcel.torrent
6. What exactly does parcel activation mean?
Does activation just change the CDH symlink so that it points to the CDH-5.9.3... directory instead of the CDH-5.9.0... directory?
lrwxrwxrwx 1 root root 26 Jul 3 13:23 CDH -> CDH-5.9.0-1.cdh5.9.0.p0.23
drwxr-xr-x 11 root root 110 Oct 21 2016 CDH-5.9.0-1.cdh5.9.0.p0.23
drwxr-xr-x 11 root root 110 Jun 28 2017 CDH-5.9.3-1.cdh5.9.3.p0.4
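As far as I understand it, at the filesystem level activation repoints the CDH symlink in the parcels directory to the newly activated parcel directory (Cloudera Manager also does its own bookkeeping, which this sketch does not cover). A reproduction in a temporary directory, using the directory names from the listing:

```shell
# Simulate /opt/cloudera/parcels in a temp dir and repoint the CDH
# symlink the way activation does.
demo=$(mktemp -d)
mkdir "$demo/CDH-5.9.0-1.cdh5.9.0.p0.23" "$demo/CDH-5.9.3-1.cdh5.9.3.p0.4"
ln -s CDH-5.9.0-1.cdh5.9.0.p0.23 "$demo/CDH"    # state before activation
before=$(readlink "$demo/CDH")
ln -sfn CDH-5.9.3-1.cdh5.9.3.p0.4 "$demo/CDH"   # what activation effectively does
after=$(readlink "$demo/CDH")
echo "$before -> $after"
rm -rf "$demo"
```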
Thanks in advance
Andrzej
Labels: