Member since
01-11-2016
36
Posts
3
Kudos Received
6
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3817 | 04-26-2017 05:31 PM |
| | 3295 | 02-14-2017 06:57 PM |
| | 6632 | 01-26-2017 02:47 PM |
| | 4295 | 04-21-2016 04:26 PM |
| | 2633 | 04-18-2016 11:11 PM |
04-26-2017
05:31 PM
Thanks Romain! I forgot to mention that we are using CDH 5.7.1 and I originally set up the Hue database as an external MySQL DB (migrated from embedded). This morning (only about 30 minutes ago) I restored the Hue MySQL database from a backup taken before the Oozie Workflows were deleted, and at this stage all the Hue Control Nodes and DAG components for each workflow look fine. Our Oozie Workflow guru (my colleague who wrote all the workflows) is now looking through them in Hue to confirm they all look and work OK. Thank you so much for your quick response. Cheers, Damion.
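For anyone hitting the same situation, a rough sketch of the backup/restore cycle, not the exact commands we ran: it assumes the Hue database and its MySQL user are both named "hue" and that the Hue service is stopped in Cloudera Manager around the restore; the backup file name is illustrative.

# Regular backup of the Hue metadata database (contains the workflow editor objects)
mysqldump -u hue -p hue > hue_backup_2017-04-25.sql

# Restore after an accidental deletion: stop Hue in Cloudera Manager, then
mysql -u hue -p hue < hue_backup_2017-04-25.sql
# ...and start Hue again so the restored workflow definitions appear in the editor.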
04-25-2017
11:00 PM
Hi Anyone, I was just wondering: if I accidentally deleted a whole bunch of Oozie Workflows within the Hue (3.9.0) GUI, is there a simple way to restore them? I understand this can be done to get the JSON workflows back in Hue: http://gethue.com/exporting-and-importing-oozie-workflows/ But I am looking to get back the actual Hue Control Nodes and DAG (Directed Acyclic Graph) definitions, not just the JSON and XML. Have I explained that correctly? I thought perhaps restoring the Oozie database (the MySQL database called "oozie") and the Hue database (the MySQL database called "hue") might somehow "recreate" the Hue Control Node DAG components in the Hue editor? But I'm not 100% sure doing this will work (and most likely it will actually get back just the XML/JSON, not the Control Nodes and DAGs). As I mentioned, I need the XML/JSON and the actual Hue Control Nodes/DAGs that were originally created using: Hue -> Workflows -> Editors -> Workflows -> "The name of my Workflow". Any assistance and advice would be greatly appreciated. Cheers, Damion.
Labels:
- Apache Oozie
- Cloudera Hue
02-14-2017
07:26 PM
Hi, We have a CDH 5.9.0 cluster that has been kerberized, with Microsoft 2012 R2 Active Directory acting as the AD/LDAP and Kerberos domain and realm. AD/LDAP "ldapsearch" commands work, GSSAPI works, and "ktutil", "klist" and "kinit" all work for various users (including my "dreeves" user). We have also set up the HAProxy load balancer for Impala (HAProxy is running on a non-CDH worker node via port 25003 and we have 4 x CDH worker nodes running impalad). I can connect using the Hue GUI (and use both the Hive Query and Impala Query editors to run Hive HQL and Impala QL queries). I can also connect via the "impala-shell" command line using:

[dreeves@{obfuscated_fqdn_client_machine} ~]$ impala-shell
Starting Impala Shell without Kerberos authentication
Kerberos ticket found in the credentials cache, retrying the connection with a secure transport.
Error connecting: TTransportException, Could not connect to {obfuscated_fqdn_client_machine}:21000
***********************************************************************************
Welcome to the Impala shell. (Impala Shell v2.7.0-cdh5.9.0 (4b4cf19) built on Fri Oct 21 01:07:22 PDT 2016)

Run the PROFILE command after a query has finished to see a comprehensive summary
of all the performance and diagnostic information that Impala gathered for that
query. Be warned, it can be very long!
***********************************************************************************
[Not connected] >
[Not connected] > connect {obfuscated_fqdn_haproxy_client_machine}:25003;
Connected to {obfuscated_fqdn_haproxy_client_machine}:25003
Server version: impalad version 2.7.0-cdh5.9.0 RELEASE (build 4b4cf1936bd6cdf34fda5e2f32827e7d60c07a9c)
[{obfuscated_fqdn_haproxy_client_machine}:25003] > show databases;
....
....
List of databases
....
my_dev
....
[{obfuscated_fqdn_haproxy_client_machine}:25003] > exit;

However, I am unable to connect using either of the following impala-shell commands:

1) This command tries to use the client machine where I've installed HAProxy and port 25003:

[dreeves@{obfuscated_fqdn_client_machine} ~]$ impala-shell -l -u dreeves@CDH.{OBFUSCATED_REALM}.COM.AU --ssl --database=my_dev --impalad={obfuscated_fqdn_haproxy_client_machine}:25003;
Starting Impala Shell using LDAP-based authentication
SSL is enabled. Impala server certificates will NOT be verified (set --ca_cert to change)
LDAP password for dreeves@CDH.{OBFUSCATED_REALM}.COM.AU: {my_obfuscated_LDAP_password}
Error connecting: TTransportException, Could not connect to {obfuscated_fqdn_haproxy_client_machine}:25003
Kerberos ticket found in the credentials cache, retrying the connection with a secure transport.
Error connecting: TTransportException, Could not connect to {obfuscated_fqdn_haproxy_client_machine}:25003
***********************************************************************************
Welcome to the Impala shell. (Impala Shell v2.7.0-cdh5.9.0 (4b4cf19) built on Fri Oct 21 01:07:22 PDT 2016)

The HISTORY command lists all shell commands in chronological order.
***********************************************************************************
[Not connected] >

2) This command tries to use my LDAP user id and SSL to one of the machines where an impalad runs on port 21000:

[dreeves@{obfuscated_fqdn_client_machine} ~]$ impala-shell -l -u dreeves@CDH.{OBFUSCATED_REALM}.COM.AU --ssl --database=my_dev --impalad={obfuscated_fqdn_impalad_worker_machine}:21000;
Starting Impala Shell using LDAP-based authentication
SSL is enabled. Impala server certificates will NOT be verified (set --ca_cert to change)
LDAP password for dreeves@CDH.{OBFUSCATED_REALM}.COM.AU: {my_obfuscated_LDAP_password}
Error connecting: TTransportException, Could not connect to {obfuscated_fqdn_impalad_worker_machine}:21000
Kerberos ticket found in the credentials cache, retrying the connection with a secure transport.
Error connecting: TTransportException, Could not connect to {obfuscated_fqdn_impalad_worker_machine}:21000
***********************************************************************************
Welcome to the Impala shell. (Impala Shell v2.7.0-cdh5.9.0 (4b4cf19) built on Fri Oct 21 01:07:22 PDT 2016)

Want to know what version of Impala you're connected to? Run the VERSION command to
find out!
***********************************************************************************
[Not connected] >

Is someone able to confirm whether I am entering the correct "impala-shell" commands at 1) and 2)? If they are correct I can go away and look in /var/log/impalad/ for potential issues. Thanks, Damion.
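For comparison, a minimal sketch of a Kerberos-authenticated (rather than LDAP) connection through the same HAProxy endpoint. This is only an illustration, assuming a valid TGT from kinit; the host placeholder, port and database name are taken from the post, and the --ca_cert path is hypothetical.

# Kerberos auth (-k) instead of LDAP (-l/-u), over SSL, via the HAProxy front end
kinit dreeves@CDH.{OBFUSCATED_REALM}.COM.AU
impala-shell -k --ssl -d my_dev -i {obfuscated_fqdn_haproxy_client_machine}:25003

# Optionally verify the server certificate instead of skipping verification:
impala-shell -k --ssl --ca_cert=/path/to/ca.pem -d my_dev -i {obfuscated_fqdn_haproxy_client_machine}:25003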
Labels:
- Apache Impala
- Kerberos
02-14-2017
06:57 PM
1 Kudo
We have resolved this issue. It turned out to be a compatibility issue in our wonderful Internet Explorer browser.
02-01-2017
06:28 PM
Hi, Wondering if someone could provide advice. We have CDH 5.9.0 installed and being managed by CM, with a myriad of services configured, installed and running (including Zookeeper, HDFS, YARN, Hue, Hive, HS2, Impala, Sqoop2, HBase, Oozie, Sentry, Solr/Search, Spark and Cloudera Navigator).

When I open a Microsoft IE 11 web browser, point it at the CM URL (port 7180), navigate to Cloudera Management Service -> Navigator Metadata Server -> Cloudera Navigator (from the top menu choice) and try to open the Cloudera Navigator Server URL (https://{management_node}:7187/login) in a new IE browser tab, I receive a blank screen.

I have read the following Community Support post:
http://community.cloudera.com/t5/Data-Discovery-Optimization/Cloudera-Navigator-Blank-Page-on-Internet-Explorer-11/m-p/38188/highlight/true#M44
and have enabled the IE 11 "Enterprise Mode" using this URL (it was originally disabled in the IE 11 browser):
https://answers.uillinois.edu/page.php?id=59740
I can now see the "Enterprise Mode" option under the MS IE 11 "Tools" menubar, but I still cannot see any content on the Cloudera Navigator page.

I have executed the "tree -a" and "ps -afe" commands as follows to show the process tree structure for Navigator:

[root@{obfuscated_management_node_name}]# tree -a /var/run/cloudera-scm-agent/process/1297-cloudera-mgmt-NAVIGATOR
1297-cloudera-mgmt-NAVIGATOR
├── clouderaManagerVersion.txt
├── cloudera-monitor.properties
├── cloudera-navigator-cm-auth.properties
├── cloudera-navigator.properties
├── cloudera-stack-monitor.properties
├── db.navigator.properties
├── event-filter-rules.json
├── log4j.properties
├── logs
│   ├── stderr.log
│   └── stdout.log
├── monitoringEntities.properties
├── navigator.jaas.conf
├── navigator.keytab
├── service-metrics.properties
└── supervisor.conf

1 directory, 15 files

[root@{obfuscated_management_node_name}]# ps -afe | grep 1297
clouder+ 10696 7187 0 Feb01 ? 00:06:58 /usr/java/jdk1.7.0_71/bin/java -server -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -Dmgmt.log.file=mgmt-cmf-mgmt-NAVIGATOR-{obfuscated_fqdn_of_management_node}.log.out -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dnavigator.schema.dir=/usr/share/cmf/cloudera-navigator-audit-server/schema -Dnavigator.auditModels.dir=/usr/share/cmf/cloudera-navigator-audit-server/auditModels -Xms1073741824 -Xmx1073741824 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/mgmt_mgmt-NAVIGATOR-5a18597f006f273bd64dc317f86b2618_pid10696.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -cp /run/cloudera-scm-agent/process/1297-cloudera-mgmt-NAVIGATOR:/usr/share/java/mysql-connector-java.jar:/usr/share/cmf/lib/postgresql-9.0-801.jdbc4.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:/usr/share/cmf/cloudera-navigator-audit-server/*: com.cloudera.navigator.NavigatorMain --conf-dir /run/cloudera-scm-agent/process/1297-cloudera-mgmt-NAVIGATOR

I have also confirmed port 7187 is open and listening:

[root@{obfuscated_management_node_name}]# netstat -lntup | grep LIST | grep 7187
tcp 0 0 0.0.0.0:7187 0.0.0.0:* LISTEN 15276/java
tcp 0 0 127.0.0.1:19001 0.0.0.0:* LISTEN 7187/python

Please help if you are out there, Obi Wan Kenobi! Thanks, Damion.
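As a browser-independent sanity check, a minimal sketch (assuming curl is available on the management node) that confirms the Navigator Metadata Server is actually serving the login page over TLS on port 7187:

# HEAD request, skipping certificate verification (-k); an HTTP 200/302 response
# suggests the server side is fine and the blank page is a browser rendering issue
curl -k -I https://{obfuscated_management_node_name}:7187/login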
Labels:
- Cloudera Navigator
01-26-2017
02:47 PM
Problem resolved; a case of PEBCAK. I needed to generate the Kerberos user/principal keytab file using the "ktutil" command before trying to "kinit" with the keytab:

kinit dreeves@{obfuscated-realm}.COM.AU -k -t dreeves.keytab

Once that was completed, "hdfs dfs -ls /" worked without a problem.
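For reference, a minimal sketch of the keytab-generation step described above, assuming MIT Kerberos ktutil; the key version number (-k 1) and encryption type are illustrative and must match what Active Directory actually issued for the principal.

ktutil
ktutil:  addent -password -p dreeves@{obfuscated-realm}.COM.AU -k 1 -e aes256-cts-hmac-sha1-96
Password for dreeves@{obfuscated-realm}.COM.AU:
ktutil:  wkt dreeves.keytab
ktutil:  quit
# then kinit with the keytab as shown above and re-test "hdfs dfs -ls /"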
01-26-2017
02:07 PM
Hi, I'm not sure if this is in the correct Board/Topic, but I wasn't sure which board to post into. We have set up a Kerberized CDH cluster (CDH 5.9.0) via the CM Security -> Kerberos Wizard and have the cluster communicating with an MS Active Directory pair for LDAP/Kerberos etc. No issues there.

I have set up a client node that has the following services installed:
- HDFS HttpFS
- Hive Gateway
- HiveServer2
- Hive WebHCat Server
- Hue Server
- Hue Kerberos Ticket Renewer
- Oozie Server
- Spark Gateway
- Sqoop 2 Server
- YARN (MR2 Included) Gateway

Generating a Kerberos TGT for my principal "dreeves" works, but I then cannot use HDFS. Please see below. Any advice and assistance anyone could provide would be great!

[root@{obfuscated-machinename}-ecli001 ~]# su - dreeves@{obfuscated-domain}.COM.AU
Last login: Thu Jan 26 06:04:01 AEDT 2017 on pts/1
id: cannot find name for group ID 33600512

Kerberos kinit seems OK:

[dreeves@{obfuscated-domain}@{obfuscated-machinename}]$ kinit dreeves@{obfuscated-domain}.COM.AU
Password for dreeves@{obfuscated-domain}.COM.AU: {obfuscated-password}

Kerberos klist seems OK:

[dreeves@{obfuscated-domain}@{obfuscated-machinename}]$ klist
Ticket cache: FILE:/tmp/krb5cc_33601114
Default principal: dreeves@{obfuscated-realm}.COM.AU

Valid starting       Expires              Service principal
01/27/2017 08:27:18  01/27/2017 18:27:18  krbtgt/{obfuscated-domain}.COM.AU@{obfuscated-realm}.COM.AU
	renew until 02/03/2017 08:27:07

But HDFS commands have issues:

[dreeves@{obfuscated-domain}@{obfuscated-machinename}]$ hdfs dfs -ls /
ls: failure to login

Thanks, Damion.
04-21-2016
04:26 PM
Hi, My issue was caused by me incorrectly having multiples of the same "role" on the same instance, i.e. in the configuration file "roles" section I had 2 x NAMENODE entries when I only wanted a single NN. I removed the 2nd "NAMENODE" entry and added additional "master_x" groups with specific services/roles, and I am now able to create a non-HA CDH 5.7 cluster! Thanks to those who made a comment and assisted me, much appreciated.

masters_1 {
    count: 1
    instance: ${instances.d22x} {
        tags {
            group: MWHAD_Group_1
            name: Client1_POC_MWHAD_Instance
        }
    }
    roles {
        HDFS: [NAMENODE, NAMENODE, DATANODE, BALANCER, HTTPFS, GATEWAY]
        YARN: [NODEMANAGER, GATEWAY]
        ZOOKEEPER: [SERVER]
        OOZIE: [OOZIE_SERVER]
        HUE: [HUE_SERVER]
        HIVE: [HIVESERVER2, HIVEMETASTORE, WEBHCAT, GATEWAY]
    }
}

Cheers, Damion.
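A minimal sketch of the kind of "master_x" split described above. The masters_2 group, its role assignments and the SECONDARYNAMENODE placement are illustrative assumptions rather than the actual final config; the role names are standard Cloudera Manager role types.

masters_1 {
    count: 1
    instance: ${instances.d22x}
    roles {
        HDFS: [NAMENODE, HTTPFS, GATEWAY]   # a single NAMENODE only
        ZOOKEEPER: [SERVER]
        YARN: [NODEMANAGER, GATEWAY]
    }
}

masters_2 {
    count: 1
    instance: ${instances.d22x}
    roles {
        HDFS: [SECONDARYNAMENODE, DATANODE, BALANCER]
        HIVE: [HIVESERVER2, HIVEMETASTORE, WEBHCAT, GATEWAY]
        HUE: [HUE_SERVER]
        OOZIE: [OOZIE_SERVER]
    }
}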
04-19-2016
05:57 PM
Hi, Just wondering if anyone has experienced this issue with Director 2, and whether the Cloudera Director Support gurus can advise on it?

Configuration Details:
-----------------------------
When using both the Director Server GUI and the Director Client "bootstrap-remote" command to try and create a cluster, I'm using:
a) The same AWS VPC/Subnet/Security Group/Network ACLs, with inbound/outbound rules set to "ALL Traffic".
b) For the Director 2.0 instance: my own AWS EC2 AMI (a "c4.xlarge" instance running CentOS 6.7, fully patched, with the /root volume set to 50GB).
c) For the Cloudera Manager instance: my own AWS EC2 AMI (an "r3.xlarge" instance running CentOS 6.7, fully patched, with the /root volume set to 500GB) that is successfully created by the Director instance above.
d) For the CDH Master/Worker instances: my own AWS EC2 AMI (a "d2.2xlarge" instance running CentOS 6.7, fully patched, with the /root volume set to 500GB and 6 x 2TB instance store volumes for HDFS); again, these get created successfully by the Director instance above.

Issue Occurring:
----------------------
1) When I use the Director Server GUI to create a cluster: all seems to go well until I get the error shown below in the application.log file. If I log in to the Cloudera Manager GUI, it shows all "Services" (HDFS, HIVE, HUE, OOZIE, YARN, ZOOKEEPER) with red configuration issue beacons next to them.

[2016-04-19 12:31:14] INFO [pipeline-thread-3] - c.c.l.pipeline.util.PipelineRunner: >> AddServices/5 [Environment{name='DEV_Environment', provider=InstanceProviderConfig{type='aws'}, credentials=SshCre ...
[2016-04-19 12:31:14] INFO [pipeline-thread-3] - c.c.l.bootstrap.cluster.AddServices: Creating and configuring services [HDFS, HIVE, HUE, OOZIE, SQOOP, YARN, ZOOKEEPER]
[2016-04-19 12:31:14] INFO [pipeline-thread-3] - c.c.launchpad.pipeline.AbstractJob: Creating cluster services
[2016-04-19 12:31:14] INFO [pipeline-thread-3] - c.c.launchpad.pipeline.AbstractJob: Assigning roles to instances
[2016-04-19 12:31:14] INFO [pipeline-thread-3] - c.c.l.bootstrap.cluster.AddServices: Creating 24 roles for service CD-HDFS-caRUetBM
[2016-04-19 12:31:14] ERROR [pipeline-thread-3] - c.c.l.pipeline.util.PipelineRunner: Attempt to execute job failed
com.cloudera.launchpad.pipeline.UnrecoverablePipelineError: ClouderaManagerException{message="API call to Cloudera Manager failed. Method=RolesResource.createRoles",causeClass=class javax.ws.rs.BadRequestException, causeMessage="null"}
at com.cloudera.launchpad.bootstrap.cluster.AddServices.run(AddServices.java:319) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0]
at com.cloudera.launchpad.bootstrap.cluster.AddServices.run(AddServices.java:98) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0]

2) When I use the Director Client CLI ("bootstrap-remote" command): I have created a new config file on the Director instance (/usr/lib64/cloudera-director/client/client1_dev_cdh_cluster.aws.cluster.conf). As per the "Configuration Details" section above, the config file references a single AWS VPC Subnet/Security Group (with the specific inbound/outbound rules defined in the Director 2.0 User Guide). The config file doesn't use external databases for anything (just the normal H2 database and local PostgreSQL databases for the Cloudera amon/rman/nav/navms/hue/hive metastore etc.). When I try to create a new cluster using the "bootstrap-remote" command with the Cloudera Director Client, it fails with the same error as when using the Cloudera Director Server GUI.
The Director Client "bootstrap-remote" command I am using is shown below:

[root@]# cloudera-director bootstrap-remote client1_dev_cdh_cluster.aws.cluster.conf --lp.remote.username=admin --lp.remote.password={obfuscated} --lp.remote.hostAndPort=10.0.1.247:7189
Process logs can be found at /root/.cloudera-director/logs/application.log
Plugins will be loaded from /var/lib/cloudera-director-plugins
Cloudera Director 2.0.0 initializing ...
Connecting to http://10.0.1.247:7189
Current user roles: [ROLE_ADMIN, ROLE_READONLY]
Configuration file passes all validation checks.
Creating a new environment...
Creating external database servers if configured...
Creating a new Cloudera Manager...
Creating a new CDH cluster...
* Requesting an instance for Cloudera Manager ...... done
* Installing screen package (1/1) .... done
* Running custom bootstrap script on 10.0.1.240 ..... done
* Inspecting capabilities of 10.0.1.240 ... done
* Normalizing 10.0.1.240 ... done
* Installing ntp package (1/4) ... done
* Installing curl package (2/4) .... done
* Installing nscd package (3/4) .... done
* Installing gdisk package (4/4) ........... done
* Resizing instance root partition .... done
* Mounting all instance disk drives ...... done
* Waiting for new external database servers to start running ... done
* Installing repositories for Cloudera Manager ... done
* Installing cloudera-manager-daemons package (1/2) ... done
* Installing cloudera-manager-server package (2/2) .... done
* Installing cloudera-manager-server-db-2 package (1/1) .... done
* Starting embedded PostgreSQL database ... done
* Starting Cloudera Manager server ... done
* Waiting for Cloudera Manager server to start ...
* Waiting for Cloudera Manager server to start ... done
* Setting Cloudera Manager License ... done
* Enabling Enterprise Trial ... done
* Deploying Cloudera Manager agent ... done
* Waiting for Cloudera Manager to deploy agent on 10.0.1.240 …
* Setting up Cloudera Management Services ......... done
* Inspecting capabilities of 10.0.1.240 ... done
* Done ... Cloudera Manager ready.
* Preparing instances in parallel (20 at a time) ................................................ done
* Installing Cloudera Manager agents on all instances in parallel (20 at a time)................. done
* Creating cluster: Client1_DEV_CDH_Cluster ......................................................done
* Downloading parcels: CDH-5.5.2-1.cdh5.5.2.p0.4 .................................................done
* Raising rate limits for parcel distribution to 256000KB/s with 5 concurrent uploads.............done
* Distributing parcels: CDH-5.5.2-1.cdh5.5.2.p0.4 ................................................done
* Activating parcels: CDH-5.5.2-1.cdh5.5.2.p0.4 ..................................................done
* Creating cluster services.......................................................................done
* ClouderaManagerException{message="API call to Cloudera Manager failed. Method=RolesResource.createRoles",causeClass=class javax.ws.rs.BadRequestException, causeMessage="null"} …

Extract showing the failure from the /usr/lib64/cloudera-director/server/logs/application.log file is shown below:

[2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> ParallelForEachInBatches/5 [20, class com.cloudera.launchpad.bootstrap.cluster.InstallJdbcDriverPackages, [PluggableComputeInst ...
[2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.p.u.ParallelForEachInBatches: Generating batch for job class com.cloudera.launchpad.bootstrap.cluster.InstallJdbcDriverPackages of size 3 [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << DatabaseValue{delegate=PersistentValueEntity{id=9105, pipeline=9f8c84b8-8f97-48b6-9def-9cd786eefae1, ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> UnboundedParallelForEach/4 [class com.cloudera.launchpad.bootstrap.cluster.InstallJdbcDriverPackages, [PluggableComputeInstance ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.p.DatabasePipelineService: Starting pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00000-ba5d7e82-4484-4823-b5dc-e89cdcd5bac4' with root job com.cloudera.launchpad.bootstrap.cluster.InstallJdbcDriverPackages and listener com.cloudera.launchpad.pipeline.listener.NoopPipelineStageListener [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.p.DatabasePipelineService: Starting pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00001-9637f2ff-daa9-452d-bcbb-475a6d8e4759' with root job com.cloudera.launchpad.bootstrap.cluster.InstallJdbcDriverPackages and listener com.cloudera.launchpad.pipeline.listener.NoopPipelineStageListener [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.pipeline.util.PipelineRunner: >> InstallJdbcDriverPackages/3 [PluggableComputeInstance{ipAddress=Optional.of(10.0.1.49), delegate=null} Instance{virtualInstance= ... [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.pipeline.util.PipelineRunner: << None{} [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.p.DatabasePipelineService: Starting pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00002-b5b6a1fa-9404-4929-bce0-6d0ea967e699' with root job com.cloudera.launchpad.bootstrap.cluster.InstallJdbcDriverPackages and listener com.cloudera.launchpad.pipeline.listener.NoopPipelineStageListener [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.p.s.PipelineRepositoryService: Pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00000-ba5d7e82-4484-4823-b5dc-e89cdcd5bac4': RUNNING -> COMPLETED [2016-04-20 09:06:05] INFO [pipeline-thread-59] - c.c.l.pipeline.util.PipelineRunner: >> InstallJdbcDriverPackages/3 [PluggableComputeInstance{ipAddress=Optional.of(10.0.1.50), delegate=null} Instance{virtualInstance= ... [2016-04-20 09:06:05] INFO [pipeline-thread-59] - c.c.l.pipeline.util.PipelineRunner: << None{} [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << DatabaseValue{delegate=PersistentValueEntity{id=9122, pipeline=9f8c84b8-8f97-48b6-9def-9cd786eefae1, ... [2016-04-20 09:06:05] INFO [pipeline-thread-59] - c.c.l.p.s.PipelineRepositoryService: Pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00001-9637f2ff-daa9-452d-bcbb-475a6d8e4759': RUNNING -> COMPLETED [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.pipeline.util.PipelineRunner: >> InstallJdbcDriverPackages/3 [PluggableComputeInstance{ipAddress=Optional.of(10.0.1.48), delegate=null} Instance{virtualInstance= ... 
[2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.pipeline.util.PipelineRunner: << None{} [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.p.s.PipelineRepositoryService: Pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00002-b5b6a1fa-9404-4929-bce0-6d0ea967e699': RUNNING -> COMPLETED [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> UnboundedWaitForAllPipelines/3 [9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00000-ba5d7e82-4484-4823-b5dc-e89cdcd5bac4, 9f8c84b8-8f9 ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << None{} [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> WaitForAllValues/3 [null, null, null] [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << DatabaseValue{delegate=PersistentValueEntity{id=9123, pipeline=9f8c84b8-8f97-48b6-9def-9cd786eefae1, ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> CreateList/1 [[]] [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << DatabaseValue{delegate=PersistentValueEntity{id=9124, pipeline=9f8c84b8-8f97-48b6-9def-9cd786eefae1, ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> ParallelForEachInBatches/5 [20, class com.cloudera.launchpad.bootstrap.cluster.InstallKerberosPackages, [PluggableComputeInstan ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.p.u.ParallelForEachInBatches: Generating batch for job class com.cloudera.launchpad.bootstrap.cluster.InstallKerberosPackages of size 3 [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << DatabaseValue{delegate=PersistentValueEntity{id=9130, pipeline=9f8c84b8-8f97-48b6-9def-9cd786eefae1, ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> UnboundedParallelForEach/4 [class com.cloudera.launchpad.bootstrap.cluster.InstallKerberosPackages, [PluggableComputeInstance{i ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.p.DatabasePipelineService: Starting pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00000-781b56ba-9a23-48eb-83fd-edaef2e1b797' with root job com.cloudera.launchpad.bootstrap.cluster.InstallKerberosPackages and listener com.cloudera.launchpad.pipeline.listener.NoopPipelineStageListener [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.p.DatabasePipelineService: Starting pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00001-85c97693-7df7-48da-80fa-cd5a75dce057' with root job com.cloudera.launchpad.bootstrap.cluster.InstallKerberosPackages and listener com.cloudera.launchpad.pipeline.listener.NoopPipelineStageListener [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.pipeline.util.PipelineRunner: >> InstallKerberosPackages/3 [PluggableComputeInstance{ipAddress=Optional.of(10.0.1.49), delegate=null} Instance{virtualInstance= ... 
[2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.pipeline.util.PipelineRunner: << None{} [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.p.DatabasePipelineService: Starting pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00002-f7cbec2b-6f4c-4da7-8e4e-77a32bdb521c' with root job com.cloudera.launchpad.bootstrap.cluster.InstallKerberosPackages and listener com.cloudera.launchpad.pipeline.listener.NoopPipelineStageListener [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.p.s.PipelineRepositoryService: Pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00000-781b56ba-9a23-48eb-83fd-edaef2e1b797': RUNNING -> COMPLETED [2016-04-20 09:06:05] INFO [pipeline-thread-59] - c.c.l.pipeline.util.PipelineRunner: >> InstallKerberosPackages/3 [PluggableComputeInstance{ipAddress=Optional.of(10.0.1.50), delegate=null} Instance{virtualInstance= ... [2016-04-20 09:06:05] INFO [pipeline-thread-59] - c.c.l.pipeline.util.PipelineRunner: << None{} [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << DatabaseValue{delegate=PersistentValueEntity{id=9147, pipeline=9f8c84b8-8f97-48b6-9def-9cd786eefae1, ... [2016-04-20 09:06:05] INFO [pipeline-thread-59] - c.c.l.p.s.PipelineRepositoryService: Pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00001-85c97693-7df7-48da-80fa-cd5a75dce057': RUNNING -> COMPLETED [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.pipeline.util.PipelineRunner: >> InstallKerberosPackages/3 [PluggableComputeInstance{ipAddress=Optional.of(10.0.1.48), delegate=null} Instance{virtualInstance= ... [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.pipeline.util.PipelineRunner: << None{} [2016-04-20 09:06:05] INFO [pipeline-thread-58] - c.c.l.p.s.PipelineRepositoryService: Pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00002-f7cbec2b-6f4c-4da7-8e4e-77a32bdb521c': RUNNING -> COMPLETED [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> UnboundedWaitForAllPipelines/3 [9f8c84b8-8f97-48b6-9def-9cd786eefae1/child-00000-781b56ba-9a23-48eb-83fd-edaef2e1b797, 9f8c84b8-8f9 ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << None{} [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> WaitForAllValues/3 [null, null, null] [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << DatabaseValue{delegate=PersistentValueEntity{id=9148, pipeline=9f8c84b8-8f97-48b6-9def-9cd786eefae1, ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> CreateList/1 [[]] [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: << DatabaseValue{delegate=PersistentValueEntity{id=9149, pipeline=9f8c84b8-8f97-48b6-9def-9cd786eefae1, ... [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: >> AddServices/5 [Environment{name='Client1_DEV_CDH_Cluster Environment', provider=InstanceProviderConfig{type='aws'} ... 
[2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.bootstrap.cluster.AddServices: Creating and configuring services [HDFS, YARN, ZOOKEEPER, HIVE, HUE, OOZIE] [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.launchpad.pipeline.AbstractJob: Creating cluster services [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.launchpad.pipeline.AbstractJob: Assigning roles to instances [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.bootstrap.cluster.AddServices: Creating 24 roles for service CD-HDFS-yAisuTBh [2016-04-20 09:06:05] ERROR [pipeline-thread-51] - c.c.l.pipeline.util.PipelineRunner: Attempt to execute job failed com.cloudera.launchpad.pipeline.UnrecoverablePipelineError: ClouderaManagerException{message="API call to Cloudera Manager failed. Method=RolesResource.createRoles",causeClass=class javax.ws.rs.BadRequestException, causeMessage="null"} at com.cloudera.launchpad.bootstrap.cluster.AddServices.run(AddServices.java:319) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.bootstrap.cluster.AddServices.run(AddServices.java:98) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.job.Job5.runUnchecked(Job5.java:34) ~[launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.job.Job5$$FastClassBySpringCGLIB$$54178505.invoke(<generated>) ~[spring-core-4.1.6.RELEASE.jar!/:2.0.0] at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:97) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at com.cloudera.launchpad.pipeline.PipelineJobProfiler$1.call(PipelineJobProfiler.java:67) ~[launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.codahale.metrics.Timer.time(Timer.java:101) ~[metrics-core-3.1.0.jar!/:3.1.0] at com.cloudera.launchpad.pipeline.PipelineJobProfiler.profileJobRun(PipelineJobProfiler.java:63) ~[launchpad-pipeline-2.0.0.jar!/:2.0.0] at sun.reflect.GeneratedMethodAccessor133.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_71] at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_71] at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:68) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at com.cloudera.launchpad.bootstrap.cluster.AddServices$$EnhancerBySpringCGLIB$$fe51a31f.runUnchecked(<generated>) ~[spring-core-4.1.6.RELEASE.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.util.PipelineRunner$JobCallable.call(PipelineRunner.java:159) [launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.util.PipelineRunner$JobCallable.call(PipelineRunner.java:130) [launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.github.rholder.retry.AttemptTimeLimiters$NoAttemptTimeLimit.call(AttemptTimeLimiters.java:78) [guava-retrying-1.0.6.jar!/:na] at com.github.rholder.retry.Retryer.call(Retryer.java:110) [guava-retrying-1.0.6.jar!/:na] at com.cloudera.launchpad.pipeline.util.PipelineRunner.attemptMultipleJobExecutionsWithRetries(PipelineRunner.java:99) [launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.DatabasePipelineRunner.run(DatabasePipelineRunner.java:125) [launchpad-pipeline-database-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.ExceptionHandlingRunnable.run(ExceptionHandlingRunnable.java:57) [launchpad-common-2.0.0.jar!/:2.0.0] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_71] at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_71] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_71] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_71] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71] Caused by: com.cloudera.api.ext.ClouderaManagerException: API call to Cloudera Manager failed. Method=RolesResource.createRoles at com.cloudera.api.ext.ClouderaManagerClientProxy.invoke(ClouderaManagerClientProxy.java:97) ~[launchpad-cloudera-manager-api-ext-2.0.0.jar!/:2.0.0] at com.sun.proxy.$Proxy201.createRoles(Unknown Source) ~[na:na] at com.cloudera.launchpad.bootstrap.cluster.AddServices.manuallyAssignRoles(AddServices.java:402) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.bootstrap.cluster.AddServices.run(AddServices.java:285) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0] ... 33 common frames omitted [2016-04-20 09:06:05] ERROR [pipeline-thread-51] - c.c.l.p.DatabasePipelineRunner: Encountered an unrecoverable error com.cloudera.launchpad.pipeline.UnrecoverablePipelineError: ClouderaManagerException{message="API call to Cloudera Manager failed. 
Method=RolesResource.createRoles",causeClass=class javax.ws.rs.BadRequestException, causeMessage="null"} at com.cloudera.launchpad.bootstrap.cluster.AddServices.run(AddServices.java:319) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.bootstrap.cluster.AddServices.run(AddServices.java:98) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.job.Job5.runUnchecked(Job5.java:34) ~[launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.job.Job5$$FastClassBySpringCGLIB$$54178505.invoke(<generated>) ~[spring-core-4.1.6.RELEASE.jar!/:2.0.0] at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:97) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at com.cloudera.launchpad.pipeline.PipelineJobProfiler$1.call(PipelineJobProfiler.java:67) ~[launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.codahale.metrics.Timer.time(Timer.java:101) ~[metrics-core-3.1.0.jar!/:3.1.0] at com.cloudera.launchpad.pipeline.PipelineJobProfiler.profileJobRun(PipelineJobProfiler.java:63) ~[launchpad-pipeline-2.0.0.jar!/:2.0.0] at sun.reflect.GeneratedMethodAccessor133.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_71] at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_71] at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:68) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653) ~[spring-aop-4.1.6.RELEASE.jar!/:4.1.6.RELEASE] at com.cloudera.launchpad.bootstrap.cluster.AddServices$$EnhancerBySpringCGLIB$$fe51a31f.runUnchecked(<generated>) ~[spring-core-4.1.6.RELEASE.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.util.PipelineRunner$JobCallable.call(PipelineRunner.java:159) ~[launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.util.PipelineRunner$JobCallable.call(PipelineRunner.java:130) ~[launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.github.rholder.retry.AttemptTimeLimiters$NoAttemptTimeLimit.call(AttemptTimeLimiters.java:78) ~[guava-retrying-1.0.6.jar!/:na] at com.github.rholder.retry.Retryer.call(Retryer.java:110) 
~[guava-retrying-1.0.6.jar!/:na] at com.cloudera.launchpad.pipeline.util.PipelineRunner.attemptMultipleJobExecutionsWithRetries(PipelineRunner.java:99) ~[launchpad-pipeline-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.pipeline.DatabasePipelineRunner.run(DatabasePipelineRunner.java:125) ~[launchpad-pipeline-database-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.ExceptionHandlingRunnable.run(ExceptionHandlingRunnable.java:57) [launchpad-common-2.0.0.jar!/:2.0.0] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_71] at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_71] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_71] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_71] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71] Caused by: com.cloudera.api.ext.ClouderaManagerException: API call to Cloudera Manager failed. Method=RolesResource.createRoles at com.cloudera.api.ext.ClouderaManagerClientProxy.invoke(ClouderaManagerClientProxy.java:97) ~[launchpad-cloudera-manager-api-ext-2.0.0.jar!/:2.0.0] at com.sun.proxy.$Proxy201.createRoles(Unknown Source) ~[na:na] at com.cloudera.launchpad.bootstrap.cluster.AddServices.manuallyAssignRoles(AddServices.java:402) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0] at com.cloudera.launchpad.bootstrap.cluster.AddServices.run(AddServices.java:285) ~[launchpad-bootstrap-2.0.0.jar!/:2.0.0] ... 33 common frames omitted [2016-04-20 09:06:05] ERROR [pipeline-thread-51] - c.c.l.p.DatabasePipelineRunner: Pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1' failed at com.cloudera.launchpad.bootstrap.cluster.AddServices$$EnhancerBySpringCGLIB$$fe51a31f at com.cloudera.launchpad.bootstrap.cluster.BootstrapClouderaManagerCluster:7 [2016-04-20 09:06:05] INFO [pipeline-thread-51] - c.c.l.p.s.PipelineRepositoryService: Pipeline '9f8c84b8-8f97-48b6-9def-9cd786eefae1': RUNNING -> ERROR [2016-04-20 09:06:06] INFO [pipeline-thread-51] - c.c.l.d.ClusterRepositoryService: Cluster 'Client1_DEV_CDH_Cluster': BOOTSTRAPPING -> BOOTSTRAP_FAILED Thanks, Damion.
04-18-2016
11:11 PM
Hi, I have found the issue. Let's just say it was a case of PEBCAK. My AMIs were created with 500GB /root volumes, but in my Director config file I had mistyped the /root volume size as 50GB. I have modified it to 500GB in the config file and am re-running "bootstrap-remote". Thanks, Damion.
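For anyone comparing their own config, a minimal sketch of where the root volume size lives in a Director AWS instance template, assuming the standard Cloudera Director AWS plugin property names; the image ID and instance type shown are placeholders.

instances {
    d22x {
        type: d2.2xlarge
        image: ami-XXXXXXXX        # CentOS 6.7 AMI placeholder
        rootVolumeSizeGB: 500      # the value that had been mistyped as 50
        rootVolumeType: gp2
    }
}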