Member since: 04-05-2017
Posts: 12
Kudos Received: 1
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9583 | 04-25-2017 01:53 PM
 | 9703 | 04-17-2017 12:56 PM
08-22-2017
10:15 AM
I have verified that all of the parcels requested in the cloudera-scm-server.log are contained in the /opt/cloudera/parcel-repo directory on the master/name node host server, as are all of the SLES parcels.

hadoop1:/opt/cloudera/parcel-repo # ls -l
total 7392988
-rw-r----- 1 cloudera-scm users   16033068 Aug  1 08:15 ACCUMULO-1.7.2-5.5.0.ACCUMULO5.5.0.p0.8-el7.parcel
-rw-r----- 1 cloudera-scm users         41 Aug  1 08:15 ACCUMULO-1.7.2-5.5.0.ACCUMULO5.5.0.p0.8-el7.parcel.sha
-rw-r----- 1 cloudera-scm users        794 Aug  1 08:15 ACCUMULO-1.7.2-5.5.0.ACCUMULO5.5.0.p0.8-el7.parcel.torrent
-rw-r----- 1 cloudera-scm users   16085305 May 23 12:00 ACCUMULO-1.7.2-5.5.0.ACCUMULO5.5.0.p0.8-sles12.parcel
-rw-r----- 1 cloudera-scm users         41 May 23 12:00 ACCUMULO-1.7.2-5.5.0.ACCUMULO5.5.0.p0.8-sles12.parcel.sha
-rw-r----- 1 cloudera-scm users        797 May 23 12:00 ACCUMULO-1.7.2-5.5.0.ACCUMULO5.5.0.p0.8-sles12.parcel.torrent
-rw-r----- 1 cloudera-scm users 1510291137 Apr 12 12:28 CDH-5.10.1-1.cdh5.10.1.p0.10-sles12.parcel
-rw-r----- 1 cloudera-scm users         41 Apr 12 12:28 CDH-5.10.1-1.cdh5.10.1.p0.10-sles12.parcel.sha
-rw-r----- 1 cloudera-scm users      57790 Apr 12 12:28 CDH-5.10.1-1.cdh5.10.1.p0.10-sles12.parcel.torrent
-rw-r----- 1 cloudera-scm users 1528077457 May 15 18:30 CDH-5.11.0-1.cdh5.11.0.p0.34-sles12.parcel
-rw-r----- 1 cloudera-scm users         41 May 15 18:30 CDH-5.11.0-1.cdh5.11.0.p0.34-sles12.parcel.sha
-rw-r----- 1 cloudera-scm users      58470 May 15 18:31 CDH-5.11.0-1.cdh5.11.0.p0.34-sles12.parcel.torrent
-rw-r----- 1 cloudera-scm users 1592734904 Aug 17 17:43 CDH-5.11.1-1.cdh5.11.1.p0.4-el7.parcel
-rw-r----- 1 cloudera-scm users         41 Aug 17 17:43 CDH-5.11.1-1.cdh5.11.1.p0.4-el7.parcel.sha
-rw-r----- 1 cloudera-scm users      60926 Aug 17 17:43 CDH-5.11.1-1.cdh5.11.1.p0.4-el7.parcel.torrent
-rw-r----- 1 cloudera-scm users 1520750328 Aug 17 18:07 CDH-5.11.1-1.cdh5.11.1.p0.4-sles12.parcel
-rw-r----- 1 cloudera-scm users         41 Aug 17 18:07 CDH-5.11.1-1.cdh5.11.1.p0.4-sles12.parcel.sha
-rw-r----- 1 cloudera-scm users      58189 Aug 17 18:08 CDH-5.11.1-1.cdh5.11.1.p0.4-sles12.parcel.torrent
-rw-r----- 1 cloudera-scm users   68055715 May 15 18:27 KAFKA-2.1.1-1.2.1.1.p0.18-sles12.parcel
-rw-r----- 1 cloudera-scm users         41 May 15 18:27 KAFKA-2.1.1-1.2.1.1.p0.18-sles12.parcel.sha
-rw-r----- 1 cloudera-scm users       2764 May 15 18:27 KAFKA-2.1.1-1.2.1.1.p0.18-sles12.parcel.torrent
-rw-r----- 1 cloudera-scm users  445585375 Apr 20 21:35 KUDU-1.3.0-1.cdh5.11.0.p0.12-sles12.parcel
-rw-r----- 1 cloudera-scm users         41 Apr 20 21:35 KUDU-1.3.0-1.cdh5.11.0.p0.12-sles12.parcel.sha
-rw-r----- 1 cloudera-scm users      17169 Apr 20 21:35 KUDU-1.3.0-1.cdh5.11.0.p0.12-sles12.parcel.torrent
-rw-r----- 1 cloudera-scm users  362683461 Aug 17 17:40 KUDU-1.4.0-1.cdh5.12.0.p0.25-el7.parcel
-rw-r----- 1 cloudera-scm users         41 Aug 17 17:40 KUDU-1.4.0-1.cdh5.12.0.p0.25-el7.parcel.sha
-rw-r----- 1 cloudera-scm users      14006 Aug 17 17:40 KUDU-1.4.0-1.cdh5.12.0.p0.25-el7.parcel.torrent
-rw-r----- 1 cloudera-scm users  509716601 Aug 17 17:41 KUDU-1.4.0-1.cdh5.12.0.p0.25-sles12.parcel
-rw-r----- 1 cloudera-scm users         41 Aug 17 17:41 KUDU-1.4.0-1.cdh5.12.0.p0.25-sles12.parcel.sha
-rw-r----- 1 cloudera-scm users      19629 Aug 17 17:41 KUDU-1.4.0-1.cdh5.12.0.p0.25-sles12.parcel.torrent
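For completeness, this is the kind of check I would also run to rule out a checksum mismatch between a parcel and its .sha file (as far as I know the .sha files in parcel-repo hold the parcel's SHA-1 hash; treat the loop as a sketch):

cd /opt/cloudera/parcel-repo
for p in *.parcel; do
  expected=$(cat "$p.sha")                  # hash Cloudera Manager expects
  actual=$(sha1sum "$p" | awk '{print $1}') # hash of the file on disk
  if [ "$expected" = "$actual" ]; then echo "OK        $p"; else echo "MISMATCH  $p"; fi
done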
08-22-2017
09:56 AM
I have a four-node SLES 12.2 cluster with one master/name node and three data nodes that needs to be moved to RHEL so Solr and other parcels can be used. I added a new RHEL edge node and attempted to configure it using Cloudera Manager (Installation Path A). CM required that all nodes have the same agent, so I chose the default of updating all nodes to the same agent version as the CM host. The new RHEL edge node completed, but the entire SLES cluster got stuck activating the agent (which shouldn't have been required). After restarting the CM host the entire environment is down, and cloudera-scm-server.log shows the same three error messages over and over:

2017-08-21 17:00:54,110 WARN 1179261114@scm-web-0:com.cloudera.parcel.components.LocalParcelManagerImpl: Parcel does not exist in local repo: KAFKA-2.1.1-1.2.1.1.p0.18-el7.parcel
2017-08-21 17:00:54,130 WARN 1179261114@scm-web-0:com.cloudera.parcel.components.LocalParcelManagerImpl: Parcel does not exist in local repo: CDH-5.11.0-1.cdh5.11.0.p0.34-el7.parcel
2017-08-21 17:00:54,146 WARN 1179261114@scm-web-0:com.cloudera.parcel.components.LocalParcelManagerImpl: Parcel does not exist in local repo: KUDU-1.3.0-1.cdh5.11.0.p0.12-el7.parcel

These appear to be the RHEL (el7) parcels rather than the SLES parcels, which I have validated and which are found here:

/opt/cloudera/parcel-repo/CDH-5.11.0-1.cdh5.11.0.p0.34-sles12.parcel
/opt/cloudera/parcel-repo/KAFKA-2.1.1-1.2.1.1.p0.18-sles12.parcel
/opt/cloudera/parcel-repo/KUDU-1.3.0-1.cdh5.11.0.p0.12-sles12.parcel

I have verified that CM is pointing to the correct parcel locations (/opt/cloudera/parcels and /opt/cloudera/parcel-repo). Any suggestions on next steps?
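For additional context, this is the per-node check I plan to run to confirm what each host is actually running and which CM server its agent reports to (the hostnames below are placeholders for my nodes; /etc/cloudera-scm-agent/config.ini is the agent's configuration file):

for host in hadoop1 hadoop2 hadoop3 hadoop4 rheledge1; do
  echo "== $host =="
  ssh root@"$host" '
    grep -E "^(server_host|server_port)" /etc/cloudera-scm-agent/config.ini  # which CM server the agent points at
    grep -E "^(NAME|VERSION)=" /etc/os-release                               # SLES vs RHEL
    rpm -q cloudera-manager-agent                                            # installed agent package version
  '
done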
Labels:
- Cloudera Manager
08-02-2017
01:13 PM
Your answer is very confusing. Is there a single example of using a job.properties file to provide the --username and --password information needed by a jdbc:sqlserver:... driver? Do I need to add arguments? I can add arguments for the entire Sqoop action command within my Hue/Oozie workflow editor, and it works with hardcoded username and password values, but I don't want hardcoded values. How about a screenshot showing the Sqoop command arguments and a separate screenshot showing the Properties page? Does anyone at Cloudera have one of these they can share with the community? It would make the trial-and-error, brute-force method of trying all possible combinations of everything much easier. Thanks.
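To be concrete, the kind of example I'm hoping for would look roughly like this (dbUser/dbPassword are names I made up; I have not confirmed this is the supported pattern, which is exactly what I'm asking about):

# In job.properties (or on the workflow's Properties page in Hue) -- hypothetical property names:
dbUser=sqoop_svc
dbPassword=changeme

# In the Sqoop action command, referencing them with Oozie ${ } parameter syntax:
import --connect "jdbc:sqlserver://MSSQLP001;database=Cust" --username ${dbUser} --password ${dbPassword} --table Customer_Address -m 1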
06-28-2017
08:13 AM
cdh5.11.0, Sqoop 1.4.6. Using Hue to create a workflow with Sqoop 1 works great for pulling an entire table from MS SQL Server, but it fails for tables that are not in the default (dbo) schema. Adding --schema schema_name_here anywhere in the sqoop command results in an "Error parsing arguments for import:" with an "Unrecognized argument: --schema".

Here is my sqoop command as entered in Hue:

import --connect jdbc:sqlserver://MSSQLP001;database=Cust --username ******* --password ******* --table Customer_Address --delete-target-dir --target-dir hdfs://cdh.dom:8020/user/sqlget/Cust/Incoming/Raw/Customer_Address -m 1 --schema customer

Here is the error it produces in the logs:

2017-06-28 09:21:26,416 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2017-06-28 09:21:26,465 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 1.4.6-cdh5.11.0
2017-06-28 09:21:26,481 [main] WARN org.apache.sqoop.tool.BaseSqoopTool - Setting your password on the command-line is insecure. Consider using -P instead.
2017-06-28 09:21:26,482 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Error parsing arguments for import:
2017-06-28 09:21:26,482 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized argument: --schema
2017-06-28 09:21:26,482 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized argument: customer

<<< Invocation of Sqoop command completed <<<

I have tried moving the --schema argument into every possible position between the other arguments (connect and username, username and password, password and table, table and delete-target-dir, delete-target-dir and target-dir, target-dir and m, and last, as shown above). The only difference this makes is that all arguments after --schema are unrecognized. The Apache Sqoop User Guide shows that v1.4.6 supports this argument. Is the --schema argument not supported within the Sqoop 1 command? I have successfully used this workflow to pull dozens of tables, but it only works for tables in the default (dbo) schema of the MS SQL Server database. If there is any Cloudera documentation, examples, or other information on how to use Hue workflows (Oozie) with Sqoop, I would really appreciate it. I have had to resort to trial-and-error, brute-force methods, trying every possible permutation, so having real documentation or examples would be really helpful.
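One variation I still intend to test, based on the Sqoop User Guide's note that connector-specific options belong after a standalone "--" separator, is moving --schema behind that separator (a sketch only; I have not yet confirmed that Hue passes the separator through to the Sqoop action):

import --connect jdbc:sqlserver://MSSQLP001;database=Cust --username ******* --password ******* --table Customer_Address --delete-target-dir --target-dir hdfs://cdh.dom:8020/user/sqlget/Cust/Incoming/Raw/Customer_Address -m 1 -- --schema customer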
Labels:
- Apache Oozie
- Apache Sqoop
- Cloudera Hue
06-14-2017
04:21 PM
I'm not sure how to identify the Kafka broker version easily. I did find the file kafka_2.10-0.10.1.2.1.0.0-165.jar.asc in the /libs folder where Kafka is installed, so I am assuming I am running Kafka 0.10.1. I did get both the ConsumeKafka and ConsumeKafka_0_10 connectors to work. Thanks. Now off to figure out why PutHiveStreaming doesn't work, but that will be for a different post.
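For anyone else trying the same thing, this is the quick check I used (the libs path depends on where Kafka is installed, so the placeholder below is just wherever your install lives):

ls <kafka-install-dir>/libs | grep '^kafka_'
# e.g. kafka_2.10-0.10.1.2.1.0.0-165.jar -> Scala 2.10 build of Kafka 0.10.1.x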
06-14-2017
12:14 PM
I am unable to get the GetKafka connector to connect to Kafka. I have a single-server install of HDF-2.1; NiFi, Kafka, and ZooKeeper are all installed on the same server. There is no security. I have verified the ZooKeeper port by connecting to ZooKeeper from other machines like this:

$ telnet 192.168.99.100 2181
Trying 192.168.99.100...
Connected to cb675348f5c8.
Escape character is '^]'.
(Ctrl-C to disconnect)

Here is the error I get from the GetKafka connector:

Any help would be greatly appreciated.
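The same spot-check against the broker port itself, in case that is relevant (9092 is only the default Kafka listener port; the actual port comes from the broker's listeners setting):

$ telnet 192.168.99.100 9092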
Labels:
- Apache Kafka
- Apache NiFi
04-25-2017
01:53 PM
Here is the actual solution for uninstalling the lower version of Java (the jdk-1.6.0_31-fcs.x86_64 package, installed under /usr/java).

1. Determine whether this Java version was installed using RPM:
   - Open a terminal window and log in as the super user.
   - List the installed packages: rpm -qa
   - Copy the long list of packages to a text editor so you can search it, and search for 'jdk-1.6.0_31-fcs.x86_64'.
   - If found, it was installed using RPM and should be uninstalled using RPM. Go back to the terminal window.
2. RPM uninstall:
   - Verify the directory is there by going to /usr/java and listing its contents: cd /usr/java ; ls -l
   - Check what your JAVA_HOME variable is set to: echo $JAVA_HOME (it returns the old jdk-1.6.0_31-fcs.x86_64 path)
   - Remove the lower version of Java using RPM: rpm -e jdk-1.6.0_31-fcs.x86_64
   - Verify the directory has been removed: ls -l
3. Exit the terminal window: exit
4. Open the terminal window again, log in as the super user, and verify that JAVA_HOME has changed to the correct version of Java: echo $JAVA_HOME. The result should be the correct version of Java, not the old jdk-1.6.0_31-fcs.x86_64 (or whatever version yours is that needed to be removed).

Repeat steps 2-4 for each node in your cluster.

This is what I actually had to do to remove the offending version of Java and ensure that all nodes were running on the same version of Java.
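Because this has to be repeated on every node, here is a rough sketch of the same steps as a loop (hostnames are placeholders; it assumes root SSH access and that the same jdk-1.6.0_31-fcs.x86_64 package is installed everywhere, and JAVA_HOME may not be set in a non-interactive SSH session, so treat that last echo as a rough check only):

for host in hadoop1 hadoop2 hadoop3 hadoop4; do
  echo "== $host =="
  ssh root@"$host" '
    rpm -q jdk-1.6.0_31-fcs.x86_64 && rpm -e jdk-1.6.0_31-fcs.x86_64  # remove only if the old RPM is present
    ls -l /usr/java                                                   # confirm the old directory is gone
    echo "JAVA_HOME is now: $JAVA_HOME"
  '
done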
04-19-2017
10:03 PM
1 Kudo
Where can I find the syntax for loading SQL Server (v2012 SP3) data into Impala-Kudu? We are running CDH 5.10. I was planning to use Sqoop from either Hue or the command line, but I can't find any examples to work from. Any help or other suggestions for loading are greatly appreciated.
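The rough shape I had in mind, in case it helps frame an answer (the connection string and paths are copied from my other workflows; the second step is the part I cannot find documented for CDH 5.10, so everything after that comment is an assumption on my part):

# 1) Stage the SQL Server table into HDFS as Parquet with Sqoop
sqoop import \
  --connect "jdbc:sqlserver://MSSQLP001;database=Cust" \
  --username ****** --password ****** \
  --table Customer_Address \
  --as-parquetfile \
  --target-dir /user/sqlget/Cust/staging/Customer_Address \
  -m 1

# 2) Then, from impala-shell, map the staged Parquet data to a table and run a
#    CREATE TABLE ... STORED AS KUDU ... AS SELECT into the Kudu table.
#    I have not confirmed the exact Kudu DDL syntax for Impala on CDH 5.10,
#    which is really what I am asking about here.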
04-17-2017
12:56 PM
We uninstalled the earlier version of Java from all nodes, restarted them, and confirmed that $JAVA_HOME now points to the correct version of Java. This corrected the issue, and we were able to validate the environment.
04-14-2017
01:16 PM
Found that my $JAVA_HOME returns /usr/java/jdk1.6.0_31. I'm unsure how to change this for all users to /usr/java/jdk1.7.0_67-cloudera.
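One option I am considering, sketched below (this only sets JAVA_HOME for login shells via /etc/profile.d; I have not confirmed whether Cloudera Manager itself needs a separate Java Home Directory override):

# As root, on each node:
cat > /etc/profile.d/java_home.sh <<'EOF'
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export PATH="$JAVA_HOME/bin:$PATH"
EOF
chmod 644 /etc/profile.d/java_home.sh

# Log out, log back in, then verify:
echo $JAVA_HOME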
04-14-2017
10:41 AM
Clean install of CDH 5.10. Running any hadoop command returns the "Unsupported major.minor version 51.0" error message on any node in the cluster. Found while working through "Testing the Installation". Occurs with or without the "Java Home Directory" setting override in Hosts Configuration in Cloudera Manager.

hdfs@hadoop1:/> hadoop fs
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/fs/FsShell : Unsupported major.minor version 51.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.hadoop.fs.FsShell. Program will exit.

login as: root
Using keyboard-interactive authentication.
Password:
Last login: Fri Apr 14 12:31:18 2017 from 10.4.4.44
hadoop1:~ # sudo su hdfs
hdfs@hadoop1:/root> hadoop version
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/util/VersionInfo : Unsupported major.minor version 51.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.hadoop.util.VersionInfo. Program will exit

There are two versions of Java in /usr/java:
jdk1.6.0_31
jdk1.7.0_67-cloudera

$PATH = /usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
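For reference, this is what I am running to see which JVM each node actually resolves (the update-alternatives output depends on how the JDKs were registered, so it may be empty on some hosts):

which java
java -version
echo $JAVA_HOME
update-alternatives --display java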
Labels:
- Apache Hive
- Cloudera Manager
- HDFS