Member since: 07-12-2013
Posts: 435
Kudos Received: 117
Solutions: 82
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1651 | 11-02-2016 11:02 AM |
|  | 2599 | 10-05-2016 01:58 PM |
|  | 7245 | 09-07-2016 08:32 AM |
|  | 7472 | 09-07-2016 08:27 AM |
|  | 1690 | 08-23-2016 08:35 AM |
10-19-2015
10:38 AM
Reposting, as it looks like my reply via email was made into a new thread:

If you're connecting to Impala, use the IP address of one of the Worker Nodes and this connection string: jdbc:hive2://**:21050/;auth=noSasl

If you're connecting to Hive, use the IP address of the Manager Node and port 10000.

Because authentication for these services is not configured beyond the default, these ports are blocked from machines outside the cluster. If you want to connect from outside the cluster, you will need to look up the Network ACL for your CloudFormation stack and edit it to allow access on the ports you're using.
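To make the two cases concrete, here's a minimal Java sketch of how those JDBC URLs are assembled. The host IPs in main() are made-up placeholders; substitute a Worker Node IP (Impala) or the Manager Node IP (Hive) from your own cluster:

```java
public class BeelineUrls {
    // Impala's HiveServer2-compatible endpoint: port 21050, no SASL
    // (the cluster default described above). Pass a Worker Node IP.
    static String impalaUrl(String workerIp) {
        return "jdbc:hive2://" + workerIp + ":21050/;auth=noSasl";
    }

    // HiveServer2 itself listens on port 10000. Pass the Manager Node IP.
    static String hiveUrl(String managerIp) {
        return "jdbc:hive2://" + managerIp + ":10000/";
    }

    public static void main(String[] args) {
        // 10.0.0.11 and 10.0.0.10 are hypothetical example addresses
        System.out.println(impalaUrl("10.0.0.11"));
        System.out.println(hiveUrl("10.0.0.10"));
    }
}
```

You'd hand the resulting URL to whatever JDBC client you're using; actually connecting still requires the Hive JDBC driver on the classpath and network access per the Network ACL notes above.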
10-19-2015
08:09 AM
The "Guidance Page" (linked to in the emails you received after the cluster started) has a table with the IP addresses of all your nodes, including the "Manager Node". If you're accessing the tutorial from the link on that page, it should fill in the IP addresses in the example commands (such as the Sqoop command) for you.

The user ID to use for SSH is ec2-user and there is no password - authentication uses the EC2 key-pair you selected when deploying the CloudFormation template. The first couple of pages in the tutorial have more detail.

For the MySQL database, the username is retail_dba and the password is cloudera (again, this should be shown in the tutorial) - but MySQL will only accept connections from the machines in your cluster.

Can you be more specific about why you think you're missing a driver? The copy of Tableau Desktop hosted on the Windows instance has built-in support for connecting to Impala, etc., so other than being able to connect via Remote Desktop, you should not need any other drivers.
10-19-2015
05:54 AM
There was a bit of a delay on Friday as the system caught up after an issue was fixed.
10-14-2015
11:55 AM
I had a quick search around on GitHub for anything that implements this, but I'm afraid I couldn't find anything. I'm pretty sure Cloudera's product line doesn't include any such plugins...
10-13-2015
11:41 AM
There's a firewall set up that may be your problem. The Live clusters only expose ports that are considered secure and are required for the tutorials. If you go beyond that, you may need to open some ports in the firewall, and then you need to make sure you configure the services to be secure enough to meet your needs. The authentication configuration for Hive and Impala is the default, so the firewall blocks access to those ports by default.

The firewall in question is the Network ACL. You can find the ID of the specific ACL in the Resources tab in CloudFormation, and then edit its configuration from the EC2 Management Console. Remember the Network ACL is stateless, meaning it is unaware of established connections, etc., and filters individual packets based on their ports.

The EC2 security group and any operating-system-level firewall should not be getting in the way here, but keep those in mind as other options if you need to change the networking configuration. Remember that EC2 security groups are stateful, but they apply to traffic even between the nodes of your cluster.

I'd also make sure you've confirmed the credentials and other details of your connection string by connecting from a node in the cluster, so you can differentiate between the firewall blocking the connection and any other potential problems.
10-08-2015
06:47 AM
1 Kudo
I'm not aware of any examples of such plugins, but I found some details looking through the code. Plugins should implement the org.apache.hadoop.util.ServicePlugin Java interface, and you can find the code for that here: https://github.com/cloudera/hadoop-common/blob/cdh5-2.6.0_5.4.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ServicePlugin.java. It says "Service plug-ins may be used to expose functionality of datanodes or namenodes using arbitrary RPC protocols".

Basically the service will read in the class name(s), instantiate them, and then call the start() method, passing it a reference to the service. You can then do whatever you want, and the service will later call stop() on the plugin when things are shutting down.

There are other ways to write plugins for Hadoop. Sentry's HDFS support is implemented as a plugin, but it's a more specific type of authorization plugin, rather than a class that's just started and stopped along with the service. Hope that helps!
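To give a feel for that lifecycle, here's a minimal sketch. The ServicePlugin interface below is a local stand-in with the same start()/stop() shape, declared only so the example compiles without hadoop-common on the classpath; a real plugin would implement org.apache.hadoop.util.ServicePlugin itself and be listed by class name in the service's plugin configuration:

```java
// Local stand-in for org.apache.hadoop.util.ServicePlugin, declared here
// only so this sketch compiles without hadoop-common on the classpath
interface ServicePlugin {
    void start(Object service); // the service passes a reference to itself
    void stop();                // called as the service shuts down
}

public class LoggingPlugin implements ServicePlugin {
    boolean running = false;

    @Override
    public void start(Object service) {
        // 'service' would be the running NameNode or DataNode instance;
        // a real plugin could expose it over an arbitrary RPC protocol here
        running = true;
        System.out.println("plugin started against " + service.getClass().getSimpleName());
    }

    @Override
    public void stop() {
        // release any sockets, threads, etc. the plugin created
        running = false;
        System.out.println("plugin stopped");
    }

    public static void main(String[] args) {
        // In a real deployment the service drives this lifecycle itself
        LoggingPlugin p = new LoggingPlugin();
        p.start(new Object());
        p.stop();
    }
}
```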
09-28-2015
12:26 PM
Since you typed 'hadoop-mapreduce-examples.jar' with no path, Hadoop is looking for that JAR in the current directory. You'll need to provide the path to it. Assuming you're on Cloudera Live (or any other cluster where CDH was installed with parcels), providing the path '/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar' should work. If you're on the QuickStart VM or any other Linux-packages-based installation, it's in /usr/lib/hadoop-mapreduce instead.
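If you're scripting against both kinds of install, a small helper (hypothetical, not part of any Cloudera tooling) can probe the two locations from the post above and use whichever exists:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class FindExamplesJar {
    // Known locations of hadoop-mapreduce-examples.jar: parcel-based
    // installs (e.g. Cloudera Live) first, then package-based installs
    // (e.g. the QuickStart VM)
    static final String[] CANDIDATES = {
        "/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
        "/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar"
    };

    // Returns the first candidate path that exists, or null if none do
    static String findJar(String... candidates) {
        for (String c : candidates) {
            if (Files.exists(Paths.get(c))) {
                return c;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String jar = findJar(CANDIDATES);
        System.out.println(jar == null
                ? "examples JAR not found in the usual locations"
                : "found: " + jar);
    }
}
```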
09-28-2015
10:45 AM
1 Kudo
This is just warning you that the Oozie server is not working, so some of the apps in Hue won't work. I don't believe Oozie is needed for any of the tutorials in Cloudera Live, so you may be able to ignore this, depending on what you want to do. However, you can get to the bottom of it by logging in to Cloudera Manager, clicking on the Oozie service, and seeing what's going on. The monitoring may already have figured out what's wrong and might tell you. Otherwise, I'd follow the links to the logs for the Oozie process and see what exceptions, etc. you find.
09-24-2015
05:47 AM
I'm not sure why that's happening to you. The cloudera user should be set up pretty similarly to the root user - I can't imagine why one would try to use MR1 and the other would use YARN, unless there was an environment variable set in that terminal or something.
09-24-2015
05:42 AM
1 Kudo
As long as your virtual servers are deleted from the GoGrid portal, your entire cluster should be gone (and if it's done before the end of the 2 weeks, you shouldn't be charged anything). But remember Cloudera does not have direct access to your account - only GoGrid support would be able to help confirm or resolve any specific issues.

The reference to the Cloudera software free trial is referring to the 60-day trial of Cloudera Enterprise. Features that typically require a paid license (like Cloudera Navigator) are enabled during the trial, but at the end of the 60 days those features simply become unavailable. Everything else in Cloudera Manager (like configuration / services management, and I think most of the monitoring) continues to work. You wouldn't be automatically charged or lose data or anything like that for Cloudera software at the end of that trial.