Member since: 08-19-2013
Posts: 392
Kudos Received: 29
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2546 | 09-12-2019 01:04 PM
 | 2420 | 08-21-2019 04:56 PM
 | 8213 | 07-03-2018 07:59 AM
 | 5867 | 10-09-2015 08:02 AM
 | 2524 | 04-29-2015 12:14 PM
07-02-2018
07:11 AM
@VinayM,
Kafka is not included in the Quickstart VM. It has to be installed as a separate parcel. See our documentation about Installing, Migrating and Upgrading CDK Powered By Apache Kafka.
04-16-2018
11:11 AM
The JDK is considered part of the operating system and is not managed by Cloudera Manager, so you will have to deploy the new JDK manually, via scripts, or with OS management software. There is, however, a way to have Cloudera-managed services pick up the new JDK only during a rolling restart, which limits the window in which different JDK versions are in use (see the sketch below):
1. Before deploying the new JDK, go to the Hosts configuration in Cloudera Manager and set the "Java Home Directory" to the current JDK. This overrides the auto-detection logic that is normally used to identify the Java version.
2. Install the new JDK on your hosts without removing the prior version.
3. Once the new JDK is deployed on all hosts, either clear the "Java Home Directory" configuration or set it to the new version.
4. Perform a rolling restart; the services will pick up the new JDK.
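A minimal host-side sketch of that sequence, assuming RHEL-style hosts and an Oracle JDK installed under /usr/java; the package name and paths are illustrative, not specific to your environment:
$ sudo yum install -y jdk1.8.0_181        # install the new JDK alongside the old one; do not remove the old JDK yet
$ ls /usr/java/                           # both versions should now be present on the host
# In Cloudera Manager: Hosts -> Configuration -> "Java Home Directory" = path of the *current* JDK (step 1 above)
# After every host has the new JDK, point "Java Home Directory" at the new path (or clear it), then do a rolling restart.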
04-11-2018
11:14 AM
1 Kudo
Hadoop wasn't designed to run multiple DataNodes on a single host, and Cloudera Manager prohibits it. The reason for a single DataNode per host is to prevent data loss. With the default replication factor of 3, every block in a file is replicated to 3 different hosts. If a host containing a block replica goes down, the NameNode marks the block as under-replicated and creates a new copy on another DataNode, bringing the number of replicas back to 3. Running multiple DataNodes on one host would let replicas of the same block share a single point of failure, defeating that protection. If you do not care about data integrity, my suggestion is to set the replication factor to 1 or use virtual hosts.
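For illustration only (the path shown is hypothetical), lowering replication looks roughly like this:
$ hdfs dfs -setrep -w 1 /data/noncritical     # rewrite existing files under /data/noncritical to a single replica
# For newly written files, set dfs.replication = 1 in hdfs-site.xml (or in the HDFS configuration in Cloudera Manager).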
04-11-2018
10:51 AM
How "dead" is the host? If the system is still able to start the operating system then the Cloudera Manager Agent may be running. This will cause it to heartbeat to the Cloudera Manager Server and will show up in the lists of hosts. If this is the case, remove the cloudera-manager-agent package from the dead host. You should be able to delete the host from Cloudera Manager without it returning.
11-03-2016
08:59 AM
Hello Azim,
If you are using MIT Kerberos, you would configure one or more slave KDCs. See "Install the slave KDCs" in the MIT Kerberos documentation: https://web.mit.edu/kerberos/krb5-latest/doc/admin/install_kdc.html You will need to run the kprop command from cron to synchronize the master with the slave KDCs (a sketch follows below). Update the /etc/krb5.conf file on your hosts to include the additional KDCs for your realm. Example:
[realms]
  EXAMPLE.REALM = {
    kdc = kdc1.example.com
    kdc = kdc2.example.com
    kdc = kdc3.example.com:750
    admin_server = kdc1.example.com
    master_kdc = kdc1.example.com
  }
Kerberos does not support load balancing; if a timeout occurs connecting to the first KDC in the list, the next KDC is tried.
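A minimal propagation script in the spirit of the MIT documentation, run from cron on the master KDC; the dump path and slave hostnames are illustrative:
#!/bin/sh
# dump the Kerberos database and push it to each slave KDC
kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans
for kdc in kdc2.example.com kdc3.example.com; do
    kprop -f /var/kerberos/krb5kdc/slave_datatrans $kdc
done
# e.g. run every 15 minutes from root's crontab:  */15 * * * * /usr/local/sbin/kprop_sync.sh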
05-12-2016
07:48 AM
Hello Ajay, The first thing I would check is that you have the heap size set correctly. Memory requirements are discussed in the Cloudera Documentation under Configuring HiveServer2.
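As a quick sanity check (illustrative, and assumes you run it on the HiveServer2 host), you can confirm what heap the running HiveServer2 process actually started with:
$ ps -ef | grep HiveServer2 | grep -o -- '-Xmx[^ ]*'     # shows the maximum heap (-Xmx) the process is using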
02-11-2016
11:21 AM
1 Kudo
The non-printable character may be located anywhere in the filename; you just need to insert it in the appropriate location when quoting the filename. Using ctrl-v to insert special characters is the default for the bash shell, but your terminal emulator (especially if you are coming in from Windows) may be intercepting it instead. Try using shift-insert instead of ctrl-v. If that fails, you may need an alternate way to embed control characters, such as using vi to create a bash script and inserting them there.
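One more alternative, assuming the stray character is a carriage return (^M) as in the example further down: bash's ANSI-C quoting can embed the control character without typing it literally.
$ hdfs dfs -ls $'/tmp/abc\r'                  # \r expands to the carriage return (^M) inside $'...'
$ hdfs dfs -mv $'/tmp/abc\r' /tmp/abc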
02-11-2016
08:48 AM
1 Kudo
In my example I used -mv; you would use -rmdir:
$ hdfs dfs -rmdir "/a/b/c/d//20160205^M"
Remember, to get "^M", type ctrl-v ctrl-m.
02-11-2016
08:40 AM
1 Kudo
You would handle this the same way as if the issue occurred on a Linux filesystem: put quotes around the filename and use ctrl-v to insert the special characters. In this case, I type ctrl-v then ctrl-m to insert ^M into my strings.
$ hdfs dfs -put /etc/group "/tmp/abc^M"
$ hdfs dfs -ls /tmp
Found 4 items
drwxrwxrwx - hdfs supergroup 0 2016-02-11 11:29 /tmp/.cloudera_health_monitoring_canary_files
-rw-r--r-- 3 hdfs supergroup 954 2016-02-11 11:30 /tmp/abc
drwx-wx-wx - hive supergroup 0 2016-01-11 12:10 /tmp/hive
drwxrwxrwt - mapred hadoop 0 2016-01-11 12:08 /tmp/logs
$ hdfs dfs -ls /tmp | cat -v
Found 4 items
drwxrwxrwx - hdfs supergroup 0 2016-02-11 11:30 /tmp/.cloudera_health_monitoring_canary_files
-rw-r--r-- 3 hdfs supergroup 954 2016-02-11 11:30 /tmp/abc^M
drwx-wx-wx - hive supergroup 0 2016-01-11 12:10 /tmp/hive
drwxrwxrwt - mapred hadoop 0 2016-01-11 12:08 /tmp/logs
$ hdfs dfs -mv "/tmp/abc^M" /tmp/abc
$ hdfs dfs -ls /tmp | cat -v
Found 4 items
drwxrwxrwx - hdfs supergroup 0 2016-02-11 11:31 /tmp/.cloudera_health_monitoring_canary_files
-rw-r--r-- 3 hdfs supergroup 954 2016-02-11 11:30 /tmp/abc
drwx-wx-wx - hive supergroup 0 2016-01-11 12:10 /tmp/hive
drwxrwxrwt - mapred hadoop 0 2016-01-11 12:08 /tmp/logs
02-03-2016
12:03 PM
This is a Kerberos configuration issue, most likely with the principal for the second NameNode. When a checkpoint is attempted (copying the fsimage file from the Standby NameNode to the Active), the connection is failing due to GSSAPI authentication with the Kerberos credential. The failover controller logs will probably contain similar messages. Since the server is able to start, your basic Kerberos setup is allowing the server to obtain its initial credential, but it appears the credential is expiring. A few possible causes:
* The principal needs to have renewable tickets. In your output this is set to false. The problem could be with the /etc/krb5.conf file on the Standby or with the principal in your KDC (see the check below).
* Reverse DNS lookup for the hostname is not working. The packet sent from one server says "my hostname is server2.example.com, IP: 10.1.2.3", but a reverse DNS lookup for 10.1.2.3 returns no hostname, or a hostname that does not match the one provided.
* You are having an intermittent outage with your KDC or DNS that causes the problems above.
Depending upon the type of KDC in use and how it is configured, there may be additional issues. Since you report the rest of the cluster is functional (no loss of DataNodes), this is most likely isolated to the one NameNode's principal.
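To check the first cause, something like this on the KDC shows whether the NameNode principal is allowed renewable tickets (the principal name is illustrative; also check max_renewable_life in kdc.conf and renew_lifetime in /etc/krb5.conf):
$ kadmin.local -q "getprinc hdfs/namenode2.example.com@EXAMPLE.REALM"
# look at "Maximum renewable life:" -- 0 days 00:00:00 means the principal cannot get renewable tickets
$ kadmin.local -q "modprinc -maxrenewlife 7days hdfs/namenode2.example.com@EXAMPLE.REALM"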