Member since: 08-16-2016
Posts: 642
Kudos Received: 131
Solutions: 68
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3459 | 10-13-2017 09:42 PM
 | 6241 | 09-14-2017 11:15 AM
 | 3188 | 09-13-2017 10:35 PM
 | 5120 | 09-13-2017 10:25 PM
 | 5769 | 09-13-2017 10:05 PM
10-13-2017
10:05 PM
Kerberos service principals have three parts: the service name, the hostname, and the realm. The hostname must be a fully qualified domain name (FQDN). That is why the service looks for the principal in that format while the keytab does not contain an entry for it. Recreate the keytab file with the principal in the correct format and you should be good.
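For example, here is a minimal sketch of recreating the keytab with an FQDN-style principal, assuming MIT Kerberos tooling; the service name, hostname, realm, and paths are placeholders.

```
# Hypothetical principal and paths for illustration; adjust to your service and realm.
# The principal must use the fully qualified hostname: service/fqdn@REALM
kadmin -p admin/admin -q "ktadd -k /etc/security/keytabs/service.keytab hdfs/worker01.example.com@EXAMPLE.COM"

# Verify what the keytab actually contains:
klist -kt /etc/security/keytabs/service.keytab
```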
10-13-2017
09:42 PM
1 Kudo
It is a group. By default, Hadoop creates the user hdfs in the group hdfs. The first statement does make it confusing, but it assumes the defaults, where hdfs is the only user in that group. You could add other users to the group as well (not recommended). The last portion referencing the Kerberos principal is just pointing out that it isn't enough to have a user in the superusergroup/supergroup; they also need a valid Kerberos principal. In reality, the users in the group you assign to that property will already have Kerberos principals. I also recommend, as Cloudera does, not using the default hdfs group.
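As a rough sketch, assuming a dedicated group named hdfsadmins (the group and user names here are illustrative; the property and commands are standard HDFS):

```
# Point the superuser group at a dedicated group instead of the default "hdfs"
# (dfs.permissions.superusergroup in hdfs-site.xml, or the matching CM property):
#   dfs.permissions.superusergroup = hdfsadmins

# Add an admin user to that group on the NameNode host (or in your directory service):
usermod -a -G hdfsadmins alice

# Confirm which groups HDFS resolves for the user:
hdfs groups alice
```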
10-04-2017
01:49 PM
It is an issue with the installation. I don't know precisely what the issue is, though. You can disable it, or set it to permissive, complete the installation, and then revert it afterwards. I have always just kept it off, but presumably you would need to repeat this for each upgrade.
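Assuming this is about SELinux (the disable/permissive wording suggests it), a minimal sketch on RHEL/CentOS looks like this:

```
# Check the current SELinux mode:
getenforce

# Switch to permissive for the duration of the install (does not survive a reboot):
setenforce 0

# To keep it permissive or disabled across reboots, set SELINUX=permissive
# (or SELINUX=disabled) in /etc/selinux/config and reboot.
```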
09-25-2017
07:56 AM
That appears to be between the ZK servers. Anything useful in the SM logs?
09-24-2017
11:04 PM
I am looking for something that shows which znode it is accessing, as that will give us a clue as to what it is doing. I would try INFO before DEBUG, but you may have to go to that level. Also, try the SM logs, since it is the client and may have the info there. As an example, the line below shows that the HBase client was trying to access the znode /hbase/hbaseid; this is the sort of entry I am expecting in the logs to help us figure this out: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
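A minimal sketch for pulling znode references out of the logs; the file locations below are typical CM defaults and may differ on your cluster:

```
# Service Monitor side (the ZooKeeper client in this case):
grep -iE 'keeperexception|connectionloss|zookeeper' \
  /var/log/cloudera-scm-firehose/*SERVICEMONITOR*.log* | tail -50

# ZooKeeper server side:
grep -iE 'session|cnxn' /var/log/zookeeper/*.log | tail -50
```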
09-24-2017
10:46 PM
Those are the defaults and should be fine. Hmm, the Service Monitor is what runs the canary test and is the likely source of the connections, so we need to figure out what SM is doing. Go through the ZK logs to determine which znode it is accessing; this may require you to increase the logging level. I want to make sure I have this right: the canary test is disabled, you have restarted the Service Monitor since disabling it, and since the restart the connections from SM to ZK have climbed until hitting the maximum of 1000. Do I have that right?
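One quick way to see who is holding the connections is ZooKeeper's four-letter-word commands (cons and stat); the host and port below are examples, and on newer ZK releases these commands must be whitelisted:

```
# One line per open connection, including the client IP:
echo cons | nc zk-host.example.com 2181

# Summary view, including the current connection count:
echo stat | nc zk-host.example.com 2181
```

If the bulk of the connections come from the Service Monitor host, that confirms SM is the culprit.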
09-24-2017
10:32 PM
I have never seen such an issue with ZK and SM. What do you have set for the SM timeout values for ZK? I may not have seen this because I have these set: ZooKeeper Canary Connection Timeout, ZooKeeper Canary Session Timeout, and ZooKeeper Canary Operation Timeout. These alone should prevent SM from consuming all of the connections to ZK.
09-14-2017
11:22 AM
1 Kudo
These can be set globally; try searching for just 'spark memory', as CM doesn't always include the actual setting name. They can also be set per job with spark-submit --executor-memory: https://spark.apache.org/docs/1.6.0/submitting-applications.html
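A minimal per-job sketch; the class, jar, and sizes are placeholders, but the flags are standard Spark 1.6 submit options:

```
spark-submit \
  --class com.example.MyJob \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 2g \
  --executor-memory 4g \
  --num-executors 10 \
  my-job.jar
```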
09-14-2017
11:15 AM
1 Kudo
You need to increase the HS2 heap size, as whatever it is at now is too low to process and return that much data for your query. In CM, browse to the Hive service's Configuration tab and search for 'Java Heap Size of HiveServer2 in Bytes'. I don't know what you currently have, but increase it by 1 GB and test.
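Since that CM property is expressed in bytes, here is the conversion math; the 4 GiB starting point is only an example:

```
# 1 GiB = 1024^3 bytes = 1073741824
echo $((4 * 1024**3))   # 4294967296  -> e.g. the current value (4 GiB)
echo $((5 * 1024**3))   # 5368709120  -> bumped by 1 GiB for the test
```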
09-13-2017
10:35 PM
1. To install the new license, you don't need internet access; you can upload the file from your local machine.
2. I only skimmed it, but I think it only requires that because it assumes Cloudera Express clusters are using the embedded database. You are already using an external DB, so it shouldn't be needed.
3. It should be non-destructive, as it is just updating the licensing and unlocking features within CM.