Member since: 08-16-2016
Posts: 642
Kudos Received: 131
Solutions: 68

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3453 | 10-13-2017 09:42 PM
 | 6217 | 09-14-2017 11:15 AM
 | 3183 | 09-13-2017 10:35 PM
 | 5113 | 09-13-2017 10:25 PM
 | 5750 | 09-13-2017 10:05 PM
09-13-2017
10:25 PM
2 Kudos
I just tried it. It is just a POST to the users endpoint of the CM API:

```shell
curl -u uname:passw -H "Content-Type: application/json" -X POST \
  -d '{ "items" : [ { "name" : "matt", "password" : "test" } ] }' \
  http://cm_host:7180/api/v15/users
```
09-13-2017
10:05 PM
1 Kudo
I think 'No storage dirs specified.' is referring to your DataNode data directories (dfs.datanode.data.dir). It is also possible that env vars like HADOOP_CONF_DIR are not set correctly for the session you are running that command in.

As for the JN error, it seems that it is trying to format the NN but data already exists in the JN edits directory. Was NN HA working prior to Kerberos being enabled? If you are comfortable formatting the NN, then you are likely fine with manually removing the data in the JN edits directory. I would back it up just in case, then remove it and see if the NN can come online.

Also, did you have NN HA enabled and then disable it? That is the only time I have seen data already in place in the JN edits directory; rolling back NN HA in CM does not clear out this data.
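The back-up-then-clear step can be sketched like this. The demo below uses a scratch directory standing in for the real JN edits directory; on a real node, substitute the path from dfs.journalnode.edits.dir in hdfs-site.xml:

```shell
# Scratch dir standing in for the real JournalNode edits directory
JN_EDITS_DIR=$(mktemp -d)
mkdir -p "$JN_EDITS_DIR/current"
echo "dummy edits" > "$JN_EDITS_DIR/current/edits_inprogress_0000001"

# 1. Back up the edits dir just in case
tar -czf "${JN_EDITS_DIR}.tar.gz" -C "$JN_EDITS_DIR" .

# 2. Clear it so the NameNode format can start fresh
#    (${VAR:?} aborts if the variable is unset, guarding the rm -rf)
rm -rf "${JN_EDITS_DIR:?}"/*
```

Stop the JournalNode role before touching the directory, and keep the tarball until the NN is confirmed healthy.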
09-13-2017
09:53 PM
The only way I can think of would be to have the Spark2 gateway installed on a node that doesn't have the Spark1 gateway or any Spark1 roles. Then create a symlink named spark-submit pointing at spark2-submit.
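The symlink step might look like this. The demo runs in a scratch directory with a stub script standing in for the real launcher; on an actual gateway node the paths would be wherever spark2-submit lives (e.g. under /usr/bin):

```shell
# Scratch dir with a stub spark2-submit standing in for the real launcher
BIN_DIR=$(mktemp -d)
printf '#!/bin/sh\necho spark2\n' > "$BIN_DIR/spark2-submit"
chmod +x "$BIN_DIR/spark2-submit"

# Make plain `spark-submit` resolve to the Spark2 launcher
ln -s "$BIN_DIR/spark2-submit" "$BIN_DIR/spark-submit"
"$BIN_DIR/spark-submit"
```

Anything on that node that shells out to spark-submit will then get Spark2 transparently, which is why the node must not also carry Spark1 roles.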
08-16-2017
11:51 AM
I would test with the hdfs command first to ensure that HDFS with Kerberos is good. On a node with the HDFS Gateway installed:

```shell
kinit            # enter your password when prompted
hdfs dfs -ls /
```

Can you share your jaas.conf file? For the Java program, I believe there are a few more config settings that tell a client to use Kerberos; I don't recall them off the top of my head. I would try using just the hdfs-site and core-site files in the Configuration object.
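For comparison, a typical jaas.conf for keytab-based login looks something like this (the entry name, keytab path, and principal below are placeholders, not values from your setup):

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/user.keytab"
  principal="user@EXAMPLE.COM";
};
```

Note the semicolon after the last option and after the closing brace; a missing one is a common cause of login failures.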
08-16-2017
09:23 AM
1 Kudo
Log into the PostgreSQL instance you installed for CM and Hive, create the hive user, and grant it access to the metastore database. Then update the Hive configuration to use this account.
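A rough sketch of the psql side (the role name, password, and database name here are assumptions; match them to what your Hive Metastore configuration actually uses):

```sql
-- Run as the postgres superuser
CREATE ROLE hive LOGIN PASSWORD 'hive_password';
GRANT ALL PRIVILEGES ON DATABASE metastore TO hive;
```

Then set the same username and password in the Hive Metastore database settings in CM and restart Hive.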
08-16-2017
09:21 AM
Did you do these steps prior to Level 1? https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cm_sg_tls_browser.html#xd_583c10bfdbd326ba-7dae4aa6-147c30d0933--7a61 Did you check that your keystore contains the CM certificate and has the correct hostname? Is the keystore file readable by the CM process user?
08-02-2017
03:13 PM
Run the following; it will tell you whether the jar file contains the class with the correct name:

```shell
jar -tf /opt/spark/yarn/spark-2.1.0-yarn-shuffle.jar | grep -i YarnShuffleService
```
08-02-2017
08:11 AM
Why won't it work? Have you tried /tmp and /tmp/hive/<user.name>? If quotas can't be applied to /tmp or its subdirectories, the alternative is to set alerts for HDFS capacity or for disk space on the disks hosting the DFS directories.
08-01-2017
10:41 AM
I don't know HDFS quotas well enough, but they should fit the bill: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html In CM you can also configure alerts to notify you when disk or HDFS capacity is nearing its limit.
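For reference, the quota commands look like this. They have to run against a live cluster as the HDFS superuser, and the path and limit below are just examples:

```
hdfs dfsadmin -setSpaceQuota 100g /tmp    # cap raw space consumed under /tmp
hdfs dfs -count -q /tmp                   # show the quota and remaining quota
hdfs dfsadmin -clrSpaceQuota /tmp         # remove the quota again
```

Note that a space quota counts raw bytes, so replication is included: with replication factor 3, a 100g quota allows roughly 33 GB of file data.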
07-31-2017
11:32 PM
1 Kudo
It is related to a JMX counter within the DataNode process (getDatanodeNetworkCounts). I am not sure exactly what it counts, but something within it is throwing an NPE. This likely happens after the write stream has processed all data, but since it hits this exception it throws and exits. It should be safe to ignore this error. A related JIRA, although the fix doesn't seem to be part of CDH yet: https://issues.apache.org/jira/browse/HDFS-7331