Member since: 01-15-2018
Posts: 93
Kudos Received: 2
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 240 | 09-13-2018 02:23 PM |
03-31-2020
05:39 AM
2 Kudos
What happened?
Starting up a ZooKeeper server in a Kerberized CDP-DC 7.0.3 cluster failed with the logs below.
2020-03-30 12:23:10,251 ERROR org.apache.zookeeper.server.quorum.QuorumPeerMain: Unexpected exception, exiting abnormally
java.io.IOException: Could not configure server because SASL configuration did not allow the ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Message stream modified (41)
        at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:243)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:646)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:148)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:123)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
The JDK for this environment is OpenJDK 1.8.0_242.
# java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
Solution
Remove the renew_lifetime line from /etc/krb5.conf.
Removing this line means the default value, 0, is used for renew_lifetime.
As a result, you may also need to specify renew_lifetime explicitly when running the kinit command.
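For reference, the change is just deleting (or commenting out) one line in the [libdefaults] section of /etc/krb5.conf; the other values in this fragment are illustrative, not from my cluster:

```
[libdefaults]
  default_realm = EXAMPLE.COM
  ticket_lifetime = 24h
# renew_lifetime = 7d    <- remove or comment out this line
```

If a renewable ticket is still needed after this change, the renew lifetime can instead be requested per ticket, e.g. kinit -r 7d user@EXAMPLE.COM (realm and principal are placeholders).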
See also
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201909.mbox/%3CCAKRKJ1O3yrYKDZ=WhU=i6A+zqxFnUidxvwQzNCTW0mnEv2WFPA@mail.gmail.com%3E
On this page, Akira Ajisaka, a Hadoop PMC member, described this solution.
He also mentioned a related OpenJDK JIRA ticket.
https://bugs.centos.org/view.php?id=17000
This page also describes the same solution.
Additionally, it shows another workaround: setting sun.security.krb5.disableReferrals=true in the java.security file. In my case, however, that workaround didn't work.
https://my.cloudera.com/knowledge/Cloudera-Customer-Advisory-Servers-with-Kerberos-enabled-stop?id=292027
This is a related article from the Cloudera Knowledge Base.
It also describes setting sun.security.krb5.disableReferrals=true as a workaround.
02-04-2019
03:53 AM
As far as I know, the HDP 2.6.5 documentation does not have a migration guide for IOP to HDP, but the HDP 2.6.4 documentation does. Does that mean upgrading from IOP to HDP 2.6.5 directly is impossible?
01-24-2019
06:32 AM
Is there any case where RDD is preferable to DataFrame/Dataset/Spark SQL in Spark 2.3.0 or later? AFAIK, using DataFrame/Dataset/Spark SQL has a lot of merits, for example simpler coding and optimization by Catalyst. If there's a concrete example that should be written with RDD, let me know! Thanks,
11-16-2018
03:25 AM
I read https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/data-operating-system/content/installation_modes_hbase_timeline_service.html
This page says: "System service mode: This mode of installation works on clusters where the capacity on each node is at least 10 GB and the total cluster capacity is at least 50 GB."
Does it mean "`yarn.nodemanager.resource.memory-mb` > 10 GB on each node, and the sum of that property across the cluster > 50 GB"?
11-16-2018
03:14 AM
11-13-2018
02:35 AM
As far as I know, when I run the `hdfs dfs -put` command on a DataNode server, the first block replica is created on that DataNode. Which DataNode does the NameNode choose for the first replica when I run `hdfs dfs -put` on a DataNode server that does not satisfy the Storage Policy of the file to be written? (e.g. the Storage Policy is "One_SSD" but the DataNode server on which `hdfs dfs -put` is invoked does not have any "SSD"-labeled directory.)
11-08-2018
12:37 AM
I wrote "restarting Ambari Server", but that does not mean restarting the host on which Ambari Server is installed. Just restarting the Ambari Server process takes effect (i.e. ambari-server restart).
11-06-2018
12:11 AM
Unfortunately, I haven't got a solution yet.
10-24-2018
06:14 AM
What functionality becomes unavailable when App Timeline Server isn't running? I know Ambari Tez View becomes unavailable (showing an error that indicates it cannot connect to ATS) when ATS isn't running. Of course, the ATS WebUI becomes unavailable. But, as far as I saw, ResourceManager runs as usual and yarn CLI commands like "yarn application -status" also run as usual. As for Spark applications, there appears to be no impact: the Spark History Server WebUI works as usual because the Spark History Server only depends on HDFS. Is there any other problem?
10-18-2018
09:49 AM
Thanks, @Jay Kumar SenSharma
10-17-2018
01:42 AM
How do I restrict access to the Ambari WebUI by IP address? Of course, using the OS firewall is a solution, but I'd like to know a way that requires only modifying Ambari's configurations. I know Ambari uses Jetty as its HTTP server, and Jetty provides IP address restriction via IPAccessHandler (https://www.eclipse.org/jetty/documentation/9.4.x/ipaccess-handler.html), but I'm not sure how to apply this to Ambari.
10-15-2018
08:14 AM
Does a Spark application communicate with App Timeline Server? Hive has some hook points (pre, post, and failure hooks) and ATSHook is set for those properties by default. How about Spark? As far as I checked, I couldn't find...
10-09-2018
07:51 AM
Thanks, @Jay Kumar SenSharma!
10-09-2018
06:11 AM
I'm using Ambari 2.6.2.2, which has "Service Auto Start Configuration" that restarts a component when it goes down unexpectedly. However, I could not find an automatic "Restart" operation in the operations history when the automatic restart functionality worked. How can I tell that an automatic component restart happened?
- Tags:
- ambari-server
10-05-2018
07:30 AM
Hi,
Please try adding the following to your "Advanced topology":

    <service>
      <role>ATLAS-API</role>
      <url>http://${atlas_metadata_server_host}:${atlas_metadata_server_port}/</url>
    </service>

e.g.

    <service>
      <role>ATLAS-API</role>
      <url>http://toide-2.toide.hortonworks.com:21000/</url>
    </service>

I met the "Something went wrong" error on the Atlas WebUI before adding it.
09-28-2018
02:48 AM
Thanks @Sharmadha Sainath
09-28-2018
02:24 AM
Thanks @akulkarni
09-27-2018
06:34 AM
How do I delete an Atlas tag that is associated with entities that are already deleted? I found that I couldn't delete an Atlas tag named "mytag"; when I tried to delete it I saw "Given type mytag has references". However, "mytag" was not associated with any entities. Then I noticed the "Show historical entities" checkbox, and when I checked it, an already-deleted entity appeared. I thought I could delete "mytag" after deleting the association between "mytag" and that entity, but I couldn't find a way to remove that association.
09-25-2018
05:51 AM
How do I reload the file (specified by the "FilePath" attribute in enricherOptions) of RangerFileBasedGeolocationProvider? As far as I tried, the file is not automatically reloaded. Restarting HiveServer2 (if the component using RangerFileBasedGeolocationProvider is Hive) causes the file to be reloaded. Doing "Save" on a policy also causes the file to be reloaded, with no modification required. This is very inconvenient. (Of course, just saving an existing policy is very lightweight, but a little hacky.) I want to reload only the file, if possible.
09-19-2018
10:14 AM
Thanks, @Sandeep Nemuri That reference link is also helpful!
09-18-2018
03:43 AM
What stores clusterID, namespaceID, and blockpoolID? As far as I know:
- ${dfs.namenode.name.dir}/current/VERSION stores clusterID, namespaceID, and blockpoolID.
- ${dfs.journalnode.edits.dir}/${nameservice}/current/VERSION stores clusterID and namespaceID.
- ${dfs.datanode.data.dir}/current/VERSION stores clusterID.
- The directory name under ${dfs.datanode.data.dir}/current includes the blockpoolID.
- ${dfs.datanode.data.dir}/current/${blockpoolID}/current/VERSION stores namespaceID and blockpoolID.
Is there any other file or file/directory name storing those IDs?
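To illustrate the key=value format those VERSION files share, here is a minimal sketch with made-up ID values (the real files live under the dfs.* directories listed above):

```shell
# Write a sample NameNode-style VERSION file (all values are made up):
cat > /tmp/VERSION <<'EOF'
namespaceID=1234567890
clusterID=CID-example-0000
cTime=0
storageType=NAME_NODE
blockpoolID=BP-111-10.0.0.1-1500000000000
layoutVersion=-63
EOF

# Pull out the three IDs, e.g. to compare them across the NameNode,
# JournalNode, and DataNode copies:
grep -E '^(clusterID|namespaceID|blockpoolID)=' /tmp/VERSION
```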
- Tags:
- Hadoop Core
- HDFS
09-14-2018
09:37 AM
Thanks, @Chiran Ravani
09-13-2018
02:23 PM
This is very easy; I just found the answer.
You can create an additional interpreter with the "+Create" button on the "Interpreters" screen. If you want to create one more "livy" interpreter, choose "livy" from "Interpreter group". You can then enable the additional livy interpreter by clicking it in the "Interpreter binding" menu of the notebook you want to use it on.
09-13-2018
08:22 AM
What should I set in an "entity" access policy? As far as I tried, Atlas requires that an "entity" access policy defined in Ranger be "*". If I set anything other than "*" for "entity", Atlas says "You are not authorized for READ on [ENTITY] : *". I thought I could set the name of an entity, like a Hive table name, as the value of an "entity" access policy. Is this incorrect?
09-13-2018
06:22 AM
How can I retrieve tags that are not associated with any entity via the Atlas Export API? As far as I tried, the Atlas Export API does not return tags that are not associated with any entity. My current request body for the Atlas Export API is below.
{
"itemsToExport" : [
{ "typeName" : "Asset", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "DataSet", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "fs_path", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "hdfs_path", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "hive_column", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "hive_column_lineage", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "hive_db", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "hive_process", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "hive_storagedesc", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "hive_table", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "Infrastructure", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "Process", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "Referenceable", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "sqoop_dbdatastore", "uniqueAttributes" : { "name" : ".+" } },
{ "typeName" : "sqoop_process", "uniqueAttributes" : { "name" : ".+" } }
],
"options" : {
"fetchType" : "FULL",
"matchType" : "matches"
}
}
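For reference, a request body like the one above can be posted with curl; in this sketch the host, port, and credentials are placeholders, and the body is trimmed to two types:

```shell
# Save a (trimmed) export request body to a file:
cat > /tmp/atlas-export-request.json <<'EOF'
{
  "itemsToExport" : [
    { "typeName" : "hive_db",    "uniqueAttributes" : { "name" : ".+" } },
    { "typeName" : "hive_table", "uniqueAttributes" : { "name" : ".+" } }
  ],
  "options" : { "fetchType" : "FULL", "matchType" : "matches" }
}
EOF

# POST it to the Export API (uncomment and adjust host/credentials to run
# against a real Atlas server):
# curl -u admin:admin -X POST -H 'Content-Type: application/json' \
#      -d @/tmp/atlas-export-request.json \
#      http://atlas-host:21000/api/atlas/admin/export -o export.zip
```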
09-12-2018
08:29 AM
Thanks, @Jay Kumar SenSharma These days your comments help me a lot! Thank you very much!
09-12-2018
07:20 AM
What does "Recovery Host" do in Ambari 2.6.2.2? I found "Recovery Host" under "Host Actions". I learned that this functionality is enabled only when all the components on the host are stopped. After stopping all the components and clicking "Recovery Host", a confirmation window appeared saying "This action will completely re-install all components on this host." What does "completely re-install" mean? Additionally, as far as I tried, nothing happened when I clicked "Yes" on the confirmation window.
09-11-2018
02:21 PM
How can I take a backup of App Timeline Server and restore it from that backup? Is backing up yarn.timeline-service.leveldb-state-store.path and yarn.timeline-service.leveldb-timeline-store.path enough for a backup? And is copying the backups back to those paths and then starting ATS enough for a restore?
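If those two LevelDB directories really are sufficient (which is exactly the open question here), the copy itself could be a simple tar round trip. In this sketch the paths are placeholders standing in for the yarn.timeline-service.leveldb-*-store.path values, the directory content is faked, and ATS is assumed to be stopped before copying:

```shell
# Placeholder stand-in for one of the LevelDB store directories:
STORE=/tmp/ats-demo/leveldb-timeline-store
BACKUP=/tmp/ats-demo/backup.tar.gz

mkdir -p "$STORE"
echo demo > "$STORE/CURRENT"        # stand-in for real LevelDB content

# Backup (with ATS stopped): archive the store directory.
tar -czf "$BACKUP" -C "$(dirname "$STORE")" "$(basename "$STORE")"

# Restore: remove the store, extract the archive, then start ATS.
rm -rf "$STORE"
tar -xzf "$BACKUP" -C "$(dirname "$STORE")"
cat "$STORE/CURRENT"
```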
09-11-2018
12:56 PM
As far as I saw, running ZooKeeper's "Restart All" on Ambari looks like a rolling restart; that is, while one ZK is restarting, the others wait. Is that right?
09-11-2018
12:18 PM
What does MapReduce2 History Server depend on? Does it only depend on HDFS? Doesn't it depend on App Timeline Server?
- Tags:
- historyserver