Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2438 | 04-27-2020 03:48 AM
 | 4861 | 04-26-2020 06:18 PM
 | 3966 | 04-26-2020 06:05 PM
 | 3205 | 04-13-2020 08:53 PM
 | 4902 | 03-31-2020 02:10 AM
12-28-2016
09:02 AM
Yes, it is fine. The KDC and Ambari can be co-located on the same host, or they can be on separate hosts. I have a setup where I am running the KDC and Ambari on the same host without any issues so far.
12-28-2016
08:51 AM
@Zhao Chaofeng As this issue is basically a "GSS initiate failed" error, can you please check whether you have a valid ticket and whether you are able to run "kinit" manually? Also, are you using the Sun JDK? If yes, then you will have to install the JCE policy files for unlimited-strength encryption. Please check the link below, which says: "Before enabling Kerberos in the cluster, you must deploy the Java Cryptography Extension (JCE) security policy files on the Ambari Server and on all hosts in the cluster." https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-security/content/installing_the_jce.html
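A quick sketch of those pre-checks follows. The `check_jce` helper and the fake JDK layout are only illustrative (they assume the JDK 7/8 directory structure; newer JDKs ship unlimited-strength policies by default) — on a real node you would point the helper at the JAVA_HOME that Ambari uses, and run the commented `kinit`/`klist` commands with your actual keytab and principal.

```shell
# Look for the two JCE unlimited-strength policy jars under a given
# JAVA_HOME (JDK 7/8 layout).
check_jce() {
  ls "$1/jre/lib/security/local_policy.jar" \
     "$1/jre/lib/security/US_export_policy.jar" >/dev/null 2>&1 \
    && echo "JCE policy files present" || echo "JCE policy files MISSING"
}

# On a real node you would also verify Kerberos itself:
#   kinit -kt /etc/security/keytabs/<service>.keytab <principal>
#   klist

# Demo against a throwaway directory that mimics a JDK install, so this
# sketch runs end-to-end without a real JDK:
FAKE_JDK=$(mktemp -d)
mkdir -p "$FAKE_JDK/jre/lib/security"
touch "$FAKE_JDK/jre/lib/security/local_policy.jar" \
      "$FAKE_JDK/jre/lib/security/US_export_policy.jar"
check_jce "$FAKE_JDK"   # prints: JCE policy files present
```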
12-27-2016
02:47 PM
1 Kudo
@Timothy Spann The HDF 2.1 repos for CentOS 7 are listed here: http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.0/bk_dataflow-release-notes/content/ch_hdf_relnotes.html#hdf_repo
12-27-2016
02:42 PM
1 Kudo
@Timothy Spann
It is strange to see the "el6.noarch" package suffix on a CentOS 7 operating system. On CentOS 7 the package suffix should normally be something like "el7.noarch". Are the repository URLs pointing to the correct repo?
12-27-2016
07:56 AM
Regarding your query: "is there any alternative how the script can get executed automatically from crontab?" Yes, open the crontab for editing:
crontab -e
Then add the following entry to the crontab:
0 15 * * * /PATH/TO/ambari_ldap_sync_all.sh
However, the above approach has nothing to do with HDP or Ambari; it is plain Linux.
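For reference, the five schedule fields of that crontab entry are minute, hour, day of month, month, and day of week, so this particular entry runs the script every day at 15:00:

```
# min  hour  dom  mon  dow  command
  0    15    *    *    *    /PATH/TO/ambari_ldap_sync_all.sh
```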
12-27-2016
07:46 AM
@Rahul Buragohain The errors that you posted:
ambari_ldap_sync_all.sh: line 3: spawn: command not found
ambari_ldap_sync_all.sh: line 7: send: command not found
clearly indicate one of the following: the "expect" package is not properly installed, the current user's PATH is not correct, or after installing the "expect" package you have not included "#!/usr/bin/expect" at the beginning of your script. Commands like "spawn" and "send" come from the expect package and are only understood when the script is interpreted by expect, not by bash. Also, since you are able to run the commands manually, the PATH that cron sees may differ from your interactive PATH, so please fix the cron environment's PATH to include the location of these commands.
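As a sketch of what such a script should look like: the snippet below writes a minimal expect wrapper with the required shebang. The `ambari-server sync-ldap --all` invocation is real Ambari CLI, but the prompt text and password are placeholders (not taken from the original script) and would need to match your actual prompt.

```shell
# Write a minimal expect wrapper; 'spawn', 'expect', and 'send' are expect
# commands, and they only work because of the expect shebang on line 1.
cat > /tmp/ambari_ldap_sync_all.sh <<'EOF'
#!/usr/bin/expect -f
# Placeholder prompt/password: adjust to the real password prompt that
# "ambari-server sync-ldap" shows on your system.
spawn ambari-server sync-ldap --all
expect "Enter Ambari Admin password:"
send "ADMIN_PASSWORD\r"
expect eof
EOF
chmod +x /tmp/ambari_ldap_sync_all.sh
head -n 1 /tmp/ambari_ldap_sync_all.sh   # prints: #!/usr/bin/expect -f
```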
12-27-2016
07:30 AM
@Rahul Buragohain Have you run "yum install expect -y"?
12-23-2016
02:58 AM
1 Kudo
While accessing the Zeppelin View it shows "zeppelin service is not running". But when we check at the back end, we find that the Zeppelin server is actually running, and the PID file contains the correct PID. Ambari also reports the Zeppelin Notebook as running. Yet the Zeppelin View still fails with "Zeppelin service is not running". In this situation we need to check the Zeppelin log for errors. We see the following kind of error:
INFO [2016-12-01 09:42:08,926] ({main} ZeppelinServer.java[setupWebAppContext]:266) - ZeppelinServer Webapp path: /usr/hdp/current/zeppelin-server/webapps
INFO [2016-12-01 09:42:09,331] ({main} ZeppelinServer.java[main]:114) - Starting zeppelin server
INFO [2016-12-01 09:42:09,333] ({main} Server.java[doStart]:327) - jetty-9.2.15.v20160210
WARN [2016-12-01 09:42:09,367] ({main} WebAppContext.java[doStart]:514) - Failed startup of context o.e.j.w.WebAppContext@69b794e2{/,null,null}{/usr/hdp/current/zeppelin-server/lib/zeppelin-web-0.6.0.2.5.0.0-1133.war}
java.lang.IllegalStateException: Failed to delete temp dir /usr/hdp/2.5.0.0-1133/zeppelin/webapps
at org.eclipse.jetty.webapp.WebInfConfiguration.configureTempDirectory(WebInfConfiguration.java:372)
at org.eclipse.jetty.webapp.WebInfConfiguration.resolveTempDirectory(WebInfConfiguration.java:260)
at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:69)
at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:468)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:504)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:163)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.server.Server.start(Server.java:387)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart(Server.java:354)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:116)
.
.
INFO [2016-12-01 09:42:09,390] ({main} AbstractConnector.java[doStart]:266) - Started ServerConnector@28cab3cc{HTTP/1.1}{0.0.0.0:9995}
INFO [2016-12-01 09:42:09,391] ({main} Server.java[doStart]:379) - Started @1094ms
So the port was opened fine, but the Zeppelin WebAppContext was not initialized properly due to the above error. That is why the Zeppelin View showed a "Service check failed" error with the message "zeppelin service is not running". In this case we need to check whether "/usr/hdp/2.5.0.0-1133/zeppelin/webapps" has the proper permissions and ownership, i.e. "zeppelin:hadoop", the user who runs Zeppelin, something like the following:
INCORRECT: # ls -l /usr/hdp/2.5.0.0-1133/zeppelin/web*
drwxr-xr-x. 3 root root 4096 Dec 1 09:37 webapps
CORRECT: # ls -l /usr/hdp/2.5.0.0-1133/zeppelin/web*
drwxr-xr-x. 10 zeppelin hadoop 4096 Dec 1 09:59 webapp
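The fix, then, is to hand the webapps directory back to the user who runs Zeppelin. A sketch follows; on the affected node you would run the commented `chown` as root against the real path, while the demo below uses a scratch directory and the current user as a stand-in so it can run without root:

```shell
# On the affected node (as root), the actual fix would be:
#   chown -R zeppelin:hadoop /usr/hdp/2.5.0.0-1133/zeppelin/webapps
# Demonstration on a scratch directory, current user standing in for
# zeppelin:hadoop:
WEBAPPS=$(mktemp -d)/webapps
mkdir -p "$WEBAPPS"
chown -R "$(id -un):$(id -gn)" "$WEBAPPS"
stat -c '%U' "$WEBAPPS"   # prints the owning user, which should match id -un
```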
12-23-2016
02:49 AM
2 Kudos
Ambari allows its users to have different configurations for different hosts and components via configuration groups. Ambari initially assigns all hosts in your cluster to one default configuration group for each service you install. When using configuration groups, Ambari enforces which configuration properties allow override, based on the installed components for the selected service and group. For more information please refer to: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-user-guide/content/using_host_config_groups.html
Here we will make a very small change to the "flume-conf" content and apply it to one specific host ("erie3.example.com"), while all the other hosts keep using the default "flume-conf" configuration. We can use the following Ambari API to list all the config groups:
http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/config_groups
Example: http://erie1.example.com:8080/api/v1/clusters/ErieCluster/config_groups
Step-1).
========
Let's see the current "flume-conf" from Ambari, which looks something like the following and is applied to all the hosts:
# Flume agent config
sandbox.sources = eventlog
sandbox.channels = file_channel
sandbox.sinks = sink_to_hdfs
# Define / Configure source
sandbox.sources.eventlog.type = exec
sandbox.sources.eventlog.command = tail -F /var/log/eventlog-demo.log
sandbox.sources.eventlog.restart = true
sandbox.sources.eventlog.batchSize = 1000
#sandbox.sources.eventlog.type = seq
# HDFS sinks
sandbox.sinks.sink_to_hdfs.type = hdfs
sandbox.sinks.sink_to_hdfs.hdfs.fileType = DataStream
sandbox.sinks.sink_to_hdfs.hdfs.path = /flume/events
sandbox.sinks.sink_to_hdfs.hdfs.filePrefix = eventlog
sandbox.sinks.sink_to_hdfs.hdfs.fileSuffix = .log
sandbox.sinks.sink_to_hdfs.hdfs.batchSize = 1000
# Use a channel which buffers events in memory
sandbox.channels.file_channel.type = file
sandbox.channels.file_channel.checkpointDir = /var/flume/checkpoint
sandbox.channels.file_channel.dataDirs = /var/flume/data
# Bind the source and sink to the channel
sandbox.sources.eventlog.channels = file_channel
sandbox.sinks.sink_to_hdfs.channel = file_channel
Step-2).
========
Now suppose that for host "erie3.example.com" we want to run Flume with a slightly different property, like [sandbox.sources.eventlog.command = tail -F /var/log/eventlog-demo-new-location.log]. In order to achieve that, we need to create a "config_group" JSON payload and push it to Ambari. Here we will create a file like "/tmp/erie3_flume_conf.json":
[
{
"ConfigGroup": {
"cluster_name": "ErieCluster",
"group_name": "cfg_group_test1",
"tag": "FLUME",
"description": "FLUME configs for Changes",
"hosts": [
{
"host_name": "erie3.example.com"
}
],
"desired_configs": [
{
"type": "flume-conf",
"tag": "nextgen1",
"properties": {
"content":
"
# Flume agent config\r\n
sandbox.sources = eventlog \r\n
sandbox.channels = file_channel \r\n
sandbox.sinks = sink_to_hdfs \r\n
\r\n
# Define / Configure source \r\n
sandbox.sources.eventlog.type = exec \r\n
sandbox.sources.eventlog.command = tail -F /var/log/eventlog-demo-new-location.log \r\n
sandbox.sources.eventlog.restart = true \r\n
sandbox.sources.eventlog.batchSize = 1000 \r\n
#sandbox.sources.eventlog.type = seq \r\n
\r\n
# HDFS sinks \r\n
sandbox.sinks.sink_to_hdfs.type = hdfs \r\n
sandbox.sinks.sink_to_hdfs.hdfs.fileType = DataStream \r\n
sandbox.sinks.sink_to_hdfs.hdfs.path = /flume/events \r\n
sandbox.sinks.sink_to_hdfs.hdfs.filePrefix = eventlog \r\n
sandbox.sinks.sink_to_hdfs.hdfs.fileSuffix = .log \r\n
sandbox.sinks.sink_to_hdfs.hdfs.batchSize = 2000 \r\n
# Use a channel which buffers events in memory \r\n
sandbox.channels.file_channel.type = file \r\n
sandbox.channels.file_channel.checkpointDir = /var/flume/checkpoint \r\n
sandbox.channels.file_channel.dataDirs = /var/flume/data \r\n
# Bind the source and sink to the channel \r\n
sandbox.sources.eventlog.channels = file_channel \r\n
sandbox.sinks.sink_to_hdfs.channel = file_channel \r\n
"
}
},
{
"type": "flume-env",
"tag": "nextgen1"
}
]
}
}
]
**Notice:** If our configuration contains newlines, then the JSON data should use the "\r\n" character sequence.
- Also notice that the above JSON targets host "erie3.example.com" as follows (we can list more hosts there, comma separated):
"hosts": [
{
"host_name": "erie3.example.com"
}
]
- We have also specified the name of our config group: "group_name": "cfg_group_test1"
Step-3).
========
Let's now POST these changes to Ambari and see if it works:
$ curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @/Users/jsensharma/Cases/Articles/Flume_Config_Group/erie3_flume_conf.json http://erie1.example.com:8080/api/v1/clusters/ErieCluster/config_groups
OUTPUT:
{
"resources" : [
{
"href" : "http://erie1.example.com:8080/api/v1/clusters/ErieCluster/config_groups/252",
"ConfigGroup" : {
"id" : 252
}
}
]
}
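As a side note on the "\r\n" requirement mentioned in Step-2, the escaped content string can be generated rather than typed by hand. A small sketch (the two sample lines below stand in for a real flume-conf file):

```shell
# Turn each line of a plain config file into "<line>\r\n" so the result
# can be pasted into the JSON "content" property of the config group.
CONF=$(mktemp)
printf '%s\n' '# Flume agent config' 'sandbox.sources = eventlog' > "$CONF"
awk '{printf "%s\\r\\n", $0}' "$CONF"
# prints: # Flume agent config\r\nsandbox.sources = eventlog\r\n
```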
12-22-2016
11:59 AM
2 Kudos
@Ye Jun As there are no view JARs present inside "/var/lib/ambari-server/resources/views/", it indicates that either someone mistakenly deleted those JARs or the installation is incomplete. So either get the same version of the view JARs from another working installation, put them in the same directory, and restart the Ambari server, or reinstall the Ambari server to get those JARs back.