Member since
05-20-2016
155
Posts
220
Kudos Received
30
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5939 | 03-23-2018 04:54 AM
 | 2154 | 10-05-2017 02:34 PM
 | 1140 | 10-03-2017 02:02 PM
 | 7734 | 08-23-2017 06:33 AM
 | 2471 | 07-27-2017 10:20 AM
10-12-2016
12:54 PM
@chris obia Can you please try kinit using the HiveServer2 keytab and see whether it succeeds? The keytab can be found on the HiveServer2 machine at /etc/security/keytabs/hive.service.keytab. To find the principal name, run the command below, which lists the principals in the keytab:

[hive@XYX ~]$ klist -kt /etc/security/keytabs/hive.service.keytab
Keytab name: FILE:/etc/security/keytabs/hive.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
4 09/06/16 04:04:06 hive/XYZ@EXAMPLE.COM
Now kinit using the principal name as below and let us know whether it is successful:

kinit -kt /etc/security/keytabs/hive.service.keytab hive/XYZ@EXAMPLE.COM
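If you want to script this check, a minimal sketch could look like the following. It parses the principal out of the klist output format shown above (the sample text is copied from this thread; on a real host you would pipe `klist -kt` directly instead):

```shell
# Sample klist -kt output copied from above; in practice use:
#   klist -kt /etc/security/keytabs/hive.service.keytab
KLIST_OUT='Keytab name: FILE:/etc/security/keytabs/hive.service.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   4 09/06/16 04:04:06 hive/XYZ@EXAMPLE.COM'

# klist -kt prints 3 header lines; the principal is the last field of the first entry row
PRINC=$(printf '%s\n' "$KLIST_OUT" | awk 'NR==4 {print $NF}')
echo "$PRINC"
# then authenticate with it:
#   kinit -kt /etc/security/keytabs/hive.service.keytab "$PRINC"
```
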
10-12-2016
10:54 AM
1 Kudo
@Aaron Harris A couple of things to check:

1. Confirm whether the Nimbus service is up and running.
2. Check for errors in the Nimbus log, available on the Nimbus host at /var/log/storm/nimbus.log.
3. Check for errors in the Storm UI log, available on the Storm UI server at /var/log/storm/ui.log.
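Steps 2 and 3 can be scripted as a quick grep over the log files. A sketch (the log line below is fabricated for illustration; on the real host point LOG at /var/log/storm/nimbus.log or /var/log/storm/ui.log):

```shell
# simulate a nimbus log excerpt; on a real host use LOG=/var/log/storm/nimbus.log
LOG=/tmp/nimbus-sample.log
printf '%s\n' \
  '2016-10-12 10:00:01 b.s.d.nimbus [INFO] Starting Nimbus server...' \
  '2016-10-12 10:00:05 b.s.d.nimbus [ERROR] Address already in use' > "$LOG"

# count and print ERROR lines
ERRORS=$(grep -c 'ERROR' "$LOG")
echo "found $ERRORS ERROR line(s)"
grep 'ERROR' "$LOG"
```
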
10-12-2016
06:48 AM
@mliem Please refer to the link below for configuring NameNode HA. Make sure that step 15 is executed:

https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.1/bk_Ambari_Users_Guide/content/_how_to_configure_namenode_high_availability.html

Below are the details of step 15: If you are using Hive, you must manually change the Hive Metastore FS root to point to the Nameservice URI instead of the NameNode URI. You created the Nameservice ID in the Get Started step.
Check the current FS root. On the Hive host:
hive --config /etc/hive/conf.server --service metatool -listFSRoot
The output looks similar to the following:

Listing FS Roots...
hdfs://<namenode-host>/apps/hive/warehouse
Use this command to change the FS root:
$ hive --config /etc/hive/conf.server --service metatool -updateLocation <new-location> <old-location>
For example, where the Nameservice ID is mycluster:
$ hive --config /etc/hive/conf.server --service metatool -updateLocation hdfs://mycluster/apps/hive/warehouse hdfs://c6401.ambari.apache.org/apps/hive/warehouse
The output looks similar to the following:
Successfully updated the following locations...Updated X records in SDS table
10-05-2016
12:07 PM
Can you please specify which configuration file you are looking for? The Atlas configuration file?
10-05-2016
07:53 AM
2 Kudos
@PARIVALLAL R What HDP stack version are you working with? I do not recall whether Ambari 2.1.2 supported the "Add Service" feature via the Ambari UI; if it is available on the Home page under "Actions", please use it. Otherwise, to install a component via the API you can refer to the document below:

https://cwiki.apache.org/confluence/display/AMBARI/Adding+a+New+Service+to+an+Existing+Cluster

To uninstall a component you can use the doc below as a reference, because uninstall is not supported via the UI in Ambari 2.1.2:

https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host
10-05-2016
04:59 AM
2 Kudos
@PARIVALLAL R Which version of Ambari are you using? If on Ambari 2.2.2.x, you can do it using the Ambari DELETE service API:

curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://AMBARI_SERVER_HOST:8080/api/v1/clusters/c1/services/ATLAS

If on Ambari 2.4.x, the DELETE service option is available under the Atlas "Service Actions" menu. Please make sure to stop the service before doing so.
09-27-2016
04:55 PM
1 Kudo
@ssharma
try building an uber jar and submitting that. Below is the pom snippet to achieve this:

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>1.4</version>
<configuration>
<createDependencyReducedPom>true</createDependencyReducedPom>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass></mainClass>
</transformer>
</transformers>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
</execution>
</executions>
</plugin>
09-26-2016
09:21 AM
3 Kudos
I am facing the below exception on a submitted coordinator job:

Caused by: org.apache.oozie.workflow.WorkflowException: E0736: Workflow definition length [637,397] exceeded maximum allowed length [100,000]
at org.apache.oozie.service.WorkflowAppService.readDefinition(WorkflowAppService.java:130)
at org.apache.oozie.service.LiteWorkflowAppService.parseDef(LiteWorkflowAppService.java:45)
at org.apache.oozie.command.wf.SubmitXCommand.execute(SubmitXCommand.java:179)
Is there any way to bypass this?
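For context, the 100,000-character cap appears to come from an Oozie setting that can be overridden in oozie-site.xml (property name per oozie-default.xml). Raising it is one possible workaround, though splitting the workflow into sub-workflows may be the cleaner fix. A sketch, assuming your Oozie version honors the override:

```xml
<!-- oozie-site.xml: raise the max workflow definition length (default 100000 chars);
     restart Oozie after changing this -->
<property>
  <name>oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength</name>
  <value>1000000</value>
</property>
```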
Labels:
- Apache Oozie
09-23-2016
07:50 AM
2 Kudos
@tauqeer khan You can try the local repository approach if there is no access to the internet; the yum steps should then be optional:

http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-installation/content/setting_up_a_local_repository.html

Also, the tarballs for Ambari are available here. Please use them; this does not require yum access.
09-21-2016
02:37 PM
4 Kudos
Ambari 2.4.0.0 officially supports the LogSearch component [Tech Preview]. To learn more about the LogSearch component, please refer to the link Ambari LogSearch. While the LogSearch component does support pushing logs to a Kafka topic [based on which real-time log analytics can be performed], this is not officially supported in Ambari 2.4.0.0 and might be addressed in Ambari 2.5.0.0. This article provides the details on how to configure LogSearch [the LogFeeder component] to push to a Kafka topic if there is a need to capture and perform real-time analytics based on logs in your cluster.

1. After installing the LogSearch component from Ambari 2.4.0.0, go to the LogSearch config screen and, under "Advanced logfeeder-properties", append the following to the property "logfeeder.config.files":

{default_config_files},kafka-output.json

2. Create the kafka-output.json file with the content below under the directory /etc/ambari-logsearch-logfeeder/conf/ on the nodes which have LogFeeder [ideally all the nodes in your cluster]:

{
"output": [
{
"is_enabled": "true",
"destination": "kafka",
"broker_list": "ctr-e25-1471039652053-0001-01-000006.test.domain:6667",
"topic": "log-streaming",
"conditions": {
"fields": {
"rowtype": [
"service"
]
}
}
}
]
}
3. If the cluster is Kerberized, configure a Kafka PLAINTEXT listener as below, because a workaround to push to PLAINTEXTSASL is not available. Make sure the broker endpoint configured in step 2 is the PLAINTEXT one:

PLAINTEXT://localhost:6667,PLAINTEXTSASL://localhost:6668

4. Create the Kafka topic and grant ACLs to the ANONYMOUS user. The commands below help with this:

./bin/kafka-topics.sh --zookeeper zookeeper-node:2181 --create --topic log-streaming --partitions 1 --replication-factor 1
./bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-node:2181 --add --allow-principal User:ANONYMOUS --operation Read --operation Write --operation Describe --topic log-streaming
5. Restart the LogSearch service from Ambari, and that's it! Logs should be flowing by now. Below is the command to verify:

./bin/kafka-console-consumer.sh --zookeeper zookeeper-node:2181 --topic log-streaming --from-beginning --security-protocol PLAINTEXT
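Before the restart in step 5, it can save a restart cycle to confirm that kafka-output.json is syntactically valid JSON. A quick check (python3 is assumed available on the node; the temp path and minimal config here stand in for the real /etc/ambari-logsearch-logfeeder/conf/kafka-output.json):

```shell
# write a trimmed-down copy of the step-2 config to a temp path for demonstration;
# on a real node, validate /etc/ambari-logsearch-logfeeder/conf/kafka-output.json instead
CFG=/tmp/kafka-output.json
cat > "$CFG" <<'EOF'
{"output":[{"is_enabled":"true","destination":"kafka","topic":"log-streaming"}]}
EOF

# json.tool exits non-zero on malformed JSON
if python3 -m json.tool "$CFG" > /dev/null 2>&1; then
  echo "kafka-output.json: valid JSON"
else
  echo "kafka-output.json: INVALID JSON" >&2
fi
```
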
Labels: