Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 14989 | 03-08-2019 06:33 PM |
| | 6172 | 02-15-2019 08:47 PM |
| | 5098 | 09-26-2018 06:02 PM |
| | 12586 | 09-07-2018 10:33 PM |
| | 7444 | 04-25-2018 01:55 AM |
11-09-2017 06:46 AM

Changing the REALM to UPPERCASE in Ambari helps. There is no need to change it in AD (this worked for me on Windows Server 2012 R2).
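For reference, Kerberos realm names are conventionally all-uppercase even when the AD domain itself is mixed-case. A hypothetical krb5.conf fragment (EXAMPLE.COM and the KDC host below are placeholders, not from the original post) would look like:

```
[libdefaults]
  default_realm = EXAMPLE.COM

[realms]
  EXAMPLE.COM = {
    kdc = ad.example.com
    admin_server = ad.example.com
  }
```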
04-19-2018 09:40 PM
Thanks @Chad Woodhead - Updated! 🙂
02-21-2017 05:26 PM

SYMPTOM: An Oozie Sqoop action fails with the error below while inserting data into Hive.

```
20217 [Thread-30] INFO org.apache.sqoop.hive.HiveImport - Sorry ! hive-shell is disabled use 'Beeline' or 'Hive View' instead. Please contact cluster administrators for further information
20218 [main] ERROR org.apache.sqoop.tool.ImportTool - Encountered IOException running import job: java.io.IOException: Hive exited with status 1
	at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:389)
	at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:342)
	at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:246)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:524)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
	at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:243)
	at org.apache.sqoop.tool.JobTool.run(JobTool.java:298)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
	at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:202)
	at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:182)
	at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:51)
	at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:48)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:242)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
```

ROOT CAUSE: Sqoop normally imports into Hive through the CliDriver class rather than the hive shell script, but Oozie could not find that class on its classpath, so Sqoop fell back to the hive CLI, which is disabled on this cluster.

WORKAROUND: N/A

RESOLUTION: Add the property below to the job.properties file and re-run the failed Oozie workflow.

```
oozie.action.sharelib.for.sqoop=sqoop,hive
```
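For context, a minimal job.properties carrying this setting might look like the following sketch; the NameNode/ResourceManager hosts and the workflow path are placeholders for illustration, not values from the original post:

```
nameNode=hdfs://namenode.example.com:8020
jobTracker=resourcemanager.example.com:8050
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/workflows/sqoop-import
# Make Oozie ship both the sqoop and hive sharelibs to the action
oozie.action.sharelib.for.sqoop=sqoop,hive
```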
02-11-2017 04:51 PM

I was able to stop the services and the ambari-agent on the node, and was then able to delete the node. I installed the deleted services on another node. Thank you @Jay SenSharma
02-09-2017 05:49 PM

I did a bit of research and looked into the code, and found that there is currently no TIMEOUT parameter at the Oozie level. I have raised an internal enhancement request for this.

Snippet from JavaActionExecutor.java:

```java
try {
    Element actionXml = XmlUtils.parseXml(action.getConf());
    FileSystem actionFs = context.getAppFileSystem();
    JobConf jobConf = createBaseHadoopConf(context, actionXml);
    jobClient = createJobClient(context, jobConf);
    RunningJob runningJob = getRunningJob(context, action, jobClient);
    if (runningJob == null) {
        context.setExecutionData(FAILED, null);
        throw new ActionExecutorException(ActionExecutorException.ErrorType.FAILED, "JA017",
            "Unknown hadoop job [{0}] associated with action [{1}]. Failing this action!",
            action.getExternalId(), action.getId());
    }
    // ...

protected RunningJob getRunningJob(Context context, WorkflowAction action, JobClient jobClient) throws Exception {
    RunningJob runningJob = jobClient.getJob(JobID.forName(action.getExternalId()));
    return runningJob;
}
```

Snippet from JobClient.java (MapReduce code):

```java
public RunningJob getJob(JobID jobid) throws IOException {
    JobStatus status = jobSubmitClient.getJobStatus(jobid);
    JobProfile profile = jobSubmitClient.getJobProfile(jobid);
    if (status != null && profile != null) {
        return new NetworkedJob(status, profile, jobSubmitClient);
    } else {
        return null;
    }
}
```

Snippet from JobSubmissionProtocol.java (MapReduce code):

```java
/**
 * Grab a handle to a job that is already known to the JobTracker.
 * @return Status of the job, or null if not found.
 */
public JobStatus getJobStatus(JobID jobid) throws IOException;
```

So I got the answer to my question! 🙂
02-07-2017 07:08 PM

PROBLEM: Ambari Server will not start because of database inconsistencies.

Sample error:

```
2017-02-06 05:08:43,975 ERROR - You have non selected configs: zeppelin-ambari-config for service ZEPPELIN from cluster XXXX!
2017-02-06 05:08:43,976 INFO - ******************************* Check database completed *******************************
2017-02-06 05:10:12,834 INFO - Checking DB store version
2017-02-06 05:10:14,094 INFO - DB store version is compatible
2017-02-07 13:50:31,769 INFO - ******************************* Check database started *******************************
2017-02-07 13:50:41,247 INFO - Checking for configs not mapped to any cluster
2017-02-07 13:50:41,322 INFO - Checking for configs selected more than once
2017-02-07 13:50:41,326 INFO - Checking for hosts without state
2017-02-07 13:50:41,330 INFO - Checking host component states count equals host component desired states count
2017-02-07 13:50:41,334 INFO - Checking services and their configs
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / SQOOP
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / HDFS
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / MAPREDUCE2
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / TEZ
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / SPARK
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / HBASE
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / ZOOKEEPER
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / YARN
2017-02-07 13:50:45,793 INFO - Processing HDP-2.5 / KNOX
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / PIG
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / RANGER
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / HIVE
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / SLIDER
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / AMBARI_INFRA
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / KAFKA
2017-02-07 13:50:45,794 INFO - Processing HDP-2.5 / SMARTSENSE
2017-02-07 13:50:45,809 ERROR - You have non selected configs: zeppelin-ambari-config for service ZEPPELIN from cluster XXXXX!
2017-02-07 13:50:45,810 INFO - ******************************* Check database completed *******************************
```

BUSINESS IMPACT: It is not recommended to make any changes to service configurations while the backend database is inconsistent.

WORKAROUND:

```
ambari-server start --skip-database-check
```

Note: this is not recommended for production clusters. If you do this, please do not make any modifications to service configurations until you resolve the conflicts.

RESOLUTION:

1. Stop Ambari Server:

```
ambari-server stop
```

2. Take a backup of the Ambari database. For Postgres, use the pg_dump command; for MySQL, use the mysqldump command.

3. Run the queries below to resolve the conflicts:

```sql
delete from hostcomponentstate where service_name = 'ZEPPELIN';
delete from hostcomponentdesiredstate where service_name = 'ZEPPELIN';
delete from servicecomponentdesiredstate where service_name = 'ZEPPELIN';
delete from servicedesiredstate where service_name = 'ZEPPELIN';
delete from serviceconfighosts where service_config_id in (select service_config_id from serviceconfig where service_name = 'ZEPPELIN');
delete from serviceconfigmapping where service_config_id in (select service_config_id from serviceconfig where service_name = 'ZEPPELIN');
delete from serviceconfig where service_name = 'ZEPPELIN';
delete from requestresourcefilter where service_name = 'ZEPPELIN';
delete from requestoperationlevel where service_name = 'ZEPPELIN';
delete from clusterservices where service_name = 'ZEPPELIN';
delete from clusterconfig where type_name like 'zeppelin%';
delete from clusterconfigmapping where type_name like 'zeppelin%';
```

4. Start Ambari Server; it should come up without any inconsistencies.

Please feel free to comment if you need any further help on this. Happy Hadooping!!
09-13-2017 06:52 AM

@Kuldeep Kulkarni A quick two-buck question: does Hortonworks allow access to the official documentation during the HDPCA exam?
02-07-2017 06:42 AM

@Saurabh You can check /var/log/messages to see if the installation has started. Also, if you want to check how much data has been downloaded: yum keeps packages in its cache while downloading, so you can run 'du -sh' under the watch command to check the status.

Example. Before downloading the package:

```
[root@prodnode1 ~]# ls -lrt /var/cache/yum//x86_64/6/Updates-ambari-2.4.0.1/packages/
total 0
```

Start the download:

```
[root@prodnode1 ~]# /usr/bin/yum -d 0 -e 0 -y install ambari-metrics-hadoop-sink
```

Status of the cache directory:

```
[root@prodnode1 ~]# ls -lrt /var/cache/yum//x86_64/6/Updates-ambari-2.4.0.1/packages/
total 4552
-rw-r--r--. 1 root root 4660232 Aug 30 20:49 ambari-metrics-hadoop-sink-2.4.0.1-1.x86_64.rpm
```

After the installation is complete, the package is removed from the cache location. You can run something like the following to keep watch over the download:

```
[root@prodnode1 ~]# watch du -sh /var/cache/yum//x86_64/6/Updates-ambari-2.4.0.1/packages/ambari-metrics-hadoop-sink-2.4.0.1-1.x86_64.rpm
```

Hope this is what you were looking for. Please do let us know if you have any further questions! 🙂
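The same polling idea can be sketched in Python; this is only a rough, hypothetical equivalent of running du under watch, and the yum cache path shown in the comment is from the post, not something the function requires:

```python
import os
import time

def poll_size(path, interval=1.0, polls=3):
    """Sample a file's size repeatedly, like watching `du -sh` on a yum cache entry.

    Returns one size (in bytes) per poll; a partially downloaded
    package should grow between samples, and a missing file reads as 0.
    """
    samples = []
    for _ in range(polls):
        samples.append(os.path.getsize(path) if os.path.exists(path) else 0)
        time.sleep(interval)
    return samples

# Hypothetical usage against the cache path from the post:
# poll_size("/var/cache/yum/x86_64/6/Updates-ambari-2.4.0.1/packages/"
#           "ambari-metrics-hadoop-sink-2.4.0.1-1.x86_64.rpm")
```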
01-24-2017 02:21 PM

@Baruch AMOUSSOU DJANGBAN Your posted repo file contents are not right; even the base URL is wrong. I am not sure where you downloaded them from.

The content of "hdp.repo" should be something like the following, according to [1]. Please see link [2] for the list of repos for different OSes.

```
#VERSION_NUMBER=2.5.3.0-37
[HDP-2.5.3.0]
name=HDP Version - HDP-2.5.3.0
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[HDP-UTILS-1.1.0.21]
name=HDP-UTILS Version - HDP-UTILS-1.1.0.21
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
```

Since you mentioned that you do not have internet connectivity, you will need to configure a local offline repository as described in my previous comment. Please see [3].

[1] http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/hdp.repo
[2] https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/hdp_25_repositories.html
[3] https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-installation/content/setting_up_a_local_repository_with_no_internet_access.html
03-06-2017 08:38 PM

@Georg Heiler - Yes. Please refer to the curl command below for the same:

```
curl -H "X-Requested-By: ambari" -X GET -u <admin-user>:<admin-password> http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?format=blueprint
```