Member since 04-03-2019
962 Posts
1743 Kudos Received
146 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 11421 | 03-08-2019 06:33 PM |
| | 4864 | 02-15-2019 08:47 PM |
| | 4148 | 09-26-2018 06:02 PM |
| | 10542 | 09-07-2018 10:33 PM |
| | 5588 | 04-25-2018 01:55 AM |
04-29-2017
02:56 AM
1 Kudo
@Funamizu Koshi Can you please start your Hive shell in DEBUG mode and check whether the DEBUG logs show any relevant errors? To start the Hive shell in DEBUG mode:
hive --hiveconf hive.root.logger=DEBUG,console
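If it helps, a minimal sketch for capturing that DEBUG output to a file as well as the console (the log file path is just an illustration):

# Start the Hive CLI with DEBUG logging printed to the console
hive --hiveconf hive.root.logger=DEBUG,console
# Or capture the output to a file for later inspection (file path is hypothetical)
hive --hiveconf hive.root.logger=DEBUG,console 2>&1 | tee /tmp/hive-debug.log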
04-29-2017
02:52 AM
2 Kudos
We no longer support Hue versions later than 2.6.1. Please use the better alternative, Ambari Views! 🙂
04-13-2017
11:01 PM
1 Kudo
Also see this for your reference - https://community.hortonworks.com/questions/33017/issue-with-oozie-after-upgrading-ambari.html
04-13-2017
11:01 PM
1 Kudo
@Andres Urrego
1. Copy falcon-<version>.jar to /usr/hdp/current/oozie-server/libext/
2. Stop the Oozie server daemon
3. Run the Oozie prepare-war command: /usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war
4. Start the Oozie server
The sketch after this list shows the same steps as shell commands.
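A minimal shell sketch of the steps above, assuming the standard HDP layout; the source path of the Falcon jar and the oozie service user are assumptions, so adjust them for your cluster:

# 1. Copy the Falcon jar into Oozie's libext directory (source path is hypothetical)
cp /path/to/falcon-<version>.jar /usr/hdp/current/oozie-server/libext/
# 2. Stop the Oozie server daemon
su - oozie -c "/usr/hdp/current/oozie-server/bin/oozied.sh stop"
# 3. Rebuild the Oozie WAR so the new jar is picked up
/usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war
# 4. Start the Oozie server again
su - oozie -c "/usr/hdp/current/oozie-server/bin/oozied.sh start"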
04-10-2017
08:55 PM
1 Kudo
@Nitin Gangasagar Move the ~/.ssh/known_hosts file to /tmp and try again; it should work. Another option is to edit ~/.ssh/known_hosts, remove line number 4, save, and try again.
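A quick sketch of both options; the sed variant assumes GNU sed and that line 4 is the offending entry, as in your error message:

# Option 1: move the whole known_hosts file out of the way
mv ~/.ssh/known_hosts /tmp/known_hosts.bak
# Option 2: delete only the offending entry (line 4) in place
sed -i '4d' ~/.ssh/known_hosts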
04-06-2017
03:53 AM
1 Kudo
@Michael DeGuzis Ambari metrics API docs - https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Metrics+API+specification
04-06-2017
03:32 AM
@darkz yu In addition to the other answers, you may want to run the Hive shell in DEBUG mode to understand what's going on:
hive --hiveconf hive.root.logger=DEBUG,console
Hope this helps.
03-06-2017
08:38 PM
@Georg Heiler - Yes. Please refer to the curl command below for the same:
curl -H "X-Requested-By: ambari" -X GET -u <admin-user>:<admin-password> http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?format=blueprint
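For example, a sketch that saves the exported blueprint to a file; the hostname, credentials, and cluster name are placeholders:

# Export the current cluster layout as a blueprint and save it to a file
curl -H "X-Requested-By: ambari" -X GET -u admin:admin \
  "http://ambari.example.com:8080/api/v1/clusters/mycluster?format=blueprint" \
  -o mycluster-blueprint.json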
03-06-2017
08:31 PM
2 Kudos
In a previous post we saw how to automate HDP installation with Kerberos authentication on a multi-node cluster using Ambari Blueprints. In this post, we will see how to deploy a multi-node HDP cluster with Resource Manager HA via an Ambari Blueprint.

Below are simple steps to install an HDP multi-node cluster with Resource Manager HA using an internal repository via Ambari Blueprints.

Note - From Ambari 2.6.X onwards, we have to register a VDF to register the internal repository, or else Ambari will pick up the latest version of HDP and use the public repos. Please see the document below for more information. For Ambari versions older than 2.6.X, this guide works without any modifications.
Document - https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-release-notes/content/ambari_relnotes-2.6.0.0-behavioral-changes.html

Step 1: Install the Ambari server using the steps under the link below:
http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/ch_Installing_Ambari.html

Step 2: Register the ambari-agent manually.
Install the ambari-agent package on all the nodes in the cluster and set the hostname to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini.

Step 3: Configure the Blueprints.
Please follow the steps below to create the Blueprints.

3.1 Create a hostmap.json (cluster creation template) file as shown below.
Note - This file holds information about all the hosts that are part of your HDP cluster. It is also called the cluster creation template in the Apache Ambari documentation.
{
"blueprint" : "hdptest",
"default_password" : "hadoop",
"host_groups" :[
{
"name" : "blueprint1",
"hosts" : [
{
"fqdn" : "blueprint1.crazyadmins.com"
}
]
},
{
"name" : "blueprint2",
"hosts" : [
{
"fqdn" : "blueprint2.crazyadmins.com"
}
]
},
{
"name" : "blueprint3",
"hosts" : [
{
"fqdn" : "blueprint3.crazyadmins.com"
}
]
}
]
}
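Before moving on, it can help to confirm the template parses cleanly; a minimal sketch using Python's built-in JSON tool (run the same check on cluster_config.json from the next step):

# Validate that the cluster creation template is well-formed JSON
python -m json.tool hostmap.json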
3.2 Create a cluster_config.json (blueprint) file; it contains the mapping of hosts to HDP components.
{
"configurations" : [
{
"core-site": {
"properties" : {
"fs.defaultFS" : "hdfs://%HOSTGROUP::blueprint1%:8020"
}}
},{
"yarn-site" : {
"properties" : {
"hadoop.registry.rm.enabled" : "false",
"hadoop.registry.zk.quorum" : "%HOSTGROUP::blueprint3%:2181,%HOSTGROUP::blueprint2%:2181,%HOSTGROUP::blueprint1%:2181",
"yarn.log.server.url" : "http://%HOSTGROUP::blueprint3%:19888/jobhistory/logs",
"yarn.resourcemanager.address" : "%HOSTGROUP::blueprint2%:8050",
"yarn.resourcemanager.admin.address" : "%HOSTGROUP::blueprint2%:8141",
"yarn.resourcemanager.cluster-id" : "yarn-cluster",
"yarn.resourcemanager.ha.automatic-failover.zk-base-path" : "/yarn-leader-election",
"yarn.resourcemanager.ha.enabled" : "true",
"yarn.resourcemanager.ha.rm-ids" : "rm1,rm2",
"yarn.resourcemanager.hostname" : "%HOSTGROUP::blueprint2%",
"yarn.resourcemanager.hostname.rm1" : "%HOSTGROUP::blueprint2%",
"yarn.resourcemanager.hostname.rm2" : "%HOSTGROUP::blueprint3%",
"yarn.resourcemanager.webapp.address.rm1" : "%HOSTGROUP::blueprint2%:8088",
"yarn.resourcemanager.webapp.address.rm2" : "%HOSTGROUP::blueprint3%:8088",
"yarn.resourcemanager.recovery.enabled" : "true",
"yarn.resourcemanager.resource-tracker.address" : "%HOSTGROUP::blueprint2%:8025",
"yarn.resourcemanager.scheduler.address" : "%HOSTGROUP::blueprint2%:8030",
"yarn.resourcemanager.store.class" : "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
"yarn.resourcemanager.webapp.address" : "%HOSTGROUP::blueprint2%:8088",
"yarn.resourcemanager.webapp.https.address" : "%HOSTGROUP::blueprint2%:8090",
"yarn.timeline-service.address" : "%HOSTGROUP::blueprint3%:10200",
"yarn.timeline-service.webapp.address" : "%HOSTGROUP::blueprint3%:8188",
"yarn.timeline-service.webapp.https.address" : "%HOSTGROUP::blueprint3%:8190"
}
}
}
],
"host_groups" : [
{
"name" : "blueprint1",
"components" : [
{
"name" : "NAMENODE"
},
{
"name" : "NODEMANAGER"
},
{
"name" : "DATANODE"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "HDFS_CLIENT"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "MAPREDUCE2_CLIENT"
},
{
"name" : "ZOOKEEPER_SERVER"
}
],
"cardinality" : 1
},
{
"name" : "blueprint2",
"components" : [
{
"name" : "SECONDARY_NAMENODE"
},
{
"name" : "RESOURCEMANAGER"
},
{
"name" : "NODEMANAGER"
},
{
"name" : "DATANODE"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "ZOOKEEPER_SERVER"
},
{
"name" : "HDFS_CLIENT"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "MAPREDUCE2_CLIENT"
}
],
"cardinality" : 1
},
{
"name" : "blueprint3",
"components" : [
{
"name" : "RESOURCEMANAGER"
},
{
"name" : "APP_TIMELINE_SERVER"
},
{
"name" : "HISTORYSERVER"
},
{
"name" : "NODEMANAGER"
},
{
"name" : "DATANODE"
},
{
"name" : "ZOOKEEPER_CLIENT"
},
{
"name" : "ZOOKEEPER_SERVER"
},
{
"name" : "HDFS_CLIENT"
},
{
"name" : "YARN_CLIENT"
},
{
"name" : "MAPREDUCE2_CLIENT"
}
],
"cardinality" : 1
}
],
"Blueprints" : {
"blueprint_name" : "hdptest",
"stack_name" : "HDP",
"stack_version" : "2.5"
}
}
Note - With the hostmap above, a token such as %HOSTGROUP::blueprint2% resolves to blueprint2.crazyadmins.com, so for example yarn.resourcemanager.address becomes blueprint2.crazyadmins.com:8050. I have kept the Resource Managers on blueprint2 and blueprint3; you can change this according to your requirements.

Step 4: Create an internal repository map.

4.1: HDP repository - copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it in a repo.json file.
{
"Repositories":{
"base_url":"http://<ip-address-of-repo-server>/hdp/centos6/HDP-2.5.3.0",
"verify_base_url":true
}
}

4.2: HDP-UTILS repository - copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it in an hdputils-repo.json file.
{
"Repositories":{
"base_url":"http://<ip-address-of-repo-server>/hdp/centos6/HDP-UTILS-1.1.0.21",
"verify_base_url":true
}
}

Step 5: Register the blueprint with the Ambari server by executing the command below. Note that the blueprint name in the URL must match the "blueprint" value in hostmap.json (hdptest here):
curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/hdptest -d @cluster_config.json

Step 6: Set up the internal repos via the REST API. Execute the curl calls below to register the internal repositories; the stack version and repository IDs must match your stack (HDP 2.5 here):
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/repositories/HDP-2.5 -d @repo.json
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.21 -d @hdputils-repo.json

Step 7: Pull the trigger! The command below will start the cluster installation:
curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmap.json

Please feel free to comment if you need any further help with this. Happy Hadooping!!
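The POST in Step 7 returns a request href that you can poll to follow the installation; a minimal sketch (the request id 1 is just an illustration; use the href from the actual response):

# Poll the install request status to watch cluster creation progress
curl -H "X-Requested-By: ambari" -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp/requests/1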
02-21-2017
05:26 PM
SYMPTOM: An Oozie Sqoop action fails with the error below while inserting data into Hive.
20217 [Thread-30] INFO org.apache.sqoop.hive.HiveImport - Sorry ! hive-shell is disabled use 'Beeline' or 'Hive View' instead. Please contact cluster administrators for further information
20218 [main] ERROR org.apache.sqoop.tool.ImportTool - Encountered IOException running import job: java.io.IOException: Hive exited with status 1
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:389)
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:342)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:246)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:524)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:243)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:298)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:202)
at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:182)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:51)
at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:242)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

ROOT CAUSE: Sqoop imports into Hive through Hive's CliDriver class rather than the hive shell script, but Oozie could not find that class on the classpath, so it fell back to the hive CLI, which is disabled on this cluster.

WORKAROUND: N/A

RESOLUTION: Add the property below to the job.properties file and re-run the failed Oozie workflow.
oozie.action.sharelib.for.sqoop=sqoop,hive
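For context, a sketch of how the relevant part of job.properties might look; only the sharelib line is the actual fix, and the other values are illustrative placeholders:

# job.properties - only the last line below is required by this fix
nameNode=hdfs://<namenode-host>:8020
jobTracker=<resourcemanager-host>:8050
oozie.use.system.libpath=true
oozie.action.sharelib.for.sqoop=sqoop,hive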