Member since: 01-06-2016
Posts: 36
Kudos Received: 22
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 368 | 04-29-2016 02:51 PM
 | 428 | 03-10-2016 07:41 PM
 | 625 | 02-15-2016 08:22 PM
03-06-2017
10:29 PM
1 Kudo
Hi - Looking for some better methods of reducing an existing table's region count. We have a few tables that were originally pre-split with far too many regions. As time goes on, issues like poor performance and compaction storms become evident. With smaller tables I've been using Export/Import or CopyTable to move the data to new tables with fewer regions, but larger tables (TBs) are very challenging to run to completion. Are there any better strategies for accomplishing the above? In some cases the region counts are so high that manually merging them is not feasible, so I find myself back at Export/Import.
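For reference, the two mechanics I mean, sketched with hypothetical table and region names (the CopyTable target must already exist, pre-split with the desired region count):
hbase> merge_region 'ENCODED_REGION_A', 'ENCODED_REGION_B'
$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=my_table_resplit my_table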
11-17-2016
07:33 PM
This happens when using the Ambari cluster install wizard (Ambari 2.2.2.0, HDP 2.4.2.0, CentOS 7):
Host Check Package Issue
The following packages should be uninstalled:
Package nut-client.x86_64 2.7.2-3.el7
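For what it's worth, if the package genuinely isn't needed on the host, removing it before re-running the host checks is plain yum usage:
$ sudo yum remove nut-client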
11-15-2016
09:06 PM
Wondering why Ambari warns that nut-client.x86_64 should be uninstalled when running host checks. Anyone know the problem with Nut?
10-27-2016
08:24 PM
I added a few nodes to install Storm, but Ambari (2.2.2.0) is apparently trying to use the wrong version.
On the destination node, the path /var/lib/ambari-agent/cache/common-services/STORM contains 2 versions, 0.9.1 and 0.9.1.2.1.
The directory 0.9.1 is empty and is the one Ambari is trying to execute scripts in. The other directory contains the items needed for installation, so the install fails with errors like:
Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Script /var/lib/ambari-agent/cache/common-services/STORM/0.9.1/package/scripts/drpc_server.py does not exist'; 'Script /var/lib/ambari-agent/cache/common-services/STORM/0.9.1/package/scripts/drpc_server.py does not exist'
I suppose the question is: how do I get Ambari back into shape here?
Any help/thoughts are greatly appreciated.
edit: I found that my repo_version table in the Ambari database has 2 different entries for the stack_id I'm using:
53 | 2.4.0.0-169 | HDP-2.4.0.0 | [{"repositories":[{"Repositories/repo_id":"HDP-2.4","Repositories/base_url":"http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.4.0.0","Repositories/repo_name":"HDP"},{"Repositories/repo_id":"HDP-UTILS-1.1.0.20","Repositories/base_url":"http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6","Repositories/repo_name":"HDP-UTILS"}],"OperatingSystems/os_type":"redhat6"}] | 51
103 | 2.4.2.0-258 | HDP-2.4.2.0 | [{"repositories":[{"Repositories/repo_id":"HDP-2.4","Repositories/base_url":"http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.4.2.0","Repositories/repo_name":"HDP"},{"Repositories/repo_id":"HDP-UTILS-1.1.0.20","Repositories/base_url":"http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6","Repositories/repo_name":"HDP-UTILS"}],"OperatingSystems/os_type":"redhat6"}] | 51
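For anyone reproducing this, the rows above are roughly what this query returns (column names are assumed from the output layout, not checked against the schema):
select repo_version_id, version, display_name, repositories, stack_id from repo_version;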
09-26-2016
08:58 PM
Thanks for your response. I'll follow that course. Cheers
09-23-2016
03:09 PM
2 Kudos
Hello - I have a Kafka cluster with a dedicated (not managed by Ambari) Zookeeper ensemble for which I need to change the hostnames. No Ambari, no HDP, nothing except Kafka and ZK. Beyond updating Kafka's configuration files, is there more to consider (e.g. consumers retaining their offsets)? The ZK ensemble will remain untouched aside from the hostname change. Thanks.
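To be concrete, the broker-side change I have in mind is just the connect string in server.properties (hostnames below are hypothetical):
# before
zookeeper.connect=zk1.old.lan:2181,zk2.old.lan:2181,zk3.old.lan:2181
# after
zookeeper.connect=zk1.new.lan:2181,zk2.new.lan:2181,zk3.new.lan:2181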
09-12-2016
05:09 PM
Ah, thanks. Feeling very silly I left that param in. 🙂
09-11-2016
08:34 PM
1 Kudo
Hello - I'm having trouble using the Sqoop CLI with HCatalog. The error is always:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatOutputFormat not found
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:519)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:499)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1598)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:499)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:285)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1556)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1553)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1486)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatOutputFormat not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
at org.apache.hadoop.mapreduce.task.JobContextImpl.getOutputFormatClass(JobContextImpl.java:222)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:515)
... 11 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatOutputFormat not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
... 13 more
I added the following to the sqoop-env template within Ambari (advice I found in another post) with no luck. This job works fine as an Oozie Sqoop action, but fails with the above when executed via the CLI:
$ sqoop import --skip-dist-cache --username xx --password-file /dir/xx.dat --connect jdbc:postgresql://server.x.lan/xx --split-by download_id --hcatalog-table project_xx_0 --hcatalog-database default --query "select [lots of stuff with several joins] AND \$CONDITIONS"
So this is a Sqoop free-form query import from a PostgreSQL table into an existing Hive/HCatalog table. Any help or tips are greatly appreciated. Thanks.
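One commonly suggested shape for such a classpath addition, sketched under the assumption of a standard HDP layout (the paths and variables below are assumptions and may differ per cluster):
export HCAT_HOME=/usr/hdp/current/hive-webhcat
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HCAT_HOME/share/hcatalog/*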
07-08-2016
04:27 PM
1 Kudo
@Artem Ervits This is a long-standing bug in Ambari setup. For an external Postgres database the script is: Ambari-DDL-Postgres-CREATE.sql
05-11-2016
05:50 PM
Upgrading to Ambari 2.2.2 from 2.2: when executing $ ambari-server upgrade I am receiving the following warnings:
Updating properties in ambari.properties ...
WARNING: Can not find ambari-env.sh.rpmsave file from previous version, skipping restore of environment settings
Fixing database objects owner
I am using an external PostgreSQL database. I do have backups, but what is this going to wipe out?
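For anyone in the same spot, a plain dump of the Ambari database beforehand is cheap insurance (database and user names here are assumptions):
$ pg_dump -U ambari ambari > ambari-backup.sql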
04-29-2016
02:51 PM
Thanks for the responses but the install state was not recoverable.
04-28-2016
10:48 PM
During an HDP deployment (automated with Ambari) we've found some issues on our end to resolve. Are we able to resume installation from the point we left off in the Ambari installer at a later time, or will we need to start from scratch? In this case the stopping point is after cluster configuration, but before deployment. Ambari 2.2.1, HDP 2.4. Thanks
04-11-2016
06:16 PM
Accepting @Alejandro Fernandez 's answer as this is essentially what we did, but I definitely recommend treading cautiously and, of course, backing up your database before making changes. This does appear to be a bug.
04-06-2016
06:31 PM
Thanks, but since I don't have an "upgrade" table I can't run a select.
04-06-2016
06:01 PM
Hello - In the process of upgrading Ambari to 2.2.1.1 I hit the following error after $ ambari-server upgrade:
06 Apr 2016 13:22:52,129 ERROR [main] SchemaUpgradeHelper:315 - Exception occurred during upgrade, failed
org.apache.ambari.server.AmbariException: Errors found while populating the upgrade table with values for columns upgrade_type and upgrade_package.
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executePreDMLUpdates(SchemaUpgradeHelper.java:215)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:302)
Caused by: org.apache.ambari.server.AmbariException: Errors found while populating the upgrade table with values for columns upgrade_type and upgrade_package.
I took a look at my (external) Postgres database and found the ambari database does not have an "upgrade" table. I checked another cluster with a similar external Postgres instance and it does have that table. Should I create the table manually? If so, does anyone have the appropriate create statement for the table? If there's a more Ambari-friendly way to get the tables straight, that would be preferable. Thanks.
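To double-check, a quick existence test against the database (standard information_schema, nothing Ambari-specific):
select table_name from information_schema.tables where table_name = 'upgrade';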
03-22-2016
07:44 PM
1 Kudo
We tried all of Ali's suggestions, but we only succeeded after hard-coding ${hdp.version} in mapreduce.application.classpath:
$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/2.3.4.0-3485/hadoop/lib/hadoop-lzo-0.6.0.2.3.4.0-3485.jar:/etc/hadoop/conf/secure
03-10-2016
07:41 PM
2 Kudos
Confirmed by HWX support that this is not possible on the current Hadoop version.
03-09-2016
07:00 PM
1 Kudo
I know the MapReduce config "yarn.app.mapreduce.am.job.client.port-range" can be set, but I've read conflicting info on whether this setting actually sticks. Is this known to be working on HDP 2.3?
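For clarity, the setting in question would be entered as a key/value pair in the MapReduce config; the range itself below is just an example:
yarn.app.mapreduce.am.job.client.port-range=50000-50100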
02-19-2016
07:54 PM
Hi Artem - I looked in the file and did not see an export for that variable. Quick look:
/etc/hadoop/conf$ cat hadoop-env.sh | grep -i hdp
export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
# Path to jsvc required by secure HDP 2.0 datanode
if [ -d "/usr/hdp/current/tez-client" ]; then
# When using versioned RPMs, the tez-client will be a symlink to the current folder of tez in HDP.
export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec
export HADOOP_OPTS="-Dhdp.version=$HDP_VERSION $HADOOP_OPTS"
02-19-2016
07:46 PM
2 Kudos
Experiencing inconsistencies with ${hdp.version} and would like to check the value. Once it's verified or changed, does this require service restarts to propagate? Thanks
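On the nodes themselves, the installed and active versions can at least be listed (hdp-select ships with HDP):
$ hdp-select versions
$ ls -l /usr/hdp/current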
02-15-2016
08:22 PM
1 Kudo
Thank you both for your help. The support case was escalated and we received excellent guidance for getting Ambari back in sync. The primary resolution for the sync issue raised here was executing:
$ ambari-server upgradestack HDP-2.3
02-15-2016
04:16 PM
1 Kudo
Also:
$ ls /usr/hdp
2.2.0.0-2041 2.2.4.2-2 2.3.2.0-2950 current
where current is correctly pointing to 2.3.x
02-15-2016
04:01 PM
1 Kudo
Nope, not upgrading anything. Ambari database stack options:
select * from stack;
stack_id | stack_name | stack_version
----------+------------+-----------------
1 | HDP | 2.3
2 | HDP | 2.1.GlusterFS
3 | HDP | 2.2
4 | HDP | 2.1
5 | HDP | 2.0
6 | HDP | 2.0.6
7 | HDP | 2.3.GlusterFS
8 | HDP | 2.0.6.GlusterFS
02-15-2016
03:49 PM
1 Kudo
Thank you Artem. My current status is posted above in response to Neeraj's comments.
02-15-2016
03:47 PM
1 Kudo
As suggested, I have the blueprint which reflects all the current information for our stack (2.3.2). When attempting to start services on the cluster Ambari favors 2.2, the previously installed version. My assumption is the Ambari database needs some adjusting to properly recognize 2.3.2. I've tried a few updates to set the installed stack to 2.3.2, but Ambari is unsuccessful at starting the cluster components.