Member since
09-14-2015
Posts: 111
Kudos Received: 28
Solutions: 13
My Accepted Solutions
Title | Views | Posted
---|---|---
| 633 | 07-06-2017 08:16 PM
| 3655 | 07-05-2017 04:57 PM
| 1840 | 07-05-2017 04:52 PM
| 2317 | 12-30-2016 09:29 PM
| 779 | 12-30-2016 09:14 PM
12-06-2017
10:04 PM
If you come across this error while running the HBase service check in a non-Kerberized environment: "ERROR: Table ambarismoketest does not exist." please follow these steps to resolve it:
1. Stop HBase from Ambari.
2. Open the HBase ZooKeeper CLI and check whether the table's znode exists:
$ hbase zkcli
ls /hbase-unsecure/table/ambarismoketest
3. If the znode exists, delete it:
rmr /hbase-unsecure/table/ambarismoketest
4. Start HBase from Ambari.
5. Recreate the ambarismoketest table:
$ hbase shell
hbase> create 'ambarismoketest','family'
hbase> quit
6. This should fix the issue.
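To verify the fix before rerunning the service check, you can do a quick sanity check from the HBase shell (exists and scan are standard hbase shell commands):
$ hbase shell
hbase> exists 'ambarismoketest'
hbase> scan 'ambarismoketest'
hbase> quit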
- Tags:
- ambarismoketest
- Cloud & Operations
- FAQ
12-02-2017
06:36 PM
@Samant Thakur Yes, it is very annoying when the user ID is in upper or mixed case, which is quite normal in AD since AD is not case-sensitive. But Linux is case-sensitive, and so is Ranger. You can remove case-sensitivity in Ranger, but it is ideal to do this during installation. You can refer to this article: https://community.hortonworks.com/content/kbentry/145832/ranger-user-sync-issues-due-to-case-difference.html PS: As usual, if you think my response helped you find a solution, please accept it as the best answer.
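For reference, a minimal sketch of the relevant user-sync settings (property names from ranger-ugsync-site; please verify them against your Ranger version before applying):
ranger.usersync.ldap.username.caseconversion=lower
ranger.usersync.ldap.groupname.caseconversion=lower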
11-29-2017
11:39 PM
@Michael Martinez By default, Ambari pushes all service configuration (identified as the default configuration group) to any newly added node. If you want to make changes for a particular node, you need to create a separate config group for that node. But if you still want to compare, the best way is to download the config for the new node and for an old node from the Ambari UI and compare them using the Linux diff command or any third-party comparison tool/editor.
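A minimal sketch of that comparison from the command line, assuming Ambari on port 8080, a cluster named 'mycluster', and admin credentials (the /api/v1 configurations endpoint is standard Ambari REST; tags identify config versions):
# List the available versions (tags) of a config type
curl -s -u admin:admin 'http://ambari-host:8080/api/v1/clusters/mycluster/configurations?type=core-site'
# Download two versions and diff them
curl -s -u admin:admin 'http://ambari-host:8080/api/v1/clusters/mycluster/configurations?type=core-site&tag=version1' > old.json
curl -s -u admin:admin 'http://ambari-host:8080/api/v1/clusters/mycluster/configurations?type=core-site&tag=version2' > new.json
diff old.json new.json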
11-29-2017
10:21 PM
1 Kudo
@Samant Thakur Please check the Ranger audit first to find out whether the request was blocked by Ranger or not. If it is being blocked, then it must be a Hive policy that is blocking you. Please let me know.
11-29-2017
10:18 PM
@Rajesh K First, move forward by ignoring the error and clicking Next. Once you are at the Ambari dashboard, please add the following values to the HDFS custom core-site (Services -> HDFS -> Config -> Custom core-site) configuration:
hadoop.proxyuser.hcat.groups=*
hadoop.proxyuser.hcat.hosts=*
Save your changes and restart all impacted services. Now rerun the service check; it should work this time. PS: If this solution works for you, please accept my answer as the best answer.
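Once the services are back up, you can confirm the values actually landed in the effective configuration (hdfs getconf is a standard HDFS client command; run it from any cluster node):
hdfs getconf -confKey hadoop.proxyuser.hcat.groups
hdfs getconf -confKey hadoop.proxyuser.hcat.hosts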
10-27-2017
05:14 PM
@Florin Miron Please try to start one service at a time. Start with: Ambari Metrics, then ZooKeeper, then HDFS. If anything fails, get the logs from Ambari itself and let us know.
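If the UI itself is part of the problem, services can also be started one at a time through Ambari's REST API (a sketch, assuming admin credentials, port 8080, and a cluster named 'mycluster'):
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start ZooKeeper"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  'http://ambari-host:8080/api/v1/clusters/mycluster/services/ZOOKEEPER'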
10-27-2017
05:03 PM
Hi @Vijay Gorania There is an error in the Hive table creation command: totmiles must be double instead of bigint. Please run the following statements and then try the Pig script again:
drop table riskfactor;
CREATE TABLE riskfactor (driverid string, events bigint, totmiles double, riskfactor float) STORED AS ORC;
Please check this post for further detail: https://community.hortonworks.com/questions/58614/i-need-help-with-the-riskfactor-pig-script-from-th.html
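If you prefer to run this from the shell, a minimal beeline invocation would look like this (host, port, and database are assumptions for your environment):
beeline -u 'jdbc:hive2://localhost:10000/default' \
  -e 'DROP TABLE IF EXISTS riskfactor;
      CREATE TABLE riskfactor (driverid string, events bigint, totmiles double, riskfactor float) STORED AS ORC;'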
10-09-2017
10:28 PM
@Pradheep Shan Both are independent products. You can use Cloudbreak without subscribing to the Hortonworks Data Cloud AWS Marketplace product. Cloudbreak: https://hortonworks.com/open-source/cloudbreak/#section_1
10-09-2017
10:13 PM
@Lakhman Pervatoju The simple answer is yes. But if you are using LLAP (in-memory processing), you may not see a new job in the ResourceManager because it reuses the already-running LLAP daemons, which stay resident in memory. If you are using Ambari View 2, it runs a job the first time; on subsequent runs it gets the data from its cache.
10-09-2017
08:07 PM
@kotesh banoth The problem is the single quotes you are using in the command. You have a pair of single quotes inside a single-quoted string. The first single quote, which starts at 'select, actually ends at DateViewed >=' You need to escape the inner pair of single quotes, as in this example: https://stackoverflow.com/questions/8254120/how-to-escape-a-single-quote-in-single-quote-string-in-bash
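A minimal sketch of the escaping pattern (the '\'' sequence closes the single-quoted string, appends an escaped literal quote, and reopens it; the query below is a simplified stand-in for yours):
hive -e 'select * from mytable where DateViewed >= '\''2017-01-01'\'''
Alternatively, wrap the whole statement in double quotes so the inner single quotes survive untouched:
hive -e "select * from mytable where DateViewed >= '2017-01-01'"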
10-09-2017
07:58 PM
@eric valoschin The solution in the above link is not storing the output on the local FS. It is streaming the output from HDFS to HDFS. Quoting the linked answer:
"A command line scriptlet to do this could be as follows:
hadoop fs -text *_fileName.txt | hadoop fs -put - targetFilename.txt
This will cat all files that match the glob to standard output, then you'll pipe that stream to the put command and output the stream to an HDFS file named targetFilename.txt"
10-09-2017
07:53 PM
@eric valoschin how about this: https://stackoverflow.com/questions/14831117/merging-hdfs-files
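In short, the usual tool for this is hadoop fs -getmerge, which concatenates the files under an HDFS directory into a single local file (a sketch; the paths are assumptions):
hadoop fs -getmerge /user/eric/input_dir merged_output.txt
# optionally push the merged file back to HDFS
hadoop fs -put merged_output.txt /user/eric/merged_output.txt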
10-09-2017
07:49 PM
@Pradheep Shan
1. Is it any different creating a cluster through Cloudbreak than using the cloud controller UI?
Manish: What is the cloud controller UI? Are you talking about the AWS UI? If yes, the AWS UI will provide you only raw OS images or customized Marketplace images, but you need to install/configure the cluster yourself. Cloudbreak creates/configures an HDP cluster with default settings for you in a few clicks.
2. Can a long-running cluster be created using Cloudbreak with HA?
Manish: Yes.
3. Can the cluster be created with master and worker, unlike (master, slave and compute)?
Manish: Yes, you can have different sets of machines for master and worker nodes. But make sure they have the same OS.
4. Do we need cloud subscriptions to create a cluster with Cloudbreak?
Manish: You must have an AWS account to use Cloudbreak.
10-09-2017
07:41 PM
@Prasant Soni Please see this: https://stackoverflow.com/questions/10624511/upgrade-python-without-breaking-yum
10-09-2017
06:57 PM
@Prasant Soni It seems the OS is dependent on the default Python version, or there is some dependency on the previous version that stops working when you upgrade.
10-09-2017
05:23 PM
@Prasant Soni If you are using the Hortonworks Sandbox on VirtualBox, you should not run yum update (to update all installed packages). The VM is based on Docker and works with specific packages only; once you run yum update, it breaks its integrity. It is better to update individual packages on an as-needed basis, as sketched below.
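A minimal sketch of the per-package approach (curl is just an example package name):
# see what an update would change, without applying it
yum check-update curl
# update only that one package
yum update curl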
10-09-2017
05:15 PM
@kotesh banoth Can you run the SELECT statement on its own, using the same command but with the INSERT statement removed, and check whether it returns any data? It is possible that the SELECT statement is not returning any data.
10-09-2017
04:21 PM
@Lou Richard Can you share the Ambari log after starting Atlas Metadata server? It must have some information about the failure.
10-09-2017
04:19 PM
@kotesh banoth What is the error?
09-06-2017
04:36 PM
Where are the steps to configure R-server for Kerberos? Does it not need any Kerberos-related settings? No kinit etc.? I noticed that the connect string has Kerberos-related variables, though. Is that really good enough? Please confirm.
rhive.connect(host="node1.hortonworks.com:10000/default;principal=hive/node1.hortonworks.com@HDP.COM;AuthMech=1;KrbHostFQDN=service.hortonworks.com;KrbServiceName=hive;KrbRealm=HDP.COM", defaultFS="hdfs://node1.hortonworks.com/rhive", hiveServer2=TRUE, updateJar=FALSE)
09-05-2017
10:19 PM
Interesting! These steps are talking about:
- Download R server
- Use R to install Rserver
- Download the rhive library
- Set the appropriate paths for HIVE_HOME and HADOOP_HOME
- Change directory to RHive
- Use ant to build
- Use R to build RHive; this will build a file within the same directory
Where are the steps to configure R-server for Kerberos? Does it not need any Kerberos-related settings? No kinit etc.? I noticed that the connect string has Kerberos-related variables, though. Is that really good enough?
rhive.connect(host="node1.hortonworks.com:10000/default;principal=hive/node1.hortonworks.com@HDP.COM;AuthMech=1;KrbHostFQDN=service.hortonworks.com;KrbServiceName=hive;KrbRealm=HDP.COM", defaultFS="hdfs://node1.hortonworks.com/rhive", hiveServer2=TRUE, updateJar=FALSE)
09-05-2017
04:00 PM
Can we install and manage R in a Kerberized cluster? Do the R libraries support Kerberos? Is any documentation available?
Labels:
- Apache Ambari
07-28-2017
05:37 PM
@Avinash Reddy I assume you are using /data for HDP data files. If you are using a cloud environment, please follow the provider's guidelines on adding additional disk space to an existing volume. You can also add additional volumes to your machine and mount them at directories such as /data1, /data2, etc. Then you can add them to HDP through Ambari using the following steps:
1. Log in to Ambari
2. Click on HDFS -> Settings
3. Search for the "dir" keyword
4. Add the new volumes under "NameNode data dir" and "DataNode data dir" (see the sketch below)
5. Save your changes and restart all impacted services.
NOTE: As usual, if you think this answer satisfies your need, please accept it as the best answer.
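Under the hood, step 4 extends the comma-separated dfs.datanode.data.dir HDFS property (the paths below are assumptions for illustration; the getconf call just verifies the effective value):
dfs.datanode.data.dir=/data/hadoop/hdfs/data,/data1/hadoop/hdfs/data,/data2/hadoop/hdfs/data
hdfs getconf -confKey dfs.datanode.data.dir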
07-27-2017
04:05 PM
Is there any easy way to replace the /tmp folder across all services during HDP installation? HDP uses the /tmp folder heavily. Is there any way to reallocate tmp data for all the services to a new folder?
- Tags:
- Ambari
Labels:
- Apache Ambari
07-07-2017
05:21 PM
@Jorge Luis Hernandez Olmos I'm happy it worked for you. Sometimes this happens, and I always prefer to take care of MySQL from the CLI first. As always, if you find this post helpful, don't forget to accept the answer.
07-06-2017
08:16 PM
@Jorge Luis Hernandez Olmos Please check the log file (usually at /var/log/mysql/) and look for the actual error. Also check whether any stale instance of mysqld is already running. To check this, please execute the following command:
ps -ef | grep mysql
If you find a running instance, try to stop mysqld gracefully; if that does not work, kill the daemon. Then try again to start MySQL from Ambari.
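A minimal sketch of the stop-then-kill sequence (assuming a systemd-based host; on older init systems use 'service mysqld stop' instead, and take the PID from the ps output above):
systemctl stop mysqld      # graceful stop
kill <PID>                 # SIGTERM, if the service script hangs
kill -9 <PID>              # last resort: force kill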
07-06-2017
02:11 PM
@Smart Solutions No changes would be required for ZooKeeper. Unfortunately, HiveServer2 load balancing is not currently available in HDP; hopefully it will come in a future release. As always, if you find this post helpful, don't forget to accept the answer.
07-05-2017
04:57 PM
1 Kudo
@pavan p As gnovak mentioned, you can track long-running jobs from the ResourceManager UI. Here are the steps:
1. Log in to Ambari
2. Click on YARN (under Services)
3. Click on Quick Links
4. Click on ResourceManager UI
5. By default, you will see a list of all submitted jobs
6. Click on "Jobs -> Running" in the left-hand menu; it will show all currently running jobs
7. Sort by StartTime
A command-line alternative is sketched below. Please let me know if it works for you.
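The same list is available from the YARN CLI (yarn application -list with -appStates is a standard command; it prints all currently running applications):
yarn application -list -appStates RUNNING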