Member since: 05-05-2016
Posts: 147
Kudos Received: 223
Solutions: 18
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3600 | 12-28-2018 08:05 AM
 | 3549 | 07-29-2016 08:01 AM
 | 2905 | 07-29-2016 07:45 AM
 | 6762 | 07-26-2016 11:25 AM
 | 1339 | 07-18-2016 06:29 AM
12-28-2018
08:14 AM
Hey @Mukesh Kumar, so I assume the issue was that the Zeppelin service was not running. I thought I asked you about that in my last comment 🙂 Glad it's resolved.
07-08-2017
06:25 AM
It worked for me to change the database to MySQL.
1. Create a user and a database named 'superset' using the MySQL shell:
GRANT ALL PRIVILEGES ON *.* TO 'superset'@'%' IDENTIFIED BY '*****' WITH GRANT OPTION;
CREATE DATABASE superset;
2. Change the config for Druid: set MySQL's hostname, port, user, password, etc.
3. Start Druid, and it worked!
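For step 2, Druid's metadata storage settings live in its common runtime properties. A rough sketch of what that fragment looks like (the hostname, database name, and password here are placeholders; check the docs for your Druid version):

```properties
# Point Druid's metadata storage at MySQL (placeholder host/credentials)
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://mysql-host:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=*****
```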
05-03-2017
05:47 PM
Hello, I hit the same error, and ambari-server.log shows: "ERROR [ambari-client-thread-25] HostImpl:1374 - Config inconsistency exists: unknown configType"
04-11-2017
02:34 PM
@Mukesh Kumar - Log in to your Ambari DB. For example, with Postgres:
# psql -U ambari ambari
Password for user ambari: bigdata
- Then run the following queries to completely remove the ZooKeeper service from the DB:
delete from hostcomponentstate where service_name = 'ZOOKEEPER';
delete from hostcomponentdesiredstate where service_name = 'ZOOKEEPER';
delete from servicecomponentdesiredstate where service_name = 'ZOOKEEPER';
delete from servicedesiredstate where service_name = 'ZOOKEEPER';
delete from alert_current where history_id in (select alert_id from alert_history where service_name = 'ZOOKEEPER');
delete from alert_notice where history_id in (select alert_id from alert_history where service_name = 'ZOOKEEPER');
delete from alert_history where service_name = 'ZOOKEEPER';
delete from alert_grouping where definition_id in (select definition_id from alert_definition where service_name = 'ZOOKEEPER');
delete from alert_history where alert_definition_id in (select definition_id from alert_definition where service_name = 'ZOOKEEPER');
delete from alert_current where definition_id in (select definition_id from alert_definition where service_name = 'ZOOKEEPER');
delete from alert_definition where service_name = 'ZOOKEEPER';
delete from alert_group_target where group_id in ( select group_id from alert_group where service_name = 'ZOOKEEPER');
delete from alert_group where service_name = 'ZOOKEEPER';
delete from serviceconfighosts where service_config_id in (select service_config_id from serviceconfig where service_name = 'ZOOKEEPER');
delete from serviceconfigmapping where service_config_id in (select service_config_id from serviceconfig where service_name = 'ZOOKEEPER');
delete from serviceconfig where service_name = 'ZOOKEEPER';
delete from requestresourcefilter where service_name = 'ZOOKEEPER';
delete from requestoperationlevel where service_name = 'ZOOKEEPER';
delete from clusterservices where service_name ='ZOOKEEPER';
delete from clusterconfig where type_name like 'zookeeper%';
delete from clusterconfigmapping where type_name like 'zookeeper%';
- Now restart the Ambari server: # ambari-server restart
Additionally, the 500 error for the API call "POST /api/v1/stacks/HDP/versions/2.5/recommendations" indicates a DB inconsistency. So if it is a fresh cluster, or if you do not want to keep any installed services and want to return to the initial state of the Ambari installation, it is better to perform an Ambari reset instead of deleting services from the Ambari UI (as we can see, you only have ZooKeeper installed): # ambari-server reset
*NOTE:* It is always recommended to take a DB dump as a backup before making any manual DB changes.
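For the backup mentioned in the note, pg_dump is the usual tool on Postgres. A dry-run sketch that only prints the commands (the backup file name is hypothetical, and the default ambari/ambari user and database are assumed):

```shell
# Sketch: print (not execute) the backup and restore commands for the
# Ambari Postgres DB before manual edits. Adjust user/db to your setup.
DB_USER=ambari
DB_NAME=ambari
BACKUP_FILE="ambari_backup_$(date +%F).sql"

# Printed rather than executed so the sketch is safe to run anywhere.
echo "pg_dump -U $DB_USER $DB_NAME > $BACKUP_FILE"
echo "psql -U $DB_USER $DB_NAME < $BACKUP_FILE"
```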
11-14-2016
07:31 AM
Do you have any idea regarding this GPU database planner? Basically, in CPU database operations, the planner checks different scan and join methods, finds the cheapest one, and creates a query plan tree. When doing the same thing on a GPU, the planner should also check whether each path is device executable, and the query plan tree from the CPU has to be updated accordingly. I just want to understand this at a high level, out of curiosity about the planning factors on a GPU. There can be more than one appropriate path in a query plan tree. How is the decision for a particular path made, considering those planning factors?
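At a high level, the selection you describe can be sketched as: enumerate candidate paths with their cost estimates, filter out paths that are not device executable, then pick the cheapest. A toy sketch, not any real GPU database's planner (the names Path, device_executable, and cheapest_path are all illustrative):

```python
# Toy sketch of cost-based path selection for a GPU-aware planner.
# All names here are illustrative, not taken from any real system.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    cost: float              # estimated execution cost
    device_executable: bool  # can this path run on the GPU?

def cheapest_path(paths):
    """Among GPU-executable candidates, return the cheapest; fall back
    to the cheapest path overall if none can run on the device."""
    gpu_ok = [p for p in paths if p.device_executable]
    candidates = gpu_ok if gpu_ok else paths
    return min(candidates, key=lambda p: p.cost)

paths = [
    Path("seq_scan", 100.0, True),
    Path("index_scan", 40.0, False),   # cheap, but not device executable
    Path("hash_join_gpu", 60.0, True),
]
print(cheapest_path(paths).name)  # prints hash_join_gpu
```

The interesting design question is exactly the one you raise: device executability acts as a hard filter here, but a real planner would also fold data-transfer cost between host and device into the cost estimate itself.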
08-26-2016
06:43 AM
Thanks @lgeorge for your response. I have tried with and without the new consumer and get the same error message.
08-02-2016
04:10 AM
Yes, @Mukesh Kumar provided that. Thank you for your help.
07-27-2016
01:17 PM
Thanks for the update @Mukesh Kumar. Is it worth doing a one-to-one write, or do you want to explore the BulkLoad option in Cassandra?
07-22-2016
11:05 PM
3 Kudos
@Mukesh Kumar it would have been easier to provide a response if you were more specific about the infrastructure you have available (your laptop/desktop, a bunch of servers, or an AWS/Azure account), whether a single node is enough (you could use the HDP sandbox: http://hortonworks.com/products/sandbox/#tutorial_gallery?utm_source=google&utm_medium=cpc&utm_campaign=Hortonworks_Sandbox_Search ) or you want a cluster... Also, what version of Hadoop you want to evaluate, etc. OS images have slightly different configurations. Anyhow, if it is only for play, use the HDP sandbox, or if you want to use your laptop/desktop to simulate the cluster experience, consider using Vagrant. Check my article: https://community.hortonworks.com/articles/39156/setup-hortonworks-data-platform-using-vagrant-virt.html#comment-43820 This is a good starting point for testing the components. As you can see from the steps of the demo (read the VagrantFile), the CentOS 6.7 image is pulled from a public repository. Other versions are available. If this response led you in the right direction, please vote and accept it.