Member since: 04-03-2016
Posts: 32
Kudos Received: 3
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5183 | 08-04-2017 07:22 PM
 | 1247 | 09-08-2016 07:45 PM
 | 2165 | 07-18-2016 09:51 PM
11-18-2018
03:23 AM
Try PyHive: https://github.com/dropbox/PyHive
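A minimal usage sketch, assuming a reachable HiveServer2 (the host, port, and username below are placeholders, not values from this thread):

pip install 'pyhive[hive]'
python <<'EOF'
from pyhive import hive
# Placeholder connection details; adjust to your HiveServer2.
conn = hive.connect(host='hs2.example.com', port=10000, username='hive')
cur = conn.cursor()
cur.execute('SHOW TABLES')   # any read-only statement works as a smoke test
print(cur.fetchall())
EOF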
03-27-2018
08:24 PM
What is the output of df -h /mountpoint? Hadoop doesn't work with raw disks. Remount the filesystem and the DataNode service should start for you.
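A hedged sketch of that check (the mount point is an example; the DataNode can also be restarted from Ambari):

df -h /grid/0    # confirm the data disk is actually mounted and has space
mount /grid/0    # remount per /etc/fstab if it is missing
# HDP-style manual restart; verify the path against your install:
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"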
03-13-2018
10:29 PM
Microsoft does not have its own big data product, so it teamed up with Hortonworks to provide the Hadoop framework in Azure as HDInsight.
10-31-2017
07:30 PM
https://community.hortonworks.com/questions/26199/hiveserver2-ssl-with-kerberos-authentication.html
08-18-2017
04:14 PM
We are using HDP 2.5.3 and Sqoop version 1.4.6.2.5.3.0-37. Has anyone used a custom line (record) delimiter other than the default newline '\n'? The Sqoop user guide (https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html) only says that --lines-terminated-by <char> sets the end-of-line character, with no further details. We have a requirement to use the octal character 016 as the record delimiter.
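A hedged sketch of the intended invocation (the JDBC URL, table, and target directory are placeholders; the \0ooo octal escape form is the one documented in the Sqoop user guide):

sqoop import \
  --connect jdbc:mysql://dbhost.example.com/db1 \
  --table table1 \
  --target-dir /data/table1 \
  --fields-terminated-by ',' \
  --lines-terminated-by '\0016'    # octal 016 (ASCII SO) as the record delimiter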
Tags:
- Sqoop
Labels:
- Apache Sqoop
08-04-2017
07:32 PM
[Attachment: hwx-odbc-options.jpg]
08-04-2017
07:22 PM
The issue is fixed after enabling the Use Native Query option under Advanced Options. The HWX ODBC driver version where we hit the issue is v2.1.5.1006 (64-bit). The ODBC driver logs showed that, without the native query option, the driver tries to describe the table ("desc db1.tabl1"). Because some columns of that table are restricted for end users, the describe query fails. With Use Native Query enabled, the driver did not try to describe the table, so end-user queries on permitted columns work well. Feel free to comment on this explanation and correct it.
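A hedged note, not taken from this thread: Simba-based Hive ODBC drivers typically also accept this setting as a connection-string key, along the lines of the sketch below (driver name and host are examples; confirm the exact key in your driver's install guide):

Driver=Hortonworks Hive ODBC Driver;Host=hs2.example.com;Port=10000;UseNativeQuery=1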
08-01-2017
06:54 PM
SELECT table1.col1, table1.col2 FROM db1.table1;
The import process encountered an unexpected error: ERROR [42S02] [ACL][SQLEngine] (31740) Table or view not found: HIVE.db1.table1. Command Failed.
Access to the table was restricted through Ranger by restricting some of the sensitive columns. The columns selected above are not part of the Ranger policy and are accessible through the Ambari Hive view and Beeline. The error appears only when accessing the table through the ODBC driver.
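A hedged way to reproduce the contrast described above from the command line (the JDBC URL is a placeholder):

beeline -u "jdbc:hive2://hs2.example.com:10000/default" \
  -e "SELECT table1.col1, table1.col2 FROM db1.table1"    # permitted columns: succeeds
beeline -u "jdbc:hive2://hs2.example.com:10000/default" \
  -e "DESCRIBE db1.table1"                                # may fail when some columns are Ranger-restricted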
Labels:
- Apache Ranger
06-02-2017
09:23 PM
@Sami Ahmad If you can assign nodes to the various racks from the Ambari console, that's more than enough.
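If you need to script it, a hedged sketch using the Ambari REST API (host and rack names are examples):

curl -u admin:$PASSWORD -H "X-Requested-By: ambari" -X PUT \
  -d '{"Hosts":{"rack_info":"/rack1"}}' \
  http://$AMBARI_SERVER_HOST:8080/api/v1/clusters/$CLUSTER_NAME/hosts/node1.example.com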
04-28-2017
04:14 PM
Hi @William Gonzalez, have some mercy and show some courtesy.
04-25-2017
08:49 PM
@William Gonzalez Sent an email from raju.konduru@rcggs.com.
04-25-2017
06:33 PM
@William Gonzalez Is there any phone number where I can speak to someone? It's been a month with no update, and Certification Support has not bothered to respond to my emails. A free exam does not deserve this kind of treatment.
04-11-2017
07:24 PM
@William Gonzalez Still waiting for my results. Can anyone help?
04-07-2017
04:20 AM
@William Gonzalez I still haven't heard from the certification team about my results. Any help?
03-31-2017
03:58 PM
@William Gonzalez Thank you William. And thanks to HWX for the free campaign.
03-31-2017
03:42 PM
HDP Certified Developer: Spark exam - no result update received from HWX. I contacted examslocal, and they asked me to contact certification@hortonworks.com. I sent a couple of emails and got no response. Is there any phone number I can reach?
Tags:
- Hadoop Core
- hdpcd
03-27-2017
08:04 PM
@samarth srivastava I created custom scripts using the service-based approach below to stop the components on all hosts, one service at a time. It does not fit your requirement exactly, but it may still be helpful to know.
STEP 1: Get the components belonging to a service (HBASE as the example):
curl -s -u admin:$PASSWORD -H "X-Requested-By: ambari" -X GET http://$AMBARI_SERVER_HOST:8080/api/v1/clusters/$CLUSTER_NAME/services/HBASE | grep component_name
"component_name" : "HBASE_CLIENT",
"component_name" : "HBASE_MASTER",
"component_name" : "HBASE_REGIONSERVER",
"component_name" : "PHOENIX_QUERY_SERVER",
STEP 2: Stop those components (on all hosts):
for COMP in HBASE_CLIENT HBASE_MASTER HBASE_REGIONSERVER PHOENIX_QUERY_SERVER; do
  echo $COMP
  curl -u admin:$PASSWORD -H "X-Requested-By: ambari" -X PUT \
    -d '{"RequestInfo":{"context":"Stop All Components"},"Body":{"ServiceComponentInfo":{"state":"INSTALLED"}}}' \
    http://$AMBARI_SERVER_HOST:8080/api/v1/clusters/$CLUSTER_NAME/services/HBASE/components/$COMP
done
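A hedged follow-up, not part of the original scripts: the stop is asynchronous, so the component state can be polled until it reports INSTALLED:

curl -s -u admin:$PASSWORD -H "X-Requested-By: ambari" -X GET \
  "http://$AMBARI_SERVER_HOST:8080/api/v1/clusters/$CLUSTER_NAME/services/HBASE/components/HBASE_MASTER?fields=ServiceComponentInfo/state"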
10-24-2016
03:06 PM
I am fully aware of user limits. My question was whether there is an option to turn them on or off. As I understand it, there is no way to disable user limits; the only option is to tune them. If anyone disagrees with this statement, please let me know.
10-20-2016
09:43 PM
@Maxim Panteleev Do we have an option to set default mappings to these queues, similar to Capacity Scheduler queue mappings?
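For reference, a hedged sketch of the Capacity Scheduler mapping syntax being compared against (user, group, and queue names are examples):

yarn.scheduler.capacity.queue-mappings=u:alice:analytics,g:etl-users:etl
yarn.scheduler.capacity.queue-mappings-override.enable=false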
10-18-2016
08:30 PM
1 Kudo
When we create a queue, it gets the default value below. Is there any way to disable this feature, so that individual users can use the maximum cluster capacity when it is available?
yarn.scheduler.capacity.root.default.user-limit-factor=1
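A hedged sketch of the usual workaround (property names as in the Capacity Scheduler docs; the factor value is an example): the limit cannot be disabled outright, but raising user-limit-factor lets a single user go beyond the queue's configured capacity, up to its maximum capacity:

yarn.scheduler.capacity.root.default.user-limit-factor=10
yarn.scheduler.capacity.root.default.maximum-capacity=100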
Labels:
- Apache YARN
09-27-2016
06:31 PM
Thanks @Enis https://hortonworks.secure.force.com/articles/en_US/Issue/Hbase-replication-command-to-add-peer-fails-with-ERROR-KeeperErrorCode-NoAuth-for-hbase?caseId=5004400000ecuin&isCaseCreation=1&popup=true
09-27-2016
06:12 PM
hbase(main):002:0> add_peer '1', "zknode1,zknode2,zknode3:2181/hbase-secure"
ERROR: KeeperErrorCode = NoAuth for /hbase-secure/replication/peers
Here is some help for this command: a peer can either be another HBase cluster or a custom replication endpoint. In either case an id must be specified to identify the peer.
Background: HBase (Dev cluster) with service id dev-hbase --> HBase (QA cluster) with service id qa-hbase.
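A hedged sketch of the usual fix on a kerberized cluster (keytab path and principal are typical HDP defaults, not values from this thread): the replication znodes are owned by the hbase service principal, so authenticate as it before running add_peer:

kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-dev@EXAMPLE.COM   # principal varies per cluster
hbase shell
hbase(main):001:0> add_peer '1', "zknode1,zknode2,zknode3:2181/hbase-secure"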
Labels:
- Apache HBase
09-08-2016
07:49 PM
Correction @Luis Size: the Hortonworks link for the YARN queues dashboard is https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Ambari_Users_Guide/content/_grafana_yarn_queues.html
09-08-2016
07:45 PM
1 Kudo
Hi Luis, if you are using Ambari 2.2.2 or above, it ships with the Grafana service. Grafana has the option to create dashboards using AMS (Ambari Metrics): https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Ambari_Users_Guide/content/_grafana_yarn_NodeManagers.html I also found this GitHub project, though I have yet to try it: https://github.com/prajwalrao/ambari-metrics-grafana/blob/master/README.md Note: Don't forget to vote.
07-18-2016
09:51 PM
The issue is resolved after deleting the ZooKeeper service and reinstalling it. The service had been unable to start due to improper configuration.
07-18-2016
03:56 AM
Nope, it's not a timeout issue; from my error it looks like a library issue, I guess. Anyway, here is the start output:
# rm /var/run/zookeeper/zookeeper_server.pid
rm: remove regular file `/var/run/zookeeper/zookeeper_server.pid'? y
# /usr/hdp/2.3.2.0-2950/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/hdp/2.3.2.0-2950/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# cat /var/run/zookeeper/zookeeper_server.pid
8760
# ps -ef | grep 8760
root 8876 27093 0 23:53 pts/9 00:00:00 grep 8760
# /usr/hdp/2.3.2.0-2950/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/hdp/2.3.2.0-2950/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
# cat /var/log/zookeeper/zookeeper.out
Error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain
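A hedged diagnostic sketch, not from this thread: "Could not find or load main class ... QuorumPeerMain" usually points at a missing or broken ZooKeeper jar or classpath, which can be checked before reinstalling:

ls -l /usr/hdp/2.3.2.0-2950/zookeeper/zookeeper*.jar          # the server jar should exist here
/usr/hdp/2.3.2.0-2950/zookeeper/bin/zkServer.sh print-cmd     # prints the java command and classpath zkServer.sh would use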
07-18-2016
02:41 AM
Unable to start ZooKeeper; getting the following error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain
Labels:
- Hortonworks Data Platform (HDP)
06-03-2016
06:53 PM
We are also migrating to Ambari Views, starting with 50 active users and possibly growing to 100 by year end. We are configuring the Hive, Files, Pig, and Tez views.