Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2724 | 12-25-2018 10:42 PM |
| | 12394 | 10-09-2018 03:52 AM |
| | 4257 | 02-23-2018 11:46 PM |
| | 1964 | 09-02-2017 01:49 AM |
| | 2286 | 06-21-2017 12:06 AM |
04-19-2020
04:41 PM
Hello Sir, I got the output below, but I am not getting any data. Do you know why?
hive> select * from BOOKDATA;
OK
Hadoop Defnitive Guide 24.9
Programming Pig 30.9
Time taken: 0.081 seconds, Fetched: 2 row(s)
02-29-2020
06:47 PM
Hi @Rajesh07530,
As this thread is older and was marked 'Solved' in 2016, you would have a better chance of receiving a resolution by starting a new thread. This will also give you the opportunity to provide details specific to your environment, which could help others give a more accurate answer to your question.
01-06-2020
06:55 AM
Is there a way to set up a Kafka container without Ambari and then add the Kafka broker to Ambari?
12-20-2019
07:16 AM
https://github.com/apache/oozie/blob/9c288fe5cea6f2fbbae76f720b9e215acdd07709/webapp/src/main/webapp/oozie-console.js#L384
11-11-2019
12:00 PM
What is the menu option in Ambari where I can check this info?
... View more
12-21-2017
08:53 AM
Also, I've tried spark-llap on HDP-2.6.2.0 with Spark 1.6.3 and http://repo.hortonworks.com/content/repositories/releases/com/hortonworks/spark-llap/1.0.0.2.5.5.5-2/spark-llap-1.0.0.2.5.5.5-2-assembly.jar, but unfortunately, when I tried to execute a simple "select count" query in beeline, I got the following error messages:
0: jdbc:hive2://node-05:10015/default> select count(*) from ods_order.cc_customer;
Error: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
TungstenAggregate(key=[], functions=[(count(1),mode=Final,isDistinct=false)], output=[_c0#56L])
+- TungstenExchange SinglePartition, None
+- TungstenAggregate(key=[], functions=[(count(1),mode=Partial,isDistinct=false)], output=[count#59L])
+- Scan LlapRelation(org.apache.spark.sql.hive.llap.LlapContext@690c5838,Map(table -> ods_order.cc_customer, url -> jdbc:hive2://node-01.hdp.wiseda.com.cn:10500))[] (state=,code=0)
The log messages from the Thrift server are in the attached "thriftserver-err-msg.txt".
05-31-2019
08:38 AM
Hello, my environment is HDP 3.0, Spark 2.3.1 (Scala 2.11), Hive 3.0, with Kerberos enabled. I followed the steps mentioned above and connected to the Spark Thrift Server to execute the SQL: explain select * from tb1. The resulting physical plan shows HiveTableScan / HiveTableRelation with org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe instead of LlapRelation, so it seems that LLAP does not work. P.S. I use the package spark-llap_2-11-1.0.2.1-assembly.jar.
04-27-2018
04:48 AM
Right. I've been having the same issue, so I just restored a previous Ubuntu snapshot and reinstalled Hive 2.3.3 on HDP 3.0.1. Fingers crossed that this corrects what was wrong. Thanks.
04-01-2017
12:35 AM
3 Kudos
Cloudbreak is a popular, easy-to-use HDP component for cluster deployment on various cloud environments, including Azure, AWS, OpenStack, and GCP. This article shows how to create an Azure application for Cloudbreak using the Azure CLI. Note: to do this, you need access to an "Owner" account on your Azure subscription; "Developer" and other roles are not enough.
Download and install the Azure CLI using the instructions provided here: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli. CLI versions are available for Windows, macOS, and Linux.
Type "az" to make sure the CLI is available and in your command path. Log in to your Azure account in your web browser, and then also log in from the command line:
az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code HPBCSXTPJ to authenticate.
Follow the instructions on the web page. When done, you will see confirmation on the command line that your login was successful. Then run the following command. You can freely choose the values to enter here, including dummy URIs: the identifier URI and the homepage are never used on Azure, but they are required. Also make sure that the identifier URI is unique on your subscription, so instead of "mycbdapp" you may want to choose a more descriptive name.
# URIs are dummy, never used, but required
az ad app create --identifier-uris http://mycbdapp.com --display-name mycbdapp --homepage http://mycbdapp.com
Ignore the output of this command, including its appId; that's not the one we need! Choose your password, and run the following command:
az ad sp create-for-rbac --name "mycbdapp" --password "mytopsecretpassword" --role Owner
{
"appId": "c19a48f3-492f-a87b-ac4a-b1d8e456f14e",
"displayName": "mycbdapp",
"name": "http://mycbdapp",
"password": "mytopsecretpassword",
"tenant": "891fd956-21c9-4c40-bfa7-ab88c1d8364c"
}
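The values Cloudbreak asks for (appId, password, tenant) can be pulled out of this JSON automatically instead of copying them by hand. A minimal sketch, assuming the service-principal output was saved to a file named sp-output.json (a hypothetical name chosen for this example) and that python3 is on the path:

```shell
# Save the sample service-principal JSON shown above, then extract the
# fields the Cloudbreak credential form asks for.
# "sp-output.json" is a hypothetical file name used only for this sketch.
cat > sp-output.json <<'EOF'
{
  "appId": "c19a48f3-492f-a87b-ac4a-b1d8e456f14e",
  "displayName": "mycbdapp",
  "name": "http://mycbdapp",
  "password": "mytopsecretpassword",
  "tenant": "891fd956-21c9-4c40-bfa7-ab88c1d8364c"
}
EOF

APP_ID=$(python3 -c "import json; print(json.load(open('sp-output.json'))['appId'])")
TENANT=$(python3 -c "import json; print(json.load(open('sp-output.json'))['tenant'])")
echo "App ID:    $APP_ID"
echo "Tenant ID: $TENANT"
```

In a real run you would redirect the output of the az ad sp create-for-rbac command into the file instead of using the sample heredoc.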
Now log in to your Cloudbreak instance, select "manage credentials", then "+ create credential", and on the "Configure credential" page select Azure and fill in the form as shown on the screenshot. Use the appId, password, and tenant ID from the output above. Add your Azure subscription ID, and paste the public key of the SSH key pair you created before (this will be used to provide SSH access to the cluster machines for the "cloudbreak" user). Then proceed by providing the other settings, and enjoy HDP on Cloudbreak!
04-23-2018
06:09 PM
1 Kudo
Hi @Andrey Ne The following solution worked for me. I added these two properties on my customized %spark2py3 interpreter:
PYSPARK_DRIVER_PYTHON = /usr/local/anaconda3/bin/python3
PYSPARK_PYTHON = /usr/local/anaconda3/bin/python3
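For context, those two interpreter properties correspond to the environment variables PySpark reads to choose its Python binaries for the driver and the executors. A minimal shell sketch of the equivalent setup, assuming the Anaconda path from the post above:

```shell
# Point both the driver and the executors at the same Python 3 binary.
# The anaconda3 path is taken from the post above; adjust it to your install.
export PYSPARK_DRIVER_PYTHON=/usr/local/anaconda3/bin/python3
export PYSPARK_PYTHON=/usr/local/anaconda3/bin/python3
echo "driver python:   $PYSPARK_DRIVER_PYTHON"
echo "executor python: $PYSPARK_PYTHON"
```

Keeping both variables pointed at the same interpreter avoids driver/executor Python version mismatches.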