Member since: 04-30-2019
Posts: 53
Kudos Received: 5
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 4517 | 01-08-2020 07:09 AM |
04-28-2023 01:26 AM
"none of the solutions from the internet are helping" 🙄. So I asked: which binaries?
12-17-2021 07:19 PM
Dennis, thanks for your kind answer, but I'm not a company; I'm a person trying to use open source software to learn and contribute like anyone else. If Cloudera claims that its software is open source, why is an agreement needed? It shouldn't be; that defeats the purpose completely. I'm not signing any "agreements" — either the code is free or NOT, as simple as that!! If NOT, then just say so. Stop using "open source" as a fashion flag; it only diminishes the real coders who produce code and make it available REALLY FOR FREE!! If Cloudera took the free code, built upon it, and now wants to sell it, OK, I have no problem with that, but DON'T say it is open source, because it isn't if you charge for it. Sorry, but this is so obvious to me... am I blind, dumb, or missing something here?
05-30-2021 01:17 AM
```
[hdfs@c****-node* hive-testbench-hive14]$ ./tpcds-build.sh
Building TPC-DS Data Generator
make: Nothing to be done for `all'.
TPC-DS Data Generator built, you can now use tpcds-setup.sh to generate data.
[hdfs@c4237-node2 hive-testbench-hive14]$ ./tpcds-setup.sh 2
TPC-DS text data generation complete. Loading text data into external tables.
make: *** [time_dim] Error 1
make: *** Waiting for unfinished jobs....
make: *** [date_dim] Error 1
Data loaded into database tpcds_bin_partitioned_orc_2.
INFO : OK
+---------------------+
|    database_name    |
+---------------------+
| default             |
| information_schema  |
| sys                 |
+---------------------+
3 rows selected (1.955 seconds)
0: jdbc:hive2://c4237-node2.coelab.cloudera.c>
```

The tpcds_bin_partitioned_orc_2 database is not created, and I have some issues testing the tpcds queries. The steps I followed:

```
sudo -u hdfs -s
cd /home/hdfs
wget https://github.com/hortonworks/hive-testbench/archive/hive14.zip
unzip hive14.zip
export JAVA_HOME=/usr/jdk64/jdk1.8.0_77
export PATH=$JAVA_HOME/bin:$PATH
./tpcds-build.sh
beeline -i testbench.settings -u "jdbc:hive2://c****-node9.coe***.*****.com:10500/tpcds_bin_partitioned_orc_2"
```

I'm not able to test the tpcds queries; any help would be appreciated.
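The two `make: *** [...] Error 1` lines in the output above are the likely root cause: the time_dim and date_dim table loads failed, so the setup presumably never finished creating tpcds_bin_partitioned_orc_2, which is why it is absent from the database list. A small triage sketch, assuming the setup output is captured to a log; the sample log text below simply mirrors the lines from the run above:

```shell
# Extract the failing make targets (table loads) from captured tpcds-setup output.
# LOG here is sample text copied from the run above; in practice you would
# capture the real output with: ./tpcds-setup.sh 2 2>&1 | tee tpcds-setup.log
LOG='make: *** [time_dim] Error 1
make: *** Waiting for unfinished jobs....
make: *** [date_dim] Error 1'

# Each failing target names a table whose load step should be investigated
# (e.g. in the HiveServer2 logs) to find the underlying Hive error.
FAILED=$(printf '%s\n' "$LOG" | sed -n 's/^make: \*\*\* \[\(.*\)\] Error 1$/\1/p')
echo "$FAILED"
```

For the run above this lists `time_dim` and `date_dim` as the loads to investigate.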
08-24-2020 07:07 AM
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/ambari-release-notes/content/known_issues.html
01-17-2020 01:34 AM
The LDAP config was missing the OU info; LDAP auth works fine now.
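For reference, the fix amounts to including the OU in the LDAP search base. A hedged sketch of the relevant `ambari.properties` entries (the host, DNs, and attribute values are placeholders for an assumed directory layout; these are normally set via `ambari-server setup-ldap` rather than edited by hand):

```
authentication.ldap.primaryUrl=ldap.example.com:389
authentication.ldap.baseDn=ou=people,dc=example,dc=com
authentication.ldap.usernameAttribute=uid
authentication.ldap.userObjectClass=person
```

If the `baseDn` omits the OU that actually contains the user entries, the user search finds nothing and authentication fails even with correct credentials.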
07-06-2019 08:59 PM
The above question and the entire response thread below were originally posted in the Community Help track. On Wed Jun 26 21:14 UTC 2019, a member of the HCC moderation staff moved it to the Cloud & Operations track. The Community Help track is intended for questions about using the HCC site itself, not technical questions about using Ambari or upgrading HDP.
02-05-2019 12:21 PM
1 Kudo
STOP command:

```shell
curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT \
  http://172.26.78.29:8080/api/v1/clusters/Mycluster/hosts/prakash-ambariagent-node3/host_components/FLUME_HANDLER \
  -d '{"RequestInfo":{"context":"Stop Flume","operation_level":{"level":"HOST_COMPONENT","cluster_name":"Mycluster","host_name":"prakash-ambariagent-node3","service_name":"FLUME"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}'
```

START command:

```shell
curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT \
  http://172.26.78.29:8080/api/v1/clusters/Mycluster/hosts/prakash-ambariagent-node3/host_components/FLUME_HANDLER \
  -d '{"RequestInfo":{"context":"Start Flume","operation_level":{"level":"HOST_COMPONENT","cluster_name":"Mycluster","host_name":"prakash-ambariagent-node3","service_name":"FLUME"}},"Body":{"HostRoles":{"state":"STARTED"}}}'
```

RESTART command:

```shell
curl --insecure -v -u admin:admin -H "X-Requested-By:ambari" -i -X POST \
  http://prakash-ambariserver-node1:8080/api/v1/clusters/Mycluster/requests \
  -d '{"RequestInfo":{"command":"RESTART","context":"Restart all components for Flume","operation_level":{"level":"SERVICE","cluster_name":"Mycluster","service_name":"FLUME"}},"Requests/resource_filters":[{"service_name":"FLUME","component_name":"FLUME_HANDLER","hosts":"prakash-ambariagent-node3"}]}'
```

Note: replace the Ambari server host and cluster name with your own cluster details before running the curl commands.
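These state-change calls are asynchronous: Ambari accepts them and returns a request resource that can be polled for progress under the same API root. A minimal sketch of building the poll URL (the host and cluster mirror the commands above, but the request id `42` is a placeholder — use the id from the response to your own PUT/POST):

```shell
# Assumed/placeholder values; substitute your Ambari host, cluster name, and
# the request id returned in the response body of the PUT/POST above.
AMBARI_HOST="172.26.78.29"
CLUSTER="Mycluster"
REQUEST_ID="42"

# Poll this resource to watch the stop/start/restart operation complete.
STATUS_URL="http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}/requests/${REQUEST_ID}"
echo "GET ${STATUS_URL}"
# Uncomment to actually poll (requires a reachable Ambari server):
# curl -u admin:admin -H "X-Requested-By:ambari" "${STATUS_URL}"
```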