Member since 01-19-2017 - 3679 Posts - 632 Kudos Received - 372 Solutions
03-29-2018
12:33 AM
@Rahul Soni If you have been getting this duplicate table name error, try the solution below.

Solution: check for and delete the existing extract BEFORE calling the dataextract.Extract() method.

My original code was this:

############BAD############
#create the extract
try:
    #try to create the extract file
    tdefile = tde.Extract(dataFolder+'DataExtract.tde')
except:
    #if the file already exists, delete it
    os.remove(dataFolder+'DataExtract.tde')
    #create the file now
    tdefile = tde.Extract(dataFolder+'DataExtract.tde')

My new, working code does this:

############GOOD############
#assumes os and the Tableau SDK are already imported (import os, import dataextract as tde)
#and dataFolder points at the output directory
#if the extract already exists, delete it
if os.path.isfile(dataFolder+'DataExtract.tde'):
    os.remove(dataFolder+'DataExtract.tde')
#now create a new one
tdefile = tde.Extract(dataFolder+'DataExtract.tde')

Hopefully this helps!
03-28-2018
10:13 PM
@Jacky Hung You should use scp as root to copy the data from the source host to a local directory on the destination cluster, e.g. /tmp

# cd /home
# scp * root@destination:/tmp

Then, as hdfs (the HDFS superuser), create a home directory in HDFS for each user you copied earlier. Creating the home directory for user1 in HDFS:

$ hdfs dfs -mkdir /user/user1
$ hdfs dfs -chown user1 /user/user1

Subsequently, if you want to create subdirectories and change the permissions and owner recursively:

$ hdfs dfs -mkdir -p /user/user1/test/another/final
$ hdfs dfs -chown -R user1 /user/user1/test/another/final

Then, as the hdfs user, go to the directory where you scp'ed the files earlier, e.g. /tmp, and put them into HDFS:

$ cd /tmp
$ hdfs dfs -put user1_objects /user/user1

or

$ hdfs dfs -put user1_objects /user/user1/test/another/final

Check the permissions and ownership:

$ hdfs dfs -ls /user/user1

You will need to do this for all the other users, as shown in the sketch below. Unfortunately you can't use DistCp here, as the source isn't Hadoop. Hope that helps
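If there are many users, a small loop saves repetition. This is only a minimal sketch, assuming the files for each user were scp'ed into /tmp/<username> and the user list below is replaced with your real one:

#!/bin/bash
# Run with HDFS superuser rights, e.g.: sudo -u hdfs ./copy_users.sh
# Assumed layout: /tmp/<username>/ holds the files scp'ed from the source host
for u in user1 user2 user3; do            # replace with your actual user list
    hdfs dfs -mkdir -p /user/"$u"          # create the home directory if missing
    hdfs dfs -put /tmp/"$u"/* /user/"$u"   # copy the local files into HDFS
    hdfs dfs -chown -R "$u" /user/"$u"     # hand ownership to the user
done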
03-28-2018
08:40 PM
@Rahul Soni Can you paste here the code you are running? The SQL below doesn't look correct. If the table you are querying is stb_headers_v6, then the FROM part should simply be FROM stb_headers_v6; it shouldn't be qualified as `pde_gold`.`stb_headers_v6` like below:

SELECT `stb_headers_v6`.`day` AS `day`
FROM `pde_gold`.`stb_headers_v6` `stb_headers_v6`   <-- Table name twice here
GROUP BY `stb_headers_v6`.`day`

Hope that helps
03-28-2018
08:11 PM
@Michael Bronson You will first need to identify the Ambari service and component names to use in the API; this will also bring down the Metrics Collector. List the services:

curl -u admin:admin -X GET http://<AMBARI_SERVER>:8080/api/v1/clusters/<CLUSTER_NAME>/services

Replace <Service_name> below with the service name from the previous output, e.g. AMBARI_METRICS.

Stop AMBARI_METRICS:

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context": "Stop service"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://<AMBARI_SERVER_HOSTNAME>:8080/api/v1/clusters/<CLUSTER_NAME>/services/<Service_name>

The service will stop; check the Ambari UI. Hope that helps!!
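To bring the service back up later, the same endpoint can be reused with the target state STARTED. A minimal sketch, using the same placeholders as above:

# Start the service again (same service name and placeholders as the stop call)
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo": {"context": "Start service"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' \
  http://<AMBARI_SERVER_HOSTNAME>:8080/api/v1/clusters/<CLUSTER_NAME>/services/<Service_name>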
03-28-2018
02:45 PM
@Dinesh Jadhav Can you paste the contents of the following (you can scramble the REALM and other sensitive info):

- kinit command
- krb5.conf
- kdc.conf
- kadm5.acl

Make sure that you have a local copy of krb5.conf on all hosts and that kadmin is up and running.
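A quick way to confirm the admin server is reachable is to query it directly. A minimal sketch, assuming an MIT KDC; the principal name is only a placeholder and the command will prompt for its password:

# Ask kadmin to list principals; success means the admin server is up and reachable
kadmin -p admin/admin@EXAMPLE.COM -q "listprincs"

# On the KDC host you can also check the service itself (name varies by OS:
# 'kadmin' on RHEL/CentOS, 'krb5-admin-server' on Debian/Ubuntu)
systemctl status kadmin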
03-28-2018
10:43 AM
@Praveen Atmakuri This is the format of the hdfs dfsadmin command in Azure, where mycluster is your cluster name:

hdfs dfsadmin -D "fs.default.name=hdfs://mycluster/" -report

Please revert
03-28-2018
10:16 AM
@L V Yes, you guessed it right; otherwise, Ambari will fail to authenticate your LDAP users (Ambari_LDAP and Ambari_LDAPS). If my previous answer resolved your initial problem, could you please accept it by clicking the Accept button below? That would be a great help to community users looking for a quick solution to these kinds of errors.
03-28-2018
03:31 AM
1 Kudo
@Michael Dennis "MD" Uanang That URL is inaccessible. Is your host able to connect to the internet? Have you set up and started your Ambari server? Can you post the contents of your /etc/apt/sources.list.d/ambari.list?
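A quick connectivity check from the host can rule out a network problem. A minimal sketch; the repository URL below is only a placeholder, use the base URL from your own ambari.list:

# Confirm the host can reach the Ambari repository
curl -I http://<AMBARI_REPO_HOST>/<ambari_repo_path>

# And confirm apt can read the repo metadata
apt-get update 2>&1 | grep -i ambari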
03-28-2018
03:23 AM
1 Kudo
@L V To connect to the YARN UI through the Knox default gateway port 8443, create a topology file in the /etc/knox/conf/topologies directory and replace YARN_HOSTNAME and YARN_PORT with the relevant values. If your newly created topology is named ui.xml, you can access the YARN UI at the URL https://KNOX_HOST:KNOX_PORT/gateway/ui/yarn/

<topology>
<gateway>
<provider>
<role>authentication</role>
<name>Anonymous</name>
<enabled>true</enabled>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>false</enabled>
</provider>
</gateway>
<service>
<role>YARN</role>
<url>http://<YARN_HOSTNAME>:<YARN_PORT></url>
</service>
<service>
<role>YARNUI</role>
<url>http://<YARN_HOSTNAME>:<YARN_PORT></url>
</service>
</topology>

Please revert
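Once the file is in place, Knox redeploys changed topologies automatically. A minimal sketch of the deployment and access check, assuming the topology file is named ui.xml and Knox listens on 8443 (host names are placeholders):

# Place the topology where Knox watches for changes
cp ui.xml /etc/knox/conf/topologies/ui.xml

# Then hit the gateway URL (-k skips certificate validation for a self-signed cert)
curl -k -I https://<KNOX_HOST>:8443/gateway/ui/yarn/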
03-26-2018
03:43 PM
@Takefumi Oide This is the Hive parameter that you should toggle; valid values are true/false. Please use the config filter in Ambari to find it, and restart all components with stale configs afterwards.

hive.server2.enable.doAs
Tells HiveServer2 to execute Hive operations as the user submitting the query. Must be set to true for the storage-based authorization model.
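To confirm the value the running HiveServer2 actually sees, you can check it from the node or through beeline. A minimal sketch, assuming the standard /etc/hive/conf location and a placeholder HiveServer2 host; adjust both to your cluster:

# Look up the property in the deployed client configuration
grep -A1 "hive.server2.enable.doAs" /etc/hive/conf/hive-site.xml

# Or ask HiveServer2 directly for the effective value
beeline -u "jdbc:hive2://<HS2_HOST>:10000/" -e "set hive.server2.enable.doAs;"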