Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2567 | 11-01-2016 05:43 PM |
| | 8499 | 11-01-2016 05:36 PM |
| | 4859 | 07-01-2016 03:20 PM |
| | 8179 | 05-25-2016 11:36 AM |
| | 4334 | 05-24-2016 05:27 PM |
02-27-2016
01:47 AM
@Prakash Punj Did you copy the file locally instead of to HDFS, as I mentioned in my reply?
03-18-2016
11:06 AM
@Robin Dong As mentioned by Ancil, you might want a script that runs the Sqoop downloads in parallel, and you need to control carefully how large your parallelism is, above all if you want to avoid the typical "No more spool space in..." error. Here's a script to do that: https://community.hortonworks.com/articles/23602/sqoop-fetching-lot-of-tables-in-parallel.html Another problem I saw with Teradata is that some data types are not supported when you try to insert the data directly into Hive from Sqoop. So the solution I took was the traditional one: 1) Sqoop to HDFS. 2) Build external tables on top of the files. 3) Create ORC tables and then insert the data from the external tables, as sketched below.
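A minimal sketch of those three steps, assuming a hypothetical Teradata table SRC_DB.MY_TABLE, two illustrative columns, a staging path /staging/my_table, and the generic Teradata JDBC driver (none of these names come from the original thread):

```bash
# 1) Sqoop the Teradata table to HDFS as plain delimited files
sqoop import \
  --connect jdbc:teradata://teradata-host/DATABASE=SRC_DB \
  --driver com.teradata.jdbc.TeraDriver \
  --username etl_user -P \
  --table MY_TABLE \
  --target-dir /staging/my_table \
  --num-mappers 4 \
  --fields-terminated-by '\001'

# 2) Build an external table on top of the staged files, then
# 3) create an ORC table and insert the data from the external one
hive -e "
CREATE EXTERNAL TABLE my_table_ext (id INT, name STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
  LOCATION '/staging/my_table';
CREATE TABLE my_table_orc STORED AS ORC
  AS SELECT * FROM my_table_ext;
"
```

Keeping the staging files as a plain external table also makes it easy to re-run only the ORC load if a type mapping turns out wrong.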
02-18-2016
05:55 PM
1 Kudo
I found the problem: it was the password. I issued the statement you gave and it still failed. Then I went into Ambari and changed the password to "password" and it worked. Thank you.
10-17-2017
06:39 AM
For me this setting is disabled and I cannot make any changes. Can you please let me know how to turn on the ACID properties?
11-17-2017
11:24 AM
Nope, reducers don't communicate with each other, and neither do the mappers. Each of them runs in a separate JVM container and has no knowledge of the others. The ApplicationMaster is the daemon that tracks and manages these JVM-based containers (mappers/reducers).
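One way to observe this isolation is the YARN CLI, which lists each mapper and reducer of a running job as its own container under the job's ApplicationMaster attempt; a rough sketch (the application and attempt IDs below are made-up placeholders):

```bash
# Find the running MapReduce application, then drill down to its containers.
yarn application -list -appStates RUNNING
# Substitute the ids printed by the previous command for these placeholders.
yarn applicationattempt -list application_1510000000000_0001
yarn container -list appattempt_1510000000000_0001_000001
# Every mapper and reducer task appears as a separate container entry.
```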
01-03-2019
01:25 PM
1 Kudo
Hi, I'd like to share a situation we encountered where 99% of our HDFS blocks were reported missing and we were able to recover them. We had a system with two namenodes and high availability enabled. For some reason, under the data folders of the datanodes, i.e. /data0x/hadoop/hdfs/data/current, we had two block pool folders listed (an example of such a folder is BP-1722964902-1.10.237.104-1541520732855). One folder contained the IP of namenode 1 and the other contained the IP of namenode 2. All the data was under the block pool of namenode 1, but inside the VERSION files of the namenodes (/data0x/hadoop/hdfs/namenode/current/) the block pool ID and the namespace ID were those of namenode 2, so the namenode was looking for blocks in the wrong block pool folder. I don't know how we got to the point of having two block pool folders, but we did.

In order to fix the problem and get HDFS healthy again, we just needed to update the VERSION file on all the namenode disks (on both NN machines) and on all the journal node disks (on all JN machines) to point to namenode 1. We then restarted HDFS and made sure all the blocks were reported and there were no more missing blocks.
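A rough sketch of the checks and verification described above, assuming the same /data0x disk layout; the journal node path and the <nameservice> placeholder are assumptions, not taken from the post:

```bash
# 1) On a datanode disk there should normally be exactly one BP-* folder;
#    two of them means blocks are split across block pools.
ls -d /data01/hadoop/hdfs/data/current/BP-*

# 2) See which block pool / namespace the namenode and journal node disks record
#    (adjust the journal path to your own layout).
grep -E 'blockpoolID|namespaceID' /data01/hadoop/hdfs/namenode/current/VERSION
grep -E 'blockpoolID|namespaceID' /hadoop/hdfs/journal/<nameservice>/current/VERSION

# 3) After editing the VERSION files to reference the block pool that actually
#    holds the data, restart HDFS and confirm nothing is still reported missing.
hdfs fsck / | grep -i missing
```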
05-25-2018
06:16 PM
Hello, from which version of Ranger is this delegate admin feature supported? This is a cool feature for multi-level provisioning powers.
02-24-2016
01:52 AM
@Sunile Manjee See this demo. my_ns1:my_table - demouser can access it:

hbase(main):005:0> scan "my_ns1:my_table"
ROW COLUMN+CELL
0 row(s) in 0.0340 seconds
hbase(main):006:0>

I removed demouser in the policy:

hbase(main):006:0> scan "my_ns1:my_table"
ROW COLUMN+CELL
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'demouser', action: scannerOpen, tableName: my_ns1:my_table, family: fam.
Here is some help for this command:
02-18-2016
12:11 AM
@Shakthar Khahadir Please help me close the thread by accepting the answer.
04-19-2016
02:55 PM
I ran into the same problem, where Ambari says Sqoop is installed, but the sqoop directory is not there on the data nodes.
I am running a cluster, but it should be the same for the sandbox.
The current answer does not address this; the only way to fix it is to uninstall the Sqoop client and re-install it with Ambari.
Unfortunately, the current web UI does not allow uninstalling clients.
Fortunately, you can do it through API calls.
The command syntax is as follows:
URL=https://${AMBARI_HOST}/api/v1/clusters/${CLUSTER_NAME}/hosts/${HOST_FQDN}/host_components/SQOOP
curl -k -u admin:admin -H "X-Requested-By:ambari" -i -X DELETE $URL
After that, you can re-install the sqoop client from the Web UI.
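If you prefer to stay on the command line, re-installation can also be driven through the same endpoint; a hedged sketch assuming the same $URL and admin credentials as above (verify the exact payload against your Ambari version's API docs):

```bash
# Re-create the SQOOP host component, then ask Ambari to install it.
curl -k -u admin:admin -H "X-Requested-By:ambari" -i -X POST $URL
curl -k -u admin:admin -H "X-Requested-By:ambari" -i -X PUT \
     -d '{"HostRoles": {"state": "INSTALLED"}}' $URL
```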