Member since: 04-16-2019
Posts: 373
Kudos Received: 7
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 23799 | 10-16-2018 11:27 AM |
| | 7876 | 09-29-2018 06:59 AM |
| | 1207 | 07-17-2018 08:44 AM |
| | 6679 | 04-18-2018 08:59 AM |
04-19-2018
01:50 PM
I want to take a backup of the fsimage that preserves everything, so that if something goes wrong the NameNode can be brought back up from that backup.
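A minimal sketch of one way to do this, assuming an HDP-style cluster and a hypothetical backup directory /backup/namenode (the paths and the exact procedure are assumptions, not a verified runbook):

```bash
# Flush in-memory edits into a fresh fsimage before taking the backup.
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace

# Download the most recent fsimage from the active NameNode into a local directory.
hdfs dfsadmin -fetchImage /backup/namenode/

hdfs dfsadmin -safemode leave

# Alternatively, copy the whole NameNode metadata directory (fsimage + edits + VERSION)
# so the NameNode can be restarted from it; the source path below is an assumed
# value of dfs.namenode.name.dir and must match your cluster's configuration.
cp -r /hadoop/hdfs/namenode /backup/namenode/$(date +%F)
```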
Labels:
- Apache Hadoop
04-18-2018
08:59 AM
Hi, I have completed this POC successfully. I was only hitting the error because of a configuration issue; normally it works: create the external table in the source cluster and load the data, create an external table with the same schema in the destination cluster, then move the data from source to destination with distcp and rename the folder to the one the destination Hive table points to.
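A minimal sketch of those steps, with the NameNode hostnames and paths as placeholders/assumptions:

```bash
# 1. Copy the source table's data folder to the destination cluster.
hadoop distcp hdfs://source-nn:8020/poc_hive hdfs://dest-nn:8020/

# 2. Rename the copied folder to the LOCATION used by the destination table
#    (run on the destination cluster).
hdfs dfs -mv /poc_hive /old_hive
```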
04-17-2018
07:06 AM
@Jay Kumar SenSharma Hi Jay, thanks for your reply. I have gone through the above link and I have some queries; please help me with them. The article explains: "If you have data in some directories outside of the normal warehouse directory (e.g. /apps/hive/warehouse), you must run the metatool with updateLocations to get those other paths in the FS Roots output." However, when I give a LOCATION (an HDFS path other than /apps/hive/warehouse) while creating an external table, I do not encounter any issue fetching the records. The error only appears after I move the external table's data to another cluster, rename the folder to the LOCATION I specified when creating the table on the destination side, and then try to fetch the data. So my point is: the article says that if data lives outside the normal warehouse directory I must run the metatool to get those paths into the FS Roots output, yet a table whose data folder is outside the warehouse works fine until I move it to another cluster and rename the folder to the destination LOCATION; only then does fetching records throw the error.
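For reference, a minimal sketch of the metatool commands the article refers to (run on the destination cluster; the NameNode URIs are placeholders/assumptions):

```bash
# List the filesystem roots currently recorded in the Hive metastore.
hive --service metatool -listFSRoot

# Rewrite any roots that still point at the old/source NameNode.
hive --service metatool -updateLocation hdfs://dest-nn:8020 hdfs://source-nn:8020
```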
04-17-2018
06:35 AM
@Jay Kumar SenSharma @Geoffrey Shelton Okot @Sindhu Hi, could anyone please help me with this? What is another way to get it done? Thanks in advance.
04-17-2018
06:30 AM
I am doing an exercise in which I created an external table in one cluster (the source cluster) and created the same table in the destination cluster with a different HDFS location. I then moved the HDFS folder holding the source table's data to the destination cluster and renamed it to the destination table's location path. But when I try to select data in the destination cluster, I get the error below:

FAILED: SemanticException Unable to determine if hdfs://<namenode>:8020/apps/hive/warehouse/test_ext is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://<namenode>:8020/apps/hive/warehouse/test_ext, expected: hdfs://<namenode>:8020

Please find the details below:

1. Created the Hive table in the source cluster:

CREATE EXTERNAL TABLE IF NOT EXISTS test_ext
(ID int,
DEPT int,
NAME string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/poc_hive';

2. Loaded data into the table:

load data local inpath '/tmp/hive_data.txt' into table test_ext;

3. Created the table in the destination cluster (no data loaded here):

CREATE EXTERNAL TABLE IF NOT EXISTS test_ext
(ID int,
DEPT int,
NAME string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/old_hive';

4. Ran distcp from source to destination:

hadoop distcp hdfs://<namenode>:8020/poc_hive hdfs://<namenode>:8020/

5. Renamed /poc_hive to /old_hive.

6. When I try to fetch data in the destination cluster:

select * from test_ext;

I get the error:

FAILED: SemanticException Unable to determine if hdfs://<namenode>:8020/apps/hive/warehouse/test_ext is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://<namenode>:8020/apps/hive/warehouse/test_ext, expected: hdfs://<namenode>:8020
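A hedged sanity check, assuming Hive CLI access on the destination cluster: confirm which filesystem URI the metastore has actually recorded for the table.

```bash
# Show the table's stored Location; if it still points at the source cluster's
# NameNode, that would be consistent with the "Wrong FS" error above.
hive -e "DESCRIBE FORMATTED test_ext;" | grep -i location
```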
Labels:
- Apache Hadoop
- Apache Hive
04-09-2018
07:20 PM
I am not able to understand the difference between fs.defaultFS and dfs.namenode.http-address. I have set up a single-node cluster in which the value of fs.defaultFS is hdfs://<host>:8020 and dfs.namenode.http-address is <host>:50070.
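A minimal sketch for inspecting the two values on a running node; the comments reflect my understanding of what each endpoint serves and should be read as an assumption:

```bash
# RPC endpoint that HDFS clients use as the default filesystem URI.
hdfs getconf -confKey fs.defaultFS                # e.g. hdfs://<host>:8020

# HTTP endpoint serving the NameNode web UI (and WebHDFS/JMX over HTTP).
hdfs getconf -confKey dfs.namenode.http-address   # e.g. <host>:50070
```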
Labels:
- Apache Hadoop
04-09-2018
06:52 PM
I want to check how long the Hadoop services have been up and running. Is there a command to check this? For example, for the HBase service I want to check its uptime, or in other words its last start time. I want to check the same for the other Hadoop services as well. Thanks in advance.
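A minimal sketch of one way to do it from the shell, using the daemon's process start time; the HBase master class name is used as an example pattern and would need adjusting for other daemons such as the NameNode or DataNode:

```bash
# Find the HBase master process and print how long it has been running
# (etime = elapsed time since start, lstart = last start timestamp).
PID=$(pgrep -f org.apache.hadoop.hbase.master.HMaster | head -n 1)
ps -o pid,etime,lstart -p "$PID"
```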
Labels:
- Apache Hadoop
04-03-2018
07:27 AM
I am trying to integrate Knox with LDAP, but I have some doubts. Please help me with the following queries:

1. I can see the properties below in the /etc/knox/conf/topologies/admin.xml file:

<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>

What is ShiroProvider, and can we customize it? Where does it live, on the LDAP server side or on the Knox side?

2. The value of main.ldapRealm.contextFactory.authenticationMechanism is set to simple, and the documentation also states that Apache Knox supports only simple authentication. What does that really mean? What is the contextFactory here, and what does the value "simple" for main.ldapRealm.contextFactory.authenticationMechanism refer to?

3. What does urls./** : authcBasic really signify? I have gone through the link below but did not gain much understanding; please help me with this. https://developer.ibm.com/hadoop/2016/08/03/ldap-integration-with-apache-knox/

4. How can I deny access to a user that already matches main.ldapRealm.userDnTemplate?

Thanks in advance.
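As context for question 3, a minimal sketch of what authcBasic implies in practice, assuming the demo topology name "default", the gateway on port 8443, and the sample LDAP user guest/guest-password (all of these are assumptions):

```bash
# urls./** = authcBasic means every URL in the topology requires HTTP Basic
# credentials, which ShiroProvider then validates against the configured LDAP realm.
curl -ik -u guest:guest-password \
  "https://knox-host:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS"
```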
Labels:
- Apache Knox
- Apache Ranger
03-27-2018
06:47 AM
I want to check the logs for my Oozie application. I know I can check them from the Oozie UI by clicking on the application id and then Logs, but I want to get all of that information from the command line. I am using the command below:

oozie job -oozie http://<host>:11000/oozie -info <application-id>

but this does not fetch the complete logs that I can see in the Oozie web UI. How can I check the complete logs from the command-line utility?

My second question: when I use the command-line utility I get some of the log information for the Oozie job. Is this information also persisted completely in /var/log/oozie, or could there be more information for the Oozie job under /var/log/oozie?
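A minimal sketch of pulling the full job log from the CLI; the application id is a placeholder, and my assumption is that -log corresponds to the web UI's Logs tab while -info only returns the job/action summary:

```bash
export OOZIE_URL=http://<host>:11000/oozie

# Full aggregated log for the workflow, as shown under Logs in the web UI.
oozie job -log <application-id>

# Job status and action summary (what -info returns).
oozie job -info <application-id>
```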
Labels:
- Apache Oozie