Member since: 05-29-2017
Posts: 408
Kudos Received: 123
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3275 | 09-01-2017 06:26 AM |
| | 2118 | 05-04-2017 07:09 AM |
| | 1915 | 09-12-2016 05:58 PM |
| | 2642 | 07-22-2016 05:22 AM |
| | 2054 | 07-21-2016 07:50 AM |
09-22-2016
05:40 AM
@Sowmya Ramesh So as of now we can't stop or suspend all entities at once. Are there any plans to implement such a feature in the future?
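Until Falcon supports this natively, one workaround is to script the suspension over the CLI entity list. Below is a rough sketch, not an official feature: it assumes the falcon CLI is on the PATH and that the entity name is the last field of the -list output, which can differ between Falcon versions.

```bash
#!/bin/bash
# Suspend every Falcon process and feed in one pass.
# Assumption: the last whitespace-separated field of `falcon entity -list`
# output is the entity name; verify this against your Falcon version.
for type in process feed; do
  for name in $(falcon entity -type "$type" -list | awk '{print $NF}'); do
    echo "Suspending $type: $name"
    falcon entity -type "$type" -suspend -name "$name"
  done
done
```

The same loop with -resume instead of -suspend would bring everything back after maintenance.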
09-21-2016
10:16 AM
@Gurmukh Singh: I tried this script but I am not getting anything except the output below.
[user@server2~]$ ./cleanup.sh
Usage: dir_diff.sh [30]
I have the same thing in my script as what you mentioned.
09-21-2016
10:08 AM
I have many processes and feeds scheduled in my clusters, and during maintenance I need to stop them one by one. Is there any way to stop all Falcon feeds and processes with one command?
Labels:
- Apache Falcon
09-12-2016
05:58 PM
@kishore sanchina: You are getting the permission issue because the user you logged in with does not have the admin role on the cluster. Please try with the oozie user; you can sudo to oozie: sudo su - oozie or su oozie
09-12-2016
12:21 PM
When you create a database or internal tables from the Hive CLI, they are created with 777 permission by default. Even if you have a umask set in HDFS, the permission stays the same. You can change this with the following steps.
1. From the command line on the Ambari server node, edit the file:
vi /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py
Search for hive_apps_whs_dir, which should lead you to this block:
params.HdfsResource(params.hive_apps_whs_dir,
    type="directory",
    action="create_on_execute",
    owner=params.hive_user,
    group=params.user_group,
    mode=0755)
2. Modify the value of mode from 0777 to the desired permission, for example 0750. Save and close the file.
3. Restart the Ambari server to propagate the change to all nodes in the cluster:
ambari-server restart
4. From the Ambari UI, restart HiveServer2 to apply the new permission to the warehouse directory. If multiple HiveServer2 instances are configured, restarting any one of them is enough.
hive> create database test2;
OK
Time taken: 0.156 seconds
hive> dfs -ls /apps/hive/warehouse;
Found 9 items
drwxrwxrwx   - hdpuser hdfs 0 2016-09-08 01:54 /apps/hive/warehouse/test.db
drwxr-xr-x   - hdpuser hdfs 0 2016-09-08 02:04 /apps/hive/warehouse/test1.db
drwxr-x---   - hdpuser hdfs 0 2016-09-08 02:09 /apps/hive/warehouse/test2.db
I hope this will help you serve your purpose.
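One follow-up note: the hive.py change affects how the warehouse directory is created going forward, so database directories created earlier keep their old 777 mode. A minimal sketch for tightening directories that already exist (assuming the default /apps/hive/warehouse location; verify ownership and the target mode for your environment before running):

```bash
# Tighten the warehouse root and an existing database directory (example paths).
sudo -u hdfs hdfs dfs -chmod 750 /apps/hive/warehouse
sudo -u hdfs hdfs dfs -chmod -R 750 /apps/hive/warehouse/test.db

# Confirm the result.
hdfs dfs -ls /apps/hive/warehouse
```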
09-07-2016
08:42 AM
1 Kudo
Team, if we create a database or internal tables, by default they are created with 777 permission. I have a umask set in HDFS, but I am not sure why it does not apply to the /apps/hive/warehouse dir. Any idea how we can change it from 777 to 755 or 750? Thanks in advance.
Labels:
- Apache Hadoop
08-31-2016
12:54 PM
Team, I want to use the -w (or --password-file <password file>) option in beeline, so can someone please help me with how to use it?
Labels:
- Apache Hive
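For reference, beeline reads the connection password from the file passed to -w / --password-file instead of prompting for it or taking it on the command line. A minimal sketch (the JDBC URL, user name, and file path below are placeholders for your environment):

```bash
# Store the password in a file with restrictive permissions (placeholder path).
echo 'MyHivePassword' > /home/user/.hive_password
chmod 600 /home/user/.hive_password

# Connect, letting beeline read the password from the file.
beeline -u "jdbc:hive2://hiveserver.example.com:10000/default" \
        -n hiveuser \
        -w /home/user/.hive_password
```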
08-03-2016
06:23 AM
Thanks @mqureshi. It is working now that I changed my jar to solr-yarn.jar. Earlier I was using a separate jar program to create the Solr index. As Kuldeep mentioned, I tried that property and it worked for my old program as well.
08-03-2016
06:21 AM
Thanks a lot @Kuldeep Kulkarni, it is working for this jar.
08-02-2016
04:58 PM
@mqureshi: I tried it, but I am getting the error below.
[solr@m1 solr]$ hadoop jar /opt/lucidworks-hdpsearch/job/lucidworks-hadoop-job-2.0.3.jar -queue=ado com.lucidworks.hadoop.ingest.IngestJob -DcsvFieldMapping=0=id,1=cat,2=name,3=price,4=instock,5=author -DcsvFirstLineComment -DidField=id -DcsvDelimiter="," -Dlww.commit.on.close=true -cls com.lucidworks.hadoop.ingest.CSVIngestMapper -c test -i csv/* -of com.lucidworks.hadoop.io.LWMapRedOutputFormat -zk m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181/solr
WARNING: Use "yarn jar" to launch YARN applications.
Exception in thread "main" java.lang.ClassNotFoundException: -queue=ado
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:214)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I also tested the command below with yarn jar, but I get the same error.
[solr@m1 solr]$ yarn jar /opt/lucidworks-hdpsearch/job/lucidworks-hadoop-job-2.0.3.jar -queue=ado com.lucidworks.hadoop.ingest.IngestJob -DcsvFieldMapping=0=id,1=cat,2=name,3=price,4=instock,5=author -DcsvFirstLineComment -DidField=id -DcsvDelimiter="," -Dlww.commit.on.close=true -cls com.lucidworks.hadoop.ingest.CSVIngestMapper -c test -i csv/* -of com.lucidworks.hadoop.io.LWMapRedOutputFormat -zk m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181/solr
Exception in thread "main" java.lang.ClassNotFoundException: -queue=ado
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:214)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
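The ClassNotFoundException happens because hadoop jar / yarn jar treats the first argument after the jar as the main class, so -queue=ado is being looked up as a class name. A sketch of the command with the class first and the queue passed as a Hadoop property instead (this assumes IngestJob goes through Hadoop's generic options parser, consistent with the property fix mentioned in the follow-up replies above):

```bash
# The main class must come right after the jar; -D properties, including the
# queue selection, follow the class name.
yarn jar /opt/lucidworks-hdpsearch/job/lucidworks-hadoop-job-2.0.3.jar \
  com.lucidworks.hadoop.ingest.IngestJob \
  -Dmapreduce.job.queuename=ado \
  -DcsvFieldMapping=0=id,1=cat,2=name,3=price,4=instock,5=author \
  -DcsvFirstLineComment -DidField=id -DcsvDelimiter="," \
  -Dlww.commit.on.close=true \
  -cls com.lucidworks.hadoop.ingest.CSVIngestMapper -c test -i csv/* \
  -of com.lucidworks.hadoop.io.LWMapRedOutputFormat \
  -zk m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181/solr
```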