Member since
05-09-2016
190
Posts
51
Kudos Received
29
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 612 | 02-07-2018 12:18 AM |
| | 1499 | 06-08-2017 09:13 AM |
| | 277 | 03-31-2017 02:54 PM |
| | 608 | 03-31-2017 09:53 AM |
| | 611 | 03-20-2017 05:00 PM |
06-23-2020
04:18 PM
Hi, NiFi is one of the options. See https://community.cloudera.com/t5/Support-Questions/Trouble-converting-JSON-to-AVRO-in-Nifi/td-p/202590
... View more
05-25-2018
12:18 AM
@Naresh Kumar Korvi Let me know if the below works: --columns \"column name\"
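For context, a minimal sketch of how that suggestion might look in a full command (the connection string, table, column names and target directory are placeholders, not from your setup):

```bash
# Hypothetical sqoop import; the escaped inner quotes pass the column name
# containing a space through to the database as a quoted identifier.
sqoop import \
  --connect "jdbc:mysql://<db-host>/<db>" \
  --username <user> -P \
  --table employees \
  --columns "\"column name\",id" \
  --target-dir /tmp/sqoop_test
```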
... View more
05-10-2018
07:33 PM
That was just an example. You can replace * with a comma-separated list of users, followed by a space and a comma-separated list of groups.
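For example (the user and group names below are placeholders), the value follows the standard Hadoop ACL format of comma-separated users, a single space, then comma-separated groups:

```bash
# Set in Ambari (typically Tez > Custom tez-site) or tez-site.xml; names are placeholders.
tez.am.view-acls=user1,user2 hadoop-admins,analysts
```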
... View more
05-10-2018
03:58 AM
@Jarosław Gronowski You need to set tez.am.view-acls https://community.hortonworks.com/articles/73837/enable-the-specified-users-or-groups-to-tez-view-w.html
... View more
05-10-2018
03:32 AM
1 Kudo
@Abe Ram Check the link below: https://stackoverflow.com/questions/32191260/beeline-equivalent-of-hive-silent-mode Basically, add export HADOOP_CLIENT_OPTS="-Djline.terminal=jline.UnsupportedTerminal" before running beeline in the background. https://issues.apache.org/jira/browse/HIVE-6758
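A rough sketch of the workflow, assuming a hypothetical HiveServer2 URL and query file:

```bash
# Disable jline's terminal handling so beeline does not get suspended when
# backgrounded, then launch it with nohup (URL and script are placeholders).
export HADOOP_CLIENT_OPTS="-Djline.terminal=jline.UnsupportedTerminal"
nohup beeline -u "jdbc:hive2://<hs2-host>:10000/default" -f my_query.hql > beeline.out 2>&1 &
```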
... View more
05-10-2018
03:30 AM
@Carol Elliott Check this out https://stackoverflow.com/questions/32191260/beeline-equivalent-of-hive-silent-mode
... View more
05-09-2018
05:41 AM
Hi @Megh Vidani I'm not sure what your poc.test2 table looks like, but I tried to create and insert into a Hive Druid-backed table using the wikiticker data and it worked fine. Note that Hive-Druid integration requires Hive Interactive (LLAP) and HDP 2.6+; I did my testing on HDP-2.6.4. Below are the queries I executed:
CREATE EXTERNAL TABLE druid_table_1
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.datasource" = "wikiticker");
CREATE TABLE druid_table_2 STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' AS SELECT `__time`, channel, countryname, regionname FROM druid_table_1;
INSERT INTO TABLE druid_table_2 SELECT `__time`, channel, countryname, regionname FROM druid_table_1 WHERE channel='#ca.wikipedia';
... View more
03-28-2018
10:56 PM
You need to figure out where this malformed row is coming from. You can check the YARN application log for this.
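For example, something along these lines (the application ID and search term are placeholders):

```bash
# Fetch the aggregated YARN application log and look for the malformed record.
yarn logs -applicationId application_1234567890123_0042 | grep -i -B2 -A2 "malformed"
```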
... View more
03-28-2018
08:37 PM
Hi Robert, You have to follow something like this. https://community.hortonworks.com/articles/65914/how-to-add-ports-to-the-hdp-25-virtualbox-sandbox.html
... View more
03-28-2018
08:30 PM
@santhosh ch The issue here is with the JSON file. Caused by: com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected a string but was BEGIN_ARRAY You can find the malformed file in the log.
... View more
02-21-2018
02:11 AM
Hi @Fawze AbuJaber There is no strict naming-convention requirement for the servers; you can keep whatever hostnames are convenient. Ambari by default does not provide a single-user mode like the cloudera-scm user; you have to manually configure the user as per the documents below. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_security/content/_how_to_configure_an_ambari_agent_for_non-root.html https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-security/content/how_to_configure_ambari_server_for_non-root.html
... View more
02-07-2018
12:22 AM
@PJ Try running set hive.exec.stagingdir=<new location> before running your query.
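A minimal sketch, assuming a hypothetical HiveServer2 URL, staging path and table names:

```bash
# Point the staging dir to a new location for this session only, then run the query.
beeline -u "jdbc:hive2://<hs2-host>:10000/default" \
  -e "set hive.exec.stagingdir=/tmp/.hive-staging; INSERT OVERWRITE TABLE target_table SELECT * FROM source_table;"
```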
... View more
02-07-2018
12:18 AM
1 Kudo
@Pee Tankulrat Also make sure that it is not falling back to POSIX permissions. Remove all POSIX permissions from the directory using hdfs dfs -chmod.
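For example (the path is a placeholder):

```bash
# Strip all POSIX permission bits recursively so access is decided by Ranger
# policies (and the HDFS superuser) instead of falling back to POSIX checks.
hdfs dfs -chmod -R 000 /data/protected_dir
```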
... View more
06-30-2017
02:42 PM
Hi @Mahesh Thumar If you are using Ambari, you can do this in the Advanced gateway-log4j section. Rahul
... View more
06-08-2017
09:13 AM
This has been resolved by renaming application.properties to atlas-application.properties.
... View more
06-04-2017
06:53 AM
Hi @Chandan Kumar Can you elaborate on what you mean by loading the file? What exactly are you doing?
... View more
06-04-2017
06:48 AM
@JT Ng I suspect this could be an issue with a missing hive-site and tez-site. You can find more hints in the log of the YARN MR job launched by Oozie.
... View more
05-25-2017
07:52 PM
1 Kudo
Currently Ranger does not support defining the passwords of the default admin users (rangerusersync and rangertagsync) during installation.
This is a security concern if you forget to update these passwords post-installation, as the default password for these users is the same as the username, thereby compromising admin access to Ranger.
There is a plan to address this in a future HDP release by allowing the passwords for admin users to be defined during installation; for the time being, the workaround below can be used to enforce a password policy.
Once the Ranger installation is successful, you can use the REST API GET and PUT calls below to change the password of any user.
This is helpful in the case of automated cluster installation using Ambari blueprints.
To change the password of a user, first get the user details using the call below: curl -s -u admin:admin -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://h2.openstacklocal:6080/service/xusers/users/2
You will get something like this. {"id":2,"createDate":"2017-05-24T17:36:49Z","updateDate":"2017-05-25T08:09:25Z","owner":"Admin","updatedBy":"Admin","name":"rangerusersync","firstName":"rangerusersync","lastName":"","password":"*******","description":"rangerusersync","groupIdList":[],"groupNameList":[],"status":1,"isVisible":1,"userSource":0,"userRoleList":["ROLE_SYS_ADMIN"]}
Now send the PUT request with the above information after setting the required password, as given below: curl -iv -u admin:admin -X PUT -H "Accept: application/json" -H "Content-Type: application/json" http://h2.openstacklocal:6080/service/xusers/secure/users/2 -d '{"id":2,"createDate":"2017-05-24T17:36:49Z","updateDate":"2017-05-25T08:09:25Z","owner":"Admin","updatedBy":"Admin","name":"rangerusersync","firstName":"rangerusersync","lastName":"","password":"Password@123","description":"rangerusersync","groupIdList":[],"groupNameList":[],"status":1,"isVisible":1,"userSource":0,"userRoleList":["ROLE_SYS_ADMIN"]}'
PS: To change the password of the keyadmin user, use keyadmin credentials in the REST API call. Thanks @Ronak bansal for testing this.
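As an optional follow-up, the GET and PUT above can be chained in one shot with jq. This is only a sketch: it assumes jq is installed, and the host, user ID and new password are the same placeholders used above.

```bash
# Fetch the user record, overwrite the password field, and PUT it back.
RANGER_URL="http://h2.openstacklocal:6080"
USER_ID=2
NEW_PASS="Password@123"
curl -s -u admin:admin -H "Accept: application/json" \
  "${RANGER_URL}/service/xusers/users/${USER_ID}" \
  | jq --arg p "$NEW_PASS" '.password = $p' \
  | curl -s -u admin:admin -X PUT \
      -H "Accept: application/json" -H "Content-Type: application/json" \
      "${RANGER_URL}/service/xusers/secure/users/${USER_ID}" -d @-
```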
... View more
Labels: How-To/Tutorial, Ranger, Sandbox & Learning, Security
05-25-2017
10:01 AM
@subash sharma You can do it using the API below. https://cwiki.apache.org/confluence/display/RANGER/REST+APIs+for+Policy+Management#RESTAPIsforPolicyManagement-Deletepolicy:
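As a rough sketch (the Ranger host, credentials and policy ID below are placeholders):

```bash
# Delete policy 5 via the Ranger public REST API.
curl -u admin:admin -X DELETE "http://<ranger-host>:6080/service/public/api/policy/5"
```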
... View more
05-24-2017
05:14 PM
@Chavz Linde Check falcon.application.log. Are you doing this from Ambari? Do you see any errors?
... View more
03-31-2017
02:54 PM
3 Kudos
@Sandeep Nemuri Looks like you are hitting ATLAS-1403. This is fixed in the next patch release of HDP 2.5.
... View more
03-31-2017
10:12 AM
No, it is not compulsory to have Kerberos before installing Ranger/Ranger KMS.
... View more
03-31-2017
10:00 AM
3 Kudos
@Saurabh Have you added Oozie's cert to the Ambari truststore? https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.1.0/bk_Ambari_Security_Guide/content/_set_up_truststore_for_ambari_server.html
... View more
03-31-2017
09:53 AM
3 Kudos
@rahul gulati Please find my answers below:
1) Can I use the same PostgreSQL DB for creating the Ranger audit and policy DBs? Yes, you can. However, audit to DB has been removed, so the DB will essentially be used for policies only.
2) I already have Ambari Infra installed on the cluster. Do I need to use the URL http://solr_host:6083/solr/ranger_audits, where solr_host is the hostname of the machine where Ambari Metrics is installed? The Solr URL should point to the host where Ambari Infra is installed.
3) Do we need to set up Knox with Kerberos before installing Ranger, or can it be done later as well? Can you please share more details on this? Do you have a document in mind for setting up Knox with Kerberos?
... View more
03-23-2017
10:23 AM
@Zhao Chaofeng What error do you see after adding execute permission?
... View more
03-21-2017
05:43 PM
Check /var/log/hive/hivemetastore.out as well.
... View more
03-21-2017
09:46 AM
SYMPTOM
Concurrency issues are hit during the multi-threaded moveFile issued when processing queries such as "INSERT OVERWRITE TABLE ... SELECT ..".
The following pattern is displayed in the stack trace: Loading data to table testdb.test_table from hdfs://xyz/ra_hadoop/.hive-staging_hive_2017-01-31_14-09-52_561_8101886747064006778-4/-ext-10000
ERROR : Failed with exception java.util.ConcurrentModificationException
org.apache.hadoop.hive.ql.metadata.HiveException: java.util.ConcurrentModificationException
at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2883)
at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3140)
at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1727)
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:353)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1745)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1491)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1289)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1156)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1151)
ROOT CAUSE
This issue occurs because of the problem described in HIVE-15355.
WORKAROUND
Set the following property at the client side: set hive.mv.files.thread=0;
Re-run the Hive query. Note: The fix for HIVE-15355 is expected to be included in the next major release of HDP.
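For example, the workaround can be applied for a single session like this (the HiveServer2 URL and the source table are placeholders; testdb.test_table is taken from the stack trace above):

```bash
# Disable multi-threaded moveFile for this session, then re-run the failing query.
beeline -u "jdbc:hive2://<hs2-host>:10000/default" \
  -e "set hive.mv.files.thread=0; INSERT OVERWRITE TABLE testdb.test_table SELECT * FROM testdb.source_table;"
```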
... View more
Labels: Data Processing, Issue Resolution, Upgrade to HDP 2.5.3: ConcurrentModificationException When Executing Insert Overwrite, Hive
03-20-2017
05:05 PM
I have seen an Apache JIRA (RANGER-1094) which says two-way SSL should not be required for Ranger when using Kerberos. Has this been added to the Hortonworks release?
... View more
03-20-2017
05:00 PM
@krajguru You can't run MR or Tez jobs using the REST API. The YARN REST API is meant for developers of applications such as Distributed Shell, MR, and Tez, not for users who submit applications. However, for Spark jobs on YARN, the REST API can be used as described here. https://community.hortonworks.com/articles/28070/starting-spark-jobs-directly-via-yarn-rest-api.html
... View more
03-17-2017
01:58 PM
1 Kudo
@Jan Horton Yes, you can install both at the same time without any problem. Spark 2.0 is a technical preview (TP) as of HDP 2.5.3 and is not recommended for production use; however, you are welcome to use it for study.
... View more