Member since: 03-27-2017
Posts: 20
Kudos Received: 1
Solutions: 0
05-16-2018
02:25 PM
This might help other people. This appears to be a bug: https://issues.apache.org/jira/browse/AMBARI-21687 The bug is fixed only in the 2.6.0.0 branch (not in the 2.4.x.x or 2.5.x.x versions of Ambari). So the available workaround is to add the node via the 'admin' user, or to use Ambari 2.6.x.
02-18-2018
03:24 PM
You can try adding the "hadoop.ssl.enabled.protocols" property to custom core-site under the HDFS service in Ambari. See [1] for information on this property: [1] https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/core-default.xml
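For illustration, a custom core-site entry for this property might look like the following (the value shown is an example only; set it to the protocols your cluster should actually accept):

```xml
<property>
  <name>hadoop.ssl.enabled.protocols</name>
  <!-- Example: restrict to TLS versions only -->
  <value>TLSv1,TLSv1.1,TLSv1.2</value>
</property>
```

In Ambari this is added as a key/value pair under HDFS -> Configs -> Custom core-site rather than by editing the XML file directly.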
01-03-2018
06:26 PM
@Michael Bronson See this link, which should help: http://henning.kropponline.de/2015/06/07/services-and-state-with-ambari-rest-api/
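As a sketch of what that article covers: Ambari exposes service state via its REST API at /api/v1/clusters/&lt;cluster&gt;/services?fields=ServiceInfo/state. A minimal Python helper to pull service states out of such a response could look like this (the sample payload below mimics the documented response shape; it is illustrative, not captured from a real cluster):

```python
import json

def service_states(api_response_text):
    """Map each service name to its state from an Ambari
    /api/v1/clusters/<cluster>/services?fields=ServiceInfo/state response."""
    data = json.loads(api_response_text)
    return {item["ServiceInfo"]["service_name"]: item["ServiceInfo"]["state"]
            for item in data.get("items", [])}

# Example payload in the shape Ambari returns:
sample = '''{"items": [
  {"ServiceInfo": {"service_name": "HDFS", "state": "STARTED"}},
  {"ServiceInfo": {"service_name": "YARN", "state": "INSTALLED"}}
]}'''
print(service_states(sample))
```

In practice you would fetch the JSON first, e.g. with curl -u admin:admin against your Ambari host on port 8080, then parse it as above.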
12-27-2017
06:34 PM
Yes, that's correct. Hortonworks does not support the Fair Scheduler, and it is not recommended for production use. See [1] [1] https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_release-notes/content/community_features.html
12-23-2017
07:02 PM
@Jacqualin jasmin See if you are also hitting the warning described in link [1]. If yes, try the same resolution. [1] https://community.hortonworks.com/articles/11805/how-to-solve-ambari-metrics-corrupted-data.html ref: https://cwiki.apache.org/confluence/display/AMBARI/Cleaning+up+Ambari+Metrics+System+Data
11-13-2017
09:55 AM
In what scenario should we choose the binary type versus the string type in an ORC table? My understanding is that we should use binary when the string value is larger than 2 GB, and string when it is below 2 GB. Is that correct? I've already seen this: https://orc.apache.org/docs/encodings.html When we use a string value of less than 2 GB we get the error below, but when we switch to binary it works as expected. Any idea what the cause could be? "java.lang.RuntimeException: java.io.IOException: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit."
Labels:
- Apache Hive
07-25-2017
05:50 AM
@sulanta thanks, but my requirement is a little different. What I want is to restrict the user from dropping the database while still allowing that user to drop tables. Is this possible?
07-24-2017
11:26 AM
- Is it possible to allow DROP TABLE but not DROP DATABASE in Hive using the Ranger Hive plugin on the latest version of HDP?
Labels:
- Apache Hive
- Apache Ranger
06-30-2017
10:13 AM
@krajguru Thanks !!
06-30-2017
10:06 AM
Labels:
- Apache Ambari
- Apache Ranger
- Apache Solr
06-15-2017
07:10 AM
@hari Kishore javvaji dr.who is the default static username for the Hadoop core property "hadoop.http.staticuser.user". The description says: The user name to filter as, on static web filters while rendering content. An example use is the HDFS web UI (user to be used for browsing files).
See this: [1] http://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/core-default.xml Hope this answered your question.
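For illustration, overriding this property in core-site.xml would look something like the following (the value shown is an example; use whatever user your web UIs should browse as):

```xml
<property>
  <name>hadoop.http.staticuser.user</name>
  <!-- dr.who is the shipped default; change it to alter the UI browsing user -->
  <value>dr.who</value>
</property>
```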
06-11-2017
05:50 PM
@Sara Alizadeh Do you mean that after enabling maintenance mode on the NameNode, you cannot edit the config from Ambari? Please note that you can change the Java heap size without enabling maintenance mode. Please see article [1], which explains how to set the NameNode heap size. Also see thread [2], which discusses the same topic. [1] https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/ref-80953924-1cbf-4655-9953-1e744290a6c3.1.html [2] https://community.hortonworks.com/questions/45351/how-to-set-the-namenode-heap-memory-in-ambari.html
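As a rough sketch of what article [1] covers: the NameNode heap ultimately ends up as JVM options in the hadoop-env template (editable in Ambari under HDFS -> Configs), along the lines of the fragment below. The sizes here are examples only; choose values based on your cluster's number of files and blocks:

```shell
# Fragment of the hadoop-env template (example values, not a recommendation):
export HADOOP_NAMENODE_OPTS="-Xms4096m -Xmx4096m ${HADOOP_NAMENODE_OPTS}"
```

Ambari also surfaces this as a "NameNode Java heap size" setting, which is the preferred place to change it so the value stays managed.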
06-11-2017
05:26 PM
@oula.alshiekh@gmail.com alshiekh Did you resolve the issue? I am hitting the same problem.
06-10-2017
06:24 PM
My Hive and mysqld services are already running, but while starting the Hive Metastore I'm getting the error below.
HDP version: 2.5.5
Ambari version: 2.5.0.3
Closing: 0: jdbc:mysql://pnkj-1.openstacklocal/hive?createDatabaseIfNotExist=true
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:304)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.io.IOException: Schema script failed, errorcode 2
at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:410)
at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:367)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:300)
... 8 more
*** schemaTool failed ***
Labels:
- Apache Ambari
- Apache Hive
06-06-2017
05:17 PM
1 Kudo
@white wartih The ambari-metrics-collector log shows only the message below:
ambari-metrics-collector.log:
2017-06-06 05:56:48,415 WARN org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource: Unable to connect to HBase store using Phoenix.
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CATALOG
at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:436)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:939
So, as asked earlier: did you follow all the steps described in article [1]? If yes and you are still facing the issue, please attach the hbase-ams-master log file from "/var/log/ambari-metrics-collector/hbase-ams-master-<hostname -f>.log" and also share the "/etc/ambari-metrics-monitor/conf/metric_monitor.ini" file from any host where ambari-metrics-monitor is running. Also, can you try to telnet to the ambari-metrics-collector from any host, e.g.: telnet c2m.xdata.com 6188 [1] https://community.hortonworks.com/articles/11805/how-to-solve-ambari-metrics-corrupted-data.html
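If telnet is not installed, a small Python check does the same reachability test (the host/port below are the examples from this thread; substitute your own collector host):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example from the thread above (replace with your collector host):
# port_open("c2m.xdata.com", 6188)
print(port_open("127.0.0.1", 65534))  # likely False unless something listens there
```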
06-06-2017
02:09 PM
- Is there any way to get the list of users who submitted jobs to a specific queue? E.g.: 1) If I have a queue named 'myQueue', list all users who submitted jobs to 'myQueue'. 2) List all users who submitted a specific type of job, e.g. list all users who submitted only Hive jobs to 'myQueue'.
Labels:
- Apache YARN