Member since: 03-21-2016
Posts: 233
Kudos Received: 62
Solutions: 33
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 923 | 12-04-2020 07:46 AM |
 | 1198 | 11-01-2019 12:19 PM |
 | 1632 | 11-01-2019 09:07 AM |
 | 2552 | 10-30-2019 06:10 AM |
 | 1280 | 10-28-2019 10:03 AM |
12-04-2020 07:46 AM

Policy type is missing; by default policyType is 0, which is the Access type. Try the API call below:

curl -u admin -H 'Content-Type: application/json' -H 'Accept: application/json' -X POST -d '
{"policyType":"2","name":"row_policy_1","isEnabled":true,"policyPriority":0,"policyLabels":[],"description":"","isAuditEnabled":true,"resources":{"database":{"values":["default"],"isRecursive":false,"isExcludes":false},"table":{"values":["test_table"],"isRecursive":false,"isExcludes":false}},"rowFilterPolicyItems":[{"users":["hr1"],"accesses":[{"type":"select","isAllowed":true}],"rowFilterInfo":{"filterExpr":"c1=true"}}],"service":"c116_hive"}' http://ranger-admin:6080/service/plugins/policies -v
05-07-2020 08:43 AM

Here is a very good explanation of the Hive View and Hue replacement in HDP 3.0: https://hadoopcdp.com/data-analytics-studio-das-replace-of-hue-hive-views-in-cdp/
03-25-2020 11:29 AM

Hive View is no longer available in Ambari 2.7.x (the version required for HDP 3); it has been deprecated in favor of DAS/DAS Lite. Alternatively, you can use JDBC tools such as DbVisualizer, SQuirreL, or Hue.
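For a quick command-line alternative, a minimal sketch with Beeline (hostname, credentials, and the default HiveServer2 port 10000 are placeholders; DbVisualizer and SQuirreL take the same JDBC URL):

```
# Connect to HiveServer2 over JDBC; swap in your host, user, and password.
beeline -u 'jdbc:hive2://hiveserver2-host:10000/default' -n myuser -p mypassword
```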
11-21-2019 06:41 AM

I'm having a similar issue with the YARN clients in my cluster. When the password set in Ambari contains the special character ">", the YARN client interprets it as "&gt;". Example:

Set passwd: ba(PxO463$bd;>

Password in the YARN core-site.xml:

lx963:/usr/hdp/2.6.5.0-292/hadoop-yarn/etc/hadoop # grep PxO463$bd core-site.xml
<value>ba(PxO463$bd;></value>
<value>ba(PxO463$bd;></value>

Is there a workaround or fix for this?
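One side note on the grep above (separate from the YARN issue itself): unquoted, the shell expands $bd as an empty variable, so the pattern that actually ran was just PxO463. To search for the literal fragment, single-quote it:

```
# Single quotes stop the shell from expanding $bd; a mid-pattern "$" is a
# literal character in grep's regex, so this matches the exact fragment.
grep 'PxO463$bd' core-site.xml
```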
11-18-2019 10:40 AM

The Knox Admin UI is SSO-enabled by default. Accessing /gateway/manager/admin-ui/ will redirect to the SSO page, where you should supply the matching credentials (as per the LDAP/Shiro configuration in knoxsso.xml). Access the Admin UI from a browser and verify.
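A quick way to verify from the command line (a sketch; host and port are placeholders, 8443 being the Knox Gateway default). An unauthenticated request should come back as a redirect to the KnoxSSO login page:

```
# -i prints response headers (expect HTTP 302 with a Location header pointing
# at the KnoxSSO endpoint); -k skips TLS verification for self-signed certs.
curl -ik 'https://knox-host:8443/gateway/manager/admin-ui/'
```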
11-08-2019 06:26 AM

Hi @rguruvannagari Thanks a lot for the reply. I'm not sure whether the heap space fills up during compaction or from the Ranger Hive audit; if we set Hive authentication to none, then it is OK. Please see the following issue: https://community.cloudera.com/t5/Support-Questions/hive-metastore-is-not-responding-but-alive-with-the/m-p/282224 Thanks, Nag
11-06-2019 12:32 PM

After looking into this some more, we found the error trace below the first time a paragraph was run after the interpreter was restarted. It didn't show up originally because the earlier log came from simply running a paragraph, not necessarily one immediately after an interpreter restart. As you can see, at the end there is an exception about a class not being accessible. Once we made the WANdisco class accessible to the interpreter on its classpath, everything started to work properly.

2019-11-06 10:24:48,850 ERROR [pool-2-thread-2] PhoenixInterpreter:108 - Cannot open connection
java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:288)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:171)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1881)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1860)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1860)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:131)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.zeppelin.phoenix.PhoenixInterpreter.open(PhoenixInterpreter.java:99)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:493)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:410)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:319)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
at org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:286)
... 22 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
... 27 more
Caused by: java.lang.NoClassDefFoundError: com/wandisco/shadow/com/google/protobuf/InvalidProtocolBufferException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1844)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1809)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1903)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2573)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2586)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
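The fix, as a hedged sketch (the jar path and name below are assumptions; use whichever jar on your cluster provides the com/wandisco/shadow/... classes):

```
# Copy the WANdisco shaded client jar into the Phoenix interpreter's directory
# so it lands on the interpreter classpath (paths are illustrative).
cp /opt/wandisco/fusion/client/lib/fusion-client-*.jar \
   /usr/hdp/current/zeppelin-server/interpreter/phoenix/
# Then restart the interpreter from the Zeppelin UI so the jar is picked up.
```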
11-02-2019 09:39 AM

First, thank you for your answer. The reason I ask this question is that the blueprint JSON file contains the Log Search configuration, as in the following example:

},
{
  "zookeeper-logsearch-conf" : {
    "properties_attributes" : { },
    "properties" : {
      "component_mappings" : "ZOOKEEPER_SERVER:zookeeper",
      "content" : "\n{\n \"input\":[\n {\n \"type\":\"zookeeper\",\n \"rowtype\":\"service\",\n \"path\":\"{{default('/configurations/zookeeper-env/zk_log_dir', '/var/log/zookeeper')}}/zookeeper*.log\"\n }\n ],\n \"filter\":[\n {\n \"filter\":\"grok\",\n \"conditions\":{\n \"fields\":{\"type\":[\"zookeeper\"]}\n },\n \"log4j_format\":\"%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n\",\n \"multiline_pattern\":\"^(%{TIMESTAMP_ISO8601:logtime})\",\n \"message_pattern\":\"(?m)^%{TIMESTAMP_ISO8601:logtime}%{SPACE}-%{SPACE}%{LOGLEVEL:level}%{SPACE}\\\\[%{DATA:thread_name}\\\\@%{INT:line_number}\\\\]%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}\",\n \"post_map_values\": {\n \"logtime\": {\n \"map_date\":{\n \"target_date_pattern\":\"yyyy-MM-dd HH:mm:ss,SSS\"\n }\n }\n }\n }\n ]\n}",
      "service_name" : "Zookeeper"
    }
  }
},

Can we get advice on how to remove the Log Search configuration tags from the blueprint JSON file?
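One way to do it, as a hedged sketch with jq (this assumes each element of the blueprint's configurations array is an object with a single config-section key, as in the snippet above):

```
# Drop every *-logsearch-conf entry from the configurations array.
jq '.configurations |= map(select((keys[0] | endswith("-logsearch-conf")) | not))' \
   blueprint.json > blueprint-no-logsearch.json
```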
11-01-2019 12:19 PM
1 Kudo

@JeffEvans I think the thread below answers the same question about Spark client libraries on worker nodes: https://community.cloudera.com/t5/Support-Questions/Spark-on-Yarn-Do-nodes-need-Spark-installed/td-p/181241 We don't need Spark clients installed on all the worker nodes; they should be installed only on edge nodes.
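For instance, a minimal sketch of a YARN-mode submit from an edge node (the example jar path is the stock HDP location and may differ on your cluster):

```
# Runs SparkPi in YARN cluster mode; YARN ships the Spark runtime to the worker
# containers, so the workers need no local Spark client install.
spark-submit --master yarn --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /usr/hdp/current/spark2-client/examples/jars/spark-examples_*.jar 100
```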
10-31-2019 11:30 AM

@nirajp Either way, Hive CLI or Beeline, you MUST provide a username/password to authenticate before you can execute any SQL statement against the DB. See the examples below.

Hive CLI:

[hive@calgary ~]$ hive
..........
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://calgary.canada.ca:2181,ottawa.canada.ca:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for jdbc:hive2://calgary.canada.ca:2181,ottawa.canada.ca:2181/default: hive
Enter password for jdbc:hive2://calgary.canada.ca:2181,ottawa.canada.ca:2181/default: ****

Beeline connection:

[hive@london ~]$ beeline
Beeline version 1.2.1000.2.5.3.0-37 by Apache Hive
beeline> !connect jdbc:hive2://london.tesco.co.uk:10000/;principal=hive/london.tesco.co.uk@TESCO.CO.UK
Connecting to jdbc:hive2://london.tesco.co.uk:10000/;principal=hive/london.tesco.co.uk@TESCO.CO.UK
Enter username for jdbc:hive2://london.tesco.co.uk:10000/;principal=hive/london.tesco.co.uk@TESCO.CO.UK: xxxxx
Enter password for jdbc:hive2://london.tesco.co.uk:10000/;principal=hive/london.tesco.co.uk@TESCO.CO.UK: xxxxx
Connected to: Apache Hive (version 1.2.1000.2.5.3.0-37)
Driver: Hive JDBC (version 1.2.1000.2.5.3.0-37)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://london.tesco.co.uk:10000/> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| uxbribge       |
| White_city     |
+----------------+--+
3 rows selected (2.863 seconds)

If you have the Ranger plugin enabled for Hive, then authorization will be handled centrally by Ranger. HTH