Member since
08-16-2016
642
Posts
130
Kudos Received
68
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2746 | 10-13-2017 09:42 PM
 | 4423 | 09-14-2017 11:15 AM
 | 2424 | 09-13-2017 10:35 PM
 | 3742 | 09-13-2017 10:25 PM
 | 4110 | 09-13-2017 10:05 PM
02-02-2023
03:32 AM
@45, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
03-17-2021
05:21 PM
1 Kudo
This error shows up if you have selected Sentry/Ranger as a dependency but have not set the following configuration to true (i.e., you did not enable Kerberos): kerberos.auth.enable
10-01-2020
04:29 AM
HBase stores data as a map sorted by key. HBase can be thought of as a persistent, multidimensional, sorted map, where each cell is indexed by a row key and a column key (family and qualifier). A row key, which is immutable and uniquely identifies a row, usually has its data spread across multiple HFiles. Row keys are treated as byte arrays (byte[]) and are stored in sorted order in this multidimensional sorted map. When you look up a row key, HBase can identify the node where that data is present. Hadoop can then run its computation on the same node where the key is stored, which is why performance with technologies like Spark is so good. This is called data locality.
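As a toy illustration (plain Python, not HBase client code; the keys and values below are made up), the sorted-map lookup described above amounts to a binary search over byte-string keys:

```python
import bisect

# Toy model of HBase's sorted map: row keys are byte strings kept in
# sorted order, so a point lookup is a binary search, not a full scan.
rows = {
    b"row-001": {b"cf:name": b"alice"},
    b"row-002": {b"cf:name": b"bob"},
    b"row-010": {b"cf:name": b"carol"},
}
sorted_keys = sorted(rows)  # byte[] keys compare lexicographically

def get(row_key: bytes):
    """Binary-search the sorted key list, mimicking an HBase point get."""
    i = bisect.bisect_left(sorted_keys, row_key)
    if i < len(sorted_keys) and sorted_keys[i] == row_key:
        return rows[sorted_keys[i]]
    return None

print(get(b"row-002"))  # {b'cf:name': b'bob'}
```

Note that because keys sort lexicographically as bytes, b"row-010" sorts after b"row-002"; row-key design in real HBase tables exploits exactly this ordering.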
02-12-2020
05:18 AM
Hi, did you find any solution for this? I am having the same issue accessing Hive through Python. Thanks, HadoopHelp
01-05-2020
06:33 AM
Hi, the parameter spark.executor.memory (or spark.yarn.executor.memoryOverhead) can be set in the spark-submit command, or you can set it in the advanced configurations. Thanks, AKR
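For illustration, a hedged sketch of where those settings go on a spark-submit command line; the class name, jar, and memory values are hypothetical, not recommendations:

```python
# Hypothetical spark-submit invocation; both memory settings from the
# post above are passed as --conf key=value pairs.
cmd = [
    "spark-submit",
    "--conf", "spark.executor.memory=4g",
    "--conf", "spark.yarn.executor.memoryOverhead=512",
    "--class", "com.example.MyJob",  # hypothetical entry point
    "my-job.jar",                    # hypothetical application jar
]
print(" ".join(cmd))
```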
12-04-2019
08:35 AM
Do you have documentation for this?
07-25-2019
10:00 AM
This is a bit late, but I'll post the solution that worked for me. The problem was the hostnames: Impala with Kerberos wants the hostnames in lowercase.
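A small sketch of the fix (the principal below is made up): lowercase the host part of the principal while leaving the realm alone, since realms are conventionally uppercase.

```python
# Hypothetical Kerberos principal with a mixed-case hostname; per the
# post above, Impala expects the host part in lowercase.
principal = "impala/Host01.Example.COM@EXAMPLE.COM"

service, rest = principal.split("/", 1)
host, realm = rest.split("@", 1)
fixed = f"{service}/{host.lower()}@{realm}"  # only the host is lowered

print(fixed)  # impala/host01.example.com@EXAMPLE.COM
```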
06-14-2019
03:39 AM
Spark 2 is now the only Spark supported by CDH 6.x, so I am not sure you will get any reply here. Is there any reason you are still on Spark 1.6.x?
06-11-2019
02:00 AM
1 Kudo
Exit code 143 is related to memory/GC issues. Your default mapper/reducer memory settings may not be sufficient to run a large data set. Try setting higher AM, map, and reduce memory when invoking a large YARN job.
06-10-2019
10:31 AM
Hi, this error seems to indicate a missing jar file. Did you try adding the relevant jars to the classpath? Thanks, AK
06-05-2019
12:15 AM
Could you let me know what issue you are facing? What is the error?
05-28-2019
07:40 AM
I assume you want column-level authorization with the SELECT privilege. If so, you need an AD account; then create a role, grant the role to a group, and finally perform the step below: GRANT SELECT <column name> ON TABLE <table name> TO ROLE <role name>; Please go through this link and let me know if you need any more information: https://www.cloudera.com/documentation/enterprise/5-5-x/topics/sg_hive_sql.html#create_role_statement
05-22-2019
01:13 PM
I'm not able to tag you in the other thread. Let me know if anything else needs to be tried.
05-20-2019
07:24 AM
Hi Dennis, As mentioned in the (edited) post, the solution suggested above finally worked for me. Thanks again for the help! Regards, Michal
04-25-2019
04:07 PM
EC2 recently introduced Partition Placement Groups for rack-aware applications - https://aws.amazon.com/blogs/compute/using-partition-placement-groups-for-large-distributed-and-replicated-workloads-in-amazon-ec2/
04-25-2019
04:24 AM
Can you provide a solution for this in Spark 2+? The following code works in Spark 1.6 but not in Spark 2.3.0:

    Class.forName(impalaJdbcDriver).newInstance
    UserGroupInformation.getLoginUser.doAs(
      new PrivilegedAction[Connection] {
        override def run(): Connection = DriverManager.getConnection(impalaJdbcUrl)
      })

We are getting the following exception:

    User class threw exception: java.security.PrivilegedActionException: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://XXx:21050/;principal=impala/XXXX: GSS initiate failed
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:360)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:694)
    Caused by: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://XXX:21050/;principal=impala/XXXX@XXX.com: GSS initiate failed
    at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:231)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:176)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    .....
    Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed

Thanks
04-25-2019
03:01 AM
Please guide me. I'm trying to send a simple file from Hortonworks to Cloudera using the distcp command but am getting the error "could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation". Looking forward to hearing from you. Thanks.
03-26-2019
06:43 AM
I would suggest contacting sales with your questions as they would be the best to answer them.
03-04-2019
06:53 PM
vmem checks have been disabled in CDH almost since their introduction. The vmem check is not stable and is highly dependent on the Linux version and distro. If you run CDH you are already running with it disabled. Wilfred
03-02-2019
02:25 AM
Hey, I have got the same problem. I installed an external MySQL database instead of the embedded PostgreSQL. I need to know which table you are talking about. My Metastore server is stopped and gives the following error:

    Could not create process: com.cloudera.cmf.service.config.ConfigGenException: Unable to generate config file creds.localjceks

The role log shows the following error:

    4:55:22.240 PM WARN ObjectStore [pool-5-thread-12]: Failed to get database cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_76ba12f0572a8224b22b6ce00e2d92da, returning NoSuchObjectException

Can you please help me out with this problem?
02-14-2019
02:18 AM
@AmitAdhau Could you kindly help me out with the steps to deploy TLS using a self-signed certificate?
02-13-2019
06:27 AM
Can the issue below happen if a certificate is expired? I see in some logs that certificates are expired. Please send documentation for certificate renewal.

    2019-02-13 23:31:58,038 WARN 1168879507@agentServer-54778:org.mortbay.log: javax.net.ssl.SSLException: Received fatal alert: certificate_expired
    2019-02-13 23:31:58,703 WARN 1168879507@agentServer-54778:org.mortbay.log: javax.net.ssl.SSLException: Received fatal alert: certificate_expired
    2019-02-13 23:32:01,494 INFO 1645307921@scm-web-99151:com.cloudera.server.web.cmf.AuthenticationSuccessEventListener: Authentication success for user: 'admin' from 192.168.10.51
    2019-02-13 23:32:03,490 WARN 1168879507@agentServer-54778:org.mortbay.log: javax.net.ssl.SSLException: Received fatal alert: certificate_expired
01-31-2019
06:45 AM
I have a similar issue with Oracle 12.1.0.1.0: the JDBC driver (ojdbc7.jar) has a memory leak. The Oracle 12.1.0.2.0 JDBC driver should fix it.
01-24-2019
06:43 AM
Can you please explain whether, when integrating LDAP, it is still necessary to create users and groups at the OS level, or is that needed only for service users such as hive, impala, hdfs, etc.? What is the role of SSSD or Centrify in this case? As I understand it, we can create the various groups in LDAP rather than in the OS.
12-31-2018
04:35 PM
It should be set in the Cloudera Manager configuration in the UI.
12-31-2018
05:10 AM
Setting a quota will work; queries will fail with quota errors.
12-28-2018
05:48 AM
I don't think it is true that the Cloudera ODBC driver doesn't support inserts. By defining the table as a transactional table, you can insert data:

    CREATE TABLE insert_test (
      column1 string,
      column2 string)
    CLUSTERED BY (column1) INTO 3 BUCKETS
    STORED AS orcfile
    TBLPROPERTIES ('transactional'='true');

    INSERT INTO TABLE efvci_lnd_edw_dev.insert_test VALUES ('1', 'One');
    INSERT INTO TABLE efvci_lnd_edw_dev.insert_test VALUES ('2', 'Two');

Thanks, Chirag Patel
12-21-2018
12:55 AM
I can't see the relationship between yarn.scheduler.minimum-allocation-mb and the error reported. According to the YARN documentation, yarn.scheduler.minimum-allocation-mb is the "container memory minimum". But in this case the container is running out of memory, so it makes sense to increase the "maximum allocation" instead. Anyway, as was already answered, increasing "mapreduce.map.memory.mb" and "mapreduce.reduce.memory.mb" should work, as those parameters control how much memory is used by the map and reduce tasks run by Hive.
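To make the distinction concrete, here is a small sketch (illustrative values, not cluster defaults) of how YARN sizes a container: requests are rounded up to a multiple of the scheduler minimum and capped at the scheduler maximum, which is why raising the minimum does not help a container that is already out of memory.

```python
import math

# Illustrative values for the two scheduler knobs discussed above.
MIN_ALLOC_MB = 1024   # yarn.scheduler.minimum-allocation-mb
MAX_ALLOC_MB = 8192   # yarn.scheduler.maximum-allocation-mb

def container_size(requested_mb: int) -> int:
    """Round a container request up to a multiple of the minimum
    allocation, then cap it at the maximum allocation."""
    rounded = math.ceil(requested_mb / MIN_ALLOC_MB) * MIN_ALLOC_MB
    return min(rounded, MAX_ALLOC_MB)

print(container_size(1500))   # 2048: rounded up to a multiple of the minimum
print(container_size(12000))  # 8192: capped at the maximum
```

A task that needs more than the granted container dies with errors like the one in this thread, so the fix is to raise the per-task request (mapreduce.map.memory.mb / mapreduce.reduce.memory.mb) and, if needed, the maximum allocation.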