Member since: 02-11-2019
Posts: 81
Kudos Received: 3
Solutions: 0
09-16-2019
02:07 PM
Is there a way to store intermediate query results in Beeline/Hive, something like the pseudo-code below, within a single Beeline session?

beeline -u <url> -e "
  set var1 = select max(date) from tableA;
  set var2 = select count(emps) from tableB;
  set var3 = select min(date) from tableC;
  insert into table TableD select var1, var2, var3;
" --...
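Beeline itself cannot assign a query result to a variable inside a session, so one common workaround is to capture each scalar in the shell and feed it back with --hivevar. A minimal sketch, assuming a placeholder JDBC URL and the table/column names above:

  VAR1=$(beeline -u "jdbc:hive2://<host>:10000" --silent=true --showHeader=false --outputformat=tsv2 \
    -e "select max(date) from tableA;")
  VAR2=$(beeline -u "jdbc:hive2://<host>:10000" --silent=true --showHeader=false --outputformat=tsv2 \
    -e "select count(emps) from tableB;")
  # single quotes keep ${hivevar:...} away from the shell so Hive performs the substitution
  beeline -u "jdbc:hive2://<host>:10000" --hivevar var1="$VAR1" --hivevar var2="$VAR2" \
    -e 'insert into table TableD select "${hivevar:var1}", ${hivevar:var2};'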
Labels:
- Apache Hive
09-12-2019
01:35 PM
How can I find the password for the hdfs user? The last time I looked, it was all asterisks in the box in CM. Is there a way to get the actual text of the hdfs user's password? PS: The guys that set up the cluster lost all the details; I'm trying to recover them now. We just migrated from 5.12 to 6.2 and have been facing one issue after another. Regards,
09-10-2019
07:32 AM
Having more problems; I'm struggling with Unix. I issued the command:

  hdfs dfs -chown -R mapred:hdfs /user/history/

and got the error:

  chown: changing ownership of '/user/history': Permission denied. user=root is not the owner of inode=/user/history
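On HDFS, only the HDFS superuser (hdfs by default) or the current owner of a path may change its ownership, so the command fails when run as root. A sketch of the usual fix, assuming an unkerberized cluster:

  # run the same chown as the HDFS superuser instead of root
  sudo -u hdfs hdfs dfs -chown -R mapred:hdfs /user/history/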
09-09-2019
09:19 AM
hdfs dfs -ls -d /user/history
drwxrwxrwx - hdfs hdfs 0 2017-07-21 02:41 /user/history
hdfs dfs -ls -d /user/history/done
drwxrwxrwx - hdfs hdfs 0 2018-01-09 12:27 /user/history/done
09-06-2019
01:54 PM
Hi
YARN keeps failing with the error below:
Permission denied. user=mapred is not the owner of inode=/user/history/done
The folder does exist, and here is the ownership:
drwxrwxrwx - cloudera-scm hdfs 0 2017-12-01 09:29 /user/history/done/2017
drwxrwxrwx - cloudera-scm hdfs 0 2018-11-01 07:20 /user/history/done/2018
Context: we just upgraded Cloudera Manager/CDH from 5.12 to 6.2.
Exception Snippet
Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://gislny10.elab.fictcorp.com:8020/user/history/done]
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://gislny10.elab.fictcorp.com:8020/user/history/done]
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:696)
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.createHistoryDirs(HistoryFileManager.java:630)
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:591)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:97)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:150)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:226)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:236)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied. user=mapred is not the owner of inode=/user/history/done
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:303)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:270)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1839)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:64)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1861)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:856)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:509)
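The failing check in the trace is checkOwner: the JobHistory Server runs as mapred and must own /user/history/done, while the listing above shows cloudera-scm as the owner. A sketch of the usual remedy, run as the HDFS superuser (mapred:hadoop is a common CDH default, but verify the group on your cluster):

  sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /user/history/done
  # then restart the JobHistory Server from CM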
Labels:
- Apache Hadoop
- Apache YARN
08-27-2019
10:24 AM
Using SQuirreL, I was finally able to connect, but only with these additional JDBC URL settings:

  jdbc:mysql://localhost/db?useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC

How can I set those properties in CM for MySQL?
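Of those settings, serverTimezone is usually the one Connector/J actually insists on, and an alternative to threading it through every JDBC URL is to pin the time zone on the MySQL server itself. A sketch (SET GLOBAL lasts until restart; add default-time-zone='+00:00' under [mysqld] in my.cnf to persist it):

  mysql -u root -p -e "SET GLOBAL time_zone = '+00:00';"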
08-26-2019
08:06 PM
I am able to connect from the localhost as sentry, root, etc. without any issues. Does the password provided in the Sentry configuration in CM need to be encrypted or something? How can I confirm that the JDBC URL is correct? We just upgraded CM/CDH from 5.12 to 6.2.
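One way to confirm the URL's components is to replay the same host, port, user, and database with the mysql client; a sketch, assuming the database name is sentry, as in the log in the earlier post below:

  mysql --host=localhost --port=3306 --user=sentry -p sentry -e "SELECT 1;"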
08-26-2019
07:50 AM
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:211)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:300)
... 19 more
+ NUM_TABLES='[ main] SqlRunner ERROR Error connecting to db with user '\''sentry'\'' and jdbcUrl '\''jdbc:mysql://localhost:3306/sentry?useUnicode=true&characterEncoding=UTF-8'\'''
+ [[ 1 -ne 0 ]]
+ echo 'Failed to count existing tables.'
+ exit 1
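"Connection refused" means nothing accepted the TCP connection on localhost:3306, so the first thing to check is whether mysqld is actually running and listening there. A diagnostic sketch (the service name may be mysql rather than mysqld on some distros):

  systemctl status mysqld
  ss -ltn | grep 3306    # or: netstat -ltn | grep 3306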
08-06-2019
03:11 PM
Hi, I am getting the above error from Hive while trying to query a table. This is coincidentally during an insert operation on the same table. Does that mean I can't access the table while a Hive insert operation is ongoing? The table contains lots of rows, partitioned by two columns.
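If a lock manager is enabled, an insert can hold an exclusive lock that blocks concurrent reads. A quick check, assuming a lock manager is configured and using an illustrative table name and JDBC URL:

  beeline -u "jdbc:hive2://<host>:10000" -e "SHOW LOCKS myTable;"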
Labels:
- Apache Hive
- Apache Impala
07-06-2019
09:57 AM
Hi, I have a table with a lot of data, and I want to create a new table based on some column values from it. Which method is the most efficient and friendliest to cluster resources?

Pseudo-code:
1. Single job: insert into myNewTable select * from myOldTable where a=xxx etc.
2. Two jobs:
   job 1: create a dataframe from the select statement: select * from myOldTable where a=xxx etc. as dataframe
   job 2: write the dataframe as a new table: insert into myNewTable select from dataframe
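For a pure filter-and-copy, option 1 as a single Hive statement avoids the extra write/read that a dataframe round trip adds. A sketch of option 1 as CTAS, which also creates the table in the same job; the URL and filter value are placeholders:

  beeline -u "jdbc:hive2://<host>:10000" -e "
    CREATE TABLE myNewTable AS
    SELECT * FROM myOldTable WHERE a = 'xxx';
  "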
Labels:
- Apache Hive
- Apache YARN
- Cloudera Search