Member since: 02-11-2019
Posts: 78
Kudos Received: 1
Solutions: 0
09-09-2019
09:19 AM
hdfs dfs -ls -d /user/history
drwxrwxrwx   - hdfs hdfs          0 2017-07-21 02:41 /user/history
hdfs dfs -ls -d /user/history/done
drwxrwxrwx   - hdfs hdfs          0 2018-01-09 12:27 /user/history/done
09-06-2019
01:54 PM
Hi,
YARN keeps failing with the error below:
Permission denied. user=mapred is not the owner of inode=/user/history/done
The folder does exist, and here is its ownership:
drwxrwxrwx   - cloudera-scm hdfs          0 2017-12-01 09:29 /user/history/done/2017
drwxrwxrwx   - cloudera-scm hdfs          0 2018-11-01 07:20 /user/history/done/2018
Context: we just upgraded Cloudera CM/CDH from 5.12 to 6.2.
Exception Snippet
Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://gislny10.elab.fictcorp.com:8020/user/history/done]
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://gislny10.elab.fictcorp.com:8020/user/history/done]
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:696)
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.createHistoryDirs(HistoryFileManager.java:630)
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:591)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:97)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:150)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:226)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:236)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied. user=mapred is not the owner of inode=/user/history/done
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:303)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:270)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1839)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:64)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1861)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:856)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:509)
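The exception says the JobHistory Server, which runs as mapred, must own /user/history/done, while the listing above shows the directories still owned by cloudera-scm/hdfs after the upgrade. A minimal sketch of a fix, assuming the JobHistory Server runs as user mapred with group hadoop (verify both in your CM configuration before running anything):

```shell
# Sketch only: assumes the JobHistory Server user is mapred and its group
# is hadoop -- confirm these in Cloudera Manager for your cluster first.
# Run as the HDFS superuser so the chown is permitted.
sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /user/history

# Confirm the new ownership before restarting the JobHistory Server:
sudo -u hdfs hdfs dfs -ls -d /user/history /user/history/done
```

After the ownership change, restarting the JobHistory Server should let it pass the owner check in tryCreatingHistoryDirs.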
Labels:
- Apache Hadoop
- Apache YARN
08-27-2019
10:24 AM
Using SQuirreL, I was finally able to connect, but only with these additional JDBC URL settings:
jdbc:mysql://localhost/db?useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC
How can I set those properties in CM for MySQL?
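One alternative to passing serverTimezone on every JDBC URL is to set the MySQL server's own time zone, so Connector/J no longer needs the parameter. A sketch, assuming you have MySQL root access (the my.cnf path may differ on your host):

```shell
# Sketch: set the server time zone globally so JDBC clients no longer
# need serverTimezone=UTC in the URL. Assumes MySQL root access.
mysql -u root -p -e "SET GLOBAL time_zone = '+00:00';"

# To make the setting persistent across restarts, add this under the
# [mysqld] section of /etc/my.cnf (path may vary), then restart mysqld:
#   default-time-zone = '+00:00'
```

This changes server behavior for all clients, so check whether other applications depend on the current server time zone before applying it.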
08-26-2019
08:06 PM
I am able to connect from the localhost as sentry, root, etc. without any issues. Does the password provided in the Sentry configuration in CM need to be encrypted or something? How can I confirm that the JDBC URL is correct? We just upgraded CM/CDH from 5.12 to 6.2.
08-26-2019
07:50 AM
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:211)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:300)
... 19 more
+ NUM_TABLES='[ main] SqlRunner ERROR Error connecting to db with user '\''sentry'\'' and jdbcUrl '\''jdbc:mysql://localhost:3306/sentry?useUnicode=true&characterEncoding=UTF-8'\'''
+ [[ 1 -ne 0 ]]
+ echo 'Failed to count existing tables.'
+ exit 1
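"Connection refused" at the socket level means nothing accepted the connection on localhost:3306, so the credentials never came into play. A quick checklist, sketched as commands (tool names like ss/netstat are assumptions about what is installed on the host):

```shell
# Is mysqld running at all?
systemctl status mysqld            # or: service mysqld status

# Is anything listening on port 3306?
ss -tlnp | grep 3306               # or: netstat -tlnp | grep 3306

# Can we connect the same way the Sentry service does?
mysql -h localhost -P 3306 -u sentry -p -e "SELECT 1;"

# Also check /etc/my.cnf for bind-address or skip-networking entries
# that could block TCP connections while local socket logins still work.
```

If the socket login works but TCP to 3306 is refused, bind-address/skip-networking in my.cnf is a common culprit, since the mysql client uses the Unix socket for localhost by default while JDBC always uses TCP.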
08-06-2019
03:11 PM
Hi, I am getting the above error from Hive while trying to query a table. This is coincidentally during an insert operation on the table. Does it mean I can't access the table while a Hive insert operation is ongoing? The table contains lots of rows, partitioned by two columns.
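If the table uses Hive's concurrency/locking support, a running INSERT can hold locks that block readers until it finishes. One way to see what is currently holding the table, sketched with placeholder database/table names:

```shell
# Placeholder names (mydb.mytable) -- substitute your own database/table.
# Shows current lock holders on the table if Hive concurrency is enabled.
hive -e "SHOW LOCKS mydb.mytable EXTENDED;"
```

If SHOW LOCKS reports an exclusive lock held by the insert, the query is simply waiting on it; retrying after the insert completes is the usual answer.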
Labels:
- Apache Hive
- Apache Impala
07-06-2019
09:57 AM
Hi, I have a table with a lot of data, and I want to create a new table based on some column values from it. Which method is most efficient and friendliest to cluster resources?
Pseudo-code:
1. Single job:
   insert into myNewTable select * from myOldTable where a=xxx etc.
2. Two jobs:
   job 1: create a dataframe from the select statement: select * from myOldTable where a=xxx etc.
   job 2: write the dataframe as the new table: insert into myNewTable select from dataframe
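If the goal is just a filtered copy, option 1 can be collapsed into a single CREATE TABLE AS SELECT, which avoids materializing an intermediate dataframe at all. A sketch with placeholder table names and filter value:

```shell
# Single-job approach: CREATE TABLE AS SELECT.
# Table names and the filter value 'xxx' are placeholders.
hive -e "CREATE TABLE myNewTable AS SELECT * FROM myOldTable WHERE a = 'xxx';"
```

This runs as one job and writes the output directly to the new table's location, so there is no second pass over the data as in the two-job dataframe approach.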
Labels:
- Apache Hive
- Apache YARN
- Cloudera Search
05-10-2019
06:51 AM
We do not want to keep the old partitions. We just want to re-partition the data using the timestamp values. The data currently exists only as partitioned by the string value.
05-09-2019
09:01 AM
Hi EricL, ColumnA is of a different data type than ColumnB: ColumnA contains department names (string) and ColumnB contains timestamps (date-time). The table is already partitioned by the department names, which are strings; now we want to change it and partition by the timestamp column (date-time) instead. Could you explain your process a little more?
05-08-2019
06:52 AM
I was considering...
1. Create a new external table with the new partitioning.
2. insert into newtable select ... from oldtable ... to a new HDFS location.
3. Drop the old table and delete its HDFS folders.
The problem here is that at some point both tables will have to exist.
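The copy step above can be sketched with Hive dynamic partitioning, which derives the new partition value per row from the timestamp column. Table and column names are placeholders, and bucketing by to_date() is an assumption about the desired partition grain:

```shell
# Sketch: repartition into a timestamp-derived partition column via
# dynamic partitioning. Table/column names are placeholders; adjust the
# partition expression (to_date, year/month, etc.) to the grain you want.
hive -e "
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE newtable PARTITION (event_date)
SELECT col1, col2, to_date(event_ts) AS event_date
FROM oldtable;"
```

Since both tables must coexist only during the copy, dropping the old (external) table and deleting its HDFS folders immediately after validating the new table keeps the window of double storage short.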