
Spark sticky bit error when trying to overwrite a table using .parquet

I receive this error when I try to overwrite a table in Apache Spark.

 

Diagnostics:
User class threw exception: org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit: user=...expl.db"
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkStickyBit(DefaultAuthorizationProvider.java:387)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:159)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3529)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:8650)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem...
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4070)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4054)
    at org.apache.hadoop...
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:308)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:603)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop... (...:1073)
    at org.apache.hadoop...
    at org.apache.hadoop...
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(...)
    at org.apache.hadoop.security.UserGroupInformation.doAs(...)
    at org.apache.hadoop... (...:2214)
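If I read the trace correctly, the delete that mode("overwrite") performs before writing is what the sticky bit blocks: with the sticky bit set on the parent directory, only the owner of an entry (or the superuser) may delete it. This is a minimal sketch to check who owns the target path, assuming `sc` is the SparkContext and `expl_hdfs` is the same output path used in my write below:

  import org.apache.hadoop.fs.{FileSystem, Path}

  // Sketch: inspect ownership/permissions of the output directory
  // that the overwrite tries to delete.
  val fs = FileSystem.get(sc.hadoopConfiguration)
  val status = fs.getFileStatus(new Path(expl_hdfs))
  println(s"owner=${status.getOwner} group=${status.getGroup} perm=${status.getPermission}")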

 

 

I am using Spark 1.6 and CDH 5.7.2.

I try to overwrite using a HiveContext and persisting the DataFrame:

 

  // Write the partitioned Parquet data, replacing what is already there
  dfPartId.coalesce(nCoalesce).write.mode("overwrite").partitionBy(partitionCol).parquet(expl_hdfs)
  // Register any new partitions with the Hive metastore
  hiveContext.sql(s"msck repair table $capiTbl")
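To confirm the failure comes from HDFS permissions rather than from Spark itself, the implicit delete can be reproduced directly. This is only a sketch, and it should raise the same AccessControlException for a user who does not own the directory:

  import org.apache.hadoop.fs.{FileSystem, Path}

  // Sketch: reproduce the recursive delete that mode("overwrite") performs.
  // If the job user does not own the entry under the sticky-bit parent,
  // this throws the same AccessControlException as the Spark job.
  val fs = FileSystem.get(sc.hadoopConfiguration)
  fs.delete(new Path(expl_hdfs), true)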

Thanks
