<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: sqoop hive: User does not belong to hdfs in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170886#M41879</link>
    <description>&lt;A rel="user" href="https://community.cloudera.com/users/13286/huangdengke.html" nodeid="13286"&gt;@Huahua Wei&lt;/A&gt;&lt;P&gt;You are running the sqoop import command as 'root', which does not belong to the hadoop user group. A sqoop import creates some internal temporary directories in HDFS, so the current user (in your case, root) must have permission to create and modify files in HDFS. I would recommend re-running the same command as a user who belongs to the hadoop user group, for example: hive. With this, you should not see the above issue. Let me know if this helps.&lt;/P&gt;</description>
    <pubDate>Tue, 27 Sep 2016 14:17:54 GMT</pubDate>
    <dc:creator>apathan</dc:creator>
    <dc:date>2016-09-27T14:17:54Z</dc:date>
    <item>
      <title>sqoop hive: User does not belong to hdfs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170885#M41878</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@insightcluster135 /]# sqoop import --connect jdbc:oracle:thin:@10.107.217.161:1521/odw --username ****** --password ******** --query "select * from hw_cpb_relation where LAST_UPDATED_DATE &amp;gt; TO_DATE('2016-09-21 00:00:00', 'YYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') AND \$CONDITIONS" --target-dir /user/root/mytest  --hive-import --hive-table hw_cpb_relation -m 1
Warning: /usr/hdp/2.5.0.0-1245/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/09/26 16:05:22 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.5.0.0-1245
16/09/26 16:05:22 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/09/26 16:05:22 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
16/09/26 16:05:22 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
16/09/26 16:05:22 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
16/09/26 16:05:22 INFO manager.SqlManager: Using default fetchSize of 1000
16/09/26 16:05:22 INFO tool.CodeGenTool: Beginning code generation
16/09/26 16:05:23 INFO manager.OracleManager: Time zone has been set to GMT
16/09/26 16:05:23 INFO manager.SqlManager: Executing SQL statement: select * from hw_cpb_relation where LAST_UPDATED_DATE &amp;gt; TO_DATE('2016-09-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') AND  (1 = 0) 
16/09/26 16:05:23 INFO manager.SqlManager: Executing SQL statement: select * from hw_cpb_relation where LAST_UPDATED_DATE &amp;gt; TO_DATE('2016-09-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') AND  (1 = 0) 
16/09/26 16:05:23 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.5.0.0-1245/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/e319b0ed1331b8c84f27e37105ddd274/QueryResult.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/09/26 16:05:25 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/e319b0ed1331b8c84f27e37105ddd274/QueryResult.jar
16/09/26 16:05:25 INFO mapreduce.ImportJobBase: Beginning query import.
16/09/26 16:05:26 INFO impl.TimelineClientImpl: Timeline service address: &lt;A href="http://insightcluster132.aaabbb.com:8188/ws/v1/timeline/"&gt;http://insightcluster132.aaabbb.com:8188/ws/v1/timeline/&lt;/A&gt;
16/09/26 16:05:26 INFO client.RMProxy: Connecting to ResourceManager at insightcluster133.aaabbb.com/202.1.2.133:8050
16/09/26 16:05:26 INFO client.AHSProxy: Connecting to Application History server at insightcluster132.aaabbb.com/202.1.2.132:10200
16/09/26 16:05:28 INFO db.DBInputFormat: Using read commited transaction isolation
16/09/26 16:05:28 INFO mapreduce.JobSubmitter: number of splits:1
16/09/26 16:05:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474533507895_0038
16/09/26 16:05:28 INFO impl.YarnClientImpl: Submitted application application_1474533507895_0038
16/09/26 16:05:28 INFO mapreduce.Job: The url to track the job: &lt;A href="http://InsightCluster133.aaabbb.com:8088/proxy/application_1474533507895_0038/"&gt;http://InsightCluster133.aaabbb.com:8088/proxy/application_1474533507895_0038/&lt;/A&gt;
16/09/26 16:05:28 INFO mapreduce.Job: Running job: job_1474533507895_0038
16/09/26 16:05:34 INFO mapreduce.Job: Job job_1474533507895_0038 running in uber mode : false
16/09/26 16:05:34 INFO mapreduce.Job:  map 0% reduce 0%
16/09/26 16:08:02 INFO mapreduce.Job:  map 100% reduce 0%
16/09/26 16:08:02 INFO mapreduce.Job: Job job_1474533507895_0038 completed successfully
16/09/26 16:08:02 INFO mapreduce.Job: Counters: 30
  File System Counters
  FILE: Number of bytes read=0
  FILE: Number of bytes written=158561
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
  HDFS: Number of bytes read=87
  HDFS: Number of bytes written=4085515414
  HDFS: Number of read operations=4
  HDFS: Number of large read operations=0
  HDFS: Number of write operations=2
  Job Counters 
  Launched map tasks=1
  Other local map tasks=1
  Total time spent by all maps in occupied slots (ms)=145371
  Total time spent by all reduces in occupied slots (ms)=0
  Total time spent by all map tasks (ms)=145371
  Total vcore-milliseconds taken by all map tasks=145371
  Total megabyte-milliseconds taken by all map tasks=818729472
  Map-Reduce Framework
  Map input records=18119433
  Map output records=18119433
  Input split bytes=87
  Spilled Records=0
  Failed Shuffles=0
  Merged Map outputs=0
  GC time elapsed (ms)=859
  CPU time spent (ms)=150120
  Physical memory (bytes) snapshot=1008168960
  Virtual memory (bytes) snapshot=6935724032
  Total committed heap usage (bytes)=983040000
  File Input Format Counters 
  Bytes Read=0
  File Output Format Counters 
  Bytes Written=4085515414
16/09/26 16:08:02 INFO mapreduce.ImportJobBase: Transferred 3.8049 GB in 156.1788 seconds (24.9474 MB/sec)
16/09/26 16:08:02 INFO mapreduce.ImportJobBase: Retrieved 18119433 records.
16/09/26 16:08:02 INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners
16/09/26 16:08:02 INFO manager.OracleManager: Time zone has been set to GMT
16/09/26 16:08:02 INFO manager.SqlManager: Executing SQL statement: select * from hw_cpb_relation where LAST_UPDATED_DATE &amp;gt; TO_DATE('2016-09-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') AND  (1 = 0) 
16/09/26 16:08:02 INFO manager.SqlManager: Executing SQL statement: select * from hw_cpb_relation where LAST_UPDATED_DATE &amp;gt; TO_DATE('2016-09-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') AND  (1 = 0) 
16/09/26 16:08:02 WARN hive.TableDefWriter: Column ORGANIZATION_ID had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column WIP_ENTITY_ID had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column CPB_ITEM_ID had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column ZCB_ITEM_ID had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column CREATED_BY had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column CREATED_DATE had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column LAST_UPDATED_BY had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column LAST_UPDATED_DATE had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column LOAD_BY had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column LOAD_DATE had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column COLLECT_USER had to be cast to a less precise type in Hive
16/09/26 16:08:02 WARN hive.TableDefWriter: Column CHK_FLAG had to be cast to a less precise type in Hive
16/09/26 16:08:02 INFO hive.HiveImport: Loading uploaded data into Hive&lt;/P&gt;&lt;P&gt;Logging initialized using configuration in jar:file:/usr/hdp/2.5.0.0-1245/hive/lib/hive-common-1.2.1000.2.5.0.0-1245.jar!/hive-log4j.properties
OK
Time taken: 1.748 seconds
Loading data to table default.hw_cpb_relation
Failed with exception org.apache.hadoop.security.AccessControlException: &lt;STRONG&gt;User does not belong to hdfs&lt;/STRONG&gt;
  at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:88)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1708)
  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:821)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:472)
  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)&lt;/P&gt;&lt;P&gt;FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask&lt;/P&gt;&lt;P&gt;==========================================&lt;/P&gt;&lt;P&gt;Am I hitting this Hive bug?&lt;/P&gt;&lt;P&gt;&lt;A href="https://issues.apache.org/jira/browse/HIVE-13810" target="_blank"&gt;https://issues.apache.org/jira/browse/HIVE-13810&lt;/A&gt;&lt;/P&gt;&lt;P&gt;If so, how can I fix it in HDP 2.5?&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Tue, 27 Sep 2016 14:03:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170885#M41878</guid>
      <dc:creator>diablo2</dc:creator>
      <dc:date>2016-09-27T14:03:49Z</dc:date>
    </item>
    <item>
      <title>Re: sqoop hive: User does not belong to hdfs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170886#M41879</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/13286/huangdengke.html" nodeid="13286"&gt;@Huahua Wei&lt;/A&gt;&lt;P&gt;You are running the sqoop import command as 'root', which does not belong to the hadoop user group. A sqoop import creates some internal temporary directories in HDFS, so the current user (in your case, root) must have permission to create and modify files in HDFS. I would recommend re-running the same command as a user who belongs to the hadoop user group, for example: hive. With this, you should not see the above issue. Let me know if this helps.&lt;/P&gt;</description>
      <pubDate>Tue, 27 Sep 2016 14:17:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170886#M41879</guid>
      <dc:creator>apathan</dc:creator>
      <dc:date>2016-09-27T14:17:54Z</dc:date>
    </item>
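The reply above recommends re-running the import as a user who belongs to the hadoop group. A minimal sketch of that workaround on an HDP-style cluster, assuming the `hive` service user exists, `sqoop` is on the PATH, and the staging path below is a placeholder (credentials and the query are elided; use your own):

```shell
# Check which groups HDFS resolves for the current user; 'root' is
# typically not in the hadoop group, which is why the chown fails.
hdfs groups root

# Pre-create a staging directory owned by a user in the hadoop group,
# then run the same sqoop import as that user (here: hive).
sudo -u hdfs hdfs dfs -mkdir -p /user/hive/mytest
sudo -u hdfs hdfs dfs -chown hive:hadoop /user/hive/mytest

sudo -u hive sqoop import \
  --connect jdbc:oracle:thin:@10.107.217.161:1521/odw \
  --username '******' -P \
  --query "select * from hw_cpb_relation where \$CONDITIONS" \
  --target-dir /user/hive/mytest \
  --hive-import --hive-table hw_cpb_relation -m 1
```

Note that `-P` replaces `--password`, following the insecure-password warning already visible in the job log. The essential point is only that the effective user is a member of a group with write access to both the target directory and the Hive warehouse location.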
    <item>
      <title>Re: sqoop hive: User does not belong to hdfs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170887#M41880</link>
      <description>&lt;P&gt;Solved. Thanks!&lt;/P&gt;</description>
      <pubDate>Wed, 28 Sep 2016 10:45:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170887#M41880</guid>
      <dc:creator>diablo2</dc:creator>
      <dc:date>2016-09-28T10:45:44Z</dc:date>
    </item>
    <item>
      <title>Re: sqoop hive: User does not belong to hdfs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170888#M41881</link>
      <description>&lt;P&gt;Hello Ayub,&lt;/P&gt;&lt;P&gt;Why did you give 'hive' as an example of a user in the hadoop group?&lt;/P&gt;&lt;P&gt;Could you please help me understand how I can run this as the hive user?&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Thu, 01 Jun 2017 04:37:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170888#M41881</guid>
      <dc:creator>varunjoshi</dc:creator>
      <dc:date>2017-06-01T04:37:15Z</dc:date>
    </item>
    <item>
      <title>Re: sqoop hive: User does not belong to hdfs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170889#M41882</link>
      <description>&lt;P&gt;I'm facing the same issue. Can you explain how this was resolved?&lt;/P&gt;</description>
      <pubDate>Tue, 29 Aug 2017 18:03:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170889#M41882</guid>
      <dc:creator>nitishpoul93</dc:creator>
      <dc:date>2017-08-29T18:03:15Z</dc:date>
    </item>
    <item>
      <title>Re: sqoop hive: User does not belong to hdfs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170890#M41883</link>
      <description>&lt;P&gt;Hello Ayub,&lt;BR /&gt;The error message I'm getting is below:&lt;BR /&gt;Failed with exception org.apache.hadoop.security.AccessControlException: User does not belong to SVRDCHDP&lt;/P&gt;&lt;PRE&gt;	at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:88)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1708)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:821)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:472)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
 &lt;/PRE&gt;&lt;P&gt;FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask&lt;/P&gt;&lt;P&gt;May I know how this was solved?&lt;/P&gt;</description>
      <pubDate>Tue, 29 Aug 2017 22:02:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170890#M41883</guid>
      <dc:creator>nitishpoul93</dc:creator>
      <dc:date>2017-08-29T22:02:32Z</dc:date>
    </item>
    <item>
      <title>Re: sqoop hive: User does not belong to hdfs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170891#M41884</link>
      <description>&lt;P&gt;I had the same issue with sqoop.&lt;/P&gt;&lt;P&gt;I solved it by pointing my "target-dir" to an HDFS path where my user has read/write privileges.&lt;/P&gt;</description>
      <pubDate>Wed, 06 Sep 2017 05:03:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-hive-User-does-not-belong-to-hdfs/m-p/170891#M41884</guid>
      <dc:creator>nahuel-tarello</dc:creator>
      <dc:date>2017-09-06T05:03:55Z</dc:date>
    </item>
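The last reply's fix, pointing --target-dir at a path the importing user can write, can be sketched as a couple of commands; `myuser` and the path are placeholders for your own user and directory:

```shell
# Create a target directory owned by the user who will run sqoop,
# so the import's temporary files and the final chown both succeed.
sudo -u hdfs hdfs dfs -mkdir -p /user/myuser/mytest
sudo -u hdfs hdfs dfs -chown -R myuser:myuser /user/myuser/mytest

# Verify ownership before re-running the import with:
#   --target-dir /user/myuser/mytest
hdfs dfs -ls -d /user/myuser/mytest
```

This avoids the AccessControlException because the final move/chown step then operates on files the user already owns.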
  </channel>
</rss>

