
CREATE TABLE AS SELECT returns error 'Failed to open HDFS file for writing'

ski309
Explorer

I created a Cloudera cluster for Impala.

  • Cloudera version: Cloudera Express 5.8.1
  • Impala version: 2.6.0-cdh5.8.0 RELEASE

If I run the following command via impala-shell:

create table test as select 1;


The following error is returned:

WARNINGS: Failed to open HDFS file for writing: hdfs://[DNhostname]:8020/user/hive/warehouse/test/_impala_insert_staging/[...]/[...].0.
Error(255): Unknown error 255

However, if I run:

create table test (testcol int);
insert into test select 1;


...the table is created and the data is inserted without a hitch.

Any ideas on why the first statement might fail while the second set of commands succeeds, and what I could do to fix it? I might have messed something up with directory permissions, either locally or on HDFS, although I've set 'dfs.permissions' to false to turn off HDFS permission checking. I also don't know what to check on the local folders to ensure the correct user(s) have the right permissions. In any case, I don't understand why permissions would cause the 'CREATE TABLE AS SELECT' statement to fail but not the 'CREATE ... INSERT' pair.
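
In case it helps, this is the kind of check I know how to run against the warehouse directory from the error message (just a sketch; the second path will only exist if the table directory was actually created):

hdfs dfs -ls /user/hive/warehouse
hdfs dfs -ls /user/hive/warehouse/test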

I should also mention that 'DNhostname' is the hostname of the HDFS datanode/impala daemon that I'm SSHed into, not the hostname of the namenode. This worries me because 'DNhostname' was originally where my namenode was located; I moved it to a different host for reasons outside the scope of this question. Is it possible that 'CREATE TABLE AS SELECT' is still expecting the namenode to be 'DNhostname' for some reason?
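
If it's useful, I think I can check which locations the metastore has recorded with something like the following (just a sketch; 'default' and 'test' are the database and table from my example):

describe database default;
describe formatted test;

The Location field in the output should show which namenode host the metastore believes the data lives on.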

1 ACCEPTED SOLUTION

venkatsambath

@ski309

Was the step below performed after moving the namenode?

https://www.cloudera.com/documentation/enterprise/5-7-x/topics/admin_nn_migrate_roles.html#concept_f...

The Hive Metastore database (HMS) stores the locations of tables and databases, so after moving the namenode it is necessary to perform the step above to update those locations in the HMS.
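
In short, that procedure points the metastore at the new namenode; the relevant piece is the Hive metatool, used roughly like this (the hostnames are placeholders, and the linked doc describes the exact steps and which services to stop first):

hive --service metatool -listFSRoot
hive --service metatool -updateLocation hdfs://[new-NN-hostname]:8020 hdfs://[DNhostname]:8020

The first command prints the filesystem root the HMS currently records; the second rewrites the old namenode URI to the new one.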


14 REPLIES

venkatsambath

1. Can you check the value of "fs.defaultFS" in the core-site.xml file in the impalad process directory?

a. The impalad process directory is:

/var/run/cloudera-scm-agent/process/<num>-impala-IMPALAD

Replace <num> with the latest number under the process directory.

Then you can run:

grep -Rn -B1 8020 *

Please let me know if the hostname in the value tag matches the current namenode.
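
For reference, the property in core-site.xml should look something like this (the hostname is a placeholder):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://[current-NN-hostname]:8020</value>
</property>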

ski309
Explorer
@mbigelow
If the test table exists before I run the CTAS statement, I receive an error that the test table already exists.

@venkatsambath

The hostname in the value tag *does* match the current namenode.


ski309
Explorer
@venkatsambath
I had not performed those steps! I must have missed that; thank you for pointing it out. Everything is working as expected now.

venkatsambath

You're welcome!