Member since 01-19-2017 · 7 Posts · 0 Kudos Received · 0 Solutions
01-20-2017
06:53 AM
@alex.behm No, I'm not. I just missed a final step when I moved our primary NN, which @venkatsambath pointed out.
01-19-2017
09:14 AM
@venkatsambath I did not do those actions! Must have missed that, thank you for pointing it out. Everything is working as expected now.
01-19-2017
09:02 AM
@mbigelow If the test table exists before I run the CTAS statement, I receive an error that the test table already exists. @venkatsambath The hostname in the value tag *does* match the current namenode.
01-19-2017
08:58 AM
I think I've found the problem, but I'm unsure how best to fix it. If I run:

desc database default;

...the 'location' property shows the hdfs:// URL of the datanode, not the namenode. My second, working cluster's 'location' property shows the URL of the namenode. The problem is that there doesn't appear to be an ALTER DATABASE statement. How can I easily change the location property of the default database? The database already has many internal and external tables; I could back them up and recreate the default database from scratch, but if there's a way to modify the location property directly, that would be great.
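In case it helps anyone later: a hedged sketch of one possible workaround, not a tested fix. CDH 5.8 ships Hive 1.1, which (to my knowledge) has no ALTER DATABASE ... SET LOCATION, so the usual route is to edit the Hive metastore database directly. The DBS/SDS table and column names below are the stock metastore schema, and '[NNhostname]' is a placeholder for the current namenode host; take a metastore backup before touching anything.

```sql
-- Hedged sketch, run against the Hive metastore database (not impala-shell).
-- Back up the metastore first; verify DBS/SDS against your schema version.

-- Point the default database's warehouse location at the current namenode:
UPDATE DBS
   SET DB_LOCATION_URI = 'hdfs://[NNhostname]:8020/user/hive/warehouse'
 WHERE NAME = 'default';

-- Existing tables record their own locations in SDS and may need the same fix:
UPDATE SDS
   SET LOCATION = REPLACE(LOCATION, 'hdfs://[DNhostname]:8020', 'hdfs://[NNhostname]:8020')
 WHERE LOCATION LIKE 'hdfs://[DNhostname]:8020%';
```

After changing the metastore, running INVALIDATE METADATA from impala-shell should make the Impala catalog pick up the new locations.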
01-19-2017
08:28 AM
After messing around a little more, I found something strange. If I switch to a different database before running the 'create table as select' statement, the table is created and populated without error:

create database testdb;
use testdb;
create table test as select 1;
01-19-2017
08:20 AM
Thanks for the quick reply. The query 'create table test as select 1' works in a second Impala cluster I have in place, which uses the same Cloudera and Impala versions. It appears the unnamed column in the create statement is automatically named '_c0'. The query 'create table db.table2 as select * from db.table1' fails in the same way in my problem cluster but works correctly in my second, working cluster.
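As a small aside sketch (standard Impala SQL behavior, with 'testcol' as an illustrative name): the auto-generated '_c0' column name can be avoided by aliasing the expression in the CTAS select list:

```sql
-- Aliasing the literal gives the new column an explicit name instead of '_c0':
create table test as select 1 as testcol;
```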
01-19-2017
06:00 AM
I created a Cloudera cluster for Impala.

Cloudera version: Cloudera Express 5.8.1
Impala version: 2.6.0-cdh5.8.0 RELEASE

If I run the following command via impala-shell:

create table test as select 1;

...the following error is returned:

WARNINGS: Failed to open HDFS file for writing: hdfs://[DNhostname]:8020/user/hive/warehouse/test/_impala_insert_staging/[...]/[...].0.
Error(255): Unknown error 255

However, if I run:

create table test (testcol int);
insert into test select 1;

...the table is created and the data is inserted without a hitch. Any ideas why the first statement might fail while the second set of commands succeeds, and what I could do to fix it?

I might have messed something up with directory permissions, either locally or on HDFS. However, I've set 'dfs.permissions' to false to turn off HDFS permissions, and I don't know what to check on the local folders to ensure the correct user(s) have the right permissions. In either case, I don't know why permissions would cause the 'CREATE TABLE AS SELECT' statement to fail but not the 'CREATE ... INSERT'.

I should also mention that 'DNhostname' is the hostname of the HDFS datanode/Impala daemon that I'm SSHed into, not the hostname of the namenode. This worries me because 'DNhostname' was originally where my namenode was located; I moved the namenode to a different host for reasons outside the scope of this question. Is it possible that 'CREATE TABLE AS SELECT' still expects the namenode to be 'DNhostname' for some reason?
Labels:
- Apache Impala
- HDFS