Member since: 09-26-2014
Posts: 44
Kudos Received: 10
Solutions: 7
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4927 | 02-19-2015 03:41 AM |
| | 965 | 01-07-2015 01:16 AM |
| | 8373 | 12-10-2014 04:59 AM |
| | 4340 | 12-08-2014 01:39 PM |
| | 4542 | 11-20-2014 08:16 AM |
09-14-2018
03:07 AM
The queries were submitted via a JDBC client. Yes, I tried refreshing, and after about 30 minutes all the queries finally appeared.
09-14-2018
01:15 AM
Hi, I have a brand-new installation of CDH 5.15 where all services (management services, Impala) are green. I executed about 10 queries on Impala and then checked them under Cloudera Manager -> Impala Queries. I noticed two issues:
- Some of the queries appeared in the list only while they were in the "running" state.
- After the statements finished, no queries were reported in Impala Queries at all.
The obvious explanation would be the time filter: I checked 30m, 1h, 2h and 1d, still NO results. Another obvious candidate is the search filter, but it is empty, and still NO results. I checked the Service Monitor of CM, it is green, so I suppose it is collecting data, and the Impala storage (firehose_impala_storage_bytes) is 1 GB. The only "warning" concerns the Service Monitor memory, which is much less than recommended, but this is a new cluster with no workload running, and CM reports that the heap usage is under 1 GB:
The recommended non-Java memory size is 12.0 GiB, 10.5 GiB more than is configured.
What could be the cause of the empty list? Why is Cloudera Manager not collecting the Impala queries? Or maybe it is collecting them, but then why is it not showing them to me? Thanks
Labels:
- Cloudera Manager
- Impala
09-12-2018
10:28 AM
What does the data look like? I think each JSON record has to be on a single row (so it cannot contain newlines), with one JSON object per line. At least I had a similar issue when I wanted to load data via an external table and the JSON contained one big list with many dict elements.
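To illustrate the one-object-per-line layout, here is a minimal sketch (my own example, not from this thread) of a Hive external table over such data, assuming the hive-hcatalog JsonSerDe that ships with CDH is on the classpath; the table, column and path names are placeholders:

-- each line under /data/events_json is one complete JSON object, e.g. {"id":1,"name":"foo"},
-- with no pretty-printing and no enclosing [...] array
CREATE EXTERNAL TABLE events_json (
  id   INT,
  name STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE
LOCATION '/data/events_json';

-- a parse error on this probe usually means multi-line or array-wrapped JSON
SELECT id, name FROM events_json LIMIT 10;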
09-12-2018
10:24 AM
You should use ntp or chrony to synchronize the clocks. If they are already in use and the clocks are still out of sync, the problem may be on the network. Regarding the HBase restart, I would do a Stop, then check on all nodes that no HBase process is still running, and only then Start.
04-08-2015
03:00 AM
Hi, we installed the 64-bit ODBC driver from DataDirect for Impala and tried to establish a connection between SQL Server 2014 (running on Windows Server 2012 R2) and Cloudera Impala. After setting up the ODBC driver, the test connection was OK. But the linked server is not working: listing tables works, but a simple SELECT statement returns this kind of error:
OLE DB provider "MSDASQL" for linked server "IMPALA" returned message "Unspecified error".
Msg 7311, Level 16, State 2, Line 1
Cannot obtain the schema rowset "DBSCHEMA_COLUMNS" for OLE DB provider "MSDASQL" for linked server "IMPALA". The provider supports the interface, but returns a failure code when it is used.
I also contacted the technical team at Progress Software but have had no response yet. Any ideas?
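One workaround that might be worth trying (my own suggestion, not confirmed by the vendor) is to bypass the four-part-name path, which is where SQL Server requests the DBSCHEMA_COLUMNS rowset, and push the statement straight through the provider with OPENQUERY. A minimal sketch, where the column, database and table names are placeholders and "IMPALA" is the linked server name from the error above:

-- pass-through query: SQL Server does not request the schema rowset for this path
SELECT *
FROM OPENQUERY(IMPALA, 'SELECT col1, col2 FROM some_db.some_table LIMIT 10');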
03-27-2015
12:48 AM
Created a case for this issue; hopefully the engineering team will come back with a solution. Tomas
03-23-2015
02:16 AM
Hi, we are trying to download a bulk of data from a CDH cluster to a Windows server via the Windows ODBC Driver for Impala, version 2.5.22. The ODBC driver works, but the row-dispatching performance is really bad: roughly 3M rows/minute. We checked the possible bottlenecks for this kind of download, but neither the cluster nor the receiving Windows server was under load at all: CPU around 5%, network cards running at 10 Gbit, plenty of RAM, and the target disk the data is written to is a RAID-0 SSD with 1 GB/s maximum throughput, so we don't know which component of the transfer slows the records down. We tried running multiple parallel threads, which helped a little (about a 50% performance increase), but the overall throughput is still low. We also tried to tweak the transfer batch size in the ODBC driver; it does not seem to affect the performance at all. The setup is CDH 5.3 and Microsoft SQL Server 2014, with Impala attached via a linked server in MS SQL. Any ideas how to increase the transfer speed? Thanks, Tomas
02-19-2015
03:44 AM
I have a simple Pig program with a simple LOAD and STORE/DUMP statement, but it refuses to load the test data file. The path in HDFS is /user/dwh/ and the file is called test.txt. I assume Pig is not aware of the HA setting of my cluster. Any ideas?
Input path does not exist: hdfs://nameservice1/user/dwh/test.txt
02-19-2015
03:41 AM
1 Kudo
I found piggybank.jar in /opt/cloudera/parcels/CDH/lib/pig/. The problem was that when I called REGISTER piggybank.jar, the grunt shell gave me this exception:
grunt> REGISTER piggybank.jar
2015-02-19 12:38:49,841 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-02-19 12:38:49,849 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 101: file 'piggybank.jar' does not exist.
After changing the current directory to the lib path, the REGISTER worked fine. Alternatively, use the full path: REGISTER /opt/cloudera/parcels/CDH/lib/pig/piggybank.jar
Tomas
02-19-2015
03:20 AM
Hey guys, does Cloudera pack the Piggybank UDFs into CDH? I tried to find anything called piggybank in the distribution but was not successful. Can somebody advise me how to add the Piggybank UDFs to an existing Pig installation in CDH? https://cwiki.apache.org/confluence/display/PIG/PiggyBank Thanks, Tomas
Tags:
- Pig
01-07-2015
01:16 AM
This issue - reading large gzip-compressed tables in Impala - was, based on my experience, solved in the Impala 2.1 release (CDH 5.3.1). Cloudera did not confirm it as a bug: when I arranged a conference call with Cloudera support and they tried to investigate where the problem was, they were not able to define the root cause. I assume this change helped to solve the problem (Impala 2.1.0 release notes): "The memory requirement for querying gzip-compressed text is reduced. Now Impala decompresses the data as it is read, rather than reading the entire gzipped file and decompressing it in memory." But this is not confirmed; after the upgrade Impala did not crash anymore. T
01-07-2015
01:12 AM
More interestingly, this difference disappeared after upgrading to CDH 5.3.1. T.
01-06-2015
06:21 AM
Does anybody have experience with processing XML data (imported from MSSQL), and with how to store and analyze it in Impala? Thanks, Tomas
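One approach I am considering (an assumption on my side, not something confirmed here) is to land the raw XML as a single STRING column, flatten it in Hive with the built-in xpath UDFs, and let Impala query the flattened Parquet table. A sketch, with all table, column and element names as placeholders:

-- raw_xml_import(xml_doc STRING) holds one XML document per row
CREATE TABLE orders_flat STORED AS PARQUET AS
SELECT xpath_string(xml_doc, '/order/id')     AS order_id,
       xpath_string(xml_doc, '/order/status') AS status
FROM raw_xml_import;
-- Impala can then query orders_flat directly (after INVALIDATE METADATA orders_flat)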
12-10-2014
04:59 AM
During my tests I came to one (maybe not correct) conclusion. The table is big and partitioned, and maybe Impala just limits the query to a subset of the table, because if I change the query to
create table result as select * from tmp_ext_item where item_id in ( 3040607, 5645020, 69772482, 2030547, 1753459, 9972822, 1846553, 6098104, 1874789, 1834370, 1829598, 1779239, 7932306 )
then it runs correctly and returns all items with the specified item_id.
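A quick way to cross-check that (just a sketch of a verification query) is to count the matches per id and confirm that all 13 ids come back on every run:

SELECT item_id, COUNT(*) AS cnt
FROM tmp_ext_item
WHERE item_id IN ( 3040607, 5645020, 69772482, 2030547, 1753459, 9972822,
                   1846553, 6098104, 1874789, 1834370, 1829598, 1779239, 7932306 )
GROUP BY item_id
ORDER BY item_id;
-- if this sometimes returns fewer than 13 rows, the IN-list evaluation itself is unstable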
12-08-2014
01:39 PM
I solved the issue with from_utc_timestamp(Create_Time, 'CEST'). Impala assumes that the timestamp value is stored in UTC, so converting it to Central European Time with summer daylight saving produces the correct result. As far as I know there is no way to tell Impala that the current time zone is CEST, so this conversion has to be made in every query.
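A minimal sketch of that workaround (the table and column names are placeholders; 'CEST' is the argument used above):

SELECT order_id,
       from_utc_timestamp(create_time, 'CEST') AS create_time_local
FROM orders
WHERE from_utc_timestamp(create_time, 'CEST') >= '2014-11-18 00:00:00';
-- the conversion is repeated in every query, since there is no session-level time zone setting here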
12-08-2014
04:51 AM
Hi, I am running a simple query whose WHERE condition contains a column IN ( ... ) list with 13 elements (numbers). The column is of type int. Every time I run the query I get a different result: sometimes 5 rows, sometimes 2 rows, sometimes 10 rows. Of course I checked ID by ID that all elements are in the table... Is this a known bug, or am I missing something?
select * from tmp_ext_item where item_id in ( 3040607, 5645020, 69772482, 2030547, 1753459, 9972822, 1846553, 6098104, 1874789, 1834370, 1829598, 1779239, 7932306 )
T.
11-26-2014
02:17 PM
I have the same issue: the same query returns different dates. In Impala the date is one hour less than in Hive. The table was created in Hive and loaded with data via INSERT OVERWRITE TABLE in Hive (the table is partitioned). For example, the timestamp 2014-11-18 00:30:00 (18th of November) was correctly written to partition 20141118. But when I fetch the table in Impala with the condition day_id (the partition column) = 20141118, I see the value 2014-11-17 23:30:00, so the difference is one hour. If I query the minimum and maximum start_time from one partition in Impala (partition day_id = 20141118) I get this wrong result:
min( start_time ) = 2014-11-17 23:00
max( start_time ) = 2014-11-18 22:59
When I run the same query in Hive, the result is OK:
min( start_time ) = 2014-11-18 00:00
max( start_time ) = 2014-11-18 23:59
Any help?
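One way to confirm that the one-hour shift is just Impala interpreting the stored value as UTC (a sketch; the table name is a placeholder and CET is the zone for this November date):

SELECT MIN(from_utc_timestamp(start_time, 'CET')) AS min_start_local,
       MAX(from_utc_timestamp(start_time, 'CET')) AS max_start_local
FROM my_table
WHERE day_id = 20141118;
-- if these now match the Hive values (00:00 and 23:59), the difference is the UTC interpretation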
11-20-2014
08:16 AM
Works great! Simply setting the --class-name overrides the name of the jar file. Thanks!
11-19-2014
12:06 PM
Have you changed something in the directory or file permissions in /var/run? If yes, you should probably reconfigure YARN to use a NEW directory (for example, if YARN used /data/yarn/nm for the NodeManager, configure a new path such as /data/yarn/nm2). After changing EVERY directory for YARN and restarting the cluster, YARN started, created the new directories, and set the permissions correctly, so we no longer have this kind of permission problem. If you did not change any permissions on the local file system, then I don't know what the issue is. Try another user - for example run a Hive job as root/hdfs/yarn or some other user - to see whether this is user-related or whether it always fails. T.
11-19-2014
11:50 AM
Hi guys, has anybody tried to rename the output of the sqoop import command? It is always named QueryResult.jar. When we run multiple sqoop import commands in parallel, the YARN applications list in Cloudera Manager does not distinguish between them; every command is named QueryResult.jar. The sqoop import commands look like:
sqoop import --connect jdbc:sqlserver://OUR.SQL.SERVER.COM --username XXX --query 'select * from XXXXZZZ where Start_Time >= getdate()-7 and $CONDITIONS' -m 6 --split-by Start_Time --as-textfile --fields-terminated-by '\t' --delete-target-dir -z --target-dir /user/sql2hadoop/ext_xxxxzzzz
sqoop import --connect jdbc:sqlserver://OUR.SQL.SERVER.COM --username XXX --query 'select * from XXXXYYY where Start_Time >= getdate()-7 and $CONDITIONS' -m 6 --split-by Start_Time --as-textfile --fields-terminated-by '\t' --delete-target-dir -z --target-dir /user/sql2hadoop/ext_xxxxyyyyyy
I would like to see in YARN that, for example, there are two applications running: Import_XXXZZZ.jar and Import_XXXXYYY.jar. Is there a parameter for setting the application name? Thanks
11-07-2014
06:08 AM
I solved it by increasing the block size to the largest possible value for a partition: since one partition is always less than 800 MB, I set the block size for this table to 1 GB, and the warnings do not appear any more. T.
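A sketch of the session settings on the Hive side (the 1 GiB value is the one mentioned above; both have to be set in the same session that runs the INSERT):

-- make the Parquet row-group and HDFS block size at least as large as the biggest partition file
SET parquet.block.size=1073741824;  -- 1 GiB
SET dfs.blocksize=1073741824;       -- 1 GiB
-- then run the usual INSERT OVERWRITE into the Parquet table in this same session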
11-07-2014
05:11 AM
Hi guys, we are trying to read(select count(*)..) the external tables imported via sqoop in impala, but the impala crashes every time. The impala deamon has to be restarted. CDH version 5.2. External table imported via sqoop and loaded to HDFS as textfile, compressed by Gzip. External table definition created in Hive. If the external table is plain textfile, the Impala is ok with that, so I assume the problem is in decompression. The query run by impala crashes, does not matter wheter it runs via Impala ODBC driver, or Impala shell. Does anybody have the same issue? Any ideas what could be wrong? Thanks Tomas
10-30-2014
03:06 AM
Had the same issue: I created a partitioned table stored as Parquet in Hive and loaded it with data. Then, when running the query in Impala, I got the same error message. I tried these settings in Hive before running the insert, but the files produced are still larger than the HDFS block size (128 MB):
SET parquet.block.size=128000000;
SET dfs.blocksize=128000000;
Can anybody give any advice? Tomas
10-03-2014
06:19 AM
I have googled the solution: after setting the property org.apache.sqoop.connector.autoupgrade=true the Sqoop server started OK. I don't know what this property means, but it worked 🙂 Thanks, Tomas. http://www.developerscloset.com/?p=961#respond
Sqoop Server Startup Failure: Upgrade required but not allowed. After an upgrade from CDH 5.0.2 to CDH 5.0.3, Sqoop failed to start with the following error: "Server startup failure, Connector registration failed, Upgrade required but not allowed - Connector: generic-jdbc-connector." To resolve this problem I had to add the following property to the Sqoop 2 Server Advanced Configuration Snippet (Safety Valve) for sqoop.properties (found under Cloudera Manager, Sqoop Service, Configuration, Sqoop 2 Server Default Group, Advanced): org.apache.sqoop.connector.autoupgrade=true. After the upgrade has completed successfully, the property can be removed.
09-26-2014
12:32 AM
After upgrading from 5.1.2.1 to 5.1.3.1, the cluster fails to start the Sqoop2 server: it fails to load the ConnectionManagers. The only thing I changed in Sqoop was adding the Microsoft JDBC driver (jar file) to the lib directory. Even after removing the file, the cluster still fails to start the Sqoop2 server.