Member since: 09-29-2015
Posts: 186
Kudos Received: 63
Solutions: 12
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3184 | 08-11-2017 05:27 PM
 | 2144 | 06-27-2017 10:58 PM
 | 2256 | 04-09-2017 09:43 PM
 | 3214 | 04-01-2017 02:04 AM
 | 4411 | 03-13-2017 06:35 PM
06-30-2017
11:13 PM
7 Kudos
PROBLEM: When querying a table partitioned on a date column from the Hive shell, we normally wrap the date value in single quotes:

hive> select * from students where datestamp='2014-09-23';
OK
fred flintstone    35    1.28    2014-09-23
Time taken: 0.761 seconds, Fetched: 1 row(s)

Hue, however, does not append these single quotes internally, so "Browse data" throws an error.

STEPS TO REPRODUCE:
1. Create the table: CREATE TABLE students(name varchar(64), age int, gpa decimal(3,2)) PARTITIONED BY (datestamp date);
2. Insert a row: INSERT INTO TABLE students PARTITION (datestamp = '2014-09-23') VALUES ('fred flintstone', 35, 1.28);
3. Log in to Hue -> Go to HCatalog -> Tables -> select 'students' -> Click on "Browse data".
4. This generates an error.

RESOLUTION: An internal bug has been reported. Please reach out to support.
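The quoting difference can be sketched directly in HiveQL against the students table from the reproduction steps (the unquoted form below is an illustration of what Hue effectively generates, not a statement you would write by hand):

```sql
-- Works from the Hive shell: the date literal is single-quoted
SELECT * FROM students WHERE datestamp = '2014-09-23';

-- Effectively what Hue generates: no quotes, so Hive does not see a
-- date literal (typically this errors out or matches nothing)
SELECT * FROM students WHERE datestamp = 2014-09-23;
```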
06-30-2017
10:55 PM
6 Kudos
PROBLEM: We see below message on initial runs: 2017-02-28 19:37:48,681 INFO [main] impl.TimelineClientImpl: Timeline service address: http://<timeline-server-hostname>:8188/ws/v1/timeline/
2017-02-28 19:37:48,823 INFO [main] client.AHSProxy: Connecting to Application History server at <history-server-hostname>/<history-server-ip>:10200
2017-02-28 19:37:49,016 WARN [main] ipc.Client: Failed to connect to server: <resource-manager-A>/<resource-manager-A-ip>:8032: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)

ROOT CAUSE: In the YARN configs -> custom yarn-site.xml, rm1 may be configured to point to resource-manager-A. The first connection attempt is always made to the Resource Manager specified as rm1; if that server is not the active Resource Manager, the attempt is refused and the client fails over to the active one (resource-manager-B), producing the warning above. The warning is not seen when resource-manager-A is active, because rm1 points to that server and the first connection attempt succeeds.

RESOLUTION: This warning cannot be suppressed at present. A JIRA is open to change the logging: https://issues.apache.org/jira/browse/YARN-6145
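For reference, the first-attempt behavior follows from the HA properties in yarn-site.xml; a minimal sketch of the relevant entries (the hostnames are placeholders matching the log above):

```xml
<!-- Sketch: YARN ResourceManager HA properties; hostnames are placeholders -->
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>resource-manager-A</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>resource-manager-B</value>
</property>
```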
06-30-2017
06:05 AM
8 Kudos
PROBLEM: For an external Hive table created over HBase, the CREATE TABLE statement executes successfully even if column mappings are missing or otherwise (syntactically) wrong, and the table appears to be created. However, when trying to insert data into that table, the following error is seen:

08S01: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1495472057323_3947_1_00, Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.serde2.SerDeException: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyStringObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.MapObjectInspector
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:800)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:133)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:170)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)

ROOT CAUSE: Per https://wiki.apache.org/hadoop/Hive/HBaseIntegration?action=diff&rev1=13&rev2=14, if there is any problem with the mapping section:

WITH SERDEPROPERTIES ( "hbase.columns.mapping" = "cf1:val", "hbase.table.name" = "xyz" );

while creating the table, the CREATE TABLE will succeed, but attempts to insert data will fail with this internal error:

java.lang.RuntimeException: org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyStringObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.MapObjectInspector

RESOLUTION: The CREATE TABLE statement should define a clear, complete mapping to the HBase table.
Refer: https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration
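As a sketch of a well-formed definition, the example from the Hive HBase integration wiki maps every Hive column explicitly, with one mapping entry per column and :key for the row key:

```sql
-- Example per the Hive wiki: one mapping entry per Hive column,
-- with :key mapping the HBase row key
CREATE EXTERNAL TABLE hbase_table_1(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
```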
06-28-2017
07:27 AM
@Mugdha Test connection works. The database is on the Ranger host. Is the default DB port 3306?
11-22-2017
05:56 PM
Thank you @Mugdha and @Md Ali.
12-05-2017
09:16 PM
Just a note - on older versions of HDP (2.6.1 and below iirc) it is possible to receive InvalidACL at start time because the LLAP application has failed to start and thus failed to create the path entirely. So, it might be worth checking the LLAP app log if the path does not exist.
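If the path is indeed missing, pulling the LLAP application's aggregated log is one way to confirm a failed start; a hedged sketch (the application ID is a placeholder you would take from the ResourceManager UI):

```shell
# Sketch: fetch aggregated YARN logs for the LLAP application
yarn logs -applicationId <llap-application-id>
```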
03-13-2017
06:35 PM
2 Kudos
In zookeeper-env.sh, add -Dzookeeper.skipACL=yes to the server JVM flags:
export SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dzookeeper.skipACL=yes"
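A minimal sketch of applying this on an HDP-style install (the zkServer.sh path is an assumption; adjust for your layout), then restarting ZooKeeper so the flag takes effect:

```shell
# Sketch: append the flag in zookeeper-env.sh, then restart ZooKeeper
# (HDP-style path is an assumption; remove the flag again once ACLs are fixed,
# since skipACL disables all ACL checks)
export SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dzookeeper.skipACL=yes"
/usr/hdp/current/zookeeper-server/bin/zkServer.sh restart
```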
03-09-2017
09:53 PM
hdp242-s1.openstacklocal doesn't resolve for me. If the metric is not in the metadata, that means HBase has never sent this metric to AMS. Did you perform a scan or run the Ambari smoke test to see whether any data is sent? Could it be that this metric is not sent by the HBase version you are using?
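To check what AMS has in its metadata, one option is to query the collector directly; a hedged sketch (the collector host, port 6188, and the metric name are assumptions for your cluster):

```shell
# Sketch: list metric metadata from the AMS collector and filter for the metric
curl -s "http://<ams-collector-host>:6188/ws/v1/timeline/metrics/metadata" | grep -i "<metric-name>"
```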
12-23-2016
09:49 PM
1 Kudo
PROBLEM: In the Grafana UI, the following panels show only "Problem! java.lang.Exception: Invalid number of functions specified.":

Under HBase - Tables:
1. NUM FLUSHES
2. NUM WRITE REQUESTS
3. NUM READ REQUESTS

Under HBase - Users:
1. Num Get Requests
2. Num Scan Next Requests

grafana.log shows:
[I] Completed X.X.X.X - "GET /ws/v1/timeline/metrics HTTP/1.1" 400 Bad Request 144 bytes in 7653us
[I] Completed X.X.X.X - "GET /ws/v1/timeline/metrics HTTP/1.1" 400 Bad Request 144 bytes in 3316us
[I] Completed X.X.X.X - "GET /ws/v1/timeline/metrics HTTP/1.1" 400 Bad Request 144 bytes in 1734us
RESOLUTION: 1. Log in as the Grafana admin. 2. Set transform=none in the affected panels.
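To rule out the metric itself, the AMS collector can also be queried directly, without any aggregate function; a hedged sketch (the host, port 6188, and the metric/app names are assumptions for your cluster):

```shell
# Sketch: fetch one HBase metric straight from the AMS collector
curl -s "http://<ams-collector-host>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount&appId=hbase"
```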