Member since: 09-29-2015
Posts: 186
Kudos Received: 63
Solutions: 12
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3420 | 08-11-2017 05:27 PM
 | 2312 | 06-27-2017 10:58 PM
 | 2417 | 04-09-2017 09:43 PM
 | 3429 | 04-01-2017 02:04 AM
 | 4649 | 03-13-2017 06:35 PM
06-30-2017
11:30 PM
6 Kudos
PROBLEM: When /etc/hosts contains mixed-case hostnames, such as:

172.26.93.148 GRAFANA-hdp253-s1.openstacklocal GRAFANA-hdp253-s1
172.26.93.149 GRAFANA-hdp253-s2.openstacklocal GRAFANA-hdp253-s2
172.26.93.150 GRAFANA-hdp253-s3.openstacklocal GRAFANA-hdp253-s3

Ambari creates the data source with a lowercase hostname, and the data source fails. Once you enter the hostname with its original mixed case, Grafana works.

RESOLUTION: The Grafana 2.6.0 backend uses Go 1.5. Go's DNS lookup had a bug that made hostname lookups case sensitive: https://github.com/golang/go/issues/12806

There are two manual workarounds:
1. Use the original casing in the Grafana Data Source URL, as described above.
2. Update /etc/hosts so that each entry also has an all-lowercase form.

This bug was fixed in Go 1.6, so the issue will go away once a future version of Ambari upgrades Grafana to a later release (say, 4.1.x).
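As a sketch of the second workaround, an all-lowercase duplicate of a mixed-case /etc/hosts entry can be generated before appending it to the file. The line below reuses the example values from this post; any /etc/hosts line works the same way:

```shell
# Example mixed-case entry from this post (an assumption, not your real host).
line='172.26.93.148 GRAFANA-hdp253-s1.openstacklocal GRAFANA-hdp253-s1'

# Keep the IP (field 1) as-is and lowercase every hostname field, producing
# a duplicate entry that Go 1.5's case-sensitive lookup can still resolve.
lower=$(printf '%s\n' "$line" |
  awk '{ printf "%s", $1; for (i = 2; i <= NF; i++) printf " %s", tolower($i); print "" }')

echo "$lower"
```

The resulting line would then be appended to /etc/hosts (as root) alongside the original mixed-case entry, so both casings resolve.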
06-30-2017
11:13 PM
7 Kudos
PROBLEM: When running a query from the shell, we normally wrap a date-typed value in single quotes:

hive> select * from students where datestamp='2014-09-23';
OK
fred flintstone	35	1.28	2014-09-23
Time taken: 0.761 seconds, Fetched: 1 row(s)

In Hue, however, these single quotes are not appended internally, so browsing the data throws an error.

STEPS TO REPRODUCE:
1. Create the table:
CREATE TABLE students(name varchar(64), age int, gpa decimal(3,2)) PARTITIONED BY (datestamp date);
2. Insert a row:
INSERT INTO TABLE students PARTITION (datestamp = '2014-09-23') VALUES ('fred flintstone', 35, 1.28);
3. Log in to Hue -> Go to HCatalog -> Tables -> select 'students' -> Click on Browse Data.
4. This generates the error.

RESOLUTION: An internal bug has been reported. Please reach out to support.
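As a hypothetical illustration of the difference, based on this post's description of Hue's behavior: without the quotes, Hive parses the bare value as an arithmetic expression rather than a date literal, which is why the internally generated query fails.

```sql
-- Works from the shell: the quoted value is a valid date literal.
SELECT * FROM students WHERE datestamp = '2014-09-23';

-- Sketch of what Hue effectively generates: the unquoted value is parsed
-- as the integer expression 2014 - 09 - 23, so the comparison against a
-- date-typed partition column fails.
SELECT * FROM students WHERE datestamp = 2014-09-23;
```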
06-30-2017
10:55 PM
6 Kudos
PROBLEM: The following messages are seen on initial runs:

2017-02-28 19:37:48,681 INFO [main] impl.TimelineClientImpl: Timeline service address: http://<timeline-server-hostname>:8188/ws/v1/timeline/
2017-02-28 19:37:48,823 INFO [main] client.AHSProxy: Connecting to Application History server at <history-server-hostname>/<history-server-ip>:10200
2017-02-28 19:37:49,016 WARN [main] ipc.Client: Failed to connect to server: <resource-manager-A>/<resource-manager-A-ip>:8032: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)

ROOT CAUSE: In the YARN configs (custom yarn-site.xml), rm1 may be configured to point to resource-manager-A. That is why the first connection attempt is made to that server and is refused; the client then fails over to the active resource manager, resource-manager-B. The warning is not seen when resource-manager-A is active, because rm1 points to that server and the first connection attempt succeeds. The first connection attempt is always made to the resource manager specified as rm1.

RESOLUTION: This warning cannot be suppressed as of now. There is a JIRA open to change the logging: https://issues.apache.org/jira/browse/YARN-6145
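To confirm which host rm1 points to, the HA properties in yarn-site.xml can be inspected directly. A minimal sketch follows; the XML fragment, file path, and hostnames are illustrative placeholders written to a temp file for the example (on a real cluster, read /etc/hadoop/conf/yarn-site.xml instead):

```shell
# Illustrative yarn-site.xml HA fragment (placeholder hostnames).
cat > /tmp/yarn-site-sample.xml <<'EOF'
<property><name>yarn.resourcemanager.hostname.rm1</name><value>resource-manager-A</value></property>
<property><name>yarn.resourcemanager.hostname.rm2</name><value>resource-manager-B</value></property>
EOF

# The client always tries rm1 first, so this is the host that produces the
# connection-refused warning whenever it is the standby resource manager.
rm1=$(grep 'hostname.rm1' /tmp/yarn-site-sample.xml |
  sed 's/.*<value>\(.*\)<\/value>.*/\1/')

echo "rm1 -> $rm1"
```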
06-30-2017
06:05 AM
8 Kudos
PROBLEM: For an external Hive table created over HBase, a CREATE TABLE statement with missing column mappings or other (syntactic) issues in the mapping section still executes successfully, and the table appears to be created. However, trying to insert data into that table produces the following error:

08S01: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1495472057323_3947_1_00, Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.serde2.SerDeException: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyStringObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.MapObjectInspector
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:800)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:133)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:170)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)

ROOT CAUSE: Per https://wiki.apache.org/hadoop/Hive/HBaseIntegration?action=diff&rev1=13&rev2=14, if there are any issues with the mapping section:

WITH SERDEPROPERTIES ( "hbase.columns.mapping" = "cf1:val", "hbase.table.name" = "xyz" );

while creating the table, then the CREATE TABLE will succeed, but attempts to insert data will fail with this internal error:

java.lang.RuntimeException: org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyStringObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.MapObjectInspector

RESOLUTION: The CREATE TABLE statement should define the mappings to the HBase table precisely.
Refer: https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration
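A minimal sketch of a well-formed definition, adapted from the HBaseIntegration wiki referenced above (the table and column names are illustrative): the number of Hive columns must match the number of entries in hbase.columns.mapping, with :key mapping the HBase row key.

```sql
-- One Hive column per mapping entry: 'key' maps to the HBase row key
-- (:key) and 'value' maps to column cf1:val of HBase table 'xyz'.
CREATE TABLE hbase_table_1 (key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
```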
06-27-2017
10:58 PM
1 Kudo
@Sujatha Veeswar This error is seen when the JDBC URL (ranger.jpa.jdbc.url) is not correct. Does the test connection work? Is the database on the Ranger host? Can you try: jdbc:mysql://ip_of_DB_host:DBPORT/ranger
05-27-2017
12:48 AM
The comment by @Jay SenSharma should fix the issue. 🙂
05-04-2017
02:02 AM
@Todd Wilson The Hadoop classpath needs to be set. See page 9 of: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_HortonworksConnectorForTeradata/bk_HortonworksConnectorForTeradata.pdf
04-13-2017
08:53 PM
Yes, all the host components on that node will be omitted from the bulk restart/start/stop. See: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-operations/content/how_to_turn_on_maintenance_mode_for_a_host.html
04-09-2017
09:43 PM
It should be present in the repo. Can you check with: yum list available hadoop-httpfs