Member since: 09-24-2015
Posts: 144
Kudos Received: 72
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1279 | 08-15-2017 08:15 AM |
|  | 6006 | 01-24-2017 06:58 AM |
|  | 1561 | 08-03-2016 06:45 AM |
|  | 2818 | 06-01-2016 10:08 PM |
|  | 2452 | 04-07-2016 10:30 AM |
11-03-2015
02:35 AM
1 Kudo
Thanks. Found http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8080129
11-02-2015
06:10 AM
3 Kudos
HDP version: 2.3.0
Ambari version: 2.1.0

Enabled Kerberos with Windows Active Directory (not cross-realm) from Ambari. Confirmed kinit and curl to WebHDFS worked with an AD user. Followed http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_Ambari_Security_Guide/content/_configuring_http_authentication_for_HDFS_YARN_MapReduce2_HBase_Oozie_Falcon_and_Storm.html and, again, confirmed kdestroy & kinit and curl to WebHDFS and the NameNode UI worked with an AD user.

Then tried to access the HDP NameNode UI page from Firefox on a Windows PC. Copied /etc/krb5.conf to C:\Windows\krb5.ini and followed instructions found on the internet to set up Firefox, but it didn't work. The error was "GSSHeader did not find the right tag".

For troubleshooting purposes, downloaded curl.exe from http://curl.haxx.se/download.html and tried to access HDP (for example, NameNode:50070) with curl --negotiate, and got the same error, "GSSHeader did not find the right tag". Copied /etc/security/keytabs/spnego.service.keytab onto the Windows PC and did kinit -k -t followed by curl --negotiate, but got the same error.

Does anyone know what might be missing to make a Windows PC work when accessing the secured web pages?
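For reference, a minimal sketch of the client-side pieces being described above (the host names, realm, and user are placeholders, not values from this cluster):

```bash
# Windows client side (assumption: MIT Kerberos for Windows is installed,
# and C:\Windows\krb5.ini mirrors the cluster's /etc/krb5.conf).
kinit aduser@EXAMPLE.COM

# Firefox: in about:config, add the cluster hosts/domain to
#   network.negotiate-auth.trusted-uris
# e.g. ".example.com" or "namenode.example.com"

# Verify SPNEGO outside the browser with curl:
curl --negotiate -u : http://namenode.example.com:50070/
```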
10-30-2015
07:38 AM
2 Kudos
Thanks Jonas! Based on your advice, I'll do "yum erase `yum list | grep -P '\b2.3.0.0-2557\b' | awk '{ print $1 }'`"
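(Just a sketch of a more cautious variant, using the same version string as above: preview what would be removed before running the erase.)

```bash
# Preview which installed packages match the old HDP build first...
yum list installed | grep -P '\b2.3.0.0-2557\b' | awk '{ print $1 }'

# ...then, if the list looks right, remove them.
sudo yum erase `yum list installed | grep -P '\b2.3.0.0-2557\b' | awk '{ print $1 }'`
```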
10-29-2015
08:25 AM
1 Kudo
Upgraded a cluster many times, and the current version is HDP 2.3.2. Would like to delete the following directories. Would it be safe to delete them with the rm command?

$ sudo du -hx --max-depth=1 /usr/hdp | grep G
2.8G /usr/hdp/2.3.2.0-2950
1.7G /usr/hdp/2.2.0.0-2041
2.0G /usr/hdp/2.2.6.0-2800
2.0G /usr/hdp/2.2.4.2-2
2.6G /usr/hdp/2.3.0.0-2557
...
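(A sketch of one way to double-check before deleting anything; assumes hdp-select is available on the node — only the paths above come from this cluster.)

```bash
# Confirm which HDP build the current symlinks point at.
hdp-select status | head

# If no component points at an old build, remove the old directories one at a time, e.g.:
sudo rm -rf /usr/hdp/2.2.0.0-2041
```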
Labels:
- Hortonworks Data Platform (HDP)
10-28-2015
10:44 AM
Thanks! Does this mean Oozie's Tomcat heap size would not be important for running 100 concurrent workflows?
10-28-2015
06:23 AM
If a cluster needs to run 100 Oozie workflows concurrently, is there any formula to estimate oozie_heapsize? Or is there any internal/external best-practice document on heap sizing?
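(For context, a sketch of where that setting ends up on an HDP node — an assumption about the file and variable names, which may differ by version.)

```bash
# oozie_heapsize from Ambari typically becomes the -Xmx in CATALINA_OPTS
# for Oozie's embedded Tomcat (assumption; check your stack's oozie-env.sh).
grep -i CATALINA_OPTS /etc/oozie/conf/oozie-env.sh
# e.g. export CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
```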
Labels:
- Apache Oozie
10-28-2015
12:52 AM
1 Kudo
ORACLE:
CREATE TABLE HAJIME.SQ_TEST
(
COLUMN_NUMBER NUMBER(5,0),
P_SC VARCHAR2(16 CHAR),
P_YMDH NUMBER(14,0)
);
INSERT INTO HAJIME.SQ_TEST (P_YMDH, P_SC, COLUMN_NUMBER) VALUES (5, 't', 4);
INSERT INTO HAJIME.SQ_TEST (P_YMDH, P_SC, COLUMN_NUMBER) VALUES (2, 'jlfkooakywhsc', 1);
INSERT INTO HAJIME.SQ_TEST (P_YMDH, P_SC, COLUMN_NUMBER) VALUES (3, 'vp', 3);

HIVE:
create table sq_test (
column_number bigint,
logacc_no string,
ymdh bigint
);

SQOOP COMMAND:
sqoop import --verbose --username hajime --password ***** --connect jdbc:oracle:thin:@ORACLE_SERVER:SID --query "select * from hajime.SQ_TEST WHERE \$CONDITIONS" --split-by COLUMN_NUMBER --hcatalog-table sq_test

Then I get NullPointerException with the following error:

15/10/28 00:47:40 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Caught Exception checking database column p_sc in hcatalog table.
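(A possible workaround — an assumption, not something confirmed in this thread — is to alias the Oracle columns in the --query so their names match the HCatalog table's columns.)

```bash
# Hypothetical variant of the same command: alias P_SC and P_YMDH to the
# Hive/HCatalog column names (logacc_no, ymdh) so the names line up.
sqoop import --verbose \
  --username hajime --password ***** \
  --connect jdbc:oracle:thin:@ORACLE_SERVER:SID \
  --query "select COLUMN_NUMBER, P_SC as LOGACC_NO, P_YMDH as YMDH from hajime.SQ_TEST WHERE \$CONDITIONS" \
  --split-by COLUMN_NUMBER \
  --hcatalog-table sq_test
```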
10-27-2015
08:35 AM
1 Kudo
HDP version: 2.3.0, upgraded from 2.2.4.2.

A user reported that his sqoop command fails if the database column name and the HCatalog column name are different. If he uses the same column names on both sides, it works. But he thinks it used to work as long as the number of columns and the column types matched. Is this true?

Part of the error is:

java.lang.NullPointerException
at org.apache.hive.hcatalog.data.schema.HCatSchema.get(HCatSchema.java:105)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:390)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureImportOutputFormat(SqoopHCatUtilities.java:783)
at org.apache.sqoop.mapreduce.ImportJobBase.configureOutputFormat(ImportJobBase.java:98)
...
Labels:
- Apache HCatalog
- Apache Hive
- Apache Sqoop
10-23-2015
06:56 AM
1 Kudo
Thanks everyone! What Terry described looks very close to the symptom. SmartSense has been installed, and the Capacity Scheduler has been configured; I will review its config. I will also check the YARN NodeManager parameters.
10-22-2015
07:26 AM
On a relatively busy cluster, ran a huge job which consumed almost 100% of the resources. During the shuffle phase it died with an OOM on a NodeManager, and after that, no jobs, including this one, made any progress. To recover from this state, I needed to kill this job and the other jobs as well. This can't be reproduced at will but happens occasionally. Have you come across any similar symptom?
Is there any smarter way to recover from this state? Killing jobs manually wouldn't be ideal.
Maybe some YARN config needs to be checked/modified?
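(As a stop-gap, a sketch of doing the same manual kill from the CLI rather than the UI; the application ID below is hypothetical.)

```bash
# List the applications still marked as RUNNING...
yarn application -list -appStates RUNNING

# ...and kill the offending one by its ID (hypothetical ID shown).
yarn application -kill application_1445500000000_0042
```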
Labels:
- Apache Tez
- Apache YARN