Member since
11-30-2017
20
Posts
0
Kudos Received
0
Solutions
01-22-2019
06:17 PM
Could you check NTP sync status across the nodes?
05-25-2018
10:57 AM
@Geoffrey Shelton Okot
Sorry for the delay. Hortonworks Support claimed that the '@' symbol is not supported, so we are reinstalling from scratch with a local user and syncing with LDAP. Thanks
05-09-2018
11:07 AM
@Geoffrey Shelton Okot Thank you. Will let you know the updates shortly.
05-09-2018
11:01 AM
@Geoffrey Shelton Okot Thank you so much. We don't have a KDC server installed; we are using LDAP. Do I need to mention the AD server in place of "{your_kdc_server}"? admin_server = {your_kdc_server}
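For context, when Active Directory provides Kerberos, the domain controller typically acts as both the KDC and the admin server, so both entries usually point at the same host. A sketch of the [realms] section under that assumption — "ad.test.co" is a hypothetical domain-controller hostname, not one confirmed in this thread:

```ini
# Illustrative krb5.conf [realms] entry when AD serves as the KDC.
# Replace ad.test.co with your actual domain controller.
[realms]
 TEST.CO = {
  kdc = ad.test.co
  admin_server = ad.test.co
 }
```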
05-09-2018
08:48 AM
@Geoffrey Shelton Okot Sorry, I work till 2 PM EST, hence the delay in answering. I am using AD, and the users were already created in AD before the HDP installation. Yes, a one-way trust was made.
hostName=node1.test.co
Contents of /etc/krb5.conf:
includedir /etc/krb5.conf.d/
includedir /var/lib/sss/pubconf/krb5.include.d/
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log
[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
# default_realm = EXAMPLE.COM
 default_ccache_name = KEYRING:persistent:%{uid}
 default_realm = TEST.CO
[realms]
# EXAMPLE.COM = {
#  kdc = kerberos.example.com
#  admin_server = kerberos.example.com
# }
 TEST.CO = {
 }
[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
 test.co = TEST.CO
 .test.co = TEST.CO
05-08-2018
09:14 PM
@Geoffrey Shelton Okot I have been stuck here for a long time. Appreciate your help!
05-08-2018
03:45 PM
@Geoffrey Shelton Okot The AD user is shdfs@Test.co. Could you please let me know which format the rule should have to start WebHDFS?
05-07-2018
01:29 PM
@Geoffrey Shelton Okot Sorry for the delay in replying. Same error.
INFO [ambari-heartbeat-processor-0] ServiceComponentHostImpl:1039 - Host role transitioned to a new state, serviceComponentName=NAMENODE, hostName=node1.test.co, oldState=STARTING, currentState=INSTALLED
Since the users are available in AD, do I need to map them to local users? Could you please guide me here?
RULE:[1:$1@$0](shdfs@test.co)s/.*/hdfs/
"message": "Invalid value for webhdfs parameter \"user.name\": Invalid value: \"shdfs@test.co\" does not belong to the domain ^[A-Za-z_][A-Za-z0-9._-]*[$]?$"
}
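To illustrate what the auth_to_local rule above is meant to do, here is a minimal Python sketch of its semantics (Hadoop implements this internally in HadoopKerberosName; this is an illustration, not Hadoop's code). The principal and realm names are the ones from this thread:

```python
import re

def apply_rule(principal: str) -> str:
    """Sketch of RULE:[1:$1@$0](shdfs@test.co)s/.*/hdfs/ —
    maps a matching Kerberos principal to the local short name."""
    # [1:$1@$0] rebuilds a one-component principal as "<component>@<realm>"
    user, realm = principal.split("@")
    rebuilt = f"{user}@{realm}"
    # (shdfs@test.co) — the rule only fires when the rebuilt string matches
    if re.fullmatch(r"shdfs@test\.co", rebuilt):
        # s/.*/hdfs/ — replace the whole string with "hdfs"
        return re.sub(r".*", "hdfs", rebuilt, count=1)
    return principal

print(apply_rule("shdfs@test.co"))  # → hdfs
```

The point of the mapping is that WebHDFS then sees the short name "hdfs", which satisfies the user.name pattern in the error message, instead of the full principal containing '@'.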
05-05-2018
08:34 AM
@Geoffrey Shelton Okot I am able to start it from both, but the status changes to "INSTALLED" immediately after startup. On the server I can see the NameNode and DataNode running, but the Ambari console shows them as down.
05-04-2018
08:22 PM
@Geoffrey Shelton Okot Also, services are going to the INSTALLED state automatically after startup. Could you please guide me?
service component DATANODE of service HDFS of cluster TSTHDPCLST has changed from STARTED to INSTALLED at host test.co according to STATUS_COMMAND report
05-04-2018
01:04 PM
@Geoffrey Shelton Okot Thank you. Let me validate the rules. While starting the NameNode manually, please find the log:
su thdfs@test.co -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start namenode'
starting namenode, logging to /var/log/hadoop/thdfs@test.co/hadoop-thdfs@test.co-namenode-node1.test.co.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
ulimit -a for user thdfs@test.co
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 127967
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
05-04-2018
12:17 PM
@Geoffrey Shelton Okot While starting the NameNode with WebHDFS enabled, I get the following errors:
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 250, in _run_command
  raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://node1.test.co:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=thdfs@test.co'' returned status_code=400.
{
  "RemoteException": {
    "exception": "IllegalArgumentException",
    "javaClassName": "java.lang.IllegalArgumentException",
    "message": "Invalid value for webhdfs parameter \"user.name\": Invalid value: \"thdfs@test.co\" does not belong to the domain ^[A-Za-z_][A-Za-z0-9._-]*[$]?$"
  }
}
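The 400 error above comes from WebHDFS validating the user.name query parameter against the pattern quoted in the message; any name containing '@' can never match it. A small Python sketch of that check:

```python
import re

# Pattern quoted in the WebHDFS error message for the "user.name" parameter.
USER_NAME_PATTERN = re.compile(r"^[A-Za-z_][A-Za-z0-9._-]*[$]?$")

def is_valid_webhdfs_user(name: str) -> bool:
    """Return True if `name` would pass the user.name check in the error."""
    return USER_NAME_PATTERN.match(name) is not None

print(is_valid_webhdfs_user("thdfs@test.co"))  # full principal: rejected
print(is_valid_webhdfs_user("thdfs"))          # short local name: accepted
```

This is why a full Kerberos principal has to be mapped to a local short name before it is used as a WebHDFS user.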
05-04-2018
10:17 AM
@Geoffrey Shelton Okot Services are running on the server, but Ambari shows them as down, and the GC logs show the following. Could you please check?
2018-05-04T06:39:20.038-0400: 130.745: [GC (Allocation Failure) 2018-05-04T06:39:20.038-0400: 130.745: [ParNew: 152348K->17472K(157248K), 0.0294015 secs] 152348K->30350K(506816K), 0.0294737 secs] [Times: user=0.15 sys=0.03, real=0.03 secs]
05-03-2018
08:16 PM
@Geoffrey Shelton Okot I am getting the same error too. Could you please suggest a fix?
03-15-2018
09:24 AM
We are facing the same issue. Is it a bug?
02-07-2018
12:47 PM
hive> SHOW CREATE TABLE fact;
OK
CREATE TABLE `fact`(
  `ldate` date,
  `lid` int,
  `afid` int)
CLUSTERED BY (
  lid)
INTO 3 BUCKETS
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://test1.com:8020/apps/hive/warehouse/fact'
TBLPROPERTIES (
  'numFiles'='914',
  'numRows'='0',
  'rawDataSize'='0',
  'totalSize'='653965',
  'transactional'='true',
  'transient_lastDdlTime'='1518006954')
Time taken: 0.233 seconds, Fetched: 22 row(s)
Hive View is taking more time than the hive/beeline CLI. I am using both the hive and beeline CLIs.
02-06-2018
10:17 AM
@Mayan Nath I am not using Kerberos. Our setups are in VMs with HP SAN storage. Yes, I checked the Tez UI; resource allocation is taking a long time (on a server with zero jobs running). Sample: a single insert took 7 seconds.
hive> INSERT INTO fact values ('2018-02-01',123,1);
Query ID = hdfs_20180206050752_0b80fc68-b3a3-4804-8c1a-349d2f3674f3
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1517831625144_0018)
--------------------------------------------------------------------------------
        VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1 ..........   SUCCEEDED      1          1        0        0       0       0
Reducer 2 ......   SUCCEEDED      3          3        0        0       0       0
--------------------------------------------------------------------------------
VERTICES: 02/02  [==========================>>] 100%  ELAPSED TIME: 3.70 s
--------------------------------------------------------------------------------
Loading data to table default.fact
Table default.fact stats: [numFiles=660, numRows=0, totalSize=469636, rawDataSize=0]
OK
Time taken: 7.51 seconds
hive>
02-03-2018
03:49 AM
Simple inserts are taking a long time.
INSERT INTO test values ('2018-02-01',123,1);
INFO : Tez session hasn't been created yet. Opening session
Opening the session takes 19.598 seconds for the first insert, while subsequent inserts take 2 to 3 seconds. Could you please suggest a fix?
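The timing pattern above (slow first statement, fast follow-ups) suggests the cost is Tez session creation, not the insert itself. One common mitigation — offered here as a suggestion, not something confirmed in this thread — is to have HiveServer2 keep pre-initialized Tez sessions so the first query does not pay the startup cost. A hive-site.xml sketch with illustrative values:

```xml
<!-- Sketch: pre-initialize Tez sessions in HiveServer2 so the first
     query avoids the ~20 s session-creation delay. Values illustrative. -->
<property>
  <name>hive.server2.tez.initialize.default.sessions</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.tez.sessions.per.default.queue</name>
  <value>1</value>
</property>
<property>
  <name>hive.prewarm.enabled</name>
  <value>true</value>
</property>
```

Note this helps only queries routed through HiveServer2 (beeline); the legacy hive CLI creates its own session per process.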
Labels:
- Apache Tez